Sample records for spectral decomposition method

  1. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns and MC-GDL can provide a discriminative basis for attack classification.
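At its core, the graph PCA described in record 1 applies an ordinary spectral decomposition (eigendecomposition of a covariance matrix) to a node-by-feature matrix assembled from several centrality-style columns. A minimal sketch of that idea, with made-up feature values and plain PCA rather than the paper's exact MC-GPCA formulation:

```python
import numpy as np

def multi_feature_graph_pca(features, n_components=2):
    """PCA via eigendecomposition of the feature covariance matrix,
    applied to a node-by-feature matrix of graph statistics."""
    X = features - features.mean(axis=0)            # center each feature
    cov = X.T @ X / (X.shape[0] - 1)                # feature covariance
    vals, vecs = np.linalg.eigh(cov)                # spectral decomposition
    top = np.argsort(vals)[::-1][:n_components]     # leading eigenvectors
    return X @ vecs[:, top]                         # node embeddings

# toy 4-node example; columns = [degree, a closeness-like score, walk count]
F = np.array([[3, 0.9, 12],
              [1, 0.4, 3],
              [2, 0.6, 7],
              [3, 0.8, 11]], dtype=float)
emb = multi_feature_graph_pca(F)
```

Nodes whose embeddings sit far from the rest would be the "anomalous connectivity" candidates in an intrusion-detection setting.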

  2. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of a general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation is performed on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can be easily incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
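The ADMM splitting used in record 2 can be illustrated on a toy 1D analogue: minimizing 0.5*||x - y||^2 + lam*||Dx||_1, where D is a finite-difference operator, rather than the paper's full CT coefficient-image problem. The lam and rho values and the test signal below are arbitrary choices for the sketch:

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_denoise_admm(y, lam=0.5, rho=1.0, n_iter=200):
    """ADMM for min_x 0.5*||x - y||^2 + lam*||D x||_1 with D the
    finite-difference operator (1D analogue of the TV constraint)."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # (n-1) x n difference matrix
    A = np.eye(n) + rho * D.T @ D            # x-update system matrix
    x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))
        z = soft(D @ x + u, lam / rho)       # z-update: shrink the gradients
        u = u + D @ x - z                    # dual update
    return x

rng = np.random.default_rng(0)
noisy = np.concatenate([np.zeros(20), np.ones(20)]) + 0.1 * rng.normal(size=40)
clean = tv_denoise_admm(noisy)
```

The recovered signal has a much smaller total variation than the noisy input while preserving the step edge, which is the behavior the paper relies on to suppress decomposition noise without losing spatial resolution.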

  3. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrated that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
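The calibration in record 3 amounts to estimating a material basis matrix from known concentrations and then inverting it per voxel. A schematic, noiseless version with invented attenuation values (two materials, three energy bins):

```python
import numpy as np

# hypothetical basis matrix: rows = energy bins, columns = materials;
# entries are attenuation per unit concentration, as fit during calibration
M = np.array([[2.0, 0.5],
              [1.2, 1.5],
              [0.4, 2.2]])
true_conc = np.array([3.0, 1.0])        # e.g. mg/mL of two contrast agents
measured = M @ true_conc                # noiseless multi-bin measurement

# decomposition: least-squares inversion of the calibrated basis matrix
est, *_ = np.linalg.lstsq(M, measured, rcond=None)
```

With noise present, errors in the calibrated entries of M propagate directly into the estimated concentrations, which is why the calibration range matters so much in the study's results.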

  4. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. 
The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image (see figure). Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
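The mean-subtraction step in record 4 is simple to state precisely: one mean is computed and stored per spatial plane, and the decoder adds it back losslessly. A sketch on a hypothetical (bands, rows, cols) subband array:

```python
import numpy as np

def subtract_plane_means(subband):
    """Mean-subtraction step: remove and record the mean of each spatial
    plane of a (bands, rows, cols) spatially-low-pass subband."""
    means = subband.mean(axis=(1, 2))              # one mean per spectral plane
    centered = subband - means[:, None, None]      # zero-mean planes for coding
    return centered, means                         # means go into the bit stream

cube = np.random.default_rng(1).normal(loc=5.0, size=(4, 8, 8))
centered, means = subtract_plane_means(cube)
restored = centered + means[:, None, None]         # decoder adds means back
```

Each centered plane is exactly zero-mean, matching the abstract's rationale that zero-mean data suit 2D subband coders better.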

  5. Utilization of a balanced steady state free precession signal model for improved fat/water decomposition.

    PubMed

    Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F

    2016-03-01

    Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition. Incorporation of this model allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.

  6. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement and require oversight of an expert user. This paper introduces an intuitive and easy to implement method for vibration analysis: amplitude-cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes and then uses the squared envelope spectrum to reveal the presence of cyclostationarity at each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
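The squared envelope spectrum used in record 6 is the spectrum of the squared magnitude of the analytic signal; on an amplitude-modulated carrier it peaks at the modulation (cyclic) frequency. A self-contained sketch with an invented 200 Hz carrier modulated at 20 Hz (numpy-only Hilbert construction):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        h[N // 2] = 1.0
    return np.fft.ifft(X * h)

def squared_envelope_spectrum(x, fs):
    """Spectrum of the squared envelope, which exposes cyclic (modulation)
    frequencies such as bearing fault rates."""
    env2 = np.abs(analytic_signal(x)) ** 2
    env2 = env2 - env2.mean()                    # drop the DC term
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, np.abs(np.fft.rfft(env2)) / len(x)

# toy "fault" signal: 200 Hz carrier, amplitude-modulated at 20 Hz
fs = 2000
t = np.arange(0, 1, 1 / fs)
x = (1 + 0.8 * np.cos(2 * np.pi * 20 * t)) * np.sin(2 * np.pi * 200 * t)
freqs, spec = squared_envelope_spectrum(x, fs)
```

The dominant peak lands at the 20 Hz modulation frequency, not the 200 Hz carrier; in the paper this step is applied separately to each amplitude level of the decomposition.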

  7. Spectral Diffusion: An Algorithm for Robust Material Decomposition of Spectral CT Data

    PubMed Central

    Clark, Darin P.; Badea, Cristian T.

    2014-01-01

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piece-wise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg/mL), gold (0.9 mg/mL), and gadolinium (2.9 mg/mL) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen. PMID:25296173

  8. Spectral diffusion: an algorithm for robust material decomposition of spectral CT data.

    PubMed

    Clark, Darin P; Badea, Cristian T

    2014-11-07

    Clinical successes with dual energy CT, aggressive development of energy discriminating x-ray detectors, and novel, target-specific, nanoparticle contrast agents promise to establish spectral CT as a powerful functional imaging modality. Common to all of these applications is the need for a material decomposition algorithm which is robust in the presence of noise. Here, we develop such an algorithm which uses spectrally joint, piecewise constant kernel regression and the split Bregman method to iteratively solve for a material decomposition which is gradient sparse, quantitatively accurate, and minimally biased. We call this algorithm spectral diffusion because it integrates structural information from multiple spectral channels and their corresponding material decompositions within the framework of diffusion-like denoising algorithms (e.g. anisotropic diffusion, total variation, bilateral filtration). Using a 3D, digital bar phantom and a material sensitivity matrix calibrated for use with a polychromatic x-ray source, we quantify the limits of detectability (CNR = 5) afforded by spectral diffusion in the triple-energy material decomposition of iodine (3.1 mg·mL⁻¹), gold (0.9 mg·mL⁻¹), and gadolinium (2.9 mg·mL⁻¹) concentrations. We then apply spectral diffusion to the in vivo separation of these three materials in the mouse kidneys, liver, and spleen.

  9. Identification of channel geometries applying seismic attributes and spectral decomposition techniques, Temsah Field, Offshore East Nile Delta, Egypt

    NASA Astrophysics Data System (ADS)

    Othman, Adel A. A.; Fathy, M.; Negm, Adel

    2018-06-01

    The Temsah field is located in the eastern, offshore part of the Nile Delta. The main reservoirs of the area are Middle Pliocene and consist mainly of siliciclastics deposited in a confined deep-marine environment. The distribution pattern of the reservoir facies is of limited scale, indicating rapid lateral and vertical changes that are not easy to resolve with conventional seismic attributes. The target of the present study is to create geophysical workflows that better image the channel sand distribution in the study area. We applied both the average absolute amplitude and energy attributes, which indicate the distribution of the sand bodies in the study area but fail to fully describe the channel geometry. Therefore, another tool offering a more detailed description of the geometry is needed. Spectral decomposition, based on the Discrete Fourier Transform, is an alternative technique that can provide better results. Spectral decomposition performed over the upper channel shows that the frequency in the eastern part of the channel matches the frequency in the locations where the wells are drilled, which confirms the connection between the eastern and western parts of the upper channel. The results suggest that application of the spectral decomposition method leads to reliable inferences. Hence, using the spectral decomposition method alone or along with other attributes has a positive impact on reserves growth and increased production, with reserves in the study area increasing to 75 bcf.

  10. 3D tensor-based blind multispectral image decomposition for tumor demarcation

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Peršin, Antun

    2010-03-01

    Blind decomposition of a multi-spectral fluorescent image for tumor demarcation is formulated exploiting the tensorial structure of the image. The first contribution of the paper is identification of the matrix of spectral responses and the 3D tensor of spatial distributions of the materials present in the image from Tucker3 or PARAFAC models of the 3D image tensor. The second contribution of the paper is clustering-based estimation of the number of materials present in the image as well as the matrix of their spectral profiles. The 3D tensor of the spatial distributions of the materials is recovered through 3-mode multiplication of the multi-spectral image tensor and the inverse of the matrix of spectral profiles. Tensor representation of the multi-spectral image preserves its local spatial structure, which is lost, due to the vectorization process, when matrix factorization-based decomposition methods (such as non-negative matrix factorization and independent component analysis) are used. Superior performance of the tensor-based image decomposition over matrix factorization-based decompositions is demonstrated on an experimental red-green-blue (RGB) image with known ground truth as well as on RGB fluorescent images of skin tumor (basal cell carcinoma).
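The recovery step in record 10 (3-mode multiplication of the image tensor by the inverse of the spectral-profile matrix) is compact to demonstrate. A synthetic example with invented spectral profiles and random spatial maps:

```python
import numpy as np

rng = np.random.default_rng(2)
maps = rng.random((3, 16, 16))             # 3 materials: spatial distributions
A = np.array([[0.9, 0.1, 0.2],
              [0.1, 0.8, 0.3],
              [0.2, 0.2, 0.7]])            # spectral profiles (channel x material)
image = np.einsum('cm,mxy->cxy', A, maps)  # mixed multi-spectral image tensor

# 3-mode multiplication by the inverse of the spectral-profile matrix
recovered = np.einsum('mc,cxy->mxy', np.linalg.inv(A), image)
```

In the paper the profile matrix A is itself estimated blindly (by clustering); here it is given, so the unmixing is exact.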

  11. GW calculations using the spectral decomposition of the dielectric matrix: Verification, validation, and comparison of methods

    DOE PAGES

    Pham, T. Anh; Nguyen, Huy -Viet; Rocca, Dario; ...

    2013-04-26

    In a recent paper we presented an approach to evaluate quasiparticle energies based on the spectral decomposition of the static dielectric matrix. This method does not require the calculation of unoccupied electronic states or the direct diagonalization of large dielectric matrices, and it avoids the use of plasmon-pole models. The numerical accuracy of the approach is controlled by a single parameter, i.e., the number of eigenvectors used in the spectral decomposition of the dielectric matrix. Here we present a comprehensive validation of the method, encompassing calculations of ionization potentials and electron affinities of various molecules and of band gaps for several crystalline and disordered semiconductors. Lastly, we demonstrate the efficiency of our approach by carrying out GW calculations for systems with several hundred valence electrons.

  12. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit spatial correlations of each band. However, NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images. This method is based on a pair-wise multilevel grouping approach for the NTD to overcome its high computational cost. The proposed method has low complexity at the cost of a slight decrease in coding performance compared to conventional NTD. Experiments confirm that the method requires less processing time while maintaining better coding performance than compression without NTD. The proposed approach has a potential application in the lossy compression of hyper-spectral or multi-spectral images.
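Record 12's compression rests on approximating the image cube by a small core tensor plus per-mode factor matrices. As a simplified stand-in (truncated higher-order SVD, without the paper's nonnegativity constraint or grouping scheme; the cube below is synthetic with exact multilinear rank (2, 2, 2)):

```python
import numpy as np

def hosvd_compress(T, ranks):
    """Truncated higher-order SVD: mode-wise factor matrices plus a small
    core tensor (an unconstrained stand-in for Tucker/NTD compression)."""
    factors = []
    for mode, r in enumerate(ranks):
        unfolding = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        factors.append(U[:, :r])                  # leading mode subspace
    core = T
    for mode, U in enumerate(factors):            # project onto each subspace
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def hosvd_reconstruct(core, factors):
    T = core
    for mode, U in enumerate(factors):            # expand back to full size
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T

# synthetic "hyperspectral" cube with multilinear rank (2, 2, 2)
rng = np.random.default_rng(0)
G = rng.random((2, 2, 2))
A, B, C = rng.random((8, 2)), rng.random((8, 2)), rng.random((4, 2))
cube = np.einsum('ijk,ai,bj,ck->abc', G, A, B, C)

core, factors = hosvd_compress(cube, (2, 2, 2))
approx = hosvd_reconstruct(core, factors)
```

Storing the (2, 2, 2) core and three thin factors in place of the full 8 x 8 x 4 cube is the compression; for data that is only approximately low-rank the reconstruction is lossy.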

  13. Delineating gas bearing reservoir by using spectral decomposition attribute: Case study of Steenkool formation, Bintuni Basin

    NASA Astrophysics Data System (ADS)

    Haris, A.; Pradana, G. S.; Riyanto, A.

    2017-07-01

    The tectonic setting of the Bird's Head region of Papua provides an important model for petroleum systems in the eastern part of Indonesia. Exploration began with the discovery of oil seepage in the Bintuni and Salawati Basins. Biogenic gas in shallow layers has turned out to be an interesting issue in hydrocarbon exploration, and the appearance of hydrocarbon accumulations of dry-gas type in shallow layers makes biogenic gas appealing for further research. This paper aims at delineating sweet-spot hydrocarbon potential in a shallow layer by applying the spectral decomposition technique. Spectral decomposition decomposes the seismic signal into individual frequency components, which can have significant geological meaning. One spectral decomposition method is the Continuous Wavelet Transform (CWT), which transforms the seismic signal into time and frequency simultaneously and facilitates time-frequency map analysis. When time resolution increases, frequency resolution decreases, and vice versa. In this study, we perform low-frequency shadow-zone analysis, in which an amplitude anomaly observed at a low frequency of 15 Hz is compared to the amplitude at mid (20 Hz) and high (30 Hz) frequencies; the anomaly that appears at low frequency disappears at high frequency. The spectral decomposition with the CWT algorithm has been successfully applied to delineate the sweet-spot zone.
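The single-frequency amplitude maps compared in record 13 can be mimicked with a sliding-window DFT at a chosen frequency (a crude stand-in for the CWT, with an invented two-segment trace):

```python
import numpy as np

def single_frequency_amplitude(trace, fs, f0, win=128):
    """Amplitude of one frequency along a trace via a sliding-window DFT,
    a rough substitute for CWT-based spectral decomposition."""
    t = np.arange(win) / fs
    kernel = np.hanning(win) * np.exp(-2j * np.pi * f0 * t)
    return np.array([np.abs(np.dot(trace[i:i + win], kernel))
                     for i in range(len(trace) - win)])

# synthetic trace: 15 Hz energy in the first second, 30 Hz in the second
fs = 500
t = np.arange(0, 2, 1 / fs)
trace = np.sin(2 * np.pi * 15 * t) * (t < 1) + np.sin(2 * np.pi * 30 * t) * (t >= 1)
a15 = single_frequency_amplitude(trace, fs, 15)
a30 = single_frequency_amplitude(trace, fs, 30)
```

The 15 Hz amplitude is strong only where the low-frequency energy lives; a zone that is bright at 15 Hz but dark at 30 Hz is the "low-frequency shadow" signature the study looks for.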

  14. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is an approximate method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet this requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurements. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs the rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated using simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.

  15. Regularization of nonlinear decomposition of spectral x-ray projection images.

    PubMed

    Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise

    2017-09-01

    Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts, even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster.
For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10⁵ and when the marker concentration was equal to or larger than 0.03 g·cm⁻³. The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework, in which projections are solved in parallel. © 2017 American Association of Physicists in Medicine.
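The Gauss-Newton iteration at the heart of record 15 can be sketched on a toy nonlinear forward model. Everything below (bin counts, attenuation values, line integrals) is invented, and the sketch is plain Gauss-Newton without the paper's weighting or regularization:

```python
import numpy as np

# hypothetical 4-bin photon-counting model: counts_b = N_b * exp(-(mu @ a)_b),
# where a holds the line integrals of two basis materials
N = np.array([1e5, 8e4, 6e4, 4e4])          # unattenuated counts per bin
mu = np.array([[2.0, 0.3],
               [1.4, 0.6],
               [0.9, 1.1],
               [0.5, 1.8]])                 # per-bin basis-material attenuation
a_true = np.array([0.8, 0.4])
y = N * np.exp(-mu @ a_true)                # noiseless measurements

def gauss_newton(y, a0, n_iter=50):
    """Plain Gauss-Newton on the nonlinear residual f(a) - y."""
    a = a0.astype(float)
    for _ in range(n_iter):
        f = N * np.exp(-mu @ a)             # forward model
        J = -f[:, None] * mu                # Jacobian df/da
        a -= np.linalg.solve(J.T @ J, J.T @ (f - y))   # normal-equations step
    return a

a_est = gauss_newton(y, np.array([0.5, 0.5]))
```

Because each step only solves a small linear system, the per-pixel decompositions parallelize trivially, which is the parallelization opportunity the abstract points to.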

  16. Learning Low-Rank Decomposition for Pan-Sharpening With Spatial-Spectral Offsets.

    PubMed

    Yang, Shuyuan; Zhang, Kai; Wang, Min

    2017-08-25

    Finding accurate injection components is the key issue in pan-sharpening methods. In this paper, a low-rank pan-sharpening (LRP) model is developed from a new perspective of offset learning. Two offsets are defined to represent the spatial and spectral differences between low-resolution multispectral and high-resolution multispectral (HRMS) images, respectively. In order to reduce spatial and spectral distortions, spatial equalization and spectral proportion constraints are designed and cast on the offsets, to develop a spatial and spectral constrained stable low-rank decomposition algorithm via augmented Lagrange multiplier. By fine modeling and heuristic learning, our method can simultaneously reduce spatial and spectral distortions in the fused HRMS images. Moreover, our method can efficiently deal with noises and outliers in source images, for exploring low-rank and sparse characteristics of data. Extensive experiments are taken on several image data sets, and the results demonstrate the efficiency of the proposed LRP.

  17. A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method is given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record is given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolution.
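The Hilbert spectrum in record 17 plots instantaneous frequency against time, obtained from the phase of the analytic signal of each intrinsic mode. A minimal sketch of that last step (EMD itself omitted; the 50 Hz test tone is arbitrary):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the frequency-domain Hilbert construction."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        h[N // 2] = 1.0
    return np.fft.ifft(X * h)

def instantaneous_frequency(x, fs):
    """Instantaneous frequency from the unwrapped analytic-signal phase:
    the quantity the Hilbert spectrum plots against time."""
    phase = np.unwrap(np.angle(analytic_signal(x)))
    return np.diff(phase) * fs / (2 * np.pi)

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)
f_inst = instantaneous_frequency(x, fs)
```

Unlike a Fourier line spectrum, this estimate is defined at every sample, which is what lets HSA track nonstationary frequency content in earthquake records.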

  18. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as an SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
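The cross-spectral density matrix that record 18 decomposes is estimated by averaging outer products of segment FFTs. A sketch of that estimate plus the ordinary coherence it yields, on an invented two-channel example (output = delayed input plus noise):

```python
import numpy as np

def csd_matrix(records, nseg=32):
    """Segment-averaged cross-spectral density matrix estimate: one
    (channels x channels) Hermitian matrix per frequency bin."""
    ch, n = records.shape
    L = n // nseg
    w = np.hanning(L)
    S = np.zeros((L // 2 + 1, ch, ch), dtype=complex)
    for k in range(nseg):
        seg = records[:, k * L:(k + 1) * L] * w     # windowed segment
        F = np.fft.rfft(seg, axis=1)
        S += np.einsum('if,jf->fij', F, F.conj())   # outer product per bin
    return S / nseg

rng = np.random.default_rng(3)
x = rng.normal(size=1 << 14)
y = np.roll(x, 5) + 0.1 * rng.normal(size=x.size)   # delayed, slightly noisy copy
S = csd_matrix(np.vstack([x, y]))

# ordinary coherence between the two channels at each frequency
coh = np.abs(S[:, 0, 1]) ** 2 / (S[:, 0, 0].real * S[:, 1, 1].real)
```

Per frequency bin, a Cholesky or SVD factorization of S is exactly the object the paper manipulates to obtain partial and multiple coherences.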

  19. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. 
Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.

  20. Analysis of Vibration and Noise of Construction Machinery Based on Ensemble Empirical Mode Decomposition and Spectral Correlation Analysis Method

    NASA Astrophysics Data System (ADS)

    Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan

    In order to analyze the effect of engine vibration on cab noise of construction machinery in multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. First, the intrinsic mode functions (IMFs) of the vibration and noise signals were obtained by the EEMD method, and the IMFs occupying the same frequency bands were selected. Second, we calculated the spectral correlation coefficients between the selected IMFs to identify the main frequency bands in which engine vibration has a significant impact on cab noise. Third, the dominant frequencies were picked out and analyzed by spectral analysis. The results show that the proposed method can effectively identify the main frequency bands and dominant frequencies in which engine vibration has a serious impact on cab noise, providing effective guidance for noise reduction in construction machinery.
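EEMD itself requires an EMD implementation (e.g., a dedicated library such as PyEMD), so the sketch below covers only the spectral-correlation step, with synthetic sinusoids standing in for IMFs that share, or do not share, a frequency band.

```python
import numpy as np

fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)

# Illustrative "IMFs": a vibration component and a noise component share
# a 120 Hz band; a third component sits at an unrelated 300 Hz.
imf_vib = np.sin(2 * np.pi * 120 * t)
imf_noise_related = 0.8 * np.sin(2 * np.pi * 120 * t + 0.3)
imf_noise_unrelated = np.sin(2 * np.pi * 300 * t)

def spectral_correlation(x, y):
    """Correlation coefficient between the magnitude spectra of x and y."""
    X = np.abs(np.fft.rfft(x))
    Y = np.abs(np.fft.rfft(y))
    return float(np.corrcoef(X, Y)[0, 1])

r_related = spectral_correlation(imf_vib, imf_noise_related)
r_unrelated = spectral_correlation(imf_vib, imf_noise_unrelated)
```

A high coefficient between band-matched IMFs of the vibration and noise signals flags a band where engine vibration plausibly drives cab noise; a near-zero coefficient flags an unrelated band.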

  1. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
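The spectral-clipping step can be illustrated without the fast nearfield method itself: transform a surface-velocity waveform to the frequency domain, drop terms below a threshold, and check how few terms remain. The waveform, sampling rate, and 1% threshold below are illustrative choices, not values from the paper.

```python
import numpy as np

fs = 50e6                              # 50 MHz sampling
t = np.arange(0, 2e-6, 1 / fs)         # 2 microsecond window
f0 = 5e6
v = np.sin(2 * np.pi * f0 * t) * np.hanning(t.size)   # tone burst

V = np.fft.rfft(v)
threshold = 0.01 * np.max(np.abs(V))
keep = np.abs(V) >= threshold          # retained spectral terms
V_clipped = np.where(keep, V, 0.0)

v_rec = np.fft.irfft(V_clipped, n=t.size)
n_terms_full = V.size
n_terms_kept = int(keep.sum())
rel_err = float(np.linalg.norm(v_rec - v) / np.linalg.norm(v))
```

Each retained complex term would drive one single-frequency field computation; clipping trades a small reconstruction error for proportionally fewer such computations.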

  2. Tensor-based Dictionary Learning for Spectral CT Reconstruction

    PubMed Central

    Zhang, Yanbo; Wang, Ge

    2016-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, the image in each energy channel is sparsely representable, and the channels are highly correlated with one another. Based on these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the CANDECOMP/PARAFAC decomposition, a tensor-based dictionary is trained in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and an alternating minimization scheme is adopted for optimization. The effectiveness of the proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than currently popular methods. PMID:27541628
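The building block of such a CP-based dictionary is a rank-one tensor atom, i.e. an outer product of mode vectors (two spatial, one spectral). A minimal NumPy sketch, with illustrative dimensions rather than the paper's training code:

```python
import numpy as np

rng = np.random.default_rng(6)
a = rng.standard_normal(8)    # spatial mode 1
b = rng.standard_normal(8)    # spatial mode 2
c = rng.standard_normal(4)    # spectral (energy-channel) mode
atom = np.einsum('i,j,k->ijk', a, b, c)   # 8 x 8 x 4 rank-one tensor

# A rank-one tensor has exactly one significant singular value in any
# matricization (unfolding).
unfold = atom.reshape(8, -1)
s = np.linalg.svd(unfold, compute_uv=False)
n_sig = int(np.sum(s > 1e-8 * s[0]))
```

Constraining atoms to rank one is what couples the spatial sparsity within each channel to the correlation across channels.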

  3. Domain decomposition preconditioners for the spectral collocation method

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio; Sacchilandriani, Giovanni

    1988-01-01

    Several block iteration preconditioners are proposed and analyzed for the solution of elliptic problems by spectral collocation methods in a region partitioned into several rectangles. It is shown that convergence is achieved at a rate that does not depend on the polynomial degree of the spectral solution. The iterative methods presented here can be effectively implemented on multiprocessor systems due to their high degree of parallelism.

  4. Breast density evaluation using spectral mammography, radiologist reader assessment and segmentation techniques: a retrospective study based on left and right breast comparison

    PubMed Central

    Molloi, Sabee; Ding, Huanjun; Feig, Stephen

    2015-01-01

    Purpose: The purpose of this study was to compare the precision of mammographic breast density measurement using radiologist reader assessment, histogram threshold segmentation, fuzzy C-mean segmentation and spectral material decomposition. Materials and Methods: Spectral mammography images from a total of 92 consecutive asymptomatic women (50–69 years old) who presented for annual screening mammography were retrospectively analyzed. Breast density was estimated by 10 radiologist readers and by standard histogram thresholding, a fuzzy C-mean algorithm and spectral material decomposition. The correlation of breast density between the left and right breasts was used to assess the precision of these techniques relative to dual-energy material decomposition. Results: Breast density measurements using dual-energy material decomposition showed the highest left-right correlation of the techniques compared. The relative standard error of estimate for breast density measurements from the left and right breasts using radiologist reader assessment, standard histogram thresholding, the fuzzy C-mean algorithm and dual-energy material decomposition was calculated to be 1.95, 2.87, 2.07 and 1.00, respectively. Conclusion: The results indicate that the precision of dual-energy material decomposition was approximately a factor of two higher than that of the other techniques, as judged by the correlation of breast density measurements from the right and left breasts. PMID:26031229
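The left-right precision metric can be sketched as follows, with synthetic density values standing in for the study's measurements: regress right-breast density on left-breast density and report the correlation and the standard error of estimate.

```python
import numpy as np

rng = np.random.default_rng(1)
left = rng.uniform(10, 60, size=92)           # density (%), left breasts
right = left + rng.normal(0, 2.0, size=92)    # right breasts, small scatter

slope, intercept = np.polyfit(left, right, 1)
pred = slope * left + intercept
# Standard error of estimate of the left-right regression.
see = float(np.sqrt(np.sum((right - pred) ** 2) / (left.size - 2)))
r = float(np.corrcoef(left, right)[0, 1])
```

Since the two breasts of one woman have similar composition, a tighter left-right regression (smaller SEE, higher r) indicates a more precise measurement technique.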

  5. Measuring Glial Metabolism in Repetitive Brain Trauma and Alzheimer’s Disease

    DTIC Science & Technology

    2016-09-01

    Six denoising methods for dynamic MRS were considered: singular value decomposition (SVD), wavelet, sliding window, sliding window with Gaussian weighting, spline, and spectral improvements. ...He contributed to the project by improving the software required for the data analysis, developing the six denoising methods, and assisting with the testing...

  6. Absorption spectrum analysis based on singular value decomposition for photoisomerization and photodegradation in organic dyes

    NASA Astrophysics Data System (ADS)

    Kawabe, Yutaka; Yoshikawa, Toshio; Chida, Toshifumi; Tada, Kazuhiro; Kawamoto, Masuki; Fujihara, Takashi; Sassa, Takafumi; Tsutsumi, Naoto

    2015-10-01

    In order to analyze the spectra of inseparable chemical mixtures, many mathematical methods have been developed to decompose a series of spectral data obtained under different conditions into components associated with the individual species. We formulated a method based on singular value decomposition (SVD) from linear algebra and applied it to two example systems of organic dyes, successfully reproducing absorption spectra assignable to the cis/trans azocarbazole dyes from spectral data recorded after photoisomerization, and to the monomer/dimer of cyanine dyes from data recorded during the photodegradation process. For the photoisomerization example, polymer films containing azocarbazole dyes were prepared, which have shown high-performance updatable holographic stereograms for real images. We continuously monitored the absorption spectrum after optical excitation and found that the spectral shape varied slightly after excitation and during the recovery process, which suggested a contribution from a generated photoisomer. The method successfully identified two spectral components due to the trans and cis forms of the azocarbazoles. The temporal evolution of their weight factors suggested an important role for long-lived cis states in azocarbazole derivatives. We also applied the method to the photodegradation of cyanine dyes doped in DNA-lipid complexes, which have shown efficient and durable optical amplification and/or lasing under optical pumping. The same SVD method successfully extracted two spectral components, presumably due to the monomer and an H-type dimer. During the photodegradation process, the absorption magnitude gradually decreased as the molecules decomposed, and the decay rates depended strongly on the spectral component, suggesting that the long persistence of the dyes in the DNA complex is related to a weak tendency toward aggregate formation.
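The core SVD step can be illustrated on a synthetic two-component mixture: stack spectra recorded at different times into a matrix and count the significant singular values, which estimates the number of independent chemical components. The band shapes and mixing fractions below are invented for the demonstration.

```python
import numpy as np

wl = np.linspace(400, 700, 301)            # wavelength grid (nm)
comp_a = np.exp(-((wl - 480) / 30) ** 2)   # "trans"-like band
comp_b = np.exp(-((wl - 560) / 25) ** 2)   # "cis"-like band

# Spectra recorded at different times: varying mixtures of the two.
fracs = np.linspace(0.1, 0.9, 12)
D = np.array([f * comp_a + (1 - f) * comp_b for f in fracs])
D += 1e-4 * np.random.default_rng(2).standard_normal(D.shape)

U, s, Vt = np.linalg.svd(D, full_matrices=False)
# Singular values well above the noise floor ~ number of components.
n_components = int(np.sum(s > 0.01 * s[0]))
```

With the rank established, the physical component spectra can be recovered as linear combinations of the first rows of `Vt`, subject to non-negativity or kinetic constraints.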

  7. RIO: a new computational framework for accurate initial data of binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-06-01

    We present a computational framework (Rio) in the ADM 3+1 approach for numerical relativity. This work enables us to carry out high-resolution calculations for initial data of two arbitrary black holes. We use the transverse conformal treatment and the Bowen-York and puncture methods. For the numerical solution of the Hamiltonian constraint we use domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code, which allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.

  8. Spectral Data Reduction via Wavelet Decomposition

    NASA Technical Reports Server (NTRS)

    Kaewpijit, S.; LeMoigne, J.; El-Ghazawi, T.; Rood, Richard (Technical Monitor)

    2002-01-01

    The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground surface pixel by its corresponding reflected spectral signature is still one of the most difficult challenges in the exploitation of this advanced technology, because of the immense volume of data collected. Conventional classification methods therefore require a preprocessing step of dimension reduction to conquer the so-called "curse of dimensionality." Spectral data reduction using wavelet decomposition can be useful, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic follows from an intrinsic property of wavelet transforms: high- and low-frequency features are preserved during signal decomposition, and hence so are the peaks and valleys found in typical spectra. Compared to the most widespread dimension reduction technique, principal component analysis (PCA), at the same compression rate, wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as the maximum likelihood method.
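A one-level Haar decomposition illustrates how wavelet reduction halves the spectral dimension while keeping peak locations; the synthetic two-peak signature below is illustrative, and a real analysis would use a wavelet library and multiple decomposition levels.

```python
import numpy as np

bands = np.arange(256)
# Synthetic spectral signature with peaks at bands 80 and 180.
spectrum = (np.exp(-((bands - 80) / 10.0) ** 2)
            + 0.5 * np.exp(-((bands - 180) / 15.0) ** 2))

def haar_reduce(x):
    """One level of the Haar DWT; returns the approximation coefficients."""
    pairs = x.reshape(-1, 2)
    return (pairs[:, 0] + pairs[:, 1]) / np.sqrt(2.0)

reduced = haar_reduce(spectrum)           # 128 values instead of 256
# The reduced signature still peaks near the original peak location
# (each reduced index covers two original bands).
peak_reduced = int(np.argmax(reduced))
peak_full = int(np.argmax(spectrum))
```

Because the low-pass branch is a local average, broad absorption features survive the reduction, which is the property the abstract credits for preserving class separability.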

  9. Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction

    NASA Astrophysics Data System (ADS)

    Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing

    2018-02-01

    Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
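The low-rank plus sparse split can be sketched on a toy matrix with alternating singular-value thresholding and soft thresholding. This is a generic illustration of the decomposition model, not the paper's NLSMD algorithm, which additionally groups nonlocal patches; thresholds and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
u = rng.standard_normal((40, 1))
v = rng.standard_normal((1, 30))
L_true = u @ v                              # rank-one "background"
S_true = np.zeros((40, 30))
support = rng.choice(40 * 30, size=30, replace=False)
S_true.flat[support] = 5.0                  # sparse "spectral features"
M = L_true + S_true

L = np.zeros_like(M)
S = np.zeros_like(M)
tau, lam = 0.5, 0.3                         # shrinkage thresholds
for _ in range(200):
    # Low-rank step: singular-value thresholding of M - S.
    U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
    L = (U * np.maximum(s - tau, 0.0)) @ Vt
    # Sparse step: entrywise soft thresholding of M - L.
    R = M - L
    S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)

rel_err_L = float(np.linalg.norm(L - L_true) / np.linalg.norm(L_true))
frac_nonzero_S = float(np.mean(np.abs(S) > 0.0))
```

The alternation is block coordinate descent on a convex objective, so it separates the shared background (low-rank term) from the bin-specific features (sparse term) up to the shrinkage bias.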

  10. Recent advances in the modeling of plasmas with the Particle-In-Cell methods

    NASA Astrophysics Data System (ADS)

    Vay, Jean-Luc; Lehe, Remi; Vincenti, Henri; Godfrey, Brendan; Lee, Patrick; Haber, Irv

    2015-11-01

    The Particle-In-Cell (PIC) approach is the method of choice for self-consistent simulations of plasmas from first principles. The fundamentals of the PIC method were established decades ago but improvements or variations are continuously being proposed. We report on several recent advances in PIC related algorithms, including: (a) detailed analysis of the numerical Cherenkov instability and its remediation, (b) analytic pseudo-spectral electromagnetic solvers in Cartesian and cylindrical (with azimuthal modes decomposition) geometries, (c) arbitrary-order finite-difference and generalized pseudo-spectral Maxwell solvers, (d) novel analysis of Maxwell's solvers' stencil variation and truncation, in application to domain decomposition strategies and implementation of Perfectly Matched Layers in high-order and pseudo-spectral solvers. Work supported by US-DOE Contracts DE-AC02-05CH11231 and the US-DOE SciDAC program ComPASS. Used resources of NERSC, supported by US-DOE Contract DE-AC02-05CH11231.

  11. Spectral response model for a multibin photon-counting spectral computed tomography detector and its applications.

    PubMed

    Liu, Xuejin; Persson, Mats; Bornefalk, Hans; Karlsson, Staffan; Xu, Cheng; Danielsson, Mats; Huber, Ben

    2015-07-01

    Variations among detector channels in computed tomography can lead to ring artifacts in the reconstructed images and biased estimates in projection-based material decomposition. Typically, the ring artifacts are corrected by compensation methods based on flat fielding, where transmission measurements are required for a number of material-thickness combinations. Phantoms used in these methods can be rather complex and require an extensive number of transmission measurements. Moreover, material decomposition needs knowledge of the individual response of each detector channel to account for the detector inhomogeneities. For this purpose, we have developed a spectral response model that binwise predicts the response of a multibin photon-counting detector individually for each detector channel. The spectral response model is performed in two steps. The first step employs a forward model to predict the expected numbers of photon counts, taking into account parameters such as the incident x-ray spectrum, absorption efficiency, and energy response of the detector. The second step utilizes a limited number of transmission measurements with a set of flat slabs of two absorber materials to fine-tune the model predictions, resulting in a good correspondence with the physical measurements. To verify the response model, we apply the model in two cases. First, the model is used in combination with a compensation method which requires an extensive number of transmission measurements to determine the necessary parameters. Our spectral response model successfully replaces these measurements by simulations, saving a significant amount of measurement time. Second, the spectral response model is used as the basis of the maximum likelihood approach for projection-based material decomposition. The reconstructed basis images show a good separation between the calcium-like material and the contrast agents, iodine and gadolinium. 
The contrast agent concentrations are reconstructed with more than 94% accuracy.

  13. Effect of gamma-irradiation on thermal decomposition kinetics, X-ray diffraction pattern and spectral properties of tris(1,2-diaminoethane)nickel(II)sulphate

    NASA Astrophysics Data System (ADS)

    Jayashri, T. A.; Krishnan, G.; Rema Rani, N.

    2014-12-01

    Tris(1,2-diaminoethane)nickel(II)sulphate was prepared and characterised by various chemical and spectral techniques. The sample was irradiated with 60Co gamma rays at varying doses. Sulphite ion and ammonia were detected and estimated in the irradiated samples. Non-isothermal decomposition kinetics, the X-ray diffraction pattern, Fourier transform infrared spectra, electronic spectra, fast atom bombardment mass spectra, and the surface morphology of the complex were studied before and after irradiation. Kinetic parameters were evaluated by integral, differential, and approximation methods. Irradiation enhanced thermal decomposition, lowering the thermal and kinetic parameters. The mechanism of decomposition is controlled by the R3 function. From X-ray diffraction studies, changes in the lattice parameters and subsequent changes in the unit cell volume and average crystallite size were observed. Both unirradiated and irradiated samples of the complex belong to the trigonal crystal system. A decrease in the intensity of the peaks was observed in the infrared spectra of irradiated samples. Electronic spectral studies revealed that the M-L interaction is unaffected by irradiation. Mass spectral studies showed that the fragmentation patterns of the unirradiated and irradiated samples are similar. An additional fragment with m/z 256 found in the irradiated sample is attributed to S8+. The surface morphology of the complex changed upon irradiation.

  14. Breast tissue decomposition with spectral distortion correction: A postmortem study

    PubMed Central

    Ding, Huanjun; Zhao, Bo; Baturin, Pavlo; Behroozi, Farnaz; Molloi, Sabee

    2014-01-01

    Purpose: To investigate the feasibility of an accurate measurement of the water, lipid, and protein composition of breast tissue using a photon-counting spectral computed tomography (CT) system with spectral distortion corrections. Methods: Thirty-eight postmortem breasts were imaged with a cadmium-zinc-telluride-based photon-counting spectral CT system at 100 kV. The energy-resolving capability of the photon-counting detector was used to separate photons into low and high energy bins with a splitting energy of 42 keV. The estimated mean glandular dose for each breast ranged from 1.8 to 2.2 mGy. Two spectral distortion correction techniques were implemented, respectively, on the raw images to correct the nonlinear detector response due to pulse pileup and charge-sharing artifacts. Dual energy decomposition was then used to characterize each breast in terms of water, lipid, and protein content. In parallel, the breasts were chemically decomposed into their respective water, lipid, and protein components to provide a gold standard for comparison with the dual energy decomposition results. Results: The accuracy of the tissue compositional measurement with spectral CT was determined by comparison with the reference standard from chemical analysis. The averaged root-mean-square error in percentage composition was reduced from 15.5% to 2.8% after spectral distortion corrections. Conclusions: The results indicate that spectral CT can be used to quantify the water, lipid, and protein content in breast tissue. The accuracy of the compositional analysis depends on the applied spectral distortion correction technique. PMID:25281953
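The role of the third material in dual-energy decomposition can be sketched as a small linear system: two energy measurements alone cannot determine three unknowns, so a volume-conservation constraint supplies the third equation. The attenuation coefficients below are invented for illustration, not measured values.

```python
import numpy as np

# Attenuation per unit volume fraction at (low, high) energy bins for
# water, lipid, protein (invented illustrative values).
mu = np.array([
    [0.25, 0.35, 0.40],   # low-energy bin
    [0.18, 0.20, 0.24],   # high-energy bin
])
# Without a third equation the 2x3 system is underdetermined; add the
# volume-conservation constraint: fractions sum to one.
A = np.vstack([mu, np.ones(3)])

f_true = np.array([0.55, 0.30, 0.15])   # water, lipid, protein fractions
b = A @ f_true                          # simulated dual-energy data + constraint
f_est = np.linalg.solve(A, b)
```

The constraint makes the system square and, for these coefficients, invertible; in practice the near-collinearity of soft-tissue attenuation curves makes the solve noise-sensitive, which is why the spectral distortion corrections above matter so much.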

  15. Image-based spectral distortion correction for photon-counting x-ray detectors

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of using an image-based method to correct for distortions induced by various artifacts in the x-ray spectrum recorded with photon-counting detectors for their application in breast computed tomography (CT). Methods: The polyenergetic incident spectrum was simulated with the tungsten anode spectral model using the interpolating polynomials (TASMIP) code and carefully calibrated to match the x-ray tube in this study. Experiments were performed on a Cadmium-Zinc-Telluride (CZT) photon-counting detector with five energy thresholds. Energy bins were adjusted to evenly distribute the recorded counts above the noise floor. BR12 phantoms of various thicknesses were used for calibration. A nonlinear function was selected to fit the count correlation between the simulated and the measured spectra in the calibration process. To evaluate the proposed spectral distortion correction method, an empirical fitting derived from the calibration process was applied on the raw images recorded for polymethyl methacrylate (PMMA) phantoms of 8.7, 48.8, and 100.0 mm. Both the corrected counts and the effective attenuation coefficient were compared to the simulated values for each of the five energy bins. The feasibility of applying the proposed method to quantitative material decomposition was tested using a dual-energy imaging technique with a three-material phantom that consisted of water, lipid, and protein. The performance of the spectral distortion correction method was quantified using the relative root-mean-square (RMS) error with respect to the expected values from simulations or areal analysis of the decomposition phantom. Results: The implementation of the proposed method reduced the relative RMS error of the output counts in the five energy bins with respect to the simulated incident counts from 23.0%, 33.0%, and 54.0% to 1.2%, 1.8%, and 7.7% for 8.7, 48.8, and 100.0 mm PMMA phantoms, respectively. 
The accuracy of the estimated effective attenuation coefficient of PMMA was also improved with the proposed spectral distortion correction. Finally, the relative RMS error of water, lipid, and protein decompositions in dual-energy imaging was significantly reduced from 53.4% to 6.8% after the correction was applied. Conclusions: The study demonstrated that the raw images recorded with a photon-counting detector can exhibit dramatic distortions, which present great challenges for applying quantitative material decomposition methods in spectral CT. The proposed semi-empirical correction method can effectively reduce the errors caused by various artifacts, including pulse pileup and charge sharing effects. Furthermore, the method requires only a relatively simple calibration process and knowledge of the incident spectrum, rather than detector-specific simulation packages. Therefore, it may be used as a generalized procedure for the spectral distortion correction of different photon-counting detectors in clinical breast CT systems. PMID:22482608

  16. Nonconforming mortar element methods: Application to spectral discretizations

    NASA Technical Reports Server (NTRS)

    Maday, Yvon; Mavriplis, Cathy; Patera, Anthony

    1988-01-01

    Spectral element methods are p-type weighted residual techniques for partial differential equations that combine the generality of finite element methods with the accuracy of spectral methods. Presented here is a new nonconforming discretization which greatly improves the flexibility of the spectral element approach as regards automatic mesh generation and non-propagating local mesh refinement. The method is based on the introduction of an auxiliary mortar trace space, and constitutes a new approach to discretization-driven domain decomposition characterized by a clean decoupling of the local, structure-preserving residual evaluations and the transmission of boundary and continuity conditions. The flexibility of the mortar method is illustrated by several nonconforming adaptive Navier-Stokes calculations in complex geometry.

  17. A polychromatic adaption of the Beer-Lambert model for spectral decomposition

    NASA Astrophysics Data System (ADS)

    Sellerer, Thorsten; Ehn, Sebastian; Mechlem, Korbinian; Pfeiffer, Franz; Herzen, Julia; Noël, Peter B.

    2017-03-01

    We present a semi-empirical forward model for spectral photon-counting CT which is fully compatible with state-of-the-art maximum-likelihood estimators (MLE) for basis material line integrals. The model relies on a minimum calibration effort to make the method applicable in routine clinical set-ups, where periodic re-calibration is required. In this work we present an experimental verification of the proposed method. The method uses an adapted Beer-Lambert model, describing the energy-dependent attenuation of a polychromatic x-ray spectrum using additional exponential terms. In an experimental dual-energy photon-counting CT setup based on a CdTe detector, the model accurately predicts the registered counts for an attenuated polychromatic spectrum. Deviations between model and measurement data lie within the Poisson statistical limit of the performed acquisitions, providing an effectively unbiased forward model. The experimental data also show that the model is capable of handling spectral distortions introduced by the photon-counting detector and CdTe sensor. The simplicity and high accuracy of the proposed model provide a viable forward model for MLE-based spectral decomposition methods without the need for costly and time-consuming characterization of the system response.
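The adapted Beer-Lambert idea (detected counts modeled as a small sum of exponential terms in the basis material line integral) can be sketched as follows. The coefficients are invented, not fitted calibration values; the sketch only shows why a single-exponential, monochromatic model cannot reproduce a polychromatic transmission curve.

```python
import numpy as np

def poly_beer_lambert(A, c, m):
    """Counts as a sum of exponentials: N(A) = sum_k c_k * exp(-m_k * A)."""
    A = np.atleast_1d(A)[:, None]
    return (c * np.exp(-A * m)).sum(axis=1)

# Crude 3-line stand-in for a polychromatic spectrum (invented values).
c_true = np.array([4000.0, 3000.0, 2000.0])
m_true = np.array([0.15, 0.25, 0.40])

A_grid = np.linspace(0, 10, 50)          # basis material line integral
counts = poly_beer_lambert(A_grid, c_true, m_true)

# A single-exponential (monochromatic) fit of log-counts cannot match
# the beam-hardened curve, while the 3-term model is exact by construction.
log_counts = np.log(counts)
slope, intercept = np.polyfit(A_grid, log_counts, 1)
mono_fit = np.exp(intercept + slope * A_grid)
mono_err = float(np.max(np.abs(mono_fit - counts) / counts))
```

In an MLE pipeline, a model of this form (with per-bin fitted `c_k`, `m_k`) supplies the expected counts per energy bin as a smooth function of the basis line integrals.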

  18. Modal analysis of 2-D sedimentary basin from frequency domain decomposition of ambient vibration array recordings

    NASA Astrophysics Data System (ADS)

    Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat

    2015-01-01

    Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. An advantage of this method is that it retrieves not only the resonance frequencies of the investigated structure but also the corresponding mode shapes, without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach, using advanced signal processing, to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicular to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonance frequencies are retrieved together with their corresponding mode shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
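The FDD pipeline (cross-power spectral density matrix per frequency, then SVD) can be sketched on a synthetic five-station array with one known resonance. All numbers are illustrative, and a real analysis would average Welch segments rather than use a single long FFT.

```python
import numpy as np

rng = np.random.default_rng(4)
fs = 100.0
t = np.arange(0, 200.0, 1 / fs)
f_res = 2.0                                        # one resonance at 2 Hz
mode_shape = np.array([0.3, 0.8, 1.0, 0.8, 0.3])   # five "stations"

drive = np.sin(2 * np.pi * f_res * t + rng.uniform(0, 2 * np.pi))
data = mode_shape[:, None] * drive + 0.1 * rng.standard_normal((5, t.size))

# First singular value of the CPSD matrix at each frequency.
X = np.fft.rfft(data, axis=1)
freqs = np.fft.rfftfreq(t.size, 1 / fs)
s1 = np.empty(freqs.size)
for k in range(freqs.size):
    G = np.outer(X[:, k], np.conj(X[:, k]))   # rank-one CPSD estimate
    s1[k] = np.linalg.svd(G, compute_uv=False)[0]

k_peak = int(np.argmax(s1))
f_peak = float(freqs[k_peak])

# Mode shape: first left singular vector of the CPSD matrix at the peak.
U, _, _ = np.linalg.svd(np.outer(X[:, k_peak], np.conj(X[:, k_peak])))
shape_est = np.abs(U[:, 0])
shape_est = shape_est / shape_est.max()
```

Peaks of the first singular value curve mark resonance frequencies, and the corresponding singular vector recovers the mode shape up to scale, which is the relative-reference property the abstract emphasizes.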

  19. RESOLVING THE ACTIVE GALACTIC NUCLEUS AND HOST EMISSION IN THE MID-INFRARED USING A MODEL-INDEPENDENT SPECTRAL DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernán-Caballero, Antonio; Alonso-Herrero, Almudena; Hatziminaoglou, Evanthia

    2015-04-20

    We present results on the spectral decomposition of 118 Spitzer Infrared Spectrograph (IRS) spectra from local active galactic nuclei (AGNs) using a large set of Spitzer/IRS spectra as templates. The templates are themselves IRS spectra from extreme cases in which a single physical component (stellar, interstellar, or AGN) completely dominates the integrated mid-infrared emission. We show that a linear combination of one template for each physical component reproduces the observed IRS spectra of AGN hosts with unprecedented fidelity for a template fitting method, with no need to model extinction separately. We use full probability distribution functions to estimate expectation values and uncertainties for observables, and find that the decomposition results are robust against degeneracies. Furthermore, we compare the AGN spectra derived from the spectral decomposition with sub-arcsecond resolution nuclear photometry and spectroscopy from ground-based observations. We find that the AGN component derived from the decomposition closely matches the nuclear spectrum, with a 1σ dispersion of 0.12 dex in luminosity and typical uncertainties of ∼0.19 in the spectral index and ∼0.1 in the silicate strength. We conclude that the emission from the host galaxy can be reliably removed from the IRS spectra of AGNs. This allows for unbiased studies of the AGN emission in intermediate- and high-redshift galaxies (currently inaccessible to ground-based observations) with archival Spitzer/IRS data and, in the future, with the Mid-InfraRed Instrument of the James Webb Space Telescope. The decomposition code and templates are available at http://denebola.org/ahc/deblendIRS.

  20. Application of an improved spectral decomposition method to examine earthquake source scaling in Southern California

    NASA Astrophysics Data System (ADS)

    Trugman, Daniel T.; Shearer, Peter M.

    2017-04-01

    Earthquake source spectra contain fundamental information about the dynamics of earthquake rupture. However, the inherent tradeoffs in separating source and path effects, when combined with limitations in recorded signal bandwidth, make it challenging to obtain reliable source spectral estimates for large earthquake data sets. We present here a stable and statistically robust spectral decomposition method that iteratively partitions the observed waveform spectra into source, receiver, and path terms. Unlike previous methods of its kind, our new approach provides formal uncertainty estimates and does not assume self-similar scaling in earthquake source properties. Its computational efficiency allows us to examine large data sets (tens of thousands of earthquakes) that would be impractical to analyze using standard empirical Green's function-based approaches. We apply the spectral decomposition technique to P wave spectra from five areas of active contemporary seismicity in Southern California: the Yuha Desert, the San Jacinto Fault, and the Big Bear, Landers, and Hector Mine regions of the Mojave Desert. We show that the source spectra are generally consistent with an increase in median Brune-type stress drop with seismic moment but that this observed deviation from self-similar scaling is both model dependent and varies in strength from region to region. We also present evidence for significant variations in median stress drop and stress drop variability on regional and local length scales. These results both contribute to our current understanding of earthquake source physics and have practical implications for the next generation of ground motion prediction assessments.
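The partitioning idea can be illustrated with a stripped-down model (not the authors' full method): each log-amplitude spectrum is written as a source term plus a receiver term, d[i, j] = s[i] + r[j], and the two sets of terms are estimated by alternating averages. Path/attenuation terms and the spectral dimension are omitted for brevity, and the additive trade-off between terms is fixed by zero-meaning the receiver terms.

```python
import numpy as np

# Synthetic "observed" log spectra for 30 sources at 10 receivers.
rng = np.random.default_rng(1)
n_src, n_rec = 30, 10
s_true = rng.normal(size=n_src)
r_true = rng.normal(size=n_rec)
d = s_true[:, None] + r_true[None, :] + 0.01 * rng.normal(size=(n_src, n_rec))

# Iteratively partition d into source and receiver terms.
s = np.zeros(n_src)
r = np.zeros(n_rec)
for _ in range(50):
    r = (d - s[:, None]).mean(axis=0)
    r -= r.mean()                 # remove the additive degeneracy
    s = (d - r[None, :]).mean(axis=1)
```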

  1. Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.

    PubMed

    Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V

    2016-10-01

    An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements. Copyright © 2016 Elsevier Ltd. All rights reserved.
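The inverse-variance weighting mentioned in the abstract is a standard statistical combination rule: repeated measurements x_i with uncertainties sigma_i are averaged with weights 1/sigma_i^2, which minimises the variance of the combined mean. A minimal sketch (the numbers are made up):

```python
# Inverse-variance weighted mean of measurements with unequal errors.
def ivw_mean(x, sigma):
    w = [1.0 / s ** 2 for s in sigma]
    mean = sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
    sigma_mean = (1.0 / sum(w)) ** 0.5   # uncertainty of the combined mean
    return mean, sigma_mean

# The more precise measurement (sigma = 1) dominates the combination.
m, s = ivw_mean([10.0, 12.0], [1.0, 2.0])
```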

  2. Spectral decomposition of nonlinear systems with memory

    NASA Astrophysics Data System (ADS)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
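The Mittag-Leffler function that governs the average mode evolution can be evaluated directly from its power series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1) for moderate arguments; a minimal sketch is below (dedicated algorithms are needed for large arguments, where the series is numerically unreliable). For alpha = 1 it reduces to exp(z), and E_2(-x^2) = cos(x), which serve as sanity checks.

```python
import math

def mittag_leffler(alpha, z, tol=1e-16, max_terms=200):
    """One-parameter Mittag-Leffler function by its power series.

    Stops when terms are negligible or the Gamma factor overflows
    (remaining terms are then effectively zero).
    """
    total, k = 0.0, 0
    while k < max_terms:
        try:
            term = z ** k / math.gamma(alpha * k + 1)
        except OverflowError:
            break
        total += term
        if abs(term) < tol * max(1.0, abs(total)):
            break
        k += 1
    return total
```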

  3. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes world populations to the threat of fatal diseases (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is used to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method that uses a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one is able to produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The performance of the spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition-Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
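The idea behind the time-dependent correlation can be sketched without the full EMD machinery: a Pearson correlation is computed between two (already mode-decomposed) signals inside a sliding window, and repeated for several window sizes. This simplification, with synthetic signals whose correlation flips sign halfway through, shows what a single window size produces.

```python
import numpy as np

def sliding_correlation(x, y, window):
    """Pearson correlation of x and y inside a centred sliding window."""
    n = len(x)
    half = window // 2
    r = np.full(n, np.nan)            # undefined near the edges
    for i in range(half, n - half):
        xs = x[i - half:i + half + 1]
        ys = y[i - half:i + half + 1]
        r[i] = np.corrcoef(xs, ys)[0, 1]
    return r

t = np.linspace(0, 10, 1000)
a = np.sin(2 * np.pi * t)
b = np.where(t < 5, a, -a)            # correlation flips sign at t = 5
r = sliding_correlation(a, b, window=101)
```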

  4. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition.

    PubMed

    Park, Chulhee; Kang, Moon Gi

    2016-05-18

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors.
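An illustrative sketch of the decomposition idea (not the paper's estimator): if the NIR leakage into each colour channel is proportional to the N channel, the visible part is recovered by subtracting a per-channel fraction of N. The coefficients k below are hypothetical; in practice they would come from the sensor's measured spectral sensitivities.

```python
import numpy as np

# Hypothetical NIR contribution fractions for the R, G and B channels.
k = np.array([0.6, 0.4, 0.3])

def restore_rgb(raw_rgb, nir):
    """raw_rgb: (..., 3) array; nir: (...) array of N-channel values."""
    vis = raw_rgb - nir[..., None] * k   # subtract the NIR leakage
    return np.clip(vis, 0.0, None)       # keep intensities non-negative

true_rgb = np.array([[0.5, 0.2, 0.1]])
nir = np.array([0.3])
raw = true_rgb + nir[:, None] * k        # simulated contaminated pixel
restored = restore_rgb(raw, nir)
```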

  5. Color Restoration of RGBN Multispectral Filter Array Sensor Images Based on Spectral Decomposition

    PubMed Central

    Park, Chulhee; Kang, Moon Gi

    2016-01-01

    A multispectral filter array (MSFA) image sensor with red, green, blue and near-infrared (NIR) filters is useful for various imaging applications, with the advantage that it obtains color information and NIR information simultaneously. Because the MSFA image sensor needs to acquire invisible band information, it is necessary to remove the IR cut-off filter (IRCF). However, without the IRCF, the color of the image is desaturated by the interference of the additional NIR component in each RGB color channel. To overcome color degradation, a signal processing approach is required to restore natural color by removing the unwanted NIR contribution to the RGB color channels while the additional NIR information remains in the N channel. Thus, in this paper, we propose a color restoration method for an imaging system based on the MSFA image sensor with RGBN filters. To remove the unnecessary NIR component in each RGB color channel, spectral estimation and spectral decomposition are performed based on the spectral characteristics of the MSFA sensor. The proposed color restoration method estimates the spectral intensity in the NIR band and recovers hue and color saturation by decomposing the visible band component and the NIR band component in each RGB color channel. The experimental results show that the proposed method effectively restores natural color and minimizes angular errors. PMID:27213381

  6. Synthetic Hounsfield units from spectral CT data

    NASA Astrophysics Data System (ADS)

    Bornefalk, Hans

    2012-04-01

    Beam-hardening-free synthetic images with absolute CT numbers that radiologists are used to can be constructed from spectral CT data by forming 'dichromatic' images after basis decomposition. The CT numbers are accurate for all tissues and the method does not require additional reconstruction. This method prevents radiologists from having to relearn new rules-of-thumb regarding absolute CT numbers for various organs and conditions as conventional CT is replaced by spectral CT. Displaying the synthetic Hounsfield unit images side-by-side with images reconstructed for optimal detectability for a certain task can ease the transition from conventional to spectral CT.

  7. Algorithms for Spectral Decomposition with Applications to Optical Plume Anomaly Detection

    NASA Technical Reports Server (NTRS)

    Srivastava, Ashok N.; Matthews, Bryan; Das, Santanu

    2008-01-01

    The analysis of spectral signals for features that represent physical phenomena is ubiquitous in the science and engineering communities. There are two main approaches that can be taken to extract relevant features from these high-dimensional data streams. The first set of approaches relies on extracting features using a physics-based paradigm, where the underlying physical mechanism that generates the spectra is used to infer the most important features in the data stream. We focus on a complementary methodology that uses a data-driven technique that is informed by the underlying physics but also has the ability to adapt to unmodeled system attributes and dynamics. We discuss the following four algorithms: the Spectral Decomposition Algorithm (SDA), Non-Negative Matrix Factorization (NMF), Independent Component Analysis (ICA) and Principal Components Analysis (PCA), and compare their performance on a spectral emulator which we use to generate artificial data with known statistical properties. This spectral emulator mimics the real-world phenomena arising from the plume of the space shuttle main engine; it can be used to validate the results of various spectral decomposition algorithms and is particularly useful for situations where real-world systems have very low probabilities of fault or failure. Our results indicate that methods like SDA and NMF provide a straightforward way of incorporating prior physical knowledge, while NMF with a tuning mechanism can give superior performance on some tests. We demonstrate these algorithms by detecting potential system-health issues in data from a spectral emulator with tunable health parameters.
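Of the four algorithms, NMF is the easiest to sketch from scratch: bare-bones Lee-Seung multiplicative updates factor a matrix of non-negative spectra into non-negative basis spectra and weights. The data below are synthetic rank-2 "emission lines", not emulator output.

```python
import numpy as np

def nmf(V, rank, n_iter=2000, seed=0, eps=1e-9):
    """Non-negative matrix factorization V ~ W @ H by multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + eps
    H = rng.random((rank, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update keeps H >= 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update keeps W >= 0
    return W, H

# Two non-negative "line" spectra mixed with non-negative weights.
rng = np.random.default_rng(1)
basis = np.array([[1.0, 0.0, 0.5, 0.0],
                  [0.0, 1.0, 0.0, 0.5]])
weights = rng.random((50, 2))
V = weights @ basis
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```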

  8. Reduced quantum dynamics with arbitrary bath spectral densities: hierarchical equations of motion based on several different bath decomposition schemes.

    PubMed

    Liu, Hao; Zhu, Lili; Bai, Shuming; Shi, Qiang

    2014-04-07

    We investigated applications of the hierarchical equation of motion (HEOM) method to perform high order perturbation calculations of reduced quantum dynamics for a harmonic bath with arbitrary spectral densities. Three different schemes are used to decompose the bath spectral density into analytical forms that are suitable to the HEOM treatment: (1) The multiple Lorentzian mode model that can be obtained by numerically fitting the model spectral density. (2) The combined Debye and oscillatory Debye modes model that can be constructed by fitting the corresponding classical bath correlation function. (3) A new method that uses undamped harmonic oscillator modes explicitly in the HEOM formalism. Methods to extract system-bath correlations were investigated for the above bath decomposition schemes. We also show that HEOM in the undamped harmonic oscillator modes can give detailed information on the partial Wigner transform of the total density operator. Theoretical analysis and numerical simulations of the spin-Boson dynamics and the absorption line shape of molecular dimers show that the HEOM formalism for high order perturbations can serve as an important tool in studying the quantum dissipative dynamics in the intermediate coupling regime.
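Scheme (1), fitting the model spectral density with Lorentzian (Drude-Debye) terms, can be sketched with a standard least-squares fit. Here a single Debye term J(w) = 2*lam*gamma*w / (w^2 + gamma^2) is fitted to noise-free synthetic data; real baths need several terms and care with units, and the parameter values are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def debye(w, lam, gamma):
    """Debye (overdamped Drude-Lorentz) spectral density."""
    return 2.0 * lam * gamma * w / (w ** 2 + gamma ** 2)

w = np.linspace(0.01, 10.0, 200)
J_true = debye(w, 0.5, 2.0)              # "model" spectral density

# Fit recovers the reorganization energy and cutoff of the Debye term.
popt, _ = curve_fit(debye, w, J_true, p0=[1.0, 1.0])
lam_fit, gamma_fit = popt
```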

  9. Reduced quantum dynamics with arbitrary bath spectral densities: Hierarchical equations of motion based on several different bath decomposition schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liu, Hao; Zhu, Lili; Bai, Shuming

    2014-04-07

    We investigated applications of the hierarchical equation of motion (HEOM) method to perform high order perturbation calculations of reduced quantum dynamics for a harmonic bath with arbitrary spectral densities. Three different schemes are used to decompose the bath spectral density into analytical forms that are suitable to the HEOM treatment: (1) The multiple Lorentzian mode model that can be obtained by numerically fitting the model spectral density. (2) The combined Debye and oscillatory Debye modes model that can be constructed by fitting the corresponding classical bath correlation function. (3) A new method that uses undamped harmonic oscillator modes explicitly in the HEOM formalism. Methods to extract system-bath correlations were investigated for the above bath decomposition schemes. We also show that HEOM in the undamped harmonic oscillator modes can give detailed information on the partial Wigner transform of the total density operator. Theoretical analysis and numerical simulations of the spin-Boson dynamics and the absorption line shape of molecular dimers show that the HEOM formalism for high order perturbations can serve as an important tool in studying the quantum dissipative dynamics in the intermediate coupling regime.

  10. Spectral CT of the extremities with a silicon strip photon counting detector

    NASA Astrophysics Data System (ADS)

    Sisniega, A.; Zbijewski, W.; Stayman, J. W.; Xu, J.; Taguchi, K.; Siewerdsen, J. H.

    2015-03-01

    Purpose: Photon counting x-ray detectors (PCXDs) are an important emerging technology for spectral imaging and material differentiation with numerous potential applications in diagnostic imaging. We report development of a Si-strip PCXD system originally developed for mammography with potential application to spectral CT of musculoskeletal extremities, including challenges associated with sparse sampling, spectral calibration, and optimization for higher energy x-ray beams. Methods: A bench-top CT system was developed incorporating a Si-strip PCXD, fixed anode x-ray source, and rotational and translational motions to execute complex acquisition trajectories. Trajectories involving rotation and translation combined with iterative reconstruction were investigated, including single and multiple axial scans and longitudinal helical scans. The system was calibrated to provide accurate spectral separation in dual-energy three-material decomposition of soft-tissue, bone, and iodine. Image quality and decomposition accuracy were assessed in experiments using a phantom with pairs of bone and iodine inserts (3, 5, 15 and 20 mm) and an anthropomorphic wrist. Results: The designed trajectories improved the sampling distribution from 56% minimum sampling of voxels to 75%. Use of iterative reconstruction (viz., penalized likelihood with edge preserving regularization) in combination with such trajectories resulted in a very low level of artifacts in images of the wrist. For large bone or iodine inserts (>5 mm diameter), the error in the estimated material concentration was <16% for (50 mg/mL) bone and <8% for (5 mg/mL) iodine with strong regularization. For smaller inserts, errors of 20-40% were observed and motivate improved methods for spectral calibration and optimization of the edge-preserving regularizer. 
Conclusion: Use of PCXDs for three-material decomposition in joint imaging proved feasible through a combination of rotation-translation acquisition trajectories and iterative reconstruction with optimized regularization.
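The three-material idea behind the dual-energy decomposition can be sketched as a small linear system: two measured attenuation values per voxel (one per energy bin) plus a volume-conservation constraint give three equations for three volume fractions. The attenuation values below are invented for the demonstration; a real system calibrates them per energy bin.

```python
import numpy as np

# Hypothetical linear attenuation [1/cm] of soft tissue, bone and iodine
# at the low- and high-energy bins.
#                  soft   bone  iodine
mu_low = np.array([0.22, 0.55, 1.10])
mu_high = np.array([0.18, 0.35, 0.50])

# Rows: low-energy measurement, high-energy measurement, volume constraint.
A = np.vstack([mu_low, mu_high, np.ones(3)])

def decompose(m_low, m_high):
    """Volume fractions (soft, bone, iodine) from two attenuation values."""
    return np.linalg.solve(A, np.array([m_low, m_high, 1.0]))

f_true = np.array([0.7, 0.2, 0.1])
m_lo, m_hi = mu_low @ f_true, mu_high @ f_true
f = decompose(m_lo, m_hi)
```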

  11. A spectral analysis of the domain decomposed Monte Carlo method for linear systems

    DOE PAGES

    Slattery, Stuart R.; Evans, Thomas M.; Wilson, Paul P. H.

    2015-09-08

    The domain decomposed behavior of the adjoint Neumann-Ulam Monte Carlo method for solving linear systems is analyzed using the spectral properties of the linear operator. Relationships for the average length of the adjoint random walks, a measure of convergence speed and serial performance, are made with respect to the eigenvalues of the linear operator. In addition, relationships for the effective optical thickness of a domain in the decomposition are presented based on the spectral analysis and diffusion theory. Using the effective optical thickness, the Wigner rational approximation and the mean chord approximation are applied to estimate the leakage fraction of random walks from a domain in the decomposition as a measure of parallel performance and potential communication costs. The one-speed, two-dimensional neutron diffusion equation is used as a model problem in numerical experiments to test the models for symmetric operators with spectral qualities similar to light water reactor problems. We find, in general, the derived approximations show good agreement with random walk lengths and leakage fractions computed by the numerical experiments.
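A toy version of the random-walk family being analysed, here the forward (rather than adjoint) Neumann-Ulam method for x = Hx + b with spectral radius of H below 1: walks accumulate weighted values of b and terminate with a fixed kill probability. Uniform transition probabilities are used for simplicity; production codes weight transitions by |H| and, as in the paper, distribute the walks across domains.

```python
import numpy as np

def neumann_ulam(H, b, n_walks=20000, p_kill=0.2, seed=0):
    """Monte Carlo estimate of the solution of x = H x + b."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    x = np.zeros(n)
    p_move = (1.0 - p_kill) / n          # probability of each specific move
    for i in range(n):
        acc = 0.0
        for _ in range(n_walks):
            state, weight = i, 1.0
            while True:
                acc += weight * b[state]         # tally the Neumann series
                if rng.random() < p_kill:
                    break
                nxt = int(rng.integers(n))
                weight *= H[state, nxt] / p_move  # importance correction
                state = nxt
        x[i] = acc / n_walks
    return x

H = np.array([[0.1, 0.2], [0.05, 0.1]])
b = np.array([1.0, 2.0])
x_mc = neumann_ulam(H, b)
x_exact = np.linalg.solve(np.eye(2) - H, b)
```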

  12. Spectral-decomposition techniques for the identification of periodic and anomalous phenomena in radon time-series.

    NASA Astrophysics Data System (ADS)

    Crockett, R. G. M.; Perrier, F.; Richon, P.

    2009-04-01

    Building on independent investigations by research groups at both IPGP, France, and the University of Northampton, UK, hourly-sampled radon time-series of durations exceeding one year have been investigated for periodic and anomalous phenomena using a variety of established and novel techniques. These time-series have been recorded in locations having no routine human behaviour and thus are effectively free of significant anthropogenic influences. With regard to periodic components, the long durations of these time-series allow, in principle, very high frequency resolutions for established spectral-measurement techniques such as Fourier and maximum-entropy. However, as has been widely observed, the stochastic nature of radon emissions from rocks and soils, coupled with sensitivity to a wide variety of influences such as temperature, wind-speed and soil moisture-content, has made interpretation of the results obtained by such techniques very difficult, with uncertain results in many cases. We here report developments in the investigation of radon time-series for periodic and anomalous phenomena using spectral-decomposition techniques. These techniques, in variously separating 'high', 'middle' and 'low' frequency components, effectively 'de-noise' the data by allowing components of interest to be isolated from others which might serve to obscure weaker information-containing components. Once isolated, these components can be investigated using a variety of techniques. Whilst this is very much work in the early stages of development, spectral-decomposition methods have been used successfully to indicate the presence of diurnal and sub-diurnal cycles in radon concentration which we provisionally attribute to tidal influences. Also, these methods have been used to enhance the identification of short-duration anomalies, attributable to a variety of causes including, for example, earthquakes and rapid large-magnitude changes in weather conditions. 
Keywords: radon; earthquakes; tidal-influences; anomalies; time series; spectral-decomposition.
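The band-separation step can be sketched with hard FFT masks: an hourly series is split into low-, mid- and high-frequency components so that, for example, diurnal and semi-diurnal cycles can be examined in isolation from slow seasonal drift and fast noise. The cut frequencies below are illustrative choices, not those used in the study.

```python
import numpy as np

def band_split(x, dt_hours, low_cut=1 / 48.0, high_cut=1 / 6.0):
    """Return (low, mid, high) components; cuts are in cycles/hour."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), d=dt_hours)
    bands = []
    for mask in (f < low_cut,
                 (f >= low_cut) & (f <= high_cut),
                 f > high_cut):
        Xb = np.where(mask, X, 0.0)              # zero out-of-band bins
        bands.append(np.fft.irfft(Xb, n=len(x)))
    return bands

t = np.arange(24 * 60)                           # 60 days of hourly samples
x = 5.0 + np.sin(2 * np.pi * t / 24.0)           # mean level + diurnal cycle
low, mid, high = band_split(x, dt_hours=1.0)
```

Because the masks partition the spectrum, the three components sum back to the original series; here the diurnal cycle lands entirely in the mid band.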

  13. Seismic spectral decomposition and analysis based on Wigner-Ville distribution for sandstone reservoir characterization in West Sichuan depression

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyang; Liu, Tianyou

    2010-06-01

    Reflections from a hydrocarbon-saturated zone are generally expected to have a tendency to be low frequency. Previous work has shown the application of seismic spectral decomposition for low-frequency shadow detection. In this paper, we further analyse the characteristics of spectral amplitude in fractured sandstone reservoirs with different fluid saturations using the Wigner-Ville distribution (WVD)-based method. We give a description of the geometric structure of cross-terms due to the bilinear nature of WVD and eliminate cross-terms using smoothed pseudo-WVD (SPWVD) with time- and frequency-independent Gaussian kernels as smoothing windows. SPWVD is finally applied to seismic data from West Sichuan depression. We focus our study on the comparison of SPWVD spectral amplitudes resulting from different fluid contents. It shows that prolific gas reservoirs feature higher peak spectral amplitude at higher peak frequency, which attenuate faster than low-quality gas reservoirs and dry or wet reservoirs. This can be regarded as a spectral attenuation signature for future exploration in the study area.
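A compact pseudo-WVD sketch (lag smoothing only, one-sided lags for brevity, so this is a simplification of the SPWVD used in the paper): for each time sample, Fourier-transform the windowed instantaneous autocorrelation x[n+m] x*[n-m] of the analytic signal. With lag step m, FFT bin k corresponds to frequency f_k = k * fs / (2 * n_lags).

```python
import numpy as np
from scipy.signal import hilbert

def pseudo_wvd(x, n_lags=64):
    """Crude pseudo-Wigner-Ville distribution with a Hann lag window."""
    xa = hilbert(x)                          # analytic signal
    n = len(xa)
    h = np.hanning(2 * n_lags + 1)           # lag (frequency-smoothing) window
    W = np.zeros((n, n_lags))
    for t in range(n):
        r = np.zeros(n_lags, dtype=complex)
        for m in range(n_lags):
            if t + m < n and t - m >= 0:
                r[m] = h[n_lags + m] * xa[t + m] * np.conj(xa[t - m])
        W[t] = np.real(np.fft.fft(r, n_lags))
    return W

fs = 256.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 40.0 * t)             # single 40 Hz tone
W = pseudo_wvd(x)
k_peak = np.argmax(W[128])                   # ridge at mid-record
f_peak = k_peak * fs / (2 * 64)
```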

  14. Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model.

    PubMed

    Ehn, S; Sellerer, T; Mechlem, K; Fehringer, A; Epple, M; Herzen, J; Pfeiffer, F; Noël, P B

    2017-01-07

    Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects, which can be termed a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.
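The estimator can be sketched as follows: expected counts in energy bin b follow a polychromatic Beer-Lambert model, lam_b(A) = sum_E S[b, E] * exp(-mu1[E]*A1 - mu2[E]*A2), and the basis line integrals (A1, A2) are found by maximising the Poisson likelihood. The spectra and attenuation values below are invented for the demonstration, and a generic optimiser stands in for whatever the authors use.

```python
import numpy as np
from scipy.optimize import minimize

E = np.array([40.0, 60.0, 80.0, 100.0])         # keV grid (hypothetical)
S = np.array([[4e4, 2e4, 0.0, 0.0],             # low-energy bin response
              [0.0, 0.0, 2e4, 3e4]])            # high-energy bin response
mu1 = np.array([0.27, 0.21, 0.18, 0.17])        # basis 1 (water-like) [1/cm]
mu2 = np.array([3.00, 1.00, 0.50, 0.30])        # basis 2 (bone-like) [1/cm]

def expected_counts(A):
    return S @ np.exp(-mu1 * A[0] - mu2 * A[1])

def neg_log_likelihood(A, y):
    lam = expected_counts(A)
    return np.sum(lam - y * np.log(lam))        # Poisson NLL, constant dropped

A_true = np.array([2.0, 0.5])                   # basis thicknesses [cm]
y = expected_counts(A_true)                     # noise-free "measurement"
res = minimize(neg_log_likelihood, x0=np.array([1.0, 1.0]), args=(y,),
               method="Nelder-Mead")
A_hat = res.x
```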

  15. Basis material decomposition in spectral CT using a semi-empirical, polychromatic adaption of the Beer-Lambert model

    NASA Astrophysics Data System (ADS)

    Ehn, S.; Sellerer, T.; Mechlem, K.; Fehringer, A.; Epple, M.; Herzen, J.; Pfeiffer, F.; Noël, P. B.

    2017-01-01

    Following the development of energy-sensitive photon-counting detectors using high-Z sensor materials, application of spectral x-ray imaging methods to clinical practice comes into reach. However, these detectors require extensive calibration efforts in order to perform spectral imaging tasks like basis material decomposition. In this paper, we report a novel approach to basis material decomposition that utilizes a semi-empirical estimator for the number of photons registered in distinct energy bins in the presence of beam-hardening effects, which can be termed a polychromatic Beer-Lambert model. A maximum-likelihood estimator is applied to the model in order to obtain estimates of the underlying sample composition. Using a Monte-Carlo simulation of a typical clinical CT acquisition, the performance of the proposed estimator was evaluated. The estimator is shown to be unbiased and efficient according to the Cramér-Rao lower bound. In particular, the estimator is capable of operating with a minimum number of calibration measurements. Good results were obtained after calibration using less than 10 samples of known composition in a two-material attenuation basis. This opens up the possibility for fast re-calibration in the clinical routine which is considered an advantage of the proposed method over other implementations reported in the literature.

  16. Simulation of multivariate stationary stochastic processes using dimension-reduction representation methods

    NASA Astrophysics Data System (ADS)

    Liu, Zhangjun; Liu, Zenghui; Peng, Yongbo

    2018-03-01

    In view of the Fourier-Stieltjes integral formula for multivariate stationary stochastic processes, a unified formulation accommodating the spectral representation method (SRM) and proper orthogonal decomposition (POD) is deduced. By introducing random functions as constraints correlating the orthogonal random variables involved in the unified formulation, the dimension-reduction spectral representation method (DR-SRM) and the dimension-reduction proper orthogonal decomposition (DR-POD) are addressed. The proposed schemes are capable of representing the multivariate stationary stochastic process with a few elementary random variables, bypassing the challenges of high-dimensional random variables inherent in conventional Monte Carlo methods. In order to accelerate the numerical simulation, the technique of the Fast Fourier Transform (FFT) is integrated with the proposed schemes. For illustrative purposes, the simulation of the horizontal wind velocity field along the deck of a large-span bridge is carried out using the proposed methods with 2 and 3 elementary random variables. Numerical simulation reveals the usefulness of the dimension-reduction representation methods.
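The conventional scalar SRM that the dimension-reduction schemes build on can be sketched directly: with a target one-sided power spectral density S(w), a sample is X(t) = sum_k sqrt(2*S(w_k)*dw) * cos(w_k*t + phi_k) with i.i.d. uniform phases. The PSD and discretisation below are illustrative, not the bridge wind spectrum of the paper.

```python
import numpy as np

def srm_sample(S, w, t, rng):
    """One realization of a zero-mean stationary process with one-sided PSD S."""
    dw = w[1] - w[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, size=w.size)   # random phases
    amp = np.sqrt(2.0 * S * dw)
    return (amp[:, None] * np.cos(np.outer(w, t) + phi[:, None])).sum(axis=0)

rng = np.random.default_rng(0)
w = np.linspace(0.05, 5.0, 100)          # frequency grid [rad/s]
S = 1.0 / (1.0 + w ** 2)                 # illustrative one-sided PSD
t = np.linspace(0.0, 200.0, 4000)
x = srm_sample(S, w, t, rng)

# The ensemble variance of the representation is sum(S * dw).
var_target = S.sum() * (w[1] - w[0])
var_sample = x.var()
```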

  17. The identification of multi-cave combinations in carbonate reservoirs based on sparsity constraint inverse spectral decomposition

    NASA Astrophysics Data System (ADS)

    Li, Qian; Di, Bangrang; Wei, Jianxin; Yuan, Sanyi; Si, Wenpeng

    2016-12-01

    Sparsity constraint inverse spectral decomposition (SCISD) is a time-frequency analysis method based on the convolution model, in which minimizing the l1 norm of the time-frequency spectrum of the seismic signal is adopted as a sparsity constraint term. The SCISD method has higher time-frequency resolution and more concentrated time-frequency distribution than the conventional spectral decomposition methods, such as short-time Fourier transformation (STFT), continuous-wavelet transform (CWT) and S-transform. Due to these good features, the SCISD method has gradually been used in low-frequency anomaly detection, horizon identification and random noise reduction for sandstone and shale reservoirs. However, it has not yet been used in carbonate reservoir prediction. The carbonate fractured-vuggy reservoir is the major hydrocarbon reservoir in the Halahatang area of the Tarim Basin, north-west China. If reasonable predictions for the type of multi-cave combinations are not made, it may lead to an incorrect explanation for seismic responses of the multi-cave combinations. Furthermore, it will result in large errors in reserves estimation of the carbonate reservoir. In this paper, the energy and phase spectra of the SCISD are applied to identify the multi-cave combinations in carbonate reservoirs. The examples of physical model data and real seismic data illustrate that the SCISD method can detect the combination types and the number of caves of multi-cave combinations and can provide a favourable basis for the subsequent reservoir prediction and quantitative estimation of the cave-type carbonate reservoir volume.
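The sparsity-constrained idea can be illustrated in the time domain: the trace s is modelled as s = D a with D a dictionary of shifted wavelets, and a sparse coefficient vector a is found by minimising ||D a - s||^2 + lam * ||a||_1. The solver below is plain ISTA (a generic proximal-gradient method, not the authors' code), and the wavelet is a synthetic Ricker-like pulse.

```python
import numpy as np

def ista(D, s, lam, n_iter=500):
    """Iterative shrinkage-thresholding for the l1-regularised problem."""
    L = np.linalg.norm(D, 2) ** 2            # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ a - s)                # gradient of the data term
        z = a - g / L
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Dictionary of shifted wavelets (synthetic, for illustration).
n = 128
tt = np.arange(-16, 17)
wavelet = (1 - (tt / 4.0) ** 2) * np.exp(-0.5 * (tt / 4.0) ** 2)
D = np.zeros((n, n))
for j in range(n):
    for i, m in enumerate(tt):
        if 0 <= j + m < n:
            D[j + m, j] = wavelet[i]

a_true = np.zeros(n)
a_true[[30, 70]] = [1.0, -0.8]               # two "reflectors"
s = D @ a_true
a_hat = ista(D, s, lam=0.01)
```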

  18. Estimation of slip distribution using an inverse method based on spectral decomposition of Green's function utilizing Global Positioning System (GPS) data

    NASA Astrophysics Data System (ADS)

    Jin, Honglin; Kato, Teruyuki; Hori, Muneo

    2007-07-01

    An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters which are the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for three years covering 1997-1999 to estimate interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
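A generic sketch of the spectral-decomposition inverse: surface data d relate to fault slip m through a Green's function matrix G, and a truncated SVD of G keeps only the k best-resolved modes, which plays the role of choosing the number of deformation function modes. The matrix here is random, standing in for a real Green's function.

```python
import numpy as np

def tsvd_inverse(G, d, k):
    """Least-squares slip estimate using the first k singular modes of G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])

rng = np.random.default_rng(0)
G = rng.normal(size=(40, 20))       # 40 observations, 20 slip patches
m_true = rng.normal(size=20)
d = G @ m_true                      # noise-free synthetic data

# With all modes retained and consistent data, the recovery is exact;
# with noisy data, truncating k regularises the inversion.
m_hat = tsvd_inverse(G, d, k=20)
```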

  19. Efficient material decomposition method for dual-energy X-ray cargo inspection system

    NASA Astrophysics Data System (ADS)

    Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong

    2018-03-01

    Dual-energy X-ray inspection systems are widely used today because they provide both the X-ray attenuation contrast of the imaged object and its material information. Material decomposition capability allows higher detection sensitivity for potential targets, for example purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis material thickness data sets, and the accuracy of this transformation relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated by use of a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
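A calibration-table approach of this general kind can be sketched as follows: from measurements of known thickness pairs of two reference materials, fit a low-order polynomial that maps the two log-transmissions (L1, L2) back to each basis thickness. The attenuation coefficients and the monochromatic forward model below are invented for the demonstration (a real beam adds beam hardening, which is why the polynomial has cross and quadratic terms); a multi-spot scheme would repeat this fit per detector region.

```python
import numpy as np

# Hypothetical attenuation of two basis materials at (low, high) energy [1/cm].
mu = np.array([[0.20, 0.12],
               [0.50, 0.25]])

def log_transmissions(t1, t2):
    """Toy monochromatic forward model for thicknesses t1, t2 [cm]."""
    return mu[0, 0] * t1 + mu[1, 0] * t2, mu[0, 1] * t1 + mu[1, 1] * t2

# Calibration grid of known thickness pairs.
T1, T2 = np.meshgrid(np.linspace(0, 10, 6), np.linspace(0, 10, 6))
t1, t2 = T1.ravel(), T2.ravel()
L1, L2 = log_transmissions(t1, t2)

# Quadratic design matrix; separate least-squares fit per basis material.
X = np.column_stack([np.ones_like(L1), L1, L2, L1 * L2, L1 ** 2, L2 ** 2])
c1, *_ = np.linalg.lstsq(X, t1, rcond=None)
c2, *_ = np.linalg.lstsq(X, t2, rcond=None)

# Decompose a new measurement through the fitted mapping.
L1n, L2n = log_transmissions(3.0, 4.0)
xn = np.array([1.0, L1n, L2n, L1n * L2n, L1n ** 2, L2n ** 2])
t1_hat, t2_hat = xn @ c1, xn @ c2
```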

  20. Spectral simplicity of apparent complexity. II. Exact complexities and complexity spectra

    NASA Astrophysics Data System (ADS)

    Riechers, Paul M.; Crutchfield, James P.

    2018-03-01

    The meromorphic functional calculus developed in Part I overcomes the nondiagonalizability of linear operators that arises often in the temporal evolution of complex systems and is generic to the metadynamics of predicting their behavior. Using the resulting spectral decomposition, we derive closed-form expressions for correlation functions, finite-length Shannon entropy-rate approximates, asymptotic entropy rate, excess entropy, transient information, transient and asymptotic state uncertainties, and synchronization information of stochastic processes generated by finite-state hidden Markov models. This introduces analytical tractability to investigating information processing in discrete-event stochastic processes, symbolic dynamics, and chaotic dynamical systems. Comparisons reveal mathematical similarities between complexity measures originally thought to capture distinct informational and computational properties. We also introduce a new kind of spectral analysis via coronal spectrograms and the frequency-dependent spectra of past-future mutual information. We analyze a number of examples to illustrate the methods, emphasizing processes with multivariate dependencies beyond pairwise correlation. This includes spectral decomposition calculations for one representative example in full detail.

  1. Efficient solution of the Wigner-Liouville equation using a spectral decomposition of the force field

    NASA Astrophysics Data System (ADS)

    Van de Put, Maarten L.; Sorée, Bart; Magnus, Wim

    2017-12-01

    The Wigner-Liouville equation is reformulated using a spectral decomposition of the classical force field instead of the potential energy. The latter is shown to simplify the Wigner-Liouville kernel both conceptually and numerically, as the spectral force Wigner-Liouville equation avoids the numerical evaluation of the highly oscillatory Wigner kernel, which is nonlocal in both position and momentum. The quantum mechanical evolution is instead governed by a term local in space and nonlocal in momentum, where the nonlocality in momentum has only a limited range. An interpretation of the time evolution in terms of two processes is presented: a classical evolution under the influence of the averaged driving field, and a probability-preserving quantum-mechanical generation and annihilation term. Exploiting the inherent stability and reduced complexity, a direct deterministic numerical implementation using Chebyshev and Fourier pseudo-spectral methods is detailed. For the purpose of illustration, we present results for the time evolution of a one-dimensional resonant tunneling diode driven out of equilibrium.

  2. Proper orthogonal decomposition-based spectral higher-order stochastic estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baars, Woutijn J., E-mail: wbaars@unimelb.edu.au; Tinney, Charles E.

    A unique routine, capable of identifying both linear and higher-order coherence in multiple-input/output systems, is presented. The technique combines two well-established methods: Proper Orthogonal Decomposition (POD) and Higher-Order Spectra Analysis. The latter of these is based on known methods for characterizing nonlinear systems by way of Volterra series. In that, both linear and higher-order kernels are formed to quantify the spectral (nonlinear) transfer of energy between the system's input and output. This reduces essentially to spectral Linear Stochastic Estimation when only first-order terms are considered, and is therefore presented in the context of stochastic estimation as spectral Higher-Order Stochastic Estimation (HOSE). The trade-off to seeking higher-order transfer kernels is that the increased complexity restricts the analysis to single-input/output systems. Low-dimensional (POD-based) analysis techniques are inserted to alleviate this void as POD coefficients represent the dynamics of the spatial structures (modes) of a multi-degree-of-freedom system. The mathematical framework behind this POD-based HOSE method is first described. The method is then tested in the context of jet aeroacoustics by modeling acoustically efficient large-scale instabilities as combinations of wave packets. The growth, saturation, and decay of these spatially convecting wave packets are shown to couple both linearly and nonlinearly in the near-field to produce waveforms that propagate acoustically to the far-field for different frequency combinations.
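    For readers unfamiliar with the POD step used above, a minimal snapshot-POD sketch via the SVD is shown below. This is illustrative only, not the authors' HOSE implementation; the synthetic two-mode field is an assumption of the example.

```python
import numpy as np

def pod(snapshots):
    """Snapshot POD: columns of `snapshots` are flow-field snapshots.
    Returns spatial modes, the energy fraction per mode, and time coefficients."""
    X = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove mean field
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U, s**2 / np.sum(s**2), np.diag(s) @ Vt

# Synthetic two-mode field plus weak noise (for illustration only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 64)
t = np.linspace(0.0, 2.0 * np.pi, 200)
field = (np.outer(np.sin(2 * np.pi * x), np.cos(5 * t))
         + 0.3 * np.outer(np.sin(4 * np.pi * x), np.sin(9 * t))
         + 0.01 * rng.standard_normal((64, 200)))
modes, energy, coeffs = pod(field)
print(energy[:2].sum() > 0.95)  # two modes capture nearly all fluctuation energy
```

    The POD time coefficients returned here are the low-dimensional inputs that a HOSE-style transfer-kernel estimate would then operate on.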

  3. Characterizing CDOM Spectral Variability Across Diverse Regions and Spectral Ranges

    NASA Astrophysics Data System (ADS)

    Grunert, Brice K.; Mouw, Colleen B.; Ciochetto, Audrey B.

    2018-01-01

    Satellite remote sensing of colored dissolved organic matter (CDOM) has focused on CDOM absorption (aCDOM) at a reference wavelength, as its magnitude provides insight into the underwater light field and large-scale biogeochemical processes. CDOM spectral slope, SCDOM, has been treated as a constant or semiconstant parameter in satellite retrievals of aCDOM despite significant regional and temporal variability. SCDOM and other optical metrics provide insights into CDOM composition, processing, food web dynamics, and carbon cycling. To date, much of this work relies on fluorescence techniques or aCDOM in spectral ranges unavailable to current and planned satellite sensors (e.g., <300 nm). In preparation for anticipated future hyperspectral satellite missions, we take the first step here of exploring global variability in SCDOM and fit deviations in the aCDOM spectra using the recently proposed Gaussian decomposition method. From this, we investigate if global variability in retrieved SCDOM and Gaussian components is significant and regionally distinct. We iteratively decreased the spectral range considered and analyzed the number, location, and magnitude of fitted Gaussian components to understand if a reduced spectral range impacts information obtained within a common spectral window. We compared the fitted slope from the Gaussian decomposition method to absorption-based indices that indicate CDOM composition to determine the ability of satellite-derived slope to inform the analysis and modeling of large-scale biogeochemical processes. Finally, we present implications of the observed variability for remote sensing of CDOM characteristics via SCDOM.
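    The spectral slope SCDOM is conventionally obtained from the exponential model aCDOM(λ) = aCDOM(λ0)·exp(-S·(λ - λ0)). A minimal sketch of that fit by log-linear least squares is given below; the reference wavelength of 440 nm and the test values S = 0.018 nm⁻¹, a(440) = 0.5 m⁻¹ are illustrative assumptions, not values from this record.

```python
import math

def fit_cdom_slope(wavelengths, a_cdom, ref=440.0):
    """Ordinary least squares on ln a(lam) = ln a(ref) - S * (lam - ref).
    Returns (S, a(ref))."""
    xs = [w - ref for w in wavelengths]
    ys = [math.log(a) for a in a_cdom]
    n = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # = -S
    intercept = (sy - slope * sx) / n                   # = ln a(ref)
    return -slope, math.exp(intercept)

# Synthetic exponential spectrum (made-up values): S = 0.018 nm^-1, a(440) = 0.5 m^-1.
wl = list(range(350, 601, 10))
a = [0.5 * math.exp(-0.018 * (w - 440.0)) for w in wl]
S, a440 = fit_cdom_slope(wl, a)
print(round(S, 6), round(a440, 6))  # 0.018 0.5
```

    The Gaussian decomposition discussed in the record goes one step further by fitting Gaussian absorption peaks on top of this exponential baseline.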

  4. Spectral Regression Discriminant Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Wu, J.; Huang, H.; Liu, J.

    2012-08-01

    Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for Hyperspectral Image Classification. The manifold learning methods are popular for dimensionality reduction, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.

  5. A neural network-based method for spectral distortion correction in photon counting x-ray CT

    NASA Astrophysics Data System (ADS)

    Touch, Mengheng; Clark, Darin P.; Barber, William; Badea, Cristian T.

    2016-08-01

    Spectral CT using a photon counting x-ray detector (PCXD) shows great potential for measuring material composition based on energy dependent x-ray attenuation. Spectral CT is especially suited for imaging with K-edge contrast agents to address the otherwise limited contrast in soft tissues. We have developed a micro-CT system based on a PCXD. This system enables both a 4-energy-bin acquisition mode and a full-spectrum mode in which the energy thresholds of the PCXD are swept to sample the full energy spectrum for each detector element and projection angle. Measurements provided by the PCXD, however, are distorted due to undesirable physical effects in the detector and can be very noisy due to photon starvation in narrow energy bins. To address spectral distortions, we propose and demonstrate a novel artificial neural network (ANN)-based spectral distortion correction mechanism, which learns to undo the distortion in spectral CT, resulting in improved material decomposition accuracy. To address noise, post-reconstruction denoising based on bilateral filtration, which jointly enforces intensity gradient sparsity between spectral samples, is used to further improve the robustness of ANN training and material decomposition accuracy. Our ANN-based distortion correction method is calibrated using 3D-printed phantoms and a model of our spectral CT system. To enable realistic simulations and validation of our method, we first modeled the spectral distortions using experimental data acquired from 109Cd and 133Ba radioactive sources measured with our PCXD. Next, we trained an ANN to learn the relationship between the distorted spectral CT projections and the ideal, distortion-free projections in a calibration step. This required knowledge of the ground truth, distortion-free spectral CT projections, which were obtained by simulating a spectral CT scan of the digital version of a 3D-printed phantom. 
Once the training was completed, the trained ANN was used to perform distortion correction on any subsequent scans of the same system with the same parameters. We used joint bilateral filtration to perform noise reduction by jointly enforcing intensity gradient sparsity between the reconstructed images for each energy bin. Following reconstruction and denoising, the CT data was spectrally decomposed using the photoelectric effect, Compton scattering, and a K-edge material (i.e. iodine). The ANN-based distortion correction approach was tested using both simulations and experimental data acquired in phantoms and a mouse with our PCXD-based micro-CT system for 4 bins and full-spectrum acquisition modes. The iodine detectability and decomposition accuracy were assessed using the contrast-to-noise ratio and relative error in iodine concentration estimation metrics in images with and without distortion correction. In simulation, the material decomposition accuracy in the reconstructed data was vastly improved following distortion correction and denoising, with 50% and 20% reductions in material concentration measurement error in full-spectrum and 4 energy bins cases, respectively. Overall, experimental data confirms that full-spectrum mode provides superior results to 4-energy mode when the distortion corrections are applied. The material decomposition accuracy in the reconstructed data was vastly improved following distortion correction and denoising, with as much as a 41% reduction in material concentration measurement error for full-spectrum mode, while also bringing the iodine detectability to 4-6 mg ml-1. Distortion correction also improved the 4 bins mode data, but to a lesser extent. The results demonstrate the experimental feasibility and potential advantages of ANN-based distortion correction and joint bilateral filtration-based denoising for accurate K-edge imaging with a PCXD. 
Given the computational efficiency with which the ANN can be applied to projection data, the proposed scheme can be readily integrated into existing CT reconstruction pipelines.

  6. Cloud parallel processing of tandem mass spectrometry based proteomics data.

    PubMed

    Mohammed, Yassene; Mostovenko, Ekaterina; Henneman, Alex A; Marissen, Rob J; Deelder, André M; Palmblad, Magnus

    2012-10-05

    Data analysis in mass spectrometry based proteomics struggles to keep pace with the advances in instrumentation and the increasing rate of data acquisition. Analyzing this data involves multiple steps requiring diverse software, using different algorithms and data formats. Speed and performance of the mass spectral search engines are continuously improving, although not always quickly enough to meet the challenges posed by the acquired big data. Improving and parallelizing the search algorithms is one possibility; data decomposition presents another, simpler strategy for introducing parallelism. We describe a general method for parallelizing identification of tandem mass spectra using data decomposition that keeps the search engine intact and wraps the parallelization around it. We introduce two algorithms for decomposing mzXML files and recomposing resulting pepXML files. This makes the approach applicable to different search engines, including those relying on sequence databases and those searching spectral libraries. We use cloud computing to deliver the computational power and scientific workflow engines to interface and automate the different processing steps. We show how to leverage these technologies to achieve faster data analysis in proteomics and present three scientific workflows for parallel database as well as spectral library search using our data decomposition programs, X!Tandem and SpectraST.
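    The data-decomposition idea, schematically: split the spectrum list into chunks, run the unmodified engine on each chunk, and concatenate the per-chunk results. This toy sketch uses plain Python lists and a stand-in search function rather than the authors' mzXML/pepXML tools; the chunk-then-recompose logic is the point.

```python
def decompose(spectra, n_chunks):
    """Split a spectrum list into nearly equal, order-preserving chunks."""
    k, r = divmod(len(spectra), n_chunks)
    chunks, start = [], 0
    for i in range(n_chunks):
        size = k + (1 if i < r else 0)
        chunks.append(spectra[start:start + size])
        start += size
    return chunks

def recompose(chunk_results):
    """Concatenate per-chunk identification lists into one result set."""
    return [hit for chunk in chunk_results for hit in chunk]

def search_engine(chunk):
    """Stand-in for an unmodified engine such as X!Tandem or SpectraST."""
    return ["id-%d" % s for s in chunk]

spectra = list(range(10))
results = recompose(map(search_engine, decompose(spectra, 3)))
print(results == search_engine(spectra))  # True: decomposition preserves output
```

    Because each chunk is searched independently, the `map` call can be swapped for a process pool or a cloud workflow engine without touching the search engine itself, which is exactly the wrapping strategy the record describes.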

  7. SVD analysis of Aura TES spectral residuals

    NASA Technical Reports Server (NTRS)

    Beer, Reinhard; Kulawik, Susan S.; Rodgers, Clive D.; Bowman, Kevin W.

    2005-01-01

    Singular Value Decomposition (SVD) analysis is both a powerful diagnostic tool and an effective method of noise filtering. We present the results of an SVD analysis of an ensemble of spectral residuals acquired in September 2004 from a 16-orbit Aura Tropospheric Emission Spectrometer (TES) Global Survey and compare them to alternative methods such as zonal averages. In particular, the technique highlights issues such as the orbital variation of instrument response and incompletely modeled effects of surface emissivity and atmospheric composition.

  8. EXPLORING FUNCTIONAL CONNECTIVITY IN FMRI VIA CLUSTERING.

    PubMed

    Venkataraman, Archana; Van Dijk, Koene R A; Buckner, Randy L; Golland, Polina

    2009-04-01

    In this paper we investigate the use of data-driven clustering methods for functional connectivity analysis in fMRI. In particular, we consider the K-Means and Spectral Clustering algorithms as alternatives to the commonly used Seed-Based Analysis. To enable clustering of the entire brain volume, we use the Nyström Method to approximate the necessary spectral decompositions. We apply K-Means, Spectral Clustering and Seed-Based Analysis to resting-state fMRI data collected from 45 healthy young adults. Without placing any a priori constraints, both clustering methods yield partitions that are associated with brain systems previously identified via Seed-Based Analysis. Our empirical results suggest that clustering provides a valuable tool for functional connectivity analysis.

  9. A spectral X-ray CT simulation study for quantitative determination of iron

    NASA Astrophysics Data System (ADS)

    Su, Ting; Kaftandjian, Valérie; Duvauchelle, Philippe; Zhu, Yuemin

    2018-06-01

    Iron is an essential element in the human body, and disorders in iron such as iron deficiency or overload can cause serious diseases. This paper aims to explore the ability of spectral X-ray CT to quantitatively separate iron from calcium and potassium and to investigate the influence of different acquisition parameters on material decomposition performance. We simulated spectral X-ray CT imaging of a PMMA phantom filled with iron, calcium, and potassium solutions at various concentrations (15-200 mg/cc). Different acquisition parameters were considered, such as the number of energy bins (6, 10, 15, 20, 30, 60) and exposure factor per projection (0.025, 0.1, 1, 10, 100 mA s). Based on the simulation data, we investigated the performance of two regularized material decomposition approaches: a projection domain method and an image domain method. It was found that the former method discriminated iron from calcium, potassium and water in all cases and tended to benefit from a lower number of energy bins at lower exposure factors. The latter method succeeded in iron determination only when the number of energy bins equaled 60, and in this case, the contrast-to-noise ratios of the decomposed iron images are higher than those obtained using the projection domain method. The results demonstrate that both methods are able to discriminate and quantify iron from calcium, potassium and water under certain conditions. Their performances vary with the acquisition parameters of spectral CT. One can choose between the two methods to obtain better performance depending on the data available.

  10. Single-Trial Normalization for Event-Related Spectral Decomposition Reduces Sensitivity to Noisy Trials

    PubMed Central

    Grandchamp, Romain; Delorme, Arnaud

    2011-01-01

    In electroencephalography, the classical event-related potential model often proves to be a limited method to study complex brain dynamics. For this reason, spectral techniques adapted from signal processing such as event-related spectral perturbation (ERSP) – and its variant event-related synchronization and event-related desynchronization – have been used over the past 20 years. They represent average spectral changes in response to a stimulus. There is, however, no strong consensus on how these spectral methods should compare pre- and post-stimulus activity. When computing ERSP, pre-stimulus baseline removal is usually performed after averaging the spectral estimate of multiple trials. Correcting the baseline of each single trial prior to averaging spectral estimates is an alternative baseline correction method. However, we show that this method leads to positively skewed post-stimulus ERSP values. We therefore present new single-trial-based ERSP baseline correction methods that perform trial normalization or centering prior to applying classical baseline correction methods. We show that single-trial correction methods minimize the contribution of artifactual data trials with high-amplitude spectral estimates and are robust to outliers when performing statistical inference testing. We then characterize these methods in terms of their time–frequency responses and behavior compared to classical ERSP methods. PMID:21994498
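    The difference between the two baseline strategies can be seen in a toy example. The numbers below are made up (two baseline bins, two post-stimulus bins, one flat high-amplitude artifact trial) and the functions are a schematic sketch, not the authors' exact estimators.

```python
def classical_ersp(trials, n_base):
    """Average spectral power across trials, then divide by the baseline mean."""
    n = len(trials[0])
    avg = [sum(t[i] for t in trials) / len(trials) for i in range(n)]
    base = sum(avg[:n_base]) / n_base
    return [v / base for v in avg]

def single_trial_ersp(trials, n_base):
    """Normalize each trial by its own baseline mean before averaging."""
    normed = [[v / (sum(t[:n_base]) / n_base) for v in t] for t in trials]
    n = len(normed[0])
    return [sum(t[i] for t in normed) / len(normed) for i in range(n)]

# Two clean trials double their power after stimulus; one artifact trial has
# huge, flat power that should not drive the result.
trials = [[1.0, 1.0, 2.0, 2.0],
          [1.0, 1.0, 2.0, 2.0],
          [100.0, 100.0, 100.0, 100.0]]
print(classical_ersp(trials, 2)[2])     # ~1.02: the artifact masks the doubling
print(single_trial_ersp(trials, 2)[2])  # ~1.67: closer to the true factor of 2
```

    Trial normalization bounds each trial's influence on the average, which is the robustness-to-outliers property the record reports.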

  11. Galerkin-collocation domain decomposition method for arbitrary binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-05-01

    We present a new computational framework for the Galerkin-collocation method for double domains in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high-resolution calculations of initial data sets for two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of our code for the conformal factor and the ADM mass. We also display features of the conformal factor for different masses, spins and linear momenta.

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    I. W. Ginsberg

    Multiresolutional decompositions known as spectral fingerprints are often used to extract spectral features from multispectral/hyperspectral data. In this study, the authors investigate the use of wavelet-based algorithms for generating spectral fingerprints. The wavelet-based algorithms are compared to the currently used method, traditional convolution with first-derivative Gaussian filters. The comparison analysis consists of two parts: (a) the computational expense of the new method is compared with the computational costs of the current method and (b) the outputs of the wavelet-based methods are compared with those of the current method to determine any practical differences in the resulting spectral fingerprints. The results show that the wavelet-based algorithms can greatly reduce the computational expense of generating spectral fingerprints, while practically no differences exist in the resulting fingerprints. The analysis is conducted on a database of hyperspectral signatures, namely, Hyperspectral Digital Image Collection Experiment (HYDICE) signatures. The reduction in computational expense is by a factor of about 30, and the average Euclidean distance between resulting fingerprints is on the order of 0.02.
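    The baseline method mentioned above, convolution with first-derivative Gaussian filters at several scales, can be sketched directly. This is a generic illustration with assumed scales (sigma = 1, 2, 4) and a synthetic step-edge spectrum, not the study's HYDICE pipeline.

```python
import math

def dgauss_kernel(sigma):
    """First-derivative-of-Gaussian filter sampled on [-3 sigma, 3 sigma]."""
    half = int(3 * sigma)
    return [-(x / sigma**2) * math.exp(-x * x / (2.0 * sigma**2))
            for x in range(-half, half + 1)]

def convolve(signal, kernel):
    """Direct convolution, truncated at the signal boundaries."""
    half = len(kernel) // 2
    n = len(signal)
    out = []
    for i in range(n):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < n:
                acc += k * signal[idx]
        out.append(acc)
    return out

def fingerprint(spectrum, sigmas=(1, 2, 4)):
    """One edge-response row per scale: a crude multiresolution fingerprint."""
    return [convolve(spectrum, dgauss_kernel(s)) for s in sigmas]

# A step edge at band 50 produces an extremum near band 50 at every scale.
spectrum = [0.0] * 50 + [1.0] * 50
fp = fingerprint(spectrum)
peaks = [max(range(len(spectrum)), key=lambda i: abs(row[i])) for row in fp]
print(peaks)
```

    The wavelet-based algorithms in the record compute essentially this multiscale edge response, but via fast filter-bank recursions instead of direct convolution at each scale, which is where the factor-of-30 speedup comes from.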

  13. System-independent characterization of materials using dual-energy computed tomography

    DOE PAGES

    Azevedo, Stephen G.; Martz, Jr., Harry E.; Aufderheide, III, Maurice B.; ...

    2016-02-01

    In this study, we present a new decomposition approach for dual-energy computed tomography (DECT) called SIRZ that provides precise and accurate material description, independent of the scanner, over diagnostic energy ranges (30 to 200 keV). System independence is achieved by explicitly including a scanner-specific spectral description in the decomposition method, and a new X-ray-relevant feature space. The feature space consists of electron density, ρe, and a new effective atomic number, Ze, which is based on published X-ray cross sections. Reference materials are used in conjunction with the system spectral response so that additional beam-hardening correction is not necessary. The technique is tested against other methods on DECT data of known specimens scanned by diverse spectra and systems. Uncertainties in accuracy and precision are less than 3% and 2%, respectively, for the (ρe, Ze) results, compared to prior methods that are inaccurate and imprecise (over 9%).

  14. TP89 - SIRZ Decomposition Spectral Estimation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seetho, Isacc M.; Azevedo, Steve; Smith, Jerel

    2016-12-08

    The primary objective of this test plan is to provide X-ray CT measurements of known materials for the purposes of generating and testing MicroCT and EDS spectral estimates. These estimates are to be used in subsequent Ze/RhoE decomposition analyses of acquired data.

  15. Coherent mode decomposition using mixed Wigner functions of Hermite-Gaussian beams.

    PubMed

    Tanaka, Takashi

    2017-04-15

    A new method of coherent mode decomposition (CMD) is proposed that is based on a Wigner-function representation of Hermite-Gaussian beams. In contrast to the well-known method using the cross spectral density (CSD), it directly determines the mode functions and their weights without solving the eigenvalue problem. This facilitates the CMD of partially coherent light whose Wigner functions (and thus CSDs) are not separable, in which case the conventional CMD requires solving an eigenvalue problem with a large matrix and thus is numerically formidable. An example is shown regarding the CMD of synchrotron radiation, one of the most important applications of the proposed method.

  16. An efficient computational approach to model statistical correlations in photon counting x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faby, Sebastian; Maier, Joscha; Sawall, Stefan

    2016-07-15

    Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. This IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations by pursuing an approach based on convolutions. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images and the corresponding detector performance in image-based material decomposition is evaluated using a statistically optimal decomposition algorithm. Results: The results of IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, finding a good agreement. Correlations between the different reconstructed energy bin images could be observed, and turned out to be of weak nature. These correlations were found to be not relevant in image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. 
Conclusions: The IMA is computationally efficient as it required about 10² random numbers per ray incident on a detector pixel instead of an estimated 10⁸ random numbers per ray as Monte Carlo approaches would need. The spatial–spectral correlations as described by IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.

  17. Empirical mode decomposition for analyzing acoustical signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2005-01-01

    The present invention discloses a computer implemented signal analysis method through the Hilbert-Huang Transformation (HHT) for analyzing acoustical signals, which are assumed to be nonlinear and nonstationary. The Empirical Mode Decomposition (EMD) and the Hilbert Spectral Analysis (HSA) are used to obtain the HHT. Essentially, the acoustical signal will be decomposed into its Intrinsic Mode Function components (IMFs). Once the invention decomposes the acoustic signal into its constituting components, all operations such as analyzing, identifying, and removing unwanted signals can be performed on these components. Upon transforming the IMFs into the Hilbert spectrum, the acoustical signal may be compared with other acoustical signals.

  18. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559
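    The iterative filtering idea can be sketched with the simplest possible low-pass filter, a moving average: repeatedly subtracting the smoothed signal drives the result toward the oscillatory component that the filter cannot smooth away. This toy uses a uniform mask rather than the LSEK filter of the record, and the signal frequencies are chosen so the fast oscillation's period matches the filter window.

```python
import math

def moving_average(x, half):
    """Centered moving average with window 2*half + 1, truncated at the ends."""
    n = len(x)
    return [sum(x[max(0, i - half):min(n, i + half + 1)])
            / (min(n, i + half + 1) - max(0, i - half)) for i in range(n)]

def iterative_filter(x, half, n_iter=10):
    """One IFD-style extraction: repeatedly subtract the low-pass output.
    Returns (fast component, remaining trend)."""
    comp = list(x)
    for _ in range(n_iter):
        ma = moving_average(comp, half)
        comp = [c - m for c, m in zip(comp, ma)]
    trend = [xi - ci for xi, ci in zip(x, comp)]
    return comp, trend

# Fast oscillation (period 11 = the filter window) riding on a slow swell.
n = 440
x = [math.sin(2.0 * math.pi * i / 11.0)
     + 2.0 * math.sin(2.0 * math.pi * 2.0 * i / n) for i in range(n)]
fast, slow = iterative_filter(x, half=5)
err = sum((fast[i] - math.sin(2.0 * math.pi * i / 11.0)) ** 2
          for i in range(60, 380)) / 320.0
print(err < 1e-6)  # interior error is tiny; ends are distorted by truncation
```

    The LSEK filter of the record replaces the crude uniform mask with a near-perfect low pass, which is what removes the window-matching restriction and stabilizes the iteration for general signals.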

  19. Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

    DOE PAGES

    Mason, H. E.; Uribe, E. C.; Shusterman, J. A.

    2018-01-01

    Tensor-rank decomposition methods have been applied to variable contact time ²⁹Si{¹H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.

  1. Offset-sparsity decomposition for enhancement of color microscopic image of stained specimen in histopathology: further results

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Popović Hadžija, Marijana; Hadžija, Mirko; Aralica, Gorana

    2016-03-01

    Recently, we proposed a novel data-driven offset-sparsity decomposition (OSD) method to increase the colorimetric difference between tissue structures present in color microscopic images of stained specimens in histopathology. The OSD method performs an additive decomposition of vectorized spectral images into an image-adapted offset term and a sparse term, whereby the sparse term represents the enhanced image. The method was tested on images of histological slides of human liver stained with hematoxylin and eosin, anti-CD34 monoclonal antibody and Sudan III. Herein, we present further results on the increase of the colorimetric difference between tissue structures present in images of human liver specimens with pancreatic carcinoma metastasis stained with Gomori, CK7, CDX2 and LCA, and with colon carcinoma metastasis stained with Gomori, CK20 and PAN CK. The obtained relative increase of the colorimetric difference is in the range [19.36%, 103.94%].
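    An additive offset-plus-sparse split can be sketched with a per-channel median offset and soft thresholding of the residual. This is a schematic stand-in for the OSD estimator, not the authors' algorithm, and the toy RGB pixel values are invented for illustration.

```python
import math

def soft_threshold(values, lam):
    """Shrink each value toward zero by lam; small values become exactly zero."""
    return [math.copysign(max(abs(v) - lam, 0.0), v) for v in values]

def offset_sparsity(pixels, lam):
    """Additive split of each spectral pixel into a per-channel offset
    (here the channel median) plus a sparse enhancement term."""
    n_ch = len(pixels[0])
    offsets = []
    for c in range(n_ch):
        col = sorted(p[c] for p in pixels)
        m = len(col)
        offsets.append(0.5 * (col[(m - 1) // 2] + col[m // 2]))
    sparse = [soft_threshold([p[c] - offsets[c] for c in range(n_ch)], lam)
              for p in pixels]
    return offsets, sparse

# Eight background pixels and two stained-structure pixels (toy RGB values).
pixels = [[0.8, 0.6, 0.7]] * 8 + [[0.3, 0.2, 0.7]] * 2
offsets, sparse = offset_sparsity(pixels, lam=0.05)
print(sparse[0])    # background collapses to [0.0, 0.0, 0.0]
print(sparse[-1])   # the stained structure survives in the sparse term
```

    Because the dominant background ends up entirely in the offset term, the sparse term carries only the structures that differ from it, which is how the enhancement increases the colorimetric difference between tissue structures.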

  2. On the velocity space discretization for the Vlasov-Poisson system: Comparison between implicit Hermite spectral and Particle-in-Cell methods

    NASA Astrophysics Data System (ADS)

    Camporeale, E.; Delzanno, G. L.; Bergen, B. K.; Moulton, J. D.

    2016-01-01

    We describe a spectral method for the numerical solution of the Vlasov-Poisson system where the velocity space is decomposed by means of a Hermite basis, and the configuration space is discretized via a Fourier decomposition. The novelty of our approach is an implicit time discretization that allows exact conservation of charge, momentum and energy. The computational efficiency and the cost-effectiveness of this method are compared to those of the fully implicit PIC method recently introduced by Markidis and Lapenta (2011) and Chen et al. (2011). The following examples are discussed: Langmuir wave, Landau damping, ion-acoustic wave, two-stream instability. The Fourier-Hermite spectral method can achieve solutions that are several orders of magnitude more accurate at a fraction of the cost with respect to PIC.

  3. Spectral estimation—What is new? What is next?

    NASA Astrophysics Data System (ADS)

    Tary, Jean Baptiste; Herrera, Roberto Henry; Han, Jiajun; van der Baan, Mirko

    2014-12-01

    Spectral estimation, and the corresponding time-frequency representation of nonstationary signals, is a cornerstone of geophysical signal processing and interpretation. The last 10-15 years have seen the development of many new high-resolution decompositions that are often fundamentally different from Fourier and wavelet transforms. The conventional techniques, such as the short-time Fourier transform and the continuous wavelet transform, have limited resolution (localization) owing to the trade-off between time and frequency localization and to smearing caused by the finite length of their analysis template. Well-known techniques, like autoregressive methods and basis pursuit, and recently developed techniques, such as empirical mode decomposition and the synchrosqueezing transform, can achieve higher time-frequency localization owing to reduced spectral smearing and leakage. We first review the theory of various established and novel techniques, pointing out their assumptions, adaptability, and expected time-frequency localization. We then illustrate their performance on a collection of benchmark signals, including a laughing voice, a volcano tremor, a microseismic event, and a global earthquake, with the intention of providing a fair comparison of the pros and cons of each method. Finally, their outcomes are discussed and possible avenues for improvement are proposed.
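    As a point of reference for the conventional techniques discussed above, a windowed Fourier decomposition (here a plain short-time Fourier transform) can be sketched in a few lines; the chirp test signal, window length and hop size below are illustrative choices, not taken from the paper.

```python
import numpy as np

def stft_power(x, fs, win_len=256, hop=64):
    """Short-time Fourier transform power spectrogram via a sliding Hann window."""
    win = np.hanning(win_len)
    n_frames = 1 + (len(x) - win_len) // hop
    frames = np.stack([x[i * hop : i * hop + win_len] * win for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)          # one-sided spectrum per frame
    freqs = np.fft.rfftfreq(win_len, d=1 / fs)  # frequency axis in Hz
    times = (np.arange(n_frames) * hop + win_len / 2) / fs
    return times, freqs, np.abs(spec) ** 2

# A chirp sweeping upward in frequency: energy should move up over time.
fs = 1000.0
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * (10 + 22.5 * t) * t)     # instantaneous frequency 10 + 45 t
times, freqs, P = stft_power(x, fs)
peak_early = freqs[P[0].argmax()]
peak_late = freqs[P[-1].argmax()]
print(peak_early < peak_late)                   # True: dominant frequency rises
```

    Shorter windows improve time localization at the expense of frequency resolution, which is exactly the trade-off the high-resolution methods reviewed here try to overcome.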

  4. Statistical iterative material image reconstruction for spectral CT using a semi-empirical forward model

    NASA Astrophysics Data System (ADS)

    Mechlem, Korbinian; Ehn, Sebastian; Sellerer, Thorsten; Pfeiffer, Franz; Noël, Peter B.

    2017-03-01

    In spectral computed tomography (spectral CT), the additional information about the energy dependence of attenuation coefficients can be exploited to generate material selective images. These images have found applications in various areas such as artifact reduction, quantitative imaging or clinical diagnosis. However, significant noise amplification on material decomposed images remains a fundamental problem of spectral CT. Most spectral CT algorithms separate the process of material decomposition and image reconstruction. Separating these steps is suboptimal because the full statistical information contained in the spectral tomographic measurements cannot be exploited. Statistical iterative reconstruction (SIR) techniques provide an alternative, mathematically elegant approach to obtaining material selective images with improved tradeoffs between noise and resolution. Furthermore, image reconstruction and material decomposition can be performed jointly. This is accomplished by a forward model which directly connects the (expected) spectral projection measurements and the material selective images. To obtain this forward model, detailed knowledge of the different photon energy spectra and the detector response was assumed in previous work. However, accurately determining the spectrum is often difficult in practice. In this work, a new algorithm for statistical iterative material decomposition is presented. It uses a semi-empirical forward model which relies on simple calibration measurements. Furthermore, an efficient optimization algorithm based on separable surrogate functions is employed. This partially negates one of the major shortcomings of SIR, namely high computational cost and long reconstruction times. Numerical simulations and real experiments show strongly improved image quality and reduced statistical bias compared to projection-based material decomposition.

  5. Simulation of time-dispersion spectral device with sample spectra accumulation

    NASA Astrophysics Data System (ADS)

    Zhdanov, Arseny; Khansuvarov, Ruslan; Korol, Georgy

    2014-09-01

    This research aims to design a spectral device for analyzing the power spectrum of light sources whose radiation must be processed without direct contact, either because contact is impossible or because it is undesirable. Such sources include the jet blast of an aircraft and optical radiation in metallurgy and the textile industry. In the proposed spectral device, optical radiation is guided out of the unfavorable environment via a piece of optical fiber with high dispersion, and the analyzed radiation is sampled as short pulses. The dispersive properties of such a fiber cause spectral decomposition of the input optical pulses: the faster the group delay varies with frequency, the stronger the spectral decomposition effect. This effect allows a highly dispersive optical fiber to be used as the major element of the proposed spectral device. The duration of a sample must be much shorter than the group-delay time difference of the dispersive system, and within the given frequency range this characteristic has to be linear. For typical optical fiber the frequency range is 400…500 THz; using photonic-crystal fiber (PCF) gives a much wider spectral range for analysis. In this paper we simulate single-pulse transmission through a dispersive system with a linear dispersion characteristic and the accumulation of the square-law-detected output responses. In the simulation we study the influence of the slope of the fiber's dispersion characteristic on the spectral measurement results, and we also consider the impact of pulse duration and group-delay time difference on the output pulse shape and duration. The results indicate the most suitable dispersion characteristic, which allows choosing the PCF structure (the major element of the time-dispersion spectral analysis method) and the number of samples required for a reliable estimate of the measured spectrum.
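    The frequency-to-time mapping that underlies such a device can be sketched by propagating a short Gaussian pulse through a purely quadratic spectral phase (constant group-velocity dispersion); all numbers below (pulse width, accumulated dispersion) are illustrative assumptions, not parameters of the actual device.

```python
import numpy as np

# Time grid and a short Gaussian input pulse (all units illustrative).
n = 4096
dt = 1e-14                            # 10 fs sampling
t = (np.arange(n) - n // 2) * dt
pulse = np.exp(-(t / 50e-15) ** 2)    # ~50 fs field envelope

# Group-velocity dispersion: quadratic spectral phase exp(-i*beta2*L*w^2/2).
w = 2 * np.pi * np.fft.fftfreq(n, d=dt)
beta2_L = 1e-25                       # assumed accumulated dispersion (s^2)
out = np.fft.ifft(np.fft.fft(pulse) * np.exp(-0.5j * beta2_L * w ** 2))

def fwhm(y, dt):
    """Full width at half maximum of |y|^2, counted in samples."""
    above = np.abs(y) ** 2 >= 0.5 * np.max(np.abs(y) ** 2)
    return above.sum() * dt

print(fwhm(pulse, dt), fwhm(out, dt))  # the dispersed pulse is far longer
```

    The dispersed pulse is stretched by roughly the ratio of the accumulated dispersion to the squared input duration, so a square-law detector sampling the output in time effectively reads out the input spectrum.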

  6. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    NASA Astrophysics Data System (ADS)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important class of environmental contaminants, are highly carcinogenic and can be detected by fluorescence spectroscopy. However, the instrument introduces noise during the experiment, which can corrupt weak fluorescence signals, so we propose a denoising step to improve detection. First, a fluorescence spectrometer is used to measure the PAH fluorescence spectra; the noise is then reduced with the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.

  7. Bayesian inference of spectral induced polarization parameters for laboratory complex resistivity measurements of rocks and soils

    NASA Astrophysics Data System (ADS)

    Bérubé, Charles L.; Chouteau, Michel; Shamsipour, Pejman; Enkin, Randolph J.; Olivo, Gema R.

    2017-08-01

    Spectral induced polarization (SIP) measurements are now widely used to infer mineralogical or hydrogeological properties from the low-frequency electrical properties of the subsurface in both mineral exploration and environmental sciences. We present an open-source program that performs fast multi-model inversion of laboratory complex resistivity measurements using Markov-chain Monte Carlo simulation. Using this stochastic method, SIP parameters and their uncertainties may be obtained from the Cole-Cole and Dias models, or from the Debye and Warburg decomposition approaches. The program is tested on synthetic and laboratory data to show that the posterior distribution of a multiple Cole-Cole model is multimodal in particular cases, whereas the Warburg and Debye decomposition approaches yield unique solutions in all cases. It is shown that an adaptive Metropolis algorithm performs faster and is less dependent on the initial parameter values than the Metropolis-Hastings step method when inverting SIP data through the decomposition schemes; there is no advantage in using an adaptive step method for well-defined Cole-Cole inversions. Finally, the influence of measurement noise on the recovered relaxation-time distribution is explored. We provide the geophysics community with an open-source platform that can serve as a base for further developments in stochastic SIP data inversion and that may be used to perform parameter analysis with various SIP models.
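    The core of such a stochastic inversion can be illustrated with a plain random-walk Metropolis sampler fitting a single Cole-Cole model to synthetic complex-resistivity data. This is a simplified sketch, not the paper's program: the published work uses an adaptive Metropolis variant, and the parameter values, noise level and step sizes below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def cole_cole(w, rho0, m, tau, c):
    """Cole-Cole complex resistivity model commonly used in SIP interpretation."""
    return rho0 * (1 - m * (1 - 1 / (1 + (1j * w * tau) ** c)))

# Synthetic SIP data: true parameters (rho0, m, tau, c) plus Gaussian noise.
w = 2 * np.pi * np.logspace(-2, 3, 30)
true = np.array([100.0, 0.3, 0.05, 0.6])
data = cole_cole(w, *true)
data = data + 1.0 * (rng.standard_normal(w.size) + 1j * rng.standard_normal(w.size))

def log_post(p):
    rho0, m, tau, c = p
    if not (rho0 > 0 and 0 < m < 1 and tau > 0 and 0 < c < 1):
        return -np.inf                      # flat prior with hard bounds
    r = cole_cole(w, rho0, m, tau, c) - data
    return -0.5 * np.sum(np.abs(r) ** 2)    # noise sigma = 1 assumed known

# Random-walk Metropolis: accept with probability min(1, exp(dlogpost)).
p = np.array([80.0, 0.5, 0.1, 0.5])         # starting guess
step = np.array([0.2, 0.005, 0.002, 0.005])
chain, lp = [], log_post(p)
for _ in range(20000):
    q = p + step * rng.standard_normal(4)
    lq = log_post(q)
    if np.log(rng.random()) < lq - lp:
        p, lp = q, lq
    chain.append(p.copy())
post = np.array(chain[10000:])              # discard burn-in
print(post.mean(axis=0))                    # should land near the true parameters
```

    The posterior spread of `post` is what delivers the parameter uncertainties; the adaptive variant used in the paper tunes `step` on the fly instead of fixing it.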

  8. Reduction of Metal Artifact in Single Photon-Counting Computed Tomography by Spectral-Driven Iterative Reconstruction Technique

    PubMed Central

    Nasirudin, Radin A.; Mei, Kai; Panchev, Petar; Fehringer, Andreas; Pfeiffer, Franz; Rummeny, Ernst J.; Fiebich, Martin; Noël, Peter B.

    2015-01-01

    Purpose The exciting prospect of Spectral CT (SCT) using photon-counting detectors (PCD) will lead to new techniques in computed tomography (CT) that take advantage of the additional spectral information provided. We introduce a method to reduce metal artifact in X-ray tomography by incorporating knowledge obtained from SCT into a statistical iterative reconstruction scheme. We call our method Spectral-driven Iterative Reconstruction (SPIR). Method The proposed algorithm consists of two main components: material decomposition and penalized maximum likelihood iterative reconstruction. In this study, the spectral data acquisitions with an energy-resolving PCD were simulated using a Monte-Carlo simulator based on EGSnrc C++ class library. A jaw phantom with a dental implant made of gold was used as an object in this study. A total of three dental implant shapes were simulated separately to test the influence of prior knowledge on the overall performance of the algorithm. The generated projection data was first decomposed into three basis functions: photoelectric absorption, Compton scattering and attenuation of gold. A pseudo-monochromatic sinogram was calculated and used as input in the reconstruction, while the spatial information of the gold implant was used as a prior. The results from the algorithm were assessed and benchmarked with state-of-the-art reconstruction methods. Results Decomposition results illustrate that gold implant of any shape can be distinguished from other components of the phantom. Additionally, the result from the penalized maximum likelihood iterative reconstruction shows that artifacts are significantly reduced in SPIR reconstructed slices in comparison to other known techniques, while at the same time details around the implant are preserved. Quantitatively, the SPIR algorithm best reflects the true attenuation value in comparison to other algorithms. 
Conclusion It is demonstrated that the combination of the additional information from spectral CT and statistical reconstruction can significantly improve image quality, especially by reducing the streaking artifacts caused by the presence of materials with high atomic numbers. PMID:25955019

  9. Identification and modification of dominant noise sources in diesel engines

    NASA Astrophysics Data System (ADS)

    Hayward, Michael D.

    Determination of dominant noise sources in diesel engines is an integral step in the creation of quiet engines, but is a process which can involve an extensive series of expensive, time-consuming fired and motored tests. The goal of this research is to determine dominant noise source characteristics of a diesel engine in the near and far-fields with data from fewer tests than is currently required. Pre-conditioning and use of numerically robust methods to solve a set of cross-spectral density equations results in accurate calculation of the transfer paths between the near- and far-field measurement points. Application of singular value decomposition to an input cross-spectral matrix determines the spectral characteristics of a set of independent virtual sources, that, when scaled and added, result in the input cross spectral matrix. Each virtual source power spectral density is a singular value resulting from the decomposition performed over a range of frequencies. The complex relationship between virtual and physical sources is estimated through determination of virtual source contributions to each input measurement power spectral density. The method is made more user-friendly through use of a percentage contribution color plotting technique, where different normalizations can be used to help determine the presence of sources and the strengths of their contributions. Convolution of input measurements with the estimated path impulse responses results in a set of far-field components, to which the same singular value contribution plotting technique can be applied, thus allowing dominant noise source characteristics in the far-field to also be examined. Application of the methods presented results in determination of the spectral characteristics of dominant noise sources both in the near- and far-fields from one fired test, which significantly reduces the need for extensive fired and motored testing. 
Finally, it is shown that the far-field noise time history of a physically altered engine can be simulated through modification of singular values and recalculation of transfer paths between input and output measurements of previously recorded data.

  10. A Spectral Element Ocean Model on the Cray T3D: the interannual variability of the Mediterranean Sea general circulation

    NASA Astrophysics Data System (ADS)

    Molcard, A. J.; Pinardi, N.; Ansaloni, R.

    A new numerical model, SEOM (Spectral Element Ocean Model; Iskandarani et al., 1994), has been implemented for the Mediterranean Sea. Spectral element methods combine the geometric flexibility of finite element techniques with the rapid convergence rate of spectral schemes. The current version solves the shallow water equations with a fifth- (or sixth-) order accurate spectral scheme and about 50,000 nodes. The domain decomposition philosophy makes it possible to exploit the power of parallel machines. The original MIMD master/slave version of SEOM, written in F90 and PVM, has been ported to the Cray T3D. When critical for performance, Cray-specific high-performance one-sided communication routines (SHMEM) have been adopted to fully exploit the Cray T3D interprocessor network. Tests performed with a highly unstructured and irregular grid, on up to 128 processors, show almost linear scalability even with unoptimized domain decomposition techniques. Results from various case studies of the Mediterranean Sea are shown, involving realistic coastline geometry and monthly mean 1000 mb winds from the ECMWF atmospheric model operational analysis for the period January 1987 to December 1994. The simulation results show that variability in the wind forcing considerably affects the circulation dynamics of the Mediterranean Sea.

  11. Fast analytical spectral filtering methods for magnetic resonance perfusion quantification.

    PubMed

    Reddy, Kasireddy V; Mitra, Abhishek; Yalavarthy, Phaneendra K

    2016-08-01

    Deconvolution in perfusion-weighted imaging (PWI) plays an important role in quantifying MR perfusion parameters, and the application of PWI to stroke and brain-tumor studies has become standard clinical practice. The standard approaches for this deconvolution are oscillation-limited singular value decomposition (oSVD) and frequency-domain deconvolution (FDD); FDD is widely recognized as the fastest approach currently available for deconvolving MR perfusion data. In this work, two fast deconvolution methods, analytical Fourier filtering and analytical Showalter spectral filtering, are proposed. Through systematic evaluation, the proposed methods are shown to be computationally efficient and quantitatively accurate compared to FDD and oSVD.

  12. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At each time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one per subdomain) that can be solved simultaneously. The method is set up for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions, and a precise form of the error reduction factor at each iteration is derived. Although the method is applied here only to spectral collocation approximations, the idea is fairly general and can be used in different contexts as well; for instance, its application to space discretization by finite differences is straightforward.

  13. Reliable and Efficient Parallel Processing Algorithms and Architectures for Modern Signal Processing. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Liu, Kuojuey Ray

    1990-01-01

    Least-squares (LS) estimation and spectral decomposition algorithms constitute the heart of modern signal processing and communication problems. Implementations of recursive LS and spectral decomposition algorithms on parallel processing architectures, such as systolic arrays with efficient fault-tolerant schemes, are the major concern of this dissertation. There are four major results. First, we propose the systolic block Householder transformation with application to recursive least-squares minimization; it is successfully implemented on a systolic array with a two-level pipelined implementation at both the vector and the word level. Second, a real-time algorithm-based concurrent error detection scheme based on the residual method is proposed for the QRD RLS systolic array; fault diagnosis, order-degraded reconfiguration, and performance analysis are also considered. Third, the dynamic range, stability, error detection capability under finite-precision implementation, order-degraded performance, and residual estimation under faulty conditions for the QRD RLS systolic array are studied in detail. Finally, we propose the use of multi-phase systolic algorithms for spectral decomposition based on the QR algorithm. Two systolic architectures, one based on a triangular array and the other on a rectangular array, are presented for the multi-phase operations with fault-tolerance considerations. Eigenvectors and singular vectors can easily be obtained through the multi-phase operations. Performance issues are also considered.
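    The Householder-based QR decomposition at the heart of the QRD-RLS array can be sketched sequentially (the systolic, pipelined mapping is the dissertation's contribution; this is only the underlying algorithm, shown on a toy matrix):

```python
import numpy as np

def householder_qr(A):
    """QR factorization by successive Householder reflections (as in QRD-RLS)."""
    m, n = A.shape
    R = A.astype(float).copy()
    Q = np.eye(m)
    for k in range(n):
        x = R[k:, k]
        v = x.copy()
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        v /= np.linalg.norm(v)
        # Apply the reflector H = I - 2 v v^T to the trailing submatrix.
        R[k:, k:] -= 2.0 * np.outer(v, v @ R[k:, k:])
        Q[:, k:] -= 2.0 * np.outer(Q[:, k:] @ v, v)
    return Q, R

A = np.array([[4.0, 1.0], [3.0, 2.0], [0.0, 5.0]])
Q, R = householder_qr(A)
print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(3)))  # True True
```

    In the QRD-RLS setting the same reflections are applied row by row as new data arrive, which is what maps naturally onto a triangular systolic array.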

  14. Fluorescence Intrinsic Characterization of Excitation-Emission Matrix Using Multi-Dimensional Ensemble Empirical Mode Decomposition

    PubMed Central

    Chang, Chi-Ying; Chang, Chia-Chi; Hsiao, Tzu-Chien

    2013-01-01

    Excitation-emission matrix (EEM) fluorescence spectroscopy is a noninvasive method for tissue diagnosis and has become important in clinical use. However, the intrinsic characterization of EEM fluorescence remains unclear. Photobleaching and the complexity of the chemical compounds make it difficult to distinguish individual compounds due to overlapping features. Conventional studies use principal component analysis (PCA) for EEM fluorescence analysis, and the relationship between the EEM features extracted by PCA and diseases has been examined. The spectral features of different tissue constituents are not fully separable or clearly defined. Recently, a non-stationary method called multi-dimensional ensemble empirical mode decomposition (MEEMD) was introduced; this method can extract the intrinsic oscillations on multiple spatial scales without loss of information. The aim of this study was to propose a fluorescence spectroscopy system for EEM measurements and to describe a method for extracting the intrinsic characteristics of EEM by MEEMD. The results indicate that, although PCA provides the principal factor for the spectral features associated with chemical compounds, MEEMD can provide additional intrinsic features with more reliable mapping of the chemical compounds. MEEMD has the potential to extract intrinsic fluorescence features and improve the detection of biochemical changes. PMID:24240806

  15. Accurate modeling of plasma acceleration with arbitrary order pseudo-spectral particle-in-cell methods

    DOE PAGES

    Jalas, S.; Dornmair, I.; Lehe, R.; ...

    2017-03-20

    Particle-in-cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR), resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite-order stencils can strongly reduce NCR, or even suppress it, and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable, and arbitrary-order pseudo-spectral methods provide this needed locality. Yet these methods can again be prone to NCR. In this paper, we show that acceptably low solver orders are sufficient to correctly model the physics of interest while allowing for parallel computation by domain decomposition.

  16. Breast Tissue Characterization with Photon-counting Spectral CT Imaging: A Postmortem Breast Study

    PubMed Central

    Ding, Huanjun; Klopfer, Michael J.; Ducote, Justin L.; Masaki, Fumitaro

    2014-01-01

    Purpose To investigate the feasibility of characterizing breast tissue in terms of water, lipid, and protein content with a spectral computed tomographic (CT) system based on a cadmium zinc telluride (CZT) photon-counting detector by using postmortem breasts. Materials and Methods Nineteen pairs of postmortem breasts were imaged with a CZT-based photon-counting spectral CT system at a beam energy of 100 kVp. The mean glandular dose was estimated to be in the range of 1.8–2.2 mGy. The images were corrected for pulse pile-up and other artifacts by using spectral distortion corrections. Dual-energy decomposition was then applied to characterize each breast into water, lipid, and protein content. The precision of the three-compartment characterization was evaluated by comparing the compositions of right and left breasts, from which the standard error of the estimate was determined. The results of dual-energy decomposition were compared, in terms of averaged root mean square error, with chemical analysis, which served as the reference standard. Results The standard errors of the estimate for the right-left correlations obtained from spectral CT were 7.4%, 6.7%, and 3.2% for water, lipid, and protein content, respectively. Compared with the reference standard, the average root mean square error in breast tissue composition was 2.8%. Conclusion Spectral CT can be used to accurately quantify the water, lipid, and protein content of breast tissue in a laboratory study by using postmortem specimens. © RSNA, 2014 PMID:24814180

  17. Diverse Power Iteration Embeddings and Its Applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang H.; Yoo S.; Yu, D.

    2014-12-14

    Spectral embedding is one of the most effective dimension reduction algorithms in data mining; however, its computational complexity must be mitigated before it can be applied to real-world large-scale data analysis. Much research has focused on developing approximate spectral embeddings that are more efficient but considerably less effective. This paper proposes Diverse Power Iteration Embeddings (DPIE), which not only retains the efficiency of power iteration methods but also produces a series of diverse and more effective embedding vectors. We test this novel method by applying it to various data mining applications (e.g., clustering, anomaly detection and feature selection) and evaluating the resulting performance improvements. The experimental results show that the proposed DPIE is more effective than popular spectral approximation methods and attains quality similar to that of classic spectral embedding derived from eigendecompositions, while being extremely fast on big-data applications: in terms of clustering quality, for example, DPIE achieves as much as 95% of that of classic spectral clustering on complex datasets while running over 4000 times faster in a limited-memory environment.
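    The basic power-iteration embedding idea can be sketched as follows: iterating a row-normalized similarity matrix eventually converges to a constant vector, but the intermediate iterates already separate well-formed clusters and serve as a cheap one-dimensional embedding. The data and iteration count are illustrative; DPIE's contribution, generating several diverse such vectors, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two well-separated point clouds; W is a Gaussian similarity matrix.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)

# Power iteration on the row-normalized similarity matrix: stop early,
# while the iterate is still cluster-informative.
P = W / W.sum(axis=1, keepdims=True)
v = rng.random(len(X))
for _ in range(15):
    v = P @ v
    v /= np.abs(v).max()          # rescale for numerical stability

labels = v > v.mean()             # threshold the 1-D embedding at its mean
print(labels.astype(int))         # expected: one constant label per cloud
```

    Classic spectral embedding would instead compute eigenvectors explicitly; the point of power-iteration methods is that a few matrix-vector products suffice.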

  18. A study of the Herald-Phillipstown fault in the Wabash Valley using drillhole and 3-D seismic reflection data

    NASA Astrophysics Data System (ADS)

    Kroenke, Samantha E.

    In June 2009, a 2.2 square mile 3-D high resolution seismic reflection survey was shot in southeastern Illinois in the Phillipstown Consolidated oilfield. A well was drilled in the 3-D survey area to tie the seismic to the geological data with a synthetic seismogram from the sonic log. The objectives of the 3-D seismic survey were three-fold: (1) To image and interpret faulting of the Herald-Phillipstown Fault using drillhole-based geological and seismic cross-sections and structural contour maps created from the drillhole data and seismic reflection data, (2) To test the effectiveness of imaging the faults by selected seismic attributes, and (3) To compare spectral decomposition amplitude maps with an isochron map and an isopach map of a selected geologic interval (VTG interval). Drillhole and seismic reflection data show that various formation offsets increase near the main Herald-Phillipstown fault, and that the fault and its large offset subsidiary faults penetrate the Precambrian crystalline basement. A broad, northeast-trending 10,000 feet wide graben is consistently observed in the drillhole data. Both shallow and deep formations in the geological cross-sections reveal small horst and graben features within the broad graben created possibly in response to fault reactivations. The HPF faults have been interpreted as originally Precambrian age high-angle, normal faults reactivated with various amounts and types of offset. Evidence for strike-slip movement is also clear on several faults. Changes in the seismic attribute values in the selected interval and along various time slices throughout the whole dataset correlate with the Herald-Phillipstown faults. Overall, seismic attributes could provide a means of mapping large offset faults in areas with limited or absent drillhole data. 
Results of the spectral decomposition suggest that if the interval velocity is known for a particular formation or interval, high-resolution 3-D seismic reflection surveys could utilize these amplitudes as an alternative seismic interpretation method for estimating formation thicknesses. A VTG isopach map was compared with an isochron map and a spectral decomposition amplitude map. The results reveal that the isochron map strongly correlates with the isopach map as well as the spectral decomposition map. It was also found that thicker areas in the isopach correlated with higher amplitude values in the spectral decomposition amplitude map. Offsets along the faults appear sharper in these amplitudes and isochron maps than in the isopach map, possibly as a result of increased spatial sampling.

  19. The demodulated band transform

    PubMed Central

    Kovach, Christopher K.; Gander, Phillip E.

    2016-01-01

    Background Windowed Fourier decompositions (WFD) are widely used in measuring stationary and non-stationary spectral phenomena and in describing pairwise relationships among multiple signals. Although a variety of WFDs see frequent application in electrophysiological research, including the short-time Fourier transform, continuous wavelets, band-pass filtering and multitaper-based approaches, each carries certain drawbacks related to computational efficiency and spectral leakage. This work surveys the advantages of a WFD not previously applied in electrophysiological settings. New Methods A computationally efficient form of complex demodulation, the demodulated band transform (DBT), is described. Results DBT is shown to provide an efficient approach to spectral estimation with minimal susceptibility to spectral leakage. In addition, it lends itself well to adaptive filtering of non-stationary narrowband noise. Comparison with existing methods A detailed comparison with alternative WFDs is offered, with an emphasis on the relationship between DBT and Thomson's multitaper. DBT is shown to perform favorably in combining computational efficiency with minimal introduction of spectral leakage. Conclusion DBT is ideally suited to efficient estimation of both stationary and non-stationary spectral and cross-spectral statistics with minimal susceptibility to spectral leakage. These qualities are broadly desirable in many settings. PMID:26711370
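    The complex-demodulation idea behind the DBT can be sketched as follows: shift the band of interest to DC, low-pass, and read amplitude and phase off the complex result. The brick-wall FFT filter and all signal parameters here are illustrative simplifications, not the DBT's actual filter design.

```python
import numpy as np

def demodulated_band(x, fs, f0, bw):
    """Complex demodulation of x around f0: shift the band to DC, then
    low-pass by zeroing FFT bins outside +/- bw/2 (ideal brick-wall filter)."""
    n = len(x)
    t = np.arange(n) / fs
    baseband = x * np.exp(-2j * np.pi * f0 * t)     # shift f0 down to 0 Hz
    spec = np.fft.fft(baseband)
    freqs = np.fft.fftfreq(n, d=1 / fs)
    spec[np.abs(freqs) > bw / 2] = 0.0              # keep only the narrow band
    return np.fft.ifft(spec)                        # complex band signal

# A 40 Hz oscillation with a slow amplitude ramp: the demodulated magnitude
# should track the ramp envelope.
fs, n = 500.0, 2000
t = np.arange(n) / fs
env = 0.5 + t / t[-1]                               # envelope rising 0.5 -> 1.5
x = env * np.cos(2 * np.pi * 40.0 * t)
z = demodulated_band(x, fs, 40.0, 8.0)
amp = 2 * np.abs(z)                                 # factor 2: one-sided band
err = np.abs(amp[100:-100] - env[100:-100]).max()   # ignore filter edge effects
print(err)                                          # small tracking error
```

    The complex output also carries instantaneous phase (`np.angle(z)`), which is what makes demodulation-based transforms convenient for cross-spectral statistics.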

  20. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    NASA Astrophysics Data System (ADS)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares dissimilarity-based and covariance-based unsupervised chemometric classification approaches using total synchronous fluorescence spectroscopy (TSFS) data sets acquired for cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance matrix of the data set and finds the factors that explain the major sources of variation present in the data. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups, and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix instead.
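    The conventional covariance-based route can be sketched as follows: eigendecompose the covariance of the mean-centered data and project onto the leading eigenvector (i.e., PCA scores). The synthetic two-class "spectra" are illustrative; in this toy case the class difference happens to dominate, so PC1 separates the groups, whereas the paper's point is that this is not guaranteed for real TSFS data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two classes of synthetic "spectra" over 50 wavelength channels.
n_feat = 50
base = np.sin(np.linspace(0, np.pi, n_feat))
class_a = base + 0.05 * rng.standard_normal((30, n_feat))
class_b = 1.2 * base + 0.03 + 0.05 * rng.standard_normal((30, n_feat))
X = np.vstack([class_a, class_b])

# Eigenvalue-eigenvector decomposition of the covariance matrix (PCA).
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / (len(X) - 1)
evals, evecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
pc1 = Xc @ evecs[:, -1]                     # scores on the top eigenvector

sep = abs(pc1[:30].mean() - pc1[30:].mean()) / (pc1[:30].std() + pc1[30:].std())
print(sep)  # large value = the two preparations separate along PC1
```

    The dissimilarity-based alternative replaces `cov` with a pair-wise sample dissimilarity matrix before the eigendecomposition, which is what improves class separation when within-class variation dominates the covariance.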

  1. Rayleigh imaging in spectral mammography

    NASA Astrophysics Data System (ADS)

    Berggren, Karl; Danielsson, Mats; Fredenberg, Erik

    2016-03-01

    Spectral imaging is the acquisition of multiple images of an object at different energy spectra. In mammography, dual-energy imaging (spectral imaging with two energy levels) has been investigated for several applications, in particular material decomposition, which allows for quantitative analysis of breast composition and quantitative contrast-enhanced imaging. Material decomposition with dual-energy imaging is based on the assumption that there are two dominant photon interaction effects that determine linear attenuation: the photoelectric effect and Compton scattering. This assumption limits the number of basis materials, i.e., the number of materials that can be differentiated, to two. However, Rayleigh scattering may account for more than 10% of the linear attenuation in the mammography energy range. In this work, we show that a modified version of a scanning multi-slit spectral photon-counting mammography system is able to acquire three images at different spectra and can be used for triple-energy imaging. We further show that triple-energy imaging in combination with the efficient scatter rejection of the system enables measurement of Rayleigh scattering, which adds an additional energy dependence to the linear attenuation and enables material decomposition with three basis materials. Three available basis materials have the potential to improve virtually all applications of spectral imaging.
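    The two-basis-material assumption can be made concrete with an idealized monoenergetic model: each measured log-attenuation is a linear combination of basis-material thicknesses, so decomposition is a small linear solve. The attenuation coefficients and thicknesses below are illustrative numbers, not calibrated values.

```python
import numpy as np

# Simplified dual-energy model with monoenergetic beams: the measured
# -log(I/I0) at each energy is a linear combination of the two basis
# material thicknesses. Coefficients (1/cm) are illustrative only.
mu = np.array([[0.50, 0.80],    # [mu_material1, mu_material2] at low energy
               [0.25, 0.30]])   # the same two materials at high energy

t_true = np.array([4.0, 0.2])   # cm of each basis material along the ray
logI = mu @ t_true              # ideal line integrals

t_est = np.linalg.solve(mu, logI)
print(t_est)                    # recovers [4.0, 0.2]

# Noise amplification: small measurement noise is magnified because the
# two rows of mu are nearly collinear (overlapping spectral sensitivity).
rng = np.random.default_rng(4)
noisy = logI + 0.01 * rng.standard_normal(2)
print(np.linalg.solve(mu, noisy) - t_true)   # typically far exceeds 0.01
```

    Adding a third measurement at a third spectrum turns `mu` into a 3x3 system, which is exactly what allows a Rayleigh-scattering basis function to be included.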

  2. Implementation of spectral clustering with partitioning around medoids (PAM) algorithm on microarray data of carcinoma

    NASA Astrophysics Data System (ADS)

    Cahyaningrum, Rosalia D.; Bustamam, Alhadi; Siswantining, Titin

    2017-03-01

    Microarray technology has become one of the imperative tools in the life sciences for observing gene expression levels, including the expression of genes in patients with carcinoma. Carcinoma is a cancer that forms in epithelial tissue. Such data can be analyzed to identify hereditary gene expression and to build classifications that can improve the diagnosis of carcinoma. Microarray data usually come in such large dimensions that most methods require substantial computing time for grouping. Therefore, this study uses the spectral clustering method, which reduces the dimension of the data before grouping. Spectral clustering is based on the spectral decomposition of a matrix that represents the data in the form of a graph. After the data dimensions are reduced, the data are partitioned. One well-known partitioning method is Partitioning Around Medoids (PAM), which minimizes an objective function by iteratively swapping non-medoid points with medoid points until convergence. The objective of this research is to implement spectral clustering with the PAM partitioning algorithm to group 7457 carcinoma genes by similarity. The result of this study is two groups of genes associated with carcinoma.
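    The pipeline of this kind (spectral embedding followed by PAM) can be sketched as below. This is a minimal illustration, not the authors' implementation; the Gaussian affinity, the `sigma` value, and the greedy swap schedule are assumed, illustrative choices.

```python
import numpy as np

def spectral_embedding(X, k, sigma=1.0):
    """Embed points into the k smallest eigenvectors of the normalized
    graph Laplacian built from a Gaussian affinity (illustrative choice)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    L = np.eye(len(X)) - d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
    _, vecs = np.linalg.eigh(L)            # spectral decomposition of L
    U = vecs[:, :k]                        # k smallest eigenvectors
    return U / np.linalg.norm(U, axis=1, keepdims=True)

def pam(points, k, seed=0):
    """Partitioning Around Medoids: greedy medoid/non-medoid swaps until
    the total distance-to-nearest-medoid stops improving."""
    rng = np.random.default_rng(seed)
    D = np.sqrt(((points[:, None] - points[None, :]) ** 2).sum(-1))
    medoids = list(rng.choice(len(points), size=k, replace=False))
    cost = lambda meds: D[:, meds].min(axis=1).sum()
    best = cost(medoids)
    improved = True
    while improved:
        improved = False
        for i in range(k):
            for j in range(len(points)):
                if j in medoids:
                    continue
                trial = medoids[:i] + [j] + medoids[i + 1:]
                c = cost(trial)
                if c < best:
                    medoids, best, improved = trial, c, True
    labels = D[:, medoids].argmin(axis=1)
    return labels, medoids
```

    On well-separated data, the embedded points collapse to near-identical vectors per cluster, so PAM's swap search converges quickly.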

  3. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed preceding spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra directly using a matrix inverse operation, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic and angular nature of hue, we propose the use of a circular von Mises mixture model to analyze the hue distribution, and provide a complete color-based pixel soft-clustering solution to address the color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.

  4. Application of a spectrally filtered probing light beam and RGB decomposition of microphotographs for flow registration of ultrasonically enhanced agglutination of erythrocytes

    NASA Astrophysics Data System (ADS)

    Doubrovski, V. A.; Ganilova, Yu. A.; Zabenkov, I. V.

    2013-08-01

    We propose a development of the flow microscopy method that increases its resolving power in the registration of erythrocyte agglutination. We show experimentally that the action of an ultrasonic standing wave on an agglutinating blood-serum mixture leads to the formation of erythrocyte immune complexes large enough that a new two-wave optical method for registering erythrocyte agglutination becomes possible, using RGB decomposition of microphotographs of the flow of the mixture under study. This approach increases the reliability of registering erythrocyte agglutination and, consequently, the reliability of blood typing. Our results can be used in the development of instruments for automatic human blood typing.

  5. A facile thermal decomposition route to synthesise CoFe2O4 nanostructures

    NASA Astrophysics Data System (ADS)

    Kalpanadevi, K.; Sinduja, C. R.; Manimekalai, R.

    2014-01-01

    The synthesis of CoFe2O4 nanoparticles has been achieved by a simple thermal decomposition method from an inorganic precursor, cobalt ferrous cinnamate hydrazinate (CoFe2(cin)3(N2H4)3), which was obtained by a novel precipitation method from the corresponding metal salts, cinnamic acid and hydrazine hydrate. The precursor was characterized by hydrazine and metal analyses, infrared spectral analysis and thermogravimetric analysis. Under appropriate annealing, CoFe2(cin)3(N2H4)3 yielded CoFe2O4 nanoparticles, which were characterized for size and structure using X-ray diffraction (XRD), high-resolution transmission electron microscopy (HRTEM), selected-area electron diffraction (SAED) and scanning electron microscopy (SEM) techniques.

  6. Synthesis & characterization of Bi7.38Ce0.62O12.3 and its optical and electrocatalytic property

    NASA Astrophysics Data System (ADS)

    Padmanaban, A.; Dhanasekaran, T.; Kumar, S. Praveen; Gnanamoorthy, G.; Stephen, A.; Narayanan, V.

    2017-05-01

    Bismuth cerium oxide was synthesized by a thermal decomposition method. The material was characterized by X-ray diffraction, DRS UV-Vis and Raman spectral methods, and FE-SEM. In electrocatalytic sensing of 4-nitrophenol, the bismuth cerium oxide-modified GCE exhibits better activity than the bare GCE: the modified electrode shows a higher anodic current response at a lower potential.

  7. Spectral element method for elastic and acoustic waves in frequency domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shi, Linlin; Zhou, Yuanguo; Wang, Jia-Min

    Numerical techniques in the time domain are widespread in seismic and acoustic modeling. In some applications, however, frequency-domain techniques can be advantageous over the time-domain approach when narrow-band results are desired, especially if multiple sources can be handled more conveniently in the frequency domain. Moreover, medium attenuation effects can be more accurately and conveniently modeled in the frequency domain. In this paper, we present a spectral-element method (SEM) in the frequency domain to simulate elastic and acoustic waves in anisotropic, heterogeneous, and lossy media. The SEM is based upon the finite-element framework and has exponential convergence because of the use of GLL basis functions. The anisotropic perfectly matched layer is employed to truncate the boundary for unbounded problems. Compared with the conventional finite-element method, the number of unknowns in the SEM is significantly reduced, and higher-order accuracy is obtained due to its spectral accuracy. To account for the acoustic-solid interaction, a domain decomposition method (DDM) based upon the discontinuous Galerkin spectral-element method is proposed. Numerical experiments show the proposed method can be an efficient alternative for accurate calculation of elastic and acoustic waves in the frequency domain.
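    The GLL basis underlying the SEM's spectral accuracy is built on Gauss-Lobatto-Legendre points. A minimal sketch of computing these nodes and the associated quadrature weights from the standard formulas (not code from the paper) is:

```python
import numpy as np
from numpy.polynomial import legendre

def gll_nodes_weights(N):
    """Gauss-Lobatto-Legendre nodes and weights on [-1, 1] for order N.
    Interior nodes are the roots of P_N'(x); endpoints are -1 and 1.
    Weights follow the classical formula w_i = 2 / (N (N+1) P_N(x_i)^2)."""
    PN = legendre.Legendre.basis(N)
    x = np.concatenate(([-1.0], PN.deriv().roots().real, [1.0]))
    x.sort()
    w = 2.0 / (N * (N + 1) * PN(x) ** 2)
    return x, w
```

    The resulting rule integrates polynomials up to degree 2N-1 exactly, which is why mass and stiffness matrices assembled on GLL points retain spectral accuracy.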

  8. The recognition of ocean red tide with hyper-spectral-image based on EMD

    NASA Astrophysics Data System (ADS)

    Zhao, Wencang; Wei, Hongli; Shi, Changjiang; Ji, Guangrong

    2008-05-01

    A new technique for red tide recognition from remotely sensed hyperspectral images, based on empirical mode decomposition (EMD), is introduced in this paper, using data from an artificial red tide experiment in the East China Sea in 2002. A set of characteristic parameters describing the absorbing and reflecting crests of the red tide, together with recognition methods based on them, is put forward from the general picture data; with these, the spectral information of certain non-dominant alga species present during a red tide occurrence is analyzed to establish a foundation for estimating the species. Comparative experiments have proved that the method is effective. Meanwhile, the transitional area between the red-tide zone and the non-red-tide zone can be detected from information on the thickness of the algae influence, with which a red tide can be forecast.
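    For reference, the core of EMD is the sifting loop that peels off intrinsic mode functions (IMFs). The sketch below is a simplified, dependency-free version: it uses linear envelope interpolation where classical EMD uses cubic splines, and a fixed sifting count instead of a stopping criterion.

```python
import numpy as np

def sift_imf(x, n_sift=10):
    """One sifting pass: repeatedly subtract the mean of the upper and
    lower envelopes until (approximately) an IMF remains.
    Simplification: linear envelopes instead of cubic splines."""
    h = np.asarray(x, dtype=float).copy()
    t = np.arange(len(h))
    for _ in range(n_sift):
        peaks = np.where((h[1:-1] > h[:-2]) & (h[1:-1] > h[2:]))[0] + 1
        dips = np.where((h[1:-1] < h[:-2]) & (h[1:-1] < h[2:]))[0] + 1
        if len(peaks) < 2 or len(dips) < 2:
            break                      # too few extrema: h is a residual
        env_hi = np.interp(t, peaks, h[peaks])
        env_lo = np.interp(t, dips, h[dips])
        h = h - 0.5 * (env_hi + env_lo)
    return h

def emd(x, max_imfs=5):
    """Peel off IMFs one by one; stop when nothing oscillatory remains."""
    imfs, resid = [], np.asarray(x, dtype=float).copy()
    for _ in range(max_imfs):
        imf = sift_imf(resid)
        if np.allclose(imf, resid):    # no extrema left to extract
            break
        imfs.append(imf)
        resid = resid - imf
    return imfs, resid
```

    By construction the extracted IMFs plus the residual always sum back to the original signal, which is the property the recognition step above relies on.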

  9. Spectral decomposition of seismic data with reassigned smoothed pseudo Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyang; Liu, Tianyou

    2009-07-01

    Seismic signals are nonstationary, mainly due to absorption and attenuation of seismic energy in strata. For spectral decomposition of seismic data, the conventional method using the short-time Fourier transform (STFT) limits temporal and spectral resolution by a predefined window length. The continuous wavelet transform (CWT) uses dilation and translation of a wavelet to produce a time-scale map; however, the wavelets utilized should be orthogonal in order to obtain satisfactory resolution. The less widely applied Wigner-Ville distribution (WVD), though superior in energy concentration, suffers from cross-term interference (CTI) when signals are multi-component. In order to reduce the impact of CTI, the Cohen class uses a kernel function as a low-pass filter; nevertheless, this also weakens the energy concentration of the auto-terms. In this paper, we employ the smoothed pseudo Wigner-Ville distribution (SPWVD) with a Gaussian kernel function to reduce CTI in the time and frequency domains, and then reassign the values of the SPWVD (the reassigned SPWVD, RSPWVD) according to the center of gravity of the considered energy region, so that distribution concentration is maintained simultaneously. We apply this method to a multi-component synthetic seismic record and compare it with STFT and CWT spectra. Two field examples reveal that the RSPWVD can potentially be applied to detect low-frequency shadows caused by hydrocarbons and to delineate the spatial distribution of abnormal geological bodies more precisely.
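    The cross-term behavior discussed above is easiest to see from the discrete WVD itself. A compact pseudo-WVD (lag-truncated at the signal edges, no smoothing or reassignment) for an analytic signal might look like:

```python
import numpy as np

def wvd(x):
    """Discrete (pseudo) Wigner-Ville distribution of an analytic signal.
    Returns an (N, N) array with time along axis 0 and frequency bins
    along axis 1."""
    x = np.asarray(x, dtype=complex)
    N = len(x)
    W = np.zeros((N, N))
    for n in range(N):
        m_max = min(n, N - 1 - n)          # largest symmetric lag at time n
        m = np.arange(-m_max, m_max + 1)
        r = np.zeros(N, dtype=complex)     # instantaneous autocorrelation
        r[m % N] = x[n + m] * np.conj(x[n - m])
        W[n] = np.fft.fft(r).real
    return W
```

    Note the well-known frequency doubling of this discrete formulation: a pure tone at FFT bin k concentrates at bin 2k. Feeding a two-component signal into this function makes the cross-terms midway between the auto-terms directly visible.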

  10. Quantification of breast lesion compositions using low-dose spectral mammography: A feasibility study

    PubMed Central

    Ding, Huanjun; Sennung, David; Cho, Hyo-Min; Molloi, Sabee

    2016-01-01

    Purpose: The positive predictive power for malignancy can potentially be improved, if the chemical compositions of suspicious breast lesions can be reliably measured in screening mammography. The purpose of this study is to investigate the feasibility of quantifying breast lesion composition, in terms of water and lipid contents, with spectral mammography. Methods: Phantom and tissue samples were imaged with a spectral mammography system based on silicon-strip photon-counting detectors. Dual-energy calibration was performed for material decomposition, using plastic water and adipose-equivalent phantoms as the basis materials. The step wedge calibration phantom consisted of 20 calibration configurations, which ranged from 2 to 8 cm in thickness and from 0% to 100% in plastic water density. A nonlinear rational fitting function was used in dual-energy calibration of the imaging system. Breast lesion phantoms, made from various combinations of plastic water and adipose-equivalent disks, were embedded in a breast mammography phantom with a heterogeneous background pattern. Lesion phantoms with water densities ranging from 0% to 100% were placed at different locations of the heterogeneous background phantom. The water density in the lesion phantoms was measured using dual-energy material decomposition. The thickness and density of the background phantom were varied to test the accuracy of the decomposition technique in different configurations. In addition, an in vitro study was also performed using mixtures of lean and fat bovine tissue of 25%, 50%, and 80% lean weight percentages as the background. Lesions were simulated by using breast lesion phantoms, as well as small bovine tissue samples, composed of carefully weighed lean and fat bovine tissues. The water densities in tissue samples were measured using spectral mammography and compared to measurement using chemical decomposition of the tissue. 
Results: The measured and known water thicknesses were compared for various lesion configurations. There was a good linear correlation between the measured and known values. The root-mean-square errors in water thickness measurements were 0.3 and 0.2 mm for the plastic phantom and bovine tissue backgrounds, respectively. Conclusions: The results indicate that spectral mammography can be used to accurately characterize breast lesion composition in terms of equivalent water and lipid contents. PMID:27782705
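    The two-material decomposition behind such dual-energy measurements reduces, in the idealized linear model, to inverting a 2x2 system of basis attenuations. The sketch below uses hypothetical coefficients (`MU`); the study itself calibrates a nonlinear rational fitting function against step-wedge phantoms rather than using tabulated values.

```python
import numpy as np

# Hypothetical effective attenuation coefficients (1/cm) of the two basis
# materials at the low- and high-energy spectra; a real system obtains
# these from a calibration phantom, not from this table.
MU = np.array([[0.35, 0.25],    # low energy:  [water, adipose]
               [0.22, 0.18]])   # high energy: [water, adipose]

def forward(t_water, t_adipose):
    """Simulated log-attenuation measurements for known basis thicknesses."""
    return MU @ np.array([t_water, t_adipose])

def decompose(log_low, log_high):
    """Solve MU @ t = p for the basis thicknesses t = (t_water, t_adipose)."""
    p = np.array([log_low, log_high])
    return np.linalg.solve(MU, p)
```

    Recovering the thicknesses is only well-conditioned when the two spectra separate the basis materials, i.e. when `MU` is far from singular; in practice that conditioning drives the noise amplification mentioned elsewhere in this collection.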

  11. Blind decomposition of Herschel-HIFI spectral maps of the NGC 7023 nebula

    NASA Astrophysics Data System (ADS)

    Berné, O.; Joblin, C.; Deville, Y.; Pilleri, P.; Pety, J.; Teyssier, D.; Gerin, M.; Fuente, A.

    2012-12-01

    Large spatial-spectral surveys are increasingly common in astronomy. This calls for new methods to analyze such mega- to giga-pixel data cubes. In this paper we present a method to decompose such observations into a limited and comprehensive set of components; the original data can then be interpreted in terms of linear combinations of these components. The method uses non-negative matrix factorization (NMF) to extract latent spectral end-members in the data. The number of end-members needed is estimated from the level of noise in the data. A Monte Carlo scheme is adopted to estimate the optimal end-members and their standard deviations. Finally, the maps of linear coefficients are reconstructed using non-negative least squares. We apply this method to a set of hyperspectral data of the NGC 7023 nebula, obtained recently with the HIFI instrument onboard the Herschel space observatory, and provide a first interpretation of the results in terms of the 3-dimensional dynamical structure of the region.
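    A minimal version of the NMF step at the heart of such a decomposition is the Lee-Seung multiplicative update; the Monte Carlo end-member selection and noise-based rank estimation of the paper are not reproduced here.

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W H||_F with
    W >= 0 (end-member maps) and H >= 0 (latent spectra)."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)   # update spectra
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)   # update abundances
    return W, H
```

    The updates preserve non-negativity by construction and monotonically decrease the Frobenius reconstruction error, though the factorization is only unique up to scaling and permutation of the components.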

  12. Characterization of cancer and normal tissue fluorescence through wavelet transform and singular value decomposition

    NASA Astrophysics Data System (ADS)

    Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.

    2008-02-01

    The statistical and characteristic features of the polarized fluorescence spectra from cancerous, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets made it possible to isolate high- and low-frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues that is not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component, whereas one finds non-Gaussian behavior for the spectral variations of normal and benign tissues. The study of the difference of intensities in the parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630 nm domain for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. The continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuations is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random-matrix support for the spectral fluctuations: the small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with the random-matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.

  13. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach.

    PubMed

    Tarai, Madhumita; Kumar, Keshav; Divya, O; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-05

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Fast sparse Raman spectral unmixing for chemical fingerprinting and quantification

    NASA Astrophysics Data System (ADS)

    Yaghoobi, Mehrdad; Wu, Di; Clewes, Rhea J.; Davies, Mike E.

    2016-10-01

    Raman spectroscopy is a well-established spectroscopic method for the detection of condensed-phase chemicals. It is based on the light scattered when a target material is exposed to a narrowband laser beam, and the information generated enables presumptive identification by measuring correlation with library spectra. Whilst this approach succeeds in identifying the chemical content of single-component samples, it is more difficult to apply to spectral mixtures. The capability of handling spectral mixtures is crucial for defence and security applications, as hazardous materials may be present as mixtures due to degradation, interferents or precursors. A novel method for spectral unmixing is proposed here. Most modern decomposition techniques are based on sparse decomposition of the mixture and the application of extra constraints to preserve the sum of concentrations. These methods have often been proposed for passive spectroscopy, where spectral baseline correction is not required. The most successful methods are computationally expensive, e.g. convex optimisation and Bayesian approaches. We present a novel low-complexity sparsity-based method to decompose the spectra using a reference library of spectra; it can be implemented on a hand-held spectrometer in near real time. The algorithm is based on iteratively subtracting the contribution of selected spectra and updating the contribution of each spectrum. The core algorithm is fast non-negative orthogonal matching pursuit, which has been proposed by the authors in the context of non-negative sparse representations. The iteration terminates when the maximum number of expected chemicals has been found or the residual spectrum has negligible energy, i.e. on the order of the noise level. A backtracking step removes the least-contributing spectrum from the list of detected chemicals and reports it as an alternative component. This feature is particularly useful in the detection of chemicals with small contributions, which are normally not detected. The proposed algorithm is easily reconfigurable to include new library entries and optional preferential threat searches in the presence of predetermined threat indicators. Under Ministry of Defence funding, we have demonstrated the algorithm for fingerprinting and rough quantification of the concentration of chemical mixtures using a set of reference spectral mixtures. In our experiments, the algorithm successfully detected chemicals with concentrations below 10 percent. The running time of the algorithm is on the order of one second using a single core of a desktop computer.
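    The greedy select-and-refit loop described above can be sketched as follows. This is a simplified illustration in the spirit of non-negative OMP, not the authors' algorithm: the non-negativity here is enforced by crudely pruning negative coefficients after an ordinary least-squares refit, and no backtracking step is included.

```python
import numpy as np

def nn_omp(y, D, k_max, tol=1e-6):
    """Greedy non-negative sparse unmixing of measurement y against a
    library D (columns = reference spectra). Repeatedly pick the library
    spectrum with the largest positive correlation with the residual,
    refit all selected concentrations, and prune negatives."""
    y = np.asarray(y, dtype=float)
    support, coef = [], np.zeros(0)
    residual = y.copy()
    for _ in range(k_max):
        corr = D.T @ residual
        corr[support] = -np.inf            # do not reselect chosen atoms
        j = int(np.argmax(corr))
        if corr[j] <= 0:
            break
        support.append(j)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        keep = coef > 0                    # crude non-negativity repair
        support = [s for s, ok in zip(support, keep) if ok]
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
        if np.linalg.norm(residual) <= tol * np.linalg.norm(y):
            break
    c = np.zeros(D.shape[1])
    c[support] = coef
    return c
```

    With a library of reasonably distinct spectra, a two-component mixture is recovered in two iterations, and the recovered coefficients serve directly as rough concentration estimates.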

  15. Toward quantifying the composition of soft tissues by spectral CT with Medipix3.

    PubMed

    Ronaldson, J Paul; Zainon, Rafidah; Scott, Nicola Jean Agnes; Gieseg, Steven Paul; Butler, Anthony P; Butler, Philip H; Anderson, Nigel G

    2012-11-01

    To determine the potential of spectral computed tomography (CT) with Medipix3 for quantifying fat, calcium, and iron in soft tissues within small animal models and surgical specimens of diseases such as fatty liver (metabolic syndrome) and unstable atherosclerosis. The spectroscopic method was applied to tomographic data acquired using a micro-CT system incorporating a Medipix3 detector array with silicon sensor layer and microfocus x-ray tube operating at 50 kVp. A 10 mm diameter perspex phantom containing a fat surrogate (sunflower oil) and aqueous solutions of ferric nitrate, calcium chloride, and iodine was imaged with multiple energy bins. The authors used the spectroscopic characteristics of the CT number to establish a basis for the decomposition of soft tissue components. The potential of the method of constrained least squares for quantifying different sets of materials was evaluated in terms of information entropy and degrees of freedom, with and without the use of a volume conservation constraint. The measurement performance was evaluated quantitatively using atheroma and mouse equivalent phantoms. Finally the decomposition method was assessed qualitatively using a euthanized mouse and an excised human atherosclerotic plaque. Spectral CT measurements of a phantom containing tissue surrogates confirmed the ability to distinguish these materials by the spectroscopic characteristics of their CT number. The assessment of performance potential in terms of information entropy and degrees of freedom indicated that certain sets of up to three materials could be decomposed by the method of constrained least squares. However, there was insufficient information within the data set to distinguish calcium from iron within soft tissues. The quantification of calcium concentration and fat mass fraction within atheroma and mouse equivalent phantoms by spectral CT correlated well with the nominal values (R(2) = 0.990 and R(2) = 0.985, respectively). 
In the euthanized mouse and excised human atherosclerotic plaque, regions of calcium and fat were appropriately decomposed according to their spectroscopic characteristics. Spectral CT, using the Medipix3 detector and silicon sensor layer, can quantify certain sets of up to three materials using the proposed method of constrained least squares. The system has some ability to independently distinguish calcium, fat, and water, and these have been quantified within phantom equivalents of fatty liver and atheroma. In this configuration, spectral CT cannot distinguish iron from calcium within soft tissues.
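    The constrained least-squares idea with a volume-conservation equality can be solved directly via the KKT system. The sketch below is generic; the attenuation matrix in the test is synthetic, not Medipix3 calibration data.

```python
import numpy as np

def constrained_lsq(A, b, C, d):
    """Equality-constrained least squares: minimize ||A x - b||^2 subject
    to C x = d, by solving the KKT system
        [A^T A  C^T] [x     ]   [A^T b]
        [C      0  ] [lambda] = [d    ]."""
    n = A.shape[1]
    m = C.shape[0]
    K = np.block([[A.T @ A, C.T],
                  [C, np.zeros((m, m))]])
    rhs = np.concatenate([A.T @ b, d])
    sol = np.linalg.solve(K, rhs)
    return sol[:n]                      # discard the Lagrange multipliers
```

    With `C = [1, 1, 1]` and `d = [1]`, the constraint expresses volume conservation over, say, water/fat/calcium fractions, as discussed in the abstract; the KKT matrix is invertible whenever A has full column rank and C full row rank.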

  16. Image enhancement by spectral-error correction for dual-energy computed tomography.

    PubMed

    Park, Kyung-Kook; Oh, Chang-Hyun; Akay, Metin

    2011-01-01

    Dual-energy CT (DECT) was reintroduced recently to exploit the additional spectral information of X-ray attenuation, aiming at accurate density measurement and material differentiation. However, the spectral information lies in the difference between the low- and high-energy images or measurements, so it is difficult to acquire accurate spectral information because high pixel noise is amplified in the resulting difference image. In this work, an image enhancement technique for DECT is proposed, based on the fact that the attenuation of a higher-density material decreases more rapidly as X-ray energy increases. We define a spectral error as the case in which a pixel pair in the low- and high-energy images deviates far from the expected attenuation trend. After analyzing the spectral-error sources of DECT images, we propose a DECT image enhancement method consisting of three steps: water-reference offset correction, spectral-error correction, and anti-correlated noise reduction. The main idea of this work is to make spectral errors distribute like random noise around the true attenuation so that they can be suppressed by the well-known anti-correlated noise reduction. The proposed method suppressed noise in liver lesions and improved contrast between liver lesions and liver parenchyma in DECT contrast-enhanced abdominal images and their two-material decomposition.

  17. Spectral decompositions of multiple time series: a Bayesian non-parametric approach.

    PubMed

    Macaro, Christian; Prado, Raquel

    2014-01-01

    We consider spectral decompositions of multiple time series that arise in studies where the interest lies in assessing the influence of two or more factors. We write the spectral density of each time series as a sum of the spectral densities associated with the different levels of the factors. We then use Whittle's approximation to the likelihood function and follow a Bayesian non-parametric approach to obtain posterior inference on the spectral densities based on Bernstein-Dirichlet prior distributions. The prior is strategically important as it carries identifiability conditions for the models and allows us to quantify our degree of confidence in such conditions. A Markov chain Monte Carlo (MCMC) algorithm for posterior inference within this class of frequency-domain models is presented. We illustrate the approach by analyzing simulated and real data via spectral one-way and two-way models. In particular, we present an analysis of functional magnetic resonance imaging (fMRI) brain responses measured in individuals who participated in a designed experiment to study pain perception in humans.
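    Whittle's approximation replaces the exact Gaussian likelihood with a sum over Fourier frequencies of the periodogram against a candidate spectral density. A minimal sketch (additive constants dropped, one-sided positive frequencies only) is:

```python
import numpy as np

def whittle_loglik(x, spec_fn):
    """Whittle's approximate log-likelihood for a zero-mean series x,
    given a candidate spectral density function spec_fn(freq):
        -sum_k [ log f(w_k) + I(w_k) / f(w_k) ]."""
    n = len(x)
    freqs = np.fft.rfftfreq(n)[1:]            # positive Fourier frequencies
    I = np.abs(np.fft.rfft(x)[1:]) ** 2 / n   # periodogram ordinates
    f = spec_fn(freqs)
    return -np.sum(np.log(f) + I / f)
```

    Because the periodogram ordinates are approximately independent exponentials with mean f(w_k), the Whittle likelihood is maximized near the true spectral density; the paper places a Bernstein-Dirichlet prior on f and samples it by MCMC rather than maximizing.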

  18. Membrane covered duct lining for high-frequency noise attenuation: prediction using a Chebyshev collocation method.

    PubMed

    Huang, Lixi

    2008-11-01

    A spectral method of Chebyshev collocation with domain decomposition is introduced for linear interaction between sound and structure in a duct lined with flexible walls backed by cavities with or without a porous material. The spectral convergence is validated by a one-dimensional problem with a closed-form analytical solution, and is then extended to the two-dimensional configuration and compared favorably against a previous method based on the Fourier-Galerkin procedure and a finite element modeling. The nonlocal, exact Dirichlet-to-Neumann boundary condition is embedded in the domain decomposition scheme without imposing extra computational burden. The scheme is applied to the problem of high-frequency sound absorption by duct lining, which is normally ineffective when the wavelength is comparable with or shorter than the duct height. When a tensioned membrane covers the lining, however, it scatters the incident plane wave into higher-order modes, which then penetrate the duct lining more easily and get dissipated. For the frequency range of f=0.3-3 studied here, f=0.5 being the first cut-on frequency of the central duct, the membrane cover is found to offer an additional 0.9 dB attenuation per unit axial distance equal to half of the duct height.
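    The collocation machinery referred to above rests on the Chebyshev differentiation matrix. The standard single-domain construction (Trefethen's classic `cheb`; the domain-decomposition and Dirichlet-to-Neumann coupling of the paper are not shown) is:

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation points x and differentiation matrix D on
    [-1, 1], so that (D @ f(x)) approximates f'(x) spectrally."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)          # Chebyshev points
    c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))   # off-diagonal entries
    D -= np.diag(D.sum(axis=1))                       # negative row sums on diagonal
    return D, x
```

    Differentiation is exact for polynomials up to degree N, which is the source of the spectral convergence validated in the paper's one-dimensional test; in a multi-domain scheme one such matrix acts on each subdomain with interface conditions gluing them together.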

  19. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing models, adequate yet as simple as possible, both of the corresponding sub-systems and of the system as a whole. In recent works, two new methods of decomposition of the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectrum Analysis) [4] for linear expansion of vector (space-distributed) time series and makes allowance for delayed correlations of the processes recorded at spatially separated points. The second one [5-7] allows one to construct nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales but prevents correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions that are the "structural material" for the linear spatio-temporal modes is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls essentially more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and grows with the mode's time scale. In this report we combine these two methods in such a way that the developed algorithm allows constructing nonlinear spatio-temporal modes. The algorithm is applied to decompose (i) several hundred years of globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9].
    We compare the efficiency of the different methods of decomposition and discuss the ability of nonlinear spatio-temporal modes to provide adequate and concurrently simplest ("optimal") models of climate systems.

    1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574.
    2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877.
    3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1).
    4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41.
    5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M. Loskutov and Alexander M. Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752.
    6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627.
    7. Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729.
    8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm
    9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
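    The linear building block that MSSA extends to the multichannel case can be sketched for a single series: embed the series in a lag-trajectory matrix, take its SVD, and reconstruct each mode by diagonal (Hankel) averaging. This is generic SSA, not the authors' code.

```python
import numpy as np

def ssa_modes(x, L, n_modes):
    """Basic single-channel singular spectrum analysis: decompose the
    series x into n_modes reconstructed components using a trajectory
    matrix of window length L."""
    N = len(x)
    K = N - L + 1
    T = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    modes = []
    for j in range(n_modes):
        Tj = s[j] * np.outer(U[:, j], Vt[j])              # rank-1 component
        rec = np.zeros(N)                                 # diagonal averaging
        cnt = np.zeros(N)
        for i in range(K):
            rec[i:i + L] += Tj[:, i]
            cnt[i:i + L] += 1
        modes.append(rec / cnt)
    return modes
```

    Because diagonal averaging is linear and the SVD is exact, using all min(L, K) components reconstructs the original series; keeping only the leading ones gives the time-scale separation the abstract describes.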

  20. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  1. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    None, None

    2016-11-21

Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
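As a toy illustration of the correction-vector idea (not the authors' DMRG implementation), the sketch below evaluates a spectral function -(1/π) Im ⟨b|(ω + E0 - H + iη)⁻¹|b⟩ for a small dense symmetric matrix by Lanczos tridiagonalization of the Krylov space seeded with b = A|ψ0⟩, and compares it with the exact resolvent. The matrix, the operator A, and the broadening η are synthetic assumptions; m = n is used here only so the toy comparison is exact, whereas in practice m ≪ n suffices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
H = rng.standard_normal((n, n)); H = (H + H.T) / 2      # toy dense "Hamiltonian"
evals, evecs = np.linalg.eigh(H)
E0, psi0 = evals[0], evecs[:, 0]                        # ground state and energy
A = np.diag(rng.standard_normal(n))                     # toy local operator
b = A @ psi0                                            # Krylov seed A|psi0>

def lanczos(H, b, m):
    """m-step Lanczos tridiagonalization of H started from b."""
    Q = np.zeros((len(b), m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
    Q[:, 0] = b / np.linalg.norm(b)
    for j in range(m):
        w = H @ Q[:, j]
        alpha[j] = Q[:, j] @ w
        w = w - Q[:, :j + 1] @ (Q[:, :j + 1].T @ w)     # full reorthogonalization
        if j < m - 1:
            beta[j] = np.linalg.norm(w)
            Q[:, j + 1] = w / beta[j]
    return alpha, beta

m, eta, omega = n, 0.1, 1.0
alpha, beta = lanczos(H, b, m)
T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)

z = omega + E0 + 1j * eta
e1 = np.zeros(m); e1[0] = np.linalg.norm(b)             # b written in the Krylov basis
# resolvent matrix element evaluated in the Krylov space vs. exactly
approx = -(e1 @ np.linalg.solve(z * np.eye(m) - T, e1)).imag / np.pi
exact = -(b @ np.linalg.solve(z * np.eye(n) - H, b)).imag / np.pi
```

The Krylov evaluation only needs matrix-vector products with H, which is what makes the approach attractive inside DMRG.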

  2. [Spectral characteristics of decomposition of incorporated straw in compound polluted arid loess].

    PubMed

    Fan, Chun-Hui; Zhang, Ying-Chao; Xu, Ji-Ting; Wang, Jia-Hong

    2014-04-01

The original loess from western China was used as the soil sample. Scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM-EDS), elemental analysis, Fourier transform infrared spectroscopy (FT-IR) and 13C nuclear magnetic resonance (13C NMR) were used to investigate the characteristics of the decomposed straw and of the humic acids formed in compound-polluted arid loess. The SEM micrographs show the surface varying from dense to decomposed, and finally to a damaged structure, and the EDS data reveal element transfer. The newly formed humic acids are of low aromaticity, which helps increase the activity of organic matter in loess. The FT-IR spectra throughout the process are similar, indicating the complexity of the transformation dynamics of the humic acids. The 13C NMR spectra show that the molecular structure of the humic acids becomes simpler. These spectral methods are useful for identifying humic acids in the loess region during straw incorporation.

  3. The spectral properties of uranium hexafluoride and its thermal decomposition products

    NASA Technical Reports Server (NTRS)

    Krascella, N. L.

    1976-01-01

    This investigation was initiated to provide basic spectral data for gases of interest to the plasma core reactor concept. The attenuation of vacuum ultraviolet (VUV) radiation by helium at pressures up to 20 atm over path lengths of about 61 cm and in the approximate wavelength range between 80 and 300 nm was studied. Measurements were also conducted to provide basic VUV data with respect to UF6 and UF6/argon mixtures in the wavelength range between 80 and 120 nm. Finally, an investigation was initiated to provide basic spectral emission and absorption data for UF6 and possible thermal decomposition products of UF6 at elevated temperatures.

  4. A time domain frequency-selective multivariate Granger causality approach.

    PubMed

    Leistritz, Lutz; Witte, Herbert

    2016-08-01

    The investigation of effective connectivity is one of the major topics in computational neuroscience to understand the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches with a clear separation into time and frequency range methods. We present in this simulation study a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective considerations of directed interactions. It is based on a comparison of prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
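The prediction-error comparison at the heart of Granger's principle can be sketched in a few lines of numpy. This is a minimal full-band version, not the paper's method: the frequency-selective variant would first cancel specific spectral components via a signal decomposition before refitting. The toy bivariate system and the lag order are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4000
x = np.zeros(n); y = np.zeros(n)
for t in range(2, n):                     # x drives y with a one-sample lag
    x[t] = 0.6 * x[t-1] - 0.3 * x[t-2] + rng.standard_normal()
    y[t] = 0.5 * y[t-1] + 0.4 * x[t-1] + rng.standard_normal()

def ar_residual_var(target, regressors, p=2):
    """Residual variance of a least-squares AR fit of target on p lags
    of each series in regressors."""
    rows = [np.concatenate([r[t-p:t][::-1] for r in regressors])
            for t in range(p, len(target))]
    X = np.array(rows); z = target[p:]
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    return np.var(z - X @ beta)

# Granger Causality Index: log ratio of restricted to full prediction error
gci_x_to_y = np.log(ar_residual_var(y, [y]) / ar_residual_var(y, [y, x]))
gci_y_to_x = np.log(ar_residual_var(x, [x]) / ar_residual_var(x, [x, y]))
```

With this construction gci_x_to_y is clearly positive while gci_y_to_x stays near zero, recovering the simulated direction of influence.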

  5. TU-CD-207-01: Characterization of Breast Tissue Composition Using Spectral Mammography

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ding, H; Cho, H; Kumar, N

Purpose: To investigate the feasibility of characterizing the chemical composition of breast tissue, in terms of water and lipid, by using spectral mammography in simulation and postmortem studies. Methods: Analytical simulations were performed to obtain low- and high-energy signals of breast tissue based on previously reported water, lipid, and protein contents. Dual-energy decomposition was used to characterize the simulated breast tissue into water and lipid basis materials, and the measured water density was compared to the known value. In experimental studies, postmortem breasts were imaged with a spectral mammography system based on a scanning multi-slit Si strip photon-counting detector. Low- and high-energy images were acquired simultaneously from a single exposure by sorting the recorded photons into the corresponding energy bins. Dual-energy material decomposition of the low- and high-energy images yielded individual pixel measurements of breast tissue composition in terms of water and lipid thicknesses. After imaging, each postmortem breast was chemically decomposed into water, lipid and protein. The water density calculated from chemical analysis was used as the reference gold standard. Correlation of the water density measurements between spectral mammography and chemical analysis was analyzed using linear regression. Results: Both simulation and postmortem studies showed good linear correlation between the decomposed water thickness using spectral mammography and chemical analysis. The slopes of the linear fitting function in the simulation and postmortem studies were 1.15 and 1.21, respectively. Conclusion: The results indicate that breast tissue composition, in terms of water and lipid, can be accurately measured using spectral mammography. Quantitative breast tissue composition can potentially be used to stratify patients according to their breast cancer risk.
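Per ray, the dual-energy decomposition step reduces to inverting a 2×2 linear system linking the two energy-bin line integrals to water and lipid thicknesses. A minimal numpy sketch with illustrative, not calibrated, attenuation coefficients:

```python
import numpy as np

# assumed linear attenuation coefficients (1/cm) for the two basis materials
# at the two effective energies; illustrative values only
mu = np.array([[0.25, 0.20],    # low-energy bin:  [water, lipid]
               [0.18, 0.15]])   # high-energy bin: [water, lipid]

t_true = np.array([3.0, 2.0])   # cm of water and lipid along one ray
signals = mu @ t_true           # line integrals ln(I0/I) for the two bins

t_est = np.linalg.solve(mu, signals)      # dual-energy decomposition
water_fraction = t_est[0] / t_est.sum()   # tissue composition measure
```

In practice the system is solved per pixel and the coefficients come from a calibration, but the algebraic core is this inversion.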

  6. Use of the Morlet mother wavelet in the frequency-scale domain decomposition technique for the modal identification of ambient vibration responses

    NASA Astrophysics Data System (ADS)

    Le, Thien-Phu

    2017-10-01

    The frequency-scale domain decomposition technique has recently been proposed for operational modal analysis. The technique is based on the Cauchy mother wavelet. In this paper, the approach is extended to the Morlet mother wavelet, which is very popular in signal processing due to its superior time-frequency localization. Based on the regressive form and an appropriate norm of the Morlet mother wavelet, the continuous wavelet transform of the power spectral density of ambient responses enables modes in the frequency-scale domain to be highlighted. Analytical developments first demonstrate the link between modal parameters and the local maxima of the continuous wavelet transform modulus. The link formula is then used as the foundation of the proposed modal identification method. Its practical procedure, combined with the singular value decomposition algorithm, is presented step by step. The proposition is finally verified using numerical examples and a laboratory test.
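A minimal numpy sketch of the Morlet-based idea (not the paper's full procedure, which works on the power spectral density and uses singular value decomposition): transform a toy two-mode free-decay response with Morlet wavelets over a grid of frequencies and read the dominant modal frequency off the ridge of the transform modulus. The value ω0 = 6, the frequency grid, and the toy signal are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 100.0
t = np.arange(0, 10, 1 / fs)
# toy "ambient" free-decay response: two damped modes plus noise
sig = (np.exp(-0.3 * t) * np.sin(2 * np.pi * 5 * t)
       + 0.5 * np.exp(-0.2 * t) * np.sin(2 * np.pi * 12 * t)
       + 0.05 * rng.standard_normal(t.size))

def morlet_ridge(x, fs, freqs, omega0=6.0):
    """Time-averaged |CWT| of x with a Morlet wavelet, one value per frequency."""
    n = len(x)
    tt = (np.arange(n) - n // 2) / fs
    out = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        s = omega0 / (2 * np.pi * f)     # scale matched to frequency f
        psi = (np.exp(1j * 2 * np.pi * f * tt)
               * np.exp(-(tt / s) ** 2 / 2) / np.sqrt(s))
        out[i] = np.mean(np.abs(np.convolve(x, np.conj(psi[::-1]), mode="same")))
    return out

freqs = np.arange(2.0, 20.0, 0.25)
ridge = morlet_ridge(sig, fs, freqs)
f1 = freqs[int(np.argmax(ridge))]        # dominant identified modal frequency
```

Local maxima of the modulus over frequency play the role of the paper's link formula between modal parameters and the transform.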

  7. Application of spectral decomposition algorithm for mapping water quality in a turbid lake (Lake Kasumigaura, Japan) from Landsat TM data

    NASA Astrophysics Data System (ADS)

    Oyama, Youichi; Matsushita, Bunkei; Fukushima, Takehiko; Matsushige, Kazuo; Imai, Akio

The remote sensing of Case 2 water has been far less successful than that of Case 1 water, due mainly to the complex interactions among optically active substances (e.g., phytoplankton, suspended sediments, colored dissolved organic matter, and water) in the former. To address this problem, we developed a spectral decomposition algorithm (SDA) based on a spectral linear mixture modeling approach. Through a tank experiment, we found that the SDA-based models were superior to conventional empirical models (e.g., using a single band, a band ratio, or arithmetic combinations of bands) for accurate estimates of water quality parameters. In this paper, we develop a method for applying the SDA to Landsat-5 TM data on Lake Kasumigaura, a eutrophic lake in Japan characterized by high concentrations of suspended sediment, to map chlorophyll-a (Chl-a) and non-phytoplankton suspended sediment (NPSS) distributions. The results show that the SDA-based estimation model can be obtained from a tank experiment. Moreover, by combining this estimation model with satellite-SRSs (standard reflectance spectra, i.e., spectral end-members) derived from bio-optical modeling, we can apply the model directly to a satellite image. The same SDA-based estimation model for Chl-a concentration was applied to two Landsat-5 TM images, one acquired in April 1994 and the other in February 2006. The average Chl-a estimation error between the two was 9.9%, a result that indicates the potential robustness of the SDA-based estimation model. The average estimation error of NPSS concentration from the 2006 Landsat-5 TM image was 15.9%. The key point for successfully applying the SDA-based estimation model to satellite data is the method used to obtain a suitable satellite-SRS for each end-member.
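The core of a spectral linear mixture model is that each pixel spectrum is a fraction-weighted sum of end-member spectra (SRSs), so abundances can be recovered by non-negative least squares. A sketch with made-up end-member spectra, not the paper's tank-derived SRSs:

```python
import numpy as np
from scipy.optimize import nnls

# hypothetical standard reflectance spectra (end-members) over 6 bands
srs = np.array([
    [0.02, 0.03, 0.03, 0.02, 0.01, 0.01],   # clear water
    [0.03, 0.05, 0.09, 0.06, 0.04, 0.02],   # phytoplankton-dominated water
    [0.08, 0.10, 0.12, 0.13, 0.12, 0.10],   # sediment-dominated water
]).T                                          # shape (bands, end-members)

f_true = np.array([0.5, 0.2, 0.3])            # abundance fractions
pixel = srs @ f_true                          # observed mixed spectrum

f_est, resid = nnls(srs, pixel)               # non-negative least-squares unmixing
```

Water-quality parameters are then regressed against the recovered fractions rather than against raw band values.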

  8. An integrated condition-monitoring method for a milling process using reduced decomposition features

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin

    2017-08-01

    Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification.
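The Shannon power spectral entropy feature can be sketched directly: a narrow-band (stable-cutting-like) signal concentrates its spectrum and has low entropy, while a broadband (chatter-like) signal has high entropy. These are toy signals standing in for the decomposed vibration modes, not machining data:

```python
import numpy as np

def spectral_entropy(x):
    """Shannon entropy of the normalized power spectrum of x."""
    psd = np.abs(np.fft.rfft(x)) ** 2
    p = psd / psd.sum()
    p = p[p > 0]                          # drop empty bins before the log
    return -np.sum(p * np.log(p))

rng = np.random.default_rng(3)
fs = 1000.0
t = np.arange(0, 1, 1 / fs)
stable = np.sin(2 * np.pi * 50 * t)                                  # narrow-band
chatter = np.sin(2 * np.pi * 50 * t) + rng.standard_normal(t.size)   # broadband

h_stable = spectral_entropy(stable)
h_chatter = spectral_entropy(chatter)
```

In the paper this entropy is computed per variational mode, then PCA-reduced before classification.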

  9. Scalable Parallel Computation for Extended MHD Modeling of Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Glasser, Alan H.

    2008-11-01

Parallel solution of a linear system is scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Two approaches have this property for parabolic systems: multigrid and domain decomposition. Since extended MHD is primarily a hyperbolic rather than a parabolic system, additional steps must be taken to parabolize the linear system to be solved by such a method. Such physics-based preconditioning (PBP) methods have been pioneered by Chacón, using finite volumes for spatial discretization, multigrid for solution of the preconditioning equations, and matrix-free Newton-Krylov methods for the accurate solution of the full nonlinear preconditioned equations. The work described here is an extension of these methods using high-order spectral element methods and FETI-DP domain decomposition. Application of PBP to a flux-source representation of the physics equations is discussed. The resulting scalability will be demonstrated for simple waves and for ideal and Hall MHD waves.

  10. Quadratic Blind Linear Unmixing: A Graphical User Interface for Tissue Characterization

    PubMed Central

    Gutierrez-Navarro, O.; Campos-Delgado, D.U.; Arce-Santana, E. R.; Jo, Javier A.

    2016-01-01

    Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has been focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation in the maximum number of components or end-members. Hence this work presents an interactive software which implements our blind end-member and abundance extraction (BEAE) and quadratic blind linear unmixing (QBLU) algorithms in Matlab. The options and capabilities of our proposed software are described in detail. When the number of components is known, our software can estimate the constitutive end-members and their abundances. When no prior knowledge is available, the software can provide a completely blind solution to estimate the number of components, the end-members and their abundances. The characterization of three case studies validates the performance of the new software: ex-vivo human coronary arteries, human breast cancer cell samples, and in-vivo hamster oral mucosa. The software is freely available in a hosted webpage by one of the developing institutions, and allows the user a quick, easy-to-use and efficient tool for multi/hyper-spectral data decomposition. PMID:26589467

  11. Quadratic blind linear unmixing: A graphical user interface for tissue characterization.

    PubMed

    Gutierrez-Navarro, O; Campos-Delgado, D U; Arce-Santana, E R; Jo, Javier A

    2016-02-01

    Spectral unmixing is the process of breaking down data from a sample into its basic components and their abundances. Previous work has been focused on blind unmixing of multi-spectral fluorescence lifetime imaging microscopy (m-FLIM) datasets under a linear mixture model and quadratic approximations. This method provides a fast linear decomposition and can work without a limitation in the maximum number of components or end-members. Hence this work presents an interactive software which implements our blind end-member and abundance extraction (BEAE) and quadratic blind linear unmixing (QBLU) algorithms in Matlab. The options and capabilities of our proposed software are described in detail. When the number of components is known, our software can estimate the constitutive end-members and their abundances. When no prior knowledge is available, the software can provide a completely blind solution to estimate the number of components, the end-members and their abundances. The characterization of three case studies validates the performance of the new software: ex-vivo human coronary arteries, human breast cancer cell samples, and in-vivo hamster oral mucosa. The software is freely available in a hosted webpage by one of the developing institutions, and allows the user a quick, easy-to-use and efficient tool for multi/hyper-spectral data decomposition. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Biologically-inspired data decorrelation for hyper-spectral imaging

    NASA Astrophysics Data System (ADS)

    Picon, Artzai; Ghita, Ovidiu; Rodriguez-Vaamonde, Sergio; Iriondo, Pedro Ma; Whelan, Paul F.

    2011-12-01

    Hyper-spectral data allows the construction of more robust statistical models to sample the material properties than the standard tri-chromatic color representation. However, because of the large dimensionality and complexity of the hyper-spectral data, the extraction of robust features (image descriptors) is not a trivial issue. Thus, to facilitate efficient feature extraction, decorrelation techniques are commonly applied to reduce the dimensionality of the hyper-spectral data with the aim of generating compact and highly discriminative image descriptors. Current methodologies for data decorrelation such as principal component analysis (PCA), linear discriminant analysis (LDA), wavelet decomposition (WD), or band selection methods require complex and subjective training procedures and in addition the compressed spectral information is not directly related to the physical (spectral) characteristics associated with the analyzed materials. The major objective of this article is to introduce and evaluate a new data decorrelation methodology using an approach that closely emulates the human vision. The proposed data decorrelation scheme has been employed to optimally minimize the amount of redundant information contained in the highly correlated hyper-spectral bands and has been comprehensively evaluated in the context of non-ferrous material classification
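For reference, the classical PCA decorrelation that the article compares against can be written compactly: eigen-decompose the band covariance and keep the leading components as compact descriptors. Synthetic correlated bands stand in for a real hyper-spectral cube:

```python
import numpy as np

rng = np.random.default_rng(4)
# synthetic hyper-spectral cube flattened to (pixels, bands); neighbouring
# bands are highly correlated, as in real spectra
n_pix, n_bands = 500, 32
base = rng.standard_normal((n_pix, 3))          # 3 latent "materials"
mixing = rng.standard_normal((3, n_bands))
cube = base @ mixing + 0.05 * rng.standard_normal((n_pix, n_bands))

# PCA: eigen-decomposition of the band covariance matrix
X = cube - cube.mean(axis=0)
cov = (X.T @ X) / (n_pix - 1)
evals, evecs = np.linalg.eigh(cov)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

k = 3
descriptors = X @ evecs[:, :k]                  # compact decorrelated features
explained = evals[:k].sum() / evals.sum()       # retained variance fraction
```

The projected features are mutually uncorrelated by construction, which is exactly the property (and, the authors argue, the limitation) of such purely statistical decorrelation: the axes need not align with physically meaningful spectra.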

  13. Investigation of KDP crystal surface based on an improved bidimensional empirical mode decomposition method

    NASA Astrophysics Data System (ADS)

    Lu, Lei; Yan, Jihong; Chen, Wanqun; An, Shi

    2018-03-01

This paper proposes a novel spatial-frequency analysis method for investigating potassium dihydrogen phosphate (KDP) crystal surfaces, based on an improved bidimensional empirical mode decomposition (BEMD) method. To eliminate the end effects of BEMD and improve the intrinsic mode functions (IMFs) for efficient identification of texture features, a denoising process is embedded in the sifting iteration of the BEMD method. By removing redundant information from the decomposed sub-components of the KDP crystal surface, the middle spatial frequencies of the cutting and feeding processes are identified. A comparative study with the power spectral density method, the two-dimensional wavelet transform (2D-WT), and the traditional BEMD method demonstrates that the method developed in this paper can efficiently extract texture features and reveal the gradient development of the KDP crystal surface. Furthermore, the proposed method is a self-adaptive, data-driven technique requiring no prior knowledge, which overcomes shortcomings of the 2D-WT model such as parameter selection. It is thus a promising tool for online monitoring and optimal control of precision machining processes.

  14. Integrating seasonal optical and thermal infrared spectra to characterize urban impervious surfaces with extreme spectral complexity: a Shanghai case study

    NASA Astrophysics Data System (ADS)

    Wang, Wei; Yao, Xinfeng; Ji, Minhe

    2016-01-01

    Despite recent rapid advancement in remote sensing technology, accurate mapping of the urban landscape in China still faces a great challenge due to unusually high spectral complexity in many big cities. Much of this complication comes from severe spectral confusion of impervious surfaces with polluted water bodies and bright bare soils. This paper proposes a two-step land cover decomposition method, which combines optical and thermal spectra from different seasons to cope with the issue of urban spectral complexity. First, a linear spectral mixture analysis was employed to generate fraction images for three preliminary endmembers (high albedo, low albedo, and vegetation). Seasonal change analysis on land surface temperature induced from thermal infrared spectra and coarse component fractions obtained from the first step was then used to reduce the confusion between impervious surfaces and nonimpervious materials. This method was tested with two-date Landsat multispectral data in Shanghai, one of China's megacities. The results showed that the method was capable of consistently estimating impervious surfaces in highly complex urban environments with an accuracy of R2 greater than 0.70 and both root mean square error and mean average error less than 0.20 for all test sites. This strategy seemed very promising for landscape mapping of complex urban areas.

  15. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    NASA Astrophysics Data System (ADS)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech and its application to low-rate speech coding in storage and broadcasting is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance on the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology of incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  16. Multispectral Wavefronts Retrieval in Digital Holographic Three-Dimensional Imaging Spectrometry

    NASA Astrophysics Data System (ADS)

    Yoshimori, Kyu

    2010-04-01

This paper deals with a recently developed passive interferometric technique for retrieving a set of spectral components of wavefronts propagating from a spatially incoherent, polychromatic object. The technique is based on measurement of a 5-D spatial coherence function using a suitably designed interferometer. By applying signal processing, including aperture synthesis and spectral decomposition, one may obtain a set of wavefronts in different spectral bands. Since each wavefront is equivalent to the complex Fresnel hologram of the polychromatic object at a particular spectral component, application of the conventional Fresnel transform yields a 3-D image for each spectral component. Thus, this technique of multispectral wavefront retrieval provides a new type of 3-D imaging spectrometry based on fully passive interferometry. Experimental results are also shown to demonstrate the validity of the method.

  17. Time-Frequency Analysis And Pattern Recognition Using Singular Value Decomposition Of The Wigner-Ville Distribution

    NASA Astrophysics Data System (ADS)

    Boashash, Boualem; Lovell, Brian; White, Langford

    1988-01-01

Time-Frequency analysis based on the Wigner-Ville Distribution (WVD) is shown to be optimal for a class of signals where the variation of instantaneous frequency is the dominant characteristic. Spectral resolution and instantaneous frequency tracking are substantially improved by using a Modified WVD (MWVD) based on an autoregressive spectral estimator. An enhanced signal-to-noise ratio may be achieved by using 2D windowing in the Time-Frequency domain. The WVD provides a tool for deriving descriptors of signals which highlight their FM characteristics. These descriptors may be used for pattern recognition and data clustering using the methods presented in this paper.
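A discrete Wigner-Ville distribution and its instantaneous-frequency tracking can be sketched as follows. Note the factor of two on the frequency axis, because the WVD lag product of a signal oscillates at twice the instantaneous frequency; the analytic linear chirp is a stand-in test signal, not data from the paper:

```python
import numpy as np

def wvd(x):
    """Discrete Wigner-Ville distribution of an analytic signal x."""
    n = len(x)
    W = np.zeros((n, n))
    for t in range(n):
        taumax = min(t, n - 1 - t)
        tau = np.arange(-taumax, taumax + 1)
        kernel = np.zeros(n, dtype=complex)
        kernel[tau % n] = x[t + tau] * np.conj(x[t - tau])  # lag product
        W[t] = np.fft.fft(kernel).real
    return W

fs = 256.0
n = 256
t = np.arange(n) / fs
phase = 2 * np.pi * (20 * t + 30 * t ** 2)     # linear FM chirp, 20 -> 80 Hz
x = np.exp(1j * phase)                         # analytic test signal

W = wvd(x)
mid = n // 2
k_peak = int(np.argmax(W[mid]))                # ridge bin at mid-signal
f_peak = k_peak * fs / (2 * n)                 # halve: lag product doubles frequency
f_expected = 20 + 60 * (mid / fs)              # instantaneous frequency at t = 0.5 s
```

For a linear FM signal the WVD concentrates exactly on the instantaneous-frequency line, which is why the abstract calls it optimal for this signal class.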

  18. Co-simulation coupling spectral/finite elements for 3D soil/structure interaction problems

    NASA Astrophysics Data System (ADS)

    Zuchowski, Loïc; Brun, Michael; De Martin, Florent

    2018-05-01

The coupling between an implicit finite element (FE) code and an explicit spectral element (SE) code has been explored for solving elastic wave propagation in soil/structure interaction problems. The coupling approach is based on domain decomposition methods in transient dynamics. The spatial coupling at the interface is managed by a standard mortar coupling approach, whereas the time integration is handled by a hybrid asynchronous time integrator. An external coupling software component, handling the interface problem, has been set up in order to couple the FE software Code_Aster with the SE software EFISPEC3D.

  19. Novel approaches to address spectral distortions in photon counting x-ray CT using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Touch, M.; Clark, D. P.; Barber, W.; Badea, C. T.

    2016-04-01

Spectral CT using a photon-counting x-ray detector (PCXD) can potentially increase accuracy of measuring tissue composition. However, PCXD spectral measurements suffer from distortion due to charge sharing, pulse pileup, and K-escape energy loss. This study proposes two novel artificial neural network (ANN)-based algorithms: one to model and compensate for the distortion, and another to correct for the distortion directly. The ANN-based distortion model was obtained by training it to learn the distortion from a set of projections from a calibration scan. The ANN distortion model was then applied in the forward statistical model to compensate for distortion in the projection decomposition. An ANN was also used to learn to correct distortions directly in projections. The resulting corrected projections were used for reconstructing the image, denoising via joint bilateral filtration, and decomposition into three-material basis functions: Compton scattering, the photoelectric effect, and iodine. The ANN-based distortion model proved to be more robust to noise and worked better than an imperfect parametric distortion model. In the presence of noise, the mean relative errors in iodine concentration estimation were 11.82% (ANN distortion model) and 16.72% (parametric model). With distortion correction, the mean relative error in iodine concentration estimation was improved by 50% over direct decomposition from distorted data. With our joint bilateral filtration, the resulting material image quality and iodine detectability, as defined by the contrast-to-noise ratio, were greatly enhanced, allowing iodine concentrations as low as 2 mg/ml to be detected. Future work will be dedicated to experimental evaluation of our ANN-based methods using 3D-printed phantoms.

  20. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
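The weakly singular history kernel can be integrated accurately by absorbing the singular factor into a Gauss-Jacobi quadrature weight, one ingredient of the hybrid quadrature approach described above. A sketch, simplified to a single interval, with an illustrative fractional order and test function:

```python
import numpy as np
from scipy.special import roots_jacobi

alpha = 0.5     # fractional order; the kernel (1-s)^(alpha-1) is singular at s = 1

def singular_integral(f, alpha, n=8):
    """Gauss-Jacobi quadrature for I = int_0^1 (1-s)^(alpha-1) f(s) ds.

    The singular factor is absorbed into the quadrature weight, so only the
    smooth part f is sampled; this mirrors the treatment of the history term
    in multi-domain spectral schemes."""
    x, w = roots_jacobi(n, alpha - 1.0, 0.0)   # weight (1-x)^(alpha-1) on [-1, 1]
    s = (x + 1.0) / 2.0                        # map nodes to [0, 1]
    return 2.0 ** (-alpha) * np.sum(w * f(s))  # Jacobian of the mapping

# f(s) = s has the closed form I = 1 / (alpha * (alpha + 1))
approx = singular_integral(lambda s: s, alpha)
exact = 1.0 / (alpha * (alpha + 1.0))
```

Since Gauss quadrature with n nodes is exact for polynomials up to degree 2n-1, the approximation here matches the closed form to machine precision.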

  1. Determination of seasonals using wavelets in terms of noise parameters changeability

    NASA Astrophysics Data System (ADS)

    Klos, Anna; Bogusz, Janusz; Figurski, Mariusz

    2015-04-01

Reliable velocities of GNSS-derived observations are of high importance nowadays. How we determine and subtract the seasonal signals may cause autocorrelation in the time series and affect the uncertainties of the linear parameters. The periodic changes in GNSS time series are commonly assumed to be the sum of annual and semi-annual changes with amplitudes and phases constant in time, and Least-Squares Estimation (LSE) is generally used to model these sine waves. However, not only the time-changeability of the seasonals, but also their higher harmonics should be considered. In this research, we focused on more than 230 globally distributed IGS stations processed at the Military University of Technology EPN Local Analysis Centre (MUT LAC) in Bernese 5.0 software. The network was divided into 7 sub-networks with a few overlapping stations and processed separately with the newest models. Here, we propose wavelet-based determination and removal of the trend and of seasonals across the whole frequency spectrum between the Chandler and quarter-annual periods from the North, East and Up components, and compare it with LSE-determined values. We used the Meyer symmetric, orthogonal wavelet and assumed nine levels of decomposition. Details 6 to 9 were analyzed as periodic components with frequencies between 0.3 and 2.5 cpy, and the characteristic oscillations of each frequency band were pointed out. Details below level 6, summed together with the detrended approximation, were treated as residua. The power spectral densities (PSDs) of the original and decomposed data were stacked for the North, East and Up components of each sub-network so as to show what power was removed at each decomposition level. Moreover, the noise that each frequency band follows (in terms of spectral indices of power-law dependencies) was estimated using a spectral method and compared across all processed sub-networks.
It seems that the lowest frequencies, up to 0.7 cpy, are characterized by lower spectral indices, whereas higher frequencies are close to white noise. Since the decomposition levels overlap each other, the choice of frequency window becomes the main point in spectral index estimation. Our results were compared with those obtained by Maximum Likelihood Estimation (MLE), and the differences as well as their impact on velocity uncertainties were pointed out. The spectral indices estimated in the time and frequency domains differ by at most 0.15. Moreover, we compared the power removed by the wavelet decomposition levels with that subtracted by LSE, assuming the same periodicities. In comparison to LSE, the wavelet-based approach leaves residua closer to white noise, with lower power-law amplitudes, which strictly reduces velocity uncertainties. The last approximation was analyzed as the long-term, non-linear trend and compared with the LSE-determined linear one. These two trends differ by up to 0.3 mm/yr in the most extreme case, which makes wavelet decomposition useful for velocity determination.
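The spectral-index estimation step can be sketched as follows: synthesize power-law noise, then fit the slope of the log-log periodogram inside a chosen frequency window. The window here is an assumed choice, and the Meyer wavelet decomposition itself is omitted from this sketch:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 4096
kappa = -1.0                              # target spectral index (flicker noise)

# synthesize power-law noise by shaping a white spectrum with f^(kappa/2)
f = np.fft.rfftfreq(n, d=1.0)
amp = np.zeros_like(f)
amp[1:] = f[1:] ** (kappa / 2.0)
spec = amp * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))
series = np.fft.irfft(spec, n)

# estimate the index from the slope of the log-log periodogram
psd = np.abs(np.fft.rfft(series)) ** 2
band = (f > 0.001) & (f < 0.4)            # assumed frequency window for the fit
kappa_hat, _ = np.polyfit(np.log(f[band]), np.log(psd[band]), 1)
```

As the abstract notes, the recovered index depends on the chosen frequency window, which is exactly the sensitivity discussed for the overlapping decomposition levels.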

  2. Information content in spectral dependencies of optical unit volume parameters under action of He-Ne laser on blood

    NASA Astrophysics Data System (ADS)

    Khairullina, Alphiya Y.; Oleinik, Tatiana V.

    1995-01-01

Our previous work on developing methods for studying blood, and on the action of low-intensity laser radiation on blood and erythrocyte suspensions, had shown that light-scattering methods give a large body of information about the medium studied, owing to the methodological relationship between the irradiation processes and the investigation techniques. Detailed analysis of the spectral diffuse reflectivities and transmissivities of optically thick blood layers, and of the spectral absorptivities calculated on this basis over 600-900 nm using different approximations, for a pathological state owing to hypoxia, testifies to the optical significance not only of hemoglobin derivatives but also of products of hemoglobin decomposition. Laser action on blood is specific and related to the initial absorption state of the blood, owing to the different composition of chromoproteids. This work gives the interpretation of the spectral observations. Analysis of the spectral dependencies of the extinction coefficient ε, the mean cosine μ of the phase function, and the parameter Q = ε(1-μ)H/λ (H is the hematocrit) testifies to a decrease in the relative refractive index of erythrocytes and to morphological changes during laser action under pathology owing to hypoxia. The possibility of obtaining physical and chemical information on the state of blood under laser action in vivo is shown, based on the method we proposed for calculating multilayered structures modeling human organs and on its technical implementation.

  3. Spectral-decomposition techniques for the identification of radon anomalies temporally associated with earthquakes occurring in the UK in 2002 and 2008.

    NASA Astrophysics Data System (ADS)

    Crockett, R. G. M.; Gillmore, G. K.

    2009-04-01

    During the second half of 2002, the University of Northampton Radon Research Group operated two continuous hourly-sampling radon detectors 2.25 km apart in Northampton, in the (English) East Midlands. This period included the Dudley earthquake (22/09/2002) which was widely noticed by members of the public in the Northampton area. Also, at various periods during 2008 the Group has operated another pair of continuous hourly-sampling radon detectors similar distances apart in Northampton. One such period included the Market Rasen earthquake (27/02/2008) which was also widely noticed by members of the public in the Northampton area. During each period of monitoring, two time-series of radon readings were obtained, one from each detector. These have been analysed for evidence of simultaneous similar anomalies: the premise being that big disturbances occurring at big distances (in relation to the detector separation) should produce simultaneous similar anomalies but that simultaneous anomalies occurring by chance will be dissimilar. As previously reported, cross-correlating the two 2002 time-series over periods of 1-30 days duration, rolled forwards through the time-series at one-hour intervals produced two periods of significant correlation, i.e. two periods of simultaneous similar behaviour in the radon concentrations. One of these periods corresponded in time to the Dudley earthquake, the other corresponded in time to a smaller earthquake which occurred in the English Channel (26/08/2002). We here report subsequent investigation of the 2002 time-series and the 2008 time-series using spectral-decomposition techniques. These techniques have revealed additional simultaneous similar behaviour in the two radon concentrations, not revealed by the rolling correlation on the raw data. These correspond in time to the Manchester earthquake swarm of October 2002 and the Market Rasen earthquake of February 2008. 
The spectral-decomposition techniques effectively 'de-noise' the data and also remove lower-frequency variations (e.g. tidal variations), revealing the simultaneous similarities. Whilst this is very much work in progress, such techniques raise the possibility that simultaneous real-time monitoring of radon levels - for short-term simultaneous anomalies - at several locations in earthquake areas might provide the core of an earthquake prediction method. Keywords: radon; earthquakes; time series; cross-correlation; spectral decomposition; real-time simultaneous monitoring.
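The rolling cross-correlation premise - a shared disturbance produces simultaneous similar anomalies in both detectors, while chance anomalies are dissimilar - can be sketched on synthetic data (the window length, injected anomaly, and all names are illustrative, not the Group's actual pipeline):

```python
import numpy as np

def rolling_correlation(a, b, window):
    """Pearson correlation of two equal-length series over a window
    rolled forward one sample at a time."""
    out = []
    for start in range(len(a) - window + 1):
        wa, wb = a[start:start + window], b[start:start + window]
        out.append(np.corrcoef(wa, wb)[0, 1])
    return np.array(out)

rng = np.random.default_rng(1)
n, window = 500, 48                  # e.g. 48 hourly samples = 2 days
noise_a = rng.standard_normal(n)     # detector-local noise, uncorrelated
noise_b = rng.standard_normal(n)
anomaly = np.zeros(n)
anomaly[200:260] = 3 * np.sin(np.linspace(0, 6 * np.pi, 60))  # shared disturbance
r = rolling_correlation(noise_a + anomaly, noise_b + anomaly, window)
```

The correlation trace `r` stays near zero except where the windows cover the shared anomaly, which is the signature the abstract describes searching for.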

  4. CHROMOPHORIC DISSOLVED ORGANIC MATTER (CDOM) DERIVED FROM DECOMPOSITION OF VARIOUS VASCULAR PLANT AND ALGAL SOURCES

    EPA Science Inventory

    Chromophoric dissolved organic matter (CDOM) in aquatic environments is derived from the microbial decomposition of terrestrial and microbial organic matter. Here we present results of studies of the spectral properties and photoreactivity of the CDOM derived from several organic matter...

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jalas, S.; Dornmair, I.; Lehe, R.

    Particle-in-Cell (PIC) simulations are a widely used tool for the investigation of both laser- and beam-driven plasma acceleration. It is a known issue that the beam quality can be artificially degraded by numerical Cherenkov radiation (NCR) resulting primarily from an incorrectly modeled dispersion relation. Pseudo-spectral solvers featuring infinite-order stencils can strongly reduce NCR - or even suppress it - and are therefore well suited to correctly model the beam properties. For efficient parallelization of the PIC algorithm, however, localized solvers are inevitable. Arbitrary-order pseudo-spectral methods provide this needed locality. Yet, these methods can again be prone to NCR. Here we show that acceptably low solver orders are sufficient to correctly model the physics of interest, while allowing for parallel computation by domain decomposition.

  6. PHOTOREACTIVITY OF CHROMOPHORIC DISSOLVED ORGANIC MATTER (CDOM) DERIVED FROM DECOMPOSITION OF VARIOUS VASCULAR PLANT AND ALGAL SOURCES

    EPA Science Inventory

    Chromophoric dissolved organic matter (CDOM) in aquatic environments is derived from the microbial decomposition of terrestrial and microbial organic matter. Here we present results of studies of the spectral properties and photoreactivity of the CDOM derived from several organi...

  7. Spectral studies related to dissociation of HBr, HCl and BrO

    NASA Technical Reports Server (NTRS)

    Ginter, M. L.

    1986-01-01

    Concern over halogen catalyzed decomposition of O3 in the upper atmosphere has generated need for data on the atomic and molecular species X, HX and XO (where X is Cl and Br). Of special importance are Cl produced from freon decomposition and Cl and Br produced from natural processes and from other industrial and agricultural chemicals. Basic spectral data is provided on HCl, HBr, and BrO necessary to detect specific states and energy levels, to enable detailed modeling of the processes involving molecular dissociation, ionization, etc., and to help evaluate field experiments to check the validity of model calculations for these species in the upper atmosphere. Results contained in four published papers and two major spectral compilations are summarized together with other results obtained.

  8. Groupwise shape analysis of the hippocampus using spectral matching

    NASA Astrophysics Data System (ADS)

    Shakeri, Mahsa; Lombaert, Hervé; Lippé, Sarah; Kadoury, Samuel

    2014-03-01

    The hippocampus is a prominent subcortical structure of interest in many neuroscience studies. Its subtle morphological changes often presage illnesses, including Alzheimer's disease, schizophrenia, and epilepsy. Precisely locating structural differences requires a reliable correspondence between shapes across a population. In this paper, we propose an automated method for groupwise hippocampal shape analysis based on a spectral decomposition of a group of shapes, which solves the correspondence problem between sets of meshes. The framework generates diffeomorphic correspondence maps across a population, which enables us to create a mean shape. Morphological changes are then located between two groups of subjects. The performance of the proposed method was evaluated on a dataset of 42 hippocampus shapes and compared with a state-of-the-art structural shape analysis approach based on spherical harmonics. Difference maps between the mean shapes of two test groups demonstrate that the two approaches give results with insignificant differences, while Gaussian curvature measures calculated between matched vertices showed a better fit and reduced variability with spectral matching.

  9. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.
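As a rough illustration of the NMF ingredient of the method above (not the authors' wavelet-image pipeline or their robust initialization strategy), a plain multiplicative-update factorization recovers two nonnegative patterns from an exactly low-rank nonnegative mixture:

```python
import numpy as np

def nmf(V, rank, n_iter=200, seed=0):
    """Multiplicative-update NMF: V (m x n, nonnegative) ~ W @ H,
    with W and H kept elementwise nonnegative throughout."""
    rng = np.random.default_rng(seed)
    m, n = V.shape
    W = rng.random((m, rank)) + 1e-3
    H = rng.random((rank, n)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Two nonnegative "intensity" patterns mixed at random levels
# (hypothetical stand-ins for wavelet-domain intensity images).
rng = np.random.default_rng(2)
parts = rng.random((2, 50))
weights = rng.random((40, 2))
V = weights @ parts
W, H = nmf(V, rank=2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The random initialization here is exactly the fragile step the paper replaces with its robust initialization; the sketch only shows the factorization itself.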

  10. An adaptive angle-doppler compensation method for airborne bistatic radar based on PAST

    NASA Astrophysics Data System (ADS)

    Hang, Xu; Jun, Zhao

    2018-05-01

    Adaptive angle-Doppler compensation methods extract the requisite information adaptively from the data itself, thus avoiding the performance degradation caused by inertial-system errors. However, such methods require estimation and eigendecomposition of the sample covariance matrix, which has a high computational complexity and limits real-time application. In this paper, an adaptive angle-Doppler compensation method based on projection approximation subspace tracking (PAST) is studied. The method uses cyclic iterative processing to quickly estimate the position of the spectral center of the maximum eigenvector of each range cell, avoiding the computational burden of covariance-matrix estimation and eigendecomposition; the spectral centers of all range cells are then aligned by two-dimensional compensation. Simulation results show that the proposed method can effectively reduce the non-homogeneity of airborne bistatic radar, and that its performance is similar to that of eigendecomposition-based algorithms, while the computational load is markedly reduced and the method is easier to implement.

  11. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and its natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). An IMF is defined as any function having the same numbers of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima respectively. An IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of imbedded structures. This method invention can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound signal enhancement and filtering. Additionally, the acoustical signals from machinery are essentially the way the machines talk to us: acoustical signals from machines, whether sound through air or vibration on the machines, can tell us the operating conditions of the machines. Thus, we can use the acoustic signal to diagnose the problems of machines.
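The Hilbert-transform step that turns an IMF into an instantaneous frequency can be sketched with an FFT-based analytic signal; the mono-component test tone below stands in for a well-behaved IMF (a minimal sketch, not the HHT implementation the abstract describes):

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Instantaneous frequency from the analytic signal, built with an
    FFT-based Hilbert transform (numpy only)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                  # spectral mask for the analytic signal
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    analytic = np.fft.ifft(X * h)
    phase = np.unwrap(np.angle(analytic))
    return np.diff(phase) * fs / (2 * np.pi)   # Hz, one sample shorter than x

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
tone = np.cos(2 * np.pi * 80 * t)    # an IMF-like mono-component signal
f_inst = instantaneous_frequency(tone, fs)
```

For a genuine multi-component signal, this step is only meaningful after EMD has isolated each IMF, which is the point of the decomposition.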

  12. Multi-material decomposition of spectral CT images

    NASA Astrophysics Data System (ADS)

    Mendonça, Paulo R. S.; Bhotika, Rahul; Maddah, Mahnaz; Thomsen, Brian; Dutta, Sandeep; Licato, Paul E.; Joshi, Mukta C.

    2010-04-01

    Spectral Computed Tomography (Spectral CT), and in particular fast kVp switching dual-energy computed tomography, is an imaging modality that extends the capabilities of conventional computed tomography (CT). Spectral CT enables the estimation of the full linear attenuation curve of the imaged subject at each voxel in the CT volume, instead of a scalar image in Hounsfield units. Because the space of linear attenuation curves in the energy ranges of medical applications can be accurately described through a two-dimensional manifold, this decomposition procedure would be, in principle, limited to two materials. This paper describes an algorithm that overcomes this limitation, allowing for the estimation of N-tuples of material-decomposed images. The algorithm works by assuming that the mixing of substances and tissue types in the human body has the physicochemical properties of an ideal solution, which yields a model for the density of the imaged material mix. Under this model the mass attenuation curve of each voxel in the image can be estimated, immediately resulting in a material-decomposed image triplet. Decomposition into an arbitrary number of pre-selected materials can be achieved by automatically selecting adequate triplets from an application-specific material library. The decomposition is expressed in terms of the volume fractions of each constituent material in the mix; this provides for a straightforward, physically meaningful interpretation of the data. One important application of this technique is in the digital removal of contrast agent from a dual-energy exam, producing a virtual nonenhanced image, as well as in the quantification of the concentration of contrast observed in a targeted region, thus providing an accurate measure of tissue perfusion.
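The volume-fraction idea above can be illustrated with a toy linear model: two measured attenuations per voxel, plus the constraint that the fractions sum to one, determine a triplet of fractions. The attenuation coefficients below are made-up values for illustration, not measured material data:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of three basis
# materials at the two effective energies of a dual-kVp scan.
#            low kVp   high kVp
mu = np.array([[0.40, 0.25],   # water-like
               [0.90, 0.45],   # iodine-enhanced blood (made-up values)
               [1.20, 0.60]])  # bone-like

def volume_fractions(mu_low, mu_high, mu_basis):
    """Solve for three volume fractions from two measured attenuations
    plus the constraint that the fractions sum to one."""
    A = np.vstack([mu_basis.T, np.ones(3)])   # 3 equations, 3 unknowns
    b = np.array([mu_low, mu_high, 1.0])
    return np.linalg.solve(A, b)

# A voxel that is truly 70% water, 20% iodine-blood, 10% bone:
f_true = np.array([0.7, 0.2, 0.1])
meas = f_true @ mu
f = volume_fractions(meas[0], meas[1], mu)
```

With more than three candidate materials, the paper's algorithm selects adequate triplets from an application-specific library; the solve above only shows the per-triplet step.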

  13. Improving radiation data quality of USDA UV-B monitoring and research program and evaluating UV decomposition in DayCent and its ecological impacts

    NASA Astrophysics Data System (ADS)

    Chen, Maosi

    Solar radiation impacts many aspects of the Earth's atmosphere and biosphere. The total solar radiation impacts the atmospheric temperature profile and the Earth's surface radiative energy budget. The solar visible (VIS) radiation is the energy source of photosynthesis. The solar ultraviolet (UV) radiation impacts plant's physiology, microbial activities, and human and animal health. Recent studies found that solar UV significantly shifts the mass loss and nitrogen patterns of plant litter decomposition in semi-arid and arid ecosystems. The potential mechanisms include the production of labile materials from direct and indirect photolysis of complex organic matters, the facilitation of microbial decomposition with more labile materials, and the UV inhibition of microbes' population. However, the mechanisms behind UV decomposition and its ecological impacts are still uncertain. Accurate and reliable ground solar radiation measurements help us better retrieve the atmosphere composition, validate satellite radiation products, and simulate ecosystem processes. Incorporating the UV decomposition into the DayCent biogeochemical model helps to better understand long-term ecological impacts. Improving the accuracy of UV irradiance data is the goal of the first part of this research and examining the importance of UV radiation in the biogeochemical model DayCent is the goal of the second part of the work. Thus, although the dissertation is separated into two parts, accurate UV irradiance measurement links them in what follows. In part one of this work the accuracy and reliability of the current operational calibration method for the (UV-) Multi-Filter Rotating Shadowband Radiometer (MFRSR), which is used by the U.S. Department of Agriculture UV-B Monitoring and Research Program (UVMRP), is improved. The UVMRP has monitored solar radiation in the 14 narrowband UV and VIS spectral channels at 37 sites across U.S. since 1992. 
The improvements in the quality of the data result from an improved cloud screening algorithm that iteratively rejects cloudy points based on a decreasing tolerance of unstable optical depth behavior when calibration information is unknown. A MODTRAN radiative transfer model simulation showed the new cloud screening algorithm was capable of screening cloudy points while retaining clear-sky points. The comparison results showed that the cloud-free points determined by the new cloud screening algorithm generated significantly more (by 56%) and unbiased Langley offset voltages (VLOs) for both partly cloudy days and sunny days at two testing sites, Hawaii and Florida. The VLOs are proportional to the radiometric sensitivity. The stability of the calibration is also improved by the development of a two-stage reference channel calibration method for collocated UV-MFRSR and MFRSR instruments. Special channels where aerosol is the only contributor to total optical depth (TOD) variation (e.g. the 368-nm channel) were selected, and the radiative transfer model (MODTRAN) was used to calculate direct-normal to diffuse-horizontal ratios, which were used to evaluate the stability of the TOD at cloud-free points. The spectral dependence of the optical properties of atmospheric constituents and previously calibrated channels were used to find stable TOD points and perform Langley calibration at spectrally adjacent channels. The test of this method at the UV-B program site at Homestead, Florida (FL02) showed that the new method generated more clustered and abundant VLOs at all (UV-) MFRSR channels and potentially improved the accuracy by 2-4% at most channels and by over 10% at the 300-nm and 305-nm channels. In the second major part of this work, I calibrated the DayCent-UV model with ecosystem variables (e.g. 
soil water, live biomass), allowed the maximum photodecay rate to vary with the litter's initial lignin fraction in the model, and validated the optimized model against LIDET observations of remaining carbon and nitrogen at three semi-arid sites. I also explored the ecological impacts of UV decomposition with the optimized DayCent-UV model. The DayCent-UV model showed significantly better performance than models without UV decomposition in simulating the observed linear carbon-loss pattern and the persistent net nitrogen mineralization in the 10-year LIDET experiment at the three sites. The DayCent-UV equilibrium model runs showed that UV decomposition increased aboveground and belowground plant production, surface net nitrogen mineralization, and the surface litter nitrogen pool, while decreasing surface litter carbon, soil net nitrogen mineralization, and mineral soil carbon and nitrogen. In addition, UV decomposition showed minimal impacts (i.e. less than 1% change) on trace gas emissions and biotic decomposition rates. Overall, my dissertation provided a comprehensive solution to improve the calibration accuracy and reliability of the MFRSR and therefore the quality of radiation products. My dissertation also improved the understanding of UV decomposition and its long-term ecological impacts.

  14. Comparative Analysis of Haar and Daubechies Wavelet for Hyper Spectral Image Classification

    NASA Astrophysics Data System (ADS)

    Sharif, I.; Khare, S.

    2014-11-01

    With the number of channels in the hundreds instead of the tens, hyperspectral imagery possesses much richer spectral information than multispectral imagery. The increased dimensionality of such hyperspectral data challenges current techniques for analyzing the data, and conventional classification methods may not be useful without dimension-reduction pre-processing. Dimension reduction has therefore become a significant part of hyperspectral image processing. This paper presents a comparative analysis of the efficacy of Haar and Daubechies wavelets for dimensionality reduction in image classification. Spectral data reduction using wavelet decomposition is useful because it preserves the distinctions among spectral signatures. Daubechies wavelets optimally capture polynomial trends, while the Haar wavelet is discontinuous and resembles a step function. The performance of these wavelets is compared in terms of classification accuracy and time complexity. This paper shows that wavelet reduction preserves class separability and yields better or comparable classification accuracy. In the context of the dimensionality-reduction algorithm, the classification performance of Daubechies wavelets is better than that of the Haar wavelet, while Daubechies takes more time than Haar. The experimental results demonstrate that the classification system consistently provides over 84% classification accuracy.

  15. Acoustical Applications of the HHT Method

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    2003-01-01

    A document discusses applications of a method based on the Huang-Hilbert transform (HHT). The method was described, without the HHT name, in Analyzing Time Series Using EMD and Hilbert Spectra (GSC-13817), NASA Tech Briefs, Vol. 24, No. 10 (October 2000), page 63. To recapitulate: The method is especially suitable for analyzing time-series data that represent nonstationary and nonlinear physical phenomena. The method involves the empirical mode decomposition (EMD), in which a complicated signal is decomposed into a finite number of functions, called intrinsic mode functions (IMFs), that admit well-behaved Hilbert transforms. The HHT consists of the combination of EMD and Hilbert spectral analysis.

  16. Time-dependent density functional theory for open systems with a positivity-preserving decomposition scheme for environment spectral functions

    NASA Astrophysics Data System (ADS)

    Wang, RuLin; Zheng, Xiao; Kwok, YanHo; Xie, Hang; Chen, GuanHua; Yam, ChiYung

    2015-04-01

    Understanding electronic dynamics on material surfaces is fundamentally important for applications including nanoelectronics, heterogeneous catalysis, and photovoltaics. Practical approaches based on time-dependent density functional theory for open systems have been developed to characterize the dissipative dynamics of electrons in bulk materials. The accuracy and reliability of such approaches depend critically on how the electronic structure and memory effects of the surrounding material environment are accounted for. In this work, we develop a novel squared-Lorentzian decomposition scheme, which preserves the positive semi-definiteness of the environment spectral matrix. The resulting electronic dynamics is guaranteed to be both accurate and convergent even in the long-time limit. The long-time stability of electronic dynamics simulations is thus greatly improved within the current decomposition scheme. The validity and usefulness of our new approach are exemplified via two prototypical model systems: quasi-one-dimensional atomic chains and two-dimensional bilayer graphene.

  17. Exact nonlinear model reduction for a von Kármán beam: Slow-fast decomposition and spectral submanifolds

    NASA Astrophysics Data System (ADS)

    Jain, Shobhit; Tiso, Paolo; Haller, George

    2018-06-01

    We apply two recently formulated mathematical techniques, Slow-Fast Decomposition (SFD) and Spectral Submanifold (SSM) reduction, to a von Kármán beam with geometric nonlinearities and viscoelastic damping. SFD identifies a global slow manifold in the full system which attracts solutions at rates faster than typical rates within the manifold. An SSM, the smoothest nonlinear continuation of a linear modal subspace, is then used to further reduce the beam equations within the slow manifold. This two-stage, mathematically exact procedure reduces the finite-element beam model drastically, to a one-degree-of-freedom nonlinear oscillator. We also introduce the technique of spectral quotient analysis, which gives the number of modes relevant for reduction as an output of, rather than an input to, the reduction process.

  18. Factorization-based texture segmentation

    DOE PAGES

    Yuan, Jiangye; Wang, Deliang; Cheriyadat, Anil M.

    2015-06-17

    This study introduces a factorization-based approach that efficiently segments textured images. We use local spectral histograms as features, and construct an M × N feature matrix using M-dimensional feature vectors in an N-pixel image. Based on the observation that each feature can be approximated by a linear combination of several representative features, we factor the feature matrix into two matrices: one consisting of the representative features, and the other containing the weights of the representative features at each pixel, used for the linear combination. The factorization method is based on singular value decomposition and nonnegative matrix factorization. The method uses local spectral histograms to discriminate region appearances in a computationally efficient way and at the same time accurately localizes region boundaries. Finally, the experiments conducted on public segmentation data sets show the promise of this simple yet powerful approach.
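The two factorization ingredients can be sketched on a toy feature matrix: the singular-value spectrum reveals how many representative features there are, and least-squares weights against chosen representatives label each pixel. This is a simplified stand-in for the paper's SVD+NMF procedure, with all data synthetic:

```python
import numpy as np

# Toy feature matrix: each pixel's feature vector is a mix of two
# representative features (stand-ins for local spectral histograms).
rng = np.random.default_rng(3)
reps = np.abs(rng.standard_normal((8, 2)))   # M = 8 feature dims, 2 regions
weights = rng.random((2, 100))               # N = 100 pixels
Y = reps @ weights

# 1) The number of representative features is the numerical rank of Y,
#    read off the singular-value spectrum.
s = np.linalg.svd(Y, compute_uv=False)
rank = int(np.sum(s > 1e-8 * s[0]))

# 2) Seed representatives with the most "pure" pixels, then obtain
#    per-pixel weights by least squares; the largest weight labels
#    each pixel's segment.
i0 = int(np.argmax(weights[0] - weights[1]))
i1 = int(np.argmax(weights[1] - weights[0]))
Z = Y[:, [i0, i1]]                           # representative features
B = np.linalg.lstsq(Z, Y, rcond=None)[0]     # per-pixel weights
labels = np.argmax(B, axis=0)
```

The paper refines this with nonnegativity constraints on both factors, which is what makes the weights interpretable as region memberships.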

  19. Spectral Estimation: An Overdetermined Rational Model Equation Approach.

    DTIC Science & Technology

    1982-09-15

    A-A123 122. Spectral Estimation: An Overdetermined Rational Model Equation Approach. Final report, Arizona State Univ., Tempe, Dept. of Electrical and Computer Engineering. Keywords: rational spectral estimation, ARMA model, AR model, MA model, spectrum, singular value decomposition, adaptive implementation. (Only these fragments are recoverable from the scanned report form.)

  20. [EMD Time-Frequency Analysis of Raman Spectrum and NIR].

    PubMed

    Zhao, Xiao-yu; Fang, Yi-ming; Tan, Feng; Tong, Liang; Zhai, Zhe

    2016-02-01

    This paper analyzes Raman and near-infrared (NIR) spectra with time-frequency methods. Empirical mode decomposition (EMD) turns a spectrum into intrinsic mode functions (IMFs); calculating their energy proportions reveals that the Raman spectral energy is uniformly distributed over the components, while the low-order IMFs of the NIR spectrum carry only a small part of the effective spectroscopic information. Both real spectra and numerical experiments show that EMD treats the Raman spectrum as an amplitude-modulated signal with a high-frequency absorption property, and treats the NIR spectrum as a frequency-modulated signal for which high-frequency narrow-band demodulation is best realized in the first-order IMF. The Hilbert transform of the first-order IMF reveals that modal aliasing occurs when EMD decomposes the Raman spectrum. Further time-frequency analysis of a corn leaf's NIR spectrum shows that, after EMD, cutting off the low-energy first- and second-order components and reconstructing the spectral signal from the remaining IMFs gives a root-mean-square error of 1.0011 and a correlation coefficient of 0.9813, both indicating high reconstruction accuracy. The decomposition trend term indicates that absorbance increases with decreasing wavelength in the near-infrared band, and the Hilbert transform of the characteristic modal component shows that 657 cm⁻¹ is the specific frequency of the corn leaf stress spectrum, which can be regarded as a characteristic frequency for identification.

  1. Prediction of the spectral reflectance of laser-generated color prints by combination of an optical model and learning methods.

    PubMed

    Nébouy, David; Hébert, Mathieu; Fournel, Thierry; Larina, Nina; Lesur, Jean-Luc

    2015-09-01

    Recent color printing technologies based on the principle of revealing colors on pre-functionalized achromatic supports by laser irradiation offer advanced functionalities, especially for security applications. For such technologies, however, color prediction is challenging compared to classic ink-transfer printing systems: the spectral properties of the coloring materials modified by the laser are not precisely known and may vary strongly with the laser settings, in a nonlinear manner. We show in this study, through the example of the color laser marking (CLM) technology, based on laser bleaching of a mixture of pigments, that combining an adapted optical reflectance model with learning methods for obtaining the model's parameters enables prediction of the spectral reflectance of any printable color with rather good accuracy. Even though the pigment mixture is formulated from three colored pigments, an analysis of the dimensionality of the spectral space generated by CLM printing, using a principal component analysis decomposition, shows that at least four spectral primaries are needed for accurate spectral reflectance predictions. A polynomial interpolation is then used to relate RGB laser intensities to virtual coordinates in the new basis. By studying the influence of the number of calibration patches on the prediction accuracy, we conclude that a reasonable number of 130 patches is enough to achieve good accuracy in this application.
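The dimensionality analysis mentioned above can be sketched as follows: build mock mixtures of four hypothetical spectral primaries and count the significant eigenvalues of the covariance (PCA). The spectra and thresholds below are illustrative, not the CLM measurements:

```python
import numpy as np

# Mock reflectance spectra: every patch is a mixture of four hypothetical
# spectral primaries, as the paper's PCA analysis suggests.
rng = np.random.default_rng(4)
wavelengths = np.arange(400, 701, 10)            # 31 samples over 400-700 nm
primaries = rng.random((4, len(wavelengths)))
mixtures = rng.random((200, 4))
spectra = mixtures @ primaries

# PCA: eigendecompose the covariance of the spectra and count the
# eigenvalues that carry non-negligible variance.
centered = spectra - spectra.mean(axis=0)
cov = centered.T @ centered / (len(spectra) - 1)
evals = np.linalg.eigvalsh(cov)[::-1]            # descending order
n_components = int(np.sum(evals > 1e-6 * evals[0]))
```

On real data the eigenvalue spectrum decays gradually rather than dropping to machine precision, so the cut-off becomes a modeling choice; the paper's conclusion is that fewer than four components lose prediction accuracy.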

  2. Neuroelectrical Decomposition of Spontaneous Brain Activity Measured with Functional Magnetic Resonance Imaging

    PubMed Central

    Liu, Zhongming; de Zwart, Jacco A.; Chang, Catie; Duan, Qi; van Gelderen, Peter; Duyn, Jeff H.

    2014-01-01

    Spontaneous activity in the human brain occurs in complex spatiotemporal patterns that may reflect functionally specialized neural networks. Here, we propose a subspace analysis method to elucidate large-scale networks by the joint analysis of electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) data. The new approach is based on the notion that the neuroelectrical activity underlying the fMRI signal may have EEG spectral features that report on regional neuronal dynamics and interregional interactions. Applying this approach to resting healthy adults, we indeed found characteristic spectral signatures in the EEG correlates of spontaneous fMRI signals at individual brain regions as well as the temporal synchronization among widely distributed regions. These spectral signatures not only allowed us to parcel the brain into clusters that resembled the brain's established functional subdivision, but also offered important clues for disentangling the involvement of individual regions in fMRI network activity. PMID:23796947

  3. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero quickly. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method to the DTOC optimality system.
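A forward block Gauss-Seidel sweep over a block tridiagonal system with invertible diagonal blocks, as described above, can be sketched as follows. The toy data is diagonally dominant so the sweep converges on its own; the real DTOC systems need not be, which is why the paper uses the sweep as a preconditioner:

```python
import numpy as np

def block_gauss_seidel(diag, lower, upper, b, sweeps):
    """Forward block Gauss-Seidel sweeps for a block tridiagonal system:
    diag[i] on the diagonal, lower[i] at (i+1, i), upper[i] at (i, i+1)."""
    nb, m, _ = diag.shape
    x = np.zeros((nb, m))
    for _ in range(sweeps):
        for i in range(nb):
            r = b[i].copy()
            if i > 0:
                r -= lower[i - 1] @ x[i - 1]   # uses already-updated x[i-1]
            if i < nb - 1:
                r -= upper[i] @ x[i + 1]
            x[i] = np.linalg.solve(diag[i], r) # invertible diagonal block
    return x

# Block-diagonally dominant toy problem: 5 blocks of size 3.
rng = np.random.default_rng(5)
nb, m = 5, 3
diag = np.array([20 * np.eye(m) + rng.standard_normal((m, m))
                 for _ in range(nb)])
lower = rng.standard_normal((nb - 1, m, m))
upper = rng.standard_normal((nb - 1, m, m))
b = rng.standard_normal((nb, m))
x = block_gauss_seidel(diag, lower, upper, b, sweeps=50)
```

In the paper's setting one such sweep would precondition a Krylov iteration (e.g. GMRES) rather than serve as a standalone solver.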

  4. Eastern Mediterranean Sea Spatial and Temporal Variability of Thermohaline Structure and Circulation Identified from Observational (T, S) Profiles

    DTIC Science & Technology

    2015-12-01

effect of Etesian winds between late May and early October. Although they are generally dry, cool and moderate; they may turn into a windstorm...very significant to provide the realization of ocean modeling and prediction. The Optimal Spectral Decomposition (OSD) method is an effective ...represents the potential density, by differentiating this equation with respect to z and multiplying with the Coriolis parameter f, conservation of

  5. A Kullback-Leibler approach for 3D reconstruction of spectral CT data corrupted by Poisson noise

    NASA Astrophysics Data System (ADS)

    Hohweiller, Tom; Ducros, Nicolas; Peyrin, Françoise; Sixou, Bruno

    2017-09-01

While standard computed tomography (CT) data do not depend on energy, spectral computed tomography (SPCT) acquires energy-resolved data, which allows material decomposition of the object of interest. Decompositions in the projection domain allow creating projection mass density (PMD) maps per material. From the decomposed projections, a tomographic reconstruction creates a 3D material density volume. The decomposition is made possible by minimizing a cost function. The variational approach is preferred since this is an ill-posed non-linear inverse problem. Moreover, noise plays a critical role when decomposing data. That is why, in this paper, a new data fidelity term is used to take into account the photonic noise. In this work two data fidelity terms were investigated: a weighted least squares (WLS) term, adapted to Gaussian noise, and the Kullback-Leibler distance (KL), adapted to Poisson noise. A regularized Gauss-Newton algorithm minimizes the cost function iteratively. Both methods decompose materials from a numerical phantom of a mouse. Soft tissues and bones are decomposed in the projection domain; a tomographic reconstruction then creates a 3D material density volume for each material. Comparing relative errors, KL is shown to outperform WLS for low photon counts, in 2D and 3D. This new method could be of particular interest when low-dose acquisitions are performed.
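The two data fidelity terms compared above can be sketched directly. The toy counts and model values below are invented for illustration; in the paper these costs are minimized over decomposed projections with a regularized Gauss-Newton algorithm:

```python
import math

def wls_cost(y, m):
    # Weighted least squares with weights 1/y (Gaussian noise approximation).
    return sum((yi - mi) ** 2 / yi for yi, mi in zip(y, m))

def kl_cost(y, m):
    # Kullback-Leibler distance between counts y and model m; this is the
    # negative Poisson log-likelihood up to an additive constant.
    return sum(mi - yi + yi * math.log(yi / mi) for yi, mi in zip(y, m))

y = [5.0, 9.0, 20.0]        # measured photon counts (low-count regime)
m_good = [5.0, 9.0, 20.0]   # model matching the data exactly
m_off = [7.0, 6.0, 25.0]    # mismatched model

# Both costs vanish at the exact model and grow with model mismatch;
# for Poisson-distributed counts the KL term is the statistically
# better-matched choice, as the paper's results indicate.
```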

  6. Different techniques of multispectral data analysis for vegetation fraction retrieval

    NASA Astrophysics Data System (ADS)

    Kancheva, Rumiana; Georgiev, Georgi

    2012-07-01

Vegetation monitoring is one of the most important applications of remote sensing technologies. For farmlands, the assessment of crop condition constitutes the basis for monitoring growth, development, and yield processes. Plant condition is defined by a set of biometric variables, such as density, height, biomass amount, and leaf area index. The canopy cover fraction is closely related to these variables and is indicative of the state of the growth process. At the same time it is a defining factor of the spectral signatures of the soil-vegetation system. That is why spectral mixture decomposition is a primary objective in remotely sensed data processing and interpretation, specifically in agricultural applications. The actual usefulness of the applied methods depends on their prediction reliability. The goal of this paper is to present and compare different techniques for quantitative endmember extraction from soil-crop pattern reflectance. These techniques include: linear spectral unmixing, two-dimensional spectra analysis, spectral ratio analysis (vegetation indices), spectral derivative analysis (red edge position), and colorimetric analysis (tristimulus values sum, chromaticity coordinates and dominant wavelength). The objective is to reveal their potential, accuracy and robustness for plant fraction estimation from multispectral data. Regression relationships have been established between crop canopy cover and various spectral estimators.
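Of the listed techniques, linear spectral unmixing has a particularly compact form for two endmembers: the pixel spectrum is modeled as f times the vegetation endmember plus (1 - f) times the soil endmember, and f follows from least squares. The reflectance values below are hypothetical, not measurements from the study:

```python
def unmix_fraction(pixel, veg, soil):
    """Least-squares vegetation fraction for the two-endmember linear
    mixing model: pixel = f * veg + (1 - f) * soil."""
    num = sum((p - s) * (v - s) for p, v, s in zip(pixel, veg, soil))
    den = sum((v - s) ** 2 for v, s in zip(veg, soil))
    f = num / den
    return min(1.0, max(0.0, f))  # clip to the physically meaningful range

veg = [0.05, 0.08, 0.45, 0.50]   # hypothetical endmember reflectances
soil = [0.20, 0.25, 0.30, 0.35]
pixel = [0.5 * v + 0.5 * s for v, s in zip(veg, soil)]  # 50/50 mixture
f = unmix_fraction(pixel, veg, soil)  # recovers 0.5 for this noise-free mixture
```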

7. Novel metal-organic photocatalysts: Synthesis, characterization and decomposition of organic dyes

    NASA Astrophysics Data System (ADS)

    Gopal Reddy, N. B.; Murali Krishna, P.; Kottam, Nagaraju

    2015-02-01

An efficient method for the photocatalytic degradation of methylene blue in an aqueous medium was developed using metal-organic complexes. Two novel complexes were synthesized using the Schiff base ligand N‧-[(E)-(4-ethylphenyl)methylidene]-4-hydroxybenzohydrazide (HL) and Ni(II) (Complex 1) or Co(II) (Complex 2) chloride, respectively. The complexes were characterized using microanalysis and various spectral techniques. Spectral studies reveal that the complexes exhibit square planar geometry, with ligand coordination through the azomethine nitrogen and enolic oxygen. The effects of catalyst dosage, irradiation time and aqueous pH on the photocatalytic activity were studied systematically. The photocatalytic activity was found to be more efficient for the Ni(II) complex than for the Co(II) complex. Possible mechanistic aspects are discussed.

8. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds on the error of POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
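A minimal sketch of method M1's main ingredient: the leading POD mode of a solution-snapshot set, computed here by power iteration on the snapshot correlation matrix (practical codes use an SVD; the snapshot values below are invented for illustration):

```python
def leading_pod_mode(snapshots, iters=200):
    """Leading POD mode via power iteration on the snapshot
    correlation matrix C = sum_k x_k x_k^T (method M1: solution
    snapshots only, no time derivatives)."""
    n = len(snapshots[0])
    v = [1.0] * n
    for _ in range(iters):
        # w = C v, computed as sum_k (x_k . v) x_k without forming C
        w = [0.0] * n
        for snap in snapshots:
            dot = sum(si * vi for si, vi in zip(snap, v))
            for i in range(n):
                w[i] += dot * snap[i]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    return v

# Snapshots dominated by the direction [1, 0, 0] plus a weak second mode.
snaps = [[3.0, 0.1, 0.0], [2.9, -0.1, 0.0], [3.1, 0.05, 0.0]]
mode = leading_pod_mode(snaps)
```

Method M2 would simply append (scaled) time-derivative snapshots to `snaps` before extracting modes.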

9. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds on the error of POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value; and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  10. Robust demarcation of basal cell carcinoma by dependent component analysis-based segmentation of multi-spectral fluorescence images.

    PubMed

    Kopriva, Ivica; Persin, Antun; Puizina-Ivić, Neira; Mirić, Lina

    2010-07-02

This study was designed to demonstrate robust performance of a novel dependent component analysis (DCA)-based approach to demarcation of basal cell carcinoma (BCC) through unsupervised decomposition of the red-green-blue (RGB) fluorescent image of the BCC. Robustness to intensity fluctuation is due to the scale invariance property of DCA algorithms, which exploit spectral and spatial diversities between the BCC and the surrounding tissue. The filtering-based DCA approach used here represents an extension of independent component analysis (ICA) and is necessary in order to account for the statistical dependence induced by spectral similarity between the BCC and surrounding tissue. This similarity generates weak edges, which represent a challenge for other segmentation methods as well. By comparative performance analysis with state-of-the-art image segmentation methods such as active contours (level set), K-means clustering, non-negative matrix factorization, ICA and ratio imaging, we experimentally demonstrate good performance of DCA-based BCC demarcation in two demanding scenarios in which the intensity of the fluorescent image was varied by almost two orders of magnitude. Copyright 2010 Elsevier B.V. All rights reserved.

  11. Isotopic determination of uranium in soil by laser induced breakdown spectroscopy

    DOE PAGES

    Chan, George C. -Y.; Choi, Inhee; Mao, Xianglei; ...

    2016-03-26

Laser-induced breakdown spectroscopy (LIBS) operated under ambient pressure has been evaluated for isotopic analysis of uranium in real-world samples such as soil, with U concentrations at single-digit percentage levels. The study addresses the requirements for spectral decomposition of 235U and 238U atomic emission peaks that are only partially resolved. Although non-linear least-squares fitting algorithms are typically able to locate the optimal combination of fitting parameters that best describes the experimental spectrum even when all fitting parameters are treated as free independent variables, the analytical results of such an unconstrained free-parameter approach are ambiguous. In this work, five spectral decomposition algorithms were examined, with different known physical properties (e.g., isotopic splitting, hyperfine structure) of the spectral lines sequentially incorporated into the candidate algorithms as constraints. It was found that incorporating such spectral-line constraints into the decomposition algorithm is essential for the best isotopic analysis. The isotopic abundance of 235U was determined from a simple two-component Lorentzian fit on the U II 424.437 nm spectral profile. For six replicate measurements, each with only fifteen laser shots, on a soil sample with U concentration at 1.1% w/w, the determined 235U isotopic abundance was (64.6 ± 4.8)%, in good agreement with the certified value of 64.4%. Another studied U line, U I 682.691 nm, possesses hyperfine structure that is comparatively broad, amounting to a significant fraction of the isotopic shift. Thus, 235U isotopic analysis with this U I line was performed with spectral decomposition involving individual hyperfine components. For the soil sample with 1.1% w/w U, the determined 235U isotopic abundance was (60.9 ± 2.0)%, which exhibited a relative bias of about 6% from the certified value. The bias was attributed to the spectral resolution of the measurement system: the measured line width for this U I line was larger than its isotopic splitting. In conclusion, although not the best emission line for isotopic analysis, the U I 682.691 nm line is sensitive for elemental analysis, with a detection limit of 500 ppm U in the soil matrix; the detection limit for the U II 424.437 nm line was 2000 ppm.
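The two-component Lorentzian fit described above can be sketched in simplified form: if the line centers and a common width are held fixed (a stronger constraint than in the paper, and with illustrative rather than physical line parameters), the amplitudes follow from a 2x2 linear solve and the isotopic abundance from their ratio:

```python
def lorentz(x, x0, gamma):
    # Lorentzian profile with unit peak height and half-width gamma
    return 1.0 / (1.0 + ((x - x0) / gamma) ** 2)

def two_line_abundance(xs, ys, c1, c2, gamma):
    """Fit the amplitudes of two Lorentzians with fixed centers and a
    common width by solving the 2x2 normal equations; with equal
    widths the area ratio equals the amplitude ratio."""
    b1 = [lorentz(x, c1, gamma) for x in xs]
    b2 = [lorentz(x, c2, gamma) for x in xs]
    a11 = sum(u * u for u in b1)
    a12 = sum(u * v for u, v in zip(b1, b2))
    a22 = sum(v * v for v in b2)
    r1 = sum(u * y for u, y in zip(b1, ys))
    r2 = sum(v * y for v, y in zip(b2, ys))
    det = a11 * a22 - a12 * a12
    amp1 = (a22 * r1 - a12 * r2) / det
    amp2 = (a11 * r2 - a12 * r1) / det
    return amp1 / (amp1 + amp2)   # fractional abundance of component 1

# Synthetic, noise-free spectrum: two partially resolved lines 0.025 nm
# apart with width 0.012 nm (illustrative numbers, not the actual
# U II 424.437 nm line parameters), mixed 65%/35%.
xs = [-0.05 + 0.002 * i for i in range(51)]
ys = [0.65 * lorentz(x, 0.0, 0.012) + 0.35 * lorentz(x, 0.025, 0.012)
      for x in xs]
ab = two_line_abundance(xs, ys, 0.0, 0.025, 0.012)  # → 0.65
```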

  12. Diurnal characteristics of turbulent intermittency in the Taklimakan Desert

    NASA Astrophysics Data System (ADS)

    Wei, Wei; Wang, Minzhong; Zhang, Hongsheng; He, Qing; Ali, Mamtimin; Wang, Yinjun

    2017-12-01

A case study is performed to investigate the behavior of turbulent intermittency in the Taklimakan Desert using an intuitive, direct, and adaptive method, arbitrary-order Hilbert spectral analysis (arbitrary-order HSA). Decomposed modes from the vertical wind speed series confirm the dyadic filter-bank nature of the empirical mode decomposition process. Owing to the larger eddies in the convective boundary layer (CBL), higher-energy modes occur during the day. The second-order Hilbert spectra L2(ω) delineate the spectral gap separating fine-scale turbulence from large-scale motions. Both the kurtosis values and the Hilbert-based scaling exponent ξ(q) reveal that turbulence intermittency at night is much stronger than during the day, and that stronger intermittency is associated with more stable stratification under clear-sky conditions. This study fills a gap in the characterization of turbulence intermittency in the Taklimakan Desert using a relatively new method.

  13. Detection of cretaceous incised-valley shale for resource play, Miano gas field, SW Pakistan: Spectral decomposition using continuous wavelet transform

    NASA Astrophysics Data System (ADS)

    Naseer, Muhammad Tayyab; Asim, Shazia

    2017-10-01

Unconventional resource shales can play a critical role in economic growth throughout the world. The hydrocarbon potential of faulted/fractured shales is the most significant challenge for unconventional prospect generation. Continuous wavelet transform (CWT) spectral decomposition (SD) technology is applied for shale gas prospects on high-resolution 3D seismic data from the Miano area in the Indus platform, SW Pakistan. Schmoker's technique reveals high-quality shales with total organic carbon (TOC) of 9.2% distributed in the western regions. The seismic amplitude, root-mean-square (RMS), and most positive curvature attributes show limited ability to resolve the prospective fractured shale components. The CWT is used to identify the hydrocarbon-bearing faulted/fractured compartments encased within the non-hydrocarbon-bearing shale units. The hydrocarbon-bearing shales exhibit higher amplitudes (4694 dB and 3439 dB) than the non-reservoir shales (3290 dB). Cross plots between sweetness, 22 Hz spectral decomposition, and seismic amplitude are found to be more effective tools than conventional seismic attribute mapping for discriminating the seal and reservoir elements within the incised-valley petroleum system. Rock physics distinguishes the productive sediments from the non-productive sediments, suggesting the potential for future shale play exploration.

  14. Fourier-Bessel Particle-In-Cell (FBPIC) v0.1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Jalas, Soeren

The Fourier-Bessel Particle-In-Cell code is scientific simulation software for relativistic plasma physics. It is a Particle-In-Cell code whose distinctive feature is the use of a spectral decomposition in cylindrical geometry. This decomposition combines the advantages of spectral 3D Cartesian PIC codes (high accuracy and stability) with those of finite-difference cylindrical PIC codes with azimuthal decomposition (orders-of-magnitude speedup compared to 3D simulations). The code is built on Python and can run both on CPU and GPU (GPU runs being typically 1 or 2 orders of magnitude faster than the corresponding CPU runs). The code has the exact same output format as the open-source PIC codes Warp and PIConGPU (openPMD format: openpmd.org) and has a very similar input format to Warp (a Python script with many similarities). There is therefore tight interoperability between Warp and FBPIC, and this interoperability will increase even more in the future.

  15. AUTONOMOUS GAUSSIAN DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.

    2015-04-15

We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, as well as their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up to and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
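AGD's use of derivative spectroscopy for initial guesses can be illustrated with a toy version: counting pronounced minima of the numerical second derivative to guess the number of Gaussian components. The synthetic spectrum and threshold below are invented; AGD itself adds machine-learning-tuned thresholds and full least-squares fits:

```python
import math

def gaussian(x, amp, mu, sig):
    return amp * math.exp(-0.5 * ((x - mu) / sig) ** 2)

def count_components(xs, ys, thresh):
    """Guess the number of Gaussian components from local minima of the
    numerical second derivative (derivative spectroscopy): each
    component produces a pronounced negative dip at its center."""
    h = xs[1] - xs[0]
    d2 = [(ys[i - 1] - 2 * ys[i] + ys[i + 1]) / h ** 2
          for i in range(1, len(ys) - 1)]
    count = 0
    for i in range(1, len(d2) - 1):
        if d2[i] < thresh and d2[i] < d2[i - 1] and d2[i] < d2[i + 1]:
            count += 1
    return count

# Synthetic spectrum: two blended Gaussian components.
xs = [i * 0.1 for i in range(201)]
ys = [gaussian(x, 1.0, 7.0, 0.8) + gaussian(x, 0.7, 12.0, 1.1) for x in xs]
n = count_components(xs, ys, thresh=-0.1)  # expect 2 components
```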

  16. SaaS Platform for Time Series Data Handling

    NASA Astrophysics Data System (ADS)

    Oplachko, Ekaterina; Rykunov, Stanislav; Ustinin, Mikhail

    2018-02-01

The paper is devoted to the description of MathBrain, a cloud-based resource that follows the "Software as a Service" model. It is designed to make efficient use of current computing technology and to provide a tool for time series data handling. The resource provides access to the following analysis methods: direct and inverse Fourier transforms, principal component analysis (PCA) and independent component analysis (ICA) decompositions, quantitative analysis, and magnetoencephalography inverse problem solution in a single-dipole model based on multichannel spectral data.
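The first of the listed methods, the direct and inverse Fourier transforms, can be sketched with a naive O(n²) implementation (illustrative only; a production service would use an FFT):

```python
import cmath
import math

def dft(x):
    """Direct O(n^2) discrete Fourier transform."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def idft(X):
    """Inverse transform; recovers the original samples."""
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)) / n for t in range(n)]

# A real cosine at bin 2 of an 8-sample window: the spectral energy
# appears in the conjugate pair of bins 2 and 6.
x = [math.cos(2 * math.pi * 2 * t / 8) for t in range(8)]
X = dft(x)
y = idft(X)  # round trip, equal to x up to rounding
```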

  17. Assessments on GOCE-based Gravity Field Model Comparisons with Terrestrial Data Using Wavelet Decomposition and Spectral Enhancement Approaches

    NASA Astrophysics Data System (ADS)

    Erol, Serdar; Serkan Isık, Mustafa; Erol, Bihter

    2016-04-01

The recent Earth gravity field satellite missions have led to significant improvements in global geopotential models in terms of both accuracy and resolution. However, the improvement in accuracy is not uniform over the Earth, and quantifying the level of improvement locally therefore requires independent data. Validation of the level-3 products from the gravity field satellite missions, independently of the estimation procedures of these products, is possible using various data sets, such as terrestrial gravity observations, astrogeodetic vertical deflections, GPS/leveling data, and the stationary sea surface topography. Quantifying the quality of the gravity field functionals derived from recent products is important for regional geoid modeling based on the fusion of satellite and terrestrial data with an optimal algorithm, in addition to statistically reporting improvement rates as a function of spatial location. In the validations, the errors and the systematic differences between the data and the varying spectral content of the compared signals should be considered in order to obtain comparable results. Accordingly, this study compares the performance of wavelet decomposition and spectral enhancement techniques in validating GOCE/GRACE-based Earth gravity field models using GPS/leveling and terrestrial gravity data in Turkey. The terrestrial validation data are filtered using the wavelet decomposition technique, and the numerical results from varying levels of decomposition are compared with results derived using the spectral enhancement approach with the contribution of an ultra-high-resolution Earth gravity field model. The tests include the GO-DIR-R5, GO-TIM-R5, GOCO05S, EIGEN-6C4 and EGM2008 global models. The conclusion discusses the advantages and drawbacks of both concepts and reports the performance of the tested gravity field models with an estimate of their contribution to geoid modeling in Turkish territory.
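The wavelet decomposition used to filter the validation data can be illustrated with a single level of the simplest (Haar) wavelet transform, which splits a signal into low-frequency approximation and high-frequency detail coefficients. The data and choice of wavelet here are illustrative; the abstract does not state which wavelet the study used:

```python
def haar_level(signal):
    """One level of the orthonormal Haar wavelet transform: pairwise
    averages (approximation, low frequencies) and pairwise differences
    (detail, high frequencies), each scaled by 1/sqrt(2)."""
    s = 2 ** 0.5
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / s for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / s for i in range(half)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction from one level of coefficients."""
    s = 2 ** 0.5
    out = []
    for a, d in zip(approx, detail):
        out.extend([(a + d) / s, (a - d) / s])
    return out

x = [4.0, 6.0, 10.0, 12.0, 8.0, 6.0, 5.0, 5.0]
a, d = haar_level(x)
y = haar_inverse(a, d)
# Orthonormality: energy is preserved across the decomposition, and
# the signal is reconstructed exactly; filtering keeps only `a` (or
# deeper approximation levels) to suppress high-frequency content.
```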

  18. Crystal growth, spectral, structural and optical studies of π-conjugated stilbazolium crystal: 4-bromobenzaldehyde-4'-N'-methylstilbazolium tosylate.

    PubMed

    Krishna Kumar, M; Sudhahar, S; Bhagavannarayana, G; Mohan Kumar, R

    2014-05-05

The nonlinear optical (NLO) organic compound 4-bromobenzaldehyde-4'-N'-methylstilbazolium tosylate was synthesized by the reflux method. The formation of the molecular complex was confirmed from (1)H NMR, FT-IR and FT-Raman spectral analyses. Single crystals were grown by the slow-evaporation solution growth method, and the crystal structure and atomic packing of the grown crystal were identified. The morphology and growth axis of the grown crystal were determined. The crystal perfection was analyzed using a high-resolution X-ray diffraction study on the (001) plane. The thermal stability, decomposition stages and melting point of the grown crystal were analyzed. The optical absorption coefficient (α) and energy band gap (E(g)) of the crystal were determined using UV-visible absorption studies. The second harmonic generation efficiency of the grown crystal was examined by the Kurtz powder method with different particle sizes using a 1064 nm laser. A laser-induced damage threshold study was carried out for the grown crystal using an Nd:YAG laser. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. Spectral functions of strongly correlated extended systems via an exact quantum embedding

    NASA Astrophysics Data System (ADS)

    Booth, George H.; Chan, Garnet Kin-Lic

    2015-04-01

Density matrix embedding theory (DMET) [Phys. Rev. Lett. 109, 186404 (2012), 10.1103/PhysRevLett.109.186404] introduced an approach to quantum cluster embedding methods whereby the mapping of strongly correlated bulk problems to an impurity with a finite set of bath states was rigorously formulated to exactly reproduce the entanglement of the ground state. The formalism provided similar physics to dynamical mean-field theory at a tiny fraction of the cost, but was inherently limited by the construction of a bath designed to reproduce ground-state, static properties. Here, we generalize the concept of quantum embedding to dynamic properties and demonstrate accurate bulk spectral functions at similarly small computational cost. The proposed spectral DMET utilizes the Schmidt decomposition of a response vector, mapping the bulk dynamic correlation functions to those of a quantum impurity cluster coupled to a set of frequency-dependent bath states. The resultant spectral functions are obtained on the real-frequency axis, without bath discretization error, and allow for the construction of arbitrary dynamic correlation functions. We demonstrate the method on the one- (1D) and two-dimensional (2D) Hubbard model, where we obtain zero-temperature and thermodynamic-limit spectral functions, and show the trivial extension to two-particle Green's functions. This advance therefore extends the scope and applicability of DMET in condensed-matter problems as a computationally tractable route to correlated spectral functions of extended systems, and provides a competitive alternative to dynamical mean-field theory for dynamic quantities.

  20. WE-FG-207B-12: Quantitative Evaluation of a Spectral CT Scanner in a Phantom Study: Results of Spectral Reconstructions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duan, X; Arbique, G; Guild, J

Purpose: To evaluate the quantitative image quality of spectral reconstructions of phantom data from a spectral CT scanner. Methods: The spectral CT scanner (IQon Spectral CT, Philips Healthcare) is equipped with a dual-layer detector and generates conventional 80-140 kVp images and a variety of spectral reconstructions, e.g., virtual monochromatic (VM) images, virtual non-contrast (VNC) images, iodine maps, and effective atomic number (Z) images. A cylindrical solid water phantom (Gammex 472, 33 cm diameter and 5 cm thick) with iodine (2.0-20.0 mg I/ml) and calcium (50-600 mg/ml) rod inserts was scanned at 120 kVp and 27 mGy CTDIvol. Spectral reconstructions were evaluated by comparing image measurements with theoretical values calculated from nominal rod compositions provided by the phantom manufacturer. The theoretical VNC was calculated using water and iodine basis material decomposition, and the theoretical Z was calculated using two common methods, the chemical formula method (Z1) and the dual-energy ratio method (Z2). Results: Beam-hardening-like artifacts between high-attenuation calcium rods (≥300 mg/ml, >800 HU) influenced quantitative measurements, so the quantitative analysis was only performed on iodine rods using the images from the scan with all the calcium rods removed. The CT numbers of the iodine rods in the VM images (50∼150 keV) were close to theoretical values, with an average difference of 2.4±6.9 HU. Compared with theoretical values, the average differences for iodine concentration, VNC CT number and effective Z of the iodine rods were −0.10±0.38 mg/ml, −0.1±8.2 HU, 0.25±0.06 (Z1) and −0.23±0.07 (Z2). Conclusion: The results indicate that the spectral CT scanner generates quantitatively accurate spectral reconstructions at clinically relevant iodine concentrations. Beam-hardening-like artifacts still exist when high-attenuation objects are present, and their impact on patient images needs further investigation. YY is an employee of Philips Healthcare.
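The water/iodine basis material decomposition underlying the theoretical VNC values is, at two energies, a 2x2 linear solve. The basis attenuation numbers below are invented for illustration, not vendor calibration data:

```python
def material_decompose(mu_low, mu_high, basis):
    """Two-material basis decomposition: solve
        mu(E) = c_water * mu_water(E) + c_iodine * mu_iodine(E)
    at two energies for the basis coefficients (a 2x2 linear system)."""
    (w_lo, i_lo), (w_hi, i_hi) = basis  # basis attenuations at (low, high)
    det = w_lo * i_hi - i_lo * w_hi
    c_water = (mu_low * i_hi - i_lo * mu_high) / det
    c_iodine = (w_lo * mu_high - mu_low * w_hi) / det
    return c_water, c_iodine

# Hypothetical basis attenuation values (water, iodine) per energy:
basis = [(0.25, 30.0),   # low energy
         (0.18, 6.0)]    # high energy

# A voxel synthesized as 1.0 part water + 0.01 part iodine:
mu_low = 1.0 * 0.25 + 0.01 * 30.0
mu_high = 1.0 * 0.18 + 0.01 * 6.0
cw, ci = material_decompose(mu_low, mu_high, basis)  # → (1.0, 0.01)
```

Given the coefficients, a VNC image keeps only the water term and an iodine map keeps only the iodine term.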

  1. The Convergence Problems of Eigenfunction Expansions of Elliptic Differential Operators

    NASA Astrophysics Data System (ADS)

    Ahmedov, Anvarjon

    2018-03-01

In the present research we investigate problems concerning the almost-everywhere convergence of multiple Fourier series summed over elliptic levels in the Liouville classes. Sufficient conditions for almost-everywhere convergence, which is among the most difficult problems in harmonic analysis, are obtained. Methods of approximation by multiple Fourier series summed over elliptic levels are applied to obtain suitable estimates for the maximal operator of the spectral decompositions. Obtaining such estimates involves very complicated calculations that depend on the functional structure of the classes of functions. The main idea in proving the almost-everywhere convergence of the eigenfunction expansions in the interpolation spaces is to estimate the maximal operator of the partial sums in the boundary classes and to apply the interpolation theorem for the family of linear operators. In the present work the maximal operator of the elliptic partial sums is estimated in the interpolation classes of Liouville, and the almost-everywhere convergence of the multiple Fourier series by elliptic summation methods is established. Considering the multiple Fourier series as eigenfunction expansions of differential operators helps to translate the functional properties (for example, smoothness) of the Liouville classes into the Fourier coefficients of the functions being expanded. Sufficient conditions for convergence of the multiple Fourier series of functions from Liouville classes are obtained in terms of smoothness and dimension. Such results are highly effective in solving boundary problems with periodic boundary conditions occurring in the spectral theory of differential operators. The investigation of multiple Fourier series by modern methods of harmonic analysis incorporates the wide use of methods from functional analysis, mathematical physics, modern operator theory and spectral decomposition. A new method for the best approximation of a square-integrable function by multiple Fourier series summed over the elliptic levels is established. Using the best approximation, the Lebesgue constant corresponding to the elliptic partial sums is estimated; the latter is applied to obtain an estimate for the maximal operator in the classes of Liouville.
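For orientation, the elliptic partial sums and their maximal operator discussed above can be written in standard notation (our notation for a generic setup, not quoted from the paper):

```latex
E_\lambda f(x) \;=\; \sum_{Q(n) \le \lambda} c_n(f)\, e^{i\langle n, x\rangle},
\qquad
E_{*} f(x) \;=\; \sup_{\lambda > 0} \bigl| E_\lambda f(x) \bigr|,
```

where $Q$ is the elliptic polynomial (a positive definite quadratic form) defining the summation levels and $c_n(f)$ are the Fourier coefficients of $f$. Almost-everywhere convergence of $E_\lambda f \to f$ is then derived from boundedness of the maximal operator $E_{*}$ on the relevant Liouville classes.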

  2. Solar rotational modulations of spectral irradiance and correlations with the variability of total solar irradiance

    NASA Astrophysics Data System (ADS)

    Lee, Jae N.; Cahalan, Robert F.; Wu, Dong L.

    2016-09-01

    Aims: We characterize the solar rotational modulations of spectral solar irradiance (SSI) and compare them with the corresponding changes of total solar irradiance (TSI). Solar rotational modulations of TSI and SSI at wavelengths between 120 and 1600 nm are identified over one hundred Carrington rotational cycles during 2003-2013. Methods: The SORCE (Solar Radiation and Climate Experiment) and TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics)/SEE (Solar EUV Experiment) measured and SATIRE-S modeled solar irradiances are analyzed using the EEMD (Ensemble Empirical Mode Decomposition) method to determine the phase and amplitude of 27-day solar rotational variation in TSI and SSI. Results: The mode decomposition clearly identifies 27-day solar rotational variations in SSI between 120 and 1600 nm, and there is a robust wavelength dependence in the phase of the rotational mode relative to that of TSI. The rotational modes of visible (VIS) and near infrared (NIR) are in phase with the mode of TSI, but the phase of the rotational mode of ultraviolet (UV) exhibits differences from that of TSI. While it is questionable that the VIS to NIR portion of the solar spectrum has yet been observed with sufficient accuracy and precision to determine the 11-year solar cycle variations, the temporal variations over one hundred cycles of 27-day solar rotation, independent of the two solar cycles in which they are embedded, show distinct solar rotational modulations at each wavelength.

  3. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    NASA Astrophysics Data System (ADS)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained widespread use for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Oftentimes, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
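Standard (exact) DMD, before any de-biasing, reduces in a two-dimensional noise-free case to a least-squares fit of the linear map between snapshot pairs. A minimal sketch with an invented test system follows; real DMD implementations use an SVD-based pseudoinverse and the subspace projection stage that TDMD modifies:

```python
import math

def dmd_eigs(pairs):
    """Exact DMD for two-dimensional snapshots: fit the least-squares
    linear map A with y ≈ A x over all snapshot pairs (x, y), then
    return its eigenvalues from the characteristic polynomial."""
    G = [[0.0, 0.0], [0.0, 0.0]]  # G = X X^T
    H = [[0.0, 0.0], [0.0, 0.0]]  # H = Y X^T
    for xk, yk in pairs:
        for i in range(2):
            for j in range(2):
                G[i][j] += xk[i] * xk[j]
                H[i][j] += yk[i] * xk[j]
    det_g = G[0][0] * G[1][1] - G[0][1] * G[1][0]
    g_inv = [[G[1][1] / det_g, -G[0][1] / det_g],
             [-G[1][0] / det_g, G[0][0] / det_g]]
    A = [[sum(H[i][k] * g_inv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
    tr = A[0][0] + A[1][1]
    det_a = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = math.sqrt(tr * tr - 4 * det_a)
    return (tr + disc) / 2, (tr - disc) / 2

# Noise-free snapshots of x_{k+1} = A_true x_k, eigenvalues 0.9 and 0.5.
A_true = [[0.9, 0.2], [0.0, 0.5]]
state = [1.0, 1.0]
pairs = []
for _ in range(5):
    nxt = [A_true[0][0] * state[0] + A_true[0][1] * state[1],
           A_true[1][0] * state[0] + A_true[1][1] * state[1]]
    pairs.append((state, nxt))
    state = nxt
lam1, lam2 = dmd_eigs(pairs)  # recovers ≈ 0.9 and ≈ 0.5
```

With sensor noise added to the snapshots, this asymmetric least-squares fit acquires exactly the bias the paper describes, which motivates the total-least-squares (TDMD) variant.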

  4. Solar Rotational Modulations of Spectral Irradiance and Correlations with the Variability of Total Solar Irradiance

    NASA Technical Reports Server (NTRS)

    Lee, Jae N.; Cahalan, Robert F.; Wu, Dong L.

    2016-01-01

    Aims: We characterize the solar rotational modulations of spectral solar irradiance (SSI) and compare them with the corresponding changes of total solar irradiance (TSI). Solar rotational modulations of TSI and SSI at wavelengths between 120 and 1600 nm are identified over one hundred Carrington rotational cycles during 2003-2013. Methods: The SORCE (Solar Radiation and Climate Experiment) and TIMED (Thermosphere Ionosphere Mesosphere Energetics and Dynamics)/SEE (Solar EUV Experiment) measured and SATIRE-S modeled solar irradiances are analyzed using the EEMD (Ensemble Empirical Mode Decomposition) method to determine the phase and amplitude of 27-day solar rotational variation in TSI and SSI. Results: The mode decomposition clearly identifies 27-day solar rotational variations in SSI between 120 and 1600 nm, and there is a robust wavelength dependence in the phase of the rotational mode relative to that of TSI. The rotational modes of visible (VIS) and near infrared (NIR) are in phase with the mode of TSI, but the phase of the rotational mode of ultraviolet (UV) exhibits differences from that of TSI. While it is questionable that the VIS to NIR portion of the solar spectrum has yet been observed with sufficient accuracy and precision to determine the 11-year solar cycle variations, the temporal variations over one hundred cycles of 27-day solar rotation, independent of the two solar cycles in which they are embedded, show distinct solar rotational modulations at each wavelength.
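EEMD itself is an involved sifting-and-ensemble procedure; as a much simpler illustration of the end product — the amplitude and phase of a 27-day rotational modulation — one can project a detrended irradiance series onto the 27-day harmonic over an integer number of rotations. The series below is synthetic, not SORCE/TIMED data:

```python
import numpy as np

period = 27.0                        # days, solar rotation
t = np.arange(27 * 13)               # 13 complete rotations (synthetic span)
# synthetic TSI-like series: a mean level plus a 27-day rotational modulation
tsi = 1361.0 + 0.5 * np.cos(2 * np.pi * t / period + 0.8)

# complex amplitude of the 27-day harmonic (exact over whole cycles)
z = np.exp(-2j * np.pi * t / period)
c = 2.0 * np.mean((tsi - tsi.mean()) * z)
amplitude, phase = np.abs(c), np.angle(c)   # recovers 0.5 and 0.8 rad
```

Comparing the recovered phase at each wavelength against the TSI phase is the kind of comparison the rotational-mode analysis performs, with EEMD doing the band isolation adaptively instead of assuming a fixed period.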

  5. Assessing plant residue decomposition in soil using DRIFT spectroscopy

    NASA Astrophysics Data System (ADS)

    Ouellette, Lance; Van Eerd, Laura; Voroney, Paul

    2016-04-01

Assessment of the decomposition of plant residues typically involves the use of tracer techniques combined with measurements of soil respiration. This laboratory study evaluated use of Diffuse Reflectance Fourier Transform (DRIFT) spectroscopy for its potential to assess plant residue decomposition in soil. A sandy loam soil (Orthic Humic Gleysol) obtained from a field research plot was passed moist (~70% of field capacity) through a 4.75 mm sieve to remove larger crop residues. The experimental design consisted of a randomized complete block with four replicates of ten above-ground cover crop residue-corn stover combinations, where sampling time was blocked. Two incubations were set up for 1) DRIFT analysis: field moist soil (250 g ODW) was placed in 500 mL glass jars, and 2) CO2 evolution: 100 g (ODW) was placed in 2 L jars. Soils were amended with the plant residues (oven-dried at 60°C and ground to <2 mm) at rates equivalent to field mean above-ground biomass yields, then moistened to 60% water holding capacity and incubated in the dark at 22±3°C. Measurements for DRIFT and CO2-C evolved were taken after 0.5, 2, 4, 7, 10, 15, 22, 29, 36, 43, 50, 64 and 72 d. DRIFT spectral data (100 co-added scans per sample) were recorded with a Varian Cary 660 FT-IR Spectrometer equipped with an EasiDiff Diffuse Reflectance accessory operated at a resolution of 4 cm-1 over the mid-infrared spectrum from 4000 to 400 cm-1. DRIFT spectra of amended soils indicated peak areas of aliphatics at 2930 cm-1, of aromatics at 1620 and 1530 cm-1, and of polysaccharides at 1106 and 1036 cm-1. Evolved CO2 was measured by the alkali trap method (1 M NaOH); the amount of plant residue-C remaining in soil was calculated from the difference in the quantity of plant residue C added and the additional CO2-C evolved from the amended soil.
First-order model parameters of the change in polysaccharide peak area over the incubation were related to those generated from the plant residue C decay curves obtained from respiration measurements. The DRIFT method demonstrated that spectral areas consistent with labile aliphatic-C bands (2930 cm-1) can also be used to describe residue C decomposition. This is the first study to demonstrate the usefulness of DRIFT spectroscopy to characterize plant decomposition in soil.
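The first-order fitting step can be sketched as follows (the peak-area values are made up for illustration, not the study's data); for a decay toward zero, the rate constant falls out of a linear regression on the log-transformed series:

```python
import numpy as np

def first_order_k(t, c):
    """Estimate k and c0 in the first-order model c(t) = c0 * exp(-k*t)
    by linear regression on log(c)."""
    slope, intercept = np.polyfit(t, np.log(c), 1)
    return -slope, np.exp(intercept)

# sampling days from the incubation schedule; peak areas are synthetic
t = np.array([0.5, 2, 4, 7, 10, 15, 22, 29, 36, 43, 50, 64, 72], float)
area = 100.0 * np.exp(-0.03 * t)      # hypothetical polysaccharide peak area
k, a0 = first_order_k(t, area)
```

Comparing the rate constant fitted to spectral peak areas against the one fitted to the respiration-derived residue-C curve is exactly the kind of cross-check the abstract describes.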

  6. S-EMG signal compression based on domain transformation and spectral shape dynamic bit allocation

    PubMed Central

    2014-01-01

Background Surface electromyographic (S-EMG) signal processing has been emerging in the past few years due to its non-invasive assessment of muscle function and structure and because of the fast growing rate of digital technology which brings about new solutions and applications. Factors such as sampling rate, quantization word length, number of channels and experiment duration can lead to a potentially large volume of data. Efficient transmission and/or storage of S-EMG signals is currently a research issue. That is the aim of this work. Methods This paper presents an algorithm for the data compression of surface electromyographic (S-EMG) signals recorded during an isometric contraction protocol and during dynamic experimental protocols such as the cycling activity. The proposed algorithm is based on the discrete wavelet transform to perform spectral decomposition and de-correlation, on a dynamic bit allocation procedure to code the wavelet-transformed coefficients, and on an entropy coding to minimize the remaining redundancy and to pack all data. The bit allocation scheme is based on mathematical decreasing spectral shape models, which assign a shorter digital word length to high-frequency wavelet-transformed coefficients. Four bit allocation spectral shape methods were implemented and compared: decreasing exponential spectral shape, decreasing linear spectral shape, decreasing square-root spectral shape and rotated hyperbolic tangent spectral shape. Results The proposed method is demonstrated and evaluated for an isometric protocol and for a dynamic protocol using a real S-EMG signal data bank. Objective performance evaluation metrics are presented. In addition, comparisons with other encoders proposed in scientific literature are shown. Conclusions The decreasing bit allocation shape applied to the quantized wavelet coefficients combined with arithmetic coding yields an efficient procedure.
The performance comparisons of the proposed S-EMG data compression algorithm with the established techniques found in scientific literature have shown promising results. PMID:24571620
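A toy version of this pipeline can be sketched with an orthonormal Haar DWT plus a decreasing bit-allocation quantizer (the real codec uses longer wavelets, the spectral-shape models, and entropy coding, all omitted here):

```python
import numpy as np

def haar_dwt(x, levels):
    """Multi-level orthonormal Haar wavelet decomposition."""
    coeffs, a = [], x.astype(float)
    for _ in range(levels):
        a, d = (a[0::2] + a[1::2]) / np.sqrt(2), (a[0::2] - a[1::2]) / np.sqrt(2)
        coeffs.append(d)                 # detail coefficients, finest first
    coeffs.append(a)                     # final approximation
    return coeffs

def haar_idwt(coeffs):
    """Inverse of haar_dwt."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * a.size)
        out[0::2], out[1::2] = (a + d) / np.sqrt(2), (a - d) / np.sqrt(2)
        a = out
    return a

def quantize(d, bits):
    """Uniform quantizer with the given word length."""
    m = np.abs(d).max()
    if bits <= 0 or m == 0:
        return np.zeros_like(d)
    step = m / 2 ** (bits - 1)
    return np.round(d / step) * step

x = np.sin(2 * np.pi * np.arange(64) / 16)    # stand-in for an S-EMG frame
coeffs = haar_dwt(x, 3)
bits = [4, 6, 8, 8]       # decreasing shape: fewer bits for high frequencies
rec = haar_idwt([quantize(c, b) for c, b in zip(coeffs, bits)])
```

The `bits` list plays the role of the decreasing spectral-shape model: the finest (highest-frequency) detail band gets the shortest word length.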

  7. SDSS-IV MaNGA: bulge-disc decomposition of IFU data cubes (BUDDI)

    NASA Astrophysics Data System (ADS)

    Johnston, Evelyn J.; Häußler, Boris; Aragón-Salamanca, Alfonso; Merrifield, Michael R.; Bamford, Steven; Bershady, Matthew A.; Bundy, Kevin; Drory, Niv; Fu, Hai; Law, David; Nitschelm, Christian; Thomas, Daniel; Roman Lopes, Alexandre; Wake, David; Yan, Renbin

    2017-02-01

    With the availability of large integral field unit (IFU) spectral surveys of nearby galaxies, there is now the potential to extract spectral information from across the bulges and discs of galaxies in a systematic way. This information can address questions such as how these components built up with time, how galaxies evolve and whether their evolution depends on other properties of the galaxy such as its mass or environment. We present bulge-disc decomposition of IFU data cubes (BUDDI), a new approach to fit the two-dimensional light profiles of galaxies as a function of wavelength to extract the spectral properties of these galaxies' discs and bulges. The fitting is carried out using GALFITM, a modified form of GALFIT which can fit multiwaveband images simultaneously. The benefit of this technique over traditional multiwaveband fits is that the stellar populations of each component can be constrained using knowledge over the whole image and spectrum available. The decomposition has been developed using commissioning data from the Sloan Digital Sky Survey-IV Mapping Nearby Galaxies at APO (MaNGA) survey with redshifts z < 0.14 and coverage of at least 1.5 effective radii for a spatial resolution of 2.5 arcsec full width at half-maximum and field of view of > 22 arcsec, but can be applied to any IFU data of a nearby galaxy with similar or better spatial resolution and coverage. We present an overview of the fitting process, the results from our tests, and we finish with example stellar population analyses of early-type galaxies from the MaNGA survey to give an indication of the scientific potential of applying bulge-disc decomposition to IFU data.

  8. International Symposium on Numerical Methods in Engineering, 5th, Ecole Polytechnique Federale de Lausanne, Switzerland, Sept. 11-15, 1989, Proceedings. Volumes 1 & 2

    NASA Astrophysics Data System (ADS)

    Gruber, Ralph; Periaux, Jaques; Shaw, Richard Paul

    Recent advances in computational mechanics are discussed in reviews and reports. Topics addressed include spectral superpositions on finite elements for shear banding problems, strain-based finite plasticity, numerical simulation of hypersonic viscous continuum flow, constitutive laws in solid mechanics, dynamics problems, fracture mechanics and damage tolerance, composite plates and shells, contact and friction, metal forming and solidification, coupling problems, and adaptive FEMs. Consideration is given to chemical flows, convection problems, free boundaries and artificial boundary conditions, domain-decomposition and multigrid methods, combustion and thermal analysis, wave propagation, mixed and hybrid FEMs, integral-equation methods, optimization, software engineering, and vector and parallel computing.

  9. Enhancing our View of the Reservoir: New Insights into Deepwater Gulf of Mexico fields using Frequency Decomposition

    NASA Astrophysics Data System (ADS)

    Murat, M.

    2017-12-01

Color-blended frequency decomposition is a seismic attribute that can be used to draw out and visualize geomorphological features enabling a better understanding of reservoir architecture and connectivity for both exploration and field development planning. Color-blended frequency decomposition was applied to seismic data in several areas of interest in the Deepwater Gulf of Mexico. The objective was stratigraphic characterization to better define reservoir extent, highlight depositional features, identify thicker reservoir zones and examine potential connectivity issues due to stratigraphic variability. Frequency decomposition is a technique to analyze changes in seismic frequency caused by changes in the reservoir thickness, lithology and fluid content. This technique decomposes or separates the seismic frequency spectra into discrete bands of frequency-limited seismic data using digital filters. The workflow consists of frequency (spectral) decomposition, RGB color blending of three frequency slices, and horizon or stratal slicing of the color-blended frequency data for interpretation. Patterns were visualized and identified in the data that were not obvious on standard stacked seismic sections. These seismic patterns were interpreted and compared to known geomorphological patterns and their environment of deposition. From this we inferred the distribution of potential reservoir sand versus non-reservoir shale and even finer scale details such as the overall direction of the sediment transport and relative thickness. In exploratory areas, stratigraphic characterization from spectral decomposition is used for prospect risking and well planning. Where well control exists, we can validate the seismic observations and our interpretation and use the stratigraphic/geomorphological information to better inform decisions on the need for and placement of development wells.
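The band-splitting step of such a workflow can be sketched on a synthetic 1-D trace (the band edges and tone frequencies below are arbitrary placeholders for real seismic volumes); each band's magnitude would then drive one of the R, G, B channels:

```python
import numpy as np

def band_component(trace, fs, f_lo, f_hi):
    """Isolate one frequency band of a trace via FFT masking."""
    F = np.fft.rfft(trace)
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fs)
    F[(freqs < f_lo) | (freqs >= f_hi)] = 0.0
    return np.fft.irfft(F, n=trace.size)

fs = 500.0
t = np.arange(500) / fs                          # 1 s synthetic trace
trace = (1.00 * np.sin(2 * np.pi * 15 * t)
         + 0.50 * np.sin(2 * np.pi * 35 * t)
         + 0.25 * np.sin(2 * np.pi * 60 * t))

bands = [(10, 25), (25, 50), (50, 80)]           # low / mid / high (Hz)
rms = [np.sqrt(np.mean(band_component(trace, fs, lo, hi) ** 2))
       for lo, hi in bands]
rgb = np.array(rms) / max(rms)                   # normalized color channels
```

In practice the decomposition is applied trace-by-trace over a volume and the three normalized band magnitudes are blended into a single color image along a horizon or stratal slice.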

  10. Auto-combustion synthesis, Mössbauer study and catalytic properties of copper-manganese ferrites

    NASA Astrophysics Data System (ADS)

    Velinov, N.; Petrova, T.; Tsoncheva, T.; Genova, I.; Koleva, K.; Kovacheva, D.; Mitov, I.

    2016-12-01

Spinel ferrites with the nominal composition Cu0.5Mn0.5Fe2O4 and different distributions of the ions are obtained by an auto-combustion method. Mössbauer spectroscopy, X-ray Diffraction, Thermogravimetry-Differential Scanning Calorimetry, Scanning Electron Microscopy and a catalytic test in the reaction of methanol decomposition are used to characterize the synthesized materials. The spectral results evidence that the phase composition and microstructure of the synthesized materials, as well as the cation distribution, depend on the preparation conditions. By varying the pH of the initial solution, the microstructure, ferrite crystallite size, cation oxidation state and distribution of ions in the spinel structure can be controlled. The catalytic behaviour of the ferrites in the reaction of methanol decomposition also depends on the pH of the initial solution. Reduction transformations of the mixed ferrites, accompanied by the formation of Hägg carbide χ-Fe5C2, were observed under the influence of the reaction medium.

  11. Gyroscope-driven mouse pointer with an EMOTIV® EEG headset and data analysis based on Empirical Mode Decomposition.

    PubMed

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-08-14

This paper presents a project on the development of a cursor control emulating the typical operations of a computer-mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The detection rate obtained on average was 94.9%. The experimental setup and some obtained results are presented.
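The jitter-removal step can be illustrated with a scalar Kalman filter on a random-walk position model (the noise parameters below are toy values, not the paper's tuning):

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=0.5):
    """Scalar Kalman filter with a random-walk position model:
    q = process-noise variance, r = measurement-noise variance."""
    x, p = z[0], 1.0
    out = [x]
    for meas in z[1:]:
        p = p + q                      # predict
        k = p / (p + r)                # Kalman gain
        x = x + k * (meas - x)         # update with the innovation
        p = (1.0 - k) * p
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(0)
z = 5.0 + 0.3 * rng.standard_normal(200)   # jittery "cursor position" samples
filtered = kalman_smooth(z)
```

A full 2-D cursor would run one such filter per axis (or a joint position-velocity state); the principle of trading responsiveness (large q) against smoothness (small q) is the same.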

  12. Gyroscope-Driven Mouse Pointer with an EMOTIV® EEG Headset and Data Analysis Based on Empirical Mode Decomposition

    PubMed Central

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-01-01

This paper presents a project on the development of a cursor control emulating the typical operations of a computer-mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as the state estimator for mouse position control and jitter removal. The detection rate obtained on average was 94.9%. The experimental setup and some obtained results are presented. PMID:23948873

  13. Application of Koopmans' theorem for density functional theory to full valence-band photoemission spectroscopy modeling.

    PubMed

    Li, Tsung-Lung; Lu, Wen-Cai

    2015-10-05

In this work, Koopmans' theorem for Kohn-Sham density functional theory (KS-DFT) is applied to photoemission spectra (PES) modeling over the entire valence band. To examine the validity of this application, a PES modeling scheme is developed to facilitate a full valence-band comparison of theoretical PES spectra with experiments. The PES model incorporates the variations of electron ionization cross-sections over atomic orbitals and a linear dispersion of spectral broadening widths. KS-DFT simulations of pristine rubrene (5,6,11,12-tetraphenyltetracene) and a potassium-rubrene complex are performed, and the simulation results are used as the input to the PES models. Two conclusions are reached. First, decompositions of the theoretical total spectra show that the dissociated electron of the potassium mainly remains on the backbone and has little effect on the electronic structures of the phenyl side groups. This and other electronic-structure results deduced from the spectral decompositions have been qualitatively obtained with the anionic approximation to potassium-rubrene complexes. The qualitative validity of the anionic approximation is thus verified. Second, comparison of the theoretical PES with the experiments shows that the full-scale simulations combined with the PES modeling methods greatly enhance the agreement on spectral shapes over the anionic approximation. This agreement of the theoretical PES spectra with the experiments over the full valence band can be regarded, to some extent, as a collective validation of the application of Koopmans' theorem for KS-DFT to valence-band PES, at least for this hydrocarbon and its alkali-adsorbed complex. Copyright © 2015 Elsevier B.V. All rights reserved.
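The kind of PES model described — cross-section-weighted lines at orbital energies with a linear dispersion of broadening widths — can be sketched as follows (Gaussian line shapes and all numerical values are hypothetical placeholders, not the paper's parameters):

```python
import numpy as np

def pes_spectrum(orbital_e, cross_sections, E, w0=0.3, w1=0.02):
    """Model spectrum: one normalized Gaussian per orbital level, weighted
    by its ionization cross-section, with a linearly dispersed broadening
    width w(e) = w0 + w1 * |e|."""
    spec = np.zeros_like(E)
    for e, sig in zip(orbital_e, cross_sections):
        w = w0 + w1 * abs(e)
        spec += sig * np.exp(-0.5 * ((E - e) / w) ** 2) / (w * np.sqrt(2 * np.pi))
    return spec

E = np.linspace(-15.0, 5.0, 2001)     # binding-energy grid (eV)
levels = [-10.0, -6.0, -2.0]          # hypothetical orbital energies
sigmas = [1.0, 2.0, 0.5]              # hypothetical cross-sections
spec = pes_spectrum(levels, sigmas, E)
```

With KS-DFT eigenvalues as `levels` (per Koopmans' theorem) and tabulated atomic-orbital cross-sections as the weights, summing such broadened lines gives a theoretical spectrum directly comparable to the measured one.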

  14. Framework for computing the spatial coherence effects of polycapillary x-ray optics

    PubMed Central

    Zysk, Adam M.; Schoonover, Robert W.; Xu, Qiaofeng; Anastasio, Mark A.

    2012-01-01

    Despite the extensive use of polycapillary x-ray optics for focusing and collimating applications, there remains a significant need for characterization of the coherence properties of the output wavefield. In this work, we present the first quantitative computational method for calculation of the spatial coherence effects of polycapillary x-ray optical devices. This method employs the coherent mode decomposition of an extended x-ray source, geometric optical propagation of individual wavefield modes through a polycapillary device, output wavefield calculation by ray data resampling onto a uniform grid, and the calculation of spatial coherence properties by way of the spectral degree of coherence. PMID:22418154
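The last two ingredients — a cross-spectral density built from coherent modes, and the spectral degree of coherence evaluated on it — can be sketched as follows (the Gaussian mode shapes and weights are arbitrary placeholders, not a model of any real source):

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 64)          # transverse coordinate grid

def csd_from_modes(weights, modes):
    """Coherent-mode expansion of the cross-spectral density:
    W(x1, x2) = sum_n lambda_n * conj(phi_n(x1)) * phi_n(x2)."""
    W = np.zeros((x.size, x.size), dtype=complex)
    for lam, phi in zip(weights, modes):
        W += lam * np.outer(phi.conj(), phi)
    return W

def degree_of_coherence(W):
    """Spectral degree of coherence mu = W(x1,x2) / sqrt(S(x1) S(x2))."""
    S = np.real(np.diag(W))
    return W / np.sqrt(np.outer(S, S))

modes = [np.exp(-((x - c) ** 2) / 0.1) for c in (-0.3, 0.0, 0.3)]
mu = degree_of_coherence(csd_from_modes([1.0, 0.5, 0.2], modes))
```

A single coherent mode gives |mu| = 1 everywhere (full coherence); mixing several modes with comparable weights drives |mu| below 1, which is the quantity the framework tracks through the optic.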

  15. Spectral biclustering of microarray data: coclustering genes and conditions.

    PubMed

    Kluger, Yuval; Basri, Ronen; Chang, Joseph T; Gerstein, Mark

    2003-04-01

    Global analyses of RNA expression levels are useful for classifying genes and overall phenotypes. Often these classification problems are linked, and one wants to find "marker genes" that are differentially expressed in particular sets of "conditions." We have developed a method that simultaneously clusters genes and conditions, finding distinctive "checkerboard" patterns in matrices of gene expression data, if they exist. In a cancer context, these checkerboards correspond to genes that are markedly up- or downregulated in patients with particular types of tumors. Our method, spectral biclustering, is based on the observation that checkerboard structures in matrices of expression data can be found in eigenvectors corresponding to characteristic expression patterns across genes or conditions. In addition, these eigenvectors can be readily identified by commonly used linear algebra approaches, in particular the singular value decomposition (SVD), coupled with closely integrated normalization steps. We present a number of variants of the approach, depending on whether the normalization over genes and conditions is done independently or in a coupled fashion. We then apply spectral biclustering to a selection of publicly available cancer expression data sets, and examine the degree to which the approach is able to identify checkerboard structures. Furthermore, we compare the performance of our biclustering methods against a number of reasonable benchmarks (e.g., direct application of SVD or normalized cuts to raw data).
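A stripped-down illustration of the core idea — singular vectors revealing a hidden checkerboard in synthetic expression data — is sketched below (the paper's coupled normalization variants are omitted; for a clean two-block structure the raw SVD already exposes the partition in its second singular vector pair):

```python
import numpy as np

rng = np.random.default_rng(0)
# hidden structure: two row (gene) classes x two column (condition) classes
row_cls = np.repeat([0, 1], 20)
col_cls = np.repeat([0, 1], 15)
block_means = np.array([[2.0, 0.5],
                        [0.5, 2.0]])          # the "checkerboard"
X = block_means[np.ix_(row_cls, col_cls)] + 0.1 * rng.standard_normal((40, 30))

U, s, Vt = np.linalg.svd(X)
# the leading singular vector pair is roughly constant; the second pair
# carries the checkerboard, so its sign pattern partitions rows and columns
rows = (U[:, 1] > 0).astype(int)
cols = (Vt[1] > 0).astype(int)
```

The recovered row and column labels match the hidden classes up to a global sign flip; the normalization steps in the full method make the same mechanism robust when mean expression levels vary strongly across genes and conditions.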

  16. RIACS

    NASA Technical Reports Server (NTRS)

    Oliger, Joseph

    1997-01-01

    Topics considered include: high-performance computing; cognitive and perceptual prostheses (computational aids designed to leverage human abilities); autonomous systems. Also included: development of a 3D unstructured grid code based on a finite volume formulation and applied to the Navier-stokes equations; Cartesian grid methods for complex geometry; multigrid methods for solving elliptic problems on unstructured grids; algebraic non-overlapping domain decomposition methods for compressible fluid flow problems on unstructured meshes; numerical methods for the compressible navier-stokes equations with application to aerodynamic flows; research in aerodynamic shape optimization; S-HARP: a parallel dynamic spectral partitioner; numerical schemes for the Hamilton-Jacobi and level set equations on triangulated domains; application of high-order shock capturing schemes to direct simulation of turbulence; multicast technology; network testbeds; supercomputer consolidation project.

  17. Investigation of shock-induced chemical decomposition of sensitized nitromethane through time-resolved Raman spectroscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pangilinan, G.I.; Constantinou, C.P.; Gruzdkov, Y.A.

    1996-07-01

Molecular processes associated with shock-induced chemical decomposition of a mixture of nitromethane with ethylenediamine (0.1 wt%) are examined using time-resolved Raman scattering. When shocked by stepwise loading to 14.2 GPa pressure, changes in the nitromethane vibrational modes and the spectral background characterize the onset of reaction. The CN stretch mode softens and disappears even as the NO2 and CH3 stretch modes, though modified, retain their identities. The shape and intensity of the spectral background also show changes characteristic of reaction. Changes in the background, which are observed even at lower peak pressures of 11.4 GPa, are assigned to luminescence from reaction intermediates. The implications of these results for various molecular models of sensitization are discussed.

  18. Bird sound spectrogram decomposition through Non-Negative Matrix Factorization for the acoustic classification of bird species.

    PubMed

    Ludeña-Choez, Jimmy; Quispe-Soncco, Raisa; Gallardo-Antolín, Ascensión

    2017-01-01

    Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC.
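The spectrogram factorization at the heart of the NMF_CC/H_CC features can be sketched with plain Lee-Seung multiplicative updates on toy data (real front-ends would use a proper spectrogram and a tuned NMF implementation):

```python
import numpy as np

def nmf(V, k, iters=300, seed=0):
    """Factor V ~ W @ H (all entries non-negative) by multiplicative
    updates for the Frobenius-norm objective."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], k)) + 1e-3
    H = rng.random((k, V.shape[1])) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# toy "spectrogram": two Gaussian spectral shapes with random activations
f = np.linspace(0.0, 1.0, 32)
basis = np.stack([np.exp(-((f - 0.2) / 0.05) ** 2),
                  np.exp(-((f - 0.7) / 0.05) ** 2)], axis=1)   # 32 x 2
acts = np.random.default_rng(1).random((2, 40))                # 2 x 40 frames
V = basis @ acts
W, H = nmf(V, 2)
```

In the terms of the abstract, the columns of W act as a learned filter bank (the basis for NMF_CC-style features) while the rows of H are the activation coefficients from which H_CC-style features are derived.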

  19. Bird sound spectrogram decomposition through Non-Negative Matrix Factorization for the acoustic classification of bird species

    PubMed Central

    Quispe-Soncco, Raisa

    2017-01-01

    Feature extraction for Acoustic Bird Species Classification (ABSC) tasks has traditionally been based on parametric representations that were specifically developed for speech signals, such as Mel Frequency Cepstral Coefficients (MFCC). However, the discrimination capabilities of these features for ABSC could be enhanced by accounting for the vocal production mechanisms of birds, and, in particular, the spectro-temporal structure of bird sounds. In this paper, a new front-end for ABSC is proposed that incorporates this specific information through the non-negative decomposition of bird sound spectrograms. It consists of the following two different stages: short-time feature extraction and temporal feature integration. In the first stage, which aims at providing a better spectral representation of bird sounds on a frame-by-frame basis, two methods are evaluated. In the first method, cepstral-like features (NMF_CC) are extracted by using a filter bank that is automatically learned by means of the application of Non-Negative Matrix Factorization (NMF) on bird audio spectrograms. In the second method, the features are directly derived from the activation coefficients of the spectrogram decomposition as performed through NMF (H_CC). The second stage summarizes the most relevant information contained in the short-time features by computing several statistical measures over long segments. The experiments show that the use of NMF_CC and H_CC in conjunction with temporal integration significantly improves the performance of a Support Vector Machine (SVM)-based ABSC system with respect to conventional MFCC. PMID:28628630

  20. An iterative approach for compound detection in an unknown pharmaceutical drug product: Application on Raman microscopy.

    PubMed

    Boiret, Mathieu; Gorretta, Nathalie; Ginot, Yves-Michel; Roger, Jean-Michel

    2016-02-20

Raman chemical imaging provides both spectral and spatial information on a pharmaceutical drug product. Even if the main objective of chemical imaging is to obtain distribution maps of each formulation compound, identification of pure signals in a mixture dataset remains of great interest. In this work, an iterative approach is proposed to identify the compounds in a pharmaceutical drug product, assuming that the chemical composition of the product is not known by the analyst and that a low dose compound can be present in the studied medicine. The proposed approach uses a spectral library, spectral distances and orthogonal projections to iteratively detect the pure compounds of a tablet. Since the proposed method is not based on variance decomposition, it should be well adapted for a drug product which contains a low dose compound, interpreted as a compound located in a few pixels and with low spectral contributions. The method is tested on a tablet specifically manufactured for this study with one active pharmaceutical ingredient and five excipients. A spectral library, constituted of 24 pure pharmaceutical compounds, is used as a reference spectral database. Pure spectra of the active and excipients, including a modification of the crystalline form and a low dose compound, are iteratively detected. Once the pure spectra are identified, a multivariate curve resolution-alternating least squares process is performed on the data to provide distribution maps of each compound in the studied sample. Distributions of the two crystalline forms of the active and the five excipients were in accordance with the theoretical formulation. Copyright © 2015 Elsevier B.V. All rights reserved.
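The iterate-detect-deflate loop can be sketched as follows (synthetic Gaussian "spectra"; the published method's spectral-distance criteria are richer than the plain correlation score used here). Note how the low-dose compound, present in only two pixels with a small coefficient, is still found once the dominant compound has been projected out:

```python
import numpy as np

def detect_compounds(D, library, n_compounds):
    """Iteratively pick the library spectrum best matching the residual
    data, then deflate the data by projecting that spectrum out."""
    L = library / np.linalg.norm(library, axis=1, keepdims=True)
    R = D.astype(float).copy()
    picked = []
    for _ in range(n_compounds):
        scores = np.abs(L @ R.T).max(axis=1)   # best match over all pixels
        if picked:
            scores[picked] = -np.inf           # do not pick twice
        best = int(np.argmax(scores))
        picked.append(best)
        v = L[best]
        R -= np.outer(R @ v, v)                # orthogonal projection
    return picked

x = np.arange(50.0)
library = np.stack([np.exp(-((x - c) / 1.5) ** 2) for c in (5, 15, 25, 35, 45)])
# 10 pixels: mostly compound 1, plus a low-dose compound 3 in two pixels
D = np.tile(library[1], (10, 1))
D[8:] += 0.2 * library[3]
```

Once the pure spectra are identified this way, they can seed a curve-resolution step such as MCR-ALS to produce the distribution maps.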

  1. Advanced Background Subtraction Applied to Aeroacoustic Wind Tunnel Testing

    NASA Technical Reports Server (NTRS)

    Bahr, Christopher J.; Horne, William C.

    2015-01-01

    An advanced form of background subtraction is presented and applied to aeroacoustic wind tunnel data. A variant of this method has seen use in other fields such as climatology and medical imaging. The technique, based on an eigenvalue decomposition of the background noise cross-spectral matrix, is robust against situations where isolated background auto-spectral levels are measured to be higher than levels of combined source and background signals. It also provides an alternate estimate of the cross-spectrum, which previously might have poor definition for low signal-to-noise ratio measurements. Simulated results indicate similar performance to conventional background subtraction when the subtracted spectra are weaker than the true contaminating background levels. Superior performance is observed when the subtracted spectra are stronger than the true contaminating background levels. Experimental results show limited success in recovering signal behavior for data where conventional background subtraction fails. They also demonstrate the new subtraction technique's ability to maintain a proper coherence relationship in the modified cross-spectral matrix. Beam-forming and de-convolution results indicate the method can successfully separate sources. Results also show a reduced need for the use of diagonal removal in phased array processing, at least for the limited data sets considered.
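One simplified way to see how an eigenvalue decomposition of the background cross-spectral matrix (CSM) can drive subtraction is the deflation below. This is an illustrative sketch, not the paper's exact formulation:

```python
import numpy as np

def deflate_background(G_meas, G_bkg, n_modes):
    """Remove the dominant background eigen-modes from a measured
    cross-spectral matrix by orthogonal projection."""
    _, V = np.linalg.eigh(G_bkg)          # eigenvalues in ascending order
    Vd = V[:, -n_modes:]                  # dominant background directions
    P = np.eye(G_bkg.shape[0]) - Vd @ Vd.conj().T
    return P @ G_meas @ P

# 4-microphone toy case: rank-1 source CSM plus rank-1 background CSM
s = np.array([1.0, 0.0, 1.0, 0.0]) / np.sqrt(2)    # source direction
b = np.array([0.0, 1.0, 0.0, -1.0]) / np.sqrt(2)   # background direction
G_src = np.outer(s, s)
G_bkg = 0.5 * np.outer(b, b)
G_clean = deflate_background(G_src + G_bkg, G_bkg, 1)
```

Because the subtraction operates on whole eigen-modes of the background rather than on auto-spectra alone, it cannot drive diagonal entries negative the way naive level subtraction can, which is the failure mode the abstract highlights.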

  2. Peripheral transverse densities of the baryon octet from chiral effective field theory and dispersion analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Alarcón, J. M.; Hiller Blin, A. N.; Vicente Vacas, M. J.

    2017-05-08

The baryon electromagnetic form factors are expressed in terms of two-dimensional densities describing the distribution of charge and magnetization in transverse space at fixed light-front time. In this paper, we calculate the transverse densities of the spin-1/2 flavor-octet baryons at peripheral distances b = O(M_π^-1) using methods of relativistic chiral effective field theory (χEFT) and dispersion analysis. The densities are represented as dispersive integrals over the imaginary parts of the form factors in the timelike region (spectral functions). The isovector spectral functions on the two-pion cut t > 4 M_π^2 are calculated using relativistic χEFT including octet and decuplet baryons. The χEFT calculations are extended into the ρ meson mass region using an N/D method that incorporates the pion electromagnetic form factor data. The isoscalar spectral functions are modeled by vector meson poles. We compute the peripheral charge and magnetization densities in the octet baryon states, estimate the uncertainties, and determine the quark flavor decomposition. Finally, the approach can be extended to baryon form factors of other operators and the moments of generalized parton distributions.

  3. Rapid perfusion quantification using Welch-Satterthwaite approximation and analytical spectral filtering

    NASA Astrophysics Data System (ADS)

    Krishnan, Karthik; Reddy, Kasireddy V.; Ajani, Bhavya; Yalavarthy, Phaneendra K.

    2017-02-01

CT and MR perfusion weighted imaging (PWI) enable quantification of perfusion parameters in stroke studies. These parameters are calculated from the residual impulse response function (IRF) based on a physiological model for tissue perfusion. The standard approach for estimating the IRF is deconvolution using oscillatory-limited singular value decomposition (oSVD) or Frequency Domain Deconvolution (FDD). FDD is widely recognized as the fastest approach currently available for deconvolution of CT Perfusion/MR PWI. In this work, three faster methods are proposed. The first is a direct (model-based) crude approximation to the final perfusion quantities (blood flow, blood volume, mean transit time and delay) using the Welch-Satterthwaite approximation for gamma-fitted concentration time curves (CTC). The second method is a fast accurate deconvolution method, which we call Analytical Fourier Filtering (AFF). The third is another fast accurate deconvolution technique using Showalter's method, which we call Analytical Showalter's Spectral Filtering (ASSF). Through systematic evaluation on phantom and clinical data, the proposed methods are shown to be computationally more than twice as fast as FDD. The two deconvolution-based methods, AFF and ASSF, are also shown to be quantitatively accurate compared to FDD and oSVD.
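The frequency-domain deconvolution idea — spectral division stabilized by a regularizing filter — can be sketched as follows (synthetic curves, circular convolution, and a generic Tikhonov-style filter as a stand-in for the clinical implementations and the analytical filters named above):

```python
import numpy as np

def fdd(aif, tissue, reg=1e-4):
    """Frequency-domain deconvolution with Tikhonov-style filtering:
    R_hat(f) = conj(A(f)) T(f) / (|A(f)|^2 + reg * max|A|^2)."""
    A, T = np.fft.fft(aif), np.fft.fft(tissue)
    filt = np.conj(A) / (np.abs(A) ** 2 + reg * np.abs(A).max() ** 2)
    return np.real(np.fft.ifft(filt * T))

t = np.arange(64.0)
aif = np.exp(-t / 3.0)          # synthetic arterial input function
irf = np.exp(-t / 8.0)          # "true" residual impulse response
tissue = np.real(np.fft.ifft(np.fft.fft(aif) * np.fft.fft(irf)))
irf_hat = fdd(aif, tissue)
```

Perfusion quantities then follow from the recovered IRF (e.g., blood flow from its peak, blood volume from its integral), which is why the quality and cost of this deconvolution step dominate the overall pipeline.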

  4. Validation of Spectral Unmixing Results from Informed Non-Negative Matrix Factorization (INMF) of Hyperspectral Imagery

    NASA Astrophysics Data System (ADS)

    Wright, L.; Coddington, O.; Pilewskie, P.

    2017-12-01

    Hyperspectral instruments are a growing class of Earth observing sensors designed to improve remote sensing capabilities beyond discrete multi-band sensors by providing tens to hundreds of continuous spectral channels. Improved spectral resolution, range and radiometric accuracy allow the collection of large amounts of spectral data, facilitating thorough characterization of both atmospheric and surface properties. We describe the development of an Informed Non-Negative Matrix Factorization (INMF) spectral unmixing method to exploit this spectral information and separate atmospheric and surface signals based on their physical sources. INMF offers marked benefits over other commonly employed techniques: non-negativity, which avoids physically impossible results, and adaptability, which tailors the method to hyperspectral source separation. The INMF algorithm is adapted to separate contributions from physically distinct sources using constraints on spectral and spatial variability, and library spectra to improve the initial guess. Using this INMF algorithm we decompose hyperspectral imagery from the NASA Hyperspectral Imager for the Coastal Ocean (HICO), with a focus on separating surface and atmospheric signal contributions. HICO's coastal ocean focus provides a dataset with a wide range of atmospheric and surface conditions. These include atmospheres with varying aerosol optical thicknesses and cloud cover. HICO images also provide a range of surface conditions including deep ocean regions, with only minor contributions from the ocean surfaces; and more complex shallow coastal regions with contributions from the seafloor or suspended sediments. We provide extensive comparison of INMF decomposition results against independent measurements of physical properties.
These include comparison against traditional model-based retrievals of water-leaving, aerosol, and molecular scattering radiances and other satellite products, such as aerosol optical thickness from the Moderate Resolution Imaging Spectroradiometer (MODIS).
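    The core factorization step can be illustrated with plain (uninformed) NMF via Lee-Seung multiplicative updates; the INMF of the paper additionally injects library spectra and spatial/spectral constraints, which this sketch omits, and all sizes are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, k, iters=500, eps=1e-9):
    """Plain multiplicative-update NMF: V (n x m) ~ W (n x k) @ H (k x m), all >= 0."""
    n, m = V.shape
    W = rng.random((n, k)) + eps          # endmember-like spectra (columns)
    H = rng.random((k, m)) + eps          # abundance-like coefficients
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # updates preserve non-negativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic scene: 3 non-negative sources mixed into 50 spectral channels x 400 pixels
W_true = rng.random((50, 3))
H_true = rng.random((3, 400))
V = W_true @ H_true
W, H = nmf(V, k=3)
rel_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

    Non-negativity is preserved automatically because the updates are multiplicative with non-negative factors; "informing" the method amounts to replacing the random initialization with library spectra and adding penalty terms to the update rules.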

  5. Fine structure of the low-frequency spectra of heart rate and blood pressure

    PubMed Central

    Kuusela, Tom A; Kaila, Timo J; Kähönen, Mika

    2003-01-01

    Background The aim of this study was to explore the principal frequency components of the heart rate and blood pressure variability in the low frequency (LF) and very low frequency (VLF) bands. The spectral composition of the R–R interval (RRI) and systolic arterial blood pressure (SAP) in the frequency range below 0.15 Hz was carefully analyzed using three different spectral methods: Fast Fourier transform (FFT), Wigner-Ville distribution (WVD), and autoregression (AR). All spectral methods were used to create time–frequency plots to uncover the principal spectral components that are least dependent on time. The accurate frequencies of these components were calculated from the pole decomposition of the AR spectral density after determining the optimal model order – the most crucial factor when using this method – with the help of FFT and WVD methods. Results Spectral analysis of the RRI and SAP of 12 healthy subjects revealed that there are always at least three spectral components below 0.15 Hz. The three principal frequency components are 0.026 ± 0.003 (mean ± SD) Hz, 0.076 ± 0.012 Hz, and 0.117 ± 0.016 Hz. These principal components vary only slightly over time. FFT-based coherence and phase-function analysis suggests that the second and third components are related to the baroreflex control of blood pressure, since the phase difference between SAP and RRI was negative and almost constant, whereas the origin of the first component is different since no clear SAP–RRI phase relationship was found. Conclusion The above data indicate that spontaneous fluctuations in heart rate and blood pressure within the standard low-frequency range of 0.04–0.15 Hz typically occur at two frequency components rather than only at one as widely believed, and these components are not harmonically related. This new observation in humans can help explain divergent results in the literature concerning spontaneous low-frequency oscillations.
It also raises methodological and computational questions regarding the usability and validity of the low-frequency spectral band when estimating sympathetic activity and baroreflex gain. PMID:14552660

  6. Fine structure of the low-frequency spectra of heart rate and blood pressure.

    PubMed

    Kuusela, Tom A; Kaila, Timo J; Kähönen, Mika

    2003-10-13

    The aim of this study was to explore the principal frequency components of the heart rate and blood pressure variability in the low frequency (LF) and very low frequency (VLF) bands. The spectral composition of the R-R interval (RRI) and systolic arterial blood pressure (SAP) in the frequency range below 0.15 Hz was carefully analyzed using three different spectral methods: Fast Fourier transform (FFT), Wigner-Ville distribution (WVD), and autoregression (AR). All spectral methods were used to create time-frequency plots to uncover the principal spectral components that are least dependent on time. The accurate frequencies of these components were calculated from the pole decomposition of the AR spectral density after determining the optimal model order – the most crucial factor when using this method – with the help of FFT and WVD methods. Spectral analysis of the RRI and SAP of 12 healthy subjects revealed that there are always at least three spectral components below 0.15 Hz. The three principal frequency components are 0.026 ± 0.003 (mean ± SD) Hz, 0.076 ± 0.012 Hz, and 0.117 ± 0.016 Hz. These principal components vary only slightly over time. FFT-based coherence and phase-function analysis suggests that the second and third components are related to the baroreflex control of blood pressure, since the phase difference between SAP and RRI was negative and almost constant, whereas the origin of the first component is different since no clear SAP-RRI phase relationship was found. The above data indicate that spontaneous fluctuations in heart rate and blood pressure within the standard low-frequency range of 0.04-0.15 Hz typically occur at two frequency components rather than only at one as widely believed, and these components are not harmonically related. This new observation in humans can help explain divergent results in the literature concerning spontaneous low-frequency oscillations.
It also raises methodological and computational questions regarding the usability and validity of the low-frequency spectral band when estimating sympathetic activity and baroreflex gain.

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Clark, Darin P.; Badea, Cristian T., E-mail: cristian.badea@duke.edu; Lee, Chang-Lung

    Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling.
    The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. Results: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. Conclusions: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time.

  8. Spectrotemporal CT data acquisition and reconstruction at low dose

    PubMed Central

    Clark, Darin P.; Lee, Chang-Lung; Kirsch, David G.; Badea, Cristian T.

    2015-01-01

    Purpose: X-ray computed tomography (CT) is widely used, both clinically and preclinically, for fast, high-resolution anatomic imaging; however, compelling opportunities exist to expand its use in functional imaging applications. For instance, spectral information combined with nanoparticle contrast agents enables quantification of tissue perfusion levels, while temporal information details cardiac and respiratory dynamics. The authors propose and demonstrate a projection acquisition and reconstruction strategy for 5D CT (3D + dual energy + time) which recovers spectral and temporal information without substantially increasing radiation dose or sampling time relative to anatomic imaging protocols. Methods: The authors approach the 5D reconstruction problem within the framework of low-rank and sparse matrix decomposition. Unlike previous work on rank-sparsity constrained CT reconstruction, the authors establish an explicit rank-sparse signal model to describe the spectral and temporal dimensions. The spectral dimension is represented as a well-sampled time and energy averaged image plus regularly undersampled principal components describing the spectral contrast. The temporal dimension is represented as the same time and energy averaged reconstruction plus contiguous, spatially sparse, and irregularly sampled temporal contrast images. Using a nonlinear, image domain filtration approach, which the authors refer to as rank-sparse kernel regression, the authors transfer image structure from the well-sampled time and energy averaged reconstruction to the spectral and temporal contrast images. This regularization strategy strictly constrains the reconstruction problem while approximately separating the temporal and spectral dimensions. Separability results in a highly compressed representation for the 5D data in which projections are shared between the temporal and spectral reconstruction subproblems, enabling substantial undersampling.
The authors solved the 5D reconstruction problem using the split Bregman method and GPU-based implementations of backprojection, reprojection, and kernel regression. Using a preclinical mouse model, the authors apply the proposed algorithm to study myocardial injury following radiation treatment of breast cancer. Results: Quantitative 5D simulations are performed using the MOBY mouse phantom. Twenty data sets (ten cardiac phases, two energies) are reconstructed with 88 μm, isotropic voxels from 450 total projections acquired over a single 360° rotation. In vivo 5D myocardial injury data sets acquired in two mice injected with gold and iodine nanoparticles are also reconstructed with 20 data sets per mouse using the same acquisition parameters (dose: ∼60 mGy). For both the simulations and the in vivo data, the reconstruction quality is sufficient to perform material decomposition into gold and iodine maps to localize the extent of myocardial injury (gold accumulation) and to measure cardiac functional metrics (vascular iodine). Their 5D CT imaging protocol represents a 95% reduction in radiation dose per cardiac phase and energy and a 40-fold decrease in projection sampling time relative to their standard imaging protocol. Conclusions: Their 5D CT data acquisition and reconstruction protocol efficiently exploits the rank-sparse nature of spectral and temporal CT data to provide high-fidelity reconstruction results without increased radiation dose or sampling time. PMID:26520724
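    The low-rank-plus-sparse idea behind this model can be illustrated on a toy matrix (this is not the authors' 5D algorithm): alternate a hard rank truncation for the shared background with entrywise soft thresholding for the sparse contrast. All sizes, thresholds, and data below are invented:

```python
import numpy as np

def rank_sparse_split(X, rank=1, lam=0.05, iters=50):
    """Split X into low-rank L ('anatomy') + sparse S ('contrast changes')."""
    S = np.zeros_like(X)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]                    # best rank-r fit
        S = np.sign(X - L) * np.maximum(np.abs(X - L) - lam, 0.0)   # soft-threshold
    return L, S

rng = np.random.default_rng(1)
base = np.outer(rng.random(60), rng.random(40))            # rank-1 "background"
sparse = np.zeros((60, 40))
sparse[rng.integers(0, 60, 30), rng.integers(0, 40, 30)] = 2.0   # sparse "contrast"
X = base + sparse
L, S = rank_sparse_split(X)
rel_err = np.linalg.norm(L - base) / np.linalg.norm(base)
```

    The paper's contribution is to impose this kind of structure jointly across the spectral and temporal dimensions of undersampled projection data, solved with split Bregman rather than this naive alternation.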

  9. ℓ0 -based sparse hyperspectral unmixing using spectral information and a multi-objectives formulation

    NASA Astrophysics Data System (ADS)

    Xu, Xia; Shi, Zhenwei; Pan, Bin

    2018-07-01

    Sparse unmixing aims at recovering pure materials from hyperspectral images and estimating their abundance fractions. Sparse unmixing is actually an ℓ0 problem, which is NP-hard, so a relaxation is often used. In this paper, we attempt to deal with the ℓ0 problem directly via a multi-objective method, which is non-convex. The characteristics of hyperspectral images are integrated into the proposed method, which leads to a new spectra and multi-objective based sparse unmixing method (SMoSU). In order to solve the ℓ0-norm optimization problem, the spectral library is encoded in a binary vector, and a bit-wise flipping strategy is used to generate new individuals in the evolution process. However, a multi-objective method usually produces a number of non-dominated solutions, while sparse unmixing requires a single solution, so making the final decision for sparse unmixing is challenging. To handle this problem, we integrate the spectral characteristics of hyperspectral images into SMoSU. By considering the spectral correlation in hyperspectral data, we improve the Tchebycheff decomposition function in SMoSU via a new regularization term. This regularization term enforces individual divergence in the evolution process of SMoSU. In this way, the diversity and convergence of the population are further balanced, which is beneficial to the concentration of individuals. In the experiments, three synthetic datasets and one real-world dataset are used to analyse the effectiveness of SMoSU, and several state-of-the-art sparse unmixing algorithms are compared.
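    The binary-encoding idea can be sketched as a single-objective bit-flip local search that scores candidate supports by least-squares residual plus an ℓ0 penalty; SMoSU's multi-objective machinery, Tchebycheff decomposition, and abundance constraints are omitted, and the library and penalty weight are invented:

```python
import numpy as np

def residual(y, A, support):
    """Least-squares misfit of y using only the library columns flagged in support."""
    idx = np.flatnonzero(support)
    if idx.size == 0:
        return np.linalg.norm(y)
    coef, *_ = np.linalg.lstsq(A[:, idx], y, rcond=None)
    return np.linalg.norm(y - A[:, idx] @ coef)

def bitflip_unmix(y, A, gamma=0.05, sweeps=5):
    """Greedy single-bit-flip search over the binary library-selection vector."""
    m = A.shape[1]
    support = np.zeros(m, dtype=bool)
    best = residual(y, A, support)
    for _ in range(sweeps):
        for j in range(m):
            cand = support.copy()
            cand[j] = ~cand[j]                       # flip one bit
            score = residual(y, A, cand) + gamma * cand.sum()   # fit + l0 penalty
            if score < best:
                support, best = cand, score
    return support

rng = np.random.default_rng(2)
A = rng.random((80, 20))                  # toy spectral library, 20 candidate spectra
true = np.zeros(20, dtype=bool)
true[[3, 11]] = True                      # pixel mixes endmembers 3 and 11
y = A[:, true] @ np.array([0.6, 0.4])
support = bitflip_unmix(y, A)
```

    The penalty weight plays the role the multi-objective trade-off plays in SMoSU: large values force sparser supports, small values tolerate extra endmembers.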

  10. Ultrasonic technique for imaging tissue vibrations: preliminary results.

    PubMed

    Sikdar, Siddhartha; Beach, Kirk W; Vaezy, Shahram; Kim, Yongmin

    2005-02-01

    We propose an ultrasound (US)-based technique for imaging vibrations in the blood vessel walls and surrounding tissue caused by eddies produced during flow through narrowed or punctured arteries. Our approach is to utilize the clutter signal, normally suppressed in conventional color flow imaging, to detect and characterize local tissue vibrations. We demonstrate the feasibility of visualizing the origin and extent of vibrations relative to the underlying anatomy and blood flow in real-time and their quantitative assessment, including measurements of the amplitude, frequency and spatial distribution. We present two signal-processing algorithms, one based on phase decomposition and the other based on spectral estimation using eigen decomposition for isolating vibrations from clutter, blood flow and noise using an ensemble of US echoes. In simulation studies, the computationally efficient phase-decomposition method achieved 96% sensitivity and 98% specificity for vibration detection and was robust to broadband vibrations. Somewhat higher sensitivity (98%) and specificity (99%) could be achieved using the more computationally intensive eigen decomposition-based algorithm. Vibration amplitudes as low as 1 μm were measured accurately in phantom experiments. Real-time tissue vibration imaging at typical color-flow frame rates was implemented on a software-programmable US system. Vibrations were studied in vivo in a stenosed femoral bypass vein graft in a human subject and in a punctured femoral artery and incised spleen in an animal model.

  11. Hyperspectral data discrimination methods

    NASA Astrophysics Data System (ADS)

    Casasent, David P.; Chen, Xuewen

    2000-12-01

    Hyperspectral data provide spectral response information that gives a detailed chemical, moisture, and other description of the constituent parts of an item. These new sensor data are useful in USDA product inspection. However, such data introduce problems such as the curse of dimensionality, the need to reduce the number of features used to accommodate realistic small training set sizes, and the need to employ discriminatory features and still achieve good generalization (comparable training and test set performance). Several two-step methods are compared to a new and preferable single-step spectral decomposition algorithm. Initial results on hyperspectral data for good/bad almonds and for good/bad (aflatoxin-infested) corn kernels are presented. The hyperspectral application addressed differs greatly from prior USDA work (PLS) in which the level of a specific channel constituent in food was estimated. A validation set (separate from the test set) is used in selecting algorithm parameters. Threshold parameters are varied to select the best Pc operating point. Initial results show that nonlinear features yield improved performance.

  12. Graph Frequency Analysis of Brain Signals

    PubMed Central

    Huang, Weiyu; Goldsberry, Leah; Wymbs, Nicholas F.; Grafton, Scott T.; Bassett, Danielle S.; Ribeiro, Alejandro

    2016-01-01

    This paper presents methods to analyze functional brain networks and signals from graph spectral perspectives. The notion of frequency and filters traditionally defined for signals supported on regular domains such as discrete time and image grids has been recently generalized to irregular graph domains, and defines brain graph frequencies associated with different levels of spatial smoothness across the brain regions. Brain network frequency also enables the decomposition of brain signals into pieces corresponding to smooth or rapid variations. We relate graph frequency with principal component analysis when the networks of interest denote functional connectivity. The methods are utilized to analyze brain networks and signals as subjects master a simple motor skill. We observe that brain signals corresponding to different graph frequencies exhibit different levels of adaptability throughout learning. Further, we notice a strong association between graph spectral properties of brain networks and the level of exposure to tasks performed, and recognize the most contributing and important frequency signatures at different levels of task familiarity. PMID:28439325
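    The notion of graph frequency can be made concrete in a few lines: eigenvectors of the graph Laplacian act as Fourier modes, and the eigenvalues order them from spatially smooth to rapidly varying. The path graph below is a toy stand-in for a functional connectivity network:

```python
import numpy as np

def graph_fourier(A, x):
    """Graph Fourier transform of signal x on the graph with adjacency A."""
    L = np.diag(A.sum(axis=1)) - A          # combinatorial Laplacian
    lam, U = np.linalg.eigh(L)              # graph frequencies and modes (ascending)
    return lam, U, U.T @ x                  # spectrum of x in the graph modes

# Path graph of 6 nodes; a constant signal is purely "DC" (lowest graph frequency)
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
lam, U, xhat = graph_fourier(A, np.ones(n))
```

    Decomposing a brain signal into these modes is exactly the "smooth versus rapid variation" split described above; with a functional connectivity matrix in place of the path graph, the low modes recover PCA-like components.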

  13. Finite element analysis in fluids; Proceedings of the Seventh International Conference on Finite Element Methods in Flow Problems, University of Alabama, Huntsville, Apr. 3-7, 1989

    NASA Technical Reports Server (NTRS)

    Chung, T. J. (Editor); Karr, Gerald R. (Editor)

    1989-01-01

    Recent advances in computational fluid dynamics are examined in reviews and reports, with an emphasis on finite-element methods. Sections are devoted to adaptive meshes, atmospheric dynamics, combustion, compressible flows, control-volume finite elements, crystal growth, domain decomposition, EM-field problems, FDM/FEM, and fluid-structure interactions. Consideration is given to free-boundary problems with heat transfer, free surface flow, geophysical flow problems, heat and mass transfer, high-speed flow, incompressible flow, inverse design methods, MHD problems, the mathematics of finite elements, and mesh generation. Also discussed are mixed finite elements, multigrid methods, non-Newtonian fluids, numerical dissipation, parallel vector processing, reservoir simulation, seepage, shallow-water problems, spectral methods, supercomputer architectures, three-dimensional problems, and turbulent flows.

  14. Multispectral image fusion for illumination-invariant palmprint recognition

    PubMed Central

    Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% under unsatisfactory lighting conditions. PMID:28558064

  15. Multispectral image fusion for illumination-invariant palmprint recognition.

    PubMed

    Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfactory recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% under unsatisfactory lighting conditions.

  16. Estimation of beam material random field properties via sensitivity-based model updating using experimental frequency response functions

    NASA Astrophysics Data System (ADS)

    Machado, M. R.; Adhikari, S.; Dos Santos, J. M. C.; Arruda, J. R. F.

    2018-03-01

    Structural parameter estimation is affected not only by measurement noise but also by unknown uncertainties which are present in the system. Deterministic structural model updating methods minimise the difference between experimentally measured data and computational prediction. Sensitivity-based methods are very efficient in solving structural model updating problems. Material and geometrical parameters of the structure such as Poisson's ratio, Young's modulus, mass density, modal damping, etc. are usually considered deterministic and homogeneous. In this paper, the distributed and non-homogeneous characteristics of these parameters are considered in the model updating. The parameters are taken as spatially correlated random fields and are expanded in a spectral Karhunen-Loève (KL) decomposition. Using the KL expansion, the spectral dynamic stiffness matrix of the beam is expanded as a series in terms of discretized parameters, which can be estimated using sensitivity-based model updating techniques. Numerical and experimental tests involving a beam with distributed bending rigidity and mass density are used to verify the proposed method. This extension of standard model updating procedures can enhance the dynamic description of structural models.
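    The KL expansion step can be sketched for a 1-D random field with an assumed exponential covariance; the correlation length, grid, and 95% variance cutoff below are illustrative choices, not the paper's beam model:

```python
import numpy as np

# Discrete Karhunen-Loève expansion of a correlated random field on a 1-D grid
x = np.linspace(0, 1, 100)
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.3)     # exponential covariance
w, V = np.linalg.eigh(cov)
w, V = w[::-1], V[:, ::-1]                               # descending eigenvalues

# Truncate the expansion where the eigenvalues carry 95% of the total variance
energy = np.cumsum(w) / np.sum(w)
n_terms = int(np.searchsorted(energy, 0.95) + 1)

# Sample the field from the leading terms: f = sum_i sqrt(w_i) * xi_i * v_i
rng = np.random.default_rng(4)
xi = rng.standard_normal(n_terms)
field = V[:, :n_terms] @ (np.sqrt(w[:n_terms]) * xi)
```

    The payoff for model updating is exactly this truncation: a spatially varying parameter is reduced to a handful of KL coefficients, which become the unknowns in the sensitivity-based update.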

  17. Empirical projection-based basis-component decomposition method

    NASA Astrophysics Data System (ADS)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which in addition to the conventional approach of Alvarez and Macovski a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood-function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image-domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line-integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.
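    The empirical projection-domain idea can be sketched as calibrating a polynomial map from log-measurements to basis line integrals; the bin-averaged attenuations, the mild beam-hardening-like nonlinearity, and the polynomial degree below are invented for illustration and are not the authors' parameterization:

```python
import numpy as np

def forward(a1, a2):
    """Toy two-bin, two-material forward model with a mild nonlinearity."""
    E = np.array([[0.3, 0.2], [0.15, 0.25]])    # invented bin-averaged attenuations
    L = E @ np.array([a1, a2])
    return L + 0.01 * L ** 2                    # beam-hardening-like distortion

def design(L):
    """Quadratic polynomial features of the two log-measurements."""
    l1, l2 = L[:, 0], L[:, 1]
    return np.column_stack([np.ones_like(l1), l1, l2, l1 ** 2, l1 * l2, l2 ** 2])

# Calibration: grid of known basis-material thicknesses and their measurements
grid = np.array([(a1, a2) for a1 in np.linspace(0, 5, 8)
                           for a2 in np.linspace(0, 5, 8)])
L = np.array([forward(a1, a2) for a1, a2 in grid])
coef_a1, *_ = np.linalg.lstsq(design(L), grid[:, 0], rcond=None)
coef_a2, *_ = np.linalg.lstsq(design(L), grid[:, 1], rcond=None)

# Decompose a new measurement directly in the projection domain
L_new = forward(2.0, 1.5).reshape(1, 2)
a1_est = design(L_new) @ coef_a1
a2_est = design(L_new) @ coef_a2
```

    Once the polynomial coefficients are fitted, decomposition is a single matrix-vector product per ray, which is the source of the drastic pre-processing speedup over iterative maximum-likelihood decomposition.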

  18. Spectral Reconstruction for Obtaining Virtual Hyperspectral Images

    NASA Astrophysics Data System (ADS)

    Perez, G. J. P.; Castro, E. C.

    2016-12-01

    Hyperspectral sensors have demonstrated their capabilities in identifying materials and detecting processes in a satellite scene. However, the availability of hyperspectral images is limited due to the high development cost of these sensors. Currently, most of the readily available data are from multi-spectral instruments. Spectral reconstruction is an alternative method to address the need for hyperspectral information. The spectral reconstruction technique has been shown to provide quick and accurate detection of defects in an integrated circuit, to recover damaged parts of frescoes, and to aid in converting a microscope into an imaging spectrometer. By using several spectral bands together with a spectral library, a spectrum acquired by a sensor can be expressed as a linear superposition of elementary signals. In this study, spectral reconstruction is used to estimate the spectra of different surfaces imaged by Landsat 8. Four atmospherically corrected surface reflectances from three visible bands (499 nm, 585 nm, 670 nm) and one near-infrared band (872 nm) of Landsat 8, and a spectral library of ground elements acquired from the United States Geological Survey (USGS), are used. The spectral library is limited to the 420-1020 nm spectral range and is interpolated at one nanometer resolution. Singular Value Decomposition (SVD) is used to calculate the basis spectra, which are then applied to reconstruct the spectrum. The spectral reconstruction is applied to test cases within the library consisting of vegetation communities. This technique was successful in reconstructing a hyperspectral signal with an error of less than 12% for most of the test cases. Hence, this study demonstrates the potential of simulating information at any desired wavelength, creating a virtual hyperspectral sensor without the need for additional satellite bands.
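    The reconstruction step can be sketched with a synthetic low-rank library in place of the USGS one: SVD yields basis spectra, and coefficients fitted to the four Landsat band samples extrapolate the full 420-1020 nm spectrum. The Gaussian-shaped library is an invented stand-in chosen so the four bands suffice exactly:

```python
import numpy as np

rng = np.random.default_rng(6)
wl = np.arange(420, 1021)                           # 1 nm grid, 420-1020 nm

# Synthetic smooth "library": 40 spectra spanned by 4 broad Gaussian shapes
centers = np.linspace(450, 990, 4)
G = np.exp(-((wl[:, None] - centers[None, :]) / 120.0) ** 2)
library = G @ rng.random((4, 40))

# SVD of the library gives orthonormal basis spectra
U, s, Vt = np.linalg.svd(library, full_matrices=False)
B = U[:, :4]                                        # as many basis vectors as bands

# Fit basis coefficients to the 4 band samples, then extrapolate the full spectrum
bands = [np.argmin(np.abs(wl - b)) for b in (499, 585, 670, 872)]
target = library[:, 0]                              # pretend this is the pixel truth
coef, *_ = np.linalg.lstsq(B[bands], target[bands], rcond=None)
recon = B @ coef
rel_err = np.linalg.norm(recon - target) / np.linalg.norm(target)
```

    With a real library the signal is not exactly in the span of a few basis vectors, so the residual is finite; the paper's sub-12% errors reflect how well a few SVD modes capture vegetation spectra over this range.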

  19. Precision spectral manipulation of optical pulses using a coherent photon echo memory.

    PubMed

    Buchler, B C; Hosseini, M; Hétet, G; Sparkes, B M; Lam, P K

    2010-04-01

    Photon echo schemes are excellent candidates for high efficiency coherent optical memory. They are capable of high-bandwidth multipulse storage, pulse resequencing and have been shown theoretically to be compatible with quantum information applications. One particular photon echo scheme is the gradient echo memory (GEM). In this system, an atomic frequency gradient is induced in the direction of light propagation leading to a Fourier decomposition of the optical spectrum along the length of the storage medium. This Fourier encoding allows precision spectral manipulation of the stored light. In this Letter, we show frequency shifting, spectral compression, spectral splitting, and fine dispersion control of optical pulses using GEM.

  20. Extended FDD-WT method based on correcting the errors due to non-synchronous sensing of sensors

    NASA Astrophysics Data System (ADS)

    Tarinejad, Reza; Damadipour, Majid

    2016-05-01

    In this research, a combinational non-parametric method called frequency domain decomposition-wavelet transform (FDD-WT), recently presented by the authors, is extended for correction of the errors resulting from asynchronous sensing of sensors, in order to broaden the application of the algorithm to different kinds of structures, especially huge structures. Therefore, the analysis process is based on time-frequency domain decomposition and is performed with emphasis on correcting time delays between sensors. Time delay estimation (TDE) methods are investigated for their efficiency and accuracy for noisy environmental records, and the Phase Transform-β (PHAT-β) technique was selected as an appropriate method to modify the operation of traditional FDD-WT in order to achieve exact results. In this paper, a theoretical example (3DOF system) has been provided in order to indicate the non-synchronous sensing effects of the sensors on the modal parameters; moreover, the Pacoima dam subjected to the 13 January 2001 earthquake excitation was selected as a case study. The modal parameters of the dam obtained from the extended FDD-WT method were compared with the output of the classical signal processing method, which is referred to as the 4-Spectral method, as well as other literature relating to the dynamic characteristics of Pacoima dam. The comparison indicates that the obtained values are correct and reliable.
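    The phase-transform time-delay estimation the authors adopt can be sketched as generalized cross-correlation with PHAT weighting (plain GCC-PHAT, without the β exponent); the white-noise test signal and sampling rate are illustrative:

```python
import numpy as np

def gcc_phat(x, y, fs=1.0):
    """Delay of x relative to y (positive: x lags y) via PHAT-weighted correlation."""
    n = len(x) + len(y)
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    G = X * np.conj(Y)
    G /= np.abs(G) + 1e-12                   # phase transform: keep phase, drop magnitude
    cc = np.fft.irfft(G, n)
    cc = np.concatenate((cc[-(n // 2):], cc[:n // 2 + 1]))   # center zero lag
    return (np.argmax(np.abs(cc)) - n // 2) / fs

rng = np.random.default_rng(7)
fs = 100.0
s = rng.standard_normal(2000)
lead, lag = s[7:], s[:-7]                    # second sensor lags the first by 7 samples
tau = gcc_phat(lag, lead, fs)                # estimated delay in seconds
```

    The PHAT whitening sharpens the correlation peak, which is what makes the delay estimate robust for the noisy environmental records discussed above; the estimated delays are then used to re-align the channels before FDD-WT.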

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Espinosa-Paredes, Gilberto; Prieto-Guerrero, Alfonso; Nunez-Carrera, Alejandro

    This paper introduces a wavelet-based method to analyze instability events in a boiling water reactor (BWR) during transient phenomena. The methodology to analyze BWR signals includes the following: (a) short-time Fourier transform (STFT) analysis, (b) decomposition using the continuous wavelet transform (CWT), and (c) application of multiresolution analysis (MRA) using the discrete wavelet transform (DWT). STFT analysis permits the study, in time, of the spectral content of the analyzed signals. The CWT provides information about ruptures, discontinuities, and fractal behavior. To detect these important features in the signal, a mother wavelet has to be chosen and applied at several scales to obtain optimum results. MRA allows fast implementation of the DWT. Features like important frequencies, discontinuities, and transients can be detected by analysis at different levels of detail coefficients. The STFT was used to provide a comparison between a classic method and the wavelet-based method. The damping ratio, which is an important stability parameter, was calculated as a function of time. The transient behavior can be detected by analyzing the maxima contained in detail coefficients at different levels of the signal decomposition. This method allows analysis of both stationary and highly nonstationary signals in the timescale plane. The methodology has been tested with the benchmark power instability event of Laguna Verde nuclear power plant (NPP) Unit 1, which is a BWR-5 NPP.
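
    The transient-detection idea, that an abrupt change shows up as a large detail coefficient at the corresponding location, can be sketched with a one-level Haar DWT. This is a minimal stand-in for the MRA pipeline in the abstract, which would use a mother wavelet chosen for the BWR signals rather than Haar; the test signal is an assumption:

```python
import numpy as np

def haar_dwt(signal):
    """One decomposition level of the Haar DWT: returns the
    (approximation, detail) coefficient arrays, each half the input length."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:
        s = s[:-1]                       # drop a trailing sample if length is odd
    approx = (s[0::2] + s[1::2]) / np.sqrt(2.0)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2.0)
    return approx, detail

# A stationary oscillation with an abrupt step at sample 601, standing in for
# a transient during an instability event:
t = np.arange(1024)
x = np.sin(2 * np.pi * t / 64.0)
x[601:] += 2.0
a1, d1 = haar_dwt(x)
peak = np.argmax(np.abs(d1))   # -> 300, the coefficient pair covering samples 600-601
```

    Repeating the split on `a1` gives the coarser levels of the multiresolution analysis; the level at which the maximum appears indicates the time scale of the transient.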

  2. Robust multitask learning with three-dimensional empirical mode decomposition-based features for hyperspectral classification

    NASA Astrophysics Data System (ADS)

    He, Zhi; Liu, Lin

    2016-11-01

    Empirical mode decomposition (EMD) and its variants have recently been applied for hyperspectral image (HSI) classification due to their ability to extract useful features from the original HSI. However, it remains a challenging task to effectively exploit the spectral-spatial information by traditional vector- or image-based methods. In this paper, a three-dimensional (3D) extension of EMD (3D-EMD) is proposed to naturally treat the HSI as a cube and decompose it into varying oscillations (i.e. 3D intrinsic mode functions (3D-IMFs)). To achieve fast 3D-EMD implementation, 3D Delaunay triangulation (3D-DT) is utilized to determine the distances of extrema, while separable filters are adopted to generate the envelopes. Taking the extracted 3D-IMFs as features of different tasks, robust multitask learning (RMTL) is further proposed for HSI classification. In RMTL, pairs of low-rank and sparse structures are formulated by the trace norm and the l1,2-norm to capture task relatedness and specificity, respectively. Moreover, the optimization problems of RMTL can be efficiently solved by the inexact augmented Lagrangian method (IALM). Compared with several state-of-the-art feature extraction and classification methods, the experimental results conducted on three benchmark data sets demonstrate the superiority of the proposed methods.

  3. A scale-invariant change detection method for land use/cover change research

    NASA Astrophysics Data System (ADS)

    Xing, Jin; Sieber, Renee; Caelli, Terrence

    2018-07-01

    Land Use/Cover Change (LUCC) detection relies increasingly on comparing remote sensing images with different spatial and spectral scales. Based on scale-invariant image analysis algorithms in computer vision, we propose a scale-invariant LUCC detection method to identify changes from scale-heterogeneous images. This method is composed of an entropy-based spatial decomposition; two scale-invariant feature extraction methods, the Maximally Stable Extremal Region (MSER) and Scale-Invariant Feature Transformation (SIFT) algorithms; a spatial regression voting method to integrate MSER and SIFT results; a Markov Random Field-based smoothing method; and a support vector machine classification method to assign LUCC labels. We test the scale invariance of our new method with a LUCC case study in Montreal, Canada, 2005-2012. We found that the scale-invariant LUCC detection method provides accuracy similar to that of the resampling-based approach while avoiding the LUCC distortion incurred by resampling.
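
    The entropy-based spatial decomposition step can be sketched as a per-tile Shannon-entropy map: homogeneous (low-entropy) tiles can be skipped before running the MSER/SIFT feature extractors. The tile size, bin count, and test image below are assumptions for illustration, not values from the paper:

```python
import numpy as np

def tile_entropy(image, tile=16, bins=32):
    """Shannon entropy (bits) of each non-overlapping tile of a grayscale
    image with values in [0, 1). High-entropy tiles carry the texture and
    edges that scale-invariant features are extracted from."""
    h, w = image.shape
    rows, cols = h // tile, w // tile
    out = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            patch = image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            counts, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
            p = counts / counts.sum()
            p = p[p > 0]
            out[r, c] = -np.sum(p * np.log2(p))
    return out

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[:, 32:] = rng.uniform(0, 1, (64, 32))   # right half textured, left half flat
H = tile_entropy(img)
# flat tiles have entropy 0; textured tiles approach log2(32) = 5 bits
```
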

  4. On Holo-Hilbert Spectral Analysis: A Full Informational Spectral Representation for Nonlinear and Non-Stationary Data

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.; Hu, Kun; Yang, Albert C. C.; Chang, Hsing-Chih; Jia, Deng; Liang, Wei-Kuang; Yeh, Jia Rong; Kao, Chu-Lan; Juan, Chi-Huang; Peng, Chung Kang

    2016-01-01

    The Holo-Hilbert spectral analysis (HHSA) method is introduced to cure the deficiencies of traditional spectral analysis and to give a full informational representation of nonlinear and non-stationary data. It uses a nested empirical mode decomposition and Hilbert-Huang transform (HHT) approach to identify intrinsic amplitude and frequency modulations often present in nonlinear systems. Comparisons are first made with traditional spectrum analysis, which usually achieved its results through convolutional integral transforms based on additive expansions of an a priori determined basis, mostly under linear and stationary assumptions. Thus, for non-stationary processes, the best one could do historically was to use the time-frequency representations, in which the amplitude (or energy density) variation is still represented in terms of time. For nonlinear processes, the data can have both amplitude and frequency modulations (intra-mode and inter-mode) generated by two different mechanisms: linear additive or nonlinear multiplicative processes. As all existing spectral analysis methods are based on additive expansions, either a priori or adaptive, none of them could possibly represent the multiplicative processes. While the earlier adaptive HHT spectral analysis approach could accommodate the intra-wave nonlinearity quite remarkably, it remained that any inter-wave nonlinear multiplicative mechanisms that include cross-scale coupling and phase-lock modulations were left untreated. To resolve the multiplicative processes issue, additional dimensions in the spectrum result are needed to account for the variations in both the amplitude and frequency modulations simultaneously. HHSA accommodates all the processes: additive and multiplicative, intra-mode and inter-mode, stationary and non-stationary, linear and nonlinear interactions. The Holo prefix in HHSA denotes a multiple dimensional representation with both additive and multiplicative capabilities.

  5. On Holo-Hilbert spectral analysis: a full informational spectral representation for nonlinear and non-stationary data

    PubMed Central

    Huang, Norden E.; Hu, Kun; Yang, Albert C. C.; Chang, Hsing-Chih; Jia, Deng; Liang, Wei-Kuang; Yeh, Jia Rong; Kao, Chu-Lan; Juan, Chi-Hung; Peng, Chung Kang; Meijer, Johanna H.; Wang, Yung-Hung; Long, Steven R.; Wu, Zhaohua

    2016-01-01

    The Holo-Hilbert spectral analysis (HHSA) method is introduced to cure the deficiencies of traditional spectral analysis and to give a full informational representation of nonlinear and non-stationary data. It uses a nested empirical mode decomposition and Hilbert–Huang transform (HHT) approach to identify intrinsic amplitude and frequency modulations often present in nonlinear systems. Comparisons are first made with traditional spectrum analysis, which usually achieved its results through convolutional integral transforms based on additive expansions of an a priori determined basis, mostly under linear and stationary assumptions. Thus, for non-stationary processes, the best one could do historically was to use the time–frequency representations, in which the amplitude (or energy density) variation is still represented in terms of time. For nonlinear processes, the data can have both amplitude and frequency modulations (intra-mode and inter-mode) generated by two different mechanisms: linear additive or nonlinear multiplicative processes. As all existing spectral analysis methods are based on additive expansions, either a priori or adaptive, none of them could possibly represent the multiplicative processes. While the earlier adaptive HHT spectral analysis approach could accommodate the intra-wave nonlinearity quite remarkably, it remained that any inter-wave nonlinear multiplicative mechanisms that include cross-scale coupling and phase-lock modulations were left untreated. To resolve the multiplicative processes issue, additional dimensions in the spectrum result are needed to account for the variations in both the amplitude and frequency modulations simultaneously. HHSA accommodates all the processes: additive and multiplicative, intra-mode and inter-mode, stationary and non-stationary, linear and nonlinear interactions. The Holo prefix in HHSA denotes a multiple dimensional representation with both additive and multiplicative capabilities. PMID:26953180

  6. Spherical Harmonic Analysis of Particle Velocity Distribution Function: Comparison of Moments and Anisotropies using Cluster Data

    NASA Technical Reports Server (NTRS)

    Gurgiolo, Chris; Vinas, Adolfo F.

    2009-01-01

    This paper presents a spherical harmonic analysis of the plasma velocity distribution function using high-angular, energy, and time resolution Cluster data obtained from the PEACE spectrometer instrument to demonstrate how this analysis models the particle distribution function and its moments and anisotropies. The results show that spherical harmonic analysis produced a robust physical representation model of the velocity distribution function, resolving the main features of the measured distributions. From the spherical harmonic analysis, a minimum set of nine spectral coefficients was obtained from which the moment (up to the heat flux), anisotropy, and asymmetry calculations of the velocity distribution function were obtained. The spherical harmonic method provides a potentially effective "compression" technique that can be easily carried out onboard a spacecraft to determine the moments and anisotropies of the particle velocity distribution function for any species. These calculations were implemented using three different approaches, namely, the standard traditional integration, the spherical harmonic (SPH) spectral coefficients integration, and the singular value decomposition (SVD) on the spherical harmonic methods. A comparison among the various methods shows that both SPH and SVD approaches provide remarkable agreement with the standard moment integration method.

  7. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator

    NASA Astrophysics Data System (ADS)

    Li, Qianxiao; Dietrich, Felix; Bollt, Erik M.; Kevrekidis, Ioannis G.

    2017-10-01

    Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.
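
    The EDMD step that the trainable dictionary plugs into can be sketched in a few lines. This is plain EDMD with a hand-picked fixed dictionary, not the paper's neural-network variant; the linear test map and data are assumptions chosen so the exact Koopman eigenvalues are known:

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Plain EDMD: X and Y hold snapshot pairs column-wise, and `dictionary`
    maps states to observable values. Returns the finite-dimensional Koopman
    approximation K solving dictionary(Y) ~= K @ dictionary(X) in the
    least-squares sense."""
    PX = dictionary(X)
    PY = dictionary(Y)
    return PY @ np.linalg.pinv(PX)

# Linear test map x_{n+1} = A @ x_n: on the span of {1, x1, x2} the Koopman
# operator acts exactly, so K must reproduce the eigenvalues of A plus 1.0
# for the constant observable.
A = np.array([[0.9, 0.1],
              [0.0, 0.5]])
rng = np.random.default_rng(1)
X = rng.standard_normal((2, 200))
Y = A @ X

linear_dict = lambda Z: np.vstack([np.ones(Z.shape[1]), Z])
K = edmd(X, Y, linear_dict)
koopman_eigs = np.sort(np.linalg.eigvals(K).real)   # -> [0.5, 0.9, 1.0]
```

    The paper's algorithm alternates this least-squares solve with gradient updates of a neural-network dictionary, so the observables adapt to the dynamics instead of being fixed a priori.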

  8. Extended dynamic mode decomposition with dictionary learning: A data-driven adaptive spectral decomposition of the Koopman operator.

    PubMed

    Li, Qianxiao; Dietrich, Felix; Bollt, Erik M; Kevrekidis, Ioannis G

    2017-10-01

    Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD) and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto-Sivashinsky partial differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.

  9. Spectral Biclustering of Microarray Data: Coclustering Genes and Conditions

    PubMed Central

    Kluger, Yuval; Basri, Ronen; Chang, Joseph T.; Gerstein, Mark

    2003-01-01

    Global analyses of RNA expression levels are useful for classifying genes and overall phenotypes. Often these classification problems are linked, and one wants to find “marker genes” that are differentially expressed in particular sets of “conditions.” We have developed a method that simultaneously clusters genes and conditions, finding distinctive “checkerboard” patterns in matrices of gene expression data, if they exist. In a cancer context, these checkerboards correspond to genes that are markedly up- or downregulated in patients with particular types of tumors. Our method, spectral biclustering, is based on the observation that checkerboard structures in matrices of expression data can be found in eigenvectors corresponding to characteristic expression patterns across genes or conditions. In addition, these eigenvectors can be readily identified by commonly used linear algebra approaches, in particular the singular value decomposition (SVD), coupled with closely integrated normalization steps. We present a number of variants of the approach, depending on whether the normalization over genes and conditions is done independently or in a coupled fashion. We then apply spectral biclustering to a selection of publicly available cancer expression data sets, and examine the degree to which the approach is able to identify checkerboard structures. Furthermore, we compare the performance of our biclustering methods against a number of reasonable benchmarks (e.g., direct application of SVD or normalized cuts to raw data). PMID:12671006
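
    The core observation above, that checkerboard structure shows up as sign patterns in the leading singular vectors, can be sketched on synthetic data. This omits the paper's coupled normalization steps, and the matrix sizes and noise level are made up for the example:

```python
import numpy as np

# A toy "checkerboard" expression matrix: two gene groups x two condition
# groups with opposite up/down regulation, plus noise.
rng = np.random.default_rng(0)
block = np.array([[+1.0, -1.0],
                  [-1.0, +1.0]])
M = np.kron(block, np.ones((30, 20))) + 0.3 * rng.standard_normal((60, 40))

U, s, Vt = np.linalg.svd(M, full_matrices=False)
rows = U[:, 0] > 0    # sign pattern of the leading left singular vector -> gene clusters
cols = Vt[0] > 0      # sign pattern of the leading right singular vector -> condition clusters
# rows[:30] all share one label and rows[30:] the other (up to a global sign),
# and likewise for cols[:20] vs cols[20:]
```

    On real data the eigenvector carrying the checkerboard need not be the leading one, which is why the paper inspects several singular vectors and applies normalization first.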

  10. Comparison of hybrid spectral-decomposition artificial neural network models for understanding climatic forcing of groundwater levels

    NASA Astrophysics Data System (ADS)

    Abrokwah, K.; O'Reilly, A. M.

    2017-12-01

    Groundwater is an important resource that is extracted every day because of its invaluable use for domestic, industrial and agricultural purposes. The need for sustaining groundwater resources is clearly indicated by declining water levels and has motivated efforts to model and forecast groundwater levels accurately. In this study, spectral decomposition of climatic forcing time series was used to develop hybrid wavelet analysis (WA) and moving window average (MWA) artificial neural network (ANN) models. These techniques are explored by modeling historical groundwater levels in order to provide understanding of potential causes of the observed groundwater-level fluctuations. Selection of the appropriate decomposition level for WA and window size for MWA helps in understanding the important time scales of climatic forcing, such as rainfall, that influence water levels. The discrete wavelet transform (DWT) is used to decompose the input time-series data into various levels of approximation and detail wavelet coefficients, whilst MWA acts as a low-pass signal-filtering technique for removing high-frequency signals from the input data. The variables used to develop and validate the models were daily average rainfall measurements from five National Oceanic and Atmospheric Administration (NOAA) weather stations and daily water-level measurements from two wells recorded from 1978 to 2008 in central Florida, USA. Using different decomposition levels and different window sizes, several WA-ANN and MWA-ANN models for simulating the water levels were created and their relative performances were compared. The WA-ANN models performed better than the corresponding MWA-ANN models, and higher decomposition levels of the input signal by the DWT gave the best results. The results obtained show the applicability and feasibility of hybrid WA-ANN and MWA-ANN models for simulating daily water levels using only climatic forcing time series as model inputs.
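
    The MWA preprocessing step is just a centered moving average acting as a low-pass filter; the window size selects which forcing time scales survive. A minimal sketch with made-up "seasonal plus weekly" forcing (the real models feed the filtered series to an ANN, which is omitted here):

```python
import numpy as np

def moving_window_average(x, window):
    """Centered moving window average: the low-pass filter that strips
    high-frequency content before the series is used as a model input."""
    kernel = np.ones(window) / window
    return np.convolve(x, kernel, mode="same")

t = np.arange(365)
seasonal = np.sin(2 * np.pi * t / 365.0)     # slow, climate-like component
noise = 0.5 * np.sin(2 * np.pi * t / 7.0)    # fast weekly-scale fluctuation
smoothed = moving_window_average(seasonal + noise, window=14)
# a 14-day window averages the 7-day oscillation to ~0 (two full periods)
# while leaving the annual trend nearly untouched away from the edges
```

    Choosing the window (or, for WA, the decomposition level) is therefore a statement about which climatic time scales are believed to drive the water levels.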

  11. Implementing the sine transform of fermionic modes as a tensor network

    NASA Astrophysics Data System (ADS)

    Epple, Hannes; Fries, Pascal; Hinrichsen, Haye

    2017-09-01

    Based on the algebraic theory of signal processing, we recursively decompose the discrete sine transform of the first kind (DST-I) into small orthogonal block operations. Using a diagrammatic language, we then second-quantize this decomposition to construct a tensor network implementing the DST-I for fermionic modes on a lattice. The complexity of the resulting network is shown to scale as (5/4) n log n (not considering swap gates), where n is the number of lattice sites. Our method provides a systematic approach of generalizing Ferris' spectral tensor network for nontrivial boundary conditions.
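
    A quick numerical check of the transform being implemented: with the usual normalization the DST-I matrix is orthogonal and symmetric, which is the property that permits a factorization into small orthogonal blocks. This verifies the transform only, not the tensor-network construction itself:

```python
import numpy as np

# DST-I on n sites: S[j, k] = sqrt(2/(n+1)) * sin(pi (j+1)(k+1) / (n+1)).
n = 8
j, k = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
S = np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * (j + 1) * (k + 1) / (n + 1))

ortho_err = np.max(np.abs(S @ S.T - np.eye(n)))   # ~1e-15: S is orthogonal
# S is also symmetric, so S is its own inverse (an involution)
sym_err = np.max(np.abs(S - S.T))
```
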

  12. Ultraviolet absorption cross-sections of hot carbon dioxide

    NASA Astrophysics Data System (ADS)

    Oehlschlaeger, Matthew A.; Davidson, David F.; Jeffries, Jay B.; Hanson, Ronald K.

    2004-12-01

    The temperature-dependent ultraviolet absorption cross-section for CO2 has been measured in shock-heated gases between 1500 and 4500 K at 216.5, 244, 266, and 306 nm. Continuous-wave lasers provide the spectral brightness to enable precise time-resolved measurements with the microsecond time response needed to monitor thermal decomposition of CO2 at temperatures above 3000 K. The photophysics of the highly temperature-dependent cross-section is discussed. The new data allow the extension of CO2 absorption-based temperature sensing methods to higher temperatures, such as those found behind detonation waves.
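
    The cross-section is extracted from a transmission trace via the Beer-Lambert law, sigma = -ln(I/I0) / (n L). The numbers below are illustrative placeholders, not the measured CO2 values from the paper:

```python
import math

n_co2 = 1.2e18          # molecule number density, cm^-3 (assumed)
L = 10.0                # optical path length across the shock tube, cm (assumed)
sigma_true = 3.0e-19    # assumed cross-section, cm^2 per molecule

# Simulated transmission measurement, then recovery of the cross-section:
I_over_I0 = math.exp(-sigma_true * n_co2 * L)
sigma = -math.log(I_over_I0) / (n_co2 * L)
# time-resolving I/I0 at a known density gives sigma(T) as the gas heats
```
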

  13. Anisotropic Developments for Homogeneous Shear Flows

    NASA Technical Reports Server (NTRS)

    Cambon, Claude; Rubinstein, Robert

    2006-01-01

    The general decomposition of the spectral correlation tensor R_ij(k) by Cambon et al. (J. Fluid Mech., 202, 295; J. Fluid Mech., 337, 303) into directional and polarization components is applied to the representation of R_ij(k) by spherically averaged quantities. The decomposition splits the deviatoric part H_ij(k) of the spherical average of R_ij(k) into directional and polarization components H_ij^(e)(k) and H_ij^(z)(k). A self-consistent representation of the spectral tensor in the limit of weak anisotropy is constructed in terms of these spherically averaged quantities. The directional and polarization components must be treated independently: models that attempt the same representation of the spectral tensor using the spherical average H_ij(k) alone prove to be inconsistent with Navier-Stokes dynamics. In particular, a spectral tensor consistent with a prescribed Reynolds stress is not unique. The degree of anisotropy permitted by this theory is restricted by realizability requirements. Since these requirements will be less severe in a more accurate theory, a preliminary account is given of how to generalize the formalism of spherical averages to a higher-order expansion of the spectral tensor. Directionality is described by a conventional expansion in spherical harmonics, but polarization requires an expansion in tensorial spherical harmonics generated by irreducible representations of the spatial rotation group SO(3). These expansions are considered in more detail in the special case of axial symmetry.

  14. Quantification of breast density with spectral mammography based on a scanned multi-slit photon-counting detector: A feasibility study

    PubMed Central

    Ding, Huanjun; Molloi, Sabee

    2012-01-01

    Purpose A simple and accurate measurement of breast density is crucial for understanding its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. Methods A computer simulation model involving polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure-of-merit (FOM), defined as the signal-to-noise ratio (SNR) of the dual energy image with respect to the square root of mean glandular dose (MGD), was chosen to optimize the imaging protocols in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four different phantom studies were designed to evaluate the accuracy of the technique, each of which addressed one specific variable in the phantom configurations, including thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function has been proposed, which introduced the tube voltage used in the imaging tasks as a third variable in dual energy decomposition. Results For an average-sized breast 4.5 cm thick, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (~32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered an FOM greater than 90% of the optimal value. 
In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The results from the modified fitting function, which integrated the tube voltage as a variable in the calibration, indicated an RMS error of approximately 1.35% for all four studies. Conclusions The results of the current study suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique. PMID:22771941
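
    The dual energy decomposition can be sketched in its idealized monoenergetic form: with one low- and one high-energy measurement, the log-attenuations are linear in the glandular and adipose thicknesses, so a 2x2 solve recovers both. The real system calibrates a nonlinear fitting function against phantoms, and the attenuation values below are illustrative assumptions, not actual coefficients:

```python
import numpy as np

mu = np.array([[0.80, 0.60],    # [mu_glandular, mu_adipose] at the low energy, 1/cm
               [0.35, 0.30]])   # the same pair at the high energy, 1/cm

t_true = np.array([1.8, 2.7])   # glandular and adipose thicknesses, cm
log_att = mu @ t_true           # the two measured -ln(I/I0) values

t_est = np.linalg.solve(mu, log_att)       # recover both thicknesses
density = t_est[0] / t_est.sum()           # volumetric density: 1.8 / 4.5 = 0.4
```

    Polyenergetic beams make the forward map nonlinear, which is what the paper's calibration fitting function (and its tube-voltage-aware variant) absorbs.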

  15. Structural analysis and design of multivariable control systems: An algebraic approach

    NASA Technical Reports Server (NTRS)

    Tsay, Yih Tsong; Shieh, Leang-San; Barnett, Stephen

    1988-01-01

    The application of algebraic system theory to the design of controllers for multivariable (MV) systems is explored analytically using an approach based on state-space representations and matrix-fraction descriptions. Chapters are devoted to characteristic lambda matrices and canonical descriptions of MIMO systems; spectral analysis, divisors, and spectral factors of nonsingular lambda matrices; feedback control of MV systems; and structural decomposition theories and their application to MV control systems.

  16. Scale Issues in Air Quality Modeling

    EPA Science Inventory

    This presentation reviews past model evaluation studies investigating the impact of horizontal grid spacing on model performance. It also presents several examples of using a spectral decomposition technique to separate the forcings from processes operating on different time scales.

  17. A restricted signature normal form for Hermitian matrices, quasi-spectral decompositions, and applications

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Huckle, Thomas

    1989-01-01

    In recent years, a number of results on the relationships between the inertias of Hermitian matrices and the inertias of their principal submatrices have appeared in the literature. We study restricted congruence transformations of Hermitian matrices M which, at the same time, induce a congruence transformation of a given principal submatrix A of M. Such transformations lead to the concept of the restricted signature normal form of M. In particular, by means of this normal form, we obtain short proofs of most of the known inertia theorems and also derive some new results of this type. For some applications, a special class of almost unitary restricted congruence transformations turns out to be useful. We show that, with such transformations, M can be reduced to a quasi-diagonal form which, in particular, displays the eigenvalues of A. Finally, applications of this quasi-spectral decomposition to generalized inverses and Hermitian matrix pencils are discussed.

  18. Randomized interpolative decomposition of separated representations

    NASA Astrophysics Data System (ADS)

    Biagioni, David J.; Beylkin, Daniel; Beylkin, Gregory

    2015-01-01

    We introduce an algorithm to compute tensor interpolative decomposition (dubbed CTD-ID) for the reduction of the separation rank of Canonical Tensor Decompositions (CTDs). Tensor ID selects, for a user-defined accuracy ɛ, a near optimal subset of terms of a CTD to represent the remaining terms via a linear combination of the selected terms. CTD-ID can be used as an alternative to or in combination with the Alternating Least Squares (ALS) algorithm. We present examples of its use within a convergent iteration to compute inverse operators in high dimensions. We also briefly discuss the spectral norm as a computational alternative to the Frobenius norm in estimating approximation errors of tensor ID. We reduce the problem of finding tensor IDs to that of constructing interpolative decompositions of certain matrices. These matrices are generated via randomized projection of the terms of the given tensor. We provide cost estimates and several examples of the new approach to the reduction of separation rank.
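
    The matrix interpolative decomposition that CTD-ID reduces the tensor problem to can be sketched with a greedy pivoted Gram-Schmidt column selection. Production codes use randomized projections and pivoted QR as the abstract describes; the greedy loop and the exactly rank-3 test matrix here are simplifications for illustration:

```python
import numpy as np

def column_id(A, k):
    """Greedy interpolative decomposition sketch: pick k "skeleton" columns
    of A by pivoted Gram-Schmidt, then express every column of A as a linear
    combination of the chosen ones, so A ~= A[:, idx] @ P."""
    R = A.astype(float).copy()
    idx = []
    for _ in range(k):
        j = int(np.argmax(np.sum(R * R, axis=0)))   # column with largest residual
        q = R[:, j] / np.linalg.norm(R[:, j])
        R -= np.outer(q, q @ R)                     # deflate the chosen direction
        idx.append(j)
    C = A[:, idx]
    P, *_ = np.linalg.lstsq(C, A, rcond=None)       # coefficients for all columns
    return idx, P

rng = np.random.default_rng(0)
B = rng.standard_normal((50, 3))
A = B @ rng.standard_normal((3, 40))                # exactly rank 3
idx, P = column_id(A, 3)
err = np.linalg.norm(A - A[:, idx] @ P)             # ~0: three columns span the range
```

    In the tensor setting, the "columns" are the terms of the CTD, so the selected subset becomes the reduced-separation-rank representation.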

  19. Testing the monogamy relations via rank-2 mixtures

    NASA Astrophysics Data System (ADS)

    Jung, Eylee; Park, DaeKil

    2016-10-01

    We introduce two tangle-based four-party entanglement measures t_1 and t_2, and two negativity-based measures n_1 and n_2, which are derived from the monogamy relations. These measures are computed explicitly for three four-qubit maximally entangled states and the W state. We also compute these measures for the rank-2 mixture ρ_4 = p|GHZ_4⟩⟨GHZ_4| + (1-p)|W_4⟩⟨W_4| by finding the corresponding optimal decompositions. It turns out that t_1(ρ_4) is trivial and the corresponding optimal decomposition is equal to the spectral decomposition. Probably, this triviality is a sign that the corresponding monogamy inequality is not sufficiently tight. We fail to compute t_2(ρ_4) due to the difficulty of calculating the residual entanglement. The negativity-based measures n_1(ρ_4) and n_2(ρ_4) are explicitly computed and the corresponding optimal decompositions are derived explicitly.

  20. Spectral methods in machine learning and new strategies for very large datasets

    PubMed Central

    Belabbas, Mohamed-Ali; Wolfe, Patrick J.

    2009-01-01

    Spectral methods are of fundamental importance in statistics and machine learning, because they underlie algorithms from classical principal components analysis to more recent approaches that exploit manifold structure. In most cases, the core technical problem can be reduced to computing a low-rank approximation to a positive-definite kernel. For the growing number of applications dealing with very large or high-dimensional datasets, however, the optimal approximation afforded by an exact spectral decomposition is too costly, because its complexity scales as the cube of either the number of training examples or their dimensionality. Motivated by such applications, we present here two new algorithms for the approximation of positive-semidefinite kernels, together with error bounds that improve on results in the literature. We approach this problem by seeking to determine, in an efficient manner, the most informative subset of our data relative to the kernel approximation task at hand. This leads to two new strategies based on the Nyström method that are directly applicable to massive datasets. The first of these—based on sampling—leads to a randomized algorithm in which the kernel induces a probability distribution on its set of partitions, whereas the latter approach—based on sorting—provides for the selection of a partition in a deterministic way. We detail their numerical implementation and provide simulation results for a variety of representative problems in statistical data analysis, each of which demonstrates the improved performance of our approach relative to existing methods. PMID:19129490
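
    The Nyström approximation at the heart of both strategies can be sketched directly. The paper's contribution is how the landmark subset is chosen (by sampling or by sorting); the sketch below replaces that with a fixed every-10th choice on an assumed Gaussian kernel:

```python
import numpy as np

def nystrom(K, idx):
    """Nystrom approximation of a PSD kernel matrix from a column subset:
    K ~= C @ pinv(W) @ C.T, where C = K[:, idx] and W = K[idx][:, idx].
    Cost is dominated by the small |idx| x |idx| pseudoinverse."""
    C = K[:, idx]
    W = K[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

# Gaussian kernel on 1-D points: smooth, hence numerically low rank.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 100))
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.1)

idx = np.arange(0, 100, 10)             # 10 landmark points out of 100
K_hat = nystrom(K, idx)
rel_err = np.linalg.norm(K - K_hat) / np.linalg.norm(K)
# a handful of landmarks already gives a small relative error for smooth kernels
```

    Because only the selected columns of K are ever formed, the method avoids the cubic cost of a full spectral decomposition, which is what makes it viable for massive datasets.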

  1. Phase segregation in multiphase turbulent channel flow

    NASA Astrophysics Data System (ADS)

    Bianco, Federico; Soldati, Alfredo

    2014-11-01

    The phase segregation of a rapidly quenched mixture (spinodal decomposition) is numerically investigated using a phase field approach. Direct numerical simulation of the coupled Navier-Stokes and Cahn-Hilliard equations is performed with spectral accuracy, with a focus on domain growth scaling laws over a wide range of regimes. The numerical method was first validated against well-known results from the literature; then spinodal decomposition in a turbulent bounded flow (channel flow) was considered. As in the homogeneous isotropic case, turbulent fluctuations suppress the segregation process when surface tension at the interfaces is relatively low (low Weber number regimes). For these regimes, the segregated domain size reaches a statistically steady state due to mixing and break-up phenomena. In contrast with homogeneous and isotropic turbulence, the presence of mean shear leads to a typical domain size that shows a wall-distance dependence. Finally, preliminary results on the effects of phase segregation on the drag force at the wall are discussed. Regione FVG, program PAR-FSC.

  2. Quantifying Neural Oscillatory Synchronization: A Comparison between Spectral Coherence and Phase-Locking Value Approaches

    PubMed Central

    Lowet, Eric; Roberts, Mark J.; Bonizzi, Pietro; Karel, Joël; De Weerd, Peter

    2016-01-01

    Synchronization or phase-locking between oscillating neuronal groups is considered to be important for coordination of information among cortical networks. Spectral coherence is a commonly used approach to quantify phase locking between neural signals. We systematically explored the validity of spectral coherence measures for quantifying synchronization among neural oscillators. To that aim, we simulated coupled oscillatory signals that exhibited synchronization dynamics using an abstract phase-oscillator model as well as interacting gamma-generating spiking neural networks. We found that, within a large parameter range, the spectral coherence measure deviated substantially from the expected phase-locking. Moreover, spectral coherence did not converge to the expected value with increasing signal-to-noise ratio. We found that spectral coherence particularly failed when oscillators were in the partially (intermittent) synchronized state, which we expect to be the most likely state for neural synchronization. The failure was due to the fast frequency and amplitude changes induced by synchronization forces. We then investigated whether spectral coherence reflected the information flow among networks measured by transfer entropy (TE) of spike trains. We found that spectral coherence failed to robustly reflect changes in synchrony-mediated information flow between neural networks in many instances. As an alternative approach we explored a phase-locking value (PLV) method based on the reconstruction of the instantaneous phase. As one approach for reconstructing instantaneous phase, we used the Hilbert Transform (HT) preceded by Singular Spectrum Decomposition (SSD) of the signal. 
PLV estimates have broad applicability as they do not rely on stationarity, and, unlike spectral coherence, they enable more accurate estimations of oscillatory synchronization across a wide range of different synchronization regimes, and better tracking of synchronization-mediated information flow among networks. PMID:26745498
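As a rough illustration of the PLV approach described above, the instantaneous phases can be obtained with the Hilbert transform and compared. Note that this sketch substitutes an ordinary band-pass filter for the Singular Spectrum Decomposition step used in the paper; signal parameters are illustrative:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def plv(x, y, fs, band):
    """Phase-locking value between two signals after narrow-band filtering.
    A Butterworth band-pass stands in for SSD (assumption of this sketch)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    phase_x = np.angle(hilbert(filtfilt(b, a, x)))      # instantaneous phase of x
    phase_y = np.angle(hilbert(filtfilt(b, a, y)))      # instantaneous phase of y
    # Magnitude of the mean phase-difference vector: 1 = perfect locking, 0 = none
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))

fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 40 * t)                          # 40 Hz oscillation
y = np.sin(2 * np.pi * 40 * t + 0.8) \
    + 0.1 * np.random.default_rng(1).normal(size=t.size)  # constant phase lag + noise
```

Two signals with a constant phase lag give a PLV near 1, whereas an unrelated noise signal filtered in the same band gives a value near 0.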

  3. Time-dependent quantum transport: An efficient method based on Liouville-von-Neumann equation for single-electron density matrix

    NASA Astrophysics Data System (ADS)

    Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua

    2012-07-01

    Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von-Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each atom represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to conventional master-equation approaches, our method is much more efficient, as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.

  4. Stochastic shock response spectrum decomposition method based on probabilistic definitions of temporal peak acceleration, spectral energy, and phase lag distributions of mechanical impact pyrotechnic shock test data

    NASA Astrophysics Data System (ADS)

    Hwang, James Ho-Jin; Duran, Adam

    2016-08-01

    Pyrotechnic shock design and test requirements for space systems are most often provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement, based on pyrotechnic shock test data measured on a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured on the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. Phase lag is estimated from the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and phase lag. The SRS is decomposed at each band center frequency using damped sinusoids, with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data.
The final step in this stochastic SRS decomposition process is the Monte Carlo (MC) simulation. The MC simulation identifies combinations of the PR and decays that can meet the SRS requirement at each band center frequency. Decomposed input time histories are produced by summing the converged damped sinusoids with the MC simulation of the phase lag distribution.
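The building blocks of this decomposition, summing damped sinusoids at band center frequencies and checking the result against an SRS, can be sketched as follows. The SRS here is computed with a textbook base-excited single-degree-of-freedom model via `scipy.signal.lsim`, not the authors' implementation, and all frequencies, amplitudes, and decay rates are illustrative:

```python
import numpy as np
from scipy.signal import lsim

def damped_sinusoid_input(t, freqs, amps, decays, phases):
    """Sum of decaying sinusoids, the building block of the SRS decomposition."""
    x = np.zeros_like(t)
    for f, a, zeta, ph in zip(freqs, amps, decays, phases):
        x += a * np.exp(-zeta * 2 * np.pi * f * t) * np.sin(2 * np.pi * f * t + ph)
    return x

def shock_response_spectrum(t, accel, fn_list, Q=10.0):
    """Maximax absolute-acceleration SRS: peak SDOF response at each natural freq."""
    zeta = 1.0 / (2.0 * Q)
    out = []
    for fn in fn_list:
        wn = 2 * np.pi * fn
        # Absolute-acceleration transfer function of a base-excited SDOF oscillator
        _, y, _ = lsim(([2 * zeta * wn, wn**2], [1, 2 * zeta * wn, wn**2]), accel, t)
        out.append(np.max(np.abs(y)))
    return np.array(out)

fs = 20000.0
t = np.arange(0, 0.1, 1 / fs)
x = damped_sinusoid_input(t, [100, 500, 2000], [50, 100, 300],
                          [0.03, 0.05, 0.05], [0.0, 0.5, 1.0])
srs = shock_response_spectrum(t, x, [100, 500, 2000])
```

In the paper's Monte Carlo step, the amplitudes and decays of such sinusoids would be drawn from the measured PR/ER distributions and iterated until the synthesized SRS meets the requirement.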

  5. HARPS-N high spectral resolution observations of Cepheids I. The Baade-Wesselink projection factor of δ Cep revisited

    NASA Astrophysics Data System (ADS)

    Nardetto, N.; Poretti, E.; Rainer, M.; Fokin, A.; Mathias, P.; Anderson, R. I.; Gallenne, A.; Gieren, W.; Graczyk, D.; Kervella, P.; Mérand, A.; Mourard, D.; Neilson, H.; Pietrzynski, G.; Pilecki, B.; Storm, J.

    2017-01-01

    Context. The projection factor p is the key quantity used in the Baade-Wesselink (BW) method for distance determination; it converts radial velocities into pulsation velocities. Several methods are used to determine p, such as geometrical and hydrodynamical models or the inverse BW approach when the distance is known. Aims: We analyze new HARPS-N spectra of δ Cep to measure its cycle-averaged atmospheric velocity gradient in order to better constrain the projection factor. Methods: We first apply the inverse BW method to derive p directly from observations. The projection factor can be divided into three subconcepts: (1) a geometrical effect (p0); (2) the velocity gradient within the atmosphere (fgrad); and (3) the relative motion of the optical pulsating photosphere with respect to the corresponding mass elements (fo-g). We then measure the fgrad value of δ Cep for the first time. Results: When the HARPS-N mean cross-correlated line-profiles are fitted with a Gaussian profile, the projection factor is pcc-g = 1.239 ± 0.034(stat.) ± 0.023(syst.). When we consider the different amplitudes of the radial velocity curves that are associated with 17 selected spectral lines, we measure projection factors ranging from 1.273 to 1.329. We find a relation between fgrad and the line depth measured when the Cepheid is at minimum radius. This relation is consistent with that obtained from our best hydrodynamical model of δ Cep and with our projection factor decomposition. Using the observational values of p and fgrad found for the 17 spectral lines, we derive a semi-theoretical value of fo-g. We alternatively obtain fo-g = 0.975 ± 0.002 or 1.006 ± 0.002 assuming models using radiative transfer in plane-parallel or spherically symmetric geometries, respectively. Conclusions: The new HARPS-N observations of δ Cep are consistent with our decomposition of the projection factor. The next step will be to measure p0 directly from the next generation of visible interferometers. 
With these values in hand, it will be possible to derive fo-g directly from observations. Table A.1 is also available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/597/A73

  6. Identification of substance in complicated mixture of simulants under the action of THz radiation on the base of SDA (spectral dynamics analysis) method

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Varentsova, Svetlana A.; Krotkus, Arunas; Molis, Gediminas

    2010-10-01

    The SDA (Spectral Dynamics Analysis) method (analysis of THz spectrum dynamics in the THz frequency range) is used for the detection and identification of substances with similar THz Fourier spectra (such substances are usually called simulants) in two- or three-component media. This method allows us to obtain a unique 2D THz signature of the substance - the spectrogram - and to analyze the dynamics of many spectral lines of the THz signal, passed through or reflected from the substance, from a single set of integral measurements, even when the measurements are made over short intervals (less than 20 ps). For long intervals (100 ps and more) the SDA method makes it possible to determine the relaxation times of excited energy levels of the molecules. This information provides a new way to identify the substance, because the relaxation time differs between molecules of different substances. The restoration of the signal from its integral values is based on the SVD (Singular Value Decomposition) technique. We consider three examples of PTFE pellets mixed with small amounts of L-Tartaric Acid and Sucrose. The concentration of these substances is about 5%-10%. Our investigations show that the spectrograms and the dynamics of the spectral lines of a THz pulse passed through pure PTFE differ from those of the compound medium containing PTFE and L-Tartaric Acid, Sucrose, or both substances together. It is therefore possible to detect the presence of a small amount of additional substances in a sample even when their THz Fourier spectra are practically identical. The SDA method can thus be very effective for defense and security applications and for quality control in the pharmaceutical industry. We also show that, in the case of substance simulants, the use of auto- and cross-correlation functions has much worse resolving power in comparison with the SDA method.

  7. Exploring Galaxy Formation and Evolution via Structural Decomposition

    NASA Astrophysics Data System (ADS)

    Kelvin, Lee; Driver, Simon; Robotham, Aaron; Hill, David; Cameron, Ewan

    2010-06-01

    The Galaxy And Mass Assembly (GAMA) structural decomposition pipeline (GAMA-SIGMA, Structural Investigation of Galaxies via Model Analysis) will provide multi-component information for a sample of ~12,000 galaxies across 9 bands ranging from the near-UV to the near-IR. This will allow the relationship between the structural properties and the broadband, optical-to-near-IR spectral energy distributions of bulge, bar, and disk components to be explored, revealing clues as to the history of baryonic mass assembly within a hierarchical clustering framework. Data are initially taken from the SDSS & UKIDSS-LAS surveys to test the robustness of our automated decomposition pipeline. These will eventually be replaced with data from the forthcoming higher-resolution VST & VISTA surveys, expanding the sample to ~30,000 galaxies.

  8. Algorithm 971: An Implementation of a Randomized Algorithm for Principal Component Analysis

    PubMed Central

    LI, HUAMIN; LINDERMAN, GEORGE C.; SZLAM, ARTHUR; STANTON, KELLY P.; KLUGER, YUVAL; TYGERT, MARK

    2017-01-01

    Recent years have witnessed intense development of randomized methods for low-rank approximation. These methods target principal component analysis and the calculation of truncated singular value decompositions. The present article presents an essentially black-box, foolproof implementation for Mathworks’ MATLAB, a popular software platform for numerical computation. As illustrated via several tests, the randomized algorithms for low-rank approximation outperform or at least match the classical deterministic techniques (such as Lanczos iterations run to convergence) in basically all respects: accuracy, computational efficiency (both speed and memory usage), ease-of-use, parallelizability, and reliability. However, the classical procedures remain the methods of choice for estimating spectral norms and are far superior for calculating the least singular values and corresponding singular vectors (or singular subspaces). PMID:28983138
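The randomized low-rank machinery the article implements follows the well-known range-finder recipe: random projection, optional power iterations, QR orthogonalization, then a small deterministic SVD. A compact NumPy sketch of that generic recipe (not the article's MATLAB code; the oversampling and iteration counts are illustrative defaults):

```python
import numpy as np

def randomized_svd(A, k, n_oversample=10, n_iter=2, rng=None):
    """Randomized truncated SVD via a range finder with power iterations,
    in the spirit of Halko, Martinsson & Tropp."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.normal(size=(n, k + n_oversample))   # random test matrix
    Y = A @ Omega                                    # sample the range of A
    for _ in range(n_iter):                          # power iterations sharpen the range
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                           # orthonormal basis for the range
    B = Q.T @ A                                      # small (k+p) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :k], s[:k], Vt[:k]

# Exactly rank-10 test matrix: the randomized SVD should recover it to precision
rng = np.random.default_rng(0)
A = rng.normal(size=(300, 10)) @ rng.normal(size=(10, 200))
U, s, Vt = randomized_svd(A, 10, rng=1)
err = np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A)
```

The cost is dominated by a handful of matrix-matrix products with tall thin matrices, which is what makes the approach attractive for the large problems the article targets.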

  9. Phase-field modeling of diffusional phase behaviors of solid surfaces: A case study of phase-separating LiXFePO4 electrode particles

    DOE PAGES

    Heo, Tae Wook; Chen, Long-Qing; Wood, Brandon C.

    2015-04-08

    In this paper, we present a comprehensive phase-field model for simulating diffusion-mediated kinetic phase behaviors near the surface of a solid particle. The model incorporates elastic inhomogeneity and anisotropy, diffusion mobility anisotropy, interfacial energy anisotropy, and Cahn–Hilliard diffusion kinetics. The free energy density function is formulated based on the regular solution model, taking into account the possible solute-surface interaction near the surface. The coherency strain energy is computed using the Fourier-spectral iterative-perturbation method, owing to the strong elastic inhomogeneity, with a zero-surface-traction boundary condition. Employing a phase-separating LiXFePO4 electrode particle for Li-ion batteries as a model system, we perform parametric three-dimensional computer simulations. The model permits the observation of surface phase behaviors that differ from their bulk counterparts. For instance, it reproduces the theoretically well-established surface modes of spinodal decomposition of an unstable solid solution: the surface mode of coherent spinodal decomposition and the surface-directed spinodal decomposition mode. We systematically investigate the influences of major factors on the kinetic surface phase behaviors during the diffusional process. Finally, our simulation study provides insights for tailoring the internal phase microstructure of a particle by controlling the surface phase morphology.
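The Fourier-spectral treatment of Cahn–Hilliard kinetics underlying models like this one is commonly implemented with a semi-implicit update in k-space: the stiff fourth-order gradient term is treated implicitly and the nonlinear chemical potential explicitly. A minimal 2D sketch of that generic scheme, omitting the paper's elasticity, anisotropy, and surface terms (grid size, time step, and mobility are illustrative):

```python
import numpy as np

# Semi-implicit Fourier-spectral stepping for the Cahn-Hilliard equation
#   dc/dt = M * laplacian( c^3 - c - kappa * laplacian(c) )
N, dt, M, kappa = 64, 0.1, 1.0, 1.0
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0)
k2 = k[:, None] ** 2 + k[None, :] ** 2          # |k|^2 on the 2D grid

rng = np.random.default_rng(0)
c = 0.05 * rng.normal(size=(N, N))              # quenched mixture: small fluctuations about c = 0

for _ in range(200):
    mu_hat = np.fft.fft2(c**3 - c)              # nonlinear chemical potential, explicit
    c_hat = np.fft.fft2(c)
    # Implicit treatment of the stiff kappa * k^4 term keeps the scheme stable
    c_hat = (c_hat - dt * M * k2 * mu_hat) / (1.0 + dt * M * kappa * k2**2)
    c = np.real(np.fft.ifft2(c_hat))
```

After a few hundred steps the random initial condition coarsens into segregated domains (spinodal decomposition) while the mean composition is conserved exactly, since the k = 0 mode is untouched by the update.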

  10. Projection decomposition algorithm for dual-energy computed tomography via deep neural network.

    PubMed

    Xu, Yifu; Yan, Bin; Chen, Jian; Zeng, Lei; Li, Lei

    2018-03-15

    Dual-energy computed tomography (DECT) has been widely used to improve the identification of substances from different spectral information. Decomposition of mixed test samples into two materials relies on a well-calibrated material decomposition function. This work aims to establish and validate a data-driven algorithm for estimating the decomposition function. A deep neural network (DNN) consisting of two sub-nets is proposed to solve the projection decomposition problem. The compressing sub-net, essentially a stacked auto-encoder (SAE), learns a compact representation of the energy spectrum. The decomposing sub-net, with a two-layer structure, fits the nonlinear transform between the energy projection and the basis material thickness. The proposed DNN not only delivers images with lower standard deviation and higher quality on both simulated and real data, but also yields the best performance in cases mixed with photon noise. Moreover, the DNN takes only 0.4 s to generate a decomposition solution at a 360 × 512 size scale, about 200 times faster than the competing algorithms. The DNN model is applicable to decomposition tasks with different dual energies. Experimental results demonstrated the strong function-fitting ability of the DNN. The deep learning paradigm thus provides a promising approach to solving the nonlinear problem in DECT.

  11. Absolute continuity for operator valued completely positive maps on C∗-algebras

    NASA Astrophysics Data System (ADS)

    Gheondea, Aurelian; Kavruk, Ali Şamil

    2009-02-01

    Motivated by applicability to quantum operations, quantum information, and quantum probability, we investigate the notion of absolute continuity for operator valued completely positive maps on C∗-algebras, previously introduced by Parthasarathy [in Athens Conference on Applied Probability and Time Series Analysis I (Springer-Verlag, Berlin, 1996), pp. 34-54]. We obtain an intrinsic definition of absolute continuity, we show that the Lebesgue decomposition defined by Parthasarathy is the maximal one among all other Lebesgue-type decompositions and that this maximal Lebesgue decomposition does not depend on the jointly dominating completely positive map, we obtain more flexible formulas for calculating the maximal Lebesgue decomposition, and we point out the nonuniqueness of the Lebesgue decomposition as well as a sufficient condition for uniqueness. In addition, we consider Radon-Nikodym derivatives for absolutely continuous completely positive maps that, in general, are unbounded positive self-adjoint operators affiliated to a certain von Neumann algebra, and we obtain a spectral approximation by bounded Radon-Nikodym derivatives. An application to the existence of the infimum of two completely positive maps is indicated, and formulas in terms of Choi's matrices for the Lebesgue decomposition of completely positive maps in matrix algebras are obtained.

  12. Systematic implementation of spectral CT with a photon counting detector for liquid security inspection

    NASA Astrophysics Data System (ADS)

    Xu, Xiaofei; Xing, Yuxiang; Wang, Sen; Zhang, Li

    2018-06-01

    X-ray liquid security inspection systems play an important role in homeland security, but a conventional dual-energy CT (DECT) system may show large deviations when extracting the atomic number and the electron density of materials under various conditions. Photon counting detectors (PCDs) have the capability of discriminating incident photons of different energies, and the technology has become increasingly mature. In this work, we explore the performance of a multi-energy CT imaging system with a PCD for material discrimination in liquid security inspection. We used a maximum-likelihood (ML) decomposition method with scatter correction based on a cross-energy response model (CERM) for PCDs in order to improve the accuracy of atomic number and electron density imaging. An experimental study was carried out to examine the effectiveness and robustness of the proposed system. Our results show that the concentrations of different solutions in physical phantoms can be reconstructed accurately, which could improve material identification compared to currently available dual-energy liquid security inspection systems. The CERM-based decomposition and reconstruction method can be readily adapted to other applications such as medical diagnosis.

  13. Data-driven signal-resolving approaches of infrared spectra to explore the macroscopic and microscopic spatial distribution of organic and inorganic compounds in plant.

    PubMed

    Chen, Jian-bo; Sun, Su-qin; Zhou, Qun

    2015-07-01

    Nondestructive and label-free infrared (IR) spectroscopy is a direct tool for characterizing the spatial distribution of organic and inorganic compounds in plants. Since plant samples are usually complex mixtures, signal-resolving methods are necessary to find the spectral features of compounds of interest in the signal-overlapped IR spectra. In this research, two approaches using existing data-driven signal-resolving methods are proposed to interpret the IR spectra of plant samples. If the number of spectra is small, "tri-step identification" can enhance the spectral resolution to separate and identify the overlapped bands. First, the envelope bands of the original spectrum are interpreted according to spectra-structure correlations. Then the spectrum is differentiated to resolve the underlying peaks in each envelope band. Finally, two-dimensional correlation spectroscopy is used to enhance the spectral resolution further. For a large number of spectra, "tri-step decomposition" can resolve the spectra by multivariate methods to obtain structural and semi-quantitative information about the chemical components. Principal component analysis is used first to explore the existing signal types without any prior knowledge. The spectra are then decomposed by self-modeling curve resolution methods to estimate the spectra and contents of the significant chemical components. Finally, targeted methods such as partial least squares target can sensitively probe the content profiles of specific components. As an example, the macroscopic and microscopic distribution of eugenol and calcium oxalate in the clove bud is studied.
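Two of the multivariate steps of this "tri-step decomposition", counting the independent signal types with PCA and then estimating component contents, can be illustrated on synthetic mixture spectra. The Gaussian component bands and the simple least-squares content step below are stand-ins for the self-modeling curve resolution and PLS-target methods used in the paper:

```python
import numpy as np

# Synthetic "IR spectra": random mixtures of two Gaussian component spectra
rng = np.random.default_rng(0)
wn = np.linspace(400, 4000, 500)                       # wavenumber axis (cm^-1)
comp = np.stack([np.exp(-((wn - c) / 80.0) ** 2) for c in (1100, 1700)])
conc = rng.uniform(0, 1, size=(60, 2))                 # 60 mixture samples, 2 components
spectra = conc @ comp + 0.01 * rng.normal(size=(60, 500))

# Step 1 (PCA): how many independent signal types are present?
X = spectra - spectra.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
explained = s**2 / np.sum(s**2)
n_components = int(np.sum(explained > 0.01))           # simple variance-threshold rule

# Step 2 (targeted): given the pure-component spectra, estimate contents
# by linear least squares (stand-in for the targeted PLS step)
contents = spectra @ np.linalg.pinv(comp)
```

With well-separated bands and low noise, the PCA step recovers the correct number of components and the least-squares step recovers the mixture fractions closely; real plant spectra would require the full curve-resolution machinery described in the abstract.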

  14. Signal detection by means of orthogonal decomposition

    NASA Astrophysics Data System (ADS)

    Hajdu, C. F.; Dabóczi, T.; Péceli, G.; Zamantzas, C.

    2018-03-01

    Matched filtering is a well-known method frequently used in digital signal processing to detect the presence of a pattern in a signal. In this paper, we propose a time-variant matched filter which, unlike a regular matched filter, maintains a given alignment between the input signal and the template carrying the pattern, and which can be realized recursively. We introduce a method to synchronize the two signals for presence detection, usable in cases where direct synchronization between the signal generator and the receiver is not possible or not practical. We then propose a way of realizing and extending the same filter by modifying a recursive spectral observer, which gives rise to orthogonal filter channels and also leads to another way to synchronize the two signals.
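The conventional (time-invariant) matched filter that the paper's recursive, time-variant construction generalizes reduces to correlating the signal with the time-reversed template; the correlation peak marks where the pattern occurs. A minimal NumPy sketch with an illustrative windowed-sine template:

```python
import numpy as np

def matched_filter(signal, template):
    """Correlate the signal with the time-reversed template; the output peak
    marks the alignment at which the pattern is present."""
    return np.convolve(signal, template[::-1], mode="valid")

rng = np.random.default_rng(0)
template = np.sin(2 * np.pi * np.arange(50) / 10.0) * np.hanning(50)
signal = 0.1 * rng.normal(size=500)              # background noise
signal[200:250] += template                      # embed the pattern at sample 200

score = matched_filter(signal, template)
detected_at = int(np.argmax(score))              # estimated onset of the pattern
```

The matched filter is optimal for a known pattern in white noise; the paper's contribution is making such a filter track a prescribed alignment recursively when the transmitter and receiver cannot be synchronized directly.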

  15. Exploiting physical constraints for multi-spectral exo-planet detection

    NASA Astrophysics Data System (ADS)

    Thiébaut, Éric; Devaney, Nicholas; Langlois, Maud; Hanley, Kenneth

    2016-07-01

    We derive a physical model of the on-axis PSF for a high contrast imaging system such as GPI or SPHERE. This model is based on a multi-spectral Taylor series expansion of the diffraction pattern and predicts that the speckles should be a combination of spatial modes with deterministic chromatic magnification and weighting. We propose to remove most of the residuals by fitting this model on a set of images at multiple wavelengths and times. On simulated data, we demonstrate that our approach achieves very good speckle suppression without additional heuristic parameters. The residual speckles [1, 2] set the most serious limitation on the detection of exo-planets in the high contrast coronographic images provided by instruments such as SPHERE [3] at the VLT, GPI [4, 5] at Gemini, or SCExAO [6] at Subaru. A number of post-processing methods have been proposed to remove as much as possible of the residual speckles while preserving the signal from the planets. These methods exploit the fact that the speckles and the planetary signal have different temporal and spectral behaviors. Some methods like LOCI [7] are based on angular differential imaging [8] (ADI), spectral differential imaging [9, 10] (SDI), or a combination of ADI and SDI [11]. Instead of working on image differences, we propose to tackle exo-planet detection as an inverse problem where a model of the residual speckles is fit to the set of multi-spectral images and, possibly, multiple exposures. In order to reduce the number of degrees of freedom, we impose specific constraints on the spatio-spectral distribution of stellar speckles. These constraints are deduced from a multi-spectral Taylor series expansion of the diffraction pattern for an on-axis source, which implies that the speckles are a combination of spatial modes with deterministic chromatic magnification and weighting.
Using simulated data, the efficiency of speckle removal by fitting the proposed multi-spectral model is compared with the result of using an approximation based on the singular value decomposition of the rescaled images. We show how the difficult problem of fitting a bilinear model can be solved in practice. The results are promising for further developments, including application to real data and joint planet detection in multi-variate data (multi-spectral and multiple-exposure images).

  16. Fractional-order Fourier analysis for ultrashort pulse characterization.

    PubMed

    Brunel, Marc; Coetmellec, Sébastien; Lelek, Mickael; Louradour, Frédéric

    2007-06-01

    We report what we believe to be the first experimental demonstration of ultrashort pulse characterization using fractional-order Fourier analysis. The analysis is applied to the interpretation of spectral interferometry resolved in time (SPIRIT) traces [which are spectral phase interferometry for direct electric field reconstruction (SPIDER)-like interferograms]. First, the fractional-order Fourier transformation is shown to naturally allow the determination of the cubic spectral phase coefficient of pulses to be analyzed. A simultaneous determination of both cubic and quadratic spectral phase coefficients of the pulses using the fractional-order Fourier series expansion is further demonstrated. This latter technique consists of localizing relative maxima in a 2D cartography representing decomposition coefficients. It is further used to reconstruct or filter SPIRIT traces.

  17. Helium Nanodroplet Isolation of the Cyclobutyl, 1-Methylallyl, and Allylcarbinyl Radicals: Infrared Spectroscopy and Ab Initio Computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Alaina R.; Franke, Peter R.; Douberly, Gary E.

    Gas-phase cyclobutyl radical (•C4H7) is produced via pyrolysis of cyclobutylmethyl nitrite (C4H7(CH2)ONO). Other •C4H7 radicals, such as 1-methylallyl and allylcarbinyl, are similarly produced from nitrite precursors. Nascent radicals are promptly solvated in liquid He droplets, allowing for the acquisition of infrared spectra in the CH stretching region. For the cyclobutyl and 1-methylallyl radicals, anharmonic frequencies are predicted by VPT2+K simulations based upon a hybrid CCSD(T) force field with quadratic (cubic and quartic) force constants computed using the ANO1 (ANO0) basis set. A density functional theoretical method is used to compute the force field for the allylcarbinyl radical. For all •C4H7 radicals, resonance polyads in the 2800-3000 cm⁻¹ region appear as a result of anharmonic coupling between the CH stretching fundamentals and CH bend overtones and combinations. Upon pyrolysis of the cyclobutylmethyl nitrite precursor to produce the cyclobutyl radical, an approximately 2-fold increase in the source temperature leads to the appearance of spectral signatures that can be assigned to 1-methylallyl and 1,3-butadiene. On the basis of a previously reported •C4H7 potential energy surface, this result is interpreted as evidence for the unimolecular decomposition of the cyclobutyl radical via ring opening, prior to it being captured by helium droplets. On the •C4H7 potential surface, 1,3-butadiene is formed from cyclobutyl ring opening and H atom loss, and the 1-methylallyl radical is the most energetically stable intermediate along the decomposition pathway. The allylcarbinyl radical is a higher-energy •C4H7 intermediate along the ring-opening path, and the spectral signatures of this radical are not observed under the same conditions that produce 1-methylallyl and 1,3-butadiene from the unimolecular decomposition of cyclobutyl.

  18. Helium Nanodroplet Isolation of the Cyclobutyl, 1-Methylallyl, and Allylcarbinyl Radicals: Infrared Spectroscopy and Ab Initio Computations

    DOE PAGES

    Brown, Alaina R.; Franke, Peter R.; Douberly, Gary E.

    2017-09-22

    Gas-phase cyclobutyl radical (•C4H7) is produced via pyrolysis of cyclobutylmethyl nitrite (C4H7(CH2)ONO). Other •C4H7 radicals, such as 1-methylallyl and allylcarbinyl, are similarly produced from nitrite precursors. Nascent radicals are promptly solvated in liquid He droplets, allowing for the acquisition of infrared spectra in the CH stretching region. For the cyclobutyl and 1-methylallyl radicals, anharmonic frequencies are predicted by VPT2+K simulations based upon a hybrid CCSD(T) force field with quadratic (cubic and quartic) force constants computed using the ANO1 (ANO0) basis set. A density functional theoretical method is used to compute the force field for the allylcarbinyl radical. For all •C4H7 radicals, resonance polyads in the 2800-3000 cm⁻¹ region appear as a result of anharmonic coupling between the CH stretching fundamentals and CH bend overtones and combinations. Upon pyrolysis of the cyclobutylmethyl nitrite precursor to produce the cyclobutyl radical, an approximately 2-fold increase in the source temperature leads to the appearance of spectral signatures that can be assigned to 1-methylallyl and 1,3-butadiene. On the basis of a previously reported •C4H7 potential energy surface, this result is interpreted as evidence for the unimolecular decomposition of the cyclobutyl radical via ring opening, prior to it being captured by helium droplets. On the •C4H7 potential surface, 1,3-butadiene is formed from cyclobutyl ring opening and H atom loss, and the 1-methylallyl radical is the most energetically stable intermediate along the decomposition pathway. The allylcarbinyl radical is a higher-energy •C4H7 intermediate along the ring-opening path, and the spectral signatures of this radical are not observed under the same conditions that produce 1-methylallyl and 1,3-butadiene from the unimolecular decomposition of cyclobutyl.

  19. Q-3D: Imaging Spectroscopy of Quasar Hosts with JWST Analyzed with a Powerful New PSF Decomposition and Spectral Analysis Package

    NASA Astrophysics Data System (ADS)

    Wylezalek, Dominika; Veilleux, Sylvain; Zakamska, Nadia; Barrera-Ballesteros, J.; Luetzgendorf, N.; Nesvadba, N.; Rupke, D.; Sun, A.

    2017-11-01

    In the last few years, optical and near-IR IFU observations from the ground have revolutionized extragalactic astronomy. The unprecedented infrared sensitivity, spatial resolution, and spectral coverage of the JWST IFUs will ensure high demand from the community. For a wide range of extragalactic phenomena (e.g. quasars, starbursts, supernovae, gamma ray bursts, tidal disruption events) and beyond (e.g. nebulae, debris disks around bright stars), PSF contamination will be an issue when studying the underlying extended emission. We propose to provide the community with a PSF decomposition and spectral analysis package for high dynamic range JWST IFU observations allowing the user to create science-ready maps of relevant spectral features. Luminous quasars, with their bright central source (quasar) and extended emission (host galaxy), are excellent test cases for this software. Quasars are also of high scientific interest in their own right as they are widely considered to be the main driver in regulating massive galaxy growth. JWST will revolutionize our understanding of black hole-galaxy co-evolution by allowing us to probe the stellar, gas, and dust components of nearby and distant galaxies, spatially and spectrally. We propose to use the IFU capabilities of NIRSpec and MIRI to study the impact of three carefully selected luminous quasars on their hosts. Our program will provide (1) a scientific dataset of broad interest that will serve as a pathfinder for JWST science investigations in IFU mode and (2) a powerful new data analysis tool that will enable frontier science for a wide swath of astrophysical research.

  20. Q estimation of seismic data using the generalized S-transform

    NASA Astrophysics Data System (ADS)

    Hao, Yaju; Wen, Xiaotao; Zhang, Bo; He, Zhenhua; Zhang, Rui; Zhang, Jinming

    2016-12-01

    Quality factor, Q, is a parameter that characterizes the energy dissipation during seismic wave propagation. The reservoir pore is one of the main factors that affect the value of Q. In particular, when the pore space is filled with oil or gas, the rock usually exhibits a relatively low Q value. Such a low Q value has been used as a direct hydrocarbon indicator by many researchers. The conventional Q estimation method based on the spectral ratio suffers from the problem of waveform tuning; hence, many researchers have introduced time-frequency analysis techniques to tackle this problem. Unfortunately, the window functions adopted in time-frequency analysis algorithms such as the continuous wavelet transform (CWT) and S-transform (ST) contaminate the amplitude spectra because the seismic signal is multiplied by the window functions during time-frequency decomposition. The basic assumption of the spectral ratio method is that there is a linear relationship between the natural logarithmic spectral ratio and frequency. However, this assumption does not hold if we take the influence of window functions into consideration. In this paper, we first employ a recently developed two-parameter generalized S-transform (GST) to obtain the time-frequency spectra of seismic traces. We then deduce the non-linear relationship between the natural logarithmic spectral ratio and frequency. Finally, we obtain a linear relationship between the natural logarithmic spectral ratio and a newly defined parameter γ by ignoring the negligible second-order term. The gradient of this linear relationship is 1/Q. Here, the parameter γ is a function of frequency and the source wavelet. Numerical examples for VSP and post-stack reflection data confirm that our algorithm is capable of yielding accurate results. The Q values estimated from field data acquired in western China show reasonable agreement with an oil-producing well location.
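The classical spectral-ratio relation that this abstract builds on can be sketched in a few lines: under a constant-Q model, the log ratio of amplitude spectra recorded before and after a travel time Δt is linear in frequency with slope −πΔt/Q, so Q falls out of a straight-line fit. The sketch below uses synthetic, window-free spectra, so it illustrates only the conventional baseline method, not the paper's GST correction.

```python
import numpy as np

def estimate_q_spectral_ratio(spec_a, spec_b, freqs, dt_travel):
    """Classical spectral-ratio Q estimate.

    Assumes ln(|S_b(f)| / |S_a(f)|) = c - (pi * dt_travel / Q) * f,
    so Q is recovered from the slope of a degree-1 least-squares fit.
    """
    log_ratio = np.log(np.abs(spec_b) / np.abs(spec_a))
    slope, _intercept = np.polyfit(freqs, log_ratio, 1)
    return -np.pi * dt_travel / slope

# Synthetic check: attenuate a flat source spectrum with a known Q.
freqs = np.linspace(5.0, 60.0, 56)          # analysis band (Hz)
q_true, dt = 80.0, 0.4                      # quality factor, travel time (s)
spec_a = np.ones_like(freqs)
spec_b = 2.0 * np.exp(-np.pi * freqs * dt / q_true)  # scaling + attenuation
print(round(estimate_q_spectral_ratio(spec_a, spec_b, freqs, dt), 1))
```

Because the synthetic log ratio is exactly linear in frequency, the fit returns the input Q; on real data the windowing bias discussed in the abstract perturbs this line, which is what the γ-reparameterization is designed to repair.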

  1. SU-F-J-210: A Preliminary Study On the Dosimetric Impact of Detector Based Spectral Ct On Proton Therapy Treatment Planning

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cheng, C; Lee, S; Wessels, B

    2016-06-15

    Purpose: To compare the difference in Hounsfield unit-relative stopping power and evaluate the dosimetric impact of spectral vs. conventional CT on proton therapy treatment plans. Method: The Philips prototype (IQon), a detector-based spectral CT system, was used to scan calibration and Rando phantoms. Data were reconstructed with and without energy decomposition to produce monoenergetic 70 keV, 140 keV, and Zeff images. Relative stopping power (RSP) in the head and lung regions was evaluated as a function of HU in order to compare spectral and conventional CT. Treatment plans for the Rando phantom were also generated and used to produce DVHs of a fictitious target volume and organ-at-risk contoured on the head and lung. Results: The Zeff of the tissue-substitute materials determined using spectral CT agrees to within 1 to 5% of the Zeff of the known phantom composition. The discrepancy is primarily attributed to non-uniformity in the phantom. Differences between the HU-RSP curves obtained using spectral and conventional CT were small except in the lung curve at HU>1000. The largest difference in planned doses using spectral vs. conventional CT occurred in a low-dose brain region (1.7 mm between the locations of the 100 cGy lines and 3 mm for the 50 cGy lines). Conclusion: Conventionally, a single HU-RSP curve from a CT scanner is used in proton treatment planning. Spectral CT allows site-specific HU-RSP for each patient. Spectral and conventional HU-RSP may result in different dose distributions as shown here. Additional study is required to evaluate the impact of spectral CT in proton treatment planning. This study is part of a research agreement between Philips and University Hospitals/Case Medical Center.

  2. Comparison of automatic denoising methods for phonocardiograms with extraction of signal parameters via the Hilbert Transform

    NASA Astrophysics Data System (ADS)

    Messer, Sheila R.; Agzarian, John; Abbott, Derek

    2001-05-01

    Phonocardiograms (PCGs) have many advantages over traditional auscultation (listening to the heart) because they may be replayed, may be analyzed for spectral and frequency content, and frequencies inaudible to the human ear may be recorded. However, various sources of noise may pollute a PCG, including lung sounds, environmental noise and noise generated from contact between the recording device and the skin. Because PCG signals are known to be nonlinear and it is often not possible to determine their noise content, traditional de-noising methods may not be effectively applied. However, other methods including wavelet de-noising, wavelet packet de-noising and averaging can be employed to de-noise the PCG. This study examines and compares these de-noising methods. It addresses which de-noising method gives a better SNR, the magnitude of signal information lost as a result of the de-noising process, and the appropriate uses of the different methods, down to such specifics as which wavelets and decomposition levels give the best results in wavelet and wavelet packet de-noising. In general, wavelet and wavelet packet de-noising performed roughly equally, with optimal de-noising occurring at 3-5 levels of decomposition. Averaging also proved a highly useful de-noising technique; however, in some cases averaging is not appropriate. The Hilbert Transform is used to illustrate the results of the de-noising process and to extract instantaneous features including instantaneous amplitude, frequency, and phase.
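The wavelet-denoising idea compared in this study can be illustrated with a minimal NumPy sketch: decompose, soft-threshold the detail coefficients, reconstruct. The study evaluates many wavelets and levels; the Haar wavelet, four levels, and a 3σ threshold below are only assumed, simple stand-ins.

```python
import numpy as np

def haar_decompose(x, levels):
    """Orthonormal Haar wavelet decomposition (x length divisible by 2**levels)."""
    coeffs, approx = [], x.astype(float)
    for _ in range(levels):
        a = (approx[0::2] + approx[1::2]) / np.sqrt(2)  # approximation band
        d = (approx[0::2] - approx[1::2]) / np.sqrt(2)  # detail band
        coeffs.append(d)
        approx = a
    return approx, coeffs

def haar_reconstruct(approx, coeffs):
    for d in reversed(coeffs):
        out = np.empty(2 * approx.size)
        out[0::2] = (approx + d) / np.sqrt(2)
        out[1::2] = (approx - d) / np.sqrt(2)
        approx = out
    return approx

def denoise(x, levels=4, k=3.0):
    approx, coeffs = haar_decompose(x, levels)
    sigma = np.median(np.abs(coeffs[0])) / 0.6745  # robust noise-level estimate
    thr = k * sigma
    # Soft threshold: shrink every detail coefficient toward zero by thr.
    coeffs = [np.sign(d) * np.maximum(np.abs(d) - thr, 0.0) for d in coeffs]
    return haar_reconstruct(approx, coeffs)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1024, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
den = denoise(noisy)
print(np.std(den - clean) < np.std(noisy - clean))  # residual error shrinks
```

A smoother mother wavelet (e.g. a Daubechies family member) would track the sinusoid with less distortion than Haar; the choice of wavelet and level is exactly the trade-off the study quantifies.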

  3. Data-adaptive harmonic spectra and multilayer Stuart-Landau models

    NASA Astrophysics Data System (ADS)

    Chekroun, Mickaël D.; Kondrashov, Dmitri

    2017-09-01

    Harmonic decompositions of multivariate time series are considered for which we adopt an integral operator approach with periodic semigroup kernels. Spectral decomposition theorems are derived that cover the important cases of two-time statistics drawn from a mixing invariant measure. The corresponding eigenvalues can be grouped per Fourier frequency and are actually given, at each frequency, as the singular values of a cross-spectral matrix depending on the data. These eigenvalues obey, furthermore, a variational principle that allows us to define naturally a multidimensional power spectrum. The eigenmodes themselves exhibit a data-adaptive character manifested in their phase, which allows us in turn to define a multidimensional phase spectrum. The resulting data-adaptive harmonic (DAH) modes allow for reducing the data-driven modeling effort to elemental models stacked per frequency, only coupled at different frequencies by the same noise realization. In particular, the DAH decomposition extracts time-dependent coefficients stacked by Fourier frequency which can be efficiently modeled—provided the decay of temporal correlations is sufficiently well-resolved—within a class of multilayer stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators. Applications to the Lorenz 96 model and to a stochastic heat equation driven by a space-time white noise are considered. In both cases, the DAH decomposition allows for an extraction of spatio-temporal modes revealing key features of the dynamics in the embedded phase space. The multilayer Stuart-Landau models (MSLMs) are shown to successfully model the typical patterns of the corresponding time-evolving fields, as well as their statistics of occurrence.

  4. Micromechanical Sensor for the Spectral Decomposition of Acoustic Signals

    DTIC Science & Technology

    2012-02-01

    (No abstract available for this record; the indexed text consists only of list-of-figures fragments, e.g. "Reverse Ballistic Air Gun," "Schematic of the Sensor including Sensor-to-Sensor Parasitic," "Schematic of Laser Machined Sensor," and "Laser Machined Sensor Mode 1.")

  5. Mode Shape Estimation Algorithms Under Ambient Conditions: A Comparative Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dosiek, Luke; Zhou, Ning; Pierre, John W.

    Abstract—This paper provides a comparative review of five existing ambient electromechanical mode shape estimation algorithms, i.e., the Transfer Function (TF), Spectral, Frequency Domain Decomposition (FDD), Channel Matching, and Subspace Methods. It is also shown that the TF Method is a general approach to estimating mode shape and that the Spectral, FDD, and Channel Matching Methods are actually special cases of it. Additionally, some of the variations of the Subspace Method are reviewed and the Numerical algorithm for Subspace State Space System IDentification (N4SID) is implemented. The five algorithms are then compared using data simulated from a 17-machine model of the Western Electricity Coordinating Council (WECC) under ambient conditions with both low and high damping, as well as during the case where ambient data is disrupted by an oscillatory ringdown. The performance of the algorithms is compared using statistics from Monte Carlo simulations and results from measured WECC data, and a discussion of the practical issues surrounding their implementation, including cases where power system probing is an option, is provided. The paper concludes with some recommendations as to the appropriate use of the various techniques. Index Terms—Electromechanical mode shape, small-signal stability, phasor measurement units (PMU), system identification, N4SID, subspace.

  6. SU-E-T-610: Phosphor-Based Fiber Optic Probes for Proton Beam Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darafsheh, A; Soldner, A; Liu, H

    2015-06-15

    Purpose: To investigate the feasibility of using fiber optic probes with rare-earth-based phosphor tips for proton beam radiation dosimetry. We designed and fabricated a fiber probe with submillimeter resolution (<0.5 mm³) based on TbF3 phosphors and evaluated its performance for measurement of a proton beam, including profiles and range. Methods: The fiber optic probe with a TbF3 phosphor tip, embedded in tissue-mimicking phantoms, was irradiated with a double-scattering proton beam with an energy of 180 MeV. Luminescence spectroscopy was performed by a CCD-coupled spectrograph to analyze the emission spectra of the fiber tip. In order to measure the spatial beam profile and percentage depth dose, we used the singular value decomposition method to spectrally separate the phosphor's ionoluminescence signal from the background Cerenkov radiation signal. Results: The spectra of the TbF3 fiber probe showed characteristic ionoluminescence emission peaks at 489, 542, 586, and 620 nm. By using singular value decomposition we found the contribution of the ionoluminescence signal to measure the percentage depth dose in phantoms and compared that with measurements performed with an ion chamber. We observed a quenching effect at the spread-out Bragg peak region, manifested as under-responding of the signal, due to the high LET of the beam. However, the beam profiles were not dramatically affected by the quenching effect. Conclusion: We have evaluated the performance of a fiber optic probe with submillimeter resolution for proton beam dosimetry. We demonstrated the feasibility of spectral separation of the Cerenkov radiation from the collected signal. Such fiber probes can be used for measurements of proton beam profiles and range. The experimental apparatus and spectroscopy method developed in this work provide a robust platform for characterization of proton-irradiated nanophosphor particles for ultralow-fluence photodynamic therapy or molecular imaging applications.

  7. Installation effects on the tonal noise generated by axial flow fans

    NASA Astrophysics Data System (ADS)

    Canepa, Edward; Cattanei, Andrea; Mazzocut Zecchin, Fabio

    2015-03-01

    The paper presents the results of experiments on a low-speed axial-flow fan flush mounted on flat panels typically employed in tests on automotive cooling fans. The experiments have been conducted in a hemi-anechoic chamber and were aimed at evaluating the installation effects of the whole test configuration, including the chamber floor and the size and shape of the mounting panel. The largest panels cause significant SPL variations in a narrow, low-frequency range. Their effect on the propagation function has been verified by means of parametric BEM computations. A regular wavy trend associated with reflections from the floor is also present. In both cases, the tonal noise is more strongly affected than the broadband one. The analysis is performed by means of an existing spectral decomposition technique and a new one, which makes it possible to consider different noise-generating mechanisms and also to separate the emitted tonal and broadband noise from the associated propagation effects. In order to better identify the features of the noise at the blade passing frequency (BPF) harmonics, the phase of the acoustic pressure is also analysed. Measurements are taken during speed ramps, which yield both constant-Strouhal-number SPL data and constant-speed data. The former data set is employed in the new technique, while the latter may be employed in the standard spectral decomposition techniques. Based on both the similarity theory and the analysis of the Green's function of the problem, a theoretical description of the structure of the received SPL spectrum is given. Then, the possibility of discriminating between tonal and broadband noise-generating mechanisms is analysed and a theoretical basis for the new spectral decomposition technique is provided.

  8. Fusion method of SAR and optical images for urban object extraction

    NASA Astrophysics Data System (ADS)

    Jia, Yonghong; Blum, Rick S.; Li, Fangfang

    2007-11-01

    A new image fusion method for SAR, panchromatic (Pan) and multispectral (MS) data is proposed. First, SAR texture is extracted by ratioing the despeckled SAR image to its low-pass approximation, and is used to modulate the high-pass details extracted from the available Pan image by means of the à trous wavelet decomposition. Then, the high-pass details modulated with the texture are applied to obtain the fusion product by the HPFM (high-pass filter-based modulation) fusion method. A set of image data including co-registered Landsat TM, ENVISAT SAR and SPOT Pan is used for the experiment. The results demonstrate accurate spectral preservation on vegetated regions, bare soil, and also on textured areas (buildings and road network) where SAR texture information enhances the fusion product, and the proposed approach is effective for image interpretation and classification.

  9. Classification of breast microcalcifications using spectral mammography

    NASA Astrophysics Data System (ADS)

    Ghammraoui, B.; Glick, S. J.

    2017-03-01

    Purpose: To investigate the potential of spectral mammography to distinguish between type I calcifications, consisting of calcium oxalate dihydrate or weddellite compounds that are more often associated with benign lesions, and type II calcifications containing hydroxyapatite which are predominantly associated with malignant tumors. Methods: Using a ray tracing algorithm, we simulated the total number of x-ray photons recorded by the detector at one pixel from a single pencil-beam projection through a breast of 50/50 (adipose/glandular) tissues with inserted microcalcifications of different types and sizes. Material decomposition using two energy bins was then applied to characterize the simulated calcifications into hydroxyapatite and weddellite using maximum-likelihood estimation, taking into account the polychromatic source, the detector response function and the energy-dependent attenuation. Results: Simulation tests were carried out for different doses and calcification sizes for multiple realizations. The results were summarized using receiver operating characteristic (ROC) analysis with the area under the curve (AUC) taken as an overall indicator of discrimination performance, showing high AUC values up to 0.99. Conclusion: Our simulation results obtained for a uniform breast imaging phantom indicate that spectral mammography using two energy bins has the potential to be used as a non-invasive method for discrimination between type I and type II microcalcifications to improve early breast cancer diagnosis and reduce the number of unnecessary breast biopsies.
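The two-bin decomposition step can be illustrated with a linearized toy model: with two energy bins and two basis materials, the measured log-attenuations form a 2×2 linear system in the material mass thicknesses. The attenuation coefficients below are hypothetical placeholders, and the method in the abstract uses maximum-likelihood estimation with a full polychromatic-source model rather than this direct inversion.

```python
import numpy as np

# Hypothetical effective mass-attenuation coefficients (cm^2/g) for the two
# calcification materials in a low and a high energy bin; real values would
# come from the calibrated source spectrum and detector response.
mu = np.array([[0.80, 0.30],    # bin 1: [hydroxyapatite, weddellite]
               [0.45, 0.20]])   # bin 2

def decompose(log_atten):
    """Invert the linearized two-bin model  log_atten = mu @ thicknesses."""
    return np.linalg.solve(mu, log_atten)

# Forward-simulate a calcification with known mass thicknesses (g/cm^2),
# then recover them from the two bin measurements.
truth = np.array([0.05, 0.02])
measured = mu @ truth
print(np.round(decompose(measured), 3))
```

The classification in the abstract then amounts to asking which basis material dominates the recovered pair; with photon noise this inversion becomes the maximum-likelihood estimate the authors describe.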

  10. Application of Raman microscopy to biodegradable double-walled microspheres.

    PubMed

    Widjaja, Effendi; Lee, Wei Li; Loo, Say Chye Joachim

    2010-02-15

    Raman mapping measurements were performed on the cross section of the ternary-phase biodegradable double-walled microsphere (DWMS) of poly(D,L-lactide-co-glycolide) (50:50) (PLGA), poly(L-lactide) (PLLA), and poly(epsilon-caprolactone) (PCL), which was fabricated by a one-step solvent evaporation method. The collected Raman spectra were subjected to a band-target entropy minimization (BTEM) algorithm in order to reconstruct the pure component spectra of the species observed in this sample. Seven pure component spectral estimates were recovered, and their spatial distributions within the DWMS were determined. The first three spectral estimates were identified as PLLA, PLGA 50:50, and PCL, which were the main components in the DWMS. The last four spectral estimates were identified as semicrystalline polyglycolic acid (PGA), dichloromethane (DCM), copper-phthalocyanine blue, and calcite, which were the minor components in the DWMS. PGA was the decomposition product of PLGA. DCM was the solvent used in DWMS fabrication. Copper-phthalocyanine blue and calcite were unexpected contaminants. The current result shows that combined Raman microscopy and BTEM analysis can provide a sensitive characterization tool for DWMS, as it can give more specific information on the chemical species present as well as their spatial distributions. This novel analytical method for microsphere characterization can serve as a complementary tool to other more established analytical techniques, such as scanning electron microscopy and optical microscopy.

  11. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  12. A systematic linear space approach to solving partially described inverse eigenvalue problems

    NASA Astrophysics Data System (ADS)

    Hu, Sau-Lon James; Li, Haujun

    2008-06-01

    Most applications of the inverse eigenvalue problem (IEP), which concerns the reconstruction of a matrix from prescribed spectral data, are associated with special classes of structured matrices. Solving the IEP requires one to satisfy both the spectral constraint and the structural constraint. If the spectral constraint consists of only one or a few prescribed eigenpairs, this kind of inverse problem has been referred to as the partially described inverse eigenvalue problem (PDIEP). This paper develops an efficient, general and systematic approach to solve the PDIEP. Basically, the approach, applicable to various structured matrices, converts the PDIEP into an ordinary inverse problem that is formulated as a set of simultaneous linear equations. While solving the simultaneous linear equations for model parameters, the singular value decomposition method is applied. Because of the conversion to an ordinary inverse problem, other constraints associated with the model parameters can be easily incorporated into the solution procedure. The detailed derivation and numerical examples to implement the newly developed approach to symmetric Toeplitz and quadratic pencil (including mass, damping and stiffness matrices of a linear dynamic system) PDIEPs are presented. Excellent numerical results for both kinds of problem are achieved in situations that have either unique or infinitely many solutions.

  13. Hilbert-Huang Transform: A Spectral Analysis Tool Applied to Sunspot Number and Total Solar Irradiance Variations, as well as Near-Surface Atmospheric Variables

    NASA Astrophysics Data System (ADS)

    Barnhart, B. L.; Eichinger, W. E.; Prueger, J. H.

    2010-12-01

    Hilbert-Huang transform (HHT) is a relatively new data analysis tool which is used to analyze nonstationary and nonlinear time series data. It consists of an algorithm, called empirical mode decomposition (EMD), which extracts the cyclic components embedded within time series data, as well as Hilbert spectral analysis (HSA) which displays the time and frequency dependent energy contributions from each component in the form of a spectrogram. The method can be considered a generalized form of Fourier analysis which can describe the intrinsic cycles of data with basis functions whose amplitudes and phases may vary with time. The HHT will be introduced and compared to current spectral analysis tools such as Fourier analysis, short-time Fourier analysis, wavelet analysis and Wigner-Ville distributions. A number of applications are also presented which demonstrate the strengths and limitations of the tool, including analyzing sunspot number variability and total solar irradiance proxies as well as global averaged temperature and carbon dioxide concentration. Also, near-surface atmospheric quantities such as temperature and wind velocity are analyzed to demonstrate the nonstationarity of the atmosphere.
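The HSA stage of the HHT — extracting instantaneous amplitude and frequency from a mono-component signal via the analytic signal — can be sketched with an FFT-based Hilbert transform; the EMD sifting stage that precedes it is omitted here for brevity.

```python
import numpy as np

def analytic_signal(x):
    """Discrete analytic signal via the FFT (one-sided spectrum doubling)."""
    n = x.size
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(X * h)

fs = 1000.0                                   # sampling rate (Hz)
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t)                # a single 50 Hz "mode"
z = analytic_signal(x)
amp = np.abs(z)                               # instantaneous amplitude
phase = np.unwrap(np.angle(z))
inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)
print(round(np.median(inst_freq), 1))
```

In a full HHT pipeline, EMD would first split a multi-component record (e.g. a sunspot series) into intrinsic mode functions, and this attribute extraction would be applied to each one to build the Hilbert spectrogram.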

  14. The thermal decomposition of fine-grained micrometeorites, observations from mid-IR spectroscopy

    NASA Astrophysics Data System (ADS)

    Suttle, Martin David; Genge, Matthew J.; Folco, Luigi; Russell, Sara S.

    2017-06-01

    We analysed 44 fine-grained and scoriaceous micrometeorites. A bulk mid-IR spectrum (8-13 μm) for each grain was collected and the entire micrometeorite population classified into 5 spectral groups, based on the positions of their absorption bands. Corresponding carbonaceous Raman spectra, textural observations from SEM-BSE and bulk geochemical data via EMPA were collected to aid in the interpretation of the mid-IR spectra. The 5 spectral groups identified correspond to progressive thermal decomposition. Unheated hydrated chondritic matrix, composed predominantly of phyllosilicates, exhibits smooth, asymmetric spectra with a peak at ∼10 μm. Thermal decomposition of sheet silicates evolves through dehydration, dehydroxylation, annealing and finally the onset of partial melting. Both CI-like and CM-like micrometeorites are shown to pass through the same decomposition stages and produce similar mid-IR spectra. Using known temperature thresholds for each decomposition stage it is possible to assign a peak temperature range to a given micrometeorite. Since the temperature thresholds for decomposition reactions are defined by the phyllosilicate species and the cation composition, and these variables are markedly different between the CM and CI classes, atmospheric entry should bias the dust flux to favour the survival of CI-like grains, whilst preferentially melting most CM-like dust. However, this hypothesis is inconsistent with empirical observations and instead requires that the source ratio of CI:CM dust is heavily skewed in favour of CM material. In addition, a small population of anomalous grains is identified whose carbonaceous and petrographic characteristics suggest in-space heating and dehydroxylation have occurred. These grains may therefore represent regolith micrometeorites derived from the surface of C-type asteroids. Since the spectroscopic signatures of dehydroxylates are distinctive, i.e. characterised by a reflectance peak at 9.0-9.5 μm, and since the surfaces of C-type asteroids are expected to be heated via impact gardening, we suggest that future spectroscopic investigations should attempt to identify dehydroxylate signatures in the reflectance spectra of young carbonaceous asteroid families.

  15. Operational modal analysis using SVD of power spectral density transmissibility matrices

    NASA Astrophysics Data System (ADS)

    Araújo, Iván Gómez; Laier, Jose Elias

    2014-05-01

    This paper proposes the singular value decomposition of power spectral density transmissibility matrices with different references (PSDTM-SVD) as an identification method for the natural frequencies and mode shapes of a dynamic system subjected to excitations under operational conditions. At the system poles, the rows of the proposed transmissibility matrix converge to the same ratio of amplitudes of vibration modes. As a result, the matrices are linearly dependent on the columns, and their singular values converge to zero. Singular values are used to determine the natural frequencies, and the first left singular vectors are used to estimate mode shapes. A numerical example of the finite element model of a beam subjected to colored noise excitation is analyzed to illustrate the accuracy of the proposed method. Results of the PSDTM-SVD method in the numerical example are compared with those obtained using frequency domain decomposition (FDD) and power spectral density transmissibility (PSDT). It is demonstrated that the proposed method does not depend on the excitation characteristics, unlike the FDD method that assumes white noise excitation, and further reduces the risk of identifying extra non-physical poles in comparison to the PSDT method. Furthermore, a case study is performed using data from an operational vibration test of a bridge with a simply supported beam system. The real application to a full-sized bridge has shown that the proposed PSDTM-SVD method is able to identify the operational modal parameters. Operational modal parameters identified by the PSDTM-SVD in the real application agree well with those identified by the FDD and PSDT methods.
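The common ingredient of these methods — an SVD of averaged cross-spectral matrices, with a peak in the first singular value locating a natural frequency and the corresponding first left singular vector estimating the mode shape — can be sketched on a simulated single-mode ambient response. This follows the generic FDD-style recipe, not the paper's specific PSDTM construction; the mode shape [1.0, 0.6] and all simulation parameters are invented for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ambient response of one structural mode seen by two sensors: a lightly
# damped resonator (normalized frequency 0.1) driven by white noise,
# observed through a hypothetical mode shape [1.0, 0.6] plus sensor noise.
n, f0, r = 1 << 15, 0.1, 0.98
e = rng.standard_normal(n)
q = np.zeros(n)
a1, a2 = 2 * r * np.cos(2 * np.pi * f0), -r * r
for k in range(2, n):
    q[k] = a1 * q[k - 1] + a2 * q[k - 2] + e[k]
phi = np.array([1.0, 0.6])
y = np.outer(phi, q) + 0.05 * rng.standard_normal((2, n))

# Averaged cross-spectral density matrices G(f), then an SVD per frequency.
seg = 256
segments = y[:, : n - n % seg].reshape(2, -1, seg)
Y = np.fft.rfft(segments * np.hanning(seg), axis=2)
G = np.einsum('isf,jsf->fij', Y, Y.conj()) / Y.shape[1]
s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])

peak = np.argmax(s1[1:]) + 1                 # skip the DC bin
freq_est = peak / seg                        # natural frequency (normalized)
u = np.linalg.svd(G[peak])[0][:, 0]          # first left singular vector
shape_est = np.abs(u[1] / u[0])              # mode-shape ratio sensor2/sensor1
print(round(freq_est, 3), round(shape_est, 2))
```

The transmissibility-based variant in the paper replaces G(f) with ratios of cross-spectra taken against different reference channels, which is what removes the white-noise-excitation assumption this plain FDD sketch relies on.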

  16. The development of a post-mortem interval estimation for human remains found on land in the Netherlands.

    PubMed

    Gelderman, H T; Boer, L; Naujocks, T; IJzermans, A C M; Duijst, W L J M

    2018-05-01

    The decomposition process of human remains can be used to estimate the post-mortem interval (PMI), but decomposition varies due to many factors. Temperature is believed to be the most important and can be connected to decomposition by using accumulated degree days (ADD). The aim of this research was to develop a decomposition scoring method and a formula to estimate the PMI using the developed decomposition scoring method and ADD. A decomposition scoring method and a Book of Reference (visual resource) were made. Ninety-one cases were used to develop a method to estimate the PMI. The photographs were scored using the decomposition scoring method. The temperature data were provided by the Royal Netherlands Meteorological Institute. The PMI was estimated using the total decomposition score (TDS) alone and using the TDS and ADD. The latter required an additional step, namely to calculate the ADD from the finding date back until the predicted day of death. The developed decomposition scoring method had a high interrater reliability. The TDS significantly predicts the PMI (R² = 0.67 and 0.80 for indoor and outdoor bodies, respectively). When using the ADD, the R² decreased to 0.66 and 0.56. The developed decomposition scoring method is a practical method to measure decomposition for human remains found on land. The PMI can be estimated using this method, but caution is advised in cases with a long PMI. The ADD does not account for all the heat present in decomposing remains and is therefore a possible source of bias.
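The back-calculation step described above — accumulating degree days backwards from the finding date until the ADD implied by the TDS is reached — can be sketched as follows. The linear TDS-to-ADD coefficients are invented placeholders, not the fitted values from the study.

```python
# Hypothetical linear model linking total decomposition score (TDS) to
# accumulated degree days (ADD); the coefficients below are illustrative
# stand-ins, not the regression results reported in the paper.
def target_add(tds, a=-50.0, b=40.0):
    return a + b * tds

def pmi_days(tds, daily_mean_temps, base=0.0):
    """Walk backwards from the finding date, accumulating degree days
    (daily mean temperature above `base`) until the ADD predicted from
    the TDS is reached; the number of days walked is the PMI estimate."""
    goal = target_add(tds)
    add = 0.0
    for days, temp in enumerate(reversed(daily_mean_temps), start=1):
        add += max(temp - base, 0.0)
        if add >= goal:
            return days
    return len(daily_mean_temps)   # ADD never reached within the record

# 30 days of daily mean temperatures (deg C), most recent last.
temps = [12.0] * 30
print(pmi_days(10.0, temps))
```

With a TDS of 10 the placeholder model implies 350 ADD; at 12 degree days per day that threshold is crossed on day 30 of the backward walk, so the sketch returns a 30-day PMI under these assumed numbers.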

  17. Generation of metallic plasmon nanostructures in a thin transparent photosensitive copper oxide film by femtosecond thermochemical decomposition

    NASA Astrophysics Data System (ADS)

    Danilov, P. A.; Zayarny, D. A.; Ionin, A. A.; Kudryashov, S. I.; Litovko, E. P.; Mel'nik, N. N.; Rudenko, A. A.; Saraeva, I. N.; Umanskaya, S. P.; Khmelnitskii, R. A.

    2017-09-01

    Irradiation of an optically transparent copper(I) oxide film covering a glass substrate with tightly focused femtosecond laser pulses in the pre-ablation regime leads to film reduction to a metallic colloidal state via single-photon absorption and its subsequent thermochemical decomposition. This effect was demonstrated by the corresponding measurement of the extinction spectrum in the visible spectral range. The laser-induced formation of metallic copper nanoparticles in the focal region inside the bulk oxide film allows direct recording of individual thin-film plasmon nanostructures and optical-range metasurfaces.

  18. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography.

    PubMed

    Cai, C; Rodet, T; Legoupil, S; Mohammad-Djafari, A

    2013-11-01

    Dual-energy computed tomography (DECT) makes it possible to get two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models without counting the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models counting the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking negative-log. Referring to Bayesian inferences, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. 
It is also necessary to have accurate spectrum information about the source-detector system; when dealing with experimental data, the spectrum can be predicted by a Monte Carlo simulator. For materials between water and bone, separation errors of less than 5% are observed on the estimated decomposition fractions. In summary, the proposed approach is a statistical reconstruction approach based on a nonlinear forward model that accounts for the full beam polychromaticity and is applied directly to the projections without taking the negative log. Compared to approaches based on linear forward models and to BHA correction approaches, it offers better noise robustness and reconstruction accuracy.
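
    The core computational step above, fitting decomposition fractions directly to non-log polychromatic projection data by minimizing a nonlinear least-squares cost, can be sketched on a single ray. The two-bin spectra and attenuation values below are invented for illustration, and a plain Gauss-Newton iteration stands in for the paper's monotone conjugate gradient on the full Bayesian cost (no variance prior here).

```python
import math

# Toy single-ray polychromatic model: two source spectra (low/high kVp),
# two energy bins, two materials (water w, bone b). All numbers are
# hypothetical; this is NOT the paper's full Bayesian algorithm.
spectra = [[0.7, 0.3], [0.3, 0.7]]   # bin weights of the two measurements
mu_w = [0.20, 0.15]                  # water attenuation per bin
mu_b = [0.50, 0.30]                  # bone attenuation per bin

def forward(s, w, b):
    """Polychromatic transmission: sum_k s_k * exp(-(mu_w_k*w + mu_b_k*b))."""
    return sum(sk * math.exp(-(mw * w + mb * b))
               for sk, mw, mb in zip(s, mu_w, mu_b))

w_true, b_true = 1.2, 0.7
y = [forward(s, w_true, b_true) for s in spectra]   # noiseless projections

# Gauss-Newton on J(w,b) = sum_i (forward_i(w,b) - y_i)^2, applied to the
# raw projections (no negative log), matching the nonlinear forward-model idea.
w, b = 0.0, 0.0
for _ in range(30):
    r = [forward(s, w, b) - yi for s, yi in zip(spectra, y)]
    Jw = [sum(-mw * sk * math.exp(-(mw * w + mb * b))
              for sk, mw, mb in zip(s, mu_w, mu_b)) for s in spectra]
    Jb = [sum(-mb * sk * math.exp(-(mw * w + mb * b))
              for sk, mw, mb in zip(s, mu_w, mu_b)) for s in spectra]
    # solve the 2x2 normal equations J^T J d = -J^T r by Cramer's rule
    a11 = sum(x * x for x in Jw)
    a12 = sum(x * z for x, z in zip(Jw, Jb))
    a22 = sum(z * z for z in Jb)
    g1 = -sum(x * ri for x, ri in zip(Jw, r))
    g2 = -sum(z * ri for z, ri in zip(Jb, r))
    det = a11 * a22 - a12 * a12
    w += (g1 * a22 - a12 * g2) / det
    b += (a11 * g2 - g1 * a12) / det

print(round(w, 4), round(b, 4))   # fitted water and bone fractions
```

    On this noiseless two-measurement toy problem the iteration recovers the two fractions; the real problem adds the image grid, the noise variance estimate, and the prior.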

  19. Improvements to the construction of binary black hole initial data

    NASA Astrophysics Data System (ADS)

    Ossokine, Serguei; Foucart, Francois; Pfeiffer, Harald P.; Boyle, Michael; Szilágyi, Béla

    2015-12-01

Construction of binary black hole initial data is a prerequisite for numerical evolutions of binary black holes. This paper reports improvements to the binary black hole initial data solver in the Spectral Einstein Code that allow robust construction of initial data for mass ratios above 10:1 and for dimensionless black hole spins above 0.9, while improving efficiency for lower mass ratios and spins. We implement a more flexible domain decomposition, adaptive mesh refinement and an updated method for choosing free parameters. We also introduce a new method to control and eliminate residual linear momentum in initial data for precessing systems, and demonstrate that it eliminates gravitational mode mixing during the evolution. Finally, the new code is applied to construct initial data for hyperbolic scattering and for binaries with very small separation.

  20. A chemometric method to identify enzymatic reactions leading to the transition from glycolytic oscillations to waves

    NASA Astrophysics Data System (ADS)

    Zimányi, László; Khoroshyy, Petro; Mair, Thomas

    2010-06-01

In the present work we demonstrate that FTIR spectroscopy is a powerful tool for the time-resolved and noninvasive measurement of multi-substrate/product interactions in complex metabolic networks, as exemplified by oscillating glycolysis in a yeast extract. Based on a spectral library constructed from the pure glycolytic intermediates, chemometric analysis of the complex spectra allowed us to identify many of these intermediates. Singular value decomposition and multiple-level wavelet decomposition were used to separate drifting substances from oscillating ones, which enabled us to identify slow and fast variables of the glycolytic oscillations. Most importantly, we can attribute the transition from homogeneous oscillations to travelling waves to a qualitative change in the positive feedback regulation of the autocatalytic reaction. During the oscillatory phase the enzyme phosphofructokinase is mainly activated by its own product ADP, whereas the transition to waves is accompanied by a shift of the positive feedback from ADP to AMP. This indicates that the overall energetic state of the yeast extract determines the transition between spatially homogeneous oscillations and travelling waves.
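
    The drift/oscillation separation idea can be illustrated with a Haar-style block average (the coarse approximation of a wavelet decomposition): the approximation captures the slow drift, the residual keeps the oscillation. The study applied SVD and multi-level wavelet decomposition to FTIR spectra; the signal below is a synthetic stand-in with made-up parameters.

```python
import math

# Synthetic signal = slow linear drift + fast sinusoidal oscillation.
n, block = 256, 16
drift = [i / n for i in range(n)]                       # slow component
osc = [0.2 * math.sin(2 * math.pi * 16 * i / n) for i in range(n)]
signal = [d + o for d, o in zip(drift, osc)]

# Level-4 Haar approximation: replace each block of 16 samples by its mean.
# The oscillation averages out over each full period; the drift survives.
approx = []
for j in range(0, n, block):
    m = sum(signal[j:j + block]) / block
    approx.extend([m] * block)

residual = [s - a for s, a in zip(signal, approx)]       # oscillating part
print(max(abs(a - d) for a, d in zip(approx, drift)))    # drift recovered
```

    A full wavelet decomposition would give this separation at every scale at once; the single-level block average is just the simplest case.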

  1. Assessment and Improvement of GOCE based Global Geopotential Models Using Wavelet Decomposition

    NASA Astrophysics Data System (ADS)

    Erol, Serdar; Erol, Bihter; Serkan Isik, Mustafa

    2016-07-01

The contribution of recent Earth gravity field satellite missions, specifically the GOCE mission, has led to significant improvements in the quality of gravity field models, in both accuracy and resolution. However, the performance and quality of each released model vary depending not only on the location on the Earth but also on the band of the spectral expansion. Therefore, assessing global model performance through validation with in-situ data in various territories is essential for clarifying the models' local performance. In addition, spectral evaluation and quality assessment of the signal in each part of the spherical harmonic expansion spectrum are needed to determine the commission error content of a model and the optimal expansion degree that yields the best results. These analyses also provide a perspective on the global behavior of the models and an opportunity to report their sequential improvement with mission developments, and hence the contribution of new mission data. This study reviews the spectral assessment of the recently released GOCE-based global geopotential models DIR-R5 and TIM-R5, enhanced using EGM2008 as the reference model, against terrestrial data in Turkey. Besides reporting the contribution of the GOCE mission to the models in Turkish territory, it aims at improving the spectral quality of those parts of the models that are highly contaminated by noise, via wavelet decomposition. The motivation in the analyses is to achieve an optimal improvement that conserves the useful component of the GOCE signal as much as possible while fusing the filtered GOCE-based models with EGM2008 in the appropriate spectral bands.
The investigation also contains an assessment of the coherence and correlation between Earth gravity field parameters (free-air gravity anomalies and geoid undulations) derived from the validated geopotential models and from terrestrial data (GPS/leveling, terrestrial gravity observations, DTM, etc.), as well as the WGM2012 products. In conclusion, the numerical results clarify the performance of the assessed models in Turkish territory and verify the potential of wavelet decomposition for improving geopotential models.

  2. The Warp and The Woof.

    ERIC Educational Resources Information Center

    Truxal, John G.

    1983-01-01

    Discusses how the topic of spectral (Fourier) decomposition is introduced in a communications course at State University of New York (Stony Brook). Includes background information on this engineering concept (without focusing on calculus), how it is demonstrated, applications, and how the spectrum of a given signal can be measured. (JN)

  3. Adventitious sounds identification and extraction using temporal-spectral dominance-based features.

    PubMed

    Jin, Feng; Krishnan, Sridhar Sri; Sattar, Farook

    2011-11-01

Respiratory sound (RS) signals carry significant information about the underlying functioning of the pulmonary system through the presence of adventitious sounds (ASs). Although many studies have addressed the problem of pathological RS classification, only a limited number have focused on analyzing the evolution of symptom-related signal components in the joint time-frequency (TF) plane. This paper proposes a new signal identification and extraction method for various ASs based on instantaneous frequency (IF) analysis. The presented TF decomposition method produces a noise-resistant, high-definition TF representation of RS signals compared to conventional linear TF analysis methods, while preserving the low computational complexity relative to quadratic TF analysis methods. The phase information discarded in the conventional spectrogram is used to estimate the IF and group delay, and a temporal-spectral dominance spectrogram is then constructed by investigating the TF spreads of the computed time-corrected IF components. The proposed dominance measure enables the extraction of signal components corresponding to ASs from noisy RS signals at high noise levels. A new set of TF features is also proposed to quantify the shapes of the obtained TF contours, which strongly enhances the identification of multicomponent signals such as polyphonic wheezes. An overall accuracy of 92.4±2.9% for the classification of real RS recordings shows the promising performance of the presented method.

  4. Investigating the effect of characteristic x-rays in cadmium zinc telluride detectors under breast computerized tomography operating conditions

    PubMed Central

    Glick, Stephen J.; Didier, Clay

    2013-01-01

A number of research groups have been investigating the use of dedicated breast computerized tomography (CT). Preliminary results have been encouraging, suggesting an improved visualization of masses on breast CT as compared to conventional mammography. Nonetheless, there are many challenges to overcome before breast CT can become a routine clinical reality. One potential improvement over current breast CT prototypes would be the use of photon counting detectors with cadmium zinc telluride (CZT) (or CdTe) semiconductor material. These detectors can operate at room temperature and provide high detection efficiency and the capability of multi-energy imaging; however, one factor in particular that limits image quality is the emission of characteristic x-rays. In this study, the degradative effects of characteristic x-rays are examined when using a CZT detector under breast CT operating conditions. Monte Carlo simulation software was used to evaluate the effect of characteristic x-rays and the detector element size on spatial and spectral resolution for a CZT detector used under breast CT operating conditions. In particular, lower kVp spectra and thinner CZT layers were studied than those typically used with conventional CZT-based CT detectors. In addition, the effect of characteristic x-rays on the accuracy of material decomposition in spectral CT imaging was explored. It was observed that when imaging with 50-60 kVp spectra, the x-ray transmission through CZT was very low for all detector thicknesses studied (0.5–3.0 mm), thus retaining dose efficiency. As expected, characteristic x-ray escape from the detector element of x-ray interaction increased with decreasing detector element size, approaching a 50% escape fraction for a 100 μm size detector element. The detector point spread function was observed to have only minor degradation with detector element size greater than 200 μm and lower kV settings.
Characteristic x-rays produced increasing distortion in the spectral response with decreasing detector element size. If not corrected for, this caused a large bias in estimating tissue density parameters for material decomposition. It was also observed that degradation of the spectral response due to characteristic x-rays caused worsening precision in the estimation of tissue density parameters. It was observed that characteristic x-rays do cause some degradation in the spatial and spectral resolution of thin CZT detectors operating under breast CT conditions. These degradations should be manageable with careful selection of the detector element size. Even with the observed spectral distortion from characteristic x-rays, it is still possible to correctly estimate tissue parameters for material decomposition using spectral CT if accurate modeling is used. PMID:24187383

  5. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods

    NASA Astrophysics Data System (ADS)

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-01

In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, the cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, a smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust to random initialization than NMF. The investigated method is promising for resolving THz spectral data and thus for identifying unknown mixtures.
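
    The baseline that CNMF builds on can be sketched with the classical Lee-Seung multiplicative updates, which keep both factors nonnegative and never increase the reconstruction error. The data matrix here is a random nonnegative stand-in for a THz spectra matrix (rows as spectra, columns as frequency points); dimensions and rank are arbitrary, and no smoothness constraint is included.

```python
import random

random.seed(0)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

m, n, r = 4, 6, 2
V = [[random.random() for _ in range(n)] for _ in range(m)]   # "spectra"
W = [[random.random() for _ in range(r)] for _ in range(m)]   # mixing weights
H = [[random.random() for _ in range(n)] for _ in range(r)]   # component spectra

def frob_err(V, W, H):
    WH = matmul(W, H)
    return sum((V[i][j] - WH[i][j]) ** 2 for i in range(m) for j in range(n))

err0 = frob_err(V, W, H)
eps = 1e-12   # guards against division by zero
for _ in range(200):
    # H <- H * (W^T V) / (W^T W H)
    WtV = matmul(transpose(W), V)
    WtWH = matmul(matmul(transpose(W), W), H)
    H = [[H[i][j] * WtV[i][j] / (WtWH[i][j] + eps) for j in range(n)]
         for i in range(r)]
    # W <- W * (V H^T) / (W H H^T)
    VHt = matmul(V, transpose(H))
    WHHt = matmul(W, matmul(H, transpose(H)))
    W = [[W[i][j] * VHt[i][j] / (WHHt[i][j] + eps) for j in range(r)]
         for i in range(m)]

print(err0, frob_err(V, W, H))   # error decreases monotonically
```

    Constrained variants such as CNMF add penalty terms to the cost, which modifies these update rules but keeps the same alternating, nonnegativity-preserving structure.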

  6. Clustering of Multispectral Airborne Laser Scanning Data Using Gaussian Decomposition

    NASA Astrophysics Data System (ADS)

    Morsy, S.; Shaker, A.; El-Rabbany, A.

    2017-09-01

With the evolution of LiDAR technology, multispectral airborne laser scanning systems are now available. The first operational multispectral airborne LiDAR sensor, the Optech Titan, acquires LiDAR point clouds at three different wavelengths (1.550, 1.064, 0.532 μm), allowing the acquisition of different spectral information of the land surface. Consequently, recent studies have been devoted to using the radiometric information (i.e., intensity) of the LiDAR data along with the geometric information (e.g., height) for classification purposes. In this study, a data clustering method based on Gaussian decomposition is presented. First, a ground filtering mechanism is applied to separate non-ground from ground points. Then, three normalized difference vegetation indices (NDVIs) are computed for both non-ground and ground points, followed by histogram construction from each NDVI. A Gaussian function model is used to decompose each histogram into a number of Gaussian components, and the maximum likelihood estimate of the components is optimized with the Expectation-Maximization algorithm. The intersection points of adjacent Gaussian components are then used as threshold values to separate the different classes. This method is used to classify the terrain of an urban area in Oshawa, Ontario, Canada, into four main classes: roofs, trees, asphalt and grass. The proposed method achieved an overall accuracy of up to 95.1% using different NDVIs.
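
    The fit-then-threshold step can be sketched in one dimension: fit a two-component Gaussian mixture by EM, then take the point between the two means where the weighted densities cross as the class threshold. The study fit Gaussian components to NDVI histograms of multispectral LiDAR returns; the data here are synthetic, with made-up cluster parameters.

```python
import math, random

random.seed(1)
data = [random.gauss(-2.0, 0.6) for _ in range(300)] + \
       [random.gauss(1.5, 0.8) for _ in range(300)]

def pdf(x, mu, sig):
    return math.exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * math.sqrt(2 * math.pi))

# EM for a two-component 1-D Gaussian mixture
pi1, mu1, s1 = 0.5, -1.0, 1.0
pi2, mu2, s2 = 0.5, 1.0, 1.0
for _ in range(50):
    # E-step: responsibility of component 1 for each sample
    g = []
    for x in data:
        a = pi1 * pdf(x, mu1, s1)
        b = pi2 * pdf(x, mu2, s2)
        g.append(a / (a + b))
    # M-step: weighted means, standard deviations and mixing proportions
    n1 = sum(g)
    n2 = len(data) - n1
    mu1 = sum(gi * x for gi, x in zip(g, data)) / n1
    mu2 = sum((1 - gi) * x for gi, x in zip(g, data)) / n2
    s1 = math.sqrt(sum(gi * (x - mu1) ** 2 for gi, x in zip(g, data)) / n1)
    s2 = math.sqrt(sum((1 - gi) * (x - mu2) ** 2 for gi, x in zip(g, data)) / n2)
    pi1, pi2 = n1 / len(data), n2 / len(data)

# Threshold: where the weighted component densities intersect between the means
lo, hi = min(mu1, mu2), max(mu1, mu2)
grid = [lo + (hi - lo) * k / 1000 for k in range(1001)]
threshold = min(grid, key=lambda x: abs(pi1 * pdf(x, mu1, s1) - pi2 * pdf(x, mu2, s2)))
print(round(threshold, 2))   # class boundary between the two clusters
```

    With more than two classes per NDVI histogram, the same procedure yields one threshold per pair of adjacent components.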

  7. THz spectral data analysis and components unmixing based on non-negative matrix factorization methods.

    PubMed

    Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin

    2017-04-15

In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, the cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, a smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust to random initialization than NMF. The investigated method is promising for resolving THz spectral data and thus for identifying unknown mixtures.

  8. Open Rotor Computational Aeroacoustic Analysis with an Immersed Boundary Method

    NASA Technical Reports Server (NTRS)

    Brehm, Christoph; Barad, Michael F.; Kiris, Cetin C.

    2016-01-01

Reliable noise prediction capabilities are essential to enable novel fuel efficient open rotor designs that can meet the community and cabin noise standards. Toward this end, immersed boundary methods have reached a level of maturity such that they are now frequently employed for specific real world applications within NASA. This paper demonstrates that our higher-order immersed boundary method provides the ability for aeroacoustic analysis of wake-dominated flow fields generated by highly complex geometries. This is a first-of-its-kind aeroacoustic simulation of an open rotor propulsion system employing an immersed boundary method. In addition to discussing the peculiarities of applying the immersed boundary method to this moving boundary problem, we provide a detailed aeroacoustic analysis of the noise generation mechanisms encountered in the open rotor flow. The simulation data are compared to available experimental data and other computational results employing more conventional CFD methods. The noise generation mechanisms are analyzed employing spectral analysis, proper orthogonal decomposition and the causality method.

  9. Uncertainty propagation in orbital mechanics via tensor decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Kumar, Mrinal

    2016-03-01

Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all the six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, the system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
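
    The separated-representation idea behind CANDECOMP/PARAFAC with alternating least squares can be shown in its simplest case: a rank-1 decomposition of an order-2 tensor (a matrix), where each factor is updated in closed form while the other is held fixed. This is only a sketch of the ALS mechanic; the paper's solver works on a 7-D tensor with higher ranks, which this toy does not attempt. The data are synthetic.

```python
u_true = [1.0, 2.0, 3.0]
v_true = [0.5, -1.0, 4.0, 2.0]
X = [[ui * vj for vj in v_true] for ui in u_true]   # exact rank-1 "tensor"

a = [1.0, 1.0, 1.0]       # factor along mode 1
b = [1.0, 1.0, 1.0, 1.0]  # factor along mode 2

for _ in range(10):
    # update a with b fixed: a_i = (X_i . b) / (b . b)  (row-wise least squares)
    bb = sum(x * x for x in b)
    a = [sum(X[i][j] * b[j] for j in range(4)) / bb for i in range(3)]
    # update b with a fixed: b_j = (X_:,j . a) / (a . a)
    aa = sum(x * x for x in a)
    b = [sum(X[i][j] * a[i] for i in range(3)) / aa for j in range(4)]

err = max(abs(X[i][j] - a[i] * b[j]) for i in range(3) for j in range(4))
print(err)   # essentially zero: the rank-1 model reproduces X
```

    In higher orders the same alternation runs over one factor matrix per dimension, which is what keeps the storage and work linear in the number of dimensions instead of exponential.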

  10. Fractal dimension to classify the heart sound recordings with KNN and fuzzy c-mean clustering methods

    NASA Astrophysics Data System (ADS)

    Juniati, D.; Khotimah, C.; Wardani, D. E. K.; Budayasa, K.

    2018-01-01

Heart abnormalities can be detected from heart sounds. A heart sound can be heard directly with a stethoscope or indirectly via a phonocardiograph, a machine that records heart sounds. This paper presents an implementation of fractal dimension theory to classify phonocardiograms as a normal heart sound, a murmur, or an extrasystole. The main algorithm used to calculate the fractal dimension was Higuchi's algorithm. Classification of the phonocardiograms involved two steps: feature extraction and classification. For feature extraction, we used the Discrete Wavelet Transform (DWT) to decompose the heart sound signal into several sub-bands depending on the selected level. After the decomposition, the signal was processed with the Fast Fourier Transform (FFT) to determine the spectral frequency, and the fractal dimension of the FFT output was calculated using Higuchi's algorithm. The fractal dimensions of all phonocardiograms were then classified with KNN and fuzzy c-means clustering. The best accuracy obtained was 86.17%, achieved with DWT decomposition at level 3, kmax = 50, 5-fold cross-validation, and 5 neighbors in the KNN algorithm. For fuzzy c-means clustering, the accuracy was 78.56%.
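
    Higuchi's algorithm itself is compact: for each stride k it measures an average curve length L(k) and reads the fractal dimension off the slope of log L(k) against log(1/k). The sketch below applies it to two synthetic signals rather than heart sounds; a smooth sine should give a dimension near 1, while white noise should land closer to 2.

```python
import math, random

def higuchi_fd(x, kmax):
    """Higuchi fractal dimension of a 1-D sequence x."""
    n = len(x)
    pts = []
    for k in range(1, kmax + 1):
        lk = 0.0
        for m in range(k):
            n_i = (n - 1 - m) // k          # number of k-strides from offset m
            if n_i < 1:
                continue
            length = sum(abs(x[m + i * k] - x[m + (i - 1) * k])
                         for i in range(1, n_i + 1))
            lk += length * (n - 1) / (n_i * k * k)   # Higuchi normalization
        pts.append((math.log(1.0 / k), math.log(lk / k)))
    # least-squares slope of log L(k) versus log(1/k)
    c = len(pts)
    sx = sum(p[0] for p in pts)
    sy = sum(p[1] for p in pts)
    sxx = sum(p[0] ** 2 for p in pts)
    sxy = sum(p[0] * p[1] for p in pts)
    return (c * sxy - sx * sy) / (c * sxx - sx * sx)

random.seed(0)
sine = [math.sin(2 * math.pi * 5 * i / 1000) for i in range(1000)]
noise = [random.gauss(0, 1) for _ in range(1000)]
print(higuchi_fd(sine, 8), higuchi_fd(noise, 8))
```

    In the paper this number, computed per DWT sub-band after an FFT, is the feature fed to the KNN and fuzzy c-means classifiers.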

  11. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  12. Comparing performance of standard and iterative linear unmixing methods for hyperspectral signatures

    NASA Astrophysics Data System (ADS)

    Gault, Travis R.; Jansen, Melissa E.; DeCoster, Mallory E.; Jansing, E. David; Rodriguez, Benjamin M.

    2016-05-01

Linear unmixing is a method of decomposing a mixed signature to determine the component materials that are present in a sensor's field of view, along with the abundances at which they occur. Linear unmixing assumes that energy from the materials in the field of view mixes linearly across the spectrum of interest. Traditional unmixing methods can take advantage of adjacent pixels in the decomposition algorithm, but this is not possible for point sensors. This paper explores several iterative and non-iterative methods for linear unmixing and examines their effectiveness at identifying the individual signatures that make up simulated single-pixel mixed signatures, along with their corresponding abundances. The major hurdle addressed by the proposed method is that no neighboring-pixel information is available for the spectral signature of interest. Testing is performed using two collections of spectral signatures from the Johns Hopkins University Applied Physics Laboratory's Signatures Database software (SigDB): a hand-selected small dataset of 25 distinct signatures and a larger dataset of approximately 1600 pure visible/near-infrared/short-wave-infrared (VIS/NIR/SWIR) spectra. Simulated spectra are created with three- and four-material mixtures randomly drawn from the SigDB dataset, where the abundance of one material is swept in 10% increments from 10% to 90% with the abundances of the other materials equally divided among the remainder. For the smaller dataset of 25 signatures, all combinations of three or four materials are used to create simulated spectra, from which the accuracy of the materials returned, as well as the correctness of the abundances, is compared to the inputs. The experiment is expanded to include the signatures from the larger dataset of almost 1600 signatures, evaluated using a Monte Carlo scheme with 5000 draws of three or four materials to create the simulated mixed signatures.
The spectral similarity of the inputs to the output component signatures is calculated using the spectral angle mapper. Results show that iterative methods significantly outperform the traditional methods under the given test conditions.
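
    The simplest non-iterative baseline for this task is ordinary least squares against known endmember spectra, solved here via the 2x2 normal equations for a single mixed pixel. The five-band endmember "spectra" below are made up; real SigDB signatures have hundreds of bands and the paper mixes three or four endmembers.

```python
end_a = [0.9, 0.7, 0.2, 0.1, 0.3]    # endmember 1 across 5 bands (hypothetical)
end_b = [0.1, 0.3, 0.8, 0.9, 0.6]    # endmember 2 across 5 bands (hypothetical)
abund = (0.3, 0.7)                   # true abundances
mixed = [abund[0] * a + abund[1] * b for a, b in zip(end_a, end_b)]

# Normal equations for min ||mixed - x1*end_a - x2*end_b||^2:
# [aa ab; ab bb] [x1; x2] = [ay; by]
aa = sum(a * a for a in end_a)
bb = sum(b * b for b in end_b)
ab = sum(a * b for a, b in zip(end_a, end_b))
ay = sum(a * y for a, y in zip(end_a, mixed))
by = sum(b * y for b, y in zip(end_b, mixed))
det = aa * bb - ab * ab
x1 = (ay * bb - ab * by) / det
x2 = (aa * by - ay * ab) / det
print(round(x1, 6), round(x2, 6))   # recovers 0.3 and 0.7 on noiseless data
```

    With noise, spectrally similar endmembers make this system ill-conditioned, which is one reason the iterative methods in the paper can outperform the direct solve.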

  13. Label-free DNA imaging in vivo with stimulated Raman scattering microscopy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Fa-Ke; Basu, Srinjan; Igras, Vivien

Label-free DNA imaging is highly desirable in biology and medicine to perform live imaging without affecting cell function and to obtain instant histological tissue examination during surgical procedures. Here we show a label-free DNA imaging method with stimulated Raman scattering (SRS) microscopy for visualization of the cell nuclei in live animals and intact fresh human tissues with subcellular resolution. Relying on the distinct Raman spectral features of the carbon-hydrogen bonds in DNA, the distribution of DNA is retrieved from the strong background of proteins and lipids by linear decomposition of SRS images at three optimally selected Raman shifts. Based on changes in DNA condensation in the nucleus, we were able to capture chromosome dynamics during cell division both in vitro and in vivo. We tracked mouse skin cell proliferation, induced by drug treatment, through in vivo counting of the mitotic rate. Moreover, we demonstrated a label-free histology method for human skin cancer diagnosis that provides comparable results to other conventional tissue staining methods such as H&E. In conclusion, our approach exhibits higher sensitivity than SRS imaging of DNA in the fingerprint spectral region. Compared with spontaneous Raman imaging of DNA, our approach is three orders of magnitude faster, allowing both chromatin dynamic studies and label-free optical histology in real time.

  14. Label-free DNA imaging in vivo with stimulated Raman scattering microscopy

    PubMed Central

    Lu, Fa-Ke; Basu, Srinjan; Igras, Vivien; Hoang, Mai P.; Ji, Minbiao; Fu, Dan; Holtom, Gary R.; Neel, Victor A.; Freudiger, Christian W.; Fisher, David E.; Xie, X. Sunney

    2015-01-01

Label-free DNA imaging is highly desirable in biology and medicine to perform live imaging without affecting cell function and to obtain instant histological tissue examination during surgical procedures. Here we show a label-free DNA imaging method with stimulated Raman scattering (SRS) microscopy for visualization of the cell nuclei in live animals and intact fresh human tissues with subcellular resolution. Relying on the distinct Raman spectral features of the carbon-hydrogen bonds in DNA, the distribution of DNA is retrieved from the strong background of proteins and lipids by linear decomposition of SRS images at three optimally selected Raman shifts. Based on changes in DNA condensation in the nucleus, we were able to capture chromosome dynamics during cell division both in vitro and in vivo. We tracked mouse skin cell proliferation, induced by drug treatment, through in vivo counting of the mitotic rate. Furthermore, we demonstrated a label-free histology method for human skin cancer diagnosis that provides comparable results to other conventional tissue staining methods such as H&E. Our approach exhibits higher sensitivity than SRS imaging of DNA in the fingerprint spectral region. Compared with spontaneous Raman imaging of DNA, our approach is three orders of magnitude faster, allowing both chromatin dynamic studies and label-free optical histology in real time. PMID:26324899
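
    Per pixel, the three-shift linear decomposition reduces to a 3x3 linear system: the measured SRS intensities at the three Raman shifts equal a known mixing matrix (the responses of the three species at those shifts) times the unknown concentrations. The mixing-matrix entries and concentrations below are invented for illustration.

```python
# rows: Raman shifts, columns: hypothetical DNA / protein / lipid responses
A = [[0.8, 0.3, 0.1],
     [0.2, 0.9, 0.4],
     [0.1, 0.2, 0.7]]
c_true = [0.5, 1.5, 0.8]                                    # concentrations
y = [sum(A[i][j] * c_true[j] for j in range(3)) for i in range(3)]

def solve3(A, y):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [yi] for row, yi in zip(A, y)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for k in range(col, 4):
                M[r][k] -= f * M[col][k]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

c = solve3(A, y)
print([round(v, 6) for v in c])   # recovers the per-pixel concentrations
```

    In the actual method the mixing matrix comes from calibration spectra of the pure species, and the solve is repeated for every pixel of the three SRS images.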

  15. High temperature polymer degradation: Rapid IR flow-through method for volatile quantification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Giron, Nicholas H.; Celina, Mathew C.

Accelerated aging of polymers at elevated temperatures often involves the generation of volatiles. These can be formed as the products of oxidative degradation reactions or intrinsic pyrolytic decomposition as part of polymer scission reactions. A simple analytical method for the quantification of water, CO2, and CO as fundamental signatures of degradation kinetics is required. Here, we describe an analytical framework and develop a rapid mid-IR based gas analysis methodology to quantify volatiles that are contained in small ampoules after aging exposures. The approach requires identification of unique spectral signatures, systematic calibration with known concentrations of volatiles, and a rapid-acquisition FTIR spectrometer for time-resolved successive spectra. The volatiles are flushed out from the ampoule with dry N2 carrier gas and are then quantified through spectral and time integration. This method is sufficiently sensitive to determine absolute yields of ~50 μg water or CO2, which relates to probing mass losses of less than 0.01% for a 1 g sample, i.e. the early stages in the degradation process. Such quantitative gas analysis is not easily achieved with other approaches. Our approach opens up the possibility of quantitative monitoring of volatile evolution as an avenue to explore polymer degradation kinetics and its dependence on time and temperature.

  16. High temperature polymer degradation: Rapid IR flow-through method for volatile quantification

    DOE PAGES

    Giron, Nicholas H.; Celina, Mathew C.

    2017-05-19

Accelerated aging of polymers at elevated temperatures often involves the generation of volatiles. These can be formed as the products of oxidative degradation reactions or intrinsic pyrolytic decomposition as part of polymer scission reactions. A simple analytical method for the quantification of water, CO2, and CO as fundamental signatures of degradation kinetics is required. Here, we describe an analytical framework and develop a rapid mid-IR based gas analysis methodology to quantify volatiles that are contained in small ampoules after aging exposures. The approach requires identification of unique spectral signatures, systematic calibration with known concentrations of volatiles, and a rapid-acquisition FTIR spectrometer for time-resolved successive spectra. The volatiles are flushed out from the ampoule with dry N2 carrier gas and are then quantified through spectral and time integration. This method is sufficiently sensitive to determine absolute yields of ~50 μg water or CO2, which relates to probing mass losses of less than 0.01% for a 1 g sample, i.e. the early stages in the degradation process. Such quantitative gas analysis is not easily achieved with other approaches. Our approach opens up the possibility of quantitative monitoring of volatile evolution as an avenue to explore polymer degradation kinetics and its dependence on time and temperature.

  17. Signal Decomposition of High Resolution Time Series River Data to Separate Local and Regional Components of Conductivity

    EPA Science Inventory

    Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of a wastewater treatment facility along a river. Data was collected over 14-60 days, and several seasons. The power spectral densit...

  18. Matrix with Prescribed Eigenvectors

    ERIC Educational Resources Information Center

    Ahmad, Faiz

    2011-01-01

    It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…

  19. Breast composition measurement with a cadmium-zinc-telluride based spectral computed tomography system

    PubMed Central

    Ding, Huanjun; Ducote, Justin L.; Molloi, Sabee

    2012-01-01

    Purpose: To investigate the feasibility of measuring breast tissue composition in terms of water, lipid, and protein with a cadmium-zinc-telluride (CZT) based computed tomography (CT) system to help better characterize suspicious lesions. Methods: Simulations and experimental studies were performed using a spectral CT system equipped with a CZT-based photon-counting detector with energy resolution. Simulations of the figure-of-merit (FOM), the signal-to-noise ratio (SNR) of the dual energy image with respect to the square root of mean glandular dose (MGD), were performed to find the optimal configuration of the experimental acquisition parameters. A calibration phantom 3.175 cm in diameter was constructed from polyoxymethylene plastic with cylindrical holes that were filled with water and oil. Similarly sized samples of pure adipose and pure lean bovine tissues were used for the three-material decomposition. Tissue composition results computed from the images were compared to the chemical analysis data of the tissue samples. Results: The beam energy was selected to be 100 kVp with a splitting energy of 40 keV. The tissue samples were successfully decomposed into water, lipid, and protein contents. The RMS error of the volumetric percentage for the three-material decomposition, as compared to data from the chemical analysis, was estimated to be approximately 5.7%. Conclusions: The results of this study suggest that the CZT-based photon-counting detector may be employed in the CT system to quantify the water, lipid, and protein mass densities in tissue with a relatively good agreement. PMID:22380361

  20. Characterization and discrimination of human breast cancer and normal breast tissues using resonance Raman spectroscopy

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Smith, Jason; Zhang, Lin; Gao, Xin; Alfano, Robert R.

    2018-02-01

    Worldwide breast cancer incidence has increased by more than twenty percent in the past decade, and mortality due to the disease has increased by fourteen percent over the same period. Optical diagnostic techniques such as Raman spectroscopy have been explored in order to increase diagnostic accuracy in a more objective way while significantly decreasing diagnostic wait times. In this study, Raman spectroscopy with 532-nm excitation was used to incite resonance effects that enhance Stokes Raman scattering from unique biomolecular vibrational modes. Seventy-two Raman spectra (41 cancerous, 31 normal) were collected from nine breast tissue samples by performing a ten-spectra average with a 500-ms acquisition time at each acquisition location. The raw spectral data were subsequently prepared for analysis with background correction and normalization. The spectral data in the Raman shift range of 750-2000 cm-1 were used for analysis, since the detector has its highest sensitivity in this range. The matrix decomposition technique nonnegative matrix factorization (NMF) was then performed on the processed data. Leave-one-out cross-validation using two selective feature components resulted in sensitivity, specificity, and accuracy of 92.6%, 100%, and 96.0%, respectively. The performance of NMF was also compared to that of principal component analysis (PCA), and NMF was shown to be superior to PCA in this study. This study shows that coupling resonance Raman spectroscopy with a subsequent NMF decomposition step shows potential for high characterization accuracy in breast cancer detection.
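    The NMF step described above can be sketched with a minimal implementation. This is an illustrative example, not the study's code: the two "component spectra", the sample weights, and the Lee-Seung multiplicative-update solver are all assumptions made for the sketch:

```python
import numpy as np

# Minimal sketch of nonnegative matrix factorization (NMF) via Lee-Seung
# multiplicative updates, applied to synthetic "spectra" built from two
# nonnegative components. All data here are illustrative, not the paper's.

rng = np.random.default_rng(0)

def nmf(X, r, n_iter=500, eps=1e-9):
    """Factor X ~= W @ H with W, H >= 0 (Frobenius-norm objective)."""
    n, m = X.shape
    W = rng.random((n, r)) + eps
    H = rng.random((r, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Two Gaussian "component spectra" mixed with random nonnegative weights.
x = np.linspace(750, 2000, 200)                    # Raman shift axis (cm-1)
comps = np.vstack([np.exp(-((x - 1000) / 60) ** 2),
                   np.exp(-((x - 1600) / 80) ** 2)])
weights = rng.random((72, 2))                      # 72 synthetic samples
X = weights @ comps

W, H = nmf(X, r=2)
err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

    The rows of H play the role of the learned spectral components, and the columns of W are the per-sample features that would feed a classifier.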

  1. Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.

    2018-04-01

    The aims of this research are to model hotspots and to forecast 2017 hotspots in East Kutai, Kutai Kartanegara, and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method, and the Box-Jenkins method. Among the smoothing techniques, additive decomposition is better than Holt's exponential smoothing. The hotspot models obtained with the Box-Jenkins method were the Autoregressive Integrated Moving Average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of Root Mean Squared Error (RMSE) shows that the Loess decomposition method gives the best time series model, because it has the smallest RMSE. The Loess decomposition model was therefore used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but are stationary in East Kutai.
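    Holt's method and the RMSE comparison above can be sketched as follows. This is an illustrative implementation with assumed smoothing constants and synthetic hotspot counts, not the paper's data or code:

```python
import numpy as np

# Illustrative sketch (not the paper's code): Holt's linear exponential
# smoothing with an optional damping parameter phi, plus the RMSE used to
# compare forecasting methods. Smoothing constants are assumed values.

def holt_forecast(y, alpha=0.5, beta=0.3, phi=1.0):
    """One-step-ahead forecasts from Holt's (optionally damped) trend method."""
    level, trend = y[0], y[1] - y[0]
    fitted = [level + phi * trend]            # forecast for t = 1
    for t in range(1, len(y)):
        prev_level = level
        level = alpha * y[t] + (1 - alpha) * (prev_level + phi * trend)
        trend = beta * (level - prev_level) + (1 - beta) * phi * trend
        fitted.append(level + phi * trend)    # forecast for t + 1
    return np.array(fitted)

def rmse(y, yhat):
    return float(np.sqrt(np.mean((y - yhat) ** 2)))

# Synthetic monthly hotspot counts with a trend and noise.
rng = np.random.default_rng(1)
y = 50 + 2.0 * np.arange(36) + rng.normal(0, 3, 36)

fitted = holt_forecast(y, alpha=0.6, beta=0.2, phi=0.95)
err = rmse(y[1:], fitted[:-1])   # compare one-step forecasts to actuals
```

    The same RMSE function applied to each candidate model is what ranks the methods; phi = 1 recovers the undamped Holt method.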

  2. Interface conditions for domain decomposition with radical grid refinement

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1991-01-01

    Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.

  3. Properties of ZnO nanocrystals prepared by radiation method

    NASA Astrophysics Data System (ADS)

    Čuba, Václav; Gbur, Tomáš; Múčka, Viliam; Nikl, Martin; Kučerková, Romana; Pospíšil, Milan; Jakubec, Ivo

    2010-01-01

    Zinc oxide nanoparticles were prepared by irradiation of aqueous solutions containing zinc(II) ions, propan-2-ol, polyvinyl alcohol, and hydrogen peroxide. Zinc oxide was found in the solid phase either directly after irradiation or after additional heat treatment. Various physicochemical parameters, including the scintillation properties of the prepared materials, were studied. After decomposition of impurities and annealing of oxygen vacancies, the samples showed intense emission in the visible spectral range and well-shaped exciton luminescence at 390-400 nm. The best scintillation properties were obtained for zinc oxide prepared from aqueous solutions containing zinc formate as the initial precursor and hydrogen peroxide. The size of the crystalline particles ranged from tens to hundreds of nm, depending on the type of irradiated solution and the post-irradiation thermal treatment.

  4. Adomian decomposition method used to solve the one-dimensional acoustic equations

    NASA Astrophysics Data System (ADS)

    Dispini, Meta; Mungkasi, Sudi

    2017-05-01

    In this paper we propose the use of the Adomian decomposition method to solve the one-dimensional acoustic equations. This recursive method can be computed easily, and its result is an approximation of the exact solution. We use the Maple software to compute the series in the Adomian decomposition. We find that the Adomian decomposition method is able to solve the acoustic equations with the physically correct behavior.
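    The recursion behind the Adomian decomposition method can be illustrated on a toy nonlinear ODE (not the acoustic equations of the paper, and using SymPy rather than Maple). For u' = -u^2 with u(0) = 1, the Adomian polynomials of the nonlinearity N(u) = u^2 are A_n = sum_k u_k u_{n-k}, and the exact solution is 1/(1+t):

```python
import sympy as sp

# Sketch of the Adomian decomposition method (ADM) on a toy problem:
# u' = -u^2, u(0) = 1, exact solution 1/(1+t). Each new series term is the
# integral of the previous Adomian polynomial; for N(u) = u^2 the Adomian
# polynomials are A_n = sum_{k=0}^{n} u_k * u_{n-k}.

t = sp.symbols('t')

N = 6                 # number of series terms to generate
u = [sp.Integer(1)]   # u_0 from the initial condition
for n in range(N - 1):
    A_n = sum(u[k] * u[n - k] for k in range(n + 1))  # Adomian polynomial
    u.append(sp.integrate(-A_n, (t, 0, t)))           # u_{n+1} = -int A_n dt

approx = sp.expand(sum(u))
# approx == 1 - t + t**2 - t**3 + t**4 - t**5, the Taylor series of 1/(1+t)
```

    The same recursion, applied componentwise, is what generates the series approximation for systems such as the acoustic equations.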

  5. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise from uncertainties in the cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.

  6. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    PubMed Central

    Le, Huy Q.; Molloi, Sabee

    2011-01-01

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg∕ml) and iodine (4, 12, 20, 28, 36, and 44 mg∕ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30∕70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg∕ml) and iodine (5, 15, 25, 35, and 45 mg∕ml). The x-ray transport process was simulated where the Beer–Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. 
Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine. PMID:21361193

  7. Coherence and dimensionality of intense spatiospectral twin beams

    NASA Astrophysics Data System (ADS)

    Peřina, Jan

    2015-07-01

    Spatiospectral properties of twin beams at their transition from low to high intensities are analyzed in parametric and paraxial approximations using decomposition into paired spatial and spectral modes. Intensity auto- and cross-correlation functions are determined and compared in the spectral and temporal domains as well as the transverse wave-vector and crystal output planes. Whereas the spectral, temporal, and transverse wave-vector coherence increases with the increasing pump intensity, coherence in the crystal output plane is almost independent of the pump intensity owing to the mode structure in this plane. The corresponding auto- and cross-correlation functions approach each other for larger pump intensities. The entanglement dimensionality of a twin beam is determined with a comparison of several approaches.
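    The paired-mode (Schmidt-type) decomposition underlying such an analysis can be computed numerically as an SVD of a discretized two-photon spectral amplitude. The Gaussian amplitude below is an illustrative stand-in, and K = 1/sum(lambda_i^2) is the usual effective mode number:

```python
import numpy as np

# Hedged numerical sketch: a paired-mode (Schmidt) decomposition obtained as
# the SVD of a discretized two-photon spectral amplitude. The correlated
# Gaussian amplitude is an illustrative stand-in for a twin-beam state:
# narrow along w_s + w_i (pump bandwidth), broad along w_s - w_i (phase
# matching).

n = 200
w_s = np.linspace(-3, 3, n)              # signal frequency detuning (a.u.)
w_i = np.linspace(-3, 3, n)              # idler frequency detuning (a.u.)
WS, WI = np.meshgrid(w_s, w_i, indexing='ij')

amp = np.exp(-((WS + WI) ** 2) / 0.1 - ((WS - WI) ** 2) / 4.0)

s = np.linalg.svd(amp, compute_uv=False)
lam = s ** 2 / np.sum(s ** 2)            # normalized Schmidt coefficients
K = 1.0 / np.sum(lam ** 2)               # effective ("participation") dimensionality
```

    The singular vectors are the paired spectral modes, and K quantifies the entanglement dimensionality that the abstract discusses.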

  8. SpecViz: Interactive Spectral Data Analysis

    NASA Astrophysics Data System (ADS)

    Earl, Nicholas Michael; STScI

    2016-06-01

    The astronomical community is about to enter a new generation of scientific enterprise. With next-generation instrumentation and advanced capabilities, the need has arisen to equip astronomers with the necessary tools to deal with large, multi-faceted data. The Space Telescope Science Institute has initiated a data analysis forum for the creation, development, and maintenance of software tools for the interpretation of these new data sets. SpecViz is a 1-D interactive spectral visualization and analysis application built with Python in an open source development environment. A user-friendly GUI allows for a fast, interactive approach to spectral analysis. SpecViz supports handling of unique and instrument-specific data and incorporates advanced spectral unit handling and conversions in a flexible, high-performance interactive plotting environment. Active spectral feature analysis is possible through interactive measurement and statistical tools. It can be used to build wide-band SEDs, with the capability of combining or overplotting data products from various instruments. SpecViz sports advanced toolsets for filtering and detrending spectral lines; identifying, isolating, and manipulating spectral features; as well as utilizing spectral templates for renormalizing data in an interactive way. SpecViz also includes a flexible model fitting toolset that allows for multi-component models, as well as custom models, to be used with various fitting and decomposition routines. SpecViz also features robust extension via custom data loaders and connection to the central communication system underneath the interface for more advanced control. Integration with Jupyter notebooks via a connection with the active IPython kernel allows SpecViz to be used within a user’s normal workflow without demanding that the user drastically alter their method of data analysis.
In addition, SpecViz allows the interactive analysis of multi-object spectroscopy in the same straightforward, consistent way. Through the development of such tools, STScI hopes to unify astronomical data analysis software for JWST and other instruments, allowing for efficient, reliable, and consistent scientific results.

  9. Spatial-spectral preprocessing for endmember extraction on GPU's

    NASA Astrophysics Data System (ADS)

    Jimenez, Luis I.; Plaza, Javier; Plaza, Antonio; Li, Jun

    2016-10-01

    Spectral unmixing is focused on the identification of spectrally pure signatures, called endmembers, and their corresponding abundances in each pixel of a hyperspectral image. While mainly focused on the spectral information contained in hyperspectral images, endmember extraction techniques have recently incorporated spatial information to achieve more accurate results. Several algorithms have been developed for automatic or semi-automatic identification of endmembers using spatial and spectral information, including the spectral-spatial endmember extraction (SSEE) algorithm where, within a preprocessing step, both sources of information are extracted from the hyperspectral image and used equally for this purpose. Previous works have implemented the SSEE technique in four main steps: 1) local eigenvector calculation in each sub-region into which the original hyperspectral image is divided; 2) computation of the maxima and minima projections of all eigenvectors over the entire hyperspectral image in order to obtain a set of candidate pixels; 3) expansion and averaging of the signatures of the candidate set; 4) ranking based on the spectral angle distance (SAD). The result of this method is a list of candidate signatures from which the endmembers can be extracted using various spectral-based techniques, such as orthogonal subspace projection (OSP), vertex component analysis (VCA) or N-FINDR. Considering the large volume of data and the complexity of the calculations, there is a need for efficient implementations. Latest-generation hardware accelerators such as commodity graphics processing units (GPUs) offer a good chance for improving the computational performance in this context. In this paper, we develop two different implementations of the SSEE algorithm using GPUs. Both are based on the eigenvector computation within each sub-region of the first step, one using the singular value decomposition (SVD) and the other using principal component analysis (PCA).
Based on our experiments with hyperspectral data sets, high computational performance is observed in both cases.
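    Step 1 (per-sub-region eigenvector computation) can be sketched in NumPy, showing why the SVD and PCA variants are interchangeable; the cube and sub-region size are synthetic placeholders:

```python
import numpy as np

# Illustrative sketch of SSEE step 1: computing the local eigenvectors of
# one spatial sub-region of a hyperspectral cube. The cube here is random
# synthetic data. The same eigenvectors come either from an SVD of the
# mean-removed pixel matrix or from a PCA (eigendecomposition of the
# covariance), matching the two GPU variants described above.

rng = np.random.default_rng(3)
rows, cols, bands = 64, 64, 50
cube = rng.random((rows, cols, bands))

def local_eigvecs_svd(block):
    X = block.reshape(-1, block.shape[-1])
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt.T                       # columns: eigenvectors (PCA loadings)

def local_eigvecs_pca(block):
    X = block.reshape(-1, block.shape[-1])
    Xc = X - X.mean(axis=0)
    cov = Xc.T @ Xc / (Xc.shape[0] - 1)
    w, V = np.linalg.eigh(cov)        # ascending eigenvalues
    return V[:, ::-1]                 # reorder to descending

block = cube[:16, :16, :]             # one 16x16 sub-region
V1 = local_eigvecs_svd(block)
V2 = local_eigvecs_pca(block)

# The two agree up to the sign of each eigenvector.
agree = np.allclose(np.abs(np.sum(V1 * V2, axis=0)), 1.0, atol=1e-6)
```

    In the full algorithm this computation is repeated for every sub-region, which is what makes the GPU parallelization attractive.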

  10. A fast signal subspace approach for the determination of absolute levels from phased microphone array measurements

    NASA Astrophysics Data System (ADS)

    Sarradj, Ennes

    2010-04-01

    Phased microphone arrays are used in a variety of applications for the estimation of acoustic source location and spectra. The popular conventional delay-and-sum beamforming methods used with such arrays suffer from inaccurate estimations of absolute source levels and in some cases also from low resolution. Deconvolution approaches such as DAMAS have better performance, but require high computational effort. A fast beamforming method is proposed that can be used in conjunction with a phased microphone array in applications focused on the correct quantitative estimation of acoustic source spectra. This method is based on an eigenvalue decomposition of the cross-spectral matrix of microphone signals and uses the eigenvalues from the signal subspace to estimate absolute source levels. The theoretical basis of the method is discussed together with an assessment of the quality of the estimation. Experimental tests using a loudspeaker setup and an airfoil trailing edge noise setup in an aeroacoustic wind tunnel show that the proposed method is robust and leads to reliable quantitative results.
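    The core of the signal-subspace idea can be sketched as follows (illustrative only: one synthetic source, a random steering vector, and assumed power levels). The dominant eigenvalue of the cross-spectral matrix, minus the noise floor estimated from the remaining eigenvalues, gives the source level:

```python
import numpy as np

# Hedged sketch of signal-subspace level estimation: the eigendecomposition
# of the cross-spectral matrix (CSM) separates a signal subspace (large
# eigenvalues, one per incoherent source) from a noise subspace. Source
# power, noise power, and the steering vector are synthetic assumptions.

rng = np.random.default_rng(4)
n_mics, n_snapshots = 32, 2000
source_power, noise_power = 2.5, 0.1

# One incoherent monopole source with a fixed unit-norm steering vector.
a = np.exp(1j * rng.uniform(0, 2 * np.pi, n_mics)) / np.sqrt(n_mics)

# Frequency-domain snapshots: source signal plus independent sensor noise.
s = np.sqrt(source_power / 2) * (rng.standard_normal(n_snapshots)
                                 + 1j * rng.standard_normal(n_snapshots))
noise = np.sqrt(noise_power / 2) * (rng.standard_normal((n_mics, n_snapshots))
                                    + 1j * rng.standard_normal((n_mics, n_snapshots)))
X = np.outer(a, s) + noise

csm = X @ X.conj().T / n_snapshots      # cross-spectral matrix estimate
eigvals = np.linalg.eigvalsh(csm)       # ascending order

# Signal subspace: the single dominant eigenvalue; subtracting the mean of
# the noise-subspace eigenvalues estimates the absolute source power.
noise_floor = eigvals[:-1].mean()
estimated_power = eigvals[-1] - noise_floor
```

    With several incoherent sources, the signal subspace would instead comprise the corresponding number of dominant eigenvalues.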

  11. Process modelling for materials preparation experiments

    NASA Technical Reports Server (NTRS)

    Rosenberger, Franz; Alexander, J. Iwan D.

    1994-01-01

    The main goals of the research under this grant consist of the development of mathematical tools and measurement techniques for transport properties necessary for high fidelity modelling of crystal growth from the melt and solution. Of the tasks described in detail in the original proposal, two remain to be worked on: development of a spectral code for moving boundary problems, and development of an expedient diffusivity measurement technique for concentrated and supersaturated solutions. We have focused on developing a code to solve for interface shape, heat and species transport during directional solidification. The work involved the computation of heat, mass and momentum transfer during Bridgman-Stockbarger solidification of compound semiconductors. Domain decomposition techniques and preconditioning methods were used in conjunction with Chebyshev spectral methods to accelerate convergence while retaining the high-order spectral accuracy. During the report period we have further improved our experimental setup. These improvements include: temperature control of the measurement cell to 0.1 °C between 10 and 60 °C; enclosure of the optical measurement path outside the ZYGO interferometer in a metal housing that is temperature controlled to the same temperature setting as the measurement cell; simultaneous dispensing and partial removal of the lower concentration (lighter) solution above the higher concentration (heavier) solution through independently motor-driven syringes; three-fold increase in data resolution by orientation of the interferometer with respect to the diffusion direction; and increase of the optical path length in the solution cell to 12 mm.

  12. Synthesis and Structural Characterization of CdFe2O4 Nanostructures

    NASA Astrophysics Data System (ADS)

    Kalpanadevi, K.; Sinduja, C. R.; Manimekalai, R.

    The synthesis of CdFe2O4 nanoparticles has been achieved by a simple thermal decomposition method from the inorganic precursor [CdFe2(cin)3(N2H4)3], which was obtained by a simple precipitation method from the corresponding metal salts, cinnamic acid and hydrazine hydrate. The precursor was characterized by hydrazine and metal analyses, infrared spectral analysis and thermogravimetric analysis. On appropriate annealing, [CdFe2(cin)3(N2H4)3] yielded CdFe2O4 nanoparticles. The XRD studies showed that the crystallite size of the particles was 13 nm. The results of HRTEM studies also agreed well with those of XRD. The SAED pattern of the sample established the polycrystalline nature of the nanoparticles. SEM images displayed a random distribution of grains in the sample.

  13. Signal Decomposition of High Resolution Time Series River Data to Separate Local and Regional Components of Conductivity

    EPA Science Inventory

    Signal processing techniques were applied to high-resolution time series data obtained from conductivity loggers placed upstream and downstream of an oil and gas wastewater treatment facility along a river. Data was collected over 14-60 days. The power spectral density was us...

  14. Statistical properties and time-frequency analysis of temperature, salinity and turbidity measured by the MAREL Carnot station in the coastal waters of Boulogne-sur-Mer (France)

    NASA Astrophysics Data System (ADS)

    Kbaier Ben Ismail, Dhouha; Lazure, Pascal; Puillat, Ingrid

    2016-10-01

    In marine sciences, many fields display high variability over a large range of spatial and temporal scales, from seconds to thousands of years. Recorded time series in this field are becoming longer, with increasing sampling frequencies, and are often nonlinear, nonstationary, multiscale and noisy. Their analysis faces new challenges and thus requires the implementation of adequate and specific methods. The objective of this paper is to bring time series analysis methods already applied in econometrics, signal processing, health sciences, etc. to the environmental marine domain, to assess their advantages and disadvantages, and to compare classical techniques with more recent ones. Temperature, turbidity and salinity are important quantities for ecosystem studies. The authors here consider the fluctuations of sea level, salinity, turbidity and temperature recorded by the MAREL Carnot system of Boulogne-sur-Mer (France), which is a moored buoy equipped with physico-chemical measuring devices working in continuous and autonomous conditions. In order to perform adequate statistical and spectral analyses, it is necessary to know the nature of the considered time series. For this purpose, the stationarity of the series and the occurrence of unit roots are addressed with the Augmented Dickey-Fuller tests. As an example, harmonic analysis is not relevant for temperature, turbidity and salinity due to their nonstationarity, but is applicable to the nearly stationary sea level datasets. In order to identify the dominant frequencies associated with the dynamics, the large number of data points provided by the sensors enables Fourier spectral analysis. Different power spectra show a complex variability and reveal an influence of environmental factors such as tides. However, classical spectral analysis, namely the Blackman-Tukey method, requires not only linear and stationary data but also evenly-spaced data.
Interpolating the time series introduces numerous artifacts into the data. The Lomb-Scargle algorithm is adapted to unevenly-spaced data and is used as an alternative. The limits of the method are also set out. It was found that beyond 50% missing measurements, few significant frequencies are detected, several seasonalities are no longer visible, and even a whole range of high frequencies disappears progressively. Furthermore, two time-frequency decomposition methods, namely wavelets and the Hilbert-Huang Transform (HHT), are applied for the analysis of the entire dataset. Using the Continuous Wavelet Transform (CWT), some properties of the time series are determined. Then, the inertial wave and several low-frequency tidal waves are identified by the application of Empirical Mode Decomposition (EMD). Finally, EMD-based Time Dependent Intrinsic Correlation (TDIC) analysis is applied to consider the correlation between two nonstationary time series.
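    A minimal sketch of the Lomb-Scargle analysis on unevenly sampled data (a synthetic series with an assumed M2-like 12.42 h period, not the MAREL Carnot data), using SciPy's implementation, which expects angular frequencies:

```python
import numpy as np
from scipy.signal import lombscargle

# Hedged sketch of the Lomb-Scargle periodogram, the alternative used when
# interpolating unevenly sampled data would introduce artifacts. The "tidal"
# series below is synthetic and illustrative only.

rng = np.random.default_rng(5)

# Irregular sampling times over about 30 days.
t = np.sort(rng.uniform(0, 30 * 24, 2000))           # hours
period = 12.42                                       # M2-like tidal period (h)
y = np.sin(2 * np.pi * t / period) + 0.3 * rng.standard_normal(t.size)

# scipy.signal.lombscargle expects *angular* frequencies.
periods = np.linspace(6.0, 30.0, 3000)               # candidate periods (h)
omega = 2 * np.pi / periods
power = lombscargle(t, y - y.mean(), omega)

best_period = periods[np.argmax(power)]              # should be near 12.42 h
```

    Removing the mean before the transform plays the role of the precentering that the classical formulation assumes.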

  15. Delineation of subsurface hydrocarbon contamination at a former hydrogenation plant using spectral induced polarization imaging

    NASA Astrophysics Data System (ADS)

    Flores Orozco, Adrián; Kemna, Andreas; Oberdörster, Christoph; Zschornack, Ludwig; Leven, Carsten; Dietrich, Peter; Weiss, Holger

    2012-08-01

    Broadband spectral induced polarization (SIP) measurements were conducted at a former hydrogenation plant in Zeitz (NE Germany) to investigate the potential of SIP imaging to delineate areas with different BTEX (benzene, toluene, ethylbenzene, and xylene) concentrations. Conductivity images reveal a poor correlation with the distribution of contaminants, whereas phase images exhibit two main anomalies: low phase shift values (< 5 mrad) for locations with high BTEX concentrations, including the occurrence of free-phase product (BTEX concentrations > 1.7 g/l), and higher phase values for lower BTEX concentrations. Moreover, the spectral response of the areas with high BTEX concentrations and free-phase product reveals a flattened spectrum at low frequencies (< 40 Hz), while areas with lower BTEX concentrations exhibit a response characterized by a frequency peak. The SIP response was modelled using a Debye decomposition to compute images of the median relaxation time. Consistent with laboratory studies, we observed an increase in the relaxation time associated with an increase in BTEX concentrations. Measurements were also collected in the time domain (TDIP), revealing imaging results consistent with those obtained from the frequency-domain (SIP) measurements. The results presented here demonstrate the potential of the SIP imaging method to discriminate between the source and the plume of dissolved contaminants at BTEX-contaminated sites.
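    The Debye decomposition referred to above models the complex resistivity as a superposition of Debye relaxation terms. A hedged numerical sketch with synthetic (not field) relaxation-time and chargeability values, including the median relaxation time read off the cumulative chargeability distribution:

```python
import numpy as np

# Illustrative sketch of a Debye decomposition forward model for SIP data:
# rho(w) = rho0 * (1 - sum_k m_k * (i w tau_k) / (1 + i w tau_k)).
# All relaxation times and chargeabilities here are synthetic placeholders;
# an inversion of measured spectra would estimate them instead.

rho0 = 100.0                              # DC resistivity (ohm m)
taus = np.logspace(-3, 1, 40)             # relaxation times (s)
m = np.exp(-0.5 * ((np.log10(taus) + 1.0) / 0.5) ** 2)
m *= 0.1 / m.sum()                        # scale to total chargeability 0.1

freqs = np.logspace(-2, 3, 100)           # Hz
omega = 2 * np.pi * freqs

iwt = 1j * omega[:, None] * taus[None, :]
rho = rho0 * (1 - np.sum(m[None, :] * iwt / (1 + iwt), axis=1))
phase_mrad = 1e3 * np.angle(rho)          # phase spectrum (mrad, negative)

# Median relaxation time: where the cumulative chargeability reaches 50%.
cum = np.cumsum(m) / m.sum()
tau50 = taus[np.searchsorted(cum, 0.5)]
```

    Images of tau50 over the survey area are what the abstract compares against the BTEX concentration pattern.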

  16. Multipolar response of nonspherical silicon nanoparticles in the visible and near-infrared spectral ranges

    NASA Astrophysics Data System (ADS)

    Terekhov, Pavel D.; Baryshnikova, Kseniia V.; Artemyev, Yuriy A.; Karabchevsky, Alina; Shalin, Alexander S.; Evlyukhin, Andrey B.

    2017-07-01

    Spectral multipole resonances of parallelepiped-, pyramid-, and cone-shaped silicon nanoparticles excited by linearly polarized light waves are theoretically investigated. The numerical finite element method is applied to calculate the scattering cross sections as a function of the nanoparticles' geometrical parameters. The roles of multipole moments (up to the third order) in the scattering process are analyzed using the semianalytical multipole decomposition approach. The possibility of configuring the scattering pattern by tuning the multipole contributions to the total scattered waves is discussed and demonstrated. It is shown that cubic nanoparticles can provide strong isotropic side scattering with minimization of the scattering in the forward and backward directions. In the case of the pyramidal and conical nanoparticles, total suppression of the side scattering can be obtained. It was found that, due to the shape factor of the pyramidal and conical nanoparticles, their electric toroidal dipole resonance can be excited in the spectral region of the first electric and magnetic dipole resonances. The influence of the incident light direction on the optical response of the pyramidal and conical nanoparticles is discussed. The obtained results provide important information that can be used for the development of nanoantennas with improved functionality due to directional scattering effects.

  17. Multigrid treatment of implicit continuum diffusion

    NASA Astrophysics Data System (ADS)

    Francisquez, Manaure; Zhu, Ben; Rogers, Barrett

    2017-10-01

    Implicit treatment of diffusive terms of various differential orders common in continuum mechanics modeling, such as computational fluid dynamics, is investigated with spectral and multigrid algorithms in non-periodic 2D domains. In doubly periodic time dependent problems these terms can be efficiently and implicitly handled by spectral methods, but in non-periodic systems solved with distributed memory parallel computing and 2D domain decomposition, this efficiency is lost for large numbers of processors. We built and present here a multigrid algorithm for these types of problems which outperforms a spectral solution that employs the highly optimized FFTW library. This multigrid algorithm is not only suitable for high performance computing but may also be able to efficiently treat implicit diffusion of arbitrary order by introducing auxiliary equations of lower order. We test these solvers for fourth and sixth order diffusion with idealized harmonic test functions as well as a turbulent 2D magnetohydrodynamic simulation. It is also shown that an anisotropic operator without cross-terms can improve model accuracy and speed, and we examine the impact that the various diffusion operators have on the energy, the enstrophy, and the qualitative aspect of a simulation. This work was supported by DOE-SC-0010508. This research used resources of the National Energy Research Scientific Computing Center (NERSC).
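    The periodic case mentioned above, where spectral methods handle implicit diffusion efficiently, can be sketched in one step: a backward-Euler update of fourth-order hyperdiffusion is diagonal in Fourier space (grid and coefficients are illustrative):

```python
import numpy as np

# Minimal sketch of the periodic-domain case: an implicit (backward Euler)
# step of fourth-order hyperdiffusion u_t = -nu * u_xxxx is diagonal in
# Fourier space, so each mode is simply divided by (1 + nu * dt * k^4).
# Grid size and coefficients are illustrative assumptions.

n, L = 256, 2 * np.pi
nu, dt = 1e-2, 0.1
x = np.linspace(0, L, n, endpoint=False)
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # wavenumbers

u = np.sin(x) + 0.2 * np.sin(20 * x)           # low + high wavenumber modes

def implicit_hyperdiffusion_step(u, nu, dt):
    u_hat = np.fft.fft(u)
    u_hat /= 1.0 + nu * dt * k ** 4            # backward Euler, mode by mode
    return np.real(np.fft.ifft(u_hat))

u_new = implicit_hyperdiffusion_step(u, nu, dt)
```

    The high-wavenumber mode is damped strongly while the low one is nearly untouched; it is exactly this per-mode division that has no cheap analogue in a non-periodic, domain-decomposed setting, motivating the multigrid solver.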

  18. Quantitative, spectrally-resolved intraoperative fluorescence imaging

    PubMed Central

    Valdés, Pablo A.; Leblond, Frederic; Jacobs, Valerie L.; Wilson, Brian C.; Paulsen, Keith D.; Roberts, David W.

    2012-01-01

    Intraoperative visual fluorescence imaging (vFI) has emerged as a promising aid to surgical guidance, but does not fully exploit the potential of the fluorescent agents that are currently available. Here, we introduce a quantitative fluorescence imaging (qFI) approach that converts spectrally-resolved data into images of absolute fluorophore concentration pixel-by-pixel across the surgical field of view (FOV). The resulting estimates are linear, accurate, and precise relative to true values, and spectral decomposition of multiple fluorophores is also achieved. Experiments with protoporphyrin IX in a glioma rodent model demonstrate in vivo quantitative and spectrally-resolved fluorescence imaging of infiltrating tumor margins for the first time. Moreover, we present images from human surgery which detect residual tumor not evident with state-of-the-art vFI. The wide-field qFI technique has broad implications for intraoperative surgical guidance because it provides near real-time quantitative assessment of multiple fluorescent biomarkers across the operative field. PMID:23152935

  19. A novel fusion framework of visible light and infrared images based on singular value decomposition and adaptive DUAL-PCNN in NSST domain

    NASA Astrophysics Data System (ADS)

    Cheng, Boyang; Jin, Longxu; Li, Guoning

    2018-06-01

Fusion of visible light and infrared images has been a significant subject in imaging science. As a new contribution to this field, a novel fusion framework for visible light and infrared images, based on adaptive dual-channel unit-linking pulse coupled neural networks with singular value decomposition (ADS-PCNN) in the non-subsampled shearlet transform (NSST) domain, is presented in this paper. First, the source images are decomposed into multi-direction and multi-scale sub-images by NSST. Then, an improved novel sum modified-Laplacian (INSML) of the low-pass sub-image and an improved average gradient (IAVG) of the high-pass sub-images are used to stimulate the ADS-PCNN, respectively. To address the large spectral difference between infrared and visible light images and the occurrence of black artifacts in fused images, a local structure information operator (LSI), derived from singular value decomposition of local areas in each source image, is used as the adaptive linking strength, which enhances fusion accuracy. Compared with PCNN models in other studies, the proposed method simplifies certain peripheral parameters, and the time matrix is utilized to determine the iteration number adaptively. A series of images from diverse scenes are used for fusion experiments, and the fusion results are evaluated subjectively and objectively. Both evaluations show that our algorithm exhibits superior fusion performance and is more effective than existing typical fusion techniques.

  20. Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures

    NASA Astrophysics Data System (ADS)

    Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en

    2015-08-01

Horizontal electrical heterogeneity of the subsurface originates mostly from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity severely distorts regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose modeling experiments with canonical decomposition in terms of 1D layered anisotropic models. Canonical decomposition is a mathematical decomposition method based on eigenstate analyses, as distinct from distortion analyses, and can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested this method with numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective at revealing geological anisotropic information. Finally, against the background of anisotropy established by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses, and the result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method to detect anisotropy of geological media.

  1. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  2. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.

  3. A new method of Quickbird own image fusion

    NASA Astrophysics Data System (ADS)

    Han, Ying; Jiang, Hong; Zhang, Xiuying

    2009-10-01

With the rapid development of remote sensing technology, the means of accessing remote sensing data have become increasingly abundant, so the same area can yield a large number of multi-temporal image sequences at different resolutions. The main fusion methods at present are HPF, the IHS transform, PCA, Brovey, the Mallat algorithm and the wavelet transform. The IHS transform causes serious spectral distortion, while the Mallat algorithm omits the low-frequency information of the high spatial resolution image, so its fusion results show obvious blocking effects. Wavelet multi-scale decomposition handles different sizes, directions, details and edges very well, but different fusion rules and algorithms achieve different effects. This article takes Quickbird own-image fusion as an example, comparing fusion based on the wavelet transform with HVS against fusion based on the wavelet transform with IHS. The results show that the former performs better. This paper uses the correlation coefficient, the relative average spectral error index and other usual indices to evaluate image quality.

  4. Detector-Response Correction of Two-Dimensional γ-Ray Spectra from Neutron Capture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rusev, G.; Jandel, M.; Arnold, C. W.

    2015-05-28

The neutron-capture reaction produces a large variety of γ-ray cascades with different γ-ray multiplicities. A measured spectral distribution of these cascades for each γ-ray multiplicity is of importance to applications and studies of γ-ray statistical properties. The DANCE array, a 4π ball of 160 BaF2 detectors, is an ideal tool for measuring neutron-capture γ-rays. The high granularity of DANCE enables measurements of high-multiplicity γ-ray cascades. The measured two-dimensional spectra (γ-ray energy, γ-ray multiplicity) have to be corrected for the DANCE detector response in order to compare them with predictions of the statistical model or use them in applications. The detector-response correction problem becomes more difficult for a 4π detection system than for a single detector. A trial-and-error approach and an iterative decomposition of γ-ray multiplets have been successfully applied to the detector-response correction. As a result, applications of the decomposition methods are discussed for two-dimensional γ-ray spectra measured at DANCE from γ-ray sources and from the 10B(n,γ) and 113Cd(n,γ) reactions.

  5. Application of Spectral Analysis Techniques in the Intercomparison of Aerosol Data. Part II: Using Maximum Covariance Analysis to Effectively Compare Spatiotemporal Variability of Satellite and AERONET Measured Aerosol Optical Depth

    NASA Technical Reports Server (NTRS)

    Li, Jing; Carlson, Barbara E.; Lacis, Andrew A.

    2014-01-01

Moderate Resolution Imaging SpectroRadiometer (MODIS) and Multi-angle Imaging SpectroRadiometer (MISR) provide regular aerosol observations with global coverage. It is essential to examine the coherency between space- and ground-measured aerosol parameters in representing aerosol spatial and temporal variability, especially in the climate forcing and model validation context. In this paper, we introduce Maximum Covariance Analysis (MCA), also known as Singular Value Decomposition analysis, as an effective way to compare correlated aerosol spatial and temporal patterns between satellite measurements and AERONET data. This technique not only successfully extracts the variability of major aerosol regimes but also allows the simultaneous examination of aerosol variability both spatially and temporally. More importantly, it accommodates well the sparsely distributed AERONET data, for which other spectral decomposition methods, such as Principal Component Analysis, do not yield satisfactory results. The comparison shows overall good agreement between MODIS/MISR and AERONET AOD variability. The correlations between the first three modes of the MCA results for both MODIS/AERONET and MISR/AERONET are above 0.8 for the full data set and above 0.75 for the AOD anomaly data. The correlations between MODIS and MISR modes are also quite high (greater than 0.9). We also examine the extent of spatial agreement between satellite and AERONET AOD data at the selected stations. Some sites with disagreements in the MCA results, such as Kanpur, also have low spatial coherency. This should be associated partly with high AOD spatial variability and partly with uncertainties in satellite retrievals due to the seasonally varying aerosol types and surface properties.
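The core of MCA is an SVD of the cross-covariance matrix between two anomaly fields: the paired singular vectors give the coupled spatial patterns, and the squared singular values give the fraction of squared covariance each mode explains. A minimal sketch with synthetic data standing in for satellite and station AOD anomalies (all names and values are illustrative, not from the paper):

```python
import numpy as np

def mca(X, Y):
    """Maximum Covariance Analysis via SVD of the cross-covariance matrix.
    X: (time, p) anomalies at p grid points; Y: (time, q) anomalies at q stations."""
    C = X.T @ Y / (X.shape[0] - 1)          # cross-covariance, shape (p, q)
    U, s, Vt = np.linalg.svd(C, full_matrices=False)
    return U, s, Vt.T                       # paired spatial patterns, singular values

# Synthetic example: both fields share one sinusoidal mode of variability.
rng = np.random.default_rng(0)
t = np.linspace(0, 4*np.pi, 200)
common = np.sin(t)[:, None]
X = common @ rng.normal(size=(1, 10)) + 0.1*rng.normal(size=(200, 10))
Y = common @ rng.normal(size=(1, 6)) + 0.1*rng.normal(size=(200, 6))
U, s, V = mca(X - X.mean(0), Y - Y.mean(0))
frac = s[0]**2 / np.sum(s**2)   # squared-covariance fraction of the leading mode
```

The expansion coefficients `X @ U[:, 0]` and `Y @ V[:, 0]` are the time series whose correlation the abstract reports for the first three modes; here the leading pair should both track the shared sinusoid.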

  6. Disentangling AGN and Star Formation in Soft X-Rays

    NASA Technical Reports Server (NTRS)

    LaMassa, Stephanie M.; Heckman, T. M.; Ptak, A.

    2012-01-01

We have explored the interplay of star formation and active galactic nucleus (AGN) activity in soft X-rays (0.5-2 keV) in two samples of Seyfert 2 galaxies (Sy2s). Using a combination of low-resolution CCD spectra from Chandra and XMM-Newton, we modeled the soft emission of 34 Sy2s using power-law and thermal models. For the 11 sources with high signal-to-noise Chandra imaging of the diffuse host galaxy emission, we estimate the luminosity due to star formation by removing the AGN and fitting the residual emission. The AGN and star formation contributions to the soft X-ray luminosity (i.e., L(sub x,AGN) and L(sub x,SF)) for the remaining 24 Sy2s were estimated from the power-law and thermal luminosities derived from spectral fitting. These luminosities were scaled based on a template derived from XSINGS analysis of normal star-forming galaxies. To account for errors in the luminosities derived from spectral fitting and the spread in the scaling factor, we estimated L(sub x,AGN) and L(sub x,SF) from Monte Carlo simulations. These simulated luminosities agree with L(sub x,AGN) and L(sub x,SF) derived from Chandra imaging analysis within a 3-sigma confidence level. Using the infrared [Ne ii]12.8 micron and [O iv]26 micron lines as proxies of star formation and AGN activity, respectively, we independently disentangle the contributions of these two processes to the total soft X-ray emission. This decomposition generally agrees with L(sub x,SF) and L(sub x,AGN) at the 3-sigma level. In the absence of resolvable nuclear emission, our decomposition method provides a reasonable estimate of emission due to star formation in galaxies hosting type 2 AGNs.

  7. Broadband changes in the cortical surface potential track activation of functionally diverse neuronal populations

    PubMed Central

    Miller, Kai J; Honey, Christopher J; Hermes, Dora; Rao, Rajesh PN; denNijs, Marcel; Ojemann, Jeffrey G

    2013-01-01

We illustrate a general principle of electrical potential measurements from the surface of the cerebral cortex by revisiting and reanalyzing experimental work from the visual, language and motor systems. A naïve decomposition technique of electrocorticographic power spectral measurements reveals that broadband spectral changes reliably track task engagement. These broadband changes are shown to be a generic correlate of local cortical function across a variety of brain areas and behavioral tasks. Furthermore, they fit a power-law form that is consistent with simple models of the dendritic integration of asynchronous local population firing. Because broadband spectral changes covary with diverse perceptual and behavioral states on the timescale of 20–50 ms, they provide a powerful and widely applicable experimental tool. PMID:24018305

  8. Hyperfine Sublevel Correlation (HYSCORE) Spectra for Paramagnetic Centers with Nuclear Spin I = 1 Having Isotropic Hyperfine Interactions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maryasov, Alexander G.; Bowman, Michael K.

    2004-07-08

It is shown that HYSCORE spectra of paramagnetic centers having nuclei of spin I=1 with isotropic hfi and arbitrary NQI consist of ridges having zero width. A parametric presentation of these ridges is found which shows the range of possible frequencies in the HYSCORE spectrum and aids in spectral assignments and rapid estimation of spin Hamiltonian parameters. An alternative approach for the spectral density calculation is presented that is based on spectral decomposition of the Hamiltonian. Only the eigenvalues of the Hamiltonian are needed in this approach. An atlas of HYSCORE spectra is given in the Supporting Information. This approach is applied to the estimation of the spin Hamiltonian parameters of the oxovanadium-EDTA complex.
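The eigenvalues-only idea can be illustrated generically: for any small Hermitian spin Hamiltonian, the observable transition frequencies are pairwise differences of the energy levels, so a symmetric eigenvalue routine suffices and no eigenvectors are computed. The matrix below is an arbitrary placeholder, not a spin-1 Hamiltonian from the paper:

```python
import numpy as np

# Placeholder Hermitian Hamiltonian (arbitrary values, units of frequency).
H = np.array([[1.0, 0.2, 0.0],
              [0.2, 0.5, 0.1],
              [0.0, 0.1, -0.3]])

levels = np.linalg.eigvalsh(H)                     # sorted eigenvalues only
freqs = np.abs(levels[:, None] - levels[None, :])  # all transition frequencies
```

Each off-diagonal entry of `freqs` is a level difference that could appear as a spectral frequency; which of them carry intensity depends on the transition moments, which this eigenvalue-only sketch deliberately leaves out.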

  9. Passive microrheology of soft materials with atomic force microscopy: A wavelet-based spectral analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martinez-Torres, C.; Streppa, L.; Arneodo, A.

    2016-01-18

Compared to active microrheology, where a known force or modulation is periodically imposed on a soft material, passive microrheology relies on the spectral analysis of the spontaneous motion of tracers inherent or external to the material. Passive microrheology studies of soft or living materials with atomic force microscopy (AFM) cantilever tips are rather rare because, in the spectral densities, the rheological response of the materials is hardly distinguishable from other sources of random or periodic perturbations. To circumvent this difficulty, we propose here a wavelet-based decomposition of AFM cantilever tip fluctuations, and we show that, when applying this multi-scale method to soft polymer layers and to living myoblasts, the structural damping exponents of these soft materials can be retrieved.

  10. A spectral-finite difference solution of the Navier-Stokes equations in three dimensions

    NASA Astrophysics Data System (ADS)

    Alfonsi, Giancarlo; Passoni, Giuseppe; Pancaldo, Lea; Zampaglione, Domenico

    1998-07-01

A new computational code for the numerical integration of the three-dimensional Navier-Stokes equations in their non-dimensional velocity-pressure formulation is presented. The system of non-linear partial differential equations governing the time-dependent flow of a viscous incompressible fluid in a channel is managed by means of a mixed spectral-finite difference method, in which different numerical techniques are applied: Fourier decomposition is used along the homogeneous directions, second-order Crank-Nicolson algorithms are employed for the spatial derivatives in the direction orthogonal to the solid walls and a fourth-order Runge-Kutta procedure is implemented for both the calculation of the convective term and the time advancement. The pressure problem, cast in the Helmholtz form, is solved with the use of a cyclic reduction procedure. No-slip boundary conditions are used at the walls of the channel and cyclic conditions are imposed at the other boundaries of the computing domain. Results are provided for different values of the Reynolds number at several time steps of integration and are compared with results obtained by other authors.

  11. Episodic Tremor and Slip (ETS) as a chaotic multiphysics spring

    NASA Astrophysics Data System (ADS)

    Veveakis, E.; Alevizos, S.; Poulet, T.

    2017-03-01

Episodic Tremor and Slip (ETS) events display a rich behaviour of slow and accelerated slip, with time series ranging from simple oscillatory to complicated chaotic. It is commonly believed that the fast events appearing as non-volcanic tremors are signatures of deep fluid injection. The fluid source is suggested to be related to the breakdown of hydrous phyllosilicates, mainly serpentinite-group minerals such as antigorite or lizardite, which are widespread at the top of the slab in subduction environments. Similar ETS sequences are recorded in different lithologies in exhumed crustal carbonate-rich thrusts, where the fluid source is suggested to be the more vigorous carbonate decomposition reaction. If indeed both types of events can be understood and modelled by the same generic fluid-release reaction AB(solid) ⇌ A(solid) + B(fluid), the data from ETS sequences in subduction zones reveal a geophysically tractable temporal evolution without access to the fault zone. This work reviews recent advances in modelling ETS events considering the multiphysics instabilities triggered by the fluid-release reaction and develops a thermal-hydraulic-mechanical-chemical oscillator (THMC spring) model for such mineral reactions (like dehydration and decomposition) in megathrusts. We describe advanced computational methods for THMC instabilities and discuss spectral element and finite element solutions. We apply the presented numerical methods to field examples of this important mechanism and reproduce the temporal signature of the Cascadia and Hikurangi trenches with a serpentinite oscillator.

  12. Supercritical Catalytic Cracking of Hydrocarbon Feeds Insight

    DTIC Science & Technology

    2016-04-21

University teamed with Spectral Energies, LLC to develop appropriate spatiotemporal imaging capabilities in single-body zeolites to describe beneficial and parasitic catalytic cracking pathways. We demonstrated the ability to follow, in a spatiotemporal fashion, the decomposition of the structure-directing agent used to template the zeolite.

  13. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. There are two problems in the three-component decomposition method: overestimation of the volume scattering component in urban areas, and a parameter that is artificially set to a fixed value. Though the volume scattering overestimation can be partly solved by a deorientation process, volume scattering still dominates some oriented urban areas, and the speckle-like decomposition results introduced by the artificially set value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve the aforementioned problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model, and the decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it performs better in urban areas.
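The final step, recovering scattering powers by constrained linear least squares, can be sketched with a generic non-negativity-constrained solve. The matrix entries below are hypothetical placeholders, not the paper's eigenvector-based models; only the solver pattern is the point:

```python
import numpy as np
from scipy.optimize import nnls

# Columns: (hypothetical) normalized signatures of the surface, double-bounce
# and volume mechanisms in some feature space; y is the observed feature vector.
# The paper substitutes coherency-matrix eigenvectors for the first two columns.
A = np.array([[1.0, 0.2, 0.33],
              [0.1, 1.0, 0.33],
              [0.0, 0.0, 0.33]])
y = np.array([0.6, 0.3, 0.1])

# Non-negative least squares keeps the decomposed powers physical (>= 0).
powers, residual = nnls(A, y)
```

Constraining the powers to be non-negative is what prevents the unphysical negative components that an unconstrained least-squares inversion of an overestimated volume model can produce.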

  14. Label-free DNA imaging in vivo with stimulated Raman scattering microscopy

    DOE PAGES

    Lu, Fa-Ke; Basu, Srinjan; Igras, Vivien; ...

    2015-08-31

Label-free DNA imaging is highly desirable in biology and medicine to perform live imaging without affecting cell function and to obtain instant histological tissue examination during surgical procedures. Here we show a label-free DNA imaging method with stimulated Raman scattering (SRS) microscopy for visualization of the cell nuclei in live animals and intact fresh human tissues with subcellular resolution. Relying on the distinct Raman spectral features of the carbon-hydrogen bonds in DNA, the distribution of DNA is retrieved from the strong background of proteins and lipids by linear decomposition of SRS images at three optimally selected Raman shifts. Based on changes in DNA condensation in the nucleus, we were able to capture chromosome dynamics during cell division both in vitro and in vivo. We tracked mouse skin cell proliferation, induced by drug treatment, through in vivo counting of the mitotic rate. Moreover, we demonstrated a label-free histology method for human skin cancer diagnosis that provides results comparable to conventional tissue staining methods such as H&E. In conclusion, our approach exhibits higher sensitivity than SRS imaging of DNA in the fingerprint spectral region. Compared with spontaneous Raman imaging of DNA, our approach is three orders of magnitude faster, allowing both chromatin dynamics studies and label-free optical histology in real time.
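The linear decomposition step, separating DNA, protein and lipid from images acquired at three Raman shifts, amounts to one small linear solve applied to every pixel. A sketch under the assumption of a known 3x3 mixing matrix; the matrix values below are hypothetical placeholders, not measured reference spectra:

```python
import numpy as np

# Hypothetical reference intensities of DNA, protein and lipid at the three
# Raman shifts (rows: shifts, columns: species). Real values would be
# calibrated from pure standards.
M = np.array([[0.9, 0.4, 0.1],
              [0.3, 1.0, 0.2],
              [0.1, 0.3, 1.0]])

def unmix(stack, M):
    """stack: (3, H, W) SRS images at the three shifts -> (3, H, W)
    species concentration maps via per-pixel linear decomposition."""
    pixels = stack.reshape(3, -1)         # flatten the spatial dimensions
    coeffs = np.linalg.solve(M, pixels)   # one 3x3 solve handles all pixels at once
    return coeffs.reshape(stack.shape)

# Forward-simulate a tiny image from known concentration maps, then recover them.
true = np.random.default_rng(1).uniform(0, 1, size=(3, 2, 2))
stack = np.tensordot(M, true, axes=1)
rec = unmix(stack, M)
```

With exactly three shifts and three species the system is square and the decomposition is unique whenever the reference spectra are linearly independent at the chosen shifts.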

  15. Clinical Application of Dual-Energy Spectral Computed Tomography in Detecting Cholesterol Gallstones From Surrounding Bile.

    PubMed

    Yang, Chuang-Bo; Zhang, Shuang; Jia, Yong-Jun; Duan, Hai-Feng; Ma, Guang-Ming; Zhang, Xi-Rong; Yu, Yong; He, Tai-Ping

    2017-04-01

    This study aimed to investigate the clinical value of spectral computed tomography (CT) in the detection of cholesterol gallstones from surrounding bile. This study was approved by the institutional review board. The unenhanced spectral CT data of 24 patients who had surgically confirmed cholesterol gallstones were analyzed. Lipid concentrations and CT numbers were measured from fat-based material decomposition image and virtual monochromatic image sets (40-140 keV), respectively. The difference in lipid concentration and CT number between cholesterol gallstones and the surrounding bile were statistically analyzed. Receiver operating characteristic analysis was applied to determine the diagnostic accuracy of using lipid concentration to differentiate cholesterol gallstones from bile. Cholesterol gallstones were bright on fat-based material decomposition images yielding a 92% detection rate (22 of 24). The lipid concentrations (552.65 ± 262.36 mg/mL), CT number at 40 keV (-31.57 ± 16.88 HU) and 140 keV (24.30 ± 5.85 HU) for the cholesterol gallstones were significantly different from those of bile (-13.94 ± 105.12 mg/mL, 12.99 ± 9.39 HU and 6.19 ± 4.97 HU, respectively). Using 182.59 mg/mL as the threshold value for lipid concentration, one could obtain sensitivity of 95.5% and specificity of 100% with accuracy of 0.994 for differentiating cholesterol gallstones from bile. Virtual monochromatic spectral CT images at 40 keV and 140 keV provide significant CT number differences between cholesterol gallstones and the surrounding bile. Spectral CT provides an excellent detection rate for cholesterol gallstones. Copyright © 2017 The Association of University Radiologists. Published by Elsevier Inc. All rights reserved.

  16. Hybrid spectral CT reconstruction

    PubMed Central

    Clark, Darin P.

    2017-01-01

    Current photon counting x-ray detector (PCD) technology faces limitations associated with spectral fidelity and photon starvation. One strategy for addressing these limitations is to supplement PCD data with high-resolution, low-noise data acquired with an energy-integrating detector (EID). In this work, we propose an iterative, hybrid reconstruction technique which combines the spectral properties of PCD data with the resolution and signal-to-noise characteristics of EID data. Our hybrid reconstruction technique is based on an algebraic model of data fidelity which substitutes the EID data into the data fidelity term associated with the PCD reconstruction, resulting in a joint reconstruction problem. Within the split Bregman framework, these data fidelity constraints are minimized subject to additional constraints on spectral rank and on joint intensity-gradient sparsity measured between the reconstructions of the EID and PCD data. Following a derivation of the proposed technique, we apply it to the reconstruction of a digital phantom which contains realistic concentrations of iodine, barium, and calcium encountered in small-animal micro-CT. The results of this experiment suggest reliable separation and detection of iodine at concentrations ≥ 5 mg/ml and barium at concentrations ≥ 10 mg/ml in 2-mm features for EID and PCD data reconstructed with inherent spatial resolutions of 176 μm and 254 μm, respectively (point spread function, FWHM). Furthermore, hybrid reconstruction is demonstrated to enhance spatial resolution within material decomposition results and to improve low-contrast detectability by as much as 2.6 times relative to reconstruction with PCD data only. The parameters of the simulation experiment are based on an in vivo micro-CT experiment conducted in a mouse model of soft-tissue sarcoma. 
Material decomposition results produced from this in vivo data demonstrate the feasibility of distinguishing two K-edge contrast agents with a spectral separation on the order of the energy resolution of the PCD hardware. PMID:28683124

  17. Scalable direct Vlasov solver with discontinuous Galerkin method on unstructured mesh.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, J.; Ostroumov, P. N.; Mustapha, B.

    2010-12-01

This paper presents the development of parallel direct Vlasov solvers with the discontinuous Galerkin (DG) method for beam and plasma simulations in four dimensions. Both physical and velocity spaces are two-dimensional (2P2V) with unstructured mesh. Contrary to the standard particle-in-cell (PIC) approach for kinetic space plasma simulations, i.e., solving the Vlasov-Maxwell equations, a direct method is used in this paper. There are several benefits to solving the Vlasov equation directly, such as avoiding the noise associated with a finite number of particles and the capability to capture fine structure in the plasma. The most challenging part of a direct Vlasov solver comes from the higher dimensionality, as the computational cost increases as N^{2d}, where d is the dimension of the physical space. Recently, thanks to the fast development of supercomputers, this approach has become more practical. Many efforts were previously made to solve Vlasov equations in low dimensions; now more interest has focused on higher dimensions. Different numerical methods have been tried so far, such as the finite difference method, the Fourier spectral method, the finite volume method, and the spectral element method. This paper builds on our previous efforts to use the DG method. The DG method has proven very successful in solving the Maxwell equations, and this paper is our first effort to apply the DG method to Vlasov equations. DG has several advantages, such as a local mass matrix, strong stability, and easy parallelization, which are particularly suitable for Vlasov equations. Domain decomposition in high dimensions has been used for parallelization, including a highly scalable parallel two-dimensional Poisson solver. Benchmark results are shown and simulation results will be reported.

  18. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  19. Identification of coffee bean varieties using hyperspectral imaging: influence of preprocessing methods and pixel-wise spectra analysis.

    PubMed

    Zhang, Chu; Liu, Fei; He, Yong

    2018-02-01

Hyperspectral imaging was used to identify and visualize coffee bean varieties. Spectral preprocessing of pixel-wise spectra was conducted by different methods, including moving average smoothing (MA), wavelet transform (WT) and empirical mode decomposition (EMD), while spatial preprocessing of the gray-scale image at each wavelength was conducted by a median filter (MF). Support vector machine (SVM) models using full sample-average spectra, pixel-wise spectra, and the optimal wavelengths selected from second-derivative spectra all achieved classification accuracy over 80%. First, the SVM models built on pixel-wise spectra were used to predict the sample-average spectra, and these models obtained over 80% classification accuracy. Second, the SVM models built on sample-average spectra were used to predict pixel-wise spectra, but achieved lower than 50% classification accuracy. The results indicated that WT and EMD were suitable for pixel-wise spectra preprocessing. The use of pixel-wise spectra could extend the calibration set and resulted in good predictions for both pixel-wise spectra and sample-average spectra. The overall results indicated the effectiveness of spectral preprocessing and the adoption of pixel-wise spectra, and provide an alternative way of data processing for applications of hyperspectral imaging in the food industry.
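Of the preprocessing methods listed, moving average smoothing (MA) is the simplest to sketch. A minimal version for pixel-wise spectra, with array shapes and the window size chosen purely for illustration:

```python
import numpy as np

def moving_average(spectra, window=5):
    """Moving-average smoothing (MA) of pixel-wise spectra.
    spectra: (n_pixels, n_bands) array; window: smoothing width in bands."""
    kernel = np.ones(window) / window
    # mode='same' keeps the band count; edge bands average over fewer samples.
    return np.apply_along_axis(
        lambda s: np.convolve(s, kernel, mode='same'), 1, spectra)
```

WT and EMD, which the paper found better suited to pixel-wise spectra, follow the same per-pixel pattern but replace the fixed averaging kernel with a multi-scale decomposition and reconstruction.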

  20. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

Independent component analysis (ICA) has been proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of the unmixing matrix, whose initial values are generated randomly, and this randomness of initialization leads to different decomposition results. A single one-time decomposition is therefore not usually reliable for fMRI data analysis. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to assess the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs considerable computing time. To mitigate this problem, in this paper we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to show the effectiveness of the new method, and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition but also saves considerable computing time compared to RDICA. Furthermore, ROC (receiver operating characteristic) power analysis showed better signal reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.

  1. Understanding a reference-free impedance method using collocated piezoelectric transducers

    NASA Astrophysics Data System (ADS)

    Kim, Eun Jin; Kim, Min Koo; Sohn, Hoon; Park, Hyun Woo

    2010-03-01

    A new concept of a reference-free impedance method, which does not require direct comparison with a baseline impedance signal, is proposed for damage detection in a plate-like structure. A single pair of piezoelectric (PZT) wafers collocated on both surfaces of a plate is utilized for extracting electro-mechanical signatures (EMS) associated with mode conversion due to damage. A numerical simulation based on spectral element analysis is conducted to investigate the EMS of the collocated PZT wafers in the frequency domain in the presence of damage. Then, the EMS due to mode conversion induced by damage are extracted using a signal decomposition technique based on the polarization characteristics of the collocated PZT wafers. The effects of the size and the location of damage on the decomposed EMS are investigated as well. Finally, the applicability of the decomposed EMS to reference-free damage diagnosis is discussed.

  2. Linear and nonlinear variable selection in competing risks data.

    PubMed

    Ren, Xiaowei; Li, Shanshan; Shen, Changyu; Yu, Zhangsheng

    2018-06-15

    The subdistribution hazard model for competing risks data has been applied extensively in clinical research. Variable selection methods for linear effects in competing risks data have been studied in the past decade, but there is no existing work on selection of potential nonlinear effects for the subdistribution hazard model. We propose a two-stage procedure to select the linear and nonlinear covariates simultaneously and to estimate the selected covariate effects. We use a spectral decomposition approach to distinguish the linear and nonlinear parts of each covariate and adaptive LASSO to select each of the two components. Extensive numerical studies demonstrate that the proposed procedure achieves good selection accuracy in the first stage and small estimation biases in the second stage. The proposed method is applied to analyze a cardiovascular disease data set with competing death causes. Copyright © 2018 John Wiley & Sons, Ltd.

  3. Quantification of breast density with spectral mammography based on a scanned multi-slit photon-counting detector: a feasibility study.

    PubMed

    Ding, Huanjun; Molloi, Sabee

    2012-08-07

    A simple and accurate measurement of breast density is crucial for understanding its impact in breast cancer risk models. The feasibility of quantifying volumetric breast density with a photon-counting spectral mammography system has been investigated using both computer simulations and physical phantom studies. A computer simulation model involving polyenergetic spectra from a tungsten anode x-ray tube and a Si-based photon-counting detector was evaluated for breast density quantification. The figure-of-merit (FOM), defined as the signal-to-noise ratio of the dual energy image with respect to the square root of mean glandular dose, was chosen to optimize the imaging protocols in terms of tube voltage and splitting energy. A scanning multi-slit photon-counting spectral mammography system was employed in the experimental study to quantitatively measure breast density using dual energy decomposition with glandular and adipose equivalent phantoms of uniform thickness. Four phantom studies were designed to evaluate the accuracy of the technique, each addressing one specific variable in the phantom configurations: thickness, density, area and shape. In addition to the standard calibration fitting function used for dual energy decomposition, a modified fitting function was proposed, which introduced the tube voltage used in the imaging task as a third variable in the dual energy decomposition. For an average-sized 4.5 cm thick breast, the FOM was maximized with a tube voltage of 46 kVp and a splitting energy of 24 keV. To be consistent with the tube voltage used in current clinical screening exams (∼32 kVp), the optimal splitting energy was proposed to be 22 keV, which offered a FOM greater than 90% of the optimal value. In the experimental investigation, the root-mean-square (RMS) error in breast density quantification for all four phantom studies was estimated to be approximately 1.54% using the standard calibration function. The modified fitting function, which integrated the tube voltage as a calibration variable, yielded an RMS error of approximately 1.35% across all four studies. These results suggest that photon-counting spectral mammography systems may potentially be implemented for an accurate quantification of volumetric breast density, with an RMS error of less than 2%, using the proposed dual energy imaging technique.
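
    The calibration-based dual energy decomposition can be sketched as a least-squares polynomial fit from the two log-signal measurements back to glandular thickness. The effective attenuation coefficients and phantom grid below are hypothetical stand-ins, not the paper's calibration data.

```python
import numpy as np

# Hypothetical effective attenuation coefficients (1/cm) at the two energy bins
MU = {"low":  {"gland": 0.80, "adipose": 0.55},
      "high": {"gland": 0.45, "adipose": 0.35}}

def log_signals(t_g, t_a):
    """Ideal -log(I/I0) signals for given glandular/adipose thicknesses (cm)."""
    sL = MU["low"]["gland"] * t_g + MU["low"]["adipose"] * t_a
    sH = MU["high"]["gland"] * t_g + MU["high"]["adipose"] * t_a
    return sL, sH

# Calibration: a grid of known phantom thickness combinations
tg, ta = np.meshgrid(np.linspace(0, 4, 9), np.linspace(0, 4, 9))
tg, ta = tg.ravel(), ta.ravel()
sL, sH = log_signals(tg, ta)

# Fit glandular thickness as a low-order polynomial in (sL, sH),
# mirroring the role of the standard calibration fitting function.
A = np.column_stack([np.ones_like(sL), sL, sH, sL**2, sH**2, sL * sH])
coef, *_ = np.linalg.lstsq(A, tg, rcond=None)

def estimate_glandular(s_low, s_high):
    a = np.array([1.0, s_low, s_high, s_low**2, s_high**2, s_low * s_high])
    return float(a @ coef)
```

The paper's modified fitting function would additionally carry the tube voltage as a third input to the calibration surface.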

  4. Lepidocrocite to Maghemite to Hematite: A way to have Magnetic and Hematitic Martian Soil

    NASA Technical Reports Server (NTRS)

    Morris, Richard V.; Golden, D. C.; Shelfer, Tad D.; Lauer, H. V., Jr.

    1997-01-01

    We examined the decomposition products of lepidocrocite, which were produced by heating the phase in air at temperatures up to 525 C for 3 and 300 hr, by XRD, TEM, magnetic methods, and reflectance spectroscopy (visible and near-IR). Single-crystal lepidocrocite particles dehydroxylated to polycrystalline particles of disordered maghemite, which subsequently transformed to polycrystalline particles of hematite. Essentially pure maghemite was obtained at 265 and 223 C for the 3 and 300 hr heating experiments, respectively. Its saturation magnetization (J(sub s)) and mass specific susceptibility are approximately 50 A(sq m)/kg and approximately 40 cubic micrometers/kg, respectively. Because hematite is spectrally dominant, spectrally-hematitic samples (i.e., characterized by a minimum near 860 nm and a maximum near 750 nm) could also be strongly magnetic (J(sub s) up to approximately 30 A(sq m)/kg) from the masked maghemite component. TEM analyses showed that individual particles are polycrystalline with respect to both maghemite and hematite. The spectrally-hematitic and magnetic Mh+Hm particles can satisfy the spectral and magnetic constraints for Martian surface materials over a wide range of values of Mh/(Mh+Hm) and as either pure oxide powders or (within limits) as components of multiphase particles. These experiments are consistent with lepidocrocite as the precursor of Mh+Hm assemblages on Mars, but other phases (e.g., magnetite) that decompose to Mh and Hm are also possible precursors. Simulations done with a copy of the Mars Pathfinder Magnet Array showed that spectrally hematitic Mh+Hm powders having J(sub s) equal to 20.6 A(sq m)/kg adhered to all five magnets.

  5. Application of spectral decomposition of ²²²Rn activity concentration signal series measured in Niedźwiedzia Cave to identification of mechanisms responsible for different time-period variations.

    PubMed

    Przylibski, Tadeusz Andrzej; Wyłomańska, Agnieszka; Zimroz, Radosław; Fijałkowska-Lichwa, Lidia

    2015-10-01

    The authors present an application of spectral decomposition of (222)Rn activity concentration signal series as a mathematical tool for distinguishing the processes that determine temporal changes of radon concentration in cave air. They demonstrate that decomposition of a monitored signal such as (222)Rn activity concentration facilitates characterizing the processes affecting changes in the measured concentration of this gas, making it possible to better correlate and characterize the influence of various processes on radon behaviour in cave air. Distinguishing and characterising these processes enables an understanding of radon behaviour in the cave environment and may also facilitate using radon as a precursor of geodynamic phenomena in the lithosphere. The conducted analyses confirmed the unquestionable influence of convective air exchange between the cave and the atmosphere on seasonal and short-term (diurnal) changes in (222)Rn activity concentration in cave air. The applied methodology of signal analysis and decomposition also identified a third process affecting (222)Rn activity concentration changes in cave air: a deterministic process causing changes in radon concentration, with a distribution different from the Gaussian one. The authors attribute these changes to turbulent air movements caused by the movement of visitors in caves, which is heterogeneous in terms of the number of visitors per group and the number of groups visiting a cave per day and per year. Such a process elucidates the observed character of the registered changes in one of the decomposed components of the analysed signal. The obtained results encourage further research into the precise relationships between the registered (222)Rn activity concentration changes and the factors causing them, as well as into using radon as a precursor of geodynamic phenomena in the lithosphere. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Scaling properties of the aerodynamic noise generated by low-speed fans

    NASA Astrophysics Data System (ADS)

    Canepa, Edward; Cattanei, Andrea; Mazzocut Zecchin, Fabio

    2017-11-01

    The spectral decomposition algorithm presented in the paper may be applied to selected parts of the SPL spectrum, i.e. to specific noise generating mechanisms. It yields the propagation and generation functions, as well as the Mach number scaling exponent associated with each mechanism as a function of the Strouhal number. The input data are SPL spectra obtained from measurements taken during speed ramps. First, the basic theory and the implemented algorithm are described. Then, the behaviour of the new method is analysed with reference to numerically generated spectral data, and the results are compared with those of an existing method based on the assumption that the scaling exponent is constant. Guidelines for the employment of both methods are provided. Finally, the method is applied to measurements taken on a cooling fan mounted on a test plenum designed following the ISO 10302 standard. The most common noise generating mechanisms are present, and attention is focused on the low-frequency part of the spectrum, where the mechanisms are superposed. Generally, both the propagation and generation functions are determined with better accuracy than the scaling exponent, whose values are usually consistent with expectations based on the coherence and compactness of the acoustic sources. For periodic noise, the computed exponent is less accurate, as the related SPL data set usually has a limited size. The scaling exponent is very sensitive to the details of the experimental data, e.g. to slight inconsistencies or random errors.

  7. Separation of spatial-temporal patterns ('climatic modes') by combined analysis of really measured and generated numerically vector time series

    NASA Astrophysics Data System (ADS)

    Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.

    2013-12-01

    A new method of decomposition of the Earth's climate system into well-separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) generalization of MSSA (Multichannel Singular Spectral Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations of the processes recorded at spatially separated points; (ii) expanding both real SST data and numerically generated SST data several times longer in the STEOF basis; (iii) use of the numerically produced STEOF basis for exclusion of 'too slow' (and thus not correctly represented) processes from the real data. By means of vector time series generated numerically by the INM RAS Coupled Climate Model [2], the method separates from real SST anomaly data [3] two climatic modes possessing noticeably different time scales: 3-5 and 9-11 years. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic patterns concept to prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
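
    The embedding that MSSA generalizes to vector series can be illustrated with single-channel SSA: build a trajectory (Hankel) matrix of lagged windows, take its SVD, and reconstruct selected components by diagonal averaging. The window length and test signal below are illustrative assumptions.

```python
import numpy as np

def ssa_reconstruct(x, L, components):
    """Basic single-channel SSA: embed, SVD, reconstruct chosen components.

    x : 1-D time series; L : window length; components : indices of the
    singular triples to keep. MSSA stacks several channels the same way.
    """
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # L x K trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in components)
    # diagonal averaging (Hankelization) back to a time series
    out = np.zeros(N)
    counts = np.zeros(N)
    for j in range(K):
        out[j:j + L] += Xr[:, j]
        counts[j:j + L] += 1
    return out / counts

# A pure oscillation has a rank-2 trajectory matrix, so two components suffice
t = np.arange(200)
x = np.sin(2 * np.pi * t / 20.0)
recon = ssa_reconstruct(x, L=50, components=[0, 1])
```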

  8. A study of photothermal laser ablation of various polymers on microsecond time scales.

    PubMed

    Kappes, Ralf S; Schönfeld, Friedhelm; Li, Chen; Golriz, Ali A; Nagel, Matthias; Lippert, Thomas; Butt, Hans-Jürgen; Gutmann, Jochen S

    2014-01-01

    To analyze the photothermal ablation of polymers, we designed a temperature measurement setup based on spectral pyrometry. The setup allows acquisition of 2D temperature distributions with 1 μm spatial and 1 μs time resolution, and therefore determination of the center temperature of a laser heating process. Finite element simulations were used to verify and understand the heat conversion and heat flow in the process. With this setup, the photothermal ablation of polystyrene, poly(α-methylstyrene), a polyimide and a triazene polymer was investigated. The thermal stability, the glass transition temperature Tg and the viscosity above Tg governed the ablation process. Thermal decomposition for the applied laser pulse of about 10 μs started at temperatures similar to the onset of decomposition in thermogravimetry. Furthermore, for polystyrene and poly(α-methylstyrene), both with a Tg between room temperature and the decomposition temperature, ablation already occurred at temperatures well below the decomposition temperature, only 30-40 K above Tg. The mechanism was photomechanical, i.e. stress due to the thermal expansion of the polymer was responsible for ablation. Low molecular weight polymers showed differences in photomechanical ablation, corresponding to their lower Tg and lower viscosity above the glass transition. However, the difference in ablated volume was only significant at higher temperatures, in the temperature regime of thermal decomposition at quasi-equilibrium time scales.

  9. Eigenvector decomposition of full-spectrum x-ray computed tomography.

    PubMed

    Gonzales, Brian J; Lalush, David S

    2012-03-07

    Energy-discriminated x-ray computed tomography (CT) data were projected onto a set of basis functions to suppress the noise in filtered back-projection (FBP) reconstructions. The x-ray CT data were acquired using a novel x-ray system that incorporated a single-pixel photon-counting x-ray detector to measure the x-ray spectrum for each projection ray. A matrix of the spectral responses of different materials was decomposed using eigenvalue decomposition to form the basis functions. Projection of the FBP reconstruction onto the basis functions created a de facto image segmentation of multiple contrast agents. Final reconstructions showed significant noise suppression while preserving important energy-axis data. The noise suppression was demonstrated by a marked improvement in the signal-to-noise ratio (SNR) along the energy axis for multiple regions of interest in the reconstructed images. Basis functions used on a more coarsely sampled energy axis still showed an improved SNR. We conclude that the noise-resolution trade-off along the energy axis was significantly improved using the eigenvalue decomposition basis functions.
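
    The basis-function idea can be sketched in a few lines: eigendecompose a Gram matrix built from material spectral responses, keep the leading eigenvectors as basis functions, and project measured spectra onto them to suppress noise orthogonal to the material subspace. The material response curves below are invented for illustration, not the paper's measured responses.

```python
import numpy as np

rng = np.random.default_rng(1)
energies = np.linspace(20, 120, 64)          # keV bins (hypothetical)

# Hypothetical smooth spectral responses of three materials
responses = np.stack([
    np.exp(-energies / 40.0),
    np.exp(-energies / 80.0),
    1.0 / energies,
])

# Eigendecomposition of the material response Gram matrix gives an
# orthonormal basis spanning the material responses (rank 3 here).
M = responses.T @ responses                   # (64, 64)
vals, vecs = np.linalg.eigh(M)
basis = vecs[:, -3:]                          # eigenvectors of the 3 largest eigenvalues

def denoise(spectrum):
    """Project a measured spectrum onto the material basis, suppressing noise."""
    return basis @ (basis.T @ spectrum)

truth = 2.0 * responses[0] + 0.5 * responses[2]
noisy = truth + rng.normal(0.0, 0.01, size=truth.shape)
clean = denoise(noisy)
```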

  10. A New Approach of evaluating the damage in simply-supported reinforced concrete beam by Local mean decomposition (LMD)

    NASA Astrophysics Data System (ADS)

    Zhang, Xuebing; Liu, Ning; Xi, Jiaxin; Zhang, Yunqi; Zhang, Wenchun; Yang, Peipei

    2017-08-01

    How to analyze nonstationary response signals and obtain vibration characteristics is extremely important in vibration-based structural diagnosis methods. In this work, we introduce a more reasonable time-frequency decomposition method termed local mean decomposition (LMD) to replace the widely used empirical mode decomposition (EMD). By employing the LMD method, one can derive a group of component signals, each of which is more stationary, and then analyze the vibration state and assess the structural damage of a construction or building. We illustrate the effectiveness of LMD with synthetic data and with experimental data recorded on a simply-supported reinforced concrete beam. Based on the decomposition results, an elementary method of damage diagnosis is proposed.

  11. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

    Wireless devices can be identified by a fingerprint extracted from the transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple-level wavelet decomposition. A statistic of each wavelet coefficient vector is used to construct the fingerprint. In addition, the relationship between wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and the recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
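
    A minimal fingerprinting sketch, assuming a hand-rolled orthonormal Haar transform in place of whatever wavelet family the authors used: decompose the extracted phase sequence over several levels and take a statistic of each coefficient vector as the fingerprint.

```python
import numpy as np

def haar_dwt(x):
    """One level of the orthonormal Haar wavelet transform."""
    x = np.asarray(x, dtype=float)
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def wavelet_fingerprint(phase, levels=4):
    """Multi-level Haar decomposition of a phase sequence.

    The fingerprint is a statistic (here: the standard deviation) of each
    detail vector plus the final approximation. Length of `phase` must be
    divisible by 2**levels.
    """
    coeffs = []
    a = phase
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)
    coeffs.append(a)
    return np.array([c.std() for c in coeffs])

# Fingerprint of a (hypothetical) extracted phase-noise sequence
rng = np.random.default_rng(7)
phase = rng.normal(size=256)
fp = wavelet_fingerprint(phase, levels=4)
```

Because the Haar transform is orthonormal, coefficient energy per level reflects how the phase noise power is distributed across scales, which is what distinguishes one transmitter from another.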

  12. ICER-3D: A Progressive Wavelet-Based Compressor for Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Kiely, A.; Klimesh, M.; Xie, H.; Aranki, N.

    2005-01-01

    ICER-3D is a progressive, wavelet-based compressor for hyperspectral images. ICER-3D is derived from the ICER image compressor. ICER-3D can provide lossless and lossy compression, and incorporates an error-containment scheme to limit the effects of data loss during transmission. The three-dimensional wavelet decomposition structure used by ICER-3D exploits correlations in all three dimensions of hyperspectral data sets, while facilitating elimination of spectral ringing artifacts. Correlation is further exploited by a context modeler that effectively exploits spectral dependencies in the wavelet-transformed hyperspectral data. Performance results illustrating the benefits of these features are presented.

  13. Spectral, coordination and thermal properties of 5-arylidene thiobarbituric acids

    NASA Astrophysics Data System (ADS)

    Masoud, Mamdouh S.; El-Marghany, Adel; Orabi, Adel; Ali, Alaa E.; Sayed, Reham

    2013-04-01

    The synthesis of 5-arylidene thiobarbituric acids containing different functional groups with variable electronic characters, and of their Co2+, Ni2+ and Cu2+ complexes, is described. The stereochemistry and mode of bonding of the 5-(substituted benzylidene)-2-TBA complexes were established based on elemental analysis, spectral (UV-VIS, IR, 1H NMR, MS), magnetic susceptibility and conductivity measurements. The ligands showed bidentate and tridentate bonding through the S, N and O atoms of the pyrimidine nucleus. All complexes were of octahedral configuration. The thermal data of the complexes pointed to their stability. The mechanism of the thermal decomposition is discussed, and the thermodynamic parameters of the dissociation steps were evaluated.

  14. Landsat analysis of tropical forest succession employing a terrain model

    NASA Technical Reports Server (NTRS)

    Barringer, T. H.; Robinson, V. B.; Coiner, J. C.; Bruce, R. C.

    1980-01-01

    Landsat multispectral scanner (MSS) data have yielded a dual classification of rain forest and shadow in an analysis of a semi-deciduous forest on Mindoro Island, Philippines. Both a spatial terrain model, using a fifth-order polynomial trend surface analysis to quantitatively estimate the general spatial variation in the data set, and a spectral terrain model, based on the MSS data, have been set up. A discriminant analysis using both sets of data has suggested that shadowing effects may be due primarily to local variations in the spectral regions and can therefore be compensated for through the decomposition of the spatial variation in both elevation and MSS data.

  15. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient in eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
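
    The coordinate transformation can be sketched directly with numpy's SVD: for constraints C q = 0, the right singular vectors with zero singular values give an orthonormal null-space basis N, and q = N z parametrizes the independent coordinates. The 4-DOF example matrices below are illustrative, not from the report.

```python
import numpy as np

def constraint_transformation(C, tol=1e-12):
    """Orthonormal basis N of the null space of constraint matrix C (C q = 0),
    computed via SVD. Generalized coordinates: q = N z."""
    U, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > tol * s.max())) if s.size else 0
    return Vt[rank:].T                      # columns span {q : C q = 0}

# Example: a 4-DOF system with one constraint q0 - q1 = 0
C = np.array([[1.0, -1.0, 0.0, 0.0]])
N = constraint_transformation(C)            # 4 x 3 basis

# Reduce mass/stiffness matrices to the independent coordinates
M = np.diag([2.0, 2.0, 1.0, 1.0])
K = np.array([[ 2.0, -1.0,  0.0,  0.0],
              [-1.0,  2.0, -1.0,  0.0],
              [ 0.0, -1.0,  2.0, -1.0],
              [ 0.0,  0.0, -1.0,  1.0]])
M_red = N.T @ M @ N
K_red = N.T @ K @ N
```

Because N is orthonormal, the reduction preserves symmetry of M and K, which Gaussian elimination of dependent coordinates does not automatically guarantee.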

  16. The atmospheric parameters of FGK stars using wavelet analysis of CORALIE spectra

    NASA Astrophysics Data System (ADS)

    Gill, S.; Maxted, P. F. L.; Smalley, B.

    2018-05-01

    Context. Atmospheric properties of F-, G- and K-type stars can be measured by spectral model fitting or by the analysis of equivalent width (EW) measurements. These methods require data with good signal-to-noise ratios (S/Ns) and reliable continuum normalisation. This is particularly challenging for the spectra we have obtained with the CORALIE échelle spectrograph for FGK stars with transiting M-dwarf companions. The spectra tend to have low S/Ns, which makes it difficult to analyse them using existing methods. Aims: Our aim is to create a reliable automated spectral analysis routine to determine Teff, [Fe/H], and V sini from the CORALIE spectra of FGK stars. Methods: We use wavelet decomposition to distinguish between noise, continuum trends, and stellar spectral features in the CORALIE spectra. A subset of wavelet coefficients from the target spectrum is compared to those from a grid of models in a Bayesian framework to determine the posterior probability distributions of the atmospheric parameters. Results: By testing our method on synthetic spectra, we found that it converges on the best-fitting atmospheric parameters. We tested the wavelet method on 20 FGK exoplanet host stars for which higher-quality data have been independently analysed using EW measurements. We find that we can determine Teff to a precision of 85 K, [Fe/H] to a precision of 0.06 dex and V sini to a precision of 1.35 km s-1 for stars with V sini ≥ 5 km s-1. We find an offset in metallicity of ≈ -0.18 dex relative to the EW fitting method. We can determine log g to a precision of 0.13 dex but find systematic trends with Teff. Measurements of log g are only reliable enough to confirm dwarf-like surface gravity (log g ≈ 4.5). Conclusions: The wavelet method can be used to determine Teff, [Fe/H], and V sini for FGK stars from CORALIE échelle spectra. Measurements of log g are unreliable but can confirm dwarf-like surface gravity.
We find that our method is self consistent, and robust for spectra with S/N ⪆ 40.

  17. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  18. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  19. A Novel Approach to Resonant Absorption of the Fast Magnetohydrodynamic Eigenmodes of a Coronal Arcade

    NASA Astrophysics Data System (ADS)

    Hindman, Bradley W.; Jain, Rekha

    2018-05-01

    The arched field lines forming coronal arcades are often observed to undulate as magnetohydrodynamic waves propagate both across and along the magnetic field. These waves are most likely a combination of resonantly coupled fast magnetoacoustic waves and Alfvén waves. The coupling results in resonant absorption of the fast waves, converting fast wave energy into Alfvén waves. The fast eigenmodes of the arcade have proven difficult to compute or derive analytically, largely because of the mathematical complexity that the coupling introduces. When a traditional spectral decomposition is employed, the discrete spectrum associated with the fast eigenmodes is often subsumed into the continuous Alfvén spectrum. Thus fast eigenmodes become collective modes or quasi-modes. Here we present a spectral decomposition that treats the eigenmodes as having real frequencies but complex wavenumbers. Using this procedure we derive dispersion relations, spatial damping rates, and eigenfunctions for the resonant, fast eigenmodes of the arcade. We demonstrate that resonant absorption introduces a fast mode that would not exist otherwise. This new mode is heavily damped by resonant absorption, travelling only a few wavelengths before losing most of its energy.

  20. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    PubMed

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.

  1. An efficient spectral crystal plasticity solver for GPU architectures

    NASA Astrophysics Data System (ADS)

    Malahe, Michael

    2018-03-01

    We present a spectral crystal plasticity (CP) solver for graphics processing unit (GPU) architectures that achieves a tenfold increase in efficiency over prior GPU solvers. The approach makes use of a database containing a spectral decomposition of CP simulations performed using a conventional iterative solver over a parameter space of crystal orientations and applied velocity gradients. The key improvements in efficiency come from reducing global memory transactions, exposing more instruction-level parallelism, reducing integer instructions and performing fast range reductions on trigonometric arguments. The scheme also makes more efficient use of memory than prior work, allowing for larger problems to be solved on a single GPU. We illustrate these improvements with a simulation of 390 million crystal grains on a consumer-grade GPU, which executes at a rate of 2.72 s per strain step.

  2. Synthesis, spectroscopic, biological activity and thermal characterization of ceftazidime with transition metals

    NASA Astrophysics Data System (ADS)

    Masoud, Mamdouh S.; Ali, Alaa E.; Elasala, Gehan S.; Kolkaila, Sherif A.

    2018-03-01

    The synthesis, physicochemical characterization and thermal analysis of ceftazidime complexes with transition metals (Cr(III), Mn(II), Fe(III), Co(II), Ni(II), Cu(II), Zn(II), Cd(II) and Hg(II)) are discussed. Ceftazidime was found to act as a bidentate ligand. From magnetic measurements and spectral data, octahedral structures were proposed for all complexes except those of cobalt, nickel and mercury, which had tetrahedral structures. The HyperChem program confirmed the binding sites of ceftazidime. The ceftazidime complexes show higher activity than ceftazidime itself against some strains. From the TG and DTA curves, thermal decomposition mechanisms of ceftazidime and its metal complexes were suggested. The thermal decomposition of the complexes ended with the formation of metal oxides as the final product, except in the case of the Hg complex.

  3. Improving the performance of computer color matching procedures.

    PubMed

    Karbasi, A; Moradian, S; Asiaban, S

    2008-09-01

    A premise was set up entailing the possibility of a synergistic combination of the advantages of spectrophotometric and colorimetric matching procedures. Attempts were therefore made to test the performance of fifteen matching procedures, all based on the Kubelka-Munk theory, including two procedures utilizing the fundamental color stimulus R(FCS) of the spectral decomposition theory. Color differences CIE DeltaE(00) as well as concentration differences DeltaC(AVE) were used to theoretically rank the fifteen color matching procedures. Results showed that procedures based on R(FCS) were superior in accurately predicting colors and concentrations. Additionally, the metameric black component R(MB) of the decomposition theory also showed promise in predicting degrees of metamerism. This preliminary study therefore provides evidence for the premise of this investigation.
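
    The Kubelka-Munk relation underlying all fifteen procedures is compact enough to sketch. The single-constant mixing rule shown here is one common variant, assumed for illustration; the paper's procedures differ in how they use this machinery, not in the K/S function itself.

```python
import numpy as np

def k_over_s(R):
    """Kubelka-Munk function: K/S = (1 - R)^2 / (2 R) for reflectance R in (0, 1]."""
    R = np.asarray(R, dtype=float)
    return (1.0 - R) ** 2 / (2.0 * R)

def mixture_reflectance(concentrations, ks_colorants, ks_substrate=0.0):
    """Single-constant K-M mixing: (K/S)_mix = (K/S)_sub + sum_i c_i (K/S)_i,
    then invert the K-M function back to reflectance."""
    ks = ks_substrate + np.tensordot(concentrations, ks_colorants, axes=1)
    # inverse of the K-M function: R = 1 + ks - sqrt(ks * (ks + 2))
    return 1.0 + ks - np.sqrt(ks * (ks + 2.0))
```

Matching then amounts to solving for the concentrations whose predicted reflectance (or its projection onto a color space) best matches the target.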

  4. Spectral analysis and slow spreading dynamics on complex networks.

    PubMed

    Odor, Géza

    2013-09-01

    The susceptible-infected-susceptible (SIS) model is one of the simplest memoryless systems for describing information or epidemic spreading phenomena with competing creation and spontaneous annihilation reactions. The effect of quenched disorder on the dynamical behavior has recently been compared to quenched mean-field (QMF) approximations in scale-free networks. QMF can take into account topological heterogeneity and clustering effects of the activity in the steady state by spectral decomposition analysis of the adjacency matrix. Therefore, it can provide predictions on possible rare-region effects, thus on the occurrence of slow dynamics. I compare QMF results of SIS with simulations on various large dimensional graphs. In particular, I show that for Erdős-Rényi graphs this method correctly predicts the occurrence of rare-region effects. It also provides a good estimate for the epidemic threshold in the case of percolating graphs. Griffiths Phases emerge if the graph is fragmented or if we apply a strong, exponentially suppressing weighting scheme on the edges. The latter model describes the connection-time distributions found in face-to-face experiments. In the case of a generalized Barabási-Albert-type network with aging connections, strong rare-region effects and numerical evidence for Griffiths Phase dynamics are shown. The dynamical simulation results agree well with the predictions of the spectral analysis applied to the weighted adjacency matrices.
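
    The QMF spectral estimate mentioned above places the SIS epidemic threshold at the inverse of the adjacency matrix's leading eigenvalue. A minimal sketch on an Erdős-Rényi graph (the graph size and edge probability are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
N, p = 200, 0.05                       # Erdős-Rényi graph G(N, p)
A = (rng.random((N, N)) < p).astype(float)
A = np.triu(A, 1)
A = A + A.T                            # symmetric adjacency, no self-loops

# Quenched mean-field (QMF) estimate: epidemic threshold lambda_c = 1 / Lambda_max,
# where Lambda_max is the leading eigenvalue of the adjacency matrix.
eigvals = np.linalg.eigvalsh(A)
lambda_max = eigvals[-1]
lambda_c = 1.0 / lambda_max

# For dense ER graphs Lambda_max is close to the mean degree p * N,
# so lambda_c is roughly 1 / (p * N).
```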

  5. Blind source separation of ex-vivo aorta tissue multispectral images

    PubMed Central

    Galeano, July; Perez, Sandra; Montoya, Yonatan; Botina, Deivid; Garzón, Johnson

    2015-01-01

    Blind Source Separation (BSS) methods aim at the decomposition of a given signal into its main components or source signals. These techniques have been widely used in the literature for the analysis of biomedical images, in order to extract the main components of an organ or tissue under study. The analysis of skin images for the extraction of melanin and hemoglobin is an example of the use of BSS. This paper presents a proof of concept of the use of source separation on ex-vivo aorta tissue multispectral images. The images are acquired with an interference filter-based imaging system and processed by means of two algorithms: Independent Component Analysis and Non-negative Matrix Factorization. In both cases, it is possible to obtain maps that quantify the concentration of the main chromophores present in aortic tissue. The algorithms also allow for extraction of the spectral absorbance of the main tissue components. These spectral signatures were compared against the theoretical ones using correlation coefficients, which report values close to 0.9, a good indicator of the method's performance. The correlation coefficients also lead to the identification of the concentration maps according to the evaluated chromophore. The results suggest that multi/hyper-spectral systems together with image processing techniques are a potential tool for the analysis of cardiovascular tissue. PMID:26137366
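
    As a rough illustration of the second algorithm, the sketch below factorizes synthetic non-negative "multispectral" data with plain Lee-Seung multiplicative updates. The spectra, concentrations, and dimensions are invented; the paper's acquisition pipeline and its ICA variant are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "multispectral" data: each pixel's absorbance is a non-negative
# mixture of two chromophore spectra (hypothetical spectral signatures).
n_bands, n_pixels, n_sources = 8, 500, 2
S_true = np.abs(rng.random((n_bands, n_sources)))   # spectra (bands x sources)
C_true = np.abs(rng.random((n_sources, n_pixels)))  # concentration maps
X = S_true @ C_true

# Non-negative Matrix Factorization by multiplicative updates (Lee & Seung):
# X ~= W H with W, H >= 0; W recovers spectra, H the concentration maps.
W = np.abs(rng.random((n_bands, n_sources)))
H = np.abs(rng.random((n_sources, n_pixels)))
eps = 1e-9                                           # guards against division by zero
for _ in range(1000):
    H *= (W.T @ X) / (W.T @ W @ H + eps)
    W *= (X @ H.T) / (W @ H @ H.T + eps)

rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```

    The columns of `W` would then be matched to theoretical chromophore spectra via correlation coefficients, as the abstract describes.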

  6. Measuring dry plant residues in grasslands: A case study using AVIRIS

    NASA Technical Reports Server (NTRS)

    Fitzgerald, Michael; Ustin, Susan L.

    1992-01-01

    Grasslands, savannah, and hardwood rangelands are critical ecosystems that are sensitive to disturbance. Grasslands cover approximately 20 percent of the Earth's surface and represent 3 million ha in California alone. Developing a methodology for estimating disturbance and the effects of cumulative impacts on grasslands and rangelands is needed to effectively monitor these ecosystems. Estimating the dry biomass residue remaining on rangelands at the end of the growing season provides a basis for evaluating the effectiveness of land management practices. The residual biomass is indicative of the grazing pressure, provides a measure of the system's capacity for nutrient cycling since it represents the maximum organic matter available for decomposition, and finally provides a measure of the erosion potential of the ecosystem. Remote sensing presents a possible method for measuring dry residue. However, current satellites have had limited application due to their coarse spatial scales (relative to the patch dynamics) and the insensitivity of their spectral coverage for resolving dry plant material. Several hypotheses for measuring the biochemical constituents of dry plant material, particularly cellulose and lignin, using high spectral resolution sensors were proposed. The use of the Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) to measure dry plant residues over an oak savannah on the eastern slopes of the Coast Range in central California was investigated, asking what spatial and spectral resolutions are needed to quantitatively measure dry plant biomass in this ecosystem.

  7. Corneal birefringence measured by spectrally resolved Mueller matrix ellipsometry and implications for non-invasive glucose monitoring

    PubMed Central

    Westphal, Peter; Kaltenbach, Johannes-Maria; Wicker, Kai

    2016-01-01

    A good understanding of corneal birefringence properties is essential for polarimetric glucose monitoring in the aqueous humor of the eye. Therefore, we have measured complete 16-element Mueller matrices of single-pass transitions through nine porcine corneas in-vitro, spectrally resolved in the range 300…1000 nm. These ellipsometric measurements have been performed at several angles of incidence at the apex and partially at the periphery of the corneas. The Mueller matrices have been decomposed into linear birefringence, circular birefringence (i.e. optical rotation), depolarization, and diattenuation. We found considerable circular birefringence, strongly increasing with decreasing wavelength, for most corneas. Furthermore, the decomposition revealed significant dependence of the linear retardance (in nm) on the wavelength below 500 nm. These findings suggest that uniaxial and biaxial crystals are insufficient models for a general description of corneal birefringence, especially in the blue and UV spectral ranges. The implications for spectral-polarimetric approaches to glucose monitoring in the eye (for diabetics) are discussed. PMID:27446644

  8. a Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data

    NASA Astrophysics Data System (ADS)

    Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.

    2018-04-01

    Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR. However, most target decomposition methods are based on fully polarimetric (quad-pol) data and seldom utilize dual-polarization data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. The method decomposes the data into two scattering contributions, surface and double bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed. The criterion, named the second-order averaged scattering angle, originates from the H/α decomposition, and we also put forward an alternative parameter to it. To validate the effectiveness of the proposed decomposition, Liaodong Bay is selected as the research area. The area is located in northeastern China, where various wetland resources grow and sea ice appears in winter. We use GF-3 quad-pol data as study data; GF-3 is China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An & Yang decomposition, Yamaguchi S4R decomposition) were investigated in the study. Through several aspects of the experimental discussion, we draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage, or with low vegetation in the non-growing season; the proposed decomposition features, using only co-polar data, are highly correlated with the corresponding comparison decomposition features obtained from quad-polarization data. Moreover, they could serve as input to subsequent classification or parameter inversion.

  9. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which were introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotation, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
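
    For orientation, a plain Givens-rotation QR factorization (the classical construction, not the heap-transform variant proposed in the paper) can be sketched as:

```python
import numpy as np

def givens_qr(A):
    """QR decomposition of a real matrix by Givens rotations.

    Each 2x2 rotation zeroes one subdiagonal entry of R; the accumulated
    product of rotations forms Q^T, so A = Q R with Q orthogonal and R
    upper triangular.
    """
    A = np.asarray(A, dtype=float)
    m, n = A.shape
    Q = np.eye(m)
    R = A.copy()
    for j in range(n):                       # walk columns left to right
        for i in range(m - 1, j, -1):        # zero entries below the diagonal
            a, b = R[i - 1, j], R[i, j]
            r = np.hypot(a, b)
            if r == 0.0:
                continue
            c, s = a / r, b / r
            G = np.array([[c, s], [-s, c]])  # rotation acting on rows i-1, i
            R[[i - 1, i], :] = G @ R[[i - 1, i], :]
            Q[[i - 1, i], :] = G @ Q[[i - 1, i], :]
    return Q.T, R

A = np.array([[6., 5., 0.],
              [5., 1., 4.],
              [0., 4., 3.]])
Q, R = givens_qr(A)
```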

  10. Conception of discrete systems decomposition algorithm using p-invariants and hypergraphs

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Ł.

    2016-09-01

    In this article, the author presents an idea for a decomposition algorithm for discrete systems described by Petri nets, using p-invariants. The decomposition process is significant from the point of view of discrete system design, because it allows the separation of smaller sequential parts. The proposed algorithm uses a modified Martinez-Silva method as well as the author's own selection algorithm. The developed method is a good complement to classical decomposition algorithms using graphs and hypergraphs.

  11. Assessment of a new method for the analysis of decomposition gases of polymers by a combining thermogravimetric solid-phase extraction and thermal desorption gas chromatography mass spectrometry.

    PubMed

    Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H

    2014-08-08

    For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) and mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper a new method is described to adsorb the decomposition products under controlled conditions in TGA on solid-phase extraction (SPE) material: twisters. Subsequently the twisters were analysed with thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison with the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. MO-FG-CAMPUS-IeP1-02: Dose Reduction in Contrast-Enhanced Digital Mammography Using a Photon-Counting Detector

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, S; Kang, S; Eom, J

    Purpose: Photon-counting detectors (PCDs) allow multi-energy X-ray imaging without additional exposures and spectral overlap. This capability results in improved accuracy of material decomposition for dual-energy X-ray imaging and reduced radiation dose. In this study, PCD-based contrast-enhanced dual-energy mammography (CEDM) was compared with conventional CEDM in terms of radiation dose, image quality and accuracy of material decomposition. Methods: A dual-energy model was designed by using the Beer-Lambert law and a rational inverse fitting function for decomposing materials from a polychromatic X-ray source. A cadmium zinc telluride (CZT)-based PCD, which has five energy thresholds, and iodine solutions included in a 3D half-cylindrical phantom, composed of 50% glandular and 50% adipose tissue, were simulated by using a Monte Carlo simulation tool. The low- and high-energy images were obtained in accordance with the clinical exposure conditions for conventional CEDM. Energy bins of 20-33 and 34-50 keV were defined from X-ray energy spectra simulated at 50 kVp with different dose levels for implementing the PCD-based CEDM. The dual-energy mammographic techniques were compared by means of absorbed dose, noise properties and normalized root-mean-square error (NRMSE). Results: Compared to conventional CEDM, the iodine solutions were clearly decomposed for the PCD-based CEDM. Although the radiation dose for the PCD-based CEDM was lower than that for conventional CEDM, the PCD-based CEDM improved the noise properties and the accuracy of decomposition images. Conclusion: This study demonstrates that PCD-based CEDM allows quantitative material decomposition and reduces radiation dose in comparison with conventional CEDM. Therefore, PCD-based CEDM is able to provide useful information for detecting breast tumors and enhancing diagnostic accuracy in mammography.
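
    The material decomposition step can be illustrated, under a strong monoenergetic simplification of the polychromatic Beer-Lambert model used in the paper, as a 2×2 linear solve; the attenuation coefficients and thicknesses below are hypothetical.

```python
import numpy as np

# Two-material decomposition from low/high-energy log-attenuation measurements,
# assuming monoenergetic beams (hypothetical coefficients, in 1/cm).
mu = np.array([[0.80, 25.0],     # low energy:  [tissue, iodine]
               [0.50,  8.0]])    # high energy: [tissue, iodine]

t_true = np.array([4.0, 0.02])   # true thicknesses: 4 cm tissue, 0.02 cm iodine
m = mu @ t_true                  # measured -ln(I/I0) at the two energies

t_est = np.linalg.solve(mu, m)   # decomposed material thicknesses
```

    A polychromatic source breaks the exact linearity, which is why the paper fits a rational inverse function instead of inverting a fixed matrix.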

  13. TEMPORAL SIGNATURES OF AIR QUALITY OBSERVATIONS AND MODEL OUTPUTS: DO TIME SERIES DECOMPOSITION METHODS CAPTURE RELEVANT TIME SCALES?

    EPA Science Inventory

    Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...

  14. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations

    NASA Astrophysics Data System (ADS)

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-01

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  15. Fourier transform wavefront control with adaptive prediction of the atmosphere.

    PubMed

    Poyneer, Lisa A; Macintosh, Bruce A; Véran, Jean-Pierre

    2007-09-01

    Predictive Fourier control is a temporal power spectral density-based adaptive method for adaptive optics that predicts the atmosphere under the assumption of frozen flow. The predictive controller is based on Kalman filtering and a Fourier decomposition of atmospheric turbulence using the Fourier transform reconstructor. It provides a stable way to compensate for arbitrary numbers of atmospheric layers. For each Fourier mode, efficient and accurate algorithms estimate the necessary atmospheric parameters from closed-loop telemetry and determine the predictive filter, adjusting as conditions change. This prediction improves atmospheric rejection, leading to significant improvements in system performance. For a 48 × 48 actuator system operating at 2 kHz, five-layer prediction for all modes is achievable in under 2 × 10⁹ floating-point operations/s.

  16. Light scattering microscopy measurements of single nuclei compared with GPU-accelerated FDTD simulations.

    PubMed

    Stark, Julian; Rothe, Thomas; Kieß, Steffen; Simon, Sven; Kienle, Alwin

    2016-04-07

    Single cell nuclei were investigated using two-dimensional angularly and spectrally resolved scattering microscopy. We show that even for a qualitative comparison of experimental and theoretical data, the standard Mie model of a homogeneous sphere proves to be insufficient. Hence, an accelerated finite-difference time-domain method using a graphics processor unit and domain decomposition was implemented to analyze the experimental scattering patterns. The measured cell nuclei were modeled as single spheres with randomly distributed spherical inclusions of different size and refractive index representing the nucleoli and clumps of chromatin. Taking into account the nuclear heterogeneity of a large number of inclusions yields a qualitative agreement between experimental and theoretical spectra and illustrates the impact of the nuclear micro- and nanostructure on the scattering patterns.

  17. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    PubMed

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  18. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  19. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    PubMed

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard for benchmarking detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method (SM), is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are separately analyzed. A nonparametric hypothesis test, based on Bootstrap resampling, is also used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding sensitivity, the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.
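
    The spectral method used alongside the EMD block estimates alternans power at 0.5 cycles/beat across aligned ST-T segments. A minimal sketch on synthetic beats follows; the amplitudes, noise level, and noise band are arbitrary choices, and the EMD denoising stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic aligned ST-T segments: 64 beats x 128 samples (mV), with an
# every-other-beat alternation of 35 uV on top of Gaussian noise (hypothetical).
n_beats, n_samples = 64, 128
alternans = 35e-3 * (-1.0) ** np.arange(n_beats)          # mV, alternating sign
beats = alternans[:, None] + 0.01 * rng.standard_normal((n_beats, n_samples))

# Spectral method: FFT along the beat axis; TWA shows up at 0.5 cycles/beat.
power = np.abs(np.fft.rfft(beats, axis=0)) ** 2
agg = power.mean(axis=1)                                  # aggregate over samples
twa_bin = n_beats // 2                                    # 0.5 cycles/beat (Nyquist)
noise_band = agg[twa_bin - 8: twa_bin - 1]                # nearby spectral noise
k_score = (agg[twa_bin] - noise_band.mean()) / noise_band.std()
```

    A detection is typically declared when the k-score exceeds a fixed threshold (often 3).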

  20. Effect of Copper Oxide, Titanium Dioxide, and Lithium Fluoride on the Thermal Behavior and Decomposition Kinetics of Ammonium Nitrate

    NASA Astrophysics Data System (ADS)

    Vargeese, Anuj A.; Mija, S. J.; Muralidharan, Krishnamurthi

    2014-07-01

    Ammonium nitrate (AN) is crystallized along with copper oxide, titanium dioxide, and lithium fluoride. Thermal kinetic constants for the decomposition reaction of the samples were calculated by model-free (Friedman's differential and Vyazovkin's nonlinear integral) and model-fitting (Coats-Redfern) methods. To determine the decomposition mechanisms, 12 solid-state mechanisms were tested using the Coats-Redfern method. The results of the Coats-Redfern method show that the decomposition mechanism for all samples is the contracting cylinder mechanism. The phase behavior of the obtained samples was evaluated by differential scanning calorimetry (DSC), and structural properties were determined by X-ray powder diffraction (XRPD). The results indicate that copper oxide modifies the phase transition behavior and can catalyze AN decomposition, whereas LiF inhibits AN decomposition, and TiO2 shows no influence on the rate of decomposition. Possible explanations for these results are discussed. Supplementary materials are available for this article. Go to the publisher's online edition of the Journal of Energetic Materials to view the free supplemental file.
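
    The Coats-Redfern fit for the contracting-cylinder model amounts to a straight-line regression of ln(g(α)/T²) against 1/T, whose slope gives the activation energy. In this sketch the kinetic constants are hypothetical and the conversion data are generated from the model itself, so the fit is exact by construction.

```python
import numpy as np

# Coats-Redfern linearization: ln(g(alpha)/T^2) = ln(A*R/(beta*E)) - E/(R*T);
# for the contracting-cylinder model (R2), g(alpha) = 1 - (1 - alpha)**0.5.
R_gas = 8.314                          # J/(mol K)
E_true, A_true = 120e3, 1e8            # hypothetical E (J/mol) and A (1/s)
beta = 10.0 / 60.0                     # heating rate: 10 K/min in K/s

T = np.linspace(500.0, 560.0, 40)      # temperature grid, K
# Synthetic conversion data consistent with the model (integral approximation):
g_true = (A_true * R_gas * T**2) / (beta * E_true) * np.exp(-E_true / (R_gas * T))
alpha = 1.0 - (1.0 - g_true) ** 2

# Analysis step: recompute g from "measured" alpha and fit the straight line.
g = 1.0 - np.sqrt(1.0 - alpha)
slope, intercept = np.polyfit(1.0 / T, np.log(g / T**2), 1)
E_est = -slope * R_gas                 # recovered activation energy
```

    In practice the same fit is repeated for each of the candidate g(α) mechanisms, and the one giving the best linearity is selected.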

  1. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-Plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; the K-means clustering method was observed to give better results than the ISO Data method. Using the algorithm developed for Radar Tools, the code for the decomposition and classification techniques was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was done by selecting regions of interest, and post-classification the overall accuracy was observed to be higher in the SDH-decomposed image, as SDH decomposition operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited for the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image; however, interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and interpretation than the Pauli decomposition; however, more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the scope of this work.

  2. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal are obtained by the GWO algorithm, using the maximum weighted kurtosis index as the objective function. Finally, fault features are extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
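
    The weighted kurtosis index combines the kurtosis index with the correlation coefficient between an IMF and the raw signal. One plausible form is their product; this weighting is an assumption here, since the paper's exact formula is not reproduced in the abstract.

```python
import numpy as np

def kurtosis(x):
    """Sample kurtosis (non-excess): E[(x - mu)^4] / sigma^4."""
    x = np.asarray(x, dtype=float)
    mu, sigma = x.mean(), x.std()
    return np.mean((x - mu) ** 4) / sigma ** 4

def weighted_kurtosis_index(imf, raw):
    """Kurtosis of the IMF weighted by its correlation with the raw signal
    (assumed product form of the index)."""
    rho = abs(np.corrcoef(imf, raw)[0, 1])
    return kurtosis(imf) * rho

# Toy demo: an impulsive component (as produced by a bearing fault) scores
# higher than a smooth sinusoid extracted from the same raw signal.
rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 2000)
impulses = np.zeros_like(t)
impulses[::200] = 1.0                                    # periodic impacts
raw = np.sin(2 * np.pi * 50 * t) + impulses + 0.05 * rng.standard_normal(t.size)

wki_impulsive = weighted_kurtosis_index(impulses, raw)
wki_smooth = weighted_kurtosis_index(np.sin(2 * np.pi * 50 * t), raw)
```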

  3. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. Copyright © 2013, The International Biometric Society.

  4. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  5. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach avoids direct decomposition of the correlation matrix: we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decompositions at low resolution. This procedure is followed by a 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the Kronecker product of these factors, which closely approximates the original matrix. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
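
    The separability idea behind the approach can be sketched with Gaussian 1-D correlation factors: for a separable correlation model, the eigendecomposition of the full matrix is the Kronecker product of the factor decompositions. The grid sizes and length scales below are arbitrary, and the paper's low-resolution EOF plus spline-interpolation steps are omitted.

```python
import numpy as np

def corr_1d(n, L):
    """Gaussian 1-D correlation matrix on an n-point grid with length scale L."""
    i = np.arange(n)
    return np.exp(-0.5 * ((i[:, None] - i[None, :]) / L) ** 2)

# Separable 3-D correlation model: C = Cx (x) Cy (x) Cz, so decomposing the
# (nx*ny*nz)-dimensional matrix reduces to three small decompositions.
nx, ny, nz = 6, 5, 4
Cx, Cy, Cz = corr_1d(nx, 2.0), corr_1d(ny, 2.0), corr_1d(nz, 1.5)
C = np.kron(np.kron(Cx, Cy), Cz)          # full 120 x 120 correlation matrix

# Eigendecompositions of the small factors combine into one for C:
wx, Vx = np.linalg.eigh(Cx)
wy, Vy = np.linalg.eigh(Cy)
wz, Vz = np.linalg.eigh(Cz)
V = np.kron(np.kron(Vx, Vy), Vz)          # eigenvectors of C
w = np.kron(np.kron(wx, wy), wz)          # eigenvalues of C
```

    Only the small factor decompositions are ever computed; the full 120 × 120 matrix is built here purely to check the identity.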

  6. Extraction of Rice Heavy Metal Stress Signal Features Based on Long Time Series Leaf Area Index Data Using Ensemble Empirical Mode Decomposition

    PubMed Central

    Liu, Xiangnan; Zhang, Biyao; Liu, Ming; Wu, Ling

    2017-01-01

    The use of remote sensing technology to diagnose heavy metal stress in crops is of great significance for environmental protection and food security. However, in natural farmland ecosystems various stressors can influence crop growth in similar ways, making heavy metal stress difficult to identify accurately; this remains an unresolved scientific problem and an active topic in agricultural remote sensing. This study proposes a method that uses Ensemble Empirical Mode Decomposition (EEMD) to obtain the heavy metal stress signal features on a long time scale. The method operates on the Leaf Area Index (LAI) simulated by the enhanced World Food Studies (WOFOST) model, assimilated with remotely sensed data. The following results were obtained: (i) EEMD was effective in extracting heavy metal stress signals by eliminating the intra-annual and annual components; (ii) LAIdf (the first derivative of the sum of the interannual component and the residual) can better reflect the stable feature responses to rice heavy metal stress. LAIdf showed stability, with an R2 of greater than 0.9 in three growing stages, and the stability was optimal in June. This study combines the spectral characteristics of the stress effect with its temporal characteristics, and confirms the potential of long-term remotely sensed data for improving the accuracy of crop heavy metal stress identification. PMID:28878147
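
    The role of the interannual component and its derivative (LAIdf) can be sketched on synthetic data. The sketch below substitutes a one-year moving average for EEMD (a deliberate simplification: EEMD would separate the intra-annual and annual IMFs adaptively), and the series, rates and sizes are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy multi-year LAI-like series: slow interannual trend + annual cycle
# + observation noise (all magnitudes invented).
years, per_year = 10, 36                 # e.g. ~10-day composites
t = np.arange(years * per_year)
lai = (3.0 + 0.003 * t
       + 0.8 * np.sin(2 * np.pi * t / per_year)
       + 0.05 * rng.normal(size=t.size))

# Stand-in for EEMD: a centred one-year moving average removes the
# annual and intra-annual components, leaving trend + residual.
kernel = np.ones(per_year) / per_year
interannual = np.convolve(lai, kernel, mode="valid")

# "LAIdf": first derivative of the interannual component.
lai_df = np.diff(interannual)
mean_slope = lai_df.mean()
```

On this toy series the recovered mean slope matches the injected interannual rate, which is the stable feature the abstract attributes to LAIdf.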

  7. Can sun-induced chlorophyll fluorescence track diurnal variations of GPP in an evergreen needle leaf forest?

    NASA Astrophysics Data System (ADS)

    Kim, J.; Ryu, Y.; Dechant, B.; Cho, S.; Kim, H. S.; Yang, K.

    2017-12-01

    The emerging technique of remotely sensed sun-induced fluorescence (SIF) has advanced our ability to estimate plant photosynthetic activity at regional and global scales. Continuous observations of SIF and gross primary productivity (GPP) at the canopy scale in evergreen needleleaf forests, however, have not yet been presented in the literature. Here, we report a time series of near-surface measurements of canopy-scale SIF, hyperspectral reflectance and GPP during the senescence period in an evergreen needleleaf forest in South Korea. Mean canopy height was 30 m, and a hyperspectrometer, connected to a single fiber and a rotating prism that measures bi-hemispheric irradiance, was installed 20 m above the canopy. SIF was retrieved in the spectral range 740-790 nm at a temporal resolution of 1 min. We tested different SIF retrieval methods, namely Fraunhofer line depth (FLD), the spectral fitting method (SFM) and singular vector decomposition (SVD), against GPP estimated by eddy covariance and against absorbed photosynthetically active radiation (APAR). We found that the SVD-retrieved SIF signal shows linear relationships with GPP (R2 = 0.63) and APAR (R2 = 0.52), while SFM- and FLD-retrieved SIF performed poorly. We suspect that the larger influence of atmospheric oxygen absorption between the sensor and the canopy might explain why the SFM and FLD methods showed poor results. Data collection will continue, and the relationships between SIF, GPP and APAR will be studied during the senescence period.
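
    A toy version of the SVD retrieval can illustrate why it works: singular vectors learned from fluorescence-free spectra carry the absorption-line structure, while the smooth fluorescence term fills the line in. Everything below is synthetic (one made-up absorption line at 760 nm, an assumed Gaussian SIF shape), not the instrument's data or the authors' retrieval code.

```python
import numpy as np

rng = np.random.default_rng(2)
wl = np.linspace(740, 790, 120)                      # wavelength grid, nm

# One absorption line on a smooth continuum (toy model); fluorescence-free
# training spectra vary only in their continuum level and slope.
line = 1 - 0.3 * np.exp(-0.5 * ((wl - 760) / 0.8) ** 2)
ab = rng.uniform(0.8, 1.2, size=(40, 2))
train = np.array([(a + b * (wl - 740) / 50) * line for a, b in ab])

# Leading singular vectors span the non-fluorescent variability.
_, _, Vt = np.linalg.svd(train, full_matrices=False)
basis = Vt[:2]

# Fluorescence is spectrally smooth, so it partially "fills in" the line.
sif_shape = np.exp(-0.5 * ((wl - 740) / 40) ** 2)

true_sif = 0.05
meas = 1.1 * line + true_sif * sif_shape + 1e-4 * rng.normal(size=wl.size)

# Retrieval: least-squares fit of [singular vectors, SIF shape].
A = np.vstack([basis, sif_shape]).T
coef, *_ = np.linalg.lstsq(A, meas, rcond=None)
retrieved_sif = coef[-1]
```

The fit separates the smooth fluorescence term from the line-bearing singular vectors, recovering the injected SIF amplitude.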

  8. Experimental validation of a structural damage detection method based on marginal Hilbert spectrum

    NASA Astrophysics Data System (ADS)

    Banerji, Srishti; Roy, Timir B.; Sabamehr, Ardalan; Bagchi, Ashutosh

    2017-04-01

    Structural Health Monitoring (SHM) using dynamic characteristics of structures is crucial for early damage detection. Damage detection can be performed by capturing and assessing structural responses. Instrumented structures are monitored by analyzing the responses recorded by deployed sensors in the form of signals. Signal processing is an important tool for the processing of the collected data to diagnose anomalies in structural behavior. The vibration signature of the structure varies with damage. In order to attain effective damage detection, it is important to preserve the non-linear and non-stationary features of real structural responses. Decomposition of the signals into Intrinsic Mode Functions (IMF) by Empirical Mode Decomposition (EMD) and application of the Hilbert-Huang Transform (HHT) capture the time-varying instantaneous properties of the structural response. The energy distribution among different vibration modes of the intact and damaged structure, depicted by the Marginal Hilbert Spectrum (MHS), reveals the location and severity of the damage. The present work investigates damage detection analytically and experimentally by employing the MHS. Testing this methodology on different damage scenarios of a frame structure resulted in accurate damage identification. The sensitivity of Hilbert Spectral Analysis (HSA) is assessed with varying frequencies and damage locations by calculating Damage Indices (DI) from the Hilbert spectrum curves of the undamaged and damaged structures.

  9. Methodology for fault detection in induction motors via sound and vibration signals

    NASA Astrophysics Data System (ADS)

    Delgado-Arredondo, Paulo Antonio; Morinigo-Sotelo, Daniel; Osornio-Rios, Roque Alfredo; Avina-Cervantes, Juan Gabriel; Rostro-Gonzalez, Horacio; Romero-Troncoso, Rene de Jesus

    2017-01-01

    Nowadays, timely maintenance of electric motors is vital to keep up the complex processes of industrial production. There are currently a variety of methodologies for fault diagnosis. Usually, the diagnosis is performed by analyzing current signals at steady-state motor operation or during a start-up transient. This method is known as motor current signature analysis, which identifies frequencies associated with faults in the frequency domain or through the time-frequency decomposition of the current signals. Fault identification may also be possible by analyzing acoustic sound and vibration signals, which is useful because sometimes these signals are the only information available. The contribution of this work is a methodology for detecting faults in induction motors in steady-state operation based on the analysis of acoustic sound and vibration signals. The proposed approach uses the Complete Ensemble Empirical Mode Decomposition for decomposing the signal into several intrinsic mode functions (IMFs). Subsequently, the frequency marginal of the Gabor representation is calculated to obtain the spectral content of each IMF in the frequency domain. This proposal provides good fault detectability results compared to other published works, in addition to identifying more frequencies associated with the faults. The faults diagnosed in this work are two broken rotor bars, mechanical unbalance and bearing defects.
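
    The "frequency marginal" step can be sketched with a plain short-time Fourier transform standing in for the CEEMD and Gabor stages (a simplification): square the time-frequency coefficients and sum over time, so persistent tones stand out in the marginal. The frequencies below are illustrative only, not real fault signatures.

```python
import numpy as np

fs = 1000.0                               # sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
rng = np.random.default_rng(3)
# Toy vibration signal: a dominant 60 Hz component, a weaker 157 Hz
# "fault" tone, and broadband noise.
x = (np.sin(2 * np.pi * 60 * t)
     + 0.3 * np.sin(2 * np.pi * 157 * t)
     + 0.2 * rng.normal(size=t.size))

# Short-time Fourier transform with a Hann window, then the frequency
# marginal: energy summed over all time frames.
win, hop = 256, 64
w = np.hanning(win)
frames = np.array([x[i:i + win] * w for i in range(0, x.size - win, hop)])
stft = np.fft.rfft(frames, axis=1)
marginal = (np.abs(stft) ** 2).sum(axis=0)
freqs = np.fft.rfftfreq(win, 1 / fs)

peak_hz = freqs[np.argmax(marginal)]      # strongest spectral component
```

The marginal concentrates the energy of steady tones regardless of when they occur, which is why it is a convenient domain for reading off fault-related frequencies.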

  10. Multispectral photoacoustic decomposition with localized regularization for detecting targeted contrast agent

    NASA Astrophysics Data System (ADS)

    Tavakoli, Behnoosh; Chen, Ying; Guo, Xiaoyu; Kang, Hyun Jae; Pomper, Martin; Boctor, Emad M.

    2015-03-01

    Targeted contrast agents can improve the sensitivity of imaging systems for cancer detection and treatment monitoring. In order to accurately detect contrast agent concentration from photoacoustic images, we developed a decomposition algorithm to separate the photoacoustic absorption spectrum into components from individual absorbers. In this study, we evaluated novel prostate-specific membrane antigen (PSMA) targeted agents for imaging prostate cancer. Three agents were synthesized by conjugating a PSMA-targeting urea with the optical dyes ICG, IRDye800CW and ATTO740, respectively. In our preliminary PA study, dyes were injected into a thin-walled plastic tube embedded in a water tank. The tube was illuminated with pulsed laser light using a tunable Q-switched Nd:YAG laser. The PA signal, along with B-mode ultrasound images, was detected with a diagnostic ultrasound probe in orthogonal mode. PA spectra of each dye at 0.5 to 20 μM concentrations were estimated using the maximum PA signal extracted from images obtained at illumination wavelengths of 700-850 nm. Subsequently, we developed a nonnegative linear least-squares optimization method with localized regularization to solve the spectral unmixing problem. The algorithm was tested by imaging mixtures of those dyes. The concentration of each dye was estimated with about 20% error on average from almost all mixtures, despite the small separation between the dyes' spectra.
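
    The nonnegative least-squares unmixing step can be sketched as follows. The solver is a simple projected-gradient stand-in for a production NNLS routine, the localized regularization is omitted, and the three "dye" spectra and concentrations are invented Gaussians, not measured PA spectra.

```python
import numpy as np

def nnls_pg(A, y, iters=5000, tol=1e-12):
    """Nonnegative least squares by projected gradient descent
    (a simple stand-in for a production NNLS solver)."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)
        x_new = np.maximum(x - g / L, 0.0)  # gradient step, then project to >= 0
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Toy absorption spectra of three "dyes" over 700-850 nm.
wl = np.linspace(700, 850, 31)
dyes = np.array([np.exp(-0.5 * ((wl - c) / 25) ** 2)
                 for c in (740, 780, 820)]).T      # columns = pure spectra

true_conc = np.array([2.0, 0.5, 1.0])              # e.g. uM (invented)
rng = np.random.default_rng(4)
mixture = dyes @ true_conc + 0.01 * rng.normal(size=wl.size)

est = nnls_pg(dyes, mixture)                       # unmixed concentrations
```

The nonnegativity constraint is what keeps overlapping spectra from producing physically meaningless negative concentrations.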

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, T; Dong, X; Petrongolo, M

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
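
    The contrast between direct matrix inversion and regularized iterative decomposition can be reproduced on a 1-D toy problem. The sketch below uses a made-up 2x2 mixing matrix and a scalar smoothness weight in place of the paper's variance-covariance penalty weighting (a deliberate simplification).

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy dual-energy model: a made-up 2x2 matrix mapping two material
# densities to (low kVp, high kVp) attenuation measurements.
M = np.array([[1.0, 0.8],
              [0.6, 1.0]])

n = 200                                          # 1-D "image" of n pixels
rho = np.zeros((n, 2))
rho[:, 0] = 1.0 + (np.arange(n) > 100) * 0.5     # material 1: step edge
rho[:, 1] = 0.5                                  # material 2: flat

meas = rho @ M.T + 0.05 * rng.normal(size=(n, 2))  # noisy DE measurements

# (a) Direct inversion: noise is amplified through M^{-1}.
direct = meas @ np.linalg.inv(M).T

# (b) Least-squares estimation with a smoothness penalty, solved by
# gradient descent on 0.5*||x M^T - y||^2 + 0.5*lam*sum (x[i+1]-x[i])^2.
lam, step = 2.0, 0.05
x = direct.copy()
for _ in range(2000):
    grad_fid = (x @ M.T - meas) @ M              # gradient of the data term
    grad_smooth = np.empty_like(x)               # gradient of the penalty
    grad_smooth[1:-1] = 2 * x[1:-1] - x[:-2] - x[2:]
    grad_smooth[0] = x[0] - x[1]
    grad_smooth[-1] = x[-1] - x[-2]
    x = x - step * (grad_fid + lam * grad_smooth)

noise_direct = direct[:90, 0].std()              # flat region, material 1
noise_reg = x[:90, 0].std()
```

Even this crude scalar penalty cuts the decomposed-image noise well below the direct inversion, which is the qualitative effect the abstract quantifies with its variance-covariance weighting.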

  12. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  13. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  14. Coordination modes, spectral, thermal and biological evaluation of hetero-metal copper containing 2-thiouracil complexes

    NASA Astrophysics Data System (ADS)

    Masoud, Mamdouh S.; Soayed, Amina A.; El-Husseiny, Amel F.

    2012-12-01

    Mononuclear copper complex [CuL(NH3)4]Cl2·0.5H2O and three new hetero-metallic complexes: [Cu2Ni(L)2(NH3)2Cl2·6H2O]·2H2O, [Cu3Co(L)4·8H2O]Cl·4.5H2O, and [Cu4Co2Ni(L)3(OH)4(NH3)Cl4·3H2O]·4H2O, where L is 2-thiouracil, were prepared and characterized by elemental analyses, molar conductance, room-temperature magnetic susceptibility, spectral (IR, UV-Vis and ESR) studies and thermal analysis techniques (TG, DTG and DTA). The molar conductance data revealed that [CuL(NH3)4]Cl2·0.5H2O and [Cu3Co(L)4·8H2O]Cl·4.5H2O are electrolytes, while [Cu2Ni(L)2(NH3)2Cl2·6H2O]·2H2O and [Cu4Co2Ni(L)3(OH)4(NH3)Cl4·3H2O]·4H2O are non-electrolytes. IR spectra showed that the 2-thiouracil ligand behaves as a bidentate or tetradentate ligand. The geometry around the metal atoms is octahedral in all the prepared complexes except in the [Cu4Co2Ni(L)3(OH)4(NH3)Cl4·3H2O]·4H2O complex, where a square planar environment around Co(II), Ni(II) and Cu(II) was suggested. Thermal decomposition of the prepared complexes was monitored by TG, DTG and DTA analyses under an N2 atmosphere, and the decomposition course and steps were analyzed. The order of chemical reactions (n) was calculated via the peak symmetry method, and the activation parameters of the non-isothermal decomposition were computed from the thermal decomposition data. The negative values of ΔS∗ indicate that the prepared complexes are more ordered than their starting reactants. The antimicrobial activity of the prepared complexes was screened in vitro against a Gram-positive bacterium, a Gram-negative bacterium, a filamentous fungus and a yeast. The antimicrobial screening data showed that the studied compounds exhibited a good level of activity against Escherichia coli, Staphylococcus aureus and Candida albicans but no efficacy against Aspergillus flavus. The [Cu4Co2Ni(L)3(OH)4(NH3)Cl4·3H2O]·4H2O complex showed the most intensive activity against the tested microorganisms. Attempts to prepare single crystals of the complexes failed.

  15. A novel ECG data compression method based on adaptive Fourier decomposition

    NASA Astrophysics Data System (ADS)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.
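
    The two figures of merit quoted above, CR and PRD, are easy to state in code. A minimal sketch with the standard definitions follows; the "ECG" below is synthetic, and 11 bits per sample is only an assumed raw precision, not a value from the paper.

```python
import numpy as np

def prd(original, reconstructed):
    """Percentage root-mean-square difference (PRD), in percent."""
    num = np.sum((original - reconstructed) ** 2)
    den = np.sum(original ** 2)
    return 100.0 * np.sqrt(num / den)

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the raw record over the size of the compressed one."""
    return original_bits / compressed_bits

# Synthetic "ECG-like" test: a slow wave plus a narrow spike, and a
# reconstruction contaminated by a small high-frequency error.
t = np.linspace(0.0, 1.0, 360)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.exp(-((t - 0.5) / 0.01) ** 2)
recon = ecg + 0.01 * np.sin(2 * np.pi * 50 * t)

p = prd(ecg, recon)
cr = compression_ratio(360 * 11, 360 * 11 / 35.53)  # 11 bits/sample assumed
```

A compressor is judged by its PRD-CR trade-off: at a fixed CR, lower PRD means higher-fidelity reconstruction.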

  16. FIVE YEARS OF SYNTHESIS OF SOLAR SPECTRAL IRRADIANCE FROM SDID/SISA AND SDO/AIA IMAGES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fontenla, J. M.; Codrescu, M.; Fedrizzi, M.

    In this paper we describe the synthetic solar spectral irradiance (SSI) calculated from 2010 to 2015 using data from the Atmospheric Imaging Assembly (AIA) instrument on board the Solar Dynamics Observatory spacecraft. We used the algorithms for solar disk image decomposition (SDID) and the spectral irradiance synthesis algorithm (SISA) that we had developed over several years. The SDID algorithm decomposes the images of the solar disk into areas occupied by nine types of chromospheric and five types of coronal physical structures. With this decomposition and a set of pre-computed angle-dependent spectra for each of the features, the SISA algorithm is used to calculate the SSI. We discuss the application of the basic SDID/SISA algorithm to a subset of the AIA images and the observed variation, over the 2010–2015 period, of the relative areas of the solar disk covered by the various solar surface features. Our results consist of the SSI and total solar irradiance variations over the 2010–2015 period. The SSI results cover the soft X-ray, ultraviolet, visible, infrared, and far-infrared spectral ranges and can be used for studies of the solar radiative forcing of the Earth’s atmosphere. These SSI estimates were used to drive a thermosphere–ionosphere physical simulation model. Predictions of neutral mass density at low Earth orbit altitudes in the thermosphere and peak plasma densities at mid-latitudes are in reasonable agreement with the observations. The correlation between the simulation results and the observations was consistently better when fluxes computed by the SDID/SISA procedures were used.

  17. Towards a non-invasive quantitative analysis of the organic components in museum objects varnishes by vibrational spectroscopies: methodological approach.

    PubMed

    Daher, Céline; Pimenta, Vanessa; Bellot-Gurlet, Ludovic

    2014-11-01

    The compositions of ancient varnishes are mainly determined destructively by separation methods coupled to mass spectrometry. In this study, a methodology for non-invasive quantitative analyses of varnishes by vibrational spectroscopies is proposed. For that purpose, experimental simplified varnishes of colophony and linseed oil were prepared according to 18th century traditional recipes with an increasing colophony/linseed oil mass concentration ratio. FT-Raman and IR analyses using ATR and non-invasive reflectance modes were done on the "pure" materials and on the different mixtures. Then, a new approach involving a spectral decomposition calculation was developed, considering each mixture spectrum as a linear combination of the pure-material spectra and giving a relative amount of each component. Specific spectral regions were treated, and the obtained results show good agreement between the prepared and calculated amounts of the two compounds. We were thus able to detect and quantify from 10% to 50% of colophony in linseed oil using non-invasive techniques that can also be conducted in situ with portable instruments when it comes to museum varnished objects and artifacts. Copyright © 2014 Elsevier B.V. All rights reserved.
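
    The spectral-decomposition calculation described above, a mixture spectrum expressed as a linear combination of the pure-material spectra, can be sketched with ordinary least squares. Band positions and the 30/70 mixture below are invented for illustration, not real colophony or linseed-oil bands.

```python
import numpy as np

# Toy "pure component" spectra on a common wavenumber grid; the band
# centers, widths and heights are illustrative only.
x = np.linspace(600, 1800, 300)
def band(center, width, height):
    return height * np.exp(-0.5 * ((x - center) / width) ** 2)

colophony = band(1690, 30, 1.0) + band(1240, 40, 0.6)
oil = band(1745, 20, 1.0) + band(1160, 35, 0.8)

# A "prepared" mixture: 30% colophony, 70% oil, plus measurement noise.
rng = np.random.default_rng(6)
mix = 0.3 * colophony + 0.7 * oil + 0.005 * rng.normal(size=x.size)

# Spectral decomposition: fit the mixture as a linear combination of the
# pure spectra and read off the relative amounts.
A = np.column_stack([colophony, oil])
coef, *_ = np.linalg.lstsq(A, mix, rcond=None)
fractions = coef / coef.sum()
```

Restricting the fit to spectral regions where the components differ most, as the study does, improves the conditioning of this linear system.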

  18. Exploiting spectral content for image segmentation in GPR data

    NASA Astrophysics Data System (ADS)

    Wang, Patrick K.; Morton, Kenneth D., Jr.; Collins, Leslie M.; Torrione, Peter A.

    2011-06-01

    Ground-penetrating radar (GPR) sensors provide an effective means for detecting changes in the sub-surface electrical properties of soils, such as changes indicative of landmines or other buried threats. However, most GPR-based pre-screening algorithms only localize target responses along the surface of the earth, and do not provide information regarding an object's position in depth. As a result, feature extraction algorithms are forced to process entire cubes of data around pre-screener alarms, which can reduce feature fidelity and hamper performance. In this work, spectral analysis is investigated as a method for locating subsurface anomalies in GPR data. In particular, a 2-D spatial/frequency decomposition is applied to pre-screener-flagged GPR B-scans. Analysis of these spatial/frequency regions suggests that aspects (e.g. moments, maxima, mode) of the frequency distribution of GPR energy can be indicative of the presence of target responses. After translating a GPR image to a function of the spatial/frequency distributions at each pixel, several image segmentation approaches can be applied in this new transformed feature space. To illustrate the efficacy of the approach, a performance comparison between feature processing with and without the image segmentation algorithm is provided.

  19. Complex mode indication function and its applications to spatial domain parameter estimation

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    This paper introduces the concept of the Complex Mode Indication Function (CMIF) and its application in spatial domain parameter estimation. The CMIF is developed by performing singular value decomposition (SVD) of the Frequency Response Function (FRF) matrix at each spectral line. The CMIF is defined as the eigenvalues, which are the squares of the singular values, solved from the normal matrix formed from the FRF matrix, [H(jω)]^H [H(jω)], at each spectral line. The CMIF appears to be a simple and efficient method for identifying the modes of a complex system. The CMIF identifies modes by showing the physical magnitude of each mode and the damped natural frequency for each root. Since multiple-reference data are applied in CMIF, repeated roots can be detected. The CMIF also gives global modal parameters, such as damped natural frequencies, mode shapes and modal participation vectors. Since CMIF works in the spatial domain, unevenly spaced frequency data, such as data from spatial sine testing, can be used. A second-stage procedure for accurate damped natural frequency and damping estimation, as well as mode shape scaling, is also discussed in this paper.
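
    The CMIF construction can be sketched directly: build an FRF matrix on a frequency grid, take the SVD at each spectral line, and track the squared singular values; peaks of the first curve sit at the (damped) natural frequencies. The 2-DOF system below (natural frequencies, damping ratios, mode shapes) is entirely made up for illustration.

```python
import numpy as np

# Toy 2-DOF FRF matrix H(jw) assembled from two modes via a modal
# (residue / pole) summation; residues chosen arbitrarily.
freqs = np.linspace(1, 50, 500)                           # Hz
w = 2 * np.pi * freqs
modes = [(2 * np.pi * 12, 0.02), (2 * np.pi * 33, 0.03)]  # (wn, zeta)
phis = [np.array([1.0, 0.8]), np.array([1.0, -0.6])]      # mode shapes

H = np.zeros((freqs.size, 2, 2), dtype=complex)
for (wn, z), phi in zip(modes, phis):
    denom = wn ** 2 - w ** 2 + 2j * z * wn * w
    H += np.outer(phi, phi)[None, :, :] / denom[:, None, None]

# CMIF: squared singular values of H(jw) at each spectral line.
svals = np.linalg.svd(H, compute_uv=False)   # shape (n_freq, 2), descending
cmif1 = svals[:, 0] ** 2                     # first CMIF curve

peak = freqs[np.argmax(cmif1)]               # strongest resonance
```

With more references (columns) than one, a repeated root would show up as two singular-value curves peaking at the same frequency, which is the property the abstract highlights.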

  20. Pd (II) complexes of bidentate chalcone ligands: Synthesis, spectral, thermal, antitumor, antioxidant, antimicrobial, DFT and SAR studies

    NASA Astrophysics Data System (ADS)

    Gaber, Mohamed; Awad, Mohamed K.; Atlam, Faten M.

    2018-05-01

    The ligation behavior of two chalcone ligands, namely (E)-3-(4-chlorophenyl)-1-(pyridin-2-yl)prop-2-en-1-one (L1) and (E)-3-(4-methoxyphenyl)-1-(pyridin-2-yl)prop-2-en-1-one (L2), towards the Pd(II) ion is determined. The structures of the complexes are elucidated by elemental analysis, spectral methods (IR, electronic and NMR spectra) as well as conductance measurements and thermal analysis. The metal complexes exhibit a square planar geometrical arrangement. The kinetic and thermodynamic parameters for some selected decomposition steps have been calculated. The antimicrobial, antioxidant and anticancer activities of the chalcones and their Pd(II) complexes have been evaluated. Molecular orbital computations are performed using DFT at the B3LYP level with 6-31+G(d) and LANL2DZ basis sets to obtain results consistent with the experimental values. The calculations are performed to obtain the optimized molecular geometry, charge density distribution and extent of distortion from regular geometry. Thermodynamic parameters for the investigated compounds are also studied. The calculations confirm that the investigated complexes have square planar geometry, which is in good agreement with the experimental observations.

  1. Multiscale image fusion using the undecimated wavelet transform with spectral factorization and nonorthogonal filter banks.

    PubMed

    Ellmauthaler, Andreas; Pagliari, Carla L; da Silva, Eduardo A B

    2013-03-01

    Multiscale transforms are among the most popular techniques in the field of pixel-level image fusion. However, the fusion performance of these methods often deteriorates for images derived from different sensor modalities. In this paper, we demonstrate that for such images, results can be improved using a novel undecimated wavelet transform (UWT)-based fusion scheme, which splits the image decomposition process into two successive filtering operations using spectral factorization of the analysis filters. The actual fusion takes place after convolution with the first filter pair. Its significantly smaller support size minimizes the unwanted spreading of coefficient values around overlapping image singularities; such spreading usually complicates the feature selection process and may introduce reconstruction errors into the fused image. Moreover, we show that the nonsubsampled nature of the UWT allows the design of nonorthogonal filter banks, which are more robust to artifacts introduced during fusion, additionally improving the obtained results. The combination of these techniques leads to a fusion framework which provides clear advantages over traditional multiscale fusion approaches, independent of the underlying fusion rule, and reduces unwanted side effects such as ringing artifacts in the fused reconstruction.

  2. Analysis of tonal noise generating mechanisms in low-speed axial-flow fans

    NASA Astrophysics Data System (ADS)

    Canepa, Edward; Cattanei, Andrea; Zecchin, Fabio Mazzocut

    2016-08-01

    The present paper reports a comparison of experimental SPL spectral data related to the tonal noise generated by axial-flow fans. A nine-blade rotor has been operated at free discharge conditions in four geometrical configurations in which different kinds of tonal noise generating mechanisms are present: large-scale inlet turbulent structures, tip-gap flow, turbulent wakes, and rotor-stator interaction. The measurements have been taken in a hemi-anechoic chamber at constant rotational speed and, in order to vary the acoustic source strength, during linear speed ramps of low angular acceleration. Since quantitative evaluations are erroneous if acoustic propagation effects are not considered, the acoustic response functions of the different test configurations have been computed by means of the spectral decomposition method. Then, the properties of the tonal noise generating mechanisms have been studied. To this aim, the constant-Strouhal-number SPL, obtained from measurements taken during the speed ramps, has been compared with the propagation function. Finally, the analysis of the phase of the acoustic pressure has made it possible to distinguish between random and deterministic tonal noise generating mechanisms and to collect information about the presence of important propagation effects.

  3. Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.

    PubMed

    Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby

    2018-02-06

    Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and their diel periodicity were compared across four sampling methods on neonate pigs. Sampling method, time of day and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions between sampling method and decomposition day, and between time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the sampling method must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective for obtaining the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles. © The Author(s) 2018. Published by Oxford University Press on behalf of Entomological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  4. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

    Two nonparametric methods for the prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select significant decomposed signals to be employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension, Statistical Empirical Mode Decomposition (SEMD). To show the performance of the proposed techniques, we analyze daily closing prices of the Kuala Lumpur stock market index.
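
    The Holt-Winters half of the proposed coupling can be sketched with its non-seasonal core, Holt's linear-trend exponential smoothing, applied to an extracted trend. The EMD step is replaced here by an already-smooth synthetic trend (an assumption for brevity), and the smoothing constants are arbitrary defaults, not fitted values.

```python
import numpy as np

def holt_linear(y, alpha=0.5, beta=0.3, horizon=5):
    """Holt's linear-trend exponential smoothing (the non-seasonal core of
    Holt-Winters), returning the in-sample fit and an h-step forecast."""
    level, trend = y[0], y[1] - y[0]
    fitted = [level]
    for obs in y[1:]:
        prev_level = level
        level = alpha * obs + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
        fitted.append(level)
    forecast = [level + (h + 1) * trend for h in range(horizon)]
    return np.array(fitted), np.array(forecast)

# Toy "extracted trend" (e.g. the sum of low-frequency IMFs): a ramp.
t = np.arange(60, dtype=float)
trend_signal = 100.0 + 0.5 * t

fit, fc = holt_linear(trend_signal, horizon=3)
```

On a perfectly linear trend the recursions lock onto the slope exactly, so the forecast continues the line; on a real EMD trend the smoothing constants trade responsiveness against noise rejection.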

  5. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two-level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.

  6. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    PubMed Central

    Dong, Ming; Ren, Ming; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268

  7. A non-orthogonal decomposition of flows into discrete events

    NASA Astrophysics Data System (ADS)

    Boxx, Isaac; Lewalle, Jacques

    1998-11-01

    This work is based on the formula for the inverse Hermitian wavelet transform. A signal can be interpreted as a (non-unique) superposition of near-singular, partially overlapping events arising from Dirac functions and/or their derivatives combined with diffusion. (No dynamics is implied: the dimensionless diffusion is related to the definition of the analyzing wavelets.) These events correspond to local maxima of spectral energy density. We successfully fitted model events of various orders on a succession of fields, ranging from elementary signals to one-dimensional hot-wire traces. We document edge effects, event overlap, and their implications for the algorithm. The interpretation of the discrete singularities as flow events (such as coherent structures) and the fundamental non-uniqueness of the decomposition are discussed. The dynamics of these events will be examined in the companion paper.

  8. High performance Python for direct numerical simulations of turbulent flows

    NASA Astrophysics Data System (ADS)

    Mortensen, Mikael; Langtangen, Hans Petter

    2016-06-01

    Direct Numerical Simulation (DNS) of the Navier-Stokes equations is an invaluable research tool in fluid dynamics. Still, there are few publicly available research codes and, due to the heavy number crunching implied, available codes are usually written in low-level languages such as C/C++ or Fortran. In this paper we describe a pure scientific Python pseudo-spectral DNS code that nearly matches the performance of C++ for thousands of processors and billions of unknowns. We also describe a version optimized through Cython that is found to match the speed of C++. The solvers are written from scratch in Python, including the mesh, the MPI domain decomposition, and the temporal integrators. The solvers have been verified and benchmarked on the Shaheen supercomputer at the KAUST supercomputing laboratory, and we are able to show very good scaling up to several thousand cores. A very important part of the implementation is the mesh decomposition (we implement both slab and pencil decompositions) and 3D parallel Fast Fourier Transforms (FFT). The mesh decomposition and FFT routines have been implemented in Python using serial FFT routines (either NumPy, pyFFTW or any other serial FFT module), NumPy array manipulations and with MPI communications handled by MPI for Python (mpi4py). We show how we are able to execute a 3D parallel FFT in Python for a slab mesh decomposition using 4 lines of compact Python code, for which the parallel performance on Shaheen is found to be slightly better than similar routines provided through the FFTW library. For a pencil mesh decomposition, seven lines of code are required to execute a transform.
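The slab decomposition described above can be emulated serially to show why it works: each "process" transforms its two locally complete axes, a redistribution step (the MPI all-to-all) makes the remaining axis local, and a final 1D FFT completes the transform. The sketch below is an illustrative NumPy-only stand-in, not the authors' mpi4py code.

```python
import numpy as np

def slab_fftn(a, nproc=4):
    """3D FFT via slab decomposition, emulated serially.

    Each 'process' owns a slab along axis 0 and transforms its two local
    axes; the split/concatenate along axis 1 stands in for the MPI
    all-to-all that makes axis 0 local for the final 1D FFT.
    """
    slabs = np.array_split(a, nproc, axis=0)
    b = np.concatenate([np.fft.fftn(s, axes=(1, 2)) for s in slabs], axis=0)
    pencils = np.array_split(b, nproc, axis=1)  # emulated redistribution
    return np.concatenate([np.fft.fft(p, axis=0) for p in pencils], axis=1)

rng = np.random.default_rng(0)
a = rng.standard_normal((16, 16, 16))
assert np.allclose(slab_fftn(a), np.fft.fftn(a))
```

In the parallel version, each loop iteration runs on a different MPI rank and the concatenations become collective communication, which is why so few lines of mpi4py code suffice.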

  9. TU-AB-BRC-03: Accurate Tissue Characterization for Monte Carlo Dose Calculation Using Dual- and Multi-Energy CT Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lalonde, A; Bouchard, H

    Purpose: To develop a general method for human tissue characterization with dual- and multi-energy CT and evaluate its performance in determining elemental compositions and the associated proton stopping power relative to water (SPR) and photon mass absorption coefficients (EAC). Methods: Principal component analysis is used to extract an optimal basis of virtual materials from a reference dataset of tissues. These principal components (PC) are used to perform two-material decomposition using simulated DECT data. The elemental mass fractions and the electron density in each tissue are retrieved by measuring the fraction of each PC. A stoichiometric calibration method is adapted to the technique to make it suitable for clinical use. The present approach is compared with two others: parametrization and three-material decomposition using the water-lipid-protein (WLP) triplet. Results: Monte Carlo simulations using TOPAS for four reference tissues show that characterizing them with only two PC is enough to obtain submillimetric precision in proton range prediction. Based on the simulated DECT data of 43 reference tissues, the proposed method is in agreement with theoretical values of proton SPR and low-kV EAC with RMS errors of 0.11% and 0.35%, respectively. In comparison, parametrization and WLP respectively yield RMS errors of 0.13% and 0.29% on SPR, and 2.72% and 2.19% on EAC. Furthermore, the proposed approach shows potential applications for spectral CT. Using five PC and five energy bins reduces the SPR RMS error to 0.03%. Conclusion: The proposed method shows good performance in determining elemental compositions from DECT data and physical quantities relevant to radiotherapy dose calculation and generally shows better accuracy and unbiased results compared to reference methods. The proposed method is particularly suitable for Monte Carlo calculations and shows promise in using more than two energies to characterize human tissue with CT.
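The principal-component step of such a method can be illustrated with a small sketch: extract two "virtual material" basis vectors from a reference set of compositions via SVD, then represent each tissue by its two PC fractions. The data below are synthetic and the dimensions are assumed for illustration; this is not the reference human-tissue dataset used by the authors.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic reference set: 43 'tissues' x 6 'elemental fractions', built so
# the data lie close to a 2-D subspace (two dominant virtual materials).
basis_true = rng.random((2, 6))
weights = rng.random((43, 2))
tissues = weights @ basis_true + 0.001 * rng.standard_normal((43, 6))

mean = tissues.mean(axis=0)
# SVD of the centered data: rows of vt are the principal components.
_, s, vt = np.linalg.svd(tissues - mean, full_matrices=False)
pc = vt[:2]                       # two virtual-material basis vectors

coeffs = (tissues - mean) @ pc.T  # two-material decomposition per tissue
recon = mean + coeffs @ pc        # tissue rebuilt from its 2 PC fractions
rms = np.sqrt(np.mean((recon - tissues) ** 2))
assert rms < 0.01                 # two components capture nearly all variation
```

The two coefficients per tissue play the role of the measured PC fractions from which elemental compositions and electron density are recovered.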

  10. Probing the fractal pattern and organization of Bacillus thuringiensis bacteria colonies growing under different conditions using quantitative spectral light scattering polarimetry

    NASA Astrophysics Data System (ADS)

    Banerjee, Paromita; Soni, Jalpa; Purwar, Harsh; Ghosh, Nirmalya; Sengupta, Tapas K.

    2013-03-01

    Development of methods for the quantification of cellular association and patterns in a growing bacterial colony is of considerable current interest, not only to help understand the multicellular behavior of a bacterial species but also to facilitate detection and identification of a bacterial species in a given space and under a given set of condition(s). We have explored quantitative spectral light scattering polarimetry for probing the morphological and structural changes taking place during colony formation of growing Bacillus thuringiensis bacteria under different conditions (in normal nutrient agar representing a favorable growth environment, in the presence of 1% glucose as an additional nutrient, and in the presence of 3 mM sodium arsenate as a toxic material). The method is based on the measurement of spectral 3×3 Mueller matrices (which involves linear polarization measurements alone) and their subsequent analysis via polar decomposition to extract the intrinsic polarization parameters. Moreover, the fractal micro-optical parameter, namely, the Hurst exponent H, is determined via fractal-Born approximation-based inverse analysis of the polarization-preserving component of the light scattering spectra. Interesting differences are noted in the derived values for the H parameter and the intrinsic polarization parameters (linear diattenuation d, linear retardance δ, and linear depolarization Δ coefficients) of the growing bacterial colonies under different conditions. The bacterial colony growing in the presence of 1% glucose exhibits the strongest fractality (lowest value of H), whereas that growing in the presence of 3 mM sodium arsenate showed the weakest fractality. Moreover, the values for the δ and d parameters are found to be considerably higher for the colony growing in the presence of glucose, indicating a more structured growth pattern. These findings are corroborated further with optical microscopic studies conducted on the same samples.

  11. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, this proposed model has improved the prediction accuracy and hit rates of directional prediction. The proposed model involves three main steps, i.e., decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) for simplifying the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; integrating all predicted IMFs for the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models for its higher prediction accuracy and hit rates of directional prediction.

  12. Dominant modal decomposition method

    NASA Astrophysics Data System (ADS)

    Dombovari, Zoltan

    2017-03-01

    The paper deals with the automatic decomposition of experimental frequency response functions (FRFs) of mechanical structures. The decomposition of FRFs is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and the sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors, and the modal scaling factors are identified using the decomposed FRFs. An analytical example is presented along with experimental case studies taken from the machine tool industry.

  13. A comparison of reduced-order modelling techniques for application in hyperthermia control and estimation.

    PubMed

    Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B

    1998-01-01

    Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.

  14. Descent theory for semiorthogonal decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elagin, Alexei D

    We put forward a method for constructing semiorthogonal decompositions of the derived category of G-equivariant sheaves on a variety X under the assumption that the derived category of sheaves on X admits a semiorthogonal decomposition with components preserved by the action of the group G on X. This method is used to obtain semiorthogonal decompositions of equivariant derived categories for projective bundles and blow-ups with a smooth centre as well as for varieties with a full exceptional collection preserved by the group action. Our main technical tool is descent theory for derived categories. Bibliography: 12 titles.

  15. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the features of applying an algorithm that uses decomposition methods to solve the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. Applying decomposition reduces the volume of calculations, in particular owing to the emerging possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. We analyze the results of computational experiments conducted using the decomposition approach. The experiments use a known data set for the binary classification problem.

  16. On the Possibility of Studying the Reactions of the Thermal Decomposition of Energy Substances by the Methods of High-Resolution Terahertz Spectroscopy

    NASA Astrophysics Data System (ADS)

    Vaks, V. L.; Domracheva, E. G.; Chernyaeva, M. B.; Pripolzin, S. I.; Revin, L. S.; Tretyakov, I. V.; Anfertyev, V. A.; Yablokov, A. A.; Lukyanenko, I. A.; Sheikov, Yu. V.

    2018-02-01

    We show prospects for using the method of high-resolution terahertz spectroscopy for a continuous analysis of the decomposition products of energy substances in the gas phase (including short-lived ones) in a wide temperature range. The experimental setup, which includes a terahertz spectrometer for studying the thermal decomposition reactions, is described. The results of analysis of the gaseous decomposition products of energy substances by the example of ammonium nitrate heated from room temperature to 167°C are presented.

  17. On Certain Theoretical Developments Underlying the Hilbert-Huang Transform

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Blank, Karin; Flatley, Thomas; Huang, Norden E.; Petrick, David; Hestness, Phyllis

    2006-01-01

    One of the main traditional tools used in scientific and engineering data spectral analysis is the Fourier Integral Transform and its high performance digital equivalent - the Fast Fourier Transform (FFT). Both carry strong a-priori assumptions about the source data, such as being linear and stationary, and of satisfying the Dirichlet conditions. A recent development at the National Aeronautics and Space Administration (NASA) Goddard Space Flight Center (GSFC), known as the Hilbert-Huang Transform (HHT), proposes a novel approach to the solution for the nonlinear class of spectral analysis problems. Using a-posteriori data processing based on the Empirical Mode Decomposition (EMD) sifting process (algorithm), followed by the normalized Hilbert Transform of the decomposed data, the HHT allows spectral analysis of nonlinear and nonstationary data. The EMD sifting process results in a non-constrained decomposition of a source real-value data vector into a finite set of Intrinsic Mode Functions (IMF). These functions form a nearly orthogonal, adaptive basis derived from the data. The IMFs can be further analyzed for spectrum content by using the classical Hilbert Transform. A new engineering spectral analysis tool using HHT has been developed at NASA GSFC, the HHT Data Processing System (HHT-DPS). As the HHT-DPS has been successfully used and commercialized, new applications pose additional questions about the theoretical basis behind the HHT and EMD algorithms. Why is the fastest changing component of a composite signal being sifted out first in the EMD sifting process? Why does the EMD sifting process seemingly converge and why does it converge rapidly? Does an IMF have a distinctive structure? Why are the IMFs nearly orthogonal? We address these questions and develop the initial theoretical background for the HHT.
This will contribute to the development of new HHT processing options, such as real-time and 2-D processing using Field Programmable Gate Array (FPGA) computational resources.
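The question of why the fastest component is sifted out first can be seen in a minimal sift sketch: the mean of the extrema envelopes tracks the slow content, so subtracting it leaves the fast oscillation. The code below uses SciPy cubic splines and crude endpoint handling; it is an illustration of one sifting step, not the full EMD algorithm with its stoppage criteria.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def sift_once(x, t):
    """One EMD sifting step: subtract the mean of the extrema envelopes."""
    up = np.where((x[1:-1] > x[:-2]) & (x[1:-1] > x[2:]))[0] + 1  # maxima
    lo = np.where((x[1:-1] < x[:-2]) & (x[1:-1] < x[2:]))[0] + 1  # minima
    up = np.r_[0, up, len(x) - 1]        # crude endpoint handling
    lo = np.r_[0, lo, len(x) - 1]
    mean_env = 0.5 * (CubicSpline(t[up], x[up])(t) +
                      CubicSpline(t[lo], x[lo])(t))
    return x - mean_env

t = np.linspace(0.0, 1.0, 2000)
fast = np.sin(2 * np.pi * 25 * t)        # fast component (extracted first)
slow = np.sin(2 * np.pi * 2 * t)         # slow component (left in the residue)
h = fast + slow
for _ in range(8):                        # repeated sifting extracts IMF1
    h = sift_once(h, t)

# Away from the edges, IMF1 should track the fast component closely.
core = slice(200, 1800)
corr = np.corrcoef(h[core], fast[core])[0, 1]
assert corr > 0.8
```

The extrema of the composite signal are set by the fastest oscillation, so the envelopes, and hence the subtracted mean, can only represent slower content; that is the intuitive answer to the first question above.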

  18. Application of the Hilbert-Huang Transform to Financial Data

    NASA Technical Reports Server (NTRS)

    Huang, Norden

    2005-01-01

    A paper discusses the application of the Hilbert-Huang transform (HHT) method to time-series financial-market data. The method was described, variously without and with the HHT name, in several prior NASA Tech Briefs articles and supporting documents. To recapitulate: The method is especially suitable for analyzing time-series data that represent nonstationary and nonlinear phenomena including physical phenomena and, in the present case, financial-market processes. The method involves the empirical mode decomposition (EMD), in which a complicated signal is decomposed into a finite number of functions, called "intrinsic mode functions" (IMFs), that admit well-behaved Hilbert transforms. The HHT consists of the combination of EMD and Hilbert spectral analysis. The local energies and the instantaneous frequencies derived from the IMFs through Hilbert transforms can be used to construct an energy-frequency-time distribution, denoted a Hilbert spectrum. The instant paper begins with a discussion of prior approaches to quantification of market volatility, summarizes the HHT method, then describes the application of the method in performing time-frequency analysis of mortgage-market data from the years 1972 through 2000. Filtering by use of the EMD is shown to be useful for quantifying market volatility.

  19. Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination

    NASA Technical Reports Server (NTRS)

    Ryne, Mark S.; Wang, Tseng-Chan

    1991-01-01

    An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
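The singular-value-decomposition update with a bounded ("partial step") correction can be sketched as follows. The scalar norm bound here is a simplified stand-in for the covariance-scaled, user-supplied bound described above; the matrices are made up for illustration.

```python
import numpy as np

def svd_partial_step(A, b, x0, max_step):
    """One least-squares correction via SVD, clipped to a maximum step norm."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Standard full-rank SVD solution of A dx = residual
    dx = Vt.T @ ((U.T @ (b - A @ x0)) / s)
    norm = np.linalg.norm(dx)
    if norm > max_step:               # partial step: shrink, keep direction
        dx *= max_step / norm
    return x0 + dx

rng = np.random.default_rng(2)
A = rng.standard_normal((8, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true

# Unconstrained: the linear problem is solved in one step (full-rank solution).
x = svd_partial_step(A, b, np.zeros(3), max_step=1e6)
assert np.allclose(x, x_true)

# Constrained: the correction norm never exceeds the bound.
x = svd_partial_step(A, b, np.zeros(3), max_step=0.5)
assert np.linalg.norm(x) <= 0.5 + 1e-12
```

In the nonlinear orbit-determination setting, the clipped update is iterated, which is what extends the convergence region while preserving the numerical stability of the SVD.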

  20. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
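In minimax (max-plus) algebra the outer product is additive, so a rank-1 template satisfies A[i, j] = u[i] + v[j], and a dilation by A can be done separably with the two vectors in sequence. The sketch below only verifies rank-1 recovery on a synthetic template; it illustrates the algebra, not the heuristic decomposition algorithm of the paper.

```python
import numpy as np

# Build an exactly rank-1 minimax template: A[i, j] = u[i] + v[j]
u = np.array([0.0, 2.0, 5.0, 1.0])
v = np.array([3.0, -1.0, 4.0])
A = u[:, None] + v[None, :]

# Recover a rank-1 factorization (unique up to an additive constant):
u_hat = A[:, 0]                # equals u[i] + v[0]
v_hat = A[0, :] - A[0, 0]      # equals v[j] - v[0]
assert np.array_equal(u_hat[:, None] + v_hat[None, :], A)
```

Separability is the practical payoff: dilating an image by the column vector and then by the row vector costs O(m + n) operations per pixel instead of O(mn) for the full template.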

  1. Characterization of photon-counting multislit breast tomosynthesis.

    PubMed

    Berggren, Karl; Cederström, Björn; Lundqvist, Mats; Fredenberg, Erik

    2018-02-01

    It has been shown that breast tomosynthesis may improve sensitivity and specificity compared to two-dimensional mammography, resulting in an increased detection rate of cancers or lowered call-back rates. The purpose of this study is to characterize a spectral photon-counting multislit breast tomosynthesis system that is able to do single-scan spectral imaging with multiple collimated x-ray beams. The system differs in many aspects from conventional tomosynthesis using energy-integrating flat-panel detectors. The investigated system was a prototype consisting of a dual-threshold photon-counting detector with 21 collimated line detectors scanning across the compressed breast. A review of the system is given in terms of the detector, acquisition geometry, and reconstruction methods. Three reconstruction methods were used: simple back-projection, filtered back-projection, and an iterative algebraic reconstruction technique. The image quality was evaluated by measuring the modulation transfer function (MTF), normalized noise power spectrum, detective quantum efficiency (DQE), and artifact spread function (ASF) on reconstructed spectral tomosynthesis images for a total-energy bin (defined by a low-energy threshold calibrated to remove electronic noise) and for a high-energy bin (with a threshold calibrated to split the spectrum into roughly equal parts). Acquisition was performed using a 29 kVp W/Al x-ray spectrum at a 0.24 mGy exposure. The difference in MTF between the two energy bins was negligible, that is, there was no energy dependence of resolution. The MTF dropped to 50% at 1.5 lp/mm to 2.3 lp/mm in the scan direction and 2.4 lp/mm to 3.3 lp/mm in the slit direction, depending on the reconstruction method. The full width at half maximum of the ASF was found to range from 13.8 mm to 18.0 mm for the different reconstruction methods. The zero-frequency DQE of the system was found to be 0.72.
The fraction of counts in the high-energy bin was measured to be 59% of the total detected spectrum. Scan times ranged from 4 s to 16.5 s depending on voltage and current settings. The characterized system generates spectral tomosynthesis images with a dual-energy photon-counting detector. Measurements show a high DQE, enabling high image quality at a low dose, which is beneficial for low-dose applications such as screening. The single-scan spectral images open up applications such as quantitative material decomposition and contrast-enhanced tomosynthesis. © 2017 American Association of Physicists in Medicine.

  2. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  3. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Through careful investigation of the earlier variational iteration algorithm and the Adomian decomposition method, we find unnecessary calculations of the Lagrange multiplier and repeated calculations involved in each iteration, respectively. Several examples are given to verify the reliability and efficiency of the method.
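The flavor of these successive-approximation schemes can be shown on the linear test problem y' = y, y(0) = 1, for which both the variational iteration and the Adomian series rebuild the exponential term by term. The numerical quadrature below is an illustrative stand-in for the symbolic integration used in such methods.

```python
import numpy as np

# Successive approximation y_{n+1}(t) = 1 + integral_0^t y_n(s) ds
# for y' = y, y(0) = 1; each sweep adds one term of the exponential series.
t = np.linspace(0.0, 1.0, 1001)
h = t[1] - t[0]
y = np.ones_like(t)                      # y_0 == 1
for _ in range(15):
    # cumulative trapezoidal integral of y from 0 to t
    integral = np.concatenate(([0.0], np.cumsum(0.5 * h * (y[1:] + y[:-1]))))
    y = 1.0 + integral

assert np.max(np.abs(y - np.exp(t))) < 1e-3   # converged to e^t on [0, 1]
```

Fifteen sweeps reproduce the Taylor series of e^t through the t^15 term; the remaining error is dominated by the trapezoidal quadrature, not the truncation.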

  4. TU-CD-207-02: Quantification of Breast Lesion Compositions Using Low-Dose Spectral Mammography: A Feasibility Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cho, H; Ding, H; Sennung, D

    2015-06-15

    Purpose: To investigate the feasibility of measuring breast lesion composition with spectral mammography using physical phantoms and bovine tissue. Methods: Phantom images were acquired with a spectral mammography system with a silicon-strip based photon-counting detector. Plastic water and adipose-equivalent phantoms were used to calibrate the system for dual-energy material decomposition. The calibration phantoms were constructed with thicknesses in the range of 2–8 cm and water densities in the range of 0% to 100%. A non-linear rational fitting function was used to calibrate the imaging system. The phantom studies were performed with a uniform background phantom and a non-uniform background phantom. The breast lesion phantoms (2 cm in diameter and 0.5 cm in thickness) were made with water densities ranging from 0% to 100%. The lesion phantoms were placed at different positions and depths on the phantoms to investigate the accuracy of the measurement under various conditions. The plastic water content of the lesion was measured by subtracting the total decomposed plastic water signal from a surrounding 2.5 mm thick border outside the lesion. In addition, bovine tissue samples composed of 80% lean were imaged as background for the simulated lesion phantoms. Results: The measured and known water-content thicknesses were compared. The root-mean-square (RMS) errors in the water thickness measurements were 0.01 cm for the uniform background phantom, 0.04 cm for the non-uniform background phantom, and 0.03 cm for the 80% lean bovine tissue background. Conclusion: The results indicate that the proposed technique using spectral mammography can be used to accurately characterize breast lesion compositions.
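In its simplest form, the dual-energy decomposition behind such a measurement reduces to a 2×2 linear solve: two log-attenuation measurements plus the known attenuation coefficients of the two basis materials at each energy give the two material thicknesses. All coefficient values below are made up for illustration; real systems use the non-linear calibrated fit described above.

```python
import numpy as np

# Assumed linear attenuation coefficients (1/cm) of the two basis materials
# ('water-equivalent', 'adipose-equivalent') at the low and high energy bins.
M = np.array([[0.40, 0.25],   # low energy:  [mu_water, mu_adipose]
              [0.25, 0.18]])  # high energy: [mu_water, mu_adipose]

t_true = np.array([0.5, 3.5])  # cm of water / adipose along the beam

# Forward model: log attenuation at each energy, -ln(I/I0) = M @ t
logs = M @ t_true

# Material decomposition: invert the 2x2 system
t_est = np.linalg.solve(M, logs)
assert np.allclose(t_est, t_true)
```

The lesion water content reported above corresponds to the first component of this solve, averaged over the lesion and referenced to a surrounding border region.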

  5. Artifact removal from EEG data with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we propose a novel method for dealing with physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on the analysis of EEG signals with empirical mode decomposition (the Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, identification of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering movement artifacts from experimental human EEG signals and show the high efficiency of the method.

  6. Time series analysis of ozone data in Isfahan

    NASA Astrophysics Data System (ADS)

    Omidvari, M.; Hassanzadeh, S.; Hosseinibalam, F.

    2008-07-01

    Time series analysis was used to investigate the stratospheric ozone formation and decomposition processes. Different time series methods were applied to detect the reason for extremely high ozone concentrations in each season. The data were converted into a seasonal component and into the frequency domain; the latter was evaluated using Fast Fourier Transform (FFT) spectral analysis. The power density spectrum estimated from the ozone data showed peaks at cycle durations of 22, 20, 36, 186, 365, and 40 days. According to the seasonal component analysis, the largest fluctuation was in 1999 and 2000, and the smallest was in 2003. The best correlation between ozone and solar radiation was found in 2000. Other variables, which are not available, may have caused the fluctuation in 1999 and 2001. The ozone trend is increasing in 1999 and decreasing in the other years.
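The FFT step described above amounts to locating peaks in the power spectrum and converting frequency indices back to cycle durations. A minimal sketch on a synthetic series with a known 365-day cycle (not the Isfahan ozone data):

```python
import numpy as np

n = 365 * 8                        # 8 years of synthetic daily data
days = np.arange(n)
rng = np.random.default_rng(3)
ozone = 10 + 3 * np.sin(2 * np.pi * days / 365) + rng.standard_normal(n)

# Power spectrum of the mean-removed series
spec = np.abs(np.fft.rfft(ozone - ozone.mean())) ** 2
freqs = np.fft.rfftfreq(n, d=1.0)  # cycles per day

peak = np.argmax(spec[1:]) + 1     # skip the zero-frequency bin
period = 1.0 / freqs[peak]         # dominant cycle duration in days
assert abs(period - 365.0) < 1.0
```

Secondary peaks would be read off the same way, giving the list of cycle durations reported in the abstract.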

  7. Elimination of numerical Cherenkov instability in flowing-plasma particle-in-cell simulations by using Galilean coordinates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lehe, Remi; Kirchen, Manuel; Godfrey, Brendan B.

    Particle-in-cell (PIC) simulations of relativistic flowing plasmas are of key interest to several fields of physics (including, e.g., laser-wakefield acceleration, when viewed in a Lorentz-boosted frame) but remain sometimes infeasible due to the well-known numerical Cherenkov instability (NCI). In this article, we show that, for a plasma drifting at a uniform relativistic velocity, the NCI can be eliminated by simply integrating the PIC equations in Galilean coordinates that follow the plasma (also sometimes known as comoving coordinates) within a spectral analytical framework. The elimination of the NCI is verified empirically and confirmed by a theoretical analysis of the instability. Moreover, it is shown that this method is applicable both to Cartesian geometry and to cylindrical geometry with azimuthal Fourier decomposition.


  9. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, making the algorithm suitable for separating a pure ECG signal from noise when the two overlap in frequency but differ in energy distribution. A stop criterion for the iterative decomposition process in the AFD is calculated from the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on synthetic ECG signals generated from an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive white Gaussian noise. Simulation results show that the proposed method performs better in denoising and QRS detection than major ECG denoising schemes based on the wavelet transform, the Stockwell transform, empirical mode decomposition, and ensemble empirical mode decomposition. Copyright © 2016 Elsevier Ltd. All rights reserved.
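The SNR-based stop criterion can be illustrated with a toy stand-in: a greedy Fourier decomposition (not the actual AFD basis; signal, noise level, and stop rule below are illustrative assumptions) that adds components in order of energy and stops once the residual energy falls to the noise energy implied by the estimated SNR.

```python
import numpy as np

def fourier_denoise_snr_stop(x, snr_db_est):
    """Greedy stand-in for AFD's energy-ordered decomposition.

    Keep adding the strongest spectral components until the residual
    energy drops to the noise energy implied by the estimated SNR.
    """
    n = len(x)
    X = np.fft.fft(x)
    total_energy = np.sum(np.abs(x) ** 2)
    noise_energy = total_energy / (1.0 + 10.0 ** (snr_db_est / 10.0))
    kept = np.zeros_like(X)
    for k in np.argsort(np.abs(X))[::-1]:          # strongest bins first
        kept[k] = X[k]
        captured = np.sum(np.abs(kept) ** 2) / n   # Parseval's theorem
        if total_energy - captured <= noise_energy:
            break
    return np.fft.ifft(kept).real

# toy check: a clean tone buried in white noise
rng = np.random.default_rng(0)
t = np.arange(1024) / 1024.0
clean = np.sin(2 * np.pi * 17 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
snr_db = 10 * np.log10(np.sum(clean ** 2) / np.sum((noisy - clean) ** 2))
denoised = fourier_denoise_snr_stop(noisy, snr_db)
err_before = np.mean((noisy - clean) ** 2)
err_after = np.mean((denoised - clean) ** 2)
```

In practice the SNR would itself be estimated from the noisy record, as in the abstract; here the true SNR is used only to keep the sketch short.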

  10. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual-energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability for material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratio, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated as a least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. The regularization term enforces image smoothness by calculating the squared sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition; these edge pixels are given small weights in the calculation of the regularization term. Distinct from existing denoising algorithms applied to the images before or after decomposition, the method uses an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method's performance is evaluated using an evaluation phantom (Catphan©600) and an anthropomorphic head phantom.
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied to the decomposed images, and an existing algorithm with a formulation similar to that of the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error than direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
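A minimal numerical sketch of the idea (toy 2×2 sensitivity matrix and phantom with invented numbers; uniform rather than covariance-based weighting, and a plain rather than edge-weighted smoothness penalty — not the authors' full algorithm): decomposing in the image domain with smoothness-regularized least squares, solved by conjugate gradients, suppresses the noise that direct matrix inversion amplifies.

```python
import numpy as np

rng = np.random.default_rng(1)
A = np.array([[1.0, 0.2],
              [0.3, 0.9]])            # hypothetical 2-channel sensitivities

nx = 32
true = np.zeros((2, nx, nx))
true[0, 8:24, 8:24] = 0.5             # material-1 insert
true[1] = 0.5                         # uniform material-2 background
meas = np.einsum('ij,jxy->ixy', A, true)
meas += 0.08 * rng.standard_normal(meas.shape)   # channel noise

# direct per-pixel matrix inversion (noise-amplifying)
direct = np.einsum('ij,jxy->ixy', np.linalg.inv(A), meas)

# regularized: minimize ||A x - m||^2 + lam ||grad x||^2 via conjugate
# gradients on the normal equations (matrix-free)
lam = 0.3

def lap(img):                          # periodic discrete Laplacian
    out = -4.0 * img
    for ax in (-2, -1):
        out += np.roll(img, 1, ax) + np.roll(img, -1, ax)
    return out

def normal_op(x):
    return np.einsum('ij,jxy->ixy', A.T @ A, x) - lam * lap(x)

b = np.einsum('ij,jxy->ixy', A.T, meas)
x = np.zeros_like(b)
r = b - normal_op(x)
p = r.copy()
for _ in range(300):
    rr = np.sum(r * r)
    if rr < 1e-12:
        break
    Ap = normal_op(p)
    alpha = rr / np.sum(p * Ap)
    x += alpha * p
    r -= alpha * Ap
    p = r + (np.sum(r * r) / rr) * p
reg = x

# compare noise in a flat region away from the insert's edges
flat = (slice(None), slice(0, 6), slice(0, 6))
noise_direct = np.std((direct - true)[flat])
noise_reg = np.std((reg - true)[flat])
```

The paper's covariance-weighted data term and edge-aware penalty weights would replace the uniform weighting and plain Laplacian used here.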

  11. Optimizing spectral CT parameters for material classification tasks

    NASA Astrophysics Data System (ADS)

    Rigie, D. S.; La Rivière, P. J.

    2016-06-01

    In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way, with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies.
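The rapidly computable metric at the heart of this approach is Hotelling-observer separability; a one-parameter toy sweep (hypothetical two-channel count means and a diagonal Poisson covariance — not the paper's object model) shows how a parameter optimization curve picks a setting:

```python
import numpy as np

def hotelling_snr(m1, m2, K):
    """Hotelling separability: sqrt((m1-m2)^T K^{-1} (m1-m2))."""
    d = np.asarray(m1, float) - np.asarray(m2, float)
    return float(np.sqrt(d @ np.linalg.solve(K, d)))

def channel_means(material, frac):
    # toy model: expected counts in two energy channels scale with the
    # fraction of dose given to each channel (numbers invented)
    mu = {"stone_A": (100.0, 60.0), "stone_B": (90.0, 75.0)}[material]
    return np.array([mu[0] * frac, mu[1] * (1.0 - frac)])

# sweep the dose split and record separability: a one-parameter POC
poc = []
for f in np.linspace(0.1, 0.9, 17):
    m1 = channel_means("stone_A", f)
    m2 = channel_means("stone_B", f)
    K = np.diag((m1 + m2) / 2.0)      # Poisson: covariance = mean counts
    poc.append((hotelling_snr(m1, m2, K), f))

best_snr, best_f = max(poc)
```

Because the metric is a single matrix solve per configuration, sweeping hundreds of system designs is cheap compared with full numerical simulation.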

  12. Optimizing Spectral CT Parameters for Material Classification Tasks

    PubMed Central

    Rigie, D. S.; La Rivière, P. J.

    2017-01-01

    In this work, we propose a framework for optimizing spectral CT imaging parameters and hardware design with regard to material classification tasks. Compared with conventional CT, many more parameters must be considered when designing spectral CT systems and protocols. These choices will impact material classification performance in a non-obvious, task-dependent way, with direct implications for radiation dose reduction. In light of this, we adapt Hotelling Observer formalisms typically applied to signal detection tasks to the spectral CT material-classification problem. The result is a rapidly computable metric that makes it possible to sweep out many system configurations, generating parameter optimization curves (POCs) that can be used to select optimal settings. The proposed model avoids restrictive assumptions about the basis-material decomposition (e.g. linearity) and incorporates signal uncertainty with a stochastic object model. This technique is demonstrated on dual-kVp and photon-counting systems for two different, clinically motivated material classification tasks (kidney stone classification and plaque removal). We show that the POCs predicted with the proposed analytic model agree well with those derived from computationally intensive numerical simulation studies. PMID:27227430

  13. Diverse power iteration embeddings: Theory and practice

    DOE PAGES

    Huang, Hao; Yoo, Shinjae; Yu, Dantong; ...

    2015-11-09

    Manifold learning, especially spectral embedding, is known as one of the most effective learning approaches for high-dimensional data, but for real-world applications constructing spectral embeddings of large datasets imposes a serious computational burden. To overcome this computational complexity, we propose a novel efficient embedding construction, Diverse Power Iteration Embedding (DPIE). DPIE shows almost the same effectiveness as spectral embeddings and yet is three orders of magnitude faster than spectral embeddings computed from eigen-decomposition. Our DPIE is unique in that (1) it finds linearly independent embeddings and thus shows diverse aspects of a dataset; (2) the proposed regularized DPIE is effective when many embeddings are needed; (3) we show how to efficiently orthogonalize DPIE if needed; and (4) the Diverse Power Iteration Value (DPIV) provides the importance of each DPIE, like an eigenvalue. These properties of DPIE and DPIV make our algorithm easy to apply to various applications, and we also show the effectiveness and efficiency of DPIE on clustering, anomaly detection, and feature selection as case studies.
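The gist of power-iteration embedding can be sketched in a few lines (a generic simplification, not the authors' DPIE/DPIV algorithm): early-stopped power iteration on the row-normalized affinity matrix yields an embedding, and Gram-Schmidt against earlier runs keeps only embeddings that add a new, linearly independent direction.

```python
import numpy as np

rng = np.random.default_rng(2)

# two well-separated Gaussian blobs
pts = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(4, 0.3, (30, 2))])
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2)                       # affinity matrix
P = W / W.sum(1, keepdims=True)       # row-normalized (random-walk) matrix

def power_embedding(P, steps=12):
    v = rng.standard_normal(P.shape[0])
    for _ in range(steps):
        v = P @ v
        v -= v.mean()                 # remove the trivial constant direction
        v /= np.linalg.norm(v)
    return v

embs = []
for _ in range(3):
    v = power_embedding(P)
    for u in embs:                    # Gram-Schmidt against earlier runs
        v -= (v @ u) * u
    n = np.linalg.norm(v)
    if n > 1e-6:                      # keep only a genuinely new direction
        embs.append(v / n)

# the first embedding separates the two blobs
e = embs[0]
sep = abs(e[:30].mean() - e[30:].mean())
```

Runs that converge to an already-found direction are dropped by the norm check, which is the rough analogue of DPIE's insistence on linearly independent embeddings.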

  14. A Robust Dynamic Heart-Rate Detection Algorithm Framework During Intense Physical Activities Using Photoplethysmographic Signals

    PubMed Central

    Song, Jiajia; Li, Dan; Ma, Xiaoyuan; Teng, Guowei; Wei, Jianming

    2017-01-01

    Dynamic accurate heart-rate (HR) estimation using a photoplethysmogram (PPG) during intense physical activities is always challenging due to corruption by motion artifacts (MAs). It is difficult to reconstruct a clean signal and extract HR from contaminated PPG. This paper proposes a robust HR-estimation algorithm framework that uses one-channel PPG and tri-axis acceleration data to reconstruct the PPG and calculate the HR based on features of the PPG and spectral analysis. Firstly, the signal is judged by the presence of MAs. Then, the spectral peaks corresponding to acceleration data are filtered from the periodogram of the PPG when MAs exist. Different signal-processing methods are applied based on the amount of remaining PPG spectral peaks. The main MA-removal algorithm (NFEEMD) includes the repeated single-notch filter and ensemble empirical mode decomposition. Finally, HR calibration is designed to ensure the accuracy of HR tracking. The NFEEMD algorithm was performed on the 23 datasets from the 2015 IEEE Signal Processing Cup Database. The average estimation errors were 1.12 BPM (12 training datasets), 2.63 BPM (10 testing datasets) and 1.87 BPM (all 23 datasets), respectively. The Pearson correlation was 0.992. The experiment results illustrate that the proposed algorithm is not only suitable for HR estimation during continuous activities, like slow running (13 training datasets), but also for intense physical activities with acceleration, like arm exercise (10 testing datasets). PMID:29068403

  15. Spectral Interpretation of Wave-vortex Duality in Northern South China Sea

    NASA Astrophysics Data System (ADS)

    Cao, H.; Jing, Z.; Yan, T.

    2017-12-01

    Mesoscale-to-submesoscale oceanic dynamics are characterized by the joint effect of vortex and wave components, which largely determines the partition between geostrophically balanced and unbalanced flows. The spectral method is a favorable approach that affords multi-scale analysis. This study investigates the characteristics of horizontal wavenumber spectra in the Northern South China Sea using orbital altimeter data (SARAL/AltiKa), 13-yr shipboard ADCP (Acoustic Doppler Current Profiler) measurements (2014-2016), and a high-resolution numerical simulation (LLC4320 MITgcm). The observed SSH (sea surface height) spectrum presents a conspicuous transition at scales of 50-100 km, which is clearly inconsistent with geostrophic balance. A Helmholtz decomposition separating the wave and vortex energy in the spectra of the ADCP and numerical model data shows that ageostrophic flows are responsible for the spectral discrepancy with QG (quasi-geostrophic) turbulence theory. Generally, it is found that inertia-gravity waves (including internal tides) account for the bulk of the kinetic energy in the submesoscale range in the Northern South China Sea. More specific analysis of the zonal velocity spectra at the left-center of Luzon Strait suggests that the wave kinetic energy can extend to scales of 500 km or more, apparently dominated by inertia-gravity waves likely emitted by the intrusion of the West Pacific at Luzon Strait. In contrast, the development of eddy kinetic energy there is strictly constrained by the width of the strait.

  16. Temporally flickering nanoparticles for compound cellular imaging and super resolution

    NASA Astrophysics Data System (ADS)

    Ilovitsh, Tali; Danan, Yossef; Meir, Rinat; Meiri, Amihai; Zalevsky, Zeev

    2016-03-01

    This work presents the use of flickering nanoparticles for imaging biological samples. The method has high noise immunity and enables the detection of overlapping types of GNPs at significantly sub-diffraction distances, making it attractive for super-resolving localization microscopy techniques. The method utilizes a lock-in technique in which the sample is imaged with laser beams time-modulated at as many distinct frequencies as there are types of gold nanoparticles (GNPs) labeling the sample, exciting temporal flickering of the scattered light at known frequencies. The final image, in which the GNPs are spatially separated, is obtained by post-processing that extracts the spectral components corresponding to the different modulation frequencies. This allows the simultaneous super-resolved imaging of multiple types of GNPs that label targets of interest within biological samples. Additionally, applying the K-factor image decomposition algorithm in post-processing can further improve the performance of the proposed approach.
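The lock-in idea can be demonstrated numerically (modulation rates, amplitudes, and noise level below are invented): each GNP type flickers at its own frequency, and demodulating against in-phase and quadrature references recovers each amplitude independently even in noise.

```python
import numpy as np

fs = 1000.0
t = np.arange(2000) / fs
f1, f2 = 37.0, 61.0        # hypothetical flicker rates of two GNP types
a1, a2 = 2.0, 0.7          # scattered amplitudes at one detector pixel
rng = np.random.default_rng(3)
sig = (a1 * np.cos(2 * np.pi * f1 * t)
       + a2 * np.cos(2 * np.pi * f2 * t)
       + 0.5 * rng.standard_normal(t.size))

def lock_in(sig, t, f):
    # demodulate against in-phase and quadrature references and average
    i = np.mean(sig * np.cos(2 * np.pi * f * t))
    q = np.mean(sig * np.sin(2 * np.pi * f * t))
    return 2.0 * np.hypot(i, q)     # recovered amplitude at frequency f

amp1 = lock_in(sig, t, f1)
amp2 = lock_in(sig, t, f2)
```

Because the two references are orthogonal over the averaging window, the recovered amplitudes do not cross-talk, which is what lets spatially overlapping GNP types be separated in the frequency domain.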

  17. Novel laser gain and time-resolved FTIR studies of photochemistry

    NASA Technical Reports Server (NTRS)

    Leone, Stephen R.

    1990-01-01

    Several techniques are discussed which can be used to explore laboratory photochemical processes and kinetics relevant to planetary atmospheres; these include time-resolved laser gain-versus-absorption spectroscopy and time-resolved Fourier transform infrared (FTIR) emission studies. The laser gain-versus-absorption method employed tunable diode and F-center lasers to determine the yields of excited photofragments and their kinetics. The time-resolved FTIR technique synchronizes the sweep of a commercial FTIR with a pulsed source of light to obtain emission spectra of novel transient species in the infrared. These methods are presently being employed to investigate molecular photodissociation, the yields of excited states of fragments, their subsequent reaction kinetics, Doppler velocity distributions, and velocity-changing collisions of translationally fast atoms. Such techniques may be employed in future investigations of planetary atmospheres, for example to study polycyclic aromatic hydrocarbons related to cometary emissions, to analyze acetylene decomposition products and reactions, and to determine spectral features in the near infrared and infrared wavelength regions for planetary molecules and clusters.

  18. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    PubMed

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside of the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes, protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocess functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of overreliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R²) rather than a term based on residual error.
The graphical method has application to the evaluation of other preprocess functions and various types of spectroscopy data.
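The preprocessing functions swept in the study are standard Savitzky-Golay filters; a minimal sketch using `scipy.signal.savgol_filter` on a synthetic NIR-like spectrum (band, baseline, and noise are all invented) shows smoothing and the two derivative orders:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(4)
x = np.linspace(1100, 2500, 700)                 # wavelength axis (nm)
band = np.exp(-0.5 * ((x - 1940) / 30.0) ** 2)   # absorption band
baseline = 0.2 + 1e-4 * (x - 1100)               # offset + sloped baseline
spectrum = band + baseline + 0.01 * rng.standard_normal(x.size)

smoothed = savgol_filter(spectrum, window_length=11, polyorder=2)
first_d = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=1)
second_d = savgol_filter(spectrum, window_length=11, polyorder=2, deriv=2)
# smoothing reduces noise; the second derivative suppresses the offset
# and linear baseline, at the cost of amplifying high-frequency noise
```

Sweeping `window_length` (the study's 5 to 25 convolution points) and `deriv` over calibrations of increasing factor count is exactly the grid the graphical method visualizes.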

  19. Kinetics and mechanism of solid decompositions — From basic discoveries by atomic absorption spectrometry and quadrupole mass spectroscopy to thorough thermogravimetric analysis

    NASA Astrophysics Data System (ADS)

    L'vov, Boris V.

    2008-02-01

    This paper sums up the evolution of the thermochemical approach to the interpretation of solid decompositions over the past 25 years. This period includes two stages related to decomposition studies by different techniques: by ET AAS and QMS in 1981-2001, and by TG in 2002-2007. As a result of the ET AAS and QMS investigations, a method for determining the absolute rates of solid decompositions was developed, and the mechanism of decomposition through congruent dissociative vaporization was discovered. On this basis, in the period from 1997 to 2001, the decomposition mechanisms of several classes of reactants were interpreted and some unusual effects observed in TA were explained. However, the thermochemical approach has not received support from other TA researchers. One potential reason for this distrust was the unreliability of the E values measured by the traditional Arrhenius plot method. A theoretical analysis and comparison of the metrological features of the different methods used to determine thermochemical quantities led to the conclusion that the third-law method is much to be preferred over the Arrhenius plot and second-law methods. However, the third-law method cannot be used in kinetic studies within the Arrhenius approach, because it requires measuring the equilibrium pressures of the decomposition products; the method of absolute rates, by contrast, is ideally suited for this purpose. As a result of the much higher precision of the third-law method, some quantitative conclusions that follow from the theory were confirmed, and several new effects, invisible in the framework of the Arrhenius approach, have been revealed. In spite of the great progress reached in developing a reliable methodology based on the third-law method, the thermochemical approach remains unclaimed as before.

  20. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  1. Three-way analysis of the UPLC-PDA dataset for the multicomponent quantitation of hydrochlorothiazide and olmesartan medoxomil in tablets by parallel factor analysis and three-way partial least squares.

    PubMed

    Dinç, Erdal; Ertekin, Zehra Ceren

    2016-01-01

    An application of parallel factor analysis (PARAFAC) and three-way partial least squares (3W-PLS1) regression models to ultra-performance liquid chromatography-photodiode array detection (UPLC-PDA) data with co-eluted peaks in the same wavelength and time regions is described for the multicomponent quantitation of hydrochlorothiazide (HCT) and olmesartan medoxomil (OLM) in tablets. A three-way dataset of HCT and OLM in their binary mixtures, containing telmisartan (IS) as an internal standard, was recorded with a UPLC-PDA instrument. Firstly, the PARAFAC algorithm was applied to decompose the three-way UPLC-PDA data into chromatographic, spectral and concentration profiles to quantify the compounds of interest. Secondly, the 3W-PLS1 approach was applied to decompose a tensor of three-way UPLC-PDA data into a set of triads to build a 3W-PLS1 regression for the analysis of the same compounds in samples. For both proposed three-way analysis methods, the applicability and validity of the PARAFAC and 3W-PLS1 models in the regression and prediction steps were checked by analyzing synthetic mixture samples, inter-day and intra-day samples, and standard-addition samples containing HCT and OLM. The two three-way analysis methods, PARAFAC and 3W-PLS1, were successfully applied to the quantitative estimation of the solid dosage form containing HCT and OLM. Regression and prediction results provided by the three-way analyses were compared with those obtained by a traditional UPLC method. Copyright © 2015 Elsevier B.V. All rights reserved.
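PARAFAC decomposes a three-way array into trilinear factors; a generic alternating-least-squares sketch with NumPy on a synthetic rank-2 tensor (not the study's software or data) illustrates the model T[i,j,k] ≈ Σ_r A[i,r]B[j,r]C[k,r]:

```python
import numpy as np

rng = np.random.default_rng(5)

def khatri_rao(X, Y):
    # column-wise Kronecker product: row (i*J + j) holds X[i]*Y[j]
    return np.einsum("ir,jr->ijr", X, Y).reshape(-1, X.shape[1])

def unfold(T, mode):
    return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1))

def parafac(T, rank, iters=200):
    """Alternating least squares for T[i,j,k] ~ sum_r A[i,r]B[j,r]C[k,r]."""
    A = rng.standard_normal((T.shape[0], rank))
    B = rng.standard_normal((T.shape[1], rank))
    C = rng.standard_normal((T.shape[2], rank))
    for _ in range(iters):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# synthetic rank-2 "elution x spectrum x sample" tensor
A0 = np.abs(rng.standard_normal((20, 2)))
B0 = np.abs(rng.standard_normal((15, 2)))
C0 = np.abs(rng.standard_normal((8, 2)))
T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)

A, B, C = parafac(T, rank=2)
T_hat = np.einsum("ir,jr,kr->ijk", A, B, C)
rel_err = np.linalg.norm(T - T_hat) / np.linalg.norm(T)
```

In the chromatographic setting the three recovered factor matrices play the roles of the elution, spectral, and concentration profiles named in the abstract.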

  2. Multivariate EMD and full spectrum based condition monitoring for rotating machinery

    NASA Astrophysics Data System (ADS)

    Zhao, Xiaomin; Patel, Tejas H.; Zuo, Ming J.

    2012-02-01

    Early assessment of machinery health condition is of paramount importance today. A sensor network with sensors in multiple directions and locations is usually employed for monitoring the condition of rotating machinery. Extraction of health condition information from these sensors for effective fault detection and fault tracking is always challenging. Empirical mode decomposition (EMD) is an advanced signal processing technology that has been widely used for this purpose. Standard EMD has the limitation in that it works only for a single real-valued signal. When dealing with data from multiple sensors and multiple health conditions, standard EMD faces two problems. First, because of the local and self-adaptive nature of standard EMD, the decomposition of signals from different sources may not match in either number or frequency content. Second, it may not be possible to express the joint information between different sensors. The present study proposes a method of extracting fault information by employing multivariate EMD and full spectrum. Multivariate EMD can overcome the limitations of standard EMD when dealing with data from multiple sources. It is used to extract the intrinsic mode functions (IMFs) embedded in raw multivariate signals. A criterion based on mutual information is proposed for selecting a sensitive IMF. A full spectral feature is then extracted from the selected fault-sensitive IMF to capture the joint information between signals measured from two orthogonal directions. The proposed method is first explained using simple simulated data, and then is tested for the condition monitoring of rotating machinery applications. The effectiveness of the proposed method is demonstrated through monitoring damage on the vane trailing edge of an impeller and rotor-stator rub in an experimental rotor rig.
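The full-spectrum step can be illustrated on its own (toy whirl signals, with the multivariate EMD and IMF-selection stages omitted): combining two orthogonal vibration channels into one complex signal puts forward and backward whirl on positive and negative frequencies, respectively.

```python
import numpy as np

fs = 1000.0
t = np.arange(4000) / fs
# toy signals measured in two orthogonal directions
x = np.cos(2 * np.pi * 25 * t) + 0.3 * np.cos(2 * np.pi * 40 * t)
y = np.sin(2 * np.pi * 25 * t) - 0.3 * np.sin(2 * np.pi * 40 * t)

z = x + 1j * y                                 # combine the directions
spec = np.fft.fftshift(np.fft.fft(z)) / t.size
freqs = np.fft.fftshift(np.fft.fftfreq(t.size, 1.0 / fs))

def amp_at(f):
    return np.abs(spec[np.argmin(np.abs(freqs - f))])

# the 25 Hz component appears only at +25 Hz (forward whirl) and the
# 40 Hz component only at -40 Hz (backward whirl); a single-channel
# spectrum cannot distinguish the two rotation senses
```

This asymmetry between positive and negative frequencies is the "joint information between signals measured from two orthogonal directions" that the full spectral feature captures.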

  3. Decomposition of metabolic network into functional modules based on the global connectivity structure of reaction graph.

    PubMed

    Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping

    2004-08-12

    Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principles of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods that are often hampered by the problem of combinatorial explosion due to the complexity of the metabolic network. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found to exist in the reaction graph as well. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than, primarily, on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli. Eleven subsets are obtained.
More detailed investigations of the subsets show that reactions in the same subset are indeed functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make it easier to understand the inherent organization and functionality of metabolic networks at the modular level. http://genome.gbf.de/bioinformatics/
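The grouping step (distance matrix → hierarchical tree → subsets) can be sketched with SciPy on a toy reaction-distance matrix (distances invented, standing in for path lengths between reactions):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# toy path-length distances between six "reactions": two tight groups
# (0-2 and 3-5) with large between-group distances
D = np.array([[0, 1, 1, 5, 6, 5],
              [1, 0, 2, 6, 5, 6],
              [1, 2, 0, 5, 6, 5],
              [5, 6, 5, 0, 1, 2],
              [6, 5, 6, 1, 0, 1],
              [5, 6, 5, 2, 1, 0]], dtype=float)

# build the hierarchical classification tree and cut it into two subsets
Z = linkage(squareform(D), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
```

In the paper this clustering is applied to the giant strong component of the reaction graph's bow-tie, after which the IN and OUT reactions are assigned by the majority rule.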

  4. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing the primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. It is also shown how Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Le, Huy Q.; Molloi, Sabee

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials.
With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine.
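The least-squares estimation at the core of these techniques reduces, per voxel, to solving an overdetermined linear system of five energy-bin measurements for four material coefficients (the sensitivity matrix and concentrations below are invented, and the segmentation and calibration steps are omitted):

```python
import numpy as np

rng = np.random.default_rng(6)

# rows: five energy bins; columns: HA, iodine, glandular, adipose
# (hypothetical effective attenuation values, not calibrated data)
S = np.array([[0.90, 1.60, 0.350, 0.200],
              [0.70, 1.20, 0.300, 0.190],
              [0.50, 0.90, 0.260, 0.185],
              [0.40, 0.50, 0.230, 0.180],
              [0.30, 0.30, 0.210, 0.178]])

true = np.array([0.20, 0.05, 0.60, 0.15])   # per-voxel coefficients

# noiseless case: least squares recovers the coefficients exactly
meas_clean = S @ true
est_clean, *_ = np.linalg.lstsq(S, meas_clean, rcond=None)

# with measurement noise the estimate degrades, which is why the paper's
# calibrated variant matters in practice
meas_noisy = meas_clean + 0.001 * rng.standard_normal(5)
est_noisy, *_ = np.linalg.lstsq(S, meas_noisy, rcond=None)
```

Because glandular and adipose attenuation curves are similar, the system is poorly conditioned in those two columns, which is the numerical root of the quantification difficulty the abstract reports for the uncalibrated methods.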

  6. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    NASA Astrophysics Data System (ADS)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large error in the preset mode number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method is used to analyze simulated signals and measured signals from a hydropower plant, and comparisons are conducted against VMD, EMD and EWT. The results indicate that the proposed method is strongly adaptive and robust to noise, and that it can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.

  7. The determination of micro-arc plasma composition and properties of nanoparticles formed during cathodic plasma electrolysis of 304 stainless steel

    NASA Astrophysics Data System (ADS)

    Jovović, Jovica; Stojadinović, Stevan; Vasilić, Rastko; Tadić, Nenad; Šišović, Nikola M.

    2017-05-01

    This paper presents research focused on determining the micro-arc plasma composition during cathodic plasma electrolysis of AISI 304 stainless steel in an aqueous sodium hydroxide solution. The complex line shapes of several Fe I spectral lines were observed and, by means of a dedicated fitting procedure based on spectral line broadening theory and H2O thermal decomposition data, the mole fractions of the micro-arc plasma constituents (H2, Fe, O, H, H2O, and OH) were determined. Subsequent characterization of the cathodic plasma electrolysis product revealed that it consists of Fe nanoparticles with a median diameter of approximately 60 nm.

  8. MIXOPTIM: A tool for the evaluation and the optimization of the electricity mix in a territory

    NASA Astrophysics Data System (ADS)

    Bonin, Bernard; Safa, Henri; Laureau, Axel; Merle-Lucotte, Elsa; Miss, Joachim; Richet, Yann

    2014-09-01

    This article presents a method for calculating the generation cost of a mixture of electricity sources by means of a Monte Carlo simulation of the production output, taking into account the fluctuations of demand and the stochastic availability of the various power sources that compose the mix. This evaluation shows that, for a given electricity mix, the cost depends non-linearly on the demand level. In the second part of the paper, we address the management of intermittency: a method based on spectral decomposition of the imposed power fluctuations is developed to calculate the minimum amount of controlled power sources needed to follow these fluctuations. This can be converted into a viability criterion for the mix, included in the MIXOPTIM software. In the third part of the paper, the MIXOPTIM cost evaluation method is applied to the multi-criteria optimization of the mix according to three main criteria: the cost of the mix; its impact on climate in terms of CO2 production; and the security of supply.
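The Monte Carlo cost evaluation can be sketched as follows. This is a toy model, not the MIXOPTIM implementation: the mix, capacities, availability probabilities, costs, and merit-order dispatch rule are all invented for illustration. It does reproduce the abstract's central observation that unit cost depends non-linearly on demand, because higher demand pulls the expensive peaking source into the margin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical mix, in merit order (cheapest dispatched first):
# (name, capacity in GW, availability probability, cost in EUR/MWh)
SOURCES = [
    ("nuclear", 60.0, 0.90, 50.0),
    ("wind",    30.0, 0.25, 70.0),
    ("gas",     40.0, 0.95, 90.0),
]

def mean_generation_cost(demand_mean, demand_std=5.0, n_draws=20000):
    """Monte Carlo estimate of the mean cost (EUR/MWh) of served demand."""
    total_cost = 0.0
    total_energy = 0.0
    for _ in range(n_draws):
        demand = max(rng.normal(demand_mean, demand_std), 0.0)
        remaining = demand
        for _name, cap, avail, cost in SOURCES:
            # each source is entirely up or down in a given draw
            produced = min(remaining, cap * rng.binomial(1, avail))
            total_cost += produced * cost
            remaining -= produced
        total_energy += demand - remaining  # demand actually served
    return total_cost / total_energy
```

Evaluating the same mix at a low and a high mean demand exposes the non-linearity: the per-MWh cost rises with demand even though no input price changed.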

  9. On Hilbert-Huang Transform Based Synthesis of a Signal Contaminated by Radio Frequency Interference or Fringes

    NASA Technical Reports Server (NTRS)

    Kizhner, Semion; Shiri, Ron S.; Vootukuru, Meg; Coletti, Alessandro

    2015-01-01

    Norden E. Huang et al. proposed and published the Hilbert-Huang Transform (HHT) concept in 1996 and 1998, respectively. The HHT is a novel method for adaptive spectral analysis of non-linear and non-stationary signals. It comprises two components: the Huang Empirical Mode Decomposition (EMD), which yields an adaptive, data-derived basis of Intrinsic Mode Functions (IMFs), and the Hilbert Spectral Analysis for one dimension (HSA1), based on the 1D Hilbert Transform applied to the EMD IMF outcome. Although the original paper describes the HHT concept in great depth, it does not contain all the methodology needed to implement the HHT in computer code. In 2004, Semion Kizhner and Karin Blank implemented the reference digital HHT real-time data processing system for 1D (HHT-DPS Version 1.4). The 2-dimensional (2D) case (HHT2) proved difficult because of the computational complexity of EMD for 2D (EMD2) and the absence of a suitable 2D Hilbert Transform for spectral analysis (HSA2). The real-time EMD2 and HSA2 together comprise the real-time HHT2. Kizhner completed the real-time EMD2 and HSA2 reference digital implementations in 2013 and 2014, respectively. Still, HHT2 outcome synthesis remains an active research area. This paper presents the initial concepts and preliminary results of HHT2-based synthesis and its application to processing signals contaminated by Radio-Frequency Interference (RFI), as well as to fringe detection and mitigation in optical systems at the design stage. The Soil Moisture Active Passive (SMAP) mission carries a radiometer instrument that measures Earth soil moisture at the L1 frequency (1.4 GHz polarimetric: H, V, 3rd and 4th Stokes parameters). There is abundant RFI at L1, and because soil moisture is a strategic parameter, it is important to be able to recover RFI-contaminated measurement samples (15% of telemetry). The state of the art only allows RFI detection and discards RFI-contaminated measurements. HHT-based analysis and synthesis facilitates recovery of measurements contaminated by all kinds of RFI, including jamming [7-8]. Fringes are inherent in optical systems, and expensive multi-layer complex contour coatings are employed to remove unwanted fringes. HHT2-based analysis allows test-image decomposition to analyze and detect fringes, and HHT2-based synthesis recovers the useful image.
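The HSA step of the HHT rests on the analytic signal. The sketch below (a minimal numpy illustration, not the HHT-DPS reference implementation) computes the FFT-based Hilbert transform and derives the instantaneous amplitude and frequency of a single IMF-like component.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based Hilbert transform: zero negative frequencies, double positive."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
        h[1:N // 2] = 2.0
    else:
        h[1:(N + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def hilbert_spectral_analysis(x, fs):
    """Instantaneous amplitude and frequency of one IMF-like component."""
    z = analytic_signal(x)
    amplitude = np.abs(z)
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # Hz, per sample
    return amplitude, inst_freq
```

Applied to each IMF produced by EMD, the pair (amplitude, instantaneous frequency) is what populates the Hilbert spectrum.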

  10. Final Report - High-Order Spectral Volume Method for the Navier-Stokes Equations On Unstructured Tetrahedral Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Z J

    2012-12-06

    The overriding objective of this project is to develop an efficient and accurate method for capturing strong discontinuities and fine smooth flow structures of disparate length scales with unstructured grids, and to demonstrate its potential for problems relevant to DOE. More specifically, we plan to achieve the following objectives: 1. Extend the SV method to three dimensions and develop a fourth-order accurate SV scheme for tetrahedral grids. Optimize the SV partition by minimizing a form of the Lebesgue constant. Verify the order of accuracy using scalar conservation laws with an analytical solution; 2. Extend the SV method to the Navier-Stokes equations for the simulation of viscous flow problems. Two promising approaches to computing the viscous fluxes will be tested and analyzed; 3. Parallelize the 3D viscous SV flow solver using domain decomposition and message passing. Optimize the cache performance of the flow solver by designing data structures that minimize data access times; 4. Demonstrate the SV method on a wide range of flow problems including both discontinuities and complex smooth structures. The objectives remain the same as those outlined in the original proposal. We anticipate no technical obstacles in meeting them.

  11. Time-Frequency Analysis of Non-Stationary Biological Signals with Sparse Linear Regression Based Fourier Linear Combiner.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-06-14

    It is often difficult to analyze biological signals because of their nonlinear and non-stationary characteristics, which necessitates time-frequency decomposition methods for analyzing the subtle changes in these signals that are often connected to an underlying phenomenon. This paper presents a new approach to analyzing the time-varying characteristics of such signals by employing a simple truncated Fourier series model, namely the band-limited multiple Fourier linear combiner (BMFLC). In contrast to earlier designs, we first identify the sparsity imposed on the signal model in order to reformulate it as a sparse linear regression model. The coefficients of the proposed model are then estimated by a convex optimization algorithm. The performance of the proposed method was analyzed with benchmark test signals. An energy-ratio metric is employed to quantify the spectral performance; results show that the proposed Sparse-BMFLC method has a high mean energy ratio (0.9976) and outperforms existing methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT), and the BMFLC Kalman smoother. Furthermore, the proposed method provides an overall 6.22% improvement in reconstruction error.
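The band-limited-combiner-as-sparse-regression idea can be sketched as follows. The paper's exact band settings and solver are not reproduced here; this is a generic sin/cos dictionary over an assumed frequency band, with the lasso solved by plain iterative soft-thresholding (ISTA) as one representative convex optimization algorithm.

```python
import numpy as np

def bmflc_dictionary(t, f_lo, f_hi, df):
    """Sin/cos regressors on a band-limited frequency grid (the BMFLC model)."""
    freqs = np.arange(f_lo, f_hi + df / 2, df)
    cols = [np.sin(2 * np.pi * f * t) for f in freqs] + \
           [np.cos(2 * np.pi * f * t) for f in freqs]
    return np.column_stack(cols), freqs

def ista_lasso(A, y, lam, n_iter=200):
    """Iterative soft-thresholding for min_w 0.5*||Aw - y||^2 + lam*||w||_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = w - A.T @ (A @ w - y) / L    # gradient step on the quadratic part
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return w
```

For a single 7 Hz tone sampled at 200 Hz against a 1-20 Hz grid, the l1 penalty concentrates essentially all the energy on the one matching sine coefficient, which is the sparsity the abstract exploits.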

  12. Evaluation of allelopathic, decomposition and cytogenetic activities of Jasminum officinale L. f. var. grandiflorum (L.) Kob. on bioassay plants.

    PubMed

    Teerarak, Montinee; Laosinwattana, Chamroon; Charoenying, Patchanee

    2010-07-01

    Methanolic extracts prepared from dried leaves of Jasminum officinale f. var. grandiflorum (L.) Kob. (Spanish jasmine) inhibited seed germination and stunted both root and shoot length of the weeds Echinochloa crus-galli (L.) Beauv. and Phaseolus lathyroides L. The main active compound was isolated and determined by spectral data as a secoiridoid glucoside named oleuropein. In addition, a decrease in allelopathic efficacy appeared as the decomposition periods increased. The mitotic index in treated onion root tips decreased with increasing concentrations of the extracts and longer periods of treatment. Likewise, the mitotic phase index was altered in onion incubated with crude extract. Furthermore, crude extract produced mitotic abnormalities resulting from its action on chromatin organization and mitotic spindle. Copyright (c)2010 Elsevier Ltd. All rights reserved.

  13. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    NASA Astrophysics Data System (ADS)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  14. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable, in the sense that the relative magnitude of the decomposed signals is reduced by signal cancellation while image noise accumulates from the two CT images of independent scans. Direct image decomposition therefore leads to severe degradation of the signal-to-noise ratio in the resultant images. Existing noise suppression techniques are typically implemented in DECT with the reconstruction and decomposition procedures performed independently, which does not exploit the statistical properties of the decomposed images during reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and signal decomposition procedures to minimize DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem that balances data fidelity and the total variation of the decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent between the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method's performance on noise suppression and spatial resolution using phantom studies, and compare the algorithm with conventional denoising approaches as well as with combined iterative reconstruction methods using different forms of regularization. Results: On the Catphan®600 phantom, the proposed method outperforms existing denoising methods in preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic and q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from the high noise correlation; however, the proposed TV regularization obtains better edge preservation. Studies of electron density measurement also show that the method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines image reconstruction and material decomposition into one optimization framework. Compared to existing approaches, the method achieves superior DECT imaging performance with respect to decomposition accuracy, noise reduction, and spatial resolution.
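The instability described in the Purpose section is easy to reproduce numerically: direct image-domain decomposition inverts a nearly singular two-material system per pixel, so independent per-scan noise is strongly amplified. The mixing matrix and noise level below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative (hypothetical) attenuation matrix: rows = low/high kVp scans,
# columns = basis materials; the rows are nearly parallel, as in practice.
M = np.array([[0.26, 0.18],
              [0.22, 0.12]])

c_true = np.array([1.0, 0.3])          # true basis-material coefficients
sigma = 0.005                          # independent noise std on each scan

# 100000 "pixels": noiseless dual-energy measurements plus independent noise
mu = (M @ c_true)[:, None] + rng.normal(0.0, sigma, size=(2, 100000))
c_hat = np.linalg.solve(M, mu)         # direct per-pixel decomposition

# noise amplification factor: decomposed-image noise vs measurement noise
amplification = c_hat.std(axis=1).mean() / sigma
```

The decomposition is unbiased but pays an amplification factor of the order of the conditioning of M; the TV-regularized joint reconstruction in the abstract exists precisely to avoid paying this factor.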

  15. [Value of quantitative iodine-based material decomposition images with gemstone spectral CT imaging in the follow-up of patients with hepatocellular carcinoma after TACE treatment].

    PubMed

    Xing, Gusheng; Wang, Shuang; Li, Chenrui; Zhao, Xinming; Zhou, Chunwu

    2015-03-01

    To investigate the value of quantitative iodine-based material decomposition images with gemstone spectral CT imaging in the follow-up of patients with hepatocellular carcinoma (HCC) after transcatheter arterial chemoembolization (TACE), 32 consecutive HCC patients with previous TACE treatment were included in this study. For the follow-up, arterial-phase (AP) and venous-phase (VP) dual-phase CT scans were performed with a single-source dual-energy CT scanner (Discovery CT 750HD, GE Healthcare). Iodine concentrations were derived from iodine-based material decomposition images in the liver parenchyma, tumors, and coagulation necrosis (CN) areas. The iodine concentration difference (ICD) between the arterial and venous phases was quantitatively evaluated in the different tissues, and the lesion-to-normal-parenchyma iodine concentration ratio (LNR) was calculated. ROC analysis was performed for the qualitative evaluation, and the area under the ROC curve (Az) was calculated to represent the diagnostic ability of ICD and LNR. In the 32 HCC patients, the regions of interest (ROIs) for iodine concentration included liver parenchyma (n=42), tumors (n=28), and coagulation necrosis (n=24). During the AP, the iodine concentration of CNs (median value 0.088 µg/mm(3)) was significantly higher than that of the tumors (0.064 µg/mm(3), P=0.022) and liver parenchyma (0.048 µg/mm(3), P=0.005), but showed no significant difference between liver parenchyma and tumors (P=0.454). During the VP, the iodine concentration in hepatic parenchyma (median value 0.181 µg/mm(3)) was significantly higher than that in CNs (0.140 µg/mm(3), P=0.042). There was no significant difference between liver parenchyma and tumors, or between CNs and tumors (both P>0.05). The median ICD in CNs was 0.006 µg/mm(3), significantly lower than that of the HCC lesions (0.201 µg/mm(3), P<0.001) and hepatic parenchyma (0.117 µg/mm(3), P<0.001). The ICDs in tumors and hepatic parenchyma showed no significant difference (P=0.829). During the AP, LNR showed no significant difference between CNs and tumors (median 1.805 vs. 1.310, P=0.389), and during the VP the difference was also non-significant (median 0.647 vs. 0.713, P=0.660). The mean Az value of ICD for identifying surviving tumor tissue was 0.804, while LNR performed poorly in both AP and VP images. Quantitative iodine-based material decomposition images with gemstone spectral CT imaging can improve the diagnostic efficacy of CT imaging for HCC patients after TACE treatment.
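The two indices in this record are simple per-ROI arithmetic, sketched below with the pooled median concentrations quoted in the abstract. Note the hedge: the abstract reports medians of per-patient differences and ratios, which need not equal differences or ratios of the pooled medians used here for illustration.

```python
def icd(ap_iodine, vp_iodine):
    """Iodine concentration difference between venous and arterial phase."""
    return vp_iodine - ap_iodine

def lnr(lesion_iodine, parenchyma_iodine):
    """Lesion-to-normal-parenchyma iodine concentration ratio."""
    return lesion_iodine / parenchyma_iodine

# pooled median values (ug/mm^3) from the record, arterial phase
cn_ap, tumor_ap, liver_ap = 0.088, 0.064, 0.048
```

For example, the arterial-phase tumor LNR computed from the pooled medians is 0.064 / 0.048 ≈ 1.33, close to but not identical to the reported per-patient median of 1.310.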

  16. A general framework of noise suppression in material decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio of the decomposed images compared to that of the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise in the decomposed images of DECT using convex optimization, formulated as least-squares estimation with smoothness regularization. Following the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance-covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial decomposition function measured in a calibration scan. The polynomial coefficients are determined from projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with superior spatial-resolution performance as shown by comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2–3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar noise suppression performance is observed in the results for an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by the approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, the method suppresses image noise standard deviation by a factor of around 7.5 and, compared with linear decomposition, reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies show that the proposed method improves image uniformity and the accuracy of electron density measurements through effective beam-hardening correction, and reduces the noise level without noticeable resolution loss.
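The key ingredient of the penalty weighting above is the variance-covariance matrix of the decomposed images. For a purely linear decomposition this matrix has the closed form σ²·M⁻¹M⁻ᵀ, which a Monte Carlo draw confirms; the mixing matrix and noise level here are hypothetical stand-ins, and the paper's general-form (nonlinear) formulas are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)

M = np.array([[0.26, 0.18],            # hypothetical dual-energy
              [0.22, 0.12]])           # mixing matrix
Minv = np.linalg.inv(M)
sigma = 0.005                          # per-channel measurement noise std

# closed-form variance-covariance of linearly decomposed images
cov_analytic = sigma**2 * Minv @ Minv.T

# Monte Carlo verification on 200000 noise draws
noise = rng.normal(0.0, sigma, size=(2, 200000))
cov_mc = np.cov(Minv @ noise)
```

In the paper's framework the inverse of this matrix weights the least-squares fidelity term, in the spirit of a best linear unbiased estimator; note the strong anticorrelation between the two decomposed channels (negative off-diagonal), which is exactly the structure a scalar noise weight would miss.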

  17. Optimal Spectral Decomposition (OSD) for Ocean Data Assimilation

    DTIC Science & Technology

    2015-01-01

    tropical North Atlantic from the Argo float data (Chu et al. 2007), and temporal and spatial variability of global upper-ocean heat content (Chu 2011) ... O. V. Melnichenko, and N. C. Wells, 2007: Long baroclinic Rossby waves in the tropical North Atlantic observed from profiling floats. J. ... Harrison, and D. Stammer, Eds., Vol. 2, ESA Publ. WPP-306, doi:10.5270/OceanObs09.cwp.86. Tang, Y., and R. Kleeman, 2004: SST assimilation

  18. Short-Term EEG Spectral Pattern as a Single Event in EEG Phenomenology

    PubMed Central

    Fingelkurts, Al. A; Fingelkurts, An. A

    2010-01-01

    Spectral decomposition remains, to this day, the main analytical paradigm for the analysis of EEG oscillations. However, conventional spectral analysis assesses the mean characteristics of EEG power spectra averaged over extended periods of time and/or broad frequency bands, resulting in a “static” picture that cannot adequately reflect the underlying neurodynamics. A relatively new and promising area in the study of EEG is based on reducing the signal to elementary short-term spectra of various types, in accordance with the number of types of EEG stationary segments, instead of using an averaged power spectrum for the whole EEG. It is suggested that the various perceptual and cognitive operations associated with a mental or behavioural condition constitute a single distinguishable neurophysiological state with a distinct and reliable spectral pattern. In this case, one type of short-term spectral pattern may be considered a single event in EEG phenomenology. To support this assumption, the following issues are considered in detail: (a) the relations between a local EEG short-term spectral pattern of a particular type, the actual state of the neurons in the underlying network, and volume conduction; (b) the relationship between the morphology of an EEG short-term spectral pattern and the state of the underlying neurodynamical system, i.e., the neuronal assembly; (c) the relation of different spectral pattern components to distinct physiological mechanisms; (d) the relation of different spectral pattern components to different functional significance; (e) developmental changes in spectral pattern components; (f) the heredity of the variance in the individual spectral pattern and its components; (g) the intra-individual stability of the sets of EEG short-term spectral patterns and their percent ratio; (h) the discrete dynamics of EEG short-term spectral patterns. The functional relevance (consistency) of EEG short-term spectral patterns with changes in brain functional state, cognitive task, and different neuropsychopathologies is demonstrated. PMID:21379390
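The segment-wise idea can be illustrated minimally: instead of one averaged spectrum for the whole recording, each short window gets its own spectrum, which can then be typed. This sketch is not the authors' segmentation procedure; the fixed non-overlapping windows and the crude peak-frequency labelling are illustrative simplifications.

```python
import numpy as np

def short_term_spectra(x, fs, win_s=1.0):
    """Non-overlapping windowed power spectra -- one spectrum per segment."""
    n = int(win_s * fs)
    window = np.hanning(n)
    segments = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    spectra = np.array([np.abs(np.fft.rfft(s * window)) ** 2 for s in segments])
    return freqs, spectra

def dominant_frequency(freqs, spectra):
    """Label each short-term spectrum by its peak frequency (crude typing)."""
    return freqs[np.argmax(spectra, axis=1)]
```

On a synthetic "EEG" that switches from an alpha-band to a beta-band rhythm mid-recording, the per-segment labels recover the switch that an averaged spectrum would smear into one bimodal picture.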

  19. Red-Edge Spectral Reflectance as an Indicator of Surface Moisture Content in an Alaskan Peatland Ecosystem

    NASA Astrophysics Data System (ADS)

    McPartland, M.; Kane, E. S.; Turetsky, M. R.; Douglass, T.; Falkowski, M. J.; Montgomery, R.; Edwards, J.

    2015-12-01

    Arctic and boreal peatlands serve as major reservoirs of terrestrial organic carbon (C) because Net Primary Productivity (NPP) outstrips C loss from decomposition over long periods of time. Peatland productivity varies as a function of water table position and surface moisture content, making C storage in these systems particularly vulnerable to the climate warming and drying predicted for high latitudes. Detailed spatial knowledge of how aboveground vegetation communities respond to changes in hydrology would allow ecosystem responses to environmental change to be measured at the landscape scale. This study leverages remotely sensed data along with field measurements taken at the Alaska Peatland Experiment (APEX) at the Bonanza Creek Long Term Ecological Research site to examine relationships between plant solar reflectance and surface moisture. APEX is a decade-long experiment investigating the effects of hydrologic change on peatland ecosystems using water table manipulation treatments (raised, lowered, and control). Water table levels were manipulated throughout the 2015 growing season, resulting in a maximum separation of 35 cm between the raised and lowered treatment plots. Water table position, soil moisture content, depth to seasonal ice, soil temperature, photosynthetically active radiation (PAR), and CO2 and CH4 fluxes were measured as predictors of C loss through decomposition and NPP. Vegetation was surveyed for percent cover of plant functional types. Remote sensing data were collected during the peak growing season, when the separation between treatment plots was at its maximum. Imagery was acquired via a SenseFly eBee airborne platform equipped with a Canon S110 red-edge camera capable of detecting spectral reflectance from plant tissue at a 715 nm band center, at a spatial resolution of centimeters.
Here, we investigate empirical relationships between spectral reflectance, water table position, and surface moisture in relation to peat carbon balance.

  20. The Near and Far-IR SEDs of Spitzer GTO ULIRGs

    NASA Astrophysics Data System (ADS)

    Marshall, Jason; Armus, Lee; Spoon, Henrik

    2008-03-01

    Spectra of a sample of 109 ultraluminous infrared galaxies (ULIRGs) have been obtained as part of the Spitzer IRS GTO program, providing a dataset with which to study the underlying obscured energy source(s) (i.e., AGN and/or starburst activity) powering ULIRGs in the local universe, and providing insight into the high-redshift infrared-luminous galaxies responsible for the bulk of the star-formation energy density at z = 2-3. As part of this effort, we have developed the CAFE spectral energy distribution decomposition tool to analyze the UV to sub-mm SEDs of these galaxies (including their IRS spectra). Sufficient photometry for these decompositions exists for approximately half of the GTO ULIRGs. However, we lack crucial data for the other half of the sample in either or both the 2-5 micron gap between the near-IR passbands and the start of the IRS wavelength coverage and the far-IR beyond 100 microns. These spectral regions provide critical constraints on the amount of hot dust near the dust sublimation temperature (indicating the presence of an AGN) and the total luminosity and mass of dust in the galaxy (dominated by the coldest dust emitting at far-IR wavelengths). We therefore propose to obtain IRAC observations in all channels and MIPS observations at 70 and 160 microns for the 37 and 17 GTO ULIRGs lacking data in these wavelength ranges, respectively. Considering its very low cost of 7.3 total hours of observation, the scientific return from this program is enormous: nearly doubling the number of GTO ULIRGs with full spectral coverage, and completing a dataset that is sure to be an invaluable resource well beyond the lifetime of Spitzer.

  1. xEMD procedures as a data - Assisted filtering method

    NASA Astrophysics Data System (ADS)

    Machrowska, Anna; Jonak, Józef

    2018-01-01

    The article presents the possibility of using the Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results are presented for the xEMD procedures applied to vibration signals of a system in different states of wear.

  2. Removing Wave Artifacts from Eddy Correlation Data

    NASA Astrophysics Data System (ADS)

    Neumann, Andreas; Brand, Andreas

    2017-04-01

    The German Wadden Sea is an extensive system of back-barrier tidal basins along the margin of the southern North Sea. Because of their high productivity and strong retention potential for labile organic carbon, high mineralization rates are expected in this system. Since the sediment bed is sandy, oxygen fluxes across the sediment-water interface (SWI) may be enhanced by strong tidal currents as well as by wind-induced surface waves. In order to measure oxygen fluxes in situ without disturbing the sediment, the eddy correlation method (ECM) was introduced to aquatic geoscience by Berg et al. (2003). The method is based on correlating turbulent fluctuations of oxygen concentration and vertical velocity measured at high frequency above the SWI. The method integrates over spatial heterogeneities and allows the observation of total benthic oxygen fluxes in complex systems where other methods, such as flux chamber deployments and oxygen profile measurements in the sediment, fail. The method should therefore also capture effects such as the enhancement of oxygen fluxes by porewater advection driven by waves and currents over sandy sediments. Unfortunately, the ECM suffers from wave contamination due to the stirring sensitivity of the electrodes, the spatial separation between the oxygen electrode and the location of the velocity measurement, and tilt of the measurement setup at the deployment site. To correct for this wave contamination, we tested the method of spectral reconstruction initially introduced by Bricker and Monismith (2007) for determining Reynolds stresses in wave-affected environments. In short, this method attempts to remove the wave signal from the power spectral densities of oxygen concentration and vertical velocity fluctuations by cutting off the wave peak in these spectra. The wave contribution to the co-spectrum between both quantities is then reconstructed by assuming that the phasing in the wave band is dominated by the waves. Based on the example of the North Frisian Wadden Sea, we discuss the potential and limits of this method. References: Berg, P., H. Roy, F. Janssen, V. Meyer, B. Jorgensen, M. Huettel, and D. de Beer (2003), Oxygen uptake by aquatic sediments measured with a novel non-invasive eddy-correlation technique, Marine Ecology-Progress Series, 261, 75-83, doi:10.3354/meps261075. Bricker, J. D., and S. G. Monismith (2007), Spectral wave turbulence decomposition, J. Atmos. Oceanic Technol., 24(8), 1479-1487, doi:10.1175/JTECH2066.1.
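The "cut off the wave peak" step can be sketched as follows: the wave band of a power spectral density is replaced by log-linear interpolation between the spectral values at the band edges. This is only the spectral cut; the full Bricker-Monismith method additionally reconstructs the wave contribution to the co-spectrum from the wave-band phasing, which is omitted here, and the band limits are assumed known.

```python
import numpy as np

def remove_wave_band(freqs, psd, f_lo, f_hi):
    """Cut the wave peak out of a spectrum: replace the band [f_lo, f_hi]
    by log-linear interpolation between the spectral values at its edges."""
    out = psd.copy()
    band = (freqs >= f_lo) & (freqs <= f_hi)
    lo = np.searchsorted(freqs, f_lo) - 1            # last bin below the band
    hi = np.searchsorted(freqs, f_hi, side="right")  # first bin above it
    out[band] = np.exp(np.interp(freqs[band],
                                 [freqs[lo], freqs[hi]],
                                 np.log([psd[lo], psd[hi]])))
    return out
```

Applied to a turbulence-like background spectrum with a narrow wave peak, the cut removes the peak while leaving the spectrum outside the wave band untouched.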

  3. [Early warning for various internal faults of GIS based on ultraviolet spectroscopy].

    PubMed

    Zhao, Yu; Wang, Xian-pei; Hu, Hong-hong; Dai, Dang-dang; Long, Jia-chuan; Tian, Meng; Zhu, Guo-wei; Huang, Yun-guang

    2015-02-01

    As the basis of accurate diagnosis, fault early warning for gas-insulated switchgear (GIS) focuses on timeliness and applicability, so it is significant to research a unified early-warning method for partial discharge (PD) and overheating faults in GIS. In the present paper, SO2 is proposed as the common and typical by-product, and unified monitoring is achieved through ultraviolet (UV) spectroscopic detection of SO2. The derivative method and Savitzky-Golay filtering are employed for baseline correction and smoothing. The wavelength range of 290-310 nm is selected for quantitative detection of SO2. With the UV method, the spectral interference of SF6 and other complex by-products, e.g., SOF2 and SO2F2, can be avoided and the features of trace SO2 in GIS can be extracted. The detection system features a compact structure, low maintenance, and satisfactory suitability for field surveillance. By conducting SF6 decomposition experiments, including two types of PD faults and overheating faults between 200 and 400 degrees C, the feasibility of the proposed UV method has been verified. Fourier transform infrared spectroscopy and gas chromatography can be used for subsequent fault diagnosis. The different decomposition features of the two kinds of faults are confirmed and the diagnosis strategy is briefly analyzed. The main by-products under PD are SOF2 and SO2F2, and the SO2 generated is significantly less than SOF2. More carbonaceous by-products are generated when the PD involves epoxy. By contrast, when the heater material is stainless steel, SF6 decomposes at about 300 degrees C and the main by-products in overheating faults are SO2 and SO2F2. When heated over 350 degrees C, SO2 is generated much faster, and SO2 content increases steadily as the GIS fault persists. The fault types can be preliminarily identified based on the generation features of SO2.

  4. New spectrophotometric assay for pilocarpine.

    PubMed

    El-Masry, S; Soliman, R

    1980-07-01

    A quick method for the determination of pilocarpine in eye drops in the presence of decomposition products is described. The method involves complexation of the alkaloid with bromocresol purple at pH 6. After treatment with 0.1N NaOH, the liberated dye is measured at 580 nm. The method has a relative standard deviation of 1.99%, and has been successfully applied to the analysis of 2 batches of pilocarpine eye drops. The recommended method was also used to monitor the stability of a pilocarpine nitrate solution in 0.05N NaOH at 65 degrees C. The BPC method failed to detect any significant decomposition after 2 h incubation, but the recommended method revealed 87.5% decomposition.

  5. Microbial genomics, transcriptomics and proteomics: new discoveries in decomposition research using complementary methods.

    PubMed

    Baldrian, Petr; López-Mondéjar, Rubén

    2014-02-01

    Molecular methods for the analysis of biomolecules have undergone rapid technological development in the last decade. The advent of next-generation sequencing methods and improvements in instrumental resolution enabled the analysis of complex transcriptome, proteome and metabolome data, as well as a detailed annotation of microbial genomes. The mechanisms of decomposition by model fungi have been described in unprecedented detail by the combination of genome sequencing, transcriptomics and proteomics. The increasing number of available genomes for fungi and bacteria shows that the genetic potential for decomposition of organic matter is widespread among taxonomically diverse microbial taxa, while expression studies document the importance of the regulation of expression in decomposition efficiency. Importantly, high-throughput methods of nucleic acid analysis used for the analysis of metagenomes and metatranscriptomes indicate the high diversity of decomposer communities in natural habitats and their taxonomic composition. Today, the metaproteomics of natural habitats is of interest. In combination with advanced analytical techniques to explore the products of decomposition and the accumulation of information on the genomes of environmentally relevant microorganisms, advanced methods in microbial ecophysiology should increase our understanding of the complex processes of organic matter transformation.

  6. Application of Extended Kalman Filter in Persistent Scatterer Interferometry to Enhance the Accuracy of Unwrapping Process

    NASA Astrophysics Data System (ADS)

    Tavakkoli Estahbanat, A.; Dehghani, M.

    2017-09-01

    In the interferometry technique, phases are wrapped into the interval 0-2π. Recovering the integer number of phase cycles lost in wrapping is the main goal of unwrapping algorithms. Although the density of points in conventional interferometry is high, this does not help in some cases, such as large temporal baselines or noisy interferograms: noisy pixels not only fail to improve the results but also introduce errors during interferogram unwrapping. In the PS technique, the sparseness of PS pixels makes phase unwrapping difficult, and because of the irregular data separation, conventional methods are ineffective. Unwrapping techniques are divided into path-independent and path-dependent methods according to their unwrapping paths. A region-growing method, which is path-dependent, has been used to unwrap PS data. In this paper, the extended Kalman filter (EKF) is generalized to PS data. The algorithm accounts for the nonlinearity of the PS unwrapping problem, as in the conventional unwrapping problem. A pulse-pair method enhanced with singular value decomposition (SVD) is used to estimate the spectral shift from the interferometric power spectral density in 7x7 local windows, and a hybrid cost map manages the unwrapping path. The algorithm was implemented on simulated PS data: to form a sparse dataset, a few points of a regular grid were randomly selected, and the RMSE between the results and the true unambiguous phases is reported to validate the approach. The results of the algorithm and the true unwrapped phases were identical.
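The core wrapping problem the abstract addresses can be illustrated in one dimension with NumPy's reference unwrapper; the PS-specific EKF machinery is beyond a short sketch, and the phase ramp below is synthetic.

```python
import numpy as np

# A smooth "true" interferometric phase ramp exceeding 2*pi several times.
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)

# Wrapping maps each value into (-pi, pi]; the integer cycle counts are lost.
wrapped = np.angle(np.exp(1j * true_phase))

# 1D unwrapping restores continuity by adding back multiples of 2*pi wherever
# consecutive samples jump by more than pi (np.unwrap's criterion).
recovered = np.unwrap(wrapped)
offset = true_phase[0] - recovered[0]   # unwrapping is unique only up to 2*pi*k
```

This only works when the true phase changes by less than pi between samples, which is exactly why sparse, irregularly spaced PS pixels make the 2D problem hard.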

  7. A projection method for low speed flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colella, P.; Pao, K.

    The authors propose a decomposition applicable to low speed, inviscid flows of all Mach numbers less than 1. By using the Hodge decomposition, they may write the velocity field as the sum of a divergence-free vector field and a gradient of a scalar function. Evolution equations for these parts are presented. A numerical procedure based on this decomposition is designed, using projection methods for solving the incompressible variables and a backward-Euler method for solving the potential variables. Numerical experiments are included to illustrate various aspects of the algorithm.
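The Hodge decomposition underlying the method splits a velocity field into a divergence-free part and the gradient of a scalar. A minimal periodic 2D sketch using an FFT-based projection (a generic construction, not the authors' projection solver) is:

```python
import numpy as np

def hodge_project(u, v):
    # Decompose a periodic 2D field (u, v) on a [0, 2*pi)^2 grid into a
    # divergence-free part and a gradient part, by solving lap(phi) = div(u, v)
    # in Fourier space and subtracting grad(phi).
    n = u.shape[0]
    m = np.fft.fftfreq(n, d=1.0 / n)                 # integer wavenumbers
    kx, ky = np.meshgrid(m, m, indexing="ij")
    uh, vh = np.fft.fft2(u), np.fft.fft2(v)
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                                   # avoid 0/0 on the mean mode
    div_h = 1j * kx * uh + 1j * ky * vh
    phi_h = div_h / (-k2)                            # -k^2 phi_h = div_h
    gx = np.real(np.fft.ifft2(1j * kx * phi_h))      # gradient part
    gy = np.real(np.fft.ifft2(1j * ky * phi_h))
    return u - gx, v - gy, gx, gy                    # (solenoidal, gradient)

# Field with known parts: (-sin(Y), sin(X)) is divergence-free, and
# (cos(X), 0) = grad(sin(X)) is the gradient part.
n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
u = -np.sin(Y) + np.cos(X)
v = np.sin(X)
us, vs, gx, gy = hodge_project(u, v)
```

The projection recovers both parts to machine precision because the test field contains only low Fourier modes.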

  8. Virtual Surveyor based Object Extraction from Airborne LiDAR data

    NASA Astrophysics Data System (ADS)

    Habib, Md. Ahsan

    Topographic feature detection of land cover from LiDAR data is important in various fields - city planning, disaster response and prevention, soil conservation, infrastructure or forestry. In recent years, feature classification, compliant with Object-Based Image Analysis (OBIA) methodology has been gaining traction in remote sensing and geographic information science (GIS). In OBIA, the LiDAR image is first divided into meaningful segments called object candidates. This results, in addition to spectral values, in a plethora of new information such as aggregated spectral pixel values, morphology, texture, context as well as topology. Traditional nonparametric segmentation methods rely on segmentations at different scales to produce a hierarchy of semantically significant objects. Properly tuned scale parameters are, therefore, imperative in these methods for successful subsequent classification. Recently, some progress has been made in the development of methods for tuning the parameters for automatic segmentation. However, researchers found that it is very difficult to automatically refine the tuning with respect to each object class present in the scene. Moreover, due to the relative complexity of real-world objects, the intra-class heterogeneity is very high, which leads to over-segmentation. Therefore, the method fails to deliver correctly many of the new segment features. In this dissertation, a new hierarchical 3D object segmentation algorithm called Automatic Virtual Surveyor based Object Extracted (AVSOE) is presented. AVSOE segments objects based on their distinct geometric concavity/convexity. This is achieved by strategically mapping the sloping surface, which connects the object to its background. Further analysis produces hierarchical decomposition of objects to its sub-objects at a single scale level. Extensive qualitative and qualitative results are presented to demonstrate the efficacy of this hierarchical segmentation approach.

  9. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) was developed to address the mode-mixing problem of the empirical mode decomposition (EMD) method. Compared to ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. However, both CEEMD and EEMD need a large ensemble to reduce the residue noise, which incurs a high computational cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and an IMF evaluation index are proposed with the aim of reducing the computational cost and selecting IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method. The results demonstrate that the modified CEEMD decomposes the signal efficiently at a lower computational cost, and the IMF evaluation index selects the meaningful IMFs automatically.
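The "complementary" idea in CEEMD is to add each noise realization twice, with opposite signs, so the injected noise cancels exactly in the ensemble average. The sketch below shows only that cancellation in NumPy; it is not an EMD implementation, and the test signal is invented.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

ensemble_size = 50
noise_std = 0.2
trials = []
for _ in range(ensemble_size):
    w = rng.normal(0, noise_std, t.size)
    trials.append(signal + w)   # positive-noise copy
    trials.append(signal - w)   # complementary negative-noise copy

# In CEEMD each noisy copy would be decomposed by EMD and the IMFs averaged;
# here we show only that the paired noise cancels in the ensemble mean.
recon = np.mean(trials, axis=0)
```

In EEMD, which uses independent (unpaired) noise, the residue only decays like 1/sqrt(ensemble size), which is why it needs larger ensembles than CEEMD for the same residue level.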

  10. SCoT: a Python toolbox for EEG source connectivity.

    PubMed

    Billinger, Martin; Brunner, Clemens; Müller-Putz, Gernot R

    2014-01-01

    Analysis of brain connectivity has become an important research tool in neuroscience. Connectivity can be estimated between cortical sources reconstructed from the electroencephalogram (EEG). Such analysis often relies on trial averaging to obtain reliable results. However, some applications such as brain-computer interfaces (BCIs) require single-trial estimation methods. In this paper, we present SCoT, a source connectivity toolbox for Python. This toolbox implements routines for blind source decomposition and connectivity estimation with the MVARICA approach. Additionally, a novel extension called CSPVARICA is available for labeled data. SCoT estimates connectivity from various spectral measures relying on vector autoregressive (VAR) models. Optionally, these VAR models can be regularized to facilitate ill-posed applications such as single-trial fitting. We demonstrate basic usage of SCoT on motor imagery (MI) data. Furthermore, we show simulation results of utilizing SCoT for feature extraction in a BCI application. These results indicate that CSPVARICA and correct regularization can significantly improve MI classification. While SCoT was mainly designed for application in BCIs, it contains useful tools for other areas of neuroscience. SCoT is a software package that (1) brings combined source decomposition and connectivity estimation to the open Python platform, and (2) offers tools for single-trial connectivity estimation. The source code is released under the MIT license and is available online at github.com/SCoT-dev/SCoT.
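The VAR models underlying SCoT's connectivity measures can be fit by ordinary least squares. Below is a minimal VAR(p) fit in NumPy, not SCoT's API; the two-channel VAR(1) system used for the check is invented.

```python
import numpy as np

def fit_var(x, p):
    """Least-squares fit of x[t] = sum_k A[k] @ x[t-k] + e[t].
    x has shape (T, n_channels); returns A with shape (p, n, n)."""
    T, n = x.shape
    # Lagged regressor matrix: row t holds [x[t-1], ..., x[t-p]] flattened.
    Z = np.hstack([x[p - k - 1 : T - k - 1] for k in range(p)])
    Y = x[p:]
    coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)     # shape (n*p, n)
    # Rearrange so A[k][i, j] multiplies channel j at lag k+1 in channel i.
    return coef.T.reshape(n, p, n).transpose(1, 0, 2)

# Simulate a known two-channel VAR(1) process and recover its matrix.
rng = np.random.default_rng(2)
A_true = np.array([[0.5, 0.2],
                   [0.0, 0.4]])
T = 5000
x = np.zeros((T, 2))
for t in range(1, T):
    x[t] = A_true @ x[t - 1] + rng.normal(0, 0.1, 2)

A_est = fit_var(x, p=1)[0]
```

Connectivity measures such as partial directed coherence are then functions of the fitted A matrices evaluated across frequency; regularizing this least-squares step is what makes single-trial fitting feasible.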

  11. Characterization of agricultural land using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Herries, Graham M.; Danaher, Sean; Selige, Thomas

    1995-11-01

    A method is defined and tested for the characterization of agricultural land from multi-spectral imagery, based on singular value decomposition (SVD) and key vector analysis. The SVD technique, which bears a close resemblance to multivariate statistical techniques, has previously been successfully applied to problems of signal extraction for marine data and forestry species classification. In this study the SVD technique is used as a classifier for agricultural regions, using airborne Daedalus ATM data with 1 m resolution. The specific region chosen is an experimental research farm in Bavaria, Germany. This farm has a large number of crops within a very small region and hence is not amenable to existing techniques. There are a number of other significant factors which render existing techniques such as the maximum likelihood algorithm less suitable for this area. These include a very dynamic terrain and tessellated patterns of soil differences, which together cause large variations in the growth characteristics of the crops. The SVD technique is applied to this data set using a multi-stage classification approach, removing unwanted land-cover classes one step at a time. Typical classification accuracies for SVD are of the order of 85-100%. Preliminary results indicate that it is a fast and efficient classifier with the ability to differentiate between crop types such as wheat, rye, potatoes and clover. The results of characterizing 3 sub-classes of Winter Wheat are also shown.
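The key-vector idea can be sketched as projecting each pixel's spectrum onto class subspaces spanned by leading singular vectors of the training data, then assigning the class whose subspace reconstructs the pixel best. The spectra below are synthetic stand-ins, not Daedalus ATM data.

```python
import numpy as np

rng = np.random.default_rng(3)
bands = 12

# Two synthetic "crop" classes with distinct mean spectral signatures.
mu_a = np.linspace(0.2, 0.9, bands)
mu_b = np.linspace(0.9, 0.2, bands)
train_a = mu_a + rng.normal(0, 0.05, (100, bands))
train_b = mu_b + rng.normal(0, 0.05, (100, bands))

def class_basis(X, r=3):
    # Leading right singular vectors of the training matrix span the class
    # subspace (the "key vectors").
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    return Vt[:r]                                    # shape (r, bands)

Va, Vb = class_basis(train_a), class_basis(train_b)

def classify(x):
    # Assign to the class whose subspace leaves the smallest residual.
    res_a = np.linalg.norm(x - Va.T @ (Va @ x))
    res_b = np.linalg.norm(x - Vb.T @ (Vb @ x))
    return "A" if res_a < res_b else "B"

label = classify(mu_a + rng.normal(0, 0.05, bands))
```

A multi-stage pipeline, as in the paper, would apply such a test repeatedly, removing the pixels of each confidently classified land-cover class before fitting the next.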

  12. SCoT: a Python toolbox for EEG source connectivity

    PubMed Central

    Billinger, Martin; Brunner, Clemens; Müller-Putz, Gernot R.

    2014-01-01

    Analysis of brain connectivity has become an important research tool in neuroscience. Connectivity can be estimated between cortical sources reconstructed from the electroencephalogram (EEG). Such analysis often relies on trial averaging to obtain reliable results. However, some applications such as brain-computer interfaces (BCIs) require single-trial estimation methods. In this paper, we present SCoT, a source connectivity toolbox for Python. This toolbox implements routines for blind source decomposition and connectivity estimation with the MVARICA approach. Additionally, a novel extension called CSPVARICA is available for labeled data. SCoT estimates connectivity from various spectral measures relying on vector autoregressive (VAR) models. Optionally, these VAR models can be regularized to facilitate ill-posed applications such as single-trial fitting. We demonstrate basic usage of SCoT on motor imagery (MI) data. Furthermore, we show simulation results of utilizing SCoT for feature extraction in a BCI application. These results indicate that CSPVARICA and correct regularization can significantly improve MI classification. While SCoT was mainly designed for application in BCIs, it contains useful tools for other areas of neuroscience. SCoT is a software package that (1) brings combined source decomposition and connectivity estimation to the open Python platform, and (2) offers tools for single-trial connectivity estimation. The source code is released under the MIT license and is available online at github.com/SCoT-dev/SCoT. PMID:24653694

  13. On the Chern-Gauss-Bonnet Theorem and Conformally Twisted Spectral Triples for C*-Dynamical Systems

    NASA Astrophysics Data System (ADS)

    Fathizadeh, Farzad; Gabriel, Olivier

    2016-02-01

    The analog of the Chern-Gauss-Bonnet theorem is studied for a C^*-dynamical system consisting of a C^*-algebra A equipped with an ergodic action of a compact Lie group G. The structure of the Lie algebra g of G is used to interpret the Chevalley-Eilenberg complex with coefficients in the smooth subalgebra A subset A as noncommutative differential forms on the dynamical system. We conformally perturb the standard metric, which is associated with the unique G-invariant state on A, by means of a Weyl conformal factor given by a positive invertible element of the algebra, and consider the Hermitian structure that it induces on the complex. A Hodge decomposition theorem is proved, which allows us to relate the Euler characteristic of the complex to the index properties of a Hodge-de Rham operator for the perturbed metric. This operator, which is shown to be selfadjoint, is a key ingredient in our construction of a spectral triple on A and a twisted spectral triple on its opposite algebra. The conformal invariance of the Euler characteristic is interpreted as an indication of the Chern-Gauss-Bonnet theorem in this setting. The spectral triples encoding the conformally perturbed metrics are shown to enjoy the same spectral summability properties as the unperturbed case.

  14. Contrast-enhanced spectral mammography based on a photon-counting detector: quantitative accuracy and radiation dose

    NASA Astrophysics Data System (ADS)

    Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo

    2017-03-01

    Contrast-enhanced mammography has been used to demonstrate functional information about a breast tumor by injecting contrast agents. However, the conventional single-exposure technique degrades the efficiency of tumor detection because of overlapping structures. Dual-energy techniques with energy-integrating detectors (EIDs) also increase radiation dose and reduce the accuracy of material decomposition due to the limitations of EIDs. On the other hand, spectral mammography with photon-counting detectors (PCDs) can resolve the issues of the conventional technique and of EIDs through their energy-discrimination capabilities. In this study, contrast-enhanced spectral mammography based on a PCD was implemented using a polychromatic dual-energy model, and the proposed technique was compared with an EID-based dual-energy technique in terms of quantitative accuracy and radiation dose. The results showed that the proposed technique improved quantitative accuracy and reduced radiation dose compared with the EID-based dual-energy technique. The quantitative accuracy of the PCD-based contrast-enhanced spectral mammography also improved slightly as a function of radiation dose. Therefore, contrast-enhanced spectral mammography based on a PCD can provide useful information for detecting breast tumors and improving diagnostic accuracy.

  15. Source spectral properties of small-to-moderate earthquakes in southern Kansas

    USGS Publications Warehouse

    Trugman, Daniel T.; Dougherty, Sara L.; Cochran, Elizabeth S.; Shearer, Peter M.

    2017-01-01

    The source spectral properties of injection-induced earthquakes give insight into their nucleation, rupture processes, and influence on ground motion. Here we apply a spectral decomposition approach to analyze P-wave spectra and estimate Brune-type stress drop for more than 2000 ML 1.5–5.2 earthquakes occurring in southern Kansas from 2014 to 2016. We find that these earthquakes are characterized by low stress drop values (median ∼0.4 MPa) compared to natural seismicity in California. We observe a significant increase in stress drop as a function of depth, but the shallow depth distribution of these events is not by itself sufficient to explain their lower stress drop. Stress drop increases with magnitude from M1.5–M3.5, but this scaling trend may weaken above M4 and also depends on the assumed source model. Although we observe a nonstationary, sequence-specific temporal evolution in stress drop, we find no clear systematic relation with the activity of nearby injection wells.
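A Brune-type stress drop follows from a fitted corner frequency via the standard source-radius relation. The sketch below uses the Brune (1970) constant 2.34 and an assumed near-source shear-wave speed; the example event is invented but lands near the median value reported above.

```python
import math

def brune_stress_drop(M0_Nm, fc_Hz, beta_m_s=3500.0):
    """Brune (1970) stress drop from seismic moment M0 [N*m] and corner
    frequency fc [Hz]; beta is the shear-wave speed near the source.
    Source radius a = 2.34 * beta / (2 * pi * fc); stress drop = 7*M0/(16*a^3)."""
    a = 2.34 * beta_m_s / (2.0 * math.pi * fc_Hz)
    return 7.0 * M0_Nm / (16.0 * a ** 3)

# An M ~2 event with a ~10 Hz corner frequency. The moment comes from the
# Hanks & Kanamori moment-magnitude relation: M0 = 10^(1.5*M + 9.1) N*m.
M0 = 10 ** (1.5 * 2.0 + 9.1)
delta_sigma = brune_stress_drop(M0, 10.0)    # Pa; here roughly 0.25 MPa
```

Because stress drop scales with the cube of corner frequency, modest errors in the fitted fc translate into large stress-drop uncertainty, which is one reason the assumed source model matters for the scaling trends discussed above.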

  16. HiC-spector: a matrix library for spectral and reproducibility analysis of Hi-C contact maps.

    PubMed

    Yan, Koon-Kiu; Yardimci, Galip Gürkan; Yan, Chengfei; Noble, William S; Gerstein, Mark

    2017-07-15

    Genome-wide proximity ligation based assays like Hi-C have opened a window to the 3D organization of the genome. In so doing, they present data structures that are different from conventional 1D signal tracks. To exploit the 2D nature of Hi-C contact maps, matrix techniques like spectral analysis are particularly useful. Here, we present HiC-spector, a collection of matrix-related functions for analyzing Hi-C contact maps. In particular, we introduce a novel reproducibility metric for quantifying the similarity between contact maps based on spectral decomposition. The metric successfully separates contact maps mapped from Hi-C data coming from biological replicates, pseudo-replicates and different cell types. Source code in Julia and Python, and detailed documentation is available at https://github.com/gersteinlab/HiC-spector . koonkiu.yan@gmail.com or mark@gersteinlab.org. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
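The reproducibility idea, comparing leading spectral components of two contact maps, can be sketched as a distance between the leading eigenvectors of their normalized Laplacians. This is a sketch of the idea only, not HiC-spector's exact normalization, and the maps below are random stand-ins for Hi-C data.

```python
import numpy as np

def spectral_distance(A, B, r=5):
    """Sum of distances between the r leading eigenvectors of the normalized
    Laplacians of two symmetric contact maps (smaller = more similar)."""
    def lap_eigvecs(M):
        d = M.sum(axis=1)
        d[d == 0] = 1.0                       # guard empty rows
        Dinv = np.diag(1.0 / np.sqrt(d))
        L = np.eye(M.shape[0]) - Dinv @ M @ Dinv
        _, vecs = np.linalg.eigh(L)           # ascending eigenvalues
        return vecs[:, :r]
    Va, Vb = lap_eigvecs(A), lap_eigvecs(B)
    # Eigenvectors are defined only up to sign; take the smaller distance.
    return sum(min(np.linalg.norm(Va[:, i] - Vb[:, i]),
                   np.linalg.norm(Va[:, i] + Vb[:, i])) for i in range(r))

rng = np.random.default_rng(5)
n = 40
X = rng.random((n, n)); map1 = X + X.T        # symmetric "contact maps"
Y = rng.random((n, n)); map2 = Y + Y.T

d_same = spectral_distance(map1, map1)        # identical maps: distance 0
d_diff = spectral_distance(map1, map2)        # unrelated maps: larger
```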

  17. Thermal and Chemical Characterization of Composite Materials. MSFC Center Director's Discretionary Fund Final Report, Project No. ED36-18

    NASA Technical Reports Server (NTRS)

    Stanley, D. C.; Huff, T. L.

    2003-01-01

    The purpose of this research effort was to: (1) provide a concise and well-defined property profile of current and developing composite materials using thermal and chemical characterization techniques and (2) optimize analytical testing requirements of materials. This effort applied a diverse array of methodologies to ascertain composite material properties. Often, a single method or technique will provide useful, but nonetheless incomplete, information on material composition and/or behavior. To more completely understand and predict material properties, a broad-based analytical approach is required. By developing a database of information comprised of both thermal and chemical properties, material behavior under varying conditions may be better understood. This is even more important in the aerospace community, where new composite materials and those in the development stage have little reference data. For example, Fourier transform infrared (FTIR) spectroscopy spectral databases available for identification of vapor phase spectra, such as those generated during experiments, generally refer to well-defined chemical compounds. Because this method renders a unique thermal decomposition spectral pattern, even larger, more diverse databases, such as those found in solid and liquid phase FTIR spectroscopy libraries, cannot be used. By combining this and other available methodologies, a database specifically for new materials and materials being developed at Marshall Space Flight Center can be generated. In addition, characterizing materials using this approach will be extremely useful in the verification of materials and identification of anomalies in NASA-wide investigations.

  18. Accelerated short-TE 3D proton echo-planar spectroscopic imaging using 2D-SENSE with a 32-channel array coil.

    PubMed

    Otazo, Ricardo; Tsai, Shang-Yueh; Lin, Fa-Hsuan; Posse, Stefan

    2007-12-01

    MR spectroscopic imaging (MRSI) with whole brain coverage in clinically feasible acquisition times remains a major challenge. A combination of MRSI with parallel imaging has shown promise to reduce the long encoding times, and 2D acceleration with a large array coil is expected to provide high acceleration capability. In this work a very high-speed method for 3D-MRSI based on the combination of proton echo planar spectroscopic imaging (PEPSI) with regularized 2D-SENSE reconstruction is developed. Regularization was performed by constraining the singular value decomposition of the encoding matrix to reduce the effect of low-value and overlapped coil sensitivities. The effects of spectral heterogeneity and discontinuities in coil sensitivity across the spectroscopic voxels were minimized by unaliasing the point spread function. As a result the contamination from extracranial lipids was reduced 1.6-fold on average compared to standard SENSE. We show that the acquisition of short-TE (15 ms) 3D-PEPSI at 3 T with a 32 x 32 x 8 spatial matrix using a 32-channel array coil can be accelerated 8-fold (R = 4 x 2) along y-z to achieve a minimum acquisition time of 1 min. Maps of the concentrations of N-acetyl-aspartate, creatine, choline, and glutamate were obtained with moderate reduction in spatial-spectral quality. The short acquisition time makes the method suitable for volumetric metabolite mapping in clinical studies. (c) 2007 Wiley-Liss, Inc.
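The regularization described, constraining the SVD of the encoding matrix, can be illustrated generically with a truncated-SVD solve: singular values below a threshold are discarded rather than inverted, so they cannot amplify noise. The encoding matrix below is a synthetic ill-conditioned stand-in, not a SENSE coil model.

```python
import numpy as np

def tsvd_solve(E, y, rel_cutoff=1e-2):
    """Solve E @ x = y with a truncated-SVD pseudoinverse: singular values
    below rel_cutoff * s_max are dropped instead of being inverted."""
    U, s, Vt = np.linalg.svd(E, full_matrices=False)
    keep = s > rel_cutoff * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

rng = np.random.default_rng(4)
# Build an encoding matrix with a cluster of tiny singular values, mimicking
# overlapped/low-value coil sensitivities.
E = rng.normal(size=(40, 20))
U, s, Vt = np.linalg.svd(E, full_matrices=False)
s[-5:] = 1e-6
E = (U * s) @ Vt

x_true = rng.normal(size=20)
y = E @ x_true + rng.normal(0, 1e-3, size=40)   # small measurement noise

x_naive = np.linalg.pinv(E) @ y                 # tiny singular values blow up
x_tsvd = tsvd_solve(E, y)                       # truncation stabilizes it

err_naive = np.linalg.norm(x_naive - x_true)
err_tsvd = np.linalg.norm(x_tsvd - x_true)
```

The price of truncation is a small bias (the components of x along the discarded directions are lost), traded against a large reduction in noise amplification.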

  19. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    PubMed

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm both as a native PT and as a PT with a denoising preprocessing step.

  20. Accuracy and speed in computing the Chebyshev collocation derivative

    NASA Technical Reports Server (NTRS)

    Don, Wai-Sun; Solomonoff, Alex

    1991-01-01

    We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff error. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
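The construction at issue can be demonstrated with the standard Chebyshev differentiation matrix (Trefethen's formulation), including the "negative-sum trick" for the diagonal, a common device for improving roundoff behavior; this is a generic construction, not the authors' code.

```python
import numpy as np

def cheb(N):
    """Chebyshev collocation differentiation matrix on the N+1 Gauss-Lobatto
    points x_j = cos(pi*j/N) (Trefethen, Spectral Methods in MATLAB)."""
    if N == 0:
        return np.zeros((1, 1)), np.ones(1)
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))  # off-diagonal entries
    # Negative-sum trick: set the diagonal so each row sums to zero, which
    # reduces the roundoff error of the matrix-vector derivative.
    D -= np.diag(D.sum(axis=1))
    return D, x

# Differentiating exp(x) on [-1, 1]: spectral accuracy with few points.
D, x = cheb(24)
err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))
```

For smooth functions the error decays faster than any power of N, so even N = 24 reaches near machine precision; the row-sum trick matters because the naive diagonal formula loses accuracy for large N.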

  1. The Multiscale Robin Coupled Method for flows in porous media

    NASA Astrophysics Data System (ADS)

    Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.

    2018-02-01

    A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established and its connection with the method of Douglas et al. (1993) [12], is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented aiming at illustrating several features of the new method. Initially we illustrate the possibility of switching from MMMFEM to MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM and the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.
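The Robin transmission conditions at the heart of the method can be sketched on a two-subdomain, one-dimensional model problem: each subdomain solves -u'' = f with a Robin condition at the interface, and Robin traces are exchanged until they match (the optimized Schwarz / Two-Lagrange-Multiplier form mentioned above). The discretization, Robin parameter, and model problem below are illustrative choices only.

```python
import numpy as np

def solve_subdomain(m, h, f, left, left_val, right, right_val, p):
    # Finite-difference solve of -u'' = f on m+1 points. Ends are Dirichlet
    # ("D": u = value) or Robin ("R": outward one-sided derivative + p*u = value).
    A = np.zeros((m + 1, m + 1))
    b = np.zeros(m + 1)
    for i in range(1, m):
        A[i, i - 1] = A[i, i + 1] = -1.0
        A[i, i] = 2.0
        b[i] = f * h * h
    if left == "D":
        A[0, 0], b[0] = 1.0, left_val
    else:   # outward normal at the left end points in -x
        A[0, 0], A[0, 1], b[0] = 1.0 / h + p, -1.0 / h, left_val
    if right == "D":
        A[m, m], b[m] = 1.0, right_val
    else:   # outward normal at the right end points in +x
        A[m, m], A[m, m - 1], b[m] = 1.0 / h + p, -1.0 / h, right_val
    return np.linalg.solve(A, b)

# Two non-overlapping subdomains of (0, 1) meeting at x = 0.5, for -u'' = 1
# with u(0) = u(1) = 0; the exact solution is u = x(1 - x)/2.
m, p = 100, 2.0                      # points per subdomain, Robin parameter
h = 0.5 / m
g1 = g2 = 0.0                        # Robin interface data, updated each sweep
for _ in range(50):
    u1 = solve_subdomain(m, h, 1.0, "D", 0.0, "R", g1, p)
    u2 = solve_subdomain(m, h, 1.0, "R", g2, "D", 0.0, p)
    # Exchange Robin traces: each subdomain receives the neighbour's
    # derivative (taken in its own outward direction) plus p times its value.
    g1 = (u2[1] - u2[0]) / h + p * u2[0]
    g2 = -(u1[-1] - u1[-2]) / h + p * u1[-1]

x1 = np.linspace(0.0, 0.5, m + 1)
err = np.max(np.abs(u1 - x1 * (1.0 - x1) / 2.0))
```

At the fixed point the two Robin conditions together enforce continuity of both the value and the flux across the interface; the choice of p controls the convergence rate, which is the knob the paper varies to interpolate between MMMFEM-like and MHM-like behavior.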

  2. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE PAGES

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris; ...

    2017-09-21

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  3. Scalable domain decomposition solvers for stochastic PDEs in high performance computing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Desai, Ajit; Khalil, Mohammad; Pettit, Chris

    Stochastic spectral finite element models of practical engineering systems may involve solutions of linear systems or linearized systems for non-linear problems with billions of unknowns. For stochastic modeling, it is therefore essential to design robust, parallel and scalable algorithms that can efficiently utilize high-performance computing to tackle such large-scale systems. Domain decomposition based iterative solvers can handle such systems. And though these algorithms exhibit excellent scalabilities, significant algorithmic and implementational challenges exist to extend them to solve extreme-scale stochastic systems using emerging computing platforms. Intrusive polynomial chaos expansion based domain decomposition algorithms are extended here to concurrently handle high resolution in both spatial and stochastic domains using an in-house implementation. Sparse iterative solvers with efficient preconditioners are employed to solve the resulting global and subdomain level local systems through multi-level iterative solvers. We also use parallel sparse matrix–vector operations to reduce the floating-point operations and memory requirements. Numerical and parallel scalabilities of these algorithms are presented for the diffusion equation having spatially varying diffusion coefficient modeled by a non-Gaussian stochastic process. Scalability of the solvers with respect to the number of random variables is also investigated.

  4. Harmonic analysis of traction power supply system based on wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed rail and heavy-haul transport, large numbers of AC-drive electric locomotives and EMUs are now in operation across the country, and the electrified railway has become the main harmonic source in China's power grid. Power-quality problems in electrified railways therefore require timely monitoring, assessment, and mitigation. The wavelet transform was developed on the basis of Fourier analysis; its basic idea comes from harmonic analysis and it rests on a rigorous theoretical model. It inherits and develops the localization idea of the Gabor transform while overcoming drawbacks such as the fixed window and the lack of discrete orthogonality, making it a widely studied spectral analysis tool. Wavelet analysis takes progressively finer time-domain steps in the high-frequency part, so it can focus on any detail of the signal being analyzed; this allows a comprehensive analysis of the harmonics of the traction power supply system, while the pyramid algorithm increases the speed of the wavelet decomposition. A MATLAB simulation shows that wavelet decomposition is effective for harmonic spectrum analysis of the traction power supply system.
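A multilevel wavelet decomposition splits a signal into an approximation plus detail bands covering successively lower frequency octaves, which is what allows harmonics and transients to be separated by level. Below is a hand-rolled orthonormal Haar decomposition (a minimal stand-in for a library call such as PyWavelets' wavedec); the 50 Hz fundamental, the harmonic component, and the sampling rate are invented values.

```python
import numpy as np

def haar_decompose(x, levels):
    """Multilevel orthonormal Haar decomposition of a signal whose length is
    divisible by 2**levels. Returns [approx, detail_L, ..., detail_1]."""
    coeffs = []
    a = np.asarray(x, dtype=float)
    for _ in range(levels):
        even, odd = a[0::2], a[1::2]
        coeffs.append((even - odd) / np.sqrt(2.0))   # detail coefficients
        a = (even + odd) / np.sqrt(2.0)              # coarser approximation
    coeffs.append(a)
    return coeffs[::-1]

# A 50 Hz fundamental plus a smaller high-frequency "harmonic" at 750 Hz,
# sampled at 3200 Hz: detail level 1 covers roughly fs/4..fs/2 (800-1600 Hz),
# level 2 covers 400-800 Hz, and the 50 Hz fundamental stays in the
# approximation after 5 levels.
fs, n = 3200, 1024
t = np.arange(n) / fs
sig = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 750 * t)
coeffs = haar_decompose(sig, levels=5)
```

Because the filter pair is orthonormal, the decomposition conserves signal energy exactly, so the energy found in each detail level directly measures the harmonic content of that frequency band.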

  5. An Aquatic Decomposition Scoring Method to Potentially Predict the Postmortem Submersion Interval of Bodies Recovered from the North Sea.

    PubMed

    van Daalen, Marjolijn A; de Kat, Dorothée S; Oude Grotebevelsborg, Bernice F L; de Leeuwe, Roosje; Warnaar, Jeroen; Oostra, Roelof Jan; M Duijst-Heesters, Wilma L J

    2017-03-01

    This study aimed to develop an aquatic decomposition scoring (ADS) method and investigated its predictive value in estimating the postmortem submersion interval (PMSI) of bodies recovered from the North Sea. The method, consisting of an ADS item list and a pictorial reference atlas, showed high interobserver agreement (Krippendorff's alpha ≥ 0.93) and hence proved to be valid. It was applied to data collected from closed cases, i.e., cases in which the PMSI was known, concerning bodies recovered from the North Sea from 1990 to 2013. Thirty-eight cases met the inclusion criteria and were scored by quantifying the observed total aquatic decomposition score (TADS). Statistical analysis demonstrated that TADS accurately predicts the PMSI (p < 0.001), confirming that the decomposition process in the North Sea is strongly correlated with time. © 2017 American Academy of Forensic Sciences.

  6. Compressed-sensing wavenumber-scanning interferometry

    NASA Astrophysics Data System (ADS)

    Bai, Yulei; Zhou, Yanzhou; He, Zhaoshui; Ye, Shuangli; Dong, Bo; Xie, Shengli

    2018-01-01

    The Fourier transform (FT), the nonlinear least-squares algorithm (NLSA), and the eigenvalue decomposition algorithm (EDA) are used to evaluate the phase field in depth-resolved wavenumber-scanning interferometry (DRWSI). However, because the wavenumber series of the laser's output is usually accompanied by nonlinearity and mode hops, FT, NLSA, and EDA, which are only suitable for equidistant interference data, often lead to non-negligible phase errors. In this work, a compressed-sensing method for DRWSI (CS-DRWSI) is proposed to resolve this problem. By using a randomly spaced inverse Fourier matrix and solving the underdetermined equation in the wavenumber domain, CS-DRWSI overcomes the nonuniform sampling and spectral leakage of the interference spectrum. Furthermore, it can evaluate interference data without prior knowledge of the object. The experimental results show that CS-DRWSI improves the depth resolution and suppresses sidelobes. It can replace the FT as a standard algorithm for DRWSI.

  7. Photochemical oxidation of persistent cyanide-related compounds

    NASA Astrophysics Data System (ADS)

    Budaev, S. L.; Batoeva, A. A.; Khandarkhaeva, M. S.; Aseev, D. G.

    2017-03-01

    Kinetic regularities of the photolysis of thiocyanate solutions using mono- and polychromatic UV radiation sources with different spectral ranges are studied. Comparative experiments aimed at investigating the role of photochemical action during the oxidation of thiocyanates with persulfates, and additional catalytic activation with iron(III) ions, are performed. The rate of conversion and the initial rate of thiocyanate oxidation are found to increase in the order UV < UV/S2O8^2- < S2O8^2-/Fe3+ < UV/S2O8^2-/Fe3+. A synergistic effect is detected when using the combined catalytic method for the destruction of thiocyanates by the UV/S2O8^2-/Fe3+ oxidation system. This effect is due to the formation of reactive oxygen species as a result of both the decomposition of persulfate and the reduction of inactive Fe3+ intermediates to Fe2+.

  8. Chaotic dynamics of controlled electric power systems

    NASA Astrophysics Data System (ADS)

    Kozlov, V. N.; Trosko, I. U.

    2016-12-01

    The conditions for the appearance of chaotic dynamics of electromagnetic and electromechanical processes in energy systems described by the Park-Gorev bilinear differential equations, with account for lags of coordinates and restrictions on control, have been formulated. On the basis of classical equations, the parameters of synchronous generators and power lines at which the chaotic dynamics of energy systems appears have been found. The qualitative and quantitative characteristics of chaotic processes in energy associations of two types, based on the Hopf theorem and methods of nonstationary linearization and decomposition, are given. The properties of spectral characteristics of chaotic processes have been investigated, and a qualitative similarity between the bilinear equations of power systems and the Lorenz equations has been found. These results can be used for modernization of the systems of control of energy objects. The qualitative and quantitative characteristics for power energy systems as objects of control, and for some laws of control with feedback, have been established.
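    The sensitive dependence on initial conditions that characterizes such chaotic regimes can be illustrated with the closely related Lorenz equations; the parameters and step size below are the textbook choices, not values from the paper:

    ```python
    import numpy as np

    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz equations."""
        x, y, z = state
        return state + dt * np.array([sigma * (y - x),
                                      x * (rho - z) - y,
                                      x * y - beta * z])

    # Two trajectories starting 1e-8 apart: sensitive dependence on
    # initial conditions, the signature of chaotic dynamics.
    a = np.array([1.0, 1.0, 1.0])
    b = a + np.array([1e-8, 0.0, 0.0])
    for _ in range(4000):   # integrate 20 time units
        a, b = lorenz_step(a), lorenz_step(b)
    separation = np.linalg.norm(a - b)
    ```

    The initially negligible perturbation grows exponentially (at a rate set by the largest Lyapunov exponent) until the two trajectories decorrelate completely, even though both remain on the bounded attractor.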

  9. SandiaMRCR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-01-05

    SandiaMCR was developed to identify pure components and their concentrations from spectral data. The software efficiently implements multivariate curve resolution alternating least squares (MCR-ALS), principal component analysis (PCA), and singular value decomposition (SVD). Version 3.37 also includes the PARAFAC-ALS and Tucker-1 (trilinear analysis) algorithms. The alternating least squares methods can be used to determine composition with incomplete or no prior information on the constituents and their concentrations. The software allows the specification of numerous preprocessing, initialization, data selection, and compression options for the efficient processing of large data sets, including equality and non-negativity constraints to realistically restrict the solution set, various normalization or weighting options based on the statistics of the data, several initialization choices, and data compression. It has been designed to provide the practicing spectroscopist the tools required to routinely analyze data in a reasonable time and without requiring expert intervention.
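    A bare-bones sketch of the MCR-ALS idea (alternating least-squares updates with non-negativity) on synthetic two-component spectra; it only illustrates the algorithm class SandiaMCR implements and is not the package itself:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic mixture data D = C @ S: two pure components with Gaussian
    # spectral bands (S) mixed in varying concentrations (C).
    wl = np.linspace(0, 1, 200)
    S_true = np.vstack([np.exp(-((wl - 0.3) / 0.05) ** 2),
                        np.exp(-((wl - 0.7) / 0.08) ** 2)])
    C_true = rng.uniform(0, 1, size=(30, 2))
    D = C_true @ S_true

    def mcr_als(D, n_comp, n_iter=200):
        """Alternate least-squares updates of concentrations C and spectra S,
        clipping negatives as a crude non-negativity constraint, so that
        D is approximated by C @ S."""
        # Initialize spectra from the SVD right singular vectors.
        _, _, Vt = np.linalg.svd(D, full_matrices=False)
        S = np.abs(Vt[:n_comp])
        for _ in range(n_iter):
            C = np.clip(D @ np.linalg.pinv(S), 0, None)
            S = np.clip(np.linalg.pinv(C) @ D, 0, None)
        return C, S

    C, S = mcr_als(D, 2)
    rel_err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
    ```

    Production MCR codes use proper non-negative least squares and additional equality/closure constraints rather than simple clipping, but the alternating structure is the same.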

  10. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
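    The decomposition idea above, that a multi-relaxation spectrum is the sum of interior single-relaxation (Debye-type) contributions, can be sketched as follows; the relaxation times and strengths are made-up example values, not measurements from the paper:

    ```python
    import numpy as np

    def single_relaxation(f, tau, strength):
        """Dimensionless absorption spectrum of one molecular relaxation
        process: a Debye-type peak, maximal where omega * tau = 1."""
        wt = 2 * np.pi * f * tau
        return strength * wt / (1 + wt ** 2)

    # A two-mode gas: the multi-relaxation spectrum is simply the sum of
    # the interior single-relaxation spectra.
    f = np.logspace(1, 7, 2000)            # frequency grid, Hz
    spectrum = (single_relaxation(f, tau=1e-4, strength=1.0)
                + single_relaxation(f, tau=1e-6, strength=0.4))

    # Each component peaks at f = 1 / (2 * pi * tau); check the slower mode.
    peak_f = f[np.argmax(single_relaxation(f, 1e-4, 1.0))]
    expected = 1 / (2 * np.pi * 1e-4)
    ```

    Reconstruction then amounts to fitting the N (tau, strength) pairs to absorption and sound-speed measurements at 2N frequencies, as the abstract describes.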

  11. Image Reconstruction for Hybrid True-Color Micro-CT

    PubMed Central

    Xu, Qiong; Yu, Hengyong; Bennett, James; He, Peng; Zainon, Rafidah; Doesburg, Robert; Opie, Alex; Walsh, Mike; Shen, Haiou; Butler, Anthony; Butler, Phillip; Mou, Xuanqin; Wang, Ge

    2013-01-01

    X-ray micro-CT is an important imaging tool for biomedical researchers. Our group has recently proposed a hybrid “true-color” micro-CT system to improve contrast resolution with lower system cost and radiation dose. The system incorporates an energy-resolved photon-counting true-color detector into a conventional micro-CT configuration, and can be used for material decomposition. In this paper, we demonstrate an interior color-CT image reconstruction algorithm developed for this hybrid true-color micro-CT system. A compressive sensing-based statistical interior tomography method is employed to reconstruct each channel in the local spectral imaging chain, with the reconstructed global gray-scale image from the conventional imaging chain serving as the initial guess. Principal component analysis was used to map the spectral reconstructions into the color space. The proposed algorithm was evaluated by numerical simulations, physical phantom experiments, and animal studies. The results confirm the merits of the proposed algorithm and demonstrate the feasibility of the hybrid true-color micro-CT system. Additionally, a “color diffusion” phenomenon was observed whereby high-quality true-color images are produced not only inside the region of interest but also in neighboring regions. It appears that harnessing this phenomenon could reduce the color detector size required for a given ROI, further reducing system cost and radiation dose. PMID:22481806

  12. Trace Elemental Imaging of Rare Earth Elements Discriminates Tissues at Microscale in Flat Fossils

    PubMed Central

    Gueriau, Pierre; Mocuta, Cristian; Dutheil, Didier B.; Cohen, Serge X.; Thiaudière, Dominique; Charbonnier, Sylvain; Clément, Gaël; Bertrand, Loïc

    2014-01-01

    The interpretation of flattened fossils remains a major challenge due to compression of their complex anatomies during fossilization, which makes critical anatomical features invisible or hardly discernible. Key features are often hidden under well-preserved decay-prone tissues or an unpreparable sedimentary matrix. A method offering access to such anatomical features is of paramount interest for resolving taxonomic affinities and for studying fossils with the least invasive preparation possible. Unfortunately, X-ray micro-computed tomography, widely used for visualizing hidden or internal structures of a broad range of fossils, is generally inapplicable to flattened specimens due to the very high differential absorbance in distinct directions. Here we show that synchrotron X-ray fluorescence spectral raster-scanning, coupled to spectral decomposition or a much faster Kullback-Leibler divergence based statistical analysis, provides microscale visualization of tissues. We imaged exceptionally well-preserved fossils from the Late Cretaceous without any prior delicate preparation. The contrasting elemental distributions greatly improved the discrimination of skeletal elements from both the sedimentary matrix and fossilized soft tissues. Aside from the content in alkaline earth elements and phosphorus, a critical parameter for tissue discrimination is the distinct amounts of rare earth elements. Local quantification of rare earths may open new avenues not only for fossil description but also for paleoenvironmental and taphonomical studies. PMID:24489809

  14. Hyperspectral scattering profiles for prediction of the microbial spoilage of beef

    NASA Astrophysics Data System (ADS)

    Peng, Yankun; Zhang, Jing; Wu, Jianhu; Hang, Hui

    2009-05-01

    Spoilage in beef is the result of decomposition and the formation of metabolites caused by the growth and enzymatic activity of microorganisms. There is still no technology for the rapid, accurate, and non-destructive detection of bacterially spoiled or contaminated beef. In this study, a hyperspectral imaging technique was exploited to measure biochemical changes within fresh beef. Fresh beef rump steaks were purchased from a commercial plant and left to spoil in a refrigerator at 8°C. Every 12 hours, hyperspectral scattering profiles over the spectral region from 400 nm to 1100 nm were collected directly from the sample surface in reflection mode in order to develop an optimal model for predicting beef spoilage; in parallel, the total viable count (TVC) per gram of beef was obtained by classical microbiological plating methods. The spectral scattering profiles at individual wavelengths were fitted accurately by a two-parameter Lorentzian distribution function. TVC prediction models were developed using multi-linear regression, relating individual Lorentzian parameters and their combinations at different wavelengths to the log10(TVC) value. The best predictions were obtained with r2 = 0.96 and SEP = 0.23 for log10(TVC). The research demonstrated that the hyperspectral imaging technique is a valid tool for real-time, non-destructive detection of bacterial spoilage in beef.
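    The multi-linear regression step can be sketched with ordinary least squares on stand-in data; the feature matrix and ground truth below are hypothetical placeholders, not the measured Lorentzian parameters from the study:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical stand-in data: two Lorentzian-profile parameters at
    # 5 wavelengths (10 features) for 40 samples, plus a linear ground
    # truth relating them to log10(TVC) with small noise.
    X = rng.uniform(0.5, 2.0, size=(40, 10))
    true_beta = rng.normal(size=10)
    y = X @ true_beta + 3.0 + 0.05 * rng.normal(size=40)   # log10(TVC)

    # Multi-linear regression via least squares with an intercept column.
    A = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ beta
    r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
    ```

    In practice the reported r2 and SEP would be computed on a held-out prediction set rather than the calibration data shown here.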

  15. Computing many-body wave functions with guaranteed precision: the first-order Møller-Plesset wave function for the ground state of helium atom.

    PubMed

    Bischoff, Florian A; Harrison, Robert J; Valeev, Edward F

    2012-09-14

    We present an approach to compute accurate correlation energies for atoms and molecules using an adaptive discontinuous spectral-element multiresolution representation for the two-electron wave function. Because of the exponential storage complexity of the spectral-element representation with the number of dimensions, a brute-force computation of two-electron (six-dimensional) wave functions with high precision was not practical. To overcome the key storage bottlenecks we utilized (1) a low-rank tensor approximation (specifically, the singular value decomposition) to compress the wave function, and (2) explicitly correlated R12-type terms in the wave function to regularize the Coulomb electron-electron singularities of the Hamiltonian. All operations necessary to solve the Schrödinger equation were expressed so that the reconstruction of the full-rank form of the wave function is never necessary. Numerical performance of the method was highlighted by computing the first-order Møller-Plesset wave function of a helium atom. The computed second-order Møller-Plesset energy is precise to ~2 microhartrees, which is at the precision limit of the existing general atomic-orbital-based approaches. Our approach does not assume special geometric symmetries, hence application to molecules is straightforward.
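    The low-rank SVD compression used for the two-electron wave function can be illustrated on a small two-variable function whose singular spectrum truncates exactly; the grid and function are illustrative, not the helium wave function:

    ```python
    import numpy as np

    # A smooth two-particle "wave function" psi(x1, x2) sampled on a grid.
    # Smooth correlated functions have rapidly decaying singular values,
    # which is what makes SVD compression effective.
    x = np.linspace(-3, 3, 128)
    X1, X2 = np.meshgrid(x, x, indexing="ij")
    psi = np.exp(-(X1 ** 2 + X2 ** 2) / 2) * (1 + 0.2 * X1 * X2)

    # Truncated SVD: keep only the singular pairs above a tolerance,
    # storing U, s, Vt factors instead of the full-rank grid.
    U, s, Vt = np.linalg.svd(psi)
    rank = int(np.sum(s > 1e-10 * s[0]))
    psi_lr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]
    compression_err = np.linalg.norm(psi - psi_lr) / np.linalg.norm(psi)
    ```

    This function is a sum of two separable terms, so the tolerance-based truncation recovers exactly rank 2; the paper's point is that operations can be carried out on such factored forms without ever reconstructing the full-rank array.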

  16. LES of flow in the street canyon

    NASA Astrophysics Data System (ADS)

    Fuka, Vladimír; Brechler, Josef

    2012-04-01

    Results of computer simulation of flow over a series of street canyons are presented in this paper. The setup is adapted from an experimental study by [4] with two different shapes of buildings. The problem is simulated by an LES model CLMM (Charles University Large Eddy Microscale Model) and results are analysed using proper orthogonal decomposition and spectral analysis. The results in the channel (layout from the experiment) are compared with results with a free top boundary.
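    For snapshot data, proper orthogonal decomposition as used in the analysis reduces to an SVD of the mean-subtracted snapshot matrix; a minimal sketch on synthetic two-mode data (not the CLMM output):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Snapshot matrix: each column is one "velocity field" sample, built
    # from two coherent spatial modes with random time coefficients plus
    # a small amount of noise.
    x = np.linspace(0, 2 * np.pi, 100)
    modes = np.vstack([np.sin(x), np.sin(2 * x)])          # (2, 100)
    coeffs = rng.normal(size=(200, 2)) * np.array([3.0, 1.0])
    snapshots = (coeffs @ modes).T + 0.01 * rng.normal(size=(100, 200))

    # POD: SVD of the fluctuation (mean-subtracted) snapshot matrix.
    # Left singular vectors are the spatial POD modes; squared singular
    # values give each mode's share of the fluctuation energy.
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
    U, s, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = s ** 2 / np.sum(s ** 2)
    ```

    Ranking modes by their energy fraction is how POD identifies the dominant coherent structures in the canyon flow.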

  17. Integrated seismic tools to delineate Pliocene gas-charged geobody, offshore west Nile delta, Egypt

    NASA Astrophysics Data System (ADS)

    Othman, Adel A. A.; Bakr, Ali; Maher, Ali

    2017-06-01

    The Nile delta is rapidly emerging as a major gas province; commercial gas accumulations have been proved in shallow Pliocene channels of the El-Wastani Formation. The Solar gas discovery is one of the turbidite slope channels within the shallow Pliocene level, proved by the Solar-1 well. The main challenge of seismic reservoir characterization is to discriminate between gas sand, water sand, and shale, and to extract the gas-charged geobody from the seismic data. A detailed study of channel connectivity and lithological discrimination was carried out to delineate the gas-charged geobody. Seismic data, being non-stationary in nature, have time-varying frequency content. Spectral decomposition of a seismic signal aims to characterize the time-dependent frequency response of subsurface rocks and reservoirs for imaging and mapping of bed thickness and geologic discontinuities; it unravels the seismic signal into its constituent frequencies. A crossplot between P-wave impedance (Ip) and S-wave impedance (Is) derived from well logs (P-wave velocity, S-wave velocity, and density) can be used to discriminate between gas-bearing sand, water-bearing sand, and shale. Since the Ip vs. Is crossplot shows clear separation in P-impedance, post-stack inversion is sufficient. Integrating the inversion results with Ip vs. Is crossplot cutoffs helps generate 3D lithofacies cubes, which are used to extract facies geobodies.
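    Spectral decomposition of a non-stationary trace is commonly computed with a short-time Fourier transform; the sketch below applies one to a synthetic trace whose dominant frequency changes mid-record (all parameters illustrative, and the paper's workflow may use a different time-frequency transform):

    ```python
    import numpy as np

    def stft_magnitude(trace, win=64, hop=16):
        """Short-time Fourier transform magnitude: decompose a trace into
        its constituent frequencies as a function of time."""
        window = np.hanning(win)
        frames = [trace[i:i + win] * window
                  for i in range(0, len(trace) - win + 1, hop)]
        return np.abs(np.fft.rfft(np.asarray(frames), axis=1))  # (time, freq)

    # Synthetic trace: 30 Hz energy in the first half, 60 Hz in the
    # second, sampled at 500 Hz -- a thin-bed-style frequency change.
    fs = 500.0
    t = np.arange(1000) / fs
    trace = np.where(t < 1.0, np.sin(2 * np.pi * 30 * t),
                     np.sin(2 * np.pi * 60 * t))

    tf = stft_magnitude(trace)
    freqs = np.fft.rfftfreq(64, d=1 / fs)
    early_peak = freqs[np.argmax(tf[2])]     # frame well inside the first half
    late_peak = freqs[np.argmax(tf[-3])]     # frame well inside the second half
    ```

    Mapping amplitude at selected constituent frequencies along a horizon is what allows thin beds and discontinuities such as channel edges to be imaged.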

  18. Four-dimensional data coupled to alternating weighted residue constraint quadrilinear decomposition model applied to environmental analysis: Determination of polycyclic aromatic hydrocarbons

    NASA Astrophysics Data System (ADS)

    Liu, Tingting; Zhang, Ling; Wang, Shutao; Cui, Yaoyao; Wang, Yutian; Liu, Lingfei; Yang, Zhe

    2018-03-01

    Qualitative and quantitative analysis of polycyclic aromatic hydrocarbons (PAHs) was carried out by three-dimensional fluorescence spectroscopy combined with Alternating Weighted Residue Constraint Quadrilinear Decomposition (AWRCQLD). The experimental subjects were acenaphthene (ANA) and naphthalene (NAP). First, to reduce the redundant information in the three-dimensional fluorescence spectral data, the wavelet transform was used to compress the data in preprocessing. Then, four-dimensional data was constructed from the excitation-emission fluorescence spectra of PAHs at different concentrations in three solvents: methanol, ethanol, and ultrapure water. The four-dimensional spectral data was analyzed by AWRCQLD, and the recovery rates of the PAHs in the three solvents were obtained and compared. The results showed, on the one hand, that PAHs can be measured more accurately from the higher-order data, with higher recovery rates; on the other hand, AWRCQLD better demonstrates the advantage of the fourth-order algorithm over second-order calibration and other third-order calibration algorithms. The recovery rate of ANA was 96.5% to 103.3% with a root mean square error of prediction of 0.04 μg L-1; the recovery rate of NAP was 96.7% to 115.7% with a root mean square error of prediction of 0.06 μg L-1.

  19. Decomposition of Multi-player Games

    NASA Astrophysics Data System (ADS)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.

  20. Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition

    PubMed Central

    Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac

    2013-01-01

    Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of the fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and the ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossing and the ball-and-sticks model at 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and a substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772
