Sample records for decomposition method results

  1. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. The three-component decomposition method suffers from two problems: overestimation of the volume scattering component in urban areas, and a model parameter that is artificially set to a fixed value. Although the overestimation can be partly corrected by a deorientation process, volume scattering still dominates some oriented urban areas, and the speckle-like decomposition results introduced by the artificially fixed value are not conducive to further image interpretation. This paper integrates the results of eigen-decomposition to solve both problems: the two principal eigenvectors substitute for the surface scattering and double-bounce scattering models, and the decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified on an ESAR PolSAR image, and the results show better performance in urban areas.
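    The power-balance step described above can be sketched with an off-the-shelf constrained solver. In the sketch below, the three model vectors and the observed power vector are illustrative placeholders (the paper derives its first two columns from the principal eigenvectors), and scipy's lsq_linear enforces nonnegativity of the component powers.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Columns: surface-like, double-bounce-like, and volume scattering models.
# All numbers are placeholders, not the paper's eigenvector-derived models.
A = np.array([[0.70, 0.10, 0.33],
              [0.10, 0.80, 0.33],
              [0.20, 0.10, 0.34]])
b = np.array([0.55, 0.30, 0.15])   # observed powers for one pixel

res = lsq_linear(A, b, bounds=(0.0, np.inf))   # powers must be nonnegative
Ps, Pd, Pv = res.x                             # decomposed scattering powers
print(Ps, Pd, Pv)
```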

  2. Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.

    2018-04-01

    The aims of this research are to model hotspots and to forecast 2017 hotspots in East Kutai, Kutai Kartanegara, and West Kutai. The methods used in this research were Holt exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method, and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained with the Box-Jenkins method were the Autoregressive Integrated Moving Average models ARIMA(1,1,0), ARIMA(0,2,1), and ARIMA(0,1,0). Comparing the results from all methods by Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, since it has the smallest RMSE; the Loess decomposition model was therefore used to forecast the number of hotspots. The forecasting results indicate that the hotspot pattern tends to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remains stationary in East Kutai.
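    As an aside on the Loess-based approach that won out above: seasonal-trend decomposition using Loess is available off the shelf, e.g. as STL in Python's statsmodels (the paper's tooling is not specified, so this is only an illustrative sketch on synthetic monthly counts):

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic monthly hotspot counts standing in for the observed series.
idx = pd.date_range("2013-01", periods=48, freq="MS")
rng = np.random.default_rng(0)
hotspots = pd.Series(50 + 10*np.sin(2*np.pi*np.arange(48)/12)
                     + rng.normal(0, 3, 48), index=idx)

result = STL(hotspots, period=12).fit()
trend, seasonal, resid = result.trend, result.seasonal, result.resid
# A naive forecast extends the fitted trend and repeats the seasonal cycle.
```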

  3. Adomian decomposition method used to solve the one-dimensional acoustic equations

    NASA Astrophysics Data System (ADS)

    Dispini, Meta; Mungkasi, Sudi

    2017-05-01

    In this paper we propose the use of the Adomian decomposition method to solve the one-dimensional acoustic equations. This recursive method is easy to compute, and its result approximates the exact solution. We use the Maple software to compute the series in the Adomian decomposition. We find that the Adomian decomposition method is able to solve the acoustic equations with physically correct behavior.
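    The recursion at the heart of the Adomian decomposition method is easy to illustrate on a far simpler problem than the acoustic equations. The sketch below (using sympy instead of the Maple software used in the paper) solves u'(t) = u(t) with u(0) = 1, where each term is the integral of the previous one and the partial sums converge to exp(t):

```python
import sympy as sp

t = sp.symbols('t')
u = sp.Integer(1)            # u_0 comes from the initial condition u(0) = 1
total = u
for _ in range(6):           # Adomian recursion: u_{n+1}(t) = integral of u_n
    u = sp.integrate(u, (t, 0, t))
    total += u
print(sp.expand(total))      # 1 + t + t**2/2 + ... : partial sum of exp(t)
```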

  4. Effect of Copper Oxide, Titanium Dioxide, and Lithium Fluoride on the Thermal Behavior and Decomposition Kinetics of Ammonium Nitrate

    NASA Astrophysics Data System (ADS)

    Vargeese, Anuj A.; Mija, S. J.; Muralidharan, Krishnamurthi

    2014-07-01

    Ammonium nitrate (AN) is crystallized along with copper oxide, titanium dioxide, and lithium fluoride. Thermal kinetic constants for the decomposition reaction of the samples were calculated by model-free (Friedman's differential and Vyazovkin's nonlinear integral) and model-fitting (Coats-Redfern) methods. To determine the decomposition mechanisms, 12 solid-state mechanisms were tested using the Coats-Redfern method; the results show that the decomposition mechanism for all samples is the contracting-cylinder mechanism. The phase behavior of the obtained samples was evaluated by differential scanning calorimetry (DSC), and structural properties were determined by X-ray powder diffraction (XRPD). The results indicate that copper oxide modifies the phase transition behavior and can catalyze AN decomposition, whereas LiF inhibits AN decomposition, and TiO2 shows no influence on the rate of decomposition. Possible explanations for these results are discussed. Supplementary materials are available for this article. Go to the publisher's online edition of the Journal of Energetic Materials to view the free supplemental file.
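    To make the Coats-Redfern model-fitting step concrete: for a candidate solid-state model g(α), the method fits ln[g(α)/T²] against 1/T, and the slope gives −E/R. A minimal sketch for the contracting-cylinder model reported above, on synthetic placeholder data:

```python
import numpy as np

R = 8.314                               # gas constant, J/(mol K)
T = np.linspace(480, 560, 30)           # temperature, K (synthetic)
alpha = np.linspace(0.05, 0.95, 30)     # conversion fraction (synthetic)

g = 1.0 - (1.0 - alpha)**0.5            # contracting-cylinder model g(alpha)
slope, intercept = np.polyfit(1.0 / T, np.log(g / T**2), 1)
E = -slope * R                          # apparent activation energy, J/mol
print(f"E = {E/1000:.1f} kJ/mol")
```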

  5. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated into two groups: image-based and rawdata-based. The image-based method is approximate, and beam-hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but it requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for the different energy spectra and thus acquire geometrically inconsistent rawdata sets, which cannot meet this requirement. This paper proposes a practical material decomposition method that performs rawdata-based material decomposition in the case of inconsistent measurement. The method first derives the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs the rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated using simulated FORBILD thorax phantom rawdata and dental CT rawdata, and the simulation results indicate that it can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
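    For contrast with the rawdata-based approach, the image-based decomposition mentioned above reduces to inverting a small linear system per pixel. A minimal sketch for two basis materials, with made-up calibration coefficients:

```python
import numpy as np

# Mixing matrix: rows are the (low-kVp, high-kVp) measurements, columns the
# two basis materials; the values are illustrative placeholders.
A = np.array([[1.00, 0.62],    # low-kVp response to (material 1, material 2)
              [0.48, 1.00]])   # high-kVp response to (material 1, material 2)

low = np.random.rand(64, 64)   # stand-ins for two registered CT images
high = np.random.rand(64, 64)

Ainv = np.linalg.inv(A)
stack = np.stack([low, high], axis=-1)   # shape (64, 64, 2)
materials = stack @ Ainv.T               # per-pixel material images
m1, m2 = materials[..., 0], materials[..., 1]
```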

  6. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal are obtained by the GWO algorithm, using the maximum weighted kurtosis index as the objective function. Finally, fault features are extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition and verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
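    The measurement index can be sketched in a few lines; the exact weighting between kurtosis and the correlation coefficient used in the paper may differ, so treat this particular combination as an assumption:

```python
import numpy as np
from scipy.stats import kurtosis

def weighted_kurtosis(imf, raw_signal):
    """Kurtosis of an IMF, weighted by its correlation with the raw signal
    so that noise-only IMFs with high kurtosis are penalized."""
    k = kurtosis(imf, fisher=False)               # plain (Pearson) kurtosis
    r = abs(np.corrcoef(imf, raw_signal)[0, 1])   # |correlation coefficient|
    return k * r
```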

  7. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem that preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization, and its algorithm is fully described for two-level optimization of structures assembled from finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers, to work concurrently on the same large problem.

  8. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of an unmixing matrix whose initial values are generated randomly, so the randomness of the initialization leads to different decomposition results, and a single one-time decomposition is not usually reliable for fMRI data analysis. Under this circumstance, several repeated-decomposition ICA (RDICA) methods were proposed to reveal the stability of ICA decomposition. Although RDICA has achieved satisfying results in validating the performance of ICA decomposition, it costs much computing time. To mitigate this problem, we propose a method, named ATGP-ICA, for fMRI data analysis that generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to demonstrate the effectiveness of the new method, and compared the performance of traditional one-time decomposition with ICA (ODICA), RDICA, and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves much computing time compared to RDICA. Furthermore, ROC (Receiver Operating Characteristic) power analysis also indicated better signal reconstruction performance for ATGP-ICA than for RDICA. Copyright © 2013 Elsevier Inc. All rights reserved.
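    The initialization issue is visible directly in off-the-shelf ICA implementations. In scikit-learn's FastICA, for example, the unmixing matrix starts from the w_init argument; fixing it makes the decomposition deterministic (here an identity matrix is used as a stand-in for the ATGP-derived targets, which are the paper's actual contribution):

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 4))   # stand-in for preprocessed fMRI data

n = 4
W0 = np.eye(n)                       # fixed initial unmixing matrix
ica = FastICA(n_components=n, w_init=W0, whiten="unit-variance",
              max_iter=500)
sources = ica.fit_transform(X)       # identical result on every run
```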

  9. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques, primarily Pauli decomposition and Sphere Di-plane Helix (SDH) decomposition, for forest resource assessment. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition, and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine different unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Using the algorithm developed for Radar Tools, the decomposition and classification code was implemented in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water, and hilly regions. Polarimetric SAR data possess a high potential for classification of the earth's surface. After applying the decomposition techniques, classification was performed by selecting regions of interest, and post-classification the overall accuracy was observed to be higher for the SDH-decomposed image, as SDH operates on individual pixels on a coherent basis and utilises the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image, but interpretation of the resulting image is difficult. The SDH decomposition technique seems to produce better results and interpretation than Pauli decomposition, though more quantification and further analysis are being done in this area of research. The comparison of polarimetric decomposition techniques and evolutionary classification techniques will be the future scope of this work.
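    Pauli decomposition itself is simple enough to sketch: the scattering matrix is projected onto the Pauli basis and the squared magnitudes are displayed as an RGB composite. The arrays below are random placeholders for calibrated single-look complex channels:

```python
import numpy as np

shape = (512, 512)
rng = np.random.default_rng(1)
S_hh, S_vv, S_hv = (rng.standard_normal(shape) + 1j*rng.standard_normal(shape)
                    for _ in range(3))

k1 = (S_hh + S_vv) / np.sqrt(2)   # odd-bounce (surface) component
k2 = (S_hh - S_vv) / np.sqrt(2)   # double-bounce component
k3 = np.sqrt(2) * S_hv            # volume (cross-pol) component

# Conventional display: R = |k2|^2, G = |k3|^2, B = |k1|^2.
rgb = np.stack([np.abs(k2)**2, np.abs(k3)**2, np.abs(k1)**2], axis=-1)
rgb /= rgb.max()
```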

  10. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or on two differential equations for determining the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.

  11. A New Approach of evaluating the damage in simply-supported reinforced concrete beam by Local mean decomposition (LMD)

    NASA Astrophysics Data System (ADS)

    Zhang, Xuebing; Liu, Ning; Xi, Jiaxin; Zhang, Yunqi; Zhang, Wenchun; Yang, Peipei

    2017-08-01

    Analyzing nonstationary response signals to obtain vibration characteristics is extremely important in vibration-based structural diagnosis methods. In this work, we introduce a more suitable time-frequency decomposition method, termed local mean decomposition (LMD), in place of the widely used empirical mode decomposition (EMD). By employing the LMD method, one can derive a group of component signals, each of which is more stationary, and then analyze the vibration state and assess the structural damage of a construction or building. We illustrate the effectiveness of LMD using synthetic data and experimental data recorded on a simply-supported reinforced concrete beam. Based on the decomposition results, an elementary method of damage diagnosis is then proposed.

  12. Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures

    NASA Astrophysics Data System (ADS)

    Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en

    2015-08-01

    Horizontal electrical heterogeneity of the subsurface earth originates mostly from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity severely distorts regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods built on isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the possible confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose modeling experiments with canonical decomposition in terms of 1D layered anisotropic models. Canonical decomposition is a mathematical decomposition method based on eigenstate analyses, as distinguished from distortion analyses, and can recover electrical information such as strike directions and maximum and minimum conductivity. We tested the method in numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective at revealing geological anisotropic information. Finally, against the background of anisotropy established by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses; the result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method for detecting anisotropy of geological media.

  13. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The system is fully decomposed into structural and control subsystem designs and an improved design is produced. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  14. A comparison of reduced-order modelling techniques for application in hyperthermia control and estimation.

    PubMed

    Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B

    1998-01-01

    Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.
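    A minimal sketch of the modal-decomposition reduction idea, assuming a linear thermal model dx/dt = A x + B u: the state is projected onto the eigenvectors of A associated with the slowest-decaying (most significant) modes. The matrices below are random stand-ins for a discretized bioheat model, and the eigenvectors of a nonsymmetric A may come in complex-conjugate pairs:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 200, 3, 10                      # full order, inputs, reduced order

A = -np.diag(rng.uniform(0.1, 50.0, n))   # stand-in stable thermal dynamics
A += 0.01 * rng.standard_normal((n, n))
B = rng.standard_normal((n, m))

w, V = np.linalg.eig(A)
keep = np.argsort(np.abs(w.real))[:r]     # retain the slowest-decaying modes
Vr = V[:, keep]                           # may be complex for nonsymmetric A

Vr_pinv = np.linalg.pinv(Vr)              # oblique projection onto the modes
Ar = Vr_pinv @ A @ Vr                     # reduced model: dx_r/dt = Ar x_r + Br u
Br = Vr_pinv @ B
```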

  15. About decomposition approach for solving the classification problem

    NASA Astrophysics Data System (ADS)

    Andrianova, A. A.

    2016-11-01

    This article describes the application of decomposition methods to the binary classification problem of constructing a linear classifier based on the Support Vector Machine method. Applying decomposition reduces the volume of calculations, in particular through the emerging possibility of building parallel versions of the algorithm, which is a very important advantage for solving problems with big data. The results of computational experiments conducted using the decomposition approach are analyzed. The experiments use a known data set for the binary classification problem.

  16. On the Possibility of Studying the Reactions of the Thermal Decomposition of Energy Substances by the Methods of High-Resolution Terahertz Spectroscopy

    NASA Astrophysics Data System (ADS)

    Vaks, V. L.; Domracheva, E. G.; Chernyaeva, M. B.; Pripolzin, S. I.; Revin, L. S.; Tretyakov, I. V.; Anfertyev, V. A.; Yablokov, A. A.; Lukyanenko, I. A.; Sheikov, Yu. V.

    2018-02-01

    We show the prospects of using high-resolution terahertz spectroscopy for continuous analysis of the gas-phase decomposition products of energy substances (including short-lived ones) over a wide temperature range. The experimental setup, which includes a terahertz spectrometer for studying the thermal decomposition reactions, is described. The results of an analysis of the gaseous decomposition products of energy substances are presented, using the example of ammonium nitrate heated from room temperature to 167°C.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, T; Dong, X; Petrongolo, M

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve less low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
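    The statistical property this method exploits can be stated in a few lines: if the decomposition is locally linear, x = A⁻¹b, then independent noise on the two measurements propagates to correlated noise on the decomposed images with covariance A⁻¹ Σ A⁻ᵀ, whose inverse is the natural penalty weight. Illustrative numbers only:

```python
import numpy as np

A = np.array([[1.00, 0.48],     # decomposition matrix (placeholder values)
              [0.62, 1.00]])
Sigma = np.diag([0.01, 0.02])   # independent noise variances of the two scans

Ainv = np.linalg.inv(A)
cov = Ainv @ Sigma @ Ainv.T     # amplified, strongly correlated noise
weight = np.linalg.inv(cov)     # penalty weight in the least-square term
print(cov)
```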

  18. xEMD procedures as a data-assisted filtering method

    NASA Astrophysics Data System (ADS)

    Machrowska, Anna; Jonak, Józef

    2018-01-01

    The article presents the possibility of using the Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD), Complete Ensemble Empirical Mode Decomposition with Adaptive Noise (CEEMDAN), and Improved Complete Ensemble Empirical Mode Decomposition (ICEEMD) algorithms for mechanical system condition monitoring applications. Results are presented for the xEMD procedures applied to vibration signals of a system in different states of wear.
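    A minimal sketch of running two of these xEMD variants on a vibration-like signal, assuming the third-party PyEMD package (distributed as EMD-signal); this is not necessarily the tooling used in the article:

```python
import numpy as np
from PyEMD import EMD, CEEMDAN   # pip install EMD-signal (assumed dependency)

t = np.linspace(0.0, 1.0, 2000)
rng = np.random.default_rng(0)
signal = (np.sin(2*np.pi*12*t)           # low-frequency component
          + 0.5*np.sin(2*np.pi*150*t)    # "fault-like" high-frequency tone
          + 0.2*rng.standard_normal(t.size))

imfs_emd = EMD().emd(signal, t)            # plain EMD
imfs_ceemdan = CEEMDAN().ceemdan(signal)   # complete ensemble, adaptive noise
print(imfs_emd.shape, imfs_ceemdan.shape)  # (n_imfs, n_samples)
```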

  19. Integrated control/structure optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Zeiler, Thomas A.; Gilbert, Michael G.

    1990-01-01

    A method for integrated control/structure optimization by multilevel decomposition is presented. It is shown that several previously reported methods were actually partial decompositions wherein only the control was decomposed into a subsystem design. One of these partially decomposed problems was selected as a benchmark example for comparison. The present paper fully decomposes the system into structural and control subsystem designs and produces an improved design. Theory, implementation, and results for the method are presented and compared with the benchmark example.

  20. A novel ECG data compression method based on adaptive Fourier decomposition

    NASA Astrophysics Data System (ADS)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach that can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most high-performance algorithms, our method does not use any preprocessing operation before compression. Huffman coding is employed for further compression. Validated on 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition steps, along with a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
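    The two figures of merit quoted above are straightforward to compute; a small helper, with the bit counts left as assumptions about the coder:

```python
import numpy as np

def prd(x, x_rec):
    """Percentage root mean square difference between the original and
    reconstructed signals."""
    return 100.0 * np.sqrt(np.sum((x - x_rec)**2) / np.sum(x**2))

def compression_ratio(original_bits, compressed_bits):
    """CR: size of the raw record over its size after AFD + Huffman coding."""
    return original_bits / compressed_bits
```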

  1. Adaptive fault feature extraction from wayside acoustic signals from train bearings

    NASA Astrophysics Data System (ADS)

    Zhang, Dingcheng; Entezami, Mani; Stewart, Edward; Roberts, Clive; Yu, Dejie

    2018-07-01

    Wayside acoustic detection of train bearing faults plays a significant role in maintaining safety in the railway transport system. However, the bearing fault information is normally masked by strong background noise and harmonic interference generated by other components (e.g., axles and gears). In order to extract the bearing fault information effectively, a novel method called improved singular value decomposition (ISVD) with resonance-based signal sparse decomposition (RSSD), namely the ISVD-RSSD method, is proposed in this paper. A Savitzky-Golay (S-G) smoothing filter is used to filter singular vectors (SVs) in the ISVD method, as an extension of the singular value decomposition (SVD) theorem. Hilbert spectrum entropy and a stepwise optimisation strategy are used to optimise the S-G filter's parameters. The RSSD method is able to nonlinearly decompose the wayside acoustic signal of a faulty train bearing into high and low resonance components, the latter of which contains the bearing fault information. However, high noise levels usually lead to poor decomposition results from the RSSD method. Hence, the collected wayside acoustic signal is first de-noised using the ISVD component of the ISVD-RSSD method, and the de-noised signal is then decomposed using the RSSD method. The obtained low resonance component is demodulated with a Hilbert transform so that the bearing fault can be detected by observing Hilbert envelope spectra. The effectiveness of the ISVD-RSSD method is verified through both laboratory and field-based experiments described in the paper. The results indicate that the proposed method is superior to conventional spectrum analysis and ensemble empirical mode decomposition methods.
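    The S-G smoothing component is available off the shelf; a minimal sketch of filtering a noisy singular vector with scipy, where the window length and polynomial order (the two parameters the paper tunes via Hilbert spectrum entropy) are set arbitrarily:

```python
import numpy as np
from scipy.signal import savgol_filter

rng = np.random.default_rng(0)
sv = np.sin(np.linspace(0, 6*np.pi, 500)) + 0.3*rng.standard_normal(500)

# window_length must be odd and greater than polyorder.
smoothed = savgol_filter(sv, window_length=31, polyorder=3)
```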

  2. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    PubMed

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. An SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  3. Kinetics and mechanism of solid decompositions — From basic discoveries by atomic absorption spectrometry and quadrupole mass spectroscopy to thorough thermogravimetric analysis

    NASA Astrophysics Data System (ADS)

    L'vov, Boris V.

    2008-02-01

    This paper sums up the evolution of the thermochemical approach to the interpretation of solid decompositions over the past 25 years. This period includes two stages, related to decomposition studies by different techniques: by ET AAS and QMS in 1981-2001, and by TG in 2002-2007. As a result of the ET AAS and QMS investigations, a method for determining the absolute rates of solid decompositions was developed, and a decomposition mechanism proceeding through congruent dissociative vaporization was discovered. On this basis, in the period from 1997 to 2001, the decomposition mechanisms of several classes of reactants were interpreted and some unusual effects observed in TA were explained. However, the thermochemical approach has not received support from other TA researchers. One potential reason for this distrust was the unreliability of the E values measured by the traditional Arrhenius plot method. A theoretical analysis and comparison of the metrological features of the different methods used to determine thermochemical quantities led to the conclusion that, in comparison with the Arrhenius plot and second-law methods, the third-law method is very much to be preferred. However, the third-law method cannot be used in kinetic studies based on the Arrhenius approach, because it requires measuring the equilibrium pressures of the decomposition products; the method of absolute rates, by contrast, is ideally suited for this purpose. As a result of the much higher precision of the third-law method, some quantitative conclusions that follow from the theory were confirmed, and several new effects, invisible in the framework of the Arrhenius approach, were revealed. In spite of the great progress achieved in developing a reliable methodology based on the third-law method, the thermochemical approach remains unclaimed as before.

  4. Native conflict aware layout decomposition in triple patterning lithography using bin-based library matching method

    NASA Astrophysics Data System (ADS)

    Ke, Xianhua; Jiang, Hao; Lv, Wen; Liu, Shiyuan

    2016-03-01

    Triple patterning (TP) lithography becomes a feasible technology for manufacturing as feature sizes scale down to sub-14/10 nm. In TP, a layout is decomposed into three masks, each followed by its own exposure and etch/freeze process. Previous works mostly focus on layout decomposition with minimal conflicts and stitches simultaneously. However, since any native conflict forces layout re-design/modification and re-running of the time-consuming decomposition, an effective method that can detect native conflicts (NCs) in a layout is desirable. In this paper, a bin-based library matching method is proposed for NC detection and layout decomposition. First, a layout is divided into bins and the corresponding conflict graph in each bin is constructed. Then the conflict graph is matched against a prebuilt colored library, so that NCs can be located and highlighted quickly.
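    At its core, TP layout decomposition is three-coloring of the conflict graph, and a native conflict is a subgraph that admits no valid coloring. A toy backtracking checker (not the paper's bin-based library matching) makes this concrete:

```python
def three_color(graph):
    """Try to 3-color a conflict graph given as {node: set_of_neighbors}.
    Returns a node->mask map, or None if the graph has a native conflict."""
    nodes = list(graph)
    colors = {}

    def assign(i):
        if i == len(nodes):
            return True
        v = nodes[i]
        for c in (0, 1, 2):                       # the three masks
            if all(colors.get(u) != c for u in graph[v]):
                colors[v] = c
                if assign(i + 1):
                    return True
                del colors[v]
        return False

    return colors if assign(0) else None

# Four mutually conflicting features (K4) form a native conflict.
k4 = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2}}
print(three_color(k4))   # None
```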

  5. Joint detection and tracking of size-varying infrared targets based on block-wise sparse decomposition

    NASA Astrophysics Data System (ADS)

    Li, Miao; Lin, Zaiping; Long, Yunli; An, Wei; Zhou, Yiyu

    2016-05-01

    The high variability of target size makes small target detection in Infrared Search and Track (IRST) a challenging task. A joint detection and tracking method based on block-wise sparse decomposition is proposed to address this problem. For detection, the infrared image is divided into overlapped blocks, and each block is weighted according to the local image complexity and target existence probabilities. Target-background decomposition is solved by block-wise inexact augmented Lagrange multipliers. For tracking, a labeled multi-Bernoulli (LMB) tracker tracks multiple targets, taking the result of single-frame detection as input, and provides the corresponding target existence probabilities back to the detector. Unlike fixed-size methods, the proposed method can accommodate size-varying targets, since it makes no special assumption about the size and shape of small targets. Because the decomposition is exact, classical target measurements are extended and additional direction information is provided to improve tracking performance. The experimental results show that the proposed method can effectively suppress background clutter, and detect and track size-varying targets in infrared images.

  6. The deconvolution of complex spectra by artificial immune system

    NASA Astrophysics Data System (ADS)

    Galiakhmetova, D. I.; Sibgatullin, M. E.; Galimullin, D. Z.; Kamalova, D. I.

    2017-11-01

    An application of the artificial immune system method to the decomposition of complex spectra is presented. Results are demonstrated for the decomposition of a model contour consisting of three Gaussian components. The artificial immune system is an optimization method inspired by the behaviour of the immune system, and belongs to the modern heuristic search methods for global optimization.
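    The underlying task, recovering three overlapping Gaussian components from a composite contour, can also be posed as an ordinary least-squares fit; here scipy's curve_fit stands in for the immune-system optimizer used in the paper:

```python
import numpy as np
from scipy.optimize import curve_fit

def three_gaussians(x, *p):
    # p packs (amplitude, center, width) for each of the three components.
    return sum(a * np.exp(-((x - c) / w)**2)
               for a, c, w in zip(*[iter(p)] * 3))

x = np.linspace(0, 10, 400)
true = three_gaussians(x, 1.0, 3.0, 0.8, 0.6, 5.0, 1.2, 0.9, 7.0, 0.7)
y = true + 0.02 * np.random.default_rng(0).standard_normal(x.size)

p0 = (1, 2.5, 1, 0.5, 5, 1, 1, 7.5, 1)      # rough initial guesses
popt, _ = curve_fit(three_gaussians, x, y, p0=p0)
print(popt.reshape(3, 3))                    # recovered (a, c, w) triples
```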

  7. Adaptive Fourier decomposition based ECG denoising.

    PubMed

    Wang, Ze; Wan, Feng; Wong, Chi Man; Zhang, Liming

    2016-10-01

    A novel ECG denoising method is proposed based on the adaptive Fourier decomposition (AFD). The AFD decomposes a signal according to its energy distribution, making the algorithm suitable for separating a pure ECG signal from noise with overlapping frequency ranges but different energy distributions. A stop criterion for the iterative decomposition process in the AFD is calculated from the estimated signal-to-noise ratio (SNR) of the noisy signal. The proposed AFD-based method is validated on synthetic ECG signals generated by an ECG model and on real ECG signals from the MIT-BIH Arrhythmia Database, both with additive Gaussian white noise. Simulation results show that the proposed method performs better on denoising and QRS detection than major ECG denoising schemes based on the wavelet transform, the Stockwell transform, the empirical mode decomposition, and the ensemble empirical mode decomposition. Copyright © 2016 Elsevier Ltd. All rights reserved.

  8. Pressure-dependent decomposition kinetics of the energetic material HMX up to 3.6 GPa.

    PubMed

    Glascoe, Elizabeth A; Zaug, Joseph M; Burnham, Alan K

    2009-12-03

    The effect of pressure on the global thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Global decomposition rates were determined by fitting the fraction reacted to the extended-Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low-to-moderate pressures (i.e., between ambient pressure and 0.1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both the beta- and delta-polymorphs of HMX are sensitive to pressure in the thermally induced decomposition kinetics.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, E A; Zaug, J M; Burnham, A K

    The effect of pressure on the thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Decomposition rates were determined by fitting the fraction reacted to the extended Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analysis indicate that pressure accelerates the decomposition at low to moderate pressures (i.e., between ambient pressure and 1 GPa) and decelerates the decomposition at higher pressures. The decomposition acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure-inhibiting bond homolysis step(s), which would result in an increase in volume. These results indicate that both β- and δ-phase HMX are sensitive to pressure in the thermally induced decomposition kinetics.

  10. Electrochemical and Infrared Absorption Spectroscopy Detection of SF6 Decomposition Products

    PubMed Central

    Dong, Ming; Ren, Ming; Ye, Rixin

    2017-01-01

    Sulfur hexafluoride (SF6) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF6 decomposition and ultimately generates several types of decomposition products. These SF6 decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF6 decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF6 gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. An SF6 decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF6 gas decomposition and is verified to reliably and accurately detect the gas components and concentrations. PMID:29140268

  11. Calibration methods influence quantitative material decomposition in photon-counting spectral CT

    NASA Astrophysics Data System (ADS)

    Curtis, Tyler E.; Roeder, Ryan K.

    2017-03-01

    Photon-counting detectors and nanoparticle contrast agents can potentially enable molecular imaging and material decomposition in computed tomography (CT). Material decomposition has been investigated using both simulated and acquired data sets. However, the effect of calibration methods on material decomposition has not been systematically investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on quantitative material decomposition. A commercially available photon-counting spectral micro-CT (MARS Bioimaging) was used to acquire images with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material basis matrix values were determined using multiple linear regression models, and material decomposition was performed using a maximum a posteriori estimator. The accuracy of quantitative material decomposition was evaluated by the root mean squared error (RMSE), specificity, sensitivity, and area under the curve (AUC). An increased maximum concentration (range) in the calibration significantly improved RMSE, specificity, and AUC. The effects of an increased number of concentrations in the calibration were not statistically significant for the conditions in this study. The overall results demonstrate that the accuracy of quantitative material decomposition in spectral CT is significantly influenced by calibration methods, which must therefore be carefully considered for the intended diagnostic imaging application.
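    The calibration step described above amounts to a multiple linear regression from known concentrations to measured energy-bin attenuation. A sketch with synthetic phantom vials (five energy bins, one contrast agent), where all numbers are placeholders:

```python
import numpy as np

rng = np.random.default_rng(0)

conc = np.array([0.0, 2.0, 4.0, 8.0, 16.0])    # calibration vials, mg/mL

# Measured attenuation in 5 energy bins for each vial (synthetic).
true_basis = rng.uniform(0.5, 2.0, 5)          # per-bin sensitivity of agent
bins = np.outer(conc, true_basis) + 0.05 * rng.standard_normal((5, 5))

# Least-squares fit of the material basis vector: bins ~= conc[:, None] @ m
m, *_ = np.linalg.lstsq(conc[:, None], bins, rcond=None)
print(m.ravel())   # estimated per-bin response (compare with true_basis)
```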

  12. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    NASA Astrophysics Data System (ADS)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called the grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, the proposed model improves prediction accuracy and the hit rates of directional prediction. The proposed model involves three main steps: decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) to simplify the complex data; individually predicting each IMF with support vector regression (SVR) optimized by GWO; and integrating all predicted IMFs into the ensemble result, serving as the final prediction, with another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods, and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models in prediction accuracy and hit rates of directional prediction.
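    For reference, the grey wolf optimizer itself is compact. The sketch below minimizes a generic objective (the sphere function, rather than the SVR validation error used in the paper):

```python
import numpy as np

def gwo(obj, dim, bounds, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    X = rng.uniform(lo, hi, (n_wolves, dim))
    for t in range(n_iter):
        fitness = np.apply_along_axis(obj, 1, X)
        alpha, beta, delta = X[np.argsort(fitness)[:3]]   # three leaders
        a = 2.0 - 2.0 * t / n_iter                        # decreases 2 -> 0
        for i in range(n_wolves):
            guided = []
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2*a*r1 - a, 2*r2
                guided.append(leader - A * np.abs(C*leader - X[i]))
            X[i] = np.clip(np.mean(guided, axis=0), lo, hi)
    fitness = np.apply_along_axis(obj, 1, X)
    return X[np.argmin(fitness)]

best = gwo(lambda x: np.sum(x**2), dim=5, bounds=(-10, 10))
print(best)   # approaches the zero vector
```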

  13. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is invented. This method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum.
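    The second step, extracting instantaneous frequency from an IMF via the Hilbert transform, takes only a few lines with scipy; the chirp below stands in for an extracted IMF:

```python
import numpy as np
from scipy.signal import hilbert

fs = 1000.0
t = np.arange(0, 1, 1/fs)
imf = np.cos(2*np.pi*(10*t + 20*t**2))       # chirp: 10 Hz sweeping to 50 Hz

analytic = hilbert(imf)
amplitude = np.abs(analytic)                 # local energy envelope
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2*np.pi) * fs  # instantaneous frequency in Hz
```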

  14. Complete ensemble local mean decomposition with adaptive noise and its application to fault diagnosis for rolling bearings

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin

    2018-06-01

    Mode mixing resulting from intermittent signals is an annoying problem associated with the local mean decomposition (LMD) method. Based on a noise-assisted approach, the ensemble local mean decomposition (ELMD) method alleviates the mode mixing issue of LMD to some degree. However, the product functions (PFs) produced by ELMD often contain considerable residual noise, so a relatively large number of ensemble trials are required to eliminate it. Furthermore, since different realizations of Gaussian white noise are added to the original signal, different trials may generate different numbers of PFs, making it difficult to take the ensemble mean. In this paper, a novel method called complete ensemble local mean decomposition with adaptive noise (CELMDAN) is proposed to solve these two problems. The method adds a particular and adaptive noise at every decomposition stage of each trial. Moreover, a unique residue is obtained after separating each PF, and this residue is used as the input for the next stage. Two simulated signals are analyzed to illustrate the advantages of CELMDAN in comparison with ELMD and CEEMDAN. To further demonstrate the efficiency of CELMDAN, the method is applied to diagnose faults in rolling bearings in an experimental case and an engineering case. The diagnosis results indicate that CELMDAN can extract more fault characteristic information with less interference than ELMD.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, X; Petrongolo, M; Wang, T

    Purpose: A general problem of dual-energy CT (DECT) is that the decomposition is sensitive to noise in the two sets of dual-energy projection data, resulting in severely degraded quality of the decomposed images. We have previously proposed an iterative denoising method for DECT. Using a linear decomposition function, that method does not gain the full benefits of DECT on beam-hardening correction. In this work, we expand the framework of our iterative method to include non-linear decomposition models for noise suppression in DECT. Methods: We first obtain decomposed projections, which are free of beam-hardening artifacts, using a lookup table pre-measured on a calibration phantom. First-pass material images with high noise are reconstructed from the decomposed projections using standard filtered-backprojection reconstruction. Noise in the decomposed images is then suppressed by an iterative method, formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, we include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Analytical formulae are derived to compute the variance-covariance matrix from the measured decomposition lookup table. Results: We have evaluated the proposed method via phantom studies. Using non-linear decomposition, our method effectively suppresses the streaking artifacts of beam-hardening and obtains more uniform images than our previous approach based on a linear model. The proposed method reduces the average noise standard deviation of two basis materials by one order of magnitude without sacrificing spatial resolution. Conclusion: We propose a general framework of iterative denoising for material decomposition in DECT. Preliminary phantom studies have shown that the proposed method improves image uniformity and reduces noise levels without resolution loss. In the future, we will perform more phantom studies to further validate its performance. This work is supported by a Varian MRA grant.

  16. Combined iterative reconstruction and image-domain decomposition for dual energy CT using total-variation regularization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dong, Xue; Niu, Tianye; Zhu, Lei, E-mail: leizhu@gatech.edu

    2014-05-15

    Purpose: Dual-energy CT (DECT) is being increasingly used for its capability of material decomposition and energy-selective imaging. A generic problem of DECT, however, is that the decomposition process is unstable in the sense that the relative magnitude of decomposed signals is reduced due to signal cancellation while the image noise is accumulating from the two CT images of independent scans. Direct image decomposition, therefore, leads to severe degradation of signal-to-noise ratio on the resultant images. Existing noise suppression techniques are typically implemented in DECT with the procedures of reconstruction and decomposition performed independently, which do not explore the statistical properties of decomposed images during the reconstruction for noise reduction. In this work, the authors propose an iterative approach that combines the reconstruction and the signal decomposition procedures to minimize the DECT image noise without noticeable loss of resolution. Methods: The proposed algorithm is formulated as an optimization problem, which balances the data fidelity and total variation of decomposed images in one framework, and the decomposition step is carried out iteratively together with reconstruction. The noise in the CT images from the proposed algorithm becomes well correlated even though the noise of the raw projections is independent on the two CT scans. Due to this feature, the proposed algorithm avoids noise accumulation during the decomposition process. The authors evaluate the method performance on noise suppression and spatial resolution using phantom studies and compare the algorithm with conventional denoising approaches as well as combined iterative reconstruction methods with different forms of regularization. Results: On the Catphan©600 phantom, the proposed method outperforms the existing denoising methods on preserving spatial resolution at the same level of noise suppression, i.e., a reduction of noise standard deviation by one order of magnitude. This improvement is mainly attributed to the high noise correlation in the CT images reconstructed by the proposed algorithm. Iterative reconstruction using different regularization, including quadratic or q-generalized Gaussian Markov random field regularization, achieves similar noise suppression from high noise correlation. However, the proposed TV regularization obtains a better edge-preserving performance. Studies of electron density measurement also show that the method reduces the average estimation error from 9.5% to 7.1%. On the anthropomorphic head phantom, the proposed method suppresses the noise standard deviation of the decomposed images by a factor of ∼14 without blurring the fine structures in the sinus area. Conclusions: The authors propose a practical method for DECT imaging reconstruction, which combines the image reconstruction and material decomposition into one optimization framework. Compared to the existing approaches, the method achieves superior performance on DECT imaging with respect to decomposition accuracy, noise reduction, and spatial resolution.

  17. Modified complementary ensemble empirical mode decomposition and intrinsic mode functions evaluation index for high-speed train gearbox fault diagnosis

    NASA Astrophysics Data System (ADS)

    Chen, Dongyue; Lin, Jianhui; Li, Yanping

    2018-06-01

    Complementary ensemble empirical mode decomposition (CEEMD) has been developed for the mode-mixing problem in Empirical Mode Decomposition (EMD) method. Compared to the ensemble empirical mode decomposition (EEMD), the CEEMD method reduces residue noise in the signal reconstruction. Both CEEMD and EEMD need enough ensemble number to reduce the residue noise, and hence it would be too much computation cost. Moreover, the selection of intrinsic mode functions (IMFs) for further analysis usually depends on experience. A modified CEEMD method and IMFs evaluation index are proposed with the aim of reducing the computational cost and select IMFs automatically. A simulated signal and in-service high-speed train gearbox vibration signals are employed to validate the proposed method in this paper. The results demonstrate that the modified CEEMD can decompose the signal efficiently with less computation cost, and the IMFs evaluation index can select the meaningful IMFs automatically.

  18. Iterative image-domain decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Niu, Tianye; Dong, Xue; Petrongolo, Michael

    2014-04-15

    Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces clinical values of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan®600) and an anthropomorphic head phantom. The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of the Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
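    The penalized weighted least-squares formulation described in this record translates naturally into code. Below is a minimal numpy/scipy sketch, an interpretation rather than the authors' implementation: the 2x2 decomposition matrix A and the inverse variance-covariance weight W are invented stand-ins, and the edge-dependent down-weighting of the smoothness term is omitted for brevity.

```python
# Penalized weighted least-squares decomposition solved by conjugate
# gradient (a sketch under assumed models, not the authors' code).
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

ny, nx = 64, 64
A = np.array([[0.6, 0.4],
              [0.3, 0.7]])                      # hypothetical mixing model
W = np.linalg.inv(np.array([[2.0, -0.5],
                            [-0.5, 1.5]]))      # assumed var-cov inverse
lam = 0.1                                       # smoothness strength

rng = np.random.default_rng(0)
b = rng.normal(size=(2, ny, nx))                # stand-in dual-energy images

def smooth_grad(x):
    """Gradient of the square sum of neighboring pixel differences."""
    g = np.zeros_like(x)
    dx = x[:, 1:, :] - x[:, :-1, :]
    dy = x[:, :, 1:] - x[:, :, :-1]
    g[:, 1:, :] += dx; g[:, :-1, :] -= dx
    g[:, :, 1:] += dy; g[:, :, :-1] -= dy
    return g

def matvec(v):
    """Apply A^T W A + lam * D^T D to flattened material images."""
    x = v.reshape(2, ny, nx)
    AtWAx = np.tensordot(A.T @ W @ A, x, axes=(1, 0))
    return (AtWAx + lam * smooth_grad(x)).ravel()

rhs = np.tensordot(A.T @ W, b, axes=(1, 0)).ravel()
n = 2 * ny * nx
sol, info = cg(LinearOperator((n, n), matvec=matvec), rhs, maxiter=200)
materials = sol.reshape(2, ny, nx)              # decomposed material images
```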

  19. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782
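    As a rough illustration of the four-stage pipeline on a synthetic series, here is a hedged Python sketch. It assumes the PyEMD package (installable as EMD-signal) for EMD/EEMD; an sklearn MLPRegressor stands in for the paper's RBF neural network and plain linear regression for the LNN combiner, so this shows the pipeline shape, not the authors' exact model.

```python
# Four-stage sketch: denoise -> decompose -> predict components -> ensemble.
import numpy as np
from PyEMD import EMD, EEMD
from sklearn.neural_network import MLPRegressor
from sklearn.linear_model import LinearRegression

def lagged(x, p=6):
    """Lag-matrix inputs and one-step-ahead targets."""
    X = np.column_stack([x[i:len(x) - p + i] for i in range(p)])
    return X, x[p:]

rng = np.random.default_rng(1)
t = np.linspace(0, 20, 600)
series = np.sin(t) + 0.5 * np.sin(3.1 * t) + 0.2 * rng.normal(size=t.size)

# Stage 1 (denoise): drop the highest-frequency IMF of a plain EMD.
denoised = series - EMD().emd(series)[0]

# Stage 2 (decompose): EEMD into IMFs plus residue.
components = EEMD(trials=50).eemd(denoised)

# Stage 3 (component prediction): one model per component.
preds, trues = [], []
for c in components:
    X, y = lagged(c)
    m = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                     random_state=0).fit(X[:-100], y[:-100])
    preds.append(m.predict(X[-100:]))
    trues.append(y[-100:])

# Stage 4 (ensemble): linear combination of component forecasts.
P, target = np.column_stack(preds), np.sum(trues, axis=0)
combiner = LinearRegression().fit(P[:50], target[:50])
forecast = combiner.predict(P[50:])             # final held-out forecast
```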

  1. A general framework of noise suppression in material decomposition for dual-energy CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrongolo, Michael; Dong, Xue; Zhu, Lei, E-mail: leizhu@gatech.edu

    Purpose: As a general problem of dual-energy CT (DECT), noise amplification in material decomposition severely reduces the signal-to-noise ratio on the decomposed images compared to that on the original CT images. In this work, the authors propose a general framework of noise suppression in material decomposition for DECT. The method is based on an iterative algorithm recently developed in their group for image-domain decomposition of DECT, with an extension to include nonlinear decomposition models. The generalized framework of iterative DECT decomposition enables beam-hardening correction with simultaneous noise suppression, which improves the clinical benefits of DECT. Methods: The authors propose to suppress noise on the decomposed images of DECT using convex optimization, which is formulated in the form of least-squares estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-squares term. Analytical formulas are derived to compute the variance-covariance matrix for decomposed images with general-form numerical or analytical decomposition. As a demonstration, the authors implement the proposed algorithm on phantom data using an empirical polynomial function of decomposition measured on a calibration scan. The polynomial coefficients are determined from the projection data acquired on a wedge phantom, and the signal decomposition is performed in the projection domain. Results: On the Catphan®600 phantom, the proposed noise suppression method reduces the average noise standard deviation of basis material images by one to two orders of magnitude, with a superior performance on spatial resolution as shown in comparisons of line-pair images and modulation transfer function measurements. On the synthesized monoenergetic CT images, the noise standard deviation is reduced by a factor of 2-3. By using nonlinear decomposition on projections, the authors' method effectively suppresses the streaking artifacts of beam hardening and obtains more uniform images than their previous approach based on a linear model. Similar performance of noise suppression is observed in the results of an anthropomorphic head phantom and a pediatric chest phantom generated by the proposed method. With beam-hardening correction enabled by their approach, the image spatial nonuniformity on the head phantom is reduced from around 10% on the original CT images to 4.9% on the synthesized monoenergetic CT image. On the pediatric chest phantom, their method suppresses the image noise standard deviation by a factor of around 7.5, and compared with linear decomposition, it reduces the estimation error of electron densities from 33.3% to 8.6%. Conclusions: The authors propose a general framework of noise suppression in material decomposition for DECT. Phantom studies have shown that the proposed method improves the image uniformity and the accuracy of electron density measurements by effective beam-hardening correction, and reduces the noise level without noticeable resolution loss.

  2. Data decomposition method for parallel polygon rasterization considering load balancing

    NASA Astrophysics Data System (ADS)

    Zhou, Chen; Chen, Zhenjie; Liu, Yongxue; Li, Feixue; Cheng, Liang; Zhu, A.-xing; Li, Manchun

    2015-12-01

    It is essential to adopt parallel computing technology to rapidly rasterize massive polygon data. In parallel rasterization, it is difficult to design an effective data decomposition method. Conventional methods ignore load balancing of polygon complexity in parallel rasterization and thus fail to achieve high parallel efficiency. In this paper, a novel data decomposition method based on polygon complexity (DMPC) is proposed. First, four factors that possibly affect the rasterization efficiency were investigated. Then, a metric represented by the boundary number and raster pixel number in the minimum bounding rectangle was developed to calculate the complexity of each polygon. Using this metric, polygons were rationally allocated according to the polygon complexity, and each process could achieve balanced loads of polygon complexity. To validate the efficiency of DMPC, it was used to parallelize different polygon rasterization algorithms and tested on different datasets. Experimental results showed that DMPC could effectively parallelize polygon rasterization algorithms. Furthermore, the implemented parallel algorithms with DMPC could achieve good speedup ratios of at least 15.69 and generally outperformed conventional decomposition methods in terms of parallel efficiency and load balancing. In addition, the results showed that DMPC exhibited consistently better performance for different spatial distributions of polygons.
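    The complexity metric and the balanced allocation lend themselves to a short sketch. The following Python code is an interpretation, not the paper's implementation: the weights combining boundary-vertex count and bounding-rectangle pixel count are invented, and the allocation uses a standard greedy longest-processing-time scheme over a min-heap of process loads.

```python
# Score polygons by an assumed complexity metric, then assign each polygon
# to the currently least-loaded process so total complexity is balanced.
import heapq

def complexity(polygon, cell=1.0, w_boundary=1.0, w_pixels=1.0):
    """polygon: list of (x, y) vertices; weights are illustrative."""
    xs, ys = zip(*polygon)
    pixels = (max(xs) - min(xs)) / cell * (max(ys) - min(ys)) / cell
    return w_boundary * len(polygon) + w_pixels * pixels

def allocate(polygons, n_procs):
    """Greedy longest-processing-time style load balancing."""
    heap = [(0.0, p) for p in range(n_procs)]     # (load, process id)
    heapq.heapify(heap)
    buckets = [[] for _ in range(n_procs)]
    scored = sorted(enumerate(polygons),
                    key=lambda ip: complexity(ip[1]), reverse=True)
    for idx, poly in scored:
        load, pid = heapq.heappop(heap)
        buckets[pid].append(idx)                  # polygon index -> process
        heapq.heappush(heap, (load + complexity(poly), pid))
    return buckets
```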

  3. Two Dimensional Finite Element Based Magnetotelluric Inversion using Singular Value Decomposition Method on Transverse Electric Mode

    NASA Astrophysics Data System (ADS)

    Tjong, Tiffany; Yihaa’ Roodhiyah, Lisa; Nurhasan; Sutarno, Doddy

    2018-04-01

    In this work, an inversion scheme was performed using a vector finite element (VFE) based 2-D magnetotelluric (MT) forward modelling. We use an inversion scheme with the singular value decomposition (SVD) method to improve the accuracy of MT inversion. The inversion scheme was applied to the transverse electric (TE) mode of MT. The SVD method was used in this inversion to decompose the Jacobian matrices. The singular values obtained from the decomposition process were analyzed. This enabled us to determine the importance of the data and therefore to define a threshold for the truncation process. The truncation of singular values in the inversion process could improve the resulting model.
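    A minimal numpy illustration of the truncated-SVD update described here, with a random stand-in Jacobian and data residual and an assumed 10% relative threshold for discarding small singular values:

```python
# Truncated-SVD model update for a Jacobian system (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
J = rng.normal(size=(40, 25))        # Jacobian (data x model parameters)
r = rng.normal(size=40)              # data residual, d_obs - d_pred

U, s, Vt = np.linalg.svd(J, full_matrices=False)
keep = s > 0.1 * s[0]                # assumed truncation threshold
dm = Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])   # regularized update
```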

  4. Exploring Patterns of Soil Organic Matter Decomposition with Students and the Public Through the Global Decomposition Project (GDP)

    NASA Astrophysics Data System (ADS)

    Wood, J. H.; Natali, S.

    2014-12-01

    The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat", or decompose organic matter they release greenhouse gasses such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time, data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.

  5. Short-term standard litter decomposition across three different ecosystems in middle taiga zone of West Siberia

    NASA Astrophysics Data System (ADS)

    Filippova, Nina V.; Glagolev, Mikhail V.

    2018-03-01

    The method of standard litter (tea) decomposition was implemented to compare decomposition rate constants (k) between different peatland ecosystems and coniferous forests in the middle taiga zone of West Siberia (near Khanty-Mansiysk). The standard protocol of the TeaComposition initiative was used to make the data usable for comparisons among different sites and zonobiomes worldwide. This article sums up the results of short-term decomposition (3 months) on the local scale. The decomposition rate constants differed significantly between the three ecosystem types: they were higher in forest than in bogs, and treed bogs had a lower decomposition constant than Sphagnum lawns. In general, the decomposition rate constants were close to those reported earlier for similar climatic conditions and habitats.
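    For readers unfamiliar with how such a rate constant is obtained, a back-of-envelope calculation assuming first-order mass loss, m_t = m_0 exp(-kt), with invented numbers rather than values from this study:

```python
# Rate constant from the mass fraction remaining after incubation,
# assuming first-order decay. The 0.70 figure is hypothetical.
import math

fraction_remaining = 0.70          # hypothetical 3-month tea-bag result
t_years = 3 / 12                   # exposure time in years
k = -math.log(fraction_remaining) / t_years
print(f"k = {k:.2f} yr^-1")        # ~1.43 yr^-1 for this example
```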

  6. Thermal Decomposition Behavior of Hydroxytyrosol (HT) in Nitrogen Atmosphere Based on TG-FTIR Methods.

    PubMed

    Tu, Jun-Ling; Yuan, Jiao-Jiao

    2018-02-13

    The thermal decomposition behavior of olive hydroxytyrosol (HT) was first studied using thermogravimetry (TG). Cracked chemical bonds and evolved gases during the thermal decomposition process of HT were also investigated using thermogravimetry coupled with infrared spectroscopy (TG-FTIR). Thermogravimetry-differential thermogravimetry (TG-DTG) curves revealed that the thermal decomposition of HT began at 262.8 °C and ended at 409.7 °C with a main mass loss. It was demonstrated that a high heating rate (over 20 K·min⁻¹) restrained the thermal decomposition of HT, resulting in an obvious thermal hysteresis. Furthermore, a thermal decomposition kinetics investigation of HT indicated that the non-isothermal decomposition mechanism was one-dimensional diffusion (D1), with integral form g(x) = x² and differential form f(x) = 1/(2x). Four combined approaches were employed to calculate the activation energy (E = 128.50 kJ·mol⁻¹) and the Arrhenius preexponential factor (ln A = 24.39 min⁻¹). In addition, a tentative mechanism of HT thermal decomposition was further developed. The results provide a theoretical reference for the potential thermal stability of HT.
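    The Coats-Redfern treatment linearizes ln(g(x)/T²) against 1/T so that the slope gives -E/R. A hedged Python illustration for the D1 mechanism with synthetic points generated from the reported E and ln A and an assumed heating rate of 10 K·min⁻¹ (so the fit recovers E by construction):

```python
# Coats-Redfern linearization for the D1 mechanism, g(x) = x^2.
import numpy as np

R = 8.314                                   # J mol^-1 K^-1
E = 128_500.0                               # J/mol (128.50 kJ/mol)
A = np.exp(24.39)                           # min^-1, from ln A = 24.39
beta = 10.0                                 # K/min, assumed heating rate

T = np.linspace(540, 680, 15)               # K, synthetic TG temperatures
y = np.log(A * R / (beta * E)) - E / (R * T)   # ln(g(x)/T^2), CR line
slope, _ = np.polyfit(1.0 / T, y, 1)
print(f"E ~ {-slope * R / 1000:.1f} kJ/mol")   # recovers 128.5
```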

  7. The Stone Cold Truth: The Effect of Concrete Encasement on the Rate and Pattern of Soft Tissue Decomposition.

    PubMed

    Martin, D C; Dabbs, Gretchen R; Roberts, Lindsey G; Cleary, Megan K

    2016-03-01

    This study provides a descriptive analysis of taphonomic changes observed in the soft tissue of ten pigs (Sus scrofa) after being encased in Quickrete® concrete and excavated at monthly or bimonthly intervals over the course of 2 years. The best method of subject excavation was investigated. Rate and pattern of decomposition were compared to a nonencased control subject. Results demonstrate that subjects interred in concrete decomposed significantly slower than the control subject (p < 0.01), the difference being observable after 1 month. After 1 year, the encased subject was in the early stage of decomposition with purging fluids and intact organs present, versus complete skeletonization of the control subject. Concrete subjects also display a unique decomposition pattern, exhibiting a chemically burned outer layer of skin and a common separation of the dermal and epidermal layers. Results suggest that using traditional methods to estimate the postmortem interval on concrete subjects may result in underestimation. © 2015 American Academy of Forensic Sciences.

  8. Inventory control of raw material using silver meal heuristic method in PR. Trubus Alami Malang

    NASA Astrophysics Data System (ADS)

    Ikasari, D. M.; Lestari, E. R.; Prastya, E.

    2018-03-01

    The purpose of this study was to compare the total inventory cost calculated using the method applied by PR. Trubus Alami with the Silver Meal Heuristic (SMH) method. The study started by forecasting the cigarette demand from July 2016 to June 2017 (48 weeks) using the additive decomposition forecasting method. Additive decomposition was used because it has the lowest Mean Absolute Deviation (MAD) and Mean Squared Deviation (MSD) compared to other methods such as multiplicative decomposition, moving average, single exponential smoothing, and double exponential smoothing. The forecasting results were then converted into raw material needs and further processed using the SMH method to obtain the inventory cost. As expected, the results show that the order frequency using the SMH method was smaller than that of the method applied by Trubus Alami, which affected the total inventory cost. The results suggest that using the SMH method gave a 29.41% lower inventory cost, a cost difference of IDR 21,290,622. The findings therefore indicate that PR. Trubus Alami should apply the SMH method if the company wants to reduce its total inventory cost.
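    The Silver-Meal heuristic itself is standard: keep extending an order over future periods while the average per-period cost (setup plus holding) decreases, and start a new order once it rises. A minimal sketch with invented demand and cost figures:

```python
# Silver-Meal lot-sizing heuristic (textbook form, invented numbers).
def silver_meal(demand, setup_cost, hold_cost):
    """demand: per-period list; returns order quantity per period."""
    orders = [0] * len(demand)
    i = 0
    while i < len(demand):
        best_avg, span = float("inf"), 1
        holding = 0.0
        for j in range(i, len(demand)):
            holding += (j - i) * hold_cost * demand[j]
            avg = (setup_cost + holding) / (j - i + 1)
            if avg > best_avg:          # average cost started rising
                break
            best_avg, span = avg, j - i + 1
        orders[i] = sum(demand[i:i + span])
        i += span
    return orders

# Hypothetical weekly raw-material needs and costs.
print(silver_meal([80, 100, 125, 50, 70], setup_cost=100, hold_cost=1))
# -> [180, 0, 175, 0, 70]
```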

  9. Classification of fully polarimetric F-SAR (X/S) airborne radar images using decomposition methods. (Polish Title: Klasyfikacja treści polarymetrycznych obrazów radarowych z wykorzystaniem metod dekompozycji na przykładzie systemu F-SAR (X/S))

    NASA Astrophysics Data System (ADS)

    Mleczko, M.

    2014-12-01

    Polarimetric SAR data is not widely used in practice because it is not yet operationally available from satellites. Currently two approaches can be distinguished in POL-In-SAR technology: alternating polarization imaging (Alt-POL) and fully polarimetric imaging (QuadPol). The first is a subset of the second and is more operational, while the second is experimental because classification of such data requires a polarimetric decomposition of the scattering matrix in the first stage. In the literature, the decomposition process is divided into two types: coherent and incoherent decomposition. In this paper, the decomposition methods have been tested using data from the high-resolution airborne F-SAR system. The classification results have been interpreted in the context of land cover mapping capabilities.

  10. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  11. The development of a post-mortem interval estimation for human remains found on land in the Netherlands.

    PubMed

    Gelderman, H T; Boer, L; Naujocks, T; IJzermans, A C M; Duijst, W L J M

    2018-05-01

    The decomposition process of human remains can be used to estimate the post-mortem interval (PMI), but decomposition varies due to many factors. Temperature is believed to be the most important and can be connected to decomposition by using accumulated degree days (ADD). The aim of this research was to develop a decomposition scoring method and a formula to estimate the PMI using the developed scoring method and ADD. A decomposition scoring method and a Book of Reference (visual resource) were made. Ninety-one cases were used to develop a method to estimate the PMI. The photographs were scored using the decomposition scoring method. The temperature data were provided by the Royal Netherlands Meteorological Institute. The PMI was estimated using the total decomposition score (TDS), and using the TDS and ADD. The latter required an additional step, namely calculating the ADD from the finding date back until the predicted day of death. The developed decomposition scoring method had a high interrater reliability. The TDS significantly estimates the PMI (R² = 0.67 and 0.80 for indoor and outdoor bodies, respectively). When using the ADD, the R² decreased to 0.66 and 0.56. The developed decomposition scoring method is a practical method to measure decomposition for human remains found on land. The PMI can be estimated using this method, but caution is advised in cases with a long PMI. The ADD does not account for all the heat present in decomposing remains and is therefore a possible source of bias.
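    The "additional step" of accumulating degree days backwards from the finding date can be sketched in a few lines; the function name, temperatures, base temperature, and ADD target below are all invented for illustration:

```python
# Accumulate degree days backwards from the finding date until a target
# ADD is reached; the number of days stepped back estimates the PMI.
def pmi_from_add(daily_means, target_add, base=0.0):
    """daily_means: mean temperatures from finding date going backwards."""
    total, days = 0.0, 0
    for temp in daily_means:
        total += max(temp - base, 0.0)   # sub-base days contribute nothing
        days += 1
        if total >= target_add:
            return days
    return None                           # temperature record too short

print(pmi_from_add([14.2, 13.5, 15.1, 16.0, 12.8, 11.9], target_add=60))
# -> 5 (days)
```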

  12. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system of the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report gives preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.

  13. Domain decomposition for a mixed finite element method in three dimensions

    USGS Publications Warehouse

    Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.

    2003-01-01

    We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.

  14. A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System

    NASA Astrophysics Data System (ADS)

    Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang

    2018-01-01

    This paper proposes a new decomposition method for the security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is to use Ward equivalencing-based network reduction to reduce the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is to develop a price response function to exchange signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in fewer iterations and less computational time. The case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.

  15. Plant traits and decomposition: are the relationships for roots comparable to those for leaves?

    PubMed Central

    Birouste, Marine; Kazakou, Elena; Blanchard, Alain; Roumet, Catherine

    2012-01-01

    Background and Aims Fine root decomposition is an important determinant of nutrient and carbon cycling in grasslands; however, little is known about the factors controlling root decomposition among species. Our aim was to investigate whether interspecific variation in the potential decomposition rate of fine roots could be accounted for by root chemical and morphological traits, life history and taxonomic affiliation. We also investigated the co-ordinated variation in root and leaf traits and potential decomposition rates. Methods We analysed potential decomposition rates and the chemical and morphological traits of fine roots on 18 Mediterranean herbaceous species grown in controlled conditions. The results were compared with those obtained for leaves in a previous study conducted on similar species. Key Results Differences in the potential decomposition rates of fine roots between species were accounted for by root chemical composition, but not by morphological traits. The root potential decomposition rate varied with taxonomy, but not with life history. Poaceae, with high cellulose concentration and low concentrations of soluble compounds and phosphorus, decomposed more slowly than Asteraceae and Fabaceae. Patterns of root traits, including decomposition rate, mirrored those of leaf traits, resulting in a similar species clustering. Conclusions The highly co-ordinated variation of roots and leaves in terms of traits and potential decomposition rate suggests that changes in the functional composition of communities in response to anthropogenic changes will strongly affect biogeochemical cycles at the ecosystem level. PMID:22143881

  16. Determination of Kinetic Parameters for the Thermal Decomposition of Parthenium hysterophorus

    NASA Astrophysics Data System (ADS)

    Dhaundiyal, Alok; Singh, Suraj B.; Hanon, Muammel M.; Rawat, Rekha

    2018-02-01

    A kinetic study of the pyrolysis process of Parthenium hysterophorus is carried out using thermogravimetric analysis (TGA) equipment. The present study investigates the thermal degradation and the determination of kinetic parameters such as the activation energy E and the frequency factor A using the model-free methods of Flynn-Wall-Ozawa (FWO), Kissinger-Akahira-Sunose (KAS) and Kissinger, and the model-fitting Coats-Redfern method. The results derived from the thermal decomposition process divide the decomposition of Parthenium hysterophorus into three main stages: dehydration, active pyrolysis and passive pyrolysis. The DTG thermograms show that an increase in the heating rate causes the temperature peaks at the maximum weight-loss rate to shift towards a higher temperature regime. The results are compared with the Coats-Redfern (integral) method; the values of the kinetic parameters obtained from the model-free methods are in good agreement with the experimental results, whereas the results obtained through the Coats-Redfern model at different heating rates are not promising. However, the diffusion models provided a good fit to the experimental data.

  17. Plasmonic Thermal Decomposition/Digestion of Proteins: A Rapid On-Surface Protein Digestion Technique for Mass Spectrometry Imaging.

    PubMed

    Zhou, Rong; Basile, Franco

    2017-09-05

    A method based on plasmon surface resonance absorption and heating was developed to perform a rapid on-surface protein thermal decomposition and digestion suitable for imaging mass spectrometry (MS) and/or profiling. This photothermal process or plasmonic thermal decomposition/digestion (plasmonic-TDD) method incorporates a continuous wave (CW) laser excitation and gold nanoparticles (Au-NPs) to induce known thermal decomposition reactions that cleave peptides and proteins specifically at the C-terminus of aspartic acid and at the N-terminus of cysteine. These thermal decomposition reactions are induced by heating a solid protein sample to temperatures between 200 and 270 °C for a short period of time (10-50 s per 200 μm segment) and are reagentless and solventless, and thus are devoid of sample product delocalization. In the plasmonic-TDD setup the sample is coated with Au-NPs and irradiated with 532 nm laser radiation to induce thermoplasmonic heating and bring about site-specific thermal decomposition on solid peptide/protein samples. In this manner the Au-NPs act as nanoheaters that result in a highly localized thermal decomposition and digestion of the protein sample that is independent of the absorption properties of the protein, making the method universally applicable to all types of proteinaceous samples (e.g., tissues or protein arrays). Several experimental variables were optimized to maximize product yield, and they include heating time, laser intensity, size of Au-NPs, and surface coverage of Au-NPs. Using optimized parameters, proof-of-principle experiments confirmed the ability of the plasmonic-TDD method to induce both C-cleavage and D-cleavage on several peptide standards and the protein lysozyme by detecting their thermal decomposition products with matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). The high spatial specificity of the plasmonic-TDD method was demonstrated by using a mask to digest designated sections of the sample surface with the heating laser and MALDI-MS imaging to map the resulting products. The solventless nature of the plasmonic-TDD method enabled the nonenzymatic on-surface digestion of proteins to proceed with undetectable delocalization of the resulting products from their precursor protein location. The advantages of this novel plasmonic-TDD method include short reaction times (<30 s/200 μm), compatibility with MALDI, universal sample compatibility, high spatial specificity, and localization of the digestion products. These advantages point to potential applications of this method for on-tissue protein digestion and MS-imaging/profiling for the identification of proteins, high-fidelity MS imaging of high molecular weight (>30 kDa) proteins, and the rapid analysis of formalin-fixed paraffin-embedded (FFPE) tissue samples.

  18. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and of 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
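    For context, the standard CCA baseline that MEMD-CCA builds on can be sketched compactly: each candidate stimulus frequency gets sine/cosine reference harmonics, and the frequency with the largest canonical correlation against the EEG window is selected. This sketch uses sklearn's CCA on synthetic EEG, not the paper's data or code.

```python
# CCA-based SSVEP frequency recognition on a synthetic 8-channel window.
import numpy as np
from sklearn.cross_decomposition import CCA

fs, dur = 250, 2.0                       # sampling rate (Hz), window (s)
t = np.arange(0, dur, 1 / fs)
freqs = [8.0, 10.0, 12.0]                # candidate stimulus frequencies

rng = np.random.default_rng(0)
eeg = np.sin(2*np.pi*10.0*t)[:, None] + 0.8*rng.normal(size=(t.size, 8))

def reference(f, harmonics=2):
    """Sine/cosine reference set for frequency f and its harmonics."""
    cols = []
    for h in range(1, harmonics + 1):
        cols += [np.sin(2*np.pi*h*f*t), np.cos(2*np.pi*h*f*t)]
    return np.column_stack(cols)

def corr(X, Y):
    """First canonical correlation between EEG and reference signals."""
    u, v = CCA(n_components=1).fit(X, Y).transform(X, Y)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

scores = [corr(eeg, reference(f)) for f in freqs]
print(freqs[int(np.argmax(scores))])     # expected: 10.0
```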

  19. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    PubMed

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular femur bone architecture of radiographic images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as radial basis function multiquadric and hierarchical B-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.

  20. Simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii using excitation-emission matrix fluorescence coupled with chemometrics methods

    NASA Astrophysics Data System (ADS)

    Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju

    2018-02-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. Using the strategy of combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in the complex Traditional Chinese medicine system was achieved successfully, even in the presence of unexpected interferents. The physical or chemical separation step was avoided due to the use of "mathematical separation". Six second-order calibration methods were used, including parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS) with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. Consequently, for the validation samples, the analytical results obtained by the six second-order calibration methods were essentially accurate. For the Acorus tatarinowii samples, however, the results indicated a slightly better predictive ability of the N-PLS/RBL procedure over the other methods.

  1. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on the Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used; they represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an autoregressive neural network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method MSVD, in comparison with the forecasting models based on SWT.
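    A single-level sketch of the Hankel-SVD idea behind MSVD follows (the multilevel method reapplies this recursively; this is an illustration, not the authors' implementation): embed the series in a Hankel matrix, keep the leading singular component as the low-frequency part, and recover series by anti-diagonal averaging.

```python
# One level of Hankel-SVD decomposition into low/high frequency parts.
import numpy as np

def diag_avg(H):
    """Average anti-diagonals of a Hankel-like matrix back to a series."""
    L, K = H.shape
    out, cnt = np.zeros(L + K - 1), np.zeros(L + K - 1)
    for i in range(L):
        out[i:i + K] += H[i]
        cnt[i:i + K] += 1
    return out / cnt

def hankel_split(x, L=None):
    n = len(x)
    L = L or n // 2
    H = np.column_stack([x[i:i + L] for i in range(n - L + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    low = diag_avg(s[0] * np.outer(U[:, 0], Vt[0]))  # rank-1 component
    return low, x - low                              # low, high frequency

rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=120)) + np.sin(np.arange(120) / 3)
low, high = hankel_split(x)
```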

  3. Catalytic decomposition of toxic chemicals over metal-promoted carbon nanotubes.

    PubMed

    Li, Lili; Han, Changxiu; Han, Xinyu; Zhou, Yixiao; Yang, Li; Zhang, Baogui; Hu, Jianli

    2011-01-15

    Effective decomposition of toxic gaseous compounds is important for pollution control at many chemical manufacturing plants. This study explores catalytic decomposition of phosphine (PH₃) using novel metal-promoted carbon nanotubes (CNTs). The cerium-promoted Co/CNTs catalysts (CoCe/CNTs) are synthesized by means of a coimpregnation method and reduced by three different methods (H₂, KBH₄, NaH₂PO₂·H₂O/KBH₄). The morphology, structure, and composition of the catalysts are characterized using a number of analytical instruments including high-resolution transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, BET surface area measurement, and inductively coupled plasma. The activity of the catalysts in the PH₃ decomposition reaction is measured and correlated with their surface and structural properties. The characterization results show that the CoCe/CNTs catalyst reduced by H₂ possesses small particles and is thermally stable in the PH₃ decomposition reaction. The activities of these catalysts are compared and follow the sequence: CoCe/CNTs > Co/CNTs > CoCeBP/CNTs > CoCeB/CNTs. The difference in reduction method results in the formation of different active phases during the PH₃ decomposition reaction. After a catalytic activity test, only the CoP phase is formed on the CoCe/CNTs and Co/CNTs catalysts, whereas the multiple phases CoP, Co₂P, and Co are formed on CoCeBP/CNTs and CoCeB/CNTs. Results show that the CoP phase is formed predominantly on the CoCe/CNTs and Co/CNTs catalysts and is likely the most active phase for this reaction. Furthermore, the CoCe/CNTs catalyst exhibits not only the highest activity but also long-term stability in the PH₃ decomposition reaction. When operated in a fixed-bed reactor at 360 °C, a single-pass PH₃ conversion of about 99.8% can be achieved.

  4. Application of reiteration of Hankel singular value decomposition in quality control

    NASA Astrophysics Data System (ADS)

    Staniszewski, Michał; Skorupa, Agnieszka; Boguszewicz, Łukasz; Michalczuk, Agnieszka; Wereszczyński, Kamil; Wicher, Magdalena; Konopka, Marek; Sokół, Maria; Polański, Andrzej

    2017-07-01

    Medical centres are obliged to store past medical records, including the results of quality assurance (QA) tests of the medical equipment, which is especially useful in checking the reproducibility of medical devices and procedures. Analysis of multivariate time series is an important part of quality control of NMR data. In this work we propose an anomaly detection tool based on the reiteration of the Hankel singular value decomposition method. The presented method was compared with external software, and the authors obtained comparable results.

  5. Integration of progressive hedging and dual decomposition in stochastic integer programs

    DOE PAGES

    Watson, Jean-Paul; Guo, Ge; Hackebeil, Gabriel; ...

    2015-04-07

    We present a method for integrating the Progressive Hedging (PH) algorithm and the Dual Decomposition (DD) algorithm of Carøe and Schultz for stochastic mixed-integer programs. Based on the correspondence between lower bounds obtained with PH and DD, a method to transform weights from PH to Lagrange multipliers in DD is found. Fast progress in early iterations of PH speeds up convergence of DD to an exact solution. Finally, we report computational results on server location and unit commitment instances.

  6. A statistical approach based on accumulated degree-days to predict decomposition-related processes in forensic studies.

    PubMed

    Michaud, Jean-Philippe; Moreau, Gaétan

    2011-01-01

    Using pig carcasses exposed over 3 years in rural fields during spring, summer, and fall, we studied the relationship between decomposition stages and degree-day accumulation (i) to verify the predictability of the decomposition stages used in forensic entomology to document carcass decomposition and (ii) to build a degree-day accumulation model applicable to various decomposition-related processes. Results indicate that the decomposition stages can be predicted with accuracy from temperature records and that a reliable degree-day index can be developed to study decomposition-related processes. The development of degree-day indices opens new doors for researchers and allows for the application of inferential tools unaffected by climatic variability, as well as for the inclusion of statistics in a science that is primarily descriptive and in need of validation methods in courtroom proceedings. © 2010 American Academy of Forensic Sciences.

  7. Interface conditions for domain decomposition with radical grid refinement

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1991-01-01

    Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via a domain decomposition. The method is derived and justified via singular perturbation techniques.

  8. Sparse Solution of Fiber Orientation Distribution Function by Diffusion Decomposition

    PubMed Central

    Yeh, Fang-Cheng; Tseng, Wen-Yih Isaac

    2013-01-01

    Fiber orientation is the key information in diffusion tractography. Several deconvolution methods have been proposed to obtain fiber orientations by estimating a fiber orientation distribution function (ODF). However, the L2 regularization used in deconvolution often leads to false fibers that compromise the specificity of the results. To address this problem, we propose a method called diffusion decomposition, which obtains a sparse solution of the fiber ODF by decomposing the diffusion ODF obtained from q-ball imaging (QBI), diffusion spectrum imaging (DSI), or generalized q-sampling imaging (GQI). A simulation study, a phantom study, and an in-vivo study were conducted to examine the performance of diffusion decomposition. The simulation study showed that diffusion decomposition was more accurate than both constrained spherical deconvolution and the ball-and-sticks model. The phantom study showed that the angular error of diffusion decomposition was significantly lower than those of constrained spherical deconvolution at 30° crossing and the ball-and-sticks model at 60° crossing. The in-vivo study showed that diffusion decomposition can be applied to QBI, DSI, or GQI, and the resolved fiber orientations were consistent regardless of the diffusion sampling schemes and diffusion reconstruction methods. The performance of diffusion decomposition was further demonstrated by resolving crossing fibers on a 30-direction QBI dataset and a 40-direction DSI dataset. In conclusion, diffusion decomposition can improve angular resolution and resolve crossing fibers in datasets with low SNR and a substantially reduced number of diffusion encoding directions. These advantages may be valuable for human connectome studies and clinical research. PMID:24146772

  9. Offset-sparsity decomposition for enhancement of color microscopic image of stained specimen in histopathology: further results

    NASA Astrophysics Data System (ADS)

    Kopriva, Ivica; Popović Hadžija, Marijana; Hadžija, Mirko; Aralica, Gorana

    2016-03-01

    Recently, a novel data-driven offset-sparsity decomposition (OSD) method was proposed by us to increase the colorimetric difference between tissue structures present in color microscopic images of stained specimens in histopathology. The OSD method performs an additive decomposition of vectorized spectral images into an image-adapted offset term and a sparse term, with the sparse term representing the enhanced image. The method was tested on images of histological slides of human liver stained with hematoxylin and eosin, anti-CD34 monoclonal antibody and Sudan III. Herein, we present further results on the increase of the colorimetric difference between tissue structures present in images of human liver specimens with pancreatic carcinoma metastasis stained with Gomori, CK7, CDX2 and LCA, and with colon carcinoma metastasis stained with Gomori, CK20 and PAN CK. The obtained relative increase of colorimetric difference is in the range [19.36%, 103.94%].

  10. Decomposition Techniques for ICESat/GLAS Full-Waveform Data

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Gao, X.; Li, G.; Chen, J.

    2018-04-01

    The Geoscience Laser Altimeter System (GLAS) on board the Ice, Cloud, and land Elevation Satellite (ICESat) is the first long-duration spaceborne full-waveform LiDAR for measuring the topography of ice shelves and their temporal variation, as well as cloud and atmospheric characteristics. In order to extract the characteristic parameters of the waveform, the key step is to process the full-waveform data. In this paper, a modified waveform decomposition method is proposed to extract the echo components from the full waveform. First, initial parameter estimation is implemented through data preprocessing and waveform detection. Next, the waveform fitting is performed using the Levenberg-Marquardt (LM) optimization method. The results show that the modified waveform decomposition method can effectively extract overlapped and missing echo components compared with the results from the GLA14 product. The echo components can also be extracted from complex waveforms.
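    A hedged sketch of the fitting stage: detected peaks seed a Levenberg-Marquardt fit of a sum of Gaussians to a synthetic waveform, using scipy's least_squares with method='lm'. The initialization details of the actual GLAS processing chain are not reproduced here.

```python
# Gaussian waveform decomposition of two overlapping synthetic echoes.
import numpy as np
from scipy.optimize import least_squares
from scipy.signal import find_peaks

t = np.linspace(0, 100, 400)

def gaussians(p, t):
    """p = [a1, mu1, sig1, a2, mu2, sig2, ...] -> summed Gaussians."""
    p = np.asarray(p).reshape(-1, 3)
    return sum(a * np.exp(-((t - mu)**2) / (2 * s**2)) for a, mu, s in p)

rng = np.random.default_rng(0)
truth = [1.0, 40.0, 3.0, 0.6, 52.0, 5.0]          # two overlapping echoes
wave = gaussians(truth, t) + 0.02 * rng.normal(size=t.size)

# Initial estimates from detected peaks (the waveform-detection stage).
peaks, _ = find_peaks(wave, height=0.3, distance=20)
p0 = []
for idx in peaks[:2]:
    p0 += [wave[idx], t[idx], 4.0]                 # amplitude, center, width

fit = least_squares(lambda p: gaussians(p, t) - wave, p0, method="lm")
print(fit.x.reshape(-1, 3))                        # recovered components
```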

  11. Regularization of nonlinear decomposition of spectral x-ray projection images.

    PubMed

    Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise

    2017-09-01

    Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of our algorithm, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts even for the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster. For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10⁵ and when the marker concentration was equal to or larger than 0.03 g·cm⁻³. The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework in which projections are solved in parallel. © 2017 American Association of Physicists in Medicine.

  12. Pi2 detection using Empirical Mode Decomposition (EMD)

    NASA Astrophysics Data System (ADS)

    Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz

    2017-04-01

    Empirical Mode Decomposition has been used as an alternative to wavelet transformation for identifying onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms; they are almost always observed at substorm onset at mid to low latitudes on Earth's nightside. They are fed by the magnetic energy release caused by dipolarization processes, and their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative to the traditional procedure. EMD is a young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space, a characteristic frequency modulation is observed that can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows a spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine; this work demonstrates the applicability of the method to geomagnetic time series.

  13. Numerical simulations of incompressible laminar flows using viscous-inviscid interaction procedures

    NASA Astrophysics Data System (ADS)

    Shatalov, Alexander V.

    The present method is based on Helmholtz velocity decomposition where velocity is written as a sum of irrotational (gradient of a potential) and rotational (correction due to vorticity) components. Substitution of the velocity decomposition into the continuity equation yields an equation for the potential, while substitution into the momentum equations yields equations for the velocity corrections. A continuation approach is used to relate the pressure to the gradient of the potential through a modified Bernoulli's law, which allows the elimination of the pressure variable from the momentum equations. The present work considers steady and unsteady two-dimensional incompressible flows over an infinite cylinder and NACA 0012 airfoil shape. The numerical results are compared against standard methods (stream function-vorticity and SMAC methods) and data available in literature. The results demonstrate that the proposed formulation leads to a good approximation with some possible benefits compared to the available formulations. The method is not restricted to two-dimensional flows and can be used for viscous-inviscid domain decomposition calculations.
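    For clarity, the decomposition and the potential equation it induces can be written out (notation assumed, following the description above: φ is the scalar potential and u' the rotational correction):

```latex
\[
  \mathbf{u} = \nabla\phi + \mathbf{u}',
  \qquad
  \nabla\cdot\mathbf{u} = 0
  \;\Longrightarrow\;
  \nabla^{2}\phi = -\,\nabla\cdot\mathbf{u}' .
\]
```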

  14. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture utilizing curvature extrema

    NASA Technical Reports Server (NTRS)

    Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)

    2003-01-01

    A computer implemented physical signal analysis method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMFs, the data have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMFs. Then, these IMFs, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMFs through the Hilbert transform give a full energy-frequency-time distribution of the data, which is designated as the Hilbert Spectrum.
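    One sifting pass of the classical EMD can be condensed as follows; this is a generic textbook-style sketch using local extrema only, not the patented curvature-extrema variant described in this record.

```python
# One EMD sifting loop: spline envelopes through maxima/minima, subtract
# their mean, iterate; the result approximates the first IMF.
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift(x, t, n_pass=10):
    h = x.copy()
    for _ in range(n_pass):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 4 or len(mn) < 4:       # too few extrema to spline
            break
        upper = CubicSpline(t[mx], h[mx])(t)
        lower = CubicSpline(t[mn], h[mn])(t)
        h = h - (upper + lower) / 2          # remove the local mean
    return h                                  # candidate IMF

t = np.linspace(0, 10, 1000)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 0.4 * t)
imf1 = sift(x, t)
residue = x - imf1        # the next IMF would be sifted from this residue
```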

  15. Analytical separations of mammalian decomposition products for forensic science: a review.

    PubMed

    Swann, L M; Forbes, S L; Lewis, S W

    2010-12-03

    The study of mammalian soft tissue decomposition is an emerging area in forensic science, with a major focus of the research being the use of various chemical and biological methods to study the fate of human remains in the environment. Decomposition of mammalian soft tissue is a postmortem process that, depending on environmental conditions and physiological factors, will proceed until complete disintegration of the tissue. The major stages of decomposition involve complex reactions which result in the chemical breakdown of the body's main constituents; lipids, proteins, and carbohydrates. The first step to understanding this chemistry is identifying the compounds present in decomposition fluids and determining when they are produced. This paper provides an overview of decomposition chemistry and reviews recent advances in this area utilising analytical separation science. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. Improving multi-objective reservoir operation optimization with sensitivity-informed problem decomposition

    NASA Astrophysics Data System (ADS)

    Chu, J. G.; Zhang, C.; Fu, G. T.; Li, Y.; Zhou, H. C.

    2015-04-01

    This study investigates the effectiveness of a sensitivity-informed method for multi-objective operation of reservoir systems, which uses global sensitivity analysis as a screening tool to reduce the computational demands. Sobol's method is used to screen insensitive decision variables and guide the formulation of the optimization problems with a significantly reduced number of decision variables. This sensitivity-informed problem decomposition dramatically reduces the computational demands required for attaining high quality approximations of optimal tradeoff relationships between conflicting design objectives. The search results obtained from the reduced complexity multi-objective reservoir operation problems are then used to pre-condition the full search of the original optimization problem. In two case studies, the Dahuofang reservoir and the inter-basin multi-reservoir system in Liaoning province, China, sensitivity analysis results show that reservoir performance is strongly controlled by a small proportion of decision variables. Sensitivity-informed problem decomposition and pre-conditioning are evaluated in their ability to improve the efficiency and effectiveness of multi-objective evolutionary optimization. Overall, this study illustrates the efficiency and effectiveness of the sensitivity-informed method and the use of global sensitivity analysis to inform problem decomposition when solving the complex multi-objective reservoir operation problems.
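
    A minimal sketch of such Sobol screening, assuming the SALib package and a toy objective standing in for the reservoir simulation; the variable names and the 0.05 total-order cutoff are illustrative assumptions.

    ```python
    import numpy as np
    from SALib.sample import saltelli
    from SALib.analyze import sobol

    problem = {
        'num_vars': 5,
        'names': ['r1', 'r2', 'r3', 'r4', 'r5'],   # hypothetical release decisions
        'bounds': [[0.0, 1.0]] * 5,
    }
    X = saltelli.sample(problem, 1024)             # Saltelli cross-sampling design
    Y = X[:, 0]**2 + 0.1 * X[:, 1] + 0.01 * X[:, 2:].sum(axis=1)  # toy objective
    Si = sobol.analyze(problem, Y)
    # keep only decision variables with a non-negligible total-order index
    sensitive = [n for n, st in zip(problem['names'], Si['ST']) if st > 0.05]
    print(sensitive)   # variables retained in the reduced optimization problem
    ```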

  17. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    The spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. The uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planned path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method. The proposed method has been successfully applied onboard the two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the constrained attitude reorientation planning problem.

  18. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection. Material decomposition is an important step in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On the basis of the general optimization problem, total variation minimization is imposed on the coefficient images in our overall objective function with adjustable weights. We solve this constrained optimization problem under the framework of ADMM. Validation is performed on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging. Both numerical and physical experiments give visually better reconstructions than a general direct inverse method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
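
    The paper solves a joint TV-constrained problem with ADMM; as a much cruder stand-in that still illustrates why TV regularization of coefficient images helps, one can directly invert a hypothetical 2x2 spectral mixing per pixel and then TV-denoise each coefficient image:

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    # hypothetical 2x2 mixing of two basis materials into two energy channels
    A = np.array([[0.8, 0.3],
                  [0.4, 0.9]])
    y = np.random.rand(2, 128, 128)                # stand-in for the channel images
    c = np.tensordot(np.linalg.inv(A), y, axes=1)  # naive per-pixel direct inversion
    c_tv = np.stack([denoise_tv_chambolle(ci, weight=0.1) for ci in c])
    ```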

  19. Defect Detection in Textures through the Use of Entropy as a Means for Automatically Selecting the Wavelet Decomposition Level.

    PubMed

    Navarro, Pedro J; Fernández-Isla, Carlos; Alcover, Pedro María; Suardíaz, Juan

    2016-07-27

    This paper presents a robust method for defect detection in textures, entropy-based automatic selection of the wavelet decomposition level (EADL), based on a wavelet reconstruction scheme, for detecting defects in a wide variety of structural and statistical textures. Two main features are presented. One of the new features is an original use of the normalized absolute function value (NABS) calculated from the wavelet coefficients derived at various different decomposition levels in order to identify textures where the defect can be isolated by eliminating the texture pattern in the first decomposition level. The second is the use of Shannon's entropy, calculated over detail subimages, for automatic selection of the band for image reconstruction, which, unlike other techniques, such as those based on the co-occurrence matrix or on energy calculation, provides a lower decomposition level, thus avoiding excessive degradation of the image, allowing a more accurate defect segmentation. A metric analysis of the results of the proposed method with nine different thresholding algorithms determined that selecting the appropriate thresholding method is important to achieve optimum performance in defect detection. As a consequence, several different thresholding algorithms depending on the type of texture are proposed.
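
    One plausible reading of the entropy-based selection, sketched with PyWavelets; the wavelet, the pooling of detail coefficients, and the minimum-entropy rule are assumptions rather than the paper's exact criterion.

    ```python
    import numpy as np
    import pywt

    def select_level(image, wavelet='db4', max_level=4):
        """Pick the level whose pooled detail coefficients have minimum Shannon entropy."""
        best_level, best_entropy = 1, np.inf
        for level in range(1, max_level + 1):
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            details = np.concatenate([np.abs(d).ravel()
                                      for band in coeffs[1:] for d in band])
            p = details / details.sum()     # treat magnitudes as a distribution
            entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
            if entropy < best_entropy:
                best_level, best_entropy = level, entropy
        return best_level

    image = np.random.rand(256, 256)   # stand-in for a texture image
    print(select_level(image))
    ```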

  20. A New View of Earthquake Ground Motion Data: The Hilbert Spectral Analysis

    NASA Technical Reports Server (NTRS)

    Huang, Norden; Busalacchi, Antonio J. (Technical Monitor)

    2000-01-01

    A brief description of the newly developed Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA) method is given. The decomposition is adaptive and can be applied to both nonlinear and nonstationary data. An example of the method applied to a sample earthquake record is given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolutions.

  1. An Ensemble Multilabel Classification for Disease Risk Prediction

    PubMed Central

    Liu, Wei; Zhao, Hongling; Zhang, Chaoyang

    2017-01-01

    It is important to identify and prevent disease risk as early as possible through regular physical examinations. We formulate disease risk prediction as a multilabel classification problem. A novel Ensemble Label Power-set Pruned datasets Joint Decomposition (ELPPJD) method is proposed in this work. First, we transform the multilabel classification into a multiclass classification. Then, we propose the pruned datasets and joint decomposition methods to deal with the imbalanced learning problem. Two strategies, size balanced (SB) and label similarity (LS), are designed to decompose the training dataset. In the experiments, the dataset comes from real physical examination records. We contrast the performance of the ELPPJD method with the two different decomposition strategies. Moreover, a comparison between ELPPJD and the classic multilabel classification methods RAkEL and HOMER is carried out. The experimental results show that the ELPPJD method with the label similarity strategy has outstanding performance. PMID:29065647

  2. Trace Norm Regularized CANDECOMP/PARAFAC Decomposition With Missing Data.

    PubMed

    Liu, Yuanyuan; Shang, Fanhua; Jiao, Licheng; Cheng, James; Cheng, Hong

    2015-11-01

    In recent years, low-rank tensor completion (LRTC) problems have received a significant amount of attention in computer vision, data mining, and signal processing. The existing trace norm minimization algorithms for iteratively solving LRTC problems involve multiple singular value decompositions of very large matrices at each iteration. Therefore, they suffer from high computational cost. In this paper, we propose a novel trace norm regularized CANDECOMP/PARAFAC decomposition (TNCP) method for simultaneous tensor decomposition and completion. We first formulate a factor matrix rank minimization model by deducing the relation between the rank of each factor matrix and the mode-n rank of a tensor. Then, we introduce a tractable relaxation of our rank function, which leads to a convex combination of much smaller-scale matrix trace norm minimizations. Finally, we develop an efficient algorithm based on the alternating direction method of multipliers to solve our problem. The promising experimental results on synthetic and real-world data validate the effectiveness of our TNCP method. Moreover, TNCP is significantly faster than the state-of-the-art methods and scales to larger problems.
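
    The workhorse inside trace norm minimization by ADMM is the singular value thresholding operator, the proximal mapping of the trace norm; a minimal sketch follows. In a scheme like TNCP it is the factor matrices, much smaller than the tensor unfoldings, that would pass through such a step, which is where the speedup comes from.

    ```python
    import numpy as np

    def svt(M, tau):
        """Singular value thresholding: the proximal operator of the trace norm."""
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt
    ```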

  3. Phase unwrapping with graph cuts optimization and dual decomposition acceleration for 3D high-resolution MRI data.

    PubMed

    Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi

    2017-03-01

    The existence of low-SNR regions and rapid phase variations poses challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, whereas the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three-dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared the proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  4. Teaching a New Method of Partial Fraction Decomposition to Senior Secondary Students: Results and Analysis from a Pilot Study

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong; Leung, Allen

    2012-01-01

    In this paper, we introduce a new approach to compute the partial fraction decompositions of rational functions and describe the results of its trials at three secondary schools in Hong Kong. The data were collected via quizzes, a questionnaire and interviews. In general, according to the responses from the teachers and students concerned, this new…
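
    The paper's pedagogical method itself is not reproduced here, but for reference, a computer algebra system can verify any hand-computed decomposition, e.g. with SymPy:

    ```python
    import sympy as sp

    x = sp.symbols('x')
    expr = (3 * x + 5) / ((x + 1) * (x + 2)**2)
    print(sp.apart(expr, x))   # 2/(x + 1) - 2/(x + 2) + 1/(x + 2)**2
    ```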

  5. Temporal dynamics of phosphorus during aquatic and terrestrial litter decomposition in an alpine forest.

    PubMed

    Peng, Yan; Yang, Wanqin; Yue, Kai; Tan, Bo; Huang, Chunping; Xu, Zhenfeng; Ni, Xiangyin; Zhang, Li; Wu, Fuzhong

    2018-06-17

    Plant litter decomposition in forest soils and watersheds is an important source of phosphorus (P) for plants in forest ecosystems. Understanding P dynamics during litter decomposition in forested aquatic and terrestrial ecosystems is therefore of great importance for better understanding nutrient cycling across forest landscapes. However, although numerous studies addressing litter decomposition have been carried out, generalizations across aquatic and terrestrial ecosystems regarding the temporal dynamics of P loss during litter decomposition remain elusive. We conducted a two-year field experiment using the litterbag method in both aquatic (streams and riparian zones) and terrestrial (forest floors) ecosystems in an alpine forest on the eastern Tibetan Plateau. By using multigroup comparisons of the structural equation modeling (SEM) method with different litter mass-loss intervals, we explicitly assessed the direct and indirect effects of several biotic and abiotic drivers on P loss across different decomposition stages. The results suggested that (1) P concentration in decomposing litter showed similar patterns of early increase and later decrease across different species and ecosystem types; (2) P loss shared a common hierarchy of drivers across different ecosystem types, with litter chemical dynamics mainly having direct effects but environment and initial litter quality having both direct and indirect effects; (3) when assessed at the temporal scale, the effects of initial litter quality appeared to increase in late decomposition stages, while litter chemical dynamics showed consistent significant effects in almost all decomposition stages across aquatic and terrestrial ecosystems; (4) microbial diversity showed significant effects on P loss, but its effects were lower compared with other drivers. Our results highlight the importance of including spatiotemporal variations and indicate the possibility of integrating aquatic and terrestrial decomposition into a common framework for the future construction of models that account for the temporal dynamics of P in decomposing litter. Copyright © 2018 Elsevier B.V. All rights reserved.

  6. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    NASA Astrophysics Data System (ADS)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at the middle-high and low latitudes from a global perspective, the authors proposed a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to describe the actual atmospheric circulation dynamics accurately. The authors used the NCEP/NCAR reanalysis data to calculate the climate characteristics of these three-pattern circulations and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model for the first time realizes the decomposition of the global atmospheric circulation using three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.

  7. Urban-area extraction from polarimetric SAR image using combination of target decomposition and orientation angle

    NASA Astrophysics Data System (ADS)

    Zou, Bin; Lu, Da; Wu, Zhilu; Qiao, Zhijun G.

    2016-05-01

    The results of model-based target decomposition are the main features used to discriminate urban and non-urban areas in polarimetric synthetic aperture radar (PolSAR) applications. Traditional urban-area extraction methods based on model-based target decomposition usually misclassify ground-trunk structures as urban area or misclassify rotated urban areas as forest. This paper introduces another feature, the orientation angle, to improve the urban-area extraction scheme for accurate urban mapping from PolSAR images. The proposed method first takes the randomness of the orientation angle into account to restrict the urban area and subsequently uses the orientation angle so that oriented urban areas are recognized as double-bounce objects rather than volume scattering. ESAR L-band PolSAR data of the Oberpfaffenhofen test site area were used to validate the proposed algorithm.

  8. ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations

    NASA Astrophysics Data System (ADS)

    Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil

    2018-04-01

    In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this equation. The results are compared with the existing exact solution. Thus, the Adomian decomposition method can be a good alternative for solving linear second-order Fredholm integro-differential equations. It converges to the exact solution quickly and at the same time reduces the computational work for solving the equation. The results obtained by ADM show its ability and efficiency for solving these equations.
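
    As a toy illustration of the Adomian recursion (on a first-order Fredholm integral equation rather than the paper's second-order integro-differential case), the partial sums below converge quickly to the exact solution 6x/5:

    ```python
    import sympy as sp

    x, t = sp.symbols('x t')
    # toy Fredholm equation of the second kind: u(x) = x + (1/2) * Int_0^1 x*t*u(t) dt
    term = x            # u_0: the source term
    u = term
    for _ in range(8):  # Adomian recursion: u_{n+1}(x) = (1/2) x * Int_0^1 t*u_n(t) dt
        term = sp.Rational(1, 2) * x * sp.integrate(t * term.subs(x, t), (t, 0, 1))
        u = sp.simplify(u + term)
    print(u)            # partial sums approach the exact solution 6*x/5
    ```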

  9. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  10. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
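
    For reference, the baseline batch computation that the paper's single-pass incremental algorithm replaces is a plain SVD of the snapshot matrix; a minimal sketch with hypothetical data:

    ```python
    import numpy as np

    X = np.random.rand(1000, 200)   # hypothetical snapshot matrix, one state per column
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1   # modes retaining 99.9% of the energy
    basis = U[:, :r]                # POD basis for the reduced order model
    ```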

  11. Identification of channel geometries applying seismic attributes and spectral decomposition techniques, Temsah Field, Offshore East Nile Delta, Egypt

    NASA Astrophysics Data System (ADS)

    Othman, Adel A. A.; Fathy, M.; Negm, Adel

    2018-06-01

    The Temsah field is located offshore in the eastern part of the Nile Delta. The main reservoirs of the area are Middle Pliocene, consisting mainly of siliciclastics associated with a confined deep-marine environment. The distribution pattern of the reservoir facies is of limited scale, indicating rapid lateral and vertical changes that are not easy to resolve by applying conventional seismic attributes. The target of the present study is to create geophysical workflows that better image the channel sand distribution in the study area. We applied both the Average Absolute Amplitude and Energy attributes, which indicated the distribution of the sand bodies in the study area but failed to fully describe the channel geometry. So another tool, which offers a more detailed description of the geometry, is needed. The spectral decomposition analysis method, an alternative technique based on the Discrete Fourier Transform, can provide better results. Spectral decomposition performed over the upper channel shows that the frequency in the eastern part of the channel is the same as in places where the wells are drilled, which confirms the connection of the eastern and western parts of the upper channel. The results suggest that application of the spectral decomposition method leads to reliable inferences. Hence, using the spectral decomposition method alone or along with other attributes has a positive impact on reserves growth and increased production, where the reserves in the study area increase to 75 bcf.
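
    A minimal sketch of the underlying idea, assuming SciPy and a synthetic trace: a short-time Fourier transform yields isofrequency amplitude slices which, when extracted across a 3D volume, highlight channel geometry through tuning effects.

    ```python
    import numpy as np
    from scipy.signal import stft

    fs = 500.0                          # hypothetical sampling rate, Hz
    trace = np.random.randn(2048)       # stand-in for one seismic trace
    f, t, Zxx = stft(trace, fs=fs, nperseg=64)
    amp_30hz = np.abs(Zxx[np.argmin(np.abs(f - 30.0)), :])  # 30 Hz isofrequency slice
    ```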

  12. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. So in this paper, an extension of the PDE-based approach to the 2-D space is extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results are provided for the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  13. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoising DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.

  14. DFT study of hydrogen production from formic acid decomposition on Pd-Au alloy nanoclusters

    NASA Astrophysics Data System (ADS)

    Liu, D.; Gao, Z. Y.; Wang, X. C.; Zeng, J.; Li, Y. M.

    2017-12-01

    Recently, it has been reported that the hydrogen production rate of formic acid decomposition can be significantly increased using Pd-Au binary alloy nano-catalysts [Wang et al. J. Mater. Chem. A 1 (2013) 12721-12725]. To explain the reaction mechanism of this alloy catalysis method, formic acid decomposition reactions on pure Pd and Pd-Au alloy nanoclusters are studied via density functional theory simulations. The simulation results indicate that the addition of the inert element Au does not influence formic acid decomposition on the Pd surface sites of Pd-Au alloy nanoclusters. On the other hand, the existence of Au surface sites brings relatively weak hydrogen atom adsorption. On Pd-Au alloy nanoclusters, the hydrogen atoms dissociated from formic acid combine into hydrogen molecules more easily than on pure Pd clusters. Via the synergetic effect between Pd and Au, both formic acid decomposition and hydrogen production are events with large probability, which eventually results in a high hydrogen production rate.

  15. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows that the proposed approach is most beneficial in the decomposition of more complex signals.
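
    A minimal sketch of the FDA step using scikit-learn's linear discriminant analysis; the feature matrix and MU labels below are random stand-ins for MUP features and first-pass labels.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 20))       # stand-in MUP feature matrix
    y = rng.integers(0, 5, size=300)     # stand-in MU labels from a first pass
    lda = LinearDiscriminantAnalysis(n_components=4)  # at most n_classes - 1 components
    Z = lda.fit_transform(X, y)          # space where MUPs of one MU cluster tightly
    # the MUPs would then be reclassified in Z with a certainty-based classifier
    ```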

  16. GPR random noise reduction using BPD and EMD

    NASA Astrophysics Data System (ADS)

    Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz

    2018-04-01

    Ground-penetrating radar (GPR) exploration is a high-frequency technology that accurately explores near-surface objects and structures. The high-frequency antenna of the GPR system makes it a high-resolution method compared to other geophysical methods. The frequency range of recorded GPR data is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources, and its correlation to adjacent traces is nearly zero. This characteristic of random noise, along with the high accuracy of the GPR system, makes denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise using basis pursuit denoising combined with empirical mode decomposition. Our results showed that empirical mode decomposition in combination with basis pursuit denoising (BPD) provides satisfactory outputs, owing to the sifting process, compared to the time-domain implementation of the BPD method on both synthetic and real examples. Our results also demonstrate that, because of the high computational costs, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.

  17. Impact of the Choice of Normalization Method on Molecular Cancer Class Discovery Using Nonnegative Matrix Factorization.

    PubMed

    Yang, Haixuan; Seoighe, Cathal

    2016-01-01

    Nonnegative Matrix Factorization (NMF) has proved to be an effective method for unsupervised clustering analysis of gene expression data. By the nonnegativity constraint, NMF provides a decomposition of the data matrix into two matrices that have been used for clustering analysis. However, the decomposition is not unique. This allows different clustering results to be obtained, resulting in different interpretations of the decomposition. To alleviate this problem, some existing methods directly enforce uniqueness to some extent by adding regularization terms in the NMF objective function. Alternatively, various normalization methods have been applied to the factor matrices; however, the effects of the choice of normalization have not been carefully investigated. Here we investigate the performance of NMF for the task of cancer class discovery, under a wide range of normalization choices. After extensive evaluations, we observe that the maximum norm showed the best performance, although the maximum norm has not previously been used for NMF. Matlab codes are freely available from: http://maths.nuigalway.ie/~haixuanyang/pNMF/pNMF.htm.
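
    A minimal sketch of NMF-based class discovery with a max-norm normalization of one factor, compensated in the other so the product is unchanged; the data, rank, and exact normalization convention are assumptions.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    V = np.random.rand(200, 50)        # hypothetical samples-by-genes expression matrix
    model = NMF(n_components=3, init='nndsvd', max_iter=500, random_state=0)
    W = model.fit_transform(V)         # sample loadings
    H = model.components_              # metagene patterns
    scale = H.max(axis=1)              # max norm of each H row
    H_norm = H / scale[:, None]        # normalized factor...
    W_norm = W * scale[None, :]        # ...with compensating scaling: W @ H is unchanged
    labels = W_norm.argmax(axis=1)     # cluster assignment per sample
    ```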

  18. Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.

    PubMed

    Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino

    2017-01-10

    In this article, we present a clustering method of atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take into account structural properties of the protein. Instead, the clustering of protein residues in terms of networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair distance correlation functions. For these properties, our method stands as a complementary analysis to the customary protein decomposition in terms of quasi-rigid, structure-based domains. Results obtained for a prototypal protein structure illustrate the approach proposed.

  19. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  20. Adaptive sparsest narrow-band decomposition method and its applications to rolling element bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Cheng, Junsheng; Peng, Yanfeng; Yang, Yu; Wu, Zhantao

    2017-02-01

    Inspired by the adaptive sparsest time-frequency analysis (ASTFA) method, the adaptive sparsest narrow-band decomposition (ASNBD) method is proposed in this paper. In the ASNBD method, an optimized filter is first established. The parameters of the filter are determined by solving a nonlinear optimization problem. A regularized differential operator is used as the objective function so that each component is constrained to be a local narrow-band signal. Afterwards, the signal is filtered by the optimized filter to generate an intrinsic narrow-band component (INBC). ASNBD is proposed with the aim of solving the problems existing in ASTFA. The Gauss-Newton type method, which is applied to solve the optimization problem in ASTFA, is irreplaceable and very sensitive to initial values. In contrast, a more appropriate optimization method such as a genetic algorithm (GA) can be utilized to solve the optimization problem in ASNBD. Meanwhile, compared with ASTFA, the decomposition results generated by ASNBD have better physical meaning because the components are constrained to be local narrow-band signals. Comparisons are made between ASNBD, ASTFA and EMD by analyzing simulated and experimental signals. The results indicate that the ASNBD method is superior to the other two methods in generating more accurate components from noisy signals, restraining the boundary effect, possessing better orthogonality and diagnosing rolling element bearing faults.

  1. A New Method for Nonlinear and Nonstationary Time Series Analysis and Its Application to the Earthquake and Building Response Records

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    1999-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. An example of the application of this method to earthquake and building response records is given. The results indicate that low-frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show that the new method offers much better temporal and frequency resolutions.
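
    Applied to each IMF, the Hilbert transform gives the instantaneous amplitude and frequency whose (time, frequency, energy) triplets make up the Hilbert Spectrum; a minimal sketch with SciPy on a synthetic nonstationary signal:

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 100.0
    t = np.arange(0.0, 10.0, 1.0 / fs)
    x = np.sin(2 * np.pi * (1.0 + 0.2 * t) * t)    # nonstationary, chirp-like signal
    z = hilbert(x)                                 # analytic signal x + i*H[x]
    amplitude = np.abs(z)                          # instantaneous amplitude
    phase = np.unwrap(np.angle(z))
    inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency in Hz
    ```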

  2. Educational Outcomes and Socioeconomic Status: A Decomposition Analysis for Middle-Income Countries

    ERIC Educational Resources Information Center

    Nieto, Sandra; Ramos, Raúl

    2015-01-01

    This article analyzes the factors that explain the gap in educational outcomes between the top and bottom quartile of students in different countries, according to their socioeconomic status. To do so, it uses PISA microdata for 10 middle-income and 2 high-income countries, and applies the Oaxaca-Blinder decomposition method. Its results show that…
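
    For reference, a minimal two-fold Oaxaca-Blinder decomposition, splitting a mean gap into explained (endowment) and unexplained (coefficient) parts with group B as the reference, can be sketched with ordinary least squares on hypothetical data:

    ```python
    import numpy as np

    def oaxaca_blinder(XA, yA, XB, yB):
        """Two-fold decomposition of the mean outcome gap, group B as reference."""
        XA1 = np.column_stack([np.ones(len(XA)), XA])
        XB1 = np.column_stack([np.ones(len(XB)), XB])
        bA, *_ = np.linalg.lstsq(XA1, yA, rcond=None)
        bB, *_ = np.linalg.lstsq(XB1, yB, rcond=None)
        dX = XA1.mean(axis=0) - XB1.mean(axis=0)
        explained = dX @ bB                          # endowments (characteristics) part
        unexplained = XA1.mean(axis=0) @ (bA - bB)   # coefficients ("returns") part
        return explained, unexplained

    rng = np.random.default_rng(0)
    XA, XB = rng.normal(1.0, 1, (500, 3)), rng.normal(0.5, 1, (500, 3))
    yA = XA @ [1.0, 0.5, 0.2] + rng.normal(size=500)
    yB = XB @ [0.8, 0.5, 0.2] + rng.normal(size=500)
    print(oaxaca_blinder(XA, yA, XB, yB))
    ```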

  3. Independent EEG Sources Are Dipolar

    PubMed Central

    Delorme, Arnaud; Palmer, Jason; Onton, Julie; Oostenveld, Robert; Makeig, Scott

    2012-01-01

    Independent component analysis (ICA) and blind source separation (BSS) methods are increasingly used to separate individual brain and non-brain source signals mixed by volume conduction in electroencephalographic (EEG) and other electrophysiological recordings. We compared results of decomposing thirteen 71-channel human scalp EEG datasets by 22 ICA and BSS algorithms, assessing the pairwise mutual information (PMI) in scalp channel pairs, the remaining PMI in component pairs, the overall mutual information reduction (MIR) effected by each decomposition, and decomposition ‘dipolarity’ defined as the number of component scalp maps matching the projection of a single equivalent dipole with less than a given residual variance. The least well-performing algorithm was principal component analysis (PCA); best performing were AMICA and other likelihood/mutual information based ICA methods. Though these and other commonly-used decomposition methods returned many similar components, across 18 ICA/BSS algorithms mean dipolarity varied linearly with both MIR and with PMI remaining between the resulting component time courses, a result compatible with an interpretation of many maximally independent EEG components as being volume-conducted projections of partially-synchronous local cortical field activity within single compact cortical domains. To encourage further method comparisons, the data and software used to prepare the results have been made available (http://sccn.ucsd.edu/wiki/BSSComparison). PMID:22355308
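
    A minimal sketch of one such decomposition using scikit-learn's FastICA on a hypothetical channel-by-time recording; the paper's dipolarity criterion would then fit each column of the mixing matrix with an equivalent dipole model.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    eeg = rng.normal(size=(71, 10000))   # stand-in for 71-channel scalp EEG
    ica = FastICA(n_components=20, random_state=0, max_iter=500)
    sources = ica.fit_transform(eeg.T)   # component activations, time x components
    maps = ica.mixing_                   # columns are component scalp maps
    ```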

  4. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    PubMed Central

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, while Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for CT images with large noise. PMID:24023764

  5. Kinetic analysis of overlapping multistep thermal decomposition comprising exothermic and endothermic processes: thermolysis of ammonium dinitramide.

    PubMed

    Muravyev, Nikita V; Koga, Nobuyoshi; Meerov, Dmitry B; Pivkina, Alla N

    2017-01-25

    This study focused on kinetic modeling of a specific type of multistep heterogeneous reaction comprising exothermic and endothermic reaction steps, as exemplified by the practical kinetic analysis of the experimental kinetic curves for the thermal decomposition of molten ammonium dinitramide (ADN). It is known that the thermal decomposition of ADN occurs as a consecutive two step mass-loss process comprising the decomposition of ADN and subsequent evaporation/decomposition of in situ generated ammonium nitrate. These reaction steps provide exothermic and endothermic contributions, respectively, to the overall thermal effect. The overall reaction process was deconvoluted into two reaction steps using simultaneously recorded thermogravimetry and differential scanning calorimetry (TG-DSC) curves by considering the different physical meanings of the kinetic data derived from TG and DSC by P value analysis. The kinetic data thus separated into exothermic and endothermic reaction steps were kinetically characterized using kinetic computation methods including isoconversional method, combined kinetic analysis, and master plot method. The overall kinetic behavior was reproduced as the sum of the kinetic equations for each reaction step considering the contributions to the rate data derived from TG and DSC. During reproduction of the kinetic behavior, the kinetic parameters and contributions of each reaction step were optimized using kinetic deconvolution analysis. As a result, the thermal decomposition of ADN was successfully modeled as partially overlapping exothermic and endothermic reaction steps. The logic of the kinetic modeling was critically examined, and the practical usefulness of phenomenological modeling for the thermal decomposition of ADN was illustrated to demonstrate the validity of the methodology and its applicability to similar complex reaction processes.

  6. The trait contribution to wood decomposition rates of 15 Neotropical tree species.

    PubMed

    van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C

    2010-12-01

    The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. Especially the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr(-1). We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R2 = 0.41). Lignin concentration further increased the proportion of explained inter-specific variation in wood decomposition (both negative relations, cumulative R2 = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R2 = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merits further testing in other forest ecosystems.

  7. Detection of Protein Complexes Based on Penalized Matrix Decomposition in a Sparse Protein-Protein Interaction Network.

    PubMed

    Cao, Buwen; Deng, Shuguang; Qin, Hua; Ding, Pingjian; Chen, Shaopeng; Li, Guanghui

    2018-06-15

    High-throughput technology has generated large-scale protein interaction data, which are crucial to our understanding of biological organisms. Many complex identification algorithms have been developed to determine protein complexes. However, these methods are only suitable for dense protein interaction networks, because their capabilities decrease rapidly when applied to sparse protein-protein interaction (PPI) networks. In this study, based on penalized matrix decomposition (PMD), a novel method for the identification of protein complexes (PMDpc) was developed to detect protein complexes in the human protein interaction network. This method mainly consists of three steps. First, the adjacency matrix of the protein interaction network is normalized. Second, the normalized matrix is decomposed into three factor matrices. The PMDpc method can detect protein complexes in sparse PPI networks by imposing appropriate constraints on the factor matrices. Finally, the results of our method are compared with those of other methods on the human PPI network. Experimental results show that our method can not only outperform classical algorithms, such as CFinder, ClusterONE, RRW, HC-PIN, and PCE-FR, but can also achieve an ideal overall performance in terms of a composite score consisting of F-measure, accuracy (ACC), and the maximum matching ratio (MMR).

  8. Search for memory effects in methane hydrate: structure of water before hydrate formation and after hydrate decomposition.

    PubMed

    Buchanan, Piers; Soper, Alan K; Thompson, Helen; Westacott, Robin E; Creek, Jefferson L; Hobson, Greg; Koh, Carolyn A

    2005-10-22

    Neutron diffraction with HD isotope substitution has been used to study the formation and decomposition of the methane clathrate hydrate. Using this atomistic technique coupled with simultaneous gas consumption measurements, we have successfully tracked the formation of the sI methane hydrate from a water/gas mixture and then the subsequent decomposition of the hydrate from initiation to completion. These studies demonstrate that the application of neutron diffraction with simultaneous gas consumption measurements provides a powerful method for studying the clathrate hydrate crystal growth and decomposition. We have also used neutron diffraction to examine the water structure before the hydrate growth and after the hydrate decomposition. From the neutron-scattering curves and the empirical potential structure refinement analysis of the data, we find that there is no significant difference between the structure of water before the hydrate formation and the structure of water after the hydrate decomposition. Nor is there any significant change to the methane hydration shell. These results are discussed in the context of widely held views on the existence of memory effects after the hydrate decomposition.

  9. Photocatalytic characteristic and photodegradation kinetics of toluene using N-doped TiO2 modified by radio frequency plasma.

    PubMed

    Shie, Je-Lueng; Lee, Chiu-Hsuan; Chiou, Chyow-San; Chen, Yi-Hung; Chang, Ching-Yuan

    2014-01-01

    This study investigates the feasibility of applying plasma surface modification of photocatalysts to the removal of toluene from indoor environments. N-doped TiO2 is prepared by precipitation methods, calcined using a muffle furnace (MF) and modified by radio frequency (RF) plasma at different temperatures, with light sources from a visible light lamp (VLL), a white light-emitting diode (WLED) and an ultraviolet light-emitting diode (UVLED). The operating parameters and influential factors are addressed, and samples are prepared for characteristic analysis and photo-decomposition examination. Furthermore, related kinetic models are established and used to simulate the experimental data. The characteristic analysis results show that the RF plasma-calcination method effectively enhanced the Brunauer-Emmett-Teller surface area of the modified photocatalysts. In the elemental analysis, the mass percentage of N for the RF-modified photocatalyst is larger than that of the MF-calcined one by six times. The aerodynamic diameters of the RF-modified photocatalyst are all smaller than those of the MF-calcined one. Photocatalytic decompositions of toluene are elucidated according to the Langmuir-Hinshelwood model. Decomposition efficiencies (η) of toluene for the RF-calcined methods are all higher than those of commercial TiO2 (P25). Reaction kinetics of photo-decomposition reactions using RF-calcined methods with WLED are proposed. A comparison of the simulation results with the experimental data is also made and indicates good agreement. All the results provide useful information and design specifications. Thus, this study shows the feasibility and potential of plasma modification via LED in photocatalysis.
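
    A minimal sketch of fitting the Langmuir-Hinshelwood rate law r = kKC/(1 + KC) to measured initial rates; the concentrations, rates, and starting guesses below are hypothetical.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def lh_rate(C, k, K):
        """Langmuir-Hinshelwood initial rate: r = k*K*C / (1 + K*C)."""
        return k * K * C / (1.0 + K * C)

    C = np.array([10.0, 20.0, 50.0, 100.0, 200.0])  # hypothetical toluene levels, ppmv
    r = np.array([0.8, 1.4, 2.3, 3.0, 3.5])         # hypothetical initial rates
    (k, K), _ = curve_fit(lh_rate, C, r, p0=[4.0, 0.01])
    print(k, K)   # fitted rate constant and adsorption equilibrium constant
    ```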

  10. Empirical Mode Decomposition and k-Nearest Embedding Vectors for Timely Analyses of Antibiotic Resistance Trends

    PubMed Central

    Teodoro, Douglas; Lovis, Christian

    2013-01-01

    Background: Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Objective: To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. Methods: We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. Results: The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. Conclusion: The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends. PMID:23637796

  11. Multitasking domain decomposition fast Poisson solvers on the Cray Y-MP

    NASA Technical Reports Server (NTRS)

    Chan, Tony F.; Fatoohi, Rod A.

    1990-01-01

    The results of multitasking implementation of a domain decomposition fast Poisson solver on eight processors of the Cray Y-MP are presented. The object of this research is to study the performance of domain decomposition methods on a Cray supercomputer and to analyze the performance of different multitasking techniques using highly parallel algorithms. Two implementations of multitasking are considered: macrotasking (parallelism at the subroutine level) and microtasking (parallelism at the do-loop level). A conventional FFT-based fast Poisson solver is also multitasked. The results of different implementations are compared and analyzed. A speedup of over 7.4 on the Cray Y-MP running in a dedicated environment is achieved for all cases.

  12. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

    Wireless devices can be identified by the fingerprint extracted from their transmitted signal, which is useful in wireless communication security and other fields. This paper presents a method that extracts the fingerprint based on the phase noise of the signal and multiple level wavelet decomposition. The phase of the signal is extracted first and then decomposed by multiple level wavelet decomposition. The statistics of each wavelet coefficient vector are utilized to construct the fingerprint. Besides, the relationship between wavelet decomposition level and recognition accuracy is simulated, and an advisable decomposition level is suggested. Compared with previous methods, our method is simpler, and the recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.

  13. Exploring Patterns of Soil Organic Matter Decomposition with Students through the Global Decomposition Project (GDP) and the Interactive Model of Leaf Decomposition (IMOLD)

    NASA Astrophysics Data System (ADS)

    Steiner, S. M.; Wood, J. H.

    2015-12-01

    As decomposition rates are affected by climate change, understanding crucial soil interactions that affect plant growth and decomposition becomes a vital part of contributing to the students' knowledge base. The Global Decomposition Project (GDP) is designed to introduce and educate students about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. The Interactive Model of Leaf Decomposition (IMOLD) utilizes animations and modeling to learn about the carbon cycle, leaf anatomy, and the role of microbes in decomposition. Paired together, IMOLD teaches the background information and allows simulation of numerous scenarios, and the GDP is a data collection protocol that allows students to gather usable measurements of decomposition in the field. Our presentation will detail how the GDP protocol works, how to obtain or make the materials needed, and how results will be shared. We will also highlight learning objectives from the three animations of IMOLD, and demonstrate how students can experiment with different climates and litter types using the interactive model to explore a variety of decomposition scenarios. The GDP demonstrates how scientific methods can be extended to educate broader audiences, and data collected by students can provide new insight into global patterns of soil decomposition. Using IMOLD, students will gain a better understanding of carbon cycling in the context of litter decomposition, as well as learn to pose questions they can answer with an authentic computer model. Using the GDP protocols and IMOLD provide a pathway for scientists and educators to interact and reach meaningful education and research goals.

  14. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
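
    A minimal sketch of the transformation, assuming a homogeneous constraint C q = 0: right singular vectors associated with zero singular values span the admissible subspace, and projecting the system matrices onto it eliminates the dependent coordinates.

    ```python
    import numpy as np

    C = np.array([[1.0, -1.0, 0.0]])   # hypothetical constraint: q1 - q2 = 0
    U, s, Vt = np.linalg.svd(C)
    rank = int(np.sum(s > 1e-12 * s.max()))
    T = Vt[rank:].T                    # columns span {q : C q = 0}, so q = T q_ind
    M = np.eye(3)                      # hypothetical mass matrix
    K = np.diag([2.0, 2.0, 1.0])       # hypothetical stiffness matrix
    M_red, K_red = T.T @ M @ T, T.T @ K @ T   # reduced system in independent coordinates
    ```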

  15. Marine environmental protection: An application of the nanometer photo catalyst method on decomposition of benzene.

    PubMed

    Lin, Mu-Chien; Kao, Jui-Chung

    2016-04-15

    Bioremediation is currently extensively employed in the elimination of coastal oil pollution, but it is not very effective as the process takes several months to degrade oil. Among the components of oil, benzene degradation is difficult due to its stable characteristics. This paper describes an experimental study on the decomposition of benzene by titanium dioxide (TiO2) nanometer photocatalysis. The photocatalyst is illuminated with 360-nm ultraviolet light to generate peroxide ions. This results in complete decomposition of benzene, yielding CO2 and H2O. In this study, a nonwoven fabric was coated with the photocatalyst and benzene. Using the Double-Shot Py-GC system on the residual component, complete decomposition of the benzene was verified after 4 h of exposure to ultraviolet light. The method proposed in this study can be directly applied to the elimination of marine oil pollution. Further studies will be conducted on coastal oil pollution in situ. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Multi-focus image fusion based on window empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Zheng, Jiaoyue; Hu, Gang; Wang, Jiao

    2017-09-01

    In order to improve multi-focus image fusion quality, a novel fusion algorithm based on window empirical mode decomposition (WEMD) is proposed. WEMD is an improved form of bidimensional empirical mode decomposition (BEMD) whose decomposition process uses an adding-window principle, effectively resolving the signal concealment problem. We used WEMD for multi-focus image fusion and formulated different fusion rules for the bidimensional intrinsic mode function (BIMF) components and the residue component. For fusion of the BIMF components, the sum-modified-Laplacian was used and a scheme based on visual feature contrast was adopted; for the residue component, pixel values were selected based on local visibility. We carried out four groups of multi-focus image fusion experiments and compared objective evaluation criteria with those of three other fusion methods. The experimental results show that the proposed fusion approach is effective and fuses multi-focus images better than some traditional methods.
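    The BIMF fusion rule can be sketched roughly as follows; the modified-Laplacian focus measure is standard, but the window size, periodic borders, and function names are illustrative assumptions, and the WEMD decomposition itself is taken as given.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def modified_laplacian(img):
        """Modified Laplacian focus measure (periodic borders for brevity)."""
        ml_x = np.abs(2 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0))
        ml_y = np.abs(2 * img - np.roll(img, 1, 1) - np.roll(img, -1, 1))
        return ml_x + ml_y

    def fuse_bimf(a, b, win=3):
        """Pick, per pixel, the BIMF coefficient with the larger local
        sum-modified-Laplacian (window size is an assumption)."""
        sml_a = uniform_filter(modified_laplacian(a), size=win)
        sml_b = uniform_filter(modified_laplacian(b), size=win)
        return np.where(sml_a >= sml_b, a, b)

    a, b = np.random.rand(64, 64), np.random.rand(64, 64)  # stand-in BIMFs
    print(fuse_bimf(a, b).shape)                            # (64, 64)
    ```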

  17. Layout decomposition of self-aligned double patterning for 2D random logic patterning

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Miloslavsky, Alex; Lucas, Kevin; Choi, Soo-Han; Park, Chul-Hong; Pan, David Z.

    2011-04-01

    Self-aligned double patterning (SADP) has been adopted as a promising solution for sub-30nm technology nodes due to its reduced overlay problem and better process tolerance. SADP is in production use for 1D dense patterns with good pitch control, such as NAND Flash memory applications, but it is still challenging to apply SADP to 2D random logic patterns. The favored type of SADP for complex logic interconnects is a two-mask approach using a core mask and a trim mask. In this paper, we first describe layout decomposition methods for spacer-type double patterning lithography, then report a class of SADP-compliant layouts, and finally report SADP applications on a Samsung 22nm SRAM layout. For SADP decomposition, we propose several SADP-aware layout coloring algorithms and a method of generating lithography-friendly core mask patterns. Experimental results on 22nm node designs show that our proposed layout decomposition for SADP effectively decomposes any given layout.

  18. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  19. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  20. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic is based on dispatching rules for activity selection, and the second distributes the performances of each activity model evenly over the timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.

  1. Extraction of Curcumin Pigment from Indonesian Local Turmeric with Its Infrared Spectra and Thermal Decomposition Properties

    NASA Astrophysics Data System (ADS)

    Nandiyanto, A. B. D.; Wiryani, A. S.; Rusli, A.; Purnamasari, A.; Abdullah, A. G.; Ana; Widiaty, I.; Hurriyati, R.

    2017-03-01

    Curcumin is a pigment used as a spice in Asian cuisine, in traditional cosmetics, and in medicine. Processes for obtaining curcumin have therefore been widely studied. The purpose of this study was to demonstrate a simple method for extracting curcumin from Indonesian local turmeric and to investigate its infrared spectra and thermal decomposition properties. In the experimental procedure, the washed turmeric was dissolved in an ethanol solution and then put into a rotary evaporator to enrich the curcumin concentration. The results showed that the present method is effective for isolating the curcumin compound from Indonesian local turmeric. Since the process is very simple, it can be used in home-industry applications. Further, understanding the thermal decomposition properties of curcumin gives information relevant to selecting treatments in which curcumin must undergo thermal processing.

  2. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  3. Isothermal Decomposition of Hydrogen Peroxide Dihydrate

    NASA Technical Reports Server (NTRS)

    Loeffler, M. J.; Baragiola, R. A.

    2011-01-01

    We present a new method of growing pure solid hydrogen peroxide in an ultra high vacuum environment and apply it to determine thermal stability of the dihydrate compound that forms when water and hydrogen peroxide are mixed at low temperatures. Using infrared spectroscopy and thermogravimetric analysis, we quantified the isothermal decomposition of the metastable dihydrate at 151.6 K. This decomposition occurs by fractional distillation through the preferential sublimation of water, which leads to the formation of pure hydrogen peroxide. The results imply that in an astronomical environment where condensed mixtures of H2O2 and H2O are shielded from radiolytic decomposition and warmed to temperatures where sublimation is significant, highly concentrated or even pure hydrogen peroxide may form.

  4. An asymptotic induced numerical method for the convection-diffusion-reaction equation

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.; Sorensen, Danny C.

    1988-01-01

    A parallel algorithm for the efficient solution of a time-dependent reaction-convection-diffusion equation with a small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run-time results demonstrate the viability of the method.

  5. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    PubMed

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition.
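    For readers unfamiliar with ADD, the bookkeeping behind it is simple; the sketch below is illustrative, with an assumed base temperature of 0 °C.

    ```python
    import numpy as np

    def accumulated_degree_days(daily_mean_temp_c, base_c=0.0):
        """ADD: running sum of daily mean temperatures above a base value;
        negative contributions are clipped to zero (base of 0 °C assumed)."""
        t = np.asarray(daily_mean_temp_c, dtype=float)
        return np.cumsum(np.clip(t - base_c, 0.0, None))

    temps = [12.5, 15.0, 9.0, -2.0, 4.5]          # invented daily means, °C
    print(accumulated_degree_days(temps))         # [12.5 27.5 36.5 36.5 41. ]
    ```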

  6. Gas evolution from cathode materials: A pathway to solvent decomposition concomitant to SEI formation.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Browning, Katie L; Baggetto, Loic; Unocic, Raymond R

    This work reports a method to explore the catalytic reactivity of electrode surfaces towards the decomposition of carbonate solvents [ethylene carbonate (EC), dimethyl carbonate (DMC), and EC/DMC]. We show that the decomposition of a 1:1 wt% EC/DMC mixture is accelerated over certain commercially available LiCoO2 materials, resulting in the formation of CO2, while over pure EC or DMC the reaction is much slower or negligible. The solubility of the produced CO2 in carbonate solvents is high (0.025 grams/mL), which masks the effect of electrolyte decomposition during storage or use. The origin of this decomposition is not clear, but it is expected to be present on other cathode materials and may affect the analysis of SEI products as well as the safety of Li-ion batteries.

  7. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.
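    The submatrix-SVD feature construction can be sketched roughly as below; the VMD step is assumed to have already produced the mode matrix, and the segment count and submatrix shape are illustrative assumptions.

    ```python
    import numpy as np

    def singular_value_matrix(mode_matrix, n_seg=8):
        """Split each row (one VMD mode) into n_seg segments along time,
        reshape each segment into a submatrix (4 rows assumed), and keep
        its largest singular value as a local feature (illustrative)."""
        k, n = mode_matrix.shape
        seg = n // n_seg
        feats = np.empty((k, n_seg))
        for i in range(k):
            for j in range(n_seg):
                sub = mode_matrix[i, j * seg:(j + 1) * seg].reshape(4, -1)
                feats[i, j] = np.linalg.svd(sub, compute_uv=False)[0]
        return feats   # rows: modes, cols: local singular-value features

    x = np.random.randn(5, 4096)            # stand-in for 5 VMD mode components
    print(singular_value_matrix(x).shape)   # (5, 8)
    ```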

  8. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal into mode components. The mode matrix was partitioned into a number of submatrices, and the local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training iterations (14). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671

  9. Statistical CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang Shaojie; Tang Xiangyang; School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121

    2012-09-15

    Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of interview sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method preserves image sharpness very well while avoiding the occurrence of 'salt-and-pepper' noise and mosaic artifacts. Conclusions: Since the interview sampling rate is taken into account in the projection domain multiscale decomposition, the proposed method is anticipated to be useful in advanced clinical and preclinical applications where the interview sampling rate varies.

  10. Decomposition Behavior of Curcumin during Solar Irradiation when Contact with Inorganic Particles

    NASA Astrophysics Data System (ADS)

    Nandiyanto, A. B. D.; Wiryani, A. S.; Rusli, A.; Purnamasari, A.; Abdullah, A. G.; Riza, L. S.

    2017-03-01

    Curcumin is a material that has been widely used in medicine, Asian cuisine, and traditional cosmetics. The stability of curcumin has therefore been widely studied. The purpose of this study was to investigate the stability of curcumin solution against solar irradiation when in contact with an inorganic material. Titanium dioxide (TiO2) was used as a model inorganic material. In the experimental method, the curcumin solution was exposed to solar irradiation. To assess the stability of curcumin in contact with the inorganic material, we added TiO2 microparticles at different concentrations. The results showed that the concentration of curcumin decreased during solar irradiation, and lower curcumin concentrations yielded higher decomposition rates. The decomposition rate increased greatly when TiO2 was added, with higher TiO2 concentrations producing faster decomposition. Based on these results, we conclude that curcumin is relatively stable as long as its concentration is high and no inorganic material is present; decomposition can then be minimized by avoiding contact with inorganic materials.

  11. Algebraic multigrid domain and range decomposition (AMG-DD / AMG-RD)*

    DOE PAGES

    Bank, R.; Falgout, R. D.; Jones, T.; ...

    2015-10-29

    In modern large-scale supercomputing applications, algebraic multigrid (AMG) is a leading choice for solving matrix equations. However, the high cost of communication relative to that of computation is a concern for the scalability of traditional implementations of AMG on emerging architectures. This paper introduces two new algebraic multilevel algorithms, algebraic multigrid domain decomposition (AMG-DD) and algebraic multigrid range decomposition (AMG-RD), that replace traditional AMG V-cycles with a fully overlapping domain decomposition approach. While the methods introduced here are similar in spirit to the geometric methods developed by Brandt and Diskin [Multigrid solvers on decomposed domains, in Domain Decomposition Methods in Science and Engineering, Contemp. Math. 157, AMS, Providence, RI, 1994, pp. 135--155], Mitchell [Electron. Trans. Numer. Anal., 6 (1997), pp. 224--233], and Bank and Holst [SIAM J. Sci. Comput., 22 (2000), pp. 1411--1443], they differ primarily in that they are purely algebraic: AMG-RD and AMG-DD trade communication for computation by forming global composite "grids" based only on the matrix, not the geometry. (As is the usual AMG convention, "grids" here should be taken only in the algebraic sense, regardless of whether or not it corresponds to any geometry.) Another important distinguishing feature of AMG-RD and AMG-DD is their novel residual communication process that enables effective parallel computation on composite grids, avoiding the all-to-all communication costs of the geometric methods. The main purpose of this paper is to study the potential of these two algebraic methods as possible alternatives to existing AMG approaches for future parallel machines. As a result, this paper develops some theoretical properties of these methods and reports on serial numerical tests of their convergence properties over a spectrum of problem parameters.

  12. a Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data

    NASA Astrophysics Data System (ADS)

    Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.

    2018-04-01

    Polarimetric target decomposition theory is among the most dynamic and exploratory research areas in the field of PolSAR. However, most target decomposition methods are based on fully polarimetric (quad-pol) data and seldom utilize dual-polarization data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. This method decomposes the data into two scattering contributions, surface and double-bounce, in the dual co-polar channels. To solve this underdetermined problem, a criterion for determining the model is proposed, named the second-order averaged scattering angle, which originates from the H/α decomposition; we also put forward an alternative parameter for it. To validate the effectiveness of the proposed decomposition, Liaodong Bay was selected as the research area. The area is located in northeastern China, supports various wetland resources, and exhibits sea ice in winter. We use GF-3 quad-pol data as the study data, acquired by China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite. The dependencies between the features of the proposed algorithm and comparison decompositions (Pauli decomposition, An&Yang decomposition, Yamaguchi S4R decomposition) were investigated in the study. Through several aspects of the experimental discussion, we draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage, or with low vegetation in the non-growing season; and the proposed decomposition features, using only co-polar data, are highly correlated with the corresponding comparison decomposition features computed from quad-polarization data. Moreover, they can become input for subsequent classification or parameter inversion.

  13. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, algorithms are described in detail, and MATLAB-based codes are included.
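    For reference, a compact textbook Givens-rotation QR for real matrices looks like the following; the paper's heap-transform variant differs in its rotation ordering and its analytical equations, so this is only a baseline sketch.

    ```python
    import numpy as np

    def givens_qr(A):
        """QR decomposition of a real matrix by Givens rotations: each
        rotation zeroes one subdiagonal entry, column by column."""
        m, n = A.shape
        Q, R = np.eye(m), A.astype(float).copy()
        for j in range(n):
            for i in range(m - 1, j, -1):
                a, b = R[i - 1, j], R[i, j]
                r = np.hypot(a, b)
                if r == 0.0:
                    continue
                c, s = a / r, b / r
                G = np.array([[c, s], [-s, c]])       # 2x2 rotation block
                R[[i - 1, i], :] = G @ R[[i - 1, i], :]
                Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
        return Q, R

    A = np.random.randn(5, 3)
    Q, R = givens_qr(A)
    print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))  # True True
    ```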

  14. Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru

    Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, within the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that each process is reassigned particles as evenly as possible while the newly assigned particles for a process always lie within its block. Results show the good load balance and high efficiency of our method.
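    The constrained cut can be illustrated in one dimension: the median split is clamped to the overlap range of the duplicated ghost layer. The function name and data below are hypothetical.

    ```python
    import numpy as np

    def constrained_split(x, lo, hi):
        """One k-d tree cut along x: place the plane at the particle median,
        clamped to [lo, hi], the overlap range of the duplicated ghost data."""
        cut = float(np.clip(np.median(x), lo, hi))
        left = x <= cut
        return cut, left

    x = np.random.rand(1000) * 10.0              # toy particle x-coordinates
    cut, left = constrained_split(x, 4.5, 5.5)
    print(cut, left.sum(), (~left).sum())        # near-even halves when feasible
    ```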

  15. Aging-driven decomposition in zolpidem hemitartrate hemihydrate and the single-crystal structure of its decomposition products.

    PubMed

    Vega, Daniel R; Baggio, Ricardo; Roca, Mariana; Tombari, Dora

    2011-04-01

    The "aging-driven" decomposition of zolpidem hemitartrate hemihydrate (form A) has been followed by X-ray powder diffraction (XRPD), and the crystal and molecular structures of the decomposition products studied by single-crystal methods. The process is very similar to the "thermally driven" one, recently described in the literature for form E (Halasz and Dinnebier. 2010. J Pharm Sci 99(2): 871-874), resulting in a two-phase system: the neutral free base (common to both decomposition processes) and, in the present case, a novel zolpidem tartrate monohydrate, unique to the "aging-driven" decomposition. Our room-temperature single-crystal analysis gives for the free base comparable results as the high-temperature XRPD ones already reported by Halasz and Dinnebier: orthorhombic, Pcba, a = 9.6360(10) Å, b = 18.2690(5) Å, c = 18.4980(11) Å, and V = 3256.4(4) Å(3) . The unreported zolpidem tartrate monohydrate instead crystallizes in monoclinic P21 , which, for comparison purposes, we treated in the nonstandard setting P1121 with a = 20.7582(9) Å, b = 15.2331(5) Å, c = 7.2420(2) Å, γ = 90.826(2)°, and V = 2289.73(14) Å(3) . The structure presents two complete moieties in the asymmetric unit (z = 4, z' = 2). The different phases obtained in both decompositions are readily explained, considering the diverse genesis of both processes. Copyright © 2010 Wiley-Liss, Inc.

  16. Application of vacuum stability test to determine thermal decomposition kinetics of nitramines bonded by polyurethane matrix

    NASA Astrophysics Data System (ADS)

    Elbeih, Ahmed; Abd-Elghany, Mohamed; Elshenawy, Tamer

    2017-03-01

    Vacuum stability test (VST) is mainly used to study the compatibility and stability of energetic materials. In this work, VST has been applied to study the thermal decomposition kinetics of four cyclic nitramines bonded by a polyurethane matrix based on hydroxyl-terminated polybutadiene (HTPB): 1,3,5-trinitro-1,3,5-triazinane (RDX), 1,3,5,7-tetranitro-1,3,5,7-tetrazocane (HMX), cis-1,3,4,6-tetranitrooctahydroimidazo-[4,5-d]imidazole (BCHMX), and 2,4,6,8,10,12-hexanitro-2,4,6,8,10,12-hexaazaisowurtzitane (ε-HNIW, CL-20). Model-fitting and model-free (isoconversional) methods have been applied to determine the decomposition kinetics from the VST results. For comparison, the decomposition kinetics were determined isothermally by the ignition delay technique and non-isothermally using Advanced Kinetics and Technology Solution (AKTS) software. The activation energies for thermolysis obtained by the isoconversional method based on the VST technique for RDX/HTPB, HMX/HTPB, BCHMX/HTPB and CL20/HTPB were 157.1, 203.1, 190.0 and 176.8 kJ mol-1, respectively. The model-fitting method showed that the thermal decomposition of BCHMX/HTPB is controlled by a nucleation model, while all the other studied PBXs are controlled by diffusion models. A linear relationship between the ignition temperatures and the activation energies was observed. BCHMX/HTPB is an interesting new PBX at the research stage.

  17. Conception of discrete systems decomposition algorithm using p-invariants and hypergraphs

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Ł.

    2016-09-01

    In this article, the author presents an idea for a decomposition algorithm for discrete systems described by Petri nets using p-invariants. The decomposition process is significant from the point of view of discrete system design, because it allows separation into smaller sequential parts. The proposed algorithm uses a modified Martinez-Silva method as well as the author's selection algorithm. The developed method is a good complement to classical decomposition algorithms using graphs and hypergraphs.

  18. Assessment of a new method for the analysis of decomposition gases of polymers by a combining thermogravimetric solid-phase extraction and thermal desorption gas chromatography mass spectrometry.

    PubMed

    Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H

    2014-08-08

    For analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier transform infrared spectroscopy (TGA-FTIR) or mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are very often hard to identify because of overlapping signals. In this paper, a new method is described in which the decomposition products are adsorbed under controlled TGA conditions onto solid-phase extraction (SPE) material (twisters). Subsequently, the twisters were analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison with the common analysis techniques, TGA-FTIR and TGA-MS.

  19. Breast density evaluation using spectral mammography, radiologist reader assessment and segmentation techniques: a retrospective study based on left and right breast comparison

    PubMed Central

    Molloi, Sabee; Ding, Huanjun; Feig, Stephen

    2015-01-01

    Purpose The purpose of this study was to compare the precision of mammographic breast density measurement using radiologist reader assessment, histogram threshold segmentation, fuzzy C-means segmentation and spectral material decomposition. Materials and Methods Spectral mammography images from a total of 92 consecutive asymptomatic women (50–69 years old) who presented for annual screening mammography were retrospectively analyzed for this study. Breast density was estimated using assessments from 10 radiologist readers, standard histogram thresholding, a fuzzy C-means algorithm and spectral material decomposition. The correlation of breast density between the left and right breasts was used to assess the precision of these techniques in measuring breast composition relative to dual-energy material decomposition. Results In comparison with the other techniques, breast density measurements using dual-energy material decomposition showed the highest correlation. The relative standard error of estimate for breast density measurements from left and right breasts using radiologist reader assessment, standard histogram thresholding, the fuzzy C-means algorithm and dual-energy material decomposition was calculated to be 1.95, 2.87, 2.07 and 1.00, respectively. Conclusion The results indicate that the precision of dual-energy material decomposition was approximately a factor of two higher than that of the other techniques, as reflected in the better correlation of breast density measurements between the right and left breasts. PMID:26031229

  20. TEMPORAL SIGNATURES OF AIR QUALITY OBSERVATIONS AND MODEL OUTPUTS: DO TIME SERIES DECOMPOSITION METHODS CAPTURE RELEVANT TIME SCALES?

    EPA Science Inventory

    Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...
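    One common decomposition of the kind evaluated here, a trend/seasonal/residual split, can be reproduced with statsmodels; the hourly series below is synthetic and purely illustrative.

    ```python
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.seasonal import seasonal_decompose

    # Synthetic hourly "pollutant-like" series: diurnal cycle + trend + noise
    idx = pd.date_range("2000-06-01", periods=24 * 60, freq="h")
    t = np.arange(idx.size)
    y = 40 + 0.01 * t + 12 * np.sin(2 * np.pi * t / 24) + np.random.normal(0, 3, t.size)

    # Additive decomposition into trend, period-24 (diurnal) seasonal, residual
    parts = seasonal_decompose(pd.Series(y, index=idx), model="additive", period=24)
    print(parts.trend.dropna().head())
    ```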

  1. Thermal decomposition of high-nitrogen energetic compounds: TAGzT and GUzT

    NASA Astrophysics Data System (ADS)

    Hayden, Heather F.

    The U.S. Navy is exploring high-nitrogen compounds as burning-rate additives to meet the growing demands of future high-performance gun systems. Two high-nitrogen compounds investigated as potential burning-rate additives are bis(triaminoguanidinium) 5,5'-azobitetrazolate (TAGzT) and bis(guanidinium) 5,5'-azobitetrazolate (GUzT). Small-scale tests showed that formulations containing TAGzT exhibit significant increases in the burning rates of RDX-based gun propellants. However, when GUzT, a similarly structured molecule, was incorporated into the formulation, there was essentially no effect on the burning rate of the propellant. Through the use of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and Fourier-transform ion cyclotron resonance (FTICR) mass spectrometry methods, an investigation of the underlying chemical and physical processes that control the thermal decomposition behavior of TAGzT and GUzT, alone and in the presence of RDX, was conducted. The objective was to determine why GUzT is not as good a burning-rate enhancer in RDX-based gun propellants as TAGzT. The results show that TAGzT is an effective burning-rate modifier in the presence of RDX because the decomposition of TAGzT alters the initial stages of the decomposition of RDX. Hydrazine, formed in the decomposition of TAGzT, reacts faster with RDX than RDX can decompose itself. The reactions occur at temperatures below the melting point of RDX, and thus the TAGzT decomposition products react with RDX in the gas phase. Although no hydrazine is formed in the decomposition of GUzT, amines formed in its decomposition react with aldehydes formed in the decomposition of RDX, resulting in an increased reaction rate of RDX in the presence of GUzT. However, GUzT is not an effective burning-rate modifier because its decomposition does not alter the initial gas-phase decomposition of RDX. The decomposition of GUzT occurs at temperatures above the melting point of RDX; therefore, it affects reactions that are dominant in the liquid phase of RDX. Although GUzT is not an effective burning-rate modifier, the reaction between amines formed in its decomposition and aldehydes formed in the decomposition of RDX may have implications from an insensitive-munitions perspective.

  2. Decomposition of P(CH3)3 on Ru(0001): comparison with PH3 and PCl3

    NASA Astrophysics Data System (ADS)

    Tao, H.-S.; Diebold, U.; Shinn, N. D.; Madey, T. E.

    1997-04-01

    The decomposition of P(CH3)3 adsorbed on Ru(0001) at 80 K is studied by soft X-ray photoelectron spectroscopy using synchrotron radiation. Using the chemical shifts in the P 2p core levels, we are able to identify various phosphorus-containing surface reaction products and follow their reactions on Ru(0001). It is found that P(CH3)3 undergoes a step-wise demethylation on Ru(0001), P(CH3)3 → P(CH3)2 → P(CH3) → P, which is complete around ~450 K. These results are compared with the decomposition of isostructural PH3 and PCl3 on Ru(0001). The decomposition of PH3 involves a stable intermediate, labeled PHx, and follows the reaction PH3 → PHx → P, which is complete around ~190 K. The conversion of chemisorbed phosphorus to ruthenium phosphide is observed and is complete around ~700 K on Ru(0001). PCl3 also follows a step-wise decomposition reaction, PCl3 → PCl2 → PCl → P, which is complete around ~300 K. The energetics of the adsorption and step-wise decomposition reactions of PH3, PCl3 and P(CH3)3 are estimated using the bond order conservation Morse potential (BOCMP) method. The energetics calculated using the BOCMP method agree qualitatively with the experimental data.

  3. Prediction of in situ root decomposition rates in an interspecific context from chemical and morphological traits

    PubMed Central

    Aulen, Maurice; Shipley, Bill; Bradley, Robert

    2012-01-01

    Background and Aims We quantitatively relate in situ root decomposition rates of a wide range of trees and herbs used in agroforestry to root chemical and morphological traits in order to better describe carbon fluxes from roots to the soil carbon pool across a diverse group of plant species. Methods In situ root decomposition rates were measured over an entire year by an intact core method on ten tree and seven herb species typical of agroforestry systems and were quantified using decay constants (k values) from Olson's single exponential model. Decay constants were related to root chemical traits (total carbon, nitrogen, soluble carbon, cellulose, hemicellulose, lignin) and morphological traits (specific root length). Traits were measured for both absorbing and non-absorbing roots. Key Results From 61 to 77 % of the variation in the different root traits, and 63 % of that in root decomposition rates, was interspecific. Nitrogen was positively correlated with k values, whereas total carbon and lignin were negatively correlated. Initial root traits accounted for 75 % of the variation in interspecific decomposition rates using partial least squares regressions; partial slopes attributed to each trait were consistent with functional ecology expectations. Conclusions Easily measured initial root traits can be used to predict rates of root decomposition in soils in an interspecific context. PMID:22003237
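    Olson's single exponential model reduces to a log-linear fit; a minimal sketch with invented mass-loss data follows.

    ```python
    import numpy as np

    # Estimate Olson's decay constant k from mass remaining, m(t)/m(0) =
    # exp(-k t), via a log-linear least-squares fit (data are invented).
    t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])          # years in the field
    frac = np.array([1.00, 0.78, 0.62, 0.50, 0.41])    # mass fraction remaining

    k = -np.polyfit(t, np.log(frac), 1)[0]             # slope of ln(mass) vs t
    print(f"k = {k:.2f} per year")
    ```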

  4. Mortality inequality in populations with equal life expectancy: Arriaga's decomposition method in SAS, Stata, and Excel.

    PubMed

    Auger, Nathalie; Feuillet, Pascaline; Martel, Sylvie; Lo, Ernest; Barry, Amadou D; Harper, Sam

    2014-08-01

    Life expectancy is used to measure population health, but large differences in mortality can be masked even when there is no life expectancy gap. We demonstrate how Arriaga's decomposition method can be used to assess inequality in mortality between populations with near-equal life expectancy. We calculated life expectancy at birth for Quebec and the rest of Canada from 2005 to 2009 using life tables and partitioned the gap between the two populations into age- and cause-specific components using Arriaga's method. The life expectancy gap between Quebec and Canada was negligible (<0.1 years). Decomposition of the gap showed that higher lung cancer mortality in Quebec was offset by cardiovascular mortality in the rest of Canada, resulting in identical life expectancy in both groups. Lung cancer in Quebec had a greater impact at early ages, whereas cardiovascular mortality in Canada had a greater impact at older ages. Despite the absence of a gap, we demonstrate using decomposition analyses how lung cancer at early ages lowered life expectancy in Quebec, whereas cardiovascular causes at older ages lowered life expectancy in Canada. We provide SAS/Stata code and an Excel spreadsheet to facilitate application of Arriaga's method to other settings.
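    A Python translation of Arriaga's age decomposition is sketched below for readers without SAS/Stata/Excel; it assumes complete life-table columns lx, Lx, and Tx with a common radix, and the demo life tables are invented.

    ```python
    import numpy as np

    def arriaga(l1, L1, T1, l2, L2, T2):
        """Age contributions to the life-expectancy gap e2(0) - e1(0)
        (Arriaga 1984); inputs are life-table columns sharing radix l[0]."""
        n = len(l1)
        d = np.zeros(n)
        for x in range(n - 1):        # direct + indirect/interaction effects
            direct = (l1[x] / l1[0]) * (L2[x] / l2[x] - L1[x] / l1[x])
            other = (T2[x + 1] / l1[0]) * (l1[x] / l2[x] - l1[x + 1] / l2[x + 1])
            d[x] = direct + other
        d[-1] = (l1[-1] / l1[0]) * (T2[-1] / l2[-1] - T1[-1] / l1[-1])  # open group
        return d

    # Invented 3-group life tables (ages 0-1, 1-5, 5+) as a consistency check
    l1 = np.array([100000., 98000., 90000.]); L1 = np.array([98500., 376000., 5400000.])
    l2 = np.array([100000., 99000., 93000.]); L2 = np.array([99200., 384000., 6045000.])
    T1 = np.cumsum(L1[::-1])[::-1]; T2 = np.cumsum(L2[::-1])[::-1]
    d = arriaga(l1, L1, T1, l2, L2, T2)
    print(d.sum(), T2[0] / l2[0] - T1[0] / l1[0])   # both equal the e0 gap
    ```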

  5. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular vector decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus the O(N_b^{3-4}) cost of single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log10(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^-4 to 10^-3 to give an acceptable compromise between efficiency and accuracy.

  6. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular vector decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus the O(N_b^{3-4}) cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log10(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can generally be set to 10^-4 to 10^-3 to give an acceptable compromise between efficiency and accuracy.
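    The compound two-step strategy can be illustrated on a small symmetric positive semidefinite matrix standing in for the AO integral tensor unfolding; the pivoted Cholesky below is a generic textbook variant and the truncation threshold is an assumption, not the paper's setting.

    ```python
    import numpy as np

    def pivoted_cholesky(A, tol=1e-8):
        """Pivoted incomplete Cholesky A ~ L @ L.T; stops once the trace of
        the residual drops below tol (generic textbook variant)."""
        d = np.diag(A).astype(float).copy()
        cols = []
        while d.sum() > tol:
            i = int(np.argmax(d))                # largest residual diagonal
            col = A[:, i].astype(float).copy()
            for c in cols:                       # subtract earlier rank-1 terms
                col -= c[i] * c
            col /= np.sqrt(d[i])
            cols.append(col)
            d -= col ** 2
        return np.column_stack(cols)

    rng = np.random.default_rng(1)
    B = rng.standard_normal((60, 12))
    A = B @ B.T                                  # PSD stand-in for the ERI matrix
    L = pivoted_cholesky(A)                      # step 1: CD
    U, s, _ = np.linalg.svd(L, full_matrices=False)
    r = int(np.sum(s > 1e-4 * s[0]))             # step 2: truncated SVD (assumed cut)
    V = U[:, :r] * s[:r]                         # final low-rank vectors
    print(L.shape, r, np.allclose(V @ V.T, A, atol=1e-6))
    ```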

  7. Grouping individual independent BOLD effects: a new way to ICA group analysis

    NASA Astrophysics Data System (ADS)

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2009-04-01

    A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or spatial domain and applies ICA decomposition only once to the combined data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find that subject's independent BOLD effects. The task-related independent BOLD component is then selected from among the resulting independent components of the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as the grand ICA decomposition of spatially concatenated fMRI data does. Neither does one need to assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across subjects. Both assumptions are problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across subjects. As a result, the newly proposed ICAga method is able to better fit task-related BOLD effects at the individual level and thus groups more appropriate multisubject BOLD effects in the group analysis.
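    A rough sketch of the single-subject-first strategy using scikit-learn's FastICA is given below; selecting the task component by correlation with a reference time course is our assumption for illustration, and all data are synthetic.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    def task_component(data, reference, n_components=20, seed=0):
        """Run ICA on one subject's (time x voxel) matrix and return the
        component whose time course correlates best with the task reference."""
        ica = FastICA(n_components=n_components, random_state=seed, max_iter=500)
        sources = ica.fit_transform(data)               # (time, components)
        r = [abs(np.corrcoef(sources[:, i], reference)[0, 1])
             for i in range(sources.shape[1])]
        best = int(np.argmax(r))
        return sources[:, best], ica.components_[best]  # time course, spatial map

    subjects = [np.random.randn(200, 500) for _ in range(3)]  # toy fMRI matrices
    reference = np.sin(np.linspace(0, 8 * np.pi, 200))        # task regressor
    group = [task_component(d, reference) for d in subjects]  # grouped per subject
    print(len(group), group[0][0].shape, group[0][1].shape)
    ```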

  8. Decomposition of diverse solid inorganic matrices with molten ammonium bifluoride salt for constituent elemental analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    O'Hara, Matthew J.; Kellogg, Cyndi M.; Parker, Cyrena M.

    Ammonium bifluoride (ABF, NH4F·HF) is a well-known reagent for converting metal oxides to fluorides and for its applications in breaking down minerals and ores in order to extract useful components. It has more recently been applied to the decomposition of inorganic matrices prior to elemental analysis. Herein, a sample decomposition method that employs molten ABF treatment in the initial step is systematically evaluated across a range of inorganic sample types: glass, quartz, zircon, soil, and pitchblende ore. Method performance is evaluated across two variables: the duration of molten ABF treatment and the ratio of ABF reagent mass to sample mass. The degree of solubilization of these sample classes is compared to the fluoride stoichiometry that is theoretically necessary to achieve complete fluorination of each sample type. Finally, the sample decomposition method is performed on several soil and pitchblende ore standard reference materials, after which elemental constituent analysis is performed by ICP-OES and ICP-MS. Elemental recoveries are compared to the certified values; results indicate good to excellent recoveries across a range of alkaline earth, rare earth, transition metal, and actinide elements.

  9. Image fusion method based on regional feature and improved bidimensional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Qin, Xinqiang; Hu, Gang; Hu, Kai

    2018-01-01

    The decomposition of multiple source images using bidimensional empirical mode decomposition (BEMD) often produces mismatched bidimensional intrinsic mode functions, either by their number or their frequency, making image fusion difficult. A solution to this problem is proposed using a fixed number of iterations and a union operation in the sifting process. By combining the local regional features of the images, an image fusion method has been developed. First, the source images are decomposed using the proposed BEMD to produce the first intrinsic mode function (IMF) and residue component. Second, for the IMF component, a selection and weighted average strategy based on local area energy is used to obtain a high-frequency fusion component. Third, for the residue component, a selection and weighted average strategy based on local average gray difference is used to obtain a low-frequency fusion component. Finally, the fused image is obtained by applying the inverse BEMD transform. Experimental results show that the proposed algorithm provides superior performance over methods based on wavelet transform, line and column-based EMD, and complex empirical mode decomposition, both in terms of visual quality and objective evaluation criteria.

  10. Pseudospectral reverse time migration based on wavefield decomposition

    NASA Astrophysics Data System (ADS)

    Du, Zengli; Liu, Jianjun; Xu, Feng; Li, Yongzhang

    2017-05-01

    The accuracy of seismic numerical simulations and the effectiveness of imaging conditions are important in reverse time migration studies. Using the pseudospectral method, the precision of the calculated spatial derivative of the seismic wavefield can be improved, increasing the vertical resolution of images. Low-frequency background noise, generated by the zero-lag cross-correlation of mismatched forward-propagated and backward-propagated wavefields at impedance interfaces, can be eliminated effectively by using an imaging condition based on the wavefield decomposition technique. The computational complexity can be reduced when imaging is performed in the frequency domain. Since the Fourier transformation along the z-axis is obtained directly as one of the intermediate results of the spatial derivative calculation, the computational load of the wavefield decomposition can be reduced, improving the efficiency of imaging. Comparison of the results for a pulse response in a constant-velocity medium indicates that, when the second-order spatial derivative of the seismic wavefield is obtained using the pseudospectral method rather than the finite difference method, the peak frequency of the Ricker wavelet can be increased by 10-15 Hz while still avoiding spatial numerical dispersion. The results for the SEG/EAGE and Sigsbee2b models show that the signal-to-noise ratio of the profile and the imaging quality of the boundaries of the salt dome migrated using the pseudospectral method are better than those obtained using the finite difference method.
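    The accuracy gap between the two derivative operators is easy to reproduce in one dimension; the sketch below compares an FFT-based second derivative against a second-order finite difference on a synthetic periodic field.

    ```python
    import numpy as np

    # Compare a pseudospectral (FFT) second derivative with a 2nd-order finite
    # difference on a smooth periodic field; data and sizes are illustrative.
    n, L = 256, 2 * np.pi
    x = np.arange(n) * (L / n)
    u = np.exp(np.sin(x))                        # smooth periodic test field

    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # angular wavenumbers
    d2u_spec = np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(u)))
    d2u_fd = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / (L / n) ** 2

    exact = (np.cos(x) ** 2 - np.sin(x)) * u     # analytic second derivative
    print(np.abs(d2u_spec - exact).max())        # near machine precision
    print(np.abs(d2u_fd - exact).max())          # orders of magnitude larger
    ```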

  11. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  12. LP and NLP decomposition without a master problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fuller, D.; Lan, B.

    We describe a new algorithm for decomposition of linear programs and a class of convex nonlinear programs, together with theoretical properties and some test results. Its most striking feature is the absence of a master problem; the subproblems pass primal and dual proposals directly to one another. The algorithm is defined for multi-stage LPs or NLPs, in which the constraints link the current stage's variables to earlier stages' variables. This problem class is general enough to include many problem structures that do not immediately suggest stages, such as block diagonal problems. The basic algorithm is derived for two-stage problems and extended to more than two stages through nested decomposition. The main theoretical result assures convergence, to within any preset tolerance of the optimal value, in a finite number of iterations. This asymptotic convergence result contrasts with the results of limited tests on LPs, in which the optimal solution is apparently found exactly, i.e., to machine accuracy, in a small number of iterations. The tests further suggest that for LPs, the new algorithm is faster than the simplex method applied to the whole problem, as long as the stages are linked loosely; that the speedup over the simplex method improves as the number of stages increases; and that the algorithm is more reliable than nested Dantzig-Wolfe or Benders' methods in its improvement over the simplex method.

  13. A New Coarsening Operator for the Optimal Preconditioning of the Dual and Primal Domain Decomposition Methods: Application to Problems with Severe Coefficient Jumps

    NASA Technical Reports Server (NTRS)

    Farhat, Charbel; Rixen, Daniel

    1996-01-01

    We present an optimal preconditioning algorithm that is equally applicable to the dual (FETI) and primal (Balancing) Schur complement domain decomposition methods, and which successfully addresses the problems of subdomain heterogeneities including the effects of large jumps of coefficients. The proposed preconditioner is derived from energy principles and embeds a new coarsening operator that propagates the error globally and accelerates convergence. The resulting iterative solver is illustrated with the solution of highly heterogeneous elasticity problems.

  14. Kinetics of the cellular decomposition of supersaturated solid solutions

    NASA Astrophysics Data System (ADS)

    Ivanov, M. A.; Naumuk, A. Yu.

    2014-09-01

    A consistent description is given of the kinetics of the cellular decomposition of supersaturated solid solutions with the development of a spatially periodic lamellar (platelike) structure, which consists of alternating lamellae of a precipitate phase based on the impurity component and of the depleted initial solid solution. One of the equations, which determines the relationship between the parameters describing the decomposition process, has been obtained from a comparison of two approaches to determining the rate of change of the free energy of the system. The other kinetic parameters can be described with the use of a variational method, namely, by the maximum velocity of motion of the decomposition boundary at a given temperature. It is shown that the mutual directions of growth of the lamellae of the different phases are determined by the minimum value of the interphase surface energy. To determine the parameters of the decomposition, a simple thermodynamic model of states with a parabolic dependence of the free energy on the concentrations has been used. As a result, expressions have been derived that describe the decomposition rate, the interlamellar distance, and the impurity concentration in the phase that remains after the decomposition. This concentration proves to be equal to the half-sum of the initial concentration and the equilibrium concentration corresponding to the decomposition temperature.

  15. Automated torso organ segmentation from 3D CT images using structured perceptron and dual decomposition

    NASA Astrophysics Data System (ADS)

    Nimura, Yukitaka; Hayashi, Yuichiro; Kitasaka, Takayuki; Mori, Kensaku

    2015-03-01

    This paper presents a method for torso organ segmentation from abdominal CT images using structured perceptron and dual decomposition. Many methods have been proposed to enable automated extraction of organ regions from volumetric medical images. However, their empirical parameters must be adjusted to obtain precise organ regions. This paper proposes an organ segmentation method using structured output learning. Our method utilizes a graphical model and binary features which represent the relationship between voxel intensities and organ labels. We also optimize the weights of the graphical model by structured perceptron and estimate the best organ label for a given image by dynamic programming and dual decomposition. The experimental results revealed that the proposed method can extract organ regions automatically using structured output learning. The error of organ label estimation was 4.4%. The DICE coefficients of left lung, right lung, heart, liver, spleen, pancreas, left kidney, right kidney, and gallbladder were 0.91, 0.95, 0.77, 0.81, 0.74, 0.08, 0.83, 0.84, and 0.03, respectively.
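
    The structured perceptron update at the core of such a method is compact: decode the best labeling under the current weights, then move the weights toward the features of the true labeling and away from the prediction. Below is a minimal toy sketch for a multiclass version; the "voxel" features, labels, and plain argmax decoding (standing in for the paper's dynamic programming / dual decomposition step) are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 200 "voxels" with 5 intensity features and 3 organ labels
X = rng.normal(size=(200, 5))
true_w = rng.normal(size=(3, 5))
y = np.argmax(X @ true_w.T, axis=1)

w = np.zeros((3, 5))                      # one weight vector per label
for epoch in range(10):
    for xi, yi in zip(X, y):
        pred = np.argmax(w @ xi)          # argmax decoding (stand-in for
        if pred != yi:                    # DP / dual decomposition)
            w[yi] += xi                   # move toward the true label
            w[pred] -= xi                 # and away from the prediction

print("training accuracy:", np.mean(np.argmax(X @ w.T, axis=1) == y))
```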

  16. Kinetic Analysis of Isothermal Decomposition Process of Sodium Bicarbonate Using the Weibull Probability Function—Estimation of Density Distribution Functions of the Apparent Activation Energies

    NASA Astrophysics Data System (ADS)

    Janković, Bojan

    2009-10-01

    The decomposition process of sodium bicarbonate (NaHCO3) has been studied by thermogravimetry under isothermal conditions at four operating temperatures (380 K, 400 K, 420 K, and 440 K). It was found that the experimental integral and differential conversion curves at the different operating temperatures can be successfully described by the isothermal Weibull distribution function with a unique value of the shape parameter (β = 1.07). It was also established that the Weibull distribution parameters (β and η) are independent of the operating temperature. Using the integral and differential (Friedman) isoconversional methods, in the conversion (α) range of 0.20 ≤ α ≤ 0.80, the apparent activation energy (Ea) value was approximately constant (Ea,int = 95.2 kJ mol−1 and Ea,diff = 96.6 kJ mol−1, respectively). The values of Ea calculated by both isoconversional methods are in good agreement with the value of Ea evaluated from the Arrhenius equation (94.3 kJ mol−1), which was expressed through the scale distribution parameter (η). The Málek isothermal procedure was used to estimate the kinetic model for the investigated decomposition process. It was found that the two-parameter Šesták-Berggren (SB) autocatalytic model best describes the NaHCO3 decomposition process, with the conversion function f(α) = α^0.18 (1−α)^1.19. It was also concluded that the calculated density distribution functions of the apparent activation energies (ddf(Ea)'s) do not depend on the operating temperature and exhibit highly symmetrical behavior (shape factor = 1.00). The obtained isothermal decomposition results were compared with corresponding results for the nonisothermal decomposition process of NaHCO3.
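
    The Arrhenius step in an analysis like this reduces to a linear fit of ln k against 1/T. A minimal sketch follows; the rate constants k below are invented for illustration (chosen to give an Ea near the reported range), not the paper's data.

```python
import numpy as np

R = 8.314                                        # gas constant, J mol^-1 K^-1
T = np.array([380.0, 400.0, 420.0, 440.0])       # operating temperatures, K
k = np.array([1.2e-4, 5.1e-4, 1.9e-3, 6.2e-3])   # hypothetical rate constants, s^-1

# ln k = ln A - Ea / (R T): the slope of ln k vs 1/T gives -Ea/R
slope, intercept = np.polyfit(1.0 / T, np.log(k), 1)
Ea = -slope * R / 1000.0                         # kJ mol^-1
print(f"Ea = {Ea:.1f} kJ/mol, A = {np.exp(intercept):.3e} s^-1")
```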

  17. Corrected confidence bands for functional data using principal components.

    PubMed

    Goldsmith, J; Greven, S; Crainiceanu, C

    2013-03-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN.
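
    The bootstrap over decompositions can be sketched in a few lines: resample curves, recompute the FPC basis, and inspect how much the leading component varies. A toy version with synthetic curves is shown below; it illustrates the idea only and is not the authors' refund implementation.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
# synthetic functional data: two smooth components plus noise
Y = (rng.normal(size=(100, 1)) * np.sin(2 * np.pi * t)
     + 0.5 * rng.normal(size=(100, 1)) * np.cos(2 * np.pi * t)
     + 0.1 * rng.normal(size=(100, 50)))

def first_fpc(Y):
    Yc = Y - Y.mean(axis=0)
    _, _, Vt = np.linalg.svd(Yc, full_matrices=False)
    v = Vt[0]
    return v if v @ np.sin(2 * np.pi * t) > 0 else -v   # fix the sign

# bootstrap: refit the FPC basis on resampled curves
fpcs = np.array([first_fpc(Y[rng.integers(0, 100, 100)]) for _ in range(200)])
print("mean pointwise sd of first FPC:", fpcs.std(axis=0).mean())
```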

  18. Corrected Confidence Bands for Functional Data Using Principal Components

    PubMed Central

    Goldsmith, J.; Greven, S.; Crainiceanu, C.

    2014-01-01

    Functional principal components (FPC) analysis is widely used to decompose and express functional observations. Curve estimates implicitly condition on basis functions and other quantities derived from FPC decompositions; however these objects are unknown in practice. In this article, we propose a method for obtaining correct curve estimates by accounting for uncertainty in FPC decompositions. Additionally, pointwise and simultaneous confidence intervals that account for both model- and decomposition-based variability are constructed. Standard mixed model representations of functional expansions are used to construct curve estimates and variances conditional on a specific decomposition. Iterated expectation and variance formulas combine model-based conditional estimates across the distribution of decompositions. A bootstrap procedure is implemented to understand the uncertainty in principal component decomposition quantities. Our method compares favorably to competing approaches in simulation studies that include both densely and sparsely observed functions. We apply our method to sparse observations of CD4 cell counts and to dense white-matter tract profiles. Code for the analyses and simulations is publicly available, and our method is implemented in the R package refund on CRAN. PMID:23003003

  19. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    NASA Astrophysics Data System (ADS)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by 1-D spline interpolation to transform the above decompositions to the high-resolution grid. Finally, an efficient approximate decomposition of the correlation matrix is obtained by taking the Kronecker product of the 1-D decompositions. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
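
    The computational payoff of the alternating-direction idea comes from a Kronecker identity: if C = Cx ⊗ Cy ⊗ Cz, the eigenpairs of the large matrix are products of eigenpairs of the small 1-D factors, so only the factors need decomposing. A small numerical check (dimensions here are illustrative):

```python
import numpy as np

def corr1d(n, L=3.0):
    """Gaussian 1-D correlation matrix on a unit-spaced grid."""
    i = np.arange(n)
    return np.exp(-((i[:, None] - i[None, :]) ** 2) / (2 * L**2))

Cx, Cy, Cz = corr1d(8), corr1d(10), corr1d(6)
C = np.kron(Cx, np.kron(Cy, Cz))            # full 480 x 480 matrix

# decompose only the 1-D factors, then combine their eigenvalues
ex, _ = np.linalg.eigh(Cx)
ey, _ = np.linalg.eigh(Cy)
ez, _ = np.linalg.eigh(Cz)
eigs_small = np.sort(np.kron(ex, np.kron(ey, ez)))
eigs_full = np.sort(np.linalg.eigvalsh(C))
print("max eigenvalue mismatch:", np.abs(eigs_small - eigs_full).max())
```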

  20. Iterative methods for elliptic finite element equations on general meshes

    NASA Technical Reports Server (NTRS)

    Nicolaides, R. A.; Choudhury, Shenaz

    1986-01-01

    Iterative methods for arbitrary mesh discretizations of elliptic partial differential equations are surveyed. The methods discussed are preconditioned conjugate gradients, algebraic multigrid, deflated conjugate gradients, element-by-element techniques, and domain decomposition. Computational results are included.
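
    Of the surveyed methods, preconditioned conjugate gradients is the easiest to demonstrate. Below is a minimal sketch using SciPy with a 1-D Poisson matrix and a Jacobi (diagonal) preconditioner; the matrix and right-hand side are illustrative, not tied to the paper's meshes.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import cg, LinearOperator

n = 200
# 1-D Poisson matrix: tridiagonal (-1, 2, -1)
A = sp.diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Jacobi preconditioner: M^{-1} = diag(A)^{-1}
Minv = LinearOperator((n, n), matvec=lambda x: x / A.diagonal())

x, info = cg(A, b, M=Minv)
print("converged" if info == 0 else f"info={info}",
      "| residual:", np.linalg.norm(b - A @ x))
```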

  1. Decomposition mechanism of chromite in sulfuric acid-dichromic acid solution

    NASA Astrophysics Data System (ADS)

    Zhao, Qing; Liu, Cheng-jun; Li, Bao-kuan; Jiang, Mao-fa

    2017-12-01

    The sulfuric acid leaching process is regarded as a promising, cleaner method to prepare trivalent chromium products from chromite; however, the decomposition mechanism of the ore is poorly understood. In this work, binary spinels of Mg-Al, Mg-Fe, and Mg-Cr in the powdered and lump states were synthesized and used as raw materials to investigate the decomposition mechanism of chromite in sulfuric acid-dichromic acid solution. The leaching yields of metallic elements and the changes in morphology of the spinel were studied. The experimental results showed that the three spinels were stable in sulfuric acid solution and that dichromic acid had little influence on the decomposition behavior of the Mg-Al spinel and Mg-Fe spinel because Mg2+, Al3+, and Fe3+ in spinels cannot be oxidized by Cr6+. However, in the case of the Mg-Cr spinel, dichromic acid substantially promoted the decomposition efficiency and functioned as a catalyst. The decomposition mechanism of chromite in sulfuric acid-dichromic acid solution was illustrated on the basis of the findings of this study.

  2. Acceleration of aircraft-level Traffic Flow Management

    NASA Astrophysics Data System (ADS)

    Rios, Joseph Lucio

    This dissertation describes novel approaches to solving large-scale, high fidelity, aircraft-level Traffic Flow Management scheduling problems. Depending on the methods employed, solving these problems to optimality can take longer than the length of the planning horizon in question. Research in this domain typically focuses on the quality of the modeling used to describe the problem and the benefits achieved from the optimized solution, often treating computational aspects as secondary or tertiary. The work presented here takes the complementary view and considers the computational aspect as the primary concern. To this end, a previously published model for solving this Traffic Flow Management scheduling problem is used as the starting point for this study. The model proposed by Bertsimas and Stock-Patterson is a binary integer program taking into account all major resource capacities and the trajectories of each flight to decide which flights should be held in which resource for what amount of time in order to satisfy all capacity requirements. For large instances, the solve time using state-of-the-art solvers is prohibitive for use within a potential decision support tool. With this dissertation, however, it will be shown that solving can be achieved in reasonable time for instances of real-world size. Five other techniques developed and tested for this dissertation will be described in detail. These are heuristic methods that provide good results. Performance is measured in terms of runtime and "optimality gap." We then describe the most successful method presented in this dissertation: Dantzig-Wolfe Decomposition. Results indicate that a parallel implementation of Dantzig-Wolfe Decomposition optimally solves the original problem in much reduced time and with better integrality and smaller optimality gap than any of the heuristic methods or state-of-the-art, commercial solvers. The solution quality improves in every measurable way as the number of subproblems solved in parallel increases. A maximal decomposition provides the best results of any method tested. The convergence qualities of Dantzig-Wolfe Decomposition have been criticized in the past, so we examine what makes the Bertsimas-Stock Patterson model so amenable to use of this method. These mathematical qualities of the model are generalized to provide guidance on other problems that may benefit from massively parallel Dantzig-Wolfe Decomposition. This result, together with the development of the software, and the experimental results indicating the feasibility of real-time, nationwide Traffic Flow Management scheduling represent the major contributions of this dissertation.

  3. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    PubMed Central

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-01-01

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method. PMID:28448431

  4. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array.

    PubMed

    Gao, Yu-Fei; Gui, Guan; Xie, Wei; Zou, Yan-Bin; Yang, Yue; Wan, Qun

    2017-04-27

    This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation for array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications of the EMVS array, such as the strict requirement for uniqueness conditions of decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1,L2,·) BCD is developed, and the uniqueness condition of decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under the conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.
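
    For contrast with the BCD model, the canonical polyadic decomposition that the authors compare against can be tried in a few lines. The sketch below assumes the tensorly package is available; it is a generic CPD example on a synthetic rank-3 tensor, not the paper's algorithm.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import parafac

rng = np.random.default_rng(0)
# build an exactly rank-3 tensor from random factor matrices
factors = [rng.normal(size=(d, 3)) for d in (20, 15, 10)]
T = tl.cp_to_tensor((np.ones(3), factors))

# recover a rank-3 CP model by alternating least squares
cp = parafac(tl.tensor(T), rank=3, n_iter_max=500)
err = tl.norm(T - tl.cp_to_tensor(cp)) / tl.norm(T)
print(f"relative reconstruction error: {err:.2e}")
```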

  5. Basis material decomposition method for material discrimination with a new spectrometric X-ray imaging detector

    NASA Astrophysics Data System (ADS)

    Brambilla, A.; Gorecki, A.; Potop, A.; Paulus, C.; Verger, L.

    2017-08-01

    Energy sensitive photon counting X-ray detectors provide energy dependent information which can be exploited for material identification. The attenuation of an X-ray beam as a function of energy depends on the effective atomic number Zeff and the density. However, the measured attenuation is degraded by the imperfections of the detector response such as charge sharing or pile-up. These imperfections lead to non-linearities that limit the benefits of energy resolved imaging. This work aims to implement a basis material decomposition method which overcomes these problems. Basis material decomposition is based on the fact that the attenuation of any material or complex object can be accurately reproduced by a combination of equivalent thicknesses of basis materials. Our method is based on a calibration phase to learn the response of the detector for different combinations of thicknesses of the basis materials. The decomposition algorithm finds the thicknesses of basis material whose spectrum is closest to the measurement, using a maximum likelihood criterion assuming a Poisson law distribution of photon counts for each energy bin. The method was used with a ME100 linear array spectrometric X-ray imager to decompose different plastic materials on a Polyethylene and Polyvinyl Chloride base. The resulting equivalent thicknesses were used to estimate the effective atomic number Zeff. The results are in good agreement with the theoretical Zeff, regardless of the plastic sample thickness. The linear behaviour of the equivalent lengths makes it possible to process overlapped materials. Moreover, the method was tested with a 3 materials base by adding gadolinium, whose K-edge is not taken into account by the other two materials. The proposed method has the advantage that it can be used with any number of energy channels, taking full advantage of the high energy resolution of the ME100 detector. Although in principle two channels are sufficient, experimental measurements show that the use of a high number of channels significantly improves the accuracy of decomposition by reducing noise and systematic bias.
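
    The decomposition step itself is a small maximum-likelihood fit: find the basis-material thicknesses whose predicted spectrum best matches the measured counts under Poisson statistics. A schematic two-material version follows; the attenuation curves mu_pe and mu_pvc, the incident spectrum n0, and the thicknesses are all invented for illustration, and the Beer-Lambert forward model stands in for the paper's measured calibration.

```python
import numpy as np
from scipy.optimize import minimize

E = np.linspace(20, 120, 20)                    # energy bin centers, keV
mu_pe = 0.2 * (30.0 / E) ** 3 + 0.18            # hypothetical PE attenuation, 1/cm
mu_pvc = 0.5 * (30.0 / E) ** 3 + 0.25           # hypothetical PVC attenuation, 1/cm
n0 = 1e5 * np.exp(-((E - 60) / 40) ** 2)        # incident counts per bin

def expected(t):                                # Beer-Lambert forward model
    return n0 * np.exp(-mu_pe * t[0] - mu_pvc * t[1])

t_true = np.array([4.0, 1.5])                   # cm of PE and PVC
counts = np.random.default_rng(0).poisson(expected(t_true))

def neg_log_lik(t):                             # Poisson NLL up to a constant
    lam = expected(t)
    return np.sum(lam - counts * np.log(lam))

fit = minimize(neg_log_lik, x0=[1.0, 1.0], bounds=[(0, None), (0, None)])
print("estimated thicknesses (cm):", fit.x)
```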

  6. Robust and automated three-dimensional segmentation of densely packed cell nuclei in different biological specimens with Lines-of-Sight decomposition.

    PubMed

    Mathew, B; Schmitz, A; Muñoz-Descalzo, S; Ansari, N; Pampaloni, F; Stelzer, E H K; Fischer, S C

    2015-06-08

    Due to the large amount of data produced by advanced microscopy, automated image analysis is crucial in modern biology. Most applications require reliable cell nuclei segmentation. However, in many biological specimens cell nuclei are densely packed and appear to touch one another in the images. Therefore, a major difficulty of three-dimensional cell nuclei segmentation is the decomposition of cell nuclei that apparently touch each other. Current methods are highly adapted to a certain biological specimen or a specific microscope. They do not ensure similarly accurate segmentation performance, i.e. their robustness for different datasets is not guaranteed. Hence, these methods require elaborate adjustments to each dataset. We present an advanced three-dimensional cell nuclei segmentation algorithm that is accurate and robust. Our approach combines local adaptive pre-processing with decomposition based on Lines-of-Sight (LoS) to separate apparently touching cell nuclei into approximately convex parts. We demonstrate the superior performance of our algorithm using data from different specimens recorded with different microscopes. The three-dimensional images were recorded with confocal and light sheet-based fluorescence microscopes. The specimens are an early mouse embryo and two different cellular spheroids. We compared the segmentation accuracy of our algorithm with ground truth data for the test images and results from state-of-the-art methods. The analysis shows that our method is accurate throughout all test datasets (mean F-measure: 91%) whereas the other methods each failed for at least one dataset (F-measure≤69%). Furthermore, nuclei volume measurements are improved for LoS decomposition. The state-of-the-art methods required laborious adjustments of parameter values to achieve these results. Our LoS algorithm did not require parameter value adjustments. The accurate performance was achieved with one fixed set of parameter values. We developed a novel and fully automated three-dimensional cell nuclei segmentation method incorporating LoS decomposition. LoS are easily accessible features that ensure correct splitting of apparently touching cell nuclei independent of their shape, size or intensity. Our method showed superior performance compared to state-of-the-art methods, performing accurately for a variety of test images. Hence, our LoS approach can be readily applied to quantitative evaluation in drug testing, developmental and cell biology.

  7. Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.

    PubMed

    Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby

    2018-02-06

    Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method and decomposition day, time of sampling and decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the method used to sample must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse, when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective to obtain the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combinations, and incorporate hand-collections of beetles.

  8. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

    Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Using the Fourier transform (FT), the methods select the significant decomposed signals to be employed for prediction. The proposed techniques are developed by coupling the Holt-Winters method with Empirical Mode Decomposition (EMD) and with its smoothed extension, Statistical Empirical Mode Decomposition (SEMD). To show the performance of the proposed techniques, we analyze the daily closing prices of the Kuala Lumpur stock market index.
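
    The coupling can be imitated with off-the-shelf pieces: decompose the series into intrinsic mode functions, forecast each one, and sum the component forecasts. The sketch below assumes the PyEMD and statsmodels packages and synthetic price data; it is a generic EMD-plus-Holt pipeline, not the authors' exact SEMD variant.

```python
import numpy as np
from PyEMD import EMD
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(0)
t = np.arange(500)
price = 100 + 0.05 * t + 5 * np.sin(t / 20) + rng.normal(0, 1, 500)

imfs = EMD().emd(price)                  # intrinsic mode functions (+ residue)
horizon = 30
forecast = np.zeros(horizon)
for imf in imfs:
    model = ExponentialSmoothing(imf, trend="add").fit()
    forecast += model.forecast(horizon)  # sum the per-component forecasts

print("first forecast values:", forecast[:5])
```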

  9. A Flexible Method for Multi-Material Decomposition of Dual-Energy CT Images.

    PubMed

    Mendonca, Paulo R S; Lamb, Peter; Sahani, Dushyant V

    2014-01-01

    The ability of dual-energy computed-tomographic (CT) systems to determine the concentration of constituent materials in a mixture, known as material decomposition, is the basis for many of dual-energy CT's clinical applications. However, the complex composition of tissues and organs in the human body poses a challenge for many material decomposition methods, which assume the presence of only two, or at most three, materials in the mixture. We developed a flexible, model-based method that extends dual-energy CT's core material decomposition capability to handle more complex situations, in which it is necessary to disambiguate among and quantify the concentration of a larger number of materials. The proposed method, named multi-material decomposition (MMD), was used to develop two image analysis algorithms. The first was virtual unenhancement (VUE), which digitally removes the effect of contrast agents from contrast-enhanced dual-energy CT exams. VUE has the ability to reduce patient dose and improve clinical workflow, and can be used in a number of clinical applications such as CT urography and CT angiography. The second algorithm developed was liver-fat quantification (LFQ), which accurately quantifies the fat concentration in the liver from dual-energy CT exams. LFQ can form the basis of a clinical application targeting the diagnosis and treatment of fatty liver disease. Using image data collected from a cohort consisting of 50 patients and from phantoms, the application of MMD to VUE and LFQ yielded quantitatively accurate results when compared against gold standards. Furthermore, consistent results were obtained across all phases of imaging (contrast-free and contrast-enhanced). This is of particular importance since most clinical protocols for abdominal imaging with CT call for multi-phase imaging. We conclude that MMD can successfully form the basis of a number of dual-energy CT image analysis algorithms, and has the potential to improve the clinical utility of dual-energy CT in disease management.

  10. Modeling Oil Shale Pyrolysis: High-Temperature Unimolecular Decomposition Pathways for Thiophene.

    PubMed

    Vasiliou, AnGayle K; Hu, Hui; Cowell, Thomas W; Whitman, Jared C; Porterfield, Jessica; Parish, Carol A

    2017-10-12

    The thermal decomposition mechanism of thiophene has been investigated both experimentally and theoretically. Thermal decomposition experiments were done using a 1 mm × 3 cm pulsed silicon carbide microtubular reactor, C4H4S + Δ → products. Unlike previous studies these experiments were able to identify the initial thiophene decomposition products. Thiophene was entrained in either Ar, Ne, or He carrier gas, passed through a heated (300-1700 K) SiC microtubular reactor (roughly ≤100 μs residence time), and exited into a vacuum chamber. The resultant molecular beam was probed by photoionization mass spectroscopy and IR spectroscopy. The pyrolysis mechanisms of thiophene were also investigated with the CBS-QB3 method using UB3LYP/6-311++G(2d,p) optimized geometries. In particular, these electronic structure methods were used to explore pathways for the formation of elemental sulfur as well as for the formation of H2S and 1,3-butadiyne. Thiophene was found to undergo unimolecular decomposition by five pathways: C4H4S → (1) S═C═CH2 + HCCH, (2) CS + HCCCH3, (3) HCS + HCCCH2, (4) H2S + HCC-CCH, and (5) S + HCC-CH═CH2. The experimental and theoretical findings are in excellent agreement.

  11. Thermal decomposition pathways of hydroxylamine: theoretical investigation on the initial steps.

    PubMed

    Wang, Qingsheng; Wei, Chunyang; Pérez, Lisa M; Rogers, William J; Hall, Michael B; Mannan, M Sam

    2010-09-02

    Hydroxylamine (NH2OH) is an unstable compound at room temperature, and it has been involved in two tragic industrial incidents. Although experimental studies have been carried out to study the thermal stability of hydroxylamine, the detailed decomposition mechanism is still under debate. In this work, several density functional and ab initio methods were used in conjunction with several basis sets to investigate the initial thermal decomposition steps of hydroxylamine, including both unimolecular and bimolecular reaction pathways. The theoretical investigation shows that simple bond dissociations and unimolecular reactions are unlikely to occur. The energetically favorable initial step of the decomposition pathways was determined to be a bimolecular isomerization of hydroxylamine into ammonia oxide with an activation barrier of approximately 25 kcal/mol at the MPW1K level of theory. Because hydroxylamine is available only in aqueous solutions, solvent effects on the initial decomposition pathways were also studied using water cluster methods and the polarizable continuum model (PCM). In water, the activation barrier of the bimolecular isomerization reaction decreases to approximately 16 kcal/mol. The results indicate that the bimolecular isomerization pathway of hydroxylamine is more favorable in aqueous solutions. However, the bimolecular nature of this reaction means that more dilute aqueous solutions will be more stable.

  12. A multilevel preconditioner for domain decomposition boundary systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1991-12-11

    In this note, we consider multilevel preconditioning of the reduced boundary systems which arise in non-overlapping domain decomposition methods. It will be shown that the resulting preconditioned systems have condition numbers which are bounded in the case of multilevel spaces on the whole domain, and which grow at most in proportion to the number of levels in the case of multilevel boundary spaces without multilevel extensions into the interior.

  13. Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudin, Pablo; Marín, José Sánchez

    2014-03-14

    A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioning form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculation of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform an extrapolation to complete basis set investigation on the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.
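
    The core numerical trick — a pivoted, thresholded Cholesky factorization that stops once the residual diagonal falls below a tolerance — is easy to state outside the quantum-chemistry context. Below is a generic sketch for any symmetric positive semidefinite matrix; the function name and tolerance are illustrative, not from the paper.

```python
import numpy as np

def pivoted_cholesky(A, tol=1e-8):
    """Return L (n x k) with A ~= L @ L.T, stopping when the
    largest remaining diagonal element drops below tol."""
    n = A.shape[0]
    d = np.diag(A).astype(float).copy()   # residual diagonal
    L = []
    while d.max() > tol:
        p = int(np.argmax(d))             # pivot: largest residual diagonal
        col = (A[:, p] - sum(l * l[p] for l in L)) / np.sqrt(d[p])
        L.append(col)
        d -= col**2                       # update the residual diagonal
    return np.array(L).T

rng = np.random.default_rng(0)
B = rng.normal(size=(50, 5))
A = B @ B.T                               # rank-5 PSD matrix
L = pivoted_cholesky(A)
print(L.shape, "max error:", np.abs(A - L @ L.T).max())
```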

  14. Effect of preliminary thermal treatment on decomposition kinetics of austenite in low-alloyed pipe steel in intercritical temperature interval

    NASA Astrophysics Data System (ADS)

    Makovetskii, A. N.; Tabatchikova, T. I.; Yakovleva, I. L.; Tereshchenko, N. A.; Mirzaev, D. A.

    2013-06-01

    The decomposition kinetics of austenite that appears in the 13KhFA low-alloyed pipe steel upon heating the samples in an intercritical temperature interval (ICI) and holding for 5 or 30 min has been studied by the method of high-speed dilatometry. The results of dilatometry are supplemented by microstructure analysis. Thermokinetic diagrams of the decomposition of the γ phase are presented. The conclusion has been drawn that an increase in the duration of exposure in the intercritical interval leads to a significant increase in the stability of the γ phase.

  15. On the decomposition of synchronous state machines using sequence invariant state machines

    NASA Technical Reports Server (NTRS)

    Hebbalalu, K.; Whitaker, S.; Cameron, K.

    1992-01-01

    This paper presents a few techniques for the decomposition of synchronous state machines of medium to large sizes into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. The results of the decomposition, and of using the Sequence Invariant State Machine (SISM) design technique to generate the component machines, include much simpler and faster design and implementation. Furthermore, there is increased flexibility in making modifications to the original design, leading to negligible re-design time.

  16. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    PubMed

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low- and high-frequency sub-bands. The low-frequency sub-band is further divided into blocks of the same size after shuffling it, and the singular value decomposition is then applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low-frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust against several image-processing attacks. Comparison with the other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.

  17. Structural system identification based on variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on the modal response data. Finally, after extracting modal responses from the available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies were evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.

  18. Plastic waste sacks alter the rate of decomposition of dismembered bodies within.

    PubMed

    Scholl, Kassra; Moffatt, Colin

    2017-07-01

    As a result of criminal activity, human bodies are sometimes dismembered and concealed within sealed, plastic waste sacks. Consequently, due to the inhibited ingress of insects and dismemberment, the rate of decomposition of the body parts within may be different to that of whole, exposed bodies. Correspondingly, once found, an estimation of the postmortem interval may be affected and lead to erroneous inferences. This study set out to determine whether insects were excluded and how rate of decomposition was affected inside such plastic sacks. The limbs, torsos and heads of 24 dismembered pigs were sealed using nylon cable ties within plastic garbage sacks, half of which were of a type claimed to repel insects. Using a body scoring scale to quantify decomposition, the body parts in the sacks were compared to those of ten exposed, whole pig carcasses. Insects were found to have entered both types of plastic sack. There was no difference in rate of decomposition in the two types of sack (F 1,65  = 1.78, p = 0.19), but this was considerably slower than those of whole carcasses (F 1,408  = 1453, p < 0.001), with heads showing the largest differences. As well as a slower decomposition, sacks resulted in formation of some adipocere tissue as a result of high humidity within. Based upon existing methods, postmortem intervals for body parts within sealed sacks would be significantly underestimated.

  19. Decomposition reactions of (hydroxyalkyl) nitrosoureas and related compounds: possible relationship to carcinogenicity.

    PubMed

    Singer, S S

    1985-08-01

    (Hydroxyalkyl)nitrosoureas and the related cyclic carbamates N-nitrosooxazolidones are potent carcinogens. The decompositions of four such compounds, 1-nitroso-1-(2-hydroxyethyl)urea (I), 3-nitrosooxazolid-2-one (II), 1-nitroso-1-(2-hydroxypropyl)urea (III), and 5-methyl-3-nitrosooxazolid-2-one (IV), in aqueous buffers at physiological pH were studied to determine if any obvious differences in decomposition pathways could account for the variety of tumors obtained from these four compounds. The products predicted by the literature mechanisms for nitrosourea and nitrosooxazolidone decompositions (which were derived from experiments at pH 10-12) were indeed the products formed, including glycols, active carbonyl compounds, epoxides, and, from the oxazolidones, cyclic carbonates. Furthermore, it was shown that in pH 6.4-7.4 buffer epoxides were stable reaction products. However, in the presence of hepatocytes, most of the epoxide was converted to glycol. The analytical methods developed were then applied to the analysis of the decomposition products of some related dialkylnitrosoureas, and similar results were obtained. The formation of chemically reactive secondary products and the possible relevance of these results to carcinogenesis studies are discussed.

  20. Vertically-oriented graphenes supported Mn3O4 as advanced catalysts in post plasma-catalysis for toluene decomposition

    NASA Astrophysics Data System (ADS)

    Bo, Zheng; Hao, Han; Yang, Shiling; Zhu, Jinhui; Yan, Jianhua; Cen, Kefa

    2018-04-01

    This work reports the catalytic performance of vertically-oriented graphenes (VGs) supported manganese oxide catalysts toward toluene decomposition in post plasma-catalysis (PPC) system. Dense networks of VGs were synthesized on carbon paper (CP) via a microwave plasma-enhanced chemical vapor deposition (PECVD) method. A constant current approach was applied in a conventional three-electrode electrochemical system for the electrodeposition of Mn3O4 catalysts on VGs. The as-obtained catalysts were characterized and investigated for ozone conversion and toluene decomposition in a PPC system. Experimental results show that the Mn3O4 catalyst loading mass on VG-coated CP was significantly higher than that on pristine CP (almost 1.8 times for an electrodeposition current of 10 mA). Moreover, the decoration of VGs led to both enhanced catalytic activity for ozone conversion and increased toluene decomposition, exhibiting a great promise in PPC system for the effective decomposition of volatile organic compounds.

  1. A Compound Fault Diagnosis for Rolling Bearings Method Based on Blind Source Separation and Ensemble Empirical Mode Decomposition

    PubMed Central

    Wang, Huaqing; Li, Ruitong; Tang, Gang; Yuan, Hongfang; Zhao, Qingliang; Cao, Xi

    2014-01-01

    A compound fault signal usually contains multiple characteristic signals and strong confounding noise, which makes it difficult to separate weak fault signals using conventional approaches such as FFT-based envelope detection, wavelet transform, or empirical mode decomposition individually. To improve the compound-fault diagnosis of rolling bearings via signal separation, the present paper proposes a new method to identify compound faults from measured mixed signals, based on the ensemble empirical mode decomposition (EEMD) method and the independent component analysis (ICA) technique. With this approach, a vibration signal is first decomposed into intrinsic mode functions (IMFs) by the EEMD method to obtain multichannel signals. Then, according to a cross-correlation criterion, the corresponding IMFs are selected as the input matrix of ICA. Finally, the compound faults can be separated effectively by executing the ICA method, which makes the fault features easier to extract and more clearly identified. Experimental results validate the effectiveness of the proposed method in compound-fault separation, which works not only for the outer-race defect, but also for the roller defect and the unbalance fault of the experimental system. PMID:25289644
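
    The separation stage of such a pipeline is standard blind source separation: once EEMD has produced a multichannel matrix, ICA unmixes it. Below is a minimal ICA-only sketch on synthetic mixtures using scikit-learn; the EEMD step is omitted, and the square-wave "fault" signal and mixing matrix are invented for illustration.

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2000)
s1 = np.sign(np.sin(2 * np.pi * 37 * t))        # square-wave "fault" train
s2 = np.sin(2 * np.pi * 50 * t)                 # harmonic interference
S = np.c_[s1, s2] + 0.05 * rng.normal(size=(2000, 2))

A = np.array([[1.0, 0.6], [0.4, 1.0]])          # mixing matrix
X = S @ A.T                                     # observed mixtures

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)                    # sources recovered up to
print(S_est.shape)                              # scale and permutation
```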

  2. Optimization of data analysis for the in vivo neutron activation analysis of aluminum in bone.

    PubMed

    Mohseni, H K; Matysiak, W; Chettle, D R; Byun, S H; Priest, N; Atanackovic, J; Prestwich, W V

    2016-10-01

    An existing system at McMaster University has been used for the in vivo measurement of aluminum in human bone. Precise and detailed analysis approaches are necessary to determine the aluminum concentration because of the low levels of aluminum found in the bone and the challenges associated with its detection. Phantoms resembling the composition of the human hand with varying concentrations of aluminum were made for testing the system prior to the application to human studies. A spectral decomposition model and a photopeak fitting model involving the inverse-variance weighted mean and a time-dependent analysis were explored to analyze the results and determine the model with the best performance and lowest minimum detection limit. The results showed that the spectral decomposition and the photopeak fitting model with the inverse-variance weighted mean both provided better results compared to the other methods tested. The spectral decomposition method resulted in a marginally lower detection limit (5 μg Al/g Ca) compared to the inverse-variance weighted mean (5.2 μg Al/g Ca), rendering both equally applicable to human measurements.
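
    The inverse-variance weighted mean used in the photopeak model is worth spelling out: each measurement is weighted by 1/σ², and the combined uncertainty follows directly. A small sketch with made-up repeat measurements:

```python
import numpy as np

# hypothetical repeated estimates and their uncertainties
x = np.array([4.8, 5.6, 5.1, 4.5])      # e.g. micrograms Al per gram Ca
sigma = np.array([0.9, 1.2, 0.7, 1.0])

w = 1.0 / sigma**2
mean = np.sum(w * x) / np.sum(w)
err = 1.0 / np.sqrt(np.sum(w))          # standard error of the weighted mean
print(f"weighted mean = {mean:.2f} +/- {err:.2f}")
```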

  3. Dominant modal decomposition method

    NASA Astrophysics Data System (ADS)

    Dombovari, Zoltan

    2017-03-01

    The paper deals with the automatic decomposition of experimental frequency response functions (FRFs) of mechanical structures. The decomposition of FRFs is based on the Green function representation of free vibratory systems. After the determination of the impulse dynamic subspace, the system matrix is formulated and the poles are calculated directly. By means of the corresponding eigenvectors, the contribution of each element of the impulse dynamic subspace is determined and a sufficient decomposition of the corresponding FRF is carried out. With the presented dominant modal decomposition (DMD) method, the mode shapes, the modal participation vectors, and the modal scaling factors are identified using the decomposed FRFs. An analytical example is presented along with experimental case studies taken from the machine tool industry.

  4. Multiwavelet grading of prostate pathological images

    NASA Astrophysics Data System (ADS)

    Soltanian-Zadeh, Hamid; Jafari-Khouzani, Kourosh

    2002-05-01

    We have developed image analysis methods to automatically grade pathological images of the prostate. The proposed method assigns Gleason grades to images, where each image is given a grade between 1 and 5. This is done using features extracted from multiwavelet transforms. We extract energy and entropy features from the submatrices obtained in the decomposition. Next, we apply a k-NN classifier to grade the image. To find the optimal multiwavelet basis, preprocessing, and classifier, we use features extracted by different multiwavelets with either critically sampled preprocessing or repeated-row preprocessing, and different k-NN classifiers, and compare their performance as evaluated by the total misclassification rate (TMR). To evaluate sensitivity to noise, we add white Gaussian noise to the images and compare the resulting TMRs. We applied the proposed methods to 100 images. We evaluated the first and second levels of decomposition using Geronimo, Hardin, and Massopust (GHM), Chui and Lian (CL), and Shen (SA4) multiwavelets. We also evaluated the k-NN classifier for k = 1, 2, 3, 4, 5. Experimental results illustrate that the first level of decomposition is quite noisy. They also show that critically sampled preprocessing outperforms repeated-row preprocessing and is less sensitive to noise. Finally, comparison studies indicate that the SA4 multiwavelet and the k-NN classifier (k = 1) generate optimal results (with the smallest TMR, 3%).
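
    The feature pipeline can be approximated with scalar wavelets in place of multiwavelets: decompose each image, take the energy and entropy of each subband, and classify with k-NN. The sketch below assumes the PyWavelets and scikit-learn packages and synthetic "images"; it is a stand-in for the paper's multiwavelet features, not a reproduction of them.

```python
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier

def subband_features(img):
    feats = []
    coeffs = pywt.wavedec2(img, "db2", level=2)
    for band in [coeffs[0]] + [b for lvl in coeffs[1:] for b in lvl]:
        e = band.ravel() ** 2
        p = e / e.sum()
        feats += [e.sum(), -(p * np.log(p + 1e-12)).sum()]  # energy, entropy
    return feats

rng = np.random.default_rng(0)
# two hypothetical "grades": low- vs. high-variance texture
imgs = [rng.normal(scale=s, size=(64, 64)) for s in (1, 3) for _ in range(20)]
labels = [0] * 20 + [1] * 20
X = np.array([subband_features(im) for im in imgs])

clf = KNeighborsClassifier(n_neighbors=1).fit(X[::2], labels[::2])
print("held-out accuracy:", clf.score(X[1::2], labels[1::2]))
```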

  5. Descent theory for semiorthogonal decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Elagin, Alexei D

    We put forward a method for constructing semiorthogonal decompositions of the derived category of G-equivariant sheaves on a variety X under the assumption that the derived category of sheaves on X admits a semiorthogonal decomposition with components preserved by the action of the group G on X. This method is used to obtain semiorthogonal decompositions of equivariant derived categories for projective bundles and blow-ups with a smooth centre as well as for varieties with a full exceptional collection preserved by the group action. Our main technical tool is descent theory for derived categories. Bibliography: 12 titles.

  6. Radiation noise of the bearing applied to the ceramic motorized spindle based on the sub-source decomposition method

    NASA Astrophysics Data System (ADS)

    Bai, X. T.; Wu, Y. H.; Zhang, K.; Chen, C. Z.; Yan, H. P.

    2017-12-01

    This paper focuses on the calculation and analysis of the radiation noise of the angular contact ball bearing applied to the ceramic motorized spindle. A dynamic model containing the main working conditions and structural parameters is established based on the dynamic theory of rolling bearings. The sub-source decomposition method is introduced for the calculation of the radiation noise of the bearing, and a comparative experiment is adopted to check the precision of the method. The contributions of the different components are then compared in the frequency domain based on the sub-source decomposition method. The spectra of the radiation noise of the different components under various rotation speeds are used as the basis for assessing the contribution of different eigenfrequencies to the radiation noise of the components, and the proportions of friction noise and impact noise are evaluated as well. The results of the research provide a theoretical basis for the calculation of bearing noise and offer a reference for assessing the impact of different components on the radiation noise of the bearing under different rotation speeds.

  7. Applications of singular value analysis and partial-step algorithm for nonlinear orbit determination

    NASA Technical Reports Server (NTRS)

    Ryne, Mark S.; Wang, Tseng-Chan

    1991-01-01

    An adaptive method in which cruise and nonlinear orbit determination problems can be solved using a single program is presented. It involves singular value decomposition augmented with an extended partial step algorithm. The extended partial step algorithm constrains the size of the correction to the spacecraft state and other solve-for parameters. The correction is controlled by an a priori covariance and a user-supplied bounds parameter. The extended partial step method is an extension of the update portion of the singular value decomposition algorithm. It thus preserves the numerical stability of the singular value decomposition method, while extending the region over which it converges. In linear cases, this method reduces to the singular value decomposition algorithm with the full rank solution. Two examples are presented to illustrate the method's utility.
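
    The combination described here — an SVD-based least-squares update whose length is capped — can be sketched generically: solve the linearized problem through a truncated pseudo-inverse, then shrink the correction if it exceeds a user-supplied bound. The function below is a schematic of that idea, not the JPL implementation; the names svd_partial_step and max_step are illustrative.

```python
import numpy as np

def svd_partial_step(J, r, max_step=1.0, rcond=1e-10):
    """One Gauss-Newton-style correction dx solving J dx ~= r via SVD,
    with the step length clamped to max_step."""
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    keep = s > rcond * s[0]                 # drop near-singular directions
    dx = Vt[keep].T @ ((U[:, keep].T @ r) / s[keep])
    norm = np.linalg.norm(dx)
    if norm > max_step:                     # partial step: clamp the update
        dx *= max_step / norm
    return dx

rng = np.random.default_rng(0)
J = rng.normal(size=(30, 6))                # toy Jacobian
r = rng.normal(size=30)                     # toy residual
print(np.linalg.norm(svd_partial_step(J, r, max_step=0.5)))  # <= 0.5
```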

  8. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.

  9. Unconditionally energy stable time stepping scheme for Cahn–Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least-perimeter periodic space partitioning problem. •Developing a penalization strategy to avoid trivial solutions. •Presentation of the MATLAB implementation of the introduced algorithm.
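
    Eyre-type splittings are easiest to see in the two-phase (Cahn-Hilliard) special case. Below is a linearly stabilized splitting in that spirit for 1-D periodic data: the stiff linear terms are treated implicitly in Fourier space and the rest explicitly. This is a simplified scalar analogue under assumed parameters (eps, dt, stabilizer s), not the paper's multi-component Cahn-Morral scheme.

```python
import numpy as np

n, L, eps, dt, s = 256, 2 * np.pi, 0.05, 0.01, 2.0
k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi     # angular wavenumbers
k2, k4 = k**2, k**4

rng = np.random.default_rng(0)
u = 0.05 * rng.normal(size=n)                  # small perturbation of u = 0

for step in range(2000):
    # u_t = (mu)_xx with mu = u^3 - u - eps^2 u_xx; stabilizer s*u implicit
    g = np.fft.fft(u**3 - (1.0 + s) * u)       # explicit part of mu
    u_hat = (np.fft.fft(u) - dt * k2 * g) / (1.0 + dt * s * k2 + dt * eps**2 * k4)
    u = np.real(np.fft.ifft(u_hat))

# after spinodal decomposition, u separates into +/- phase domains
print("phase fractions:", np.mean(u > 0), np.mean(u < 0))
```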

  10. Matrix Methods for Estimating the Coherence Functions from Estimates of the Cross-Spectral Density Matrix

    DOE PAGES

    Smallwood, D. O.

    1996-01-01

    It is shown that the usual method for estimating the coherence functions (ordinary, partial, and multiple) for a general multiple-input/multiple-output problem can be expressed as a modified form of Cholesky decomposition of the cross-spectral density matrix of the input and output records. The results can be equivalently obtained using singular value decomposition (SVD) of the cross-spectral density matrix. Using SVD suggests a new form of fractional coherence. The formulation as a SVD problem also suggests a way to order the inputs when a natural physical order of the inputs is absent.
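
    The ordinary coherence function that such decompositions reproduce is available directly from spectral estimates: gamma²(f) = |Pxy|² / (Pxx Pyy). A quick check with SciPy on two signals sharing one tone (the signals and frequencies are illustrative):

```python
import numpy as np
from scipy.signal import coherence

fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
shared = np.sin(2 * np.pi * 80 * t)             # common 80 Hz component
x = shared + 0.5 * rng.normal(size=t.size)
y = 0.7 * shared + 0.5 * rng.normal(size=t.size)

f, Cxy = coherence(x, y, fs=fs, nperseg=1024)
print("coherence near 80 Hz:", Cxy[np.argmin(np.abs(f - 80))])
```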

  11. Signal evaluations using singular value decomposition for Thomson scattering diagnostics.

    PubMed

    Tojo, H; Yamada, I; Yasuhara, R; Yatsuka, E; Funaba, H; Hatae, T; Hayashi, H; Itami, K

    2014-11-01

    This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (Te) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.
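
    One common way to apply SVD to a single noisy record like this is to embed it in a Hankel matrix, keep only the leading singular components, and average the anti-diagonals back into a signal. The sketch below is a generic Hankel-SVD denoiser on a synthetic two-pulse trace, not the authors' exact procedure; window and rank are illustrative.

```python
import numpy as np

def svd_denoise(sig, window=50, rank=2):
    n = len(sig)
    # Hankel embedding: rows are overlapping windows of the signal
    H = np.array([sig[i:i + window] for i in range(n - window + 1)])
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # rank-truncated matrix
    # average anti-diagonals to map back to a 1-D signal
    out = np.zeros(n)
    counts = np.zeros(n)
    for i in range(Hr.shape[0]):
        out[i:i + window] += Hr[i]
        counts[i:i + window] += 1
    return out / counts

t = np.linspace(0, 1, 500)
pulse = np.exp(-((t - 0.3) / 0.02) ** 2) + np.exp(-((t - 0.6) / 0.02) ** 2)
noisy = pulse + 0.2 * np.random.default_rng(0).normal(size=t.size)
print("rms error:", np.sqrt(np.mean((svd_denoise(noisy) - pulse) ** 2)))
```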

  12. Signal evaluations using singular value decomposition for Thomson scattering diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tojo, H., E-mail: tojo.hiroshi@jaea.go.jp; Yatsuka, E.; Hatae, T.

    2014-11-15

    This paper provides a novel method for evaluating signal intensities in incoherent Thomson scattering diagnostics. A double-pass Thomson scattering system, where a laser passes through the plasma twice, generates two scattering pulses from the plasma. Evaluations of the signal intensities in the spectrometer are sometimes difficult due to noise and stray light. We apply the singular value decomposition method to Thomson scattering data with strong noise components. Results show that the average accuracy of the measured electron temperature (Te) is superior to that of temperature obtained using a low-pass filter (<20 MHz) or without any filters.

  13. Controlled decomposition and oxidation: A treatment method for gaseous process effluents

    NASA Technical Reports Server (NTRS)

    Mckinley, Roger J. B., Sr.

    1990-01-01

    The safe disposal of effluent gases produced by the electronics industry deserves special attention. Due to the hazardous nature of many of the materials used, it is essential to control and treat the reactants and reactant by-products as they are exhausted from the process tool and prior to their release into the manufacturing facility's exhaust system and the atmosphere. Controlled decomposition and oxidation (CDO) is one method of treating effluent gases from thin film deposition processes. CDO equipment applications, field experience with the equipment, and the technological advances gained from that experience are discussed.

  14. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.
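
    The master/pricing interaction at the heart of Dantzig-Wolfe decomposition is easiest to see in a classic column-generation example. The sketch below solves a cutting-stock LP with SciPy, where the master is a restricted LP over known patterns and the pricing subproblem is a knapsack over the duals; it is a textbook illustration, not the Bertsimas-Stock Patterson traffic model.

```python
import numpy as np
from scipy.optimize import linprog

W = 100                                  # roll width
w = np.array([45, 36, 31, 14])           # piece widths
d = np.array([97, 610, 395, 211])        # demands

# initial patterns: cut as many of one piece type as fits
A = np.diag(W // w).astype(float)

while True:
    n = A.shape[1]
    res = linprog(np.ones(n), A_ub=-A, b_ub=-d.astype(float),
                  bounds=[(0, None)] * n, method="highs")
    duals = -res.ineqlin.marginals       # prices on the demand rows
    # pricing subproblem: unbounded knapsack maximizing dual value
    dp = np.zeros(W + 1)
    take = -np.ones(W + 1, dtype=int)
    for c in range(1, W + 1):
        for i, wi in enumerate(w):
            if wi <= c and dp[c - wi] + duals[i] > dp[c]:
                dp[c], take[c] = dp[c - wi] + duals[i], i
    if dp[W] <= 1 + 1e-9:
        break                            # no column with negative reduced cost
    col, c = np.zeros(len(w)), W
    while take[c] >= 0:                  # rebuild the best pattern found
        col[take[c]] += 1
        c -= w[take[c]]
    A = np.column_stack([A, col])        # pass the proposal to the master

print("rolls needed (LP bound):", res.fun, "| patterns generated:", A.shape[1])
```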

  15. Towards accurate modeling of noncovalent interactions for protein rigidity analysis

    PubMed Central

    2013-01-01

    Background Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. Results To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data set. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. Conclusion To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu. PMID:24564209
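
    The B-cubed comparison at the heart of the benchmark can be written down compactly. The sketch below assumes each decomposition is given as an atom-to-cluster map; the exact weighting applied in KINARI's benchmark may differ.

      # B-cubed precision/recall between two cluster decompositions.
      def b_cubed(pred, truth):
          items = list(pred)
          p_sum = r_sum = 0.0
          for i in items:
              same_pred = {j for j in items if pred[j] == pred[i]}
              same_true = {j for j in items if truth[j] == truth[i]}
              overlap = len(same_pred & same_true)
              p_sum += overlap / len(same_pred)
              r_sum += overlap / len(same_true)
          n = len(items)
          return p_sum / n, r_sum / n

      pred  = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}   # hypothetical decompositions
      truth = {1: 'a', 2: 'a', 3: 'a', 4: 'b'}
      print(b_cubed(pred, truth))                # -> (0.75, 0.666...)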

  16. Experimental and DFT simulation study of a novel felodipine cocrystal: Characterization, dissolving properties and thermal decomposition kinetics.

    PubMed

    Yang, Caiqin; Guo, Wei; Lin, Yulong; Lin, Qianqian; Wang, Jiaojiao; Wang, Jing; Zeng, Yanli

    2018-05-30

    In this study, a new cocrystal of felodipine (Fel) and glutaric acid (Glu) with a high dissolution rate was developed using the solvent ultrasonic method. The prepared cocrystal was characterized using X-ray powder diffraction, differential scanning calorimetry, thermogravimetric (TG) analysis, and infrared (IR) spectroscopy. To provide basic information about the optimization of pharmaceutical preparations of Fel-based cocrystals, this work investigated the thermal decomposition kinetics of the Fel-Glu cocrystal through non-isothermal thermogravimetry. Density functional theory (DFT) simulations were also performed on the Fel monomer and the trimolecular cocrystal compound for exploring the mechanisms underlying hydrogen bonding formation and thermal decomposition. Combined results of IR spectroscopy and DFT simulation verified that the Fel-Glu cocrystal formed via the N-H⋯O=C and C=O⋯H-O hydrogen bonds between Fel and Glu at the ratio of 1:2. The TG/derivative TG curves indicated that the thermal decomposition of the Fel-Glu cocrystal underwent a two-step process. The apparent activation energy (E_a) and pre-exponential factor (A) of the thermal decomposition for the first stage were 84.90 kJ mol⁻¹ and 7.03 × 10⁷ min⁻¹, respectively. The mechanism underlying thermal decomposition possibly involved nucleation and growth, with the integral mechanism function G(α) = α^(3/2). DFT calculation revealed that the hydrogen bonding between Fel and Glu weakened the terminal methoxyl, methyl, and ethyl groups in the Fel molecule. As a result, these groups were lost along with the Glu molecule in the first thermal decomposition. In conclusion, the formed cocrystal exhibited different thermal decomposition kinetics and showed different E_a, A, and shelf life from the intact active pharmaceutical ingredient. Copyright © 2018 Elsevier B.V. All rights reserved.
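
    As a worked check of what such kinetic constants imply, the Arrhenius relation k = A·exp(-E_a/RT) can be evaluated with the reported first-stage values; the temperatures chosen below are illustrative, not from the paper.

      # Arrhenius rate constants from the reported E_a and A (first stage).
      import math

      R = 8.314        # J mol^-1 K^-1
      Ea = 84.90e3     # J mol^-1
      A = 7.03e7       # min^-1

      def k(T):
          """First-order rate constant k = A * exp(-Ea / (R * T))."""
          return A * math.exp(-Ea / (R * T))

      for T in (298.15, 373.15):            # 25 °C and 100 °C, illustrative
          print(f"T = {T:.2f} K: k = {k(T):.3e} min^-1")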

  17. Ranking the spreading ability of nodes in network core

    NASA Astrophysics Data System (ADS)

    Tong, Xiao-Lei; Liu, Jian-Guo; Wang, Jiang-Pan; Guo, Qiang; Ni, Jing

    2015-11-01

    Ranking nodes by their spreading ability in complex networks is of vital significance for better understanding the network structure and spreading information more efficiently. The k-shell decomposition method can identify the most influential nodes, namely the network core, but assigns them the same ks value regardless of their different spreading influence. In this paper, we present an improved method based on the k-shell decomposition method and closeness centrality (CC) to rank the spreading influence of nodes in the network core. Experimental results on data from a scientific collaboration network and the U.S. aviation network show that the accuracy of the presented method is 31% and 45% higher than that obtained by the degree k, and 32% and 31% higher than that obtained by the betweenness.
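
    A hedged sketch of the two ingredients, the k-shell index plus closeness-centrality ranking inside the core, is shown below using networkx. The toy graph is a stand-in, not the paper's collaboration or aviation data, and the paper's exact ranking formula may differ.

      # Rank the k-shell core by closeness centrality (illustrative graph).
      import networkx as nx

      G = nx.karate_club_graph()
      ks = nx.core_number(G)                   # k-shell index of every node
      k_max = max(ks.values())
      core = [n for n, k in ks.items() if k == k_max]

      cc = nx.closeness_centrality(G)
      ranked_core = sorted(core, key=lambda n: cc[n], reverse=True)
      print(ranked_core)                       # core nodes, most central first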

  18. A novel iterative scheme and its application to differential equations.

    PubMed

    Khan, Yasir; Naeem, F; Šmarda, Zdeněk

    2014-01-01

    The purpose of this paper is to employ an alternative approach to reconstruct the standard variational iteration algorithm II proposed by He, including the Lagrange multiplier, and to give a simpler formulation of the Adomian decomposition and modified Adomian decomposition methods in terms of the newly proposed variational iteration method-II (VIM). Careful investigation reveals unnecessary calculations of the Lagrange multiplier in the earlier variational iteration algorithm and repeated calculations in each iteration of the Adomian decomposition method. Several examples are given to verify the reliability and efficiency of the method.
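
    For readers unfamiliar with the Adomian recursion, a minimal sketch for the linear test problem y' = y, y(0) = 1 is given below; each component is the integral of the previous one, and the partial sums reproduce the Taylor series of exp(t). The test problem is ours, not one of the paper's examples.

      # Adomian-style recursion for y' = y, y(0) = 1 (sympy).
      import sympy as sp

      t = sp.symbols('t')
      y = sp.Integer(1)            # y_0 from the initial condition
      total = y
      for _ in range(5):
          y = sp.integrate(y, (t, 0, t))   # y_{n+1} = integral of y_n
          total += y
      print(sp.expand(total))      # 1 + t + t**2/2 + ... + t**5/120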

  19. Artifact removal from EEG data with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Grubov, Vadim V.; Runnova, Anastasiya E.; Efremova, Tatyana Yu.; Hramov, Alexander E.

    2017-03-01

    In this paper we propose a novel method for dealing with the physiological artifacts caused by intensive activity of facial and neck muscles and other movements in experimental human EEG recordings. The method is based on analysis of EEG signals with empirical mode decomposition (Hilbert-Huang transform). We introduce the mathematical algorithm of the method with the following steps: empirical mode decomposition of the EEG signal, selection of the empirical modes containing artifacts, removal of those modes, and reconstruction of the initial EEG signal. We test the method by filtering experimental human EEG signals contaminated with movement artifacts and show its high efficiency.
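
    The four steps can be sketched with the third-party PyEMD package (distributed on PyPI as EMD-signal); the synthetic "EEG", and the rule that the first IMF carries the broadband muscle artifact, are assumptions made for illustration only.

      # EMD-based artifact removal sketch (assumes PyEMD is installed).
      import numpy as np
      from PyEMD import EMD

      rng = np.random.default_rng(1)
      t = np.linspace(0, 2, 1000)
      eeg = np.sin(2 * np.pi * 10 * t)                   # 10 Hz rhythm stand-in
      signal = eeg + 0.8 * rng.standard_normal(t.size)   # broadband "artifact"

      imfs = EMD().emd(signal)                       # step 1: decompose
      bad = [0]                                      # step 2: artifact IMFs
      keep = [i for i in range(imfs.shape[0]) if i not in bad]
      cleaned = imfs[keep].sum(axis=0)               # steps 3-4: remove, rebuild
      print(imfs.shape, np.corrcoef(cleaned, eeg)[0, 1])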

  20. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture for two-dimensional signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2001-01-01

    A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated presentation techniques of the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF's) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMF's of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert transformed IMF's of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMF's by, for example, filtering the two-dimensional signal by reconstructing the two-dimensional signal from selected IMF(s).

  1. Theoretical Study of Decomposition Pathways for HArF and HKrF

    NASA Technical Reports Server (NTRS)

    Chaban, Galina M.; Lundell, Jan; Gerber, R. Benny; Kwak, Donchan (Technical Monitor)

    2002-01-01

    To provide theoretical insights into the stability and dynamics of the new rare gas compounds HArF and HKrF, reaction paths for the decomposition processes HRgF → Rg + HF and HRgF → H + Rg + F (Rg = Ar, Kr) are calculated using ab initio electronic structure methods. The bending channels, HRgF → Rg + HF, are described by single-configurational MP2 and CCSD(T) electronic structure methods, while the linear decomposition paths, HRgF → H + Rg + F, require the use of multi-configurational wave functions that include dynamic correlation and are size extensive. HArF and HKrF molecules are found to be energetically stable with respect to the atomic dissociation products (H + Rg + F) and separated by substantial energy barriers from the Rg + HF products, which ensure their kinetic stability. The results are compatible with experimental data on these systems.

  2. Pole-Like Street Furniture Decomposition in Mobile Laser Scanning Data

    NASA Astrophysics Data System (ADS)

    Li, F.; Oude Elberink, S.; Vosselman, G.

    2016-06-01

    Automatic semantic interpretation of street furniture has become a popular topic in recent years. Current studies detect street furniture as connected components of points above the street level. Street furniture classification based on properties of such components suffers from large intra-class variability of shapes and cannot deal with mixed classes like traffic signs attached to light poles. In this paper, we focus on the decomposition of point clouds of pole-like street furniture. A novel street furniture decomposition method is proposed, which consists of three steps: (i) acquisition of prior knowledge, (ii) pole extraction, (iii) component separation. For the pole extraction, a novel global pole extraction approach is proposed to handle three different cases of street furniture. In an evaluation involving the decomposition of 27 different instances of street furniture, we demonstrate that our method decomposes mixed-class street furniture into poles and components with different functionalities.

  3. Air trichloroethylene oxidation in a corona plasma-catalytic reactor

    NASA Astrophysics Data System (ADS)

    Masoomi-Godarzi, S.; Ranji-Burachaloo, H.; Khodadadi, A. A.; Vesali-Naseh, M.; Mortazavi, Y.

    2014-08-01

    The oxidative decomposition of trichloroethylene (TCE; 300 ppm) by non-thermal corona plasma was investigated in dry air at atmospheric pressure and room temperature, both in the absence and presence of catalysts including MnOx and CoOx. The catalysts were synthesized by a co-precipitation method, and their morphology and structure were characterized by BET surface area measurement and Fourier Transform Infrared (FTIR) methods. Decomposition of TCE and the distribution of products were evaluated by a gas chromatograph (GC) and an FTIR spectrometer. In the absence of the catalyst, TCE removal increases with the applied voltage and current intensity. Higher TCE removal and CO2 selectivity are observed in the presence of both the corona and the catalysts, compared with the plasma alone. The results show that MnOx and CoOx catalysts can dissociate the in-plasma produced ozone into oxygen radicals, which enhances the TCE decomposition.

  4. Matching multiple rigid domain decompositions of proteins

    PubMed Central

    Flynn, Emily; Streinu, Ileana

    2017-01-01

    We describe efficient methods for consistently coloring and visualizing collections of rigid cluster decompositions obtained from variations of a protein structure, and lay the foundation for more complex setups that may involve different computational and experimental methods. The focus here is on three biological applications: the conceptually simpler problems of visualizing results of dilution and mutation analyses, and the more complex task of matching decompositions of multiple NMR models of the same protein. Implemented in the KINARI web server application, the improved visualization techniques give useful information about protein folding cores, help examine the effect of mutations on protein flexibility and function, and provide insights into the structural motions of PDB proteins solved with solution NMR. These tools have been developed with the goal of improving and validating rigidity analysis as a credible coarse-grained model capturing essential information about a protein's slow motions near the native state. PMID:28141528

  5. Soft tissue decomposition of submerged, dismembered pig limbs enclosed in plastic bags.

    PubMed

    Pakosh, Caitlin M; Rogers, Tracy L

    2009-11-01

    This study examines underwater soft tissue decomposition of dismembered pig limbs deposited in polyethylene plastic bags. The research evaluates the level of influence that disposal method has on underwater decomposition processes and details observations specific to this scenario. To our knowledge, no other study has yet investigated decomposing, dismembered, and enclosed remains in water environments. The total sample size consisted of 120 dismembered pig limbs, divided into a subsample of 30 pig limbs per recovery period (34 and 71 days) for each treatment. The two treatments simulated non-enclosed and plastic enclosed disposal methods in a water context. The remains were completely submerged in Lake Ontario for 34 and 71 days. In both recovery periods, the non-enclosed samples lost soft tissue to a significantly greater extent than their plastic enclosed counterparts. Disposal of remains in plastic bags therefore results in preservation, most likely caused by bacterial inhibition and reduced oxygen levels.

  6. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.

  7. Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems

    NASA Astrophysics Data System (ADS)

    Arrarás, A.; Portero, L.; Yotov, I.

    2014-01-01

    We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.

  8. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    NASA Astrophysics Data System (ADS)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important class of current environmental pollution sources, are highly oncogenic. PAH pollutants can be detected using fluorescence spectroscopy; however, the instrument produces noise in the experiment, and weak fluorescent signals are easily affected by it, so we propose a way to denoise the spectra and improve detection. Firstly, we use a fluorescence spectrometer to detect PAHs and obtain their fluorescence spectra. Subsequently, the noise is reduced by the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.

  9. Automatic image enhancement based on multi-scale image decomposition

    NASA Astrophysics Data System (ADS)

    Feng, Lu; Wu, Zhuangzhi; Pei, Luo; Long, Xiong

    2014-01-01

    In image processing and computational photography, automatic image enhancement is one of the long-standing objectives. Recent automatic image enhancement methods take into account not only the global semantics, such as correcting color hue and brightness imbalances, but also the local content of the image, such as human faces and the sky in landscapes. In this paper we describe a new scheme for automatic image enhancement that considers both the global semantics and the local content of the image. Our automatic image enhancement method employs a multi-scale edge-aware image decomposition approach to detect underexposed regions and enhance the detail of the salient content. The experimental results demonstrate the effectiveness of our approach compared to existing automatic enhancement methods.

  10. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476

  11. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear core reactor is expensive in terms of memory storage and computational time. In order to address both requirements, one can use a domain decomposition method implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from both the continuous and the discrete points of view, and we give some numerical results in a realistic, highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.

  12. Decomposition of metabolic network into functional modules based on the global connectivity structure of reaction graph.

    PubMed

    Ma, Hong-Wu; Zhao, Xue-Ming; Yuan, Ying-Jin; Zeng, An-Ping

    2004-08-12

    Metabolic networks are organized in a modular, hierarchical manner. Methods for a rational decomposition of the metabolic network into relatively independent functional subsets are essential to better understand the modularity and organization principles of a large-scale, genome-wide network. Network decomposition is also necessary for functional analysis of metabolism by pathway analysis methods, which are often hampered by the problem of combinatorial explosion due to the complexity of metabolic networks. Decomposition methods proposed in the literature are mainly based on the connection degree of metabolites. To obtain a more reasonable decomposition, the global connectivity structure of metabolic networks should be taken into account. In this work, we use a reaction graph representation of a metabolic network for the identification of its global connectivity structure and for decomposition. A bow-tie connectivity structure similar to that previously discovered for the metabolite graph is found to exist in the reaction graph as well. Based on this bow-tie structure, a new decomposition method is proposed, which uses a distance definition derived from the path length between two reactions. A hierarchical classification tree is first constructed from the distance matrix among the reactions in the giant strong component of the bow-tie structure. These reactions are then grouped into different subsets based on the hierarchical tree. Reactions in the IN and OUT subsets of the bow-tie structure are subsequently placed in the corresponding subsets according to a 'majority rule'. Compared with the decomposition methods proposed in the literature, ours is based on combined properties of the global network structure and local reaction connectivity rather than primarily on the connection degree of metabolites. The method is applied to decompose the metabolic network of Escherichia coli, and eleven subsets are obtained. More detailed investigations of the subsets show that reactions in the same subset are indeed functionally related. The rational decomposition of metabolic networks, and subsequent studies of the subsets, make the inherent organization and functionality of metabolic networks easier to understand at the modular level. http://genome.gbf.de/bioinformatics/
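
    The clustering step, a hierarchical tree built from reaction-to-reaction path lengths and then cut into subsets, can be sketched with scipy; the tiny four-reaction distance matrix below is hypothetical, not the E. coli network.

      # Hierarchical decomposition from a reaction distance matrix (toy data).
      import numpy as np
      from scipy.cluster.hierarchy import linkage, fcluster
      from scipy.spatial.distance import squareform

      D = np.array([[0, 1, 4, 4],      # symmetric path-length distances
                    [1, 0, 3, 4],      # between four reactions in the
                    [4, 3, 0, 1],      # giant strong component
                    [4, 4, 1, 0]], dtype=float)

      Z = linkage(squareform(D), method='average')    # hierarchical tree
      subsets = fcluster(Z, t=2, criterion='maxclust')
      print(subsets)                   # e.g. [1 1 2 2]: two functional subsets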

  13. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing the primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. It is also shown how Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.

  14. Fungal community structure of fallen pine and oak wood at different stages of decomposition in the Qinling Mountains, China.

    PubMed

    Yuan, Jie; Zheng, Xiaofeng; Cheng, Fei; Zhu, Xian; Hou, Lin; Li, Jingxia; Zhang, Shuoxin

    2017-10-24

    Historically, intense forest hazards have resulted in an increase in the quantity of fallen wood in the Qinling Mountains. Fallen wood has a decisive influence on the nutrient cycling, carbon budget and ecosystem biodiversity of forests, and fungi are essential for the decomposition of fallen wood. Moreover, decaying dead wood alters fungal communities. The development of high-throughput sequencing methods has facilitated the ongoing molecular investigation of forest ecosystems with a focus on fungal communities. In this study, fallen wood and its associated fungal communities were compared at different stages of decomposition to evaluate relative species abundance and species diversity. The physical and chemical factors that alter fungal communities were also compared by performing correspondence analysis according to host tree species across all stages of decomposition. Tree species were the major source of differences in fungal community diversity at all decomposition stages, and fungal communities achieved the highest levels of diversity at the intermediate and late decomposition stages. Interactions between various physical and chemical factors and fungal communities shared the same regulatory mechanisms, and there was no tree species-specific influence. Improving our knowledge of wood-inhabiting fungal communities is crucial for forest ecosystem conservation.

  15. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    NASA Astrophysics Data System (ADS)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits its adaptability, since a poorly chosen mode number causes modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to determine the mode number automatically based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant, and comparisons against VMD, EMD and EWT were conducted to evaluate its performance. The results indicate that the proposed method has strong adaptability and is robust to noise, and that it can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.

  16. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to get a final CS k-space reconstruction. We used both a standard CS and an edge and joint-sparsity guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge and joint-sparsity guided CS using two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA, using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065

  17. Effects of calibration methods on quantitative material decomposition in photon-counting spectral computed tomography using a maximum a posteriori estimator.

    PubMed

    Curtis, Tyler E; Roeder, Ryan K

    2017-10-01

    Advances in photon-counting detectors have enabled quantitative material decomposition using multi-energy or spectral computed tomography (CT). Supervised methods for material decomposition utilize an estimated attenuation for each material of interest at each photon energy level, which must be calibrated based upon calculated or measured values for known compositions. Measurements using a calibration phantom can advantageously account for system-specific noise, but the effect of calibration methods on the material basis matrix and subsequent quantitative material decomposition has not been experimentally investigated. Therefore, the objective of this study was to investigate the influence of the range and number of contrast agent concentrations within a modular calibration phantom on the accuracy of quantitative material decomposition in the image domain. Gadolinium was chosen as a model contrast agent in imaging phantoms, which also contained bone tissue and water as negative controls. The maximum gadolinium concentration (30, 60, and 90 mM) and total number of concentrations (2, 4, and 7) were independently varied to systematically investigate effects of the material basis matrix and scaling factor calibration on the quantitative (root mean squared error, RMSE) and spatial (sensitivity and specificity) accuracy of material decomposition. Images of calibration and sample phantoms were acquired using a commercially available photon-counting spectral micro-CT system with five energy bins selected to normalize photon counts and leverage the contrast agent k-edge. Material decomposition of gadolinium, calcium, and water was performed for each calibration method using a maximum a posteriori estimator. Both the quantitative and spatial accuracy of material decomposition were most improved by using an increased maximum gadolinium concentration (range) in the basis matrix calibration; the effects of using a greater number of concentrations were relatively small in magnitude by comparison. The material basis matrix calibration was more sensitive to changes in the calibration methods than the scaling factor calibration. The material basis matrix calibration significantly influenced both the quantitative and spatial accuracy of material decomposition, while the scaling factor calibration influenced quantitative but not spatial accuracy. Importantly, the median RMSE of material decomposition was as low as ~1.5 mM (~0.24 mg/mL gadolinium), which was similar in magnitude to that measured by optical spectroscopy on the same samples. The accuracy of quantitative material decomposition in photon-counting spectral CT was significantly influenced by calibration methods which must therefore be carefully considered for the intended diagnostic imaging application. © 2017 American Association of Physicists in Medicine.

  18. Weak characteristic information extraction from early fault of wind turbine generator gearbox

    NASA Astrophysics Data System (ADS)

    Xu, Xiaoli; Liu, Xiuli

    2017-09-01

    Because the degradation characteristic information is weak during early fault evolution in the gearbox of a wind turbine generator, traditional singular value decomposition (SVD)-based denoising may result in loss of useful information. A weak characteristic information extraction method based on μ-SVD and local mean decomposition (LMD) is developed to address this problem. The basic principle of the method is as follows: determine the denoising order based on the cumulative contribution rate, perform signal reconstruction, extract the noisy part of the signal and subject it to LMD and μ-SVD denoising, and obtain the denoised signal through superposition. Experimental results show that this method can significantly suppress signal noise, effectively extract the weak characteristic information of early faults, and facilitate early fault warning and dynamic predictive maintenance.

  19. Calculation of the Full Scattering Amplitude without Partial Wave Decomposition. 2; Inclusion of Exchange

    NASA Technical Reports Server (NTRS)

    Shertzer, Janine; Temkin, Aaron

    2004-01-01

    The development of a practical method of accurately calculating the full scattering amplitude, without making a partial wave decomposition is continued. The method is developed in the context of electron-hydrogen scattering, and here exchange is dealt with by considering e-H scattering in the static exchange approximation. The Schroedinger equation in this approximation can be simplified to a set of coupled integro-differential equations. The equations are solved numerically for the full scattering wave function. The scattering amplitude can most accurately be calculated from an integral expression for the amplitude; that integral can be formally simplified, and then evaluated using the numerically determined wave function. The results are essentially identical to converged partial wave results.

  20. Distance descending ordering method: An O(n) algorithm for inverting the mass matrix in simulation of macromolecules with long branches

    NASA Astrophysics Data System (ADS)

    Xu, Xiankun; Li, Peiwen

    2017-11-01

    Fixman's work in 1974 and the follow-up studies have developed a method that can factorize the inverse of the mass matrix into an arithmetic combination of three sparse matrices, one of which is positive definite and needs to be further factorized by using the Cholesky decomposition or similar methods. When the molecule under study has a serial chain structure, this method can achieve O(n) time complexity. However, for molecules with long branches, Cholesky decomposition of the corresponding positive definite matrix will introduce massive fill-in due to its nonzero structure. Although several methods can be used to reduce the fill-in, none of them could strictly guarantee zero fill-in for all molecules according to our tests, and thus O(n) time complexity cannot be obtained by using these traditional methods. In this paper we present a new method that guarantees no fill-in in the Cholesky decomposition, developed based on the correlations between the mass matrix and the geometrical structure of molecules. As a result, inverting the mass matrix retains O(n) time complexity whether or not the molecular structure has long branches.

  1. Decomposition and correction overlapping peaks of LIBS using an error compensation method combined with curve fitting.

    PubMed

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-09-01

    The laser induced breakdown spectroscopy (LIBS) technique is an effective method to detect material composition by obtaining the plasma emission spectrum. Overlapping peaks in the spectrum are a fundamental problem in the qualitative and quantitative analysis of LIBS. Based on a curve fitting method, this paper studies an error compensation method to achieve the decomposition and correction of overlapping peaks. The key step is that the fitting residual is fed back into the overlapping peaks and multiple curve fitting passes are performed to obtain a lower residual result. For the quantitative experiments on Cu, the Cu-Fe overlapping peaks in the range of 321-327 nm obtained from the LIBS spectra of five different concentrations of CuSO₄·5H₂O solution were decomposed and corrected using the curve fitting and error compensation methods. Compared with the curve fitting method alone, the error compensation reduced the fitting residual by about 18.12-32.64% and improved the correlation by about 0.86-1.82%. Then, the calibration curve between the intensity and concentration of Cu was established. The error compensation method exhibits a higher linear correlation between the intensity and concentration of Cu, and can be applied to the decomposition and correction of overlapping peaks in LIBS spectra.
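
    The curve-fitting stage alone is easy to reproduce; the sketch below decomposes two synthetic overlapping Gaussian peaks with scipy and reports the residual that the paper's error-compensation step would then feed back. Peak positions are loosely placed in the 321-327 nm window and are not the measured Cu/Fe line parameters.

      # Decompose two overlapping Gaussian peaks (synthetic LIBS-like data).
      import numpy as np
      from scipy.optimize import curve_fit

      def two_gauss(x, a1, c1, w1, a2, c2, w2):
          return (a1 * np.exp(-((x - c1) / w1) ** 2)
                  + a2 * np.exp(-((x - c2) / w2) ** 2))

      rng = np.random.default_rng(2)
      x = np.linspace(321, 327, 400)
      y = two_gauss(x, 1.0, 324.7, 0.3, 0.6, 325.5, 0.4)
      y += 0.02 * rng.standard_normal(x.size)        # measurement noise

      p0 = [1, 324.5, 0.5, 0.5, 325.5, 0.5]          # rough initial guesses
      popt, _ = curve_fit(two_gauss, x, y, p0=p0)
      residual = y - two_gauss(x, *popt)             # fed back in the paper
      print(popt, np.sum(residual ** 2))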

  2. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.

  3. Efficient analysis of three dimensional EUV mask induced imaging artifacts using the waveguide decomposition method

    NASA Astrophysics Data System (ADS)

    Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas

    2009-10-01

    This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask nearfield spectrum of 2D lines/spaces. Three-dimensional mask features such as 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers understand the causes of EUV-specific imaging artifacts and devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties for the future OPC process.

  4. Utilization of a balanced steady state free precession signal model for improved fat/water decomposition.

    PubMed

    Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F

    2016-03-01

    Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands will experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model into the decomposition process improved the performance of the fat/water decomposition, allowing rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time. © 2015 Wiley Periodicals, Inc.

  5. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Ning; Yang, Jianguo; Zhou, Rui; Liang, Caiping

    2016-04-01

    Knock is one of the major constraints to improve the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding a uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity in different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibilities of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from combustion chamber and the vibration signal measured from cylinder head are investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and vibration signal, even in initial stage of knock. Finally, by comparing the application results with those obtained by short-time Fourier transform (STFT), Wigner-Ville distribution (WVD) and discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated.

  6. A statistical forecast model using the time-scale decomposition technique to predict rainfall during flood period over the middle and lower reaches of the Yangtze River Valley

    NASA Astrophysics Data System (ADS)

    Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao

    2018-04-01

    In this paper, a statistical forecast model using the time-scale decomposition method is established for seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, the interannual component with periods shorter than 8 years, the interdecadal component with periods of 8 to 30 years, and the interdecadal component with periods longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variations of FPR. The hindcast of FPR for the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.

  7. Comparison of methods for extracting annual cycle with changing amplitude in climate science

    NASA Astrophysics Data System (ADS)

    Deng, Q.; Fu, Z.

    2017-12-01

    Changes in the annual cycle have attracted growing concern recently. The basic hypothesis regards the annual cycle as constant, and the climatological mean within a time period is usually used to depict it. This hypothesis obviously contradicts the fact that the annual cycle changes every year. Because a unified definition of the annual cycle is lacking, the approaches adopted to extract it vary and may lead to different results, so the precision and validity of these methods need to be examined. In this work, numerical experiments with a known monofrequent annual cycle are set up to evaluate five popular extraction methods: fitting sinusoids, complex demodulation, Ensemble Empirical Mode Decomposition (EEMD), Nonlinear Mode Decomposition (NMD) and Seasonal trend decomposition procedure based on loess (STL). Three different types of changing amplitude are generated: steady, linearly increasing and nonlinearly varying. Comparing the annual cycle extracted by these methods with the generated annual cycle, we find that (1) NMD performs best in depicting the annual cycle itself and its amplitude change; (2) the fitting sinusoids, complex demodulation and EEMD methods are more sensitive to the long-term memory (LTM) of the generated time series and thus overfit the annual cycle and produce overly noisy amplitudes, whereas the STL result underestimates the amplitude variation; and (3) all of the methods can capture the amplitude trend correctly on long time scales, but errors due to noise and LTM are common in some methods over short time scales.
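
    As an illustration of the simplest of these extractors, fitting sinusoids with a time-varying amplitude reduces to ordinary least squares when the amplitude is allowed to vary linearly; the synthetic daily series below is ours, not the study's data.

      # Least-squares sinusoid fit with linearly varying amplitude.
      import numpy as np

      t = np.arange(10 * 365) / 365.0               # time in years
      rng = np.random.default_rng(3)
      y = ((1.0 + 0.05 * t) * np.sin(2 * np.pi * t)
           + 0.5 * rng.standard_normal(t.size))

      w = 2 * np.pi
      X = np.column_stack([np.sin(w * t), np.cos(w * t),
                           t * np.sin(w * t), t * np.cos(w * t)])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      annual_cycle = X @ coef
      print(coef)   # ~[1, 0, 0.05, 0]: base amplitude and its linear trend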

  8. Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting

    NASA Astrophysics Data System (ADS)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    2017-07-01

    In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms are finding increasingly wide application in forecasting across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal the characteristic information of the signal with much accuracy as a result of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, is therefore presented to resolve the weaknesses of EMD by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed EEMD-MKNN model has high predictive precision for short-term forecasting. Moreover, we extend this methodology to two dimensions to forecast the closing price and high price of four stock indices (NAS, S&P500, DJI and STI) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than EMD-KNN, the KNN method and ARIMA.
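
    The KNN stage by itself (without the EEMD preprocessing) amounts to nearest-neighbour regression on lag-embedded vectors, as sketched below with scikit-learn on a made-up AR(1) series standing in for an IMF or price series.

      # One-step-ahead KNN forecast on lagged values (toy series).
      import numpy as np
      from sklearn.neighbors import KNeighborsRegressor

      rng = np.random.default_rng(4)
      s = np.zeros(500)
      for i in range(1, 500):                   # simple AR(1) toy series
          s[i] = 0.9 * s[i - 1] + rng.standard_normal()

      lags = 5
      X = np.array([s[i:i + lags] for i in range(len(s) - lags)])
      y = s[lags:]

      model = KNeighborsRegressor(n_neighbors=5).fit(X[:-50], y[:-50])
      pred = model.predict(X[-50:])             # out-of-sample forecasts
      print(np.mean((pred - y[-50:]) ** 2))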

  9. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.

  10. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth derivatives (M2); (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
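
    A compact numpy sketch of the two snapshot choices is given below; the decaying travelling wave is a made-up test system, and the derivative snapshots for M2 are obtained by finite differences rather than from the model's right-hand side.

      # Snapshot POD: M1 (solution snapshots) vs M2 (plus derivatives).
      import numpy as np

      x = np.linspace(0, 1, 200)
      t = np.linspace(0, 1, 21)
      snaps = np.array([np.exp(-ti) * np.sin(2 * np.pi * (x - ti))
                        for ti in t]).T
      dsnaps = np.gradient(snaps, t[1] - t[0], axis=1)   # derivative snapshots

      def pod_basis(S, r):
          U, _, _ = np.linalg.svd(S, full_matrices=False)
          return U[:, :r]

      U1 = pod_basis(snaps, 4)                           # method M1
      U2 = pod_basis(np.hstack([snaps, dsnaps]), 4)      # method M2
      for name, U in (("M1", U1), ("M2", U2)):
          err = (np.linalg.norm(snaps - U @ (U.T @ snaps))
                 / np.linalg.norm(snaps))
          print(name, f"relative projection error = {err:.2e}")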

  12. Efficient material decomposition method for dual-energy X-ray cargo inspection system

    NASA Astrophysics Data System (ADS)

    Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong

    2018-03-01

    Dual-energy X-ray inspection systems are widely used today because they provide both X-ray attenuation contrast of the imaged object and its material information. Material decomposition capability allows higher detection sensitivity for potential targets, for example purposely loaded impurities in agricultural product inspections and threats in security scans. Dual-energy X-ray transmission data can be transformed into two basis material thickness data, and the transformation accuracy relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated using a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.
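
    The basis-material transform itself is a small linear inversion: with calibrated attenuation coefficients for two basis materials at the two beam qualities, two log-attenuation measurements determine two thicknesses. The coefficients below are hypothetical placeholders, not calibrated 6/9 MV values.

      # Two-material decomposition from dual-energy log attenuation (toy).
      import numpy as np

      mu = np.array([[0.40, 0.20],     # rows: low/high energy
                     [0.25, 0.18]])    # cols: material 1, material 2 (1/cm)

      true_thickness = np.array([2.0, 5.0])     # cm
      log_atten = mu @ true_thickness           # simulated -ln(I/I0) data

      recovered = np.linalg.solve(mu, log_atten)
      print(recovered)                          # -> [2. 5.]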

  13. Decomposition and extraction: a new framework for visual classification.

    PubMed

    Fang, Yuqiang; Chen, Qiang; Sun, Lin; Dai, Bin; Yan, Shuicheng

    2014-08-01

    In this paper, we present a novel framework for visual classification based on hierarchical image decomposition and hybrid midlevel feature extraction. Unlike most midlevel feature learning methods, which focus on the process of coding or pooling, we emphasize that the mechanism of image composition also strongly influences feature extraction. To effectively explore the image content for feature extraction, we model a multiplicity feature representation mechanism through meaningful hierarchical image decomposition followed by a fusion step. In particular, we first propose a new hierarchical image decomposition approach in which each image is decomposed into a series of hierarchical semantic components, i.e., the structure and texture images. Then, different feature extraction schemes can be adopted to match the decomposed structure and texture processes in a dissociative manner. Here, two schemes are explored to produce property-related feature representations: one is based on a single-stage network over hand-crafted features, and the other is based on a multistage network that can learn features from raw pixels automatically. Finally, these multiple midlevel features are incorporated by solving a multiple kernel learning task. Extensive experiments are conducted on several challenging data sets for visual classification, and the experimental results demonstrate the effectiveness of the proposed method.

  14. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
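
    The "externally applied classical seasonal decomposition" favored in finding (c) above can be sketched with statsmodels: decompose the monthly series, then forecast by extending the trend and repeating the seasonal pattern. The synthetic series and the deliberately simple trend extrapolation are illustrative assumptions, not the paper's exact protocol.

```python
# Classical seasonal decomposition applied before forecasting, as a
# minimal sketch; 40 years of synthetic monthly data stand in for the
# temperature series.
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

rng = np.random.default_rng(1)
months = np.arange(480)                          # 40 years, monthly
series = (10 + 0.01 * months                     # slow trend
          + 5 * np.sin(2 * np.pi * months / 12)  # annual cycle
          + rng.normal(0, 1, months.size))

dec = seasonal_decompose(series, model="additive", period=12)

# Forecast 48 months ahead: repeat the seasonal pattern and extend the
# trend linearly from its last valid values (a deliberately simple choice).
season = dec.seasonal[:12]
trend = dec.trend[~np.isnan(dec.trend)]
slope = (trend[-1] - trend[-13]) / 12
h = np.arange(1, 49)
forecast = trend[-1] + slope * h + season[(months[-1] + h) % 12]
print(forecast[:6])
```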

  15. Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.

    PubMed

    Cockle, Diane L; Bell, Lynne S

    2015-08-01

    Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community has long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden, or gone un-noticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study has proposed that two simple formulae, based on the level of decomposition, humidity and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that Formula I estimations for bodies exposed to warm temperatures consistently overestimated the known PMI by a large and inconsistent margin, while for bodies exposed to cold and freezing temperatures (less than 4°C) the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. Because the cases used in this study are retrospective, some of the data needed for Formula II were not available. The 4.6 value used in Formula II to represent the standard ratio by which burial decelerates the rate of decomposition was examined. The average time taken to achieve each stage of decomposition, both on and under the surface, was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases staying in a state of putrefaction for a prolonged period of time. It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in one region to any other region. These results also suggest that there are other variables, apart from temperature and humidity, that may impact the rate of human decomposition. These variables, or complexes of variables, are considered regionally specific. Neither of the Universal Formulae performed well, and our results do not support the proposition of universality for PMI estimation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  16. New spectrophotometric assay for pilocarpine.

    PubMed

    El-Masry, S; Soliman, R

    1980-07-01

    A quick method for the determination of pilocarpine in eye drops in the presence of decomposition products is described. The method involves complexation of the alkaloid with bromocresol purple at pH 6. After treatment with 0.1N NaOH, the liberated dye is measured at 580 nm. The method has a relative standard deviation of 1.99%, and has been successfully applied to the analysis of 2 batches of pilocarpine eye drops. The recommended method was also used to monitor the stability of a pilocarpine nitrate solution in 0.05N NaOH at 65 degrees C. The BPC method failed to detect any significant decomposition after 2 h incubation, but the recommended method revealed 87.5% decomposition.

  17. Microbial genomics, transcriptomics and proteomics: new discoveries in decomposition research using complementary methods.

    PubMed

    Baldrian, Petr; López-Mondéjar, Rubén

    2014-02-01

    Molecular methods for the analysis of biomolecules have undergone rapid technological development in the last decade. The advent of next-generation sequencing methods and improvements in instrumental resolution have enabled the analysis of complex transcriptome, proteome and metabolome data, as well as detailed annotation of microbial genomes. The mechanisms of decomposition by model fungi have been described in unprecedented detail by the combination of genome sequencing, transcriptomics and proteomics. The increasing number of available genomes for fungi and bacteria shows that the genetic potential for decomposition of organic matter is widespread among taxonomically diverse microbial taxa, while expression studies document the importance of the regulation of expression for decomposition efficiency. Importantly, high-throughput methods of nucleic acid analysis applied to metagenomes and metatranscriptomes reveal the high diversity and taxonomic composition of decomposer communities in natural habitats. Today, the metaproteomics of natural habitats is of growing interest. In combination with advanced analytical techniques to explore the products of decomposition and the accumulation of information on the genomes of environmentally relevant microorganisms, advanced methods in microbial ecophysiology should increase our understanding of the complex processes of organic matter transformation.

  18. Application of the wavelet packet transform to vibration signals for surface roughness monitoring in CNC turning operations

    NASA Astrophysics Data System (ADS)

    García Plaza, E.; Núñez López, P. J.

    2018-01-01

    The wavelet packet transform method decomposes a time signal into several independent time-frequency signals called packets. This enables the temporal localization of transient events occurring during the monitoring of cutting processes, which is advantageous in condition monitoring and fault diagnosis. This paper proposes the monitoring of surface roughness using a single low-cost sensor that is easily implemented in numerical control machine tools in order to make on-line decisions on workpiece surface finish quality. Packet feature extraction from vibration signals was applied to correlate the sensor signals with measured surface roughness. For the successful application of the WPT method, the mother wavelet, packet decomposition level, and appropriate packet selection methods should be considered, but these aspects are poorly understood in the literature. In this novel contribution, forty mother wavelets, the optimal decomposition level, and packet reduction methods were analysed, and the effective frequency range providing the best packet feature extraction for monitoring surface finish was identified. The results show that the mother wavelet biorthogonal 4.4 at decomposition level L3, with the fusion of the orthogonal vibration components (ax + ay + az), was the best option for correlating the vibration signal with surface roughness. The best packets were found in the medium-high frequency DDA (6250-9375 Hz) and high frequency ADA (9375-12500 Hz) ranges, and the feed acceleration component ay was the primary source of information. The packet reduction methods discarded packets with features relevant to the signal, leading to poor results for the prediction of surface roughness. WPT is a robust vibration signal processing method for the monitoring of surface roughness using a single sensor without other information sources; satisfactory results were obtained in comparison with other processing methods, at a low computational cost.
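
    A hedged sketch of the packet feature extraction step with PyWavelets follows, using the biorthogonal 4.4 mother wavelet and level-3 decomposition named above; the synthetic vibration signal, the sampling rate, and the energy feature are illustrative assumptions.

```python
# Level-3 wavelet packet decomposition of a vibration-like signal and
# per-packet energy features, as a minimal sketch.
import numpy as np
import pywt

fs = 25000                                    # assumed sampling rate, Hz
t = np.arange(0, 0.2, 1 / fs)
signal = (np.sin(2 * np.pi * 8000 * t)        # high-frequency content
          + 0.5 * np.sin(2 * np.pi * 1000 * t)
          + 0.1 * np.random.default_rng(2).normal(size=t.size))

wp = pywt.WaveletPacket(data=signal, wavelet="bior4.4",
                        mode="symmetric", maxlevel=3)

# Energy of each level-3 packet in natural frequency order; paths such as
# 'dda' and 'ada' correspond to sub-bands of [0, fs/2].
for node in wp.get_level(3, order="freq"):
    energy = float(np.sum(node.data ** 2))
    print(node.path, round(energy, 2))
```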

  19. Fluorescence background removal method for biological Raman spectroscopy based on empirical mode decomposition.

    PubMed

    Leon-Bejarano, Maritza; Dorantes-Mendez, Guadalupe; Ramirez-Elias, Miguel; Mendez, Martin O; Alba, Alfonso; Rodriguez-Leyva, Ildefonso; Jimenez, M

    2016-08-01

    Raman spectroscopy of biological tissue presents a fluorescence background, an undesirable effect that generates false Raman intensities. This paper proposes the application of the Empirical Mode Decomposition (EMD) method to baseline correction. EMD is a suitable approach since it is an adaptive signal processing method for nonlinear and non-stationary signal analysis that, unlike polynomial methods, does not require parameter selection. EMD performance was assessed through synthetic Raman spectra with different signal-to-noise ratios (SNR). The correlation coefficient between the synthetic Raman spectra and those recovered after EMD denoising was higher than 0.92. Additionally, twenty Raman spectra from skin were used to evaluate EMD performance, and the results were compared with the Vancouver Raman algorithm (VRA). The comparison resulted in a mean square error (MSE) of 0.001554. The high correlation coefficient using synthetic spectra and the low MSE in the comparison between EMD and VRA suggest that EMD could be an effective method to remove fluorescence background in biological Raman spectra.
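
    The baseline-correction idea above can be sketched as follows: decompose the spectrum into intrinsic mode functions and treat the slowest components as the fluorescence background. This assumes the third-party PyEMD package (installed as EMD-signal); the synthetic spectrum and the choice of how many slow components form the baseline are illustrative, not the paper's exact procedure.

```python
# EMD-based fluorescence background removal, as a minimal sketch.
import numpy as np
from PyEMD import EMD   # third-party package, pip install EMD-signal

x = np.linspace(0, 1, 1000)                        # pseudo wavenumber axis
peaks = np.exp(-0.5 * ((x - 0.3) / 0.01) ** 2) \
      + 0.6 * np.exp(-0.5 * ((x - 0.7) / 0.008) ** 2)
background = 2 * np.exp(-((x - 0.5) ** 2))         # broad fluorescence
spectrum = peaks + background \
         + 0.01 * np.random.default_rng(3).normal(size=x.size)

imfs = EMD().emd(spectrum)        # rows: fast IMFs first, slow trend last

# Reconstruct the baseline from the slowest components (last two rows,
# an assumed choice) and subtract it from the measured spectrum.
baseline = imfs[-2:].sum(axis=0)
corrected = spectrum - baseline
print(imfs.shape, float(corrected.max()))
```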

  20. Scalable Parallel Computation for Extended MHD Modeling of Fusion Plasmas

    NASA Astrophysics Data System (ADS)

    Glasser, Alan H.

    2008-11-01

    Parallel solution of a linear system is scalable if simultaneously doubling the number of dependent variables and the number of processors results in little or no increase in the computation time to solution. Two approaches have this property for parabolic systems: multigrid and domain decomposition. Since extended MHD is primarily a hyperbolic rather than a parabolic system, additional steps must be taken to parabolize the linear system to be solved by such a method. Such physics-based preconditioning (PBP) methods have been pioneered by Chacón, using finite volumes for spatial discretization, multigrid for solution of the preconditioning equations, and matrix-free Newton-Krylov methods for the accurate solution of the full nonlinear preconditioned equations. The work described here is an extension of these methods using high-order spectral element methods and FETI-DP domain decomposition. Application of PBP to a flux-source representation of the physics equations is discussed. The resulting scalability will be demonstrated for simple waves and for ideal and Hall MHD waves.

  1. Thermal Analysis of porous fin with uniform magnetic field using Adomian decomposition Sumudu transform method

    NASA Astrophysics Data System (ADS)

    Patel, Trushit; Meher, Ramakanta

    2017-09-01

    In this paper, we consider a Rosseland approximation for radiative heat transfer, Darcy's model to simulate the flow in porous media, and a finite-length fin with an insulated tip to study the thermal performance and to predict the temperature distribution on a vertical isothermal surface. The energy balance equations of the porous fin with several temperature-dependent properties are solved using the Adomian Decomposition Sumudu Transform Method (ADSTM). The effects of various thermophysical parameters, such as the convection-conduction parameter, surface-ambient radiation parameter, Rayleigh number and Hartmann number, are determined. The results obtained from the ADSTM are further compared with the fourth-fifth order Runge-Kutta-Fehlberg method and the Least Square Method (LSM) (Hoshyar et al. 2016) to determine the accuracy of the solution.
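
    For readers unfamiliar with the Adomian machinery underlying the ADSTM, the generic Adomian decomposition scheme can be written as follows; this is the textbook form with assumed notation, not the paper's specific Sumudu-transform derivation.

```latex
% Generic Adomian decomposition for Lu + Ru + Nu = g, with L an easily
% invertible linear operator, R the remaining linear part, and N the
% nonlinearity; notation is assumed, not taken from the paper.
\begin{align}
u &= \sum_{n=0}^{\infty} u_n, \qquad N u = \sum_{n=0}^{\infty} A_n, \\
A_n &= \frac{1}{n!} \frac{\mathrm{d}^n}{\mathrm{d}\lambda^n}
      \left[ N\!\left( \sum_{k=0}^{\infty} \lambda^k u_k \right) \right]_{\lambda=0}, \\
u_0 &= L^{-1} g, \qquad u_{n+1} = -L^{-1}\left( R\, u_n + A_n \right),
\end{align}
% and the truncated sum u ~ u_0 + u_1 + ... + u_N serves as the
% approximate solution.
```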

  2. Retrieval of the non-depolarizing components of depolarizing Mueller matrices by using symmetry conditions and least squares minimization

    NASA Astrophysics Data System (ADS)

    Kuntman, Ertan; Canillas, Adolf; Arteaga, Oriol

    2017-11-01

    Experimental Mueller matrices contain a certain amount of uncertainty in their elements, and these uncertainties can create difficulties for decomposition methods based on analytic solutions. In an earlier paper [1], we proposed a decomposition method for depolarizing Mueller matrices using certain symmetry conditions. However, because of the experimental error, that method creates over-determined systems with non-unique solutions. Here we propose to use a least squares minimization approach in order to improve the accuracy of our results. In this method, we take into account the number of independent parameters of the corresponding symmetry and the rank constraints on the component matrices to decide on our fitting model. This approach is illustrated with experimental Mueller matrices that include material media with different Mueller symmetries.

  3. Intelligent diagnosis of short hydraulic signal based on improved EEMD and SVM with few low-dimensional training samples

    NASA Astrophysics Data System (ADS)

    Zhang, Meijun; Tang, Jian; Zhang, Xiaoming; Zhang, Jiaojiao

    2016-03-01

    The high classification accuracy of an intelligent diagnosis method often requires a large number of training samples with high-dimensional eigenvectors, and the characteristics of the signal must be extracted accurately. Although the existing EMD (empirical mode decomposition) and EEMD (ensemble empirical mode decomposition) are suitable for processing non-stationary and non-linear signals, their decomposition accuracy becomes very poor when a short signal, such as a hydraulic impact signal, is concerned. An improved EEMD is proposed specifically for short hydraulic impact signals. The improvements of this new EEMD are mainly reflected in four aspects: self-adaptive de-noising based on EEMD, signal extension based on SVM (support vector machine), extremum center fitting based on cubic spline interpolation, and pseudo-component exclusion based on cross-correlation analysis. After the energy eigenvector is extracted from the result of the improved EEMD, fault pattern recognition based on SVM with a small number of low-dimensional training samples is studied. Finally, the diagnosis ability of the improved EEMD+SVM method is compared with that of the EEMD+SVM and EMD+SVM methods; its diagnosis accuracy is distinctly higher than that of the other two methods, whether the dimension of the eigenvectors is low or high. The improved EEMD is well suited to the decomposition of short signals, such as hydraulic impact signals, and its combination with SVM has high ability for the diagnosis of hydraulic impact faults.

  4. A Subspace Approach to the Structural Decomposition and Identification of Ankle Joint Dynamic Stiffness.

    PubMed

    Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E

    2017-06-01

    The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque into intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.

  5. Efficient implementation of a 3-dimensional ADI method on the iPSC/860

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van der Wijngaart, R.F.

    1993-12-31

    A comparison is made between several domain decomposition strategies for the solution of three-dimensional partial differential equations on a MIMD distributed memory parallel computer. The grids used are structured, and the numerical algorithm is ADI. Important implementation issues regarding load balancing, storage requirements, network latency, and overlap of computations and communications are discussed. Results of the solution of the three-dimensional heat equation on the Intel iPSC/860 are presented for the three most viable methods. It is found that the Bruno-Cappello decomposition delivers optimal computational speed through an almost complete elimination of processor idle time, while providing good memory efficiency.

  6. Application of advanced multidisciplinary analysis and optimization methods to vehicle design synthesis

    NASA Technical Reports Server (NTRS)

    Consoli, Robert David; Sobieszczanski-Sobieski, Jaroslaw

    1990-01-01

    Advanced multidisciplinary analysis and optimization methods, namely system sensitivity analysis and non-hierarchical system decomposition, are applied to reduce the cost and improve the visibility of an automated vehicle design synthesis process. This process is inherently complex due to the large number of functional disciplines and associated interdisciplinary couplings. Recent developments in system sensitivity analysis as applied to complex non-hierarchic multidisciplinary design optimization problems enable the decomposition of these complex interactions into sub-processes that can be evaluated in parallel. The application of these techniques results in significant cost, accuracy, and visibility benefits for the entire design synthesis process.

  7. Towards accurate modeling of noncovalent interactions for protein rigidity analysis.

    PubMed

    Fox, Naomi; Streinu, Ileana

    2013-01-01

    Protein rigidity analysis is an efficient computational method for extracting flexibility information from static, X-ray crystallography protein data. Atoms and bonds are modeled as a mechanical structure and analyzed with a fast graph-based algorithm, producing a decomposition of the flexible molecule into interconnected rigid clusters. The result depends critically on noncovalent atomic interactions, primarily on how hydrogen bonds and hydrophobic interactions are computed and modeled. Ongoing research points to the stringent need for benchmarking rigidity analysis software systems, towards the goal of increasing their accuracy and validating their results, both against each other and against biologically relevant (functional) parameters. We propose two new methods for modeling hydrogen bonds and hydrophobic interactions that more accurately reflect a mechanical model, without being computationally more intensive. We evaluate them using a novel scoring method, based on the B-cubed score from the information retrieval literature, which measures how well two cluster decompositions match. To evaluate the modeling accuracy of KINARI, our pebble-game rigidity analysis system, we use a benchmark data set of 20 proteins, each with multiple distinct conformations deposited in the Protein Data Bank. Cluster decompositions for them were previously determined with the RigidFinder method from Gerstein's lab and validated against experimental data. When KINARI's default tuning parameters are used, an improvement of the B-cubed score over a crude baseline is observed in 30% of this data. With our new modeling options, improvements were observed in over 70% of the proteins in this data set. We investigate the sensitivity of the cluster decomposition score with case studies on pyruvate phosphate dikinase and calmodulin. To substantially improve the accuracy of protein rigidity analysis systems, thorough benchmarking must be performed on all current systems and future extensions. We have measured the gain in performance by comparing different modeling methods for noncovalent interactions. We showed that new criteria for modeling hydrogen bonds and hydrophobic interactions can significantly improve the results. The two new methods proposed here have been implemented and made publicly available in the current version of KINARI (v1.3), together with the benchmarking tools, which can be downloaded from our software's website, http://kinari.cs.umass.edu.

  8. A projection method for low speed flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colella, P.; Pao, K.

    The authors propose a decomposition applicable to low speed, inviscid flows at all Mach numbers less than 1. By using the Hodge decomposition, they may write the velocity field as the sum of a divergence-free vector field and the gradient of a scalar function. Evolution equations for these parts are presented. A numerical procedure based on this decomposition is designed, using projection methods for solving the incompressible variables and a backward-Euler method for solving the potential variables. Numerical experiments are included to illustrate various aspects of the algorithm.
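
    The decomposition named above can be written explicitly; the following is the common form of the Hodge (Helmholtz-Hodge) splitting with assumed notation, not the authors' exact formulation.

```latex
% Helmholtz-Hodge splitting of a velocity field u into a divergence-free
% part u_d and a gradient part; phi is recovered from a Poisson problem.
\begin{align}
\mathbf{u} &= \mathbf{u}_d + \nabla \phi, \qquad \nabla \cdot \mathbf{u}_d = 0, \\
\nabla^{2} \phi &= \nabla \cdot \mathbf{u}, \qquad
\mathbf{u}_d = \mathbf{u} - \nabla \phi .
\end{align}
```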

  9. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.

  10. Coupling experimental data and a prototype model to probe the physical and chemical processes of 2,4-dinitroimidazole solid-phase thermal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrens, R.; Minier, L.; Bulusu, S.

    1998-12-31

    The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated with the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.

  11. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches offer a large design space with few variables. The parametric methods in common use are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-Spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their capabilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is limited, and the most popular one is the Free-Form Deformation method. Methods extended from two-dimensional parametric methods have promising prospects in aircraft modeling. Since different parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.

  12. Tensor-based classification of an auditory mobile BCI without a subject-specific calibration phase

    NASA Astrophysics Data System (ADS)

    Zink, Rob; Hunyadi, Borbála; Van Huffel, Sabine; De Vos, Maarten

    2016-04-01

    Objective. One of the major drawbacks in EEG brain-computer interfaces (BCI) is the need for subject-specific training of the classifier. By removing the need for a supervised calibration phase, new users could potentially explore a BCI faster. In this work we aim to remove this subject-specific calibration phase and allow direct classification. Approach. We explore canonical polyadic decompositions and block term decompositions of the EEG. These methods exploit structure in higher dimensional data arrays called tensors. The BCI tensors are constructed by concatenating ERP templates from other subjects to a target and non-target trial and the inherent structure guides a decomposition that allows accurate classification. We illustrate the new method on data from a three-class auditory oddball paradigm. Main results. The presented approach leads to a fast and intuitive classification with accuracies competitive with a supervised and cross-validated LDA approach. Significance. The described methods are a promising new way of classifying BCI data with a forthright link to the original P300 ERP signal over the conventional and widely used supervised approaches.

  13. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented as a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, expressed in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risk to the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method is explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.
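
    The PCA step described above amounts to a singular value decomposition of a centered component-by-failure-mode matrix. The sketch below illustrates this on a tiny invented incidence matrix; the component names, failure modes, and dimensions are all hypothetical.

```python
# PCA of a component-by-failure-mode incidence matrix via SVD.
import numpy as np

components = ["rotor", "gearbox", "hydraulics", "electrical"]
M = np.array([[1, 1, 0, 0],      # rotor: fatigue, wear
              [1, 1, 0, 0],      # gearbox: fatigue, wear
              [0, 1, 1, 0],      # hydraulics: wear, leak
              [0, 0, 0, 1.0]])   # electrical: short

Mc = M - M.mean(axis=0)                  # center the failure-mode columns
U, s, Vt = np.linalg.svd(Mc, full_matrices=False)

k = 2                                    # low-dimensional representation
scores = U[:, :k] * s[:k]                # components in PC coordinates
print("explained variance ratio:", (s[:k]**2 / (s**2).sum()).round(2))
for name, sc in zip(components, scores.round(2)):
    print(name, sc)
```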

  14. Measuring and decomposing socioeconomic inequality in healthcare delivery: A microsimulation approach with application to the Palestinian conflict-affected fragile setting.

    PubMed

    Abu-Zaineh, Mohammad; Mataria, Awad; Moatti, Jean-Paul; Ventelou, Bruno

    2011-01-01

    Socioeconomic-related inequalities in healthcare delivery have been extensively studied in developed countries, using standard linear models of decomposition. This paper seeks to assess equity in healthcare delivery in the particular context of the occupied Palestinian territory: the West Bank and the Gaza Strip, using a new method of decomposition based on microsimulations. Besides avoiding the 'unavoidable price' of the linearity restriction imposed by the standard methods of decomposition, the microsimulation-based decomposition makes it possible to circumvent the potentially contentious role of heterogeneity in behaviours and to better disentangle the various sources driving inequality in healthcare utilisation. Results suggest that the worse-off do have a disproportionately greater need for all levels of care. However, with the exception of primary-level care, utilisation of all levels of care appears to be significantly higher for the better-off. The microsimulation method has made it possible to identify the contributions of the factors driving such pro-rich patterns. While much of the inequality in utilisation appears to be caused by the prevailing socioeconomic inequalities, detailed analysis attributes a non-trivial part (circa 30% of inequalities) to heterogeneity in healthcare-seeking behaviours across socioeconomic groups of the population. Several policy recommendations for improving equity in healthcare delivery in the occupied Palestinian territory are proposed. Copyright © 2010 Elsevier Ltd. All rights reserved.

  15. Economic Inequality in Presenting Vision in Shahroud, Iran: Two Decomposition Methods

    PubMed Central

    Mansouri, Asieh; Emamian, Mohammad Hassan; Zeraati, Hojjat; Hashemi, Hasan; Fotouhi, Akbar

    2018-01-01

    Background: Visual acuity, like many other health-related outcomes, is not equally distributed across socio-economic groups. We conducted this study to estimate and decompose economic inequality in presenting visual acuity using two methods and to compare their results in a population aged 40-64 years in Shahroud, Iran. Methods: The data of 5188 participants in the first phase of the Shahroud Cohort Eye Study, performed in 2009, were used for this study. Our outcome variable was presenting visual acuity (PVA), measured using LogMAR (logarithm of the minimum angle of resolution). The living standard variable used for the estimation of inequality was economic status, constructed by principal component analysis on home assets. The inequality indices were the concentration index and the gap between low and high economic groups. We decomposed these indices by the concentration index and Blinder-Oaxaca decomposition approaches, respectively, and compared the results. Results: The concentration index of PVA was -0.245 (95% CI: -0.278, -0.212). The PVA gap between groups with a high and low economic status was 0.0705, in favor of the high economic group. Education, economic status, and age were the most important contributors to inequality in both the concentration index and Blinder-Oaxaca decompositions. The percent contributions of these three factors in the concentration index and Blinder-Oaxaca decompositions were 41.1% vs. 43.4%, 25.4% vs. 19.1% and 15.2% vs. 16.2%, respectively. Other factors, including gender, marital status, employment status and diabetes, had minor contributions. Conclusion: This study showed that individuals with poorer visual acuity were more concentrated among people with a lower economic status. The main contributors to this inequality were similar in the concentration index and Blinder-Oaxaca decompositions. It can therefore be concluded that setting appropriate interventions to promote literacy and income in people with low economic status, formulating policies to address economic problems in the elderly, and paying more attention to their vision problems can help to alleviate economic inequality in visual acuity. PMID:29325403
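
    Both abstracts above rely on the concentration index. A minimal sketch of its standard covariance formula, C = 2 cov(h, r) / mean(h), where h is the health variable and r the fractional rank by economic status, follows; the toy data are invented for illustration.

```python
# Concentration index of a health variable against economic rank.
import numpy as np

rng = np.random.default_rng(4)
n = 500
wealth = rng.lognormal(0, 1, n)              # economic status proxy
# Worse (higher) LogMAR visual acuity concentrated among the poor:
logmar = 0.3 - 0.05 * np.log(wealth) + rng.normal(0, 0.05, n)

order = np.argsort(wealth)                   # rank poorest -> richest
rank = np.empty(n)
rank[order] = (np.arange(1, n + 1) - 0.5) / n   # fractional rank in (0, 1)

C = 2 * np.cov(logmar, rank, bias=True)[0, 1] / logmar.mean()
print(round(C, 3))   # negative: the burden is concentrated among the poor
```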

  16. Graphical Methods for Quantifying Macromolecules through Bright Field Imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, Hang; DeFilippis, Rosa Anna; Tlsty, Thea D.

    Bright field imaging of biological samples stained with antibodies and/or special stains provides a rapid protocol for visualizing various macromolecules. However, this method of sample staining and imaging is rarely employed for direct quantitative analysis due to variations in sample fixations, ambiguities introduced by color composition, and the limited dynamic range of imaging instruments. We demonstrate that, through the decomposition of color signals, staining can be scored on a cell-by-cell basis. We have applied our method to fibroblasts grown from histologically normal breast tissue biopsies obtained from two distinct populations. Initially, nuclear regions are segmented through conversion of color images into gray scale and detection of dark elliptic features. Subsequently, the strength of staining is quantified by a color decomposition model that is optimized by a graph cut algorithm. In rare cases where the nuclear signal is significantly altered as a result of sample preparation, nuclear segmentation can be validated and corrected. Finally, segmented stained patterns are associated with each nuclear region following region-based tessellation. Compared to classical non-negative matrix factorization, the proposed method (i) improves color decomposition, (ii) has better noise immunity, (iii) is more invariant to initial conditions, and (iv) has superior computing performance.

  17. Application of microscopy technology in thermo-catalytic methane decomposition to hydrogen

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mei, Irene Lock Sow, E-mail: irene.sowmei@gmail.com; Lock, S. S. M., E-mail: serenelock168@gmail.com; Abdullah, Bawadi, E-mail: bawadi-abdullah@petronas.com.my

    2015-07-22

    Hydrogen production from the direct thermo-catalytic decomposition of methane is a promising alternative for clean fuel production because it produces pure hydrogen without any CO{sub x} emissions. However, thermal decomposition of methane can hardly be of any practical and empirical interest in the industry unless highly efficient and effective catalysts, in terms of both specific activity and operational lifetime, have been developed. In this work, bimetallic Ni-Pd catalysts on a gamma-alumina support have been developed for the methane cracking process by using co-precipitation and incipient wetness impregnation methods. The calcined catalysts were characterized to determine their morphologies and physico-chemical properties by using the Brunauer-Emmett-Teller method, Field Emission Scanning Electron Microscopy, Energy-dispersive X-ray spectroscopy and Thermogravimetric Analysis. The results suggest that the catalyst prepared by the co-precipitation method exhibits homogeneous morphology, higher surface area, uniform nickel and palladium dispersion and higher thermal stability compared with the catalyst prepared by the wet impregnation method. These characteristics are significant for avoiding deactivation of the catalysts due to sintering and carbon deposition during the methane cracking process.

  18. Modal analysis of 2-D sedimentary basin from frequency domain decomposition of ambient vibration array recordings

    NASA Astrophysics Data System (ADS)

    Poggi, Valerio; Ermert, Laura; Burjanek, Jan; Michel, Clotaire; Fäh, Donat

    2015-01-01

    Frequency domain decomposition (FDD) is a well-established spectral technique used in civil engineering to analyse and monitor the modal response of buildings and structures. The method is based on singular value decomposition of the cross-power spectral density matrix from simultaneous array recordings of ambient vibrations. The method is advantageous because it retrieves not only the resonance frequencies of the investigated structure but also the corresponding modal shapes, without the need for an absolute reference. This is an important piece of information, which can be used to validate the consistency of numerical models and analytical solutions. We apply this approach using advanced signal processing to evaluate the resonance characteristics of 2-D Alpine sedimentary valleys. In this study, we present the results obtained at Martigny, in the Rhône valley (Switzerland). For the analysis, we use 2 hr of ambient vibration recordings from a linear seismic array deployed perpendicularly to the valley axis. Only the horizontal-axial direction (SH) of the ground motion is considered. Using the FDD method, six separate resonant frequencies are retrieved together with their corresponding modal shapes. We compare the mode shapes with results from classical standard spectral ratios and numerical simulations of ambient vibration recordings.
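
    A hedged sketch of the FDD workflow described above follows: estimate the cross-power spectral density matrix of the array recordings, take its singular value decomposition frequency by frequency, and read the resonance and operational mode shape off the first singular value and vector. The two-channel synthetic array with a single 2 Hz mode is an illustrative assumption.

```python
# Frequency domain decomposition on a two-channel synthetic array.
import numpy as np
from scipy.signal import csd

fs, T = 100, 600
t = np.arange(0, T, 1 / fs)
rng = np.random.default_rng(5)
mode = np.sin(2 * np.pi * 2.0 * t + rng.uniform(0, 2 * np.pi))
shape = np.array([1.0, 0.6])                      # assumed mode shape
data = np.outer(shape, mode) + 0.5 * rng.normal(size=(2, t.size))

# Cross-power spectral density matrix G(f), channel by channel.
nseg = 1024
f, _ = csd(data[0], data[0], fs=fs, nperseg=nseg)
G = np.zeros((f.size, 2, 2), dtype=complex)
for i in range(2):
    for j in range(2):
        G[:, i, j] = csd(data[i], data[j], fs=fs, nperseg=nseg)[1]

# The first singular value peaks at the resonance; its singular vector
# is the operational mode shape.
s1 = np.array([np.linalg.svd(Gf, compute_uv=False)[0] for Gf in G])
kpeak = np.argmax(s1)
U, _, _ = np.linalg.svd(G[kpeak])
print(f[kpeak], np.abs(U[:, 0] / U[0, 0]).round(2))
```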

  19. Tourism forecasting using modified empirical mode decomposition and group method of data handling

    NASA Astrophysics Data System (ADS)

    Yahya, N. A.; Samsudin, R.; Shabri, A.

    2017-09-01

    In this study, a hybrid model using modified Empirical Mode Decomposition (EMD) and the Group Method of Data Handling (GMDH) is proposed for tourism forecasting. This approach reconstructs the intrinsic mode functions (IMFs) produced by EMD using a trial-and-error method. The new component and the remaining IMFs are then predicted separately using the GMDH model. Finally, the forecasted results for each component are aggregated to construct an ensemble forecast. The data used in this experiment are monthly time series of tourist arrivals from China, Thailand and India to Malaysia from 2000 to 2016. The performance of the model is evaluated using the Root Mean Square Error (RMSE) and the Mean Absolute Percentage Error (MAPE), with the conventional GMDH model and the EMD-GMDH model used as benchmarks. Empirical results show that the proposed model produces better forecasts than the benchmark models.

  20. Electromagnetic Characterization of Carbon Nanotube Films Subject to an Oxidative Treatment at Elevated Temperature (Preprint)

    DTIC Science & Technology

    2010-07-01

    response to the tip causes a redistribution of charge on the tip in order to maintain the equipotential surface of the sphere, and also results in a shift...can be obtained. In some instances these treatments lead to uncapping of nanotubes. Geng et al. [25] have shown that the surfaces of SWNT bundles...20] discovered a new and catalyst-free method for the growth of CNTs: surface decomposition of silicon carbide (SiC). This thermal decomposition

  1. Computation of forces arising from the polarizable continuum model within the domain-decomposition paradigm

    NASA Astrophysics Data System (ADS)

    Gatto, Paolo; Lipparini, Filippo; Stamm, Benjamin

    2017-12-01

    The domain-decomposition (dd) paradigm, originally introduced for the conductor-like screening model, has been recently extended to the dielectric Polarizable Continuum Model (PCM), resulting in the ddPCM method. We present here a complete derivation of the analytical derivatives of the ddPCM energy with respect to the positions of the solute's atoms and discuss their efficient implementation. As is the case for the energy, we observe a quadratic scaling, which is discussed and demonstrated with numerical tests.

  2. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    PubMed

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm in both settings: the native PT and the PT combined with a denoising process.

  3. An inductance Fourier decomposition-based current-hysteresis control strategy for switched reluctance motors

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Qi, Ji; Jia, Meng

    2017-05-01

    Switched reluctance machines (SRMs) have attracted extensive attention due to their inherent advantages, including a simple and robust structure, low cost, excellent fault tolerance and wide speed range. However, one of the bottlenecks limiting SRMs in further applications is their unfavorable torque ripple, and the consequent noise and vibration, due to the unique doubly salient structure and pulsed-current power supply method. In this paper, an inductance Fourier decomposition-based current-hysteresis-control (IFD-CHC) strategy is proposed to reduce the torque ripple of SRMs. After obtaining a nonlinear inductance-current-position model based on Fourier decomposition, reference currents can be calculated from the reference torque and the derived inductance model. Both simulation and experimental results confirm the effectiveness of the proposed strategy.
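
    A common Fourier form for the phase inductance of an SRM, and the torque relation from which a reference current can be solved, is sketched below; the symbols and the linear (unsaturated) coenergy approximation are assumptions, not the paper's model.

```latex
% Fourier series of the phase inductance over rotor position theta, with
% N_r rotor poles and current-dependent coefficients L_k(i); the torque
% follows from the coenergy under a magnetically linear approximation.
\begin{equation}
L(\theta, i) = L_0(i) + \sum_{k=1}^{K} L_k(i)\,\cos\!\left(k N_r \theta\right),
\qquad
T(\theta, i) \approx \frac{1}{2}\, i^{2}\,
\frac{\partial L(\theta, i)}{\partial \theta},
\end{equation}
% so for a demanded torque at a given position, the reference current is
% obtained by inverting the second relation.
```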

  4. The Multiscale Robin Coupled Method for flows in porous media

    NASA Astrophysics Data System (ADS)

    Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.

    2018-02-01

    A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, which can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established, and its connection with the method of Douglas et al. (1993) [12] is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented to illustrate several features of the new method. Initially, we illustrate the possibility of switching from MMMFEM to MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM or the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.

  5. Automatic single-image-based rain streaks removal via image decomposition.

    PubMed

    Kang, Li-Wei; Lin, Chia-Wen; Fu, Yu-Hsiang

    2012-04-01

    Rain removal from a video is a challenging problem and has recently been investigated extensively. Nevertheless, the problem of rain removal from a single image has rarely been studied in the literature; since no temporal information among successive images can be exploited, the problem is very challenging. In this paper, we propose a single-image-based rain removal framework by formulating rain removal as an image decomposition problem based on morphological component analysis. Instead of directly applying a conventional image decomposition technique, the proposed method first decomposes an image into the low- and high-frequency (HF) parts using a bilateral filter. The HF part is then decomposed into a "rain component" and a "nonrain component" by performing dictionary learning and sparse coding. As a result, the rain component can be successfully removed from the image while preserving most original image details. Experimental results demonstrate the efficacy of the proposed algorithm.
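
    A minimal sketch of the first stage described above follows: split an image into low- and high-frequency parts with a bilateral filter (the subsequent dictionary-learning step on the HF part is omitted here). It assumes OpenCV, and the synthetic image stands in for a rainy photograph.

```python
# Bilateral-filter split into low- and high-frequency parts.
import numpy as np
import cv2

rng = np.random.default_rng(6)
img = np.clip(rng.normal(0.5, 0.1, (128, 128)).astype(np.float32), 0, 1)

# Edge-preserving low-frequency part; filter parameters are illustrative.
low = cv2.bilateralFilter(img, d=9, sigmaColor=0.1, sigmaSpace=5)
high = img - low        # rain streaks live here, mixed with fine detail

print(float(low.std()), float(high.std()))
```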

  6. Adaptive multi-step Full Waveform Inversion based on Waveform Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Hu, Yong; Han, Liguo; Xu, Zhuo; Zhang, Fengjiao; Zeng, Jingwen

    2017-04-01

    Full Waveform Inversion (FWI) can be used to build high-resolution velocity models, but many challenges remain in processing seismic field data. The most difficult problem is how to recover the long-wavelength components of subsurface velocity models when the seismic data lack low-frequency information and long offsets. To solve this problem, we propose to use the Waveform Mode Decomposition (WMD) method to reconstruct low-frequency information for FWI and obtain a smooth model, so that the initial-model dependence of FWI can be reduced. In this paper, we use the adjoint-state method to calculate the gradient for Waveform Mode Decomposition Full Waveform Inversion (WMDFWI). Through illustrative numerical examples, we show that the low-frequency information reconstructed by the WMD method is reliable. WMDFWI, in combination with the adaptive multi-step inversion strategy, obtains more faithful and accurate final inversion results. Numerical examples show that even if the initial velocity model is far from the true model and lacks low-frequency information, we can still obtain good inversion results with the WMD method. Numerical noise-resistance tests show that the adaptive multi-step inversion strategy for WMDFWI has a strong ability to resist Gaussian noise. The WMD method is a promising candidate for land seismic FWI, because it can reconstruct low-frequency information, lower the dominant frequency in the adjoint source, and strongly resist noise.

  7. An Aquatic Decomposition Scoring Method to Potentially Predict the Postmortem Submersion Interval of Bodies Recovered from the North Sea.

    PubMed

    van Daalen, Marjolijn A; de Kat, Dorothée S; Oude Grotebevelsborg, Bernice F L; de Leeuwe, Roosje; Warnaar, Jeroen; Oostra, Roelof Jan; M Duijst-Heesters, Wilma L J

    2017-03-01

    This study aimed to develop an aquatic decomposition scoring (ADS) method and investigated the predictive value of this method in estimating the postmortem submersion interval (PMSI) of bodies recovered from the North Sea. This method, consisting of an ADS item list and a pictorial reference atlas, showed high interobserver agreement (Krippendorff's alpha ≥ 0.93) and hence proved to be valid. The scoring method was applied to data collected from closed cases (cases in which the PMSI was known) concerning bodies recovered from the North Sea from 1990 to 2013. Thirty-eight cases met the inclusion criteria and were scored by quantifying the observed total aquatic decomposition score (TADS). Statistical analysis demonstrated that TADS accurately predicts the PMSI (p < 0.001), confirming that the decomposition process in the North Sea is strongly correlated with time. © 2017 American Academy of Forensic Sciences.

  8. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
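
    The decomposition of the relaxation spectrum used above can be written in a common textbook form; the symbols below are assumptions, not the paper's notation.

```latex
% A multimode relaxational absorption spectrum as a sum of N single
% relaxation processes: mu_{m,i} is the strength and f_i = 1/(2*pi*tau_i)
% the characteristic frequency of process i. Measurements of absorption
% and sound speed at 2N frequencies determine the N pairs (mu_{m,i}, f_i).
\begin{equation}
\mu(f) \;=\; \sum_{i=1}^{N} \mu_{m,i}\,
\frac{2\,(f/f_i)}{1 + (f/f_i)^{2}},
\qquad
f_i = \frac{1}{2\pi \tau_i}.
\end{equation}
```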

  9. Improving 3D Wavelet-Based Compression of Hyperspectral Images

    NASA Technical Reports Server (NTRS)

    Klimesh, Matthew; Kiely, Aaron; Xie, Hua; Aranki, Nazeeh

    2009-01-01

    Two methods of increasing the effectiveness of three-dimensional (3D) wavelet-based compression of hyperspectral images have been developed. (As used here, images signifies both images and digital data representing images.) The methods are oriented toward reducing or eliminating detrimental effects of a phenomenon, referred to as spectral ringing, that is described below. In 3D wavelet-based compression, an image is represented by a multiresolution wavelet decomposition consisting of several subbands obtained by applying wavelet transforms in the two spatial dimensions corresponding to the two spatial coordinate axes of the image plane, and by applying wavelet transforms in the spectral dimension. Spectral ringing is named after the more familiar spatial ringing (spurious spatial oscillations) that can be seen parallel to and near edges in ordinary images reconstructed from compressed data. These ringing phenomena are attributable to effects of quantization. In hyperspectral data, the individual spectral bands play the role of edges, causing spurious oscillations to occur in the spectral dimension. In the absence of such corrective measures as the present two methods, spectral ringing can manifest itself as systematic biases in some reconstructed spectral bands and can reduce the effectiveness of compression of spatially-low-pass subbands. One of the two methods is denoted mean subtraction. The basic idea of this method is to subtract mean values from spatial planes of spatially low-pass subbands prior to encoding, because (a) such spatial planes often have mean values that are far from zero and (b) zero-mean data are better suited for compression by methods that are effective for subbands of two-dimensional (2D) images. In this method, after the 3D wavelet decomposition is performed, mean values are computed for and subtracted from each spatial plane of each spatially-low-pass subband. The resulting data are converted to sign-magnitude form and compressed in a manner similar to that of a baseline hyperspectral-image-compression method. The mean values are encoded in the compressed bit stream and added back to the data at the appropriate decompression step. The overhead incurred by encoding the mean values (only a few bits per spectral band) is negligible with respect to the huge size of a typical hyperspectral data set. The other method is denoted modified decomposition. This method is so named because it involves a modified version of a commonly used multiresolution wavelet decomposition, known in the art as the 3D Mallat decomposition, in which (a) the first of multiple stages of a 3D wavelet transform is applied to the entire dataset and (b) subsequent stages are applied only to the horizontally-, vertically-, and spectrally-low-pass subband from the preceding stage. In the modified decomposition, in stages after the first, not only is the spatially-low-pass, spectrally-low-pass subband further decomposed, but also spatially-low-pass, spectrally-high-pass subbands are further decomposed spatially. Either method can be used alone to improve the quality of a reconstructed image. Alternatively, the two methods can be combined by first performing modified decomposition, then subtracting the mean values from spatial planes of spatially-low-pass subbands.
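
    The mean-subtraction step described above is simple enough to sketch directly; the random subband array below is an illustrative stand-in for real wavelet coefficients.

```python
# Mean subtraction for a spatially-low-pass subband: remove the mean of
# each spatial plane (one per spectral band) before encoding, keeping the
# means as tiny side information for the decoder.
import numpy as np

rng = np.random.default_rng(7)
bands, rows, cols = 32, 16, 16
subband = rng.normal(50.0, 3.0, (bands, rows, cols))  # far-from-zero means

plane_means = subband.mean(axis=(1, 2))               # one mean per band
zero_mean = subband - plane_means[:, None, None]      # suited to 2-D coders

# Decoder side: add the transmitted means back after decompression.
restored = zero_mean + plane_means[:, None, None]
assert np.allclose(restored, subband)
print(plane_means[:4].round(2))
```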

  10. Decomposition of Multi-player Games

    NASA Astrophysics Data System (ADS)

    Zhao, Dengji; Schiffel, Stephan; Thielscher, Michael

    Research in General Game Playing aims at building systems that learn to play unknown games without human intervention. We contribute to this endeavour by generalising the established technique of decomposition from AI Planning to multi-player games. To this end, we present a method for the automatic decomposition of previously unknown games into independent subgames, and we show how a general game player can exploit a successful decomposition for game tree search.

  11. Thermal Decomposition Synthesis of Iron Oxide Nanoparticles with Diminished Magnetic Dead Layer by Controlled Addition of Oxygen.

    PubMed

    Unni, Mythreyi; Uhl, Amanda M; Savliwala, Shehaab; Savitzky, Benjamin H; Dhavalikar, Rohan; Garraud, Nicolas; Arnold, David P; Kourkoutis, Lena F; Andrew, Jennifer S; Rinaldi, Carlos

    2017-02-28

    Decades of research focused on size and shape control of iron oxide nanoparticles have led to methods of synthesis that afford excellent control over physical size and shape but comparatively poor control over magnetic properties. Popular synthesis methods based on thermal decomposition of organometallic precursors in the absence of oxygen have yielded particles with mixed iron oxide phases, crystal defects, and poorer than expected magnetic properties, including the existence of a thick "magnetically dead layer" experimentally evidenced by a magnetic diameter significantly smaller than the physical diameter. Here, we show how single-crystalline iron oxide nanoparticles with few defects and similar physical and magnetic diameter distributions can be obtained by introducing molecular oxygen as one of the reactive species in the thermal decomposition synthesis. This is achieved without the need for any postsynthesis oxidation or thermal annealing. These results address a significant challenge in the synthesis of nanoparticles with predictable magnetic properties and could lead to advances in applications of magnetic nanoparticles.

  12. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE PAGES

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    2017-10-26

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.
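
    For small problems, the KL sampler that the paper contrasts against can be written in a few lines. A minimal numpy sketch, assuming an exponential covariance on a 1-D grid (both the covariance and the grid are illustrative choices, not the paper's setup):

```python
import numpy as np

# Karhunen-Loeve sampling of a 1-D Gaussian field on [0, 1] with an
# assumed exponential covariance C(x, y) = exp(-|x - y| / ell).
n, ell = 200, 0.2
x = np.linspace(0.0, 1.0, n)
C = np.exp(-np.abs(x[:, None] - x[None, :]) / ell)

# Dense eigen-decomposition: this O(n^3) step is what becomes
# infeasible at large scale, motivating the SPDE-based sampler.
w, V = np.linalg.eigh(C)
w = np.clip(w, 0.0, None)            # guard tiny negative eigenvalues

rng = np.random.default_rng(1)
xi = rng.standard_normal(n)          # i.i.d. standard normals
sample = V @ (np.sqrt(w) * xi)       # one realization of the field
print(sample[:5])
```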

  13. A Multilevel, Hierarchical Sampling Technique for Spatially Correlated Random Fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Osborn, Sarah; Vassilevski, Panayot S.; Villa, Umberto

    In this paper, we propose an alternative method to generate samples of a spatially correlated random field with applications to large-scale problems for forward propagation of uncertainty. A classical approach for generating these samples is the Karhunen-Loève (KL) decomposition. However, the KL expansion requires solving a dense eigenvalue problem and is therefore computationally infeasible for large-scale problems. Sampling methods based on stochastic partial differential equations provide a highly scalable way to sample Gaussian fields, but the resulting parametrization is mesh dependent. We propose a multilevel decomposition of the stochastic field to allow for scalable, hierarchical sampling based on solving a mixed finite element formulation of a stochastic reaction-diffusion equation with a random, white noise source function. Lastly, numerical experiments are presented to demonstrate the scalability of the sampling method as well as numerical results of multilevel Monte Carlo simulations for a subsurface porous media flow application using the proposed sampling method.

  14. Direct Extraction of Tumor Response Based on Ensemble Empirical Mode Decomposition for Image Reconstruction of Early Breast Cancer Detection by UWB.

    PubMed

    Li, Qinwei; Xiao, Xia; Wang, Liang; Song, Hang; Kono, Hayato; Liu, Peifang; Lu, Hong; Kikkawa, Takamaro

    2015-10-01

    A direct extraction method of the tumor response based on ensemble empirical mode decomposition (EEMD) is proposed for early breast cancer detection by ultra-wideband (UWB) microwave imaging. With this approach, image reconstruction for tumor detection can be realized using only signals extracted from the as-detected waveforms. The calibration process executed in previous research to obtain reference waveforms, which stand for signals detected from a tumor-free model, is not required. The correctness of the method is demonstrated by successfully detecting a 4 mm tumor located inside the glandular region of one breast model and, in a second model, at the interface between the gland and the fat. The reliability of the method is checked by distinguishing a tumor buried in glandular tissue whose dielectric constant is 35. The feasibility of the method is confirmed by showing the correct tumor information in both simulation results and experimental results for a realistic 3-D printed breast phantom.

  15. Performance of tensor decomposition-based modal identification under nonstationary vibration

    NASA Astrophysics Data System (ADS)

    Friesen, P.; Sadhu, A.

    2017-03-01

    Health monitoring of civil engineering structures is of paramount importance when they are subjected to natural hazards or extreme climatic events like earthquakes, strong wind gusts or man-made excitations. Most of the traditional modal identification methods rely on a stationarity assumption for the vibration response and pose difficulties when analyzing nonstationary vibration (e.g. earthquake or human-induced vibration). Recently, tensor decomposition based methods have emerged as a powerful yet generic blind (i.e. without requiring knowledge of the input characteristics) signal decomposition tool for structural modal identification. In this paper, a tensor decomposition based system identification method is further explored to estimate modal parameters using nonstationary vibration generated by either earthquake or pedestrian-induced excitation in a structure. The effects of lag parameters and sensor densities on tensor decomposition are studied with respect to the extent of nonstationarity of the responses, characterized by the stationary duration and peak ground acceleration of the earthquake. A suite of more than 1400 earthquakes is used to investigate the performance of the proposed method under a wide variety of ground motions, utilizing both complete and partial measurements of a high-rise building model. Apart from the earthquake, human-induced nonstationary vibration of a real-life pedestrian bridge is also used to verify the accuracy of the proposed method.

  16. Isoconversional approach for non-isothermal decomposition of un-irradiated and photon-irradiated 5-fluorouracil.

    PubMed

    Mohamed, Hala Sh; Dahy, AbdelRahman A; Mahfouz, Refaat M

    2017-10-25

    Kinetic analysis for the non-isothermal decomposition of un-irradiated and photon-beam-irradiated 5-fluorouracil (5-FU), an anti-cancer drug, was carried out in static air. Thermal decomposition of 5-FU proceeds in two steps: a minor step in the temperature range 270-283°C followed by the major step in the range 285-360°C. The non-isothermal data for un-irradiated and photon-irradiated 5-FU were analyzed using linear (Tang) and non-linear (Vyazovkin) isoconversional methods. Applying these model-free methods to the present kinetic data showed a clear dependence of the activation energy on the extent of conversion. For un-irradiated 5-FU, the non-isothermal data analysis indicates that the decomposition is generally described by the A3 and A4 models for the minor and major decomposition steps, respectively. For a photon-irradiated sample of 5-FU with a total absorbed dose of 10 Gy, the decomposition is controlled by the A2 model throughout the conversion range. The activation energies calculated for photon-irradiated 5-FU were found to be lower than the values obtained from the thermal decomposition of the un-irradiated sample, probably due to additional nucleation sites created by photon irradiation. The decomposition path was investigated by intrinsic reaction coordinate (IRC) calculations at the B3LYP/6-311++G(d,p) level of DFT. Two transition states were involved in the process, corresponding to homolytic rupture of the NH bond and ring scission, respectively. Published by Elsevier B.V.
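
    The isoconversional idea lends itself to a compact sketch: at a fixed conversion, the Friedman method regresses ln(dα/dt) against 1/T across heating rates, and the slope gives -Ea/R. A minimal illustration with synthetic rates (the values are invented, not the paper's data):

```python
import numpy as np

R = 8.314  # J/(mol K)

def friedman_Ea(T_at_alpha, rate_at_alpha):
    """Friedman differential isoconversional estimate at one conversion:
    the slope of ln(d(alpha)/dt) vs 1/T over several heating rates
    equals -Ea/R. Inputs are arrays, one entry per heating rate."""
    slope, _ = np.polyfit(1.0 / T_at_alpha, np.log(rate_at_alpha), 1)
    return -slope * R

# Synthetic check: rates generated with Ea = 120 kJ/mol.
Ea_true, A = 120e3, 1e10
T = np.array([540.0, 550.0, 560.0, 570.0])         # K, at a fixed alpha
rate = A * np.exp(-Ea_true / (R * T))
print(friedman_Ea(T, rate) / 1e3, "kJ/mol")        # recovers ~120
```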

  17. Comparison of rapid methods for chemical analysis of milligram samples of ultrafine clays

    USGS Publications Warehouse

    Rettig, S.L.; Marinenko, J.W.; Khoury, Hani N.; Jones, B.F.

    1983-01-01

    Two rapid methods for the decomposition and chemical analysis of clays were adapted for use with 20–40-mg size samples, typical amounts of ultrafine products (≤0.5-µm diameter) obtained by modern separation methods for clay minerals. The results of these methods were compared with those of “classical” rock analyses. The two methods consisted of mixed lithium metaborate fusion and heated decomposition with HF in a closed vessel. The latter technique was modified to include subsequent evaporation with concentrated H2SO4 and re-solution in HCl, which reduced the interference of the fluoride ion in the determination of Al, Fe, Ca, Mg, Na, and K. Results from the two methods agree sufficiently well with those of the “classical” techniques to minimize error in the calculation of clay mineral structural formulae. Representative maximum variations, in atoms per unit formula of the smectite type based on 22 negative charges, are 0.09 for Si, 0.03 for Al, 0.015 for Fe, 0.07 for Mg, 0.03 for Na, and 0.01 for K.

  18. Analysis of Vibration and Noise of Construction Machinery Based on Ensemble Empirical Mode Decomposition and Spectral Correlation Analysis Method

    NASA Astrophysics Data System (ADS)

    Chen, Yuebiao; Zhou, Yiqi; Yu, Gang; Lu, Dan

    In order to analyze the effect of engine vibration on the cab noise of construction machinery in multiple frequency bands, a new method based on ensemble empirical mode decomposition (EEMD) and spectral correlation analysis is proposed. First, the intrinsic mode functions (IMFs) of the vibration and noise signals were obtained by the EEMD method, and the IMFs occupying the same frequency bands were selected. Second, we calculated the spectral correlation coefficients between the selected IMFs, identifying the main frequency bands in which engine vibration has a significant impact on cab noise. Third, the dominant frequencies were picked out and analyzed by the spectral analysis method. The results show that the main frequency bands and dominant frequencies in which engine vibration has a serious impact on cab noise can be identified effectively by the proposed method, which provides effective guidance for noise reduction of construction machinery.
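
    The band-matching step can be illustrated compactly. A hedged sketch, assuming the IMFs have already been extracted (e.g., by an EEMD routine) and using a plain Pearson correlation between amplitude spectra as a stand-in for the spectral correlation coefficient:

```python
import numpy as np

def spectral_correlation(imf_a, imf_b):
    """Pearson correlation between the amplitude spectra of two IMFs,
    a simple stand-in for the paper's band-matching step."""
    A = np.abs(np.fft.rfft(imf_a))
    B = np.abs(np.fft.rfft(imf_b))
    return np.corrcoef(A, B)[0, 1]

# Toy IMFs: the same 30 Hz band in vibration and noise, different
# phase and additive noise.
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(2)
vib_imf = np.sin(2 * np.pi * 30 * t) + 0.1 * rng.standard_normal(t.size)
noise_imf = (0.8 * np.sin(2 * np.pi * 30 * t + 0.4)
             + 0.1 * rng.standard_normal(t.size))
print(spectral_correlation(vib_imf, noise_imf))   # close to 1
```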

  19. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signals, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to the generalised-demodulation method with singular value decomposition, the parametric time-frequency analysis method with filtering, and the empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.

  20. Fusion of infrared and visible images based on BEMD and NSDFB

    NASA Astrophysics Data System (ADS)

    Zhu, Pan; Huang, Zhanhua; Lei, Hai

    2016-07-01

    This paper presents a new fusion method for visible-infrared images based on the adaptive multi-scale decomposition of bidimensional empirical mode decomposition (BEMD) and the flexible directional expansion of nonsubsampled directional filter banks (NSDFB). Compared with conventional multi-scale fusion methods, BEMD is non-parametric and completely data-driven, which makes it more suitable for decomposing and fusing non-linear signals. NSDFB can provide direction filtering on the decomposition levels to capture more of the geometrical structure of the source images effectively. In our fusion framework, the entropies of the two source images are first calculated, and the residue of the image whose entropy is larger is extracted to make it highly relevant to the other source image. Then, the residue and the other source image are decomposed into low-frequency sub-bands and a sequence of high-frequency directional sub-bands at different scales by using BEMD and NSDFB. In this fusion scheme, two relevant fusion rules are used in the low-frequency sub-bands and the high-frequency directional sub-bands, respectively. Finally, the fused image is obtained by applying the corresponding inverse transform. Experimental results indicate that the proposed fusion algorithm can obtain state-of-the-art performance for visible-infrared image fusion in both objective assessment and subjective visual quality, even for source images obtained under different conditions. Furthermore, the fused results have high contrast, remarkable target information and rich detail information that are more suitable for human visual characteristics or machine perception.

  1. Accurate analytical periodic solution of the elliptical Kepler equation using the Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Alshaery, Aisha; Ebaid, Abdelhalim

    2017-11-01

    Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we will solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrated a rapid convergence of the obtained approximate solutions, which are displayed in tables and graphs. Also, it has been shown in this paper that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun, as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature up to seven decimal places at some values of the time parameter and nine decimal places at others. Moreover, the absolute error approaches zero using the nine-term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with the well-known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool to solve Kepler's equation for elliptical orbits.
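
    The recursion behind such a series is easy to reproduce. A minimal sketch of a few-term Adomian solution of E = M + e sin E, with the Adomian polynomials of sin E written out by hand and a Newton solver as the reference (the values are illustrative, not the paper's tables):

```python
import numpy as np

def kepler_adomian(M, e):
    """Adomian series for E = M + e*sin(E): E0 = M, E_{n+1} = e*A_n,
    where A_n are the Adomian polynomials of N(E) = sin(E)."""
    E0 = M
    E1 = e * np.sin(E0)                                    # e*A0
    E2 = e * (E1 * np.cos(E0))                             # e*A1
    E3 = e * (E2 * np.cos(E0) - 0.5 * E1**2 * np.sin(E0))  # e*A2
    E4 = e * (E3 * np.cos(E0) - E1 * E2 * np.sin(E0)
              - E1**3 / 6.0 * np.cos(E0))                  # e*A3
    return E0 + E1 + E2 + E3 + E4

def kepler_newton(M, e, tol=1e-14):
    """Reference solution by Newton iteration."""
    E = M
    for _ in range(50):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))
        E -= dE
        if abs(dE) < tol:
            break
    return E

M, e = 1.0, 0.0167          # Earth-like eccentricity
print(kepler_adomian(M, e) - kepler_newton(M, e))   # ~1e-9 or smaller
```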

  2. Singular Value Decomposition Method to Determine Distance Distributions in Pulsed Dipolar Electron Spin Resonance.

    PubMed

    Srivastava, Madhur; Freed, Jack H

    2017-11-16

    Regularization is often utilized to elicit the desired physical results from experimental data. The recent development of a denoising procedure yielding about 2 orders of magnitude improvement in SNR obviates the need for regularization, which achieves a compromise between canceling effects of noise and obtaining an estimate of the desired physical results. We show how singular value decomposition (SVD) can be employed directly on the denoised data, using pulsed dipolar electron spin resonance experiments as an example. Such experiments are useful in measuring distances and their distributions, P(r), between spin labels on proteins. In noise-free model cases exact results are obtained, but even a small amount of noise (e.g., SNR = 850 after denoising) corrupts the solution. We develop criteria that precisely determine an optimum approximate solution, which can readily be automated. This method is applicable to any signal that is currently processed with regularization of its SVD analysis.
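
    The core idea, replacing regularization by a truncation of the SVD of the denoised problem, can be sketched on a toy linear inversion. The kernel and P(r) below are invented for illustration; the paper's truncation criterion is more refined than a fixed rank:

```python
import numpy as np

def tsvd_solve(K, s, rank):
    """Truncated-SVD pseudo-inverse: keep the `rank` largest singular
    values and discard the noise-dominated small ones (the role that
    regularization usually plays)."""
    U, sv, Vt = np.linalg.svd(K, full_matrices=False)
    inv = np.where(np.arange(sv.size) < rank, 1.0 / sv, 0.0)
    return Vt.T @ (inv * (U.T @ s))

# Toy problem: a smooth kernel mapping a distribution to a signal.
rng = np.random.default_rng(3)
r = np.linspace(1.0, 8.0, 80)                    # "distance" grid
t = np.linspace(0.0, 1.0, 100)
K = np.exp(-np.outer(t, (r - 4.0) ** 2))         # assumed smooth kernel
P_true = np.exp(-0.5 * ((r - 4.0) / 0.4) ** 2)   # Gaussian P(r)
s = K @ P_true + 1e-4 * rng.standard_normal(t.size)   # high-SNR data
P_est = tsvd_solve(K, s, rank=10)
# Reconstruction error depends strongly on the truncation rank.
print(np.max(np.abs(P_est - P_true)))
```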

  3. Synthesis, characterization, thermal and explosive properties of potassium salts of trinitrophloroglucinol.

    PubMed

    Wang, Liqiong; Chen, Hongyan; Zhang, Tonglai; Zhang, Jianguo; Yang, Li

    2007-08-17

    Three different substituted potassium salts of trinitrophloroglucinol (H3TNPG) were prepared and characterized. The salts are all hydrates; thermogravimetric analysis (TG) and elemental analysis confirmed that they contain crystal H2O, with 1.0 hydrate for the mono-substituted salt [K(H2TNPG)] and the di-substituted salt [K2(HTNPG)], and 2.0 hydrate for the tri-substituted salt [K3(TNPG)]. Their thermal decomposition mechanisms and kinetic parameters from 50 to 500 degrees C were studied under a linear heating rate by differential scanning calorimetry (DSC). Their thermal decomposition proceeds through a dehydration stage and an intensive exothermic decomposition stage. FT-IR and TG studies verify that the final decomposition residues are potassium cyanide or potassium carbonate. According to the onset temperature of the first exothermic decomposition process of the dehydrated salts, the order of thermal stability from low to high is K(H2TNPG), K2(HTNPG), K3(TNPG), which conforms to the apparent activation energies calculated by Kissinger's and Ozawa-Doyle's methods. Sensitivity tests showed that the potassium salts of H3TNPG demonstrated higher sensitivity and greater explosive probability.

  4. Domain decomposition methods in computational fluid dynamics

    NASA Technical Reports Server (NTRS)

    Gropp, William D.; Keyes, David E.

    1991-01-01

    The divide-and-conquer paradigm of iterative domain decomposition, or substructuring, has become a practical tool in computational fluid dynamic applications because of its flexibility in accommodating adaptive refinement through locally uniform (or quasi-uniform) grids, its ability to exploit multiple discretizations of the operator equations, and the modular pathway it provides towards parallelism. These features are illustrated on the classic model problem of flow over a backstep using Newton's method as the nonlinear iteration. Multiple discretizations (second-order in the operator and first-order in the preconditioner) and locally uniform mesh refinement pay dividends separately, and they can be combined synergistically. Sample performance results are included from an Intel iPSC/860 hypercube implementation.
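
    A minimal sketch of the substructuring idea on the simplest possible model, alternating Schwarz with two overlapping subdomains for the 1-D Poisson problem (the grid size and overlap are arbitrary choices; the paper's Newton/backstep setting is far richer):

```python
import numpy as np

# Alternating Schwarz for -u'' = f on (0,1), u(0) = u(1) = 0,
# with two overlapping subdomains and exact local solves.
n = 101
h = 1.0 / (n - 1)
x = np.linspace(0, 1, n)
f = np.ones(n)
u = np.zeros(n)

def local_solve(u, lo, hi):
    """Dirichlet solve of -u'' = f on interior nodes lo+1..hi-1;
    boundary values come from the current global iterate (the
    Schwarz data exchange)."""
    m = hi - lo - 1                        # number of interior unknowns
    A = (np.diag(2.0 * np.ones(m)) - np.diag(np.ones(m - 1), 1)
         - np.diag(np.ones(m - 1), -1)) / h**2
    b = f[lo + 1:hi].copy()
    b[0] += u[lo] / h**2
    b[-1] += u[hi] / h**2
    u[lo + 1:hi] = np.linalg.solve(A, b)

for _ in range(30):                        # Schwarz sweeps
    local_solve(u, 0, 60)                  # subdomain 1: nodes 0..60
    local_solve(u, 40, n - 1)              # subdomain 2: nodes 40..100
exact = 0.5 * x * (1 - x)                  # analytic solution for f = 1
print(np.max(np.abs(u - exact)))           # tiny after enough sweeps
```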

  5. Preparation of Coaxial-Line and Hollow Mn2O3 Nanofibers by Single-Nozzle Electrospinning and Their Catalytic Performances for Thermal Decomposition of Ammonium Perchlorate.

    PubMed

    Liang, Jiyuan; Yang, Jie; Cao, Weiguo; Guo, Xiangke; Guo, Xuefeng; Ding, Weiping

    2015-09-01

    Coaxial-line and hollow Mn2O3 nanofibers have been synthesized by a simple single-nozzle electrospinning method, without using a complicated coaxial jet head, combined with final calcination. The crystal structure and morphology of the Mn2O3 nanofibers were investigated by X-ray diffraction, scanning electron microscopy and transmission electron microscopy. The results indicate that the electrospinning distance has an important influence on the morphology and structure of the obtained Mn2O3 nanofibers, which change from hollow fibers at short electrospinning distances to coaxial-line structures at long distances after calcination in air. The formation mechanisms of the differently structured Mn2O3 fibers are discussed in detail. This facile and effective method is easy to scale up and may be versatile for constructing coaxial-line and hollow fibers of other metal oxides. The catalytic activity of the obtained Mn2O3 nanofibers on the thermal decomposition of ammonium perchlorate (AP) was studied by differential scanning calorimetry (DSC). The results show that the hollow Mn2O3 nanofibers have good catalytic activity to promote the thermal decomposition of AP.

  6. Multi-Centrality Graph Spectral Decompositions and Their Application to Cyber Intrusion Detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Pin-Yu; Choudhury, Sutanay; Hero, Alfred

    Many modern datasets can be represented as graphs and hence spectral decompositions such as graph principal component analysis (PCA) can be useful. Distinct from previous graph decomposition approaches based on subspace projection of a single topological feature, e.g., the centered graph adjacency matrix (graph Laplacian), we propose spectral decomposition approaches to graph PCA and graph dictionary learning that integrate multiple features, including graph walk statistics, centrality measures and graph distances to reference nodes. In this paper we propose a new PCA method for single graph analysis, called multi-centrality graph PCA (MC-GPCA), and a new dictionary learning method for ensembles of graphs, called multi-centrality graph dictionary learning (MC-GDL), both based on spectral decomposition of multi-centrality matrices. As an application to cyber intrusion detection, MC-GPCA can be an effective indicator of anomalous connectivity patterns and MC-GDL can provide a discriminative basis for attack classification.
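
    The flavor of a multi-centrality matrix feeding a spectral decomposition can be reproduced with networkx. This is a toy version of the idea, not the MC-GPCA algorithm itself, and the particular choice of centralities is an assumption:

```python
import numpy as np
import networkx as nx

# Build a multi-centrality feature matrix for one graph and take its
# principal components via SVD: several topological features per node
# instead of a single one.
G = nx.karate_club_graph()
feats = np.column_stack([
    list(nx.degree_centrality(G).values()),
    list(nx.betweenness_centrality(G).values()),
    list(nx.closeness_centrality(G).values()),
    list(nx.pagerank(G).values()),           # walk-statistic stand-in
])
X = (feats - feats.mean(0)) / feats.std(0)   # standardize columns
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                               # node scores on the PCs
print("explained variance ratios:", (s**2 / np.sum(s**2)).round(3))
# Nodes with extreme PC-1 scores are candidates for anomalous connectivity.
print("top nodes on PC1:", np.argsort(-np.abs(scores[:, 0]))[:5])
```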

  7. A Raman spectroscopic determination of the kinetics of decomposition of ammonium chromate (NH4)2CrO4

    NASA Astrophysics Data System (ADS)

    De Waal, D.; Heyns, A. M.; Range, K.-J.

    1989-06-01

    Raman spectroscopy was used as a method in the kinetic investigation of the thermal decomposition of solid (NH4)2CrO4. Time-dependent measurements of the intensity of the totally symmetric stretching CrO mode of (NH4)2CrO4 have been made between 343 and 363 K. A short initial acceleratory period is observed at lower temperatures, and the decomposition reaction decelerates after the maximum decomposition rate has been reached at all temperatures. These results can be interpreted in terms of the Avrami-Erofe'ev law 1 - χr^(1/2) = kt, where χr is the fraction of reactant at time t. At 358 K, k is equal to (1.76 ± 0.01) × 10^-3 sec^-1 for microcrystals and for powdered samples. Activation energies of 97 ± 10 and 49 ± 0.9 kJ mole^-1 have been calculated for microcrystalline and powdered samples, respectively.
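
    Estimating k from such data is a one-line regression once the Avrami-Erofe'ev form is linearized: y = 1 - χr^(1/2) is proportional to t. A sketch with synthetic data generated at the paper's 358 K rate constant:

```python
import numpy as np

# Fit the Avrami-Erofe'ev form 1 - chi_r**0.5 = k*t by linear least
# squares through the origin, using synthetic "Raman intensity" data.
k_true = 1.76e-3                       # s^-1, the paper's 358 K value
t = np.linspace(0.0, 300.0, 31)        # s
chi_r = (1.0 - k_true * t) ** 2        # fraction of reactant remaining
y = 1.0 - np.sqrt(chi_r)               # transformed variable, equals k*t
k_fit = np.sum(y * t) / np.sum(t * t)  # slope of the zero-intercept fit
print(k_fit)                           # recovers ~1.76e-3
```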

  8. Multivariate Curve Resolution Applied to Infrared Reflection Measurements of Soil Contaminated with an Organophosphorus Analyte

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gallagher, Neal B.; Blake, Thomas A.; Gassman, Paul L.

    2006-07-01

    Multivariate curve resolution (MCR) is a powerful technique for extracting chemical information from measured spectra of complex mixtures. The difficulty with applying MCR to soil reflectance measurements is that light scattering artifacts can contribute much more variance to the measurements than the analyte(s) of interest. Two methods were integrated into an MCR decomposition to account for light scattering effects. First, an extended mixture model using pure analyte spectra augmented with scattering 'spectra' was used for the measured spectra. Second, second-derivative preprocessed spectra, which have higher selectivity than the unprocessed spectra, were included in a second block as a part of the decomposition. The conventional alternating least squares (ALS) algorithm was modified to simultaneously decompose the measured and second-derivative spectra in a two-block decomposition. Equality constraints were also included to incorporate information about sampling conditions. The result was an MCR decomposition that provided interpretable spectra from soil reflectance measurements.

  9. Time Series Decomposition into Oscillation Components and Phase Estimation.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished by this model in the same way as the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.

  10. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    PubMed

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are not able to estimate the baseline in complex overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo (1)H-MRS signals of the human brain. Results demonstrate the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.

  11. Temporal structure of neuronal population oscillations with empirical model decomposition

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli

    2006-08-01

    Frequency analysis of neuronal oscillations is very important for understanding neural information processing and the mechanisms of disorders in the brain. This Letter addresses a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) are obtained; then the Hilbert transform of the IMFs can be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations in the hippocampus of epileptic rats in vivo; the results show that the neuronal oscillations have different characteristics during the pre-ictal, seizure onset and ictal periods of the epileptic EEG at different frequency bands. This new method is very helpful in providing a view of the temporal structure of neural oscillations.
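
    The post-EMD step, taking the Hilbert transform of an IMF to obtain instantaneous frequency, is compact with scipy. A sketch on a synthetic chirp standing in for one IMF (the signal is an assumption for illustration):

```python
import numpy as np
from scipy.signal import hilbert

# Instantaneous frequency of a single IMF via the Hilbert transform,
# the step that turns EMD output into a time-frequency description.
fs = 500.0
t = np.arange(0, 2.0, 1.0 / fs)
imf = np.sin(2 * np.pi * (8.0 * t + 2.0 * t**2))   # chirp: 8 + 4*t Hz

analytic = hilbert(imf)                  # analytic signal x + i*H[x]
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) * fs / (2 * np.pi)
print(inst_freq[200], inst_freq[800])    # ~9.6 Hz and ~14.4 Hz
```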

  12. Decomposition of algebraic sets and applications to weak centers of cubic systems

    NASA Astrophysics Data System (ADS)

    Chen, Xingwu; Zhang, Weinian

    2009-10-01

    There are many methods, such as Gröbner bases, characteristic sets and resultants, for computing an algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and some unnecessary factors in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulting from period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.

  13. On the decomposition of modular multiplicative inverse operators via a new functional algorithm approach to Bachet’s-Bezout’s Lemma

    NASA Astrophysics Data System (ADS)

    Cortés–Vega, Luis A.

    2017-12-01

    In this paper, we consider modular multiplicative inverse operators (MMIOs) of the form J_(m+n): (ℤ/(m+n)ℤ)* → ℤ/(m+n)ℤ, J_(m+n)(a) = a^(-1). A general method to decompose J_(m+n)(·) over the group of units (ℤ/(m+n)ℤ)* is derived. As a result, an interesting decomposition law for these operators over (ℤ/(m+n)ℤ)* is established. Numerical examples illustrating the new results are given. This complements some recent results obtained by the author for MMIOs defined over groups of units of the form (ℤ/ϱℤ)* with ϱ = m × n > 2.
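
    Computationally, the operator J_(m+n) is just the modular inverse, and the constructive content of Bachet-Bezout's lemma is the extended Euclidean algorithm. A minimal sketch (the function name and the modulus 26 are illustrative choices):

```python
def mod_inverse(a, m):
    """Modular multiplicative inverse a^(-1) mod m via the extended
    Euclidean algorithm: Bezout gives gcd(a, m) = x*a + y*m, so x is
    the inverse whenever gcd(a, m) == 1."""
    old_r, r = a % m, m
    old_x, x = 1, 0
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
    if old_r != 1:
        raise ValueError("a is not a unit modulo m")
    return old_x % m

# Inverses over the group of units (Z/(m+n)Z)*, e.g. m + n = 26:
print([(a, mod_inverse(a, 26)) for a in (3, 7, 11)])
# [(3, 9), (7, 15), (11, 19)]
```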

  14. Impact of joint statistical dual-energy CT reconstruction of proton stopping power images: Comparison to image- and sinogram-domain material decomposition approaches.

    PubMed

    Zhang, Shuangyue; Han, Dong; Politte, David G; Williamson, Jeffrey F; O'Sullivan, Joseph A

    2018-05-01

    The purpose of this study was to assess the performance of a novel dual-energy CT (DECT) approach for proton stopping power ratio (SPR) mapping that integrates image reconstruction and material characterization using a joint statistical image reconstruction (JSIR) method based on a linear basis vector model (BVM). A systematic comparison between the JSIR-BVM method and previously described DECT image- and sinogram-domain decomposition approaches is also carried out on synthetic data. The JSIR-BVM method was implemented to estimate the electron densities and mean excitation energies (I-values) required by the Bethe equation for SPR mapping. In addition, image- and sinogram-domain DECT methods based on three available SPR models including BVM were implemented for comparison. The intrinsic SPR modeling accuracy of the three models was first validated. Synthetic DECT transmission sinograms of two 330 mm diameter phantoms each containing 17 soft and bony tissues (for a total of 34) of known composition were then generated with spectra of 90 and 140 kVp. The estimation accuracy of the reconstructed SPR images was evaluated for the seven investigated methods. The impact of phantom size and insert location on SPR estimation accuracy was also investigated. All three selected DECT-SPR models predict the SPR of all tissue types with less than 0.2% RMS errors under idealized conditions with no reconstruction uncertainties. When applied to synthetic sinograms, the JSIR-BVM method achieves the best performance with mean and RMS-average errors of less than 0.05% and 0.3%, respectively, for all noise levels, while the image- and sinogram-domain decomposition methods show increasing mean and RMS-average errors with increasing noise level. The JSIR-BVM method also reduces statistical SPR variation by sixfold compared to other methods. A 25% phantom diameter change causes up to 4% SPR differences for the image-domain decomposition approach, while the JSIR-BVM method and sinogram-domain decomposition methods are insensitive to size change. Among all the investigated methods, the JSIR-BVM method achieves the best performance for SPR estimation in our simulation phantom study. This novel method is robust with respect to sinogram noise and residual beam-hardening effects, yielding SPR estimation errors comparable to intrinsic BVM modeling error. In contrast, the achievable SPR estimation accuracy of the image- and sinogram-domain decomposition methods is dominated by the CT image intensity uncertainties introduced by the reconstruction and decomposition processes. © 2018 American Association of Physicists in Medicine.

  15. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    NASA Astrophysics Data System (ADS)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement, and require the oversight of an expert user. This paper introduces an intuitive and easy to implement method for vibration analysis: amplitude-cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes, and second uses the squared envelope spectrum to reveal the presence of cyclostationarity in each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, both at stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
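
    The squared envelope spectrum at the heart of the method is a few lines with scipy. A sketch on a synthetic bearing-like signal (the carrier and fault frequencies are assumed values, not the paper's data):

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum: FFT of |analytic signal|^2, which
    concentrates energy at a bearing's cyclic fault frequency."""
    env2 = np.abs(hilbert(x)) ** 2
    env2 -= env2.mean()                       # drop the DC term
    spec = np.abs(np.fft.rfft(env2)) / x.size
    freqs = np.fft.rfftfreq(x.size, 1.0 / fs)
    return freqs, spec

# Toy fault signal: a 3 kHz resonance amplitude-modulated at 37 Hz
# (an assumed fault frequency) buried in noise.
fs = 20000.0
t = np.arange(0, 1.0, 1.0 / fs)
rng = np.random.default_rng(4)
x = (1 + 0.8 * np.sin(2 * np.pi * 37 * t)) * np.sin(2 * np.pi * 3000 * t)
x += 0.5 * rng.standard_normal(t.size)
freqs, spec = squared_envelope_spectrum(x, fs)
print(freqs[np.argmax(spec[1:200]) + 1])      # peak at ~37 Hz
```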

  16. An optimization approach for fitting canonical tensor decompositions.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dunlavy, Daniel M.; Acar, Evrim; Kolda, Tamara Gibson

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
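
    For reference, the ALS baseline that the paper measures against can be written directly with numpy. This is a bare-bones sketch of CP-ALS via pseudo-inverses of Khatri-Rao products, not the authors' gradient-based code:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product, the workhorse of CP-ALS."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(X, rank, n_iter=200):
    """Plain ALS for a 3-way CP decomposition X ~ [[A, B, C]]:
    each step solves a linear least-squares problem for one factor."""
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    A, B, C = (rng.standard_normal((d, rank)) for d in (I, J, K))
    X0 = X.reshape(I, -1)                      # mode-0 unfolding
    X1 = np.moveaxis(X, 1, 0).reshape(J, -1)   # mode-1 unfolding
    X2 = np.moveaxis(X, 2, 0).reshape(K, -1)   # mode-2 unfolding
    for _ in range(n_iter):
        A = X0 @ np.linalg.pinv(khatri_rao(B, C)).T
        B = X1 @ np.linalg.pinv(khatri_rao(A, C)).T
        C = X2 @ np.linalg.pinv(khatri_rao(A, B)).T
    return A, B, C

# Rank-2 test tensor, exactly recoverable.
rng = np.random.default_rng(5)
A0, B0, C0 = (rng.standard_normal((d, 2)) for d in (4, 5, 6))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(X, rank=2)
print(np.linalg.norm(X - np.einsum('ir,jr,kr->ijk', A, B, C)))  # small
```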

  17. The suitability of visual taphonomic methods for digital photographs: An experimental approach with pig carcasses in a tropical climate.

    PubMed

    Ribéreau-Gayon, Agathe; Rando, Carolyn; Morgan, Ruth M; Carter, David O

    2018-05-01

    In the context of increased scrutiny of the methods in forensic sciences, it is essential to ensure that the approaches used in forensic taphonomy to measure decomposition and estimate the postmortem interval are underpinned by robust evidence-based data. Digital photographs are an important source of documentation in forensic taphonomic investigations, but the suitability of the current approaches for photographs, rather than real-time remains, is poorly studied, which can undermine accurate forensic conclusions. The present study aimed to investigate the suitability of 2D colour digital photographs for evaluating decomposition of exposed human analogues (Sus scrofa domesticus) in a tropical savanna environment (Hawaii), using two published scoring methods: Megyesi et al. (2005) and Keough et al. (2017). It was found that there were significant differences between the real-time and photograph decomposition scores when the Megyesi et al. method was used. However, the Keough et al. method applied to photographs reflected real-time decomposition more closely and thus appears more suitable to evaluate pig decomposition from 2D photographs. The findings indicate that the type of scoring method used has a significant impact on the ability to accurately evaluate the decomposition of exposed pig carcasses from photographs. It was further identified that photographic taphonomic analysis can reach high inter-observer reproducibility. These novel findings are of significant importance for the forensic sciences as they highlight the potential for high quality photograph coverage to provide useful complementary information for the forensic taphonomic investigation. New recommendations to develop robust transparent approaches adapted to photographs in forensic taphonomy are suggested based on these findings. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  18. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2004-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
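
    The first step, sifting out one IMF, can be sketched with scipy splines. This is a stripped-down illustration of the patented procedure (fixed sift count, no stopping criterion, crude end handling), not the invention's full algorithm:

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_imf(x, t, n_sift=8):
    """Extract one intrinsic mode function by sifting: repeatedly
    subtract the mean of cubic-spline envelopes through the local
    maxima and minima."""
    h = x.copy()
    for _ in range(n_sift):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if mx.size < 2 or mn.size < 2:       # too few extrema to envelope
            break
        upper = CubicSpline(t[mx], h[mx])(t)
        lower = CubicSpline(t[mn], h[mn])(t)
        h = h - 0.5 * (upper + lower)        # remove the local mean
    return h

# Two-tone test: the first IMF should capture the fast 25 Hz component.
fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 25 * t) + 0.8 * np.sin(2 * np.pi * 3 * t)
imf1 = sift_imf(x, t)
residue = x - imf1                           # carries the slow 3 Hz part
print(np.corrcoef(imf1, np.sin(2 * np.pi * 25 * t))[0, 1])  # close to 1
```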

  19. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2002-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  20. Theoretical study on the mechanism of the reaction of FOX-7 with OH and NO2 radicals: bimolecular reactions with low barrier during the decomposition of FOX-7

    NASA Astrophysics Data System (ADS)

    Zhang, Ji-Dong; Zhang, Li-Li

    2017-12-01

    The decomposition of 1,1-diamino-2,2-dinitroethene (FOX-7) has attracted great interest, but studies on bimolecular reactions during the decomposition of FOX-7 are scarce. This study for the first time investigated the bimolecular reactions of OH and NO2 radicals, which are pyrolysis products of ammonium perchlorate (an efficient oxidant usually used in solid propellants), with FOX-7 by computational chemistry methods. The molecular geometries and energies were calculated using the (U)B3LYP/6-31++G(d,p) method. The rate constants of the reactions were calculated by canonical variational transition state theory. We found three mechanisms (H-abstraction, and OH addition to C and N atoms) for the reaction of OH + FOX-7 and two mechanisms (O-abstraction and H-abstraction) for the reaction of NO2 + FOX-7. The OH radical can abstract an H atom or add to a C atom of FOX-7 with barriers near zero, which means the OH radical can effectively degrade FOX-7. The O-abstraction channel of the reaction of NO2 + FOX-7 results in the formation of the NO3 radical, which has never been detected experimentally during the decomposition of FOX-7.

  1. Application of composite dictionary multi-atom matching in gear fault diagnosis.

    PubMed

    Cui, Lingli; Kang, Chenhui; Wang, Huaqing; Chen, Peng

    2011-01-01

    The sparse decomposition based on matching pursuit is an adaptive sparse expression method for signals. This paper proposes a composite-dictionary multi-atom matching decomposition and reconstruction algorithm, and introduces threshold de-noising into the reconstruction step. Based on the structural characteristics of gear fault signals, a composite dictionary combining the impulse time-frequency dictionary and the Fourier dictionary was constituted, and a genetic algorithm was applied to search for the best matching atom. The analysis results of gear fault simulation signals indicated the effectiveness of the hard threshold, and the impulse or harmonic characteristic components could be extracted separately. Meanwhile, the robustness of the composite-dictionary multi-atom matching algorithm at different noise levels was investigated. To address the effect of data length on the calculation efficiency of the algorithm, an improved segmented decomposition and reconstruction algorithm was proposed, and the calculation efficiency of the decomposition algorithm was significantly enhanced. In addition, it is shown that the multi-atom matching algorithm was superior to the single-atom matching algorithm in both calculation efficiency and robustness. Finally, the above algorithm was applied to gear fault engineering signals, and achieved good results.
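
    The greedy core of matching pursuit over a composite dictionary is short enough to sketch. The dictionary below (Fourier harmonics plus decaying impulses) is a simplified stand-in for the paper's impulse time-frequency dictionary, and no genetic-algorithm atom search is attempted:

```python
import numpy as np

def matching_pursuit(x, D, n_atoms):
    """Greedy matching pursuit: at each step pick the dictionary atom
    with the largest inner product with the residual and subtract its
    projection. D must have unit-norm atoms as columns."""
    residual = x.copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        corr = D.T @ residual
        k = np.argmax(np.abs(corr))
        coeffs[k] += corr[k]
        residual -= corr[k] * D[:, k]
    return coeffs, residual

# Composite dictionary: 32 Fourier atoms plus 16 decaying-impulse atoms.
n = 256
t = np.arange(n)
fourier = np.column_stack([np.sin(2 * np.pi * f * t / n)
                           for f in range(1, 33)])
impulses = np.column_stack([np.where(t >= s, np.exp(-(t - s) / 6.0), 0.0)
                            for s in range(0, n, 16)])
D = np.column_stack([fourier, impulses])
D /= np.linalg.norm(D, axis=0)

x = 2.0 * D[:, 4] + 1.5 * D[:, 40]          # one harmonic + one impulse
coeffs, r = matching_pursuit(x, D, n_atoms=5)
# Should recover the two planted atoms with a small residual.
print(np.flatnonzero(np.abs(coeffs) > 0.5), np.linalg.norm(r))
```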

  2. Time-varying singular value decomposition for periodic transient identification in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang

    2016-09-01

    For rotating machines, bearing defects generally manifest as periodic transient impulses in acquired signals. The extraction of transient features from signals has been a key issue for fault diagnosis. However, background noise reduces the identification performance of periodic faults in practice. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding window method. By applying singular value decomposition (SVD) to the signal under a sliding window, we can obtain a time-varying singular value matrix (TSVM). Each column in the TSVM is occupied by the singular values of the corresponding sliding window, and each row represents the intrinsic structure of the raw signal, namely the time-singular-value-sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.

  3. Comparative kinetic analysis on thermal degradation of some cephalosporins using TG and DSC data

    PubMed Central

    2013-01-01

    Background The thermal decomposition of cephalexin, cefadroxil and cefoperazone under non-isothermal conditions was studied using the TG and DSC methods. In the case of TG, a hyphenated technique including EGA was used. Results The kinetic analysis was performed using the TG and DSC data in air for the first step of the cephalosporins' decomposition at four heating rates. Both the TG and DSC data were processed according to an appropriate strategy with the following kinetic methods: Kissinger-Akahira-Sunose, Friedman, and NPK, in order to obtain realistic kinetic parameters even if the decomposition process is a complex one. The EGA data offer some valuable indications about a possible decomposition mechanism. The obtained data indicate a rather good agreement between the activation energy values obtained by different methods, whereas the EGA data and the chemical structures give a possible explanation of the observed differences in thermal stability. A complete kinetic analysis needs a data processing strategy using two or more methods, and the kinetic methods must also be applied to the different types of experimental data (TG and DSC). Conclusion The simultaneous use of DSC and TG data for the kinetic analysis, coupled with evolved gas analysis (EGA), provided a more complete picture of the degradation of the three cephalosporins. It was possible to estimate kinetic parameters by using three different kinetic methods, and this allowed us to compare the Ea values obtained from different experimental data, TG and DSC. The thermal degradation being a complex process, both the differential and integral methods based on the single-step hypothesis are inadequate for obtaining reliable kinetic parameters. Only the modified NPK method allowed an objective separation of the temperature and conversion influences on the reaction rate and, at the same time, ascertained the existence of two simultaneous steps. PMID:23594763

  4. Decomposition of Fuzzy Soft Sets with Finite Value Spaces

    PubMed Central

    Jun, Young Bae

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342

  5. Decomposition of fuzzy soft sets with finite value spaces.

    PubMed

    Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.

  6. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs, where the complexity is polynomial in the number of nodes and edges in the graph but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
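
    The dynamics of such a DP are easiest to see in the width-1 special case, maximum weighted independent set on a tree. A sketch with networkx (the INDDGO code handles general tree decompositions and is far more elaborate):

```python
import networkx as nx

def mwis_tree(T, weight="w"):
    """Maximum weighted independent set on a tree via dynamic
    programming, the width-1 special case of tree-decomposition DP:
    each node keeps the best subtree value with itself in or out."""
    root = next(iter(T.nodes))
    D = nx.bfs_tree(T, root)                  # orient edges away from root
    take, skip = {}, {}
    for v in reversed(list(nx.topological_sort(D))):   # leaves first
        take[v] = float(T.nodes[v][weight])
        skip[v] = 0.0
        for c in D.successors(v):
            take[v] += skip[c]                # v in the set: children out
            skip[v] += max(take[c], skip[c])  # v out: children free
    return max(take[root], skip[root])

# Path a-b-c-d with weights 3, 5, 4, 6: the optimum picks b and d.
T = nx.path_graph(4)
for v, w in zip(T.nodes, [3, 5, 4, 6]):
    T.nodes[v]["w"] = w
print(mwis_tree(T))   # 11
```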

  7. Tea polyphenols dominate the short-term tea (Camellia sinensis) leaf litter decomposition*

    PubMed Central

    Fan, Dong-mei; Fan, Kai; Yu, Cui-ping; Lu, Ya-ting; Wang, Xiao-chang

    2017-01-01

    Polyphenols are among the most important secondary metabolites, and affect the decomposition of litter and soil organic matter. This study aims to monitor the mass loss rate of tea leaf litter and its nutrient release pattern, and to investigate the role tea polyphenols played in this process. High-performance liquid chromatography (HPLC) and the classical litter bag method were used to simulate the decomposition process of tea leaf litter and track the changes occurring in major polyphenols over eight months. The release patterns of nitrogen, potassium, calcium, and magnesium were also determined. The decomposition pattern of tea leaf litter could be described by a two-phase decomposition model, and the polyphenol/N ratio effectively regulated the degradation process. Most of the catechins decreased dramatically within two months; gallic acid (GA), catechin gallate (CG), and gallocatechin (GC) were faintly detected, while others were outside the detection limits by the end of the experiment. These results demonstrated that tea polyphenols transformed quickly and that catechins had an effect on the individual conversion rates. The nutrient release pattern was different from that of other plants, which might be due to the existence of tea polyphenols. PMID:28124839

  8. Tea polyphenols dominate the short-term tea (Camellia sinensis) leaf litter decomposition.

    PubMed

    Fan, Dong-Mei; Fan, Kai; Yu, Cui-Ping; Lu, Ya-Ting; Wang, Xiao-Chang

    Polyphenols are among the most important secondary metabolites, and affect the decomposition of litter and soil organic matter. This study aims to monitor the mass loss rate of tea leaf litter and its nutrient release pattern, and to investigate the role tea polyphenols played in this process. High-performance liquid chromatography (HPLC) and the classical litter bag method were used to simulate the decomposition process of tea leaf litter and track the changes occurring in major polyphenols over eight months. The release patterns of nitrogen, potassium, calcium, and magnesium were also determined. The decomposition pattern of tea leaf litter could be described by a two-phase decomposition model, and the polyphenol/N ratio effectively regulated the degradation process. Most of the catechins decreased dramatically within two months; gallic acid (GA), catechin gallate (CG), and gallocatechin (GC) were faintly detected, while others were outside the detection limits by the end of the experiment. These results demonstrated that tea polyphenols transformed quickly and that catechins had an effect on the individual conversion rates. The nutrient release pattern was different from that of other plants, which might be due to the existence of tea polyphenols.

  9. Effect of body mass and clothing on decomposition of pig carcasses.

    PubMed

    Matuszewski, Szymon; Konwerski, Szymon; Frątczak, Katarzyna; Szafałowicz, Michał

    2014-11-01

    Carcass mass and carcass clothing are factors of potentially high forensic importance. In casework, corpses differ in mass and in the kind or extent of clothing; hence, a question arises whether methods for post-mortem interval estimation should take these differences into account. Unfortunately, the effects of carcass mass and clothing on specific processes in decomposition and related entomological phenomena are unclear. In this article, the simultaneous effects of these factors are analysed. The experiment followed a complete factorial block design with four levels of carcass mass (small carcasses 5-15 kg, medium carcasses 15.1-30 kg, medium/large carcasses 35-50 kg, large carcasses 55-70 kg) and two levels of carcass clothing (clothed and unclothed). Pig carcasses (N = 24) were grouped into three blocks, which were separated in time. Generally, carcass mass revealed significant and frequently large effects in almost all analyses, whereas carcass clothing had only a minor influence on some phenomena related to the advanced decay. Carcass mass affected particular gross processes in decomposition differently. Putrefaction was more efficient in larger carcasses, which manifested itself through an earlier onset and longer duration of bloating. On the other hand, active decay was less efficient in these carcasses, with a relatively low average rate, resulting in slower mass loss and a later onset of advanced decay. The average rate of active decay showed a significant, logarithmic increase with an increase in carcass mass, but only in those carcasses in which active decay was driven solely by larval blowflies. If blowfly-driven active decay was followed by active decay driven by larval Necrodes littoralis (Coleoptera: Silphidae), which was regularly found in medium/large and large carcasses, the average rate showed only a slight and insignificant increase with an increase in carcass mass. These results indicate that the lower efficiency of active decay in larger carcasses is a consequence of a multi-guild and competition-related pattern of this process. The pattern of mass loss in large and medium/large carcasses was not sigmoidal, but rather exponential. The overall rate of decomposition was strongly, but not linearly, related to carcass mass. In the low-mass range, decomposition rate increased with an increase in mass; then, at about 30 kg, there was a distinct decrease in rate, and again at about 50 kg the rate slightly increased. Until about 100 accumulated degree-days, larger carcasses gained higher total body scores than smaller carcasses. Afterwards, the pattern was reversed; moreover, differences between classes of carcasses enlarged with the progress of decomposition. In conclusion, the current results demonstrate that cadaver mass is a factor of key importance for decomposition, and as such, it should be taken into account by decomposition-related methods for post-mortem interval estimation.

  10. Analysing Institutions Interdisciplinarity by Extensive Use of Rao-Stirling Diversity Index.

    PubMed

    Cassi, Lorenzo; Champeimont, Raphaël; Mescheba, Wilfriedo; de Turckheim, Élisabeth

    2017-01-01

    This paper shows how the Rao-Stirling diversity index may be extensively used for positioning and comparing institutions' interdisciplinary practices. Two decompositions of this index make it possible to explore different components of the diversity of the cited references in a corpus of publications. The paper aims at demonstrating how these bibliometric tools can be used for comparing institutions in a research field by highlighting collaboration orientations and institutions' strategies. To make the method available and easy to use for indicator users, this paper first recalls a previous result on the decomposition of the Rao-Stirling index into multidisciplinarity and interdisciplinarity components, then proposes a new decomposition to further explore the profile of research collaborations and finally presents an application to Neuroscience research in French universities.
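
    For readers unfamiliar with the index: the Rao-Stirling diversity of a reference corpus is Δ = Σ_{i≠j} p_i p_j d_ij, where p_i is the share of references in discipline i and d_ij a dissimilarity between disciplines i and j. A minimal sketch with hypothetical shares and distances (not data from the paper):

```python
import numpy as np

def rao_stirling(p, d):
    """Rao-Stirling diversity: sum over i != j of p_i * p_j * d_ij."""
    p = np.asarray(p, dtype=float)
    d = np.asarray(d, dtype=float)
    off_diag = ~np.eye(len(p), dtype=bool)
    return float(np.sum(np.outer(p, p)[off_diag] * d[off_diag]))

# Hypothetical corpus citing three disciplines with a toy distance matrix.
p = [0.5, 0.3, 0.2]
d = np.array([[0.0, 0.4, 0.9],
              [0.4, 0.0, 0.7],
              [0.9, 0.7, 0.0]])
print(f"Rao-Stirling diversity: {rao_stirling(p, d):.3f}")
```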

  11. Analysing Institutions Interdisciplinarity by Extensive Use of Rao-Stirling Diversity Index

    PubMed Central

    Cassi, Lorenzo; Champeimont, Raphaël; Mescheba, Wilfriedo

    2017-01-01

    This paper shows how the Rao-Stirling diversity index may be extensively used for positioning and comparing institutions' interdisciplinary practices. Two decompositions of this index make it possible to explore different components of the diversity of the cited references in a corpus of publications. The paper aims at demonstrating how these bibliometric tools can be used for comparing institutions in a research field by highlighting collaboration orientations and institutions' strategies. To make the method available and easy to use for indicator users, this paper first recalls a previous result on the decomposition of the Rao-Stirling index into multidisciplinarity and interdisciplinarity components, then proposes a new decomposition to further explore the profile of research collaborations and finally presents an application to Neuroscience research in French universities. PMID:28114382

  12. High performance computation of radiative transfer equation using the finite element method

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.

    2018-05-01

    This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinates method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering medium. Two very different parallelization methods, angular and spatial decomposition, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.

  13. Nanorods, nanospheres, nanocubes: Synthesis, characterization and catalytic activity of nanoferrites of Mn, Co, Ni, Part-89

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, Supriya; Srivastava, Pratibha; Singh, Gurdip, E-mail: gsingh4us@yahoo.com

    2013-02-15

    Graphical abstract: Prepared nanoferrites were characterized by FE-SEM and bright-field TEM micrographs. The catalytic effect of these nanoferrites was evaluated on the thermal decomposition of ammonium perchlorate using TG and TG–DSC techniques. The kinetics of thermal decomposition of AP was evaluated using isothermal TG data by model-fitting as well as isoconversional methods. Highlights: ► Synthesis of ferrite nanostructures (∼20.0 nm) by a wet-chemical method under different synthetic conditions. ► Characterization using XRD, FE-SEM, EDS, TEM, HRTEM and SAED patterns. ► Catalytic activity of ferrite nanostructures on AP thermal decomposition by thermal techniques. ► Burning rate measurements of CSPs with ferrite nanostructures. ► Kinetics of thermal decomposition of AP + nanoferrites. -- Abstract: In this paper, nanoferrites of Mn, Co and Ni were synthesized by a wet chemical method and characterized by X-ray diffraction (XRD), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectra (EDS), transmission electron microscopy (TEM) and high-resolution transmission electron microscopy (HR-TEM). Their catalytic activity was investigated on the thermal decomposition of ammonium perchlorate (AP) and composite solid propellants (CSPs) using thermogravimetry (TG), TG coupled with differential scanning calorimetry (TG–DSC) and ignition delay measurements. Kinetics of the thermal decomposition of AP + nanoferrites have also been investigated using isoconversional and model-fitting approaches applied to isothermal TG decomposition data. The burning rate of CSPs was considerably enhanced by these nanoferrites. Addition of nanoferrites to AP shifted the high-temperature decomposition peak toward lower temperature. All these studies reveal that ferrite nanorods show catalytic activity superior to that of nanospheres and nanocubes.

  14. Using Rényi parameter to improve the predictive power of singular value decomposition entropy on stock market

    NASA Astrophysics Data System (ADS)

    Jiang, Jiaqi; Gu, Rongbao

    2016-04-01

    This paper generalizes the traditional singular value decomposition entropy by incorporating the order q of the Rényi entropy. We analyze the predictive power of the entropy based on the trajectory matrix, using Shanghai Composite Index (SCI) and Dow Jones Index (DJI) data in both static and dynamic tests. In the static test on the SCI, Granger causality tests are all significant regardless of the order selected, but the entropy shows little predictability for the American stock market. In the dynamic test, we find that the predictive power can be significantly improved for the SCI by our generalized method, but not for the DJI. This suggests that noise and errors affect the SCI more frequently than the DJI. Finally, results obtained using different sliding-window lengths corroborate this finding.
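
    A minimal sketch of the underlying computation, under stated assumptions: a Hankel (trajectory) matrix is built from the series, its singular values are normalized into a distribution, and the Rényi entropy of order q is taken. The embedding dimension and the random-walk test series below are hypothetical choices, not those of the paper.

```python
import numpy as np

def svd_renyi_entropy(x, embed_dim=10, q=2.0):
    """Renyi entropy of the normalized singular-value spectrum of a
    trajectory (Hankel) matrix built from time series x."""
    x = np.asarray(x, dtype=float)
    traj = np.lib.stride_tricks.sliding_window_view(x, embed_dim)
    s = np.linalg.svd(traj, compute_uv=False)
    p = s / s.sum()                      # normalize singular values
    p = p[p > 0]
    if np.isclose(q, 1.0):               # q -> 1 recovers Shannon entropy
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** q)) / (1.0 - q))

rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(500))  # hypothetical price-like series
for q in (0.5, 1.0, 2.0):
    print(f"q={q}: H={svd_renyi_entropy(x, q=q):.4f}")
```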

  15. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As is well known, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an n-by-1 or 1-by-n array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. Results indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as homes and offices. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.

  16. Blurred Palmprint Recognition Based on Stable-Feature Extraction Using a Vese–Osher Decomposition Model

    PubMed Central

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of defocus. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image, which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328

  17. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of defocus. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image, which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition.

  18. Organic Carbon Sorption and Decomposition in Selected Global Soils

    DOE Data Explorer

    Jagadamma, S.; Mayes, M. A.; Steinweg, J. M.; Wang, G.; Post, W. M.

    2014-01-01

    This data set reports the results of lab-scale experiments conducted to investigate the dynamics of organic carbon (C) decomposition in soils from temperate, tropical, arctic, and sub-arctic environments. Results were used to test the newly developed Microbial-ENzyme-Mediated Decomposition (MEND) model of soil C decomposition.

  19. Measuring Glial Metabolism in Repetitive Brain Trauma and Alzheimer’s Disease

    DTIC Science & Technology

    2016-09-01

    Six methods: singular value decomposition (SVD), wavelet, sliding window, sliding window with Gaussian weighting, spline and spectral improvements… comparison of a range of different denoising methods for dynamic MRS. Six denoising methods were considered: singular value decomposition (SVD), wavelet… project by improving the software required for the data analysis by developing six different denoising methods. He also assisted with the testing

  20. Orbital-Optimized MP3 and MP2.5 with Density-Fitting and Cholesky Decomposition Approximations.

    PubMed

    Bozkaya, Uğur

    2016-03-08

    Efficient implementations of the orbital-optimized MP3 and MP2.5 methods with the density-fitting (DF-OMP3 and DF-OMP2.5) and Cholesky decomposition (CD-OMP3 and CD-OMP2.5) approaches are presented. The DF/CD-OMP3 and DF/CD-OMP2.5 methods are applied to a set of alkanes to compare the computational cost with the conventional orbital-optimized MP3 (OMP3) [Bozkaya J. Chem. Phys. 2011, 135, 224103] and the orbital-optimized MP2.5 (OMP2.5) [Bozkaya and Sherrill J. Chem. Phys. 2014, 141, 204105]. Our results demonstrate that the DF-OMP3 and DF-OMP2.5 methods provide considerably lower computational costs than OMP3 and OMP2.5. Further application results show that the orbital-optimized methods are very helpful for the study of open-shell noncovalent interactions, aromatic bond dissociation energies, and hydrogen transfer reactions. We conclude that the DF-OMP3 and DF-OMP2.5 methods are very promising for molecular systems with challenging electronic structures.

  1. Preparation, non-isothermal decomposition kinetics, heat capacity and adiabatic time-to-explosion of NTO·DNAZ.

    PubMed

    Ma, Haixia; Yan, Biao; Li, Zhaona; Guan, Yulei; Song, Jirong; Xu, Kangzhen; Hu, Rongzu

    2009-09-30

    NTO·DNAZ was prepared by mixing 3,3-dinitroazetidine (DNAZ) and 3-nitro-1,2,4-triazol-5-one (NTO) in ethanol solution. The thermal behavior of the title compound was studied under non-isothermal conditions by DSC and TG/DTG methods. The kinetic parameters were obtained from analysis of the DSC and TG/DTG curves by the Kissinger method, the Ozawa method, the differential method and the integral method. The main exothermic decomposition reaction of NTO·DNAZ is classified as a chemical reaction, with kinetic parameters Ea = 149.68 kJ mol⁻¹ and A = 10^15.81 s⁻¹. The specific heat capacity of the title compound was determined with the continuous Cp mode of a microcalorimeter. The standard molar heat capacity of NTO·DNAZ was 352.56 J mol⁻¹ K⁻¹ at 298.15 K. Using the relationship between Cp and T and the thermal decomposition parameters, the time from initiation of thermal decomposition to thermal explosion (adiabatic time-to-explosion) was obtained.
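
    Since the abstract invokes the Kissinger method, here is a minimal sketch of that calculation under stated assumptions: ln(β/Tp²) is regressed on 1/Tp, the slope yields Ea and the intercept yields A. The heating rates and peak temperatures below are hypothetical, not the paper's data.

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def kissinger(betas_k_per_min, peak_temps_k):
    """Kissinger method: ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp).
    A straight-line fit against 1/Tp yields Ea (from the slope)
    and A (from the intercept, in the time units of beta)."""
    beta = np.asarray(betas_k_per_min, dtype=float)
    tp = np.asarray(peak_temps_k, dtype=float)
    slope, intercept = np.polyfit(1.0 / tp, np.log(beta / tp**2), 1)
    ea = -slope * R                     # J/mol
    a = (ea / R) * np.exp(intercept)    # min^-1 here
    return ea, a

# Hypothetical DSC peak temperatures at four heating rates (K/min).
betas = [2.5, 5.0, 10.0, 20.0]
peaks = [490.0, 497.5, 505.0, 513.0]
ea, a = kissinger(betas, peaks)
print(f"Ea = {ea/1000:.1f} kJ/mol, A = {a:.3e} min^-1")
```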

  2. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques to electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by the Newton-Raphson load flow method. These independent local problems produce results for voltage and power, which are then passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine whether any correction is needed in the local problems. The coordinator problem is also solved by an iterative method, much like the local problems, namely the Newton-Raphson method. Therefore, each iteration at the coordination level results in new values for the local problems, and the local problems must be solved again along with the coordinator problem until convergence conditions are met.

  3. Peak tree: a new tool for multiscale hierarchical representation and peak detection of mass spectrometry data.

    PubMed

    Zhang, Peng; Li, Houqiang; Wang, Honghui; Wong, Stephen T C; Zhou, Xiaobo

    2011-01-01

    Peak detection is one of the most important steps in mass spectrometry (MS) analysis, but the detection result is greatly affected by severe spectrum variations. Unfortunately, most current peak detection methods are neither flexible enough to revise false detection results nor robust enough to resist spectrum variations. To improve flexibility, we introduce the peak tree to represent the peak information in MS spectra. Each tree node is a peak judgment over a range of scales, and each tree decomposition, as a set of nodes, is a candidate peak detection result. To improve robustness, we combine peak detection and common peak alignment into a closed-loop framework, which finds the optimal decomposition via both peak intensity and common peak information. The common peak information is derived, and iteratively refined, from density clustering of the latest peak detection result. Finally, we present an improved ant colony optimization biomarker selection method to build a whole MS analysis system. Experiments show that our peak detection method resists spectrum variations better and provides higher sensitivity and lower false detection rates than conventional methods. The benefits of our peak-tree-based system for MS disease analysis are also demonstrated on real SELDI data.

  4. Thermal decomposition in thermal desorption instruments: importance of thermogram measurements for analysis of secondary organic aerosol

    NASA Astrophysics Data System (ADS)

    Stark, H.; Yatavelli, R. L. N.; Thompson, S.; Kang, H.; Krechmer, J. E.; Kimmel, J.; Palm, B. B.; Hu, W.; Hayes, P.; Day, D. A.; Campuzano Jost, P.; Ye, P.; Canagaratna, M. R.; Jayne, J. T.; Worsnop, D. R.; Jimenez, J. L.

    2017-12-01

    Understanding the chemical composition of secondary organic aerosol (SOA) is crucial for explaining sources and fate of this important aerosol class in tropospheric chemistry. Further, determining SOA volatility is key in predicting its atmospheric lifetime and fate, due to partitioning from and to the gas phase. We present three analysis approaches to determine SOA volatility distributions from two field campaigns in areas with strong biogenic emissions: a ponderosa pine forest in Colorado, USA, from the BEACHON-RoMBAS campaign, and a mixed forest in Alabama, USA, from the SOAS campaign. We used a high-resolution time-of-flight chemical ionization mass spectrometer (CIMS) for both campaigns, equipped with a micro-orifice volatilization impactor (MOVI) inlet for BEACHON and a filter inlet for gases and aerosols (FIGAERO) for SOAS. These inlets allow near-simultaneous analysis of particle- and gas-phase species by the CIMS. While gas-phase species are directly measured without heating, particles undergo thermal desorption prior to analysis. Volatility distributions can be estimated in three ways: (1) analysis of the thermograms (signal vs. temperature); (2) via partitioning theory using the gas- and particle-phase measurements; (3) from measured chemical formulas via a group contribution model. Comparison of the SOA volatility distributions from the three methods shows large discrepancies for both campaigns. Results from the thermogram method are the most consistent of the methods when compared with independent AMS-thermal denuder measurements. The volatility distributions estimated from partitioning measurements are very narrow, likely due to signal-to-noise limits in the measurements. The discrepancy between the formula and the thermogram methods indicates large-scale thermal decomposition of the SOA species. We will also show results of citric acid thermal decomposition, where, in addition to the mass spectra, measurements of CO, CO2 and H2O were made, showing thermal decomposition of up to 65% of the citric acid molecules.

  5. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast monthly outpatient visits. Monthly outpatient visit data from January 2005 to December 2013 serve as the original time series. The original time series is first decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. A three-layer back-propagation artificial neural network is then constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasts of the intrinsic mode functions is taken as the ultimate forecast value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194

  6. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    PubMed

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast monthly outpatient visits. Monthly outpatient visit data from January 2005 to December 2013 serve as the original time series. The original time series is first decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. A three-layer back-propagation artificial neural network is then constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasts of the intrinsic mode functions is taken as the ultimate forecast value. Simulation indicates that the proposed method attains a better performance index than the other four methods.
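
    A minimal sketch of the decompose-forecast-recombine pipeline described above. It assumes the third-party PyEMD package (PyPI: EMD-signal) for the decomposition and, for brevity, substitutes a plain least-squares autoregression for the paper's PSO-optimized back-propagation network; the synthetic monthly series is hypothetical.

```python
import numpy as np
from PyEMD import EMD   # third-party package, pip install EMD-signal

def ar_forecast(series, order=3, steps=1):
    """Fit an AR(order) model by least squares and forecast `steps` ahead."""
    x = np.asarray(series, dtype=float)
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    coeffs, *_ = np.linalg.lstsq(rows, x[order:], rcond=None)
    history = list(x)
    for _ in range(steps):
        history.append(float(np.dot(coeffs, history[-order:])))
    return history[len(x):]

rng = np.random.default_rng(1)
t = np.arange(108)  # hypothetical monthly counts with trend and seasonality
visits = 500 + 5 * t + 40 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size)

imfs = EMD().emd(visits)                 # decompose into IMFs plus residue
# Forecast each IMF separately, then superpose the component forecasts.
forecast = sum(ar_forecast(imf, steps=1)[0] for imf in imfs)
print(f"One-step-ahead forecast: {forecast:.1f}")
```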

  7. Simultaneous Tensor Decomposition and Completion Using Factor Priors.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting Candy; Liao, Hong-Yuan Mark

    2013-08-27

    Tensor completion, which is a high-order extension of matrix completion, has generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called Simultaneous Tensor Decomposition and Completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data, and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
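
    The Tucker model at the core of STDC can be illustrated with the third-party TensorLy library (pip install tensorly); this sketch factors a small synthetic tensor and reconstructs it, and does not implement the paper's rank-minimization or factor priors.

```python
import numpy as np
import tensorly as tl
from tensorly.decomposition import tucker

rng = np.random.default_rng(0)
tensor = tl.tensor(rng.standard_normal((8, 8, 8)))

core, factors = tucker(tensor, rank=[3, 3, 3])    # core G and factor matrices
approx = tl.tucker_to_tensor((core, factors))     # reconstruct from the model
err = tl.norm(tensor - approx) / tl.norm(tensor)
print(f"Relative reconstruction error: {err:.3f}")
```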

  8. Molecular mechanism of metal-independent decomposition of lipid hydroperoxide 13-HPODE by halogenated quinoid carcinogens.

    PubMed

    Qin, Hao; Huang, Chun-Hua; Mao, Li; Xia, Hai-Ying; Kalyanaraman, Balaraman; Shao, Jie; Shan, Guo-Qiang; Zhu, Ben-Zhan

    2013-10-01

    Halogenated quinones are a class of carcinogenic intermediates and newly identified chlorination disinfection by-products in drinking water. 13-Hydroperoxy-9,11-octadecadienoic acid (13-HPODE) is the most extensively studied endogenous lipid hydroperoxide. Although it is well known that the decomposition of 13-HPODE can be catalyzed by transition metal ions, it is not clear whether halogenated quinones could enhance its decomposition independent of metal ions and, if so, what the unique characteristics and similarities are. Here we show that 2,5-dichloro-1,4-benzoquinone (DCBQ) could markedly enhance the decomposition of 13-HPODE and formation of reactive lipid alkyl radicals such as pentyl and 7-carboxyheptyl radicals, and the genotoxic 4-hydroxy-2-nonenal (HNE), through the complementary application of ESR spin trapping, HPLC-MS, and GC-MS methods. Interestingly, two chloroquinone-lipid alkoxyl conjugates were also detected and identified from the reaction between DCBQ and 13-HPODE. Analogous results were observed with other halogenated quinones. This represents the first report that halogenated quinoid carcinogens can enhance the decomposition of the endogenous lipid hydroperoxide 13-HPODE and formation of reactive lipid alkyl radicals and genotoxic HNE via a novel metal-independent nucleophilic substitution coupled with homolytic decomposition mechanism, which may partly explain their potential genotoxicity and carcinogenicity. Copyright © 2013 Elsevier Inc. All rights reserved.

  9. Oxidative decomposition of propylene carbonate in lithium ion batteries: a DFT study.

    PubMed

    Leggesse, Ermias Girma; Lin, Rao Tung; Teng, Tsung-Fan; Chen, Chi-Liang; Jiang, Jyh-Chiang

    2013-08-22

    This paper reports an in-depth mechanistic study on the oxidative decomposition of propylene carbonate in the presence of lithium salts (LiClO4, LiBF4, LiPF6, and LiAsF6) with the aid of density functional theory calculations at the B3LYP/6-311++G(d,p) level of theory. The solvent effect is accounted for by using the implicit solvation model with density method. Moreover, the rate constants for the decompositions of propylene carbonate have been investigated by using transition-state theory. The shortening of the original carbonyl C-O bond and a lengthening of the adjacent ethereal C-O bonds of propylene carbonate, which occurs as a result of oxidation, leads to the formation of acetone radical and CO2 as a primary oxidative decomposition product. The termination of the primary radical generates polycarbonate, acetone, diketone, 2-(ethan-1-ylium-1-yl)-4-methyl-1,3-dioxolan-4-ylium, and CO2. The thermodynamic and kinetic data show that the major oxidative decomposition products of propylene carbonate are independent of the type of lithium salt. However, the decomposition rate constants of propylene carbonate are highly affected by the lithium salt type. On the basis of the rate constant calculations using transition-state theory, the order of gas volume generation is: [PC-ClO4](-) > [PC-BF4](-) > [PC-AsF6](-) > [PC-PF6](-).

  10. Numeric Modified Adomian Decomposition Method for Power System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    This paper investigates the applicability of the numeric Wazwaz El-Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique and serves as a numerical approximation method for the solution of nonlinear ordinary differential equations; the nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper, WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach, and several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
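
    For readers unfamiliar with ADM itself, here is a minimal symbolic sketch of the classical (unmodified) Adomian decomposition for y' = y², y(0) = 1, whose exact solution is 1/(1 − t); the Adomian polynomials are generated from the standard λ-parameterization. This is illustrative only and is not the WES-ADM scheme of the paper.

```python
import sympy as sp

t, lam = sp.symbols("t lambda")
n_terms = 5
y = [sp.Integer(1)]                      # y_0 = y(0) = 1

for n in range(1, n_terms):
    # Adomian polynomial A_{n-1} for the nonlinearity N(y) = y^2:
    # A_k = (1/k!) d^k/dlam^k N(sum lam^j y_j) at lam = 0
    series = sum(lam**k * y[k] for k in range(n))
    a_prev = sp.diff(series**2, lam, n - 1).subs(lam, 0) / sp.factorial(n - 1)
    y.append(sp.integrate(a_prev, (t, 0, t)))   # y_n = integral of A_{n-1}

approx = sp.expand(sum(y))
print(approx)   # 1 + t + t**2 + t**3 + t**4: the Taylor series of 1/(1 - t)
```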

  11. Numerical prediction of the energy efficiency of the three-dimensional fish school using the discretized Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Lin, Yinwei

    2018-06-01

    A three-dimensional model of a fish school, solved by a modified Adomian decomposition method (ADM) discretized with the finite difference method, is proposed. To our knowledge, few studies of fish schools are documented, owing to the expense of numerical computing and the tedium of three-dimensional data analysis. Here, we propose a simple model based on the Adomian decomposition method to estimate the energy-saving efficiency of the flow motion of the fish school. First, analytic solutions of the Navier-Stokes equations are used for numerical validation. The influence of the distance between two side-by-side fish on the energy efficiency of the school is then studied. In addition, a complete error analysis for the method is presented.

  12. Forward Looking Radar Imaging by Truncated Singular Value Decomposition and Its Application for Adverse Weather Aircraft Landing.

    PubMed

    Huang, Yulin; Zha, Yuebo; Wang, Yue; Yang, Jianyu

    2015-06-18

    Forward looking radar imaging is a practical and challenging problem for the adverse weather aircraft landing industry. Deconvolution methods can realize forward looking imaging, but they often amplify noise in the radar image. In this paper, a forward looking radar imaging method based on deconvolution is presented for adverse weather aircraft landing. We first present the theoretical background of the forward looking radar imaging task and its application to aircraft landing. We then convert the imaging task into a corresponding deconvolution problem, which is solved in the framework of algebraic theory using the truncated singular value decomposition method. The key issue of selecting the truncation parameter is addressed using a generalized cross-validation approach. Simulation and experimental results demonstrate that the proposed method is effective in enhancing angular resolution while suppressing noise amplification in forward looking radar imaging.
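
    A minimal sketch of truncated-SVD deconvolution on a hypothetical 1-D problem: a Gaussian blurring matrix stands in for the radar's antenna-pattern convolution, and only the k largest singular values are inverted. The paper's generalized cross-validation step for choosing k is not implemented; three values of k are simply compared.

```python
import numpy as np

def tsvd_solve(a, b, k):
    """Solve a x ~= b keeping only the k largest singular values,
    suppressing the noise amplification caused by the small ones."""
    u, s, vt = np.linalg.svd(a, full_matrices=False)
    s_inv = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return vt.T @ (s_inv * (u.T @ b))

rng = np.random.default_rng(2)
n = 64
# Gaussian convolution matrix blurs a two-spike scene; noise is added.
kernel = np.exp(-0.5 * ((np.arange(n)[:, None] - np.arange(n)[None, :]) / 2.0) ** 2)
kernel /= kernel.sum(axis=1, keepdims=True)
x_true = np.zeros(n)
x_true[20], x_true[40] = 1.0, 0.7
b = kernel @ x_true + rng.normal(0, 1e-3, n)

for k in (5, 15, 40):
    err = np.linalg.norm(tsvd_solve(kernel, b, k) - x_true)
    print(f"k={k:2d}: reconstruction error {err:.3f}")
```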

  13. A Framework for Parallel Unstructured Grid Generation for Complex Aerodynamic Simulations

    NASA Technical Reports Server (NTRS)

    Zagaris, George; Pirzadeh, Shahyar Z.; Chrisochoides, Nikos

    2009-01-01

    A framework for parallel unstructured grid generation targeting both shared memory multi-processors and distributed memory architectures is presented. The two fundamental building-blocks of the framework consist of: (1) the Advancing-Partition (AP) method used for domain decomposition and (2) the Advancing Front (AF) method used for mesh generation. Starting from the surface mesh of the computational domain, the AP method is applied recursively to generate a set of sub-domains. Next, the sub-domains are meshed in parallel using the AF method. The recursive nature of domain decomposition naturally maps to a divide-and-conquer algorithm which exhibits inherent parallelism. For the parallel implementation, the Master/Worker pattern is employed to dynamically balance the varying workloads of each task on the set of available CPUs. Performance results by this approach are presented and discussed in detail as well as future work and improvements.

  14. Reducing variation in decomposition odour profiling using comprehensive two-dimensional gas chromatography.

    PubMed

    Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L

    2015-01-01

    Challenges in decomposition odour profiling have led to variation in the documented odour profile by different research groups worldwide. Background subtraction and use of controls are important considerations given the variation introduced by decomposition studies conducted in different geographical environments. The collection of volatile organic compounds (VOCs) from soil beneath decomposing remains is challenging due to the high levels of inherent soil VOCs, further confounded by the use of highly sensitive instrumentation. This study presents a method that provides suitable chromatographic resolution for profiling decomposition odour in soil by comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry using appropriate controls and field blanks. Logarithmic transformation and t-testing of compounds permitted the generation of a compound list of decomposition VOCs in soil. Principal component analysis demonstrated the improved discrimination between experimental and control soil, verifying the value of the data handling method. Data handling procedures have not been well documented in this field and standardisation would thereby reduce misidentification of VOCs present in the surrounding environment as decomposition byproducts. Uniformity of data handling and instrumental procedures will reduce analytical variation, increasing confidence in the future when investigating the effect of taphonomic variables on the decomposition VOC profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Parallelization of PANDA discrete ordinates code using spatial decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Humbert, P.

    2006-07-01

    We present the parallel method, based on spatial domain decomposition, implemented in the 2D and 3D versions of the discrete ordinates code PANDA. The spatial mesh is orthogonal and the spatial domain decomposition is Cartesian. For 3D problems a 3D Cartesian domain topology is created and the parallel method is based on a domain diagonal-plane ordered sweep algorithm. The parallel efficiency of the method is improved by pipelining directions and octants. The implementation of the algorithm is straightforward using MPI blocking point-to-point communications. The efficiency of the method is illustrated by an application to the 3D-Ext C5G7 benchmark of the OECD/NEA. (authors)

  16. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
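
    A minimal sketch of the surrogate-plus-Latin-hypercube workflow described above, under stated assumptions: SciPy's qmc module (scipy >= 1.7) supplies the LHS design, and a hypothetical three-parameter quadratic response surface stands in for the foam model's surrogate; the bounds and coefficients are illustrative.

```python
import numpy as np
from scipy.stats import qmc

def surrogate(x):
    """Hypothetical quadratic response surface in place of the foam model."""
    return 1.0 + x @ np.array([0.5, -0.2, 0.1]) + 0.05 * np.sum(x**2, axis=1)

sampler = qmc.LatinHypercube(d=3, seed=0)
unit = sampler.random(n=1000)                       # LHS design in [0, 1)^3
# Scale each input to its own (hypothetical) uncertainty range.
x = qmc.scale(unit, l_bounds=[0.8, 10.0, 0.1], u_bounds=[1.2, 30.0, 0.3])

y = surrogate(x)   # propagate the design through the surrogate
print(f"mean = {y.mean():.4f}, std = {y.std(ddof=1):.4f}")
```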

  17. Method for improved decomposition of metal nitrate solutions

    DOEpatents

    Haas, P.A.; Stines, W.B.

    1981-01-21

    A method for co-conversion of aqueous solutions of one or more heavy metal nitrates is described, wherein thermal decomposition within a temperature range of about 300 to 800°C is carried out in the presence of about 50 to 500% molar concentration of ammonium nitrate to total metal.

  18. Method for improved decomposition of metal nitrate solutions

    DOEpatents

    Haas, Paul A.; Stines, William B.

    1983-10-11

    A method for co-conversion of aqueous solutions of one or more heavy metal nitrates wherein thermal decomposition within a temperature range of about 300° to 800°C is carried out in the presence of about 50 to 500% molar concentration of ammonium nitrate to total metal.

  19. Method of forming semiconducting amorphous silicon films from the thermal decomposition of fluorohydridodisilanes

    DOEpatents

    Sharp, Kenneth G.; D'Errico, John J.

    1988-01-01

    The invention relates to a method of forming amorphous, photoconductive, and semiconductive silicon films on a substrate by the vapor phase thermal decomposition of a fluorohydridodisilane or a mixture of fluorohydridodisilanes. The invention is useful for the protection of surfaces including electronic devices.

  20. On Partial Fraction Decompositions by Repeated Polynomial Divisions

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2017-01-01

    We present a method for finding partial fraction decompositions of rational functions with linear or quadratic factors in the denominators by means of repeated polynomial divisions. This method does not involve differentiation or solving linear equations for obtaining the unknown partial fraction coefficients, which is very suitable for either…
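
    For context, the target of such a technique is an ordinary partial fraction decomposition. The sketch below uses SymPy's apart, which applies its own algorithm rather than the repeated-division method of the paper, simply to show the kind of decomposition being produced; the rational function is an arbitrary example.

```python
import sympy as sp

x = sp.symbols("x")
expr = (3 * x + 5) / ((x - 1) * (x**2 + x + 1))
# Expected: 8/(3*(x - 1)) - (8*x + 7)/(3*(x**2 + x + 1))
print(sp.apart(expr, x))
```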

  1. Decomposing Achievement Gaps among OECD Countries

    ERIC Educational Resources Information Center

    Zhang, Liang; Lee, Kristen A.

    2011-01-01

    In this study, we use decomposition methods on PISA 2006 data to compare student academic performance across OECD countries. We first establish an empirical model to explain the variation in academic performance across individuals, and then use the Oaxaca-Blinder decomposition method to decompose the achievement gap between each of the OECD…
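
    A minimal sketch of the two-fold Oaxaca-Blinder decomposition on hypothetical data: the mean outcome gap between groups A and B is split exactly into a part explained by covariate differences (valued at group B's coefficients) and an unexplained coefficient part. The single covariate and sample sizes are illustrative, not PISA data.

```python
import numpy as np

def oaxaca_blinder(x_a, y_a, x_b, y_b):
    """Two-fold decomposition of the mean gap y_a - y_b into an 'explained'
    part (covariate differences at group B coefficients) and an
    'unexplained' part (coefficient differences at group A means)."""
    add1 = lambda x: np.column_stack([np.ones(len(x)), x])  # add intercept
    beta_a, *_ = np.linalg.lstsq(add1(x_a), y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(add1(x_b), y_b, rcond=None)
    mean_a, mean_b = add1(x_a).mean(axis=0), add1(x_b).mean(axis=0)
    explained = (mean_a - mean_b) @ beta_b
    unexplained = mean_a @ (beta_a - beta_b)
    return explained, unexplained

rng = np.random.default_rng(3)
x_a = rng.normal(1.0, 1.0, 400)
y_a = 2.0 + 1.5 * x_a + rng.normal(0, 1, 400)
x_b = rng.normal(0.5, 1.0, 400)
y_b = 1.5 + 1.2 * x_b + rng.normal(0, 1, 400)

explained, unexplained = oaxaca_blinder(x_a, y_a, x_b, y_b)
print(f"gap = {y_a.mean() - y_b.mean():.3f} "
      f"= explained {explained:.3f} + unexplained {unexplained:.3f}")
```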

  2. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are among the most challenging tasks in the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. Empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localizations. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559

  3. A Dual Super-Element Domain Decomposition Approach for Parallel Nonlinear Finite Element Analysis

    NASA Astrophysics Data System (ADS)

    Jokhio, G. A.; Izzuddin, B. A.

    2015-05-01

    This article presents a new domain decomposition method for nonlinear finite element analysis introducing the concept of dual partition super-elements. The method extends ideas from the displacement frame method and is ideally suited for parallel nonlinear static/dynamic analysis of structural systems. In the new method, domain decomposition is realized by replacing one or more subdomains in a "parent system," each with a placeholder super-element, where the subdomains are processed separately as "child partitions," each wrapped by a dual super-element along the partition boundary. The analysis of the overall system, including the satisfaction of equilibrium and compatibility at all partition boundaries, is realized through direct communication between all pairs of placeholder and dual super-elements. The proposed method has particular advantages for matrix solution methods based on the frontal scheme, and can be readily implemented for existing finite element analysis programs to achieve parallelization on distributed memory systems with minimal intervention, thus overcoming memory bottlenecks typically faced in the analysis of large-scale problems. Several examples are presented in this article which demonstrate the computational benefits of the proposed parallel domain decomposition approach and its applicability to the nonlinear structural analysis of realistic structural systems.

  4. Efficient Method for the Determination of the Activation Energy of the Iodide-Catalyzed Decomposition of Hydrogen Peroxide

    ERIC Educational Resources Information Center

    Sweeney, William; Lee, James; Abid, Nauman; DeMeo, Stephen

    2014-01-01

    An experiment is described that determines the activation energy (Ea) of the iodide-catalyzed decomposition reaction of hydrogen peroxide in a much more efficient manner than previously reported in the literature. Hydrogen peroxide, spontaneously or with a catalyst, decomposes to oxygen and water. Because the decomposition reaction is…
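
    Where such an experiment measures rate constants at two (or more) temperatures, the activation energy follows from the Arrhenius relation. A minimal two-point sketch with hypothetical rate constants, not values from the article:

```python
import numpy as np

R = 8.314  # gas constant, J mol^-1 K^-1

def activation_energy(k1, t1_k, k2, t2_k):
    """Two-point Arrhenius estimate: Ea = R * ln(k2/k1) / (1/T1 - 1/T2)."""
    return R * np.log(k2 / k1) / (1.0 / t1_k - 1.0 / t2_k)

# Hypothetical rate constants for iodide-catalyzed H2O2 decomposition.
ea = activation_energy(k1=0.012, t1_k=293.15, k2=0.031, t2_k=308.15)
print(f"Ea = {ea/1000:.1f} kJ/mol")
```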

  5. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    NASA Astrophysics Data System (ADS)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main approaches to gear fault diagnosis: model-based gear dynamic analysis and signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. First, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Second, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on a dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of tooth cracking is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method that combines the LOD with the analytical-FE approach is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD using gear crack fault vibration signals. The analysis results indicate that the proposed method is effective and feasible for tooth crack stiffness calculation and gear tooth crack fault diagnosis.

  6. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    NASA Astrophysics Data System (ADS)

    Li, Chengwei; Zhan, Liwei

    2015-12-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter simulated and friction signals. The friction signal between an airplane tire and the runway, recorded during a simulated airplane touchdown, features spikes of various amplitudes plus noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods.

  7. Examining responses of ecosystem carbon exchange to environmental changes using a particle filtering method

    NASA Astrophysics Data System (ADS)

    Yokozawa, M.

    2017-12-01

    Attention has been paid to agricultural fields, where ecosystem carbon exchange can be regulated by water management and residue treatments. However, little is known about the dynamic responses of these ecosystems to environmental changes. This study focuses on paddy fields, where CO2 emissions from microbial decomposition of organic matter are suppressed and CH4 is instead emitted under flooded conditions during the rice growing season, with CO2 emission following in the fallow season after harvest. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models: a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics rice plant growth processes, including the formation of reproductive organs as well as leaf expansion. The soil decomposition sub-model simulates the decomposition of soil organic carbon. Assimilating data on the time changes in CO2 flux measured by the eddy covariance method, rice plant biomass, LAI and the final yield into the model, the parameters were calibrated using a stochastic optimization algorithm with a particle filter. The particle filter, one of the Monte Carlo filters, enables us to evaluate time changes in parameters based on the data observed up to a given time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions yield the time changes in the parameters governing crop production as well as carbon exchange. In this study, we focused on the parameters related to crop production and soil carbon storage. As a result, the calibrated model with estimated parameters could accurately predict the NEE flux in subsequent years. The temperature sensitivities (Q10) of the decomposition rate of soil organic carbon (SOC) were estimated as 1.4 for the non-cultivation period and 2.9 for the cultivation period (submerged soil conditions in the flooding season). This suggests that the response of ecosystem carbon exchange differs because the SOC decomposition process is sensitive to environmental variation during the paddy rice cultivation period.
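
    A minimal sketch of a bootstrap particle filter, the family of Monte Carlo filters used above, on a hypothetical scalar state-space model (an AR(1) state observed in Gaussian noise); it stands in for the study's crop/soil model purely to show the propagate-weight-resample loop.

```python
import numpy as np

rng = np.random.default_rng(4)
T, n_particles = 50, 1000
q, r = 0.3, 0.5                              # process / observation noise std

# Simulate a hidden AR(1) state and noisy observations of it.
x_true = np.zeros(T)
for t in range(1, T):
    x_true[t] = 0.9 * x_true[t - 1] + rng.normal(0, q)
y = x_true + rng.normal(0, r, T)

particles = rng.normal(0, 1, n_particles)
estimates = np.empty(T)
for t in range(T):
    particles = 0.9 * particles + rng.normal(0, q, n_particles)   # propagate
    w = np.exp(-0.5 * ((y[t] - particles) / r) ** 2)              # weight
    w /= w.sum()
    particles = particles[rng.choice(n_particles, n_particles, p=w)]  # resample
    estimates[t] = particles.mean()

print(f"RMSE of filtered state: {np.sqrt(np.mean((estimates - x_true)**2)):.3f}")
```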

  8. Application of a spectrally filtered probing light beam and RGB decomposition of microphotographs for flow registration of ultrasonically enhanced agglutination of erythrocytes

    NASA Astrophysics Data System (ADS)

    Doubrovski, V. A.; Ganilova, Yu. A.; Zabenkov, I. V.

    2013-08-01

    We propose a development of the flow microscopy method to increase its resolving power for registration of erythrocyte agglutination. We show experimentally that the action of an ultrasonic standing wave on an agglutinating blood-serum mixture leads to the formation of erythrocyte immune complexes large enough that a new two-wave optical method for registering erythrocyte agglutination, using RGB decomposition of microphotographs of the flowing mixture, becomes possible. This approach increases the reliability of registration of erythrocyte agglutination and, consequently, of blood typing. Our results can be used in the development of instruments for automatic human blood typing.

  9. Effects of biopretreatment of corn stover with white-rot fungus on low-temperature pyrolysis products.

    PubMed

    Yang, Xuewei; Ma, Fuying; Yu, Hongbo; Zhang, Xiaoyu; Chen, Shulin

    2011-02-01

    The thermal decomposition of biopretreated corn stover at low temperature was studied using Py-GC/MS analysis and thermogravimetric analysis with the distributed activation energy model (DAEM). Results showed that biopretreatment with the white-rot fungus Echinodontium taxodii 2538 can improve the low-temperature pyrolysis of biomass by increasing the pyrolysis products of cellulose and hemicellulose (furfural and sucrose increased up to 4.68-fold and 2.94-fold, respectively) and lignin (biphenyl and 3,7,11,15-tetramethyl-2-hexadecen-1-ol increased 2.45-fold and 4.22-fold, respectively). DAEM calculations showed that biopretreatment can decrease the activation energy in the low temperature range, accelerate the reaction rate, and start thermal decomposition at a lower temperature. ATR-FTIR results showed that deconstruction of lignin and decomposition of the main linkages between hemicellulose and lignin could contribute to the improved low-temperature pyrolysis. Copyright © 2010 Elsevier Ltd. All rights reserved.

  10. RESOLVING THE ACTIVE GALACTIC NUCLEUS AND HOST EMISSION IN THE MID-INFRARED USING A MODEL-INDEPENDENT SPECTRAL DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernán-Caballero, Antonio; Alonso-Herrero, Almudena; Hatziminaoglou, Evanthia

    2015-04-20

    We present results on the spectral decomposition of 118 Spitzer Infrared Spectrograph (IRS) spectra from local active galactic nuclei (AGNs) using a large set of Spitzer/IRS spectra as templates. The templates are themselves IRS spectra from extreme cases where a single physical component (stellar, interstellar, or AGN) completely dominates the integrated mid-infrared emission. We show that a linear combination of one template for each physical component reproduces the observed IRS spectra of AGN hosts with unprecedented fidelity for a template fitting method, with no need to model extinction separately. We use full probability distribution functions to estimate expectation values and uncertainties for observables, and find that the decomposition results are robust against degeneracies. Furthermore, we compare the AGN spectra derived from the spectral decomposition with sub-arcsecond resolution nuclear photometry and spectroscopy from ground-based observations. We find that the AGN component derived from the decomposition closely matches the nuclear spectrum, with a 1σ dispersion of 0.12 dex in luminosity and typical uncertainties of ∼0.19 in the spectral index and ∼0.1 in the silicate strength. We conclude that the emission from the host galaxy can be reliably removed from the IRS spectra of AGNs. This allows for unbiased studies of the AGN emission in intermediate- and high-redshift galaxies (currently inaccessible to ground-based observations) with archival Spitzer/IRS data and, in the future, with the Mid-InfraRed Instrument of the James Webb Space Telescope. The decomposition code and templates are available at http://denebola.org/ahc/deblendIRS.

  11. Scoring of Decomposition: A Proposed Amendment to the Method When Using a Pig Model for Human Studies.

    PubMed

    Keough, Natalie; Myburgh, Jolandie; Steyn, Maryna

    2017-07-01

    Decomposition studies often use pigs as proxies for human cadavers. However, differences in decomposition sequences/rates relative to humans have not been scientifically examined. Descriptions of five main decomposition stages (humans) were developed and refined by Galloway and later by Megyesi. However, whether these changes/processes are alike in pigs is unclear. Any differences can have significant effects when pig models are used for human PMI estimation. This study compared human decomposition models to the changes observed in pigs. Twenty pigs (50-90 kg) were decomposed over five months and decompositional features recorded. Total body scores (TBS) were calculated. Significant differences were observed during early decomposition between pigs and humans. An amended scoring system to be used in future studies was developed. Standards for PMI estimation derived from porcine models may not directly apply to humans and may need adjustment. Porcine models, however, remain valuable to study variables influencing decomposition. © 2016 American Academy of Forensic Sciences.

  12. Can the biomass-ratio hypothesis predict mixed-species litter decomposition along a climatic gradient?

    PubMed Central

    Tardif, Antoine; Shipley, Bill; Bloor, Juliette M. G.; Soussana, Jean-François

    2014-01-01

    Background and Aims The biomass-ratio hypothesis states that ecosystem properties are driven by the characteristics of dominant species in the community. In this study, the hypothesis was operationalized as community-weighted means (CWMs) of monoculture values and tested for predicting the decomposition of multispecies litter mixtures along an abiotic gradient in the field. Methods Decomposition rates (mg g⁻¹ d⁻¹) of litter from four herb species were measured using litter-bed experiments with the same soil at three sites in central France along a correlated climatic gradient of temperature and precipitation. All possible combinations from one to four species mixtures were tested over 28 weeks of incubation. Observed mixture decomposition rates were compared with those predicted by the biomass-ratio hypothesis. Variability of the prediction errors was compared with the species richness of the mixtures, across sites, and within sites over time. Key Results Both positive and negative prediction errors occurred. Despite this, the biomass-ratio hypothesis was true as an average claim for all sites (r = 0.91) and for each site separately, except for the climatically intermediate site, which showed mainly synergistic deviations. Variability decreased with increasing species richness and in less favourable climatic conditions for decomposition. Conclusions Community-weighted mean values provided good predictions of mixed-species litter decomposition, converging to the predicted values with increasing species richness and in climates less favourable to decomposition. Under a context of climate change, abiotic variability would be important to take into account when predicting ecosystem processes. PMID:24482152
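
    The community-weighted mean prediction at the heart of the biomass-ratio hypothesis is a one-line computation; a minimal sketch follows, with illustrative numbers rather than data from the study.

```python
# CWM sketch: the predicted mixture decomposition rate is the mean of the
# monoculture rates weighted by each species' share of mixture biomass.
# All numbers are illustrative, not data from the study.
mono_rates = {"A": 1.8, "B": 1.2, "C": 0.9}   # mg g^-1 d^-1 in monoculture
biomass = {"A": 5.0, "B": 3.0, "C": 2.0}      # g of each species in the litter bag

total = sum(biomass.values())
cwm_rate = sum(mono_rates[sp] * biomass[sp] / total for sp in mono_rates)
print(f"predicted mixture rate: {cwm_rate:.2f} mg g^-1 d^-1")
```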

  13. Seasonal variation of carcass decomposition and gravesoil chemistry in a cold (Dfa) climate.

    PubMed

    Meyer, Jessica; Anderson, Brianna; Carter, David O

    2013-09-01

    It is well known that temperature significantly affects corpse decomposition. Yet relatively few taphonomy studies investigate the effects of seasonality on decomposition. Here, we propose the use of the Köppen-Geiger climate classification system and describe the decomposition of swine (Sus scrofa domesticus) carcasses during the summer and winter near Lincoln, Nebraska, USA. Decomposition was scored, and gravesoil chemistry (total carbon, total nitrogen, ninhydrin-reactive nitrogen, ammonium, nitrate, and soil pH) was assessed. Gross carcass decomposition in summer was three to seven times greater than in winter. Initial significant changes in gravesoil chemistry occurred following approximately 320 accumulated degree days, regardless of season. Furthermore, significant (p < 0.05) correlations were observed between ammonium and pH (positive correlation) and between nitrate and pH (negative correlation). We hope that future decomposition studies employ the Köppen-Geiger climate classification system to understand the seasonality of corpse decomposition, to validate taphonomic methods, and to facilitate cross-climate comparisons of carcass decomposition. © 2013 American Academy of Forensic Sciences.
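
    The accumulated degree day (ADD) measure behind the ~320 ADD threshold reported above is a simple thermal sum; a minimal sketch follows, with illustrative temperatures and an assumed 0 °C base.

```python
# ADD sketch: sum daily mean temperatures above a base temperature.
# Temperatures are illustrative, not the study's data.
daily_mean_temps = [22.5, 24.0, 19.8, 21.2, 25.1, 23.3]   # °C, one value per day

def accumulated_degree_days(temps, base=0.0):
    return sum(max(t - base, 0.0) for t in temps)

print("ADD so far:", accumulated_degree_days(daily_mean_temps))
```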

  14. Jellyfish (Cyanea nozakii) decomposition and its potential influence on marine environments studied via simulation experiments.

    PubMed

    Qu, Chang-Feng; Song, Jin-Ming; Li, Ning; Li, Xue-Gang; Yuan, Hua-Mao; Duan, Li-Qin; Ma, Qing-Xia

    2015-08-15

    A growing body of evidence suggests that the jellyfish population in Chinese seas is increasing, and decomposition of jellyfish strongly influences the marine ecosystem. This study investigated the change in water quality during Cyanea nozakii decomposition using simulation experiments. The results demonstrated that the amount of dissolved nutrients released by jellyfish was greater than the amount of particulate nutrients. NH4(+) was predominant in the dissolved matter, whereas the particulate matter was dominated by organic nitrogen and inorganic phosphorus. The high N/P ratios demonstrated that jellyfish decomposition may result in high nitrogen loads. The inorganic nutrients released by C. nozakii decomposition were important for primary production. Jellyfish decomposition caused decreases in the pH and oxygen consumption associated with acidification and hypoxia or anoxia; however, sediments partially mitigated the changes in the pH and oxygen. These results imply that jellyfish decomposition can result in potentially detrimental effects on marine environments. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Modified truncated randomized singular value decomposition (MTRSVD) algorithms for large scale discrete ill-posed problems with general-form regularization

    NASA Astrophysics Data System (ADS)

    Jia, Zhongxiao; Yang, Yanfei

    2018-05-01

    In this paper, we propose new randomization-based algorithms for large-scale linear discrete ill-posed problems with general-form regularization: min ||Lx|| subject to ||Ax − b|| = min, where L is a regularization matrix. Our algorithms are inspired by the modified truncated singular value decomposition (MTSVD) method, which suits only small- to medium-scale problems, and by randomized SVD (RSVD) algorithms that generate good low-rank approximations to A. We use rank-k truncated randomized SVD (TRSVD) approximations to A, obtained by truncating rank-(k+q) RSVD approximations to A, where q is an oversampling parameter. The resulting algorithms are called modified TRSVD (MTRSVD) methods. At every step, we use the LSQR algorithm to solve the resulting inner least squares problem, which is proved to become better conditioned as k increases, so that LSQR converges faster. We present sharp bounds for the approximation accuracy of the RSVDs and TRSVDs for severely, moderately and mildly ill-posed problems, and substantially improve a known basic bound for TRSVD approximations. We prove how to choose the stopping tolerance for LSQR in order to guarantee that the computed and exact best regularized solutions have the same accuracy. Numerical experiments illustrate that the best regularized solutions by MTRSVD are as accurate as the ones by the truncated generalized singular value decomposition (TGSVD) algorithm, and at least as accurate as those by some existing truncated randomized generalized singular value decomposition (TRGSVD) algorithms. This work was supported in part by the National Science Foundation of China (Nos. 11771249 and 11371219).
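
    The TRSVD building block can be sketched in a few lines of NumPy following the basic randomized-SVD recipe: sample the range of A with a Gaussian test matrix of k+q columns, project, take the small SVD, and keep the k leading triplets. This is a generic sketch of the technique, not the authors' MTRSVD code.

```python
import numpy as np

def trsvd(A, k, q=10, rng=None):
    """Rank-k truncated randomized SVD: form a rank-(k+q) randomized SVD of A
    (q = oversampling) and keep the k leading triplets. Basic Halko-type recipe."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Omega = rng.standard_normal((n, k + q))   # Gaussian test matrix
    Q, _ = np.linalg.qr(A @ Omega)            # orthonormal approximation of range(A)
    B = Q.T @ A                               # small (k+q) x n matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :k], s[:k], Vt[:k, :]         # truncate to rank k

# quick check on a random matrix
A = np.random.default_rng(1).standard_normal((500, 300))
U, s, Vt = trsvd(A, k=20)
print("relative error:", np.linalg.norm(A - (U * s) @ Vt) / np.linalg.norm(A))
```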

  16. Water-splitting using photocatalytic porphyrin-nanotube composite devices

    DOEpatents

    Shelnutt, John A [Tijeras, NM; Miller, James E [Albuquerque, NM; Wang, Zhongchun [Albuquerque, NM; Medforth, Craig J [Winters, CA

    2008-03-04

    A method for generating hydrogen by photocatalytic decomposition of water using porphyrin nanotube composites. In some embodiments, both hydrogen and oxygen are generated by photocatalytic decomposition of water.

  17. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers.

    PubMed

    Guo, Qiang; Qi, Liangang

    2017-04-10

    In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on the time and frequency domains degrades seriously, and techniques using an antenna array require a sufficiently large array and entail high hardware costs. To better combat multi-type interferences for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degrees of freedom (DoF) of the array antenna, but also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal.
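
    As a sketch of the second-stage beamforming, the following toy example forms MPDR weights w = R⁻¹a / (aᴴR⁻¹a) for a uniform linear array; the array geometry, signal powers and angles are illustrative assumptions, and the DCQGMP stage is not reproduced here.

```python
import numpy as np

# MPDR beamformer sketch for a uniform linear array (illustrative parameters).
M, d = 8, 0.5                     # elements, spacing in wavelengths
theta_sig = np.deg2rad(10.0)      # desired (GNSS) direction

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

rng = np.random.default_rng(0)
N = 2000
# snapshots: weak desired signal + strong wideband interferer at -30 deg + noise
s = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
i = 10.0 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
X = (np.outer(steering(theta_sig), s) + np.outer(steering(np.deg2rad(-30)), i)
     + 0.7 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N))))

R = X @ X.conj().T / N            # sample covariance of the snapshots
a = steering(theta_sig)
w = np.linalg.solve(R, a)
w /= a.conj() @ w                 # distortionless constraint: a^H w = 1
y = w.conj() @ X                  # beamformer output (interference nulled)
print("output power:", (np.abs(y) ** 2).mean())
```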

  18. Combining DCQGMP-Based Sparse Decomposition and MPDR Beamformer for Multi-Type Interferences Mitigation for GNSS Receivers

    PubMed Central

    Guo, Qiang; Qi, Liangang

    2017-01-01

    In the coexistence of multiple types of interfering signals, the performance of interference suppression methods based on the time and frequency domains degrades seriously, and techniques using an antenna array require a sufficiently large array and entail high hardware costs. To better combat multi-type interferences for GNSS receivers, this paper proposes a cascaded multi-type interference mitigation method combining improved double chain quantum genetic matching pursuit (DCQGMP)-based sparse decomposition and an MPDR beamformer. The key idea behind the proposed method is that the multiple types of interfering signals can be excised by taking advantage of their sparse features in different domains. In the first stage, the single-tone (multi-tone) and linear chirp interfering signals are canceled by sparse decomposition according to their sparsity in the over-complete dictionary. In order to improve the timeliness of matching pursuit (MP)-based sparse decomposition, a DCQGMP is introduced by combining an improved double chain quantum genetic algorithm (DCQGA) and the MP algorithm, and the DCQGMP algorithm is extended to handle multi-channel signals according to the correlation among the signals in different channels. In the second stage, the minimum power distortionless response (MPDR) beamformer is utilized to nullify the residual interferences (e.g., wideband Gaussian noise interferences). Several simulation results show that the proposed method can not only improve the interference mitigation degrees of freedom (DoF) of the array antenna, but also effectively deal with interference arriving from the same direction as the GNSS signal, provided it can be sparsely represented in the over-complete dictionary. Moreover, it does not introduce serious distortions into the navigation signal. PMID:28394290

  19. Sparse decomposition of seismic data and migration using Gaussian beams with nonzero initial curvature

    NASA Astrophysics Data System (ADS)

    Liu, Peng; Wang, Yanfei

    2018-04-01

    We study problems associated with seismic data decomposition and migration imaging. We first represent the seismic data utilizing Gaussian beam basis functions, which have nonzero curvature, and then consider the sparse decomposition technique. The sparse decomposition problem is an l0-norm constrained minimization problem. In solving the l0-norm minimization, a polynomial Radon transform is performed to achieve sparsity, and a fast gradient descent method is used to calculate the waveform functions. The waveform functions can subsequently be used for sparse Gaussian beam migration. Compared with traditional sparse Gaussian beam methods, the seismic data can be properly reconstructed employing fewer Gaussian beams with nonzero initial curvature. The migration approach described in this paper is more efficient than the traditional sparse Gaussian beam migration.

  20. Kinetics of Thermal Decomposition of Ammonium Perchlorate by TG/DSC-MS-FTIR

    NASA Astrophysics Data System (ADS)

    Zhu, Yan-Li; Huang, Hao; Ren, Hui; Jiao, Qing-Jie

    2014-01-01

    The method of thermogravimetry/differential scanning calorimetry-mass spectrometry-Fourier transform infrared (TG/DSC-MS-FTIR) simultaneous analysis has been used to study thermal decomposition of ammonium perchlorate (AP). The processing of nonisothermal data at various heating rates was performed using NETZSCH Thermokinetics. The MS-FTIR spectra showed that N2O and NO2 were the main gaseous products of the thermal decomposition of AP, and there was a competition between the formation reaction of N2O and that of NO2 during the process with an iso-concentration point of N2O and NO2. The dependence of the activation energy calculated by Friedman's iso-conversional method on the degree of conversion indicated that the AP decomposition process can be divided into three stages, which are autocatalytic, low-temperature diffusion and high-temperature, stable-phase reaction. The corresponding kinetic parameters were determined by multivariate nonlinear regression and the mechanism of the AP decomposition process was proposed.
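
    The Friedman iso-conversional analysis mentioned above amounts to a set of straight-line fits: at each fixed conversion alpha, ln(dα/dt) is regressed against 1/T across runs at different heating rates, and the slope gives -Ea(α)/R. A minimal sketch, assuming preprocessed TG runs as inputs:

```python
import numpy as np

R = 8.314  # J/(mol K)

def friedman_Ea(runs, alphas):
    """Friedman iso-conversional sketch. Each run is a dict of arrays over time,
    {"T": K, "alpha": conversion (monotonically increasing), "rate": dalpha/dt},
    one run per heating rate. Returns Ea (J/mol) at each requested conversion."""
    Ea = []
    for a in alphas:
        x, y = [], []
        for run in runs:
            i = np.searchsorted(run["alpha"], a)   # first point reaching conversion a
            x.append(1.0 / run["T"][i])
            y.append(np.log(run["rate"][i]))
        slope, _ = np.polyfit(x, y, 1)             # ln(rate) = -Ea/R * (1/T) + const
        Ea.append(-slope * R)
    return np.array(Ea)

# usage (hypothetical runs at 5, 10, 20 K/min):
# Ea = friedman_Ea([run5, run10, run20], alphas=np.linspace(0.1, 0.9, 9))
```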

  1. Block matrix based LU decomposition to analyze kinetic damping in active plasma resonance spectroscopy

    NASA Astrophysics Data System (ADS)

    Roehl, Jan Hendrik; Oberrath, Jens

    2016-09-01

    "Active plasma resonance spectroscopy" (APRS) is a widely used diagnostic method to measure plasma parameters such as the electron density. Measurements with APRS probes in plasmas of a few Pa typically show a broadening of the spectrum due to kinetic effects. To analyze this broadening, a general kinetic model in the electrostatic approximation, based on functional analytic methods, has been presented [1]. One of the main results is that the system response function Y(ω) is given in terms of the matrix elements of the resolvent of the dynamic operator evaluated for values on the imaginary axis. To determine the response function of a specific probe, the resolvent has to be approximated by a huge matrix with a banded block structure. Due to this structure, a block-based LU decomposition can be implemented. It leads to a solution for Y(ω) that involves only products of matrices of the inner block size. This LU decomposition makes it possible to analyze the influence of kinetic effects on the broadening, and it saves memory and calculation time. Gratitude is expressed to the internal funding of Leuphana University.
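
    The memory and time saving from the banded block structure can be illustrated with a generic block-tridiagonal LU solve (block Thomas algorithm), in which only inner-block-size matrices are ever formed; this is a sketch of the general technique, not the probe-specific implementation.

```python
import numpy as np

def block_tridiag_solve(A_diag, A_low, A_up, b):
    """Solve a block-tridiagonal system by block LU (block Thomas algorithm).
    A_diag: list of (m,m) diagonal blocks; A_low/A_up: sub/super-diagonal
    blocks; b: list of (m,) right-hand-side segments. Only inner-block-size
    matrices are factorized, which is the saving exploited for banded-block
    resolvents. A generic sketch."""
    n = len(A_diag)
    D, y = [None] * n, [None] * n
    D[0], y[0] = A_diag[0], b[0]
    for i in range(1, n):                       # forward elimination, block by block
        L = A_low[i - 1] @ np.linalg.inv(D[i - 1])
        D[i] = A_diag[i] - L @ A_up[i - 1]
        y[i] = b[i] - L @ y[i - 1]
    x = [None] * n
    x[-1] = np.linalg.solve(D[-1], y[-1])
    for i in range(n - 2, -1, -1):              # back substitution
        x[i] = np.linalg.solve(D[i], y[i] - A_up[i] @ x[i + 1])
    return np.concatenate(x)

# tiny demo with a random, diagonally dominant block-tridiagonal system
rng = np.random.default_rng(0)
m, n = 4, 6
Ad = [np.eye(m) * 10 + rng.standard_normal((m, m)) for _ in range(n)]
Al = [rng.standard_normal((m, m)) for _ in range(n - 1)]
Au = [rng.standard_normal((m, m)) for _ in range(n - 1)]
b = [rng.standard_normal(m) for _ in range(n)]
x = block_tridiag_solve(Ad, Al, Au, b)
print("solution length:", x.size)
```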

  2. Decomposition of intact chicken feathers by a thermophile in combination with an acidulocomposting garbage-treatment process.

    PubMed

    Shigeri, Yasushi; Matsui, Tatsunobu; Watanabe, Kunihiko

    2009-11-01

    In order to develop a practical method for the decomposition of intact chicken feathers, a moderate thermophile strain, Meiothermus ruber H328, having strong keratinolytic activity, was used in a bio-type garbage-treatment machine working with an acidulocomposting process. The addition of strain H328 cells (15 g) combined with acidulocomposting in the garbage machine resulted in 70% degradation of intact chicken feathers (30 g) within 14 d. This degradation efficiency is comparable to a previous result employing the strain as a single bacterium in flask culture, and it indicates that strain H328 can promote intact feather degradation activity in a garbage machine currently on the market.

  3. A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems

    DTIC Science & Technology

    2014-04-01

    A Posteriori Error Analysis and Uncertainty Quantification for Adaptive Multiscale Operator Decomposition Methods for Multiphysics Problems. Report TR-14-33, April 2014. Donald Estep and Michael Holst. Grant HDTRA1-09-1-0036. Approved for public release, distribution is unlimited. Related publication: "Barrier methods for critical exponent problems in geometric analysis and mathematical physics," J. Erway and M. Holst, submitted for publication.

  4. Newton-Krylov-Schwarz: An implicit solver for CFD

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.

    1995-01-01

    Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
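
    The Jacobian-free Newton-Krylov idea is easy to demonstrate with SciPy's newton_krylov, which nests a Krylov solver (LGMRES by default) inside Newton's method using directional differencing; the Schwarz preconditioning component of NKS is not shown in this minimal sketch.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Minimal Jacobian-free Newton-Krylov example: the 1-D nonlinear elliptic
# problem u'' + exp(u) = 0 on (0,1) with u(0)=u(1)=0, discretized by finite
# differences. No explicit Jacobian is ever formed.
n = 100
h = 1.0 / (n + 1)

def residual(u):
    upad = np.concatenate([[0.0], u, [0.0]])   # Dirichlet boundary values
    lap = (upad[:-2] - 2 * upad[1:-1] + upad[2:]) / h ** 2
    return lap + np.exp(u)

u = newton_krylov(residual, np.zeros(n), f_tol=1e-8)
print("max residual:", np.abs(residual(u)).max())
```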

  5. Potential macro-detritivore range expansion into the subarctic stimulates litter decomposition: a new positive feedback mechanism to climate change?

    PubMed

    van Geffen, Koert G; Berg, Matty P; Aerts, Rien

    2011-12-01

    As a result of low decomposition rates, high-latitude ecosystems store large amounts of carbon. Litter decomposition in these ecosystems is constrained by harsh abiotic conditions, but also by the absence of macro-detritivores. We have studied the potential effects of their climate change-driven northward range expansion on the decomposition of two contrasting subarctic litter types. Litter of Alnus incana and Betula pubescens was incubated in microcosms together with monocultures and all possible combinations of three functionally different macro-detritivores (the earthworm Lumbricus rubellus, the isopod Oniscus asellus, and the millipede Julus scandinavius). Our results show that these macro-detritivores stimulated decomposition, especially of the high-quality A. incana litter, and that the macro-detritivores tested differed in their decomposition-stimulating effects, with earthworms having the largest influence. Decomposition processes increased with increasing number of macro-detritivore species, and positive net diversity effects occurred in several macro-detritivore treatments. However, after correction for macro-detritivore biomass, all interspecific differences in macro-detritivore effects, as well as the positive effects of species number on subarctic litter decomposition, disappeared. The net diversity effects also appeared to be driven by variation in biomass, with the possible exception of net diversity effects on mass loss. Based on these results, we conclude that the expected climate change-induced range expansion of macro-detritivores into subarctic regions is likely to result in accelerated decomposition rates. Our results also indicate that the magnitude of macro-detritivore effects on subarctic decomposition will mainly depend on macro-detritivore biomass, rather than on macro-detritivore species number or identity.

  6. Waveform LiDAR processing: comparison of classic approaches and optimized Gold deconvolution to characterize vegetation structure and terrain elevation

    NASA Astrophysics Data System (ADS)

    Zhou, T.; Popescu, S. C.; Krause, K.

    2016-12-01

    Waveform Light Detection and Ranging (LiDAR) data have advantages over discrete-return LiDAR data in accurately characterizing vegetation structure. However, we lack a comprehensive understanding of waveform data processing approaches under different topography and vegetation conditions. The objective of this paper is to highlight a novel deconvolution algorithm, the Gold algorithm, for processing waveform LiDAR data with optimal deconvolution parameters. Further, we present a comparative study of waveform processing methods to provide insight into selecting an approach for a given combination of vegetation and terrain characteristics. We employed two waveform processing methods: 1) direct decomposition, and 2) deconvolution followed by decomposition. In the second method, we utilized two deconvolution algorithms: the Richardson-Lucy (RL) algorithm and the Gold algorithm. Comprehensive and quantitative comparisons were conducted in terms of the number of detected echoes, position accuracy, the bias of the end products (such as the digital terrain model (DTM) and canopy height model (CHM)) relative to discrete LiDAR data, and the parameter uncertainty of these end products obtained from the different methods. This study was conducted at three study sites that cover diverse ecological regions and vegetation and elevation gradients. Results demonstrate that the two deconvolution algorithms are sensitive to the pre-processing steps of the input data. The deconvolution and decomposition method is more capable of detecting hidden echoes with a lower false echo detection rate, especially for the Gold algorithm. Compared to the reference data, all approaches generate satisfactory accuracy assessment results, with small mean spatial differences (<1.22 m for DTMs, <0.77 m for CHMs) and root mean square errors (RMSE) (<1.26 m for DTMs, <1.93 m for CHMs). More specifically, the Gold algorithm is superior to the others with a smaller RMSE (<1.01 m), while the direct decomposition approach works better in terms of the percentage of spatial difference within 0.5 and 1 m. The parameter uncertainty analysis demonstrates that the Gold algorithm outperforms the other approaches in dense vegetation areas, with the smallest RMSE, while the RL algorithm performs better in sparse vegetation areas in terms of RMSE.
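
    For the deconvolution step, a 1-D Richardson-Lucy iteration can be sketched directly with NumPy: each pass multiplies the current estimate by the correlation of the system response with the data/model ratio, which preserves positivity. The pulse shape and echo positions below are illustrative, and the Gold algorithm's own multiplicative update is not reproduced here.

```python
import numpy as np

def richardson_lucy_1d(d, psf, n_iter=50):
    """1-D Richardson-Lucy deconvolution sketch for a recorded waveform d and
    system response psf (both non-negative)."""
    u = np.full_like(d, d.mean())        # flat positive initial estimate
    psf_m = psf[::-1]                    # mirrored psf for correlation
    eps = 1e-12
    for _ in range(n_iter):
        model = np.convolve(u, psf, mode="same")
        ratio = d / (model + eps)
        u = u * np.convolve(ratio, psf_m, mode="same")
    return u

# toy waveform: two overlapping echoes blurred by a Gaussian pulse
truth = np.zeros(200); truth[80] = 1.0; truth[95] = 0.6
psf = np.exp(-0.5 * (np.arange(-25, 26) / 5.0) ** 2); psf /= psf.sum()
data = np.convolve(truth, psf, mode="same")
est = richardson_lucy_1d(data, psf, n_iter=200)
print("two largest bins (approximate echo positions):", np.sort(np.argsort(est)[-2:]))
```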

  7. MO-FG-204-01: Improved Noise Suppression for Dual-Energy CT Through Entropy Minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrongolo, M; Zhu, L

    2015-06-15

    Purpose: In dual energy CT (DECT), noise amplification during signal decomposition significantly limits the utility of basis material images. Since clinically relevant objects contain a limited number of materials, we propose to suppress noise for DECT based on image entropy minimization. An adaptive weighting scheme is employed during noise suppression to improve decomposition accuracy with limited effect on spatial resolution and image texture preservation. Methods: From decomposed images, we first generate a 2D plot of scattered data points, using basis material densities as coordinates. Data points representing the same material generate a highly asymmetric cluster. We orient an axis by minimizing the entropy in a 1D histogram of these points projected onto the axis. To suppress noise, we replace pixel values of decomposed images with center-of-mass values in the direction perpendicular to the optimal axis. To limit errors due to cluster overlap, we weight each data point's contribution based on its high and low energy CT values and location within the image. The proposed method's performance is assessed on physical phantom studies. Electron density is used as the quality metric for decomposition accuracy. Our results are compared to those without noise suppression and with a recently developed iterative method. Results: The proposed method reduces noise standard deviations of the decomposed images by at least one order of magnitude. On the Catphan phantom, this method greatly preserves the spatial resolution and texture of the CT images and limits the induced error in measured electron density to below 1.2%. In the head phantom study, the proposed method performs the best in retaining fine, intricate structures. Conclusion: The entropy minimization based algorithm with adaptive weighting substantially reduces DECT noise while preserving image spatial resolution and texture. Future investigations will include extensive studies on material decomposition accuracy that go beyond the current electron density calculations. This work was supported in part by the National Institutes of Health (NIH) under Grant Number R21 EB012700.
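
    The axis-orientation step can be illustrated with a toy 2-D version: project the scatter of basis-material points onto candidate axes and keep the angle whose 1-D histogram has minimal Shannon entropy. The bin count, angle grid and synthetic cluster are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def projection_entropy(points, theta, bins=128):
    """Shannon entropy of the 1-D histogram of 2-D points projected onto an
    axis at angle theta (radians) in the basis-material plane."""
    axis = np.array([np.cos(theta), np.sin(theta)])
    hist, _ = np.histogram(points @ axis, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return -(p * np.log(p)).sum()

def best_axis(points, n_angles=180):
    """Brute-force search for the entropy-minimizing axis (toy stand-in for
    the paper's optimization)."""
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    ent = [projection_entropy(points, t) for t in thetas]
    return thetas[int(np.argmin(ent))]

# synthetic elongated cluster mimicking one material's noisy decomposition;
# expect ~90 deg: the tightest projection is across the cluster's long axis
rng = np.random.default_rng(0)
cluster = rng.standard_normal((5000, 2)) * [2.0, 0.2]
print("optimal axis angle (deg):", np.degrees(best_axis(cluster)))
```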

  8. Dynamic characterization of a damaged beam using empirical mode decomposition and Hilbert spectrum method

    NASA Astrophysics Data System (ADS)

    Chang, Chih-Chen; Poon, Chun-Wing

    2004-07-01

    Recently, the empirical mode decomposition (EMD) in combination with the Hilbert spectrum method has been proposed to identify the dynamic characteristics of linear structures. In this study, the EMD and Hilbert spectrum method is used to analyze the dynamic characteristics of a damaged reinforced concrete (RC) beam in the laboratory. The RC beam is 4 m long with a cross section of 200 mm × 250 mm. The beam is sequentially subjected to a concentrated load of different magnitudes at the mid-span to produce different degrees of damage. An impact load is applied around the mid-span to excite the beam. Responses of the beam are recorded by four accelerometers. Results indicate that the EMD and Hilbert spectrum method can reveal the variation of the dynamic characteristics in the time domain. These results are also compared with those obtained using Fourier analysis. In general, it is found that the two sets of results correlate quite well in terms of mode counts and frequency values. Some differences, however, can be seen in the damping values, which perhaps can be attributed to the linear assumption of the Fourier transform.

  9. In vitro analysis of rifampicin and its effect on quality control tests of rifampicin containing dosage forms.

    PubMed

    Agrawal, S; Panchagnula, R

    2004-10-01

    The chemical stability of rifampicin, both in the solid state and in various media, has been widely investigated. While rifampicin is appreciably stable in the solid state, its decomposition rate is very high in acidic as well as in alkaline medium, and a variety of decomposition products have been identified. The literature reports highly variable rifampicin decomposition in acidic medium. Hence, the objective of this investigation was to study the possible reasons responsible for this variability. For this purpose, filter validation was performed and correlations between rifampicin and its degradation products were developed to account for the loss of rifampicin in acidic media. For the analysis of rifampicin with or without the presence of isoniazid, a simple and accurate method was developed using the high-performance liquid chromatography procedure recommended in the FDC monographs of the United States Pharmacopoeia. Using the equations developed in this investigation, the amount of rifampicin degraded in the acidic media was calculated from the area under the curve of the degradation products. Further, it was shown that, in a dissolution study, the colorimetric method of analysis recommended in the United States Pharmacopoeia provides accurate results regarding rifampicin release. Filter type, time of injection, and interpretation of data are important factors that affect the results of rifampicin analysis in in vitro studies and quality control.

  10. Some results on numerical methods for hyperbolic conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Huanan.

    1989-01-01

    This dissertation contains some results on the numerical solution of hyperbolic conservation laws. (1) The author introduced an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, which is a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes which will be called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field by field decompositions; otherwise he simply uses the centered difference schemes without the field by field decompositions. The method involves a new interpolation analysis. In the numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method. In this way, he dramatically improves the resolution of the contact discontinuities at very little additional cost. (3) He introduces a space-time mesh refinement method for time dependent problems.

  11. [A field study of tundra plant litter decomposition rate via mass loss and carbon dioxide emission: the role of biotic and abiotic controls, biotope, season of year, and spatial-temporal scale].

    PubMed

    Pochikalov, A V; Karelin, D V

    2014-01-01

    Although many recently published original papers and reviews deal with plant litter decomposition rates and their controls, our understanding of these processes in boreal and high-latitude plant communities, especially in the permafrost areas of our planet, remains limited. First and foremost, this holds true for the winter period. Here, we present the results of 2-year field observations in south-taiga and southern shrub-tundra ecosystems in European Russia. We pioneered the simultaneous application of two independent methods: classic mass-loss estimation by the litter-bag technique, and direct measurement of the CO2 emission (respiration) of the same litter bags with different types of dead plant matter. Such an approach allowed us to reconstruct the intra-seasonal dynamics of decomposition rates of the main tundra litter fractions with high temporal resolution, to estimate the contribution of different seasons and of defragmentation to plant matter decomposition, and to determine its controls at different temporal scales.

  12. Initial decomposition of the condensed-phase β-HMX under shock waves: molecular dynamics simulations.

    PubMed

    Ge, Ni-Na; Wei, Yong-Kai; Ji, Guang-Fu; Chen, Xiang-Rong; Zhao, Feng; Wei, Dong-Qing

    2012-11-26

    We have performed quantum-based multiscale simulations to study the initial chemical processes of condensed-phase octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) under shock wave loading. A self-consistent charge density-functional tight-binding (SCC-DFTB) method was employed. The results show that the initial decomposition of shocked HMX is triggered by N-NO(2) bond breaking under low velocity impact (8 km/s). As the shock velocity increases (11 km/s), the homolytic cleavage of the N-NO(2) bond is suppressed under high pressure, and C-H bond dissociation becomes the primary pathway for HMX decomposition in its early stages. It is accompanied by a five-membered ring formation and hydrogen transfer from the CH(2) group to the -NO(2) group. Our simulations suggest that the initial chemical processes of shocked HMX depend on the impact velocity; they give new insights into the initial decomposition mechanism of HMX upon shock loading at the atomistic level and have important implications for the understanding and development of energetic materials.

  13. Factors Affecting Regional Per-Capita Carbon Emissions in China Based on an LMDI Factor Decomposition Model

    PubMed Central

    Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang

    2013-01-01

    China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model–panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions, while in 2009 the pecking order changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, and the restraining effect of energy efficiency on the increase in per-capita carbon emissions was much greater than that of energy structure. (3) Based on the decomposition, the panel co-integration test of the factors that affected per-capita carbon emissions showed that Central China had the best energy structure elasticity in its regional per-capita carbon emissions; Central China was also ranked first for energy efficiency elasticity, while Western China was ranked first for economic development elasticity. PMID:24353753

  14. Factors affecting regional per-capita carbon emissions in China based on an LMDI factor decomposition model.

    PubMed

    Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang

    2013-01-01

    China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model-panel co-integration test two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions, while in 2009 the pecking order changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development, and the restraining effect of energy efficiency on the increase in per-capita carbon emissions was much greater than that of energy structure. (3) Based on the decomposition, the panel co-integration test of the factors that affected per-capita carbon emissions showed that Central China had the best energy structure elasticity in its regional per-capita carbon emissions; Central China was also ranked first for energy efficiency elasticity, while Western China was ranked first for economic development elasticity.
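
    The additive LMDI machinery itself is compact enough to sketch. For a Kaya-type identity C = P·(G/P)·(E/G)·(C/E), each factor's contribution is the logarithmic mean of the start and end emissions times the log ratio of that factor, and the contributions sum exactly to ΔC. The identity and numbers below are illustrative, not the study's own specification or data.

```python
import numpy as np

def logmean(a, b):
    """Logarithmic mean, L(a, b) = (a - b) / (ln a - ln b)."""
    return (a - b) / (np.log(a) - np.log(b)) if a != b else a

# Additive LMDI-I sketch for C = P * (G/P) * (E/G) * (C/E)
# (population, affluence, energy intensity, carbon factor); illustrative data.
P0, G0, E0, C0 = 100.0, 500.0, 300.0, 80.0   # base year
P1, G1, E1, C1 = 110.0, 700.0, 360.0, 95.0   # target year

f0 = [P0, G0 / P0, E0 / G0, C0 / E0]
f1 = [P1, G1 / P1, E1 / G1, C1 / E1]

L = logmean(C1, C0)
effects = [L * np.log(x1 / x0) for x0, x1 in zip(f0, f1)]
print("factor effects:", np.round(effects, 2))
print("sum of effects:", round(sum(effects), 2), " vs  dC =", C1 - C0)  # equal by construction
```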

  15. A Domain Decomposition Parallelization of the Fast Marching Method

    NASA Technical Reports Server (NTRS)

    Herrmann, M.

    2003-01-01

    In this paper, the first domain decomposition parallelization of the Fast Marching Method for level sets has been presented. Parallel speedup has been demonstrated in both the optimal and non-optimal domain decomposition case. The parallel performance of the proposed method is strongly dependent on load balancing separately the number of nodes on each side of the interface. A load imbalance of nodes on either side of the domain leads to an increase in communication and rollback operations. Furthermore, the amount of inter-domain communication can be reduced by aligning the inter-domain boundaries with the interface normal vectors. In the case of optimal load balancing and aligned inter-domain boundaries, the proposed parallel FMM algorithm is highly efficient, reaching efficiency factors of up to 0.98. Future work will focus on the extension of the proposed parallel algorithm to higher order accuracy. Also, to further enhance parallel performance, the coupling of the domain decomposition parallelization to the G(sub 0)-based parallelization will be investigated.

  16. Robust and Efficient Biomolecular Clustering of Tumor Based on ${p}$ -Norm Singular Value Decomposition.

    PubMed

    Kong, Xiang-Zhen; Liu, Jin-Xing; Zheng, Chun-Hou; Hou, Mi-Xiao; Wang, Juan

    2017-07-01

    High dimensionality has become a typical feature of biomolecular data. In this paper, a novel dimension reduction method named p-norm singular value decomposition (PSVD) is proposed to seek the low-rank approximation matrix to the biomolecular data. To enhance robustness to outliers, the Lp-norm is taken as the error function and the Schatten p-norm is used as the regularization function in the optimization model. To evaluate the performance of PSVD, the K-means clustering method is then employed for tumor clustering based on the low-rank approximation matrix. Extensive experiments are carried out on five gene expression data sets, including two benchmark data sets and three higher dimensional data sets from The Cancer Genome Atlas. The experimental results demonstrate that the PSVD-based method outperforms many existing methods. In particular, it is experimentally shown that the proposed method is more efficient for processing higher dimensional data, with good robustness, stability, and superior time performance.
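
    A minimal sketch of the overall pipeline, with a plain truncated SVD standing in for the paper's PSVD (the Lp-/Schatten-p optimization itself is not reproduced): compute a low-rank representation of a synthetic expression matrix, then cluster the samples with k-means.

```python
import numpy as np
from sklearn.decomposition import TruncatedSVD
from sklearn.cluster import KMeans

# Synthetic stand-in for gene expression data: three groups of 50 samples,
# each with 2000 features and a different mean level.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=m, size=(50, 2000)) for m in (-1.0, 0.0, 1.0)])

Z = TruncatedSVD(n_components=10, random_state=0).fit_transform(X)  # low-rank representation
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(Z)
print("cluster sizes:", np.bincount(labels))
```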

  17. An integrated condition-monitoring method for a milling process using reduced decomposition features

    NASA Astrophysics Data System (ADS)

    Liu, Jie; Wu, Bo; Wang, Yan; Hu, Youmin

    2017-08-01

    Complex and non-stationary cutting chatter affects productivity and quality in the milling process. Developing an effective condition-monitoring approach is critical to accurately identify cutting chatter. In this paper, an integrated condition-monitoring method is proposed, where reduced features are used to efficiently recognize and classify machine states in the milling process. In the proposed method, vibration signals are decomposed into multiple modes with variational mode decomposition, and Shannon power spectral entropy is calculated to extract features from the decomposed signals. Principal component analysis is adopted to reduce feature size and computational cost. With the extracted feature information, the probabilistic neural network model is used to recognize and classify the machine states, including stable, transition, and chatter states. Experimental studies are conducted, and results show that the proposed method can effectively detect cutting chatter during different milling operation conditions. This monitoring method is also efficient enough to satisfy fast machine state recognition and classification.
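
    The feature-extraction side of the pipeline can be sketched as follows: Shannon power spectral entropy per decomposed mode, followed by PCA to reduce feature size. The variational-mode-decomposition step is assumed to have been performed elsewhere; the `modes` array is a hypothetical stand-in for its output.

```python
import numpy as np
from scipy.signal import periodogram
from sklearn.decomposition import PCA

def spectral_entropy(x, fs=1.0):
    """Shannon power spectral entropy of one decomposed mode."""
    _, pxx = periodogram(x, fs=fs)
    p = pxx / pxx.sum()
    p = p[p > 0]
    return -(p * np.log(p)).sum()

rng = np.random.default_rng(0)
modes = rng.standard_normal((120, 5, 1024))          # hypothetical VMD output:
                                                     # (n_samples, n_modes, n_points)
feats = np.array([[spectral_entropy(m) for m in sample] for sample in modes])
reduced = PCA(n_components=3).fit_transform(feats)   # reduced features for the classifier
print("feature matrix:", feats.shape, "->", reduced.shape)
```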

  18. Multiscale infrared and visible image fusion using gradient domain guided image filtering

    NASA Astrophysics Data System (ADS)

    Zhu, Jin; Jin, Weiqi; Li, Li; Han, Zhenghao; Wang, Xia

    2018-03-01

    For better surveillance with infrared and visible imaging, a novel hybrid multiscale decomposition fusion method using gradient domain guided image filtering (HMSD-GDGF) is proposed in this study. In this method, hybrid multiscale decompositions of the source images with guided image filtering and gradient domain guided image filtering are first applied, after which the weight maps at each scale are obtained using a saliency detection technique and filtering, with three different fusion rules applied at different scales. The three types of fusion rules are for the small-scale detail level, the large-scale detail level, and the base level. Finally, the target becomes more salient and can be more easily detected in the fusion result, with the detail information of the scene being fully displayed. Analysis of experimental comparisons with state-of-the-art fusion methods shows that the HMSD-GDGF method has obvious advantages in fidelity of salient information (including structural similarity, brightness, and contrast), preservation of edge features, and human visual perception. Therefore, visual effects can be improved by using the proposed HMSD-GDGF method.

  19. A New Strategy for ECG Baseline Wander Elimination Using Empirical Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Shahbakhti, Mohammad; Bagheri, Hamed; Shekarchi, Babak; Mohammadi, Somayeh; Naji, Mohsen

    2016-06-01

    Electrocardiogram (ECG) signals can be affected by various artifacts and noises that have biological and external sources. Baseline wander (BW) is a low-frequency artifact that may be caused by breathing, body movements and loose sensor contact. In this paper, a novel method based on empirical mode decomposition (EMD) for the removal of baseline noise from the ECG is presented. Compared to other EMD-based methods, the novelty of this research lies in determining the optimal number of decomposition levels for ECG BW de-noising using the mean power frequency (MPF), while also reducing processing time. To evaluate the performance of the proposed method, a fifth-order Butterworth high-pass filter (BHPF) with a cut-off frequency of 0.5 Hz and a wavelet approach are applied for comparison. Three performance indices, signal-to-noise ratio (SNR), mean square error (MSE) and correlation coefficient (CC), between the pure and filtered signals are utilized to assess the presented techniques. Results suggest that the EMD-based method outperforms the other filtering methods.
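
    A minimal sketch of EMD-based baseline-wander removal, assuming the third-party PyEMD package ("EMD-signal" on PyPI): decompose the signal, treat the slowest IMFs plus residue as baseline, and subtract. The fixed choice of two discarded IMFs is a stand-in for the paper's MPF-based selection of the decomposition level.

```python
import numpy as np
from PyEMD import EMD   # assumes the PyEMD ("EMD-signal") package is installed

fs = 360.0
t = np.arange(0, 10, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t) * np.sin(2 * np.pi * 8 * t) ** 20  # crude beat train
baseline = 0.5 * np.sin(2 * np.pi * 0.25 * t)                             # respiration-like drift
signal = ecg_like + baseline

imfs = EMD()(signal)          # rows = IMFs, fastest oscillations first, slowest/residue last
n_discard = 2                 # assumed number of slow IMFs treated as baseline
clean = imfs[:-n_discard].sum(axis=0)

rmse = np.sqrt(np.mean((clean - ecg_like) ** 2))
print(f"baseline-removal RMSE: {rmse:.4f}")
```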

  20. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than a training set in dictionary learning. This dictionary adaptation problem is solved by using the augmented Lagrangian multiplier (ALM) method iteratively. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and polluted by noise and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).

  1. Decomposition of Composite Electric Field in a Three-Phase D-Dot Voltage Transducer Measuring System

    PubMed Central

    Hu, Xueqi; Wang, Jingang; Wei, Gang; Deng, Xudong

    2016-01-01

    In line with the wider application of non-contact voltage transducers in the engineering field, transducers are required to deliver better performance in different measuring environments. In the present study, the D-dot voltage transducer is further improved on the basis of previous research in order to meet the requirements for long-distance measurement of electric transmission lines. When measuring three-phase electric transmission lines, problems such as synchronous data collection and the composite electric field need to be resolved. A decomposition method is proposed for the superimposed electric field generated between neighboring phases. The charge simulation method is utilized to derive the decomposition equation of the composite electric field, and the validity of the proposed method is verified by simulation software. With the derived equation as the algorithmic foundation, this paper improves the hardware circuits, establishes a measuring system and constructs an experimental platform for testing. Under experimental conditions, a 10 kV electric transmission line was tested for steady-state errors, and the measuring results of the transducer and the high-voltage detection head were compared. Ansoft Maxwell simulation software was adopted to obtain the electric field intensity at different positions under the transmission lines; these values and the measured values of the transducer were also compared. Experimental results show that the three-phase transducer is characterized by relatively good synchronization of data measurement and high-precision measuring results, with an error ratio within the prescribed limit. Therefore, the proposed three-phase transducer can be broadly applied and popularized in the engineering field. PMID:27754340

  2. Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Abuasad, Salah; Hashim, Ishak

    2018-04-01

    In this paper, we present for the first time the homotopy decomposition method with a modified definition of the beta fractional derivative to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained by using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.

  3. Energy-Based Wavelet De-Noising of Hydrologic Time Series

    PubMed Central

    Sang, Yan-Fang; Liu, Changming; Wang, Zhonggen; Wen, Jun; Shang, Lunyu

    2014-01-01

    De-noising is a substantial issue in hydrologic time series analysis, but it is a difficult task owing to the shortcomings of existing methods. In this paper an energy-based wavelet de-noising method is proposed. It removes noise by comparing the energy distribution of a series with the background energy distribution, which is established from a Monte-Carlo test. Differing from the wavelet threshold de-noising (WTD) method, which is based on thresholding wavelet coefficients, the proposed method is based on the energy distribution of the series. It can distinguish noise from deterministic components in a series, and the uncertainty of the de-noising result can be quantitatively estimated using a proper confidence interval, which WTD cannot do. Analysis of both synthetic and observed series verified the comparable power of the proposed method and WTD, but the de-noising process of the former is more easily operable. The results also indicate the influences of three key factors (wavelet choice, decomposition level choice and noise content) on wavelet de-noising. The wavelet should be carefully chosen when using the proposed method. The suitable decomposition level for wavelet de-noising should correspond to the series' deterministic sub-signal with the smallest temporal scale. If too much noise is included in a series, an accurate de-noising result cannot be obtained by the proposed method or WTD; the series would then show purely random rather than autocorrelated character, so de-noising is no longer needed. PMID:25360533
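
    For reference, the conventional wavelet-threshold de-noising (WTD) baseline that the energy-based method is compared against can be sketched with PyWavelets; the wavelet, the level, and the universal soft threshold are exactly the user choices whose influence the paper highlights.

```python
import numpy as np
import pywt   # PyWavelets

def wtd(series, wavelet="db4", level=4):
    """Conventional WTD sketch: decompose, soft-threshold the detail
    coefficients with the universal threshold, reconstruct."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745         # noise scale from finest details
    thr = sigma * np.sqrt(2 * np.log(len(series)))         # universal threshold
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(series)]

t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(t.size)
print("RMSE after WTD:", np.sqrt(np.mean((wtd(noisy) - clean) ** 2)))
```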

  4. SU-E-QI-14: Quantitative Variogram Detection of Mild, Unilateral Disease in Elastase-Treated Rats

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jacob, R; Carson, J

    2014-06-15

    Purpose: Determining the presence of mild or early disease in the lungs can be challenging and subjective. We present a rapid and objective method for evaluating lung damage in a rat model of unilateral mild emphysema based on a new approach to heterogeneity assessment. We combined octree decomposition (used in three-dimensional (3D) computer graphics) with variograms (used in geostatistics to assess spatial relationships) to evaluate 3D computed tomography (CT) lung images for disease. Methods: Male Sprague-Dawley rats (232 ± 7 g) were intratracheally dosed with 50 U/kg of elastase dissolved in 200 μL of saline to a single lobe (n=6) or with saline only (n=5). After four weeks, 3D micro-CT images were acquired at end expiration on mechanically ventilated rats using prospective gating. Images were masked, and lungs were decomposed to homogeneous blocks of 2×2×2, 4×4×4, and 8×8×8 voxels using octree decomposition. The spatial variance (the square of the difference of signal intensity) between all pairs of the 8×8×8 blocks was calculated. Variograms (graphs of distance vs. variance) were made, and the data were fit to a power law and the exponent determined. The mean HU values, coefficient of variation (CoV), and emphysema index (EI) were calculated and compared to the variograms. Results: The variogram analysis showed that significant differences between groups existed (p<0.01), whereas the mean HU (p=0.07), CoV (p=0.24), and EI (p=0.08) did not. Calculation time for the variogram for a typical 1000-block decomposition was ∼6 seconds, and octree decomposition took ∼2 minutes. Decomposing the images prior to variogram calculation resulted in a ∼700x decrease in time as compared to other published approaches. Conclusions: Our results suggest that the approach combining octree decomposition and variogram analysis may be a rapid, non-subjective, and sensitive imaging-based biomarker for quantitative characterization of lung disease.
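
    The variogram computation itself is straightforward once the octree blocks are in hand: accumulate squared intensity differences versus separation distance for all block pairs, bin by distance, and fit a power law in log-log space. A generic sketch under those assumptions, not the exact pipeline of the abstract:

```python
import numpy as np

def empirical_variogram(coords, values, n_bins=20):
    """Empirical variogram from block data. coords: (n,3) block centroids,
    values: (n,) mean block intensities. Empty bins yield NaN and are
    skipped in the fit below."""
    diff = coords[:, None, :] - coords[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    gamma = 0.5 * (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(len(values), k=1)        # unique pairs only
    d, g = dist[iu], gamma[iu]
    bins = np.linspace(d.min(), d.max(), n_bins + 1)
    idx = np.digitize(d, bins) - 1
    h = np.array([d[idx == i].mean() if (idx == i).any() else np.nan for i in range(n_bins)])
    v = np.array([g[idx == i].mean() if (idx == i).any() else np.nan for i in range(n_bins)])
    return h, v

def powerlaw_exponent(h, v):
    """Fit v = c * h^m in log-log space; the exponent m is the group metric."""
    ok = np.isfinite(h) & np.isfinite(v) & (h > 0) & (v > 0)
    m, _ = np.polyfit(np.log(h[ok]), np.log(v[ok]), 1)
    return m

# demo on synthetic block data (smooth trend plus noise)
rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(300, 3))
vals = 0.1 * coords[:, 0] + rng.standard_normal(300)
h, v = empirical_variogram(coords, vals)
print("power-law exponent:", round(powerlaw_exponent(h, v), 3))
```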

  5. Ranking of critical species to preserve the functionality of mutualistic networks using the k-core decomposition

    PubMed Central

    García-Algarra, Javier; Pastor, Juan Manuel; Iriondo, José María

    2017-01-01

    Background Network analysis has become a relevant approach to analyze cascading species extinctions resulting from perturbations of mutualistic interactions as a result of environmental change. In this context, it is essential to be able to point out key species, whose stability would prevent cascading extinctions and the consequent loss of ecosystem function. In this study, we aim to explain how the k-core decomposition sheds light on the robustness of bipartite mutualistic networks. Methods We defined three k-magnitudes based on the k-core decomposition: k-radius, k-degree, and k-risk. The first one, k-radius, quantifies the distance from a node to the innermost shell of the partner guild, while k-degree provides a measure of centrality in the k-shell based decomposition. k-risk is a way to measure the vulnerability of a network to the loss of a particular species. Using these magnitudes we analyzed 89 mutualistic networks involving plant pollinators or seed dispersers. Two static extinction procedures were implemented in which k-degree and k-risk were compared against other commonly used ranking indexes, such as MusRank, explained in detail in Material and Methods. Results When extinctions take place in both guilds, k-risk is the best ranking index if the goal is to identify the key species that preserve the giant component. When species are removed only in the primary class and cascading extinctions are measured in the secondary class, the most effective ranking index to identify the key species that preserve the giant component is k-degree. However, the MusRank index was more effective when the goal is to identify the key species that preserve the greatest species richness in the second class. Discussion The k-core decomposition offers a new topological view of the structure of mutualistic networks. The new k-radius, k-degree and k-risk magnitudes take advantage of its properties and provide new insight into the structure of mutualistic networks. The k-risk and k-degree ranking indexes are especially effective approaches to identify key species to preserve when conservation practitioners focus on the preservation of ecosystem functionality over species richness. PMID:28533969
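
    The underlying k-core machinery is readily available in networkx: compute each node's core number (its k-shell) on a small bipartite graph and rank species by it. The paper's k-radius, k-degree and k-risk magnitudes are built on top of this decomposition; only the core numbers themselves are computed in this sketch, and the toy network is an illustrative assumption.

```python
import networkx as nx

# Toy bipartite plant-pollinator network: plants p*, animals a*.
edges = [("p1", "a1"), ("p1", "a2"), ("p1", "a3"),
         ("p2", "a1"), ("p2", "a2"),
         ("p3", "a2"), ("p3", "a3"),
         ("p4", "a3"), ("p5", "a1")]
G = nx.Graph(edges)

core = nx.core_number(G)                        # node -> k-shell index
ranking = sorted(core, key=core.get, reverse=True)
print("core numbers:", core)
print("innermost-shell species first:", ranking)
```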

  6. In situ spectroscopic studies on vapor phase catalytic decomposition of dimethyl oxalate.

    PubMed

    Hegde, Shweta; Tharpa, Kalsang; Akuri, Satyanarayana Reddy; K, Rakesh; Kumar, Ajay; Deshpande, Raj; Nair, Sreejit A

    2017-03-15

    Dimethyl oxalate (DMO) has recently gained prominence as a valuable intermediate for the production of compounds of commercial importance. The stability of DMO is poor, and hence it can decompose under reaction conditions. The mechanism of DMO decomposition has, however, not been reported, particularly on catalytic surfaces. Insights into the decomposition mechanism would help in designing catalysts for its effective molecular transformation. It is well known that DMO is sensitive to moisture, which can also contribute to its decomposition. The present work reports the results of DMO decomposition on various catalytic materials. The materials studied comprise acidic (γ-Al2O3), basic (MgO), weakly acidic (ZnAl2O4) and neutral surfaces such as α-Al2O3 and mesoporous precipitated SiO2. Infrared spectroscopy is used to identify the nature of adsorption of the molecule on the various surfaces. The spectroscopic study is carried out at a temperature of 200 °C, which is the onset of gas-phase decomposition of DMO. The results indicate that the stability of DMO is lower than that of the corresponding acid, i.e., oxalic acid, which is also one of the products of decomposition. Spectroscopic data suggest that DMO decomposition is related to surface acidity, and the extent of decomposition depends on the number of surface hydroxyl groups. Decomposition was also observed on α-Al2O3, which was attributed to residual surface hydroxyl groups. DMO decomposition to oxalic acid was not observed on the basic surface (MgO).

  7. Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension

    NASA Astrophysics Data System (ADS)

    Yan, Z.; Luan, X.

    2017-12-01

    Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm that uses wave-field separation, based on the scale differences between the effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, de-noising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose seismic data and obtain a series of intrinsic mode functions (IMFs) at different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. Then we use threshold correlation filtering to separate the valid signal and the random noise effectively. Compared with the traditional EMD method, the results show that the new method achieves better suppression of seismic random noise. Implementation process: The EMD algorithm is used to decompose the seismic signals into IMF sets and to analyze their spectra. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: the first category comprises the effective wave components at larger scales; the second category is the noise part at smaller scales; the third category consists of the IMF components containing random noise. Then, the third kind of IMF component is processed by the Hausdorff dimension algorithm, selecting an appropriate time window size, initial step and increment to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise is between 1.0 and 1.05, while the dimension of the effective wave is between 1.05 and 2.0. On the basis of the previous steps, according to the dimension difference between the random noise and the effective signal, we extract the sample points whose fractal dimension is less than or equal to 1.05 for each IMF component to separate the residual noise. Using the IMF components after this dimension filtering, together with the effective-wave IMF components from the first selection, for reconstruction, we obtain the de-noised result.

  8. Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2013-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  9. Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2012-01-01

    Empirical mode decomposition is an adaptive signal processing method that when applied to a broadband signal, such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
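
    Since both records above hinge on forming second-order, two-point, space-time correlations from the band-pass-filtered (EMD-decomposed) velocity signals, a minimal numpy sketch of that correlation is given below; the signals and lag range are placeholders, and the EMD filtering is assumed to have been applied beforehand.

    ```python
    import numpy as np

    def space_time_correlation(u1, u2, max_lag):
        """Normalized two-point cross-correlation R12(tau) of fluctuating
        signals u1(t), u2(t) measured at two points (e.g., one IMF band each)."""
        u1 = u1 - u1.mean()
        u2 = u2 - u2.mean()
        norm = np.sqrt((u1 ** 2).mean() * (u2 ** 2).mean())
        lags = np.arange(-max_lag, max_lag + 1)
        r = np.array([np.mean(u1[max(0, -l):len(u1) - max(0, l)] *
                              u2[max(0, l):len(u2) - max(0, -l)]) for l in lags])
        return lags, r / norm
    ```

    The lag of the correlation peak, together with the probe separation, then gives an estimate of the turbulent phase (convection) velocity, and the correlation widths give the time and length scales.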

  10. Studies on thermal decomposition behaviors of polypropylene using molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Huang, Jinbao; He, Chao; Tong, Hong; Pan, Guiying

    2017-11-01

    Polypropylene (PP) is one of the main components of waste plastics. In order to understand the mechanism of PP thermal decomposition, the pyrolysis behaviour of PP has been simulated from 300 to 1000 K under periodic boundary conditions by the molecular dynamics method, based on the AMBER force field. The simulation results show that the pyrolysis process of PP can be divided into three stages: a low-temperature pyrolysis stage, an intermediate-temperature stage, and a high-temperature pyrolysis stage. PP pyrolysis proceeds mainly by random main-chain scission, and the possible formation mechanisms of the major pyrolysis products were analyzed.

  11. Raman analysis of non stoichiometric Ni1-δO

    NASA Astrophysics Data System (ADS)

    Dubey, Paras; Choudhary, K. K.; Kaurav, Netram

    2018-04-01

    The thermal decomposition method was used to synthesize non-stoichiometric nickel oxide at different sintering temperatures up to 1100 °C. The structures of the synthesized compounds were analyzed by X-ray diffraction (XRD), and the magnetic ordering was studied by Raman scattering spectroscopy for samples sintered at different temperatures. It was found that changing the sintering temperature alters the stoichiometry of the sample and hence the intensity of the two-magnon band. These results are interpreted as follows: as the decomposition temperature increases, defects present in the non-stoichiometric nickel oxide are healed, and the antiferromagnetic spin correlations change accordingly.

  12. Kinetics of the isothermal decomposition of zirconium hydride: terminal solid solubility for precipitation and dissolution

    NASA Astrophysics Data System (ADS)

    Denisov, E. A.; Kompaniets, T. N.; Voyt, A. P.

    2018-05-01

    The hydrogen permeation technique in the surface-limited regime (SLR) was used for the first time to study the isothermal decomposition of zirconium hydride. It is shown that under isothermal conditions, the hydrogen terminal solid solubilities in the α-phase for hydride precipitation (TSSp) and dissolution (TSSd) differ by only 6%, in contrast to the 20-30% indicated in the available literature. It is demonstrated that even the minimum heating/cooling rate (1 °C/min) used in the traditional methods of studying TSSp and TSSd is too high to exclude the effect of kinetics on the results obtained.

  13. Microbial Signatures of Cadaver Gravesoil During Decomposition.

    PubMed

    Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T

    2016-04-01

    Genomic studies have estimated there are approximately 10^3-10^6 bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond the more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with gravesoil from human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers, placed on the surface or buried, that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results show that ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. A better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.

  14. Transportation Network Analysis and Decomposition Methods

    DOT National Transportation Integrated Search

    1978-03-01

    The report outlines research in transportation network analysis using decomposition techniques as a basis for problem solutions. Two transportation network problems were considered in detail: a freight network flow problem and a scheduling problem fo...

  15. System and methods for determining masking signals for applying empirical mode decomposition (EMD) and for demodulating intrinsic mode functions obtained from application of EMD

    DOEpatents

    Senroy, Nilanjan [New Delhi, IN]; Suryanarayanan, Siddharth [Littleton, CO]

    2011-03-15

    A computer-implemented method of signal processing is provided. The method includes generating one or more masking signals based upon a computed Fourier transform of a received signal. The method further includes determining one or more intrinsic mode functions (IMFs) of the received signal by performing a masking-signal-based empirical mode decomposition (EMD) using the one or more masking signals.
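
    A minimal sketch of the masking-signal idea follows, using the third-party PyEMD package (pip: EMD-signal) for the sifting itself; the mask amplitude and frequency heuristics below are illustrative assumptions, not the patent's prescription.

    ```python
    import numpy as np
    from PyEMD import EMD   # third-party package, assumed installed

    def masked_first_imf(x, fs):
        """First IMF via masking-signal EMD; mask built from the signal's FFT."""
        n = x.size
        spec = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(n, d=1.0 / fs)
        f_mask = freqs[np.argmax(spec[1:]) + 1]       # dominant non-DC frequency
        a_mask = 1.6 * x.std()                        # heuristic mask amplitude
        mask = a_mask * np.sin(2 * np.pi * f_mask * np.arange(n) / fs)
        imf_plus = EMD().emd(x + mask)[0]             # sift the masked signals
        imf_minus = EMD().emd(x - mask)[0]
        return 0.5 * (imf_plus + imf_minus)           # the mask cancels on average
    ```

    Adding and subtracting the same mask before sifting, then averaging, suppresses mode mixing while cancelling the mask's own contribution.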

  16. Vapor Pressure Data and Analysis for Selected HD Decomposition Products: 1,4-Thioxane, Divinyl Sulfoxide, Chloroethyl Acetylsulfide, and 1,4-Dithiane

    DTIC Science & Technology

    2018-06-01

    decomposition products from bis-(2-chloroethyl) sulfide (HD). These data were measured using an ASTM International method that is based on differential...2.1 Materials and Method ........................................................................................2 2.2 Data Analysis...and Method The source and purity of the materials studied are listed in Table 1. Table 1. Sample Information for Title Compounds Compound

  17. Extraction of drainage networks from large terrain datasets using high throughput computing

    NASA Astrophysics Data System (ADS)

    Gong, Jianya; Xie, Jibo

    2009-02-01

    Advanced digital photogrammetry and remote sensing technology produces large terrain datasets (LTD). How to process and use these LTD has become a big challenge for GIS users. Extracting drainage networks, which are basic for hydrological applications, from LTD is one of the typical applications of digital terrain analysis (DTA) in geographical information applications. Existing serial drainage algorithms cannot deal with large data volumes in a timely fashion, and few GIS platforms can process LTD beyond the GB size. High throughput computing (HTC), a distributed parallel computing mode, is proposed to improve the efficiency of drainage networks extraction from LTD. Drainage network extraction using HTC involves two key issues: (1) how to decompose the large DEM datasets into independent computing units and (2) how to merge the separate outputs into a final result. A new decomposition method is presented in which the large datasets are partitioned into independent computing units using natural watershed boundaries instead of using regular 1-dimensional (strip-wise) and 2-dimensional (block-wise) decomposition. Because the distribution of drainage networks is strongly related to watershed boundaries, the new decomposition method is more effective and natural. The method to extract natural watershed boundaries was improved by using multi-scale DEMs instead of single-scale DEMs. A HTC environment is employed to test the proposed methods with real datasets.
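
    A toy sketch of the watershed-based partitioning idea is given below, using scikit-image's watershed on a synthetic DEM; the HTC scheduling, multi-scale boundary extraction, and result merging described in the record are not reproduced.

    ```python
    import numpy as np
    from scipy import ndimage
    from skimage.segmentation import watershed   # scikit-image, assumed installed

    rng = np.random.default_rng(0)
    dem = ndimage.gaussian_filter(rng.random((256, 256)), sigma=12)  # synthetic DEM

    # One marker per local minimum: each drainage basin becomes one work unit.
    minima = dem == ndimage.minimum_filter(dem, size=15)
    markers, n_basins = ndimage.label(minima)
    basins = watershed(dem, markers)             # basin label for every cell

    units = [np.argwhere(basins == i) for i in range(1, n_basins + 1)]
    print(n_basins, "independent computing units")
    ```

    Because drainage networks do not cross watershed boundaries, each labeled basin can be processed independently and the per-unit outputs merged without conflict.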

  18. A Survey of Singular Value Decomposition Methods and Performance Comparison of Some Available Serial Codes

    NASA Technical Reports Server (NTRS)

    Plassman, Gerald E.

    2005-01-01

    This contractor report describes a performance comparison of available alternative complete Singular Value Decomposition (SVD) methods and implementations suitable for incorporation into point-spread-function deconvolution algorithms. The report also presents a survey of alternative algorithms, including partial SVDs, special-case SVDs, and others developed for concurrent processing systems.
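
    For orientation, both complete and partial SVDs of the kind surveyed are available off the shelf in the standard Python stack; the sketch below contrasts a complete LAPACK SVD with a partial ARPACK one (matrix size illustrative).

    ```python
    import numpy as np
    from scipy.sparse.linalg import svds

    rng = np.random.default_rng(0)
    A = rng.standard_normal((500, 200))

    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # complete SVD (LAPACK)

    Uk, sk, Vtk = svds(A, k=10)                        # partial SVD (ARPACK)
    sk = sk[::-1]                                      # svds returns ascending order
    print(np.allclose(sk, s[:10]))                     # leading values agree -> True
    ```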

  19. Computing the Partial Fraction Decomposition of Rational Functions with Irreducible Quadratic Factors in the Denominators

    ERIC Educational Resources Information Center

    Man, Yiu-Kwong

    2012-01-01

    In this note, a new method for computing the partial fraction decomposition of rational functions with irreducible quadratic factors in the denominators is presented. This method involves polynomial divisions and substitutions only, without having to solve for the complex roots of the irreducible quadratic polynomial or to solve a system of linear…
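
    The flavor of such a decomposition can be reproduced with a computer algebra system; the example below uses SymPy's general apart routine rather than the note's specific division-and-substitution method.

    ```python
    from sympy import symbols, apart

    x = symbols('x')
    expr = (3*x**2 + 2*x + 1) / ((x - 1) * (x**2 + x + 1))
    print(apart(expr, x))   # 2/(x - 1) + (x + 1)/(x**2 + x + 1)
    ```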

  20. Quantitative separation of tetralin hydroperoxide from its decomposition products by high performance liquid chromatography

    NASA Technical Reports Server (NTRS)

    Worstell, J. H.; Daniel, S. R.

    1981-01-01

    A method for the separation and analysis of tetralin hydroperoxide and its decomposition products by high pressure liquid chromatography has been developed. Elution with a single, mixed solvent from a μ-Porasil column was employed. Constant response factors (internal standard method) over large concentration ranges and reproducible retention parameters are reported.

  1. Multidisciplinary Optimization Methods for Aircraft Preliminary Design

    NASA Technical Reports Server (NTRS)

    Kroo, Ilan; Altus, Steve; Braun, Robert; Gage, Peter; Sobieski, Ian

    1994-01-01

    This paper describes a research program aimed at improved methods for multidisciplinary design and optimization of large-scale aeronautical systems. The research involves new approaches to system decomposition, interdisciplinary communication, and methods of exploiting coarse-grained parallelism for analysis and optimization. A new architecture, that involves a tight coupling between optimization and analysis, is intended to improve efficiency while simplifying the structure of multidisciplinary, computation-intensive design problems involving many analysis disciplines and perhaps hundreds of design variables. Work in two areas is described here: system decomposition using compatibility constraints to simplify the analysis structure and take advantage of coarse-grained parallelism; and collaborative optimization, a decomposition of the optimization process to permit parallel design and to simplify interdisciplinary communication requirements.

  2. A resilient domain decomposition polynomial chaos solver for uncertain elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Mycek, Paul; Contreras, Andres; Le Maître, Olivier; Sargsyan, Khachik; Rizzi, Francesco; Morris, Karla; Safta, Cosmin; Debusschere, Bert; Knio, Omar

    2017-07-01

    A resilient method is developed for the solution of uncertain elliptic PDEs on extreme scale platforms. The method is based on a hybrid domain decomposition, polynomial chaos (PC) framework that is designed to address soft faults. Specifically, parallel and independent solves of multiple deterministic local problems are used to define PC representations of local Dirichlet boundary-to-boundary maps that are used to reconstruct the global solution. A LAD-lasso type regression is developed for this purpose. The performance of the resulting algorithm is tested on an elliptic equation with an uncertain diffusivity field. Different test cases are considered in order to analyze the impacts of correlation structure of the uncertain diffusivity field, the stochastic resolution, as well as the probability of soft faults. In particular, the computations demonstrate that, provided sufficiently many samples are generated, the method effectively overcomes the occurrence of soft faults.

  3. Estimation of slip distribution using an inverse method based on spectral decomposition of Green's function utilizing Global Positioning System (GPS) data

    NASA Astrophysics Data System (ADS)

    Jin, Honglin; Kato, Teruyuki; Hori, Muneo

    2007-07-01

    An inverse method based on the spectral decomposition of the Green's function was employed for estimating a slip distribution. We conducted numerical simulations along the Philippine Sea plate (PH) boundary in southwest Japan using this method to examine how to determine the essential parameters which are the number of deformation function modes and their coefficients. Japanese GPS Earth Observation Network (GEONET) Global Positioning System (GPS) data were used for three years covering 1997-1999 to estimate interseismic back slip distribution in this region. The estimated maximum back slip rate is about 7 cm/yr, which is consistent with the Philippine Sea plate convergence rate. Areas of strong coupling are confined between depths of 10 and 30 km and three areas of strong coupling were delineated. These results are consistent with other studies that have estimated locations of coupling distribution.
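
    The mode-truncation idea can be illustrated generically with a truncated-SVD least-squares inverse; this is a stand-in for the paper's spectral decomposition of the Green's function, and G (Green's function matrix), d (GPS displacement data), and k (number of retained modes) are placeholders.

    ```python
    import numpy as np

    def truncated_svd_slip(G, d, k):
        """Least-squares slip estimate keeping only the k leading modes of G."""
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        return Vt[:k].T @ (U[:, :k].T @ d / s[:k])
    ```

    Choosing k trades resolution against noise amplification, which mirrors the record's question of how many deformation function modes to retain.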

  4. Fast computation of radiation pressure force exerted by multiple laser beams on red blood cell-like particles

    NASA Astrophysics Data System (ADS)

    Gou, Ming-Jiang; Yang, Ming-Lin; Sheng, Xin-Qing

    2016-10-01

    Mature red blood cells (RBCs) do not contain nuclei or complex organelles, which makes it possible to regard them approximately as homogeneous-medium particles. To compute the radiation pressure force (RPF) exerted by multiple laser beams on this kind of arbitrarily shaped homogeneous nano-particle, a fast electromagnetic optics method is demonstrated. In general, based on Maxwell's equations, the matrix equation formed by the method of moments (MOM) has many right-hand sides (RHSs) corresponding to the different laser beams. To accelerate the solution of the matrix equation, the algorithm performs a low-rank decomposition of the excitation matrix consisting of all RHSs to identify so-called skeleton laser beams by interpolative decomposition (ID). After the solutions corresponding to the skeletons are obtained, the desired responses can be reconstructed efficiently. Numerical results are presented to validate the developed method.
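
    The skeleton-beams idea can be sketched with SciPy's interpolative decomposition: compress the block of right-hand sides to a few skeleton columns, solve only those, and interpolate the remaining responses. Matrix sizes and the stand-in system matrix below are illustrative assumptions.

    ```python
    import numpy as np
    import scipy.linalg.interpolative as sli

    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 200)) + 200 * np.eye(200)   # stand-in system matrix
    B = rng.standard_normal((200, 20)) @ rng.standard_normal((20, 400))  # correlated RHSs

    k, idx, proj = sli.interp_decomp(B, 1e-10)        # B ~= B[:, idx[:k]] @ P
    P = sli.reconstruct_interp_matrix(idx, proj)
    X_skel = np.linalg.solve(A, B[:, idx[:k]])        # solve only k "skeleton" systems
    X = X_skel @ P                                    # interpolate the rest
    X_ref = np.linalg.solve(A, B)
    print(k, np.linalg.norm(X - X_ref) / np.linalg.norm(X_ref))
    ```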

  5. Towards estimation of respiratory muscle effort with respiratory inductance plethysmography signals and complementary ensemble empirical mode decomposition.

    PubMed

    Chen, Ya-Chen; Hsiao, Tzu-Chien

    2018-07-01

    Respiratory inductance plethysmography (RIP) is an inexpensive, non-invasive, easy-to-use transducer for collecting respiratory movement data. Studies have reported that the RIP signal's amplitude and frequency can be used to discriminate respiratory diseases. However, with the conventional approach to RIP data analysis, respiratory muscle effort cannot be estimated. In this paper, the estimation of respiratory muscle effort from the RIP signal is proposed. A complementary ensemble empirical mode decomposition method was used to extract hidden signals from the RIP signals based on the frequency bands of the activities of different respiratory muscles. To validate the proposed method, an experiment was conducted to collect subjects' RIP signals under thoracic breathing (TB) and abdominal breathing (AB). The experimental results for both TB and AB indicate that the proposed method can be used to loosely estimate the activities of the thoracic muscles, abdominal muscles, and diaphragm.

  6. Learning Low-Rank Decomposition for Pan-Sharpening With Spatial-Spectral Offsets.

    PubMed

    Yang, Shuyuan; Zhang, Kai; Wang, Min

    2017-08-25

    Finding accurate injection components is the key issue in pan-sharpening methods. In this paper, a low-rank pan-sharpening (LRP) model is developed from the new perspective of offset learning. Two offsets are defined to represent the spatial and spectral differences between low-resolution multispectral and high-resolution multispectral (HRMS) images, respectively. To reduce spatial and spectral distortions, spatial equalization and spectral proportion constraints are designed and imposed on the offsets, yielding a spatially and spectrally constrained stable low-rank decomposition algorithm solved via the augmented Lagrange multiplier method. By fine modeling and heuristic learning, our method can simultaneously reduce spatial and spectral distortions in the fused HRMS images. Moreover, our method can efficiently deal with noise and outliers in the source images by exploiting the low-rank and sparse characteristics of the data. Extensive experiments are conducted on several image data sets, and the results demonstrate the efficiency of the proposed LRP.
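
    As a generic stand-in for the paper's constrained stable low-rank decomposition, the following compact robust-PCA split via augmented-Lagrange-multiplier iterations separates a data matrix into low-rank and sparse parts; the parameter choices follow common defaults and are assumptions, and the paper's spatial/spectral constraints are omitted.

    ```python
    import numpy as np

    def svt(X, tau):
        """Singular value thresholding."""
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        return (U * np.maximum(s - tau, 0.0)) @ Vt

    def rpca(D, iters=300):
        """Split D into low-rank L and sparse S (inexact ALM iterations)."""
        m, n = D.shape
        lam = 1.0 / np.sqrt(max(m, n))
        Y = D / max(np.linalg.norm(D, 2), np.abs(D).max() / lam)
        mu = 1.25 / np.linalg.norm(D, 2)
        mu_bar = mu * 1e7
        L = np.zeros_like(D)
        S = np.zeros_like(D)
        for _ in range(iters):
            L = svt(D - S + Y / mu, 1.0 / mu)
            T = D - L + Y / mu
            S = np.sign(T) * np.maximum(np.abs(T) - lam / mu, 0.0)  # soft threshold
            Y = Y + mu * (D - L - S)
            mu = min(mu * 1.05, mu_bar)
        return L, S
    ```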

  7. Preparation and catalytic activities for H{sub 2}O{sub 2} decomposition of Rh/Au bimetallic nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Haijun, E-mail: zhanghaijun@wust.edu.cn; The State Key Laboratory of Refractory and Metallurgy, Wuhan University of Science and Technology, Wuhan 430081; Deng, Xiangong

    2016-07-15

    Graphical abstract: PVP-protected Rh/Au bimetallic nanoparticles (BNPs) were prepared by the hydrogen sacrificial reduction method; the activity of the Rh80Au20 BNPs was about 3.6 times higher than that of Rh NPs. - Highlights: • Rh/Au bimetallic nanoparticles (BNPs) of 3∼5 nm in diameter were prepared. • Activity for H{sub 2}O{sub 2} decomposition of BNPs is 3.6 times higher than that of Rh NPs. • The high activity of BNPs was caused by the existence of charged Rh atoms. • The apparent activation energy for H{sub 2}O{sub 2} decomposition over the BNPs was calculated. - Abstract: PVP-protected Rh/Au bimetallic nanoparticles (BNPs) were prepared by the hydrogen sacrificial reduction method and characterized by UV–vis, XRD, FT-IR, XPS, TEM, HR-TEM and DF-STEM, and the effects of composition on their particle sizes and catalytic activities for H{sub 2}O{sub 2} decomposition were also studied. The as-prepared Rh/Au BNPs possessed a high catalytic activity for H{sub 2}O{sub 2} decomposition, and the activity of the Rh{sub 80}Au{sub 20} BNPs with an average size of 2.7 nm was about 3.6 times higher than that of Rh monometallic nanoparticles (MNPs), even though the Rh MNPs possess a smaller particle size of 1.7 nm. In contrast, Au MNPs with a size of 2.7 nm showed no activity at all. Density functional theory (DFT) calculations as well as XPS results showed that charged Rh and Au atoms, formed via electronic charge transfer effects, could be responsible for the high catalytic activity of the BNPs.

  8. A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis

    NASA Astrophysics Data System (ADS)

    Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.

    2016-12-01

    Health condition analysis and diagnostics of rotating machinery require the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, associated with either healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the reference literature lacks automated methods to achieve a synthetic decomposition into a few physically meaningful modes while avoiding the generation of spurious or meaningless modes. The paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on the probability density functions of the frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.
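
    A minimal sketch of the merging step is given below: adjacent IMFs are summed into one CMF while their normalized spectra remain similar. The Jensen-Shannon distance and the tolerance value are stand-ins for the paper's PDF-based dissimilarity criterion, not its actual definition.

    ```python
    import numpy as np

    def spectrum_pdf(x):
        """Normalized power spectrum, treated as a probability density."""
        p = np.abs(np.fft.rfft(x)) ** 2
        return p / p.sum()

    def js_distance(p, q):
        """Jensen-Shannon distance between two spectral PDFs."""
        m = 0.5 * (p + q)
        kl = lambda a, b: np.sum(a * np.log((a + 1e-15) / (b + 1e-15)))
        return np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m))

    def merge_imfs(imfs, tol=0.5):
        """Greedily combine adjacent IMFs with similar spectra into CMFs."""
        cmfs = [imfs[0].copy()]
        for imf in imfs[1:]:
            if js_distance(spectrum_pdf(cmfs[-1]), spectrum_pdf(imf)) < tol:
                cmfs[-1] = cmfs[-1] + imf      # similar scales: same CMF
            else:
                cmfs.append(imf.copy())
        return cmfs
    ```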

  9. Adaptive bearing estimation and tracking of multiple targets in a realistic passive sonar scenario

    NASA Astrophysics Data System (ADS)

    Rajagopal, R.; Challa, Subhash; Faruqi, Farhan A.; Rao, P. R.

    1997-06-01

    In a realistic passive sonar environment, the received signal consists of multipath arrivals from closely separated moving targets, and the signals are contaminated by spatially correlated noise. Differential MUSIC has been proposed to estimate the DOAs in such a scenario. This method estimates the 'noise subspace' in order to estimate the DOAs; however, the noise-subspace estimate has to be updated as and when new data become available. To reduce the computational cost, a new adaptive noise subspace estimation algorithm is proposed in this paper. The salient features of the proposed algorithm are: (1) noise subspace estimation is done by QR decomposition of the difference matrix formed from the data covariance matrix, so that, compared with standard eigen-decomposition based methods which require O(N3) computations, the proposed method requires only O(N2) computations; (2) the noise subspace is updated by updating the QR decomposition; (3) the proposed algorithm works in a realistic sonar environment. In the second part of the paper, the estimated bearing values are used to track multiple targets. To achieve this, the proposed nonlinear-system/linear-measurement extended Kalman filter is applied. Computer simulation results are also presented to support the theory.

  10. Extended FDD-WT method based on correcting the errors due to non-synchronous sensing of sensors

    NASA Astrophysics Data System (ADS)

    Tarinejad, Reza; Damadipour, Majid

    2016-05-01

    In this research, a combined non-parametric method called frequency domain decomposition-wavelet transform (FDD-WT), recently presented by the authors, is extended to correct the errors resulting from non-synchronous sensing of the sensors, in order to broaden the applicability of the algorithm to different kinds of structures, especially very large ones. The analysis process is based on time-frequency domain decomposition and is performed with emphasis on correcting time delays between sensors. Time delay estimation (TDE) methods were investigated for their efficiency and accuracy with noisy environmental records, and the Phase Transform-β (PHAT-β) technique was selected as an appropriate method to modify the operation of the traditional FDD-WT in order to achieve exact results. In this paper, a theoretical example (a 3DOF system) is provided to indicate the effects of non-synchronous sensing of the sensors on the modal parameters; moreover, the Pacoima dam subjected to the 13 Jan 2001 earthquake excitation was selected as a case study. The modal parameters of the dam obtained from the extended FDD-WT method were compared with the output of the classical signal processing method referred to as the 4-Spectral method, as well as with other literature relating to the dynamic characteristics of Pacoima dam. The comparison indicates that the identified values are accurate and reliable.
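
    A minimal sketch of PHAT-weighted time delay estimation follows; the β-generalization and the integration into FDD-WT are omitted, and the PHAT exponent here is fixed at 1.

    ```python
    import numpy as np

    def gcc_phat_delay(x, y, fs):
        """Delay of y relative to x (seconds, positive if y lags) via GCC-PHAT."""
        n = len(x) + len(y)
        R = np.conj(np.fft.rfft(x, n)) * np.fft.rfft(y, n)
        R /= np.abs(R) + 1e-12            # PHAT weighting: keep phase only
        cc = np.fft.irfft(R, n)
        shift = int(np.argmax(np.abs(cc)))
        if shift > n // 2:
            shift -= n                    # wrap to negative lags
        return shift / fs
    ```

    Shifting each sensor record by its estimated delay before the frequency domain decomposition is the correction step the record describes.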

  11. Multi-scale fluctuation analysis of precipitation in Beijing by Extreme-point Symmetric Mode Decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jiqing; Duan, Zhipeng; Huang, Jing

    2018-06-01

    With the aggravation of the global climate change, the shortage of water resources in China is becoming more and more serious. Using reasonable methods to study changes in precipitation is very important for planning and management of water resources. Based on the time series of precipitation in Beijing from 1951 to 2015, the multi-scale features of precipitation are analyzed by the Extreme-point Symmetric Mode Decomposition (ESMD) method to forecast the precipitation shift. The results show that the precipitation series have periodic changes of 2.6, 4.3, 14 and 21.7 years, and the variance contribution rate of each modal component shows that the inter-annual variation dominates the precipitation in Beijing. It is predicted that precipitation in Beijing will continue to decrease in the near future.

  12. Non invasive transcostal focusing based on the decomposition of the time reversal operator: in vitro validation

    NASA Astrophysics Data System (ADS)

    Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias

    2010-03-01

    Thermal ablation induced by high intensity focused ultrasound has produced promising clinical results for the treatment of hepatocarcinoma and other liver tumors. However, skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply to the transducer array an excitation weight vector that is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs was enhanced up to 100-fold, as demonstrated by the measured specific absorption rates.
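
    Schematically, such weights can be built by projecting a nominal focusing law out of the subspace spanned by the strongest singular vectors of the measured array response, which correspond to the bright scatterers (the ribs). The matrix H, the number of blocked modes, and the uniform nominal law below are placeholders.

    ```python
    import numpy as np

    def rib_sparing_weights(H, n_block):
        """Emission weights orthogonal to the n_block strongest backscatter modes.

        H: measured inter-element response matrix (receive x transmit); its
        leading right singular vectors are the emissions focusing on the ribs.
        """
        _, _, Vt = np.linalg.svd(H)
        V_block = Vt[:n_block].conj().T               # rib-focusing subspace
        w = np.ones(H.shape[1], dtype=complex)        # nominal focusing law
        w -= V_block @ (V_block.conj().T @ w)         # project it out
        return w / np.linalg.norm(w)
    ```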

  13. Structural modal parameter identification using local mean decomposition

    NASA Astrophysics Data System (ADS)

    Keyhani, Ali; Mohammadi, Saeed

    2018-02-01

    Modal parameter identification is the first step in structural health monitoring of existing structures. Many powerful methods have already been proposed for this purpose, each with benefits and shortcomings. In this study, a new method based on local mean decomposition is proposed for modal identification of civil structures from free or ambient vibration measurements. The ability of the proposed method was investigated in several numerical studies, and the results were compared with those obtained from the Hilbert-Huang transform (HHT). As a major advantage, the proposed method can extract the natural frequencies and damping ratios of all active modes from only one measurement. The accuracy of the identified modes depends on their participation in the measured responses. Nevertheless, the identified natural frequencies have reasonable accuracy for both free and ambient vibration measurements, even in the presence of noise. The instantaneous phase angle and the natural logarithm of the instantaneous amplitude obtained from the proposed method are more nearly linear than those from the HHT algorithm, and the end effect is also more limited for the proposed method.
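
    The linearity claim can be checked on a synthetic single-mode free decay: the slopes of the log instantaneous amplitude and the unwrapped instantaneous phase recover the damping and frequency. The sketch uses SciPy's Hilbert transform; the LMD algorithm itself is not reproduced here, and the mode parameters are invented.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 200.0
    t = np.arange(0, 10, 1 / fs)
    f_n, zeta = 2.0, 0.02                     # 2 Hz mode, 2% damping (assumed)
    wn = 2 * np.pi * f_n
    y = np.exp(-zeta * wn * t) * np.cos(wn * np.sqrt(1 - zeta**2) * t)

    a = hilbert(y)                            # analytic signal
    amp = np.log(np.abs(a))[50:-50]           # drop end-effect samples
    phase = np.unwrap(np.angle(a))[50:-50]
    tt = t[50:-50]
    sigma = -np.polyfit(tt, amp, 1)[0]        # decay rate = zeta * wn
    wd = np.polyfit(tt, phase, 1)[0]          # damped circular frequency
    print(wd / (2 * np.pi), sigma / np.hypot(sigma, wd))   # ~2.0 Hz, ~0.02
    ```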

  14. Study on the mechanism of copper-ammonia complex decomposition in struvite formation process and enhanced ammonia and copper removal.

    PubMed

    Peng, Cong; Chai, Liyuan; Tang, Chongjian; Min, Xiaobo; Song, Yuxia; Duan, Chengshan; Yu, Cheng

    2017-01-01

    Heavy metals and ammonia are difficult to remove from wastewater, as they easily combine into refractory complexes. The struvite formation method (SFM) was applied for the complex decomposition and simultaneous removal of heavy metal and ammonia. The results indicated that ammonia deprivation by SFM was the key factor leading to the decomposition of the copper-ammonia complex ion. Ammonia was separated from solution as crystalline struvite, and the copper mainly co-precipitated as copper hydroxide together with struvite. Hydrogen bonding and electrostatic attraction were considered to be the main surface interactions between struvite and copper hydroxide. Hydrogen bonding was concluded to be the key factor leading to the co-precipitation. In addition, incorporation of copper ions into the struvite crystal also occurred during the treatment process. Copyright © 2016. Published by Elsevier B.V.

  15. An Integrated Centroid Finding and Particle Overlap Decomposition Algorithm for Stereo Imaging Velocimetry

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2004-01-01

    An integrated algorithm for decomposing overlapping particle images (multi-particle objects) and determining each object's constituent particle centroid(s) has been developed using image analysis techniques. The centroid-finding algorithm uses a modified eight-direction search method for finding the perimeter of any enclosed object. The centroid is calculated using the intensity-weighted center of mass of the object. The overlap decomposition algorithm further analyzes the object data and breaks it down into its constituent particle centroid(s). This is accomplished with an artificial neural network, feature-based technique and provides an efficient way of decomposing overlapping particles. Combining the centroid-finding and overlap decomposition routines into a single algorithm allows us to accurately predict the error associated with finding the centroid(s) of particles in our experiments. The algorithm has been tested using real, simulated, and synthetic data, and the results are presented and discussed.
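
    The intensity-weighted centroid step is a one-liner with SciPy's ndimage; the toy image and threshold below are assumptions, and the perimeter search and neural-network overlap decomposition are not reproduced.

    ```python
    import numpy as np
    from scipy import ndimage

    img = np.zeros((64, 64))              # toy grayscale particle image
    img[10:14, 20:24] = 1.0
    img[30:36, 40:45] = 2.0

    labels, n = ndimage.label(img > 0.5)  # enclosed objects above threshold
    centroids = ndimage.center_of_mass(img, labels, range(1, n + 1))
    print(centroids)   # intensity-weighted centers of mass, one per object
    ```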

  16. Synthesis and structure characterization of chromium oxide prepared by solid thermal decomposition reaction.

    PubMed

    Li, Li; Yan, Zi F; Lu, Gao Q; Zhu, Zhong H

    2006-01-12

    Mesoporous chromium oxide (Cr2O3) nanocrystals were first synthesized by the thermal decomposition reaction of Cr(NO3)3·9H2O using citric acid monohydrate (CA) as the mesoporous template agent. The texture and chemistry of the chromium oxide nanocrystals were characterized by N2 adsorption-desorption isotherms, FTIR, X-ray diffraction (XRD), UV-vis, and thermoanalytical methods. It was shown that the hydrate water and CA are the crucial factors influencing the formation of mesoporous Cr2O3 nanocrystals in the mixture system. The decomposition of CA results in the formation of a mesoporous structure with wormlike pores. The hydrate water of the mixture provides surface hydroxyls that act as binders, making the nanocrystals aggregate. The pore structures and phases of the chromium oxide are affected by the precursor-to-CA ratio, the decomposition temperature, and the time.

  17. Insight into litter decomposition driven by nutrient demands of symbiosis system through the hypha bridge of arbuscular mycorrhizal fungi.

    PubMed

    Kong, Xiangshi; Jia, Yanyan; Song, Fuqiang; Tian, Kai; Lin, Hong; Bei, Zhanlin; Jia, Xiuqin; Yao, Bei; Guo, Peng; Tian, Xingjun

    2018-02-01

    Arbuscular mycorrhizal fungi (AMF) play an important role in litter decomposition. This study investigated how the soil nutrient level affects the process. Results showed that AMF colonization had no significant effect on litter decomposition under normal soil nutrient conditions. However, litter decomposition was accelerated significantly under lower nutrient conditions, and the soil microbial biomass in the decomposition system was significantly increased. In particular, under the moderately low nutrient treatment (half-normal soil nutrients), litter exhibited the highest decomposition rate, AMF hyphae showed the greatest density, and enzymes (especially nitrate reductase) showed the highest activities. Meanwhile, the immobilization of nitrogen (N) in the decomposing litter decreased remarkably. Our results suggest that the roles AMF play in the ecosystem are largely affected by soil nutrient levels. At normal soil nutrient levels, AMF exhibited limited effects in promoting decomposition. When the soil nutrient level decreased, the promoting effect of AMF on litter decomposition began to appear, especially on N mobilization. However, under extremely low nutrient conditions, AMF showed less influence on decomposition and may even compete with decomposer microorganisms for nutrients.

  18. Simultaneous tensor decomposition and completion using factor priors.

    PubMed

    Chen, Yi-Lei; Hsu, Chiou-Ting; Liao, Hong-Yuan Mark

    2014-03-01

    The success of research on matrix completion is evident in a variety of real-world applications. Tensor completion, which is a high-order extension of matrix completion, has also generated a great deal of research interest in recent years. Given a tensor with incomplete entries, existing methods use either factorization or completion schemes to recover the missing parts. However, as the number of missing entries increases, factorization schemes may overfit the model because of incorrectly predefined ranks, while completion schemes may fail to interpret the model factors. In this paper, we introduce a novel concept: complete the missing entries and simultaneously capture the underlying model structure. To this end, we propose a method called simultaneous tensor decomposition and completion (STDC) that combines a rank minimization technique with Tucker model decomposition. Moreover, as the model structure is implicitly included in the Tucker model, we use factor priors, which are usually known a priori in real-world tensor objects, to characterize the underlying joint-manifold drawn from the model factors. By exploiting this auxiliary information, our method leverages two classic schemes and accurately estimates the model factors and missing entries. We conducted experiments to empirically verify the convergence of our algorithm on synthetic data and evaluate its effectiveness on various kinds of real-world data. The results demonstrate the efficacy of the proposed method and its potential usage in tensor-based applications. It also outperforms state-of-the-art methods on multilinear model analysis and visual data completion tasks.
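
    An illustrative EM-style loop in the spirit of combining decomposition with completion follows: alternate a rank-truncated Tucker (HOSVD) approximation with re-imputation of the missing entries. STDC's factor priors and rank minimization are not reproduced, and the ranks, iteration count, and synthetic tensor are assumptions.

    ```python
    import numpy as np

    def unfold(T, mode):
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def hosvd_approx(T, ranks):
        """Rank-truncated Tucker approximation via HOSVD."""
        factors = []
        for m, r in enumerate(ranks):
            U, _, _ = np.linalg.svd(unfold(T, m), full_matrices=False)
            factors.append(U[:, :r])
        core = T
        for m, U in enumerate(factors):       # project onto factor subspaces
            core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, m, 0), axes=1), 0, m)
        out = core
        for m, U in enumerate(factors):       # expand back to full size
            out = np.moveaxis(np.tensordot(U, np.moveaxis(out, m, 0), axes=1), 0, m)
        return out

    def complete(T_obs, mask, ranks, iters=50):
        """Keep observed entries, iteratively impute the rest from the model."""
        T = np.where(mask, T_obs, T_obs[mask].mean())
        for _ in range(iters):
            T = np.where(mask, T_obs, hosvd_approx(T, ranks))
        return T

    rng = np.random.default_rng(0)
    truth = np.einsum('ir,jr,kr->ijk', *(rng.standard_normal((d, 2)) for d in (10, 11, 12)))
    mask = rng.random(truth.shape) > 0.4          # ~60% of entries observed
    T_hat = complete(np.where(mask, truth, 0.0), mask, ranks=(2, 2, 2))
    print(np.linalg.norm((T_hat - truth)[~mask]) / np.linalg.norm(truth[~mask]))
    ```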

  19. Domain decomposition method for the Baltic Sea based on theory of adjoint equation and inverse problem.

    NASA Astrophysics Data System (ADS)

    Lezina, Natalya; Agoshkov, Valery

    2017-04-01

    Domain decomposition method (DDM) allows one to represent a domain with complex geometry as a set of essentially simpler subdomains; the method is particularly applicable to the hydrodynamics of oceans and seas. In each subdomain the system of thermo-hydrodynamic equations in the Boussinesq and hydrostatic approximations is solved. The difficulty in obtaining the solution in the whole domain is that the solutions in the subdomains must be combined. For this purpose an iterative algorithm is created, and numerical experiments are conducted to investigate the effectiveness of the developed algorithm using DDM. For symmetric operators in DDM, Poincare-Steklov operators [1] are used, but for hydrodynamics problems this approach is not suitable. In this case, the adjoint equation method [2] and inverse problem theory are used. In addition, it is possible to create algorithms for parallel computation using DDM on multiprocessor computer systems. DDM for the model of the Baltic Sea dynamics is studied numerically. The results of numerical experiments using DDM are compared with the solution of the system of hydrodynamic equations in the whole domain. The work was supported by the Russian Science Foundation (project 14-11-00609, the formulation of the iterative process and numerical experiments). [1] V.I. Agoshkov, Domain Decompositions Methods in the Mathematical Physics Problem // Numerical processes and systems, No 8, Moscow, 1991 (in Russian). [2] V.I. Agoshkov, Optimal Control Approaches and Adjoint Equations in the Mathematical Physics Problem, Institute of Numerical Mathematics, RAS, Moscow, 2003 (in Russian).
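
    The subdomain-solve / interface-exchange loop at the heart of any DDM can be shown on a toy problem: overlapping alternating Schwarz iteration for -u'' = 1 on (0,1) with u(0) = u(1) = 0, split into two subdomains. The Baltic Sea solver couples far richer physics and uses an adjoint-based matching instead; everything below is a didactic stand-in.

    ```python
    import numpy as np

    n = 101
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    u = np.zeros(n)                      # global iterate, u(0) = u(1) = 0
    left_end, right_start = 60, 40       # overlapping interface nodes

    def dirichlet_solve(u_a, u_b, m):
        """Direct solve of -u'' = 1 on m interior points with Dirichlet data."""
        A = (2 * np.eye(m) - np.eye(m, k=1) - np.eye(m, k=-1)) / h**2
        b = np.ones(m)
        b[0] += u_a / h**2
        b[-1] += u_b / h**2
        return np.linalg.solve(A, b)

    for _ in range(30):                  # alternating Schwarz sweeps
        u[1:left_end] = dirichlet_solve(u[0], u[left_end], left_end - 1)
        u[right_start + 1:n - 1] = dirichlet_solve(u[right_start], u[-1],
                                                   n - 2 - right_start)

    print(np.abs(u - 0.5 * x * (1 - x)).max())   # error vs the exact solution
    ```

    Each sweep passes fresh interface values between subdomains, and the overlap makes the iteration contract toward the global solution.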

  20. Catalytic decomposition of toxic chemicals over iron group metals supported on carbon nanotubes.

    PubMed

    Li, Lili; Chen, Can; Chen, Long; Zhu, Zixue; Hu, Jianli

    2014-03-18

    This study explores catalytic decomposition of phosphine (PH3) using iron group metals (Co, Ni) and metal oxides (Fe2O3, Co3O4, NiO) supported on carbon nanotubes (CNTs). The catalysts are synthesized by means of a deposition-precipitation method. The morphology, structure, and composition of the catalysts are characterized using a number of analytical instrumentations, including high-resolution transmission electron microscopy, X-ray diffraction, X-ray photoelectron spectroscopy, BET surface area measurement, and inductively coupled plasma. The activity of the catalysts in the PH3 decomposition reaction is measured and correlated with their surface and structural properties. The characterization results show that phosphidation occurs on the catalyst surface, and the resulting metal phosphides act as an active phase in the PH3 decomposition reaction. Cobalt phosphide, CoP, is formed on Co/CNTs and Co3O4/CNTs, whereas iron phosphide, FeP, is formed on Fe2O3/CNTs. In contrast, phosphorus-rich phosphide NiP2 is formed on Ni/CNTs and NiO/CNTs. The initial activities of the catalysts are shown in the following sequence: Ni/CNTs > Co/CNTs > Co3O4/CNTs > NiO/CNTs > Fe2O3/CNTs, whereas the activities of the metal phosphides are shown in the following order: CoP > NiP2 > FeP. The catalytic activity of the metal phosphides is attributed to their electronic properties. Cobalt phosphide formed on Co/CNTs and Co3O4/CNTs exhibits not only the highest activity, but also long-term stability in the PH3 decomposition reaction.

  1. Lamb Waves Decomposition and Mode Identification Using Matching Pursuit Method

    DTIC Science & Technology

    2009-01-01

    Time-frequency representations considered for Lamb wave analysis include the short-time Fourier transform (STFT), the wavelet transform, the Wigner-Ville distribution (WVD), and matching pursuit decomposition. The WVD suffers from severe interferences, called cross-terms. Matching pursuit decomposition using a chirplet dictionary was applied to a simulated S0-mode Lamb wave.

  2. Analysis of Self-Excited Combustion Instabilities Using Decomposition Techniques

    DTIC Science & Technology

    2016-07-05

    Proper orthogonal decomposition and dynamic mode decomposition are evaluated for the study of self-excited longitudinal combustion instabilities in laboratory-scale single-element gas turbine and rocket combustors (DOI: 10.2514/1.J054557). In addition, the capabilities of the methods to deal with data sets of different spatial extents and temporal resolutions are evaluated.
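
    In their simplest form, both techniques reduce to linear algebra on a snapshot matrix. A minimal sketch follows, assuming snapshots stacked as columns (space x time) and a user-chosen truncation rank r.

    ```python
    import numpy as np

    def pod_dmd(X, r):
        """POD modes and DMD eigenvalues/modes from a snapshot matrix X."""
        X1, X2 = X[:, :-1], X[:, 1:]                         # time-shifted pair
        U, s, Vt = np.linalg.svd(X1, full_matrices=False)    # POD of snapshots
        Ur, sr, Vr = U[:, :r], s[:r], Vt[:r].conj().T
        Atilde = Ur.conj().T @ X2 @ Vr / sr                  # reduced propagator
        evals, W = np.linalg.eig(Atilde)                     # DMD eigenvalues
        modes = X2 @ Vr / sr @ W                             # exact DMD modes
        return Ur, evals, modes
    ```

    The POD columns Ur rank coherent structures by energy, while the DMD eigenvalues give each mode a single frequency and growth rate, which is what makes DMD attractive for instability analysis.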

  3. Gas Pressure Monitored Iodide-Catalyzed Decomposition Kinetics of H2O2: Initial-Rate and Integrated-Rate Methods in the General Chemistry Lab

    ERIC Educational Resources Information Center

    Nyasulu, Frazier; Barlag, Rebecca

    2010-01-01

    The reaction kinetics of the iodide-catalyzed decomposition of H2O2 using the integrated-rate method is described. The method is based on measurement of the total gas pressure using a datalogger and pressure sensor. This is a modification of a previously reported experiment based on the initial-rate approach.
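
    A minimal sketch of the integrated-rate analysis follows, assuming first-order kinetics in H2O2 and a total pressure P(t) that rises toward a plateau P_inf as O2 evolves, so that ln(P_inf - P(t)) = ln(P_inf - P0) - k*t. The data points and plateau value are invented for illustration.

    ```python
    import numpy as np

    t = np.array([0.0, 60, 120, 180, 240, 300])                   # s
    P = np.array([101.3, 105.1, 108.0, 110.4, 112.2, 113.7])      # kPa (illustrative)
    P_inf = 119.0                                                 # kPa, plateau

    k = -np.polyfit(t, np.log(P_inf - P), 1)[0]   # slope of the semilog plot
    print(f"first-order rate constant k = {k:.1e} 1/s")           # ~4e-3 1/s
    ```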

  4. Cerebrospinal fluid PCR analysis and biochemistry in bodies with severe decomposition.

    PubMed

    Palmiere, Cristian; Vanhaebost, Jessica; Ventura, Francesco; Bonsignore, Alessandro; Bonetti, Luca Reggiani

    2015-02-01

    The aim of this study was to assess whether Neisseria meningitidis, Listeria monocytogenes, Streptococcus pneumoniae and Haemophilus influenzae can be identified using the polymerase chain reaction technique in the cerebrospinal fluid of severely decomposed bodies with known, noninfectious causes of death or whether postmortem changes can lead to false positive results and thus erroneous diagnostic information. Biochemical investigations, postmortem bacteriology and real-time polymerase chain reaction analysis in cerebrospinal fluid were performed in a series of medico-legal autopsies that included noninfectious causes of death with decomposition, bacterial meningitis without decomposition, bacterial meningitis with decomposition, low respiratory tract infections with decomposition and abdominal infections with decomposition. In noninfectious causes of death with decomposition, postmortem investigations failed to reveal results consistent with generalized inflammation or bacterial infections at the time of death. Real-time polymerase chain reaction analysis in cerebrospinal fluid did not identify the studied bacteria in any of these cases. The results of this study highlight the usefulness of molecular approaches in bacteriology as well as the use of alternative biological samples in postmortem biochemistry in order to obtain suitable information even in corpses with severe decompositional changes. Copyright © 2014 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  5. Improved accuracy and precision in δ15N(AIR) measurements of explosives, urea, and inorganic nitrates by elemental analyzer/isotope ratio mass spectrometry using thermal decomposition.

    PubMed

    Lott, Michael J; Howa, John D; Chesson, Lesley A; Ehleringer, James R

    2015-08-15

    Elemental analyzer systems generate N2 and CO2 for elemental composition and isotope ratio measurements. As quantitative conversion of nitrogen in some materials (i.e., nitrate salts and nitro-organic compounds) is difficult, this study tests a recently published method - thermal decomposition without the addition of O2 - for the analysis of these materials. Elemental analyzer/isotope ratio mass spectrometry (EA/IRMS) was used to compare the traditional combustion method (CM) and the thermal decomposition method (TDM), where additional O2 is eliminated from the reaction. The comparisons used organic and inorganic materials with oxidized and/or reduced nitrogen and included ureas, nitrate salts, ammonium sulfate, nitro esters, and nitramines. Previous TDM applications were limited to nitrate salts and ammonium sulfate. The measurement precision and accuracy were compared to determine the effectiveness of converting materials containing different fractions of oxidized nitrogen into N2. The δ13C(VPDB) values were not meaningfully different when measured via CM or TDM, allowing for the analysis of multiple elements in one sample. For materials containing oxidized nitrogen, 15N measurements made using thermal decomposition were more precise than those made using combustion. The precision was similar between the methods for materials containing reduced nitrogen. The %N values were closer to theoretical when measured by TDM than by CM. The δ15N(AIR) values of purchased nitrate salts and ureas were nearer to the known values when analyzed using thermal decomposition than using combustion. The thermal decomposition method addresses insufficient recovery of nitrogen during elemental analysis in a variety of organic and inorganic materials. Its implementation requires relatively few changes to the elemental analyzer. Using TDM, it is possible to directly calibrate certain organic materials to international nitrate isotope reference materials without off-line preparation. Copyright © 2015 John Wiley & Sons, Ltd.

  6. Stable isotope analyses of oxygen (18O:17O:16O) and chlorine (37Cl:35Cl) in perchlorate: reference materials, calibrations, methods, and interferences

    USGS Publications Warehouse

    Böhlke, John Karl; Mroczkowski, Stanley J.; Sturchio, Neil C.; Heraty, Linnea J.; Richman, Kent W.; Sullivan, Donald B.; Griffith, Kris N.; Gu, Baohua; Hatzinger, Paul B.

    2017-01-01

    Rationale: Perchlorate (ClO4−) is a common trace constituent of water, soils, and plants; it has both natural and synthetic sources and is subject to biodegradation. The stable isotope ratios of Cl and O provide three independent quantities for ClO4− source attribution and natural attenuation studies: δ37Cl, δ18O, and δ17O (or Δ17O or 17Δ) values. Documented reference materials, calibration schemes, methods, and interferences will improve the reliability of such studies. Methods: Three large batches of KClO4 with contrasting isotopic compositions were synthesized and analyzed against VSMOW-SLAP, atmospheric O2, and international nitrate and chloride reference materials. Three analytical methods were tested for O isotopes: conversion of ClO4− to CO for continuous-flow IRMS (CO-CFIRMS), decomposition to O2 for dual-inlet IRMS (O2-DIIRMS), and decomposition to O2 with a molecular-sieve trap (O2-DIIRMS+T). For Cl isotopes, KCl produced by thermal decomposition of KClO4 was reprecipitated as AgCl and converted into CH3Cl for DIIRMS. Results: KClO4 isotopic reference materials (USGS37, USGS38, USGS39) represent a wide range of Cl and O isotopic compositions, including non-mass-dependent O isotopic variation. Isotopic fractionation and exchange can affect O isotope analyses of ClO4−, depending on the decomposition method. Routine analyses can be adjusted for such effects by normalization, using reference materials prepared and analyzed as samples. Analytical errors caused by SO42−, NO3−, ReO42−, and C-bearing contaminants include isotope mixing and fractionation effects on CO and O2, plus direct interference from CO2 in the mass spectrometer. The results highlight the importance of effective purification of ClO4− from environmental samples. Conclusions: KClO4 reference materials are available for testing methods and calibrating isotopic data for ClO4− and other substances with widely varying Cl or O isotopic compositions. Current ClO4− extraction, purification, and analysis techniques provide relative isotope-ratio measurements with uncertainties much smaller than the range of values in environmental ClO4−, permitting isotopic evaluation of environmental ClO4− sources and natural attenuation.

  7. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects, each 101 times, using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector-angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453
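
    The matching step can be sketched with SciPy's Hungarian solver: pair components across two decompositions so that the total spatial correlation is maximized. The component-map layout below is an assumption.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def match_components(maps_ref, maps_new):
        """maps_*: components x voxels. Returns matched index pairs and |corr|."""
        a = maps_ref - maps_ref.mean(axis=1, keepdims=True)
        b = maps_new - maps_new.mean(axis=1, keepdims=True)
        a /= np.linalg.norm(a, axis=1, keepdims=True)
        b /= np.linalg.norm(b, axis=1, keepdims=True)
        corr = np.abs(a @ b.T)                       # |spatial correlation| matrix
        row, col = linear_sum_assignment(-corr)      # maximize total correlation
        return row, col, corr[row, col]
    ```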

  8. Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea

    NASA Astrophysics Data System (ADS)

    Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju

    2014-08-01

    A domain decomposition and matching method in the time-domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface, and the Rankine source method is applied to the inner domain while the transient Green function method is used in the outer domain. Two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces and ship motions advancing in head sea for Series 60 ship and S175 containership, are presented and verified. A good agreement has been obtained when the numerical results are compared with the experimental data and other references. It shows that the present method is more efficient because of the panel discretization only in the inner domain during the numerical calculation, and good numerical stability is proved to avoid divergence problem regarding ships with flare.

  9. A new route for synthesis of spherical NiO nanoparticles via emulsion nano-reactors with enhanced photocatalytic activity

    NASA Astrophysics Data System (ADS)

    Fazlali, Farnaz; Mahjoub, Ali reza; Abazari, Reza

    2015-10-01

    This study draws a comparison among nickel oxide nanostructures (NSs) with multiple shapes in terms of their photocatalytic properties. These NSs were synthesized using a set of wet chemical methods (thermal decomposition, sol-gel, hydrothermal, and emulsion nano-reactors) starting from the same precursor. To evaluate the photocatalytic properties of the NSs, the photocatalytic degradation of a methyl orange (MeO) solution was estimated based on UV-Vis spectroscopy. As shown by our results, the photocatalytic efficiency of the prepared NSs is highly dependent upon the shape of the corresponding structures. In this context, the emulsion nano-reactors (ENRs) method was developed for the synthesis of pure nickel oxide nanoparticles (NPs) that are unaggregated, nearly spherical, and homogeneous under ambient conditions. Compared with the other methods in this work, the ENRs method shows high photocatalytic efficiency in MeO dye decomposition.

  10. Detection of ice accretion on aircraft using empirical mode decomposition enhanced by multi-objective optimization

    NASA Astrophysics Data System (ADS)

    Bagherzadeh, Seyed Amin; Asadi, Davood

    2017-05-01

    In search of a precise method for analyzing nonlinear and non-stationary flight data from an aircraft in icing conditions, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve the disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified on several benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in icing conditions in order to detect ice accretion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection, providing a figure of merit for the icing severity.

  11. Hybrid Monte Carlo approach to the entanglement entropy of interacting fermions

    NASA Astrophysics Data System (ADS)

    Drut, Joaquín E.; Porter, William J.

    2015-09-01

    The Monte Carlo calculation of Rényi entanglement entropies Sn of interacting fermions suffers from a well-known signal-to-noise problem, even for a large number of situations in which the infamous sign problem is absent. A few methods have been proposed to overcome this issue, such as ensemble switching and the use of auxiliary partition-function ratios. Here, we present an approach that builds on the recently proposed free-fermion decomposition method; it incorporates entanglement in the probability measure in a natural way; it takes advantage of the hybrid Monte Carlo algorithm (an essential tool in lattice quantum chromodynamics and other gauge theories with dynamical fermions); and it does not suffer from noise problems. This method displays no sign problem for the same cases as other approaches and is therefore useful for a wide variety of systems. As a proof of principle, we calculate S2 for the one-dimensional, half-filled Hubbard model and compare with results from exact diagonalization and the free-fermion decomposition method.

  12. Spectral Regression Discriminant Analysis for Hyperspectral Image Classification

    NASA Astrophysics Data System (ADS)

    Pan, Y.; Wu, J.; Huang, H.; Liu, J.

    2012-08-01

    Dimensionality reduction algorithms, which aim to select a small set of efficient and discriminant features, have attracted great attention for hyperspectral image classification. Manifold learning methods, such as Locally Linear Embedding, Isomap, and Laplacian Eigenmap, are popular for dimensionality reduction. However, a disadvantage of many manifold learning methods is that their computations usually involve eigen-decomposition of dense matrices, which is expensive in both time and memory. In this paper, we introduce a new dimensionality reduction method, called Spectral Regression Discriminant Analysis (SRDA). SRDA casts the problem of learning an embedding function into a regression framework, which avoids eigen-decomposition of dense matrices. Also, with the regression-based framework, different kinds of regularizers can be naturally incorporated into our algorithm, which makes it more flexible. It can make efficient use of data points to discover the intrinsic discriminant structure in the data. Experimental results on the Washington DC Mall and AVIRIS Indian Pines hyperspectral data sets demonstrate the effectiveness of the proposed method.

  13. Multi-label learning with fuzzy hypergraph regularization for protein subcellular location prediction.

    PubMed

    Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei

    2014-12-01

    Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.

  14. Kinetics of calcium sulfoaluminate formation from tricalcium aluminate, calcium sulfate and calcium oxide

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Xuerun, E-mail: xuerunli@163.com; Zhang, Yu; Shen, Xiaodong, E-mail: xdshen@njut.edu.cn

    The formation kinetics of tricalcium aluminate (C3A) and calcium sulfate yielding calcium sulfoaluminate (C4A3$) and the decomposition kinetics of calcium sulfoaluminate were investigated by sintering a mixture of synthetic C3A and gypsum. The quantitative analysis of the phase composition was performed by X-ray powder diffraction using the Rietveld method. The results showed that the formation reaction 3Ca3Al2O6 + CaSO4 → Ca4Al6O12(SO4) + 6CaO was the primary reaction below 1350 °C, with an activation energy of 231 ± 42 kJ/mol, while the decomposition reaction 2Ca4Al6O12(SO4) + 10CaO → 6Ca3Al2O6 + 2SO2↑ + O2↑ occurred primarily above 1350 °C, with an activation energy of 792 ± 64 kJ/mol. The optimal formation region for C4A3$ extended from 1150 °C to 1350 °C and from 6 h to 1 h, which could provide useful information on the formation of C4A3$-containing clinkers. The Jander diffusion model was feasible for both the formation and the decomposition of calcium sulfoaluminate, with Ca2+ and SO42− as the diffusive species in both reactions. Highlights: formation and decomposition of calcium sulphoaluminate were studied; decomposition of calcium sulphoaluminate combined CaO and yielded C3A; activation energy for formation was 231 ± 42 kJ/mol and for decomposition 792 ± 64 kJ/mol; both reactions were diffusion-controlled.
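
    As a hedged illustration of the kinetic analysis described above, the sketch below fits isothermal conversion data to the Jander model g(α) = [1 − (1 − α)^{1/3}]² = kt and extracts an apparent activation energy from an Arrhenius plot; the conversion values and temperatures are invented for illustration, not the paper's measurements.

```python
import numpy as np

# Jander diffusion model: g(alpha) = (1 - (1 - alpha)**(1/3))**2 = k * t.
t = np.array([1.0, 2.0, 4.0, 6.0])               # sintering time, h
alpha_by_T = {                                   # hypothetical conversions
    1423.0: np.array([0.12, 0.21, 0.35, 0.46]),  # temperature in K
    1523.0: np.array([0.30, 0.48, 0.70, 0.82]),
    1623.0: np.array([0.55, 0.78, 0.93, 0.97]),
}

rate_constants = {}
for T, alpha in alpha_by_T.items():
    g = (1.0 - (1.0 - alpha) ** (1.0 / 3.0)) ** 2
    # Least-squares slope through the origin gives the rate constant k(T).
    rate_constants[T] = np.sum(g * t) / np.sum(t * t)

# Arrhenius fit: ln k = ln A - Ea / (R T).
T_arr = np.array(sorted(rate_constants))
lnk = np.log([rate_constants[T] for T in T_arr])
slope, intercept = np.polyfit(1.0 / T_arr, lnk, 1)
Ea = -slope * 8.314                              # J/mol
print(f"Apparent activation energy: {Ea / 1000:.0f} kJ/mol")
```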

  15. Thermal decomposition hazard evaluation of hydroxylamine nitrate.

    PubMed

    Wei, Chunyang; Rogers, William J; Mannan, M Sam

    2006-03-17

    Hydroxylamine nitrate (HAN) is an important member of the hydroxylamine family, and it is a liquid propellant when combined with alkylammonium nitrate fuel in an aqueous solution. Low concentrations of HAN are used primarily in the nuclear industry as a reductant in nuclear material processing and for decontamination of equipment. Also, HAN has been involved in several incidents because of its instability and autocatalytic decomposition behavior. This paper presents calorimetric measurements of the thermal decomposition of 24 mass% HAN/water. The gas-phase enthalpy of formation of HAN is calculated using both semi-empirical methods with MOPAC and high-level quantum chemical methods of Gaussian 03. CHETAH is used to estimate the energy release potential of HAN. A Reactive System Screening Tool (RSST) and an Automatic Pressure Tracking Adiabatic Calorimeter (APTAC) are used to characterize the thermal decomposition of HAN and to provide guidance about safe conditions for handling and storing of HAN.

  16. Method and apparatus for maintaining the pH in zinc-bromine battery systems

    DOEpatents

    Grimes, Patrick G.

    1985-09-10

    A method and apparatus for maintaining the pH level in a zinc-bromine battery features reacting decomposition hydrogen with bromine in the presence of a catalyst. The catalyst encourages the formation of hydrogen and bromine ions. The decomposition hydrogen is therefore consumed, allowing the pH of the system to remain substantially at a given value.

  17. Density functional theory studies of HCOOH decomposition on Pd(111)

    DOE PAGES

    Scaranto, Jessica; Mavrikakis, Manos

    2015-12-02

    Here, the investigation of formic acid (HCOOH) decomposition on transition metal surfaces is important to derive useful insights for vapor phase catalysis involving HCOOH and for the development of direct HCOOH fuel cells (DFAFC). We present the results obtained from periodic, self-consistent, density functional theory (DFT-GGA) calculations for the elementary steps involved in the gas-phase decomposition of HCOOH on Pd(111). Accordingly, we analyzed the minimum energy paths for HCOOH dehydrogenation to CO2 + H2 and dehydration to CO + H2O through the carboxyl (COOH) and formate (HCOO) intermediates. Our results suggest that HCOO formation is easier than COOH formation, but HCOO decomposition is more difficult than COOH decomposition, particularly in the presence of co-adsorbed O and OH species. Therefore, both paths may contribute to HCOOH decomposition. CO formation goes mainly through COOH decomposition.

  19. Wavelet-based unsupervised learning method for electrocardiogram suppression in surface electromyograms.

    PubMed

    Niegowski, Maciej; Zivanovic, Miroslav

    2016-03-01

    We present a novel approach aimed at removing electrocardiogram (ECG) perturbation from single-channel surface electromyogram (EMG) recordings by means of unsupervised learning of wavelet-based intensity images. The general idea is to combine the suitability of certain wavelet decomposition bases which provide sparse electrocardiogram time-frequency representations, with the capacity of non-negative matrix factorization (NMF) for extracting patterns from images. In order to overcome convergence problems which often arise in NMF-related applications, we design a novel robust initialization strategy which ensures proper signal decomposition in a wide range of ECG contamination levels. Moreover, the method can be readily used because no a priori knowledge or parameter adjustment is needed. The proposed method was evaluated on real surface EMG signals against two state-of-the-art unsupervised learning algorithms and a singular spectrum analysis based method. The results, expressed in terms of high-to-low energy ratio, normalized median frequency, spectral power difference and normalized average rectified value, suggest that the proposed method enables better ECG-EMG separation quality than the reference methods.
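
    The following sketch illustrates the general pipeline the record describes (wavelet intensity image plus NMF), assuming PyWavelets and scikit-learn; it is a toy on a synthetic signal, with naive component handling and none of the paper's robust initialization.

```python
import numpy as np
import pywt
from sklearn.decomposition import NMF

# Build a nonnegative time-frequency image from a wavelet decomposition of a
# contaminated "EMG" and factor it with NMF into spectral patterns (W) and
# temporal activations (H).
fs = 1000
t = np.arange(0, 5, 1 / fs)
emg = np.random.default_rng(1).normal(scale=0.5, size=t.size)
ecg = np.zeros_like(t)
ecg[::fs] = 3.0                        # crude 1 Hz spike train standing in for QRS
x = emg + ecg

coeffs = pywt.wavedec(x, "db4", level=6)
# Intensity image: one row per decomposition level, rows resampled to a
# common length, absolute magnitudes so the matrix is nonnegative.
L = min(len(c) for c in coeffs)
V = np.vstack([np.abs(np.interp(np.linspace(0, 1, L),
                                np.linspace(0, 1, len(c)), c)) for c in coeffs])

model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(V)             # spectral patterns
H = model.components_                  # temporal activations
print(W.shape, H.shape)
```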

  20. Accelerating Dynamic Magnetic Resonance Imaging (MRI) for Lung Tumor Tracking Based on Low-Rank Decomposition in the Spatial–Temporal Domain: A Feasibility Study Based on Simulation and Preliminary Prospective Undersampled MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarma, Manoj; Department of Radiation Oncology, University of California, Los Angeles, California; Hu, Peng

    Purpose: To evaluate a low-rank decomposition method to reconstruct downsampled k-space data for the purpose of tumor tracking. Methods and Materials: Seven retrospective lung cancer patients were included in the simulation study. The fully sampled k-space data were first generated from existing 2-dimensional dynamic MR images and then downsampled by 5× to 20× before reconstruction using a Cartesian undersampling mask. Two methods, a low-rank decomposition method using combined dynamic MR images (k-t SLR, based on sparsity and low-rank penalties) and a total variation (TV) method using individual dynamic MR frames, were used to reconstruct images. The tumor trajectories were derived on the basis of autosegmentation of the resultant images. To further test its feasibility, k-t SLR was used to reconstruct prospective data of a healthy subject. An undersampled balanced steady-state free precession sequence with the same undersampling mask was used to acquire the imaging data. Results: In the simulation study, higher imaging fidelity and lower noise levels were achieved with k-t SLR compared with TV. At 10× undersampling, the k-t SLR method resulted in an average normalized mean square error <0.05, as opposed to 0.23 using TV reconstruction on individual frames. Less than 6% showed tracking errors >1 mm with 10× downsampling using k-t SLR, as opposed to 17% using TV. In the prospective study, k-t SLR substantially reduced reconstruction artifacts and retained anatomic details. Conclusions: Magnetic resonance reconstruction using k-t SLR on highly undersampled dynamic MR imaging data results in high image quality useful for tumor tracking. The k-t SLR was superior to TV by better exploiting the intrinsic anatomic coherence of the same patient. The feasibility of k-t SLR was demonstrated by prospective imaging acquisition and reconstruction.
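
    A minimal sketch of the low-rank ingredient of such k-t methods: dynamic frames stacked as a space × time (Casorati) matrix are approximately low-rank, so thresholding singular values suppresses undersampling artifacts. This is only the low-rank proximal step, not the full k-t SLR algorithm with its sparsity penalty and data-consistency constraint; all data are synthetic.

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal step for a low-rank penalty."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

rng = np.random.default_rng(0)
space, time, rank = 256, 40, 3
M = rng.normal(size=(space, rank)) @ rng.normal(size=(rank, time))  # true dynamics
noisy = M + 0.5 * rng.normal(size=M.shape)                          # artifact stand-in
rec = svt(noisy, tau=10.0)
print(np.linalg.norm(rec - M) / np.linalg.norm(M))                  # relative error
```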

  1. Detecting phase-amplitude coupling with high frequency resolution using adaptive decompositions

    PubMed Central

    Pittman-Polletta, Benjamin; Hsieh, Wan-Hsin; Kaur, Satvinder; Lo, Men-Tzung; Hu, Kun

    2014-01-01

    Background: Phase-amplitude coupling (PAC) – the dependence of the amplitude of one rhythm on the phase of another, lower-frequency rhythm – has recently been used to illuminate cross-frequency coordination in neurophysiological activity. An essential step in measuring PAC is decomposing data to obtain rhythmic components of interest. Current methods of PAC assessment employ narrowband Fourier-based filters, which assume that biological rhythms are stationary, harmonic oscillations. However, biological signals frequently contain irregular and nonstationary features, which may contaminate rhythms of interest and complicate comodulogram interpretation, especially when frequency resolution is limited by short data segments. New method: To better account for nonstationarities while maintaining sharp frequency resolution in PAC measurement, even for short data segments, we introduce a new method of PAC assessment which utilizes adaptive and more generally broadband decomposition techniques – such as the empirical mode decomposition (EMD). To obtain high frequency resolution PAC measurements, our method distributes the PAC associated with pairs of broadband oscillations over frequency space according to the time-local frequencies of these oscillations. Comparison with existing methods: We compare our novel adaptive approach to a narrowband comodulogram approach on a variety of simulated signals of short duration, studying systematically how different types of nonstationarities affect these methods, as well as on EEG data. Conclusions: Our results show: (1) narrowband filtering can lead to poor PAC frequency resolution, and inaccuracy and false negatives in PAC assessment; (2) our adaptive approach attains better PAC frequency resolution and is more resistant to nonstationarities and artifacts than traditional comodulograms. PMID:24452055
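
    Once rhythmic components are in hand (whether from narrowband filters or an adaptive decomposition such as EMD), the core PAC measurement can be sketched as below on synthetic coupled oscillations, using a mean-vector-length style modulation index; the normalization shown is one of several in use, not necessarily the authors' statistic.

```python
import numpy as np
from scipy.signal import hilbert

# Phase of the slow component vs. amplitude envelope of the fast one.
fs = 500
t = np.arange(0, 10, 1 / fs)
slow = np.sin(2 * np.pi * 6 * t)                       # 6 Hz "theta"
fast = (1 + 0.8 * np.sin(2 * np.pi * 6 * t)) * np.sin(2 * np.pi * 60 * t)

phase = np.angle(hilbert(slow))
amp = np.abs(hilbert(fast))

# Mean-vector-length PAC estimate, normalized by the mean amplitude.
mi = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
print(f"modulation index: {mi:.3f}")
```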

  2. Decompositions of large-scale biological systems based on dynamical properties.

    PubMed

    Soranzo, Nicola; Ramezani, Fahimeh; Iacono, Giovanni; Altafini, Claudio

    2012-01-01

    Given a large-scale biological network represented as an influence graph, in this article we investigate possible decompositions of the network aimed at highlighting specific dynamical properties. The first decomposition we study consists in finding a maximal directed acyclic subgraph of the network, which dynamically corresponds to searching for a maximal open-loop subsystem of the given system. Another dynamical property investigated is strong monotonicity. We propose two methods to deal with this property, both aimed at decomposing the system into strongly monotone subsystems, but with different structural characteristics: one method tends to produce a single large strongly monotone component, while the other typically generates a set of smaller disjoint strongly monotone subsystems. Original heuristics for the methods investigated are described in the article.
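
    For the first decomposition, the following is an illustrative generic heuristic (not the article's algorithm): fix an arbitrary vertex order and keep only forward edges, which always yields an acyclic subgraph, though not necessarily a maximum one.

```python
import networkx as nx

# Acyclic-subgraph heuristic: retain only edges that go "forward" in a fixed
# vertex ordering; the result is guaranteed to be a DAG.
G = nx.gnp_random_graph(30, 0.1, directed=True, seed=0)
order = {v: i for i, v in enumerate(G.nodes())}
dag_edges = [(u, v) for u, v in G.edges() if order[u] < order[v]]
dag = nx.DiGraph(dag_edges)
assert nx.is_directed_acyclic_graph(dag)
print(len(dag_edges), "of", G.number_of_edges(), "edges kept")
```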

  3. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage - particularly internal damage resulting from impact - represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion through fitting of simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model using a subset of simulated ultrasonic scans built using a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them a difficult and potentially infeasible problem. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow for interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.
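
    As a sketch of the parameterization involved, a Gaussian chirplet atom with four parameters (center time, center frequency, chirp rate, duration) is shown below; a surrogate can then interpolate in this small parameter space rather than over raw B-scan samples. All parameter values are arbitrary, not taken from the paper.

```python
import numpy as np

# One Gaussian chirplet atom describing an echo-like feature.
def chirplet(t, t0, f0, c, sigma):
    envelope = np.exp(-0.5 * ((t - t0) / sigma) ** 2)
    phase = 2 * np.pi * (f0 * (t - t0) + 0.5 * c * (t - t0) ** 2)
    return envelope * np.cos(phase)

t = np.linspace(0.0, 10e-6, 2000)                  # 10 microsecond trace
atom = chirplet(t, t0=4e-6, f0=5e6, c=2e11, sigma=0.8e-6)
print(float(atom.max()), float(atom.min()))
```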

  4. Impact of metal-induced degradation on the determination of pharmaceutical compound purity and a strategy for mitigation.

    PubMed

    Dotterer, Sally K; Forbes, Robert A; Hammill, Cynthia L

    2011-04-05

    Case studies are presented demonstrating how exposure to traces of transition metals such as copper and/or iron during sample preparation or analysis can impact the accuracy of purity analysis of pharmaceuticals. Some compounds, such as phenols and indoles, react with metals in the presence of oxygen to produce metal-induced oxidative decomposition products. Compounds susceptible to metal-induced decomposition can degrade following preparation for purity analysis, leading to falsely high impurity results. Our work has shown that even metals at levels below 0.1 ppm can negatively impact susceptible compounds. Falsely low results are also possible when the impurities themselves react with metals and degrade prior to analysis. Traces of metals in the HPLC mobile phase can lead to chromatographic artifacts, affecting the reproducibility of purity results. To understand and mitigate the impact of metal-induced decomposition, a proactive strategy is presented. The pharmaceutical would first be tested for reactivity with specific transition metals in the sample solvent/diluents and in the HPLC mobile phase. If found to be reactive, alternative sample diluents and/or mobile phases with less reactive solvents, or the addition of a metal chelator, would be explored. If unsuccessful, glassware cleaning or sample solution refrigeration could be investigated. By employing this strategy during method development, robust purity methods would be delivered to the quality control laboratories, preventing future problems from potential sporadic contamination of glassware with metals.

  5. Extracting functional components of neural dynamics with Independent Component Analysis and inverse Current Source Density.

    PubMed

    Lęski, Szymon; Kublik, Ewa; Swiejkowski, Daniel A; Wróbel, Andrzej; Wójcik, Daniel K

    2010-12-01

    Local field potentials have good temporal resolution but are blurred due to the slow spatial decay of the electric field. For simultaneous recordings on regular grids, one can efficiently reconstruct the current sources (CSD) using the inverse Current Source Density method (iCSD). It is possible to decompose the resultant spatiotemporal information about the current dynamics into functional components using Independent Component Analysis (ICA). We show on test data modeling recordings of evoked potentials on a grid of 4 × 5 × 7 points that meaningful results are obtained with spatial ICA decomposition of reconstructed CSD. The components obtained through decomposition of CSD are better defined and allow easier physiological interpretation than the results of similar analysis of corresponding evoked potentials in the thalamus. We show that spatiotemporal ICA decompositions can perform better for certain types of sources, but this does not seem to be the case for the experimental data studied. Having found the appropriate approach to decomposing neural dynamics into functional components, we use the technique to study the somatosensory evoked potentials recorded on a grid spanning a large part of the forebrain. We discuss two example components associated with the first waves of activation of the somatosensory thalamus. We show that the proposed method brings up new, more detailed information on the time and spatial location of specific activity conveyed through various parts of the somatosensory thalamus in the rat.
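
    A minimal sketch of the spatial ICA step using scikit-learn's FastICA, with synthetic data on the 4 × 5 × 7 grid mentioned in the record; treating time points as observations and grid points as variables is one common convention, not necessarily the authors' exact formulation.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Synthetic "CSD": two planted spatial patterns with non-Gaussian time courses.
rng = np.random.default_rng(0)
n_t, grid = 2000, (4, 5, 7)
n_pix = int(np.prod(grid))
maps = rng.normal(size=(2, n_pix))              # fixed spatial patterns
courses = np.sign(rng.normal(size=(n_t, 2)))    # non-Gaussian time courses
csd = courses @ maps + 0.1 * rng.normal(size=(n_t, n_pix))

ica = FastICA(n_components=2, random_state=0)
est_courses = ica.fit_transform(csd)            # (time, components)
est_maps = ica.mixing_.T.reshape((2,) + grid)   # components as 3-D maps
print(est_courses.shape, est_maps.shape)
```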

  6. Ab initio investigation of the thermal decomposition of n-butylcyclohexane.

    PubMed

    Ali, Mohamad Akbar; Dillstrom, V Tyler; Lai, Jason Y W; Violi, Angela

    2014-02-13

    Environmental and energy security concerns have motivated an increased focus on developing clean, efficient combustors, which increasingly relies on insight into the combustion chemistry of fuels. In particular, naphthenes (cycloalkanes and alkylcycloalkanes) are important chemical components of distillate fuels, such as diesel and jet fuels. As such, there is a growing interest in describing naphthene reactivity with kinetic mechanisms. Use of these mechanisms in predictive combustion models aids in the development of combustors. This study focuses on the pyrolysis of n-butylcyclohexane (n-BCH), an important representative of naphthenes in jet fuels. Seven different unimolecular decomposition pathways of C-C bond fission were explored using ab initio/DFT methods. Accurate reaction energies were computed using the high-level quantum composite G3B3 method. Variational transition state theory and Rice-Ramsperger-Kassel-Marcus/master equation simulations provided temperature- and pressure-dependent rate constants. Implementation of these pathways into an existing chemical kinetic mechanism improved the prediction of experimental OH radical and H2O speciation in shock tube oxidation. Simulations of this combustion showed a change in the expected decomposition chemistry of n-BCH, predicting increased production of cyclic alkyl radicals instead of straight-chain alkenes. The most prominent reaction pathway for the decomposition of n-BCH is n-BCH = C3H7 + C7H13. The results of this study provide insight into the combustion of n-BCH and will aid in the future development of naphthene kinetic mechanisms.

  7. Comparative study of activated carbon, natural zeolite, and green sand supports for CuOX and ZnO sites as ozone decomposition catalyst

    NASA Astrophysics Data System (ADS)

    Azhariyah, A. S.; Pradyasti, A.; Dianty, A. G.; Bismo, S.

    2018-03-01

    This research was motivated by ozone decomposition in industrial environments. Ozone is harmful to humans, so catalysts were prepared for a mask filter to decompose it. Catalyst supports were compared using Granular Activated Carbon (GAC), Natural Zeolite (NZ), and Green Sand (GS). GAC showed the highest catalytic activity among the supports, with a conversion of 98%, while the conversion using NZ was only 77% and that using GS just 27%. GAC had the highest catalytic activity because it had the largest pore volume, 0.478 cm3/g, and was therefore used as the catalyst support. To achieve higher conversion in ozone decomposition, GAC was impregnated with a metal oxide as the active site of the catalyst, comparing CuOX and ZnO. Morphology, composition, and crystal phase were analyzed using SEM-EDX, XRF, and XRD methods. The mask filter, which contained the catalysts for ozone decomposition, was tested using a fixed bed reactor at room temperature and atmospheric pressure. The conversion was analyzed using the iodometric method. CuOX/GAC and ZnO/GAC at 2%-w showed the highest catalytic activity, with conversion reaching 100%. In the durability test, CuOX/GAC 2%-w was better than ZnO/GAC 2%-w: its conversion of ozone to oxygen reached 100%, with the lowest conversion being 70% over eight hours.

  8. Progressive Precision Surface Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Duchaineau, M; Joy, KJ

    2002-01-11

    We introduce a novel wavelet decomposition algorithm that makes a number of powerful new surface design operations practical. Wavelets, and hierarchical representations generally, have held promise to facilitate a variety of design tasks in a unified way by approximating results very precisely, thus avoiding a proliferation of undergirding mathematical representations. However, traditional wavelet decomposition is defined from fine to coarse resolution, thus limiting its efficiency for highly precise surface manipulation when attempting to create new non-local editing methods. Our key contribution is the progressive wavelet decomposition algorithm, a general-purpose coarse-to-fine method for hierarchical fitting, based in this paper on an underlying multiresolution representation called dyadic splines. The algorithm requests input via a generic interval query mechanism, allowing a wide variety of non-local operations to be quickly implemented. The algorithm performs work proportionate to the tiny compressed output size, rather than to some arbitrarily high resolution that would otherwise be required, thus increasing performance by several orders of magnitude. We describe several design operations that are made tractable because of the progressive decomposition. Free-form pasting is a generalization of the traditional control-mesh edit, but for which the shape of the change is completely general and where the shape can be placed using a free-form deformation within the surface domain. Smoothing and roughening operations are enhanced so that an arbitrary loop in the domain specifies the area of effect. Finally, the sculpting effect of moving a tool shape along a path is simulated.
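
    The coarse-to-fine idea can be caricatured in one dimension: refine dyadic intervals only where the current approximation still fails an error query, so work concentrates where detail exists. This is a schematic analogue under that assumption, not the paper's dyadic-spline algorithm.

```python
import numpy as np

# Toy coarse-to-fine refinement: subdivide dyadic intervals only where linear
# interpolation still misses the target function.
def f(x):
    return np.sin(8.0 * x) if x > 0.5 else 0.0          # localized detail

def refine(a, b, tol, depth=0, max_depth=12):
    xm = 0.5 * (a + b)
    err = abs(f(xm) - 0.5 * (f(a) + f(b)))              # midpoint interval query
    if err < tol or depth >= max_depth:
        return [(a, b)]
    return refine(a, xm, tol, depth + 1) + refine(xm, b, tol, depth + 1)

segments = refine(0.0, 1.0, tol=1e-3)
left = sum(1 for a, b in segments if b <= 0.5)
print(len(segments), "leaf intervals;", left, "on the smooth half")
```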

  9. Performance Comparison of Superresolution Array Processing Algorithms. Revised

    DTIC Science & Technology

    1998-06-15

    plane waves is finite is the MUSIC algorithm [16]. MUSIC, which denotes Multiple Signal Classification, is an extension of the method of Pisarenko [18... MUSIC is but one member of a class of methods based upon the decomposition of covariance data into eigenvectors and eigenvalues. Such techniques... techniques relative to the classical methods; however, results for MUSIC are included in this report. All of the techniques reviewed have application to
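
    For reference, a minimal MUSIC pseudospectrum for a uniform linear array, built exactly from the eigenvector/eigenvalue decomposition of covariance data the snippet mentions; the scenario (8 sensors, two plane waves) is illustrative.

```python
import numpy as np

# MUSIC: eigendecompose the sample covariance, keep the noise subspace, and
# scan steering vectors over candidate directions of arrival.
rng = np.random.default_rng(0)
m, snapshots = 8, 200
doas = np.deg2rad([-20.0, 35.0])
steer = lambda th: np.exp(1j * np.pi * np.arange(m) * np.sin(th))  # d = lambda/2

S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
X = np.stack([steer(th) for th in doas], axis=1) @ S
X += 0.1 * (rng.normal(size=X.shape) + 1j * rng.normal(size=X.shape))

R = X @ X.conj().T / snapshots
w, V = np.linalg.eigh(R)                  # eigenvalues in ascending order
En = V[:, : m - 2]                        # noise-subspace eigenvectors
grid = np.deg2rad(np.linspace(-90, 90, 721))
p = np.array([1.0 / np.linalg.norm(En.conj().T @ steer(th)) ** 2 for th in grid])
peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
best = sorted(peaks, key=lambda i: p[i])[-2:]
print(np.rad2deg(grid[best]))             # should sit near -20 and 35 degrees
```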

  10. Comparative evaluation of thermal decomposition behavior and thermal stability of powdered ammonium nitrate under different atmosphere conditions.

    PubMed

    Yang, Man; Chen, Xianfeng; Wang, Yujie; Yuan, Bihe; Niu, Yi; Zhang, Ying; Liao, Ruoyu; Zhang, Zumin

    2017-09-05

    In order to analyze the thermal decomposition characteristics of ammonium nitrate (AN), its thermal behavior and stability under different conditions are studied, including different atmospheres, heating rates and gas flow rates. The evolved decomposition gases of AN in air and nitrogen are analyzed with a quadrupole mass spectrometer. The thermal stability of AN at different heating rates and gas flow rates is studied by differential scanning calorimetry, thermogravimetric analysis, the paired comparison method and safety parameter evaluation. Experimental results show that the major evolved decomposition gases in air are H2O, NH3, N2O, NO, NO2 and HNO3, while in nitrogen H2O, NH3, NO and HNO3 are the major components. Compared with a nitrogen atmosphere, lower initial and end temperatures, higher heat flux and a broader reaction temperature range are obtained in air. Meanwhile, a higher air flow rate tends to lower the reaction temperature and to reduce the thermal stability of AN. The self-accelerating decomposition temperature of AN in air is much lower than that in nitrogen. The thermostability of AN is thus influenced by atmosphere, heating rate and gas flow rate, so changes in boundary conditions will influence its thermostability; this understanding is helpful for safe production, storage, transportation and utilization.

  12. Self-Attractive Hartree Decomposition: Partitioning Electron Density into Smooth Localized Fragments.

    PubMed

    Zhu, Tianyu; de Silva, Piotr; Van Voorhis, Troy

    2018-01-09

    Chemical bonding plays a central role in the description and understanding of chemistry. Many methods have been proposed to extract information about bonding from quantum chemical calculations, the majority of them resorting to molecular orbitals as basic descriptors. Here, we present a method called self-attractive Hartree (SAH) decomposition to unravel pairs of electrons directly from the electron density, which unlike molecular orbitals is a well-defined observable that can be accessed experimentally. The key idea is to partition the density into a sum of one-electron fragments that simultaneously maximize the self-repulsion and maintain regular shapes. This leads to a set of rather unusual equations in which every electron experiences self-attractive Hartree potential in addition to an external potential common for all the electrons. The resulting symmetry breaking and localization are surprisingly consistent with chemical intuition. SAH decomposition is also shown to be effective in visualization of single/multiple bonds, lone pairs, and unusual bonds due to the smooth nature of fragment densities. Furthermore, we demonstrate that it can be used to identify specific chemical bonds in molecular complexes and provides a simple and accurate electrostatic model of hydrogen bonding.

  13. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE PAGES

    Garrett, C. Kristopher; Hauck, Cory D.

    2018-04-05

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.
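
    The algebraic pattern behind the boundary-only system can be sketched with a Schur complement: interior unknowns are eliminated subdomain by subdomain (the role the sweeps play), and GMRES is applied to the smaller interface system. The matrices below are random stand-ins, not a Vlasov discretization.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
ni, nb = 400, 40                                   # interior / interface sizes
Aii = np.eye(ni) + 0.01 * rng.normal(size=(ni, ni))
Aib = 0.01 * rng.normal(size=(ni, nb))
Abi = 0.01 * rng.normal(size=(nb, ni))
Abb = np.eye(nb)
fi, fb = rng.normal(size=ni), rng.normal(size=nb)

def schur_mv(xb):
    # Apply S = Abb - Abi Aii^{-1} Aib; each apply solves the interiors once.
    return Abb @ xb - Abi @ np.linalg.solve(Aii, Aib @ xb)

S = LinearOperator((nb, nb), matvec=schur_mv)
rhs = fb - Abi @ np.linalg.solve(Aii, fi)
xb, info = gmres(S, rhs)                           # small interface solve
xi = np.linalg.solve(Aii, fi - Aib @ xb)           # recover interior unknowns
print("GMRES converged:", info == 0)
```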

  14. Computationally Efficient Adaptive Beamformer for Ultrasound Imaging Based on QR Decomposition.

    PubMed

    Park, Jongin; Wi, Seok-Min; Lee, Jin S

    2016-02-01

    Adaptive beamforming methods for ultrasound imaging have been studied to improve image resolution and contrast. The most common approach is the minimum variance (MV) beamformer, which minimizes the power of the beamformed output while maintaining the response from the direction of interest constant. The method achieves higher resolution and better contrast than the delay-and-sum (DAS) beamformer, but it suffers from high computational cost. This cost is mainly due to the computation of the spatial covariance matrix and its inverse, which requires O(L^3) computations, where L denotes the subarray size. In this study, we propose a computationally efficient MV beamformer based on QR decomposition. The idea behind our approach is to transform the spatial covariance matrix into a scalar matrix σI, and we subsequently obtain the apodization weights and the beamformed output without computing the matrix inverse. To do this, a QR decomposition algorithm is used, which can be executed at low cost; the computational complexity is therefore reduced to O(L^2). In addition, our approach is mathematically equivalent to the conventional MV beamformer, thereby showing equivalent performance. The simulation and experimental results support the validity of our approach.
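
    A hedged sketch of the computation being accelerated: classical MV (Capon) weights via a covariance solve, and the same solve routed through a QR factorization (an orthogonal factor plus a triangular back-substitution). This demonstrates the equivalence on toy data, not the authors' specific O(L^2) algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
L, K = 16, 64                       # subarray size, snapshots
X = rng.normal(size=(L, K))         # delayed channel data (toy)
a = np.ones(L)                      # steering vector after delay alignment
R = X @ X.T / K + 1e-3 * np.eye(L)  # diagonally loaded spatial covariance

# Classical MV weights: w = R^{-1} a / (a^T R^{-1} a).
Ria = np.linalg.solve(R, a)
w_mv = Ria / (a @ Ria)

# Same solve through QR: R = Q T with T upper triangular, so R x = a becomes
# the triangular system T x = Q^T a.
Q, T = np.linalg.qr(R)
x = np.linalg.solve(T, Q.T @ a)
w_qr = x / (a @ x)
print(np.allclose(w_mv, w_qr))      # the two routes agree
```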

  15. Spectral decomposition of nonlinear systems with memory

    NASA Astrophysics Data System (ADS)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
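
    For concreteness, the one-parameter Mittag-Leffler function that the record identifies as the average modal profile can be evaluated by its defining series E_α(z) = Σ_k z^k / Γ(αk + 1); the truncated series below is adequate only for small arguments, and robust evaluation for large ones needs other algorithms.

```python
import numpy as np
from math import gamma

def mittag_leffler(z, alpha, n_terms=100):
    """Truncated defining series of E_alpha(z)."""
    return sum(z**k / gamma(alpha * k + 1) for k in range(n_terms))

t = np.linspace(0, 2, 5)
for alpha in (0.5, 1.0):
    vals = [mittag_leffler(-tau**alpha, alpha) for tau in t]
    print(alpha, np.round(vals, 4))   # alpha = 1 recovers exp(-t)
```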

  16. Model-based multiple patterning layout decomposition

    NASA Astrophysics Data System (ADS)

    Guo, Daifeng; Tian, Haitong; Du, Yuelin; Wong, Martin D. F.

    2015-10-01

    As one of the most promising next-generation lithography technologies, multiple patterning lithography (MPL) plays an important role in the attempts to keep pace with the 10 nm technology node and beyond. As feature sizes keep shrinking, it has become impossible to print dense layouts within one single exposure. As a result, MPL such as double patterning lithography (DPL) and triple patterning lithography (TPL) has been widely adopted. There is a large volume of literature on DPL/TPL layout decomposition, and the current approach is to formulate the problem as a classical graph-coloring problem: layout features (polygons) are represented by vertices in a graph G, and there is an edge between two vertices if and only if the distance between the two corresponding features is less than a minimum distance threshold value dmin. The problem is to color the vertices of G using k colors (k = 2 for DPL, k = 3 for TPL) such that no two vertices connected by an edge are given the same color. This is a rule-based approach, which imposes a geometric distance as a minimum constraint to simply decompose polygons within the distance into different masks. It is not desirable in practice because this criterion cannot completely capture the behavior of the optics. For example, it lacks sufficient information such as the optical source characteristics and the effects between the polygons outside the minimum distance. To remedy the deficiency, a model-based layout decomposition approach that bases the decomposition criteria on simulation results was first introduced at SPIE 2013.1 However, that algorithm1 is based on simplified assumptions about the optical simulation model, and therefore its usage on real layouts is limited. Recently AMSL2 also proposed a model-based approach to layout decomposition by iteratively simulating the layout, which requires excessive computational resources and may lead to sub-optimal solutions. That approach2 also potentially generates too many stitches. In this paper, we propose a model-based MPL layout decomposition method using a pre-simulated library of frequent layout patterns. Instead of using the graph G in the standard graph-coloring formulation, we build an expanded graph H where each vertex represents a group of adjacent features together with a coloring solution. By utilizing the library and running sophisticated graph algorithms on H, our approach can obtain optimal decomposition results efficiently. Our model-based solution can achieve a practical mask design which significantly improves the lithography quality on the wafer compared to rule-based decomposition.
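
    The rule-based baseline the paper argues against reduces to graph coloring, sketched below with networkx: connect features closer than dmin and color the conflict graph, each color being one mask. Greedy coloring stands in for the exact coloring used in practice, and the feature coordinates are made up.

```python
import itertools
import math
import networkx as nx

# Conflict graph: edge between any two features closer than d_min.
features = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (2.0, 0.0),
            3: (0.6, 0.8), 4: (5.0, 5.0)}
d_min = 1.2

G = nx.Graph()
G.add_nodes_from(features)
for i, j in itertools.combinations(features, 2):
    if math.dist(features[i], features[j]) < d_min:
        G.add_edge(i, j)

coloring = nx.greedy_color(G, strategy="largest_first")
print(coloring, "->", max(coloring.values()) + 1, "masks")
```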

  17. Characterization of Thermo-Physical Properties of EVA/ATH: Application to Gasification Experiments and Pyrolysis Modeling.

    PubMed

    Girardin, Bertrand; Fontaine, Gaëlle; Duquesne, Sophie; Försth, Michael; Bourbigot, Serge

    2015-11-20

    The pyrolysis of solid polymeric materials is a complex process that involves both chemical and physical phenomena such as phase transitions, chemical reactions, heat transfer, and mass transport of gaseous components. For modeling purposes, it is important to characterize and to quantify the properties driving those phenomena, especially in the case of flame-retarded materials. In this study, protocols have been developed to characterize the thermal conductivity and the heat capacity of an ethylene-vinyl acetate copolymer (EVA) flame retarded with aluminum tri-hydroxide (ATH). These properties were measured for the various species identified across the decomposition of the material. Namely, the thermal conductivity was found to decrease as a function of temperature before decomposition whereas the ceramic residue obtained after the decomposition at the steady state exhibits a thermal conductivity as low as 0.2 W/m/K. The heat capacity of the material was also investigated using both isothermal modulated Differential Scanning Calorimetry (DSC) and the standard method (ASTM E1269). It was shown that the final residue exhibits a similar behavior to alumina, which is consistent with the decomposition pathway of EVA/ATH. Besides, the two experimental approaches give similar results over the whole range of temperatures. Moreover, the optical properties before decomposition and the heat capacity of the decomposition gases were also analyzed. Those properties were then used as input data for a pyrolysis model in order to predict gasification experiments. Mass losses of gasification experiments were well predicted, thus validating the characterization of the thermo-physical properties of the material.

  19. Crowdsourcing data on decomposition with the help of schools - Tea4Science

    NASA Astrophysics Data System (ADS)

    Lehtinen, Taru; Dingemans, Bas J. J.; Keuskamp, Joost A.; Hefting, Mariet M.; Sarneel, Judith M.

    2015-04-01

    Decay of organic material, decomposition, is a critical process for life on earth. Through decomposition, food becomes available for plants and soil organisms, which they use in their growth and maintenance. When plant material decomposes, it loses weight and releases the greenhouse gas carbon dioxide (CO2) into the atmosphere. Commercial nylon teabags containing plant material can provide vital information on the global carbon cycle if we study their decomposition in soils. Terrestrial soils contain three times more carbon than the atmosphere, and therefore changes in the balance of soil carbon storage and release can significantly amplify or attenuate global warming. Many factors affecting the global carbon cycle are already known and archived; however, an index for decomposition rate is still missing. It would be a great improvement if we could measure decomposition (rate and degree) globally instead of estimating it from small-scale experiments and lab incubations. We developed a cost-effective and standardised method to investigate decomposition rate and carbon stabilisation by using commercially available teabags as standardised test-kits for simplified litter bag experiments. In order to make it easy for schools to take part through crowdsourcing (i.e. volunteer-assisted data collection by means of Internet applications), a lesson plan has been written for teachers. The resulting Tea Bag Index (TBI) provides process-driven information on soil functions at local, regional and global scales essential for future climate modelling, and it is sensitive enough to discriminate between different ecosystems and soil types. The lesson plan will enable students to understand the concept of decomposition and its relevance for soil fertility and our climate. TBI requires only little means and knowledge, making data collection by crowdsourcing possible. Successful results have already been attained by scout groups in Austria. Engaging school classes as co-researchers would enlarge the crowdsourcing potential of the TBI. Subsequently, it will increase awareness of soils and provide essential development in including soils more frequently in natural sciences and environmental classes at schools. The numerous data points collected will allow for a great leap forward in mapping decomposition, as well as in understanding and modelling the global carbon cycle.
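
    A small sketch of the index computation, with constants and formulas following the published TBI protocol (hydrolysable fractions of 0.842 for green tea and 0.552 for rooibos); the field mass fractions below are invented for illustration, not measured values.

```python
import math

H_g, H_r = 0.842, 0.552          # hydrolysable fractions: green tea, rooibos
t = 90.0                         # incubation time, days

green_mass_remaining = 0.40      # fraction of initial green tea mass left (made up)
rooibos_mass_remaining = 0.75    # fraction of initial rooibos mass left (made up)

a_g = 1.0 - green_mass_remaining          # decomposed fraction of green tea
S = 1.0 - a_g / H_g                       # stabilisation factor
a_r = H_r * (1.0 - S)                     # predicted labile fraction of rooibos
# Rooibos mass follows W(t) = a_r * exp(-k t) + (1 - a_r); solve for k.
k = -math.log((rooibos_mass_remaining - (1.0 - a_r)) / a_r) / t
print(f"S = {S:.3f}, k = {k:.4f} per day")
```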

  20. Carbon dioxide emissions from the electricity sector in major countries: a decomposition analysis.

    PubMed

    Li, Xiangzheng; Liao, Hua; Du, Yun-Fei; Wang, Ce; Wang, Jin-Wei; Liu, Yanan

    2018-03-01

    The electric power sector is one of the primary sources of CO2 emissions. Analyzing the influential factors that result in CO2 emissions from the power sector would provide valuable information to reduce the world's CO2 emissions. Herein, we applied the Divisia decomposition method to analyze the influential factors for CO2 emissions from the power sector in 11 countries, which account for 67% of the world's emissions, from 1990 to 2013. We decompose the influential factors for CO2 emissions into seven factors: the emission coefficient, energy intensity, the share of electricity generation, the share of thermal power generation, electricity intensity, economic activity, and population. The decomposition analysis results show that economic activity, population, and the emission coefficient have positive roles in increasing CO2 emissions, with contribution rates of 119, 23.9, and 0.5%, respectively. Energy intensity, electricity intensity, the share of electricity generation, and the share of thermal power generation curb CO2 emissions, with contribution rates of 17.2, 15.7, 7.7, and 2.8%, respectively. Through decomposition analysis for each country, economic activity and population are the major factors responsible for increasing CO2 emissions from the power sector. However, the other factors in developed countries can offset the growth in CO2 emissions due to economic activity.
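
    The Divisia (LMDI) mechanics can be shown with a two-factor toy example, where the change in emissions C = Q · (C/Q) splits exactly into activity and intensity effects via the logarithmic mean; the paper's seven-factor decomposition follows the same pattern. Numbers are illustrative.

```python
import numpy as np

def log_mean(a, b):
    """Logarithmic mean, the LMDI weight."""
    return a if a == b else (a - b) / (np.log(a) - np.log(b))

Q0, Q1 = 100.0, 140.0              # economic activity
C0, C1 = 50.0, 60.0                # CO2 emissions
I0, I1 = C0 / Q0, C1 / Q1          # emission intensity

w = log_mean(C1, C0)
d_activity = w * np.log(Q1 / Q0)   # activity effect
d_intensity = w * np.log(I1 / I0)  # intensity effect
print(d_activity + d_intensity, "=", C1 - C0)  # effects sum exactly to the change
```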

  1. The Chemical Decomposition of 5-aza-2′-deoxycytidine (Decitabine): Kinetic Analyses and Identification of Products by NMR, HPLC, and Mass Spectrometry

    PubMed Central

    Rogstad, Daniel K.; Herring, Jason L.; Theruvathu, Jacob A.; Burdzy, Artur; Perry, Christopher C.; Neidigh, Jonathan W.; Sowers, Lawrence C.

    2014-01-01

    The nucleoside analog 5-aza-2′-deoxycytidine (Decitabine, DAC) is one of several drugs in clinical use that inhibit DNA methyltransferases, leading to a decrease of 5-methylcytosine in newly replicated DNA and subsequent transcriptional activation of genes silenced by cytosine methylation. In addition to methyltransferase inhibition, DAC has demonstrated toxicity and potential mutagenicity, and can induce a DNA-repair response. The mechanisms accounting for these events are not well understood. DAC is chemically unstable in aqueous solutions, but there is little consensus between previous reports as to its half-life and corresponding products of decomposition at physiological temperature and pH, potentially confounding studies on its mechanism of action and long-term use in humans. Here we have employed a battery of analytical methods to estimate kinetic rates and to characterize DAC decomposition products under conditions of physiological temperature and pH. Our results indicate that DAC decomposes into a plethora of products, formed by hydrolytic opening and deformylation of the triazine ring, in addition to anomerization and possibly other changes in the sugar ring structure. We also discuss the advantages and problems associated with each analytical method used. The results reported here will facilitate ongoing studies and clinical trials aimed at understanding the mechanisms of action, toxicity, and possible mutagenicity of DAC and related analogs. PMID:19480391
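
    The kind of kinetic analysis described can be sketched as a pseudo-first-order fit to remaining-drug measurements; the time points and fractions below are hypothetical stand-ins for HPLC data, not the paper's results.

```python
import numpy as np

# Fit ln(C/C0) = -k t and report the rate constant and half-life.
t_h = np.array([0.0, 2.0, 4.0, 8.0, 16.0, 24.0])       # hours (hypothetical)
frac = np.array([1.00, 0.86, 0.74, 0.55, 0.31, 0.17])  # fraction remaining

k = -np.polyfit(t_h, np.log(frac), 1)[0]               # slope of ln C vs t
print(f"k = {k:.4f} 1/h, half-life = {np.log(2) / k:.1f} h")
```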

  3. Motion magnification using the Hermite transform

    NASA Astrophysics Data System (ADS)

    Brieva, Jorge; Moya-Albor, Ernesto; Gomez-Coronel, Sandra L.; Escalante-Ramírez, Boris; Ponce, Hiram; Mora Esquivel, Juan I.

    2015-12-01

    We present an Eulerian motion magnification technique with a spatial decomposition based on the Hermite Transform (HT). We compare our results to the approach presented in Ref. 1. We test our method on a sequence of the breathing of a newborn baby and on an MRI left-ventricle sequence. The methods are compared using quantitative and qualitative metrics after applying the motion magnification algorithm.
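
    An Eulerian magnification pipeline in miniature: spatially decompose each frame, temporally band-pass each pixel's time series, amplify, and add back. A Gaussian blur stands in here for the Hermite-transform spatial decomposition, so this is schematic only; the video and band settings are arbitrary.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)
frames = rng.normal(size=(120, 32, 32))               # toy video (time, y, x)
spatial = np.stack([gaussian_filter(f, sigma=2.0) for f in frames])

fs, lo, hi, gain = 30.0, 0.4, 3.0, 10.0               # breathing band, Hz
b, a = butter(2, [lo / (fs / 2), hi / (fs / 2)], btype="band")
band = filtfilt(b, a, spatial, axis=0)                # temporal band-pass per pixel
magnified = frames + gain * band                      # amplify and add back
print(magnified.shape)
```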

  4. Motor current signature analysis for gearbox condition monitoring under transient speeds using wavelet analysis and dual-level time synchronous averaging

    NASA Astrophysics Data System (ADS)

    Bravo-Imaz, Inaki; Davari Ardakani, Hossein; Liu, Zongchang; García-Arribas, Alfredo; Arnaiz, Aitor; Lee, Jay

    2017-09-01

    This paper focuses on analyzing the motor current signature for fault diagnosis of gearboxes operating under transient speed regimes. Two different strategies are evaluated, extensively tested and compared for analyzing the motor current signature in order to implement a condition monitoring system for gearboxes in industrial machinery. A specially designed test bench, thoroughly monitored to fully characterize the experiments, is used to test gears in different health states. The measured signals are analyzed using discrete wavelet decomposition at different decomposition levels using a range of mother wavelets. Moreover, a dual-level time synchronous averaging analysis is performed on the same signals to compare the performance of the two methods. From both analyses, the relevant features of the signals are extracted and cataloged using a self-organizing map, which allows for easy detection and classification of the diverse health states of the gears. The results demonstrate the effectiveness of both methods for diagnosing gearbox faults, with slightly better performance observed for the dual-level time synchronous averaging method. Based on the obtained results, the proposed methods can be used as effective and reliable condition monitoring procedures for gearboxes using only the motor current signature.
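
    A sketch of the wavelet feature-extraction stage, assuming PyWavelets: decompose the current signal and summarize each level by its normalized energy, producing the kind of feature vector that can feed a self-organizing map. The signal and wavelet choice are illustrative.

```python
import numpy as np
import pywt

fs = 10_000
t = np.arange(0, 1, 1 / fs)
current = (np.sin(2 * np.pi * 50 * t)                  # 50 Hz supply component
           + 0.05 * np.random.default_rng(0).normal(size=t.size))

coeffs = pywt.wavedec(current, "db8", level=5)
features = np.array([np.sum(c**2) for c in coeffs])    # energy per level
features /= features.sum()                             # normalized energy signature
print(np.round(features, 4))
```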

  5. Study of a two-dimension transient heat propagation in cylindrical coordinates by means of two finite difference methods

    NASA Astrophysics Data System (ADS)

    Dumencu, A.; Horbaniuc, B.; Dumitraşcu, G.

    2016-08-01

    The analytical approach to unsteady conduction heat transfer under actual conditions represents a very difficult (if not insurmountable) problem, due to the issues related to finding analytical solutions of the conduction heat transfer equation. Various techniques have been developed to overcome these difficulties, among them the alternating-directions method and the decomposition method. Both are particularly suited for two-dimensional heat propagation. The paper applies both techniques in order to verify whether the results they provide are in good accordance. The studied case consists of a long hollow cylinder, and considers a time-dependent temperature field that varies in both the radial and the axial directions. The implicit technique is used in both methods and involves simultaneously solving a set of equations for all of the nodes at each time step, successively for each of the two directions. Gauss elimination is used to obtain the solution of the set, representing the nodal temperatures. After applying the two techniques, the results show very good agreement, and since the decomposition method is easier to use in terms of computer code and running time, it seems the more recommendable of the two.
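
    Each implicit sub-step in such schemes reduces to a tridiagonal system along one grid direction; the sketch below uses the Thomas algorithm, the tridiagonal specialization of the Gauss elimination mentioned above. The node count and Fourier number are arbitrary, and this is not the authors' code.

```python
import numpy as np

def thomas(lower, diag, upper, rhs):
    """Solve a tridiagonal system by forward elimination and back substitution."""
    n = len(rhs)
    d, r = diag.astype(float).copy(), rhs.astype(float).copy()
    for i in range(1, n):                 # forward elimination
        m = lower[i - 1] / d[i - 1]
        d[i] -= m * upper[i - 1]
        r[i] -= m * r[i - 1]
    x = np.empty(n)
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = (r[i] - upper[i] * x[i + 1]) / d[i]
    return x

# One implicit step of the 1-D heat equation with Fourier number 0.5.
n, Fo = 6, 0.5
T_old = np.linspace(100.0, 20.0, n)       # previous nodal temperatures
lower = np.full(n - 1, -Fo)
upper = np.full(n - 1, -Fo)
diag = np.full(n, 1 + 2 * Fo)
print(thomas(lower, diag, upper, T_old))
```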

  6. Convective heat transfer for a gaseous slip flow in micropipe and parallel-plate microchannel with uniform wall heat flux: effect of axial heat conduction

    NASA Astrophysics Data System (ADS)

    Haddout, Y.; Essaghir, E.; Oubarra, A.; Lahjomri, J.

    2017-12-01

    Thermally developing laminar slip flow through a micropipe and a parallel-plate microchannel, with axial heat conduction and uniform wall heat flux, is studied analytically by using the powerful method of self-adjoint formalism. This method results from a decomposition of the elliptic energy equation into a system of two first-order partial differential equations. The advantage of this method over others resides in the fact that the decomposition procedure leads to a self-adjoint problem, although the initial problem is apparently not self-adjoint. The solution extends prior studies and considers first-order slip boundary conditions at the fluid-wall interface. The analytical expressions for the developing temperature and the local Nusselt number in the thermal entrance region are obtained in the general case. The solution obtained could therefore be extended easily to any hydrodynamically developed flow and arbitrary heat flux distribution. The analytical results are compared, for selected simplified cases, with available numerical calculations, and they agree. The results show that the heat transfer characteristics of the flow in the thermal entrance region are strongly influenced by axial heat conduction and rarefaction effects, which are characterized by the Péclet and Knudsen numbers, respectively.

  8. Effect of water level drawdown on decomposition in boreal peatlands

    NASA Astrophysics Data System (ADS)

    Straková, Petra; Penttilä, Timo; Laiho, Raija

    2010-05-01

    Plant litter production and decomposition are key processes in element cycling in most ecosystems. In peatlands, there has been a long-term imbalance between litter production and decay caused by high water levels (WL) and consequent anoxia. This has resulted in peatlands being a significant sink of carbon (C) from the atmosphere. However, peatlands are experiencing both "natural" (global climate change) and anthropogenic (ditching) changes that threaten their ability to retain this ecosystem identity and function. Many of these alterations can be traced back to WL drawdown, which can cause increased aeration, higher acidity, falling temperatures, and a greater probability of drought. Such changes are also associated with an increasing decomposition rate, and therefore a greater amount of C released back to the atmosphere. Yet studies of how the overall C balance of peatlands will be affected have come to conflicting conclusions, suggesting that the C store could increase, decrease, or remain static. A factor that has been largely overlooked is the change in litter type composition following persistent WL drawdown; it is the aim of our study to help resolve this issue. We studied the effects of short-term (ca. 4 years) and long-term (ca. 40 years) persistent WL drawdown on the decomposition of numerous types of above-ground and below-ground plant litter at three boreal peatland sites: a bog, an oligotrophic fen and a mesotrophic fen. We thus believe that enough permutations have been created to obtain a good assessment of how each factor (site nutrient level, WL regime, and litter type composition) influences decomposition. We used the litter bag method to measure decomposition rates: measured amounts of plant litter, or cellulose strips as a control, were placed into closed mesh bags, which were installed in the natural environment for varying amounts of time for each litter type. Following litter bag recovery, the litter was cleaned of excess debris and analyzed for changes in mass, enzyme activity, mesofauna presence, and microbial community composition, among other things. The experiment has a run-time of ten years; the results from the first two years are presented in the poster.

  9. An efficient computational approach to model statistical correlations in photon counting x-ray detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faby, Sebastian; Maier, Joscha; Sawall, Stefan

    2016-07-15

    Purpose: To introduce and evaluate an increment matrix approach (IMA) describing the signal statistics of energy-selective photon counting detectors, including spatial–spectral correlations between energy bins of neighboring detector pixels. The importance of the occurring correlations for image-based material decomposition is studied. Methods: An IMA describing the counter increase patterns in a photon counting detector is proposed. By pursuing an approach based on convolutions, the IMA has the potential to decrease the number of required random numbers compared to Monte Carlo simulations. To validate and demonstrate the IMA, an approximate semirealistic detector model is provided, simulating a photon counting detector in a simplified manner, e.g., by neglecting count-rate-dependent effects. In this way, the spatial–spectral correlations on the detector level are obtained and fed into the IMA. The importance of these correlations in reconstructed energy bin images, and the corresponding detector performance in image-based material decomposition, is evaluated using a statistically optimal decomposition algorithm. Results: The results of the IMA together with the semirealistic detector model were compared to other models and measurements using the spectral response and the energy bin sensitivity, with good agreement. Correlations between the different reconstructed energy bin images were observed but turned out to be weak, and they proved not relevant for image-based material decomposition. An even simpler simulation procedure based on the energy bin sensitivity was tested instead and yielded similar results for the image-based material decomposition task, as long as the fact that one incident photon can increase multiple counters across neighboring detector pixels is taken into account. Conclusions: The IMA is computationally efficient, requiring about 10^2 random numbers per ray incident on a detector pixel instead of the estimated 10^8 random numbers per ray that Monte Carlo approaches would need. The spatial–spectral correlations as described by the IMA are not important for the studied image-based material decomposition task. Respecting the absolute photon counts, and thus the multiple counter increases by a single x-ray photon, the same material decomposition performance could be obtained with a simpler detector description using the energy bin sensitivity.
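
    The computational trick of replacing per-photon random sampling with convolutions of a single-photon increment distribution can be sketched in a toy setting (an illustration of the general idea, not the published IMA):

        # Toy convolution-based count statistics: if each incident photon
        # increments a counter by 0, 1, or 2 with known probabilities, the
        # distribution of total counts after N photons is the N-fold
        # convolution of the single-photon increment distribution.
        import numpy as np

        p_inc = np.array([0.15, 0.80, 0.05])  # P(increment = 0, 1, 2), assumed
        n_photons = 1000

        dist = np.array([1.0])                # delta distribution at 0 counts
        power = p_inc
        n = n_photons
        while n > 0:                          # N-fold convolution by squaring
            if n & 1:
                dist = np.convolve(dist, power)
            power = np.convolve(power, power)
            n >>= 1

        mean = np.dot(np.arange(dist.size), dist)
        print(f"mean counts = {mean:.1f}")    # ~ n_photons * E[increment] = 900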

  10. Sensitivity analysis of a model of CO2 exchange in tundra ecosystems by the adjoint method

    NASA Technical Reports Server (NTRS)

    Waelbroek, C.; Louis, J.-F.

    1995-01-01

    A model of net primary production (NPP), decomposition, and nitrogen cycling in tundra ecosystems has been developed. The adjoint technique is used to study the sensitivity of the computed annual net CO2 flux to perturbations in initial conditions, climatic inputs, and the model's main parameters describing current seasonal CO2 exchange in wet sedge tundra at Barrow, Alaska. The results show that the net CO2 flux is most sensitive to parameters characterizing litter chemical composition, and more sensitive to decomposition parameters than to NPP parameters. This underlines the fact that in nutrient-limited ecosystems, decomposition drives net CO2 exchange by controlling the mineralization of the main nutrients. The results also indicate that the short-term (1 year) response of wet sedge tundra to CO2-induced warming is a significant increase in CO2 emission, creating a positive feedback to atmospheric CO2 accumulation. However, a cloudiness increase during the same year can severely alter this response and lead to either a slight decrease or a strong increase in emitted CO2, depending on its exact timing. These results demonstrate that the adjoint method is well suited to studying systems encountering regime changes, as a single run of the adjoint model provides sensitivities of the net CO2 flux to perturbations in all parameters and variables at any time of the year. Moreover, it is shown that large errors due to the presence of thresholds can be avoided by first delimiting the range of applicability of the adjoint results.
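
    The appeal of the adjoint technique, namely sensitivities with respect to every parameter from one extra solve, is easy to demonstrate on a steady linear model (a generic sketch, not the tundra model's actual equations):

        # Generic adjoint sensitivity: for A(theta) x = b and J = c^T x,
        # dJ/dtheta_i = -lambda^T (dA/dtheta_i) x, where A^T lambda = c.
        # One forward solve plus one adjoint solve yields the gradient with
        # respect to every parameter, instead of one perturbed run each.
        import numpy as np

        rng = np.random.default_rng(0)
        n = 5
        A = np.eye(n) + 0.1 * rng.standard_normal((n, n))
        b = rng.standard_normal(n)
        c = rng.standard_normal(n)

        x = np.linalg.solve(A, b)        # forward model
        lam = np.linalg.solve(A.T, c)    # adjoint model

        dJ_dA = -np.outer(lam, x)        # sensitivity to every entry of A

        eps = 1e-6                       # finite-difference check on one entry
        A_pert = A.copy()
        A_pert[2, 3] += eps
        fd = (c @ np.linalg.solve(A_pert, b) - c @ x) / eps
        print(dJ_dA[2, 3], fd)           # should agree to ~1e-5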

  11. Examination of Treatment Methods for Cyanide Wastes.

    DTIC Science & Technology

    1979-05-15

    industry, is alkaline chlorination. This process oxidizes cyanide to cyanate, followed by complete decomposition yielding carbon dioxide and nitrogen or...decomposition yielding carbon dioxide and nitrogen, or ammonium salts, depending on final treatment methods. The major oxidizing agents that have been...2H2O (X represents a cation.) This liberates carbon dioxide and nitrogen gas as end products. Possible acid hydrolysis has been

  12. [Detection of constitutional types of EEG using the orthogonal decomposition method].

    PubMed

    Kuznetsova, S M; Kudritskaia, O V

    1987-01-01

    The authors present an algorithm for investigating brain bioelectrical activity with the help of an orthogonal decomposition procedure intended for identifying constitutional types of EEG. The method has helped to solve effectively the task of diagnosing constitutional EEG types, which are determined by varying degrees of hereditary predisposition to longevity or cerebral stroke.

  13. Catalytic performance of M@Ni (M = Fe, Ru, Ir) core-shell nanoparticles towards ammonia decomposition for COx-free hydrogen production

    NASA Astrophysics Data System (ADS)

    Chen, Xin; Zhou, Junwei; Chen, Shuangjing; Zhang, Hui

    2018-06-01

    To reduce the use of precious metals while maintaining catalytic activity for the NH3 decomposition reaction, constructing bimetallic nanoparticles with special structures is an effective approach. In this paper, using density functional theory methods, we investigated the NH3 decomposition reaction on three types of core-shell nanoparticles M@Ni (M = Fe, Ru, Ir) with 13 core M atoms and 42 shell Ni atoms; each particle is about 1 nm in size. Benefiting from alloying with Ru, the Ru@Ni core-shell nanoparticles exhibit catalytic activity comparable to that of single-metal Ru, based on analysis of the adsorption energies and the potential energy diagram of NH3 decomposition, as well as the N2 desorption process. For the Fe@Ni and Ir@Ni core-shell nanoparticles, however, the catalytic activities remain unsatisfactory compared to the active metal Ru. In addition, to further explain the synergistic effect of the bimetallic core-shell nanoparticles, the partial densities of states were also calculated. The results show that the d-band electrons provided by the core metal are the main factor affecting the entire catalytic process.

  14. Short-term forecasting of urban rail transit ridership based on ARIMA and wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Wang, Xuemei; Zhang, Ning; Chen, Ying; Zhang, Yunlong

    2018-05-01

    Due to different functions and land use types, there are significant differences in ridership patterns among urban rail transit stations. To account for these differences and to cope with the uncertain, periodic, and stochastic nature of short-term passenger flow, this paper proposes a novel hybrid methodology for short-term ridership forecasting that combines the autoregressive integrated moving average (ARIMA) model with wavelet decomposition, a technique with well-known strengths in signal processing. The seasonal ARIMA model represents the relatively stable and regular ridership patterns, while the wavelet decomposition captures the stochastic, and sometimes drastic, changes in ridership patterns. The inclusion of wavelet decomposition and reconstruction gives the hybrid model a unique strength in capturing sudden changes in ridership patterns associated with certain rail stations. The case study analyzes real ridership data from Metro Line 1 in Nanjing, China. The experimental results indicate that the hybrid method is superior to the individual ARIMA model for all ridership patterns, and is particularly advantageous in predicting ridership at stations often subject to sudden pattern changes due to special events.
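
    The general scheme of such a hybrid, decomposing the series with a discrete wavelet transform and forecasting each component with ARIMA before summing, can be sketched as follows (the wavelet family, level, and ARIMA orders are placeholders, not the paper's configuration):

        # Wavelet + ARIMA hybrid forecast sketch on a synthetic series.
        # Requires: numpy, pywavelets, statsmodels.
        import numpy as np
        import pywt
        from statsmodels.tsa.arima.model import ARIMA

        rng = np.random.default_rng(1)
        t = np.arange(256)
        ridership = 100 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, t.size)

        # Multilevel discrete wavelet decomposition (db4, level 2 assumed).
        coeffs = pywt.wavedec(ridership, "db4", level=2)

        # Rebuild one full-length component series per coefficient band.
        components = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
            components.append(pywt.waverec(kept, "db4")[: t.size])

        # Forecast each component separately and sum the forecasts.
        horizon = 24
        forecast = np.zeros(horizon)
        for comp in components:
            fit = ARIMA(comp, order=(2, 0, 1)).fit()
            forecast += fit.forecast(steps=horizon)
        print(np.round(forecast[:5], 1))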

  15. Decomposition of the Total Effect in the Presence of Multiple Mediators and Interactions.

    PubMed

    Bellavia, Andrea; Valeri, Linda

    2018-06-01

    Mediation analysis allows a total effect to be decomposed into a direct effect of the exposure on the outcome and an indirect effect operating through a number of possible hypothesized pathways. Recent studies have provided formal definitions of direct and indirect effects when multiple mediators are of interest and have described parametric and semiparametric methods for their estimation. Investigating direct and indirect effects with multiple mediators, however, can be challenging in the presence of multiple exposure-mediator and mediator-mediator interactions. In this paper we derive a decomposition of the total effect that unifies mediation and interaction when multiple mediators are present. We illustrate the properties of the proposed framework in a secondary analysis of a pragmatic trial for the treatment of schizophrenia. The decomposition is employed to investigate the interplay of side effects and psychiatric symptoms in explaining the effect of antipsychotic medication on quality of life in schizophrenia patients. Our result offers a valuable tool to identify the proportions of the total effect due to mediation and interaction when more than one mediator is present, providing the finest decomposition of the total effect that unifies multiple mediators and interactions.

  16. Pressure-induced metallization of condensed phase β-HMX under shock loadings via molecular dynamics simulations in conjunction with multi-scale shock technique.

    PubMed

    Ge, Ni-Na; Wei, Yong-Kai; Zhao, Feng; Chen, Xiang-Rong; Ji, Guang-Fu

    2014-07-01

    The electronic structure and initial decomposition of the high explosive HMX under conditions of shock loading are examined. The simulation is performed using quantum molecular dynamics in conjunction with the multi-scale shock technique (MSST). A self-consistent-charge density-functional tight-binding (SCC-DFTB) method is adopted. The results show that the N-N-C angle changes drastically under shock-wave compression along lattice vector b at a shock velocity of 11 km/s, which is the main cause of the insulator-to-metal transition in the HMX system. The metallization pressure of condensed-phase HMX (about 130 GPa) is predicted for the first time. We also detect the formation of several key products of condensed-phase HMX decomposition, such as NO2, NO, N2, N2O, H2O, CO, and CO2, all of which have been observed in previous experimental studies. Moreover, the initial decomposition products include H2, due to C-H bond breaking as a primary reaction pathway under extreme conditions, which provides new insight into the initial decomposition mechanism of HMX under shock loading at the atomistic level.

  17. Decomposition of hydroxy amino acids in foraminiferal tests; kinetics, mechanism and geochronological implications

    USGS Publications Warehouse

    Bada, J.L.; Shou, M.-Y.; Man, E.H.; Schroeder, R.A.

    1978-01-01

    The diagenesis of the hydroxy amino acids serine and threonine in foraminiferal tests has been investigated. The decomposition pathways of these amino acids are complex; the principal reactions appear to be dehydration, aldol cleavage, and decarboxylation. Stereochemical studies indicate that the α-amino-n-butyric acid (ABA) detected in foraminiferal tests is the end product of the threonine dehydration pathway. Decomposition of serine and threonine in foraminiferal tests from two well-dated Caribbean deep-sea cores, P6304-8 and -9, has been found to follow irreversible first-order kinetics. Three empirical equations were derived for the disappearance of serine and threonine and the appearance of ABA. These equations can be used as a new geochronological method for dating foraminiferal tests from other deep-sea sediments. Preliminary results suggest that ages deduced from the ABA kinetics equation are most reliable, because "species effect" and contamination problems are not important for this nonbiological amino acid. Because of the variable serine and threonine contents of modern foraminiferal species, it is likely that accurate age estimates can be obtained from the serine and threonine decomposition equations only if a homogeneous species assemblage, or a single species isolated from mixed natural assemblages, is used. © 1978.
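
    Under irreversible first-order kinetics, an age estimate follows directly from a measured concentration ratio and a calibrated rate constant; a generic sketch with placeholder values (not the paper's calibrated equations):

        # First-order kinetics dating: C(t) = C0 * exp(-k * t) gives
        # t = -ln(C/C0) / k. Both numbers below are assumed placeholders.
        import math

        k = 1.2e-6       # decay constant in 1/yr (assumed)
        ratio = 0.42     # measured C/C0 in the sample (assumed)

        age_years = -math.log(ratio) / k
        print(f"estimated age: {age_years:,.0f} yr")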

  18. Integrand-level reduction of loop amplitudes by computational algebraic geometry methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yang

    2012-09-01

    We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. The algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can readily be carried out by algebraic geometry software, using the output of the BasisDet package. The algorithm works in both D = 4 and D = 4 - 2ɛ dimensions, and we present some two- and three-loop examples of applications of this algorithm.
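
    For readers unfamiliar with the algebraic machinery, a Gröbner basis computation on a toy polynomial system looks like this (using SymPy for illustration; the paper's BasisDet package is Mathematica-based and solves a physics-specific problem):

        # Toy Groebner basis computation with SymPy.
        from sympy import groebner, symbols

        x, y, z = symbols("x y z")
        polys = [x*y - z, x**2 + y**2 - 1, y*z - x]

        gb = groebner(polys, x, y, z, order="lex")
        for g in gb.exprs:
            print(g)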

  19. Identification of faulty sensor using relative partial decomposition via independent component analysis

    NASA Astrophysics Data System (ADS)

    Wang, Z.; Quek, S. T.

    2015-07-01

    The performance of any structural health monitoring algorithm relies heavily on good measurement data. Hence, it is necessary to employ robust faulty-sensor detection approaches to isolate sensors with abnormal behaviour and exclude the highly inaccurate data from subsequent analysis. Independent component analysis (ICA) is implemented to detect the presence of sensors showing abnormal behaviour. A normalized form of the relative partial decomposition contribution (rPDC) is proposed to identify the faulty sensor. Both additive and multiplicative types of faults are addressed, and the detectability is illustrated using a numerical and an experimental example. An empirical method to establish control limits for detecting and identifying the type of fault is also proposed. The results show the effectiveness of the ICA and rPDC method in identifying faulty sensors, assuming that baseline cases are available.
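
    The flavor of an ICA-based check can be conveyed with a small sketch: fit ICA on baseline data, reconstruct test data from the estimated sources, and flag the sensor with the largest residual (this illustrates the general idea only; the paper's normalized rPDC statistic and control limits are not reproduced):

        # ICA-based faulty-sensor screening on synthetic data.
        import numpy as np
        from sklearn.decomposition import FastICA

        rng = np.random.default_rng(2)
        n_samples, n_sensors = 2000, 6
        sources = np.vstack([np.sin(0.01 * np.arange(n_samples)),
                             np.sign(np.sin(0.03 * np.arange(n_samples)))])
        mixing = rng.standard_normal((n_sensors, 2))
        X = (mixing @ sources).T + 0.05 * rng.standard_normal((n_samples, n_sensors))
        X[1200:, 3] += 2.0                    # additive fault on sensor 3

        ica = FastICA(n_components=2, random_state=0).fit(X[:1000])  # baseline
        S_test = ica.transform(X[1000:])
        X_rec = ica.inverse_transform(S_test)

        residual = np.abs(X[1000:] - X_rec).mean(axis=0)
        print("suspect sensor:", int(np.argmax(residual)))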

  20. Trading strategy based on dynamic mode decomposition: Tested in Chinese stock market

    NASA Astrophysics Data System (ADS)

    Cui, Ling-xiao; Long, Wen

    2016-11-01

    Dynamic mode decomposition (DMD) is an effective method for capturing the intrinsic dynamical modes of a complex system. In this work, we adopt the DMD method to discover evolutionary patterns in the stock market and apply it to the Chinese A-share market. We design two strategies based on the DMD algorithm. The strategy that considers only the timing problem makes reliable profits in a choppy market with no prominent trend, but fails to beat the benchmark moving-average strategy in a bull market. After incorporating the spatial information from the spatial-temporal coherent structure of the DMD modes, the trading strategy improves remarkably. The profitability of the DMD strategies is then evaluated quantitatively by performing an SPA test to correct for the data-snooping effect. The results further prove that the DMD algorithm can model market patterns well in a sideways market.
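
    The standard exact-DMD recipe (SVD of the first snapshot matrix, projection of the shift operator, eigendecomposition) is compact enough to sketch in full; the data here are synthetic, and the paper's trading-signal construction is not reproduced:

        # Minimal exact DMD on synthetic snapshot data.
        import numpy as np

        rng = np.random.default_rng(3)
        t = np.linspace(0, 10, 200)
        data = np.vstack([np.cos(2 * t), np.sin(2 * t), 0.5 * np.cos(2 * t)])
        data += 0.01 * rng.standard_normal(data.shape)

        X, Y = data[:, :-1], data[:, 1:]            # pairs x_k -> x_{k+1}
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        r = 2                                       # truncation rank (assumed)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]

        A_tilde = U.conj().T @ Y @ Vh.conj().T / s  # projected propagator
        eigvals, W = np.linalg.eig(A_tilde)
        modes = (Y @ Vh.conj().T / s) @ W           # exact DMD modes

        print("DMD eigenvalues:", np.round(eigvals, 3))  # |lambda| ~ 1 here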

  1. Analysis of network clustering behavior of the Chinese stock market

    NASA Astrophysics Data System (ADS)

    Chen, Huan; Mai, Yong; Li, Sai-Ping

    2014-11-01

    Random Matrix Theory (RMT) and the correlation matrix decomposition method are employed to analyze the spatial structure of stock interactions and collective behavior in the Shanghai and Shenzhen stock markets in China. The results show that prominent sector structures exist, with subsectors including the Real Estate (RE), Commercial Banks (CB), Pharmaceuticals (PH), Distillers & Vintners (DV), and Steel (ST) industries. Furthermore, the RE and CB subsectors are mostly anti-correlated. We further study the temporal behavior of the dataset and find that while the sector structures are relatively stable from 2007 through 2013, the correlation between real estate and commercial bank stocks shows large variations. By employing the ensemble empirical mode decomposition (EEMD) method, we show that this anti-correlation behavior is closely related to the monetary and austerity policies of the Chinese government during the period of study.
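
    In RMT analyses of this kind, the eigenvalues of the empirical correlation matrix are compared against the Marchenko-Pastur noise band; a generic sketch on synthetic returns (not the paper's dataset):

        # Compare correlation eigenvalues with the Marchenko-Pastur bounds
        # lambda_pm = (1 +/- sqrt(N/T))^2 for pure-noise correlations.
        import numpy as np

        rng = np.random.default_rng(4)
        N, T = 50, 500                                 # stocks, trading days
        returns = rng.standard_normal((T, N))
        returns += 0.3 * rng.standard_normal((T, 1))   # common "market mode"

        C = np.corrcoef(returns, rowvar=False)
        eigvals = np.linalg.eigvalsh(C)

        q = N / T
        lam_min, lam_max = (1 - np.sqrt(q))**2, (1 + np.sqrt(q))**2
        print(f"noise band: [{lam_min:.2f}, {lam_max:.2f}]")
        print("eigenvalues above band:", np.round(eigvals[eigvals > lam_max], 2))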

  2. Litter composition effects on decomposition across the litter-soil interface

    EPA Science Inventory

    Background/Question/Methods Many studies have investigated the influence of plant litter species composition on decomposition dynamics, but given the variety of communities and environments around the world, a variety of consequences of litter-mixing have been reported. Litter ...

  3. AUTONOMOUS GAUSSIAN DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lindner, Robert R.; Vera-Ciro, Carlos; Murray, Claire E.

    2015-04-15

    We present a new algorithm, named Autonomous Gaussian Decomposition (AGD), for automatically decomposing spectra into Gaussian components. AGD uses derivative spectroscopy and machine learning to provide optimized guesses for the number of Gaussian components in the data, as well as their locations, widths, and amplitudes. We test AGD and find that it produces results comparable to human-derived solutions on 21 cm absorption spectra from the 21 cm SPectral line Observations of Neutral Gas with the EVLA (21-SPONGE) survey. We use AGD with Monte Carlo methods to derive the H i line completeness as a function of peak optical depth and velocity width for the 21-SPONGE data, and also show that the results of AGD are stable against varying observational noise intensity. The autonomy and computational efficiency of the method over traditional manual Gaussian fits allow for truly unbiased comparisons between observations and simulations, and for the ability to scale up and interpret the very large data volumes from the upcoming Square Kilometer Array and pathfinder telescopes.
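
    The kernel of any such decomposition is a multi-Gaussian least-squares fit once initial guesses are available; a stripped-down sketch in which AGD's derivative-spectroscopy/machine-learning guess stage is replaced by hand-picked initial values:

        # Fit a sum of Gaussians to a synthetic spectrum with scipy.
        import numpy as np
        from scipy.optimize import curve_fit

        def multi_gauss(x, *p):              # p = [amp1, cen1, wid1, amp2, ...]
            y = np.zeros_like(x)
            for a, c, w in zip(p[0::3], p[1::3], p[2::3]):
                y += a * np.exp(-0.5 * ((x - c) / w) ** 2)
            return y

        x = np.linspace(-50, 50, 500)
        rng = np.random.default_rng(5)
        truth = [1.0, -10.0, 4.0, 0.6, 8.0, 2.5]
        spectrum = multi_gauss(x, *truth) + 0.02 * rng.standard_normal(x.size)

        guess = [0.8, -12, 5, 0.5, 10, 3]    # would come from AGD's guess stage
        popt, _ = curve_fit(multi_gauss, x, spectrum, p0=guess)
        print(np.round(popt, 2))             # close to `truth`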

  4. Numerical simulation for solution of space-time fractional telegraphs equations with local fractional derivatives via HAFSTM

    NASA Astrophysics Data System (ADS)

    Pandey, Rishi Kumar; Mishra, Hradyesh Kumar

    2017-11-01

    In this paper, a semi-analytic numerical technique for the solution of the time-space fractional telegraph equation is applied. The technique is based on coupling the homotopy analysis method with the Sumudu transform. It shows a clear advantage over mesh methods such as the finite difference method, and also over polynomial methods similar to the perturbation and Adomian decomposition methods. It easily transforms the complex fractional-order derivatives into the simple time domain, where the results can be interpreted in the same sense.

  5. Application of the enhanced homotopy perturbation method to solve the fractional-order Bagley-Torvik differential equation

    NASA Astrophysics Data System (ADS)

    Zolfaghari, M.; Ghaderi, R.; Sheikhol Eslami, A.; Ranjbar, A.; Hosseinnia, S. H.; Momani, S.; Sadati, J.

    2009-10-01

    The enhanced homotopy perturbation method (EHPM) is applied for finding improved approximate solutions of the well-known Bagley-Torvik equation for three different cases. The main characteristic of the EHPM is using a stabilized linear part, which guarantees the stability and convergence of the overall solution. The results are finally compared with the Adams-Bashforth-Moulton numerical method, the Adomian decomposition method (ADM) and the fractional differential transform method (FDTM) to verify the performance of the EHPM.

  6. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper forecasts the daily closing prices of stock markets. We propose a two-stage technique that combines empirical mode decomposition (EMD) with the nonparametric method of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting stock closing prices.

  7. Lung imaging in rodents using dual energy micro-CT

    NASA Astrophysics Data System (ADS)

    Badea, C. T.; Guo, X.; Clark, D.; Johnston, S. M.; Marshall, C.; Piantadosi, C.

    2012-03-01

    Dual energy CT imaging is expected to play a major role in the diagnostic arena, as it provides material decomposition on an elemental basis. The purpose of this work is to investigate the use of dual energy micro-CT for estimating the vascular, tissue, and air fractions in rodent lungs using a post-reconstruction three-material decomposition method. We have tested our method using both simulations and experiments. Using simulations, we estimated the accuracy limits of the decomposition for realistic micro-CT noise levels. Next, we performed ex vivo lung imaging experiments in which intact lungs were carefully removed from the thorax, injected with an iodine-based contrast agent, and inflated with air at different volume levels. Finally, we performed in vivo imaging studies in C57BL/6 mice (n=5) using fast prospective respiratory gating at end-inspiration and end-expiration for three different levels of positive end-expiratory pressure (PEEP). Prior to imaging, mice were injected with a liposomal blood pool contrast agent. The mean accuracy values were 95.5% for air, 96% for blood, and 92.4% for tissue; the absolute accuracy in determining all material fractions was 94.6%. The minimum difference that we could detect in material fractions was 15%. As expected, an increase in PEEP levels in the living mouse resulted in statistically significant increases in air fractions at end-expiration, but no significant changes at end-inspiration. Our method has applicability in preclinical pulmonary studies, where various physiological changes can occur as a result of genetic changes, lung disease, or drug effects.
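
    Per voxel, post-reconstruction three-material decomposition reduces to a small linear system: two energy measurements plus the constraint that the fractions sum to one. A generic sketch with made-up attenuation values (not the calibrated values of the study):

        # Three-material decomposition of one dual-energy voxel.
        import numpy as np

        # Columns: air, iodine-enhanced blood, soft tissue (values assumed).
        mu_low = np.array([-1000.0, 300.0, 50.0])   # HU at low energy
        mu_high = np.array([-1000.0, 150.0, 55.0])  # HU at high energy

        A = np.vstack([mu_low, mu_high, np.ones(3)])
        voxel = np.array([-300.0, -320.0, 1.0])     # measured HU pair + unity

        fractions = np.linalg.solve(A, voxel)
        print(dict(zip(["air", "blood", "tissue"], np.round(fractions, 3))))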

  8. Shrub encroachment in Arctic tundra: Betula nana effects on above- and belowground litter decomposition.

    PubMed

    McLaren, Jennie R; Buckeridge, Kate M; van de Weg, Martine J; Shaver, Gaius R; Schimel, Joshua P; Gough, Laura

    2017-05-01

    Rapid arctic vegetation change as a result of global warming includes an increase in the cover and biomass of deciduous shrubs. Increases in shrub abundance will result in a proportional increase of shrub litter in the litter community, potentially affecting carbon turnover rates in arctic ecosystems. We investigated the effects of leaf and root litter of a deciduous shrub, Betula nana, on decomposition, by examining species-specific decomposition patterns, as well as effects of Betula litter on the decomposition of other species. We conducted a 2-yr decomposition experiment in moist acidic tundra in northern Alaska, where we decomposed three tundra species (Vaccinium vitis-idaea, Rhododendron palustre, and Eriophorum vaginatum) alone and in combination with Betula litter. Decomposition patterns for leaf and root litter were determined using three different measures of decomposition (mass loss, respiration, extracellular enzyme activity). We report faster decomposition of Betula leaf litter compared to other species, with support for species differences coming from all three measures of decomposition. Mixing effects were less consistent among the measures, with negative mixing effects shown only for mass loss. In contrast, there were few species differences or mixing effects for root decomposition. Overall, we attribute longer-term litter mass loss patterns to patterns created by early decomposition processes in the first winter. We note numerous differences for species patterns between leaf and root decomposition, indicating that conclusions from leaf litter experiments should not be extrapolated to below-ground decomposition. The high decomposition rates of Betula leaf litter aboveground, and relatively similar decomposition rates of multiple species below, suggest a potential for increases in turnover in the fast-decomposing carbon pool of leaves and fine roots as the dominance of deciduous shrubs in the Arctic increases, but this outcome may be tempered by negative litter mixing effects during the early stages of encroachment. © 2017 by the Ecological Society of America.

  9. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    PubMed Central

    Le, Huy Q.; Molloi, Sabee

    2011-01-01

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer–Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues. Quantification with this technique was accurate with errors of 9.83% and 6.61% for HA and iodine, respectively. Calibration at one point (one breast size) showed increased errors as the mismatch in breast diameters between calibration and measurement increased. A four-point calibration successfully decomposed breast diameter spanning the entire range from 8 to 20 cm. For a 14 cm breast, errors were reduced from 5.44% to 1.75% and from 6.17% to 3.27% with the multipoint calibration for HA and iodine, respectively. Conclusions: The results of the simulation study showed that a CT system based on CZT detectors in conjunction with least squares minimization technique can be used to decompose four materials. The calibrated least squares parameter estimation decomposition technique performed the best, separating and accurately quantifying the concentrations of hydroxyapatite and iodine. PMID:21361193
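
    The calibrated technique amounts to solving, per voxel, a small least-squares problem against calibration-derived basis responses; a generic sketch using nonnegative least squares (the basis numbers are invented, and the nonnegativity constraint is an assumption rather than a detail stated in the abstract):

        # Calibrated material decomposition sketch: express a five-energy-bin
        # voxel response as a nonnegative combination of basis responses.
        import numpy as np
        from scipy.optimize import nnls

        # Rows: 5 energy bins; columns: HA, iodine, glandular, adipose.
        basis = np.array([
            [120.0, 95.0, 22.0, 18.0],
            [100.0, 80.0, 20.0, 16.0],
            [ 85.0, 90.0, 19.0, 15.0],
            [ 70.0, 60.0, 18.0, 14.0],
            [ 55.0, 40.0, 17.0, 13.0],
        ])

        true_mix = np.array([0.3, 0.1, 0.4, 0.2])
        measured = basis @ true_mix
        measured += 0.5 * np.random.default_rng(6).standard_normal(5)

        coeffs, _ = nnls(basis, measured)
        print(np.round(coeffs, 3))           # close to true_mix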

  10. Interobserver Reliability of the Total Body Score System for Quantifying Human Decomposition.

    PubMed

    Dabbs, Gretchen R; Connor, Melissa; Bytheway, Joan A

    2016-03-01

    Several authors have tested the accuracy of the Total Body Score (TBS) method for quantifying decomposition, but none have examined the reliability of the method as a scoring system by testing interobserver error rates. Sixteen participants used the TBS system to score 59 observation packets including photographs and written descriptions of 13 human cadavers in different stages of decomposition (postmortem interval: 2-186 days). Data analysis used a two-way random model intraclass correlation in SPSS (v. 17.0). The TBS method showed "almost perfect" agreement between observers, with average absolute correlation coefficients of 0.990 and average consistency correlation coefficients of 0.991. While the TBS method may have sources of error, scoring reliability is not one of them. Individual component scores were examined, and the influences of education and experience levels were investigated. Overall, the trunk component scores were the least concordant. Suggestions are made to improve the reliability of the TBS method. © 2016 American Academy of Forensic Sciences.

  11. Anodic electrochemical performances of MgCo₂O₄ synthesized by oxalate decomposition method and electrospinning technique for Li-ion battery application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darbar, Devendrasinh; Department of Mechanical Engineering, National University of Singapore, 117576; Department of Physics, National University of Singapore, 117542

    2016-01-15

    Highlights: • MgCo₂O₄ was prepared by an oxalate decomposition method and an electrospinning technique. • After 50 cycles, electrospun MgCo₂O₄ shows a reversible capacity of 795 mAh g⁻¹, versus 227 mAh g⁻¹ for the oxalate-decomposition MgCo₂O₄. • Electrospun MgCo₂O₄ shows good cycling stability and electrochemical performance. - Abstract: Magnesium cobalt oxide, MgCo₂O₄, was synthesized by an oxalate decomposition method and an electrospinning technique. The electrochemical performance, structure, phase formation, and morphology of MgCo₂O₄ synthesized by the two methods are compared. Scanning electron microscope (SEM) studies show spherical and fiber-type morphology, respectively, for the oxalate decomposition and electrospinning methods. The electrospun nanofibers of MgCo₂O₄ calcined at 650 °C showed a very good reversible capacity of 795 mAh g⁻¹ after 50 cycles, compared to the bulk material capacity of 227 mAh g⁻¹ at a current rate of 60 mA g⁻¹. MgCo₂O₄ nanofiber showed a reversible capacity of 411 mAh g⁻¹ (at cycle) at a current density of 240 mA g⁻¹. The improved performance was attributed to the improved conductivity of MgO, which may act as a buffer layer leading to improved cycling stability. Cyclic voltammetry studies at a scan rate of 0.058 mV/s show main cathodic peaks at around 1.0 V and anodic peaks at 2.1 V vs. Li.

  12. Tensor-based Dictionary Learning for Spectral CT Reconstruction

    PubMed Central

    Zhang, Yanbo; Wang, Ge

    2016-01-01

    Spectral computed tomography (CT) produces an energy-discriminative attenuation map of an object, extending a conventional image volume with a spectral dimension. In spectral CT, an image can be sparsely represented in each of multiple energy channels, and the channels are highly correlated with one another. Based on these characteristics, we propose a tensor-based dictionary learning method for spectral CT reconstruction. In our method, tensor patches are extracted from an image tensor, which is reconstructed using filtered backprojection (FBP), to form a training dataset. With the Candecomp/Parafac decomposition, a tensor-based dictionary is trained, in which each atom is a rank-one tensor. The trained dictionary is then used to sparsely represent image tensor patches during an iterative reconstruction process, and an alternating minimization scheme is adapted for optimization. The effectiveness of the proposed method is validated with both numerically simulated and real preclinical mouse datasets. The results demonstrate that the proposed tensor-based method generally produces superior image quality and leads to more accurate material decomposition than currently popular methods. PMID:27541628

  13. Molecular markers indicate different dynamics of leaves and roots during litter decomposition

    NASA Astrophysics Data System (ADS)

    Altmann, Jens; Jansen, Boris; Palviainen, Marjo; Kalbitz, Karsten

    2010-05-01

    Up to now, the sources contributing to organic carbon in forest soils, especially the relative contributions of leaves and roots, are only poorly understood. Studies over the last two decades have shown that methods such as pyrolysis and CuO oxidation are suitable tools for tracing the main contributors of organic matter in water, sediments, and soils. Lignin-derived monomers, extractable lipids, and cutin- and suberin-derived compounds have frequently been used to identify plant material. However, the selection of suitable biomarkers requires knowledge of the decomposition patterns and stability of these compounds, which are only poorly understood. In this study we focused on the following questions: (I) Which compounds are characteristic enough to identify particular plant parts and plant species? (II) How stable are these compounds during the first 3 years of litter decomposition? We studied the chemical composition of samples from a 3-year litterbag decomposition experiment, conducted in Finland, with roots and leaves of spruce, pine, and birch. In addition to determining mass loss and carbon and nitrogen contents, free lipids were extracted, and non-extractable lipids were obtained by alkaline hydrolysis. The extracts were then analyzed by GC-MS, and the insoluble residues by Curie-point pyrolysis GC-MS. Beyond identifying and quantifying a variety of compounds and compound ratios, we used statistical classification methods to gain deeper insight into the patterns of leaf- and root-derived biomarkers during litter decomposition. Mass loss differed greatly between the litter species, and we always observed larger mass loss for leaf-derived litter than for root-derived litter. This trend was also evident in the molecular analyses. The increase in the ratio of vanillic acid to vanillin was correlated with the mass loss of the samples over time, showing that the degree of decomposition of plant material was linked to the degree of lignin degradation. Preliminary results show that we were able to distinguish the different species and plant parts using various approaches, e.g., the abundance and patterns of different substances and different compound ratios. The polyesters suberin and cutin were particularly useful for differentiating between roots and leaves. We conclude that knowledge of the decomposition patterns of molecular markers will greatly improve the power to identify organic matter sources in soils.

  14. Initial mechanisms for the unimolecular decomposition of electronically excited bisfuroxan based energetic materials.

    PubMed

    Yuan, Bing; Bernstein, Elliot R

    2017-01-07

    Unimolecular decomposition of the energetic molecules 3,3'-diamino-4,4'-bisfuroxan (labeled A) and 4,4'-diamino-3,3'-bisfuroxan (labeled B) has been explored via 226/236 nm single-photon laser excitation/decomposition. Subsequent to UV excitation at nanosecond excitation energies (5.0-5.5 eV), these two energetic molecules create NO as an initial decomposition product with a warm vibrational temperature (1170 ± 50 K for A, 1400 ± 50 K for B) and a cold rotational temperature (<55 K). Initial decomposition mechanisms for these two electronically excited, isolated molecules are explored at the complete active space self-consistent field (CASSCF(12,12)/6-31G(d)) level, with and without MP2 correction. Potential energy surface calculations illustrate that conical intersections play an essential role in the calculated decomposition mechanisms. Based on experimental observations and theoretical calculations, the NO product is released through opening of the furoxan ring: ring opening can occur on either the S1 excited or S0 ground electronic state. The reaction path with the lowest energy barrier is the one in which the furoxan ring opens on the S1 state via breaking of the N1-O1 bond. The molecule subsequently moves to the ground S0 state through related ring-opening conical intersections, and an NO product is formed on the ground-state surface with little rotational excitation at the final NO dissociation step. In the ground-state ring-opening decomposition mechanism, the N-O and C-N bonds break together to generate dissociated NO. With the MP2 correction to the CASSCF(12,12) surface, the potential energies of molecules with dissociated NO product range from 2.04 to 3.14 eV, close to the theoretical results from density functional theory (B3LYP) and MP2 methods. The CASMP2(12,12)-corrected approach is essential for obtaining a reasonable potential energy surface that corresponds to the observed decomposition behavior of these molecules. Apparently, highly excited states are essential for an accurate representation of the kinetics and dynamics of excited-state decomposition of both of these bisfuroxan energetic molecules. The experimental vibrational temperatures of the NO products of A and B are about 800-1000 K lower than those of previously studied energetic molecules with NO as a decomposition product.

  15. Multidisciplinary optimization for engineering systems - Achievements and potential

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, Jaroslaw

    1989-01-01

    The currently common sequential design process for engineering systems is likely to lead to suboptimal designs. Recently developed decomposition methods offer an alternative for coming closer to optimum by breaking the large task of system optimization into smaller, concurrently executed and, yet, coupled tasks, identified with engineering disciplines or subsystems. The hierarchic and non-hierarchic decompositions are discussed and illustrated by examples. An organization of a design process centered on the non-hierarchic decomposition is proposed.

  16. A unification of mediation and interaction: a four-way decomposition

    PubMed Central

    VanderWeele, Tyler J.

    2014-01-01

    It is shown that the overall effect of an exposure on an outcome, in the presence of a mediator with which the exposure may interact, can be decomposed into four components: (i) the effect of the exposure in the absence of the mediator, (ii) the interactive effect when the mediator is left to what it would be in the absence of exposure, (iii) a mediated interaction, and (iv) a pure mediated effect. These four components, respectively, correspond to the portion of the effect that is due to neither mediation nor interaction, to just interaction (but not mediation), to both mediation and interaction, and to just mediation (but not interaction). This four-way decomposition unites methods that attribute effects to interactions and methods that assess mediation. Certain combinations of these four components correspond to measures for mediation, while other combinations correspond to measures of interaction previously proposed in the literature. Prior decompositions in the literature are in essence special cases of this four-way decomposition. The four-way decomposition can be carried out using standard statistical models, and software is provided to estimate each of the four components. The four-way decomposition provides maximum insight into how much of an effect is mediated, how much is due to interaction, how much is due to both mediation and interaction together, and how much is due to neither. PMID:25000145
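
    For a binary exposure A and binary mediator M, with potential outcomes Y(a, m) and counterfactual mediator values M(a), the four components can be written compactly (a standard rendering of the decomposition described above; the notation is supplied here for illustration):

        \begin{aligned}
        Y(1,M(1)) - Y(0,M(0)) ={} & \;[\,Y(1,0) - Y(0,0)\,] && \text{(neither: CDE)}\\
        & + [\,Y(1,1) - Y(1,0) - Y(0,1) + Y(0,0)\,]\, M(0) && \text{(interaction only: INT}_{\mathrm{ref}})\\
        & + [\,Y(1,1) - Y(1,0) - Y(0,1) + Y(0,0)\,]\,[\,M(1) - M(0)\,] && \text{(both: INT}_{\mathrm{med}})\\
        & + [\,Y(0,1) - Y(0,0)\,]\,[\,M(1) - M(0)\,] && \text{(mediation only: PIE)}
        \end{aligned}

    Summing the four terms and using Y(a, M(a)) = Y(a,0) + [Y(a,1) - Y(a,0)] M(a) recovers the total effect exactly.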

  17. Circular Mixture Modeling of Color Distribution for Blind Stain Separation in Pathology Images.

    PubMed

    Li, Xingyu; Plataniotis, Konstantinos N

    2017-01-01

    In digital pathology, to address color variation and histological component colocalization in pathology images, stain decomposition is usually performed before spectral normalization and tissue component segmentation. This paper examines the problem of stain decomposition, which is naturally a nonnegative matrix factorization (NMF) problem in algebra, and introduces a systematic and analytical solution consisting of a circular color analysis module and an NMF-based computation module. Unlike the paradigm of existing stain decomposition algorithms, where stain proportions are computed from estimated stain spectra directly via a matrix inverse, the introduced solution estimates stain spectra and stain depths individually via probabilistic reasoning. Since the proposed method pays extra attention to achromatic pixels in color analysis and to stain co-occurrence in pixel clustering, it achieves consistent and reliable stain decomposition with minimal decomposition residue. In particular, aware of the periodic, angular nature of hue, we propose using a circular von Mises mixture model to analyze the hue distribution, and we provide a complete color-based pixel soft-clustering solution to address the color mixing introduced by stain overlap. This innovation, combined with saturation-weighted computation, makes our study effective for weak stains and broad-spectrum stains. Extensive experimentation on multiple public pathology datasets suggests that our approach outperforms state-of-the-art blind stain separation methods in terms of decomposition effectiveness.
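
    At its algebraic core, stain separation factors an optical-density matrix into nonnegative stain spectra and stain depths; a bare-bones sketch with scikit-learn's generic NMF on synthetic data (the paper's circular von Mises color analysis and probabilistic clustering are not reproduced):

        # Bare-bones stain separation as NMF: OD = W @ H, with W the stain
        # spectra (3 x n_stains) and H the per-pixel stain depths.
        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(7)
        hematoxylin = np.array([0.65, 0.70, 0.29])    # reference absorbances
        eosin = np.array([0.07, 0.99, 0.11])
        depths = rng.gamma(2.0, 0.3, size=(2, 5000))  # synthetic stain depths
        od = np.vstack([hematoxylin, eosin]).T @ depths

        model = NMF(n_components=2, init="nndsvd", max_iter=500)
        W = model.fit_transform(od)                   # estimated spectra
        H = model.components_                         # estimated depths
        print(np.round(W / np.linalg.norm(W, axis=0), 2))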

  18. Orlistat interaction with sibutramine and carnitine. A physicochemical and theoretical study

    NASA Astrophysics Data System (ADS)

    Nicolás-Vázquez, Inés; Hinojosa Torres, Jaime; Cruz Borbolla, Julián; Miranda Ruvalcaba, René; Aceves-Hernández, Juan Manuel

    2014-03-01

    Chemical degradation of orlistat (ORT) after melting, and the reaction of its decomposition byproducts with sibutramine (SIB), was studied. Interactions between the active pharmaceutical ingredients were investigated using thermal analysis (TA) methods and other experimental techniques such as PXRD, IR, and UV-vis spectroscopy. It was found that orlistat melts with decomposition, and the byproducts quickly attack the sibutramine molecule and then also react with carnitine (CRN) when the three active pharmaceutical ingredients (APIs) are mixed. However, ORT byproducts do not react when ORT is mixed with carnitine alone. Compounds containing chlorine atoms were found to react easily with orlistat when the temperature increases up to its melting point. Some reaction mechanisms of orlistat decomposition are proposed; the fragments in the mechanisms were found in the corresponding mass spectra. The results indicate that special studies should be carried out at the formulation stage before the final composition of a poly-pill can be established. Similar results are commonly found for compounds prone to react in the presence of water, light, and/or temperature. To explain the reactivity of orlistat with sibutramine and carnitine, theoretical calculations were carried out; the results are in agreement with the experimental findings.

  19. Application of Mortar Coupling in Multiscale Modelling of Coupled Flow, Transport, and Biofilm Growth in Porous Media

    NASA Astrophysics Data System (ADS)

    Laleian, A.; Valocchi, A. J.; Werth, C. J.

    2017-12-01

    Multiscale models of reactive transport in porous media are capable of capturing complex pore-scale processes while leveraging the efficiency of continuum-scale models. In particular, porosity changes caused by biofilm development yield complex feedbacks between transport and reaction that are difficult to quantify at the continuum scale. Pore-scale models, needed to accurately resolve these dynamics, are often impractical for applications due to their computational cost. To address this challenge, we are developing a multiscale model of biofilm growth in which non-overlapping regions at pore and continuum spatial scales are coupled with a mortar method providing continuity at interfaces. We explore two decompositions of coupled pore-scale and continuum-scale regions to study biofilm growth in a transverse mixing zone. In the first decomposition, all reaction is confined to a pore-scale region spanning the length of the transverse mixing zone. Only solute transport occurs in the surrounding continuum-scale regions. Relative to a fully pore-scale result, we find that the multiscale model with this decomposition has a reduced run time and consistent results in terms of biofilm growth and solute utilization. In the second decomposition, reaction occurs in both an up-gradient pore-scale region and a down-gradient continuum-scale region. To quantify clogging, the continuum-scale model implements empirical relations between porosity and continuum-scale parameters, such as permeability and the transverse dispersion coefficient. Solutes are sufficiently mixed at the end of the pore-scale region, such that the initial reaction rate is accurately computed using averaged concentrations in the continuum-scale region. Relative to a fully pore-scale result, we find that the accuracy of biomass growth in the multiscale model with this decomposition improves as the interface between pore-scale and continuum-scale regions moves down-gradient, where transverse mixing is more fully developed. This decomposition also poses additional challenges with respect to mortar coupling; we explore these challenges and potential solutions. While recent work has demonstrated growing interest in multiscale models, further development is needed for their application to field-scale subsurface contaminant transport and remediation.
