Sample records for time-space decomposition method

  1. Rapid Transient Pressure Field Computations in the Nearfield of Circular Transducers using Frequency Domain Time-Space Decomposition

    PubMed Central

    Alles, E. J.; Zhu, Y.; van Dongen, K. W. A.; McGough, R. J.

    2013-01-01

    The fast nearfield method, when combined with time-space decomposition, is a rapid and accurate approach for calculating transient nearfield pressures generated by ultrasound transducers. However, the standard time-space decomposition approach is only applicable to certain analytical representations of the temporal transducer surface velocity that, when applied to the fast nearfield method, are expressed as a finite sum of products of separate temporal and spatial terms. To extend time-space decomposition such that accelerated transient field simulations are enabled in the nearfield for an arbitrary transducer surface velocity, a new transient simulation method, frequency domain time-space decomposition (FDTSD), is derived. With this method, the temporal transducer surface velocity is transformed into the frequency domain, and then each complex-valued term is processed separately. Further improvements are achieved by spectral clipping, which reduces the number of terms and the computation time. Trade-offs between speed and accuracy are established for FDTSD calculations, and pressure fields obtained with the FDTSD method for a circular transducer are compared to those obtained with Field II and the impulse response method. The FDTSD approach, when combined with the fast nearfield method and spectral clipping, consistently achieves smaller errors in less time and requires less memory than Field II or the impulse response method. PMID:23160476
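
    As a concrete illustration of the decomposition into separate temporal and spatial terms, here is a minimal numpy sketch of the FDTSD idea: FFT the surface velocity, clip small spectral terms, and superpose single-frequency spatial fields. The callable `single_frequency_field` is a hypothetical stand-in for the fast nearfield method evaluated at one frequency; it is not the paper's kernel.

```python
import numpy as np

def fdtsd_pressure(v_t, dt, field_points, single_frequency_field, clip_ratio=1e-3):
    """Frequency domain time-space decomposition (FDTSD), sketched.

    v_t: sampled transducer surface velocity (1D array).
    single_frequency_field: callable(f_hz, points) -> complex field values,
        standing in for a per-frequency fast nearfield method evaluation.
    clip_ratio: spectral clipping threshold relative to the peak magnitude.
    """
    V = np.fft.fft(v_t)                                # temporal terms
    freqs = np.fft.fftfreq(len(v_t), dt)
    keep = np.abs(V) > clip_ratio * np.abs(V).max()    # spectral clipping

    t = np.arange(len(v_t)) * dt
    p = np.zeros((len(v_t), len(field_points)), dtype=complex)
    for Vk, fk in zip(V[keep], freqs[keep]):
        H = single_frequency_field(fk, field_points)   # spatial term
        # each retained term is a product of a temporal and a spatial factor
        p += np.outer(Vk * np.exp(2j * np.pi * fk * t), H) / len(v_t)
    return p.real
```

    Fewer retained terms means fewer spatial-field evaluations, which is where the speed/accuracy trade-off of spectral clipping comes from.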

  2. Unconditionally energy stable time stepping scheme for Cahn–Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of the homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. -- Highlights: •Extension of Eyre's convex–concave splitting scheme to multiphase systems. •Efficient solution of spinodal decomposition in multi-component systems. •Efficient solution of the least perimeter periodic space partitioning problem. •Development of a penalization strategy to avoid trivial solutions. •Presentation of the MATLAB implementation of the introduced algorithm.
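
    For the binary (two-component) special case, a minimal sketch of an Eyre-type, linearly stabilized convex-concave splitting step for the 1D Cahn-Hilliard equation on a periodic domain is shown below; the multi-component Cahn-Morral generalization and the Schur complement machinery of the paper are omitted, and the stabilization constant S = 2 is an illustrative choice.

```python
import numpy as np

def cahn_hilliard_step(u, dt, eps=0.05, S=2.0):
    """One linearly stabilized Eyre-type splitting step for
    u_t = (u**3 - u - eps**2 * u_xx)_xx on a periodic unit interval.
    The concave part is treated explicitly and the stabilized linear part
    implicitly, giving energy stability for large time steps when S is
    chosen large enough."""
    n = u.size
    k2 = (2 * np.pi * np.fft.fftfreq(n, d=1.0 / n)) ** 2
    N_hat = np.fft.fft(u**3 - u)          # explicit nonlinear (concave) part
    u_hat = np.fft.fft(u)
    u_new = (u_hat * (1 + dt * S * k2) - dt * k2 * N_hat) \
            / (1 + dt * S * k2 + dt * eps**2 * k2**2)
    return np.fft.ifft(u_new).real

# spinodal decomposition from a small perturbation of the mixed state
rng = np.random.default_rng(0)
u = 0.05 * rng.standard_normal(256)
for _ in range(2000):
    u = cahn_hilliard_step(u, dt=0.1)
```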

  3. An investigation of the use of temporal decomposition in space mission scheduling

    NASA Technical Reports Server (NTRS)

    Bullington, Stanley E.; Narayanan, Venkat

    1994-01-01

    This research involves an examination of techniques for solving scheduling problems in long-duration space missions. The mission timeline is broken up into several time segments, which are then scheduled incrementally. Three methods are presented for identifying the activities that are to be attempted within these segments. The first method is a mathematical model, which is presented primarily to illustrate the structure of the temporal decomposition problem. Since the mathematical model is bound to be computationally prohibitive for realistic problems, two heuristic assignment procedures are also presented. The first heuristic method is based on dispatching rules for activity selection, and the second heuristic assigns performances of a model evenly over timeline segments. These heuristics are tested using a sample Space Station mission and a Spacelab mission. The results are compared with those obtained by scheduling the missions without any problem decomposition. The applicability of this approach to large-scale mission scheduling problems is also discussed.

  4. Domain decomposition: A bridge between nature and parallel computers

    NASA Technical Reports Server (NTRS)

    Keyes, David E.

    1992-01-01

    Domain decomposition is an intuitive organizing principle for a partial differential equation (PDE) computation, both physically and architecturally. However, its significance extends beyond the readily apparent issues of geometry and discretization, on one hand, and of modular software and distributed hardware, on the other. Engineering and computer science aspects are bridged by an old but recently enriched mathematical theory that offers the subject not only unity, but also tools for analysis and generalization. Domain decomposition induces function-space and operator decompositions with valuable properties. Function-space bases and operator splittings that are not derived from domain decompositions generally lack one or more of these properties. The evolution of domain decomposition methods for elliptically dominated problems has linked two major algorithmic developments of the last 15 years: multilevel and Krylov methods. Domain decomposition methods may be considered descendants of both classes with an inheritance from each: they are nearly optimal and at the same time efficiently parallelizable. Many computationally driven application areas are ripe for these developments. A progression is made from a mathematically informal motivation for domain decomposition methods to a specific focus on fluid dynamics applications. To be introductory rather than comprehensive, simple examples are provided while convergence proofs and algorithmic details are left to the original references; however, an attempt is made to convey their most salient features, especially where this leads to algorithmic insight.

  5. Time Series Decomposition into Oscillation Components and Phase Estimation.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-02-01

    Many time series are naturally considered as a superposition of several oscillation components. For example, electroencephalogram (EEG) time series include oscillation components such as alpha, beta, and gamma. We propose a method for decomposing time series into such oscillation components using state-space models. Based on the concept of random frequency modulation, Gaussian linear state-space models for oscillation components are developed. In this model, the frequency of an oscillator fluctuates by noise. Time series decomposition is accomplished with this model in the same way as in the Bayesian seasonal adjustment method. Since the model parameters are estimated from data by the empirical Bayes method, the amplitudes and the frequencies of oscillation components are determined in a data-driven manner. Also, the appropriate number of oscillation components is determined with the Akaike information criterion (AIC). In this way, the proposed method provides a natural decomposition of the given time series into oscillation components. In neuroscience, the phase of neural time series plays an important role in neural information processing. The proposed method can be used to estimate the phase of each oscillation component and has several advantages over a conventional method based on the Hilbert transform. Thus, the proposed method enables an investigation of the phase dynamics of time series. Numerical results show that the proposed method succeeds in extracting intermittent oscillations like ripples and detecting the phase reset phenomena. We apply the proposed method to real data from various fields such as astronomy, ecology, tidology, and neuroscience.
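
    A minimal sketch of the oscillator model for a single component follows: the latent two-dimensional state rotates by the mean angular increment each step and is perturbed by noise (random frequency modulation), and a Kalman filter recovers the state, whose angle gives the phase. The parameter values are illustrative; the paper fits them by the empirical Bayes method and selects the number of components by AIC.

```python
import numpy as np

def oscillator_phase(y, freq, dt, rho=0.99, sigma_w=0.1, sigma_v=0.1):
    """Kalman-filter phase estimate for one stochastic oscillation component.

    Transition: x_t = rho * R(2*pi*freq*dt) x_{t-1} + w_t; the first state
    coordinate is observed. rho, sigma_w, sigma_v are assumed values."""
    th = 2 * np.pi * freq * dt
    F = rho * np.array([[np.cos(th), -np.sin(th)],
                        [np.sin(th),  np.cos(th)]])   # noisy rotation
    Q = sigma_w**2 * np.eye(2)
    H = np.array([[1.0, 0.0]])
    R = np.array([[sigma_v**2]])

    x, P = np.zeros(2), np.eye(2)
    phase = np.empty(len(y))
    for t, yt in enumerate(y):
        x, P = F @ x, F @ P @ F.T + Q                 # predict
        S = H @ P @ H.T + R
        K = P @ H.T / S
        x = x + (K * (yt - H @ x)).ravel()            # update
        P = (np.eye(2) - K @ H) @ P
        phase[t] = np.arctan2(x[1], x[0])             # instantaneous phase
    return phase
```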

  6. Numerical simulation for solution of space-time fractional telegraphs equations with local fractional derivatives via HAFSTM

    NASA Astrophysics Data System (ADS)

    Pandey, Rishi Kumar; Mishra, Hradyesh Kumar

    2017-11-01

    In this paper, a semi-analytic numerical technique for the solution of the time-space fractional telegraph equation is applied. This numerical technique is based on the coupling of the homotopy analysis method and the Sumudu transform. It shows a clear advantage over mesh-based methods such as the finite difference method, and also over polynomial methods such as the perturbation and Adomian decomposition methods. It easily transforms the complex fractional-order derivatives into the simple time domain and interprets the results accordingly.

  7. Parallel processing methods for space based power systems

    NASA Technical Reports Server (NTRS)

    Berry, F. C.

    1993-01-01

    This report presents a method for doing load-flow analysis of a power system by using a decomposition approach. The power system for the Space Shuttle is used as a basis to build a model for the load-flow analysis. To test the decomposition method for doing load-flow analysis, simulations were performed on power systems of 16, 25, 34, 43, 52, 61, 70, and 79 nodes. Each of the power systems was divided into subsystems and simulated under steady-state conditions. The results from these tests have been found to be as accurate as tests performed using a standard serial simulator. The division of the power systems into different subsystems was done by assigning a processor to each area. There were 13 transputers available; therefore, up to 13 different subsystems could be simulated at the same time. This report presents preliminary results for a load-flow analysis using a decomposition principle. The report shows that the decomposition algorithm for load-flow analysis is well suited for parallel processing and provides increases in the speed of execution.

  8. Error analysis of multipoint flux domain decomposition methods for evolutionary diffusion problems

    NASA Astrophysics Data System (ADS)

    Arrarás, A.; Portero, L.; Yotov, I.

    2014-01-01

    We study space and time discretizations for mixed formulations of parabolic problems. The spatial approximation is based on the multipoint flux mixed finite element method, which reduces to an efficient cell-centered pressure system on general grids, including triangles, quadrilaterals, tetrahedra, and hexahedra. The time integration is performed by using a domain decomposition time-splitting technique combined with multiterm fractional step diagonally implicit Runge-Kutta methods. The resulting scheme is unconditionally stable and computationally efficient, as it reduces the global system to a collection of uncoupled subdomain problems that can be solved in parallel without the need for Schwarz-type iteration. Convergence analysis for both the semidiscrete and fully discrete schemes is presented.

  9. Pi2 detection using Empirical Mode Decomposition (EMD)

    NASA Astrophysics Data System (ADS)

    Mieth, Johannes Z. D.; Frühauff, Dennis; Glassmeier, Karl-Heinz

    2017-04-01

    Empirical Mode Decomposition has been used as an alternative method to wavelet transformation to identify onset times of Pi2 pulsations in data sets of the Scandinavian Magnetometer Array (SMA). Pi2 pulsations are magnetohydrodynamic waves occurring during magnetospheric substorms. Pi2 are almost always observed at substorm onset at mid to low latitudes on Earth's nightside. They are fed by magnetic energy release caused by dipolarization processes. Their periods lie between 40 and 150 seconds. Usually, Pi2 are detected using wavelet transformation. Here, Empirical Mode Decomposition (EMD) is presented as an alternative approach to the traditional procedure. EMD is a relatively young signal decomposition method designed for nonlinear and non-stationary time series. It provides an adaptive, data-driven, and complete decomposition of time series into slow and fast oscillations. An optimized version using Monte-Carlo-type noise assistance is used here. By displaying the results in a time-frequency space, a characteristic frequency modulation is observed. This frequency modulation can be correlated with the onset of Pi2 pulsations. A basic algorithm to find the onset is presented. Finally, the results are compared to classical wavelet-based analysis. The use of different SMA stations furthermore allows the spatial analysis of Pi2 onset times. EMD mostly finds application in the fields of engineering and medicine. This work demonstrates the applicability of this method to geomagnetic time series.
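
    A sketch of the filtering step with the PyEMD package (an assumed dependency, installable as `EMD-signal`) is given below; selecting IMFs by a crude zero-crossing frequency estimate in the 40-150 s Pi2 band is a simplification of the paper's time-frequency onset analysis.

```python
import numpy as np
from PyEMD import EMD  # assumed dependency: pip install EMD-signal

def pi2_band_imfs(b_field, dt):
    """Decompose a magnetometer time series into intrinsic mode functions
    (IMFs) and keep those whose mean frequency, estimated from zero
    crossings, lies in the Pi2 period band of 40-150 seconds."""
    imfs = EMD().emd(b_field)
    kept = []
    for imf in imfs:
        crossings = np.sum(np.abs(np.diff(np.sign(imf))) > 0)
        f_mean = crossings / (2.0 * len(imf) * dt)   # cycles per second
        if 1.0 / 150.0 <= f_mean <= 1.0 / 40.0:
            kept.append(imf)
    return np.array(kept)
```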

  10. TE/TM decomposition of electromagnetic sources

    NASA Technical Reports Server (NTRS)

    Lindell, Ismo V.

    1988-01-01

    Three methods are given by which bounded EM sources can be decomposed into two parts radiating transverse electric (TE) and transverse magnetic (TM) fields with respect to a given constant direction in space. The theory applies source equivalence and nonradiating source concepts, which lead to decomposition methods based on a recursive formula or two differential equations for the determination of the TE and TM components of the original source. Decompositions for a dipole in terms of point, line, and plane sources are studied in detail. The planar decomposition is seen to match an earlier result given by Clemmow (1963). As an application of the point decomposition method, it is demonstrated that the general exact image expression for the Sommerfeld half-space problem, previously derived through heuristic reasoning, can be obtained more straightforwardly through the present decomposition method.

  11. Turbulent Statistics From Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2013-01-01

    Empirical mode decomposition is an adaptive signal processing method that, when applied to a broadband signal such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.

  12. Turbulent Statistics from Time-Resolved PIV Measurements of a Jet Using Empirical Mode Decomposition

    NASA Technical Reports Server (NTRS)

    Dahl, Milo D.

    2012-01-01

    Empirical mode decomposition is an adaptive signal processing method that, when applied to a broadband signal such as that generated by turbulence, acts as a set of band-pass filters. This process was applied to data from time-resolved, particle image velocimetry measurements of subsonic jets prior to computing the second-order, two-point, space-time correlations from which turbulent phase velocities and length and time scales could be determined. The application of this method to large sets of simultaneous time histories is new. In this initial study, the results are relevant to acoustic analogy source models for jet noise prediction. The high frequency portion of the results could provide the turbulent values for subgrid scale models for noise that is missed in large-eddy simulations. The results are also used to infer that the cross-correlations between different components of the decomposed signals at two points in space, neglected in this initial study, are important.
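
    The correlation step these two records describe can be sketched directly: after EMD band-passes each point's velocity history, the normalized two-point cross-correlation over time lags is computed, and the lag of its peak divided by the probe separation estimates a phase velocity. The sketch below assumes two equally sampled, matching-IMF time histories.

```python
import numpy as np

def space_time_correlation(u1, u2, max_lag):
    """Normalized second-order two-point space-time correlation between
    velocity histories u1(t) and u2(t) at two spatial points: R(lag)
    correlates u1[t] with u2[t + lag]."""
    u1 = u1 - u1.mean()
    u2 = u2 - u2.mean()
    norm = np.sqrt(np.sum(u1**2) * np.sum(u2**2))
    n = len(u1)
    lags = np.arange(-max_lag, max_lag + 1)
    R = np.array([np.sum(u1[max(0, -l):n - max(0, l)]
                         * u2[max(0, l):n - max(0, -l)]) for l in lags])
    return lags, R / norm
```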

  13. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oxberry, Geoffrey M.; Kostova-Vassilevska, Tanya; Arrighi, Bill

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
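
    A short sketch of the snapshot-selection loop is given below; for brevity it recomputes a thin SVD whenever a snapshot is admitted, whereas the paper uses a single-pass incremental SVD to keep memory bounded. `stepper` is a hypothetical full-order model advance supplied by the user.

```python
import numpy as np

def adaptive_snapshots(stepper, u0, n_steps, tol=1e-3, max_rank=50):
    """Keep a new state only when its projection error onto the current
    POD basis exceeds tol, mimicking the error-control idea of adaptive
    time-stepping ODE solvers."""
    snaps = [u0 / np.linalg.norm(u0)]
    basis = np.array(snaps).T                    # columns span the POD space
    u = u0.copy()
    for _ in range(n_steps):
        u = stepper(u)                           # advance the full model
        resid = u - basis @ (basis.T @ u)        # part outside current span
        if np.linalg.norm(resid) > tol * np.linalg.norm(u):
            snaps.append(u.copy())
            U, _, _ = np.linalg.svd(np.array(snaps).T, full_matrices=False)
            basis = U[:, :max_rank]
    return basis
```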

  14. An efficient solution procedure for the thermoelastic analysis of truss space structures

    NASA Technical Reports Server (NTRS)

    Givoli, D.; Rand, O.

    1992-01-01

    A solution procedure is proposed for the thermal and thermoelastic analysis of truss space structures in periodic motion. In this method, the spatial domain is first descretized using a consistent finite element formulation. Then the resulting semi-discrete equations in time are solved analytically by using Fourier decomposition. Geometrical symmetry is taken advantage of completely. An algorithm is presented for the calculation of heat flux distribution. The method is demonstrated via a numerical example of a cylindrically shaped space structure.

  15. Decomposition of the complex system into nonlinear spatio-temporal modes: algorithm and application to climate data mining

    NASA Astrophysics Data System (ADS)

    Feigin, Alexander; Gavrilov, Andrey; Loskutov, Evgeny; Mukhin, Dmitry

    2015-04-01

    Proper decomposition of a complex system into well separated "modes" is a way to reveal and understand the mechanisms governing the system's behaviour, as well as to discover essential feedbacks and nonlinearities. The decomposition is also a natural procedure for constructing models that are adequate and at the same time as simple as possible, both for the corresponding sub-systems and for the system as a whole. In recent works, two new methods for decomposing the Earth's climate system into well separated modes were discussed. The first method [1-3] is based on MSSA (Multichannel Singular Spectral Analysis) [4] for the linear expansion of vector (space-distributed) time series, and makes allowance for delayed correlations between the processes recorded at spatially separated points. The second one [5-7] allows the construction of nonlinear dynamic modes, but neglects delays in the correlations. It was demonstrated [1-3] that the first method provides effective separation of different time scales, but prevents a correct reduction of the data dimension: the slope of the variance spectrum of the spatio-temporal empirical orthogonal functions, which are the "structural material" for the linear spatio-temporal modes, is too flat. The second method overcomes this problem: the variance spectrum of the nonlinear modes falls essentially more sharply [5-7]. However, neglecting time-lag correlations introduces a mode-selection error that is uncontrolled and grows with the mode time scale. In this report we combine these two methods in such a way that the developed algorithm allows the construction of nonlinear spatio-temporal modes. The algorithm is applied to the decomposition of (i) multi-hundred-year globally distributed data generated by the INM RAS Coupled Climate Model [8], and (ii) a 156-year time series of SST anomalies distributed over the globe [9]. We compare the efficiency of the different decomposition methods and discuss the ability of nonlinear spatio-temporal modes to yield adequate and at the same time simplest ("optimal") models of climate systems. 1. Feigin A.M., Mukhin D., Gavrilov A., Volodin E.M., and Loskutov E.M. (2013) "Separation of spatial-temporal patterns ("climatic modes") by combined analysis of really measured and generated numerically vector time series", AGU 2013 Fall Meeting, Abstract NG33A-1574. 2. Alexander Feigin, Dmitry Mukhin, Andrey Gavrilov, Evgeny Volodin, and Evgeny Loskutov (2014) "Approach to analysis of multiscale space-distributed time series: separation of spatio-temporal modes with essentially different time scales", Geophysical Research Abstracts, Vol. 16, EGU2014-6877. 3. Dmitry Mukhin, Dmitri Kondrashov, Evgeny Loskutov, Andrey Gavrilov, Alexander Feigin, and Michael Ghil (2014) "Predicting critical transitions in ENSO models, Part II: Spatially dependent models", Journal of Climate (accepted, doi: 10.1175/JCLI-D-14-00240.1). 4. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 5. Dmitry Mukhin, Andrey Gavrilov, Evgeny M Loskutov and Alexander M Feigin (2014) "Nonlinear Decomposition of Climate Data: a New Method for Reconstruction of Dynamical Modes", AGU 2014 Fall Meeting, Abstract NG43A-3752. 6. Andrey Gavrilov, Dmitry Mukhin, Evgeny Loskutov, and Alexander Feigin (2015) "Empirical decomposition of climate data into nonlinear dynamic modes", Geophysical Research Abstracts, Vol. 17, EGU2015-627. 7. Dmitry Mukhin, Andrey Gavrilov, Evgeny Loskutov, Alexander Feigin, and Juergen Kurths (2015) "Reconstruction of principal dynamical modes from climatic variability: nonlinear approach", Geophysical Research Abstracts, Vol. 17, EGU2015-5729. 8. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm. 9. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/.

  16. Parameter estimation methods for gene circuit modeling from time-series mRNA data: a comparative study.

    PubMed

    Fan, Ming; Kuwahara, Hiroyuki; Wang, Xiaolei; Wang, Suojin; Gao, Xin

    2015-11-01

    Parameter estimation is a challenging computational problem in the reverse engineering of biological systems. Because advances in biotechnology have facilitated wide availability of time-series gene expression data, systematic parameter estimation of gene circuit models from such time-series mRNA data has become an important method for quantitatively dissecting the regulation of gene expression. By focusing on the modeling of gene circuits, we examine here the performance of three types of state-of-the-art parameter estimation methods: population-based methods, online methods and model-decomposition-based methods. Our results show that certain population-based methods are able to generate high-quality parameter solutions. The performance of these methods, however, is heavily dependent on the size of the parameter search space, and their computational requirements substantially increase as the size of the search space increases. In comparison, online methods and model-decomposition-based methods are computationally faster alternatives and are less dependent on the size of the search space. Among other things, our results show that a hybrid approach that augments computationally fast methods with local search as a subsequent refinement procedure can substantially increase the quality of their parameter estimates, to a level on par with the best solutions obtained from the population-based methods, while maintaining high computational speed. These results suggest that such hybrid methods can be a promising alternative to the more commonly used population-based methods for parameter estimation of gene circuit models when limited prior knowledge about the underlying regulatory mechanisms makes the size of the parameter search space vastly large.
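
    The hybrid strategy can be sketched in a few lines with scipy: a cheap global pass followed by local refinement. Here `residual(theta)` is a hypothetical user-supplied misfit between simulated and measured mRNA time series, and differential evolution merely stands in for the population-based stage.

```python
from scipy.optimize import differential_evolution, minimize

def fit_gene_circuit(residual, bounds):
    """Hybrid parameter estimation sketch: global search, then local
    refinement, echoing the paper's finding that a subsequent local
    search can lift fast methods to population-method quality."""
    coarse = differential_evolution(residual, bounds, maxiter=50, seed=0)
    refined = minimize(residual, coarse.x, method="L-BFGS-B", bounds=bounds)
    return refined.x
```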

  17. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Xiang; Yang, Chao; State Key Laboratory of Computer Science, Chinese Academy of Sciences, Beijing 100190

    2015-03-15

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.

  18. A fast new algorithm for a robot neurocontroller using inverse QR decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morris, A.S.; Khemaissia, S.

    2000-01-01

    A new adaptive neural network controller for robots is presented. The controller is based on direct adaptive techniques. Unlike many neural network controllers in the literature, inverse dynamical model evaluation is not required. A numerically robust, computationally efficient processing scheme for neural network weight estimation is described, namely, the inverse QR decomposition (INVQR). The inverse QR decomposition and a weighted recursive least-squares (WRLS) method for neural network weight estimation are derived using Cholesky factorization of the data matrix. The algorithm that performs the efficient INVQR of the underlying space-time data matrix may be implemented in parallel on a triangular array, and its systolic architecture is well suited for VLSI implementation. Another important benefit of the INVQR decomposition is that it solves directly for the time-recursive least-squares filter vector, while avoiding the sequential back-substitution step required by QR decomposition approaches.

  19. Dynamics in the Decompositions Approach to Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Harding, John

    2017-12-01

    In Harding (Trans. Amer. Math. Soc. 348(5), 1839-1862, 1996) it was shown that the direct product decompositions of any non-empty set, group, vector space, and topological space X form an orthomodular poset Fact X. This is the basis for a line of study in foundational quantum mechanics replacing Hilbert spaces with other types of structures. Here we develop dynamics and an abstract version of a time-independent Schrödinger's equation in the setting of decompositions by considering representations of the group of real numbers in the automorphism group of the orthomodular poset Fact X of decompositions.

  20. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF-fast imaging with steady-state precession sequence and more than 15-fold for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory-efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
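
    The memory-saving core is a randomized SVD, which never factorizes the full dictionary: it sketches the dictionary's range with a Gaussian test matrix and takes the SVD of a small projected matrix. A generic Halko-style sketch (not the paper's exact implementation) follows.

```python
import numpy as np

def randomized_svd(D, rank, oversample=10, seed=0):
    """Rank-`rank` randomized SVD of a dictionary D (entries x atoms).
    Only a (rank + oversample)-column sketch is ever orthogonalized,
    which is where the memory savings come from."""
    rng = np.random.default_rng(seed)
    omega = rng.standard_normal((D.shape[1], rank + oversample))
    Q, _ = np.linalg.qr(D @ omega)       # orthonormal range approximation
    B = Q.T @ D                          # small projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]
```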

  1. A biorthogonal decomposition for the identification and simulation of non-stationary and non-Gaussian random fields

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zentner, I.; Ferré, G., E-mail: gregoire.ferre@ponts.org; Poirion, F.

    2016-06-01

    In this paper, a new method for the identification and simulation of non-Gaussian and non-stationary stochastic fields given a database is proposed. It is based on two successive biorthogonal decompositions aimed at representing spatio-temporal stochastic fields. The proposed double expansion allows the model to be built even in the case of large-size problems, by separating the time, space and random parts of the field. A Gaussian kernel estimator is used to simulate the high dimensional set of random variables appearing in the decomposition. The capability of the method to reproduce the non-stationary and non-Gaussian features of random phenomena is illustrated by applications to earthquakes (seismic ground motion) and sea states (wave heights).

  2. Space-time least-squares Petrov-Galerkin projection in nonlinear model reduction.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Choi, Youngsoo; Carlberg, Kevin Thomas

    Our work proposes a space-time least-squares Petrov-Galerkin (ST-LSPG) projection method for model reduction of nonlinear dynamical systems. In contrast to typical nonlinear model-reduction methods that first apply Petrov-Galerkin projection in the spatial dimension and subsequently apply time integration to numerically resolve the resulting low-dimensional dynamical system, the proposed method applies projection in space and time simultaneously. To accomplish this, the method first introduces a low-dimensional space-time trial subspace, which can be obtained by computing tensor decompositions of state-snapshot data. The method then computes discrete-optimal approximations in this space-time trial subspace by minimizing the residual arising after time discretization over all space and time in a weighted ℓ2-norm. This norm can be defined to enable complexity reduction (i.e., hyper-reduction) in time, which leads to space-time collocation and space-time GNAT variants of the ST-LSPG method. Advantages of the approach relative to typical spatial-projection-based nonlinear model reduction methods such as Galerkin projection and least-squares Petrov-Galerkin projection include: (1) a reduction of both the spatial and temporal dimensions of the dynamical system, (2) the removal of spurious temporal modes (e.g., unstable growth) from the state space, and (3) error bounds that exhibit slower growth in time. Numerical examples performed on model problems in fluid dynamics demonstrate the ability of the method to generate orders-of-magnitude computational savings relative to spatial-projection-based reduced-order models without sacrificing accuracy.
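
    The construction of a space-time trial subspace from snapshot data can be sketched with a two-step (tensor-style) decomposition: spatial modes from the snapshot SVD, temporal modes from the SVD of the resulting modal coefficients, and their Kronecker product as the trial basis for vectorized trajectories. The residual minimization and hyper-reduction of ST-LSPG are omitted here.

```python
import numpy as np

def space_time_basis(X, n_space, n_time):
    """X holds state snapshots in columns (n_x, n_t). Returns a basis whose
    columns span a low-dimensional space-time subspace for vec(trajectory),
    stacked time-block by time-block."""
    Phi, _, _ = np.linalg.svd(X, full_matrices=False)
    Phi = Phi[:, :n_space]                       # spatial modes
    Psi, _, _ = np.linalg.svd((Phi.T @ X).T, full_matrices=False)
    Psi = Psi[:, :n_time]                        # temporal modes
    return np.kron(Psi, Phi)

# least-squares coordinates of a test trajectory in the subspace:
# B = space_time_basis(X_train, 5, 3)
# coords, *_ = np.linalg.lstsq(B, X_test.T.ravel(), rcond=None)
```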

  3. Domain decomposition methods for nonconforming finite element spaces of Lagrange-type

    NASA Technical Reports Server (NTRS)

    Cowsar, Lawrence C.

    1993-01-01

    In this article, we consider the application of three popular domain decomposition methods to Lagrange-type nonconforming finite element discretizations of scalar, self-adjoint, second order elliptic equations. The additive Schwarz method of Dryja and Widlund, the vertex space method of Smith, and the balancing method of Mandel applied to nonconforming elements are shown to converge at a rate no worse than their applications to the standard conforming piecewise linear Galerkin discretization. Essentially, the theory for the nonconforming elements is inherited from the existing theory for the conforming elements with only modest modification by constructing an isomorphism between the nonconforming finite element space and a space of continuous piecewise linear functions.

  4. Task-discriminative space-by-time factorization of muscle activity

    PubMed Central

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2015-01-01

    Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules by balancing two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment. PMID:26217213

  5. Task-discriminative space-by-time factorization of muscle activity.

    PubMed

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2015-01-01

    Movement generation has been hypothesized to rely on a modular organization of muscle activity. Crucial to this hypothesis is the ability to perform reliably a variety of motor tasks by recruiting a limited set of modules and combining them in a task-dependent manner. Thus far, existing algorithms that extract putative modules of muscle activations, such as Non-negative Matrix Factorization (NMF), identify modular decompositions that maximize the reconstruction of the recorded EMG data. Typically, the functional role of the decompositions, i.e., task accomplishment, is only assessed a posteriori. However, as motor actions are defined in task space, we suggest that motor modules should be computed in task space too. In this study, we propose a new module extraction algorithm, named DsNM3F, that uses task information during the module identification process. DsNM3F extends our previous space-by-time decomposition method (the so-called sNM3F algorithm, which could assess task performance only after having computed modules) to identify modules by balancing two complementary objectives: reconstruction of the original data and reliable discrimination of the performed tasks. We show that DsNM3F recovers the task dependence of module activations more accurately than sNM3F. We also apply it to electromyographic signals recorded during performance of a variety of arm pointing tasks and identify spatial and temporal modules of muscle activity that are highly consistent with previous studies. DsNM3F achieves perfect task categorization without significant loss in data approximation when task information is available and generalizes as well as sNM3F when applied to new data. These findings suggest that the space-by-time decomposition of muscle activity finds robust task-discriminating modular representations of muscle activity and that the insertion of task discrimination objectives is useful for describing the task modulation of module recruitment.
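
    The space-by-time idea in these two records can be illustrated (in simplified form) with plain non-negative matrix factorization: a trial-averaged time-by-muscle EMG matrix factors into non-negative temporal modules and spatial modules. This is only a flavor of the approach; the per-trial activation coefficients and the task-discrimination objective of sNM3F/DsNM3F are not reproduced.

```python
import numpy as np
from sklearn.decomposition import NMF

# stand-in EMG envelope data: 200 time samples x 12 muscles, non-negative
emg = np.abs(np.random.default_rng(1).standard_normal((200, 12)))

model = NMF(n_components=4, init="nndsvda", max_iter=500)
W = model.fit_transform(emg)   # temporal modules, shape (200, 4)
H = model.components_          # spatial modules, shape (4, 12)
```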

  6. Domain decomposition for a mixed finite element method in three dimensions

    USGS Publications Warehouse

    Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.

    2003-01-01

    We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.

  7. An NN-Based SRD Decomposition Algorithm and Its Application in Nonlinear Compensation

    PubMed Central

    Yan, Honghang; Deng, Fang; Sun, Jian; Chen, Jie

    2014-01-01

    In this study, a neural network-based square root of descending (SRD) order decomposition algorithm for compensating for nonlinear data generated by sensors is presented. The study aims at exploring the optimized decomposition of data 1.00,0.00,0.00 and minimizing the computational complexity and memory space of the training process. A linear decomposition algorithm, which automatically finds the optimal decomposition of N subparts and reduces the training time to 1N and memory cost to 1N, has been implemented on nonlinear data obtained from an encoder. Particular focus is given to the theoretical access of estimating the numbers of hidden nodes and the precision of varying the decomposition method. Numerical experiments are designed to evaluate the effect of this algorithm. Moreover, a designed device for angular sensor calibration is presented. We conduct an experiment that samples the data of an encoder and compensates for the nonlinearity of the encoder to testify this novel algorithm. PMID:25232912

  8. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE PAGES

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    2017-09-17

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.
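
    A minimal sketch of the two snapshot strategies follows; exact time derivatives are replaced by finite differences, and the dt scaling of the derivative block is one reasonable choice, not prescribed by the paper.

```python
import numpy as np

def pod_basis(X, rank):
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :rank]

def pod_m1_m2(X, dt, rank):
    """M1: basis from solution snapshots only (columns of X at spacing dt).
    M2: snapshot set augmented with time-derivative snapshots."""
    Xdot = (X[:, 1:] - X[:, :-1]) / dt       # finite-difference stand-in
    return pod_basis(X, rank), pod_basis(np.hstack([X, dt * Xdot]), rank)
```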

  9. Model reduction of dynamical systems by proper orthogonal decomposition: Error bounds and comparison of methods using snapshots from the solution and the time derivatives [Proper orthogonal decomposition model reduction of dynamical systems: error bounds and comparison of methods using snapshots from the solution and the time derivatives

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kostova-Vassilevska, Tanya; Oxberry, Geoffrey M.

    In this study, we consider two proper orthogonal decomposition (POD) methods for dimension reduction of dynamical systems. The first method (M1) uses only time snapshots of the solution, while the second method (M2) augments the snapshot set with time-derivative snapshots. The goal of the paper is to analyze and compare the approximation errors resulting from the two methods by using error bounds. We derive several new bounds of the error from POD model reduction by each of the two methods. The new error bounds involve a multiplicative factor depending on the time steps between the snapshots. For method M1 the factor depends on the second power of the time step, while for method M2 the dependence is on the fourth power of the time step, suggesting that method M2 can be more accurate for small between-snapshot intervals. However, three other factors also affect the size of the error bounds. These include (i) the norm of the second (for M1) and fourth (for M2) derivatives; (ii) the first neglected singular value and (iii) the spectral properties of the projection of the system's Jacobian in the reduced space. Because of the interplay of these factors neither method is more accurate than the other in all cases. Finally, we present numerical examples demonstrating that when the number of collected snapshots is small and the first neglected singular value has a value of zero, method M2 results in a better approximation.

  10. Hidden Surface Removal through Object Space Decomposition.

    DTIC Science & Technology

    1982-01-01

    [OCR residue from the DTIC report documentation page; little beyond the bibliographic details survives. Recoverable content: an Air Force Institute of Technology (Wright-Patterson AFB) thesis, author first name Robert, on hidden surface removal through object space decomposition; the table of contents includes a section "Methods of Subdividing the Object Space".]

  11. A study on characteristics of retrospective optimal interpolation with WRF testbed

    NASA Astrophysics Data System (ADS)

    Kim, S.; Noh, N.; Lim, G.

    2012-12-01

    This study presents the application of retrospective optimal interpolation (ROI) with the Weather Research and Forecasting (WRF) model. Song et al. (2009) suggested the ROI method, an optimal interpolation (OI) that gradually assimilates observations over the analysis window to obtain a variance-minimum estimate of the atmospheric state at the initial time of the window. Song and Lim (2011) improved the method by incorporating eigen-decomposition and covariance inflation. The ROI method assimilates data at post-analysis times using a perturbation method (Errico and Raeder, 1999) without an adjoint model. In this study, the ROI method is applied to the WRF model to validate the algorithm and to investigate its capability. The computational cost of ROI can be reduced thanks to the eigen-decomposition of the background error covariance. Using the background error covariance in eigen-space, a 1-profile assimilation experiment is performed. The difference between forecast errors with and without assimilation obviously increases as time passes, which indicates the improvement of the forecast error by assimilation. The characteristics and strengths/weaknesses of the ROI method are investigated by conducting experiments with other data assimilation methods.

  12. Predictability of monthly temperature and precipitation using automatic time series forecasting methods

    NASA Astrophysics Data System (ADS)

    Papacharalampous, Georgia; Tyralis, Hristos; Koutsoyiannis, Demetris

    2018-02-01

    We investigate the predictability of monthly temperature and precipitation by applying automatic univariate time series forecasting methods to a sample of 985 40-year-long monthly temperature and 1552 40-year-long monthly precipitation time series. The methods include a naïve one based on the monthly values of the last year, as well as the random walk (with drift), AutoRegressive Fractionally Integrated Moving Average (ARFIMA), exponential smoothing state-space model with Box-Cox transformation, ARMA errors, Trend and Seasonal components (BATS), simple exponential smoothing, Theta and Prophet methods. Prophet is a recently introduced model inspired by the nature of time series forecasted at Facebook and has not been applied to hydrometeorological time series before, while the use of random walk, BATS, simple exponential smoothing and Theta is rare in hydrology. The methods are tested in performing multi-step ahead forecasts for the last 48 months of the data. We further investigate how different choices of handling the seasonality and non-normality affect the performance of the models. The results indicate that: (a) all the examined methods apart from the naïve and random walk ones are accurate enough to be used in long-term applications; (b) monthly temperature and precipitation can be forecasted to a level of accuracy which can barely be improved using other methods; (c) the externally applied classical seasonal decomposition results mostly in better forecasts compared to the automatic seasonal decomposition used by the BATS and Prophet methods; and (d) Prophet is competitive, especially when it is combined with externally applied classical seasonal decomposition.
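
    The externally applied classical seasonal decomposition favored in point (c) is easy to sketch: subtract centered per-month means, apply simple exponential smoothing to the remainder, and add the monthly pattern back over the forecast horizon. The sketch assumes an additive model and a series starting at the first month of the cycle; it is an illustration, not the study's full protocol.

```python
import numpy as np

def deseasonalized_ses_forecast(y, period=12, alpha=0.2, horizon=48):
    """Classical additive seasonal decomposition + simple exponential
    smoothing of the deseasonalized monthly series."""
    n = len(y)
    seasonal = np.array([y[m::period].mean() for m in range(period)])
    seasonal -= seasonal.mean()                  # centered monthly effects
    resid = y - seasonal[np.arange(n) % period]
    level = resid[0]
    for r in resid[1:]:
        level = alpha * r + (1 - alpha) * level  # smoothed level
    return level + seasonal[(n + np.arange(horizon)) % period]
```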

  13. Scare Tactics: Evaluating Problem Decompositions Using Failure Scenarios

    NASA Technical Reports Server (NTRS)

    Helm, B. Robert; Fickas, Stephen

    1992-01-01

    Our interest is in the design of multi-agent problem-solving systems, which we refer to as composite systems. We have proposed an approach to composite system design by decomposition of problem statements. An automated assistant called Critter provides a library of reusable design transformations which allow a human analyst to search the space of decompositions for a problem. In this paper we describe a method for evaluating and critiquing problem decompositions generated by this search process. The method uses knowledge stored in the form of failure decompositions attached to design transformations. We suggest the benefits of our critiquing method by showing how it could re-derive steps of a published development example. We then identify several open issues for the method.

  14. Domain decomposition methods for systems of conservation laws: Spectral collocation approximations

    NASA Technical Reports Server (NTRS)

    Quarteroni, Alfio

    1989-01-01

    Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time-level a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.

  15. Design and assembly of a catalyst bed gas generator for the catalytic decomposition of high concentration hydrogen peroxide propellants and the catalytic combustion of hydrocarbon/air mixtures

    NASA Technical Reports Server (NTRS)

    Lohner, Kevin A. (Inventor); Mays, Jeffrey A. (Inventor); Sevener, Kathleen M. (Inventor)

    2004-01-01

    A method for designing and assembling a high performance catalyst bed gas generator for use in decomposing propellants, particularly hydrogen peroxide propellants, for use in target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The gas generator utilizes a sectioned catalyst bed system, and incorporates a robust, high temperature mixed metal oxide catalyst. The gas generator requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. The high performance catalyst bed gas generator system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.

  16. An operational modal analysis method in frequency and spatial domain

    NASA Astrophysics Data System (ADS)

    Wang, Tong; Zhang, Lingmi; Tamura, Yukio

    2005-12-01

    A frequency and spatial domain decomposition method (FSDD) for operational modal analysis (OMA) is presented in this paper; it is an extension of the complex mode indicator function (CMIF) method for experimental modal analysis (EMA). The theoretical background of the FSDD method is clarified. Singular value decomposition is adopted to separate the signal space from the noise space. Finally, an enhanced power spectral density (PSD) is proposed to obtain more accurate modal parameters by curve fitting in the frequency domain. Moreover, a simulation case and an application case are used to validate this method.
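
    The CMIF core that FSDD extends can be sketched directly: estimate the output cross-spectral density matrix at each frequency and track its singular values; peaks of the leading singular value indicate modes. The sketch below uses scipy's Welch-based `csd` and is a generic CMIF illustration, not the enhanced-PSD curve-fitting stage of FSDD.

```python
import numpy as np
from scipy.signal import csd

def cmif(outputs, fs, nperseg=1024):
    """outputs: (n_channels, n_samples) array of response histories.
    Returns frequencies and the singular values of the cross-spectral
    density matrix at each frequency; plot svals[:, 0] against f to
    locate modes."""
    n_ch = outputs.shape[0]
    f, _ = csd(outputs[0], outputs[0], fs=fs, nperseg=nperseg)
    G = np.empty((len(f), n_ch, n_ch), dtype=complex)
    for i in range(n_ch):
        for j in range(n_ch):
            _, G[:, i, j] = csd(outputs[i], outputs[j], fs=fs,
                                nperseg=nperseg)
    svals = np.linalg.svd(G, compute_uv=False)   # batched over frequency
    return f, svals
```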

  17. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Garrett, C. Kristopher; Hauck, Cory D.

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.

  18. A Fast Solver for Implicit Integration of the Vlasov--Poisson System in the Eulerian Framework

    DOE PAGES

    Garrett, C. Kristopher; Hauck, Cory D.

    2018-04-05

    In this paper, we present a domain decomposition algorithm to accelerate the solution of Eulerian-type discretizations of the linear, steady-state Vlasov equation. The steady-state solver then forms a key component in the implementation of fully implicit or nearly fully implicit temporal integrators for the nonlinear Vlasov--Poisson system. The solver relies on a particular decomposition of phase space that enables the use of sweeping techniques commonly used in radiation transport applications. The original linear system for the phase space unknowns is then replaced by a smaller linear system involving only unknowns on the boundary between subdomains, which can then be solved efficiently with Krylov methods such as GMRES. Steady-state solves are combined to form an implicit Runge--Kutta time integrator, and the Vlasov equation is coupled self-consistently to the Poisson equation via a linearized procedure or a nonlinear fixed-point method for the electric field. Finally, numerical results for standard test problems demonstrate the efficiency of the domain decomposition approach when compared to the direct application of an iterative solver to the original linear system.

  19. TEMPORAL SIGNATURES OF AIR QUALITY OBSERVATIONS AND MODEL OUTPUTS: DO TIME SERIES DECOMPOSITION METHODS CAPTURE RELEVANT TIME SCALES?

    EPA Science Inventory

    Time series decomposition methods were applied to meteorological and air quality data and their numerical model estimates. Decomposition techniques express a time series as the sum of a small number of independent modes which hypothetically represent identifiable forcings, thereb...

  20. Some Remarks on Space-Time Decompositions, and Degenerate Metrics, in General Relativity

    NASA Astrophysics Data System (ADS)

    Bengtsson, Ingemar

    Space-time decomposition of the Hilbert-Palatini action, written in a form which admits degenerate metrics, is considered. Simple numerology shows why D = 3 and 4 are singled out as admitting a simple phase space. The canonical structure of the degenerate sector turns out to be awkward. However, the real degenerate metrics obtained as solutions are the same as those that occur in Ashtekar's formulation of complex general relativity. An exact solution of Ashtekar's equations, with degenerate metric, shows that the manifestly four-dimensional form of the action, and its 3 + 1 form, are not quite equivalent.

  1. On the velocity space discretization for the Vlasov-Poisson system: Comparison between implicit Hermite spectral and Particle-in-Cell methods

    NASA Astrophysics Data System (ADS)

    Camporeale, E.; Delzanno, G. L.; Bergen, B. K.; Moulton, J. D.

    2016-01-01

    We describe a spectral method for the numerical solution of the Vlasov-Poisson system where the velocity space is decomposed by means of a Hermite basis, and the configuration space is discretized via a Fourier decomposition. The novelty of our approach is an implicit time discretization that allows exact conservation of charge, momentum and energy. The computational efficiency and the cost-effectiveness of this method are compared to the fully-implicit PIC method recently introduced by Markidis and Lapenta (2011) and Chen et al. (2011). The following examples are discussed: Langmuir wave, Landau damping, ion-acoustic wave, two-stream instability. The Fourier-Hermite spectral method can achieve solutions that are several orders of magnitude more accurate at a fraction of the cost relative to PIC.
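
    The Hermite part of such a scheme reduces, for each Fourier mode, to a tridiagonal coupling between neighboring Hermite coefficients. A sketch of an implicit (Crank-Nicolson) step for the free-streaming term alone is shown below, using symmetrically weighted Hermite functions with unit thermal speed; the electric-field coupling and the exact conservation machinery of the paper are omitted.

```python
import numpy as np

def hermite_free_streaming_step(C, k, dt):
    """Advance dC_n/dt = -i*k*(sqrt(n/2) C_{n-1} + sqrt((n+1)/2) C_{n+1})
    for one spatial Fourier mode k with an implicit midpoint rule."""
    N = len(C)
    n = np.arange(N)
    A = np.zeros((N, N), dtype=complex)
    A[n[1:], n[1:] - 1] = np.sqrt(n[1:] / 2.0)            # sub-diagonal
    A[n[:-1], n[:-1] + 1] = np.sqrt((n[:-1] + 1) / 2.0)   # super-diagonal
    M = 0.5j * k * dt * A
    I = np.eye(N)
    return np.linalg.solve(I + M, (I - M) @ C)
```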

  2. Validating the performance of one-time decomposition for fMRI analysis using ICA with automatic target generation process.

    PubMed

    Yao, Shengnan; Zeng, Weiming; Wang, Nizhuan; Chen, Lei

    2013-07-01

    Independent component analysis (ICA) has been proven to be effective for functional magnetic resonance imaging (fMRI) data analysis. However, ICA decomposition requires iterative optimization of an unmixing matrix whose initial values are generated randomly. The randomness of this initialization leads to different ICA decomposition results, so a single one-time decomposition is usually not reliable for fMRI data analysis. Under this circumstance, several methods based on repeated decompositions with ICA (RDICA) were proposed to reveal the stability of ICA decomposition. Although utilizing RDICA has achieved satisfying results in validating the performance of ICA decomposition, RDICA costs considerable computing time. To mitigate this problem, in this paper, we propose a method, named ATGP-ICA, for fMRI data analysis. This method generates fixed initial values with an automatic target generation process (ATGP) instead of producing them randomly. We performed experimental tests on both hybrid data and fMRI data to demonstrate the effectiveness of the new method and made a performance comparison of the traditional one-time decomposition with ICA (ODICA), RDICA and ATGP-ICA. The proposed method not only eliminates the randomness of ICA decomposition, but also saves considerable computing time compared to RDICA. Furthermore, a ROC (Receiver Operating Characteristic) power analysis indicated better signal reconstruction performance for ATGP-ICA than for RDICA.
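
    The ATGP step itself is not reproduced here; the sketch below only illustrates the core idea of removing the randomness of ICA by fixing the unmixing-matrix initialization, using scikit-learn's FastICA and its w_init argument on toy sources.

    ```python
    import numpy as np
    from sklearn.decomposition import FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 1, 1000)
    S = np.c_[np.sin(40 * t), np.sign(np.sin(23 * t))]   # two toy sources
    X = S @ rng.normal(size=(2, 2)).T                    # mixed observations

    w_init = np.eye(2)                                   # fixed (here: identity) initialization
    ica = FastICA(n_components=2, w_init=w_init, random_state=0)
    S_est = ica.fit_transform(X)                         # identical result on every run
    ```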

  3. A multilevel preconditioner for domain decomposition boundary systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bramble, J.H.; Pasciak, J.E.; Xu, Jinchao.

    1991-12-11

    In this note, we consider multilevel preconditioning of the reduced boundary systems which arise in non-overlapping domain decomposition methods. It will be shown that the resulting preconditioned systems have condition numbers which are bounded in the case of multilevel spaces on the whole domain, and grow at most in proportion to the number of levels in the case of multilevel boundary spaces without multilevel extensions into the interior.

  4. Surface fuel litterfall and decomposition in the northern Rocky Mountains, U.S.A.

    Treesearch

    Robert E. Keane

    2008-01-01

    Surface fuel deposition and decomposition rates are important to fire management and research because they can define the longevity of fuel treatments in time and space and they can be used to design, build, test, and validate complex fire and ecosystem models useful in evaluating management alternatives. We determined rates of surface fuel litterfall and decomposition...

  5. Fast flux module detection using matroid theory.

    PubMed

    Reimers, Arne C; Bruggeman, Frank J; Olivier, Brett G; Stougie, Leen

    2015-05-01

    Flux balance analysis (FBA) is one of the most frequently applied methods for genome-scale metabolic networks. Although FBA uniquely determines the optimal yield, the pathway that achieves this is usually not unique. The analysis of the optimal-yield flux space has been an open challenge. Flux variability analysis captures only some properties of the flux space, while elementary mode analysis is intractable due to the enormous number of elementary modes. However, it has been found by Kelk et al. (2012) that the space of optimal-yield fluxes decomposes into flux modules. These decompositions allow a much easier but still comprehensive analysis of the optimal-yield flux space. Using the mathematical definition of module introduced by Müller and Bockmayr (2013b), we discovered useful connections to matroid theory, through which efficient algorithms enable us to compute the decomposition into modules in a few seconds for genome-scale networks. Since every module can be represented by one reaction that represents its function, in this article, we also present a method that uses this decomposition to visualize the interplay of modules. We expect the new method to replace flux variability analysis in the pipelines for metabolic networks.

  6. Efficient scheme for parametric fitting of data in arbitrary dimensions.

    PubMed

    Pang, Ning-Ning; Tzeng, Wen-Jer; Kao, Hisen-Ching

    2008-07-01

    We propose an efficient scheme for parametric fitting expressed in terms of the Legendre polynomials. For continuous systems, our scheme is exact and the derived explicit expression is very helpful for further analytical studies. For discrete systems, our scheme is almost as accurate as the method of singular value decomposition. Through a few numerical examples, we show that our algorithm costs much less CPU time and memory space than the method of singular value decomposition. Thus, our algorithm is very suitable for fitting large amounts of data. In addition, the proposed scheme can also be used to extract the global structure of fluctuating systems. We then derive the exact relation between the correlation function and the detrended variance function of fluctuating systems in arbitrary dimensions and give a general scaling analysis.
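
    A small sketch of the comparison on toy data: a Legendre-coefficient fit versus an SVD-based least-squares solve over the same basis (degree and noise level are arbitrary choices, not the paper's test cases).

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 500)
    y = np.exp(x) + 0.05 * rng.normal(size=x.size)   # noisy data to fit

    deg = 6
    c_leg = L.legfit(x, y, deg)                      # Legendre-coefficient fit

    V = L.legvander(x, deg)                          # same basis, explicit design matrix
    c_svd, *_ = np.linalg.lstsq(V, y, rcond=None)    # SVD-based least-squares solution

    print(np.allclose(c_leg, c_svd, atol=1e-8))      # both recover the same expansion
    ```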

  7. Molecular structure, thermal behavior and adiabatic time-to-explosion of 3,3-dinitroazetidinium picrate

    NASA Astrophysics Data System (ADS)

    Ma, Haixia; Yan, Biao; Li, Junfeng; Ren, Yinghui; Chen, Yongshi; Zhao, Fengqi; Song, Jirong; Hu, Rongzu

    2010-09-01

    3,3-Dinitroazetidinium picrate (DNAZ·PA) was synthesized by adding 3,3-dinitroazetidine (DNAZ) to picric acid (PA) in methanol; single crystals suitable for X-ray measurement were obtained by recrystallization at room temperature. The compound crystallises in the orthorhombic system with space group P2₁2₁2₁ and crystal parameters a = 0.7655(1) nm, b = 0.8962(2) nm, c = 2.0507(4) nm, V = 1.4069(5) nm³, Dc = 1.776 g cm⁻³, Z = 4, F(000) = 768 and μ = 0.166 mm⁻¹. The thermal behavior of DNAZ·PA was studied under non-isothermal conditions by DSC and TG-DTG methods. The kinetic parameters of the first exothermic decomposition process were obtained from analysis of the DSC and TG curves by the Kissinger, Ozawa and integral methods. The specific heat capacity of DNAZ·PA was determined with a continuous Cp mode of a micro-calorimeter, and the standard mole specific heat capacity was 436.56 J mol⁻¹ K⁻¹ at 298.15 K. Using the relationship of Cp with T and the thermal decomposition parameters, the time from the onset of thermal decomposition to thermal explosion (adiabatic time-to-explosion) was evaluated to be 40.7 s. The free radical signals of DNAZ·PA and 1,3,3-trinitroazetidine (TNAZ) were detected by the electron spin resonance (ESR) technique to estimate sensitivity.
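
    For readers unfamiliar with the Kissinger method used here: it exploits the linear relation ln(β/Tp²) = const − Ea/(R·Tp) between heating rate β and DSC peak temperature Tp. The sketch below applies it to made-up numbers, not the DNAZ·PA data.

    ```python
    import numpy as np

    R = 8.314                                    # gas constant, J mol^-1 K^-1
    beta = np.array([2.5, 5.0, 10.0, 20.0])      # heating rates, K min^-1 (hypothetical)
    Tp = np.array([455.2, 462.8, 470.9, 479.5])  # DSC peak temperatures, K (hypothetical)

    # Kissinger plot: ln(beta / Tp^2) vs 1/Tp is linear with slope -Ea/R
    slope, intercept = np.polyfit(1.0 / Tp, np.log(beta / Tp**2), 1)
    Ea = -slope * R                              # apparent activation energy, J mol^-1
    print(f"Ea ≈ {Ea / 1000:.1f} kJ mol^-1")
    ```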

  8. Stochastic shock response spectrum decomposition method based on probabilistic definitions of temporal peak acceleration, spectral energy, and phase lag distributions of mechanical impact pyrotechnic shock test data

    NASA Astrophysics Data System (ADS)

    Hwang, James Ho-Jin; Duran, Adam

    2016-08-01

    Most of the time, pyrotechnic shock design and test requirements for space systems are provided as a Shock Response Spectrum (SRS) without the input time history. Since the SRS does not describe the input or the environment, a decomposition method is used to obtain the source time history. The main objective of this paper is to develop a decomposition method producing input time histories that can satisfy the SRS requirement based on the pyrotechnic shock test data measured from a mechanical impact test apparatus. At the heart of this decomposition method is the statistical representation of the pyrotechnic shock test data measured from the MIT Lincoln Laboratory (LL) designed Universal Pyrotechnic Shock Simulator (UPSS). Each pyrotechnic shock test record measured at the interface of a test unit has been analyzed to produce the temporal peak acceleration, Root Mean Square (RMS) acceleration, and the phase lag at each band center frequency. The maximum SRS of each filtered time history has been calculated to produce a relationship between the input and the response. Two new definitions are proposed as a result. The Peak Ratio (PR) is defined as the ratio between the maximum SRS and the temporal peak acceleration at each band center frequency. The ratio between the maximum SRS and the RMS acceleration is defined as the Energy Ratio (ER) at each band center frequency. Phase lag is estimated based on the time delay between the temporal peak acceleration at each band center frequency and the peak acceleration at the lowest band center frequency. This stochastic process has been applied to more than one hundred pyrotechnic shock test records to produce probabilistic definitions of the PR, ER, and the phase lag. The SRS is decomposed at each band center frequency using damped sinusoids with the PR and the decays obtained by matching the ER of the damped sinusoids to the ER of the test data. The final step in this stochastic SRS decomposition process is the Monte Carlo (MC) simulation. The MC simulation identifies combinations of the PR and decays that can meet the SRS requirement at each band center frequency. Decomposed input time histories are produced by summing the converged damped sinusoids with the MC simulation of the phase lag distribution.
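
    A heavily simplified sketch of the synthesis step only: a candidate input time history built as a sum of damped sinusoids at hypothetical band center frequencies, with per-band amplitude, decay and delay standing in for the sampled PR, the ER-matched decay and the phase lag. The probabilistic definitions and the SRS verification loop are omitted.

    ```python
    import numpy as np

    fs = 10240.0                                       # sampling rate, Hz (assumed)
    t = np.arange(0, 0.1, 1 / fs)
    f_centers = np.array([100.0, 200, 400, 800, 1600, 3200])  # hypothetical band centres, Hz

    rng = np.random.default_rng(0)
    accel = np.zeros_like(t)
    for fc in f_centers:
        A = rng.uniform(50, 200)       # stand-in for an amplitude from the sampled Peak Ratio
        zeta = rng.uniform(0.01, 0.05) # stand-in for a decay matched to the Energy Ratio
        tau = rng.uniform(0, 2e-3)     # stand-in for a sampled phase lag, s
        s = np.clip(t - tau, 0.0, None)                # delayed start per band
        accel += A * np.exp(-zeta * 2 * np.pi * fc * s) * np.sin(2 * np.pi * fc * s)
    ```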

  9. Spatiotemporal Domain Decomposition for Massive Parallel Computation of Space-Time Kernel Density

    NASA Astrophysics Data System (ADS)

    Hohl, A.; Delmelle, E. M.; Tang, W.

    2015-07-01

    Accelerated processing capabilities are deemed critical when conducting analysis on spatiotemporal datasets of increasing size, diversity and availability. High-performance parallel computing offers the capacity to solve computationally demanding problems in a limited timeframe, but likewise poses the challenge of preventing processing inefficiency due to workload imbalance between computing resources. Therefore, when designing new algorithms capable of implementing parallel strategies, careful spatiotemporal domain decomposition is necessary to account for heterogeneity in the data. In this study, we perform octree-based adaptive decomposition of the spatiotemporal domain for parallel computation of space-time kernel density. In order to avoid edge effects near subdomain boundaries, we establish spatiotemporal buffers to include adjacent data-points that are within the spatial and temporal kernel bandwidths. Then, we quantify the computational intensity of each subdomain to balance workloads among processors. We illustrate the benefits of our methodology using a space-time epidemiological dataset of Dengue fever, an infectious vector-borne disease that poses a severe threat to communities in tropical climates. Our parallel implementation of kernel density reaches substantial speedup compared to sequential processing, and achieves high levels of workload balance among processors due to great accuracy in quantifying computational intensity. Our approach is portable to other space-time analytical tests.
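
    A generic sketch of octree-style adaptive decomposition of a space-time point set (x, y, t): cells holding more points than a capacity threshold are split recursively. The buffer zones and computational-intensity weighting described above are omitted, and all parameters are toy assumptions.

    ```python
    import numpy as np

    def octree(points, lo, hi, capacity=100, depth=0, max_depth=8):
        """Recursively split the (x, y, t) box until each leaf holds few points."""
        inside = np.all((points >= lo) & (points < hi), axis=1)
        idx = np.flatnonzero(inside)
        if idx.size <= capacity or depth == max_depth:
            return [(lo, hi, idx)]                   # leaf: bounds plus member indices
        mid = (lo + hi) / 2
        leaves = []
        for corner in range(8):                      # 2^3 children of the cell
            bits = [(corner >> d) & 1 for d in range(3)]
            child_lo = np.where(bits, mid, lo)
            child_hi = np.where(bits, hi, mid)
            leaves += octree(points, child_lo, child_hi, capacity, depth + 1, max_depth)
        return leaves

    pts = np.random.default_rng(0).random((5000, 3)) # synthetic (x, y, t) events in [0,1)^3
    cells = octree(pts, np.zeros(3), np.ones(3))
    print(len(cells), "leaf cells")
    ```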

  10. Reduced nonlinear prognostic model construction from high-dimensional data

    NASA Astrophysics Data System (ADS)

    Gavrilov, Andrey; Mukhin, Dmitry; Loskutov, Evgeny; Feigin, Alexander

    2017-04-01

    Construction of a data-driven model of an evolution operator using universal approximating functions can only be statistically justified when the dimension of its phase space is small enough, especially in the case of short time series. At the same time, in many applications real-measured data is high-dimensional; e.g., it is space-distributed and multivariate in climate science. Therefore it is necessary to use efficient dimensionality reduction methods which are also able to capture key dynamical properties of the system from observed data. To address this problem we present a Bayesian approach to evolution operator construction which incorporates two key reduction steps. First, the data is decomposed into a set of certain empirical modes, such as standard empirical orthogonal functions or the recently suggested nonlinear dynamical modes (NDMs) [1], and the reduced space of corresponding principal components (PCs) is obtained. Then, the model of the evolution operator for the PCs is constructed, which maps a number of past states to the current state. The second step is to reduce this time-extended space in the past using appropriate decomposition methods. Such a reduction allows us to capture only the most significant spatio-temporal couplings. The functional form of the evolution operator includes separately linear, nonlinear (based on artificial neural networks) and stochastic terms. Explicit separation of the linear term from the nonlinear one allows us to more easily interpret the degree of nonlinearity, as well as to deal better with smooth PCs, which can naturally occur in decompositions like NDM, as they provide a time-scale separation. The results of applying the proposed method to climate data are demonstrated and discussed. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS). 1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
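
    A compact sketch of the two reduction steps on synthetic data: projection onto leading principal components, then a regression from a few lagged PC states to the current state. The operator here is purely linear for brevity, whereas the model described above adds neural-network and stochastic terms.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(size=(2000, 50))                # time x space (toy field)

    # step 1: spatial reduction via EOFs / principal components
    U, s, Vt = np.linalg.svd(data - data.mean(0), full_matrices=False)
    pcs = (U * s)[:, :3]                              # leading principal components

    # step 2: time-extended regression, d past states -> current state
    d = 4
    X = np.hstack([pcs[i:len(pcs) - d + i] for i in range(d)])  # lagged predictors
    Y = pcs[d:]
    A, *_ = np.linalg.lstsq(X, Y, rcond=None)         # linear evolution operator
    ```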

  11. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    PubMed

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes the approach difficult to characterize and evaluate. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in Huang's original EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD appear to perform poorly and are very time consuming. In this paper, an extension of the PDE-based approach to 2-D space is therefore described in detail. This approach has been applied in cases of both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data, and some results have been provided in the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.
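
    For contrast with the PDE-based formulation, here is one iteration of the classical 1-D algorithmic sifting step that the diffusion filter replaces: extrema envelopes from cubic splines, then subtraction of their mean (boundary handling is glossed over, and the signal is a toy two-tone example).

    ```python
    import numpy as np
    from scipy.interpolate import CubicSpline
    from scipy.signal import argrelextrema

    t = np.linspace(0, 1, 1000)
    x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

    imax = argrelextrema(x, np.greater)[0]        # local maxima
    imin = argrelextrema(x, np.less)[0]           # local minima
    upper = CubicSpline(t[imax], x[imax])(t)      # upper envelope (ends extrapolated)
    lower = CubicSpline(t[imin], x[imin])(t)      # lower envelope
    candidate_imf = x - (upper + lower) / 2       # one sifting iteration
    ```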

  12. Functional magnetic resonance imaging activation detection: fuzzy cluster analysis in wavelet and multiwavelet domains.

    PubMed

    Jahanian, Hesamoddin; Soltanian-Zadeh, Hamid; Hossein-Zadeh, Gholam-Ali

    2005-09-01

    We present novel feature spaces, based on multiscale decompositions obtained by scalar wavelet and multiwavelet transforms, to remedy problems associated with the high dimension of functional magnetic resonance imaging (fMRI) time series (when they are used directly in clustering algorithms) and with their poor signal-to-noise ratio (SNR), which limits accurate classification of fMRI time series according to their activation contents. Using randomization, the proposed method finds wavelet/multiwavelet coefficients that represent the activation content of fMRI time series and combines them to define new feature spaces. Using simulated and experimental fMRI data sets, the proposed feature spaces are compared to the cross-correlation (CC) feature space and their performances are evaluated. In these studies, the false positive detection rate is controlled using randomization. To compare different methods, several points of the receiver operating characteristic (ROC) curves, using simulated data, are estimated and compared. The proposed features suppress the effects of confounding signals and improve activation detection sensitivity. Experimental results show improved sensitivity and robustness of the proposed method compared to the conventional CC analysis. More accurate and sensitive activation detection can be achieved using the proposed feature spaces compared to the CC feature space. Multiwavelet features show superior detection sensitivity compared to the scalar wavelet features.

  13. A Subspace Approach to the Structural Decomposition and Identification of Ankle Joint Dynamic Stiffness.

    PubMed

    Jalaleddini, Kian; Tehrani, Ehsan Sobhani; Kearney, Robert E

    2017-06-01

    The purpose of this paper is to present a structural decomposition subspace (SDSS) method for decomposition of the joint torque to intrinsic, reflexive, and voluntary torques and identification of joint dynamic stiffness. First, it formulates a novel state-space representation for the joint dynamic stiffness modeled by a parallel-cascade structure with a concise parameter set that provides a direct link between the state-space representation matrices and the parallel-cascade parameters. Second, it presents a subspace method for the identification of the new state-space model that involves two steps: 1) the decomposition of the intrinsic and reflex pathways and 2) the identification of an impulse response model of the intrinsic pathway and a Hammerstein model of the reflex pathway. Extensive simulation studies demonstrate that SDSS has significant performance advantages over some other methods. Thus, SDSS was more robust under high noise conditions, converging where others failed; it was more accurate, giving estimates with lower bias and random errors. The method also worked well in practice and yielded high-quality estimates of intrinsic and reflex stiffnesses when applied to experimental data at three muscle activation levels. The simulation and experimental results demonstrate that SDSS accurately decomposes the intrinsic and reflex torques and provides accurate estimates of physiologically meaningful parameters. SDSS will be a valuable tool for studying joint stiffness under functionally important conditions. It has important clinical implications for the diagnosis, assessment, objective quantification, and monitoring of neuromuscular diseases that change the muscle tone.

  14. Multi-label learning with fuzzy hypergraph regularization for protein subcellular location prediction.

    PubMed

    Chen, Jing; Tang, Yuan Yan; Chen, C L Philip; Fang, Bin; Lin, Yuewei; Shang, Zhaowei

    2014-12-01

    Protein subcellular location prediction aims to predict the location where a protein resides within a cell using computational methods. Considering the main limitations of the existing methods, we propose a hierarchical multi-label learning model FHML for both single-location proteins and multi-location proteins. The latent concepts are extracted through feature space decomposition and label space decomposition under the nonnegative data factorization framework. The extracted latent concepts are used as the codebook to indirectly connect the protein features to their annotations. We construct dual fuzzy hypergraphs to capture the intrinsic high-order relations embedded in not only feature space, but also label space. Finally, the subcellular location annotation information is propagated from the labeled proteins to the unlabeled proteins by performing dual fuzzy hypergraph Laplacian regularization. The experimental results on the six protein benchmark datasets demonstrate the superiority of our proposed method by comparing it with the state-of-the-art methods, and illustrate the benefit of exploiting both feature correlations and label correlations.

  15. Parallel CE/SE Computations via Domain Decomposition

    NASA Technical Reports Server (NTRS)

    Himansu, Ananda; Jorgenson, Philip C. E.; Wang, Xiao-Yen; Chang, Sin-Chung

    2000-01-01

    This paper describes the parallelization strategy and achieved parallel efficiency of an explicit time-marching algorithm for solving conservation laws. The Space-Time Conservation Element and Solution Element (CE/SE) algorithm for solving the 2D and 3D Euler equations is parallelized with the aid of domain decomposition. The parallel efficiency of the resultant algorithm on a Silicon Graphics Origin 2000 parallel computer is checked.

  16. Proper Generalized Decomposition (PGD) for the numerical simulation of polycrystalline aggregates under cyclic loading

    NASA Astrophysics Data System (ADS)

    Nasri, Mohamed Aziz; Robert, Camille; Ammar, Amine; El Arem, Saber; Morel, Franck

    2018-02-01

    The numerical modelling of the behaviour of materials at the microstructural scale has been greatly developed over the last two decades. Unfortunately, conventional solution methods cannot simulate polycrystalline aggregates beyond tens of loading cycles and do not remain quantitative, owing to the plastic behaviour. This work presents the development of a numerical solver for the Finite Element modelling of polycrystalline aggregates subjected to cyclic mechanical loading. The method is based on two concepts: the first consists in maintaining a constant stiffness matrix, and the second uses a time/space model reduction method. In order to analyse the applicability and performance of a space-time separated representation, the simulations are carried out on a three-dimensional polycrystalline aggregate under cyclic loading. Different numbers of elements per grain and two time increments per cycle are investigated. The results show a significant CPU time saving while maintaining good precision. Moreover, as the number of elements and the number of time increments per cycle increase, the model reduction method becomes faster than the standard solver.
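
    A toy sketch in the PGD spirit, not the authors' finite-element solver: rank-one space/time pairs are extracted greedily from a snapshot matrix by alternating fixed-point updates and subtracted from the residual.

    ```python
    import numpy as np

    def pgd_separate(U, n_modes=3, iters=50):
        """Approximate a space-x-time matrix U by sum_i x_i t_i^T."""
        R = U.copy()
        modes = []
        for _ in range(n_modes):
            x = np.random.default_rng(0).normal(size=U.shape[0])
            for _ in range(iters):             # alternating-direction fixed point
                t = R.T @ x / (x @ x)          # best temporal mode for fixed x
                x = R @ t / (t @ t)            # best spatial mode for fixed t
            modes.append((x, t))
            R -= np.outer(x, t)                # enrichment: remove captured mode
        return modes, R

    U = np.outer(np.sin(np.linspace(0, np.pi, 200)), np.cos(np.linspace(0, 20, 500)))
    U += 0.01 * np.random.default_rng(1).normal(size=U.shape)
    modes, residual = pgd_separate(U)
    print(np.linalg.norm(residual) / np.linalg.norm(U))   # relative residual
    ```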

  17. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
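
    The analytic point-location test that tetrahedral decomposition enables can be shown in a few lines: a point lies inside a tetrahedron iff all four barycentric coordinates are non-negative, and the same coordinates serve as interpolation weights. Vertices and the query point below are arbitrary.

    ```python
    import numpy as np

    def barycentric(p, v0, v1, v2, v3):
        """Barycentric coordinates of p in the tetrahedron (v0, v1, v2, v3)."""
        T = np.column_stack([v1 - v0, v2 - v0, v3 - v0])
        l1, l2, l3 = np.linalg.solve(T, p - v0)   # one 3x3 solve, no iteration
        return np.array([1 - l1 - l2 - l3, l1, l2, l3])

    tet = [np.zeros(3), np.array([1., 0, 0]), np.array([0, 1., 0]), np.array([0, 0, 1.])]
    lam = barycentric(np.array([0.2, 0.3, 0.1]), *tet)
    inside = np.all(lam >= 0)
    # the same coordinates interpolate vertex velocities: u(p) = sum_i lam_i * u_i
    ```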

  18. Decomposition of Fuzzy Soft Sets with Finite Value Spaces

    PubMed Central

    Jun, Young Bae

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter. PMID:24558342

  19. Decomposition of fuzzy soft sets with finite value spaces.

    PubMed

    Feng, Feng; Fujita, Hamido; Jun, Young Bae; Khan, Madad

    2014-01-01

    The notion of fuzzy soft sets is a hybrid soft computing model that integrates both gradualness and parameterization methods in harmony to deal with uncertainty. The decomposition of fuzzy soft sets is of great importance in both theory and practical applications with regard to decision making under uncertainty. This study aims to explore decomposition of fuzzy soft sets with finite value spaces. Scalar uni-product and int-product operations of fuzzy soft sets are introduced and some related properties are investigated. Using t-level soft sets, we define level equivalent relations and show that the quotient structure of the unit interval induced by level equivalent relations is isomorphic to the lattice consisting of all t-level soft sets of a given fuzzy soft set. We also introduce the concepts of crucial threshold values and complete threshold sets. Finally, some decomposition theorems for fuzzy soft sets with finite value spaces are established, illustrated by an example concerning the classification and rating of multimedia cell phones. The obtained results extend some classical decomposition theorems of fuzzy sets, since every fuzzy set can be viewed as a fuzzy soft set with a single parameter.
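
    A tiny illustration of the t-level construction both entries rely on, with a made-up fuzzy soft set over three phones: thresholding each fuzzy approximation at level t yields a crisp soft set.

    ```python
    # parameter -> fuzzy set over a universe of phones (membership values are made up)
    fuzzy_soft = {
        "camera": {"phone1": 0.9, "phone2": 0.4, "phone3": 0.7},
        "battery": {"phone1": 0.3, "phone2": 0.8, "phone3": 0.6},
    }

    def t_level(fss, t):
        """Crisp t-level soft set: keep objects with membership >= t."""
        return {e: {u for u, mu in fs.items() if mu >= t} for e, fs in fss.items()}

    print(t_level(fuzzy_soft, 0.5))
    ```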

  20. Time-frequency analysis of neuronal populations with instantaneous resolution based on noise-assisted multivariate empirical mode decomposition.

    PubMed

    Alegre-Cortés, J; Soto-Sánchez, C; Pizá, Á G; Albarracín, A L; Farfán, F D; Felice, C J; Fernández, E

    2016-07-15

    Linear analysis has classically provided powerful tools for understanding the behavior of neural populations, but neuron responses to real-world stimulation are nonlinear under some conditions, and many neuronal components demonstrate strong nonlinear behavior. In spite of this, the temporal and frequency dynamics of neural population responses to sensory stimulation have usually been analyzed with linear approaches. In this paper, we propose the use of Noise-Assisted Multivariate Empirical Mode Decomposition (NA-MEMD), a data-driven template-free algorithm, plus the Hilbert transform as a suitable tool for analyzing population oscillatory dynamics in a multi-dimensional space with instantaneous frequency (IF) resolution. The proposed approach extracted oscillatory information from neurophysiological data (deep vibrissal nerve and visual cortex multiunit recordings) that was not evidenced by linear approaches with fixed bases such as Fourier analysis. Texture discrimination performance increased when NA-MEMD plus the Hilbert transform was used instead of linear techniques, and cortical oscillatory population activity was analyzed with increased time-frequency resolution. NA-MEMD plus the Hilbert transform is thus an improved method for analyzing neuronal population oscillatory dynamics, overcoming the linearity and stationarity assumptions of classical methods.
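
    A sketch of the Hilbert step only (the NA-MEMD decomposition itself is omitted): instantaneous frequency of a single oscillatory component via the analytic signal, on a toy chirp standing in for an IMF.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0                                        # sampling rate, Hz (assumed)
    t = np.arange(0, 2, 1 / fs)
    imf = np.sin(2 * np.pi * (10 * t + 4 * t**2))      # chirp standing in for an IMF

    analytic = hilbert(imf)                            # analytic signal
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs      # Hz, sample-by-sample resolution
    ```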

  1. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    NASA Astrophysics Data System (ADS)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems, in static as well as real-time settings, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet-decomposition-based principal component analysis approach to face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency and robustness. Face recognition means identifying a person from facial features, and it resembles factor analysis in the sense of extracting the principal components of an image. Principal component analysis suffers from some drawbacks, mainly poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial data in both the space and frequency domains. The experimental results show that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.
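
    A minimal sketch of the combination described above, assuming the PyWavelets package: one level of 2-D wavelet decomposition shrinks each image to its approximation band, and PCA is then applied to those coefficients. The images are random stand-ins for faces.

    ```python
    import numpy as np
    import pywt
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    faces = rng.random((100, 64, 64))                # toy "face" images

    features = []
    for img in faces:
        cA, _ = pywt.dwt2(img, "haar")               # keep the low-frequency band only
        features.append(cA.ravel())                  # 32x32 -> 1024-dim feature vector

    eigenfaces = PCA(n_components=20).fit(np.array(features))
    codes = eigenfaces.transform(np.array(features)) # compact face signatures
    ```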

  2. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains.

    PubMed

    Onken, Arno; Liu, Jian K; Karunasekara, P P Chamanthi R; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano

    2016-11-01

    Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-millisecond scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding.
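
    As a simplified stand-in for the tensor analysis (one matrix factorization rather than a full space-by-time tensor decomposition), the sketch below applies non-negative matrix factorization to trial-concatenated synthetic spike counts to recover spatial and temporal firing patterns.

    ```python
    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_neurons, n_bins, n_trials = 30, 40, 50

    # toy ground truth: two neuron groups with distinct temporal envelopes
    spatial = np.zeros((n_neurons, 2)); spatial[:15, 0] = 1; spatial[15:, 1] = 1
    temporal = np.c_[np.exp(-((np.arange(n_bins) - 10) ** 2) / 20),
                     np.exp(-((np.arange(n_bins) - 28) ** 2) / 20)]
    rates = spatial @ temporal.T                             # neurons x bins
    counts = rng.poisson(np.tile(rates, (1, n_trials)))      # concatenate trials

    model = NMF(n_components=2, init="nndsvda", max_iter=500, random_state=0)
    W = model.fit_transform(counts)      # spatial firing patterns (neurons x modules)
    H = model.components_                # temporal activations across all trials
    ```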

  3. Using Matrix and Tensor Factorizations for the Single-Trial Analysis of Population Spike Trains

    PubMed Central

    Onken, Arno; Liu, Jian K.; Karunasekara, P. P. Chamanthi R.; Delis, Ioannis; Gollisch, Tim; Panzeri, Stefano

    2016-01-01

    Advances in neuronal recording techniques are leading to ever larger numbers of simultaneously monitored neurons. This poses the important analytical challenge of how to capture compactly all sensory information that neural population codes carry in their spatial dimension (differences in stimulus tuning across neurons at different locations), in their temporal dimension (temporal neural response variations), or in their combination (temporally coordinated neural population firing). Here we investigate the utility of tensor factorizations of population spike trains along space and time. These factorizations decompose a dataset of single-trial population spike trains into spatial firing patterns (combinations of neurons firing together), temporal firing patterns (temporal activation of these groups of neurons) and trial-dependent activation coefficients (strength of recruitment of such neural patterns on each trial). We validated various factorization methods on simulated data and on populations of ganglion cells simultaneously recorded in the salamander retina. We found that single-trial tensor space-by-time decompositions provided low-dimensional data-robust representations of spike trains that capture efficiently both their spatial and temporal information about sensory stimuli. Tensor decompositions with orthogonality constraints were the most efficient in extracting sensory information, whereas non-negative tensor decompositions worked well even on non-independent and overlapping spike patterns, and retrieved informative firing patterns expressed by the same population in response to novel stimuli. Our method showed that populations of retinal ganglion cells carried information in their spike timing on the ten-millisecond scale about spatial details of natural images. This information could not be recovered from the spike counts of these cells. First-spike latencies carried the majority of information provided by the whole spike train about fine-scale image features, and supplied almost as much information about coarse natural image features as firing rates. Together, these results highlight the importance of spike timing, and particularly of first-spike latencies, in retinal coding. PMID:27814363

  4. Quantitative evaluation of muscle synergy models: a single-trial task decoding approach

    PubMed Central

    Delis, Ioannis; Berret, Bastien; Pozzo, Thierry; Panzeri, Stefano

    2013-01-01

    Muscle synergies, i.e., invariant coordinated activations of groups of muscles, have been proposed as building blocks that the central nervous system (CNS) uses to construct the patterns of muscle activity utilized for executing movements. Several efficient dimensionality reduction algorithms that extract putative synergies from electromyographic (EMG) signals have been developed. Typically, the quality of synergy decompositions is assessed by computing the Variance Accounted For (VAF). Yet, little is known about the extent to which the combination of those synergies encodes task-discriminating variations of muscle activity in individual trials. To address this question, here we conceive and develop a novel computational framework to evaluate muscle synergy decompositions in task space. Unlike previous methods, which consider the total variance of muscle patterns (VAF-based metrics), our approach focuses on the variance that discriminates the execution of different tasks. The procedure is based on single-trial task decoding from muscle synergy activation features. The task-decoding-based metric quantitatively evaluates the mapping between synergy recruitment and task identification, and automatically determines the minimal number of synergies that captures all the task-discriminating variability in the synergy activations. In this paper, we first validate the method on plausibly simulated EMG datasets. We then show that it can be applied to different types of muscle synergy decomposition and illustrate its applicability to real data by using it for the analysis of EMG recordings during an arm pointing task. We find that time-varying and synchronous synergies with a similar number of parameters are equally efficient in task decoding, suggesting that in this experimental paradigm they are equally valid representations of muscle synergies. Overall, these findings stress the effectiveness of the decoding metric in systematically assessing muscle synergy decompositions in task space. PMID:23471195
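
    A rough sketch of the decoding idea on synthetic activations: classify the task of each trial from its synergy activation coefficients and report cross-validated single-trial accuracy. The LDA classifier is an arbitrary stand-in, not necessarily the decoder used in the paper.

    ```python
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_synergies, n_tasks = 200, 4, 5
    tasks = rng.integers(0, n_tasks, n_trials)

    # toy activations: task-dependent means plus trial-to-trial noise
    means = rng.normal(size=(n_tasks, n_synergies))
    activations = means[tasks] + 0.5 * rng.normal(size=(n_trials, n_synergies))

    acc = cross_val_score(LinearDiscriminantAnalysis(), activations, tasks, cv=5).mean()
    print(f"single-trial decoding accuracy: {acc:.2f}")
    ```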

  5. Localized temperature and chemical reaction control in nanoscale space by nanowire array.

    PubMed

    Jin, C Yan; Li, Zhiyong; Williams, R Stanley; Lee, K-Cheol; Park, Inkyu

    2011-11-09

    We introduce a novel method for chemical reaction control with nanoscale spatial resolution based on localized heating by using a well-aligned nanowire array. Numerical and experimental analysis shows that each individual nanowire could be selectively and rapidly Joule heated for local and ultrafast temperature modulation in nanoscale space (e.g., maximum temperature gradient 2.2 K/nm at the nanowire edge; heating/cooling time < 2 μs). By taking advantage of this capability, several nanoscale chemical reactions such as polymer decomposition/cross-linking and direct and localized hydrothermal synthesis of metal oxide nanowires were demonstrated.

  6. Comparison of Techniques for Sampling Adult Necrophilous Insects From Pig Carcasses.

    PubMed

    Cruise, Angela; Hatano, Eduardo; Watson, David W; Schal, Coby

    2018-02-06

    Studies of the pre-colonization interval and mechanisms driving necrophilous insect ecological succession depend on effective sampling of adult insects and knowledge of their diel and successional activity patterns. The number of insects trapped, their diversity, and diel periodicity were compared with four sampling methods on neonate pigs. Sampling method, time of day and decomposition age of the pigs significantly affected the number of insects sampled from pigs. We also found significant interactions of sampling method with decomposition day, and of sampling time with decomposition day. No single method was superior to the other methods during all three decomposition days. Sampling times after noon yielded the largest samples during the first 2 d of decomposition. On day 3 of decomposition, however, all sampling times were equally effective. Therefore, to maximize insect collections from neonate pigs, the sampling method must vary by decomposition day. The suction trap collected the most species-rich samples, but sticky trap samples were the most diverse when both species richness and evenness were factored into a Shannon diversity index. Repeated sampling during the noon to 18:00 hours period was most effective for obtaining the maximum diversity of trapped insects. The integration of multiple sampling techniques would most effectively sample the necrophilous insect community. However, because all four tested methods were deficient at sampling beetle species, future work should focus on optimizing the most promising methods, alone or in combination, and incorporate hand-collections of beetles.

  7. A unifying model of concurrent spatial and temporal modularity in muscle activity.

    PubMed

    Delis, Ioannis; Panzeri, Stefano; Pozzo, Thierry; Berret, Bastien

    2014-02-01

    Modularity in the central nervous system (CNS), i.e., the brain capability to generate a wide repertoire of movements by combining a small number of building blocks ("modules"), is thought to underlie the control of movement. Numerous studies reported evidence for such a modular organization by identifying invariant muscle activation patterns across various tasks. However, previous studies relied on decompositions differing in both the nature and dimensionality of the identified modules. Here, we derive a single framework that encompasses all influential models of muscle activation modularity. We introduce a new model (named space-by-time decomposition) that factorizes muscle activations into concurrent spatial and temporal modules. To infer these modules, we develop an algorithm, referred to as sample-based nonnegative matrix trifactorization (sNM3F). We test the space-by-time decomposition on a comprehensive electromyographic dataset recorded during execution of arm pointing movements and show that it provides a low-dimensional yet accurate, highly flexible and task-relevant representation of muscle patterns. The extracted modules have a well characterized functional meaning and implement an efficient trade-off between replication of the original muscle patterns and task discriminability. Furthermore, they are compatible with the modules extracted from existing models, such as synchronous synergies and temporal primitives, and generalize time-varying synergies. Our results indicate the effectiveness of a simultaneous but separate condensation of spatial and temporal dimensions of muscle patterns. The space-by-time decomposition accommodates a unified view of the hierarchical mapping from task parameters to coordinated muscle activations, which could be employed as a reference framework for studying compositional motor control.

  8. Space-by-Time Modular Decomposition Effectively Describes Whole-Body Muscle Activity During Upright Reaching in Various Directions

    PubMed Central

    Hilt, Pauline M.; Delis, Ioannis; Pozzo, Thierry; Berret, Bastien

    2018-01-01

    The modular control hypothesis suggests that motor commands are built from precoded modules whose specific combined recruitment can allow the performance of virtually any motor task. Despite considerable experimental support, this hypothesis remains tentative as classical findings of reduced dimensionality in muscle activity may also result from other constraints (biomechanical couplings, data averaging or low dimensionality of motor tasks). Here we assessed the effectiveness of modularity in describing muscle activity in a comprehensive experiment comprising 72 distinct point-to-point whole-body movements during which the activity of 30 muscles was recorded. To identify invariant modules of a temporal and spatial nature, we used a space-by-time decomposition of muscle activity that has been shown to encompass classical modularity models. To examine the decompositions, we focused not only on the amount of variance they explained but also on whether the task performed on each trial could be decoded from the single-trial activations of modules. For the sake of comparison, we confronted these scores to the scores obtained from alternative non-modular descriptions of the muscle data. We found that the space-by-time decomposition was effective in terms of data approximation and task discrimination at comparable reduction of dimensionality. These findings show that few spatial and temporal modules give a compact yet approximate representation of muscle patterns carrying nearly all task-relevant information for a variety of whole-body reaching movements. PMID:29666576

  9. Multivariate Time Series Decomposition into Oscillation Components.

    PubMed

    Matsuda, Takeru; Komaki, Fumiyasu

    2017-08-01

    Many time series are considered to be a superposition of several oscillation components. We have proposed a method for decomposing univariate time series into oscillation components and estimating their phases (Matsuda & Komaki, 2017). In this study, we extend that method to multivariate time series. We assume that several oscillators underlie the given multivariate time series and that each variable corresponds to a superposition of the projections of the oscillators. Thus, the oscillators superpose on each variable with amplitude and phase modulation. Based on this idea, we develop Gaussian linear state-space models and use them to decompose the given multivariate time series. The model parameters are estimated from data using the empirical Bayes method, and the number of oscillators is determined using the Akaike information criterion. Therefore, the proposed method extracts underlying oscillators in a data-driven manner and enables investigation of phase dynamics in a given multivariate time series. Numerical results show the effectiveness of the proposed method. From monthly mean north-south sunspot number data, the proposed method reveals an interesting phase relationship.

  10. Analyzing Transient Turbulence in a Stenosed Carotid Artery by Proper Orthogonal Decomposition

    NASA Astrophysics Data System (ADS)

    Grinberg, Leopold; Yakhot, Alexander; Karniadakis, George

    2009-11-01

    A high-resolution 3D simulation (involving 100M degrees of freedom) was employed to study transient turbulent flow in a carotid arterial bifurcation with a stenosed internal carotid artery (ICA). In the simulation, an intermittent (in space and time) laminar-turbulent-laminar regime was observed. The simulation reveals the mechanism of the onset of turbulent flow in the stenosed ICA, where the narrowing of the artery generates a strong jet flow. Time- and space-window Proper Orthogonal Decomposition (POD) was applied to quantify the different flow regimes in the occluded artery. A simplified version of the POD analysis that utilizes 2D slices only - more appropriate in the clinical setting - was also investigated.
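
    A generic snapshot-POD sketch (unrelated to the 100M-DOF simulation itself): snapshots in each time window form the columns of a matrix, the SVD supplies the modes, and the singular values show how many modes carry 90% of the energy in that window. Data here are random placeholders.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    snapshots = rng.normal(size=(5000, 120))     # toy: 5000 spatial DOF, 120 time steps

    for name, window in [("early", slice(0, 60)), ("late", slice(60, 120))]:
        U, s, _ = np.linalg.svd(snapshots[:, window], full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)  # cumulative modal energy fraction
        print(name, "modes for 90% energy:", int(np.searchsorted(energy, 0.9)) + 1)
    ```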

  11. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolinski, B.

    The 0-1 multiple knapsack problem appears in many domains from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.

  12. Forecasting hotspots in East Kutai, Kutai Kartanegara, and West Kutai as early warning information

    NASA Astrophysics Data System (ADS)

    Wahyuningsih, S.; Goejantoro, R.; Rizki, N. A.

    2018-04-01

    The aims of this research are to model hotspots and forecast hotspots for 2017 in East Kutai, Kutai Kartanegara and West Kutai. The methods used in this research were Holt's exponential smoothing, Holt's additive damped trend method, Holt-Winters' additive method, the additive decomposition method, the multiplicative decomposition method, the Loess decomposition method and the Box-Jenkins method. Among the smoothing techniques, additive decomposition performed better than Holt's exponential smoothing. The hotspot models obtained with the Box-Jenkins method were the Autoregressive Integrated Moving Average models ARIMA(1,1,0), ARIMA(0,2,1) and ARIMA(0,1,0). Comparing the results from all methods used in this research on the basis of the Root Mean Squared Error (RMSE) shows that the Loess decomposition method is the best time series model, because it has the smallest RMSE. Thus the Loess decomposition model was used to forecast the number of hotspots. The forecasting results indicate that hotspots tend to increase at the end of 2017 in Kutai Kartanegara and West Kutai, but remain stationary in East Kutai.
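
    A sketch of the model-selection step under stated assumptions (a synthetic monthly series, the statsmodels package, a hold-out tail): fit a few of the candidate models named above and keep the one with the smallest RMSE.

    ```python
    import numpy as np
    from statsmodels.tsa.holtwinters import ExponentialSmoothing
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    y = 50 + 10 * np.sin(np.arange(96) * 2 * np.pi / 12) + rng.normal(0, 3, 96)
    train, test = y[:84], y[84:]             # hold out the final year

    forecasts = {
        "holt": ExponentialSmoothing(train, trend="add").fit().forecast(12),
        "holt-winters": ExponentialSmoothing(train, trend="add", seasonal="add",
                                             seasonal_periods=12).fit().forecast(12),
        "ARIMA(1,1,0)": ARIMA(train, order=(1, 1, 0)).fit().forecast(12),
    }
    rmse = {k: float(np.sqrt(np.mean((f - test) ** 2))) for k, f in forecasts.items()}
    print(min(rmse, key=rmse.get), rmse)     # best model by hold-out RMSE
    ```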

  13. A novel spatial-temporal detection method of dim infrared moving small target

    NASA Astrophysics Data System (ADS)

    Chen, Zhong; Deng, Tao; Gao, Lei; Zhou, Heng; Luo, Song

    2014-09-01

    Moving small target detection under complex backgrounds in infrared image sequences is one of the major challenges for modern military Early Warning Systems (EWS) and Long-Range Strike (LRS) applications. However, because of the low SNR and undulating background, infrared moving small target detection has long been a difficult problem. To solve this problem, a novel spatial-temporal detection method based on bi-dimensional empirical mode decomposition (EMD) and time-domain differencing is proposed in this paper. This method is entirely data-driven and does not rely on any transition kernel function, so it has strong adaptive capacity. Firstly, we generalized the 1D EMD algorithm to the 2D case. In this process, we solved several issues in 2D EMD, such as the large amount of data operations, the definition and identification of extrema in the 2D case, and boundary corrosion of the two-dimensional signal. The EMD algorithm studied here is well suited to the automatic detection of small targets under low SNR and complex backgrounds. Secondly, considering the characteristics of moving targets, we propose an improved filtering method based on a three-frame difference, building on the original time-domain difference filtering, which greatly improves the anti-jamming ability of the algorithm. Finally, we propose a new time-space fusion method combining 2D EMD with the improved time-domain difference filtering. Experimental results show that this method works well for infrared small moving target detection under low SNR and complex backgrounds.
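
    A basic three-frame-difference detector in the spirit of the improved time-domain step (the 2D EMD background suppression is omitted; frames and threshold are toy assumptions): requiring both consecutive differences to exceed the threshold suppresses the ghosting a two-frame difference produces.

    ```python
    import numpy as np

    def three_frame_difference(f_prev, f_curr, f_next, thresh=20):
        d1 = np.abs(f_curr.astype(int) - f_prev.astype(int))
        d2 = np.abs(f_next.astype(int) - f_curr.astype(int))
        return (d1 > thresh) & (d2 > thresh)          # AND suppresses ghost detections

    frames = np.zeros((3, 64, 64), dtype=np.uint8)
    for k, col in enumerate((20, 22, 24)):            # a small target drifting right
        frames[k, 30:33, col:col + 3] = 200
    mask = three_frame_difference(*frames)
    print(np.argwhere(mask))                          # detected moving pixels
    ```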

  14. Newmark-Beta-FDTD method for super-resolution analysis of time reversal waves

    NASA Astrophysics Data System (ADS)

    Shi, Sheng-Bing; Shao, Wei; Ma, Jing; Jin, Congjun; Wang, Xiao-Hua

    2017-09-01

    In this work, a new unconditionally stable finite-difference time-domain (FDTD) method with the split-field perfectly matched layer (PML) is proposed for the analysis of time reversal (TR) waves. The proposed method is very suitable for multiscale problems involving microstructures. The spatial and temporal derivatives in this method are discretized by the central difference technique and the Newmark-Beta algorithm, respectively, and the derivation results in the calculation of a banded-sparse matrix equation. Since the coefficient matrix remains unchanged during the whole simulation process, the lower-upper (LU) decomposition of the matrix needs to be performed only once at the beginning of the calculation. Moreover, the reverse Cuthill-McKee (RCM) technique, an effective preprocessing technique for bandwidth compression of sparse matrices, is used to improve computational efficiency. The super-resolution focusing of TR wave propagation in two- and three-dimensional spaces is included to validate the accuracy and efficiency of the proposed method.
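
    A sketch of the "factorize once, reuse every step" pattern with RCM reordering, using SciPy stand-ins for the banded-sparse system (the toy matrix and right-hand sides are not the Newmark-Beta-FDTD equations).

    ```python
    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.csgraph import reverse_cuthill_mckee
    from scipy.sparse.linalg import splu

    n = 2000
    K = sp.diags([-1, 4, -1], [-1, 0, 1], shape=(n, n), format="csr")  # toy system matrix

    perm = reverse_cuthill_mckee(K)            # bandwidth-reducing permutation
    inv = np.argsort(perm)                     # inverse permutation, computed once
    Kp = K[perm][:, perm].tocsc()
    lu = splu(Kp)                              # LU performed once, before time stepping

    rng = np.random.default_rng(0)
    for step in range(100):                    # time marching: only cheap sparse solves
        rhs = rng.normal(size=n)               # stand-in for the updated right-hand side
        x = lu.solve(rhs[perm])[inv]           # solve in permuted order, map back
    ```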

  15. A Four-Stage Hybrid Model for Hydrological Time Series Forecasting

    PubMed Central

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of ‘denoising, decomposition and ensemble’. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models. PMID:25111782

  16. A four-stage hybrid model for hydrological time series forecasting.

    PubMed

    Di, Chongli; Yang, Xiaohua; Wang, Xiaochao

    2014-01-01

    Hydrological time series forecasting remains a difficult task due to its complicated nonlinear, non-stationary and multi-scale characteristics. To solve this difficulty and improve the prediction accuracy, a novel four-stage hybrid model is proposed for hydrological time series forecasting based on the principle of 'denoising, decomposition and ensemble'. The proposed model has four stages, i.e., denoising, decomposition, components prediction and ensemble. In the denoising stage, the empirical mode decomposition (EMD) method is utilized to reduce the noises in the hydrological time series. Then, an improved method of EMD, the ensemble empirical mode decomposition (EEMD), is applied to decompose the denoised series into a number of intrinsic mode function (IMF) components and one residual component. Next, the radial basis function neural network (RBFNN) is adopted to predict the trend of all of the components obtained in the decomposition stage. In the final ensemble prediction stage, the forecasting results of all of the IMF and residual components obtained in the third stage are combined to generate the final prediction results, using a linear neural network (LNN) model. For illustration and verification, six hydrological cases with different characteristics are used to test the effectiveness of the proposed model. The proposed hybrid model performs better than conventional single models, the hybrid models without denoising or decomposition and the hybrid models based on other methods, such as the wavelet analysis (WA)-based hybrid models. In addition, the denoising and decomposition strategies decrease the complexity of the series and reduce the difficulties of the forecasting. With its effective denoising and accurate decomposition ability, high prediction precision and wide applicability, the new model is very promising for complex time series forecasting. This new forecast model is an extension of nonlinear prediction models.

  17. Rotational-path decomposition based recursive planning for spacecraft attitude reorientation

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Wang, Hui; Xu, Wenming; Cui, Pingyuan; Zhu, Shengying

    2018-02-01

    Spacecraft reorientation is a common task in many space missions. With multiple pointing constraints, the constrained spacecraft reorientation planning problem is very difficult to solve. To deal with this problem, an efficient rotational-path decomposition based recursive planning (RDRP) method is proposed in this paper. A uniform pointing-constraint-ignored attitude rotation planning process is designed to solve all rotations without considering pointing constraints. Then the whole path is checked node by node. If any pointing constraint is violated, the nearest critical increment approach is used to generate feasible alternative nodes in the process of rotational-path decomposition. As the planning path of each subdivision may still violate pointing constraints, multiple decompositions may be needed, and the reorientation planning is designed in a recursive manner. Simulation results demonstrate the effectiveness of the proposed method, which has been successfully applied onboard the two SPARK microsatellites, developed by the Shanghai Engineering Center for Microsatellites and launched on 22 December 2016, to solve the constrained attitude reorientation planning problem.

  18. Homotopy decomposition method for solving one-dimensional time-fractional diffusion equation

    NASA Astrophysics Data System (ADS)

    Abuasad, Salah; Hashim, Ishak

    2018-04-01

    In this paper, we present for the first time the homotopy decomposition method with a modified definition of the beta fractional derivative to find the exact solution of the one-dimensional time-fractional diffusion equation. In this method, the solution takes the form of a convergent series with easily computable terms. The exact solution obtained by the proposed method is compared with the exact solution obtained using the fractional variational homotopy perturbation iteration method via a modified Riemann-Liouville derivative.
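
    For concreteness, the model problem has the generic form below, where λ is a diffusion coefficient and D_t^β denotes a fractional time derivative; the paper's modified beta-derivative definition is not reproduced here.

    ```latex
    \[
      D_t^{\beta} u(x,t) \;=\; \lambda\, \frac{\partial^{2} u(x,t)}{\partial x^{2}},
      \qquad 0 < \beta \leq 1,
    \]
    ```

    so that β = 1 recovers the classical diffusion equation, and the homotopy decomposition method builds the solution as a convergent series u = Σ uₙ with recursively computable terms.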

  19. Blind source separation problem in GPS time series

    NASA Astrophysics Data System (ADS)

    Gualandi, A.; Serpelloni, E.; Belardinelli, M. E.

    2016-04-01

    A critical point in the analysis of ground displacement time series, such as those recorded by space geodetic techniques, is the development of data-driven methods that allow the different sources of deformation to be discerned and characterized in the space and time domains. Multivariate statistics includes several approaches that can be considered part of data-driven methods. A widely used technique is principal component analysis (PCA), which allows us to reduce the dimensionality of the data space while maintaining most of the explained variance of the dataset. However, PCA does not perform well in finding the solution to the so-called blind source separation (BSS) problem, i.e., in recovering and separating the original sources that generate the observed data. This is mainly due to the fact that PCA minimizes the misfit calculated using an L2 norm (χ²), looking for a new Euclidean space where the projected data are uncorrelated. Independent component analysis (ICA) is a popular technique adopted to approach the BSS problem. However, the independence condition is not easy to impose, and it is often necessary to introduce some approximations. To work around this problem, we test the use of a modified variational Bayesian ICA (vbICA) method to recover the multiple sources of ground deformation even in the presence of missing data. The vbICA method models the probability density function (pdf) of each source signal using a mix of Gaussian distributions, allowing for more flexibility in the description of the pdf of the sources with respect to standard ICA, and giving a more reliable estimate of them. Here we present its application to synthetic global positioning system (GPS) position time series, generated by simulating deformation near an active fault, including inter-seismic, co-seismic, and post-seismic signals, plus seasonal signals and noise, and an additional time-dependent volcanic source. We evaluate the ability of the PCA and ICA decomposition techniques to explain the data and to recover the original (known) sources. Using the same number of components, we find that the vbICA method fits the data almost as well as a PCA method, since the χ² increase is less than 10 % of the value calculated using a PCA decomposition. Unlike PCA, the vbICA algorithm is found to correctly separate the sources if the correlation of the dataset is low (<0.67) and the geodetic network is sufficiently dense (ten continuous GPS stations within a box of side equal to two times the locking depth of a fault where an earthquake of Mw >6 occurred). We also provide a cookbook for the use of the vbICA algorithm in analyses of position time series for tectonic and non-tectonic applications.
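
    The PCA-versus-ICA contrast at the heart of the BSS problem can be demonstrated on a small synthetic example. In the sketch below, sklearn's FastICA stands in for the vbICA algorithm (which is not part of standard libraries), and the simulated sources loosely mimic seasonal, post-seismic, and co-seismic signals.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA, FastICA

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 2000)
    # Known sources: a seasonal-like oscillation, a post-seismic-like decay,
    # and a co-seismic-like step.
    S = np.c_[np.sin(2 * np.pi * t),
              np.exp(-t / 3.0),
              (t > 5).astype(float)]
    A = rng.normal(size=(10, 3))            # mixing to 10 "stations"
    X = S @ A.T + 0.05 * rng.normal(size=(2000, 10))

    pca_sources = PCA(n_components=3).fit_transform(X)
    ica_sources = FastICA(n_components=3, random_state=0).fit_transform(X)

    # Correlate recovered components with the true sources: ICA typically
    # matches the step and the decay far better than the orthogonal PCA modes.
    for name, R in (("PCA", pca_sources), ("ICA", ica_sources)):
        c = np.abs(np.corrcoef(S.T, R.T)[:3, 3:])
        print(name, c.max(axis=1).round(2))
    ```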

  20. Global Solutions to Repulsive Hookean Elastodynamics

    NASA Astrophysics Data System (ADS)

    Hu, Xianpeng; Masmoudi, Nader

    2017-01-01

    The global existence of classical solutions to the three dimensional repulsive Hookean elastodynamics around an equilibrium is considered. By linearization and Hodge's decomposition, the compressible part of the velocity, the density, and the compressible part of the transpose of the deformation gradient satisfy Klein-Gordon equations with speed √2, while the incompressible parts of the velocity and of the transpose of the deformation gradient satisfy wave equations with speed one. The space-time resonance method combined with the vector field method is used in a novel way to obtain the decay of the solution and hence global existence.

  1. The study of Thai stock market across the 2008 financial crisis

    NASA Astrophysics Data System (ADS)

    Kanjamapornkul, K.; Pinčák, Richard; Bartoš, Erik

    2016-11-01

    Cohomology theory for financial markets allows us to deform the Kolmogorov space of time series data over a time period, with an explicit definition of eight market states in a grand unified theory. The anti-de Sitter space induced from a coupling behavior field among traders in the case of a financial market crash acts like a gravitational field in financial-market spacetime. Under this hybrid mathematical superstructure, we redefine a behavior matrix by using Pauli matrices and a modified Wilson loop for time series data. We use it to detect the 2008 financial market crash by means of the degree of a cohomology group of the sphere over a tensor field in the correlation matrix over all possible dominated stocks underlying the Thai SET50 Index Futures. The empirical analysis of the financial tensor network was performed with the help of empirical mode decomposition and intrinsic time-scale decomposition of the correlation matrix, and the calculation of the closeness centrality of a planar graph.

  2. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities.

    PubMed

    Frelat, Romain; Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs.
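
    A minimal sketch of the kind of three-way decomposition promoted here, assuming the tensorly package is installed (the exact return type of parafac varies slightly across tensorly versions); the synthetic year x site x species tensor and the rank are illustrative choices, not the study's data.

    ```python
    import numpy as np
    import tensorly as tl
    from tensorly.decomposition import parafac

    rng = np.random.default_rng(1)
    # Synthetic abundance tensor: 20 years x 15 sites x 65 species.
    data = tl.tensor(rng.poisson(5.0, size=(20, 15, 65)).astype(float))

    # Rank-4 CP (PARAFAC) model: each component couples one temporal trend,
    # one spatial pattern, and one species loading vector.
    weights, factors = parafac(data, rank=4)
    year_modes, site_modes, species_modes = factors

    # Species with similar loadings across components form "sub-communities".
    print(year_modes.shape, site_modes.shape, species_modes.shape)
    ```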

  3. An optimized time varying filtering based empirical mode decomposition method with grey wolf optimizer for machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Xin; Liu, Zhiwen; Miao, Qiang; Wang, Lei

    2018-03-01

    A time-varying filtering based empirical mode decomposition (TVF-EMD) method was proposed recently to solve the mode mixing problem of the EMD method. Compared with classical EMD, TVF-EMD was proven to improve the frequency separation performance and to be robust to noise interference. However, the decomposition parameters (i.e., bandwidth threshold and B-spline order) significantly affect the decomposition results of this method. In the original TVF-EMD method, the parameter values are assigned in advance, which makes it difficult to achieve satisfactory analysis results. To solve this problem, this paper develops an optimized TVF-EMD method based on the grey wolf optimizer (GWO) algorithm for fault diagnosis of rotating machinery. Firstly, a measurement index termed the weighted kurtosis index is constructed from the kurtosis index and the correlation coefficient. Subsequently, the optimal TVF-EMD parameters that match the input signal are obtained by the GWO algorithm using the maximum weighted kurtosis index as the objective function. Finally, fault features are extracted by analyzing the sensitive intrinsic mode function (IMF) with the maximum weighted kurtosis index. Simulations and comparisons highlight the performance of the TVF-EMD method for signal decomposition, and meanwhile verify that the bandwidth threshold and B-spline order are critical to the decomposition results. Two case studies on rotating machinery fault diagnosis demonstrate the effectiveness and advantages of the proposed method.
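
    One plausible form of such an index, sketched below, weights the kurtosis of an IMF by its absolute correlation with the raw signal; the paper's exact weighting may differ. The combination is the point: kurtosis flags impulsive fault signatures, while the correlation term guards against noise-only components.

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    def weighted_kurtosis_index(imf, raw):
        rho = np.corrcoef(imf, raw)[0, 1]
        return kurtosis(imf, fisher=False) * abs(rho)

    def most_sensitive_imf(imfs, raw):
        """Return the index and scores of the IMF to analyze for fault features."""
        scores = [weighted_kurtosis_index(imf, raw) for imf in imfs]
        return int(np.argmax(scores)), scores
    ```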

  4. A Simple Application of Compressed Sensing to Further Accelerate Partially Parallel Imaging

    PubMed Central

    Miao, Jun; Guo, Weihong; Narayan, Sreenath; Wilson, David L.

    2012-01-01

    Compressed Sensing (CS) and partially parallel imaging (PPI) enable fast MR imaging by reducing the amount of k-space data required for reconstruction. Past attempts to combine these two have been limited by the incoherent sampling requirement of CS, since PPI routines typically sample on a regular (coherent) grid. Here, we developed a new method, “CS+GRAPPA,” to overcome this limitation. We decomposed sets of equidistant samples into multiple random subsets, reconstructed each subset using CS, and averaged the results to get a final CS k-space reconstruction. We used both a standard CS and an edge- and joint-sparsity-guided CS reconstruction. We tested these intermediate results on both synthetic and real MR phantom data, and performed a human observer experiment to determine the effectiveness of decomposition and to optimize the number of subsets. We then used these CS reconstructions to calibrate the GRAPPA complex coil weights. In vivo parallel MR brain and heart data sets were used. An objective image quality evaluation metric, Case-PDM, was used to quantify image quality. Coherent aliasing and noise artifacts were significantly reduced using two decompositions. More decompositions further reduced coherent aliasing and noise artifacts but introduced blurring. However, the blurring was effectively minimized using our new edge- and joint-sparsity-guided CS with two decompositions. Numerical results on parallel data demonstrated that the combined method greatly improved image quality as compared to standard GRAPPA, on average halving Case-PDM scores across a range of sampling rates. The proposed technique allowed the same Case-PDM scores as standard GRAPPA using about half the number of samples. We conclude that the new method augments GRAPPA by combining it with CS, allowing CS to work even when the k-space sampling pattern is equidistant. PMID:22902065
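
    The decomposition step itself is simple to sketch: a coherent (equidistant) set of k-space sample indices is split into several random subsets, each incoherent enough for a CS solver, and the subset reconstructions are averaged. `cs_reconstruct` is a hypothetical placeholder for any CS routine.

    ```python
    import numpy as np

    def decompose_equidistant(sample_idx, n_subsets, rng=None):
        """Split equidistant sample indices into random, roughly equal subsets."""
        rng = np.random.default_rng(rng)
        idx = np.array(sample_idx)
        rng.shuffle(idx)
        return np.array_split(idx, n_subsets)

    def cs_average(kspace, sample_idx, n_subsets, cs_reconstruct):
        recons = [cs_reconstruct(kspace, subset)
                  for subset in decompose_equidistant(sample_idx, n_subsets)]
        # Averaging the subset reconstructions yields the final CS k-space
        # estimate that is then used to calibrate the GRAPPA coil weights.
        return np.mean(recons, axis=0)
    ```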

  5. Regular Decompositions for H(div) Spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kolev, Tzanio; Vassilevski, Panayot

    We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.

  6. Domain Decomposition Algorithms for First-Order System Least Squares Methods

    NASA Technical Reports Server (NTRS)

    Pavarino, Luca F.

    1996-01-01

    Least squares methods based on first-order systems have been recently proposed and analyzed for second-order elliptic equations and systems. They produce symmetric and positive definite discrete systems by using standard finite element spaces, which are not required to satisfy the inf-sup condition. In this paper, several domain decomposition algorithms for these first-order least squares methods are studied. Some representative overlapping and substructuring algorithms are considered in their additive and multiplicative variants. The theoretical and numerical results obtained show that the classical convergence bounds (on the iteration operator) for standard Galerkin discretizations are also valid for least squares methods.

  7. Temporal and spatial heterogeneity of rupture process application in shakemaps of Yushu Ms7.1 earthquake, China

    NASA Astrophysics Data System (ADS)

    Kun, C.

    2015-12-01

    Studies have shown that estimates of ground motion parameters from ground motion attenuation relationships are often greater than the observed values, mainly because the multiple ruptures of a large earthquake reduce the source pulse height of the source time function. In the absence of real-time station data after an earthquake, this paper attempts to impose some constraints from the source to improve the accuracy of shakemaps. The causative fault of the Yushu Ms 7.1 earthquake is approximately vertical (dip 83°), and the source process was distinctly dispersive in time and space. The main shock of the Yushu Ms 7.1 earthquake can be divided into several sub-events based on its source process. The magnitude of each sub-event depends on the area under its source pulse in the source time function, and its location is derived from the source process of each sub-event. We use the ShakeMap method, accounting for site effects, to generate a shakemap for each sub-event. Finally, shakemaps of the mainshock are acquired by superposing the shakemaps of all sub-events in space. Shakemaps based on the surface rupture of the causative fault from field surveys can also be derived for the mainshock with a single magnitude. We compare the shakemaps of both methods with the investigated intensity. The comparisons show that the decomposition of the main shock reflects the near-field shaking more accurately, but in the far field the shaking is controlled by the weakening influence of the source, and the estimated intensity VI area is smaller than that of the actual investigation. Seismic intensity in the far field may instead be related to the increased shaking duration produced by the sub-events. In general, the decomposition of the main shock based on the source process, considering the shakemap of each sub-event, is feasible for disaster emergency response, decision-making and rapid disaster assessment after an earthquake.

  8. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations.

    PubMed

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-07-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310-323. doi: 10.1002/wcms.1220.

  9. Efficient solution of the Wigner-Liouville equation using a spectral decomposition of the force field

    NASA Astrophysics Data System (ADS)

    Van de Put, Maarten L.; Sorée, Bart; Magnus, Wim

    2017-12-01

    The Wigner-Liouville equation is reformulated using a spectral decomposition of the classical force field instead of the potential energy. The latter is shown to simplify the Wigner-Liouville kernel both conceptually and numerically as the spectral force Wigner-Liouville equation avoids the numerical evaluation of the highly oscillatory Wigner kernel which is nonlocal in both position and momentum. The quantum mechanical evolution is instead governed by a term local in space and non-local in momentum, where the non-locality in momentum has only a limited range. An interpretation of the time evolution in terms of two processes is presented: a classical evolution under the influence of the averaged driving field, and a probability-preserving quantum-mechanical generation and annihilation term. Using the inherent stability and reduced complexity, a direct deterministic numerical implementation using Chebyshev and Fourier pseudo-spectral methods is detailed. For the purpose of illustration, we present results for the time-evolution of a one-dimensional resonant tunneling diode driven out of equilibrium.

  10. Data-adaptive harmonic spectra and multilayer Stuart-Landau models

    NASA Astrophysics Data System (ADS)

    Chekroun, Mickaël D.; Kondrashov, Dmitri

    2017-09-01

    Harmonic decompositions of multivariate time series are considered for which we adopt an integral operator approach with periodic semigroup kernels. Spectral decomposition theorems are derived that cover the important cases of two-time statistics drawn from a mixing invariant measure. The corresponding eigenvalues can be grouped per Fourier frequency and are actually given, at each frequency, as the singular values of a cross-spectral matrix depending on the data. These eigenvalues obey, furthermore, a variational principle that allows us to define naturally a multidimensional power spectrum. The eigenmodes themselves exhibit a data-adaptive character manifested in their phase, which in turn allows us to define a multidimensional phase spectrum. The resulting data-adaptive harmonic (DAH) modes allow for reducing the data-driven modeling effort to elemental models stacked per frequency, only coupled at different frequencies by the same noise realization. In particular, the DAH decomposition extracts time-dependent coefficients stacked by Fourier frequency which can be efficiently modeled, provided the decay of temporal correlations is sufficiently well-resolved, within a class of multilayer stochastic models (MSMs) tailored here on stochastic Stuart-Landau oscillators. Applications to the Lorenz 96 model and to a stochastic heat equation driven by a space-time white noise are considered. In both cases, the DAH decomposition allows for an extraction of spatio-temporal modes revealing key features of the dynamics in the embedded phase space. The multilayer Stuart-Landau models (MSLMs) are shown to successfully model the typical patterns of the corresponding time-evolving fields, as well as their statistics of occurrence.
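
    For orientation, the elemental model referred to above is the noisy Stuart-Landau oscillator, the normal form of a Hopf bifurcation; the layered coupling of the MSLMs is omitted here and the notation is generic rather than the paper's.

    ```latex
    \[
      \dot{z} \;=\; (\mu + i\gamma)\, z \;-\; (1 + i\beta)\,|z|^{2} z \;+\; \sigma\, \dot{W}(t),
      \qquad z(t) \in \mathbb{C},
    \]
    ```

    with linear growth rate μ, frequency γ, nonlinear frequency shift β, and a white-noise forcing of amplitude σ; in the multilayer construction the same noise realization couples the models stacked at different frequencies.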

  11. Developing a complex independent component analysis technique to extract non-stationary patterns from geophysical time-series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen

    2016-04-01

    Geodetic/geophysical observations, such as the time series of global terrestrial water storage change or sea level and temperature change, represent samples of physical processes and therefore contain information about complex physical interactions with many inherent time scales. Extracting relevant information from these samples, for example quantifying the seasonality of a physical process or its variability due to large-scale ocean-atmosphere interactions, is not possible with simple time series approaches. In the last decades, decomposition techniques have found increasing interest for extracting patterns from geophysical observations. Traditionally, principal component analysis (PCA) and more recently independent component analysis (ICA) are common techniques to extract statistically orthogonal (uncorrelated) and independent modes that represent the maximum variance of observations, respectively. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the auto-covariance matrix or diagonalizing higher (than two)-order statistical tensors from centered time series. However, the stationarity assumption is obviously not justifiable for many geophysical and climate variables even after removing cyclic components, e.g., the seasonal cycles. In this paper, we present a new decomposition method, the complex independent component analysis (CICA, Forootan, PhD-2014), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA (Forootan and Kusche, JoG-2012), where we (i) define a new complex data set using a Hilbert transformation. The complex time series contain the observed values in their real part, and the temporal rate of variability in their imaginary part. (ii) An ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose the new complex data set in (i). (iii) Dominant non-stationary patterns are recognized as independent complex patterns that can be used to represent the space and time amplitude and phase propagations. We present the results of CICA on simulated and real cases, e.g., for quantifying the impact of large-scale ocean-atmosphere interaction on global mass changes. Forootan (PhD-2014) Statistical signal decomposition techniques for analyzing time-variable satellite gravimetry data, PhD Thesis, University of Bonn, http://hss.ulb.uni-bonn.de/2014/3766/3766.htm Forootan and Kusche (JoG-2012) Separation of global time-variable gravity signals into maximally independent components, Journal of Geodesy 86 (7), 477-497, doi: 10.1007/s00190-011-0532-5
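
    Step (i) of the construction is easy to illustrate with scipy: build a complex-valued dataset whose real part is the observed series and whose imaginary part carries its temporal rate of variability via the Hilbert transform. Steps (ii)-(iii), the cumulant-based complex ICA, are beyond standard libraries and are not sketched here.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def complexify(X):
        """X: (time, channels) real array -> complex array for complex ICA."""
        analytic = hilbert(X, axis=0)           # X + i * H(X)
        # Real part: observed values; imaginary part: Hilbert transform of X.
        return X + 1j * np.imag(analytic)

    t = np.linspace(0, 12, 1200)
    X = np.c_[np.sin(2 * np.pi * t), np.cos(2 * np.pi * 0.3 * t)]
    Z = complexify(X)
    print(Z.dtype, Z.shape)   # complex128 (1200, 2)
    ```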

  12. Preparation, non-isothermal decomposition kinetics, heat capacity and adiabatic time-to-explosion of NTO·DNAZ.

    PubMed

    Ma, Haixia; Yan, Biao; Li, Zhaona; Guan, Yulei; Song, Jirong; Xu, Kangzhen; Hu, Rongzu

    2009-09-30

    NTO·DNAZ was prepared by mixing 3,3-dinitroazetidine (DNAZ) and 3-nitro-1,2,4-triazol-5-one (NTO) in ethanol solution. The thermal behavior of the title compound was studied under non-isothermal conditions by DSC and TG/DTG methods. The kinetic parameters were obtained from analysis of the DSC and TG/DTG curves by the Kissinger method, the Ozawa method, the differential method and the integral method. The main exothermic decomposition reaction of NTO·DNAZ is classified as a chemical reaction, with kinetic parameters Ea = 149.68 kJ·mol⁻¹ and A = 10^15.81 s⁻¹. The specific heat capacity of the title compound was determined with the continuous Cp mode of a microcalorimeter; the standard mole specific heat capacity of NTO·DNAZ was 352.56 J·mol⁻¹·K⁻¹ at 298.15 K. Using the relationship between Cp and T together with the thermal decomposition parameters, the time of thermal decomposition from initiation to thermal explosion (the adiabatic time-to-explosion) was obtained.
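
    As a worked illustration of the reported kinetics (assuming simple first-order Arrhenius behavior; the adiabatic time-to-explosion calculation in the paper additionally uses the Cp(T) relationship), the rate constant at room temperature follows directly:

    ```latex
    \[
      k(T) = A \exp\!\left(-\frac{E_a}{RT}\right), \qquad
      k(298.15\,\mathrm{K}) \approx 10^{15.81}\,\mathrm{s}^{-1}\,
      \exp\!\left(-\frac{149\,680\ \mathrm{J\,mol^{-1}}}
      {8.314\ \mathrm{J\,mol^{-1}\,K^{-1}} \times 298.15\ \mathrm{K}}\right)
      \approx 4 \times 10^{-11}\ \mathrm{s}^{-1},
    \]
    ```

    consistent with the compound being kinetically stable at ambient temperature.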

  13. The Application of Neutron Transport Green's Functions to Threat Scenario Simulation

    NASA Astrophysics Data System (ADS)

    Thoreson, Gregory G.; Schneider, Erich A.; Armstrong, Hirotatsu; van der Hoeven, Christopher A.

    2015-02-01

    Radiation detectors provide deterrence and defense against nuclear smuggling attempts by scanning vehicles, ships, and pedestrians for radioactive material. Understanding detector performance is crucial to developing novel technologies, architectures, and alarm algorithms. Detection can be modeled through radiation transport simulations; however, modeling a spanning set of threat scenarios over the full transport phase-space is computationally challenging. Previous research has demonstrated that Green's functions can simulate photon detector signals by decomposing the scenario space into independently simulated submodels. This paper presents decomposition methods for neutron and time-dependent transport. As a result, neutron detector signals produced from full forward transport simulations can be efficiently reconstructed by sequential application of submodel response functions.

  14. A New Approach of evaluating the damage in simply-supported reinforced concrete beam by Local mean decomposition (LMD)

    NASA Astrophysics Data System (ADS)

    Zhang, Xuebing; Liu, Ning; Xi, Jiaxin; Zhang, Yunqi; Zhang, Wenchun; Yang, Peipei

    2017-08-01

    Analyzing nonstationary response signals and extracting vibration characteristics is extremely important in vibration-based structural diagnosis methods. In this work, we introduce a more suitable time-frequency decomposition method, termed local mean decomposition (LMD), in place of the widely used empirical mode decomposition (EMD). By employing the LMD method, one can derive a group of component signals, each of which is more stationary, and then analyze the vibration state and assess the structural damage of a construction or building. We illustrate the effectiveness of LMD using synthetic data and experimental data recorded on a simply-supported reinforced concrete beam. Based on the decomposition results, an elementary method of damage diagnosis is then proposed.

  15. An asymptotic-preserving stochastic Galerkin method for the radiative heat transfer equations with random inputs and diffusive scalings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jin, Shi, E-mail: sjin@wisc.edu; Institute of Natural Sciences, Department of Mathematics, MOE-LSEC and SHL-MAC, Shanghai Jiao Tong University, Shanghai 200240; Lu, Hanqing, E-mail: hanqing@math.wisc.edu

    2017-04-01

    In this paper, we develop an Asymptotic-Preserving (AP) stochastic Galerkin scheme for the radiative heat transfer equations with random inputs and diffusive scalings. In this problem the random inputs arise due to uncertainties in cross section, initial data or boundary data. We use the generalized polynomial chaos based stochastic Galerkin (gPC-SG) method, which is combined with the micro-macro decomposition based deterministic AP framework in order to handle the diffusive regime efficiently. For the linearized problem we prove the regularity of the solution in the random space and consequently the spectral accuracy of the gPC-SG method. We also prove the uniform (in the mean free path) linear stability for the space-time discretizations. Several numerical tests are presented to show the efficiency and accuracy of the proposed scheme, especially in the diffusive regime.
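
    For reference, the generic gPC-SG ansatz underlying such schemes expands the random solution in an orthonormal polynomial basis of the random variable z; the notation below is generic rather than the paper's.

    ```latex
    \[
      u(x,t,z) \;\approx\; \sum_{k=1}^{K} u_k(x,t)\, \Phi_k(z),
      \qquad
      \int \Phi_j(z)\, \Phi_k(z)\, \pi(z)\, dz \;=\; \delta_{jk},
    \]
    ```

    Galerkin projection onto each Φ_k then yields a coupled deterministic system for the coefficients u_k(x,t), to which the micro-macro decomposition and the AP time discretization are applied.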

  16. The Distributed Diagonal Force Decomposition Method for Parallelizing Molecular Dynamics Simulations

    PubMed Central

    Boršnik, Urban; Miller, Benjamin T.; Brooks, Bernard R.; Janežič, Dušanka

    2011-01-01

    Parallelization is an effective way to reduce the computational time needed for molecular dynamics simulations. We describe a new parallelization method, the distributed-diagonal force decomposition method, with which we extend and improve the existing force decomposition methods. Our new method requires less data communication during molecular dynamics simulations than replicated data and current force decomposition methods, increasing the parallel efficiency. It also dynamically load-balances the processors' computational load throughout the simulation. The method is readily implemented in existing molecular dynamics codes and it has been incorporated into the CHARMM program, allowing its immediate use in conjunction with the many molecular dynamics simulation techniques that are already present in the program. We also present the design of the Force Decomposition Machine, a cluster of personal computers and networks that is tailored to running molecular dynamics simulations using the distributed diagonal force decomposition method. The design is expandable and provides various degrees of fault resilience. This approach is easily adaptable to computers with Graphics Processing Units because it is independent of the processor type being used. PMID:21793007

  17. The Multiscale Robin Coupled Method for flows in porous media

    NASA Astrophysics Data System (ADS)

    Guiraldello, Rafael T.; Ausas, Roberto F.; Sousa, Fabricio S.; Pereira, Felipe; Buscaglia, Gustavo C.

    2018-02-01

    A multiscale mixed method aiming at the accurate approximation of velocity and pressure fields in heterogeneous porous media is proposed. The procedure is based on a new domain decomposition method in which the local problems are subject to Robin boundary conditions. The domain decomposition procedure is defined in terms of two independent spaces on the skeleton of the decomposition, corresponding to interface pressures and fluxes, that can be chosen with great flexibility to accommodate local features of the underlying permeability fields. The well-posedness of the new domain decomposition procedure is established and its connection with the method of Douglas et al. (1993) [12], is identified, also allowing us to reinterpret the known procedure as an optimized Schwarz (or Two-Lagrange-Multiplier) method. The multiscale property of the new domain decomposition method is indicated, and its relation with the Multiscale Mortar Mixed Finite Element Method (MMMFEM) and the Multiscale Hybrid-Mixed (MHM) Finite Element Method is discussed. Numerical simulations are presented aiming at illustrating several features of the new method. Initially we illustrate the possibility of switching from MMMFEM to MHM by suitably varying the Robin condition parameter in the new multiscale method. Then we turn our attention to realistic flows in high-contrast, channelized porous formations. We show that for a range of values of the Robin condition parameter our method provides better approximations for pressure and velocity than those computed with either the MMMFEM and the MHM. This is an indication that our method has the potential to produce more accurate velocity fields in the presence of rough, realistic permeability fields of petroleum reservoirs.
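
    As a schematic illustration (notation ours, not the paper's exact formulation), a Robin-type interface condition in such decompositions couples pressure and flux on each interface between subdomains i and j:

    ```latex
    \[
      \beta\, p_i + \mathbf{u}_i \cdot \mathbf{n}
      \;=\;
      \beta\, p_j + \mathbf{u}_j \cdot \mathbf{n}
      \quad \text{on } \Gamma_{ij},
    \]
    ```

    Loosely speaking, driving the Robin parameter β toward its limiting values emphasizes either pressure continuity or flux continuity on the skeleton, which is the switching behavior between MMMFEM-like and MHM-like formulations exploited in the numerical experiments.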

  18. Wave-filter-based approach for generation of a quiet space in a rectangular cavity

    NASA Astrophysics Data System (ADS)

    Iwamoto, Hiroyuki; Tanaka, Nobuo; Sanada, Akira

    2018-02-01

    This paper is concerned with the generation of a quiet space in a rectangular cavity using active wave control methodology. It is the purpose of this paper to present the wave filtering method for a rectangular cavity using multiple microphones and its application to an adaptive feedforward control system. Firstly, the transfer matrix method is introduced for describing the wave dynamics of the sound field, and then feedforward control laws for eliminating transmitted waves are derived. Furthermore, some numerical simulations are conducted that show the best possible result of active wave control. This is followed by the derivation of the wave filtering equations, which indicate the structure of the wave filter. It is clarified that the wave filter consists of three portions: a modal group filter, a rearrangement filter and a wave decomposition filter. Next, from a numerical point of view, the accuracy of the wave decomposition filter, which is expressed as a function of frequency, is investigated using condition numbers. Finally, an experiment on the adaptive feedforward control system using the wave filter is carried out, demonstrating that a quiet space is generated in the target space by the proposed method.

  19. Performance impact of stop lists and morphological decomposition on word-word corpus-based semantic space models.

    PubMed

    Keith, Jeff; Westbury, Chris; Goldman, James

    2015-09-01

    Corpus-based semantic space models, which primarily rely on lexical co-occurrence statistics, have proven effective in modeling and predicting human behavior in a number of experimental paradigms that explore semantic memory representation. The most widely studied extant models, however, are strongly influenced by orthographic word frequency (e.g., Shaoul & Westbury, Behavior Research Methods, 38, 190-195, 2006). This has the implication that high-frequency closed-class words can potentially bias co-occurrence statistics. Because these closed-class words are purported to carry primarily syntactic, rather than semantic, information, the performance of corpus-based semantic space models may be improved by excluding closed-class words (using stop lists) from co-occurrence statistics, while retaining their syntactic information through other means (e.g., part-of-speech tagging and/or affixes from inflected word forms). Additionally, very little work has been done to explore the effect of employing morphological decomposition on the inflected forms of words in corpora prior to compiling co-occurrence statistics, despite (controversial) evidence that humans perform early morphological decomposition in semantic processing. In this study, we explored the impact of these factors on corpus-based semantic space models. Morphological decomposition appears to significantly improve performance in word-word co-occurrence semantic space models, providing some support for the claim that sublexical information, specifically word morphology, plays a role in lexical semantic processing. An overall decrease in performance was observed in models employing stop lists (i.e., excluding closed-class words). Furthermore, we found some evidence that weakens the claim that closed-class words supply primarily syntactic information in word-word co-occurrence semantic space models.

  20. On the physical significance of the Effective Independence method for sensor placement

    NASA Astrophysics Data System (ADS)

    Jiang, Yaoguang; Li, Dongsheng; Song, Gangbing

    2017-05-01

    Optimally deploying sparse sensors for better damage identification and structural health monitoring is always a challenging task. The Effective Independence (EI) method, one of the most influential sensor placement methods, is discussed in this paper. Specifically, the effect of different weighting coefficients on the maximization of the Fisher information matrix (FIM) and the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the EI method are addressed. By analyzing the widely used EI method, we found that the absolute identification space put forward along with the EI method is preferable for ensuring the maximization of the FIM, instead of the original EI coefficient, which was post-multiplied by a weighting matrix. That is, deleting the row with the minimum EI coefficient cannot achieve the objective of maximizing the trace of the FIM as initially conceived. Furthermore, we observed that in the computation of the EI method, the sum of each retained row in the absolute identification space is a constant in each iteration. This property is revealed distinctly by the product of the target mode and its transpose, whose form is similar to an alternative formula of the EI method through orthogonal-triangular (QR) decomposition previously proposed by the authors. With it, the physical significance of the re-orthogonalization of modal shapes through QR decomposition in the computation of the EI method can be manifested from a new perspective. Finally, two simple examples are provided to demonstrate the above two observations.
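
    For readers unfamiliar with the baseline being critiqued, the classical EI elimination loop is sketched below: the EI coefficients are the diagonal of the projection matrix built from the target mode shape matrix Phi (n_dofs x n_modes), and the candidate DOF with the smallest coefficient is deleted each iteration. The random Phi is purely illustrative.

    ```python
    import numpy as np

    def effective_independence(phi, n_sensors):
        keep = np.arange(phi.shape[0])
        while keep.size > n_sensors:
            # Projection matrix Phi (Phi^T Phi)^-1 Phi^T; its diagonal entries
            # are the EI coefficients (they sum to the number of modes).
            P = phi @ np.linalg.solve(phi.T @ phi, phi.T)
            ed = np.diag(P)
            drop = np.argmin(ed)                 # least independent DOF
            keep = np.delete(keep, drop)
            phi = np.delete(phi, drop, axis=0)
        return keep

    rng = np.random.default_rng(2)
    phi = rng.normal(size=(30, 4))               # 30 candidate DOFs, 4 target modes
    print(effective_independence(phi, 8))        # retained sensor locations
    ```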

  1. Detecting phase-amplitude coupling with high frequency resolution using adaptive decompositions

    PubMed Central

    Pittman-Polletta, Benjamin; Hsieh, Wan-Hsin; Kaur, Satvinder; Lo, Men-Tzung; Hu, Kun

    2014-01-01

    Background: Phase-amplitude coupling (PAC), the dependence of the amplitude of one rhythm on the phase of another, lower-frequency rhythm, has recently been used to illuminate cross-frequency coordination in neurophysiological activity. An essential step in measuring PAC is decomposing data to obtain the rhythmic components of interest. Current methods of PAC assessment employ narrowband Fourier-based filters, which assume that biological rhythms are stationary, harmonic oscillations. However, biological signals frequently contain irregular and nonstationary features, which may contaminate rhythms of interest and complicate comodulogram interpretation, especially when frequency resolution is limited by short data segments. New method: To better account for nonstationarities while maintaining sharp frequency resolution in PAC measurement, even for short data segments, we introduce a new method of PAC assessment which utilizes adaptive and more generally broadband decomposition techniques, such as the empirical mode decomposition (EMD). To obtain high frequency resolution PAC measurements, our method distributes the PAC associated with pairs of broadband oscillations over frequency space according to the time-local frequencies of these oscillations. Comparison with existing methods: We compare our novel adaptive approach to a narrowband comodulogram approach on a variety of simulated signals of short duration, studying systematically how different types of nonstationarities affect these methods, as well as on EEG data. Conclusions: Our results show that (1) narrowband filtering can lead to poor PAC frequency resolution, and to inaccuracy and false negatives in PAC assessment; and (2) our adaptive approach attains better PAC frequency resolution and is more resistant to nonstationarities and artifacts than traditional comodulograms. PMID:24452055
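
    The narrowband baseline that the paper compares against can be sketched in a few lines: bandpass-filter the signal for the phase and amplitude bands, extract phase and amplitude with the Hilbert transform, and compute the mean-vector-length modulation index of Canolty et al. (2006). The bands and test signal below are illustrative.

    ```python
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def bandpass(x, lo, hi, fs, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def modulation_index(x, fs, phase_band, amp_band):
        phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
        amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
        return np.abs(np.mean(amp * np.exp(1j * phase)))   # mean vector length

    fs = 500.0
    t = np.arange(0, 10, 1 / fs)
    slow = np.sin(2 * np.pi * 6 * t)                   # 6 Hz phase rhythm
    fast = (1 + slow) * np.sin(2 * np.pi * 70 * t)     # 70 Hz amplitude rhythm
    x = slow + 0.5 * fast + 0.1 * np.random.randn(t.size)
    print(modulation_index(x, fs, (4, 8), (60, 80)))
    ```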

  2. A comparison of reduced-order modelling techniques for application in hyperthermia control and estimation.

    PubMed

    Bailey, E A; Dutton, A W; Mattingly, M; Devasia, S; Roemer, R B

    1998-01-01

    Reduced-order modelling techniques can make important contributions in the control and state estimation of large systems. In hyperthermia, reduced-order modelling can provide a useful tool by which a large thermal model can be reduced to the most significant subset of its full-order modes, making real-time control and estimation possible. Two such reduction methods, one based on modal decomposition and the other on balanced realization, are compared in the context of simulated hyperthermia heat transfer problems. The results show that the modal decomposition reduction method has three significant advantages over that of balanced realization. First, modal decomposition reduced models result in less error, when compared to the full-order model, than balanced realization reduced models of similar order in problems with low or moderate advective heat transfer. Second, because the balanced realization based methods require a priori knowledge of the sensor and actuator placements, the reduced-order model is not robust to changes in sensor or actuator locations, a limitation not present in modal decomposition. Third, the modal decomposition transformation is less demanding computationally. On the other hand, in thermal problems dominated by advective heat transfer, numerical instabilities make modal decomposition based reduction problematic. Modal decomposition methods are therefore recommended for reduction of models in which advection is not dominant and research continues into methods to render balanced realization based reduction more suitable for real-time clinical hyperthermia control and estimation.

  3. Community ecology in 3D: Tensor decomposition reveals spatio-temporal dynamics of large ecological communities

    PubMed Central

    Lindegren, Martin; Denker, Tim Spaanheden; Floeter, Jens; Fock, Heino O.; Sguotti, Camilla; Stäbler, Moritz; Otto, Saskia A.; Möllmann, Christian

    2017-01-01

    Understanding spatio-temporal dynamics of biotic communities containing large numbers of species is crucial to guide ecosystem management and conservation efforts. However, traditional approaches usually focus on studying community dynamics either in space or in time, often failing to fully account for interlinked spatio-temporal changes. In this study, we demonstrate and promote the use of tensor decomposition for disentangling spatio-temporal community dynamics in long-term monitoring data. Tensor decomposition builds on traditional multivariate statistics (e.g. Principal Component Analysis) but extends it to multiple dimensions. This extension allows for the synchronized study of multiple ecological variables measured repeatedly in time and space. We applied this comprehensive approach to explore the spatio-temporal dynamics of 65 demersal fish species in the North Sea, a marine ecosystem strongly altered by human activities and climate change. Our case study demonstrates how tensor decomposition can successfully (i) characterize the main spatio-temporal patterns and trends in species abundances, (ii) identify sub-communities of species that share similar spatial distribution and temporal dynamics, and (iii) reveal external drivers of change. Our results revealed a strong spatial structure in fish assemblages persistent over time and linked to differences in depth, primary production and seasonality. Furthermore, we simultaneously characterized important temporal distribution changes related to the low frequency temperature variability inherent in the Atlantic Multidecadal Oscillation. Finally, we identified six major sub-communities composed of species sharing similar spatial distribution patterns and temporal dynamics. Our case study demonstrates the application and benefits of using tensor decomposition for studying complex community data sets usually derived from large-scale monitoring programs. PMID:29136658

  4. Decomposition Techniques for ICESat/GLAS Full-Waveform Data

    NASA Astrophysics Data System (ADS)

    Liu, Z.; Gao, X.; Li, G.; Chen, J.

    2018-04-01

    The Geoscience Laser Altimeter System (GLAS) on board the Ice, Cloud, and land Elevation Satellite (ICESat) is the first long-duration spaceborne full-waveform LiDAR for measuring the topography of ice shelves and its temporal variation, as well as cloud and atmospheric characteristics. In order to extract the characteristic parameters of the waveform, the key step is to process the full-waveform data. In this paper, a modified waveform decomposition method is proposed to extract the echo components from the full waveform. First, initial parameter estimation is implemented through data preprocessing and waveform detection. Next, the waveform fitting is performed using the Levenberg-Marquardt (LM) optimization method. The results show that the modified waveform decomposition method can effectively extract the overlapped echo components and missing echo components compared with the results from the GLA14 product. The echo components can also be extracted from complex waveforms.
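
    The fitting core of such a method is standard enough to sketch: fit a sum of Gaussians to the recorded waveform with Levenberg-Marquardt via scipy's curve_fit. Initial parameter estimation is reduced to simple peak detection here; the paper's preprocessing is more elaborate, and the synthetic waveform is illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit
    from scipy.signal import find_peaks

    def gaussians(t, *p):                 # p = (A1, mu1, s1, A2, mu2, s2, ...)
        y = np.zeros_like(t)
        for A, mu, s in zip(p[0::3], p[1::3], p[2::3]):
            y += A * np.exp(-0.5 * ((t - mu) / s) ** 2)
        return y

    t = np.linspace(0, 100, 400)
    wave = gaussians(t, 1.0, 35, 4.0, 0.6, 55, 6.0) + 0.01 * np.random.randn(t.size)

    # Crude initial estimates: one (amplitude, position, width) triple per peak.
    peaks, _ = find_peaks(wave, height=0.3, prominence=0.2)
    p0 = np.ravel([[wave[i], t[i], 5.0] for i in peaks])
    popt, _ = curve_fit(gaussians, t, wave, p0=p0, method="lm")
    print(popt.reshape(-1, 3))            # one (A, mu, sigma) per echo component
    ```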

  5. Data driven discrete-time parsimonious identification of a nonlinear state-space model for a weakly nonlinear system with short data record

    NASA Astrophysics Data System (ADS)

    Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan

    2018-05-01

    Many real-world systems exhibit quasi-linear or weakly nonlinear behavior during normal operation, and a hard saturation effect for high peaks of the input signal. In this paper, a methodology to identify a parsimonious discrete-time nonlinear state-space (NLSS) model for a nonlinear dynamical system from a relatively short data record is proposed. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain a parsimonious decoupled representation of the set of multivariate real polynomials estimated during the identification of the NLSS model. Finally, the experimental verification of the model structure is carried out on the cascaded water tanks benchmark identification problem.

  6. Computation of an Underexpanded 3-D Rectangular Jet by the CE/SE Method

    NASA Technical Reports Server (NTRS)

    Loh, Ching Y.; Himansu, Ananda; Wang, Xiao Y.; Jorgenson, Philip C. E.

    2000-01-01

    Recently, an unstructured three-dimensional space-time conservation element and solution element (CE/SE) Euler solver was developed. It has now also been developed for parallel computation, using METIS for domain decomposition and MPI (message passing interface). The method is employed here to numerically study the near field of a typical 3-D rectangular under-expanded jet. For the computed case, a jet with Mach number Mj = 1.6, with a very modest grid of 1.7 million tetrahedrons, flow features such as the shock-cell structures and the axis switching are in good qualitative agreement with experimental results.

  7. Trend extraction using empirical mode decomposition and statistical empirical mode decomposition: Case study: Kuala Lumpur stock market

    NASA Astrophysics Data System (ADS)

    Jaber, Abobaker M.

    2014-12-01

    Two nonparametric methods for prediction and modeling of financial time series signals are proposed. The proposed techniques are designed to handle non-stationary and non-linear behavior and to extract meaningful signals for reliable prediction. Based on the Fourier transform (FT), the methods select significant decomposed signals to be employed for signal prediction. The proposed techniques are developed by coupling the Holt-Winters method with empirical mode decomposition (EMD) and with its smoothed extension (SEMD), which extends the scope of empirical mode decomposition by smoothing. To show the performance of the proposed techniques, we analyze the daily closing prices of the Kuala Lumpur stock market index.
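
    A minimal sketch of this coupling, assuming the PyEMD package is installed and using statsmodels' Holt-Winters exponential smoothing: decompose the series with EMD, forecast each decomposed signal, and sum the forecasts. The random-walk series stands in for the actual index data.

    ```python
    import numpy as np
    from PyEMD import EMD
    from statsmodels.tsa.holtwinters import ExponentialSmoothing

    def emd_holt_winters_forecast(series, steps=5):
        imfs = EMD().emd(series)          # IMFs plus the final residual row
        total = np.zeros(steps)
        for comp in imfs:
            fit = ExponentialSmoothing(comp, trend="add").fit()
            total += fit.forecast(steps)  # forecast each component, then sum
        return total

    prices = np.cumsum(np.random.randn(300)) + 100.0   # stand-in for daily closes
    print(emd_holt_winters_forecast(prices))
    ```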

  8. Structural optimization by multilevel decomposition

    NASA Technical Reports Server (NTRS)

    Sobieszczanski-Sobieski, J.; James, B.; Dovi, A.

    1983-01-01

    A method is described for decomposing an optimization problem into a set of subproblems and a coordination problem which preserves coupling between the subproblems. The method is introduced as a special case of multilevel, multidisciplinary system optimization and its algorithm is fully described for two level optimization for structures assembled of finite elements of arbitrary type. Numerical results are given for an example of a framework to show that the decomposition method converges and yields results comparable to those obtained without decomposition. It is pointed out that optimization by decomposition should reduce the design time by allowing groups of engineers, using different computers to work concurrently on the same large problem.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meng, F.; Banks, J. W.; Henshaw, W. D.

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially, and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids, and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  10. Reactive Goal Decomposition Hierarchies for On-Board Autonomy

    NASA Astrophysics Data System (ADS)

    Hartmann, L.

    2002-01-01

    As our experience grows, space missions and systems are expected to address ever more complex and demanding requirements with fewer resources (e.g., mass, power, budget). One approach to accommodating these higher expectations is to increase the level of autonomy to improve the capabilities and robustness of on- board systems and to simplify operations. The goal decomposition hierarchies described here provide a simple but powerful form of goal-directed behavior that is relatively easy to implement for space systems. A goal corresponds to a state or condition that an operator of the space system would like to bring about. In the system described here goals are decomposed into simpler subgoals until the subgoals are simple enough to execute directly. For each goal there is an activation condition and a set of decompositions. The decompositions correspond to different ways of achieving the higher level goal. Each decomposition contains a gating condition and a set of subgoals to be "executed" sequentially or in parallel. The gating conditions are evaluated in order and for the first one that is true, the corresponding decomposition is executed in order to achieve the higher level goal. The activation condition specifies global conditions (i.e., for all decompositions of the goal) that need to hold in order for the goal to be achieved. In real-time, parameters and state information are passed between goals and subgoals in the decomposition; a termination indication (success, failure, degree) is passed up when a decomposition finishes executing. The lowest level decompositions include servo control loops and finite state machines for generating control signals and sequencing i/o. Semaphores and shared memory are used to synchronize and coordinate decompositions that execute in parallel. The goal decomposition hierarchy is reactive in that the generated behavior is sensitive to the real-time state of the system and the environment. That is, the system is able to react to state and environment and in general can terminate the execution of a decomposition and attempt a new decomposition at any level in the hierarchy. This goal decomposition system is suitable for workstation, microprocessor and fpga implementation and thus is able to support the full range of prototyping activities, from mission design in the laboratory to development of the fpga firmware for the flight system. This approach is based on previous artificial intelligence work including (1) Brooks' subsumption architecture for robot control, (2) Firby's Reactive Action Package System (RAPS) for mediating between high level automated planning and low level execution and (3) hierarchical task networks for automated planning. Reactive goal decomposition hierarchies can be used for a wide variety of on-board autonomy applications including automating low level operation sequences (such as scheduling prerequisite operations, e.g., heaters, warm-up periods, monitoring power constraints), coordinating multiple spacecraft as in formation flying and constellations, robot manipulator operations, rendez-vous, docking, servicing, assembly, on-orbit maintenance, planetary rover operations, solar system and interstellar probes, intelligent science data gathering and disaster early warning. Goal decomposition hierarchies can support high level fault tolerance. 
Given models of on-board resources and goals to accomplish, the decomposition hierarchy could allocate resources to goals, taking into account existing faults and reallocating resources in real time as new faults arise. Resources to be modeled include memory (e.g., ROM, FPGA configuration memory, processor memory, payload instrument memory), processors, on-board and inter-spacecraft network nodes and links, sensors, actuators (e.g., attitude determination and control, guidance and navigation) and payload instruments. A goal decomposition hierarchy could be defined to map mission goals and tasks to available on-board resources. As faults occur and are detected, the resource allocation is modified to avoid using the faulty resource. Goal decomposition hierarchies can implement variable autonomy (in which the operator chooses to command the system at a high or low level), mixed-initiative planning (in which the system is able to interact with the operator, e.g., to request operator intervention when a working envelope is exceeded) and distributed control (in which, for example, multiple spacecraft cooperate to accomplish a task without a fixed master). The full paper will describe in greater detail how goal decompositions work, how they can be implemented, techniques for implementing a candidate application and the current state of the FPGA implementation.
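
    The core structure described above, a goal with an activation condition and an ordered list of decompositions, each gated and containing subgoals, can be sketched as follows. All names are illustrative; parallel subgoal execution, semaphores, and real-time preemption are omitted for brevity.

    ```python
    # Minimal sketch of a reactive goal decomposition hierarchy.
    class Decomposition:
        def __init__(self, gate, subgoals):
            self.gate, self.subgoals = gate, subgoals

    class Goal:
        def __init__(self, name, activation, decompositions=(), primitive=None):
            self.name = name
            self.activation = activation          # global condition for the goal
            self.decompositions = decompositions  # alternative ways to achieve it
            self.primitive = primitive            # directly executable leaf action

        def execute(self, state):
            if not self.activation(state):
                return "failure"
            if self.primitive is not None:        # leaf: servo loop / FSM call
                return self.primitive(state)
            for d in self.decompositions:
                if d.gate(state):                 # first true gate is executed
                    for g in d.subgoals:          # sequential subgoal execution
                        if g.execute(state) == "failure":
                            return "failure"      # reactive: abandon decomposition
                    return "success"
            return "failure"
    ```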

  11. Aerospace plane guidance using geometric control theory

    NASA Technical Reports Server (NTRS)

    Van Buren, Mark A.; Mease, Kenneth D.

    1990-01-01

    A reduced-order method employing decomposition, based on time-scale separation, of the 4-D state space in a 2-D slow manifold and a family of 2-D fast manifolds is shown to provide an excellent approximation to the full-order minimum-fuel ascent trajectory. Near-optimal guidance is obtained by tracking the reduced-order trajectory. The tracking problem is solved as regulation problems on the family of fast manifolds, using the exact linearization methodology from nonlinear geometric control theory. The validity of the overall guidance approach is indicated by simulation.

  12. Numeric Modified Adomian Decomposition Method for Power System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    This paper investigates the applicability of the numeric Wazwaz-El-Sayed modified Adomian Decomposition Method (WES-ADM) for time-domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique and serves as a numerical approximation method for the solution of nonlinear ordinary differential equations. The nonlinear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time-domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach, and several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
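
    The Adomian polynomials that approximate the nonlinear term N(u) in any ADM-family method can be generated symbolically from the standard definition Aₙ = (1/n!) dⁿ/dλⁿ [N(Σₖ λᵏ uₖ)] at λ = 0; the sketch below uses sympy and is generic rather than specific to WES-ADM.

    ```python
    import sympy as sp

    def adomian_polynomials(N, n_terms):
        lam = sp.symbols("lambda")
        u = sp.symbols(f"u0:{n_terms}")                 # u0, u1, ..., u_{n-1}
        series = sum(lam**k * u[k] for k in range(n_terms))
        return [sp.simplify(sp.diff(N(series), lam, n).subs(lam, 0)
                            / sp.factorial(n))
                for n in range(n_terms)]

    # Example: N(u) = u**2 gives A0 = u0**2, A1 = 2*u0*u1, ...
    for n, A in enumerate(adomian_polynomials(lambda u: u**2, 4)):
        print(f"A{n} =", A)
    ```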

  13. Rank-based decompositions of morphological templates.

    PubMed

    Sussner, P; Ritter, G X

    2000-01-01

    Methods for matrix decomposition have found numerous applications in image processing, in particular for the problem of template decomposition. Since existing matrix decomposition techniques are mainly concerned with the linear domain, we consider it timely to investigate matrix decomposition techniques in the nonlinear domain with applications in image processing. The mathematical basis for these investigations is the new theory of rank within minimax algebra. Thus far, only minimax decompositions of rank 1 and rank 2 matrices into outer product expansions are known to the image processing community. We derive a heuristic algorithm for the decomposition of matrices having arbitrary rank.
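    To make the minimax-algebra notion of rank concrete, the following numpy sketch (an illustrative construction, not the paper's heuristic algorithm) builds a rank-1 template as a max-plus outer sum, T[i, j] = r[i] + s[j], and checks that grey-scale dilation by T factors into two cheaper 1-D dilations.

        import numpy as np

        r = np.array([0., 1., 0.])                 # column template
        s = np.array([0., 2., 0.])                 # row template
        T = r[:, None] + s[None, :]                # max-plus outer product: rank 1

        def dilate(x, t):
            # brute-force grey-scale dilation of 2-D signal x by template t
            m, n = t.shape
            xp = np.pad(x, ((m // 2,) * 2, (n // 2,) * 2), constant_values=-np.inf)
            return np.array([[np.max(xp[i:i + m, j:j + n] + t[::-1, ::-1])
                              for j in range(x.shape[1])] for i in range(x.shape[0])])

        x = np.random.rand(6, 6)
        direct = dilate(x, T)                                  # one 2-D pass
        separable = dilate(dilate(x, s[None, :]), r[:, None])  # two 1-D passes
        print(np.allclose(direct, separable))                  # True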

  14. The suitability of visual taphonomic methods for digital photographs: An experimental approach with pig carcasses in a tropical climate.

    PubMed

    Ribéreau-Gayon, Agathe; Rando, Carolyn; Morgan, Ruth M; Carter, David O

    2018-05-01

    In the context of increased scrutiny of the methods in forensic sciences, it is essential to ensure that the approaches used in forensic taphonomy to measure decomposition and estimate the postmortem interval are underpinned by robust evidence-based data. Digital photographs are an important source of documentation in forensic taphonomic investigations, but the suitability of the current approaches for photographs, rather than remains viewed in real time, is poorly studied, which can undermine the accuracy of forensic conclusions. The present study aimed to investigate the suitability of 2D colour digital photographs for evaluating decomposition of exposed human analogues (Sus scrofa domesticus) in a tropical savanna environment (Hawaii), using two published scoring methods: Megyesi et al. (2005) and Keough et al. (2017). It was found that there were significant differences between the real-time and photograph decomposition scores when the Megyesi et al. method was used. However, the Keough et al. method applied to photographs reflected real-time decomposition more closely and thus appears more suitable for evaluating pig decomposition from 2D photographs. The findings indicate that the type of scoring method used has a significant impact on the ability to accurately evaluate the decomposition of exposed pig carcasses from photographs. It was further identified that photographic taphonomic analysis can reach high inter-observer reproducibility. These novel findings are of significant importance for the forensic sciences as they highlight the potential for high quality photograph coverage to provide useful complementary information for the forensic taphonomic investigation. New recommendations to develop robust, transparent approaches adapted to photographs in forensic taphonomy are suggested based on these findings.

  15. High-temperature catalyst for catalytic combustion and decomposition

    NASA Technical Reports Server (NTRS)

    Mays, Jeffrey A. (Inventor); Lohner, Kevin A. (Inventor); Sevener, Kathleen M. (Inventor); Jensen, Jeff J. (Inventor)

    2005-01-01

    A robust, high temperature mixed metal oxide catalyst for propellant decomposition, including high concentration hydrogen peroxide, and catalytic combustion, including methane-air mixtures. The uses include target, space, and on-orbit propulsion systems and low-emission terrestrial power and gas generation. The catalyst system requires no special preheat apparatus or special sequencing to meet start-up requirements, enabling a fast overall response time. Start-up transients of less than 1 second have been demonstrated with catalyst bed and propellant temperatures as low as 50 degrees Fahrenheit. The catalyst system has consistently demonstrated high decomposition efficiency, extremely low decomposition roughness, and long operating life on multiple test articles.

  16. GENESIS: a hybrid-parallel and multi-scale molecular dynamics simulator with enhanced sampling algorithms for biomolecular and cellular simulations

    PubMed Central

    Jung, Jaewoon; Mori, Takaharu; Kobayashi, Chigusa; Matsunaga, Yasuhiro; Yoda, Takao; Feig, Michael; Sugita, Yuji

    2015-01-01

    GENESIS (Generalized-Ensemble Simulation System) is a new software package for molecular dynamics (MD) simulations of macromolecules. It has two MD simulators, called ATDYN and SPDYN. ATDYN is parallelized based on an atomic decomposition algorithm for the simulations of all-atom force-field models as well as coarse-grained Go-like models. SPDYN is highly parallelized based on a domain decomposition scheme, allowing large-scale MD simulations on supercomputers. Hybrid schemes combining OpenMP and MPI are used in both simulators to target modern multicore computer architectures. Key advantages of GENESIS are (1) the highly parallel performance of SPDYN for very large biological systems consisting of more than one million atoms and (2) the availability of various REMD algorithms (T-REMD, REUS, multi-dimensional REMD for both all-atom and Go-like models under the NVT, NPT, NPAT, and NPγT ensembles). The former is achieved by a combination of the midpoint cell method and the efficient three-dimensional Fast Fourier Transform algorithm, where the domain decomposition space is shared in real-space and reciprocal-space calculations. Other features in SPDYN, such as avoiding concurrent memory access, reducing communication times, and usage of parallel input/output files, also contribute to the performance. We show the REMD simulation results of a mixed (POPC/DMPC) lipid bilayer as a real application using GENESIS. GENESIS is released as free software under the GPLv2 licence and can be easily modified for the development of new algorithms and molecular models. WIREs Comput Mol Sci 2015, 5:310–323. doi: 10.1002/wcms.1220 PMID:26753008

  17. Comparing and improving proper orthogonal decomposition (POD) to reduce the complexity of groundwater models

    NASA Astrophysics Data System (ADS)

    Gosses, Moritz; Nowak, Wolfgang; Wöhling, Thomas

    2017-04-01

    Physically-based modeling is a wide-spread tool in the understanding and management of natural systems. With the high complexity of many such models and the huge number of model runs necessary for parameter estimation and uncertainty analysis, overall run times can be prohibitively long even on modern computer systems. An encouraging strategy to tackle this problem is the use of model reduction methods. In this contribution, we compare different proper orthogonal decomposition (POD, Siade et al. (2010)) methods and their potential applications to groundwater models. The POD method performs a singular value decomposition on system states as simulated by the complex (e.g., PDE-based) groundwater model taken at several time-steps, so-called snapshots. The singular vectors with the highest information content resulting from this decomposition are then used as a basis for projection of the system of model equations onto a subspace of much lower dimensionality than the original complex model, thereby greatly reducing complexity and accelerating run times. In its original form, this method is only applicable to linear problems. Many real-world groundwater models are non-linear, though. These non-linearities are introduced either through model structure (unconfined aquifers) or boundary conditions (certain Cauchy boundaries, like rivers with variable connection to the groundwater table). To date, applications of POD have focused on groundwater models simulating pumping tests in confined aquifers with constant head boundaries. In contrast, POD model reduction either greatly loses accuracy or does not significantly reduce model run time if the above-mentioned non-linearities are introduced. We have also found that variable Dirichlet boundaries are problematic for POD model reduction. An extension to the POD method, called POD-DEIM, has been developed for non-linear groundwater models by Stanko et al. (2016). This method uses spatial interpolation points to build the equation system in the reduced model space, thereby allowing the recalculation of system matrices at every time-step necessary for non-linear models while retaining the speed of the reduced model. This makes POD-DEIM applicable to groundwater models simulating unconfined aquifers. However, in our analysis, the method struggled to reproduce variable river boundaries accurately and gave no advantage for variable Dirichlet boundaries compared to the original POD method. We have developed another extension for POD that aims to address these remaining problems by performing a second POD operation on the model matrix on the left-hand side of the equation. The method is intended to at least reproduce the accuracy of the other methods where they are applicable while outperforming them for setups with changing river boundaries or variable Dirichlet boundaries. We compared the new extension with original POD and POD-DEIM for different combinations of model structures and boundary conditions. The new method shows the potential of POD extensions for applications to non-linear groundwater systems and complex boundary conditions that go beyond the current, relatively limited range of applications. References: Siade, A. J., Putti, M., and Yeh, W. W.-G. (2010). Snapshot selection for groundwater model reduction using proper orthogonal decomposition. Water Resour. Res., 46(8):W08539. Stanko, Z. P., Boyce, S. E., and Yeh, W. W.-G. (2016). Nonlinear model reduction of unconfined groundwater flow using POD and DEIM. Advances in Water Resources, 97:130-143.
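    For orientation, a generic snapshot/SVD/projection sketch of POD in numpy follows (placeholder data and a stand-in linear operator; this is not the authors' code and omits DEIM entirely).

        import numpy as np

        nx, nt, k = 200, 50, 10
        S = np.random.rand(nx, nt)                # snapshot matrix, one state per column

        U, sv, _ = np.linalg.svd(S, full_matrices=False)
        Phi = U[:, :k]                            # k dominant POD basis vectors

        A = np.diag(np.linspace(-1.0, -2.0, nx))  # stand-in system operator
        A_r = Phi.T @ A @ Phi                     # k-by-k reduced operator
        x0_r = Phi.T @ S[:, 0]                    # reduced initial condition

        energy = sv[:k].sum() / sv.sum()
        print(f"{k} modes retain {energy:.1%} of the singular-value energy")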

  18. A novel ECG data compression method based on adaptive Fourier decomposition

    NASA Astrophysics Data System (ADS)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with the state-of-the-art ECG compressors.
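    For reference, the two figures of merit quoted above can be computed as follows (a common definition of PRD is assumed; the function names are illustrative).

        import numpy as np

        def prd(x, x_hat):
            # percentage root mean square difference, original vs. reconstruction
            return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

        def cr(original_bits, compressed_bits):
            # compression ratio: raw record size over coded record size
            return original_bits / compressed_bits

        x = np.sin(np.linspace(0.0, 6.0 * np.pi, 1000))
        x_hat = x + 0.01 * np.random.randn(x.size)     # stand-in reconstruction
        print(prd(x, x_hat), cr(11 * x.size, 4096))    # 11-bit samples assumed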

  19. Some results on numerical methods for hyperbolic conservation laws

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang Huanan.

    1989-01-01

    This dissertation contains some results on the numerical solution of hyperbolic conservation laws. (1) The author introduces an artificial compression method as a correction to the basic ENO schemes. The method successfully prevents contact discontinuities from being smeared. This is achieved by increasing the slopes of the ENO reconstructions in such a way that the essentially non-oscillatory property of the schemes is kept. He analyzes the non-oscillatory property of the new artificial compression method by applying it to the UNO scheme, a second order accurate ENO scheme, and proves that the resulting scheme is indeed non-oscillatory. Extensive 1-D numerical results and some preliminary 2-D ones are provided to show the strong performance of the method. (2) He combines the ENO schemes and the centered difference schemes into self-adjusting hybrid schemes, called the localized ENO schemes. At or near the jumps, he uses the ENO schemes with the field-by-field decompositions; otherwise he simply uses the centered difference schemes without the field-by-field decompositions. The method involves a new interpolation analysis. In numerical experiments on several standard test problems, the quality of the numerical results of this method is close to that of the pure ENO results. The localized ENO schemes can be equipped with the above artificial compression method; in this way, the resolution of contact discontinuities is dramatically improved at very little additional cost. (3) He introduces a space-time mesh refinement method for time dependent problems.

  20. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain.

    PubMed

    Barba, Lida; Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used; they represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method, MSVD, in comparison with the forecasting models based on SWT.
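    A single-level sketch of the Hankel-SVD idea follows (the paper's MSVD applies such splits recursively; the L = 2 embedding and rank-1 truncation here are illustrative assumptions): the dominant singular component of the Hankel matrix is taken as the low-frequency part and the remainder as the high-frequency part.

        import numpy as np

        def hankel_split(x, L=2):
            N = len(x)
            H = np.array([x[i:i + N - L + 1] for i in range(L)])  # L x (N-L+1) Hankel
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            H_low = s[0] * np.outer(U[:, 0], Vt[0])               # rank-1 approximation
            # average anti-diagonals to map the Hankel matrix back to a series
            low = np.array([H_low[::-1].diagonal(j - L + 1).mean() for j in range(N)])
            return low, x - low                   # low- and high-frequency components

        x = np.sin(0.05 * np.arange(300)) + 0.3 * np.random.randn(300)
        low, high = hankel_split(x)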

  1. A Novel Multilevel-SVD Method to Improve Multistep Ahead Forecasting in Traffic Accidents Domain

    PubMed Central

    Rodríguez, Nibaldo

    2017-01-01

    A novel method is proposed for decomposing a nonstationary time series into components of low and high frequency. The method is based on Multilevel Singular Value Decomposition (MSVD) of a Hankel matrix. The decomposition is used to improve the forecasting accuracy of Multiple Input Multiple Output (MIMO) linear and nonlinear models. Three time series from the traffic accidents domain are used; they represent the number of persons with injuries in traffic accidents in Santiago, Chile. The data were continuously collected by the Chilean Police and were weekly sampled from 2000:1 to 2014:12. The performance of MSVD is compared with the decomposition into components of low and high frequency of a commonly accepted method based on the Stationary Wavelet Transform (SWT). SWT in conjunction with an Autoregressive model (SWT + MIMO-AR) and SWT in conjunction with an Autoregressive Neural Network (SWT + MIMO-ANN) were evaluated. The empirical results show that the best accuracy was achieved by the forecasting model based on the proposed decomposition method, MSVD, in comparison with the forecasting models based on SWT. PMID:28261267

  2. A statistical forecast model using the time-scale decomposition technique to predict rainfall during flood period over the middle and lower reaches of the Yangtze River Valley

    NASA Astrophysics Data System (ADS)

    Hu, Yijia; Zhong, Zhong; Zhu, Yimin; Ha, Yao

    2018-04-01

    In this paper, a statistical forecast model using the time-scale decomposition method is established for seasonal prediction of the rainfall during the flood period (FPR) over the middle and lower reaches of the Yangtze River Valley (MLYRV). This method decomposes the rainfall over the MLYRV into three time-scale components, namely, the interannual component with periods less than 8 years, the interdecadal component with periods from 8 to 30 years, and the multidecadal component with periods longer than 30 years. Then, predictors are selected for the three time-scale components of FPR through correlation analysis. Finally, a statistical forecast model is established using the multiple linear regression technique to predict the three time-scale components of the FPR, respectively. The results show that this forecast model can capture the interannual and interdecadal variation of FPR. The hindcast of FPR over the 14 years from 2001 to 2014 shows that the FPR can be predicted successfully in 11 out of the 14 years. This forecast model performs better than a model using the traditional scheme without time-scale decomposition. Therefore, the statistical forecast model using the time-scale decomposition technique has good skill and application value in the operational prediction of FPR over the MLYRV.
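    A crude illustration of such a three-band separation is sketched below (the running-mean filters and the placeholder series are assumptions; the paper does not specify this exact filter).

        import numpy as np

        def running_mean(x, w):
            return np.convolve(x, np.ones(w) / w, mode='same')

        rain = np.random.randn(115)                  # placeholder annual FPR series
        smooth8 = running_mean(rain, 8)
        smooth30 = running_mean(rain, 30)

        interannual = rain - smooth8                 # periods shorter than 8 yr
        interdecadal = smooth8 - smooth30            # roughly 8-30 yr
        multidecadal = smooth30                      # periods longer than 30 yr
        # each component is regressed on its own predictors and the three
        # predictions are summed to give the FPR forecast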

  3. System-independent characterization of materials using dual-energy computed tomography

    DOE PAGES

    Azevedo, Stephen G.; Martz, Jr., Harry E.; Aufderheide, III, Maurice B.; ...

    2016-02-01

    In this study, we present a new decomposition approach for dual-energy computed tomography (DECT) called SIRZ that provides precise and accurate material description, independent of the scanner, over diagnostic energy ranges (30 to 200 keV). System independence is achieved by explicitly including a scanner-specific spectral description in the decomposition method, and a new X-ray-relevant feature space. The feature space consists of electron density, ρe, and a new effective atomic number, Ze, which is based on published X-ray cross sections. Reference materials are used in conjunction with the system spectral response so that additional beam-hardening correction is not necessary. The technique is tested against other methods on DECT data of known specimens scanned by diverse spectra and systems. Uncertainties in accuracy and precision are less than 3% and 2%, respectively, for the (ρe, Ze) results, compared to prior methods that are inaccurate and imprecise (over 9%).

  4. Koopman decomposition of Burgers' equation: What can we learn?

    NASA Astrophysics Data System (ADS)

    Page, Jacob; Kerswell, Rich

    2017-11-01

    Burgers' equation is a well known 1D model of the Navier-Stokes equations and admits a selection of equilibria and travelling wave solutions. A series of Burgers' trajectories are examined with Dynamic Mode Decomposition (DMD) to probe the capability of the method to extract coherent structures from "run-down" simulations. The performance of the method depends critically on the choice of observable. We use the Cole-Hopf transformation to derive an observable which has linear, autonomous dynamics and for which the DMD modes overlap exactly with Koopman modes. This observable can accurately predict the flow evolution beyond the time window of the data used in the DMD, and in that sense outperforms other observables motivated by the nonlinearity in the governing equation. The linearizing observable also allows us to make informed decisions about often ambiguous choices in nonlinear problems, such as rank truncation and snapshot spacing. A number of rules of thumb for connecting DMD with the Koopman operator for nonlinear PDEs are distilled from the results. Related problems in low Reynolds number fluid turbulence are also discussed.
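    For reference, a standard exact-DMD sketch in numpy (placeholder snapshots; in the setting above the columns would hold the Cole-Hopf transformed observable rather than raw data):

        import numpy as np

        def dmd(X, r):
            X1, X2 = X[:, :-1], X[:, 1:]                  # paired snapshot matrices
            U, s, Vt = np.linalg.svd(X1, full_matrices=False)
            Ur, Sr_inv, Vr = U[:, :r], np.diag(1.0 / s[:r]), Vt[:r].T
            Atilde = Ur.T @ X2 @ Vr @ Sr_inv              # rank-r linear operator
            evals, W = np.linalg.eig(Atilde)              # DMD eigenvalues
            modes = X2 @ Vr @ Sr_inv @ W                  # exact DMD modes
            return evals, modes

        X = np.random.rand(128, 60)                       # placeholder snapshot data
        evals, modes = dmd(X, r=10)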

  5. Computer implemented empirical mode decomposition method, apparatus and article of manufacture

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    1999-01-01

    A computer implemented physical signal analysis method is described. The method includes two essential steps and the associated presentation techniques of the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition to extract a collection of Intrinsic Mode Functions (IMFs) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in terms of the IMFs, the data have well-behaved Hilbert transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert transform. The final result is the Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMFs. Then, these IMFs, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMFs through the Hilbert transform give a full energy-frequency-time distribution of the data, which is designated as the Hilbert Spectrum.
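    The second step can be illustrated with scipy (the sifting step is assumed to have already produced an IMF; a chirp stands in for one here):

        import numpy as np
        from scipy.signal import hilbert

        fs = 100.0
        t = np.arange(0.0, 10.0, 1.0 / fs)
        imf = np.cos(2 * np.pi * (1.0 + 0.2 * t) * t)     # chirp standing in for an IMF

        z = hilbert(imf)                                  # analytic signal
        amplitude = np.abs(z)                             # local energy envelope
        phase = np.unwrap(np.angle(z))
        inst_freq = np.diff(phase) / (2 * np.pi) * fs     # instantaneous frequency, Hz
        # stacking (time, inst_freq, amplitude**2) over all IMFs gives the Hilbert Spectrum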

  6. Scale Issues in Air Quality Modeling

    EPA Science Inventory

    This presentation reviews past model evaluation studies investigating the impact of horizontal grid spacing on model performance. It also presents several examples of using a spectral decomposition technique to separate the forcings from processes operating on different time scales.

  7. Application of empirical mode decomposition with local linear quantile regression in financial time series forecasting.

    PubMed

    Jaber, Abobaker M; Ismail, Mohd Tahir; Altaher, Alsaidi M

    2014-01-01

    This paper mainly forecasts the daily closing price of stock markets. We propose a two-stage technique that combines the empirical mode decomposition (EMD) with nonparametric methods of local linear quantile (LLQ) regression. We use the proposed technique, EMD-LLQ, to forecast two stock index time series. Detailed experiments are implemented for the proposed method, in which the EMD-LLQ, EMD, and Holt-Winters methods are compared. The proposed EMD-LLQ model is determined to be superior to the EMD and Holt-Winters methods in predicting the stock closing prices.

  8. Performance of Scattering Matrix Decomposition and Color Spaces for Synthetic Aperture Radar Imagery

    DTIC Science & Technology

    2010-03-01

    The report excerpt covers color spaces and Synthetic Aperture Radar (SAR) multicolor imaging, including colorimetry, decomposition techniques for SAR polarimetry, and colorimetry applied to SAR imagery, presenting the fundamentals of the RGB and CMY color spaces defined for space polarimetric SAR systems.

  9. Spectral decomposition of seismic data with reassigned smoothed pseudo Wigner-Ville distribution

    NASA Astrophysics Data System (ADS)

    Wu, Xiaoyang; Liu, Tianyou

    2009-07-01

    Seismic signals are nonstationary, mainly due to absorption and attenuation of seismic energy in strata. For the spectral decomposition of seismic data, the conventional method using the short-time Fourier transform (STFT) limits temporal and spectral resolution by a predefined window length. The continuous-wavelet transform (CWT) uses dilation and translation of a wavelet to produce a time-scale map, but the wavelets utilized should be orthogonal in order to obtain satisfactory resolution. The less commonly applied Wigner-Ville distribution (WVD), though superior in the concentration of its energy distribution, is confronted with cross-term interference (CTI) when signals are multi-component. In order to reduce the impact of CTI, the Cohen class uses a kernel function as a low-pass filter; nevertheless, this also weakens the energy concentration of the auto-terms. In this paper, we employ the smoothed pseudo Wigner-Ville distribution (SPWVD) with a Gauss kernel function to reduce CTI in the time and frequency domains, and then reassign the values of the SPWVD (reassigned SPWVD, RSPWVD) according to the center of gravity of the considered energy region, so that the concentration of the distribution is maintained. We apply the method to a multi-component synthetic seismic record and compare with STFT and CWT spectra. Two field examples reveal that RSPWVD can potentially be applied to detect low-frequency shadows caused by hydrocarbons and to delineate the spatial distribution of abnormal geological bodies more precisely.

  10. Augmenting the decomposition of EMG signals using supervised feature extraction techniques.

    PubMed

    Parsaei, Hossein; Gangeh, Mehrdad J; Stashuk, Daniel W; Kamel, Mohamed S

    2012-01-01

    Electromyographic (EMG) signal decomposition is the process of resolving an EMG signal into its constituent motor unit potential trains (MUPTs). In this work, the possibility of improving the decomposition results using two supervised feature extraction methods, i.e., Fisher discriminant analysis (FDA) and supervised principal component analysis (SPCA), is explored. Using the MUP labels provided by a decomposition-based quantitative EMG system as training data for FDA and SPCA, the MUPs are transformed into a new feature space such that the MUPs of a single MU become as close as possible to each other while those created by different MUs become as far apart as possible. The MUPs are then reclassified using a certainty-based classification algorithm. Evaluation results using 10 simulated EMG signals comprised of 3-11 MUPTs demonstrate that FDA and SPCA on average improve the decomposition accuracy by 6%. The improvement for the most difficult-to-decompose signal is about 12%, which shows the proposed approach is most beneficial in the decomposition of more complex signals.
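    A sketch of the supervised re-embedding with scikit-learn's LDA follows (SPCA is analogous); the random feature vectors, labels, and the nearest-centroid stand-in for the certainty-based classifier are all placeholder assumptions.

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.neighbors import NearestCentroid

        rng = np.random.default_rng(0)
        X = rng.normal(size=(300, 40))             # MUP feature vectors (placeholder)
        y = rng.integers(0, 5, size=300)           # labels from an initial decomposition

        lda = LinearDiscriminantAnalysis(n_components=4).fit(X, y)
        Z = lda.transform(X)                       # same-MU MUPs pulled together
        relabeled = NearestCentroid().fit(Z, y).predict(Z)   # reclassification step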

  11. Palm vein recognition based on directional empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from fine to coarse scale. A DEMD-based two-directional linear discriminant analysis (2LDA) method for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations are extracted using DEMD, (ii) the 2LDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2LDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  12. Human visual system-based color image steganography using the contourlet transform

    NASA Astrophysics Data System (ADS)

    Abdul, W.; Carré, P.; Gaborit, P.

    2010-01-01

    We present a steganographic scheme based on the contourlet transform which uses the contrast sensitivity function (CSF) to control the strength of insertion of the hidden information in a perceptually uniform color space. The CIELAB color space is used as it is well suited for steganographic applications: any change in the CIELAB color space has a corresponding effect on the human visual system (HVS), and it is very important for steganographic schemes to be undetectable by the HVS. The perceptual decomposition of the contourlet transform gives it a natural advantage over other decompositions, as it can be molded with respect to the human perception of different frequencies in an image. The evaluation of the imperceptibility of the steganographic scheme with respect to the color perception of the HVS is done using standard methods such as the structural similarity (SSIM) index and CIEDE2000. The robustness of the inserted watermark is tested against JPEG compression.

  13. Position Papers for the First Workshop on Principles and Practice of Constraint Programming Held in Newport, Rhode Island on April 28-30, 1993

    DTIC Science & Technology

    1993-04-30

    There are alternative methods to MBBs, based on decomposition of space into disjoint cells. These include the uniform grid method [Fr84] and quadtree-based methods. In the grid and quadtree methods there is a trade-off between the resolution of the cells (and thus the quantity of the cells) and the effectiveness of the decomposition. [Fr84] W.R. Franklin, Adaptive grids for geometric operations, Cartographica 21, 2 & 3, pp. 160-167, 1984.

  14. Geometric decompositions of collective motion

    NASA Astrophysics Data System (ADS)

    Mischiati, Matteo; Krishnaprasad, P. S.

    2017-04-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes-including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots.
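    A toy numpy version of such a kinetic-energy split is sketched below (assumptions: unit masses, the translation mode taken as the mean velocity and the rotation mode from the best-fit angular velocity); the three energy fractions sum to one because the modes are mutually orthogonal.

        import numpy as np

        pos = np.random.rand(20, 3)                      # agent positions
        vel = np.random.rand(20, 3)                      # agent velocities

        v_cm = vel.mean(axis=0)                          # rigid translation mode
        r = pos - pos.mean(axis=0)
        v_rel = vel - v_cm

        L = np.cross(r, v_rel).sum(axis=0)               # total angular momentum
        I = sum(np.dot(ri, ri) * np.eye(3) - np.outer(ri, ri) for ri in r)
        omega = np.linalg.solve(I, L)                    # best-fit angular velocity
        v_rot = np.cross(omega, r)                       # rigid rotation mode
        v_res = v_rel - v_rot                            # residual (shape change etc.)

        ke = lambda v: 0.5 * np.sum(v ** 2)
        total = ke(vel)
        print(ke(np.tile(v_cm, (20, 1))) / total,        # translation fraction
              ke(v_rot) / total, ke(v_res) / total)      # rotation, residual fractions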

  15. Geometric decompositions of collective motion

    PubMed Central

    Krishnaprasad, P. S.

    2017-01-01

    Collective motion in nature is a captivating phenomenon. Revealing the underlying mechanisms, which are of biological and theoretical interest, will require empirical data, modelling and analysis techniques. Here, we contribute a geometric viewpoint, yielding a novel method of analysing movement. Snapshots of collective motion are portrayed as tangent vectors on configuration space, with length determined by the total kinetic energy. Using the geometry of fibre bundles and connections, this portrait is split into orthogonal components each tangential to a lower dimensional manifold derived from configuration space. The resulting decomposition, when interleaved with classical shape space construction, is categorized into a family of kinematic modes—including rigid translations, rigid rotations, inertia tensor transformations, expansions and compressions. Snapshots of empirical data from natural collectives can be allocated to these modes and weighted by fractions of total kinetic energy. Such quantitative measures can provide insight into the variation of the driving goals of a collective, as illustrated by applying these methods to a publicly available dataset of pigeon flocking. The geometric framework may also be profitably employed in the control of artificial systems of interacting agents such as robots. PMID:28484319

  16. Fault identification of rotor-bearing system based on ensemble empirical mode decomposition and self-zero space projection analysis

    NASA Astrophysics Data System (ADS)

    Jiang, Fan; Zhu, Zhencai; Li, Wei; Zhou, Gongbo; Chen, Guoan

    2014-07-01

    Accurately identifying faults in rotor-bearing systems by analyzing vibration signals, which are nonlinear and nonstationary, is challenging. To address this issue, a new approach based on ensemble empirical mode decomposition (EEMD) and self-zero space projection analysis is proposed in this paper. This method seeks to identify faults appearing in a rotor-bearing system using simple algebraic calculations and projection analyses. First, EEMD is applied to decompose the collected vibration signals into a set of intrinsic mode functions (IMFs) that serve as features. Second, these extracted features under various mechanical health conditions are used to design a self-zero space matrix according to space projection analysis. Finally, the so-called projection indicators are calculated to identify the rotor-bearing system's faults with simple decision logic. Experiments are implemented to test the reliability and effectiveness of the proposed approach. The results show that this approach can accurately identify faults in rotor-bearing systems.

  17. Problem decomposition by mutual information and force-based clustering

    NASA Astrophysics Data System (ADS)

    Otero, Richard Edward

    The scale of engineering problems has sharply increased over the last twenty years. Larger coupled systems, increasing complexity, and limited resources create a need for methods that automatically decompose problems into manageable sub-problems by discovering and leveraging problem structure. The ability to learn the coupling (inter-dependence) structure and reorganize the original problem could lead to large reductions in the time to analyze complex problems. Such decomposition methods could also provide engineering insight into the fundamental physics driving problem solution. This work advances the current state of the art in engineering decomposition through the application of techniques originally developed within computer science and information theory. The work describes the current state of automatic problem decomposition in engineering and utilizes several promising ideas to advance the state of the practice. Mutual information is a novel metric for data dependence and works on both continuous and discrete data. Mutual information can measure both the linear and non-linear dependence between variables without the limitations of linear dependence measured through covariance. Mutual information is also able to handle data that does not have derivative information, unlike other metrics that require it. The value of mutual information to engineering design work is demonstrated on a planetary entry problem. This study utilizes a novel tool developed in this work for planetary entry system synthesis. A graphical method, force-based clustering, is used to discover related sub-graph structure as a function of problem structure and links ranked by their mutual information. This method does not require the stochastic use of neural networks and could be used with any link ranking method currently utilized in the field. Application of this method is demonstrated on a large, coupled low-thrust trajectory problem. Mutual information also serves as the basis for an alternative global optimizer, called MIMIC, which is unrelated to Genetic Algorithms. As an advance over current practice, this work demonstrates the use of MIMIC as a global method that explicitly models problem structure with mutual information, providing an alternate method for globally searching multi-modal domains. By leveraging discovered problem inter-dependencies, MIMIC may be appropriate for highly coupled problems or those with large function evaluation cost. This work introduces a useful addition to the MIMIC algorithm that enables its use on continuous input variables. By leveraging automatic decision tree generation methods from Machine Learning and a set of randomly generated test problems, decision trees for which method to apply are also created, quantifying decomposition performance over a large region of the design space.
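    A minimal histogram-based mutual information estimator (one standard estimator; the thesis does not prescribe this exact one) shows how nonlinear dependence is detected even when the covariance is near zero.

        import numpy as np

        def mutual_information(x, y, bins=16):
            pxy, _, _ = np.histogram2d(x, y, bins=bins)
            pxy /= pxy.sum()                              # joint distribution estimate
            px, py = pxy.sum(axis=1), pxy.sum(axis=0)     # marginals
            nz = pxy > 0
            return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))  # nats

        x = np.random.randn(5000)
        y = x ** 2 + 0.1 * np.random.randn(5000)          # nonlinear, nearly uncorrelated
        print(np.corrcoef(x, y)[0, 1], mutual_information(x, y))  # ~0 vs. clearly > 0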

  18. An approach to solving large reliability models

    NASA Technical Reports Server (NTRS)

    Boyd, Mark A.; Veeraraghavan, Malathi; Dugan, Joanne Bechta; Trivedi, Kishor S.

    1988-01-01

    This paper describes a unified approach to the problem of solving large realistic reliability models. The methodology integrates behavioral decomposition, state truncation, and efficient sparse matrix-based numerical methods. The use of fault trees, together with ancillary information regarding dependencies, to automatically generate the underlying Markov model state space is proposed. The effectiveness of this approach is illustrated by modeling a state-of-the-art flight control system and a multiprocessor system. Nonexponential distributions for times to failure of components are assumed in the latter example. The modeling tool used for most of this analysis is HARP (the Hybrid Automated Reliability Predictor).

  19. A general algorithm using finite element method for aerodynamic configurations at low speeds

    NASA Technical Reports Server (NTRS)

    Balasubramanian, R.

    1975-01-01

    A finite element algorithm for numerical simulation of two-dimensional, incompressible, viscous flows was developed. The Navier-Stokes equations are suitably modelled to facilitate direct solution for the essential flow parameters. A leap-frog time differencing and Galerkin minimization of these model equations yields the finite element algorithm. The finite elements are triangular with bicubic shape functions approximating the solution space. The finite element matrices are unsymmetrically banded to facilitate savings in storage. An unsymmetric LU decomposition is performed on the finite element matrices to obtain the solution for the boundary value problem.

  20. Power System Decomposition for Practical Implementation of Bulk-Grid Voltage Control Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vallem, Mallikarjuna R.; Vyakaranam, Bharat GNVSR; Holzer, Jesse T.

    Power system algorithms such as AC optimal power flow and coordinated volt/var control of the bulk power system are computationally intensive and become difficult to solve in operational time frames. The computational time required to run these algorithms increases exponentially as the size of the power system increases. The solution time for multiple subsystems is less than that for solving the entire system simultaneously, and the local nature of the voltage problem lends itself to such decomposition. This paper describes an algorithm that can be used to perform power system decomposition from the point of view of the voltage control problem. Our approach takes advantage of the dominant localized effect of voltage control and is based on clustering buses according to the electrical distances between them. One of the contributions of the paper is to use multidimensional scaling to compute n-dimensional Euclidean coordinates for each bus based on electrical distance, in order to apply algorithms like K-means clustering. A simple coordinated reactive power control of photovoltaic inverters for voltage regulation is used to demonstrate the effectiveness of the proposed decomposition algorithm and its components. The proposed decomposition method is demonstrated on the IEEE 118-bus system.
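    A sketch of the embedding-plus-clustering step with scikit-learn follows (the electrical-distance matrix is replaced by a random symmetric placeholder; computing true electrical distances from the network model is omitted).

        import numpy as np
        from sklearn.manifold import MDS
        from sklearn.cluster import KMeans

        n_bus = 118
        D = np.random.rand(n_bus, n_bus)
        D = (D + D.T) / 2.0                       # placeholder electrical distances
        np.fill_diagonal(D, 0.0)

        coords = MDS(n_components=3, dissimilarity='precomputed',
                     random_state=0).fit_transform(D)      # Euclidean bus coordinates
        zones = KMeans(n_clusters=6, n_init=10,
                       random_state=0).fit_predict(coords) # voltage-control zones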

  1. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    PubMed

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm in both situations: the native PT and the PT with a denoising process.

  2. Repeated decompositions reveal the stability of infomax decomposition of fMRI data

    PubMed Central

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2010-01-01

    In this study, we decomposed 12 fMRI data sets from six subjects, 101 times each, using the infomax algorithm. The first decomposition was taken as a reference decomposition; the others were used to form a component matrix of 100 by 100 components. Equivalence relations between components in this matrix, defined as maximum spatial correlations to the components of the reference decomposition, were found by the Hungarian sorting method and used to form 100 equivalence classes for each data set. We then tested the reproducibility of the matched components in the equivalence classes using uncertainty measures based on component distributions, time courses, and ROC curves. Infomax ICA rarely failed to derive nearly the same components in different decompositions. Very few components per data set were poorly reproduced, even using vector angle uncertainty measures stricter than correlation and detection theory measures. PMID:17281453

  3. Dynamic Stability Analysis of Linear Time-varying Systems via an Extended Modal Identification Approach

    NASA Astrophysics Data System (ADS)

    Ma, Zhisai; Liu, Li; Zhou, Sida; Naets, Frank; Heylen, Ward; Desmet, Wim

    2017-03-01

    The problem of linear time-varying (LTV) system modal analysis is considered based on time-dependent state space representations, as classical modal analysis of linear time-invariant systems and current LTV system modal analysis under the "frozen-time" assumption are not able to determine the dynamic stability of LTV systems. Time-dependent state space representations of LTV systems are first introduced, and the corresponding modal analysis theories are subsequently presented via a stability-preserving state transformation. The time-varying modes of LTV systems are extended in terms of uniqueness, and are further interpreted to determine the system's stability. An extended modal identification is proposed to estimate the time-varying modes, consisting of the estimation of the state transition matrix via a subspace-based method and the extraction of the time-varying modes by the QR decomposition. The proposed approach is numerically validated by three numerical cases, and is experimentally validated by a coupled moving-mass simply supported beam experimental case. The proposed approach is capable of accurately estimating the time-varying modes, and provides a new way to determine the dynamic stability of LTV systems by using the estimated time-varying modes.

  4. Extracting Leading Nonlinear Modes of Changing Climate From Global SST Time Series

    NASA Astrophysics Data System (ADS)

    Mukhin, D.; Gavrilov, A.; Loskutov, E. M.; Feigin, A. M.; Kurths, J.

    2017-12-01

    Data-driven modeling of climate requires adequate principal variables extracted from observed high-dimensional data. Constructing such variables requires finding spatial-temporal patterns that explain a substantial part of the variability and comprise all dynamically related time series in the data. The difficulties of this task arise from the nonlinearity and non-stationarity of the climate dynamical system. The nonlinearity makes linear methods of data decomposition insufficient for separating the different processes entangled in the observed time series. On the other hand, various forcings, both anthropogenic and natural, make the dynamics non-stationary, and we should be able to describe the response of the system to such forcings in order to separate out the modes explaining the internal variability. The method we present is aimed at overcoming both these problems. It is based on the Nonlinear Dynamical Mode (NDM) decomposition [1,2], but takes external forcing signals into account. Each mode depends on hidden time series, unknown a priori, which, together with the external forcing time series, are mapped onto data space. Finding both the hidden signals and the mapping allows us to study the evolution of the modes' structure under changing external conditions and to compare the roles of internal variability and forcing in the observed behavior. The method is used for extracting the principal modes of SST variability on inter-annual and multidecadal time scales, accounting for external forcings such as CO2, variations of solar activity and volcanic activity. The structure of the revealed teleconnection patterns as well as their forecast under different CO2 emission scenarios are discussed. [1] Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. [2] Gavrilov, A., Mukhin, D., Loskutov, E., Volodin, E., Feigin, A., & Kurths, J. (2016). Method for reconstructing nonlinear modes with adaptive structure from multidimensional data. Chaos: An Interdisciplinary Journal of Nonlinear Science, 26(12), 123101.

  5. Magnetofluid dynamics in curved spacetime

    NASA Astrophysics Data System (ADS)

    Bhattacharjee, Chinmoy; Das, Rupam; Mahajan, S. M.

    2015-03-01

    A grand unified field tensor Mμν is constructed from Maxwell's field tensor and an appropriately modified flow field, both nonminimally coupled to gravity, to analyze the dynamics of hot charged fluids in a curved background space-time. With a suitable 3+1 decomposition, this new formalism for the hot fluid is then applied to investigate the vortical dynamics of the system. Finally, the equilibrium state of a plasma with nonminimal coupling to gravity through the Ricci scalar R is investigated to derive a double Beltrami equation in curved space-time.

  6. An adaptive model order reduction by proper snapshot selection for nonlinear dynamical problems

    NASA Astrophysics Data System (ADS)

    Nigro, P. S. B.; Anndif, M.; Teixeira, Y.; Pimenta, P. M.; Wriggers, P.

    2016-04-01

    Model Order Reduction (MOR) methods are employed in many fields of Engineering in order to reduce the processing time of complex computational simulations. A usual approach to achieve this is the application of Galerkin projection to generate representative subspaces (reduced spaces). However, when strong nonlinearities are present in a dynamical system and this technique is employed several times along the simulation, it can be very inefficient. This work proposes a new adaptive strategy which ensures low computational cost and small error to deal with this problem. This work also presents a new method to select snapshots, named Proper Snapshot Selection (PSS). The objective of the PSS is to obtain a good balance between accuracy and computational cost by improving the adaptive strategy through a better snapshot selection in real time (online analysis). With this method, a substantial reduction of the subspace is possible while keeping the quality of the model, without the use of the Proper Orthogonal Decomposition (POD).

  7. A Multitaper, Causal Decomposition for Stochastic, Multivariate Time Series: Application to High-Frequency Calcium Imaging Data.

    PubMed

    Sornborger, Andrew T; Lauderdale, James D

    2016-11-01

    Neural data analysis has increasingly incorporated causal information to study circuit connectivity. Dimensional reduction forms the basis of most analyses of large multivariate time series. Here, we present a new, multitaper-based decomposition for stochastic, multivariate time series that acts on the covariance of the time series at all lags, C(τ), as opposed to standard methods that decompose the time series, X(t), using only information at zero lag. In both simulated and neural imaging examples, we demonstrate that methods that neglect the full causal structure may be discarding important dynamical information in a time series.
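    The object the decomposition acts on can be sketched as follows (multitaper spectral estimation itself is omitted; this is a plain sample-covariance version).

        import numpy as np

        def lagged_cov(X, max_lag):
            # X: (channels, time) zero-mean multivariate series
            T = X.shape[1]
            return np.stack([X[:, :T - k] @ X[:, k:].T / (T - k)
                             for k in range(max_lag + 1)])   # C[k] ~ Cov(X_t, X_{t+k})

        X = np.random.randn(8, 2000)
        X -= X.mean(axis=1, keepdims=True)
        C = lagged_cov(X, max_lag=50)     # decompose all of C, not just C[0] as in PCA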

  8. Catalytic Decomposition of Hydroxylammonium Nitrate Ionic Liquid: Enhancement of NO Formation.

    PubMed

    Chambreau, Steven D; Popolan-Vaida, Denisia M; Vaghjiani, Ghanshyam L; Leone, Stephen R

    2017-05-18

    Hydroxylammonium nitrate (HAN) is a promising candidate to replace highly toxic hydrazine in monopropellant thruster space applications. The reactivity of HAN aerosols on heated copper and iridium targets was investigated using tunable vacuum ultraviolet photoionization time-of-flight aerosol mass spectrometry. The reaction products were identified by their mass-to-charge ratios and their ionization energies. Products include NH3, H2O, NO, hydroxylamine (HA), HNO3, and a small amount of NO2 at high temperature. No N2O was detected under these experimental conditions, despite the fact that N2O is one of the expected products according to the generally accepted thermal decomposition mechanism of HAN. Upon introduction of the iridium catalyst, a significant enhancement of the NO/HA ratio was observed. This observation indicates that the formation of NO via decomposition of HA is an important pathway in the catalytic decomposition of HAN.

  9. Case report: Time of death estimation of a buried body by modeling a decomposition matrix for a pig carcass.

    PubMed

    Niederegger, Senta; Schermer, Julia; Höfig, Juliane; Mall, Gita

    2015-01-01

    Estimating the time of death of buried human bodies is a very difficult task. Casper's rule from 1860 is still widely used, which illustrates the lack of suitable methods. In this case study, excavations in an arbor revealed the crouching body of a human being, dressed only in boxer shorts and socks. Witnesses were not able to give a concise answer as to when the person in question was last seen alive; the pieces of information opened a window of 2-6 weeks for the possible time of death. To determine the post mortem interval (PMI), an experiment using a pig carcass was conducted to set up a decomposition matrix. Fitting the autopsy findings of the victim into the decomposition matrix yielded a time of death estimation of 2-3 weeks. This time frame was later confirmed by a new witness. The authors feel confident that the widespread construction of decomposition matrices using pig carcasses can lead to a great increase in experience and knowledge in PMI estimation of buried bodies and will eventually lead to applicable new methods.

  10. A New Method for Nonlinear and Nonstationary Time Series Analysis and Its Application to the Earthquake and Building Response Records

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    1999-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMFs). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time that give sharp identifications of embedded structures. The final presentation of the results is an energy-frequency-time distribution, designated as the Hilbert Spectrum. Examples of application of this method to earthquake and building response are given. The results indicate that low frequency components, totally missed by Fourier analysis, are clearly identified by the new method. Comparisons with wavelet and windowed Fourier analysis show the new method offers much better temporal and frequency resolutions.

  11. Resonance-Based Time-Frequency Manifold for Feature Extraction of Ship-Radiated Noise.

    PubMed

    Yan, Jiaquan; Sun, Haixin; Chen, Hailan; Junejo, Naveed Ur Rehman; Cheng, En

    2018-03-22

    In this paper, a novel time-frequency signature using resonance-based sparse signal decomposition (RSSD), phase space reconstruction (PSR), time-frequency distribution (TFD) and manifold learning is proposed for feature extraction of ship-radiated noise, called the resonance-based time-frequency manifold (RTFM). This is suitable for analyzing signals with oscillatory, non-stationary and non-linear characteristics in situations of serious noise pollution. Unlike traditional methods, which are sensitive to noise and consider only one aspect of the oscillatory, non-stationary and non-linear characteristics, the proposed RTFM can provide an intact feature signature of all these characteristics in the form of a time-frequency signature by the following steps: first, RSSD is employed on the raw signal to extract the high-oscillatory component and discard the low-oscillatory component. Second, PSR is performed on the high-oscillatory component to map the one-dimensional signal to the high-dimensional phase space. Third, TFD is employed to reveal non-stationary information in the phase space. Finally, manifold learning is applied to the TFDs to extract the intrinsic non-linear manifold. A proportional addition of the top two RTFMs is adopted to produce the improved RTFM signature. All of the case studies are validated on real audio recordings of ship-radiated noise. Case studies of ship-radiated noise on different datasets and under various degrees of noise pollution demonstrate the effectiveness and robustness of the proposed method.

  12. Resonance-Based Time-Frequency Manifold for Feature Extraction of Ship-Radiated Noise

    PubMed Central

    Yan, Jiaquan; Sun, Haixin; Chen, Hailan; Junejo, Naveed Ur Rehman; Cheng, En

    2018-01-01

    In this paper, a novel time-frequency signature using resonance-based sparse signal decomposition (RSSD), phase space reconstruction (PSR), time-frequency distribution (TFD) and manifold learning is proposed for feature extraction of ship-radiated noise, called the resonance-based time-frequency manifold (RTFM). This is suitable for analyzing signals with oscillatory, non-stationary and non-linear characteristics in situations of serious noise pollution. Unlike traditional methods, which are sensitive to noise and consider only one aspect of the oscillatory, non-stationary and non-linear characteristics, the proposed RTFM can provide an intact feature signature of all these characteristics in the form of a time-frequency signature by the following steps: first, RSSD is employed on the raw signal to extract the high-oscillatory component and discard the low-oscillatory component. Second, PSR is performed on the high-oscillatory component to map the one-dimensional signal to the high-dimensional phase space. Third, TFD is employed to reveal non-stationary information in the phase space. Finally, manifold learning is applied to the TFDs to extract the intrinsic non-linear manifold. A proportional addition of the top two RTFMs is adopted to produce the improved RTFM signature. All of the case studies are validated on real audio recordings of ship-radiated noise. Case studies of ship-radiated noise on different datasets and under various degrees of noise pollution demonstrate the effectiveness and robustness of the proposed method. PMID:29565288

  13. Single block three-dimensional volume grids about complex aerodynamic vehicles

    NASA Technical Reports Server (NTRS)

    Alter, Stephen J.; Weilmuenster, K. James

    1993-01-01

    This paper presents an alternate approach for the generation of volumetric grids for supersonic and hypersonic flows about complex configurations. The method uses parametric two-dimensional block face grid definition within the framework of GRIDGEN2D. The incorporation of face decomposition reduces complex surfaces to simple shapes. These simple shapes are combined to obtain the final face definition. The advantages of this method include the reduction of overall grid generation time through the use of vectorized computer code, the elimination of the need to generate matching block faces, and the implementation of simplified boundary conditions. A simple axisymmetric grid is used to illustrate this method. In addition, volume grids for two complex configurations, the Langley Lifting Body (HL-20) and the Space Shuttle Orbiter, are shown.

  14. Placement-aware decomposition of a digital standard cells library for double patterning lithography

    NASA Astrophysics Data System (ADS)

    Wassal, Amr G.; Sharaf, Heba; Hammouda, Sherif

    2012-11-01

    To continue scaling circuit features down, Double Patterning (DP) technology is needed at 22nm technology nodes and below. DP requires decomposing the layout features into two masks for pitch relaxation, such that the spacing between any two features on each mask is greater than the minimum allowed mask spacing. The relaxed pitches of each mask are then processed in two separate exposure steps. In many cases, post-layout decomposition fails to decompose the layout into two masks due to the presence of conflicts. Post-layout decomposition of a standard-cell block can result in native conflicts inside the cells (internal conflicts) or native conflicts on the boundary between two cells (boundary conflicts). Resolving native conflicts requires a redesign and/or multiple iterations of the placement and routing phases to get a clean decomposition. Therefore, DP compliance must be considered in earlier phases, before the final placed cell block is obtained. The main focus of this paper is generating a library of decomposed standard cells to be used in a DP-aware placer. This library should contain all possible decompositions for each standard cell, i.e., these decompositions consider all possible combinations of boundary conditions. However, the large number of combinations of boundary conditions for each standard cell would significantly increase the processing time and effort required to obtain all possible decompositions. Therefore, an efficient methodology is required to reduce this large number of combinations. In this paper, three different reduction methodologies are proposed to reduce the number of combinations processed to get the decomposed library. Experimental results show a significant reduction in the number of combinations and decompositions needed for library processing. To generate and verify the proposed flow and methodologies, a prototype for a placement-aware DP-ready cell library is developed with an optimized number of cell views.
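    At its core, mask assignment is a graph 2-coloring problem, as the toy sketch below shows (illustrative only; production decomposers also handle stitch insertion and density rules): features closer than the minimum same-mask spacing are linked in a conflict graph, and an odd cycle is a native conflict.

        from collections import deque

        def two_color(n, conflicts):
            adj = [[] for _ in range(n)]
            for a, b in conflicts:
                adj[a].append(b)
                adj[b].append(a)
            mask = [None] * n
            for start in range(n):
                if mask[start] is not None:
                    continue
                mask[start] = 0
                queue = deque([start])
                while queue:
                    u = queue.popleft()
                    for v in adj[u]:
                        if mask[v] is None:
                            mask[v] = 1 - mask[u]      # opposite exposure
                            queue.append(v)
                        elif mask[v] == mask[u]:
                            return None                # odd cycle: native conflict
            return mask

        print(two_color(4, [(0, 1), (1, 2), (2, 3)]))  # [0, 1, 0, 1]
        print(two_color(3, [(0, 1), (1, 2), (2, 0)]))  # None (triangle conflict)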

  15. Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces

    PubMed Central

    Bahri, A.; Bendersky, M.; Cohen, F. R.; Gitler, S.

    2009-01-01

    This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley–Reisner ring of a finite simplicial complex, and natural generalizations. PMID:19620727

  16. Decompositions of the polyhedral product functor with applications to moment-angle complexes and related spaces.

    PubMed

    Bahri, A; Bendersky, M; Cohen, F R; Gitler, S

    2009-07-28

    This article gives a natural decomposition of the suspension of a generalized moment-angle complex or partial product space which arises as the polyhedral product functor described below. The introduction and application of the smash product moment-angle complex provides a precise identification of the stable homotopy type of the values of the polyhedral product functor. One direct consequence is an analysis of the associated cohomology. For the special case of the complements of certain subspace arrangements, the geometrical decomposition implies the homological decomposition in earlier work of others as described below. Because the splitting is geometric, an analogous homological decomposition for a generalized moment-angle complex applies for any homology theory. Implied, therefore, is a decomposition for the Stanley-Reisner ring of a finite simplicial complex, and natural generalizations.

  17. A stable and accurate partitioned algorithm for conjugate heat transfer

    NASA Astrophysics Data System (ADS)

    Meng, F.; Banks, J. W.; Henshaw, W. D.; Schwendeman, D. W.

    2017-09-01

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. The CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.
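
    The generalized Robin coupling at the heart of the scheme can be written compactly; the notation below is ours and the weights are placeholders for the optimized values derived in the paper. With temperatures u_1, u_2, conductivities k_1, k_2, interface normal n and (sub-)iteration index m, a Robin exchange of optimized-Schwarz type reads

    $$k_1\,\partial_n u_1^{(m+1)} + p_1\,u_1^{(m+1)} = k_2\,\partial_n u_2^{(m)} + p_1\,u_2^{(m)},$$
    $$k_2\,\partial_n u_2^{(m+1)} - p_2\,u_2^{(m+1)} = k_1\,\partial_n u_1^{(m)} - p_2\,u_1^{(m)}.$$

    Taking p_i → ∞ recovers a Dirichlet exchange and p_i → 0 a Neumann one; CHAMP instead selects the weights from its local stability analysis of the coupled scheme.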

  18. A stable and accurate partitioned algorithm for conjugate heat transfer

    DOE PAGES

    Meng, F.; Banks, J. W.; Henshaw, W. D.; ...

    2017-04-25

    We describe a new partitioned approach for solving conjugate heat transfer (CHT) problems where the governing temperature equations in different material domains are time-stepped in an implicit manner, but where the interface coupling is explicit. The new approach, called the CHAMP scheme (Conjugate Heat transfer Advanced Multi-domain Partitioned), is based on a discretization of the interface coupling conditions using a generalized Robin (mixed) condition. The weights in the Robin condition are determined from the optimization of a condition derived from a local stability analysis of the coupling scheme. The interface treatment combines ideas from optimized-Schwarz methods for domain-decomposition problems together with the interface jump conditions and additional compatibility jump conditions derived from the governing equations. For many problems (i.e. for a wide range of material properties, grid-spacings and time-steps) the CHAMP algorithm is stable and second-order accurate using no sub-time-step iterations (i.e. a single implicit solve of the temperature equation in each domain). In extreme cases (e.g. very fine grids with very large time-steps) it may be necessary to perform one or more sub-iterations. Each sub-iteration generally increases the range of stability substantially and thus one sub-iteration is likely sufficient for the vast majority of practical problems. The CHAMP algorithm is developed first for a model problem and analyzed using normal-mode theory. The theory provides a mechanism for choosing optimal parameters in the mixed interface condition. A comparison is made to the classical Dirichlet-Neumann (DN) method and, where applicable, to the optimized-Schwarz (OS) domain-decomposition method. For problems with different thermal conductivities and diffusivities, the CHAMP algorithm outperforms the DN scheme. For domain-decomposition problems with uniform conductivities and diffusivities, the CHAMP algorithm performs better than the typical OS scheme with one grid-cell overlap. Lastly, the CHAMP scheme is also developed for general curvilinear grids and CHT examples are presented using composite overset grids that confirm the theory and demonstrate the effectiveness of the approach.

  19. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest

    PubMed Central

    Ma, Suliang; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-01-01

    Mechanical faults of high-voltage circuit breakers (HVCBs) inevitably occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, in the feature selection procedure, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model, as it gives a more complete description than the Shannon entropy. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experimental results show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful, and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods. PMID:29659548
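
    A minimal sketch of the two-stage idea (wavelet-packet energy features, then a random forest whose importances rank the feature space) might look as follows; the signals, labels, wavelet choice and tree count are all illustrative, not the paper's settings.

    ```python
    import numpy as np
    import pywt
    from sklearn.ensemble import RandomForestClassifier

    def wavelet_packet_energy_rate(sig, wavelet="db4", level=3):
        # Energy share of each terminal wavelet-packet node: a stand-in
        # for the paper's time-frequency energy rate (WTFER) feature.
        wp = pywt.WaveletPacket(data=sig, wavelet=wavelet, maxlevel=level)
        energies = np.array([np.sum(node.data ** 2)
                             for node in wp.get_level(level, order="freq")])
        return energies / energies.sum()

    # Illustrative training set: rows of vibration signals with fault labels.
    rng = np.random.default_rng(0)
    signals = rng.standard_normal((60, 1024))
    labels = rng.integers(0, 6, size=60)          # six fault classes
    X = np.array([wavelet_packet_energy_rate(s) for s in signals])

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
    ranking = np.argsort(clf.feature_importances_)[::-1]  # feature-space optimization
    print("most informative sub-bands:", ranking[:4])
    ```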

  20. Decomposition of Proteins into Dynamic Units from Atomic Cross-Correlation Functions.

    PubMed

    Calligari, Paolo; Gerolin, Marco; Abergel, Daniel; Polimeno, Antonino

    2017-01-10

    In this article, we present a method for clustering atoms in proteins based on the analysis of the correlation times of interatomic distance correlation functions computed from MD simulations. The goal is to provide a coarse-grained description of the protein in terms of fewer elements that can be treated as dynamically independent subunits. Importantly, this domain decomposition method does not take structural properties of the protein into account. Instead, the clustering of protein residues into networks of dynamically correlated domains is defined on the basis of the effective correlation times of the pair-distance correlation functions. In this respect, our method is complementary to the customary decomposition of proteins into quasi-rigid, structure-based domains. Results obtained for a prototypical protein structure illustrate the proposed approach.

  1. Three dimensional empirical mode decomposition analysis apparatus, method and article manufacture

    NASA Technical Reports Server (NTRS)

    Gloersen, Per (Inventor)

    2004-01-01

    An apparatus and method of analysis for three-dimensional (3D) physical phenomena. The physical phenomena may include any varying 3D phenomena such as time-varying polar ice flows. A representation of the 3D phenomena is passed through a Hilbert transform to convert the data into complex form. A spatial variable is separated from the complex representation by producing a time-based covariance matrix. The temporal parts of the principal components are produced by applying Singular Value Decomposition (SVD). Based on the rapidity with which the eigenvalues decay, the first 3-10 complex principal components (CPCs) are selected for Empirical Mode Decomposition into intrinsic modes. The intrinsic modes produced are filtered in order to reconstruct the spatial part of the CPCs. Finally, a filtered time series may be reconstructed from the first 3-10 filtered complex principal components.
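
    The first stages of this pipeline (analytic signal, then SVD to obtain complex principal components) can be sketched as below on a made-up space-time field; the subsequent EMD-based filtering of each CPC is omitted here.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    # Toy space-time field: rows = spatial points, columns = time samples.
    rng = np.random.default_rng(1)
    t = np.linspace(0, 10, 500)
    field = (np.outer(rng.standard_normal(40), np.sin(2 * np.pi * t))
             + 0.1 * rng.standard_normal((40, 500)))

    # Analytic (complex) representation along the time axis.
    analytic = hilbert(field, axis=1)

    # SVD separates spatial patterns (U) from temporal principal components
    # (rows of Vh); the singular values decay fast, so a few CPCs suffice.
    U, s, Vh = np.linalg.svd(analytic, full_matrices=False)
    n_keep = 3                       # the patent keeps roughly the first 3-10
    cpcs = Vh[:n_keep]               # complex temporal principal components
    explained = s[:n_keep] ** 2 / np.sum(s ** 2)
    print("variance captured by first CPCs:", explained.round(3))
    ```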

  2. Structure and decomposition of the silver formate Ag(HCO{sub 2})

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Puzan, Anna N., E-mail: anna_puzan@mail.ru; Baumer, Vyacheslav N.; Mateychenko, Pavel V.

    Crystal structure of the silver formate Ag(HCO{sub 2}) has been determined (orthorhombic, sp. gr. Pccn, a=7.1199(5), b=10.3737(4), c=6.4701(3) Å, V=477.88(4) Å{sup 3}, Z=8). The structure contains isolated formate ions and Ag{sub 2}{sup 2+} pairs which form layers in the (001) planes (the shortest Ag-Ag distance is 2.919 Å within a pair, and 3.421 and 3.716 Å between the nearest Ag atoms of adjacent pairs). Silver formate is an unstable compound that decomposes spontaneously over time. Decomposition was studied using Rietveld analysis of the powder diffraction patterns. It was concluded that the diffusion of Ag atoms leads to the formation of plate-like metal particles as nuclei in the (100) planes, which settle parallel to the (001) planes of the silver formate matrix. - Highlights: • Silver formate Ag(HCO{sub 2}) was synthesized and characterized. • Layered packing of Ag-Ag pairs in the structure was found. • Decomposition of Ag(HCO{sub 2}) and formation of the metal phase were studied. • Rietveld-refined micro-structural characteristics during decomposition reveal the space relationship between the matrix structure and the forming Ag phase.

  3. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
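
    In the standard form such decompositions take, the absorption per wavelength of a gas with N relaxation modes is a sum of single-relaxation terms; the notation here is generic rather than the paper's:

    $$\alpha_\lambda(f)=\sum_{i=1}^{N}\alpha_{\lambda,i}^{\max}\,\frac{2\,(f/f_{r,i})}{1+(f/f_{r,i})^{2}},\qquad f_{r,i}=\frac{1}{2\pi\tau_i},$$

    so that measurements of acoustic absorption and sound speed at 2N frequencies suffice, in principle, to fix the N relaxation times τ_i and N relaxation strengths α_{λ,i}^{max} of the interior single-relaxation processes.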

  4. Rapid determination of particle velocity from space-time images using the Radon transform

    PubMed Central

    Drew, Patrick J.; Blinder, Pablo; Cauwenberghs, Gert; Shih, Andy Y.; Kleinfeld, David

    2016-01-01

    Laser-scanning methods are a means to observe streaming particles, such as the flow of red blood cells in a blood vessel. Typically, particle velocity is extracted from images formed from cyclically repeated line-scan data that is obtained along the center-line of the vessel; motion leads to streaks whose angle is a function of the velocity. Past methods made use of shearing or rotation of the images and a Singular Value Decomposition (SVD) to automatically estimate the average velocity in a temporal window of data. Here we present an alternative method that makes use of the Radon transform to calculate the velocity of streaming particles. We show that this method is over an order of magnitude faster than the SVD-based algorithm and is more robust to noise. PMID:19459038
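
    The idea reduces to a few lines with scikit-image: the variance of the Radon projections peaks when the projection direction is parallel to the streaks. The synthetic image, calibration factors, and the sign convention in the angle-to-velocity conversion below are illustrative and would need to be fixed against a flow of known speed.

    ```python
    import numpy as np
    from skimage.transform import radon

    def streak_angle(img, angles=np.arange(0.0, 180.0, 0.25)):
        # The projection taken parallel to the streaks is maximally "peaky",
        # so the variance along the projection axis is largest there.
        sinogram = radon(img - img.mean(), theta=angles, circle=False)
        return angles[np.argmax(sinogram.var(axis=0))]

    # Synthetic space-time image: rows = line scans (time), cols = space.
    t_idx, x_idx = np.mgrid[0:128, 0:128]
    speed_px = 0.5                                   # pixels of space per line
    img = np.sin(0.4 * (x_idx - speed_px * t_idx))   # tilted streaks

    theta = streak_angle(img)
    um_per_px, ms_per_line = 1.0, 1.0                # assumed calibration
    v = np.tan(np.deg2rad(theta)) * um_per_px / ms_per_line
    print(f"streak angle {theta:.2f} deg -> speed {abs(v):.2f} um/ms")
    ```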

  5. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced-space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique is developed for finding the worst resonance response. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null-space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is thereby accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction in computational cost. Finally, the effectiveness and applicability of the proposed methodology are demonstrated by two numerical examples.

  6. Signal decomposition for surrogate modeling of a constrained ultrasonic design space

    NASA Astrophysics Data System (ADS)

    Homa, Laura; Sparkman, Daniel; Wertz, John; Welter, John; Aldrin, John C.

    2018-04-01

    The U.S. Air Force seeks to improve the methods and measures by which the lifecycle of composite structures is managed. Nondestructive evaluation of damage, particularly internal damage resulting from impact, represents a significant input to that improvement. Conventional ultrasound can detect this damage; however, full 3D characterization has not been demonstrated. A proposed approach for robust characterization uses model-based inversion, fitting simulated results to experimental data. One challenge with this approach is the high computational expense of the forward model used to simulate the ultrasonic B-scans for each damage scenario. A potential solution is to construct a surrogate model from a subset of ultrasonic scans simulated with a highly accurate, computationally expensive forward model. However, the dimensionality of these simulated B-scans makes interpolating between them difficult and potentially infeasible. Thus, we propose using the chirplet decomposition to reduce the dimensionality of the data and allow interpolation in the chirplet parameter space. By applying the chirplet decomposition, we are able to extract the salient features in the data and construct a surrogate forward model.

  7. Power independent EMG based gesture recognition for robotics.

    PubMed

    Li, Ling; Looney, David; Park, Cheolsoo; Rehman, Naveed U; Mandic, Danilo P

    2011-01-01

    A novel method for detecting muscle contraction is presented and further developed to identify four different gestures, facilitating a hand-gesture-controlled robot system. It is based on surface electromyography (EMG) measurements of groups of arm muscles. The cross-information is preserved through simultaneous processing of the EMG channels using a recent multivariate extension of Empirical Mode Decomposition (EMD). Next, phase-synchrony measures are employed to make the system robust to the different power levels caused by electrode placements and impedances. The multiple pairwise muscle synchronies are used as features of a discrete gesture space comprising four gestures (flexion, extension, pronation, supination). Simulations on real-time robot control illustrate the enhanced accuracy and robustness of the proposed methodology.

  8. The Design Manager's Aid for Intelligent Decomposition (DeMAID)

    NASA Technical Reports Server (NTRS)

    Rogers, James L.

    1994-01-01

    Before the design of new complex systems such as large space platforms can begin, the possible interactions among subsystems and their parts must be determined. Once this is completed, the proposed system can be decomposed to identify its hierarchical structure. The design manager's aid for intelligent decomposition (DeMAID) is a knowledge based system for ordering the sequence of modules and identifying a possible multilevel structure for design. Although DeMAID requires an investment of time to generate and refine the list of modules for input, it could save considerable money and time in the total design process, particularly in new design problems where the ordering of the modules has not been defined.

  9. A review of parametric approaches specific to aerodynamic design process

    NASA Astrophysics Data System (ADS)

    Zhang, Tian-tian; Wang, Zhen-guo; Huang, Wei; Yan, Li

    2018-04-01

    Parametric modeling of aircraft plays a crucial role in the aerodynamic design process. Effective parametric approaches span a large design space with few variables. Parametric methods in common use are summarized in this paper, and their principles are introduced briefly. Two-dimensional parametric methods include the B-spline method, the Class/Shape function transformation method, the Parametric Section method, the Hicks-Henne method and the Singular Value Decomposition method, all of which are widely applied in airfoil design. This survey compares their abilities in airfoil design, and the results show that the Singular Value Decomposition method has the best parametric accuracy. The development of three-dimensional parametric methods is more limited, the most popular being the Free-Form Deformation method. Methods extended from two-dimensional parameterizations have promising prospects in aircraft modeling. Since parametric methods differ in their characteristics, a real design process requires a flexible choice among them to suit the subsequent optimization procedure.
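
    As an example of one of the surveyed parameterizations, the Class/Shape function transformation represents a surface as a class function multiplied by a Bernstein-polynomial shape function; a minimal sketch follows (the class exponents follow the usual round-nose airfoil choice, and the coefficient values are arbitrary).

    ```python
    import numpy as np
    from scipy.special import comb

    def cst_airfoil(psi, coeffs, n1=0.5, n2=1.0, dz_te=0.0):
        # psi    : chordwise coordinate x/c in [0, 1]
        # coeffs : Bernstein coefficients (the design variables)
        # n1, n2 : class parameters (0.5, 1.0 gives a round-nose airfoil)
        # dz_te  : trailing-edge thickness term
        n = len(coeffs) - 1
        class_fn = psi ** n1 * (1.0 - psi) ** n2
        shape_fn = sum(a * comb(n, i) * psi ** i * (1.0 - psi) ** (n - i)
                       for i, a in enumerate(coeffs))
        return class_fn * shape_fn + psi * dz_te

    psi = np.linspace(0.0, 1.0, 101)
    upper = cst_airfoil(psi, [0.17, 0.16, 0.15, 0.14])    # illustrative values
    lower = cst_airfoil(psi, [-0.15, -0.10, -0.08, -0.05])
    ```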

  10. Efficient material decomposition method for dual-energy X-ray cargo inspection system

    NASA Astrophysics Data System (ADS)

    Lee, Donghyeon; Lee, Jiseoc; Min, Jonghwan; Lee, Byungcheol; Lee, Byeongno; Oh, Kyungmin; Kim, Jaehyun; Cho, Seungryong

    2018-03-01

    Dual-energy X-ray inspection systems are widely used today because they provide both X-ray attenuation contrast and material information about the imaged object. Material decomposition capability allows higher detection sensitivity for potential targets, including purposely loaded impurities in agricultural product inspections and threats in security scans, for example. Dual-energy X-ray transmission data can be transformed into two basis-material thickness data, and the transformation accuracy relies heavily on the calibration of the material decomposition process. The calibration process in general can be laborious and time-consuming. Moreover, a conventional calibration method is often challenged by the nonuniform spectral characteristics of the X-ray beam across the entire field-of-view (FOV). In this work, we developed an efficient material decomposition calibration process for a linear accelerator (LINAC) based high-energy X-ray cargo inspection system. We also proposed a multi-spot calibration method to improve the decomposition performance throughout the entire FOV. Experimental validation of the proposed method has been demonstrated using a cargo inspection system that supports 6 MV and 9 MV dual-energy imaging.

  11. The Fourier decomposition method for nonlinear and non-stationary time series analysis.

    PubMed

    Singh, Pushpendra; Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik

    2017-03-01

    For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of 'Fourier intrinsic band functions' (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time-frequency-energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms.
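
    The zero-phase filter-bank idea can be illustrated directly in the Fourier domain: slice the spectrum into bands and invert each band separately, so each component is band-limited and the components sum exactly to the data. The sketch below uses fixed cut-offs for brevity, whereas the actual FDM/MFDM selects them adaptively.

    ```python
    import numpy as np

    def fourier_band_components(x, fs, cutoffs):
        # Zero-phase filter bank realized in the Fourier domain: each
        # band-limited component is an (illustrative) FIBF candidate.
        spec = np.fft.rfft(x)
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        edges = [0.0, *cutoffs, np.inf]
        comps = []
        for lo, hi in zip(edges[:-1], edges[1:]):
            band = np.where((freqs >= lo) & (freqs < hi), spec, 0.0)
            comps.append(np.fft.irfft(band, n=len(x)))
        return comps

    fs = 100.0
    t = np.arange(0, 10, 1 / fs)
    x = np.sin(2 * np.pi * 2 * t) + np.sin(2 * np.pi * 11 * t * (1 + 0.02 * t))
    parts = fourier_band_components(x, fs, cutoffs=[6.0])
    print(np.allclose(sum(parts), x))   # True: the decomposition is complete
    ```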

  12. Developing a Complex Independent Component Analysis (CICA) Technique to Extract Non-stationary Patterns from Geophysical Time Series

    NASA Astrophysics Data System (ADS)

    Forootan, Ehsan; Kusche, Jürgen; Talpe, Matthieu; Shum, C. K.; Schmidt, Michael

    2017-12-01

    In recent decades, decomposition techniques have enabled increasingly more applications for dimension reduction, as well as extraction of additional information from geophysical time series. Traditionally, the principal component analysis (PCA)/empirical orthogonal function (EOF) method and, more recently, the independent component analysis (ICA) have been applied to extract statistically orthogonal (uncorrelated) and independent modes, respectively, that represent the maximum variance of time series. PCA and ICA can be classified as stationary signal decomposition techniques since they are based on decomposing the autocovariance matrix and diagonalizing higher (than two) order statistical tensors from centered time series, respectively. However, the stationarity assumption in these techniques is not justified for many geophysical and climate variables even after removing cyclic components, e.g., the commonly removed dominant seasonal cycles. In this paper, we present a novel decomposition method, the complex independent component analysis (CICA), which can be applied to extract non-stationary (changing in space and time) patterns from geophysical time series. Here, CICA is derived as an extension of real-valued ICA, where (a) we first define a new complex dataset that contains the observed time series in its real part and their Hilbert-transformed series in its imaginary part, (b) an ICA algorithm based on diagonalization of fourth-order cumulants is then applied to decompose this complex dataset, and finally (c) the dominant independent complex modes are extracted and used to represent the dominant space and time amplitudes and associated phase-propagation patterns. The performance of CICA is examined by analyzing synthetic data constructed from multiple physically meaningful modes in a simulation framework with known truth. Next, global terrestrial water storage (TWS) data from the Gravity Recovery And Climate Experiment (GRACE) gravimetry mission (2003-2016), and satellite radiometric sea surface temperature (SST) data (1982-2016) over the Atlantic and Pacific Oceans, are used to demonstrate signal separation of the North Atlantic Oscillation (NAO) from the Atlantic Multi-decadal Oscillation (AMO), and of the El Niño Southern Oscillation (ENSO) from the Pacific Decadal Oscillation (PDO). CICA results indicate that ENSO-related patterns can be extracted from GRACE TWS with an accuracy of 0.5-1 cm in terms of equivalent water height (EWH). The magnitude of the error in extracting NAO or AMO from SST data reaches up to 50% of the signal itself with the complex EOF (CEOF) approach, but is reduced to 16% when applying CICA. Larger errors with magnitudes of 100% and 30% of the signal itself are found when separating ENSO from PDO using CEOF and CICA, respectively. We thus conclude that CICA is more effective than CEOF in separating non-stationary patterns.

  13. Lamb Waves Decomposition and Mode Identification Using Matching Pursuit Method

    DTIC Science & Technology

    2009-01-01

    Time-frequency representations considered for Lamb wave analysis include the short-time Fourier transform (STFT), the wavelet transform, the Wigner-Ville distribution (WVD) and matching pursuit (MP) decomposition. The WVD suffers from severe interference terms, called cross-terms, which occupy regions of the time-frequency plane between the true signal components. An MP decomposition using a chirplet dictionary was applied to a simulated S0-mode Lamb wave and its Wigner-Ville distribution was examined.

  14. Accelerating the discovery of space-time patterns of infectious diseases using parallel computing.

    PubMed

    Hohl, Alexander; Delmelle, Eric; Tang, Wenwu; Casas, Irene

    2016-11-01

    Infectious diseases have complex transmission cycles, and effective public health responses require the ability to monitor outbreaks in a timely manner. Space-time statistics facilitate the discovery of disease dynamics including rate of spread and seasonal cyclic patterns, but are computationally demanding, especially for datasets of increasing size, diversity and availability. High-performance computing reduces the effort required to identify these patterns; however, heterogeneity in the data must be accounted for. We develop an adaptive space-time domain decomposition approach for parallel computation of the space-time kernel density. We apply our methodology to individual reported dengue cases from 2010 to 2011 in the city of Cali, Colombia. The parallel implementation reaches significant speedup compared to sequential counterparts. Density values are visualized in an interactive 3D environment, which facilitates the identification and communication of uneven space-time distribution of disease events. Our framework has the potential to enhance the timely monitoring of infectious diseases.

  15. An asymptotic induced numerical method for the convection-diffusion-reaction equation

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.; Sorensen, Danny C.

    1988-01-01

    A parallel algorithm for the efficient solution of a time dependent reaction convection diffusion equation with small parameter on the diffusion term is presented. The method is based on a domain decomposition that is dictated by singular perturbation analysis. The analysis is used to determine regions where certain reduced equations may be solved in place of the full equation. Parallelism is evident at two levels. Domain decomposition provides parallelism at the highest level, and within each domain there is ample opportunity to exploit parallelism. Run time results demonstrate the viability of the method.

  16. An Aquatic Decomposition Scoring Method to Potentially Predict the Postmortem Submersion Interval of Bodies Recovered from the North Sea.

    PubMed

    van Daalen, Marjolijn A; de Kat, Dorothée S; Oude Grotebevelsborg, Bernice F L; de Leeuwe, Roosje; Warnaar, Jeroen; Oostra, Roelof Jan; M Duijst-Heesters, Wilma L J

    2017-03-01

    This study aimed to develop an aquatic decomposition scoring (ADS) method and investigated the predictive value of this method in estimating the postmortem submersion interval (PMSI) of bodies recovered from the North Sea. This method, consisting of an ADS item list and a pictorial reference atlas, showed a high interobserver agreement (Krippendorff's alpha ≥ 0.93) and hence proved to be valid. The scoring method was applied to data collected from closed cases (cases in which the PMSI was known) concerning bodies recovered from the North Sea from 1990 to 2013. Thirty-eight cases met the inclusion criteria and were scored by quantifying the observed total aquatic decomposition score (TADS). Statistical analysis demonstrated that TADS accurately predicts the PMSI (p < 0.001), confirming that the decomposition process in the North Sea is strongly correlated to time.

  17. Wavelet bases on the L-shaped domain

    NASA Astrophysics Data System (ADS)

    Jouini, Abdellatif; Lemarié-Rieusset, Pierre Gilles

    2013-07-01

    We present in this paper two elementary constructions of multiresolution analyses on the L-shaped domain D. In the first, we describe a direct method to define an orthonormal multiresolution analysis. In the second, we use the decomposition method to construct a biorthogonal multiresolution analysis. These analyses are adapted to the study of the Sobolev spaces H^s(D) (s ∈ N).

  18. A Short Proof of the Large Time Energy Growth for the Boussinesq System

    NASA Astrophysics Data System (ADS)

    Brandolese, Lorenzo; Mouzouni, Charafeddine

    2017-10-01

    We give a direct proof of the fact that the L^p-norms of global solutions of the Boussinesq system in R^3 grow large as t→ ∞ for 1

  19. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    NASA Technical Reports Server (NTRS)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: Classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and spacefilling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.

  20. Stochastic methods for analysis of power flow in electric networks

    NASA Astrophysics Data System (ADS)

    1982-09-01

    The modeling and effects of probabilistic behavior on steady-state power system operation were analyzed. A solution to the steady-state network flow equations that adheres both to Kirchhoff's laws and to probabilistic laws was obtained using either combinatorial or functional approximation techniques. The development of sound techniques for producing meaningful input data is examined. Electric demand modeling, equipment failure analysis, and algorithm development are investigated. Two major development areas are described: a decomposition of stochastic processes which gives stationarity, ergodicity, and even normality; and a powerful surrogate probability approach using proportions of time which allows the calculation of joint events from one-dimensional probability spaces.

  1. Utilization of a balanced steady state free precession signal model for improved fat/water decomposition.

    PubMed

    Henze Bancroft, Leah C; Strigel, Roberta M; Hernando, Diego; Johnson, Kevin M; Kelcz, Frederick; Kijowski, Richard; Block, Walter F

    2016-03-01

    Chemical shift based fat/water decomposition methods such as IDEAL are frequently used in challenging imaging environments with large B0 inhomogeneity. However, they do not account for the signal modulations introduced by a balanced steady-state free precession (bSSFP) acquisition. Here we demonstrate improved performance when the bSSFP frequency response is properly incorporated into the multipeak spectral fat model used in the decomposition process. Balanced SSFP allows for rapid imaging but also introduces a characteristic frequency response featuring periodic nulls and pass bands. Fat spectral components in adjacent pass bands experience bulk phase offsets and magnitude modulations that change the expected constructive and destructive interference between the fat spectral components. A bSSFP signal model was incorporated into the fat/water decomposition process and used to generate images of a fat phantom, and bilateral breast and knee images in four normal volunteers at 1.5 Tesla. Incorporation of the bSSFP signal model improved the performance of the fat/water decomposition and allows rapid bSSFP imaging sequences to use robust fat/water decomposition methods such as IDEAL. While only one set of imaging parameters was presented, the method is compatible with any field strength or repetition time.

  2. The finite scaling for S = 1 XXZ chains with uniaxial single-ion-type anisotropy

    NASA Astrophysics Data System (ADS)

    Wang, Honglei; Xiong, Xingliang

    2014-03-01

    The scaling behavior of criticality for spin-1 XXZ chains with uniaxial single-ion-type anisotropy is investigated by employing the infinite matrix product state representation with the infinite time-evolving block decimation (iTEBD) method. At criticality, the accuracy of the ground state of a system is limited by the truncation dimension χ of the local Hilbert space. We present four pieces of evidence for the scaling of the entanglement entropy, the largest eigenvalue of the Schmidt decomposition, the correlation length, and the connection between the actual correlation length ξ and the energy. The results show that the finite scalings are governed by the central charge of the critical system, and demonstrate that the iTEBD algorithm with the infinite matrix product state representation can simulate critical properties quite accurately.

  3. High performance computation of radiative transfer equation using the finite element method

    NASA Astrophysics Data System (ADS)

    Badri, M. A.; Jolivet, P.; Rousseau, B.; Favennec, Y.

    2018-05-01

    This article deals with an efficient strategy for numerically simulating radiative transfer phenomena using distributed computing. The finite element method alongside the discrete ordinate method is used for spatio-angular discretization of the monochromatic steady-state radiative transfer equation in an anisotropically scattering media. Two very different methods of parallelization, angular and spatial decomposition methods, are presented. To do so, the finite element method is used in a vectorial way. A detailed comparison of scalability, performance, and efficiency on thousands of processors is established for two- and three-dimensional heterogeneous test cases. Timings show that both algorithms scale well when using proper preconditioners. It is also observed that our angular decomposition scheme outperforms our domain decomposition method. Overall, we perform numerical simulations at scales that were previously unattainable by standard radiative transfer equation solvers.

  4. River flow prediction using hybrid models of support vector regression with the wavelet transform, singular spectrum analysis and chaotic approach

    NASA Astrophysics Data System (ADS)

    Baydaroğlu, Özlem; Koçak, Kasım; Duran, Kemal

    2018-06-01

    Prediction of water amount that will enter the reservoirs in the following month is of vital importance especially for semi-arid countries like Turkey. Climate projections emphasize that water scarcity will be one of the serious problems in the future. This study presents a methodology for predicting river flow for the subsequent month based on the time series of observed monthly river flow with hybrid models of support vector regression (SVR). Monthly river flow over the period 1940-2012 observed for the Kızılırmak River in Turkey has been used for training the method, which then has been applied for predictions over a period of 3 years. SVR is a specific implementation of support vector machines (SVMs), which transforms the observed input data time series into a high-dimensional feature space (input matrix) by way of a kernel function and performs a linear regression in this space. SVR requires a special input matrix. The input matrix was produced by wavelet transforms (WT), singular spectrum analysis (SSA), and a chaotic approach (CA) applied to the input time series. WT convolutes the original time series into a series of wavelets, and SSA decomposes the time series into a trend, an oscillatory and a noise component by singular value decomposition. CA uses a phase space formed by trajectories, which represent the dynamics producing the time series. These three methods for producing the input matrix for the SVR proved successful, while the SVR-WT combination resulted in the highest coefficient of determination and the lowest mean absolute error.
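
    Of the three input-matrix constructions, the chaotic approach is the simplest to sketch: a time-delay (phase space) embedding supplies the SVR features and the value one step ahead is the target. The embedding dimension, delay, SVR hyperparameters and synthetic series below are illustrative, not the study's settings.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def embed(series, dim=6, tau=1):
        # Each row of X is a point on the reconstructed trajectory; y is
        # the value one step (here, one month) ahead of that point.
        n = len(series) - (dim - 1) * tau - 1
        X = np.array([series[i : i + dim * tau : tau] for i in range(n)])
        y = series[(dim - 1) * tau + 1 : (dim - 1) * tau + 1 + n]
        return X, y

    rng = np.random.default_rng(2)
    flow = np.sin(np.arange(500) * 2 * np.pi / 12) + 0.1 * rng.standard_normal(500)

    X, y = embed(flow)
    model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:-36], y[:-36])
    pred = model.predict(X[-36:])              # 3-year hold-out, as in the study
    print("RMSE:", np.sqrt(np.mean((pred - y[-36:]) ** 2)).round(3))
    ```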

  5. Definition of a parametric form of nonsingular Mueller matrices.

    PubMed

    Devlaminck, Vincent; Terrier, Patrick

    2008-11-01

    The goal of this paper is to propose a mathematical framework to define and analyze a general parametric form of an arbitrary nonsingular Mueller matrix. Starting from previous results about nondepolarizing matrices, we generalize the method to any nonsingular Mueller matrix. We address this problem in a six-dimensional space in order to introduce a transformation group with the same number of degrees of freedom and explain why subsets of O(5,1), the orthogonal group associated with six-dimensional Minkowski space, is a physically admissible solution to this question. Generators of this group are used to define possible expressions of an arbitrary nonsingular Mueller matrix. Ultimately, the problem of decomposition of these matrices is addressed, and we point out that the "reverse" and "forward" decomposition concepts recently introduced may be inferred from the formalism we propose.

  6. Computer implemented empirical mode decomposition method apparatus, and article of manufacture utilizing curvature extrema

    NASA Technical Reports Server (NTRS)

    Shen, Zheng (Inventor); Huang, Norden Eh (Inventor)

    2003-01-01

    A computer implemented physical signal analysis method that includes two essential steps and the associated techniques for presenting the results. All the steps exist only in a computer: there are no analytic expressions resulting from the method. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMFs) from nonlinear, nonstationary physical signals, based on local extrema and curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMFs, the data have well-behaved Hilbert transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert transform; the final result is the Hilbert spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMFs; these IMFs, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMFs through the Hilbert transform give a full energy-frequency-time distribution of the data, which is designated the Hilbert Spectrum.
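
    Both steps can be reproduced with off-the-shelf tools; the sketch below uses the community PyEMD package (installed as EMD-signal) for the sifting and SciPy's Hilbert transform for the instantaneous quantities, on a made-up two-tone signal.

    ```python
    import numpy as np
    from PyEMD import EMD              # community package: pip install EMD-signal
    from scipy.signal import hilbert

    fs = 200.0
    t = np.arange(0, 5, 1 / fs)
    x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)

    imfs = EMD()(x)                    # step 1: sift the data into IMFs

    # Step 2: Hilbert transform of each IMF -> instantaneous amplitude and
    # frequency, the ingredients of the Hilbert spectrum.
    for k, imf in enumerate(imfs):
        z = hilbert(imf)
        amp = np.abs(z)
        inst_f = np.diff(np.unwrap(np.angle(z))) * fs / (2 * np.pi)
        print(f"IMF {k}: mean instantaneous frequency {inst_f.mean():.1f} Hz")
    ```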

  7. Plasmonic Thermal Decomposition/Digestion of Proteins: A Rapid On-Surface Protein Digestion Technique for Mass Spectrometry Imaging.

    PubMed

    Zhou, Rong; Basile, Franco

    2017-09-05

    A method based on plasmon surface resonance absorption and heating was developed to perform a rapid on-surface protein thermal decomposition and digestion suitable for imaging mass spectrometry (MS) and/or profiling. This photothermal process or plasmonic thermal decomposition/digestion (plasmonic-TDD) method incorporates a continuous wave (CW) laser excitation and gold nanoparticles (Au-NPs) to induce known thermal decomposition reactions that cleave peptides and proteins specifically at the C-terminus of aspartic acid and at the N-terminus of cysteine. These thermal decomposition reactions are induced by heating a solid protein sample to temperatures between 200 and 270 °C for a short period of time (10-50 s per 200 μm segment) and are reagentless and solventless, and thus are devoid of sample product delocalization. In the plasmonic-TDD setup the sample is coated with Au-NPs and irradiated with 532 nm laser radiation to induce thermoplasmonic heating and bring about site-specific thermal decomposition on solid peptide/protein samples. In this manner the Au-NPs act as nanoheaters that result in a highly localized thermal decomposition and digestion of the protein sample that is independent of the absorption properties of the protein, making the method universally applicable to all types of proteinaceous samples (e.g., tissues or protein arrays). Several experimental variables were optimized to maximize product yield, and they include heating time, laser intensity, size of Au-NPs, and surface coverage of Au-NPs. Using optimized parameters, proof-of-principle experiments confirmed the ability of the plasmonic-TDD method to induce both C-cleavage and D-cleavage on several peptide standards and the protein lysozyme by detecting their thermal decomposition products with matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). The high spatial specificity of the plasmonic-TDD method was demonstrated by using a mask to digest designated sections of the sample surface with the heating laser and MALDI-MS imaging to map the resulting products. The solventless nature of the plasmonic-TDD method enabled the nonenzymatic on-surface digestion of proteins to proceed with undetectable delocalization of the resulting products from their precursor protein location. The advantages of this novel plasmonic-TDD method include short reaction times (<30 s/200 μm), compatibility with MALDI, universal sample compatibility, high spatial specificity, and localization of the digestion products. These advantages point to potential applications of this method for on-tissue protein digestion and MS-imaging/profiling for the identification of proteins, high-fidelity MS imaging of high molecular weight (>30 kDa) proteins, and the rapid analysis of formalin-fixed paraffin-embedded (FFPE) tissue samples.

  8. A Decomposition Method for Security Constrained Economic Dispatch of a Three-Layer Power System

    NASA Astrophysics Data System (ADS)

    Yang, Junfeng; Luo, Zhiqiang; Dong, Cheng; Lai, Xiaowen; Wang, Yang

    2018-01-01

    This paper proposes a new decomposition method for the security-constrained economic dispatch in a three-layer large-scale power system. The decomposition is realized using two main techniques. The first is to use Ward equivalencing-based network reduction to reduce the number of variables and constraints in the high-layer model without sacrificing accuracy. The second is to develop a price response function to exchange signal information between neighboring layers, which significantly improves the information exchange efficiency of each iteration and results in less iterations and less computational time. The case studies based on the duplicated RTS-79 system demonstrate the effectiveness and robustness of the proposed method.

  9. Native conflict awared layout decomposition in triple patterning lithography using bin-based library matching method

    NASA Astrophysics Data System (ADS)

    Ke, Xianhua; Jiang, Hao; Lv, Wen; Liu, Shiyuan

    2016-03-01

    Triple patterning (TP) lithography becomes a feasible technology for manufacturing as feature sizes scale down to sub-14/10 nm nodes. In TP, a layout is decomposed into three masks, each followed by its own exposure and etch/freeze process. Previous works mostly focus on layout decomposition with minimal conflicts and stitches simultaneously. However, since any native conflict forces layout re-design/modification and a re-run of the time-consuming decomposition, an effective method that detects native conflicts (NCs) in a layout is desirable. In this paper, a bin-based library-matching method is proposed for NC detection and layout decomposition. First, a layout is divided into bins and the corresponding conflict graph in each bin is constructed. Then, we match the conflict graph against a prebuilt colored library, and as a result the NCs can be located and highlighted quickly.

  10. The Fourier decomposition method for nonlinear and non-stationary time series analysis

    PubMed Central

    Joshi, Shiv Dutt; Patney, Rakesh Kumar; Saha, Kaushik

    2017-01-01

    For many decades, there has been a general perception in the literature that Fourier methods are not suitable for the analysis of nonlinear and non-stationary data. In this paper, we propose a novel and adaptive Fourier decomposition method (FDM), based on the Fourier theory, and demonstrate its efficacy for the analysis of nonlinear and non-stationary time series. The proposed FDM decomposes any data into a small number of ‘Fourier intrinsic band functions’ (FIBFs). The FDM presents a generalized Fourier expansion with variable amplitudes and variable frequencies of a time series by the Fourier method itself. We propose an idea of zero-phase filter bank-based multivariate FDM (MFDM), for the analysis of multivariate nonlinear and non-stationary time series, using the FDM. We also present an algorithm to obtain cut-off frequencies for MFDM. The proposed MFDM generates a finite number of band-limited multivariate FIBFs (MFIBFs). The MFDM preserves some intrinsic physical properties of the multivariate data, such as scale alignment, trend and instantaneous frequency. The proposed methods provide a time–frequency–energy (TFE) distribution that reveals the intrinsic structure of the data. Numerical computations and simulations have been carried out and comparison is made with the empirical mode decomposition algorithms. PMID:28413352

  11. Production of continuous mullite fiber via sol-gel processing

    NASA Technical Reports Server (NTRS)

    Tucker, Dennis S.; Sparks, J. Scott; Esker, David C.

    1990-01-01

    The development of a continuous ceramic fiber for use in rocket engine and booster applications was investigated at the Marshall Space Flight Center. Methods of ceramic fiber production such as melt spinning, chemical vapor deposition, and precursor polymeric fiber decomposition are discussed and compared with sol-gel processing. The production of ceramics via the sol-gel method consists of two steps, hydrolysis and polycondensation, to form the preceramic, followed by consolidation into the glass or ceramic structure. The advantages of the sol-gel method include better homogeneity and purity, lower preparation temperature, and the ability to form unique compositions. The disadvantages are the high cost of raw materials, large shrinkage during drying and firing, which can lead to cracks, and long processing times. Preparation procedures for aluminosilicate sol-gel and for continuous mullite fibers are described.

  12. A data-driven method to enhance vibration signal decomposition for rolling bearing fault analysis

    NASA Astrophysics Data System (ADS)

    Grasso, M.; Chatterton, S.; Pennacchi, P.; Colosimo, B. M.

    2016-12-01

    Health condition analysis and diagnostics of rotating machinery requires the capability of properly characterizing the information content of sensor signals in order to detect and identify possible fault features. Time-frequency analysis plays a fundamental role, as it allows determining both the existence and the causes of a fault. The separation of components belonging to different time-frequency scales, whether associated with healthy or faulty conditions, represents a challenge that motivates the development of effective methodologies for multi-scale signal decomposition. In this framework, the Empirical Mode Decomposition (EMD) is a flexible tool, thanks to its data-driven and adaptive nature. However, the EMD usually yields an over-decomposition of the original signals into a large number of intrinsic mode functions (IMFs). The selection of the most relevant IMFs is a challenging task, and the literature lacks automated methods for achieving a synthetic decomposition into a few physically meaningful modes while avoiding the generation of spurious or meaningless modes. This paper proposes a novel automated approach aimed at generating a decomposition into a minimal number of relevant modes, called Combined Mode Functions (CMFs), each consisting of a sum of adjacent IMFs that share similar properties. The final number of CMFs is selected in a fully data-driven way, leading to an enhanced characterization of the signal content without any information loss. A novel criterion to assess the dissimilarity between adjacent CMFs is proposed, based on the probability density functions of their frequency spectra. The method is suitable for analyzing vibration signals that may be periodically acquired within the operating life of rotating machinery. A rolling element bearing fault analysis based on experimental data is presented to demonstrate the performance of the method and the benefits it provides.

  13. Distance descending ordering method: An O(n) algorithm for inverting the mass matrix in simulation of macromolecules with long branches

    NASA Astrophysics Data System (ADS)

    Xu, Xiankun; Li, Peiwen

    2017-11-01

    Fixman's work in 1974 and the follow-up studies developed a method that factorizes the inverse of the mass matrix into an arithmetic combination of three sparse matrices, one of which is positive definite and must be further factorized by the Cholesky decomposition or similar methods. When the molecule under study has a serial chain structure, this method achieves O(n) time complexity. However, for molecules with long branches, the Cholesky decomposition of the corresponding positive definite matrix introduces massive fill-in due to its nonzero structure. Although several methods can be used to reduce fill-in, none of them strictly guarantees zero fill-in for all molecules according to our tests, and thus O(n) time complexity cannot be obtained with these traditional methods. In this paper we present a new method that guarantees no fill-in in the Cholesky decomposition, developed from the correlations between the mass matrix and the geometrical structure of molecules. As a result, inversion of the mass matrix retains O(n) time complexity whether or not the molecule structure has long branches.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Besse, Nicolas; Latu, Guillaume; Ghizzo, Alain

    In this paper we present a new method for the numerical solution of the relativistic Vlasov-Maxwell system on a phase-space grid using an adaptive semi-Lagrangian method. The adaptivity is performed through a wavelet multiresolution analysis, which gives a powerful and natural refinement criterion based on the local measurement of the approximation error and regularity of the distribution function. Therefore, the multiscale expansion of the distribution function allows one to obtain a sparse representation of the data and thus save memory space and CPU time. We apply this numerical scheme to reduced Vlasov-Maxwell systems arising in laser-plasma physics. Interaction of relativistically strong laser pulses with overdense plasma slabs is investigated. These Vlasov simulations revealed a rich variety of phenomena associated with the fast particle dynamics induced by electromagnetic waves, such as electron trapping, particle acceleration, and electron plasma wavebreaking. However, the wavelet-based adaptive method that we developed here does not yield significant improvements compared to Vlasov solvers on a uniform mesh, due to the substantial overhead that the method introduces. Nonetheless it might be a first step towards more efficient adaptive solvers based on different ideas for the grid refinement or on a more efficient implementation. Here the Vlasov simulations are performed in a two-dimensional phase space where the development of thin filaments, strongly amplified by relativistic effects, requires an important increase of the total number of points of the phase-space grid as they get finer as time goes on. The adaptive method could be more useful in cases where the thin filaments that need to be resolved are a very small fraction of the hyper-volume, which arises in higher dimensions because of the surface-to-volume scaling and the essentially one-dimensional structure of the filaments. Moreover, the main way to improve the efficiency of the adaptive method is to increase the local character in phase space of the numerical scheme, by considering multiscale reconstructions with more compact support and by replacing the semi-Lagrangian method with more local (in space) numerical schemes such as compact finite difference schemes, the discontinuous-Galerkin method or finite element residual schemes, which are well suited for parallel domain decomposition techniques.

  15. WAKES: Wavelet Adaptive Kinetic Evolution Solvers

    NASA Astrophysics Data System (ADS)

    Mardirian, Marine; Afeyan, Bedros; Larson, David

    2016-10-01

    We are developing a general capability to adaptively solve phase space evolution equations, mixing particle and continuum techniques. The multi-scale approach is achieved using wavelet decompositions, which allow phase space density estimation with scale-dependent increased accuracy and variable time stepping. Possible improvements on the SFK method of Larson are discussed, including the use of multiresolution-analysis-based Richardson-Lucy iteration and adaptive step-size control in explicit vs. implicit approaches. Examples will be shown with KEEN waves and KEEPN (Kinetic Electrostatic Electron Positron Nonlinear) waves, which are the pair-plasma generalization of the former and have a much richer span of dynamical behavior. WAKES techniques are well suited for the study of driven and released nonlinear, non-stationary, self-organized structures in phase space which have no fluid limit nor a linear limit, and yet remain undamped and coherent well past the drive period. The work reported here is based on the Vlasov-Poisson model of plasma dynamics. Work supported by a grant from the AFOSR.

  16. Component isolation for multi-component signal analysis using a non-parametric gaussian latent feature model

    NASA Astrophysics Data System (ADS)

    Yang, Yang; Peng, Zhike; Dong, Xingjian; Zhang, Wenming; Clifton, David A.

    2018-03-01

    A challenge in analysing non-stationary multi-component signals is to isolate nonlinearly time-varying signal components, especially when they overlap in the time-frequency plane. In this paper, a framework integrating time-frequency-analysis-based demodulation and a non-parametric Gaussian latent feature model is proposed to isolate and recover the components of such signals. The former aims to remove high-order frequency modulation (FM) such that the latter is able to infer the demodulated components while simultaneously discovering the number of target components. The proposed method is effective in isolating multiple components that have the same FM behavior. In addition, the results show that the proposed method is superior to a generalised-demodulation method with singular value decomposition, a parametric time-frequency analysis method with filtering, and an empirical mode decomposition-based method in recovering the amplitude and phase of superimposed components.

  17. Delineating gas bearing reservoir by using spectral decomposition attribute: Case study of Steenkool formation, Bintuni Basin

    NASA Astrophysics Data System (ADS)

    Haris, A.; Pradana, G. S.; Riyanto, A.

    2017-07-01

    The tectonic setting of the Bird's Head of Papua Island is an important model for petroleum systems in the eastern part of Indonesia. Exploration in the region dates back to the oil seepage findings in the Bintuni and Salawati Basins. Biogenic gas in shallow layers has become an interesting issue in hydrocarbon exploration: the appearance of hydrocarbon accumulations of dry-gas type in a shallow layer makes biogenic gas appealing for further research. This paper aims at delineating the sweet-spot hydrocarbon potential in a shallow layer by applying the spectral decomposition technique. Spectral decomposition decomposes the seismic signal into individual frequency components that carry significant geological meaning. One spectral decomposition method is the Continuous Wavelet Transform (CWT), which maps the seismic signal into time and frequency simultaneously and thereby simplifies time-frequency map analysis; as time resolution increases, frequency resolution decreases, and vice versa. In this study, we perform a low-frequency shadow-zone analysis in which the amplitude anomaly at a low frequency of 15 Hz is observed and then compared with the amplitudes at mid (20 Hz) and high (30 Hz) frequencies. The amplitude anomaly observed at the low frequency disappears at the high frequency, which indicates a low-frequency shadow. The spectral decomposition using the CWT algorithm was thus successfully applied to delineate the sweet-spot zone.
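
    As a rough illustration of the workflow described above, the sketch below computes a complex Morlet CWT of a single seismic trace and extracts the amplitude at 15, 20, and 30 Hz. The trace, sampling rate, and wavelet width are hypothetical placeholders, not values from the study.

```python
import numpy as np

def morlet_cwt(trace, fs, freqs, w0=6.0):
    """Complex Morlet CWT of a 1-D trace, evaluated at the given frequencies (Hz)."""
    n = len(trace)
    t = (np.arange(n) - n // 2) / fs          # wavelet support centered at zero
    out = np.empty((len(freqs), n), dtype=complex)
    for i, f in enumerate(freqs):
        s = w0 / (2 * np.pi * f)              # scale giving center frequency f
        wavelet = np.exp(1j * w0 * t / s) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s)                 # energy normalization across scales
        out[i] = np.convolve(trace, np.conj(wavelet[::-1]), mode="same")
    return out

fs = 500.0                                    # hypothetical sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
trace = np.sin(2 * np.pi * 15 * t) * np.exp(-t) + 0.3 * np.sin(2 * np.pi * 30 * t)
coeffs = morlet_cwt(trace, fs, freqs=[15.0, 20.0, 30.0])
amplitude = np.abs(coeffs)                    # time-frequency amplitude maps
print(amplitude.shape)                        # (3, n_samples): one row per frequency
```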

  18. High-frequency Total Focusing Method (TFM) imaging in strongly attenuating materials with the decomposition of the time reversal operator associated with orthogonal coded excitations

    NASA Astrophysics Data System (ADS)

    Villaverde, Eduardo Lopez; Robert, Sébastien; Prada, Claire

    2017-02-01

    In the present work, the Total Focusing Method (TFM) is used to image defects in a High Density Polyethylene (HDPE) pipe. The viscoelastic attenuation of this material corrupts the images with high electronic noise. In order to improve the image quality, Decomposition of the Time Reversal Operator (DORT) filtering is combined with spatial Walsh-Hadamard coded transmissions before calculating the images. Experiments on a complex HDPE joint demonstrate that this method improves the signal-to-noise ratio by more than 40 dB in comparison with the conventional TFM.

  19. Statistical properties and time-frequency analysis of temperature, salinity and turbidity measured by the MAREL Carnot station in the coastal waters of Boulogne-sur-Mer (France)

    NASA Astrophysics Data System (ADS)

    Kbaier Ben Ismail, Dhouha; Lazure, Pascal; Puillat, Ingrid

    2016-10-01

    In marine sciences, many fields display high variability over a large range of spatial and temporal scales, from seconds to thousands of years. The long time series recorded in this field, with ever-increasing sampling frequencies, are often nonlinear, nonstationary, multiscale and noisy. Their analysis faces new challenges and thus requires the implementation of adequate and specific methods. The objective of this paper is to bring time series analysis methods already applied in econometrics, signal processing, health sciences, etc. to the environmental marine domain, to assess their advantages and drawbacks, and to compare classical techniques with more recent ones. Temperature, turbidity and salinity are important quantities for ecosystem studies. The authors here consider the fluctuations of sea level, salinity, turbidity and temperature recorded from the MAREL Carnot system of Boulogne-sur-Mer (France), a moored buoy equipped with physico-chemical measuring devices working in continuous and autonomous conditions. In order to perform adequate statistical and spectral analyses, it is necessary to know the nature of the considered time series. For this purpose, the stationarity of the series and the occurrence of unit roots are addressed with Augmented Dickey-Fuller tests. For example, harmonic analysis is not relevant for temperature, turbidity and salinity because of their nonstationarity, the nearly stationary sea level datasets being the exception. In order to identify the dominant frequencies associated with the dynamics, the large amount of data provided by the sensors enables Fourier spectral analysis. Different power spectra show a complex variability and reveal an influence of environmental factors such as tides. However, classical spectral analysis, namely the Blackman-Tukey method, requires not only linear and stationary data but also evenly spaced data, and interpolating the time series introduces numerous artifacts into the data. The Lomb-Scargle algorithm is adapted to unevenly spaced data and is used as an alternative. The limits of the method are also set out: beyond 50% missing measurements, few significant frequencies are detected, several seasonalities are no longer visible, and a whole range of high frequencies progressively disappears. Furthermore, two time-frequency decomposition methods, namely wavelets and the Hilbert-Huang Transformation (HHT), are applied for the analysis of the entire dataset. Using the Continuous Wavelet Transform (CWT), some properties of the time series are determined. Then, the inertial wave and several low-frequency tidal waves are identified by the application of the Empirical Mode Decomposition (EMD). Finally, EMD-based Time Dependent Intrinsic Correlation (TDIC) analysis is applied to consider the correlation between two nonstationary time series.
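
    As a minimal sketch of the Lomb-Scargle alternative mentioned above for unevenly spaced records, the snippet below uses scipy.signal.lombscargle on a synthetic gappy series; the tidal frequency and gap fraction are illustrative assumptions, not the MAREL Carnot data.

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(0)
t_full = np.arange(0, 60 * 24.0, 0.5)                 # 60 days, half-hour sampling (hours)
keep = rng.random(t_full.size) > 0.5                  # ~50% missing measurements
t = t_full[keep]
# Synthetic "sea level" with an M2-like tidal component (period ~12.42 h) plus noise
y = np.sin(2 * np.pi * t / 12.42) + 0.5 * rng.standard_normal(t.size)

periods = np.linspace(6.0, 30.0, 2000)                # candidate periods in hours
omega = 2 * np.pi / periods                           # lombscargle expects angular freqs
pgram = lombscargle(t, y - y.mean(), omega, normalize=True)
print("detected period ~ %.2f h" % periods[np.argmax(pgram)])
```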

  20. Within outlying mean indexes: refining the OMI analysis for the realized niche decomposition.

    PubMed

    Karasiewicz, Stéphane; Dolédec, Sylvain; Lefebvre, Sébastien

    2017-01-01

    The ecological niche concept has regained interest under environmental change (e.g., climate change, eutrophication, and habitat destruction), especially for studying impacts on niche shift and conservatism. Here, we propose the within outlying mean indexes (WitOMI), which refine the outlying mean index (OMI) analysis by using its properties in combination with the K-select analysis decomposition of species marginality. The purpose is to decompose the ecological niche into subniches associated with the experimental design, i.e., taking into account temporal and/or spatial subsets. WitOMI emphasize the habitat conditions that contribute (1) to the definition of species' niches using all available conditions and, at the same time, (2) to the delineation of species' subniches according to given subsets of dates or sites. The latter aspect allows addressing niche dynamics by highlighting the influence of atypical habitat conditions on species at a given time and/or place. Then, (3) the biological constraint exerted on the species subniche becomes observable within Euclidean space as the difference between the existing fundamental subniche and the realized subniche. We illustrate the decomposition of published OMI analyses using spatial and temporal examples. The subniches of a species assemblage are comparable along the same environmental gradient, producing a more accurate and precise description of the assemblage niche distribution under environmental change. The WitOMI calculations are available in the open-access R package "subniche."

  1. Empirical Mode Decomposition and k-Nearest Embedding Vectors for Timely Analyses of Antibiotic Resistance Trends

    PubMed Central

    Teodoro, Douglas; Lovis, Christian

    2013-01-01

    Background Antibiotic resistance is a major worldwide public health concern. In clinical settings, timely antibiotic resistance information is key for care providers as it allows appropriate targeted treatment or improved empirical treatment when the specific results of the patient are not yet available. Objective To improve antibiotic resistance trend analysis algorithms by building a novel, fully data-driven forecasting method from the combination of trend extraction and machine learning models for enhanced biosurveillance systems. Methods We investigate a robust model for extraction and forecasting of antibiotic resistance trends using a decade of microbiology data. Our method consists of breaking down the resistance time series into independent oscillatory components via the empirical mode decomposition technique. The resulting waveforms describing intrinsic resistance trends serve as the input for the forecasting algorithm. The algorithm applies the delay coordinate embedding theorem together with the k-nearest neighbor framework to project mappings from past events into the future dimension and estimate the resistance levels. Results The algorithms that decompose the resistance time series and filter out high frequency components showed statistically significant performance improvements in comparison with a benchmark random walk model. We present further qualitative use-cases of antibiotic resistance trend extraction, where empirical mode decomposition was applied to highlight the specificities of the resistance trends. Conclusion The decomposition of the raw signal was found not only to yield valuable insight into the resistance evolution, but also to produce novel models of resistance forecasters with boosted prediction performance, which could be utilized as a complementary method in the analysis of antibiotic resistance trends. PMID:23637796
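
    The delay-coordinate embedding plus k-nearest-neighbor forecasting step described above can be sketched as follows; the embedding dimension, delay, and the synthetic trend series are illustrative assumptions, and the EMD preprocessing stage is omitted.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

def knn_forecast(series, dim=4, delay=1, k=5, horizon=1):
    """Forecast via delay-coordinate embedding and k-nearest neighbors.

    Each embedded vector (x[t-(dim-1)*delay], ..., x[t]) is mapped to the
    value observed `horizon` steps later; the forecast averages the images
    of the k nearest past embeddings."""
    X, y = [], []
    start = (dim - 1) * delay
    for t in range(start, len(series) - horizon):
        X.append(series[t - start : t + 1 : delay])
        y.append(series[t + horizon])
    model = KNeighborsRegressor(n_neighbors=k).fit(np.array(X), np.array(y))
    last = series[len(series) - 1 - start :: delay][:dim]
    return model.predict(np.array(last).reshape(1, -1))[0]

# Hypothetical smoothed resistance trend (one value per month)
rng = np.random.default_rng(1)
trend = np.sin(np.linspace(0, 8 * np.pi, 120)) + 0.05 * rng.standard_normal(120)
print("next-month estimate:", knn_forecast(trend))
```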

  2. Time-varying singular value decomposition for periodic transient identification in bearing fault diagnosis

    NASA Astrophysics Data System (ADS)

    Zhang, Shangbin; Lu, Siliang; He, Qingbo; Kong, Fanrang

    2016-09-01

    For rotating machines, bearing defects generally manifest as periodic transient impulses in the acquired signals, and the extraction of transient features from signals has been a key issue for fault diagnosis. In practice, however, background noise degrades the identification of periodic faults. This paper proposes a time-varying singular value decomposition (TSVD) method to enhance the identification of periodic faults. The proposed method is inspired by the sliding window method: by applying singular value decomposition (SVD) to the signal under a sliding window, we obtain a time-varying singular value matrix (TSVM). Each column of the TSVM holds the singular values of the corresponding sliding window, and each row represents an intrinsic structure of the raw signal, namely a time-singular-value-sequence (TSVS). Theoretical and experimental analyses show that the frequency of the TSVS is exactly twice that of the corresponding intrinsic structure. Moreover, the signal-to-noise ratio (SNR) of the TSVS is improved significantly in comparison with the raw signal. The proposed method takes advantage of the TSVS in noise suppression and feature extraction to enhance the fault frequency for diagnosis. The effectiveness of the TSVD is verified by means of simulation studies and applications to the diagnosis of bearing faults. Results indicate that the proposed method is superior to traditional methods for bearing fault diagnosis.
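
    A minimal numpy sketch of the sliding-window SVD construction described above, under illustrative assumptions (window length, hop, Hankel shape, and test signal are placeholders): each window's singular values form one column of the time-varying singular value matrix, and each row is a time-singular-value-sequence.

```python
import numpy as np
from scipy.linalg import hankel

def tsvm(signal, win=64, hop=8, rows=8):
    """Time-varying singular value matrix from a sliding window.

    Each window is arranged as a rows x (win-rows+1) Hankel matrix whose
    singular values occupy one column of the output."""
    cols = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start : start + win]
        H = hankel(seg[:rows], seg[rows - 1 :])   # Hankel matrix of the window
        cols.append(np.linalg.svd(H, compute_uv=False))
    return np.array(cols).T                       # shape: (rows, n_windows)

fs = 1000
t = np.arange(0, 2, 1 / fs)
impulses = np.sin(2 * np.pi * 50 * t) * (np.sin(2 * np.pi * 5 * t) > 0.99)
noisy = impulses + 0.5 * np.random.default_rng(2).standard_normal(t.size)
M = tsvm(noisy)
tsvs = M[0]                 # first row: dominant time-singular-value-sequence
print(M.shape, tsvs[:5])
```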

  3. Image compression using singular value decomposition

    NASA Astrophysics Data System (ADS)

    Swathi, H. R.; Sohini, Shah; Surbhi; Gopichand, G.

    2017-11-01

    We often need to transmit and store images in many applications. The smaller the image, the lower the cost of transmission and storage, so data compression techniques are often applied to reduce the storage space an image consumes. One approach is to apply Singular Value Decomposition (SVD) to the image matrix: SVD refactors the digital image into three matrices, and the singular values are used to reconstruct the image. At the end of this process the image is represented by a smaller set of values, reducing the storage space required. The goal is to achieve compression while preserving the important features that describe the original image. SVD can be applied to any arbitrary m × n matrix, square or not, invertible or not. Compression ratio and mean square error are used as performance metrics.
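
    A minimal sketch of the rank-k SVD compression just described (the image and rank are placeholders): keep the k largest singular values and the corresponding singular vectors, then measure the compression ratio and mean square error.

```python
import numpy as np

def svd_compress(img, k):
    """Best rank-k approximation of a grayscale image via truncated SVD."""
    U, s, Vt = np.linalg.svd(img.astype(float), full_matrices=False)
    approx = U[:, :k] * s[:k] @ Vt[:k]              # U_k diag(s_k) Vt_k
    stored = k * (img.shape[0] + img.shape[1] + 1)  # values kept: U_k, Vt_k, s_k
    ratio = img.size / stored
    mse = np.mean((img - approx) ** 2)
    return approx, ratio, mse

# Smooth synthetic test image (real images compress less trivially)
x = np.linspace(0, 1, 256)
img = 127 * (1 + np.outer(np.sin(6 * np.pi * x), np.cos(4 * np.pi * x)))
approx, ratio, mse = svd_compress(img, k=32)
print(f"compression ratio {ratio:.1f}:1, MSE {mse:.2e}")
```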

  4. Fully decoupled monolithic projection method for natural convection problems

    NASA Astrophysics Data System (ADS)

    Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il

    2017-04-01

    To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and of the general buoyancy term, incurring second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally applied to the global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l2-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodic forced flow. The results demonstrate that the proposed method significantly mitigates the time-step limitation, reduces the computational cost because only one Poisson equation needs to be solved, and preserves second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.

  5. Topics in Modeling of Cochlear Dynamics: Computation, Response and Stability Analysis

    NASA Astrophysics Data System (ADS)

    Filo, Maurice G.

    This thesis touches upon several topics in cochlear modeling. Throughout the literature, mathematical models of the cochlea vary according to the degree of biological realism incorporated. This thesis casts the cochlear model as a continuous space-time dynamical system using operator language. This framework encompasses a wider class of cochlear models and makes the dynamics more transparent and easier to analyze before applying any numerical method to discretize space. Several numerical methods are investigated to study the computational efficiency of the finite-dimensional realizations in space. Furthermore, we study the effects of active gain perturbations on the stability of the linearized dynamics. The stability analysis is used to explain possible mechanisms underlying spontaneous otoacoustic emissions and tinnitus. Dynamic Mode Decomposition (DMD) is introduced as a useful tool to analyze the response of nonlinear cochlear models. Cochlear response features are illustrated using DMD, which has the advantage of explicitly revealing the spatial modes of vibration occurring in the Basilar Membrane (BM). Finally, we address the dynamic estimation problem of BM vibrations using Extended Kalman Filters (EKF); given the limitations of noninvasive sensing schemes, such algorithms are indispensable for estimating the dynamic behavior of a living cochlea.
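
    Dynamic Mode Decomposition, mentioned above as a response-analysis tool, can be sketched in a few lines; the snapshot data here are synthetic placeholders rather than basilar-membrane simulations.

```python
import numpy as np

def dmd(X, r):
    """Exact DMD: eigenvalues and spatial modes of the best-fit linear map A
    with X[:, 1:] ~ A @ X[:, :-1], computed in an r-dimensional POD basis."""
    X1, X2 = X[:, :-1], X[:, 1:]
    U, s, Vt = np.linalg.svd(X1, full_matrices=False)
    U, s, Vt = U[:, :r], s[:r], Vt[:r]
    Atilde = U.conj().T @ X2 @ Vt.conj().T / s      # reduced operator
    eigvals, W = np.linalg.eig(Atilde)
    modes = X2 @ Vt.conj().T / s @ W                # exact DMD modes
    return eigvals, modes

# Synthetic snapshots: two traveling waves sampled on 128 points over 50 steps
x = np.linspace(0, 1, 128)[:, None]
t = np.arange(50)[None, :]
X = np.sin(2*np.pi*(5*x - 0.02*t)) + 0.5*np.sin(2*np.pi*(12*x - 0.05*t))
eigvals, modes = dmd(X, r=4)
print(np.abs(eigvals))      # magnitudes ~1: neutrally stable oscillatory modes
```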

  6. Preparation and structural characterization of zwitterionic surfactant intercalated into NiZn-layered hydroxide salts

    NASA Astrophysics Data System (ADS)

    Liu, Jiexiang; Wang, Jianlong; Zhang, Xiaoguang; Fang, Binbin; Hu, Pan; Zhao, Xuyang

    2015-10-01

    Three zwitterionic surfactants, dodecyl dimethyl carboxylbetaine (DCB), dodecyl dimethyl sulfobetaine (DSB) and N-dodecyl-β-aminopropionate (DAP), were intercalated into NiZn-layered hydroxide salts (NZL-DCB, NZL-DSB and NZL-DAP) synthesized by the coprecipitation method. The effects of surfactant content, pH, temperature and time of hydrothermal treatment on the preparation were investigated and discussed. The NZL-DCB, NZL-DSB and NZL-DAP were characterized by powder X-ray diffraction (PXRD), Fourier transform infrared spectroscopy (FTIR), and thermogravimetric analysis and differential thermal analysis (TGA/DTA). The results showed that the basal spacings of NZL-DCB, NZL-DSB and NZL-DAP were around 3.45, 3.68 and 3.94 nm, respectively; DCB, DSB and DAP probably form an overlapped bilayer in the gallery. TGA/DTA data indicated that NZL-DCB, NZL-DSB and NZL-DAP displayed three weight-loss stages: loss of adsorbed and structural water; dehydroxylation of the matrix and decomposition of nitrate ions; and decomposition and combustion of the surfactants. Furthermore, chemical analysis data, BET surface areas and scanning electron microscopy (SEM) images were also measured and analyzed.

  7. Parallel implementation of the particle simulation method with dynamic load balancing: Toward realistic geodynamical simulation

    NASA Astrophysics Data System (ADS)

    Furuichi, M.; Nishiura, D.

    2015-12-01

    Fully Lagrangian methods such as Smoothed Particle Hydrodynamics (SPH) and the Discrete Element Method (DEM) have been widely used to solve continuum and particle motions in computational geodynamics. These mesh-free methods are suitable for problems with complex geometries and boundaries. In addition, their Lagrangian nature allows non-diffusive advection, useful for tracking history-dependent properties (e.g., rheology) of the material. These potential advantages over mesh-based methods offer effective numerical applications to geophysical flow and tectonic processes: for example, tsunamis with free surfaces and floating bodies, magma intrusion with rock fracture, and shear-zone pattern generation in granular deformation. In order to investigate such geodynamical problems with particle-based methods, millions to billions of particles are required for realistic simulations, so parallel computing is important for handling the huge computational cost. An efficient parallel implementation of the SPH and DEM methods is, however, known to be difficult, especially for distributed-memory architectures: Lagrangian methods inherently suffer from workload imbalance when parallelized over fixed spatial domains, because particles move around and workloads change during the simulation. Dynamic load balancing is therefore the key technique for performing large-scale SPH and DEM simulations. In this work, we present a parallel implementation technique for the SPH and DEM methods utilizing dynamic load-balancing algorithms, aimed at high-resolution simulations over large domains on massively parallel supercomputer systems. Our method treats the imbalance in execution time among MPI processes as the nonlinear residual of the parallel domain decomposition and minimizes it with a Newton-like iteration. In order to perform flexible domain decomposition in space, the slice-grid algorithm is used. Numerical tests show that our approach is suitable for particles with different calculation costs (e.g., boundary particles) as well as for heterogeneous computer architectures. We analyze the parallel efficiency and scalability on supercomputer systems (K computer, Earth Simulator 3, etc.).
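
    A one-dimensional toy sketch of slice-grid-style rebalancing under the assumptions above: slice boundaries are moved so that the measured cost (particle count times a per-particle cost) is equalized across processes. The cost model and update rule are illustrative, not the authors' Newton-type scheme.

```python
import numpy as np

def rebalance_slices(x, cost, n_proc):
    """Place slice boundaries so each slice carries ~equal total cost.

    x    : particle positions (1-D)
    cost : per-particle computational cost (e.g., higher near boundaries)
    Returns the n_proc-1 interior boundary positions."""
    order = np.argsort(x)
    cum = np.cumsum(cost[order])
    targets = cum[-1] * np.arange(1, n_proc) / n_proc
    idx = np.searchsorted(cum, targets)
    return x[order][idx]

rng = np.random.default_rng(4)
x = rng.random(100_000)                       # particle positions in [0, 1)
cost = np.where(x < 0.1, 3.0, 1.0)            # boundary particles cost 3x more
bounds = rebalance_slices(x, cost, n_proc=8)
print(np.round(bounds, 3))                    # slices narrow where work is dense
```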

  8. Matrix decomposition graphics processing unit solver for Poisson image editing

    NASA Astrophysics Data System (ADS)

    Lei, Zhao; Wei, Li

    2012-10-01

    In recent years, gradient-domain methods have been widely discussed in the image processing field, including seamless cloning and image stitching. These algorithms are commonly carried out by solving a large sparse linear system: the Poisson equation. However, solving the Poisson equation is a computationally and memory-intensive task, which makes it unsuitable for real-time image editing. A new matrix decomposition graphics processing unit (GPU) solver (MDGS) is proposed to address this problem. A matrix decomposition method is used to distribute the work among GPU threads, so that MDGS takes full advantage of the computing power of current GPUs. Additionally, MDGS is a hybrid solver (combining both direct and iterative techniques) and has a two-level architecture. These features enable MDGS to generate solutions identical to those of common Poisson methods and to achieve a high convergence rate in most cases. This approach is advantageous in terms of parallelizability, real-time image processing, low memory consumption and broad applicability.
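
    For reference, the underlying Poisson problem in seamless cloning can be sketched with a plain Jacobi iteration on the CPU; this is the textbook formulation that the GPU solver accelerates, with synthetic source/target images as placeholders.

```python
import numpy as np

def poisson_clone(src, dst, mask, iters=2000):
    """Seamless cloning by Jacobi iteration on the discrete Poisson equation:
    inside the mask, solve lap(f) = lap(src) with f = dst on the boundary."""
    f = dst.copy().astype(float)
    lap = (-4 * src + np.roll(src, 1, 0) + np.roll(src, -1, 0)
           + np.roll(src, 1, 1) + np.roll(src, -1, 1))   # guidance Laplacian
    inside = mask.astype(bool)
    for _ in range(iters):
        nb = (np.roll(f, 1, 0) + np.roll(f, -1, 0)
              + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[inside] = (nb[inside] - lap[inside]) / 4.0      # Jacobi update
    return f

n = 64
src = np.outer(np.hanning(n), np.hanning(n))              # synthetic patch
dst = np.fromfunction(lambda i, j: (i + j) / (2 * n), (n, n))
mask = np.zeros((n, n), bool); mask[16:48, 16:48] = True  # interior region
out = poisson_clone(src, dst, mask)
print(out.shape, out.min(), out.max())
```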

  9. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to the empirical reconstruction of the evolution operator, in stochastic form, from space-distributed time series. The main problem in empirical modeling is choosing appropriate phase variables that efficiently reduce the dimension of the model at minimal loss of information about the system's dynamics, which leads to a more robust model and a better-quality reconstruction. For this purpose we incorporate two key steps in the model. The first step is a standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g., an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is the construction of an evolution operator on the principal components (PCs), the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed, based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANNs). A Bayesian approach is used to learn the model and to find optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. Results of applying the method to climate data (sea surface temperature, sea level pressure), and a comparison with the same method based on a non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).

  10. A POD reduced order model for resolving angular direction in neutron/photon transport problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchan, A.G., E-mail: andrew.buchan@imperial.ac.uk; Calloo, A.A.; Goffin, M.G.

    2015-09-01

    This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems; one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.
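
    A generic method-of-snapshots sketch (not the angular-flux variant above, whose snapshots run over space rather than time): collect snapshots as columns, take an SVD, and truncate to the dominant modes.

```python
import numpy as np

def pod_basis(snapshots, energy=0.999):
    """Method of snapshots via SVD: return the leading POD modes capturing
    the requested fraction of snapshot energy (sum of squared singular values)."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    frac = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(frac, energy)) + 1
    return U[:, :r], s[:r]

# Synthetic snapshot matrix: 500-point fields, 80 snapshots, rank ~3 plus noise
rng = np.random.default_rng(5)
x = np.linspace(0, 1, 500)[:, None]
amps = rng.standard_normal((3, 80))
S = (np.sin(1*np.pi*x) @ amps[:1] + np.sin(2*np.pi*x) @ amps[1:2]
     + np.sin(3*np.pi*x) @ amps[2:3]) + 0.01 * rng.standard_normal((500, 80))
modes, svals = pod_basis(S)
print("retained modes:", modes.shape[1])        # ~3 for this synthetic data
```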

  11. Wavelet-based statistical classification of skin images acquired with reflectance confocal microscopy

    PubMed Central

    Halimi, Abdelghafour; Batatia, Hadj; Le Digabel, Jimmy; Josse, Gwendal; Tourneret, Jean Yves

    2017-01-01

    Detecting skin lentigo in reflectance confocal microscopy images is an important and challenging problem. This imaging modality has not yet been widely investigated for this problem, and few automatic processing techniques exist. They are mostly based on machine learning approaches and rely on numerous classical image features that lead to high computational costs given the very large resolution of these images. This paper presents a detection method with very low computational complexity that is able to identify the skin depth at which the lentigo can be detected. The proposed method performs a multiresolution decomposition of the image obtained at each skin depth. The distribution of image pixels at a given depth can be approximated accurately by a generalized Gaussian distribution whose parameters depend on the decomposition scale, resulting in a very low-dimensional parameter space. SVM classifiers are then investigated to classify the scale parameter of this distribution, allowing real-time detection of lentigo. The method is applied to 45 healthy and lentigo patients from a clinical study, where a sensitivity of 81.4% and a specificity of 83.3% are achieved. Our results show that lentigo is identifiable at depths between 50 μm and 60 μm, corresponding to the average location of the dermoepidermal junction. This result is in agreement with clinical practice, which characterizes lentigo by assessing the disorganization of the dermoepidermal junction. PMID:29296480
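
    A compact sketch of the pipeline described above, under stated assumptions: subband pixels are modeled with a generalized Gaussian (scipy.stats.gennorm), and an SVM classifies the fitted parameters. The two-class synthetic data stand in for healthy/lentigo subbands, and the wavelet decomposition step is omitted.

```python
import numpy as np
from scipy.stats import gennorm
from sklearn.svm import SVC

rng = np.random.default_rng(6)

def features(subband):
    """Fit a generalized Gaussian to subband coefficients; use (shape, scale)
    as a two-dimensional feature vector."""
    beta, loc, scale = gennorm.fit(subband, floc=0.0)   # fix location at zero
    return [beta, scale]

# Synthetic subbands: class 0 heavier-tailed (beta~1), class 1 more Gaussian (beta~2)
X = ([features(gennorm.rvs(1.0, scale=1.0, size=4096, random_state=rng))
      for _ in range(30)]
     + [features(gennorm.rvs(2.0, scale=1.5, size=4096, random_state=rng))
        for _ in range(30)])
y = [0] * 30 + [1] * 30
clf = SVC(kernel="rbf").fit(X, y)
print("training accuracy:", clf.score(X, y))
```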

  12. De Rham-Hodge decomposition and vanishing of harmonic forms by derivation operators on the Poisson space

    NASA Astrophysics Data System (ADS)

    Privault, Nicolas

    2016-05-01

    We construct differential forms of all orders and a covariant derivative together with its adjoint on the probability space of a standard Poisson process, using derivation operators. In this framework we derive a de Rham-Hodge-Kodaira decomposition as well as Weitzenböck and Clark-Ocone formulas for random differential forms. As in the Wiener space setting, this construction provides two distinct approaches to the vanishing of harmonic differential forms.

  13. CFD modeling of space-time evolution of fast pyrolysis products in a bench-scale fluidized-bed reactor

    USDA-ARS?s Scientific Manuscript database

    A model for the evolution of pyrolysis products in a fluidized bed has been developed. In this study the unsteady constitutive transport equations for inert gas flow and decomposition kinetics were modeled using the commercial computational fluid dynamics (CFD) software FLUENT-12. The model system d...

  14. Seasonal variation of carcass decomposition and gravesoil chemistry in a cold (Dfa) climate.

    PubMed

    Meyer, Jessica; Anderson, Brianna; Carter, David O

    2013-09-01

    It is well known that temperature significantly affects corpse decomposition. Yet relatively few taphonomy studies investigate the effects of seasonality on decomposition. Here, we propose the use of the Köppen-Geiger climate classification system and describe the decomposition of swine (Sus scrofa domesticus) carcasses during the summer and winter near Lincoln, Nebraska, USA. Decomposition was scored, and gravesoil chemistry (total carbon, total nitrogen, ninhydrin-reactive nitrogen, ammonium, nitrate, and soil pH) was assessed. Gross carcass decomposition in summer was three to seven times greater than in winter. Initial significant changes in gravesoil chemistry occurred following approximately 320 accumulated degree days, regardless of season. Furthermore, significant (p < 0.05) correlations were observed between ammonium and pH (positive correlation) and between nitrate and pH (negative correlation). We hope that future decomposition studies employ the Köppen-Geiger climate classification system to understand the seasonality of corpse decomposition, to validate taphonomic methods, and to facilitate cross-climate comparisons of carcass decomposition. © 2013 American Academy of Forensic Sciences.

  15. Degradation of organic wastewater by hydrodynamic cavitation combined with acoustic cavitation.

    PubMed

    Yi, Chunhai; Lu, Qianqian; Wang, Yun; Wang, Yixuan; Yang, Bolun

    2018-05-01

    In this paper, the decomposition of Rhodamine B (RhB) by hydrodynamic cavitation (HC), acoustic cavitation (AC) and the combination of these individual methods (HAC) has been investigated. The degradation of 20 L of RhB aqueous solution was carried out in a self-designed HAC reactor, in which hydrodynamic cavitation and acoustic cavitation take place in the same space simultaneously. The effects of initial concentration, inlet pressure, solution temperature and ultrasonic power were studied and discussed. Obvious synergies were found in the HAC process: the combined method achieved the best conversion, and the synergistic effect in HAC reached 119% at an ultrasonic power of 220 W for a treatment time of 30 min. A time-independent synergistic factor based on the rate constants was introduced, and its maximum value reached 40% in the HAC system. Besides, the hybrid HAC method showed great superiority in energy efficiency at lower ultrasonic power (88-176 W). Therefore, HAC technology can be viewed as a promising method for wastewater treatment with good scale-up possibilities. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. Efficient anisotropic quasi-P wavefield extrapolation using an isotropic low-rank approximation

    NASA Astrophysics Data System (ADS)

    Zhang, Zhen-dong; Liu, Yike; Alkhalifah, Tariq; Wu, Zedong

    2018-04-01

    The computational cost of quasi-P wave extrapolation depends on the complexity of the medium, and specifically the anisotropy. Our effective-model method splits the anisotropic dispersion relation into an isotropic background and a correction factor to handle this dependency. The correction term depends on the slope (measured using the gradient) of current wavefields and the anisotropy. As a result, the computational cost is independent of the nature of anisotropy, which makes the extrapolation efficient. A dynamic implementation of this approach decomposes the original pseudo-differential operator into a Laplacian, handled using the low-rank approximation of the spectral operator, plus an angular dependent correction factor applied in the space domain to correct for anisotropy. We analyse the role played by the correction factor and propose a new spherical decomposition of the dispersion relation. The proposed method provides accurate wavefields in phase and more balanced amplitudes than a previous spherical decomposition. Also, it is free of SV-wave artefacts. Applications to a simple homogeneous transverse isotropic medium with a vertical symmetry axis (VTI) and a modified Hess VTI model demonstrate the effectiveness of the approach. The Reverse Time Migration applied to a modified BP VTI model reveals that the anisotropic migration using the proposed modelling engine performs better than an isotropic migration.

  17. Efficient analysis of three dimensional EUV mask induced imaging artifacts using the waveguide decomposition method

    NASA Astrophysics Data System (ADS)

    Shao, Feng; Evanschitzky, Peter; Fühner, Tim; Erdmann, Andreas

    2009-10-01

    This paper employs the Waveguide decomposition method as an efficient rigorous electromagnetic field (EMF) solver to investigate three-dimensional mask-induced imaging artifacts in EUV lithography. The major mask-diffraction-induced imaging artifacts are first identified by applying a Zernike analysis to the mask nearfield spectrum of 2D lines/spaces. Three-dimensional mask features such as 22nm semidense/dense contacts/posts, isolated elbows and line-ends are then investigated in terms of lithographic results. After that, the 3D mask-induced imaging artifacts such as feature-orientation-dependent best focus shift, process window asymmetries, and other aberration-like phenomena are explored for the studied mask features. The simulation results can help lithographers to understand the origins of EUV-specific imaging artifacts and to devise illumination- and feature-dependent strategies for their compensation in the optical proximity correction (OPC) for EUV masks. Finally, an efficient approach using the Zernike analysis together with the Waveguide decomposition technique is proposed to characterize the impact of mask properties on the future OPC process.

  18. Speech rhythm analysis with decomposition of the amplitude envelope: characterizing rhythmic patterns within and across languages.

    PubMed

    Tilsen, Sam; Arvaniti, Amalia

    2013-07-01

    This study presents a method for analyzing speech rhythm using empirical mode decomposition of the speech amplitude envelope, which allows for extraction and quantification of syllabic- and supra-syllabic time-scale components of the envelope. The method of empirical mode decomposition of a vocalic energy amplitude envelope is illustrated in detail, and several types of rhythm metrics derived from this method are presented. Spontaneous speech extracted from the Buckeye Corpus is used to assess the effect of utterance length on metrics, and it is shown how metrics representing variability in the supra-syllabic time-scale components of the envelope can be used to identify stretches of speech with targeted rhythmic characteristics. Furthermore, the envelope-based metrics are used to characterize cross-linguistic differences in speech rhythm in the UC San Diego Speech Lab corpus of English, German, Greek, Italian, Korean, and Spanish speech elicited in read sentences, read passages, and spontaneous speech. The envelope-based metrics exhibit significant effects of language and elicitation method that argue for a nuanced view of cross-linguistic rhythm patterns.

  19. Time-frequency analysis of band-limited EEG with BMFLC and Kalman filter for BCI applications

    PubMed Central

    2013-01-01

    Background Time-frequency analysis of the electroencephalogram (EEG) during different mental tasks has received significant attention. As EEG is non-stationary, time-frequency analysis is essential to analyze brain states during different mental tasks, and the time-frequency information of the EEG signal can be used as a feature for classification in brain-computer interface (BCI) applications. Methods To accurately model the EEG, the band-limited multiple Fourier linear combiner (BMFLC), a linear combination of truncated multiple Fourier series models, is employed. A state-space model for BMFLC in combination with a Kalman filter/smoother is developed to obtain accurate adaptive estimation. By construction, BMFLC with a Kalman filter/smoother provides an accurate time-frequency decomposition of the band-limited signal. Results The proposed method is computationally fast and is suitable for real-time BCI applications. To evaluate the proposed algorithm, a comparison with the short-time Fourier transform (STFT) and the continuous wavelet transform (CWT) for both synthesized and real EEG data is performed in this paper. The proposed method is applied to BCI Competition data IV for ERD detection in comparison with existing methods. Conclusions Results show that the proposed algorithm can provide optimal time-frequency resolution as compared to STFT and CWT. For ERD detection, BMFLC-KF outperforms STFT and BMFLC-KS in real-time applicability with low computational requirements. PMID:24274109
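
    A minimal sketch of a BMFLC-style state-space model with a Kalman filter, under illustrative assumptions (band, bin spacing, and noise covariances are placeholders): the state holds the sine/cosine weights of a fixed frequency grid, and the observation row vector contains the corresponding basis values at each time step.

```python
import numpy as np

fs = 256.0
freqs = np.arange(8.0, 13.0, 0.5)           # band-limited grid, e.g. alpha band (Hz)
n = 2 * len(freqs)                           # state: sine and cosine weights
q, r = 1e-4, 1e-1                            # assumed process/measurement noise
P = np.eye(n); w = np.zeros(n)               # initial covariance and weights

def kalman_step(w, P, y, t):
    H = np.concatenate([np.sin(2*np.pi*freqs*t), np.cos(2*np.pi*freqs*t)])[None, :]
    P_pred = P + q * np.eye(n)               # random-walk weight dynamics
    S = H @ P_pred @ H.T + r                 # innovation variance (1x1)
    K = P_pred @ H.T / S                     # Kalman gain
    w = w + (K * (y - (H @ w)[0])).ravel()
    P = (np.eye(n) - K @ H) @ P_pred
    return w, P, (H @ w)[0]                  # filtered estimate of the signal

t_axis = np.arange(0, 2, 1/fs)
signal = np.sin(2*np.pi*10*t_axis) + 0.3*np.random.default_rng(7).standard_normal(t_axis.size)
for t, y in zip(t_axis, signal):
    w, P, yhat = kalman_step(w, P, y, t)
amp = np.sqrt(w[:len(freqs)]**2 + w[len(freqs):]**2)   # per-frequency amplitude
print("dominant bin: %.1f Hz" % freqs[np.argmax(amp)])
```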

  20. Application of wavelet-based multi-model Kalman filters to real-time flood forecasting

    NASA Astrophysics Data System (ADS)

    Chou, Chien-Ming; Wang, Ru-Yih

    2004-04-01

    This paper presents the application of a multimodel method using a wavelet-based Kalman filter (WKF) bank to simultaneously estimate decomposed state variables and unknown parameters for real-time flood forecasting. Applying the Haar wavelet transform alters the state vector and input vector of the state space; in this way, an overall detail plus approximation describes each new state vector and input vector, which allows the WKF to simultaneously estimate and decompose state variables. The wavelet-based multimodel Kalman filter (WMKF) is a multimodel Kalman filter (MKF) in which the Kalman filter has been replaced by a WKF. The WMKF then obtains M estimated state vectors. Next, the M state estimates, each of which is weighted by its probability, also determined on-line, are combined to form an optimal estimate. Validations conducted for the Wu-Tu watershed, a small watershed in Taiwan, have demonstrated that the method is effective because of the wavelet-transform decomposition, the adaptation of the time-varying Kalman filter and the characteristics of the multimodel method. Validation results also reveal that the resulting method enhances the accuracy of the runoff prediction of the rainfall-runoff process in the Wu-Tu watershed.

  1. Semi-analytical solution for the generalized absorbing boundary condition in molecular dynamics simulations

    NASA Astrophysics Data System (ADS)

    Lee, Chung-Shuo; Chen, Yan-Yu; Yu, Chi-Hua; Hsu, Yu-Chuan; Chen, Chuin-Shan

    2017-07-01

    We present a semi-analytical solution of a time-history kernel for the generalized absorbing boundary condition in molecular dynamics (MD) simulations. To facilitate the kernel derivation, the concept of virtual atoms in real space that can conform with an arbitrary boundary in an arbitrary lattice is adopted. The generalized Langevin equation is regularized using eigenvalue decomposition and, consequently, an analytical expression of an inverse Laplace transform is obtained. With construction of dynamical matrices in the virtual domain, a semi-analytical form of the time-history kernel functions for an arbitrary boundary in an arbitrary lattice can be found. The time-history kernel functions for different crystal lattices are derived to show the generality of the proposed method. Non-equilibrium MD simulations in a triangular lattice with and without the absorbing boundary condition are conducted to demonstrate the validity of the solution.

  2. Phase unwrapping with graph cuts optimization and dual decomposition acceleration for 3D high-resolution MRI data.

    PubMed

    Dong, Jianwu; Chen, Feng; Zhou, Dong; Liu, Tian; Yu, Zhaofei; Wang, Yi

    2017-03-01

    The existence of low-SNR regions and rapid phase variations poses challenges to spatial phase unwrapping algorithms. Global optimization-based phase unwrapping methods are widely used, but are significantly slower than greedy methods. In this paper, dual decomposition acceleration is introduced to speed up a three-dimensional graph cut-based phase unwrapping algorithm. The phase unwrapping problem is formulated as a global discrete energy minimization problem, and the technique of dual decomposition is used to increase the computational efficiency by splitting the full problem into overlapping subproblems and enforcing the congruence of overlapping variables. Using three-dimensional (3D) multiecho gradient echo images from an agarose phantom and five brain hemorrhage patients, we compared the proposed method with an unaccelerated graph cut-based method. Experimental results show up to 18-fold acceleration in computation time. Dual decomposition significantly improves the computational efficiency of 3D graph cut-based phase unwrapping algorithms. Magn Reson Med 77:1353-1358, 2017. © 2016 International Society for Magnetic Resonance in Medicine.

  3. White blood cell segmentation by color-space-based k-means clustering.

    PubMed

    Zhang, Congcong; Xiao, Xiaoyan; Li, Xiaomei; Chen, Ying-Jie; Zhen, Wu; Chang, Jun; Zheng, Chengyun; Liu, Zhi

    2014-09-01

    White blood cell (WBC) segmentation, which is important for cytometry, is a challenging issue because of the morphological diversity of WBCs and the complex and uncertain background of blood smear images. This paper proposes a novel method for the nucleus and cytoplasm segmentation of WBCs for cytometry. A color adjustment step is introduced before segmentation, and color space decomposition and k-means clustering are combined for the segmentation itself. A database of 300 microscopic blood smear images was used to evaluate the performance of our method. The proposed segmentation method achieves 95.7% and 91.3% overall accuracy for nucleus segmentation and cytoplasm segmentation, respectively. Experimental results demonstrate that the proposed method can segment WBCs effectively with high accuracy.
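
    A toy version of the color-space-plus-k-means idea above (the HSV conversion and cluster count are illustrative assumptions; the real pipeline also includes color adjustment and separate nucleus/cytoplasm logic):

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv
from sklearn.cluster import KMeans

def segment_kmeans(rgb_image, n_clusters=3):
    """Cluster pixels in HSV space; return a label image (ideally nucleus /
    cytoplasm / background)."""
    hsv = rgb_to_hsv(rgb_image.astype(float) / 255.0)
    pixels = hsv.reshape(-1, 3)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(pixels)
    return labels.reshape(rgb_image.shape[:2])

# Synthetic "smear": dark purple disc (nucleus) inside pink cytoplasm on white
img = np.full((64, 64, 3), 255, np.uint8)
yy, xx = np.mgrid[:64, :64]
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 400] = (230, 180, 200)   # cytoplasm
img[(yy - 32) ** 2 + (xx - 32) ** 2 < 100] = (90, 40, 120)     # nucleus
labels = segment_kmeans(img)
print(np.unique(labels, return_counts=True))
```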

  4. Reducing variation in decomposition odour profiling using comprehensive two-dimensional gas chromatography.

    PubMed

    Perrault, Katelynn A; Stefanuto, Pierre-Hugues; Stuart, Barbara H; Rai, Tapan; Focant, Jean-François; Forbes, Shari L

    2015-01-01

    Challenges in decomposition odour profiling have led to variation in the documented odour profile by different research groups worldwide. Background subtraction and use of controls are important considerations given the variation introduced by decomposition studies conducted in different geographical environments. The collection of volatile organic compounds (VOCs) from soil beneath decomposing remains is challenging due to the high levels of inherent soil VOCs, further confounded by the use of highly sensitive instrumentation. This study presents a method that provides suitable chromatographic resolution for profiling decomposition odour in soil by comprehensive two-dimensional gas chromatography coupled with time-of-flight mass spectrometry using appropriate controls and field blanks. Logarithmic transformation and t-testing of compounds permitted the generation of a compound list of decomposition VOCs in soil. Principal component analysis demonstrated the improved discrimination between experimental and control soil, verifying the value of the data handling method. Data handling procedures have not been well documented in this field and standardisation would thereby reduce misidentification of VOCs present in the surrounding environment as decomposition byproducts. Uniformity of data handling and instrumental procedures will reduce analytical variation, increasing confidence in the future when investigating the effect of taphonomic variables on the decomposition VOC profile. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Examining responses of ecosystem carbon exchange to environmental changes using particle filtering method

    NASA Astrophysics Data System (ADS)

    Yokozawa, M.

    2017-12-01

    Attention has been paid to agricultural fields, where ecosystem carbon exchange can be regulated by water management and residue treatments. However, less is known about the dynamic responses of the ecosystem to environmental changes. In this study, the responses of ecosystem carbon exchange were examined for a paddy field, where CO2 emissions due to microbial decomposition of organic matter are suppressed under the flooded conditions of the rice growing season, with CH4 emitted instead, and CO2 emission resumes during the fallow season after harvest. We conducted a model-data fusion analysis to examine the response of cropland-atmosphere carbon exchange to environmental variation. The model consists of two sub-models: a paddy rice growth sub-model and a soil decomposition sub-model. The crop growth sub-model mimics the rice plant growth processes, including the formation of reproductive organs as well as leaf expansion; the soil decomposition sub-model simulates the decomposition of soil organic carbon. Assimilating data on the time changes in CO2 flux measured by the eddy covariance method, together with rice plant biomass, LAI and final yield, the parameters were calibrated using a stochastic optimization algorithm based on a particle filter. The particle filter, one of the Monte Carlo filters, enables us to evaluate time changes in the parameters based on the data observed up to a given time and to make predictions of the system. Iterative filtering and prediction with changing parameters and/or boundary conditions enables us to obtain time changes in the parameters governing crop production as well as carbon exchange. In this study, we focused on the parameters related to crop production and soil carbon storage. As a result, the calibrated model with estimated parameters could accurately predict the NEE flux in the subsequent years. The temperature sensitivities (Q10) of the decomposition rate of soil organic carbon (SOC) were estimated as 1.4 for the non-cultivation period and 2.9 for the cultivation period (submerged soil conditions in the flooding season). This suggests that the response of ecosystem carbon exchange differs because the SOC decomposition process is sensitive to environmental variation during the paddy rice cultivation period.
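
    A bootstrap particle filter for joint state/parameter estimation can be sketched as below; the one-dimensional carbon-pool toy model, noise levels, and priors are illustrative assumptions, not the authors' crop/soil model.

```python
import numpy as np

rng = np.random.default_rng(8)
n_part, n_steps = 2000, 100

# Toy model: carbon pool C decays at unknown rate k, with constant input u
true_k, u = 0.05, 1.0
C, obs = 10.0, []
for _ in range(n_steps):
    C = C + u - true_k * C + 0.05 * rng.standard_normal()
    obs.append(C + 0.5 * rng.standard_normal())          # noisy observation

# Particles carry both the state C and the parameter k (with artificial jitter)
Cp = rng.normal(10.0, 2.0, n_part)
kp = rng.uniform(0.01, 0.2, n_part)
for y in obs:
    kp = np.clip(kp + 0.002 * rng.standard_normal(n_part), 1e-4, None)
    Cp = Cp + u - kp * Cp + 0.05 * rng.standard_normal(n_part)
    w = np.exp(-0.5 * ((y - Cp) / 0.5) ** 2)             # Gaussian likelihood
    w /= w.sum()
    idx = rng.choice(n_part, n_part, p=w)                # multinomial resampling
    Cp, kp = Cp[idx], kp[idx]

print("posterior mean k = %.3f (true %.3f)" % (kp.mean(), true_k))
```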

  6. Morphometry of network and nonnetwork space of basins

    NASA Astrophysics Data System (ADS)

    Chockalingam, L.; Daya Sagar, B. S.

    2005-08-01

    Morphometric analysis of the channel network of a basin provides several scale-independent measures. To better characterize basin morphology one requires, besides channel morphometric properties, scale-independent but shape-dependent measures that record the subtle differences in the morphological organization of nonnetwork spaces. These spaces are the planar forms of hillslopes, i.e., the portion retained after subtracting the channel network from the basin space. The principal aim of this paper is to explain the importance of alternative scale-independent but shape-dependent measures of the nonnetwork spaces of basins. Toward this goal, we explore how mathematical morphology-based decomposition procedures can be used to derive the basic measures required to quantify estimates, such as dimensionless power laws, that express the characteristics of nonnetwork spaces via decomposition rules. We demonstrate our results through the characterization of the nonnetwork spaces of eight subbasins of the Gunung Ledang region of peninsular Malaysia. We decompose the nonnetwork spaces of eight fourth-order basins in a two-dimensional discrete space into simple nonoverlapping disks (NODs) of various sizes by employing morphological transformations. Furthermore, we show relationships between the dimensions estimated via the morphometries of the network and their corresponding nonnetwork spaces. This study can be extended to characterize hillslope morphologies, where the decomposition of three-dimensional hillslopes needs to be addressed.

  7. Dynamics of entanglement and uncertainty relation in coupled harmonic oscillator system: exact results

    NASA Astrophysics Data System (ADS)

    Park, DaeKil

    2018-06-01

    The dynamics of entanglement and of the uncertainty relation is explored by solving the time-dependent Schrödinger equation for a coupled harmonic oscillator system analytically when the angular frequencies and the coupling constant are arbitrarily time dependent. We derive the spectral and Schmidt decompositions for the vacuum solution. Using the decompositions, we derive analytical expressions for the von Neumann and Rényi entropies. Making use of the Wigner distribution function defined in phase space, we derive the time dependence of the position-momentum uncertainty relations. To show the dynamics of entanglement and the uncertainty relation graphically, we introduce two toy models and one realistic quenched model. While the dynamics can be conjectured by simple considerations in the toy models, the dynamics in the realistic quenched model is somewhat different from that in the toy models. In particular, the dynamics of entanglement exhibits a pattern similar to the dynamics of the uncertainty parameter in the realistic quenched model.

  8. Comparison of methods for extracting annual cycle with changing amplitude in climate science

    NASA Astrophysics Data System (ADS)

    Deng, Q.; Fu, Z.

    2017-12-01

    Changes in the annual cycle have gained growing attention recently. The basic hypothesis regards the annual cycle as constant, and a climatological mean over a time period is usually used to depict it. Obviously this hypothesis contradicts the fact that the annual cycle changes every year. Lacking a unified definition of the annual cycle, the approaches adopted to extract it vary and may lead to different results, so the precision and validity of these methods need to be examined. In this work, numerical experiments with a known monofrequent annual cycle are set up to evaluate five popular extraction methods: fitting sinusoids, complex demodulation, Ensemble Empirical Mode Decomposition (EEMD), Nonlinear Mode Decomposition (NMD) and Seasonal-trend decomposition based on loess (STL). Three different types of changing amplitude are generated: steady, linearly increasing and nonlinearly varying. Comparing the annual cycle extracted by these methods with the generated annual cycle, we find that (1) NMD performs best in depicting the annual cycle itself and its amplitude change; (2) the fitting-sinusoids, complex-demodulation and EEMD methods are more sensitive to long-term memory (LTM) of the generated time series and thus overfit the annual cycle and yield too noisy an amplitude, whereas STL underestimates the amplitude variation; and (3) all of the methods present the amplitude trend correctly on long time scales, but errors due to noise and LTM are common in some methods over short time scales.
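
    As a concrete reference for the first method in the list above, fitting sinusoids at the annual frequency reduces to linear least squares on sine/cosine regressors; the sketch below estimates a time-constant annual cycle from a synthetic daily series whose true amplitude grows linearly, illustrating what a constant-cycle fit misses.

```python
import numpy as np

days = np.arange(10 * 365)                      # ten synthetic years, daily
phase = 2 * np.pi * days / 365.25
amp_true = 1.0 + 0.05 * days / 365.25           # amplitude growing 5% per year
rng = np.random.default_rng(9)
series = amp_true * np.sin(phase) + 0.3 * rng.standard_normal(days.size)

# Linear least squares: series ~ a*sin(phase) + b*cos(phase) + c
A = np.column_stack([np.sin(phase), np.cos(phase), np.ones_like(phase)])
(a, b, c), *_ = np.linalg.lstsq(A, series, rcond=None)
amp_fit = np.hypot(a, b)                        # single constant amplitude
print("fitted amplitude %.2f vs true range %.2f-%.2f"
      % (amp_fit, amp_true.min(), amp_true.max()))
```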

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Glascoe, E A; Zaug, J M; Burnham, A K

    The effect of pressure on the thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Decomposition rates were determined by fitting the fraction reacted to the extended Prout-Tompkins nucleation-growth model and by the Friedman isoconversional method. The results of these experiments and analyses indicate that pressure accelerates the decomposition at low to moderate pressures (i.e., between ambient pressure and 1 GPa) and decelerates the decomposition at higher pressures. The acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure inhibiting the bond homolysis step(s), which would result in an increase in volume. These results indicate that the thermally induced decomposition kinetics of both β- and δ-phase HMX are sensitive to pressure.
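
    The Friedman isoconversional analysis mentioned above can be sketched generically: at a fixed conversion α, plot ln(dα/dt) against 1/T across runs at different heating rates and read the activation energy from the slope. The synthetic first-order runs below, with an assumed activation energy and prefactor, are placeholders for the FTIR-derived data.

```python
import numpy as np

R = 8.314                     # J/(mol K)
Ea_true, A = 150e3, 1e13      # assumed activation energy (J/mol) and prefactor (1/s)

def simulate_run(beta_heat, T0=500.0, dt=0.01):
    """First-order decomposition alpha' = A*exp(-Ea/RT)*(1-alpha), linear heating."""
    T, alpha, Ts, als = T0, 0.0, [], []
    while alpha < 0.99:
        rate = A * np.exp(-Ea_true / (R * T)) * (1 - alpha)
        alpha += rate * dt
        T += beta_heat * dt
        Ts.append(T); als.append(alpha)
    return np.array(Ts), np.array(als)

# Friedman: at fixed conversion, ln(dalpha/dt) = ln[A f(alpha)] - Ea/(R T)
inv_T, ln_rate = [], []
for beta_heat in (0.5, 1.0, 2.0, 4.0):          # heating rates, K/s
    T, alpha = simulate_run(beta_heat)
    i = np.searchsorted(alpha, 0.5)             # the alpha = 0.5 crossing
    rate = A * np.exp(-Ea_true / (R * T[i])) * (1 - alpha[i])
    inv_T.append(1 / T[i]); ln_rate.append(np.log(rate))

slope, _ = np.polyfit(inv_T, ln_rate, 1)
print("recovered Ea = %.0f kJ/mol" % (-slope * R / 1e3))
```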

  10. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2004-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.
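
    A bare-bones version of the sifting loop at the heart of Empirical Mode Decomposition (the stopping rule and spline envelopes here are simplified assumptions; production EMD implementations add boundary handling and stricter criteria):

```python
import numpy as np
from scipy.signal import argrelextrema
from scipy.interpolate import CubicSpline

def sift(x, t, n_iter=10):
    """Extract one Intrinsic Mode Function by repeatedly removing the mean
    of the upper and lower extrema envelopes."""
    h = x.copy()
    for _ in range(n_iter):
        mx = argrelextrema(h, np.greater)[0]
        mn = argrelextrema(h, np.less)[0]
        if len(mx) < 3 or len(mn) < 3:
            break
        upper = CubicSpline(t[mx], h[mx])(t)      # upper envelope
        lower = CubicSpline(t[mn], h[mn])(t)      # lower envelope
        h = h - (upper + lower) / 2
    return h

t = np.linspace(0, 1, 2000)
x = np.sin(2*np.pi*40*t) + 2*np.sin(2*np.pi*4*t)  # fast + slow components
imf1 = sift(x, t)                                  # ~ the 40 Hz component
residue = x - imf1                                 # ~ the 4 Hz component
print(np.corrcoef(imf1, np.sin(2*np.pi*40*t))[0, 1])
```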

  11. Empirical mode decomposition apparatus, method and article of manufacture for analyzing biological signals and performing curve fitting

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2002-01-01

    A computer implemented physical signal analysis method includes four basic steps and the associated presentation techniques of the results. The first step is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF) from nonlinear, nonstationary physical signals. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the physical signal. Expressed in the IMF's, they have well-behaved Hilbert Transforms from which instantaneous frequencies can be calculated. The second step is the Hilbert Transform which produces a Hilbert Spectrum. Thus, the invention can localize any event on the time as well as the frequency axis. The decomposition can also be viewed as an expansion of the data in terms of the IMF's. Then, these IMF's, based on and derived from the data, can serve as the basis of that expansion. The local energy and the instantaneous frequency derived from the IMF's through the Hilbert transform give a full energy-frequency-time distribution of the data which is designated as the Hilbert Spectrum. The third step filters the physical signal by combining a subset of the IMFs. In the fourth step, a curve may be fitted to the filtered signal which may not have been possible with the original, unfiltered signal.

  12. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None, None

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.

  13. Spectral functions with the density matrix renormalization group: Krylov-space approach for correction vectors

    DOE PAGES

    2016-11-21

    Frequency-dependent correlations, such as the spectral function and the dynamical structure factor, help illustrate condensed matter experiments. Within the density matrix renormalization group (DMRG) framework, an accurate method for calculating spectral functions directly in frequency is the correction-vector method. The correction vector can be computed by solving a linear equation or by minimizing a functional. Our paper proposes an alternative to calculate the correction vector: to use the Krylov-space approach. This paper also studies the accuracy and performance of the Krylov-space approach, when applied to the Heisenberg, the t-J, and the Hubbard models. The cases we studied indicate that the Krylov-space approach can be more accurate and efficient than the conjugate gradient, and that the error of the former integrates best when a Krylov-space decomposition is also used for ground state DMRG.
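
    As a rough illustration of the Krylov-space idea, the sketch below builds a Lanczos basis from the target state for a small dense Hamiltonian and evaluates the resolvent in the projected space. A real DMRG calculation works with matrix product states and operators, so everything here (the dense matrix, the basis size m, the broadening eta, the lack of reorthogonalization) is a toy assumption.

        # Krylov (Lanczos) evaluation of the spectral function
        # A(w) = -(1/pi) * Im <v| (w + i*eta - H)^{-1} |v>
        import numpy as np

        def lanczos_spectral(H, v, omegas, eta=0.1, m=50):
            n = len(v)
            Q = np.zeros((n, m)); alpha = np.zeros(m); beta = np.zeros(m - 1)
            Q[:, 0] = v / np.linalg.norm(v)
            for j in range(m):
                w = H @ Q[:, j]
                alpha[j] = Q[:, j] @ w
                w -= alpha[j] * Q[:, j]
                if j > 0:
                    w -= beta[j - 1] * Q[:, j - 1]
                if j < m - 1:
                    beta[j] = np.linalg.norm(w)   # no reorthogonalization
                    Q[:, j + 1] = w / beta[j]
            T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            e0 = np.zeros(m); e0[0] = 1.0
            out = []
            for w_ in omegas:                     # resolvent in Krylov space
                x = np.linalg.solve((w_ + 1j * eta) * np.eye(m) - T, e0)
                out.append(-x[0].imag / np.pi * np.linalg.norm(v) ** 2)
            return np.array(out)

        rng = np.random.default_rng(0)
        A = rng.standard_normal((200, 200))
        H = (A + A.T) / 2                         # toy Hermitian "Hamiltonian"
        v = rng.standard_normal(200)
        spec = lanczos_spectral(H, v, np.linspace(-5, 5, 40))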

  14. Development of WRF-ROI system by incorporating eigen-decomposition

    NASA Astrophysics Data System (ADS)

    Kim, S.; Noh, N.; Song, H.; Lim, G.

    2011-12-01

    This study presents the development of the WRF-ROI system, an implementation of Retrospective Optimal Interpolation (ROI) in the Weather Research and Forecasting (WRF) model. ROI is a data assimilation algorithm introduced by Song et al. (2009) and Song and Lim (2009). The formulation of ROI is similar to that of Optimal Interpolation (OI), but ROI iteratively assimilates an observation set at a post-analysis time into a prior analysis, potentially providing high-quality reanalysis data. The ROI method assimilates the data at the post-analysis time using a perturbation method (Errico and Raeder, 1999), without an adjoint model. In a previous study, the ROI method was applied to the Lorenz 40-variable model (Lorenz, 1996) to validate the algorithm and investigate its capability. It is therefore necessary to apply the ROI method to a more realistic and complicated model framework such as WRF. In this research, the reduced-rank formulation of ROI is used instead of a reduced-resolution method; the computational costs are reduced through the eigen-decomposition of the background error covariance in the reduced-rank method. When a single profile of observations is assimilated in the WRF-ROI system incorporating eigen-decomposition, the analysis error tends to be reduced compared with the background error. The difference between forecast errors with and without assimilation clearly grows with time, indicating that assimilation improves the forecast.
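
    A hedged numpy sketch of the cost-saving device the abstract describes: replace the background covariance B by its leading eigenpairs before forming the gain. The names, shapes and toy observation operator below are illustrative assumptions, not the WRF-ROI code.

        # Reduced-rank optimal-interpolation update with B ~ E @ E.T,
        # where E holds the leading scaled eigenvectors of B.
        import numpy as np

        def reduced_rank_oi(xb, y, H, B, R, rank):
            vals, vecs = np.linalg.eigh(B)             # eigen-decomposition of B
            idx = np.argsort(vals)[::-1][:rank]        # keep leading modes
            E = vecs[:, idx] * np.sqrt(vals[idx])      # B ~ E @ E.T
            HE = H @ E
            S = HE @ HE.T + R                          # innovation covariance
            K = E @ HE.T @ np.linalg.inv(S)            # reduced-rank gain
            return xb + K @ (y - H @ xb)               # analysis state

        n, p = 100, 10
        rng = np.random.default_rng(0)
        A = rng.standard_normal((n, n)); B = A @ A.T / n
        H = np.eye(p, n); R = 0.1 * np.eye(p)          # toy obs. operator/noise
        xb = rng.standard_normal(n)
        y = H @ xb + 0.3 * rng.standard_normal(p)
        xa = reduced_rank_oi(xb, y, H, B, R, rank=20)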

  15. Automatic network coupling analysis for dynamical systems based on detailed kinetic models.

    PubMed

    Lebiedz, Dirk; Kammerer, Julia; Brandt-Pollmann, Ulrich

    2005-10-01

    We introduce a numerical complexity reduction method for the automatic identification and analysis of dynamic network decompositions in (bio)chemical kinetics based on error-controlled computation of a minimal model dimension represented by the number of (locally) active dynamical modes. Our algorithm exploits a generalized sensitivity analysis along state trajectories and subsequent singular value decomposition of sensitivity matrices for the identification of these dominant dynamical modes. It allows for a dynamic coupling analysis of (bio)chemical species in kinetic models that can be exploited for the piecewise computation of a minimal model on small time intervals and offers valuable functional insight into highly nonlinear reaction mechanisms and network dynamics. We present results for the identification of network decompositions in a simple oscillatory chemical reaction, time scale separation based model reduction in a Michaelis-Menten enzyme system and network decomposition of a detailed model for the oscillatory peroxidase-oxidase enzyme system.
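
    The core numerical step, the SVD of a sensitivity matrix with the local model dimension read off as the number of dominant singular values, can be sketched in a few lines. The toy sensitivity matrix and tolerance below are placeholders; the paper's error-controlled criterion is more elaborate.

        # Count (locally) active dynamical modes via SVD of a sensitivity matrix.
        import numpy as np

        def active_modes(S, rel_tol=1e-3):
            """Number of dominant modes: singular values above rel_tol * max."""
            sv = np.linalg.svd(S, compute_uv=False)
            return int(np.sum(sv > rel_tol * sv[0])), sv

        rng = np.random.default_rng(1)
        # toy sensitivity matrix with a rank-2 dominant structure plus noise
        S = (np.outer(rng.standard_normal(8), rng.standard_normal(8))
             + np.outer(rng.standard_normal(8), rng.standard_normal(8))
             + 1e-6 * rng.standard_normal((8, 8)))
        k, sv = active_modes(S)
        print(k)   # ~2 active modes -> minimal local model dimension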

  16. Three-pattern decomposition of global atmospheric circulation: part I—decomposition model and theorems

    NASA Astrophysics Data System (ADS)

    Hu, Shujuan; Chou, Jifan; Cheng, Jianbo

    2018-04-01

    In order to study the interactions between the atmospheric circulations at middle-high and low latitudes from a global perspective, the authors propose a mathematical definition of three-pattern circulations, i.e., horizontal, meridional and zonal circulations, in terms of which the actual atmospheric circulation is expanded. This novel decomposition method is proved to describe the actual atmospheric circulation dynamics accurately. The authors used NCEP/NCAR reanalysis data to calculate the climate characteristics of the three-pattern circulations and found that the decomposition model agreed with the observed results. Further dynamical analysis indicates that the decomposition model captures the major features of global three-dimensional atmospheric motions more accurately than the traditional definitions of the Rossby wave, Hadley circulation and Walker circulation. The decomposition model realizes, for the first time, the decomposition of the global atmospheric circulation into three orthogonal circulations within the horizontal, meridional and zonal planes, offering new opportunities to study the large-scale interactions between the middle-high latitude and low latitude circulations.

  17. Pressure-dependent decomposition kinetics of the energetic material HMX up to 3.6 GPa.

    PubMed

    Glascoe, Elizabeth A; Zaug, Joseph M; Burnham, Alan K

    2009-12-03

    The effect of pressure on the global thermal decomposition rate of the energetic material HMX was studied. HMX was precompressed in a diamond anvil cell (DAC) and heated at various rates. The parent species population was monitored as a function of time and temperature using Fourier transform infrared (FTIR) spectroscopy. Global decomposition rates were determined by fitting the fraction reacted to the extended Prout-Tompkins nucleation-growth model and the Friedman isoconversional method. The results of these experiments and analyses indicate that pressure accelerates the decomposition at low-to-moderate pressures (i.e., between ambient pressure and 0.1 GPa) and decelerates the decomposition at higher pressures. The acceleration is attributed to pressure-enhanced autocatalysis, whereas the deceleration at high pressures is attributed to pressure inhibiting bond homolysis step(s) that would result in an increase in volume. These results indicate that the thermally induced decomposition kinetics of both the beta- and delta-polymorphs of HMX are sensitive to pressure.
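
    A minimal sketch of the kind of nucleation-growth fit mentioned in the abstract, assuming an extended Prout-Tompkins rate law of the form dA/dt = k (1 - A)^n (A + q)^m for the fraction reacted A. The rate constants, exponents and synthetic data are invented for illustration; the paper's exact model variant and FTIR-derived data may differ.

        # Fit synthetic fraction-reacted data to an extended
        # Prout-Tompkins model by integrating the rate law.
        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import curve_fit

        def alpha_of_t(t, k, n, m, q=1e-3):
            rhs = lambda _, a: [k * (1 - a[0]) ** n * (a[0] + q) ** m]
            sol = solve_ivp(rhs, (t[0], t[-1]), [1e-6], t_eval=t, rtol=1e-8)
            return sol.y[0]

        t = np.linspace(0, 100, 60)
        rng = np.random.default_rng(2)
        data = alpha_of_t(t, 0.2, 1.0, 0.8) + 0.01 * rng.standard_normal(60)
        # fits k, n, m (q keeps its default); p0 sets the parameter count
        popt, _ = curve_fit(alpha_of_t, t, data, p0=[0.1, 1.0, 0.5], maxfev=5000)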

  18. Elastic and acoustic wavefield decompositions and application to reverse time migrations

    NASA Astrophysics Data System (ADS)

    Wang, Wenlong

    P- and S-waves coexist in elastic wavefields, and separation between them is an essential step in elastic reverse-time migration (RTM). Unlike the traditional separation methods that use curl and divergence operators, which do not preserve the wavefield vector component information, we propose and compare two vector decomposition methods, which preserve the same vector components that exist in the input elastic wavefield. The amplitude and phase information is automatically preserved, so no amplitude or phase corrections are required. The decoupled propagation method is extended from elastic to viscoelastic wavefields. To use the decomposed P and S vector wavefields and generate PP and PS images, we create a new 2D migration context for isotropic, elastic RTM that includes P/S vector decomposition; the propagation directions of both incident and reflected P- and S-waves are calculated directly from the stress and particle-velocity definitions of the decomposed P- and S-wave Poynting vectors. An excitation-amplitude imaging condition that scales the receiver wavelet by the source vector magnitude then produces angle-dependent images of PP and PS reflection coefficients with the correct polarities, polarization, and amplitudes. This simplifies the process of obtaining PP and PS angle-domain common-image gathers (ADCIGs); it is less effort to generate ADCIGs from vector data than from scalar data. Besides P- and S-wave decomposition, separation of up- and down-going waves is also part of the processing of multi-component recorded data and propagating wavefields. A complex-trace-based up/down separation approach is extended from acoustic to elastic wavefields and combined with P- and S-wave decomposition by decoupled propagation. This eliminates the need for a Fourier transform over time, thereby significantly reducing the storage cost and improving computational efficiency. Wavefield decomposition is applied to both synthetic elastic VSP data and propagating wavefield snapshots. Poynting vectors obtained from the particle-velocity and stress fields after P/S and up/down decomposition are much more accurate than those without it. The up/down separation algorithm is also applicable in acoustic RTM, where both the (forward-time extrapolated) source and (reverse-time extrapolated) receiver wavefields are decomposed into up-going and down-going parts. Together with the crosscorrelation imaging condition, four images (down-up, up-down, up-up and down-down) are generated, which facilitates analysis of the artifacts and of the imaging ability of each. Artifacts may exist in all the decomposed images, but their positions and types differ. The causes of the artifacts in the different images are explained and illustrated with sketches and numerical tests.

  19. Regularization of nonlinear decomposition of spectral x-ray projection images.

    PubMed

    Ducros, Nicolas; Abascal, Juan Felipe Perez-Juste; Sixou, Bruno; Rit, Simon; Peyrin, Françoise

    2017-09-01

    Exploiting the x-ray measurements obtained in different energy bins, spectral computed tomography (CT) has the ability to recover the 3-D description of a patient in a material basis. This may be achieved by solving two subproblems, namely the material decomposition and the tomographic reconstruction problems. In this work, we address the material decomposition of spectral x-ray projection images, which is a nonlinear ill-posed problem. Our main contribution is to introduce a material-dependent spatial regularization in the projection domain. The decomposition problem is solved iteratively using a Gauss-Newton algorithm that can benefit from fast linear solvers. A Matlab implementation is available online. The proposed regularized weighted least squares Gauss-Newton algorithm (RWLS-GN) is validated on numerical simulations of a thorax phantom made of up to five materials (soft tissue, bone, lung, adipose tissue, and gadolinium), which is scanned with a 120 kV source and imaged by a 4-bin photon counting detector. To evaluate the performance of the proposed method, different scenarios are created by varying the number of incident photons, the concentration of the marker and the configuration of the phantom. The RWLS-GN method is compared to the reference maximum likelihood Nelder-Mead algorithm (ML-NM). The convergence of the proposed method and its dependence on the regularization parameter are also studied. We show that material decomposition is feasible with the proposed method and that it converges in a few iterations. Material decomposition with ML-NM was very sensitive to noise, leading to decomposed images highly affected by noise and artifacts, even in the best-case scenario. The proposed method was less sensitive to noise and improved the contrast-to-noise ratio of the gadolinium image. Results were superior to those provided by ML-NM in terms of image quality, and decomposition was 70 times faster. For the assessed experiments, material decomposition was possible with the proposed method when the number of incident photons was equal to or larger than 10^5 and when the marker concentration was equal to or larger than 0.03 g·cm^-3. The proposed method efficiently solves the nonlinear decomposition problem for spectral CT, which opens up new possibilities such as material-specific regularization in the projection domain and a parallelization framework in which projections are solved in parallel. © 2017 American Association of Physicists in Medicine.
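
    The structure RWLS-GN exploits can be sketched generically: a Gauss-Newton loop for min_x ||F(x) - y||_W^2 + lam ||L x||^2, where each step solves a small linear system. The exponential toy forward model below stands in for the true spectral forward model mapping material images to expected photon counts; it is an assumption for illustration only.

        # Generic Gauss-Newton for a regularized weighted least squares problem.
        import numpy as np

        def gauss_newton(F, J, y, x0, W, L, lam=1e-2, iters=10):
            x = x0.copy()
            for _ in range(iters):
                r = F(x) - y
                Jx = J(x)
                A = Jx.T @ W @ Jx + lam * L.T @ L   # normal equations
                g = Jx.T @ W @ r + lam * L.T @ (L @ x)
                x -= np.linalg.solve(A, g)          # fast linear solver step
            return x

        # toy Beer-Lambert-type attenuation model exp(-M @ x)
        M = np.random.default_rng(3).standard_normal((6, 3))
        F = lambda x: np.exp(-M @ x)
        J = lambda x: -M * np.exp(-M @ x)[:, None]  # Jacobian of F
        x_true = np.array([0.5, 1.0, 0.2])
        y = F(x_true)
        x = gauss_newton(F, J, y, np.zeros(3), np.eye(6), np.eye(3))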

  20. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    NASA Astrophysics Data System (ADS)

    Cunningham, Laura J.; Mulholland, Anthony J.; Tant, Katherine M. M.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-04-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm.

  1. The detection of flaws in austenitic welds using the decomposition of the time-reversal operator

    PubMed Central

    Cunningham, Laura J.; Mulholland, Anthony J.; Gachagan, Anthony; Harvey, Gerry; Bird, Colin

    2016-01-01

    The non-destructive testing of austenitic welds using ultrasound plays an important role in the assessment of the structural integrity of safety critical structures. The internal microstructure of these welds is highly scattering and can lead to the obscuration of defects when investigated by traditional imaging algorithms. This paper proposes an alternative objective method for the detection of flaws embedded in austenitic welds based on the singular value decomposition of the time-frequency domain response matrices. The distribution of the singular values is examined in the cases where a flaw exists and where there is no flaw present. A lower threshold on the singular values, specific to austenitic welds, is derived which, when exceeded, indicates the presence of a flaw. The detection criterion is successfully implemented on both synthetic and experimental data. The datasets arising from welds containing a flaw are further interrogated using the decomposition of the time-reversal operator (DORT) method and the total focusing method (TFM), and it is shown that images constructed via the DORT algorithm typically exhibit a higher signal-to-noise ratio than those constructed by the TFM algorithm. PMID:27274683
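
    A minimal numerical sketch of the detection criterion: take the SVD of the array response matrix at each frequency and flag a flaw when the leading singular value exceeds a threshold. The weld-specific threshold in the paper is derived analytically; here a simple percentile of flaw-free calibration runs stands in as an assumption.

        # Leading-singular-value flaw detection on response matrices.
        import numpy as np

        def leading_singular_values(K_list):
            """K_list: iterable of (n_rx, n_tx) response matrices."""
            return np.array([np.linalg.svd(K, compute_uv=False)[0]
                             for K in K_list])

        rng = np.random.default_rng(4)
        # flaw-free calibration set (pure scattering noise, stand-in data)
        noise_runs = [rng.standard_normal((16, 16)) for _ in range(200)]
        threshold = np.percentile(leading_singular_values(noise_runs), 99)

        # test matrix: noise plus a strong rank-one "flaw" contribution
        K_test = (rng.standard_normal((16, 16))
                  + 5.0 * np.outer(rng.standard_normal(16),
                                   rng.standard_normal(16)))
        flaw_detected = leading_singular_values([K_test])[0] > threshold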

  2. A time domain frequency-selective multivariate Granger causality approach.

    PubMed

    Leistritz, Lutz; Witte, Herbert

    2016-08-01

    The investigation of effective connectivity is one of the major topics in computational neuroscience aimed at understanding the interaction between spatially distributed neuronal units of the brain. Thus, a wide variety of methods has been developed during the last decades to investigate functional and effective connectivity in multivariate systems. Their spectrum ranges from model-based to model-free approaches, with a clear separation into time-domain and frequency-domain methods. In this simulation study we present a novel time domain approach based on Granger's principle of predictability, which allows frequency-selective consideration of directed interactions. It is based on a comparison of the prediction errors of multivariate autoregressive models fitted to systematically modified time series. These modifications are based on signal decompositions, which enable a targeted cancellation of specific signal components with specific spectral properties. Depending on the embedded signal decomposition method, a frequency-selective or data-driven signal-adaptive Granger Causality Index may be derived.
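
    The principle can be sketched as follows: compare AR prediction errors for the original source series and for a version from which one frequency band has been cancelled. The paper's signal decompositions are more sophisticated; a simple band-stop filter stands in here, and all series, orders and bands are illustrative assumptions.

        # Frequency-selective Granger-style index via band cancellation.
        import numpy as np
        from scipy.signal import butter, filtfilt

        def ar_residual_var(y, X, order=5):
            """Residual variance of an AR fit of y on its own past and X's."""
            rows = []
            for k in range(1, order + 1):
                rows.append(y[order - k:-k])
                rows.extend(x[order - k:-k] for x in X)
            A = np.column_stack(rows)
            target = y[order:]
            coef, *_ = np.linalg.lstsq(A, target, rcond=None)
            return np.var(target - A @ coef)

        rng = np.random.default_rng(5)
        x = rng.standard_normal(2000)
        y = np.roll(x, 1) + 0.5 * rng.standard_normal(2000)  # x drives y

        b, a = butter(4, [0.2, 0.3], btype='bandstop')
        x_cancelled = filtfilt(b, a, x)          # cancel one band of x

        full = ar_residual_var(y, [x])
        restricted = ar_residual_var(y, [x_cancelled])
        gci_band = np.log(restricted / full)     # > 0: that band of x helps predict y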

  3. Surrogate models for sheet metal stamping problem based on the combination of proper orthogonal decomposition and radial basis function

    NASA Astrophysics Data System (ADS)

    Dang, Van Tuan; Lafon, Pascal; Labergere, Carl

    2017-10-01

    In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Functions (RBF) is proposed to build a surrogate model based on the Benchmark Springback 3D bending problem from the Numisheet 2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical Design of Experiments (DoE) with a full-factorial layout samples the parameter space; the sample points serve as input data for finite element method (FEM) numerical simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resulting high-fidelity model is reduced through the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviation matrix of the final displacement fields of the FEM numerical simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF is used to interpolate these POD coefficients over the parameter space. Finally, the presented POD-RBF approach enables shape optimization to be performed with high accuracy.
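
    A compact sketch of the POD-RBF pipeline under toy assumptions (the analytic "snapshots" below replace the FEM displacement fields, and the parameter ranges are arbitrary): SVD of the snapshot deviation matrix gives the basis, snapshots are projected to POD coefficients, and an RBF interpolator maps design parameters to those coefficients.

        # POD basis from snapshots + RBF interpolation of POD coefficients.
        import numpy as np
        from scipy.interpolate import RBFInterpolator

        rng = np.random.default_rng(6)
        params = rng.uniform(0, 1, (30, 2))        # (die radius, holder force), scaled
        x = np.linspace(0, 1, 200)
        # placeholder "displacement fields", one snapshot per design point
        snapshots = np.array([p[1] * np.sin(3 * p[0] + x) for p in params])

        mean = snapshots.mean(axis=0)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        r = 5                                      # truncated POD basis size
        basis = Vt[:r]                             # (r, 200) basis functions
        coeffs = (snapshots - mean) @ basis.T      # POD coefficients per snapshot

        rbf = RBFInterpolator(params, coeffs)      # coefficients over parameter space
        p_new = np.array([[0.4, 0.7]])
        field_pred = mean + rbf(p_new) @ basis     # surrogate prediction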

  4. Application of reiteration of Hankel singular value decomposition in quality control

    NASA Astrophysics Data System (ADS)

    Staniszewski, Michał; Skorupa, Agnieszka; Boguszewicz, Łukasz; Michalczuk, Agnieszka; Wereszczyński, Kamil; Wicher, Magdalena; Konopka, Marek; Sokół, Maria; Polański, Andrzej

    2017-07-01

    Medical centres are obliged to store past medical records, including the results of quality assurance (QA) tests of the medical equipment, which is especially useful for checking the reproducibility of medical devices and procedures. Analysis of multivariate time series is an important part of quality control of NMR data. In this work we propose an anomaly detection tool based on the reiteration of the Hankel singular value decomposition method. The presented method was compared with external software, and the authors obtained comparable results.
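
    An illustrative sketch of a Hankel-SVD anomaly score for a QA time series, under simple assumptions (window length, subspace rank and the synthetic baseline are arbitrary, and the paper's reiteration scheme is not reproduced): embed the series in a Hankel matrix, keep the leading singular subspace as the "normal" model, and score new windows by their residual outside it.

        # Hankel embedding + SVD subspace as a normality model.
        import numpy as np

        def hankel(ts, window):
            return np.column_stack([ts[i:i + window]
                                    for i in range(len(ts) - window + 1)])

        rng = np.random.default_rng(7)
        baseline = np.sin(np.linspace(0, 40, 400)) + 0.05 * rng.standard_normal(400)
        H = hankel(baseline, window=50)
        U, s, _ = np.linalg.svd(H, full_matrices=False)
        P = U[:, :3] @ U[:, :3].T                  # projector on normal subspace

        def anomaly_score(segment):
            resid = segment - P @ segment          # energy outside the subspace
            return np.linalg.norm(resid) / np.linalg.norm(segment)

        print(anomaly_score(baseline[:50]))            # low: matches baseline
        print(anomaly_score(rng.standard_normal(50)))  # high: anomalous window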

  5. An examination of the concept of driving point receptance

    NASA Astrophysics Data System (ADS)

    Sheng, X.; He, Y.; Zhong, T.

    2018-04-01

    In the field of vibration, driving point receptance is a well-established and widely applied concept. However, as demonstrated in this paper, when a driving point receptance is calculated using the finite element (FE) method with solid elements, it does not converge as the FE mesh becomes finer, suggesting that there is a singularity. Hence, the concept of driving point receptance deserves a rigorous examination. In this paper, it is first shown that, for a point harmonic force applied on the surface of an elastic half-space, the Boussinesq formula can be applied to calculate the displacement amplitude of the surface if the response point is sufficiently close to the load. Secondly, by applying the Betti reciprocal theorem, it is shown that the displacement of an elastic body near a point harmonic force can be decomposed into two parts, the first being the displacement of an elastic half-space. This decomposition is useful, since it provides a solid basis for the introduction of a contact spring between a wheel and a rail in interaction. However, according to the Boussinesq formula, this decomposition also leads to the conclusion that a driving point receptance is infinite (singular) and would be undefinable. Nevertheless, driving point receptances have been calculated using different methods. Since the singularity identified in this paper was not previously appreciated, no account was taken of it in these calculations; the validity of these calculation methods must therefore be examined. This constitutes the third part of the paper. As the final development of the paper, the above decomposition is utilised to define and determine the driving point receptances required for dealing with wheel/rail interactions.

  6. Horizontal decomposition of data table for finding one reduct

    NASA Astrophysics Data System (ADS)

    Hońko, Piotr

    2018-04-01

    Attribute reduction, one of the most essential tasks in rough set theory, is a challenge for data that does not fit in the available memory. This paper proposes new definitions of attribute reduction using horizontal data decomposition. Algorithms for computing a superreduct and subsequently exact reducts of a data table are developed and experimentally verified. In the proposed approach, the size of the subtables obtained during the decomposition can be arbitrarily small. Reducts of the subtables are computed independently from one another using any heuristic method for finding one reduct. Compared with standard attribute reduction methods, the proposed approach can produce superreducts that usually differ only slightly from an exact reduct, and it needs comparable time and much less memory to reduce the attribute set. The method proposed for removing unnecessary attributes from superreducts executes relatively fast for bigger databases.

  7. Killing-Yano tensors in spaces admitting a hypersurface orthogonal Killing vector

    NASA Astrophysics Data System (ADS)

    Garfinkle, David; Glass, E. N.

    2013-03-01

    Methods are presented for finding Killing-Yano tensors, conformal Killing-Yano tensors, and conformal Killing vectors in spacetimes with a hypersurface orthogonal Killing vector. These methods are similar to a method developed by the authors for finding Killing tensors. In all cases one decomposes both the tensor and the equation it satisfies into pieces along the Killing vector and pieces orthogonal to the Killing vector. Solving the separate equations that result from this decomposition requires less computing than integrating the original equation. In each case, examples are given to illustrate the method.

  8. Resolvent estimates in homogenisation of periodic problems of fractional elasticity

    NASA Astrophysics Data System (ADS)

    Cherednichenko, Kirill; Waurick, Marcus

    2018-03-01

    We provide operator-norm convergence estimates for solutions to a time-dependent equation of fractional elasticity in one spatial dimension, with rapidly oscillating coefficients that represent the material properties of a viscoelastic composite medium. Assuming periodicity in the coefficients, we prove operator-norm convergence estimates for an operator fibre decomposition obtained by applying to the original fractional elasticity problem the Fourier-Laplace transform in time and Gelfand transform in space. We obtain estimates on each fibre that are uniform in the quasimomentum of the decomposition and in the period of oscillations of the coefficients as well as quadratic with respect to the spectral variable. On the basis of these uniform estimates we derive operator-norm-type convergence estimates for the original fractional elasticity problem, for a class of sufficiently smooth densities of applied forces.

  9. Evolutionary Algorithm Based Feature Optimization for Multi-Channel EEG Classification.

    PubMed

    Wang, Yubo; Veluvolu, Kalyana C

    2017-01-01

    Most BCI systems that rely on EEG signals employ Fourier-based methods for time-frequency decomposition for feature extraction. The band-limited multiple Fourier linear combiner is well suited for such band-limited signals due to its real-time applicability. Despite the improved performance of these techniques in two-channel settings, their application to multiple-channel EEG is challenging and not straightforward. As more channels become available, a spatial filter is required to eliminate noise and preserve the useful information. Moreover, multiple-channel EEG also adds high dimensionality to the frequency feature space, so feature selection is required to stabilize the performance of the classifier. In this paper, we develop a new method based on an Evolutionary Algorithm (EA) to solve these two problems simultaneously. The real-valued EA encodes both the spatial filter estimates and the feature selection into its solution and optimizes them with respect to the classification error. Three Fourier-based designs are tested in this paper. Our results show that the combination of the Fourier-based method with the covariance matrix adaptation evolution strategy (CMA-ES) has the best overall performance.

  10. Linear decomposition approach for a class of nonconvex programming problems.

    PubMed

    Shen, Peiping; Wang, Chunfeng

    2017-01-01

    This paper presents a linear decomposition approach for a class of nonconvex programming problems by dividing the input space into polynomially many grids. It shows that under certain assumptions the original problem can be transformed and decomposed into a polynomial number of equivalent linear programming subproblems. By solving a series of linear programming subproblems corresponding to those grid points, we can obtain a near-optimal solution of the original problem. Compared to existing results in the literature, the proposed algorithm does not require the assumptions of quasi-concavity and differentiability of the objective function, and it offers an interesting approach to solving the problem with a reduced running time.

  11. Iterative filtering decomposition based on local spectral evolution kernel

    PubMed Central

    Wang, Yang; Wei, Guo-Wei; Yang, Siyang

    2011-01-01

    Synthesizing information, achieving understanding, and deriving insight from increasingly massive, time-varying, noisy and possibly conflicting data sets are some of the most challenging tasks of the present information age. Traditional technologies, such as the Fourier transform and wavelet multi-resolution analysis, are inadequate to handle all of the above-mentioned tasks. The empirical mode decomposition (EMD) has emerged as a new powerful tool for resolving many challenging problems in data processing and analysis. Recently, an iterative filtering decomposition (IFD) has been introduced to address the stability and efficiency problems of the EMD. Another data analysis technique is the local spectral evolution kernel (LSEK), which provides a near-perfect low-pass filter with desirable time-frequency localization. The present work utilizes the LSEK to further stabilize the IFD, and offers an efficient, flexible and robust scheme for information extraction, complexity reduction, and signal and image understanding. The performance of the present LSEK-based IFD is intensively validated over a wide range of data processing tasks, including mode decomposition, analysis of time-varying data, information extraction from nonlinear dynamic systems, etc. The utility, robustness and usefulness of the proposed LSEK-based IFD are demonstrated via a large number of applications, such as the analysis of stock market data, the decomposition of ocean wave magnitudes, the understanding of physiologic signals and information recovery from noisy images. The performance of the proposed method is compared with that of existing methods in the literature. Our results indicate that the LSEK-based IFD improves both the efficiency and the stability of conventional EMD algorithms. PMID:22350559

  12. Exploring Patterns of Soil Organic Matter Decomposition with Students and the Public Through the Global Decomposition Project (GDP)

    NASA Astrophysics Data System (ADS)

    Wood, J. H.; Natali, S.

    2014-12-01

    The Global Decomposition Project (GDP) is a program designed to introduce and educate students and the general public about soil organic matter and decomposition through a standardized protocol for collecting, reporting, and sharing data. This easy-to-use, hands-on activity focuses on questions such as "How do environmental conditions control decomposition of organic matter in soil?" and "Why do some areas accumulate organic matter and others do not?" Soil organic matter is important to local ecosystems because it affects soil structure, regulates soil moisture and temperature, and provides energy and nutrients to soil organisms. It is also important globally because it stores a large amount of carbon, and when microbes "eat", or decompose, organic matter they release greenhouse gases such as carbon dioxide and methane into the atmosphere, which affects the earth's climate. The protocol describes a commonly used method to measure decomposition using a paper made of cellulose, a component of plant cell walls. Participants can receive pre-made cellulose decomposition bags, or make decomposition bags using instructions in the protocol and easily obtained materials (e.g., window screen and lignin-free paper). Individual results will be shared with all participants and the broader public through an online database. We will present decomposition bag results from a research site in Alaskan tundra, as well as from a middle-school-student-led experiment in California. The GDP demonstrates how scientific methods can be extended to educate broader audiences, while at the same time data collected by students and the public can provide new insight into global patterns of soil decomposition. The GDP provides a pathway for scientists and educators to interact and reach meaningful education and research goals.

  13. The identification of multi-cave combinations in carbonate reservoirs based on sparsity constraint inverse spectral decomposition

    NASA Astrophysics Data System (ADS)

    Li, Qian; Di, Bangrang; Wei, Jianxin; Yuan, Sanyi; Si, Wenpeng

    2016-12-01

    Sparsity constraint inverse spectral decomposition (SCISD) is a time-frequency analysis method based on the convolution model, in which minimizing the l1 norm of the time-frequency spectrum of the seismic signal is adopted as a sparsity constraint term. The SCISD method has higher time-frequency resolution and a more concentrated time-frequency distribution than conventional spectral decomposition methods such as the short-time Fourier transform (STFT), the continuous wavelet transform (CWT) and the S-transform. Owing to these features, the SCISD method has gradually been adopted for low-frequency anomaly detection, horizon identification and random noise reduction in sandstone and shale reservoirs. However, it has not yet been used in carbonate reservoir prediction. The carbonate fractured-vuggy reservoir is the major hydrocarbon reservoir in the Halahatang area of the Tarim Basin, north-west China. Without reasonable predictions of the types of multi-cave combinations, the seismic responses of multi-cave combinations may be interpreted incorrectly, resulting in large errors in reserve estimation for the carbonate reservoir. In this paper, the energy and phase spectra of the SCISD are applied to identify multi-cave combinations in carbonate reservoirs. Examples with physical model data and real seismic data illustrate that the SCISD method can detect the combination types and the number of caves of multi-cave combinations, and can provide a favourable basis for subsequent reservoir prediction and quantitative estimation of cave-type carbonate reservoir volume.
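
    The l1-constrained inversion at the heart of such methods can be sketched with a simple iterative soft-thresholding algorithm (ISTA) for min_x (1/2)||s - D x||_2^2 + lam ||x||_1. The cosine dictionary below is a stand-in assumption for the wavelet-convolution dictionary used by SCISD.

        # ISTA for a sparse spectral decomposition of a toy trace.
        import numpy as np

        def ista(D, s, lam=0.1, iters=200):
            L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of grad
            x = np.zeros(D.shape[1])
            for _ in range(iters):
                g = x + D.T @ (s - D @ x) / L      # gradient step
                x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
            return x

        n = 256
        t = np.arange(n)
        freqs = np.arange(1, 40)
        D = np.cos(2 * np.pi * np.outer(t, freqs) / n)   # simple cosine dictionary
        s = (D[:, 5] + 0.5 * D[:, 17]
             + 0.05 * np.random.default_rng(12).standard_normal(n))
        x = ista(D, s)
        print(np.flatnonzero(np.abs(x) > 0.1))           # sparse spectrum support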

  14. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN.

    PubMed

    Liu, Chang; Cheng, Gang; Chen, Xihui; Pang, Yusong

    2018-05-11

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears.

  15. Planetary Gears Feature Extraction and Fault Diagnosis Method Based on VMD and CNN

    PubMed Central

    Cheng, Gang; Chen, Xihui

    2018-01-01

    Given local weak feature information, a novel feature extraction and fault diagnosis method for planetary gears based on variational mode decomposition (VMD), singular value decomposition (SVD), and convolutional neural network (CNN) is proposed. VMD was used to decompose the original vibration signal to mode components. The mode matrix was partitioned into a number of submatrices and local feature information contained in each submatrix was extracted as a singular value vector using SVD. The singular value vector matrix corresponding to the current fault state was constructed according to the location of each submatrix. Finally, by training a CNN using singular value vector matrices as inputs, planetary gear fault state identification and classification was achieved. The experimental results confirm that the proposed method can successfully extract local weak feature information and accurately identify different faults. The singular value vector matrices of different fault states have a distinct difference in element size and waveform. The VMD-based partition extraction method is better than ensemble empirical mode decomposition (EEMD), resulting in a higher CNN total recognition rate of 100% with fewer training times (14 times). Further analysis demonstrated that the method can also be applied to the degradation recognition of planetary gears. Thus, the proposed method is an effective feature extraction and fault diagnosis technique for planetary gears. PMID:29751671
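
    The feature construction step can be sketched as follows: each VMD mode matrix is cut into submatrices, and the singular values of every submatrix form a local feature vector; the stacked vectors become the CNN input. The VMD step itself is assumed to be available elsewhere (e.g., from the vmdpy package) and is replaced by a placeholder here, as are the array shapes.

        # Partitioned-SVD feature matrix from VMD mode components.
        import numpy as np

        def sv_feature_matrix(modes, n_segments):
            """modes: (n_modes, n_samples) array of VMD mode components."""
            segs = np.array_split(modes, n_segments, axis=1)
            # one singular value vector per submatrix, stacked row-wise
            return np.vstack([np.linalg.svd(s, compute_uv=False) for s in segs])

        rng = np.random.default_rng(9)
        modes = rng.standard_normal((4, 4096))         # placeholder for VMD output
        features = sv_feature_matrix(modes, n_segments=16)   # (16, 4) CNN input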

  16. Real-time determination of laser beam quality by modal decomposition.

    PubMed

    Schmidt, Oliver A; Schulze, Christian; Flamm, Daniel; Brüning, Robert; Kaiser, Thomas; Schröter, Siegmund; Duparré, Michael

    2011-03-28

    We present a real-time method to determine the beam propagation ratio M2 of laser beams. The all-optical measurement of modal amplitudes yields M2 parameters conforming to the ISO standard method. The experimental technique is simple and fast, which allows laser beams to be investigated under conditions inaccessible to other methods.

  17. Bearing Fault Diagnosis Based on Statistical Locally Linear Embedding

    PubMed Central

    Wang, Xiang; Zheng, Yuan; Zhao, Zhenzhou; Wang, Jinping

    2015-01-01

    Fault diagnosis is essentially a kind of pattern recognition. The measured signal samples usually lie on nonlinear low-dimensional manifolds embedded in the high-dimensional signal space, so how to implement feature extraction and dimensionality reduction and improve recognition performance is a crucial task. In this paper, a novel machinery fault diagnosis approach based on a statistical locally linear embedding (S-LLE) algorithm, an extension of LLE that exploits the fault class label information, is proposed. The approach first extracts intrinsic manifold features from the high-dimensional feature vectors, which are obtained from vibration signals by feature extraction in the time domain and frequency domain and by empirical mode decomposition (EMD), and then translates the complex mode space into a salient low-dimensional feature space using the manifold learning algorithm S-LLE, which outperforms other feature reduction methods such as PCA, LDA and LLE. Finally, pattern classification and fault diagnosis by a classifier are carried out easily and rapidly in the reduced feature space. Rolling bearing fault signals are used to validate the proposed fault diagnosis approach. The results indicate that the proposed approach markedly improves the classification performance of fault pattern recognition and outperforms the other traditional approaches. PMID:26153771
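
    A hedged sketch of the pipeline's final stages, using plain LLE from scikit-learn as a stand-in (S-LLE additionally exploits class labels, which standard LLE does not): reduce high-dimensional fault features to a low-dimensional manifold, then classify. The synthetic features and split below are assumptions for illustration.

        # Manifold reduction + classification on placeholder fault features.
        import numpy as np
        from sklearn.manifold import LocallyLinearEmbedding
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(11)
        # placeholder feature vectors (time/frequency/EMD statistics per sample)
        X = np.vstack([rng.normal(0, 1, (50, 40)), rng.normal(3, 1, (50, 40))])
        y = np.array([0] * 50 + [1] * 50)              # two fault classes

        lle = LocallyLinearEmbedding(n_components=5, n_neighbors=12)
        Z = lle.fit_transform(X)                       # manifold features
        clf = KNeighborsClassifier(n_neighbors=5).fit(Z[::2], y[::2])
        print(clf.score(Z[1::2], y[1::2]))             # held-out accuracy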

  18. Wavelet domain textual coding of Ottoman script images

    NASA Astrophysics Data System (ADS)

    Gerek, Oemer N.; Cetin, Enis A.; Tewfik, Ahmed H.

    1996-02-01

    Image coding using the wavelet transform, DCT, and similar transform techniques is well established. On the other hand, these coding methods neither take into account the special characteristics of the images in a database nor are they suitable for fast database search. In this paper, the digital archiving of Ottoman printings is considered. Ottoman documents are printed in Arabic letters. Witten et al. describe a scheme based on finding the characters in binary document images and encoding the positions of the repeated characters. This method efficiently compresses document images and is suitable for database search, but it cannot be applied to Ottoman or Arabic documents, as the concept of a character is different in Ottoman and Arabic. Typically, one has to deal with compound structures consisting of a group of letters, so the matching criterion must be defined on those compound structures. Furthermore, the text images are gray-tone or color images for Ottoman scripts, for reasons described in the paper. In our method the compound structure matching is carried out in the wavelet domain, which reduces the search space and increases the compression ratio. In addition to the wavelet transform, which corresponds to a linear subband decomposition, we also use a nonlinear subband decomposition. The filters in the nonlinear subband decomposition have the property of preserving edges in the low-resolution subband image.

  19. Autonomous micromotor based on catalytically pneumatic behavior of balloon-like MnO(x)-graphene crumples.

    PubMed

    Chen, Xueli; Wu, Guan; Lan, Tian; Chen, Wei

    2014-07-11

    A novel autonomous micromotor, based on catalytically pneumatic behaviour of balloon-like MnOx-graphene crumples, has been synthesized via an ultrasonic spray pyrolysis method. Through catalytic decomposition of H2O2 into O2, the gas accumulated in a confined space and was released to generate a strong force to push the micromotor.

  20. A Thin Codimension-One Decomposition of the Hilbert Cube

    ERIC Educational Resources Information Center

    Phon-On, Aniruth

    2010-01-01

    For cell-like upper semicontinuous (usc) decompositions "G" of finite dimensional manifolds "M", the decomposition space "M/G" turns out to be an ANR provided "M/G" is finite dimensional ([Dav07], page 129). Furthermore, if "M/G" is finite dimensional and has the Disjoint Disks Property (DDP), then "M/G" is homeomorphic to "M" ([Dav07], page 181).…

  1. Elegant Face-Down Liquid-Space-Restricted Deposition of CsPbBr3 Films for Efficient Carbon-Based All-Inorganic Planar Perovskite Solar Cells.

    PubMed

    Teng, Pengpeng; Han, Xiaopeng; Li, Jiawei; Xu, Ya; Kang, Lei; Wang, Yangrunqian; Yang, Ying; Yu, Tao

    2018-03-21

    It is a great challenge to obtain uniform films of bromide-rich perovskites such as CsPbBr3 in the two-step sequential solution process (two-step method), mainly because the precursor films decompose in solution. Herein, we demonstrate a novel and elegant face-down liquid-space-restricted deposition to inhibit this decomposition and fabricate high-quality CsPbBr3 perovskite films. The method is highly reproducible, and the surface of the films is smooth and uniform with an average grain size of 860 nm. As a consequence, planar perovskite solar cells (PSCs) without a hole-transport layer, based on CsPbBr3 and carbon electrodes, exhibit enhanced power conversion efficiency (PCE) along with a high open-circuit voltage (VOC). The champion device achieved a PCE of 5.86% with a VOC of 1.34 V, which to our knowledge is the highest-performing planar CsPbBr3 PSC. Our results suggest an efficient and low-cost route to fabricate high-quality planar all-inorganic PSCs.

  2. Crossing symmetry in alpha space

    NASA Astrophysics Data System (ADS)

    Hogervorst, Matthijs; van Rees, Balt C.

    2017-11-01

    We initiate the study of the conformal bootstrap using Sturm-Liouville theory, specializing to four-point functions in one-dimensional CFTs. We do so by decomposing conformal correlators using a basis of eigenfunctions of the Casimir which are labeled by a complex number α. This leads to a systematic method for computing conformal block decompositions. Analyzing bootstrap equations in alpha space turns crossing symmetry into an eigenvalue problem for an integral operator K. The operator K is closely related to the Wilson transform, and some of its eigenfunctions can be found in closed form.

  3. A methodology to find the elementary landscape decomposition of combinatorial optimization problems.

    PubMed

    Chicano, Francisco; Whitley, L Darrell; Alba, Enrique

    2011-01-01

    A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.

  4. Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar

    DOE PAGES

    Sen, Satyabrata

    2015-08-04

    We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite matrix and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. The low-rank matrix decomposition based solution requires only as many secondary measurements as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
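
    The first approach can be sketched with a simple eigenvalue-shrinkage split of a sample covariance into a low-rank positive semidefinite part (clutter) plus a diagonal part (noise). The convex RMP machinery of the paper is replaced here by a plain hard/soft threshold; the threshold tau and matrix sizes are illustrative assumptions.

        # Low-rank PSD + diagonal split of a sample covariance matrix.
        import numpy as np

        def lowrank_plus_diag(S, tau):
            vals, vecs = np.linalg.eigh(S)
            shrunk = np.maximum(vals - tau, 0.0)   # matrix shrinkage operator
            L = (vecs * shrunk) @ vecs.T           # low-rank PSD component
            D = np.diag(np.diag(S - L))            # remainder on the diagonal
            return L, D

        rng = np.random.default_rng(8)
        G = rng.standard_normal((64, 4))
        S = G @ G.T / 4 + np.diag(rng.uniform(0.5, 1.5, 64))  # rank-4 clutter + noise
        L, D = lowrank_plus_diag(S, tau=2.0)
        print(np.linalg.matrix_rank(L))            # ~4, the clutter rank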

  5. Acceleration of aircraft-level Traffic Flow Management

    NASA Astrophysics Data System (ADS)

    Rios, Joseph Lucio

    This dissertation describes novel approaches to solving large-scale, high-fidelity, aircraft-level Traffic Flow Management scheduling problems. Depending on the methods employed, solving these problems to optimality can take longer than the length of the planning horizon in question. Research in this domain typically focuses on the quality of the modeling used to describe the problem and the benefits achieved from the optimized solution, often treating computational aspects as secondary or tertiary. The work presented here takes the complementary view and considers the computational aspect as the primary concern. To this end, a previously published model for solving this Traffic Flow Management scheduling problem is used as the starting point for this study. The model proposed by Bertsimas and Stock-Patterson is a binary integer program taking into account all major resource capacities and the trajectories of each flight to decide which flights should be held in which resource for what amount of time in order to satisfy all capacity requirements. For large instances, the solve time using state-of-the-art solvers is prohibitive for use within a potential decision support tool. With this dissertation, however, it will be shown that solving can be achieved in reasonable time for instances of real-world size. Five other techniques developed and tested for this dissertation will be described in detail. These are heuristic methods that provide good results; performance is measured in terms of runtime and "optimality gap." We then describe the most successful method presented in this dissertation: Dantzig-Wolfe Decomposition. Results indicate that a parallel implementation of Dantzig-Wolfe Decomposition optimally solves the original problem in much reduced time and with better integrality and a smaller optimality gap than any of the heuristic methods or state-of-the-art commercial solvers. The solution quality improves in every measurable way as the number of subproblems solved in parallel increases. A maximal decomposition provides the best results of any method tested. The convergence qualities of Dantzig-Wolfe Decomposition have been criticized in the past, so we examine what makes the Bertsimas-Stock Patterson model so amenable to this method. These mathematical qualities of the model are generalized to provide guidance on other problems that may benefit from massively parallel Dantzig-Wolfe Decomposition. This result, together with the development of the software and the experimental results indicating the feasibility of real-time, nationwide Traffic Flow Management scheduling, represents the major contribution of this dissertation.

  6. Decomposition-Based Failure Mode Identification Method for Risk-Free Design of Large Systems

    NASA Technical Reports Server (NTRS)

    Tumer, Irem Y.; Stone, Robert B.; Roberts, Rory A.; Clancy, Daniel (Technical Monitor)

    2002-01-01

    When designing products, it is crucial to assure failure- and risk-free operation in the intended operating environment. Failures are typically studied and eliminated as much as possible during the early stages of design. The few failures that go undetected result in unacceptable damage and losses in high-risk applications where public safety is of concern. Published NASA and NTSB accident reports point to a variety of components identified as sources of failures in the reported cases. In previous work, data from these reports were processed and placed in matrix form for all the system components and failure modes encountered, and then manipulated using matrix methods to determine similarities between the different components and failure modes. In this paper, these matrices are represented as a linear combination of failure modes, mathematically formed using Principal Components Analysis (PCA) decomposition. The PCA decomposition results in a low-dimensionality representation of all failure modes and components of interest, expressed in a transformed coordinate system. Such a representation opens the way for efficient pattern analysis and prediction of the failure modes with the highest potential risks to the final product, rather than making decisions based on the large space of component and failure mode data. The mathematics of the proposed method are explained first using a simple example problem. The method is then applied to component failure data gathered from helicopter accident reports to demonstrate its potential.

  7. Multidimensional k-nearest neighbor model based on EEMD for financial time series forecasting

    NASA Astrophysics Data System (ADS)

    Zhang, Ningning; Lin, Aijing; Shang, Pengjian

    2017-07-01

    In this paper, we propose a new two-stage methodology that combines ensemble empirical mode decomposition (EEMD) with a multidimensional k-nearest neighbor model (MKNN) in order to forecast the closing price and high price of stocks simultaneously. Modified k-nearest neighbor (KNN) algorithms are finding increasingly wide application in prediction across many fields. Empirical mode decomposition (EMD) decomposes a nonlinear and non-stationary signal into a series of intrinsic mode functions (IMFs); however, it cannot reveal the characteristic information of the signal with much accuracy, as a result of mode mixing. Ensemble empirical mode decomposition (EEMD), an improved version of EMD, is therefore used to resolve this weakness by adding white noise to the original data. With EEMD, components with true physical meaning can be extracted from the time series. Utilizing the advantages of EEMD and MKNN, the proposed EEMD-MKNN model achieves high predictive precision for short-term forecasting. Moreover, we extend this methodology to the two-dimensional case to forecast the closing price and high price of four stocks (the NAS, S&P500, DJI and STI stock indices) at the same time. The results indicate that the proposed EEMD-MKNN model has higher forecast precision than EMD-KNN, the KNN method and ARIMA.
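
    A univariate sketch of the two-stage idea under toy assumptions: decompose the series, forecast each component with a k-nearest-neighbor regressor on lagged values, and sum the component forecasts. In practice the IMFs would come from an EEMD implementation (e.g., the PyEMD package); a hand-made two-component split stands in below, and the lag and neighbor counts are arbitrary.

        # EEMD-then-KNN forecasting in miniature.
        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        def knn_forecast(series, lags=5, k=5):
            X = np.column_stack([series[i:len(series) - lags + i]
                                 for i in range(lags)])
            y = series[lags:]
            model = KNeighborsRegressor(n_neighbors=k).fit(X, y)
            return model.predict(series[-lags:][None, :])[0]

        rng = np.random.default_rng(10)
        t = np.linspace(0, 20, 500)
        price = np.cumsum(0.1 * rng.standard_normal(500)) + np.sin(t)
        # placeholder decomposition: in practice, imfs = EEMD().eemd(price)
        imfs = np.vstack([np.sin(t), price - np.sin(t)])
        forecast = sum(knn_forecast(imf) for imf in imfs)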

  8. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  9. Variability-aware double-patterning layout optimization for analog circuits

    NASA Astrophysics Data System (ADS)

    Li, Yongfu; Perez, Valerio; Tripathi, Vikas; Lee, Zhao Chuan; Tseng, I.-Lun; Ong, Jonathan Yoong Seang

    2018-03-01

    The semiconductor industry has adopted multi-patterning techniques to manage the delay in extreme ultraviolet lithography technology. During the design of double-patterning lithography layout masks, two polygons are assigned to different masks if their spacing is less than the minimum printable spacing. With these additional design constraints, it is very difficult to find experienced layout-design engineers who have a good understanding of the circuit and can manually optimize the mask layers to minimize color-induced circuit variations. In this work, we investigate the impact of double-patterning lithography on analog circuits and provide quantitative analysis to help designers select the optimal mask to minimize circuit mismatch. To overcome this problem and improve the turn-around time, we propose a smart "anchoring" placement technique to optimize mask decomposition for analog circuits. We have developed a software prototype that is capable of providing anchoring markers in the layout, allowing industry-standard tools to perform the automated color decomposition process.

  10. Entanglement branching operator

    NASA Astrophysics Data System (ADS)

    Harada, Kenji

    2018-01-01

    We introduce an entanglement branching operator to split a composite entanglement flow in a tensor network, a promising theoretical tool for many-body systems. An entanglement branching operator can be optimized by solving a minimization problem based on squeezing operators. Entanglement branching is a new and useful operation for manipulating a tensor network. For example, by finding a particular entanglement structure with an entanglement branching operator, we can improve a higher-order tensor renormalization group method to capture a proper renormalization flow in a tensor network space. This new method yields a new type of tensor network states. A second example is a many-body decomposition of a tensor by using an entanglement branching operator, which can be used for perfect disentangling among tensors. Applying the many-body decomposition recursively, we conceptually derive projected entangled pair states from quantum states that satisfy the area law of entanglement entropy.

  11. A simple method for decomposition of peracetic acid in a microalgal cultivation system.

    PubMed

    Sung, Min-Gyu; Lee, Hansol; Nam, Kibok; Rexroth, Sascha; Rögner, Matthias; Kwon, Jong-Hee; Yang, Ji-Won

    2015-03-01

    A cost-efficient process that avoids several washing steps was developed, based on direct cultivation following decomposition of the sterilizer. Peracetic acid (PAA) is known to be an efficient antimicrobial agent due to its high oxidizing potential. Sterilization by 2 mM PAA requires at least 1 h of incubation for effective disinfection. Direct degradation of PAA was demonstrated by utilizing components of a conventional algal medium. Ferric ion and a pH buffer (HEPES) showed a synergistic effect on the decomposition of PAA, degrading it within 6 h. In contrast, NaNO3, one of the main components of algal media, inhibits the decomposition of PAA. Improved growth of Chlorella vulgaris and Synechocystis PCC6803 was observed in BG11 prepared by decomposition of PAA. This process, involving sterilization followed by decomposition of PAA, should enable cost-efficient management of large-scale photobioreactors for the production of value-added products and biofuels from microalgal biomass.

  12. The development of a post-mortem interval estimation for human remains found on land in the Netherlands.

    PubMed

    Gelderman, H T; Boer, L; Naujocks, T; IJzermans, A C M; Duijst, W L J M

    2018-05-01

    The decomposition process of human remains can be used to estimate the post-mortem interval (PMI), but decomposition varies due to many factors. Temperature is believed to be the most important and can be connected to decomposition by using accumulated degree days (ADD). The aim of this research was to develop a decomposition scoring method and a formula to estimate the PMI using that scoring method and ADD. A decomposition scoring method and a Book of Reference (visual resource) were made. Ninety-one cases were used to develop a method to estimate the PMI. The photographs were scored using the decomposition scoring method. The temperature data were provided by the Royal Netherlands Meteorological Institute. The PMI was estimated using the total decomposition score (TDS) alone, and using the TDS and ADD. The latter required an additional step, namely calculating the ADD from the finding date back to the predicted day of death. The developed decomposition scoring method had a high interrater reliability. The TDS significantly estimates the PMI (R² = 0.67 and 0.80 for indoor and outdoor bodies, respectively). When using the ADD, the R² decreased to 0.66 and 0.56. The developed decomposition scoring method is a practical method to measure decomposition for human remains found on land. The PMI can be estimated using this method, but caution is advised in cases with a long PMI. The ADD does not account for all the heat present in decomposing remains and is therefore a possible source of bias.
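    As a rough illustration of the ADD step described above (a minimal sketch with hypothetical numbers, not the authors' fitted regression), the back-calculation from the finding date can be written as:

    ```python
    # Back-of-the-envelope ADD calculation: walk backwards through daily mean
    # temperatures from the finding date until a target ADD (predicted from the
    # total decomposition score) is accumulated; the day count is the PMI estimate.
    def pmi_days_from_add(target_add, daily_means_newest_first, base_temp=0.0):
        accumulated = 0.0
        for day, temp in enumerate(daily_means_newest_first, start=1):
            accumulated += max(temp - base_temp, 0.0)  # only heat above base counts
            if accumulated >= target_add:
                return day
        raise ValueError("temperature record too short for this ADD")

    # Hypothetical example: 120 ADD against a week of 15-20 degree C days.
    print(pmi_days_from_add(120.0, [18.0, 20.0, 15.0, 16.0, 19.0, 17.0, 18.0]))
    ```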

  13. Better Decomposition Heuristics for the Maximum-Weight Connected Graph Problem Using Betweenness Centrality

    NASA Astrophysics Data System (ADS)

    Yamamoto, Takanori; Bannai, Hideo; Nagasaki, Masao; Miyano, Satoru

    We present new decomposition heuristics for finding the optimal solution for the maximum-weight connected graph problem, which is known to be NP-hard. Previous optimal algorithms for solving the problem decompose the input graph into subgraphs using heuristics based on node degree. We propose new heuristics based on betweenness centrality measures, and show through computational experiments that our new heuristics tend to reduce the number of subgraphs in the decomposition, and therefore could lead to the reduction in computational time for finding the optimal solution. The method is further applied to analysis of biological pathway data.
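    As an illustration of the idea (a minimal sketch assuming NetworkX; the size threshold and the karate-club test graph are arbitrary, and the authors' decomposition is considerably more involved):

    ```python
    # Split a graph into small subgraphs by repeatedly removing the node of
    # highest betweenness centrality, rather than the degree-based heuristic
    # used by earlier decompositions.
    import networkx as nx

    def betweenness_decompose(G, max_size):
        """Remove highest-betweenness nodes until every connected component
        has at most `max_size` nodes; return the resulting components."""
        H = G.copy()
        while any(len(c) > max_size for c in nx.connected_components(H)):
            bc = nx.betweenness_centrality(H)
            H.remove_node(max(bc, key=bc.get))
        return [H.subgraph(c).copy() for c in nx.connected_components(H)]

    parts = betweenness_decompose(nx.karate_club_graph(), max_size=10)
    print([len(p) for p in parts])
    ```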

  14. Accelerating solutions of one-dimensional unsteady PDEs with GPU-based swept time-space decomposition

    NASA Astrophysics Data System (ADS)

    Magee, Daniel J.; Niemeyer, Kyle E.

    2018-03-01

    The expedient design of precision components in aerospace and other high-tech industries requires simulations of physical phenomena often described by partial differential equations (PDEs) without exact solutions. Modern design problems require simulations with a level of resolution that is difficult to achieve in reasonable amounts of time, even in effectively parallelized solvers. Though the scale of the problem relative to available computing power is the greatest impediment to accelerating these applications, significant performance gains can be achieved through careful attention to the details of memory communication and access. The swept time-space decomposition rule reduces communication between sub-domains by exhausting the domain of influence before communicating boundary values. Here we present a GPU implementation of the swept rule, which modifies the algorithm for improved performance on this processing architecture by prioritizing use of private (shared) memory, avoiding interblock communication, and overwriting unnecessary values. It shows significant improvement in the execution time of finite-difference solvers for one-dimensional unsteady PDEs, producing speedups of 2-9× compared with simple GPU versions and 7-300× compared with parallel CPU versions, across a range of problem sizes. However, for a more sophisticated one-dimensional system of equations discretized with a second-order finite-volume scheme, the swept rule performs 1.2-1.9× worse than a standard implementation for all problem sizes.
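    As a loose CPU-side illustration of the principle the swept rule exploits (exhausting the domain of influence before communicating), the following sketch advances a padded sub-domain of a 1-D heat equation several steps with no boundary exchange. The real swept rule reuses partially computed triangles of the space-time grid on the GPU rather than carrying deep ghost cells, so this shows only the underlying idea, not the authors' algorithm:

    ```python
    # Each block can advance as many steps as its halo allows, shrinking the
    # valid region by one cell per side per step, before any exchange is needed.
    import numpy as np

    def sweep_block(u_block, steps, r=0.25):
        """Advance a padded block of an explicit 1-D heat stencil `steps` times.
        The block must carry `steps` ghost cells on each side; every step
        consumes one cell per side as the domain of influence is exhausted."""
        u = u_block.copy()
        for _ in range(steps):
            u[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
            u = u[1:-1]          # shrink: the outermost values are now stale
        return u

    u = np.sin(np.linspace(0.0, np.pi, 64))
    steps = 4
    # Block owning cells 16..47, padded with `steps` ghost cells per side:
    block = u[16 - steps : 48 + steps]
    print(sweep_block(block, steps).shape)  # 32 owned cells, zero exchanges
    ```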

  15. Compiling Techniques for East Antarctic Ice Velocity Mapping Based on Historical Optical Imagery

    NASA Astrophysics Data System (ADS)

    Li, X.; Li, R.; Qiao, G.; Cheng, Y.; Ye, W.; Gao, T.; Huang, Y.; Tian, Y.; Tong, X.

    2018-05-01

    Ice flow velocity over long time series in East Antarctica plays a vital role in estimating and predicting the mass balance of the Antarctic Ice Sheet and its contribution to global sea level rise. However, no large-scale Antarctic ice velocity product is available that shows the East Antarctic ice flow velocity pattern before the 1990s. We proposed three methods, parallax decomposition, grid-based NCC image matching, and constrained feature- and grid-based image matching, for estimating surface velocity in East Antarctica from ARGON KH-5 and LANDSAT imagery, showing the feasibility of using historical optical imagery to obtain Antarctic ice motion. Building on these previous studies, we present in this paper a systematic method for developing an ice surface velocity product for the entire East Antarctica from the 1960s to the 1980s.

  16. Characteristic analysis on UAV-MIMO channel based on normalized correlation matrix.

    PubMed

    Gao, Xi jun; Chen, Zi li; Hu, Yong Jiang

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicles (UAVs), a simple form of the UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By means of channel matrix decomposition and coefficient normalization, the analytic formula of the UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of the UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can comprehensively describe the changes of UAV-MIMO channel characteristics under different parameter settings. This analysis method provides a theoretical basis for improving the transmission performance of the UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication.

  17. Characteristic Analysis on UAV-MIMO Channel Based on Normalized Correlation Matrix

    PubMed Central

    Xi jun, Gao; Zi li, Chen; Yong Jiang, Hu

    2014-01-01

    Based on the three-dimensional GBSBCM (geometrically based double bounce cylinder model) channel model of MIMO for unmanned aerial vehicles (UAVs), a simple form of the UAV space-time-frequency channel correlation function which includes the LOS, SPE, and DIF components is presented. By means of channel matrix decomposition and coefficient normalization, the analytic formula of the UAV-MIMO normalized correlation matrix is deduced. This formula can be used directly to analyze the condition number of the UAV-MIMO channel matrix, the channel capacity, and other characteristic parameters. The simulation results show that this channel correlation matrix can comprehensively describe the changes of UAV-MIMO channel characteristics under different parameter settings. This analysis method provides a theoretical basis for improving the transmission performance of the UAV-MIMO channel. The development of MIMO technology shows practical application value in the field of UAV communication. PMID:24977185

  18. Frenkel-exciton decomposition analysis of circular dichroism and circularly polarized luminescence for multichromophoric systems.

    PubMed

    Shiraogawa, Takafumi; Ehara, Masahiro; Jurinovich, Sandro; Cupellini, Lorenzo; Mennucci, Benedetta

    2018-06-15

    Recently, a method to calculate absorption and circular dichroism (CD) spectra based on exciton coupling has been developed. In this work, the method was utilized to decompose the CD and circularly polarized luminescence (CPL) spectra of a multichromophoric system into chromophore contributions for recently developed through-space conjugated oligomers. The method, which is implemented using the rotatory strength in the velocity form and is therefore gauge-invariant, enables us to evaluate the contribution from each chromophoric unit and locally excited state to the CD and CPL spectra of the total system. The excitonic calculations suitably reproduce the full calculations of the system, as well as the experimental results. We demonstrate that the interactions between electric transition dipole moments of adjacent chromophoric units are crucial in the CD and CPL spectra of the multichromophoric systems, while the interactions between electric and magnetic transition dipole moments are not negligible. © 2018 Wiley Periodicals, Inc.

  19. A complete analytical solution for the inverse instantaneous kinematics of a spherical-revolute-spherical (7R) redundant manipulator

    NASA Technical Reports Server (NTRS)

    Podhorodeski, R. P.; Fenton, R. G.; Goldenberg, A. A.

    1989-01-01

    Using a method based upon resolving joint velocities through reciprocal screw quantities, compact analytical expressions are generated for the inverse solution of the joint rates of a seven-revolute (spherical-revolute-spherical) manipulator. The method uses a sequential decomposition of screw coordinates to identify the reciprocal screw quantities used in resolving a particular joint-rate solution, and also to identify a Jacobian null-space basis used for the direct solution of optimal joint rates. The results of the screw decomposition are used to study special configurations of the manipulator, generating expressions for the inverse velocity solution for all non-singular configurations of the manipulator, and identifying singular configurations and their characteristics. Two purposes are therefore served: a new general method for the solution of the inverse velocity problem is presented, and complete analytical expressions are derived for the resolution of the joint rates of a seven-degree-of-freedom manipulator useful for telerobotic and industrial robotic applications.

  20. Application of higher order SVD to vibration-based system identification and damage detection

    NASA Astrophysics Data System (ADS)

    Chao, Shu-Hsien; Loh, Chin-Hsiung; Weng, Jian-Huang

    2012-04-01

    Singular value decomposition (SVD) is a powerful linear algebra tool. It is widely used in many different signal processing methods, such as principal component analysis (PCA), singular spectrum analysis (SSA), frequency domain decomposition (FDD), and subspace and stochastic subspace identification (SI and SSI). In each case, the data are arranged appropriately in matrix form and SVD is used to extract the features of the data set. In this study three different algorithms for signal processing and system identification are considered: SSA, SSI-COV, and SSI-DATA. Based on the subspace and null-space extracted from the SVD of the data matrix, damage detection algorithms can be developed. The proposed algorithms are used to process shaking table test data from a 6-story steel frame. Features contained in the vibration data are extracted by the proposed methods. Damage detection can then be investigated from the test data of the frame structure through subspace-based and null-space-based damage indices.
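    As a compact example of one of these SVD-based tools, here is a minimal singular spectrum analysis (SSA) in NumPy on a synthetic signal; the window length, component count, and grouping are illustrative, not the authors' settings:

    ```python
    # SSA: SVD of a trajectory (Hankel) matrix, rank-1 components recovered
    # by anti-diagonal averaging; the leading components carry the features.
    import numpy as np

    def ssa(x, window, n_components):
        """Decompose series `x` into `n_components` reconstructed series."""
        n = len(x)
        k = n - window + 1
        X = np.column_stack([x[i : i + window] for i in range(k)])  # Hankel
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        series = []
        for j in range(n_components):
            Xj = s[j] * np.outer(U[:, j], Vt[j])           # rank-1 component
            comp = np.array([np.mean(Xj[::-1].diagonal(i - window + 1))
                             for i in range(n)])           # diagonal averaging
            series.append(comp)
        return series

    t = np.linspace(0.0, 10.0, 500)
    x = np.sin(2 * np.pi * 1.0 * t) + 0.3 * np.random.randn(t.size)
    trend, *rest = ssa(x, window=50, n_components=3)
    ```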

  1. 1,3,5-trinitro-1,3,5-triazine decomposition and chemisorption on Al(111) surface: first-principles molecular dynamics study.

    PubMed

    Umezawa, Naoto; Kalia, Rajiv K; Nakano, Aiichiro; Vashista, Priya; Shimojo, Fuyuki

    2007-06-21

    We have investigated the decomposition and chemisorption of a 1,3,5-trinitro-1,3,5-triazine (RDX) molecule on Al(111) surface using molecular dynamics simulations, in which interatomic forces are computed quantum mechanically in the framework of the density functional theory (DFT). The real-space DFT calculations are based on higher-order finite difference and norm-conserving pseudopotential methods. Strong attractive forces between oxygen and aluminum atoms break N-O and N-N bonds in the RDX and, subsequently, the dissociated oxygen atoms and NO molecules oxidize the Al surface. In addition to these Al surface-assisted decompositions, ring cleavage of the RDX molecule is also observed. These reactions occur spontaneously without potential barriers and result in the attachment of the rest of the RDX molecule to the surface. This opens up the possibility of coating Al nanoparticles with RDX molecules to avoid the detrimental effect of oxidation in high energy density material applications.

  2. Microbial Signatures of Cadaver Gravesoil During Decomposition.

    PubMed

    Finley, Sheree J; Pechal, Jennifer L; Benbow, M Eric; Robertson, B K; Javan, Gulnaz T

    2016-04-01

    Genomic studies have estimated there are approximately 10³-10⁶ bacterial species per gram of soil. The microbial species found in soil associated with decomposing human remains (gravesoil) have been investigated and recognized as potential molecular determinants for estimates of time since death. The nascent era of high-throughput amplicon sequencing of the conserved 16S ribosomal RNA (rRNA) gene region of gravesoil microbes is allowing research to expand beyond the more subjective empirical methods used in forensic microbiology. The goal of the present study was to evaluate microbial communities and identify taxonomic signatures associated with gravesoil from human cadavers. Using 16S rRNA gene amplicon-based sequencing, soil microbial communities were surveyed from 18 cadavers, placed on the surface or buried, that were allowed to decompose over a range of decomposition time periods (3-303 days). Surface soil microbial communities showed a decreasing trend in taxon richness, diversity, and evenness over decomposition, while buried cadaver-soil microbial communities demonstrated increasing taxon richness, consistent diversity, and decreasing evenness. The results show that ubiquitous Proteobacteria was confirmed as the most abundant phylum in all gravesoil samples. Surface cadaver-soil communities demonstrated a decrease in Acidobacteria and an increase in Firmicutes relative abundance over decomposition, while buried soil communities were consistent in their community composition throughout decomposition. Better understanding of microbial community structure and its shifts over time may be important for advancing general knowledge of decomposition soil ecology and its potential use during forensic investigations.

  3. Hierarchical modeling for reliability analysis using Markov models. B.S./M.S. Thesis - MIT

    NASA Technical Reports Server (NTRS)

    Fagundo, Arturo

    1994-01-01

    Markov models represent an extremely attractive tool for the reliability analysis of many systems. However, Markov model state space grows exponentially with the number of components in a given system. Thus, for very large systems Markov modeling techniques alone become intractable in both memory and CPU time. Often a particular subsystem can be found within some larger system where the dependence of the larger system on the subsystem is of a particularly simple form. This simple dependence can be used to decompose such a system into one or more subsystems. A hierarchical technique is presented which can be used to evaluate these subsystems in such a way that their reliabilities can be combined to obtain the reliability for the full system. This hierarchical approach is unique in that it allows the subsystem model to pass multiple aggregate state information to the higher level model, allowing more general systems to be evaluated. Guidelines are developed to assist in the system decomposition. An appropriate method for determining subsystem reliability is also developed. This method gives rise to some interesting numerical issues. Numerical error due to roundoff and integration are discussed at length. Once a decomposition is chosen, the remaining analysis is straightforward but tedious. However, an approach is developed for simplifying the recombination of subsystem reliabilities. Finally, a real world system is used to illustrate the use of this technique in a more practical context.

  4. Multiresolution forecasting for futures trading using wavelet decompositions.

    PubMed

    Zhang, B L; Coggins, R; Jabri, M A; Dersch, D; Flower, B

    2001-01-01

    We investigate the effectiveness of a financial time-series forecasting strategy which exploits the multiresolution property of the wavelet transform. A financial series is decomposed into an overcomplete, shift-invariant, scale-related representation. In transform space, each individual wavelet series is modeled by a separate multilayer perceptron (MLP). We apply the Bayesian method of automatic relevance determination to choose short past windows (short-term history) for the inputs to the MLPs at lower scales and long past windows (long-term history) at higher scales. To form the overall forecast, the individual forecasts are then recombined by the linear reconstruction property of the inverse transform with the chosen autocorrelation shell representation, or by another perceptron which learns the weight of each scale in the prediction of the original time series. The forecast results are then passed to a money management system to generate trades.
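    A minimal sketch of the scheme, assuming PyWavelets (pywt.mra, which yields an additive shift-invariant decomposition) and scikit-learn MLPs; the fixed per-scale window lengths below stand in for the automatic relevance determination used in the paper:

    ```python
    # Decompose, model each scale with its own MLP on a lagged window, and
    # recombine the per-scale one-step forecasts by summation.
    import numpy as np
    import pywt
    from sklearn.neural_network import MLPRegressor

    def forecast(x, level=3, windows=(4, 8, 16, 32)):
        comps = pywt.mra(x, 'db4', level=level, transform='swt')  # additive parts
        pred = 0.0
        for comp, w in zip(comps, windows):   # one window length per component
            X = np.array([comp[i : i + w] for i in range(len(comp) - w)])
            y = comp[w:]
            model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                 random_state=0).fit(X, y)
            pred += model.predict(comp[-w:].reshape(1, -1))[0]
        return pred                           # recombined one-step forecast

    x = np.cumsum(np.random.randn(256))       # toy "price" series
    print(forecast(x))
    ```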

  5. An optimization model for energy generation and distribution in a dynamic facility

    NASA Technical Reports Server (NTRS)

    Lansing, F. L.

    1981-01-01

    An analytical model is described using linear programming for the optimum generation and distribution of energy demands among competing energy resources and different economic criteria. The model, which will be used as a general engineering tool in the analysis of the Deep Space Network ground facility, considers several essential decisions for better design and operation. The decisions sought for the particular energy application include: the optimum time to build an assembly of elements, the inclusion of a storage medium of some type, and the size or capacity of the elements that will minimize the total life-cycle cost over a given number of years. The model, which is structured in multiple time divisions, employs the decomposition principle for large-size matrices, the branch-and-bound method in mixed-integer programming, and the revised simplex technique for efficient and economic computer use.

  6. A multiscale method for a robust detection of the default mode network

    NASA Astrophysics Data System (ADS)

    Baquero, Katherine; Gómez, Francisco; Cifuentes, Christian; Guldenmund, Pieter; Demertzi, Athena; Vanhaudenhuyse, Audrey; Gosseries, Olivia; Tshibanda, Jean-Flory; Noirhomme, Quentin; Laureys, Steven; Soddu, Andrea; Romero, Eduardo

    2013-11-01

    The Default Mode Network (DMN) is a resting state network widely used for the analysis and diagnosis of mental disorders. It is normally detected in fMRI data, but for its detection in data corrupted by motion artefacts or low neuronal activity, the use of a robust analysis method is mandatory. In fMRI it has been shown that the signal-to-noise ratio (SNR) and the detection sensitivity of neuronal regions increase with different smoothing kernel sizes. Here we propose to use a multiscale decomposition based on a linear scale-space representation for the detection of the DMN. Three main points are proposed in this methodology: first, the use of fMRI data at different smoothing scale-spaces; second, detection of independent neuronal components of the DMN at each scale by using standard preprocessing methods and ICA decomposition at the scale level; and finally, a weighted contribution of each scale by the Goodness of Fit measurement. This method was applied to a group of control subjects and was compared with a standard preprocessing baseline. The detection of the DMN was improved at the single-subject level and at the group level. Based on these results, we suggest using this methodology to enhance the detection of the DMN in data perturbed with artefacts or applied to subjects with low neuronal activity. Furthermore, the multiscale method could be extended for the detection of other resting state neuronal networks.

  7. Generalized decompositions of dynamic systems and vector Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Ikeda, M.; Siljak, D. D.

    1981-10-01

    The notion of decomposition is generalized to provide more freedom in constructing vector Lyapunov functions for stability analysis of nonlinear dynamic systems. A generalized decomposition is defined as a disjoint decomposition of a system which is obtained by expanding the state-space of a given system. An inclusion principle is formulated for the solutions of the expansion to include the solutions of the original system, so that stability of the expansion implies stability of the original system. Stability of the expansion can then be established by standard disjoint decompositions and vector Lyapunov functions. The applicability of the new approach is demonstrated using the Lotka-Volterra equations.

  8. Root growth and development in response to CO2 enrichment

    NASA Technical Reports Server (NTRS)

    Day, Frank P., Jr.

    1994-01-01

    A non-destructive technique (minirhizotron observation tubes) was used to assess the effects of CO2 enrichment on root growth and development in experimental plots in a scrub oak-palmetto community at the Kennedy Space Center. Potential effects of CO2 enrichment on plants have a global significance in light of concerns over increasing CO2 concentrations in the Earth's atmosphere. The study at Kennedy Space Center focused on aboveground physiological responses (photosynthetic efficiency and water use efficiency), effects on process rates (litter decomposition and nutrient turnover), and belowground responses of the plants. Belowground dynamics are an exceptionally important component of total plant response but are frequently ignored due to methodological difficulties. Most methods used to examine root growth and development are destructive and, therefore, severely compromise results. Minirhizotrons allow nondestructive observation and quantification of the same soil volume and roots through time. Root length density and root phenology were evaluated for CO2 effects with this nondestructive technique.

  9. Uncertainty propagation in orbital mechanics via tensor decomposition

    NASA Astrophysics Data System (ADS)

    Sun, Yifei; Kumar, Mrinal

    2016-03-01

    Uncertainty forecasting in orbital mechanics is an essential but difficult task, primarily because the underlying Fokker-Planck equation (FPE) is defined on a relatively high dimensional (6-D) state-space and is driven by the nonlinear perturbed Keplerian dynamics. In addition, an enormously large solution domain is required for numerical solution of this FPE (e.g. encompassing the entire orbit in the x-y-z subspace), of which the state probability density function (pdf) occupies a tiny fraction at any given time. This coupling of large size, high dimensionality and nonlinearity makes for a formidable computational task, and has caused the FPE for orbital uncertainty propagation to remain an unsolved problem. To the best of the authors' knowledge, this paper presents the first successful direct solution of the FPE for perturbed Keplerian mechanics. To tackle the dimensionality issue, the time-varying state pdf is approximated in the CANDECOMP/PARAFAC decomposition tensor form where all the six spatial dimensions as well as the time dimension are separated from one another. The pdf approximation for all times is obtained simultaneously via the alternating least squares algorithm. Chebyshev spectral differentiation is employed for discretization on account of its spectral ("super-fast") convergence rate. To facilitate the tensor decomposition and control the solution domain size, system dynamics is expressed using spherical coordinates in a noninertial reference frame. Numerical results obtained on a regular personal computer are compared with Monte Carlo simulations.
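    To make the decomposition concrete, here is a toy CANDECOMP/PARAFAC fit by alternating least squares on a 3-way NumPy tensor; the paper's solver separates all six state dimensions plus time and uses Chebyshev spectral discretization, so this shows only the algebraic core:

    ```python
    # CP-ALS: fit T ≈ sum_r a_r ∘ b_r ∘ c_r by cycling least-squares updates
    # of one factor matrix at a time.
    import numpy as np

    def cp_als(T, rank, iters=50):
        dims = T.shape
        A = [np.random.rand(d, rank) for d in dims]
        unfold = lambda X, n: np.moveaxis(X, n, 0).reshape(dims[n], -1)
        for _ in range(iters):
            for n in range(3):
                others = [A[m] for m in range(3) if m != n]
                # Khatri-Rao product of the two remaining factor matrices
                kr = (others[1][None, :, :] * others[0][:, None, :]).reshape(-1, rank)
                G = (others[0].T @ others[0]) * (others[1].T @ others[1])
                A[n] = unfold(T, n) @ kr @ np.linalg.pinv(G)
        return A

    T = np.random.rand(10, 12, 14)
    A, B, C = cp_als(T, rank=3)
    ```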

  10. A Raman spectroscopic determination of the kinetics of decomposition of ammonium chromate (NH4)2CrO4

    NASA Astrophysics Data System (ADS)

    De Waal, D.; Heyns, A. M.; Range, K.-J.

    1989-06-01

    Raman spectroscopy was used as a method in the kinetic investigation of the thermal decomposition of solid (NH4)2CrO4. Time-dependent measurements of the intensity of the totally symmetric stretching CrO mode of (NH4)2CrO4 have been made between 343 and 363 K. A short initial acceleratory period is observed at lower temperatures, and the decomposition reaction decelerates after the maximum decomposition rate has been reached at all temperatures. These results can be interpreted in terms of the Avrami-Erofe'ev law 1 - (χr)^(1/2) = kt, where χr is the fraction of reactant at time t. At 358 K, k is equal to (1.76 ± 0.01) × 10⁻³ s⁻¹ for microcrystals and for powdered samples. Activation energies of 97 ± 10 and 49 ± 0.9 kJ mol⁻¹ have been calculated for microcrystalline and powdered samples, respectively.
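    For orientation, a synthetic worked example (only the quoted k at 358 K is taken from the record; the data points are fabricated for illustration): the rate constant can be recovered by a linear fit of 1 - (χr)^(1/2) against t.

    ```python
    # Fit k in the Avrami-Erofe'ev law 1 - sqrt(chi_r) = k t by regression.
    import numpy as np

    t = np.linspace(0.0, 400.0, 20)                 # seconds
    k_true = 1.76e-3                                # s^-1, as reported at 358 K
    chi_r = (1.0 - k_true * t) ** 2                 # synthetic reactant fraction
    k_fit = np.polyfit(t, 1.0 - np.sqrt(chi_r), 1)[0]
    print(f"fitted k = {k_fit:.2e} s^-1")
    ```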

  11. Extended vector-tensor theories

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kimura, Rampei; Naruko, Atsushi; Yoshida, Daisuke, E-mail: rampei@th.phys.titech.ac.jp, E-mail: naruko@th.phys.titech.ac.jp, E-mail: yoshida@th.phys.titech.ac.jp

    Recently, several extensions of massive vector theory in curved space-time have been proposed in the literature. In this paper, we consider the most general vector-tensor theories that contain up to two derivatives with respect to the metric and the vector field. By imposing a degeneracy condition on the Lagrangian in the context of the ADM decomposition of space-time to eliminate an unwanted mode, we construct a new class of massive vector theories in which five degrees of freedom can propagate, corresponding to three massive vector modes and two massless tensor modes. We find that the generalized Proca and the beyond generalized Proca theories up to the quartic Lagrangian, which should be included in this formulation, are degenerate theories even in curved space-time. Finally, introducing new metric and vector field transformations, we investigate the properties of the theories thus obtained under such transformations.

  12. Defect inspection using a time-domain mode decomposition technique

    NASA Astrophysics Data System (ADS)

    Zhu, Jinlong; Goddard, Lynford L.

    2018-03-01

    In this paper, we propose a technique called time-varying frequency scanning (TVFS) to meet the challenges of killer defect inspection. The proposed technique enables the dynamic monitoring of defects by checking for hopping in the instantaneous frequency data, and the classification of defect types by comparing differences in frequencies. The TVFS technique utilizes the bidimensional empirical mode decomposition (BEMD) method to separate the defect information from the sea of system errors. This significantly improves the signal-to-noise ratio (SNR) and, moreover, potentially enables reference-free defect inspection.

  13. Classification of Partial Discharge Signals by Combining Adaptive Local Iterative Filtering and Entropy Features

    PubMed Central

    Morison, Gordon; Boreham, Philip

    2018-01-01

    Electromagnetic Interference (EMI) is a technique for capturing Partial Discharge (PD) signals in High-Voltage (HV) power plant apparatus. EMI signals can be non-stationary, which makes their analysis difficult, particularly for pattern recognition applications. This paper elaborates upon a previously developed software condition-monitoring model for improved classification of EMI events based on time-frequency signal decomposition and entropy features. The idea of the proposed method is to map multiple discharge source signals captured by EMI and labelled by experts, including PD, from the time domain to a feature space, which aids the interpretation of subsequent fault information. Here, instead of using only one permutation entropy measure, a more robust measure, called Dispersion Entropy (DE), is added to the feature vector. Multi-Class Support Vector Machine (MCSVM) methods are utilized for classification of the different discharge sources. Results show an improved classification accuracy compared to previously proposed methods, supporting the development of an expert-knowledge-based intelligent system. Since the method is demonstrated to be successful with real field data, it offers the benefit of possible real-world application for EMI condition monitoring. PMID:29385030
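    For reference, a minimal NumPy/SciPy sketch of the Dispersion Entropy feature added here (the class count c and embedding dimension m below are illustrative, not the paper's settings):

    ```python
    # Dispersion entropy: map samples to c classes via the normal CDF, count
    # length-m dispersion patterns, and normalize the Shannon entropy by ln(c^m).
    import numpy as np
    from scipy.stats import norm

    def dispersion_entropy(x, c=6, m=3):
        y = norm.cdf((x - x.mean()) / x.std())             # map to (0, 1)
        z = np.clip(np.round(c * y + 0.5), 1, c).astype(int)  # classes 1..c
        patterns = np.array([z[i : i + m] for i in range(len(z) - m + 1)])
        _, counts = np.unique(patterns, axis=0, return_counts=True)
        p = counts / counts.sum()
        return -np.sum(p * np.log(p)) / np.log(float(c) ** m)  # normalized DE

    rng = np.random.default_rng(0)
    print(dispersion_entropy(rng.normal(size=2000)))  # near 1 for white noise
    ```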

  14. Explosive and pyrotechnic aging demonstration

    NASA Technical Reports Server (NTRS)

    Rouch, L. L., Jr.; Maycock, J. N.

    1976-01-01

    The survivability of selected explosive and pyrotechnic propellant materials subjected to sterilization and prolonged exposure to space environments was experimentally verified. This verification included thermal characterization, sterilization heat cycling, sublimation measurements, isothermal decomposition measurements, and accelerated aging at a preselected elevated temperature. Temperatures chosen for the sublimation and isothermal decomposition measurements were those at which the decomposition processes occurring would be the same as those taking place in real-time aging. The elevated temperature selected for accelerated aging (84 C) was based upon the parameters calculated from the kinetic data obtained in the isothermal measurement tests, and was such that one month of accelerated aging in the laboratory approximated one year of real-time aging at 66 C. Results indicate that HNS-IIA, pure PbN6, KDNBF, and Zr/KClO4 are capable of withstanding sterilization. The accelerated aging tests indicated that unsterilized HNS-IIA and Zr/KClO4 can withstand the 10-year elevated-temperature exposure, pure PbN6 and KDNBF exhibit small weight losses (less than 2 percent), and B/KClO4 exhibits significant changes in its thermal characteristics. Accelerated aging tests after sterilization indicated that only HNS-IIA exhibited high stability.

  15. Empirical Investigation of Critical Transitions in Paleoclimate

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Gavrilov, A.; Feigin, A.

    2016-12-01

    In this work we apply a new empirical method for the analysis of complex spatially distributed systems to the analysis of paleoclimate data. The method consists of two general parts: (i) revealing the optimal phase-space variables and (ii) constructing an empirical prognostic model from observed time series. The method of phase-space variable construction is based on decomposing the data into nonlinear dynamical modes; it was successfully applied to the global SST field, where it clearly separated time scales and revealed a climate shift in the observed data interval [1]. The second part, the Bayesian approach to optimal reconstruction of the evolution operator from time series, is based on representing the evolution operator as a nonlinear stochastic function modeled by artificial neural networks [2,3]. In this work we focus on the investigation of critical transitions - abrupt changes in climate dynamics - in much longer time scale processes. It is well known that there were a number of critical transitions on different time scales in the past. Here we demonstrate the first results of applying our empirical methods to the analysis of paleoclimate variability. In particular, we discuss the possibility of detecting, identifying, and predicting such critical transitions by means of nonlinear empirical modeling using paleoclimate record time series. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).
    1. Mukhin, D., Gavrilov, A., Feigin, A., Loskutov, E., & Kurths, J. (2015). Principal nonlinear dynamical modes of climate variability. Scientific Reports, 5, 15510. http://doi.org/10.1038/srep15510
    2. Molkov, Ya. I., Mukhin, D. N., Loskutov, E. M., & Feigin, A. M. (2012). Random dynamical models from time series. Phys. Rev. E, 85(3).
    3. Mukhin, D., Kondrashov, D., Loskutov, E., Gavrilov, A., Feigin, A., & Ghil, M. (2015). Predicting Critical Transitions in ENSO models. Part II: Spatially Dependent Models. Journal of Climate, 28(5), 1962-1976. http://doi.org/10.1175/JCLI-D-14-00240.1

  16. A Study of Strong Stability of Distributed Systems. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Cataltepe, Tayfun

    1989-01-01

    The strong stability of distributed systems is studied, and the problem of characterizing strongly stable semigroups of operators associated with distributed systems is addressed. The main emphasis is on contractive systems. Three different approaches to the characterization of strongly stable contractive semigroups are developed. The first is operator-theoretical. Using the theory of dilations, it is shown that every strongly stable contractive semigroup is related to the left shift semigroup on an L² space. Then, a decomposition of the state space which identifies strongly stable and unstable states is introduced. Based on this decomposition, conditions for a contractive semigroup to be strongly stable are obtained. Finally, extensions of Lyapunov's equation for distributed parameter systems are investigated. Sufficient conditions for weak and strong stability of uniformly bounded semigroups are obtained by relaxing the equivalent norm condition on the right-hand side of the Lyapunov equation. These characterizations are then applied to the problem of feedback stabilization. First, it is shown via the state space decomposition that under certain conditions a contractive system (A,B) can be strongly stabilized by the feedback -B*. Then, application of the extensions of the Lyapunov equation yields sufficient conditions for weak, strong, and exponential stabilization of contractive systems by the feedback -B*. Finally, it is shown that for the contractive system dx/dt = Ax + Bu (where B is any bounded linear operator), there is a related linear quadratic regulator problem and a corresponding steady-state Riccati equation which always has a bounded nonnegative solution.

  17. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    NASA Astrophysics Data System (ADS)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero quickly. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method to the DTOC optimality system.
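    A minimal sketch of one forward block Gauss-Seidel sweep on a block tridiagonal system (random, diagonally dominant blocks here stand in for the DTOC optimality system):

    ```python
    # One forward sweep for D[i] x[i] + L[i] x[i-1] + U[i] x[i+1] = b[i].
    import numpy as np

    def block_gs_sweep(D, L, U, b, x):
        """Forward block GS; D[i] invertible, L[0] and U[-1] unused."""
        N = len(D)
        for i in range(N):
            r = b[i].copy()
            if i > 0:
                r -= L[i] @ x[i - 1]      # freshly updated left neighbor
            if i < N - 1:
                r -= U[i] @ x[i + 1]      # previous-iterate right neighbor
            x[i] = np.linalg.solve(D[i], r)
        return x

    n, N = 4, 6                            # block size, number of time blocks
    rng = np.random.default_rng(1)
    D = [np.eye(n) * 4 + 0.1 * rng.normal(size=(n, n)) for _ in range(N)]
    L = [0.1 * rng.normal(size=(n, n)) for _ in range(N)]
    U = [0.1 * rng.normal(size=(n, n)) for _ in range(N)]
    b = [rng.normal(size=n) for _ in range(N)]
    x = [np.zeros(n) for _ in range(N)]
    for _ in range(20):                    # iterate (or use as preconditioner)
        x = block_gs_sweep(D, L, U, b, x)
    ```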

  18. Preparation and catalytic activities for H2O2 decomposition of Rh/Au bimetallic nanoparticles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Haijun, E-mail: zhanghaijun@wust.edu.cn; The State Key Laboratory of Refractory and Metallurgy, Wuhan University of Science and Technology, Wuhan 430081; Deng, Xiangong

    2016-07-15

    Graphical abstract: PVP-protected Rh/Au bimetallic nanoparticles (BNPs) were prepared using a hydrogen sacrificial reduction method; the activity of the Rh80Au20 BNPs was about 3.6 times higher than that of Rh NPs.
    Highlights:
    • Rh/Au bimetallic nanoparticles (BNPs) of 3-5 nm in diameter were prepared.
    • Activity for H2O2 decomposition of the BNPs is 3.6 times higher than that of Rh NPs.
    • The high activity of the BNPs was caused by the existence of charged Rh atoms.
    • The apparent activation energy for H2O2 decomposition over the BNPs was calculated.
    Abstract: PVP-protected Rh/Au bimetallic nanoparticles (BNPs) were prepared using a hydrogen sacrificial reduction method and characterized by UV-vis, XRD, FT-IR, XPS, TEM, HR-TEM and DF-STEM, and the effects of composition on their particle sizes and catalytic activities for H2O2 decomposition were also studied. The as-prepared Rh/Au BNPs possessed a high catalytic activity for H2O2 decomposition, and the activity of the Rh80Au20 BNPs with an average size of 2.7 nm was about 3.6 times higher than that of Rh monometallic nanoparticles (MNPs), even though the Rh MNPs possess a smaller particle size of 1.7 nm. In contrast, Au MNPs with a size of 2.7 nm showed no activity. Density functional theory (DFT) calculations as well as XPS results showed that charged Rh and Au atoms, formed via electronic charge-transfer effects, could be responsible for the high catalytic activity of the BNPs.

  19. The trait contribution to wood decomposition rates of 15 Neotropical tree species.

    PubMed

    van Geffen, Koert G; Poorter, Lourens; Sass-Klaassen, Ute; van Logtestijn, Richard S P; Cornelissen, Johannes H C

    2010-12-01

    The decomposition of dead wood is a critical uncertainty in models of the global carbon cycle. Despite this, relatively few studies have focused on dead wood decomposition, with a strong bias to higher latitudes. Especially the effect of interspecific variation in species traits on differences in wood decomposition rates remains unknown. In order to fill these gaps, we applied a novel method to study long-term wood decomposition of 15 tree species in a Bolivian semi-evergreen tropical moist forest. We hypothesized that interspecific differences in species traits are important drivers of variation in wood decomposition rates. Wood decomposition rates (fractional mass loss) varied between 0.01 and 0.31 yr⁻¹. We measured 10 different chemical, anatomical, and morphological traits for all species. The species' average traits were useful predictors of wood decomposition rates, particularly the average diameter (dbh) of the tree species (R² = 0.41). Lignin concentration further increased the proportion of explained interspecific variation in wood decomposition (both negative relations, cumulative R² = 0.55), although it did not significantly explain variation in wood decomposition rates if considered alone. When dbh values of the actual dead trees sampled for decomposition rate determination were used as a predictor variable, the final model (including dead tree dbh and lignin concentration) explained even more variation in wood decomposition rates (R² = 0.71), underlining the importance of dbh in wood decomposition. Other traits, including wood density, wood anatomical traits, macronutrient concentrations, and the amount of phenolic extractives could not significantly explain the variation in wood decomposition rates. The surprising results of this multi-species study, in which for the first time a large set of traits is explicitly linked to wood decomposition rates, merits further testing in other forest ecosystems.

  20. Stoichiometric vs hydroclimatic controls on soil biogeochemical processes

    NASA Astrophysics Data System (ADS)

    Manzoni, Stefano; Porporato, Amilcare

    2010-05-01

    Soil nutrient cycles are controlled by both stoichiometric constraints (e.g., carbon to nutrient ratios) and hydroclimatic conditions (e.g., soil moisture and temperature). Both controls tend to act in a nonlinear manner and give rise to complex dynamics in soil biogeochemistry at different space-time scales. We first review the theoretical basis of soil biogeochemical models, looking for the general principles underlying these models across space-time scales and scientific disciplines. By comparing more than 250 models, we show that similar kinetic and stoichiometric laws, formulated to mechanistically represent the complex biochemical constraints to decomposition, are common to most models, providing a basis for their classification. Moreover, a historic analysis reveals that the complexity (e.g., phase space dimension, model architecture) and degree and number of nonlinearities generally increased with date, while they decreased with increasing spatial and temporal scale of interest. Soil biogeochemical dynamics may be suitably conceptualized using a number of compartments (e.g., decomposers, organic substrates, inorganic ions) interacting with one another at rates that depend (nonlinearly) on climatic drivers. As a consequence, hydroclimatically induced fluctuations at the daily scale propagate through the various soil compartments, leading to cascading effects ranging from short-term fluctuations in the smaller pools to long-lasting changes in the larger ones. Such cascading effects are known to occur in dryland ecosystems, and are increasingly being recognized to control the long-term carbon and nutrient balances in more mesic ecosystems. We also show that separating biochemical from climatic impacts on organic matter decomposition results in universal curves describing data of plant residue decomposition and nutrient mineralization across the globe. Future extensions to larger spatial scales and managed ecosystems are also briefly outlined. It is critical that future modeling efforts carefully account for the scale-dependence of their mathematical formulations, especially when applied to a wide range of scales.

  1. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P

    2012-10-01

    It is well known that dynamic programming algorithms can utilize tree decompositions to solve some NP-hard problems on graphs with complexity polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory-saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
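    To illustrate the table-passing idea in its simplest setting, a tree (effectively a width-1 decomposition), maximum weighted independent set can be solved in one post-order pass; this NetworkX sketch is not the INDDGO code, and real tree decompositions require per-bag tables rather than two scalars per node:

    ```python
    # MWIS on a tree: for each node keep the best weight with the node taken
    # (children must be skipped) and with it skipped (children choose freely).
    import networkx as nx

    def mwis_on_tree(T, weight, root):
        parent = {root: None}
        for u, v in nx.dfs_edges(T, root):
            parent[v] = u
        take = {v: float(weight[v]) for v in T}   # v included in the set
        skip = {v: 0.0 for v in T}                # v excluded from the set
        for v in nx.dfs_postorder_nodes(T, root):
            for c in T[v]:
                if c == parent[v]:
                    continue
                take[v] += skip[c]
                skip[v] += max(take[c], skip[c])
        return max(take[root], skip[root])

    T = nx.balanced_tree(2, 4)                    # 31-node binary tree
    w = {v: (v % 7) + 1 for v in T}
    print(mwis_on_tree(T, w, root=0))
    ```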

  2. Improving prediction accuracy of cooling load using EMD, PSR and RBFNN

    NASA Astrophysics Data System (ADS)

    Shen, Limin; Wen, Yuanmei; Li, Xiaohong

    2017-08-01

    To increase the accuracy of cooling load demand prediction, this work presents an EMD (empirical mode decomposition) and PSR (phase space reconstruction) based RBFNN (radial basis function neural network) method. First, we analyzed the chaotic nature of real cooling load demand and transformed the non-stationary historical cooling load data into several stationary intrinsic mode functions (IMFs) using EMD. Second, we compared the RBFNN prediction accuracies of the individual IMFs and propose an IMF combining scheme: the lower-frequency components are merged (IMF4-IMF6 combined) while the higher-frequency components (IMF1, IMF2, IMF3) and the residual are kept unchanged. Third, we reconstruct the phase space for each combined component separately, process the highest-frequency component (IMF1) by the differential method, and predict with RBFNN in the reconstructed phase spaces. Real cooling load data from a centralized ice-storage cooling system in Guangzhou are used for simulation. The results show that the proposed hybrid method outperforms the traditional methods.
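    A minimal sketch of the pipeline on toy data, assuming the PyEMD package for EMD and, as a stand-in for an RBF neural network, scikit-learn kernel ridge regression with an RBF kernel; the embedding dimension and the grouping cut are illustrative, and at least four IMFs are assumed:

    ```python
    # Decompose the load, merge the low-frequency IMFs, forecast each group
    # in a lag-embedded (PSR-like) space, and sum the per-group forecasts.
    import numpy as np
    from PyEMD import EMD
    from sklearn.kernel_ridge import KernelRidge

    load = np.sin(np.linspace(0, 30, 600)) + 0.2 * np.random.randn(600)
    imfs = EMD().emd(load)                          # IMF1..IMFn plus residue
    groups = [imfs[0], imfs[1], imfs[2], imfs[3:].sum(axis=0)]  # merge low-freq

    def one_step_forecast(comp, dim=6):             # dim ~ embedding dimension
        X = np.array([comp[i : i + dim] for i in range(len(comp) - dim)])
        model = KernelRidge(kernel='rbf', gamma=0.5).fit(X, comp[dim:])
        return model.predict(comp[-dim:].reshape(1, -1))[0]

    print(sum(one_step_forecast(g) for g in groups))
    ```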

  3. ADM For Solving Linear Second-Order Fredholm Integro-Differential Equations

    NASA Astrophysics Data System (ADS)

    Karim, Mohd F.; Mohamad, Mahathir; Saifullah Rusiman, Mohd; Che-Him, Norziha; Roslan, Rozaini; Khalid, Kamil

    2018-04-01

    In this paper, we apply the Adomian Decomposition Method (ADM) to numerically analyse linear second-order Fredholm integro-differential equations. The approximate solutions of the problems are calculated with the Maple package. Some numerical examples are considered to illustrate the ADM for solving this equation. The results are compared with the existing exact solutions. The Adomian decomposition method can thus be a good alternative for solving linear second-order Fredholm integro-differential equations: it converges to the exact solution quickly and at the same time reduces the computational work.
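    As a worked example of ADM-style successive approximation (a SymPy sketch on a constructed linear first-order Fredholm integro-differential equation with exact solution u(x) = x, not one of the paper's second-order examples):

    ```python
    # ADM components for  u'(x) = 1 - x/3 + ∫_0^1 x t u(t) dt,  u(0) = 0,
    # whose exact solution is u(x) = x; the partial sums converge to it.
    import sympy as sp

    x, s, t = sp.symbols('x s t')
    u = sp.integrate(1 - s / 3, (s, 0, x))   # u0: integrate the source term
    total = u
    for _ in range(6):                       # higher Adomian components
        u = sp.integrate(sp.integrate(s * t * u.subs(x, t), (t, 0, 1)), (s, 0, x))
        total = sp.expand(total + u)
    print(total)                             # x plus a shrinking x**2 term
    print(float((total - x).subs(x, 1)))     # residual against u(x) = x at x = 1
    ```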

  4. Micromorphological aspects of forensic geopedology: time-dependent markers of decomposition and permanence in soil in experimental burials

    NASA Astrophysics Data System (ADS)

    Zangarini, Sara; Cattaneo, Cristina; Trombino, Luca

    2014-05-01

    The role played by soil scientists is growing in modern forensic science, in particular when buried human remains that are strongly decomposed or skeletonized are found in different environmental situations. An interdisciplinary team of earth science and legal medicine researchers from the University of Milan is working on several sets of experimental burials of pigs in different soil types and for different burial times, in order to obtain new evidence on environmental responses to burial, focusing specifically on geopedological and micropedological aspects. The present work is aimed at the micromorphological (petrographic microscope) and ultramicroscopic (SEM) cross-characterization of bone tissue in buried remains, in order to describe bone alteration pathways due both to decomposition and to permanence in soil. These methods allow identifying in the tissues of the analysed bones:
    - Unusual concentrations of metal oxides (i.e. Fe, Mn), in the form of violet-blue colorations (in XPL), which seem to be related to chemical conditions in the burial area; their presence could be a way to discriminate permanence in soil from a different environment of decomposition.
    - Magnesium phosphate (i.e. Mg3(PO4)2) crystallizations, usually noticed in bones buried for 7 to 103 weeks; their presence seems to be related to the decomposition both of the bones themselves and of soft tissues.
    - The presence of significant sulphur levels (i.e. SO3) in bones buried for over 7 weeks, which seems to be related to the transport and fixation of soft tissue decomposition fluids.
    These results point out that micromorphological techniques coupled with spatially resolved chemical analyses allow identifying both indicators of the permanence of the remains in the soil (metal oxide concentrations) and time-dependent markers of decomposition (significant sulphur levels and magnesium phosphate), in order to determine the PMI (post-mortem interval) and the TSB (time since burial). Further studies and new experiments are in progress to better clarify the bone alteration pathways in different skeletal districts and in different kinds of soils.

  5. Low-rank plus sparse decomposition for exoplanet detection in direct-imaging ADI sequences. The LLSG algorithm

    NASA Astrophysics Data System (ADS)

    Gomez Gonzalez, C. A.; Absil, O.; Absil, P.-A.; Van Droogenbroeck, M.; Mawet, D.; Surdej, J.

    2016-05-01

    Context. Data processing constitutes a critical component of high-contrast exoplanet imaging. Its role is almost as important as the choice of a coronagraph or a wavefront control system, and it is intertwined with the chosen observing strategy. Among the data processing techniques for angular differential imaging (ADI), the most recent is the family of principal component analysis (PCA) based algorithms. It is a widely used statistical tool developed during the first half of the past century. PCA serves, in this case, as a subspace projection technique for constructing a reference point spread function (PSF) that can be subtracted from the science data for boosting the detectability of potential companions present in the data. Unfortunately, when building this reference PSF from the science data itself, PCA comes with certain limitations such as the sensitivity of the lower dimensional orthogonal subspace to non-Gaussian noise. Aims: Inspired by recent advances in machine learning algorithms such as robust PCA, we aim to propose a localized subspace projection technique that surpasses current PCA-based post-processing algorithms in terms of the detectability of companions at near real-time speed, a quality that will be useful for future direct imaging surveys. Methods: We used randomized low-rank approximation methods recently proposed in the machine learning literature, coupled with entry-wise thresholding to decompose an ADI image sequence locally into low-rank, sparse, and Gaussian noise components (LLSG). This local three-term decomposition separates the starlight and the associated speckle noise from the planetary signal, which mostly remains in the sparse term. We tested the performance of our new algorithm on a long ADI sequence obtained on β Pictoris with VLT/NACO. Results: Compared to a standard PCA approach, LLSG decomposition reaches a higher signal-to-noise ratio and has an overall better performance in the receiver operating characteristic space. This three-term decomposition brings a detectability boost compared to the full-frame standard PCA approach, especially in the small inner working angle region where complex speckle noise prevents PCA from discerning true companions from noise.
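    For intuition, here is a crude low-rank plus sparse split in NumPy via alternating truncated SVD and soft thresholding (a toy principal-component-pursuit-style iteration; LLSG itself works on local patches and adds a Gaussian noise term, so this is only the flavor of the decomposition):

    ```python
    # Split a matrix of ADI frames (one frame per row) into a low-rank part
    # (starlight and speckles) and a sparse part (where a companion would sit).
    import numpy as np

    def lowrank_sparse(M, rank, lam, iters=30):
        L = np.zeros_like(M)
        S = np.zeros_like(M)
        for _ in range(iters):
            U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]       # truncated SVD
            S = np.sign(M - L) * np.maximum(np.abs(M - L) - lam, 0.0)  # shrink
        return L, S

    cube = np.random.rand(40, 64 * 64)      # toy stand-in for an ADI sequence
    L, S = lowrank_sparse(cube, rank=5, lam=0.1)
    ```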

  6. System for thermal energy storage, space heating and cooling and power conversion

    DOEpatents

    Gruen, Dieter M.; Fields, Paul R.

    1981-04-21

    An integrated system for storing thermal energy, for space heating and cooling, and for power conversion is described which utilizes the reversible thermal decomposition characteristics of two hydrides having different decomposition pressures at the same temperature for energy storage and space conditioning, and the expansion of high-pressure hydrogen for power conversion. The system consists of a plurality of reaction vessels, at least one containing each of the different hydrides; three loops of circulating heat transfer fluid which can be selectively coupled to the vessels for supplying the heat of decomposition from any appropriate source of thermal energy, from the outside ambient environment, or from the spaces to be cooled, and for removing the heat of reaction to the outside ambient environment or to the spaces to be heated; and a hydrogen loop for directing the flow of hydrogen gas between the vessels. When used for power conversion, at least two vessels contain the same hydride and the hydrogen loop contains an expansion engine. The system is particularly suitable for the utilization of thermal energy supplied by solar collectors and concentrators, but may be used with any source of heat, including a source of low-grade heat.

  7. Accelerating Dynamic Magnetic Resonance Imaging (MRI) for Lung Tumor Tracking Based on Low-Rank Decomposition in the Spatial–Temporal Domain: A Feasibility Study Based on Simulation and Preliminary Prospective Undersampled MRI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sarma, Manoj; Department of Radiation Oncology, University of California, Los Angeles, California; Hu, Peng

    Purpose: To evaluate a low-rank decomposition method to reconstruct down-sampled k-space data for the purpose of tumor tracking. Methods and Materials: Seven retrospective lung cancer patients were included in the simulation study. The fully sampled k-space data were first generated from existing 2-dimensional dynamic MR images and then down-sampled by 5×-20× before reconstruction using a Cartesian undersampling mask. Two methods, a low-rank decomposition method using combined dynamic MR images (k-t SLR, based on sparsity and low-rank penalties) and a total variation (TV) method using individual dynamic MR frames, were used to reconstruct images. The tumor trajectories were derived on the basis of autosegmentation of the resultant images. To further test its feasibility, k-t SLR was used to reconstruct prospective data of a healthy subject. An undersampled balanced steady-state free precession sequence with the same undersampling mask was used to acquire the imaging data. Results: In the simulation study, higher imaging fidelity and lower noise levels were achieved with k-t SLR compared with TV. At 10× undersampling, the k-t SLR method resulted in an average normalized mean square error <0.05, as opposed to 0.23 using TV reconstruction on individual frames. Less than 6% showed tracking errors >1 mm with 10× down-sampling using k-t SLR, as opposed to 17% using TV. In the prospective study, k-t SLR substantially reduced reconstruction artifacts and retained anatomic details. Conclusions: Magnetic resonance reconstruction using k-t SLR on highly undersampled dynamic MR imaging data results in high image quality useful for tumor tracking. k-t SLR was superior to TV by better exploiting the intrinsic anatomic coherence of the same patient. The feasibility of k-t SLR was demonstrated by prospective imaging acquisition and reconstruction.

  8. Generalization of the Gaussian electrostatic model: Extension to arbitrary angular momentum, distributed multipoles, and speedup with reciprocal space methods

    NASA Astrophysics Data System (ADS)

    Cisneros, G. Andrés; Piquemal, Jean-Philip; Darden, Thomas A.

    2006-11-01

    The simulation of biological systems by means of current empirical force fields presents shortcomings due to their lack of accuracy, especially in the description of the nonbonded terms. We have previously introduced a force field based on density fitting termed the Gaussian electrostatic model-0 (GEM-0) J.-P. Piquemal et al. [J. Chem. Phys. 124, 104101 (2006)] that improves the description of the nonbonded interactions. GEM-0 relies on density fitting methodology to reproduce each contribution of the constrained space orbital variation (CSOV) energy decomposition scheme, by expanding the electronic density of the molecule in s-type Gaussian functions centered at specific sites. In the present contribution we extend the Coulomb and exchange components of the force field to auxiliary basis sets of arbitrary angular momentum. Since the basis functions with higher angular momentum have directionality, a reference molecular frame (local frame) formalism is employed for the rotation of the fitted expansion coefficients. In all cases the intermolecular interaction energies are calculated by means of Hermite Gaussian functions using the McMurchie-Davidson [J. Comput. Phys. 26, 218 (1978)] recursion to calculate all the required integrals. Furthermore, the use of Hermite Gaussian functions allows a point multipole decomposition determination at each expansion site. Additionally, the issue of computational speed is investigated by reciprocal space based formalisms which include the particle mesh Ewald (PME) and fast Fourier-Poisson (FFP) methods. Frozen-core (Coulomb and exchange-repulsion) intermolecular interaction results for ten stationary points on the water dimer potential-energy surface, as well as a one-dimensional surface scan for the canonical water dimer, formamide, stacked benzene, and benzene water dimers, are presented. All results show reasonable agreement with the corresponding CSOV calculated reference contributions, around 0.1 and 0.15 kcal/mol error for Coulomb and exchange, respectively. Timing results for single Coulomb energy-force calculations for (H2O)n, n =64, 128, 256, 512, and 1024, in periodic boundary conditions with PME and FFP at two different rms force tolerances are also presented. For the small and intermediate auxiliaries, PME shows faster times than FFP at both accuracies and the advantage of PME widens at higher accuracy, while for the largest auxiliary, the opposite occurs.

  9. Generalization of the Gaussian electrostatic model: Extension to arbitrary angular momentum, distributed multipoles, and speedup with reciprocal space methods

    PubMed Central

    Cisneros, G. Andrés; Piquemal, Jean-Philip; Darden, Thomas A.

    2007-01-01

    The simulation of biological systems by means of current empirical force fields presents shortcomings due to their lack of accuracy, especially in the description of the nonbonded terms. We have previously introduced a force field based on density fitting termed the Gaussian electrostatic model-0 (GEM-0) J.-P. Piquemal et al. [J. Chem. Phys. 124, 104101 (2006)] that improves the description of the nonbonded interactions. GEM-0 relies on density fitting methodology to reproduce each contribution of the constrained space orbital variation (CSOV) energy decomposition scheme, by expanding the electronic density of the molecule in s-type Gaussian functions centered at specific sites. In the present contribution we extend the Coulomb and exchange components of the force field to auxiliary basis sets of arbitrary angular momentum. Since the basis functions with higher angular momentum have directionality, a reference molecular frame (local frame) formalism is employed for the rotation of the fitted expansion coefficients. In all cases the intermolecular interaction energies are calculated by means of Hermite Gaussian functions using the McMurchie-Davidson [J. Comput. Phys. 26, 218 (1978)] recursion to calculate all the required integrals. Furthermore, the use of Hermite Gaussian functions allows a point multipole decomposition determination at each expansion site. Additionally, the issue of computational speed is investigated by reciprocal space based formalisms which include the particle mesh Ewald (PME) and fast Fourier-Poisson (FFP) methods. Frozen-core (Coulomb and exchange-repulsion) intermolecular interaction results for ten stationary points on the water dimer potential-energy surface, as well as a one-dimensional surface scan for the canonical water dimer, formamide, stacked benzene, and benzene water dimers, are presented. All results show reasonable agreement with the corresponding CSOV calculated reference contributions, around 0.1 and 0.15 kcal/mol error for Coulomb and exchange, respectively. Timing results for single Coulomb energy-force calculations for (H2O)n, n=64, 128, 256, 512, and 1024, in periodic boundary conditions with PME and FFP at two different rms force tolerances are also presented. For the small and intermediate auxiliaries, PME shows faster times than FFP at both accuracies and the advantage of PME widens at higher accuracy, while for the largest auxiliary, the opposite occurs. PMID:17115732

  10. Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials

    DTIC Science & Technology

    2017-04-06

    The programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced modes.

  11. Resident Load Influence Analysis Method for Price Based on Non-intrusive Load Monitoring and Decomposition Data

    NASA Astrophysics Data System (ADS)

    Jiang, Wenqian; Zeng, Bo; Yang, Zhou; Li, Gang

    2018-01-01

    In the non-intrusive load monitoring mode, load decomposition can reflect the running state of each load, which helps the user reduce unnecessary energy costs. Considering the demand-side management measure of time-of-use pricing, a method for analyzing the influence of time-of-use (TOU) prices on residential load, based on non-intrusive load monitoring and decomposition data, is proposed in this paper. Relying on current-signal classification of the residential load, the types of user equipment can be identified, and self-elasticities and cross-elasticities across different time periods can be obtained. Tests on actual household load data show that, under TOU pricing, the operation of some equipment is transferred to lower-price hours: electricity use during peak-price periods decreases while use during cheaper periods increases, with a certain regularity.
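    To make the elasticity mechanism concrete, the toy computation below applies a matrix of self-elasticities (diagonal, negative) and cross-elasticities (off-diagonal, positive) to relative price changes across peak/flat/valley periods. All numbers are invented for illustration and are not values from the paper.

    import numpy as np

    # Illustrative 3-period elasticity matrix: rows/columns = peak, flat, valley.
    E = np.array([[-0.10,  0.03,  0.05],
                  [ 0.02, -0.08,  0.03],
                  [ 0.04,  0.02, -0.06]])

    base_load = np.array([50.0, 35.0, 20.0])    # kWh per period before TOU pricing
    dp_over_p = np.array([0.30, 0.00, -0.25])   # relative price change under TOU

    # Relative load change in each period is E @ (relative price changes).
    new_load = base_load * (1 + E @ dp_over_p)
    print(new_load)                              # peak load falls, valley load rises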

  12. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, Jan P.; Ochoa, Ellen; Sweeney, Donald W.

    1990-01-01

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed.
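    A minimal software analogue of the disclosed architecture, assuming a small integer-valued image: threshold decomposition into binary slices, a linear space-invariant (uniform box) filter per slice, a point threshold, and recombination by summation. The window size and rank parameter are illustrative; this sketches the principle rather than the optical system.

    import numpy as np
    from scipy.ndimage import uniform_filter

    def ranked_order_filter(img, size=3, rank=0.5):
        """Ranked-order filtering via threshold decomposition: each binary slice
        is passed through a linear space-invariant filter, thresholded point by
        point, and the slices are summed to recombine (rank=0.5 gives the median;
        rank near 0 gives a maximum filter, rank near 1 a minimum filter)."""
        out = np.zeros(img.shape, dtype=int)
        for t in range(1, int(img.max()) + 1):
            binary = (img >= t).astype(float)          # threshold decomposition
            local_mean = uniform_filter(binary, size)  # linear filtering step
            out += (local_mean >= rank).astype(int)    # point-to-point threshold
        return out

    img = np.random.default_rng(1).integers(0, 8, (16, 16))
    print(ranked_order_filter(img, size=3, rank=0.5))  # 3x3 median filter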

  13. Effective metrics and a fully covariant description of constitutive tensors in electrodynamics

    NASA Astrophysics Data System (ADS)

    Schuster, Sebastian; Visser, Matt

    2017-12-01

    Using electromagnetism to study analogue space-times is tantamount to considering consistency conditions for when a given (meta-)material would provide an analogue space-time model or—vice versa—characterizing which given metric could be modeled with a (meta-)material. While the consistency conditions themselves are by now well known and studied, the form the metric takes once they are satisfied is not. This question is most easily answered by keeping the formalisms of the two research fields here in contact as close to each other as possible. While fully covariant formulations of the electrodynamics of media have been around for a long while, they are usually abandoned for (3+1)- or six-dimensional formalisms. Here we use the fully unified and fully covariant approach. This enables us even to generalize the consistency conditions for the existence of an effective metric to arbitrary background metrics beyond flat space-time electrodynamics. We also show how the familiar matrices for permittivity ε, permeability μ⁻¹, and magnetoelectric effects ζ can be seen as the three independent pieces of the Bel decomposition for the constitutive tensor Z^{abcd}, i.e., the components of an orthogonal decomposition with respect to a given observer with four-velocity V^a. Finally, we use the Moore-Penrose pseudoinverse and the closely related pseudodeterminant to then gain the desired reconstruction of the effective metric in terms of the permittivity tensor ε^{ab}, the permeability tensor [μ⁻¹]^{ab}, and the magnetoelectric tensor ζ^{ab}, as an explicit function g_eff(ε, μ⁻¹, ζ).
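    For orientation, a schematic LaTeX summary of the objects named above; index placement and sign conventions vary across the literature, so this split is shown only up to convention and is not copied from the paper:

    H^{ab} = \tfrac{1}{2}\, Z^{abcd} F_{cd}, \qquad E_a = F_{ab} V^b,

    D^a = \varepsilon^{ab} E_b + \zeta^{ab} B_b, \qquad
    H_a = [\mu^{-1}]_{ab} B^b + \tilde{\zeta}_{ab} E^b .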

  14. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization

    PubMed Central

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. First, the outpatient-visit data from January 2005 to December 2013 are retrieved and used as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods. PMID:28222194
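    Of the three ingredients, the particle swarm optimizer is the easiest to isolate; the sketch below is a generic PSO minimizer in NumPy, applied here to a quadratic test function rather than to network weights and thresholds. The hyperparameters (inertia w, acceleration constants c1 and c2, bounds) are common defaults, not the paper's settings.

    import numpy as np

    def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimization: particles track their personal
        best and the global best, with inertia-weighted velocity updates."""
        rng = np.random.default_rng(seed)
        x = rng.uniform(-5, 5, (n_particles, dim))       # positions
        v = np.zeros_like(x)                             # velocities
        pbest = x.copy()
        pbest_f = np.apply_along_axis(f, 1, x)
        g = pbest[pbest_f.argmin()].copy()               # global best position
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = x + v
            fx = np.apply_along_axis(f, 1, x)
            better = fx < pbest_f
            pbest[better], pbest_f[better] = x[better], fx[better]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()

    print(pso(lambda z: np.sum(z**2), dim=4))            # converges near the origin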

  15. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    PubMed

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. First, the outpatient-visit data from January 2005 to December 2013 are retrieved and used as the original time series. Second, the original time series is decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Third, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods.

  16. Interface conditions for domain decomposition with radical grid refinement

    NASA Technical Reports Server (NTRS)

    Scroggs, Jeffrey S.

    1991-01-01

    Interface conditions for coupling the domains in a physically motivated domain decomposition method are discussed. The domain decomposition is based on an asymptotic-induced method for the numerical solution of hyperbolic conservation laws with small viscosity. The method consists of multiple stages. The first stage is to obtain a first approximation using a first-order method, such as the Godunov scheme. Subsequent stages of the method involve solving internal-layer problems via domain decomposition. The method is derived and justified via singular perturbation techniques.

  17. Spectral decomposition of nonlinear systems with memory

    NASA Astrophysics Data System (ADS)

    Svenkeson, Adam; Glaz, Bryan; Stanton, Samuel; West, Bruce J.

    2016-02-01

    We present an alternative approach to the analysis of nonlinear systems with long-term memory that is based on the Koopman operator and a Lévy transformation in time. Memory effects are considered to be the result of interactions between a system and its surrounding environment. The analysis leads to the decomposition of a nonlinear system with memory into modes whose temporal behavior is anomalous and lacks a characteristic scale. On average, the time evolution of a mode follows a Mittag-Leffler function, and the system can be described using the fractional calculus. The general theory is demonstrated on the fractional linear harmonic oscillator and the fractional nonlinear logistic equation. When analyzing data from an ill-defined (black-box) system, the spectral decomposition in terms of Mittag-Leffler functions that we propose may uncover inherent memory effects through identification of a small set of dynamically relevant structures that would otherwise be obscured by conventional spectral methods. Consequently, the theoretical concepts we present may be useful for developing more general methods for numerical modeling that are able to determine whether observables of a dynamical system are better represented by memoryless operators, or operators with long-term memory in time, when model details are unknown.
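    The Mittag-Leffler function governing the average mode evolution can be evaluated directly from its power series for moderate arguments; a small sketch (the truncation length is an assumption, and alpha = 1 recovers the exponential):

    import numpy as np
    from scipy.special import gamma

    def mittag_leffler(alpha, z, n_terms=100):
        """One-parameter Mittag-Leffler function E_alpha(z) via the truncated
        series E_alpha(z) = sum_k z^k / Gamma(alpha*k + 1); fine for moderate |z|."""
        k = np.arange(n_terms)
        return np.sum(z**k / gamma(alpha * k + 1))

    print(mittag_leffler(1.0, 1.0), np.e)   # both ~2.71828
    print(mittag_leffler(0.8, -1.0))        # slower-than-exponential relaxation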

  18. Domain decomposition for aerodynamic and aeroacoustic analyses, and optimization

    NASA Technical Reports Server (NTRS)

    Baysal, Oktay

    1995-01-01

    The overarching theme was the domain decomposition, which was intended to improve the numerical solution technique for the partial differential equations at hand; in the present study, those that governed either the fluid flow, or the aeroacoustic wave propagation, or the sensitivity analysis for a gradient-based optimization. The role of the domain decomposition extended beyond the original impetus of discretizing geometrically complex regions or writing modular software for distributed-hardware computers. It induced function-space decompositions and operator decompositions that offered the valuable property of near independence of operator evaluation tasks. The objectives centered on extending and implementing methodologies either previously developed or concurrently under development: (1) aerodynamic sensitivity analysis with domain decomposition (SADD); (2) computational aeroacoustics of cavities; and (3) dynamic, multibody computational fluid dynamics using unstructured meshes.

  19. Material decomposition in an arbitrary number of dimensions using noise compensating projection

    NASA Astrophysics Data System (ADS)

    O'Donnell, Thomas; Halaweish, Ahmed; Cormode, David; Cheheltani, Rabee; Fayad, Zahi A.; Mani, Venkatesh

    2017-03-01

    Purpose: Multi-energy CT (e.g., dual energy or photon counting) facilitates the identification of certain compounds via data decomposition. However, the standard approach to decomposition (i.e., solving a system of linear equations) fails if - due to noise - a pixel's vector of HU values falls outside the boundary of values describing possible pure or mixed basis materials. Typically, this is addressed by either throwing away those pixels or projecting them onto the closest point on this boundary. However, when acquiring four (or more) energy volumes, the space bounded by three (or more) materials that may be found in the human body (either naturally or through injection) can be quite small. Noise may significantly limit the number of pixels that fall within it. Therefore, projection onto the boundary becomes an important option. But projection in higher than 3-dimensional space is not possible with standard vector algebra: the cross product is not defined. Methods: We describe a technique which employs Clifford algebra to perform projection in an arbitrary number of dimensions. Clifford algebra describes a manipulation of vectors that incorporates the concepts of addition, subtraction, multiplication, and division. Thereby, vectors may be operated on like scalars, forming a true algebra. Results: We tested our approach on a phantom containing inserts of calcium, gadolinium, iodine, gold nanoparticles, and mixtures of pairs thereof. Images were acquired on a prototype photon counting CT scanner under a range of threshold combinations. Comparisons of the accuracy of different threshold combinations versus ground truth are presented. Conclusions: Material decomposition is possible with three or more materials and four or more energy thresholds using Clifford algebra projection to mitigate noise.
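    In any number of dimensions, the orthogonal projection of a noisy measurement vector onto the span of the basis-material vectors can also be written with ordinary least squares, which is one way to see the geometry that the Clifford-algebra machinery encodes (no claim is made that this reproduces the paper's exact procedure; the signatures below are invented, not calibrated values):

    import numpy as np

    # Columns: signatures of three basis materials across four energy thresholds.
    M = np.array([[310.0,  95.0, 480.0],
                  [260.0,  80.0, 390.0],
                  [180.0,  60.0, 300.0],
                  [120.0,  45.0, 210.0]])

    noisy_pixel = np.array([250.0, 205.0, 150.0, 95.0])

    # Least-squares material fractions, then projection back into measurement space.
    fractions, *_ = np.linalg.lstsq(M, noisy_pixel, rcond=None)
    projected = M @ fractions
    print(fractions)   # per-material contributions
    print(projected)   # the pixel moved onto the subspace spanned by the materials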

  20. On the decomposition of synchronous state machines using sequence invariant state machines

    NASA Technical Reports Server (NTRS)

    Hebbalalu, K.; Whitaker, S.; Cameron, K.

    1992-01-01

    This paper presents a few techniques for the decomposition of Synchronous State Machines of medium to large sizes into smaller component machines. The methods are based on the nature of the transitions and sequences of states in the machine and on the number and variety of inputs to the machine. The results of the decomposition, and of using the Sequence Invariant State Machine (SISM) Design Technique for generating the component machines, include great ease and quickness in the design and implementation processes. Furthermore, there is increased flexibility in making modifications to the original design leading to negligible re-design time.

  1. Sub-domain decomposition methods and computational controls for multibody dynamical systems. [of spacecraft structures

    NASA Technical Reports Server (NTRS)

    Menon, R. G.; Kurdila, A. J.

    1992-01-01

    This paper presents a concurrent methodology to simulate the dynamics of flexible multibody systems with a large number of degrees of freedom. A general class of open-loop structures is treated and a redundant coordinate formulation is adopted. A range space method is used in which the constraint forces are calculated using a preconditioned conjugate gradient method. By using a preconditioner motivated by the regular ordering of the directed graph of the structures, it is shown that the method is order N in the total number of coordinates of the system. The overall formulation has the advantage that it permits fine parallelization and does not rely on system topology to induce concurrency. It can be efficiently implemented on the present generation of parallel computers with a large number of processors. Validation of the method is presented via numerical simulations of space structures incorporating a large number of flexible degrees of freedom.

  2. Radiolytic decomposition of ammonium halides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Orlov, S.L.; Gromov, V.V.; Saunin, E.I.

    1988-11-01

    Chromatographic analyses were made of the gaseous products of the radiolysis of polycrystalline NH4F, NH4Cl, NH4Br, and NH4I, of particle size 0.25-0.5 mm. The irradiation was performed with ⁶⁰Co γ-quanta, at room temperature, in previously evacuated and sealed glass ampules. Determination was made of the amount of gas liberated into the space of the ampule during the irradiation, and of the amount retained in the crystal matrix and evolved on dissolution of the resulting samples in deaerated water. At the same time, quantitative determinations of halogen were made by the thiosulfate method. It was shown that hydrogen and nitrogen were formed in the radiolysis of all the compounds investigated. The yields are listed.

  3. Approaches to optimization of SS/TDMA time slot assignment. [satellite switched time division multiple access

    NASA Technical Reports Server (NTRS)

    Wade, T. O.

    1984-01-01

    Reduction techniques for traffic matrices are explored in some detail. These matrices arise in satellite switched time-division multiple access (SS/TDMA) techniques whereby switching of uplink and downlink beams is required to facilitate interconnectivity of beam zones. A traffic matrix is given to represent the traffic to be transmitted from n uplink beams to n downlink beams within a TDMA frame, typically of 1 ms duration. The frame is divided into segments of time, and during each segment a portion of the traffic is represented by a switching mode. This time slot assignment is characterized by a mode matrix in which there is not more than a single non-zero entry on each line (row or column) of the matrix. Investigation is confined to decomposition of an n × n traffic matrix by mode matrices with a requirement that the decomposition be 100 percent efficient or, equivalently, that the line(s) in the original traffic matrix whose sum is maximal (called critical line(s)) remain maximal as mode matrices are subtracted throughout the decomposition process. A method of decomposition of an n × n traffic matrix by mode matrices results in a number of steps that is bounded by n² − 2n + 2. It is shown that this upper bound exists for an n × n matrix wherein all the lines are maximal (called a quasi doubly stochastic (QDS) matrix) or for an n × n matrix that is completely arbitrary. That is, the fact that no method can exist with a lower upper bound is shown for both QDS and arbitrary matrices, in an elementary and straightforward manner.
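    A greedy sketch of the mode-subtraction idea: repeatedly extract a switching mode (at most one nonzero per line) by a maximum-weight assignment and subtract it, scaled by its smallest selected entry, until the traffic matrix is exhausted. This illustrates the decomposition mechanics only; it does not reproduce the paper's 100-percent-efficient method or its n² − 2n + 2 bound.

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def decompose(traffic):
        """Decompose a traffic matrix into (duration, mode matrix) pairs, each
        mode having at most one nonzero entry per row and per column."""
        T = traffic.astype(float).copy()
        modes = []
        while T.sum() > 0:
            rows, cols = linear_sum_assignment(-T)   # maximum-weight assignment
            keep = T[rows, cols] > 0                 # drop zero-valued picks
            rows, cols = rows[keep], cols[keep]
            dur = T[rows, cols].min()                # duration of this mode
            mode = np.zeros_like(T)
            mode[rows, cols] = 1.0
            T -= dur * mode
            modes.append((dur, mode))
        return modes

    for dur, mode in decompose(np.array([[3, 1], [2, 4]])):
        print(dur)
        print(mode)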

  4. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time independent linear stochastic problems with high dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. Furthermore, the local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  5. Adomian decomposition method used to solve the one-dimensional acoustic equations

    NASA Astrophysics Data System (ADS)

    Dispini, Meta; Mungkasi, Sudi

    2017-05-01

    In this paper we propose the use of the Adomian decomposition method to solve the one-dimensional acoustic equations. This recursive method can be computed easily, and the result is an approximation of the exact solution. We use the Maple software to compute the series in the Adomian decomposition. We find that the Adomian decomposition method is able to solve the acoustic equations with physically correct behavior.
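    A minimal SymPy sketch of the recursion on the linear test problem u'(t) = u(t), u(0) = 1, where each Adomian term is the integral of the previous one and the partial sums rebuild the series of exp(t). This is a generic textbook illustration, not the paper's acoustic-equation computation (which was done in Maple):

    import sympy as sp

    t = sp.symbols('t')

    # Adomian recursion for u' = u, u(0) = 1: u_0 = 1, u_{n+1} = integral of u_n.
    u = sp.Integer(1)
    term = sp.Integer(1)
    for _ in range(6):
        term = sp.integrate(term, (t, 0, t))   # u_{n+1}(t) = integral_0^t u_n
        u += term

    print(sp.expand(u))                        # 1 + t + t**2/2 + ... + t**6/720
    print(sp.series(sp.exp(t), t, 0, 7))       # matches term by term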

  6. The promise of the state space approach to time series analysis for nursing research.

    PubMed

    Levy, Janet A; Elser, Heather E; Knobel, Robin B

    2012-01-01

    Nursing research, particularly related to physiological development, often depends on the collection of time series data. The state space approach to time series analysis has great potential to answer exploratory questions relevant to physiological development but has not been used extensively in nursing. The aim of the study was to introduce the state space approach to time series analysis and demonstrate its potential applicability to neonatal monitoring and physiology. We present a set of univariate state space models, each describing a process that generates a variable of interest over time. Each model is presented algebraically, and a realization of the process is presented graphically from simulated data. This is followed by a discussion of how the model has been or may be used in two nursing projects on neonatal physiological development. The defining feature of the state space approach is the decomposition of the series into components that are functions of time; specifically, slowly varying level, faster varying periodic, and irregular components. State space models potentially simulate developmental processes where a phenomenon emerges and disappears before stabilizing, where the periodic component may become more regular with time, or where the developmental trajectory of a phenomenon is irregular. The ultimate contribution of this approach to nursing science will require close collaboration and cross-disciplinary education between nurses and statisticians.
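    The slowly varying level component mentioned above can be estimated with a few lines of Kalman filtering; below is a minimal local-level model in NumPy. The variances q and r are assumed known here for simplicity, whereas a real analysis would estimate them and add periodic components:

    import numpy as np

    def local_level_filter(y, q=0.01, r=1.0):
        """Kalman filter for the local-level model y_t = mu_t + eps_t,
        mu_t = mu_{t-1} + eta_t, with Var(eta) = q and Var(eps) = r."""
        mu, P = y[0], 1.0
        level = []
        for obs in y:
            P = P + q                    # predict the level variance forward
            K = P / (P + r)              # Kalman gain
            mu = mu + K * (obs - mu)     # update with the innovation
            P = (1 - K) * P
            level.append(mu)
        return np.array(level)

    rng = np.random.default_rng(2)
    truth = np.cumsum(0.1 * rng.standard_normal(200))   # slowly drifting level
    y = truth + rng.standard_normal(200)                # noisy observations
    print(np.round(local_level_filter(y)[-5:], 2))      # tracks the hidden level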

  7. Discretization and Preconditioning Algorithms for the Euler and Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.; Kutler, Paul (Technical Monitor)

    1998-01-01

    Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS, and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement which greatly adds to the robustness of the methods. Discrete maximum principle theory will be presented for general finite volume approximations on unstructured meshes. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance such as the meshing of geometries, code implementation, turbulence modeling, global convergence, etc., will be addressed as needed.

  8. Discretization and Preconditioning Algorithms for the Euler and Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Barth, Timothy; Chancellor, Marisa K. (Technical Monitor)

    1997-01-01

    Several stabilized discretization procedures for conservation law equations on triangulated domains will be considered. Specifically, numerical schemes based on upwind finite volume, fluctuation splitting, Galerkin least-squares, and space discontinuous Galerkin discretization will be considered in detail. A standard energy analysis for several of these methods will be given via entropy symmetrization. Next, we will present some relatively new theoretical results concerning congruence relationships for left or right symmetrized equations. These results suggest new variants of existing FV, DG, GLS and FS methods which are computationally more efficient while retaining the pleasant theoretical properties achieved by entropy symmetrization. In addition, the task of Jacobian linearization of these schemes for use in Newton's method is greatly simplified owing to exploitation of exact symmetries which exist in the system. These variants have been implemented in the "ELF" library for which example calculations will be shown. The FV, FS and DG schemes also permit discrete maximum principle analysis and enforcement which greatly adds to the robustness of the methods. Some prevalent limiting strategies will be reviewed. Next, we consider embedding these nonlinear space discretizations into exact and inexact Newton solvers which are preconditioned using a nonoverlapping (Schur complement) domain decomposition technique. Elements of nonoverlapping domain decomposition for elliptic problems will be reviewed followed by the present extension to hyperbolic and elliptic-hyperbolic problems. Other issues of practical relevance such as the meshing of geometries, code implementation, turbulence modeling, global convergence, etc., will be addressed as needed.
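    A small linear-algebra sketch of the nonoverlapping (Schur complement) step on a two-subdomain system: interior unknowns are eliminated independently per subdomain, a small interface system is solved, and the interiors are recovered by back-substitution. The matrices are random diagonally dominant stand-ins with symmetric coupling, not discretized Euler or Navier-Stokes operators:

    import numpy as np

    rng = np.random.default_rng(3)
    n1, n2, ns = 6, 6, 2                     # two interiors and a small interface
    A11 = np.eye(n1) * 4 + 0.1 * rng.standard_normal((n1, n1))
    A22 = np.eye(n2) * 4 + 0.1 * rng.standard_normal((n2, n2))
    A1s = 0.1 * rng.standard_normal((n1, ns))
    A2s = 0.1 * rng.standard_normal((n2, ns))
    Ass = np.eye(ns) * 4
    b1, b2, bs = (rng.standard_normal(n) for n in (n1, n2, ns))

    # Schur complement S = Ass - sum_i Ais^T Aii^{-1} Ais (independent subdomain solves)
    S = Ass - A1s.T @ np.linalg.solve(A11, A1s) - A2s.T @ np.linalg.solve(A22, A2s)
    g = bs - A1s.T @ np.linalg.solve(A11, b1) - A2s.T @ np.linalg.solve(A22, b2)
    xs = np.linalg.solve(S, g)                # interface solve
    x1 = np.linalg.solve(A11, b1 - A1s @ xs)  # back-substitution, subdomain 1
    x2 = np.linalg.solve(A22, b2 - A2s @ xs)  # back-substitution, subdomain 2

    # Verify against the assembled (monolithic) system
    A = np.block([[A11, np.zeros((n1, n2)), A1s],
                  [np.zeros((n2, n1)), A22, A2s],
                  [A1s.T, A2s.T, Ass]])
    print(np.allclose(np.concatenate([x1, x2, xs]),
                      np.linalg.solve(A, np.concatenate([b1, b2, bs]))))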

  9. Separation of spatial-temporal patterns ('climatic modes') by combined analysis of really measured and generated numerically vector time series

    NASA Astrophysics Data System (ADS)

    Feigin, A. M.; Mukhin, D.; Volodin, E. M.; Gavrilov, A.; Loskutov, E. M.

    2013-12-01

    The new method of decomposition of the Earth's climate system into well separated spatial-temporal patterns ('climatic modes') is discussed. The method is based on: (i) generalization of the MSSA (Multichannel Singular Spectral Analysis) [1] for expanding vector (space-distributed) time series in a basis of spatial-temporal empirical orthogonal functions (STEOF), which makes allowance for delayed correlations of the processes recorded in spatially separated points; (ii) expanding both real SST data and several-times-longer SST data generated numerically in the STEOF basis; (iii) use of the numerically produced STEOF basis for exclusion of 'too slow' (and thus not represented correctly) processes from real data. Applying the method to vector time series generated numerically by the INM RAS Coupled Climate Model [2] allows the separation, from real SST anomaly data [3], of two climatic modes possessing noticeably different time scales: 3-5 and 9-11 years. Relations of the separated modes to ENSO and PDO are investigated. Possible applications of the spatial-temporal climatic patterns concept to prognosis of climate system evolution are discussed. 1. Ghil, M., R. M. Allen, M. D. Dettinger, K. Ide, D. Kondrashov, et al. (2002) "Advanced spectral methods for climatic time series", Rev. Geophys. 40(1), 3.1-3.41. 2. http://83.149.207.89/GCM_DATA_PLOTTING/GCM_INM_DATA_XY_en.htm 3. http://iridl.ldeo.columbia.edu/SOURCES/.KAPLAN/.EXTENDED/.v2/.ssta/
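    A single-channel sketch of the embedding-and-SVD construction underlying MSSA: form a trajectory matrix of lagged copies, take its SVD, and reconstruct the leading modes by diagonal (Hankel) averaging; MSSA stacks several spatial channels into the same construction. The window length and mode count are illustrative:

    import numpy as np

    def ssa_modes(series, window, n_modes=2):
        """Reconstruct the leading SSA modes of a 1-D series."""
        N, L = len(series), window
        K = N - L + 1
        X = np.column_stack([series[i:i + L] for i in range(K)])  # trajectory matrix
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xi = [s[m] * np.outer(U[:, m], Vt[m]) for m in range(n_modes)]
        recon = np.zeros((n_modes, N))
        counts = np.zeros(N)
        for i in range(K):                    # diagonal (Hankel) averaging
            for m in range(n_modes):
                recon[m, i:i + L] += Xi[m][:, i]
            counts[i:i + L] += 1
        return recon / counts

    t = np.linspace(0, 10, 500)
    x = np.sin(2 * np.pi * 0.5 * t) + 0.3 * np.sin(2 * np.pi * 2.3 * t)
    modes = ssa_modes(x, window=60)
    print(modes.shape)    # the leading pair captures the dominant oscillation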

  10. Massively Parallel Dantzig-Wolfe Decomposition Applied to Traffic Flow Scheduling

    NASA Technical Reports Server (NTRS)

    Rios, Joseph Lucio; Ross, Kevin

    2009-01-01

    Optimal scheduling of air traffic over the entire National Airspace System is a computationally difficult task. To speed computation, Dantzig-Wolfe decomposition is applied to a known linear integer programming approach for assigning delays to flights. The optimization model is proven to have the block-angular structure necessary for Dantzig-Wolfe decomposition. The subproblems for this decomposition are solved in parallel via independent computation threads. Experimental evidence suggests that as the number of subproblems/threads increases (and their respective sizes decrease), the solution quality, convergence, and runtime improve. A demonstration of this is provided by using one flight per subproblem, which is the finest possible decomposition. This results in thousands of subproblems and associated computation threads. This massively parallel approach is compared to one with few threads and to standard (non-decomposed) approaches in terms of solution quality and runtime. Since this method generally provides a non-integral (relaxed) solution to the original optimization problem, two heuristics are developed to generate an integral solution. Dantzig-Wolfe followed by these heuristics can provide a near-optimal (sometimes optimal) solution to the original problem hundreds of times faster than standard (non-decomposed) approaches. In addition, when massive decomposition is employed, the solution is shown to be more likely integral, which obviates the need for an integerization step. These results indicate that nationwide, real-time, high fidelity, optimal traffic flow scheduling is achievable for (at least) 3 hour planning horizons.

  11. Decomposition of timed automata for solving scheduling problems

    NASA Astrophysics Data System (ADS)

    Nishi, Tatsushi; Wakatake, Masato

    2014-03-01

    A decomposition algorithm for scheduling problems based on a timed automata (TA) model is proposed. The problem is represented as an optimal state transition problem for TA. The model comprises the parallel composition of submodels such as jobs and resources. The procedure of the proposed methodology can be divided into two steps. The first step is to decompose the TA model into several submodels by using a decomposability condition. The second step is to combine the individual solutions of the subproblems for the decomposed submodels by the penalty function method. A feasible solution for the entire model is derived through iterated computation, solving the subproblem for each submodel. The proposed methodology is applied to solve flowshop and jobshop scheduling problems. Computational experiments demonstrate the effectiveness of the proposed algorithm compared with a conventional TA scheduling algorithm without decomposition.

  12. Enhancement of nitric oxide decomposition efficiency achieved with lanthanum-based perovskite-type catalyst.

    PubMed

    Pan, Kuan Lun; Chen, Mei Chung; Yu, Sheng Jen; Yan, Shaw Yi; Chang, Moo Been

    2016-06-01

    Direct decompositions of nitric oxide (NO) by La0.7Ce0.3SrNiO4, La0.4Ba0.4Ce0.2SrNiO4, and Pr0.4Ba0.4Ce0.2SrNiO4 are experimentally investigated, and the catalysts are tested with different operating parameters to evaluate their activities. Experimental results indicate that the physical and chemical properties of La0.7Ce0.3SrNiO4 are significantly improved by doping with Ba and partial substitution with Pr. NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 are 32% and 68%, respectively, at 400 °C with He as carrier gas. As the temperature is increased to 600 °C, NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4, respectively, reach 100% with an inlet NO concentration of 1000 ppm while the space velocity is fixed at 8000 hr⁻¹. Effects of O2, H2O(g), and CO2 contents and space velocity on NO decomposition are also explored. The results indicate that NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4, respectively, are slightly reduced as the space velocity is increased from 8000 to 20,000 hr⁻¹ at 500 °C. In addition, the activities of both catalysts (La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4) for NO decomposition are slightly reduced in the presence of 5% O2, 5% CO2, or 5% H2O(g). In the durability test, with a space velocity of 8000 hr⁻¹ and an operating temperature of 600 °C, a high N2 yield is maintained throughout 60 hr of operation, revealing the long-term stability of Pr0.4Ba0.4Ce0.2SrNiO4 for NO decomposition. Overall, Pr0.4Ba0.4Ce0.2SrNiO4 shows good catalytic activity for NO decomposition. Nitric oxide (NO) not only causes adverse environmental effects such as acid rain, photochemical smog, and deterioration of visibility and water quality, but also harms human lungs and the respiratory system. Perovskite-type catalysts, including La0.7Ce0.3SrNiO4, La0.4Ba0.4Ce0.2SrNiO4, and Pr0.4Ba0.4Ce0.2SrNiO4, are applied for direct NO decomposition. The results show that NO decomposition can be enhanced as La0.7Ce0.3SrNiO4 is substituted with Ba and/or Pr. At 600 °C, NO decomposition efficiencies achieved with La0.4Ba0.4Ce0.2SrNiO4 and Pr0.4Ba0.4Ce0.2SrNiO4 reach 100%, demonstrating high activity and good potential for direct NO decomposition. Effects of O2, H2O(g), and CO2 contents on catalytic activities are also evaluated and discussed.

  13. Smoothing spline ANOVA frailty model for recurrent event data.

    PubMed

    Du, Pang; Jiang, Yihua; Wang, Yuedong

    2011-12-01

    Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of the parameter update and/or increasing the MCMC sample size along iterations. A model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate their use through the analysis of bladder tumor data. © 2011, The International Biometric Society.

  14. Decomposition methods for midterm planning of hydroelectric production under uncertainty

    NASA Astrophysics Data System (ADS)

    Carpentier, Pierre-Luc

    In this thesis, we consider the midterm production planning problem (MTPP) of hydroelectricity generation under uncertainty. The aim of this problem is to manage a set of interconnected hydroelectric reservoirs over several months. We are particularly interested in high dimensional reservoir systems that are operated by large hydroelectricity producers such as Hydro-Quebec. The aim of this thesis is to develop and evaluate different decomposition methods for solving the MTPP under uncertainty. This thesis is divided into three articles. The first article demonstrates the applicability of the progressive hedging algorithm (PHA), a scenario decomposition method, for managing hydroelectric reservoirs with multiannual storage capacity under highly variable operating conditions in Canada. The PHA is a classical stochastic optimization method designed to solve general multistage stochastic programs defined on a scenario tree. This method works by applying an augmented Lagrangian relaxation on non-anticipativity constraints (NACs) of the stochastic program. At each iteration of the PHA, a sequence of subproblems must be solved. Each subproblem corresponds to a deterministic version of the original stochastic program for a particular scenario in the scenario tree. Linear and quadratic terms must be included in the subproblems' objective functions to penalize any violation of NACs. An important limitation of the PHA is that the number of subproblems to be solved and the number of penalty terms increase exponentially with the branching level in the tree. This phenomenon can make the application of the PHA particularly difficult when the scenario tree covers several tens of time periods. Another important limitation is that the difficulty level of NACs generally increases as the variability of scenarios increases. Consequently, applying the PHA becomes particularly challenging in hydroclimatic regions that are characterized by a high level of seasonal and interannual variability. These two types of limitations can slow down the algorithm's convergence rate and increase the running time per iteration. In this study, we apply the PHA to Hydro-Quebec's power system over a 92-week planning horizon. Hydrologic uncertainty is represented by a scenario tree containing 6 branching stages and 1,635 nodes. The PHA is especially well-suited for this particular application given that the company already possesses a deterministic optimization model to solve the MTPP. The second article presents a new approach which enhances the performance of the PHA for solving general multistage stochastic programs. The proposed method works by applying a multiscenario decomposition scheme on the stochastic program. Our heuristic method aims at constructing an optimal partition of the scenario set by minimizing the number of NACs on which an augmented Lagrangian relaxation must be applied. Each subproblem is a stochastic program defined on a group of scenarios. NACs linking scenarios sharing a common group are represented implicitly in subproblems by using a group-node index system instead of the traditional scenario-time index system. Only the NACs that link the different scenario groups are represented explicitly and relaxed. The proposed method is evaluated numerically on a hydroelectric reservoir management problem in Quebec. The results of this experiment show that our method has several advantages.
Firstly, it reduces the running time per iteration of the PHA by reducing the number of penalty terms included in the objective function and the amount of duplicated constraints and variables. Secondly, it increases the algorithm's convergence rate by reducing the variability of intermediate solutions at duplicated tree nodes. Thirdly, our approach reduces the amount of random-access memory (RAM) required for storing Lagrange multipliers associated with relaxed NACs. The third article presents an extension of the L-Shaped method designed specifically for managing hydroelectric reservoir systems with a high storage capacity. The proposed method makes it possible to consider a higher branching level than conventional decomposition methods allow. To achieve this, we assume that the stochastic process driving the random parameters has a memory loss at time period t = τ. Because of this assumption, the scenario tree possesses a special symmetrical structure at the second stage (t > τ). We exploit this feature using a two-stage Benders decomposition method. Each decomposition stage covers several consecutive time periods. The proposed method works by constructing a convex and piecewise linear recourse function that represents the expected cost at the second stage in the master problem. The subproblem and the master problem are stochastic programs defined on scenario subtrees and can be solved using a conventional decomposition method or directly. We test the proposed method on a hydroelectric power system in Quebec over a 104-week planning horizon. (Abstract shortened by UMI.)
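    A toy progressive hedging iteration, to make the mechanics concrete: scenario copies of a single first-stage decision, quadratic scenario costs with closed-form subproblem solutions, and multiplier updates that enforce nonanticipativity. The scenario data and penalty ρ are invented; real midterm subproblems would be full reservoir optimizations:

    import numpy as np

    a = np.array([1.0, 4.0, 7.0])   # scenario cost data: f_s(x) = (x - a_s)^2
    p = np.array([0.5, 0.3, 0.2])   # scenario probabilities
    rho = 1.0                       # augmented Lagrangian penalty parameter
    x = a.copy()                    # scenario-wise copies of the decision
    w = np.zeros_like(a)            # multipliers on nonanticipativity constraints

    for _ in range(50):
        xbar = p @ x                # implementable (probability-weighted) decision
        # Closed-form subproblem: min_x (x - a_s)^2 + w_s*x + (rho/2)*(x - xbar)^2
        x = (2 * a - w + rho * xbar) / (2 + rho)
        w = w + rho * (x - p @ x)   # price out the remaining disagreement

    print(x, p @ a)                 # copies agree and match the analytic optimum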

  15. Image Analysis Using Quantum Entropy Scale Space and Diffusion Concepts

    DTIC Science & Technology

    2009-11-01

    ... images using a combination of analytic methods and prototype Matlab and Mathematica programs. We investigated concepts of generalized entropy and ... Schmidt strength from quantum logic gate decomposition. This form of entropy gives a measure of the nonlocal content of an entangling logic gate. ... We recall that the Schmidt number is an indicator of entanglement, but not a measure of entanglement. For instance, let us compare ...

  16. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus O(N_b^{3~4}) of single CD in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give acceptable compromise between efficiency and accuracy.

  17. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

    The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ∼100 up to ∼2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5~3}) versus the O(N_b^{3~4}) cost of performing single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C60 molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give acceptable compromise between efficiency and accuracy.
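    A compact sketch of the two-step compound idea on a generic symmetric positive semidefinite matrix: a greedy pivoted incomplete Cholesky factorization, followed by a truncated SVD of the resulting factor. This mirrors the structure of the strategy only; the test matrix and thresholds are illustrative, not two-electron integral tensors:

    import numpy as np

    def pivoted_cholesky(A, tol=1e-10):
        """Pivoted incomplete Cholesky of a symmetric PSD matrix: greedily pick
        the largest remaining diagonal and stop once it drops below tol,
        returning L with A ~= L @ L.T."""
        d = np.diag(A).astype(float).copy()
        cols = []
        while d.max() > tol:
            i = int(d.argmax())                   # pivot index
            l = A[:, i].astype(float).copy()
            for c in cols:
                l -= c[i] * c                     # subtract prior columns
            l /= np.sqrt(d[i])
            cols.append(l)
            d -= l**2                             # update the residual diagonal
        return np.column_stack(cols)

    rng = np.random.default_rng(4)
    B = rng.standard_normal((40, 6))
    A = B @ B.T                                   # PSD, rank 6, size 40

    L = pivoted_cholesky(A)                       # step 1: pivoted CD
    U, s, Vt = np.linalg.svd(L, full_matrices=False)
    k = int(np.sum(s > 1e-8 * s[0]))              # step 2: SVD truncation
    Lk = U[:, :k] * s[:k]
    print(L.shape, k, np.allclose(A, Lk @ Lk.T))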

  18. An improved pulse sequence and inversion algorithm of T2 spectrum

    NASA Astrophysics Data System (ADS)

    Ge, Xinmin; Chen, Hua; Fan, Yiren; Liu, Juntao; Cai, Jianchao; Liu, Jianyu

    2017-03-01

    The nuclear magnetic resonance transverse relaxation time is widely applied in geological prospecting, both in laboratory and downhole environments. However, current methods used for data acquisition and inversion need to be improved to characterize geological samples with complicated relaxation components and pore size distributions, such as samples of tight oil, gas shale, and carbonate. We present an improved pulse sequence to collect transverse relaxation signals based on the CPMG (Carr, Purcell, Meiboom, and Gill) pulse sequence. The echo spacing is not constant but varies in different windows, depending on prior knowledge or customer requirements. We use the entropy-based truncated singular value decomposition (TSVD) to compress the ill-posed matrix and discard small singular values which cause inversion instability. A hybrid algorithm combining the iterative TSVD and a simultaneous iterative reconstruction technique is implemented to reach global convergence and stability of the inversion. Numerical simulations indicate that the improved pulse sequence leads to the same result as CPMG, but with lower echo numbers and computational time. The proposed method is a promising technique for geophysical prospecting and other related fields in the future.
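    A minimal sketch of the TSVD ingredient: build the multi-exponential kernel mapping a T2 distribution to echo-train decays, then invert while discarding small singular values that would otherwise amplify noise. The acquisition grid, the synthetic distribution, and the truncation threshold are invented for illustration:

    import numpy as np

    t  = np.linspace(2e-4, 2.0, 400)             # echo times (s)
    T2 = np.logspace(-3, 1, 50)                  # candidate relaxation times (s)
    K  = np.exp(-t[:, None] / T2[None, :])       # kernel: y = K @ f

    f_true = np.exp(-0.5 * ((np.log10(T2) + 1) / 0.2) ** 2)   # one broad peak
    rng = np.random.default_rng(5)
    y = K @ f_true + 1e-3 * rng.standard_normal(t.size)       # noisy decay data

    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    k = int(np.sum(s > 1e-3 * s[0]))             # keep well-conditioned directions
    f_tsvd = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])
    rel_err = np.linalg.norm(f_tsvd - f_true) / np.linalg.norm(f_true)
    print(k, np.round(rel_err, 3))               # stable, regularized estimate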

  19. Decomposability and scalability in space-based observatory scheduling

    NASA Technical Reports Server (NTRS)

    Muscettola, Nicola; Smith, Stephen F.

    1992-01-01

    In this paper, we discuss issues of problem and model decomposition within the HSTS scheduling framework. HSTS was developed and originally applied in the context of the Hubble Space Telescope (HST) scheduling problem, motivated by the limitations of the current solution and, more generally, the insufficiency of classical planning and scheduling approaches in this problem context. We first summarize the salient architectural characteristics of HSTS and their relationship to previous scheduling and AI planning research. Then, we describe some key problem decomposition techniques supported by HSTS and underlying our integrated planning and scheduling approach, and we discuss the leverage they provide in solving space-based observatory scheduling problems.

  20. A technique for plasma velocity-space cross-correlation

    NASA Astrophysics Data System (ADS)

    Mattingly, Sean; Skiff, Fred

    2018-05-01

    An advance in experimental plasma diagnostics is presented and used to make the first measurement of a plasma velocity-space cross-correlation matrix. The velocity space correlation function can detect collective fluctuations of plasmas through a localized measurement. An empirical decomposition, singular value decomposition, is applied to this Hermitian matrix in order to obtain the plasma fluctuation eigenmode structure on the ion distribution function. A basic theory is introduced and compared to the modes obtained by the experiment. A full characterization of these modes is left for future work, but an outline of this endeavor is provided. Finally, the requirements for this experimental technique in other plasma regimes are discussed.

  1. Applications of Hilbert Spectral Analysis for Speech and Sound Signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E.

    2003-01-01

    A new method for analyzing nonlinear and nonstationary data has been developed, and the natural applications are to speech and sound signals. The key part of the method is the Empirical Mode Decomposition method, with which any complicated data set can be decomposed into a finite and often small number of Intrinsic Mode Functions (IMF). An IMF is defined as any function having the same number of zero-crossings and extrema, and also having symmetric envelopes defined by the local maxima and minima, respectively. The IMF also admits a well-behaved Hilbert transform. This decomposition method is adaptive and, therefore, highly efficient. Since the decomposition is based on the local characteristic time scale of the data, it is applicable to nonlinear and nonstationary processes. With the Hilbert transform, the Intrinsic Mode Functions yield instantaneous frequencies as functions of time, which give sharp identifications of embedded structures. This method can be used to process all acoustic signals. Specifically, it can process speech signals for speech synthesis, speaker identification and verification, speech recognition, and sound-signal enhancement and filtering. Additionally, the acoustical signals from machinery are essentially the way the machines talk to us. These signals, whether transmitted as sound through air or as vibration on the machines, can tell us the operating conditions of the machines. Thus, we can use the acoustic signal to diagnose the problems of machines.
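    The Hilbert-transform step is easy to demonstrate in isolation: for a clean chirp (a stand-in for a well-behaved IMF), the derivative of the unwrapped phase of the analytic signal recovers the instantaneous frequency sweep. A sketch with SciPy; the signal parameters are arbitrary:

    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0
    t = np.arange(0, 2, 1 / fs)
    x = np.cos(2 * np.pi * (5 * t + 10 * t**2))   # instantaneous freq = 5 + 20t Hz

    analytic = hilbert(x)                         # analytic signal x + i*H[x]
    phase = np.unwrap(np.angle(analytic))
    inst_freq = np.diff(phase) / (2 * np.pi) * fs # phase derivative -> Hz
    print(np.round(inst_freq[100], 1),            # ~7 Hz near t = 0.1 s
          np.round(inst_freq[-100], 1))           # ~43 Hz near t = 1.9 s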

  2. Pseudospectral reverse time migration based on wavefield decomposition

    NASA Astrophysics Data System (ADS)

    Du, Zengli; Liu, Jianjun; Xu, Feng; Li, Yongzhang

    2017-05-01

    The accuracy of seismic numerical simulations and the effectiveness of imaging conditions are important in reverse time migration studies. Using the pseudospectral method, the precision of the calculated spatial derivative of the seismic wavefield can be improved, increasing the vertical resolution of images. Low-frequency background noise, generated by the zero-lag cross-correlation of mismatched forward-propagated and backward-propagated wavefields at the impedance interfaces, can be eliminated effectively by using the imaging condition based on the wavefield decomposition technique. The computation complexity can be reduced when imaging is performed in the frequency domain. Since the Fourier transformation in the z-axis may be derived directly as one of the intermediate results of the spatial derivative calculation, the computation load of the wavefield decomposition can be reduced, improving the computation efficiency of imaging. Comparison of the results for a pulse response in a constant-velocity medium indicates that, compared with the finite difference method, the peak frequency of the Ricker wavelet can be increased by 10-15 Hz while still avoiding spatial numerical dispersion, when the second-order spatial derivative of the seismic wavefield is obtained using the pseudospectral method. The results for the SEG/EAGE and Sigsbee2b models show that the signal-to-noise ratio of the profile and the imaging quality of the boundaries of the salt dome migrated using the pseudospectral method are better than those obtained using the finite difference method.
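    The core pseudospectral operation is a spatial derivative taken by multiplying by ik in wavenumber space; the sketch below checks it against the analytic derivative on a periodic grid (the grid size and wavenumber are arbitrary). Spectral accuracy on smooth fields is what allows the wavelet's peak frequency to rise before numerical dispersion appears:

    import numpy as np

    n, L = 128, 2 * np.pi
    x = np.arange(n) * L / n
    u = np.sin(3 * x)

    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)     # angular wavenumbers
    du = np.real(np.fft.ifft(1j * k * np.fft.fft(u)))
    print(np.max(np.abs(du - 3 * np.cos(3 * x))))  # ~1e-13: machine precision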

  3. Fault feature analysis of cracked gear based on LOD and analytical-FE method

    NASA Astrophysics Data System (ADS)

    Wu, Jiateng; Yang, Yu; Yang, Xingkai; Cheng, Junsheng

    2018-01-01

    At present, there are two main approaches to gear fault diagnosis. One is model-based gear dynamic analysis; the other is signal-based gear vibration diagnosis. In this paper, a method for fault feature analysis of gear cracks is presented, which combines the advantages of dynamic modeling and signal processing. Firstly, a new time-frequency analysis method called local oscillatory-characteristic decomposition (LOD) is proposed, which has the attractive feature of extracting fault characteristics efficiently and accurately. Secondly, an analytical-finite element (analytical-FE) method, called the assist-stress intensity factor (assist-SIF) gear contact model, is put forward to calculate the time-varying mesh stiffness (TVMS) under different crack states. Based on the dynamic model of the gear system with 6 degrees of freedom, the dynamic simulation response was obtained for different tooth crack depths. For the dynamic model, the corresponding relation between the characteristic parameters and the degree of the tooth crack is established under a specific condition. On the basis of the methods mentioned above, a novel gear tooth root crack diagnosis method which combines the LOD with the analytical-FE is proposed. Furthermore, empirical mode decomposition (EMD) and ensemble empirical mode decomposition (EEMD) are contrasted with the LOD using gear crack fault vibration signals. The analysis results indicate that the proposed method performs effectively and feasibly for both the tooth crack stiffness calculation and gear tooth crack fault diagnosis.

  4. A benders decomposition approach to multiarea stochastic distributed utility planning

    NASA Astrophysics Data System (ADS)

    McCusker, Susan Ann

    Until recently, small, modular generation and storage options---distributed resources (DRs)---have been installed principally in areas too remote for economic power grid connection and in sensitive applications requiring backup capacity. Recent regulatory changes and DR advances, however, have led utilities to reconsider the role of DRs. To a utility facing distribution capacity bottlenecks or uncertain load growth, DRs can be particularly valuable since they can be dispersed throughout the system and constructed relatively quickly. DR value is determined by comparing its costs to avoided central generation expenses (i.e., marginal costs) and distribution investments. This requires a comprehensive central and local planning and production model, since central system marginal costs result from system interactions over space and time. This dissertation develops and applies an iterative generalized Benders decomposition approach to coordinate models for optimal DR evaluation. Three coordinated models exchange investment, net power demand, and avoided cost information to minimize overall expansion costs. Local investment and production decisions are made by a local mixed integer linear program. Central system investment decisions are made by an LP, and production costs are estimated by a stochastic multi-area production costing model with Kirchhoff's Voltage and Current Law constraints. The nested decomposition is a new and unique method for distributed utility planning that partitions the variables twice to separate local and central investment and production variables, and provides upper and lower bounds on expected expansion costs. Kirchhoff's Voltage Law imposes nonlinear, nonconvex constraints that preclude use of LP if transmission capacity is available in a looped transmission system. This dissertation develops KVL constraint approximations that permit the nested decomposition to consider new transmission resources, while maintaining linearity in the three individual models. These constraints are presented as a heuristic for the given examples; future research will investigate conditions for convergence. A ten-year multi-area example demonstrates the decomposition approach and suggests the ability of DRs and new transmission to modify capacity additions and production costs by changing demand and power flows. Results demonstrate that DR and new transmission options may lead to greater capacity additions, but resulting production cost savings more than offset extra capacity costs.

  5. Efficient calculation of full waveform time domain inversion for electromagnetic problem using fictitious wave domain method and cascade decimation decomposition

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2016-12-01

    Recently, a full waveform time domain inverse solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion that solves simultaneously for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations, the ability to operate in areas with high levels of source signal spatial complexity, and non-stationarity. This goal would not be attainable if one were to adopt a pure time domain solution for the inverse problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across a large frequency bandwidth. This means that for the forward simulation, the smallest time steps should be finer than those required to represent the highest frequency, while the number of time steps should also cover the lowest frequency. This leads to a sensitivity matrix that is computationally burdensome to solve for a model update. We have implemented a code that addresses this situation through the use of cascade decimation decomposition to reduce the size of the sensitivity matrix substantially, through quasi-equivalent time domain decomposition. We also use a fictitious wave domain method to speed up the computation time of the forward simulation in the time domain. By combining these refinements, we have developed a full waveform joint source field/earth conductivity inverse modeling method. We found that cascade decimation speeds computation of the sensitivity matrices dramatically, keeping the solution close to that of the undecimated case. For example, for a model discretized into 2.6×10⁵ cells, we obtain model updates in less than 1 hour on a 4U rack-mounted workgroup Linux server, which is a practical computational time for the inverse problem.

  6. Preparation, structural characterization, and decomposition studies of two new γ-octamolybdates of 4-methylpyridine.

    PubMed

    Szymańska, Anna; Nitek, Wojciech; Rutkowska-Zbik, Dorota; Łasocha, Wiesław

    We synthesized two new γ-octamolybdates, and determined their crystal structures from single-crystal X-ray diffraction data. Orange-yellow tetrakis(4-methylpyridinium) bis(4-methylpyridine)-γ-octamolybdate 1 crystallizes in space group P2₁/c with a = 11.586(2) Å, b = 15.526(2) Å, c = 16.247(2) Å, β = 118.753(1)°, Z = 2. White tetrakis(4-methylpyridinium) bis(4-methylpyridine)-γ-octamolybdate hydrate 2 crystallizes in space group C2/c with a = 27.086(4) Å, b = 11.917(2) Å, c = 19.332(2) Å, β = 124.427(1)°, Z = 4. Results of the crystal structure determinations are presented and discussed in this paper. Thermal stability and decomposition studies of the two new γ-octamolybdates were performed using TG/DSC and XRPD methods. Both compounds decomposed with the formation of 4-methylpyridinium β-octamolybdate. The two compounds are pseudo-polymorphs, exhibiting both striking similarities and significant differences in their structures and properties.

  7. High-purity Cu nanocrystal synthesis by a dynamic decomposition method.

    PubMed

    Jian, Xian; Cao, Yu; Chen, Guozhang; Wang, Chao; Tang, Hui; Yin, Liangjun; Luan, Chunhong; Liang, Yinglin; Jiang, Jing; Wu, Sixin; Zeng, Qing; Wang, Fei; Zhang, Chengui

    2014-12-01

    Cu nanocrystals are applied extensively in several fields, particularly in microelectronics, sensors, and catalysis. The catalytic behavior of Cu nanocrystals depends mainly on their structure and particle size. In this work, the formation of high-purity Cu nanocrystals is studied using a common chemical vapor deposition precursor, cupric tartrate. This process is investigated through a combined experimental and computational approach. The decomposition kinetics is investigated via differential scanning calorimetry and thermogravimetric analysis using the Flynn-Wall-Ozawa, Kissinger, and Starink methods. The growth was found to be influenced by reaction temperature, protective gas, and time. Microstructural and thermal characterizations were performed by X-ray diffraction, scanning electron microscopy, transmission electron microscopy, and differential scanning calorimetry. Decomposition of cupric tartrate at different temperatures was simulated by density functional theory calculations under the generalized gradient approximation. Highly crystalline Cu nanocrystals without floccules were obtained from thermal decomposition of cupric tartrate at 271°C for 8 h under Ar. This general approach paves the way to the controllable synthesis of Cu nanocrystals with high purity.
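
    Of the kinetics methods named above, the Kissinger analysis is simple enough to sketch: peak temperatures Tp measured at several DSC heating rates beta are fitted through ln(beta/Tp^2) = ln(A*R/Ea) - Ea/(R*Tp), and the slope yields the activation energy. A minimal sketch with hypothetical Tp values, not the paper's measurements:

    ```python
    # Kissinger fit: slope of ln(beta/Tp^2) vs 1/Tp gives -Ea/R.
    import numpy as np

    R = 8.314                                     # gas constant, J/(mol K)
    beta = np.array([5., 10., 15., 20.]) / 60.0   # heating rates, K/min -> K/s
    Tp = np.array([523., 531., 536., 540.])       # hypothetical DSC peak temperatures, K

    slope, intercept = np.polyfit(1.0/Tp, np.log(beta/Tp**2), 1)
    Ea = -slope * R                               # activation energy, J/mol
    A = np.exp(intercept) * Ea / R                # pre-exponential factor, 1/s
    print(Ea/1000, A)                             # Ea in kJ/mol
    ```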

  9. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    NASA Astrophysics Data System (ADS)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; Stuehn, Torsten

    2017-11-01

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation from the outset. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach, theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also demonstrate the new approach's capabilities by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. These two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  10. Applying Novel Time-Frequency Moments Singular Value Decomposition Method and Artificial Neural Networks for Ballistocardiography

    NASA Astrophysics Data System (ADS)

    Akhbardeh, Alireza; Junnila, Sakari; Koivuluoma, Mikko; Koivistoinen, Teemu; Värri, Alpo

    2006-12-01

    As is well known, singular value decomposition (SVD) is designed for computing the singular values (SVs) of a matrix. If it is used to find the SVs of an N-by-1 or 1-by-N array whose elements represent samples of a signal, it returns only one singular value, which is not enough to express the whole signal. To overcome this problem, we designed a new kind of feature extraction method which we call "time-frequency moments singular value decomposition (TFM-SVD)." In this new method, we use statistical features of the time series as well as of the frequency series (the Fourier transform of the signal). This information is extracted into a matrix with a fixed structure and the SVs of that matrix are sought. This transform can be used as a preprocessing stage in pattern clustering methods. The results of using it indicate that the performance of a combined system including this transform and classifiers is comparable with that of other feature extraction methods such as wavelet transforms. To evaluate TFM-SVD, we applied this new method and artificial neural networks (ANNs) to ballistocardiogram (BCG) data clustering to look for probable heart disease in six test subjects. BCG from the test subjects was recorded using a chair-like ballistocardiograph developed in our project. This kind of device, combined with automated recording and analysis, would be suitable for use in many places, such as the home or office. The results show that the method has high performance and is almost insensitive to BCG waveform latency or nonlinear disturbance.
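
    A minimal sketch of the TFM-SVD idea described above, under the assumption of one particular fixed matrix layout (the record does not specify it): statistical moments of the time series and of its Fourier magnitude spectrum are stacked into a small matrix, whose singular values then serve as features, more than the single SV that an SVD of the raw 1-by-N array would return:

    ```python
    # Time/frequency moment matrix -> singular values as features (assumed layout).
    import numpy as np
    from scipy.stats import skew, kurtosis

    def tfm_svd_features(x):
        """Return singular values of a 2x4 time/frequency moment matrix."""
        X = np.abs(np.fft.rfft(x))                 # magnitude spectrum (frequency series)
        moments = lambda s: np.array([s.mean(), s.std(), skew(s), kurtosis(s)])
        M = np.vstack([moments(x), moments(X)])    # fixed-structure matrix (assumption)
        return np.linalg.svd(M, compute_uv=False)  # two SVs instead of one

    rng = np.random.default_rng(0)
    sig = np.sin(2*np.pi*1.5*np.arange(0, 10, 0.01)) + 0.1*rng.standard_normal(1000)
    print(tfm_svd_features(sig))
    ```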

  11. Vertically-oriented graphenes supported Mn3O4 as advanced catalysts in post plasma-catalysis for toluene decomposition

    NASA Astrophysics Data System (ADS)

    Bo, Zheng; Hao, Han; Yang, Shiling; Zhu, Jinhui; Yan, Jianhua; Cen, Kefa

    2018-04-01

    This work reports the catalytic performance of vertically-oriented graphenes (VGs) supported manganese oxide catalysts toward toluene decomposition in a post plasma-catalysis (PPC) system. Dense networks of VGs were synthesized on carbon paper (CP) via a microwave plasma-enhanced chemical vapor deposition (PECVD) method. A constant-current approach was applied in a conventional three-electrode electrochemical system for the electrodeposition of Mn3O4 catalysts on the VGs. The as-obtained catalysts were characterized and investigated for ozone conversion and toluene decomposition in a PPC system. Experimental results show that the Mn3O4 catalyst loading mass on VG-coated CP was significantly higher than that on pristine CP (almost 1.8 times for an electrodeposition current of 10 mA). Moreover, the decoration of VGs led to both enhanced catalytic activity for ozone conversion and increased toluene decomposition, showing great promise for the effective decomposition of volatile organic compounds in PPC systems.

  12. Finite Element Analysis of Poroelastic Composites Undergoing Thermal and Gas Diffusion

    NASA Technical Reports Server (NTRS)

    Salamon, N. J. (Principal Investigator); Sullivan, Roy M.; Lee, Sunpyo

    1995-01-01

    A theory for time-dependent thermal and gas diffusion in mechanically time-rate-independent anisotropic poroelastic composites has been developed. This theory advances previous work by the latter two authors by providing for critical transverse shear through a three-dimensional axisymmetric formulation and using it in a new hypothesis for determining the Biot fluid pressure-solid stress coupling factor. The derived governing equations couple material deformation with temperature and internal pore pressure, and more strongly couple gas diffusion and heat transfer than the previous theory. Hence the theory accounts for the interactions between conductive heat transfer in the porous body and convective heat carried by the mass flux through the pores. The Bubnov-Galerkin finite element method is applied to the governing equations to transform them into a semidiscrete finite element system. A numerical procedure is developed to solve the coupled equations in the space and time domains. The method is used to simulate two high temperature tests involving thermal-chemical decomposition of carbon-phenolic composites. In comparison with measured data, the results are accurate. Moreover, unlike previous work, for a single set of poroelastic parameters, they are consistent with two measurements in a restrained thermal growth test.

  13. Sparse time-frequency decomposition based on dictionary adaptation.

    PubMed

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, the optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adapted to one signal rather than to a training set, as in dictionary learning. This dictionary adaptation problem is solved iteratively using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers and noise pollution, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).

  14. NASREN: Standard reference model for telerobot control

    NASA Technical Reports Server (NTRS)

    Albus, J. S.; Lumia, R.; Mccain, H.

    1987-01-01

    A hierarchical architecture is described which supports space station telerobots in a variety of modes. The system is divided into three hierarchies: task decomposition, world model, and sensory processing. Goals at each level of the task decomposition hierarchy are divided both spatially and temporally into simpler commands for the next lower level. This decomposition is repeated until, at the lowest level, the drive signals to the robot actuators are generated. To accomplish its goals, task decomposition modules must often use information stored in the world model. The purpose of the sensory system is to update the world model as rapidly as possible to keep the model in registration with the physical world. The architecture of the entire control system hierarchy is described, along with how it can be applied to space telerobot applications.

  15. Optical ranked-order filtering using threshold decomposition

    DOEpatents

    Allebach, J.P.; Ochoa, E.; Sweeney, D.W.

    1987-10-09

    A hybrid optical/electronic system performs median filtering and related ranked-order operations using threshold decomposition to encode the image. Threshold decomposition transforms the nonlinear neighborhood ranking operation into a linear space-invariant filtering step followed by a point-to-point threshold comparison step. Spatial multiplexing allows parallel processing of all the threshold components as well as recombination by a second linear, space-invariant filtering step. An incoherent optical correlation system performs the linear filtering, using a magneto-optic spatial light modulator as the input device and a computer-generated hologram in the filter plane. Thresholding is done electronically. By adjusting the value of the threshold, the same architecture is used to perform median, minimum, and maximum filtering of images. A totally optical system is also disclosed. 3 figs.
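
    A digital sketch of the threshold-decomposition principle that the patent implements optically: the image is decomposed into binary threshold slices, every slice passes through the same linear space-invariant neighborhood sum (the optically realized correlation step), a point-wise comparison selects the rank, and the filtered slices are summed to recombine. Assumes a small non-negative integer-valued image; the optics are of course not modeled:

    ```python
    # Ranked-order filtering via threshold decomposition.
    import numpy as np

    def ranked_order_filter(img, rank, size=3):
        """Return the rank-th largest neighborhood value
        (rank=1: max, rank=(size*size+1)//2: median, rank=size*size: min)."""
        pad = size // 2
        out = np.zeros_like(img, dtype=int)
        for t in range(1, int(img.max()) + 1):       # threshold decomposition into binary slices
            binary = (img >= t).astype(int)
            padded = np.pad(binary, pad, mode='edge')
            acc = np.zeros_like(img, dtype=int)      # linear space-invariant neighborhood sum
            for di in range(size):
                for dj in range(size):
                    acc += padded[di:di+img.shape[0], dj:dj+img.shape[1]]
            out += (acc >= rank).astype(int)         # point-wise threshold (rank) comparison
        return out

    img = np.array([[1, 9, 2], [3, 5, 4], [8, 2, 7]])
    print(ranked_order_filter(img, rank=5))          # rank 5 of 9 = median filtering
    ```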

  16. Rapid acquisition of data dense solid-state CPMG NMR spectral sets using multi-dimensional statistical analysis

    DOE PAGES

    Mason, H. E.; Uribe, E. C.; Shusterman, J. A.

    2018-01-01

    Tensor-rank decomposition methods have been applied to variable contact time ²⁹Si{¹H} CP/CPMG NMR data sets to extract NMR dynamics information and dramatically decrease conventional NMR acquisition times.

  18. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems will produce results for voltage and power which can then be passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine if any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problem. The iterative method for the coordination problem will also be the Newton-Raphson method. Therefore, each iteration at the coordination level will result in new values for the local problems. The local problems will have to be solved again along with the coordinator problem until some convergence conditions are met.
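
    Both the partitioned local problems and the coordinator problem above are solved with Newton-Raphson iterations, so a generic sketch of that inner loop may be useful. The two-equation residual below is a toy stand-in for actual power-flow mismatch equations:

    ```python
    # Generic Newton-Raphson iteration with a forward-difference Jacobian.
    import numpy as np

    def newton_raphson(residual, x0, tol=1e-10, max_iter=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            f = residual(x)
            if np.linalg.norm(f) < tol:
                break
            J = np.empty((f.size, x.size))       # numerical Jacobian, column by column
            h = 1e-7
            for j in range(x.size):
                xp = x.copy()
                xp[j] += h
                J[:, j] = (residual(xp) - f) / h
            x = x - np.linalg.solve(J, f)        # Newton update
        return x

    # toy "mismatch" equations standing in for power-balance residuals
    mismatch = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]**3])
    print(newton_raphson(mismatch, [0.5, 0.5]))  # converges to (1, 1)
    ```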

  19. Modeling Oil Shale Pyrolysis: High-Temperature Unimolecular Decomposition Pathways for Thiophene.

    PubMed

    Vasiliou, AnGayle K; Hu, Hui; Cowell, Thomas W; Whitman, Jared C; Porterfield, Jessica; Parish, Carol A

    2017-10-12

    The thermal decomposition mechanism of thiophene has been investigated both experimentally and theoretically. Thermal decomposition experiments were done using a 1 mm × 3 cm pulsed silicon carbide microtubular reactor, C₄H₄S + Δ → products. Unlike previous studies, these experiments were able to identify the initial thiophene decomposition products. Thiophene was entrained in either Ar, Ne, or He carrier gas, passed through a heated (300-1700 K) SiC microtubular reactor (roughly ≤100 μs residence time), and exited into a vacuum chamber. The resultant molecular beam was probed by photoionization mass spectroscopy and IR spectroscopy. The pyrolysis mechanisms of thiophene were also investigated with the CBS-QB3 method using UB3LYP/6-311++G(2d,p) optimized geometries. In particular, these electronic structure methods were used to explore pathways for the formation of elemental sulfur as well as for the formation of H₂S and 1,3-butadiyne. Thiophene was found to undergo unimolecular decomposition by five pathways: C₄H₄S → (1) S═C═CH₂ + HCCH, (2) CS + HCCCH₃, (3) HCS + HCCCH₂, (4) H₂S + HCC-CCH, and (5) S + HCC-CH═CH₂. The experimental and theoretical findings are in excellent agreement.

  20. Monodisperse Iron Oxide Nanoparticles by Thermal Decomposition: Elucidating Particle Formation by Second-Resolved in Situ Small-Angle X-ray Scattering

    PubMed Central

    2017-01-01

    The synthesis of iron oxide nanoparticles (NPs) by thermal decomposition of iron precursors using oleic acid as surfactant has evolved into a state-of-the-art method to produce monodisperse, spherical NPs. The principles behind such monodisperse syntheses are well known: the key is a separation between the burst nucleation and growth phases, while the size of the population is set by the precursor-to-surfactant ratio. Here we follow the thermal decomposition of iron pentacarbonyl in the presence of oleic acid via in situ X-ray scattering. This method allows reaction kinetics and precursor states to be followed with high time resolution and statistical significance. Our investigation demonstrates that the final particle size is directly related to a phase of inorganic cluster formation that takes place between precursor decomposition and particle nucleation. The size and concentration of the clusters were shown to depend on the precursor-to-surfactant ratio and heating rate, which in turn led to differences in the onset of nucleation and the concentration of nuclei after the burst nucleation phase. This first direct observation of prenucleation formation of inorganic and micellar structures in iron oxide nanoparticle synthesis by thermal decomposition likely has implications for the synthesis of other NPs by similar routes. PMID:28572705

  1. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method also outperformed filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
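
    A minimal sketch of the standard CCA stage that MEMD-CCA builds on: multichannel EEG is correlated against sine/cosine reference sets at each candidate stimulus frequency, and the frequency with the largest canonical correlation is selected. The MEMD sub-band selection step is omitted, and the synthetic 10 Hz "EEG" below is purely illustrative:

    ```python
    # Candidate-frequency scan with canonical correlation analysis (CCA).
    import numpy as np
    from sklearn.cross_decomposition import CCA

    def ssvep_detect(eeg, fs, candidates, n_harm=2):
        """eeg: (n_samples, n_channels); returns the best-matching stimulus frequency."""
        t = np.arange(eeg.shape[0]) / fs
        scores = []
        for f in candidates:
            ref = np.column_stack([fn(2*np.pi*(h+1)*f*t)
                                   for h in range(n_harm) for fn in (np.sin, np.cos)])
            u, v = CCA(n_components=1).fit(eeg, ref).transform(eeg, ref)
            scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
        return candidates[int(np.argmax(scores))]

    fs = 250
    t = np.arange(0, 2, 1/fs)
    rng = np.random.default_rng(3)
    eeg = np.column_stack([np.sin(2*np.pi*10*t) + 0.5*rng.standard_normal(t.size)
                           for _ in range(8)])                # 8 noisy channels, 10 Hz SSVEP
    print(ssvep_detect(eeg, fs, candidates=[8.0, 10.0, 12.0]))  # expected: 10.0
    ```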

  2. Unitary Operators on the Document Space.

    ERIC Educational Resources Information Center

    Hoenkamp, Eduard

    2003-01-01

    Discusses latent semantic indexing (LSI) that would allow search engines to reduce the dimension of the document space by mapping it into a space spanned by conceptual indices. Topics include vector space models; singular value decomposition (SVD); unitary operators; the Haar transform; and new algorithms. (Author/LRW)
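
    A small sketch of the LSI mechanism summarized in this record: the term-document matrix is factored with an SVD and documents are mapped into a low-dimensional space of conceptual indices. The toy corpus and the choice of rank are illustrative:

    ```python
    # Latent semantic indexing via truncated SVD of a term-document matrix.
    import numpy as np

    docs = ["space shuttle launch", "shuttle orbit launch",
            "stock market crash", "market trading stock"]
    vocab = sorted({w for d in docs for w in d.split()})
    A = np.array([[d.split().count(w) for d in docs] for w in vocab], dtype=float)

    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    k = 2                                      # keep k conceptual dimensions
    doc_coords = (np.diag(s[:k]) @ Vt[:k]).T   # documents in the reduced concept space
    print(doc_coords)                          # spaceflight docs cluster apart from finance docs
    ```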

  3. Rapid characterization of lithium ion battery electrolytes and thermal aging products by low-temperature plasma ambient ionization high-resolution mass spectrometry.

    PubMed

    Vortmann, Britta; Nowak, Sascha; Engelhard, Carsten

    2013-03-19

    Lithium ion batteries (LIBs) are key components of portable electronic devices used around the world. However, thermal decomposition products in the battery reduce its lifetime, and the decomposition processes are still not understood. In this study, a rapid method for in situ analysis and reaction monitoring in LIB electrolytes is presented, based for the first time on high-resolution mass spectrometry (HR-MS) with low-temperature plasma probe (LTP) ambient desorption/ionization. This proof-of-principle study demonstrates the capabilities of ambient mass spectrometry in battery research. LTP-HR-MS is ideally suited for qualitative analysis in the ambient environment because it allows direct sample analysis independent of sample size, geometry, and structure. Further, it is environmentally friendly because it eliminates the need for the organic solvents typically used in separation techniques coupled to mass spectrometry. Accurate mass measurements were used to identify the time- and condition-dependent formation of electrolyte decomposition compounds. A LIB model electrolyte containing ethylene carbonate and dimethyl carbonate was analyzed before and after controlled thermal stress and over the course of several weeks. Major decomposition products identified include difluorophosphoric acid, monofluorophosphoric acid methyl ester, monofluorophosphoric acid dimethyl ester, and hexafluorophosphate. Solvents (i.e., dimethyl carbonate) were partly consumed via an esterification pathway. LTP-HR-MS is considered to be an attractive method for fundamental LIB studies.

  4. Fast non-overlapping Schwarz domain decomposition methods for solving the neutron diffusion equation

    NASA Astrophysics Data System (ADS)

    Jamelot, Erell; Ciarlet, Patrick

    2013-05-01

    Studying numerically the steady state of a nuclear reactor core is expensive in terms of both memory storage and computational time. To address both requirements, one can use a domain decomposition method implemented on a parallel computer. We present here such a method for the mixed neutron diffusion equations, discretized with Raviart-Thomas-Nédélec finite elements. This method is based on the Schwarz iterative algorithm with Robin interface conditions to handle communications. We analyse this method from the continuous to the discrete point of view, and we give some numerical results in a realistic, highly heterogeneous 3D configuration. Computations are carried out with the MINOS solver of the APOLLO3® neutronics code. APOLLO3 is a registered trademark in France.
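
    The record above uses a non-overlapping Schwarz method with Robin interface conditions inside a production neutronics code; as a much simpler self-contained illustration of the Schwarz iteration idea, here is the classical overlapping alternating variant for a 1D Poisson problem (a plainly swapped-in simplification, not the paper's algorithm):

    ```python
    # Alternating Schwarz for -u'' = 1 on [0,1], u(0)=u(1)=0, two overlapping subdomains.
    import numpy as np

    n = 101; h = 1.0/(n-1); x = np.linspace(0, 1, n)
    u = np.zeros(n)
    left = np.arange(0, 60)       # subdomain 1: x in [0, 0.59]
    right = np.arange(40, n)      # subdomain 2: x in [0.40, 1], overlap [0.40, 0.59]

    def solve_sub(idx, bc_lo, bc_hi):
        m = len(idx) - 2          # interior unknowns of the subdomain
        A = np.diag(-2*np.ones(m)) + np.diag(np.ones(m-1), 1) + np.diag(np.ones(m-1), -1)
        rhs = -h*h*np.ones(m)     # from u_{i-1} - 2u_i + u_{i+1} = -h^2
        rhs[0] -= bc_lo
        rhs[-1] -= bc_hi
        return np.linalg.solve(A, rhs)

    for sweep in range(30):       # alternating Schwarz sweeps, exchanging interface values
        u[left[1:-1]] = solve_sub(left, 0.0, u[left[-1]])
        u[right[1:-1]] = solve_sub(right, u[right[0]], 0.0)

    exact = 0.5*x*(1-x)
    print(np.max(np.abs(u - exact)))   # converges to the (nodally exact) global solution
    ```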

  5. Multi-scale clustering of functional data with application to hydraulic gradients in wetlands

    USGS Publications Warehouse

    Greenwood, Mark C.; Sojda, Richard S.; Sharp, Julia L.; Peck, Rory G.; Rosenberry, Donald O.

    2011-01-01

    A new set of methods is developed to perform cluster analysis of functions, motivated by a data set consisting of hydraulic gradients at several locations distributed across a wetland complex. The methods build on previous work on clustering of functions, such as Tarpey and Kinateder (2003) and Hitchcock et al. (2007), but explore functions generated from an additive model decomposition (Wood, 2006) of the original time series. Our decomposition targets two aspects of the series, using an adaptive smoother for the trend and a circular spline for the diurnal variation in the series. Different measures for comparing locations are discussed, including a method for efficiently clustering time series of different lengths using a functional data approach. The complicated nature of these wetlands is highlighted by the shifting group memberships depending on which scale of variation and year of the study are considered.

  6. Non invasive transcostal focusing based on the decomposition of the time reversal operator: in vitro validation

    NASA Astrophysics Data System (ADS)

    Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias

    2010-03-01

    Thermal ablation induced by high intensity focused ultrasound has produced promising clinical results to treat hepatocarcinoma and other liver tumors. However skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply an excitation weight vector to the transducers array which is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold as demonstrated by the measured specific absorption rates.
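
    A compact numerical sketch of the DORT principle used above: the multistatic response matrix of the array is assembled from point-scatterer Green's functions, and its SVD yields one dominant singular vector per well-resolved scatterer, whose back-propagation focuses on that scatterer. Geometry, frequency, and reflectivities are illustrative assumptions:

    ```python
    # DORT: SVD of the multistatic response matrix of a linear array.
    import numpy as np

    c, f = 1500.0, 1.0e6                 # sound speed (m/s), frequency (Hz)
    k = 2*np.pi*f/c
    elems = np.column_stack([np.linspace(-0.02, 0.02, 64), np.zeros(64)])  # 64-element array
    scatterers = np.array([[0.005, 0.06], [-0.008, 0.09]])                 # two point targets
    refl = np.array([1.0, 0.6])                                            # reflectivities

    def green(src, dst):                 # simple 2D propagation kernel (toy form)
        r = np.linalg.norm(src - dst)
        return np.exp(1j*k*r)/np.sqrt(r)

    G = np.array([[green(e, s) for s in scatterers] for e in elems])  # array-to-target paths
    K = G @ np.diag(refl) @ G.T                                       # multistatic response matrix
    U, s, Vt = np.linalg.svd(K)
    print(s[:4].round(2))                # two dominant singular values, one per scatterer
    # transmitting U[:, 0].conj() refocuses on the strongest scatterer
    ```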

  7. Computer implemented empirical mode decomposition method, apparatus, and article of manufacture for two-dimensional signals

    NASA Technical Reports Server (NTRS)

    Huang, Norden E. (Inventor)

    2001-01-01

    A computer implemented method of processing two-dimensional physical signals includes five basic components and the associated presentation techniques of the results. The first component decomposes the two-dimensional signal into one-dimensional profiles. The second component is a computer implemented Empirical Mode Decomposition that extracts a collection of Intrinsic Mode Functions (IMF's) from each profile based on local extrema and/or curvature extrema. The decomposition is based on the direct extraction of the energy associated with various intrinsic time scales in the profiles. In the third component, the IMF's of each profile are then subjected to a Hilbert Transform. The fourth component collates the Hilbert transformed IMF's of the profiles to form a two-dimensional Hilbert Spectrum. A fifth component manipulates the IMF's by, for example, filtering the two-dimensional signal by reconstructing the two-dimensional signal from selected IMF(s).

  8. Experimental detection and focusing in shallow water by decomposition of the time reversal operator.

    PubMed

    Prada, Claire; de Rosny, Julien; Clorennec, Dominique; Minonzio, Jean-Gabriel; Aubry, Alexandre; Fink, Mathias; Berniere, Lothar; Billand, Philippe; Hibral, Sidonie; Folegot, Thomas

    2007-08-01

    A rigid 24-element source-receiver array in the 10-15 kHz frequency band, connected to a programmable electronic system, was deployed in the Bay of Brest during spring 2005. In this 10- to 18-m-deep environment, backscattered data from submerged targets were recorded. Successful detection and focusing experiments in very shallow water using the decomposition of the time reversal operator (DORT method) are shown. The ability of the DORT method to separate the echo of a target from reverberation as well as the echo from two different targets at 250 m is shown. An example of active focusing within the waveguide using the first invariant of the time reversal operator is presented, showing the enhanced focusing capability. Furthermore, the localization of the scatterers in the water column is obtained using a range-dependent acoustic model.

  9. Iterative variational mode decomposition based automated detection of glaucoma using fundus images.

    PubMed

    Maheshwari, Shishir; Pachori, Ram Bilas; Kanhangad, Vivek; Bhandary, Sulatha V; Acharya, U Rajendra

    2017-09-01

    Glaucoma is one of the leading causes of permanent vision loss. It is an ocular disorder caused by increased fluid pressure within the eye. The clinical methods available for the diagnosis of glaucoma require skilled supervision. They are manual, time consuming, and out of reach of common people. Hence, there is a need for an automated glaucoma diagnosis system for mass screening. In this paper, we present a novel method for an automated diagnosis of glaucoma using digital fundus images. Variational mode decomposition (VMD) method is used in an iterative manner for image decomposition. Various features namely, Kapoor entropy, Renyi entropy, Yager entropy, and fractal dimensions are extracted from VMD components. ReliefF algorithm is used to select the discriminatory features and these features are then fed to the least squares support vector machine (LS-SVM) for classification. Our proposed method achieved classification accuracies of 95.19% and 94.79% using three-fold and ten-fold cross-validation strategies, respectively. This system can aid the ophthalmologists in confirming their manual reading of classes (glaucoma or normal) using fundus images. Copyright © 2017 Elsevier Ltd. All rights reserved.

  10. Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.

    PubMed

    Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu

    2018-03-10

    This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.

  11. How Does the Low-Rank Matrix Decomposition Help Internal and External Learnings for Super-Resolution.

    PubMed

    Wang, Shuang; Yue, Bo; Liang, Xuefeng; Jiao, Licheng

    2018-03-01

    Wisely utilizing internal and external learning methods is a new challenge in the super-resolution problem. To address this issue, we analyze the attributes of the two methodologies and make two observations about their recovered details: 1) they are complementary in both the feature space and the image plane, and 2) they distribute sparsely in the spatial space. These observations inspire us to propose a low-rank solution which effectively integrates the two learning methods and thereby achieves a superior result. To fit this solution, the internal learning method and the external learning method are tailored to produce multiple preliminary results. Our theoretical analysis and experiments prove that the proposed low-rank solution does not require massive inputs to guarantee the performance, thereby simplifying the design of the two learning methods for the solution. Intensive experiments show the proposed solution improves on either single learning method in both qualitative and quantitative assessments. Surprisingly, it shows superior capability on noisy images and outperforms state-of-the-art methods.

  12. Analysis of a parallelized nonlinear elliptic boundary value problem solver with application to reacting flows

    NASA Technical Reports Server (NTRS)

    Keyes, David E.; Smooke, Mitchell D.

    1987-01-01

    A parallelized finite difference code based on the Newton method for systems of nonlinear elliptic boundary value problems in two dimensions is analyzed in terms of computational complexity and parallel efficiency. An approximate cost function depending on 15 dimensionless parameters is derived for algorithms based on stripwise and boxwise decompositions of the domain and a one-to-one assignment of the strip or box subdomains to processors. The sensitivity of the cost functions to the parameters is explored in regions of parameter space corresponding to model small-order systems with inexpensive function evaluations and also a coupled system of nineteen equations with very expensive function evaluations. The algorithm was implemented on the Intel Hypercube, and some experimental results for the model problems with stripwise decompositions are presented and compared with the theory. In the context of computational combustion problems, multiprocessors of either message-passing or shared-memory type may be employed with stripwise decompositions to realize speedup of O(n), where n is mesh resolution in one direction, for reasonable n.

  13. Canonical decomposition of magnetotelluric responses: Experiment on 1D anisotropic structures

    NASA Astrophysics Data System (ADS)

    Guo, Ze-qiu; Wei, Wen-bo; Ye, Gao-feng; Jin, Sheng; Jing, Jian-en

    2015-08-01

    Horizontal electrical heterogeneity of the subsurface earth originates mostly from structural complexity and electrical anisotropy, and local near-surface electrical heterogeneity will severely distort regional electromagnetic responses. Conventional distortion analyses for magnetotelluric soundings are primarily physical decomposition methods with respect to isotropic models, which mostly presume that the geoelectric distribution of geological structures follows local and regional patterns represented by 3D/2D models. Given the widespread anisotropy of earth media, the possible confusion between 1D anisotropic responses and 2D isotropic responses, and the defects of physical decomposition methods, we propose to conduct modeling experiments with canonical decomposition in terms of 1D layered anisotropic models; this method is one of the mathematical decomposition methods based on eigenstate analyses, as distinct from distortion analyses, and can be used to recover electrical information such as strike directions and maximum and minimum conductivity. We tested this method with numerical simulation experiments on several 1D synthetic models, which showed that canonical decomposition is quite effective at revealing geological anisotropic information. Finally, against the background of anisotropy indicated by previous geological and seismological studies, canonical decomposition is applied to real data acquired in the North China Craton for 1D anisotropy analyses, and the result shows that, with effective modeling and cautious interpretation, canonical decomposition can be another good method to detect the anisotropy of geological media.

  14. Nonconforming mortar element methods: Application to spectral discretizations

    NASA Technical Reports Server (NTRS)

    Maday, Yvon; Mavriplis, Cathy; Patera, Anthony

    1988-01-01

    Spectral element methods are p-type weighted residual techniques for partial differential equations that combine the generality of finite element methods with the accuracy of spectral methods. Presented here is a new nonconforming discretization which greatly improves the flexibility of the spectral element approach as regards automatic mesh generation and non-propagating local mesh refinement. The method is based on the introduction of an auxiliary mortar trace space, and constitutes a new approach to discretization-driven domain decomposition characterized by a clean decoupling of the local, structure-preserving residual evaluations and the transmission of boundary and continuity conditions. The flexibility of the mortar method is illustrated by several nonconforming adaptive Navier-Stokes calculations in complex geometry.

  15. Enhanced Thermal Decomposition Properties of CL-20 through Space-Confining in Three-Dimensional Hierarchically Ordered Porous Carbon.

    PubMed

    Chen, Jin; He, Simin; Huang, Bing; Wu, Peng; Qiao, Zhiqiang; Wang, Jun; Zhang, Liyuan; Yang, Guangcheng; Huang, Hui

    2017-03-29

    High energy and low signature properties are the future trend of solid propellant development. As a new and promising oxidizer, hexanitrohexaazaisowurtzitane (CL-20) is expected to replace the conventional oxidizer ammonium perchlorate to reach the above goals. However, the high pressure exponent of CL-20 hinders its application in solid propellants, so the development of effective catalysts to improve the thermal decomposition properties of CL-20 remains challenging. Here, 3D hierarchically ordered porous carbon (3D HOPC) is presented as a catalyst for the thermal decomposition of CL-20 by synthesizing a series of nanostructured CL-20/HOPC composites. In these nanocomposites, CL-20 is homogeneously space-confined into the 3D HOPC scaffold as nanocrystals 9.2-26.5 nm in diameter. The effect of the pore textural parameters and surface modification of 3D HOPC, as well as the CL-20 loading amount, on the thermal decomposition of CL-20 is discussed. A significant improvement of the thermal decomposition properties of CL-20 is achieved, with a remarkable decrease in decomposition peak temperature (from 247.0 to 174.8 °C) and activation energy (from 165.5 to 115.3 kJ/mol). The exceptional performance of 3D HOPC can be attributed to its well-connected 3D hierarchically ordered porous structure, high surface area, and the confined CL-20 nanocrystals. This work clearly demonstrates that 3D HOPC is a superior catalyst for CL-20 thermal decomposition and opens new potential for further applications of CL-20 in solid propellants.

  16. Human decomposition and the reliability of a 'Universal' model for post mortem interval estimations.

    PubMed

    Cockle, Diane L; Bell, Lynne S

    2015-08-01

    Human decomposition is a complex biological process driven by an array of variables which are not clearly understood. The medico-legal community has long been searching for a reliable method to establish the post-mortem interval (PMI) for those whose deaths have either been hidden or gone unnoticed. To date, attempts to develop a PMI estimation method based on the state of the body either at the scene or at autopsy have been unsuccessful. One recent study has proposed that two simple formulae, based on the level of decomposition, humidity, and temperature, could be used to accurately calculate the PMI for bodies outside, on or under the surface, worldwide. This study attempted to validate 'Formula I' [1] (for bodies on the surface) using 42 Canadian cases with known PMIs. The results indicated that Formula I estimations consistently overestimated the known PMI by a large and inconsistent margin for bodies exposed to warm temperatures, while for bodies exposed to cold and freezing temperatures (less than 4°C) the PMI was dramatically underestimated. The ability of 'Formula II' to estimate the PMI for buried bodies was also examined using a set of 22 known Canadian burial cases. As the cases used in this study are retrospective, some of the data needed for Formula II were not available. The 4.6 value used in Formula II to represent the standard ratio by which burial decelerates the rate of decomposition was examined. The average time taken to achieve each stage of decomposition both on and under the surface was compared for the 118 known cases. It was found that the rate of decomposition was not consistent throughout all stages of decomposition. The rates of autolysis above and below the ground were equivalent, with the buried cases staying in a state of putrefaction for a prolonged period of time. It is suggested that differences in temperature extremes and humidity levels between geographic regions may make it impractical to apply formulas developed in one region to any other region. These results also suggest that there are other variables, apart from temperature and humidity, that may impact the rate of human decomposition. These variables, or complex of variables, are considered regionally specific. Neither of the Universal Formulae performed well, and our results do not support the proposition of universality for PMI estimation. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  17. The decomposition of deformation: New metrics to enhance shape analysis in medical imaging.

    PubMed

    Varano, Valerio; Piras, Paolo; Gabriele, Stefano; Teresi, Luciano; Nardinocchi, Paola; Dryden, Ian L; Torromeo, Concetta; Puddu, Paolo E

    2018-05-01

    In landmark-based Shape Analysis, size is measured in most cases with Centroid Size. Changes in shape are decomposed into affine and non-affine components. Furthermore, the non-affine component can in turn be decomposed into a series of local deformations (partial warps). If the extent of deformation between two shapes is small, the difference between Centroid Size and m-Volume increment is barely appreciable. In medical imaging applied to soft tissues, bodies can undergo very large deformations involving large changes in size. The cardiac example analyzed in the present paper shows changes in m-Volume that can reach 60%. We show here that standard Geometric Morphometrics tools (landmarks, Thin Plate Spline, and the related decomposition of the deformation) can be generalized to better describe the very large deformations of biological tissues, without losing a synthetic description. In particular, the classical decomposition of the space tangent to the shape space into affine and non-affine components is enriched to also include the change in size, in order to give a complete description of the tangent space to the size-and-shape space. The proposed generalization is formulated by means of a new Riemannian metric describing the change in size as change in m-Volume rather than change in Centroid Size. This leads to a redefinition of some aspects of Kendall's size-and-shape space without losing Kendall's original formulation. This new formulation is discussed by means of simulated examples using 2D and 3D platonic shapes as well as a real example from clinical 3D echocardiographic data. We demonstrate that our decomposition-based approaches discriminate very effectively between healthy subjects and patients affected by Hypertrophic Cardiomyopathy. Copyright © 2018 Elsevier B.V. All rights reserved.
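
    A small numeric illustration of the distinction drawn above, with an assumed toy configuration: centroid size and m-Volume respond very differently to a large anisotropic deformation of a 3D landmark set:

    ```python
    # Centroid size vs m-Volume under a 60% anisotropic volume increase.
    import numpy as np
    from scipy.spatial import ConvexHull

    cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
    deformed = cube * np.array([1.6, 1.0, 1.0])   # stretch along x by a factor 1.6

    def centroid_size(L):
        return np.sqrt(((L - L.mean(axis=0))**2).sum())

    for name, L in (("original", cube), ("deformed", deformed)):
        print(name, round(centroid_size(L), 3), round(ConvexHull(L).volume, 3))
    # centroid size grows only ~1.23x while the m-Volume grows by 1.6x
    ```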

  18. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    NASA Astrophysics Data System (ADS)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a major and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although some modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local property of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated here on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to present the advantage of this method and the disadvantage of the ITGM method. The comparisons of the responses are used to verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy over a wide range of parameters.
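
    The POD step itself (as opposed to the proposed IGM interpolation) follows the standard snapshot procedure and can be sketched briefly; the snapshot data below are synthetic, not the 33-DOF rotor response, and the 66-variable state size is an assumption for illustration:

    ```python
    # Snapshot POD: SVD of a snapshot matrix, truncation by captured energy.
    import numpy as np

    rng = np.random.default_rng(1)
    n_state, n_snap = 66, 200                     # assumed state size and snapshot count
    t = np.linspace(0, 1, n_snap)
    modes_true = rng.standard_normal((n_state, 3))
    X = modes_true @ np.vstack([np.sin(9*t), np.cos(17*t), np.sin(31*t)])  # snapshots

    U, s, _ = np.linalg.svd(X, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.999)) + 1   # modes needed for 99.9% of the energy
    Phi = U[:, :r]                                # POD basis
    X_red = Phi.T @ X                             # reduced coordinates (r x n_snap)
    print(r, np.linalg.norm(X - Phi @ X_red) / np.linalg.norm(X))  # tiny reconstruction error
    ```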

  19. Investigation of Prediction Method and Fundamental Thermo-decomposition Properties on Gasification of Woody Biomass

    NASA Astrophysics Data System (ADS)

    Morita, Akihiro

    Recently, the development of energy transfer technology based on woody biomass has advanced remarkably, accompanying the biomass boom in gasification and liquefaction. Raising the energy yield of biomass is extremely important for transportation and for the efficient utilization and production of bio-fuels, because conversion to bio-fuel requires a detailed discussion of the thermo-decomposition characteristics of the main biomass constituents: cellulose, hemicellulose, and lignin. In this research, we analyze the thermo-decomposition characteristics of each main constituent in both active (air) and passive (N2) atmospheres. In particular, we propose a prediction model of gasification based on the change of the atomic carbon ratio during thermo-decomposition. 1) Even when cedar chips are heat-treated at 473 K, almost no loss of energy occurs; the substances contributing to the weight reduction are ingredients of low energy value. 2) When cedar chips are heated around 473 K, substances with low energy value, such as water or acetic acid, can be expected to arise by thermal decomposition; selecting and eliminating these should improve the transportation performance of the biomass. 3) The dissipation of hydrogen, nitrogen, and oxygen during the gasification process was found to be directly proportional to the carbon dissipation rate. 4) The behavior of the main constituent elements of the biomass (carbon, hydrogen, nitrogen, and oxygen) during thermo-decomposition suggests that gasification can be predicted by a statistical method.

  20. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    PubMed

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective genome-wide search for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that the application of the alignment algorithm to search in genomes yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based search tool is free upon request and can be downloaded at our site http://w.uga.edu/RNA-informatics/software/index.php.

  1. Uncertainty Analysis of Decomposing Polyurethane Foam

    NASA Technical Reports Server (NTRS)

    Hobbs, Michael L.; Romero, Vicente J.

    2000-01-01

    Sensitivity/uncertainty analyses are necessary to determine where to allocate resources for improved predictions in support of our nation's nuclear safety mission. Yet, sensitivity/uncertainty analyses are not commonly performed on complex combustion models because the calculations are time consuming, CPU intensive, nontrivial exercises that can lead to deceptive results. To illustrate these ideas, a variety of sensitivity/uncertainty analyses were used to determine the uncertainty associated with thermal decomposition of polyurethane foam exposed to high radiative flux boundary conditions. The polyurethane used in this study is a rigid closed-cell foam used as an encapsulant. Related polyurethane binders such as Estane are used in many energetic materials of interest to the JANNAF community. The complex, finite element foam decomposition model used in this study has 25 input parameters that include chemistry, polymer structure, and thermophysical properties. The response variable was selected as the steady-state decomposition front velocity calculated as the derivative of the decomposition front location versus time. An analytical mean value sensitivity/uncertainty (MV) analysis was used to determine the standard deviation by taking numerical derivatives of the response variable with respect to each of the 25 input parameters. Since the response variable is also a derivative, the standard deviation was essentially determined from a second derivative that was extremely sensitive to numerical noise. To minimize the numerical noise, 50-micrometer element dimensions and approximately 1-msec time steps were required to obtain stable uncertainty results. As an alternative method to determine the uncertainty and sensitivity in the decomposition front velocity, surrogate response surfaces were generated for use with a constrained Latin Hypercube Sampling (LHS) technique. Two surrogate response surfaces were investigated: 1) a linear surrogate response surface (LIN) and 2) a quadratic response surface (QUAD). The LHS techniques do not require derivatives of the response variable and are subsequently relatively insensitive to numerical noise. To compare the LIN and QUAD methods to the MV method, a direct LHS analysis (DLHS) was performed using the full grid and timestep resolved finite element model. The surrogate response models (LIN and QUAD) are shown to give acceptable values of the mean and standard deviation when compared to the fully converged DLHS model.
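
    A brief sketch of the Latin Hypercube Sampling loop used for the surrogate-based analyses above; the 3-parameter linear response function and its bounds are hypothetical stand-ins for the 25-parameter foam model:

    ```python
    # LHS propagation of input uncertainty through a surrogate response surface.
    import numpy as np
    from scipy.stats import qmc

    def surrogate_velocity(p):                    # hypothetical linear (LIN-style) surrogate
        return 1.0 + 0.5*p[:, 0] - 0.3*p[:, 1] + 0.1*p[:, 2]

    sampler = qmc.LatinHypercube(d=3, seed=0)
    unit = sampler.random(n=1000)                 # stratified samples on [0,1)^3
    lo = np.array([0.8, 0.1, 1.0])                # illustrative parameter bounds
    hi = np.array([1.2, 0.5, 3.0])
    samples = qmc.scale(unit, lo, hi)

    v = surrogate_velocity(samples)
    print(v.mean(), v.std(ddof=1))                # mean and std of the front velocity
    ```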

  2. FAST TRACK COMMUNICATION: \\ {P}\\ {T}-symmetry, Cartan decompositions, Lie triple systems and Krein space-related Clifford algebras

    NASA Astrophysics Data System (ADS)

    Günther, Uwe; Kuzhel, Sergii

    2010-10-01

    Gauged PT quantum mechanics (PTQM) and corresponding Krein space setups are studied. For models with constant non-Abelian gauge potentials and extended parity inversions, compact and noncompact Lie group components are analyzed via Cartan decompositions. A Lie-triple structure is found and an interpretation as a PT-symmetrically generalized Jaynes-Cummings model is possible, with close relation to recently studied cavity QED setups with transmon states in multilevel artificial atoms. For models with Abelian gauge potentials a hidden Clifford algebra structure is found and used to obtain the fundamental symmetry of Krein space-related J-self-adjoint extensions for PTQM setups with ultra-localized potentials.

  3. Integrated ensemble noise-reconstructed empirical mode decomposition for mechanical fault detection

    NASA Astrophysics Data System (ADS)

    Yuan, Jing; Ji, Feng; Gao, Yuan; Zhu, Jun; Wei, Chenjun; Zhou, Yu

    2018-05-01

    A new branch of fault detection utilizes noise, for example by enhancing, adding, or estimating it, so as to improve the signal-to-noise ratio (SNR) and extract fault signatures. Ensemble noise-reconstructed empirical mode decomposition (ENEMD) is a novel noise-utilization method of this kind that ameliorates mode mixing and denoises the intrinsic mode functions (IMFs). Despite its potential for detecting weak and multiple faults, the method still suffers from two major problems: a user-defined parameter and poor capability in high-SNR cases. Hence, integrated ensemble noise-reconstructed empirical mode decomposition is proposed to overcome these drawbacks, improved by two noise estimation techniques for different SNRs as well as a noise estimation strategy. Independent of any user-defined setup, noise estimation by minimax thresholding is improved for the low-SNR case, and it is especially effective for signature enhancement. To approximate weak noise precisely, noise estimation by local reconfiguration using singular value decomposition (SVD) is proposed for the high-SNR case, which is particularly powerful for reducing mode mixing. Therein, the sliding window for projecting the phase space is optimally designed by correlation minimization, while a reasonable singular order for the local reconfiguration to estimate the noise is determined by the inflection point of the increment trend of the normalized singular entropy. Furthermore, the noise estimation strategy, i.e. how to select between the two estimation techniques along with the critical case, is developed and discussed for different SNRs by means of the possible noise-only IMF family. The method is validated by repeatable simulations to demonstrate its overall performance and especially to confirm its noise estimation capability. Finally, the method is applied to detect a local wear fault in a dual-axis stabilized platform and a gear crack in an operating electric locomotive to verify its effectiveness and feasibility.
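
    A condensed sketch of the SVD-based "local reconfiguration" idea used above for the high-SNR case: the signal is embedded in a sliding-window (Hankel-style) matrix, the trailing singular components are treated as the noise subspace, and the noise estimate is reconstructed by anti-diagonal averaging. The fixed order cut used here is an assumption standing in for the paper's singular-entropy inflection criterion:

    ```python
    # Sliding-window SVD noise estimation with anti-diagonal averaging.
    import numpy as np

    def hankel_noise_estimate(x, window=50, keep=4):
        N = len(x)
        H = np.lib.stride_tricks.sliding_window_view(x, window).T  # window x (N-window+1)
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        noise = (U[:, keep:] * s[keep:]) @ Vt[keep:]    # residual (noise) subspace
        est = np.zeros(N)
        cnt = np.zeros(N)
        for i in range(noise.shape[0]):                 # anti-diagonal averaging: H[i,j] ~ x[i+j]
            for j in range(noise.shape[1]):
                est[i + j] += noise[i, j]
                cnt[i + j] += 1
        return est / cnt

    t = np.linspace(0, 1, 400)
    clean = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)
    noisy = clean + 0.2*np.random.default_rng(4).standard_normal(t.size)
    print(np.std(hankel_noise_estimate(noisy)))         # roughly the injected noise level (0.2)
    ```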

  4. Multiscale Characterization of PM2.5 in Southern Taiwan based on Noise-assisted Multivariate Empirical Mode Decomposition and Time-dependent Intrinsic Correlation

    NASA Astrophysics Data System (ADS)

    Hsiao, Y. R.; Tsai, C.

    2017-12-01

    As the WHO Air Quality Guideline indicates, ambient air pollution exposes the world's population to the threat of fatal illnesses (e.g. heart disease, lung cancer, asthma), raising concerns about air pollution sources and related factors. This study presents a novel approach to investigating the multiscale variations of PM2.5 in southern Taiwan over the past decade, together with four meteorological influencing factors (temperature, relative humidity, precipitation and wind speed), based on the Noise-assisted Multivariate Empirical Mode Decomposition (NAMEMD) algorithm, Hilbert Spectral Analysis (HSA) and the Time-dependent Intrinsic Correlation (TDIC) method. The NAMEMD algorithm is a fully data-driven approach designed for nonlinear and nonstationary multivariate signals, and is used to decompose multivariate signals into a collection of channels of Intrinsic Mode Functions (IMFs). The TDIC method is an EMD-based method that uses a set of sliding window sizes to quantify localized correlation coefficients for multiscale signals. With the alignment property and quasi-dyadic filter bank of the NAMEMD algorithm, one can produce the same number of IMFs for all variables and estimate the cross-correlation more accurately. The spectral representation of the NAMEMD-HSA method is compared with Complementary Ensemble Empirical Mode Decomposition/Hilbert Spectral Analysis (CEEMD-HSA) and wavelet analysis. The NAMEMD-based TDIC analysis is then compared with CEEMD-based TDIC analysis and traditional correlation analysis.
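
    The localized correlation at the heart of TDIC can be sketched as a windowed Pearson correlation between two aligned IMFs; the actual method additionally adapts the window size to the local instantaneous period, which is omitted in this simplified version:

    ```python
    import numpy as np

    def tdic_fixed_window(imf_a, imf_b, half_window):
        """Pearson correlation inside a sliding window centred on each
        sample; a fixed half-window replaces TDIC's adaptive sizing."""
        a = np.asarray(imf_a, float)
        b = np.asarray(imf_b, float)
        n = len(a)
        rho = np.full(n, np.nan)
        for t in range(n):
            lo, hi = max(0, t - half_window), min(n, t + half_window + 1)
            wa, wb = a[lo:hi], b[lo:hi]
            if wa.std() > 0 and wb.std() > 0:
                rho[t] = np.corrcoef(wa, wb)[0, 1]
        return rho
    ```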

  5. RESOLVING THE ACTIVE GALACTIC NUCLEUS AND HOST EMISSION IN THE MID-INFRARED USING A MODEL-INDEPENDENT SPECTRAL DECOMPOSITION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hernán-Caballero, Antonio; Alonso-Herrero, Almudena; Hatziminaoglou, Evanthia

    2015-04-20

    We present results on the spectral decomposition of 118 Spitzer Infrared Spectrograph (IRS) spectra from local active galactic nuclei (AGNs) using a large set of Spitzer/IRS spectra as templates. The templates are themselves IRS spectra from extreme cases where a single physical component (stellar, interstellar, or AGN) completely dominates the integrated mid-infrared emission. We show that a linear combination of one template for each physical component reproduces the observed IRS spectra of AGN hosts with unprecedented fidelity for a template fitting method, with no need to model extinction separately. We use full probability distribution functions to estimate expectation values and uncertainties for observables, and find that the decomposition results are robust against degeneracies. Furthermore, we compare the AGN spectra derived from the spectral decomposition with sub-arcsecond resolution nuclear photometry and spectroscopy from ground-based observations. We find that the AGN component derived from the decomposition closely matches the nuclear spectrum with a 1σ dispersion of 0.12 dex in luminosity and typical uncertainties of ∼0.19 in the spectral index and ∼0.1 in the silicate strength. We conclude that the emission from the host galaxy can be reliably removed from the IRS spectra of AGNs. This allows for unbiased studies of the AGN emission in intermediate- and high-redshift galaxies (currently inaccessible to ground-based observations) with archival Spitzer/IRS data and, in the future, with the Mid-InfraRed Instrument of the James Webb Space Telescope. The decomposition code and templates are available at http://denebola.org/ahc/deblendIRS.
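
    The linear decomposition itself can be illustrated with a non-negative least-squares fit of one template per physical component; the template arrays and names below are placeholders, and the paper's full probability-distribution treatment is not reproduced:

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def decompose_spectrum(flux, stellar, interstellar, agn):
        """Fit an observed mid-IR spectrum as a non-negative linear
        combination of one template per component (all arrays must
        share the same wavelength grid; names are illustrative)."""
        T = np.column_stack([stellar, interstellar, agn])
        weights, residual = nnls(T, flux)
        return weights, residual  # weights[2] scales the AGN component
    ```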

  6. Modeling hydrogen-cyanide absorption in fires

    NASA Technical Reports Server (NTRS)

    Cagliostro, D. E.; Islas, A.

    1981-01-01

    A mathematical model is developed for predicting blood concentrations of cyanide as functions of exposure time to constant levels of cyanide in the atmosphere. A toxic gas (which may form as a result of decomposition of combustion materials used in transportation vehicles) is breathed into the alveolar space and transferred from there to the blood by a first-order process that depends on the concentration of the toxicant in the alveolar space. The model predicts that blood cyanide levels are more sensitive to the breathing cycle than to blood circulation. A model estimate of the relative effects of CO and HCN atmospheres, generated in an experimental chamber with an epoxy polymer, shows that toxic effects of cyanide occur long before those of carbon monoxide.
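
    A minimal sketch of such a first-order uptake model, with an explicit Euler step and illustrative rate constants rather than the paper's fitted values:

    ```python
    import numpy as np

    def blood_cyanide(c_alv, k_uptake, k_elim, dt, n_steps):
        """First-order transfer from a constant alveolar concentration
        into blood, with first-order elimination:
        dC/dt = k_uptake * c_alv - k_elim * C (constants illustrative)."""
        c = np.zeros(n_steps + 1)
        for n in range(n_steps):
            c[n + 1] = c[n] + dt * (k_uptake * c_alv - k_elim * c[n])
        return c
    ```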

  7. Spectral-decomposition techniques for the identification of periodic and anomalous phenomena in radon time-series.

    NASA Astrophysics Data System (ADS)

    Crockett, R. G. M.; Perrier, F.; Richon, P.

    2009-04-01

    Building on independent investigations by research groups at both IPGP, France, and the University of Northampton, UK, hourly-sampled radon time-series of durations exceeding one year have been investigated for periodic and anomalous phenomena using a variety of established and novel techniques. These time-series have been recorded in locations having no routine human activity and thus are effectively free of significant anthropogenic influences. With regard to periodic components, the long durations of these time-series allow, in principle, very high frequency resolutions for established spectral-measurement techniques such as Fourier and maximum-entropy methods. However, as has been widely observed, the stochastic nature of radon emissions from rocks and soils, coupled with sensitivity to a wide variety of influences such as temperature, wind-speed and soil moisture-content, has made interpretation of the results obtained by such techniques very difficult, with uncertain results in many cases. We here report developments in the investigation of radon time-series for periodic and anomalous phenomena using spectral-decomposition techniques. These techniques, in variously separating 'high', 'middle' and 'low' frequency components, effectively 'de-noise' the data by allowing components of interest to be isolated from others which might serve to obscure weaker information-containing components. Once isolated, these components can be investigated using a variety of techniques. While this work is at an early stage of development, spectral-decomposition methods have been used successfully to indicate the presence of diurnal and sub-diurnal cycles in radon concentration which we provisionally attribute to tidal influences. These methods have also been used to enhance the identification of short-duration anomalies attributable to a variety of causes including, for example, earthquakes and rapid large-magnitude changes in weather conditions. Keywords: radon; earthquakes; tidal influences; anomalies; time series; spectral decomposition.
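
    A crude stand-in for this kind of band separation is a hard FFT mask that splits the series into low-, middle- and high-frequency components; for hourly radon data, a diurnal cycle sits at 1/24 cycles per hour. A sketch, with band edges chosen by hand:

    ```python
    import numpy as np

    def band_split(x, dt, f_lo, f_hi):
        """Split a uniformly sampled series into 'low', 'mid' and
        'high' frequency parts with a hard spectral mask (band edges
        f_lo, f_hi in cycles per unit of dt; illustrative values)."""
        x = np.asarray(x, float)
        freqs = np.fft.rfftfreq(len(x), dt)
        X = np.fft.rfft(x)
        bands = {}
        for name, mask in (("low", freqs < f_lo),
                           ("mid", (freqs >= f_lo) & (freqs < f_hi)),
                           ("high", freqs >= f_hi)):
            bands[name] = np.fft.irfft(X * mask, n=len(x))
        return bands
    ```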

  8. On the computation and updating of the modified Cholesky decomposition of a covariance matrix

    NASA Technical Reports Server (NTRS)

    Vanrooy, D. L.

    1976-01-01

    Methods are described for obtaining and updating the modified Cholesky decomposition (MCD) for the particular case of a covariance matrix when one is given only the original data. These methods are the standard method of forming the covariance matrix K and then solving for the MCD factors L and D (where K = LDL^T); a method based on Householder reflections; and lastly a method employing the composite-t algorithm. For many cases in the analysis of remotely sensed data, the composite-t method is the superior method despite being the slowest, since (1) the relative amount of time spent computing MCDs is often quite small, (2) its stability properties are the best of the three, and (3) it affords an efficient and numerically stable procedure for updating the MCD. The properties of these methods are discussed and FORTRAN programs implementing these algorithms are listed.
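
    A sketch of the 'standard method' in Python rather than FORTRAN: form the covariance matrix K from the data, then factor K = LDL^T with unit lower-triangular L and diagonal D:

    ```python
    import numpy as np

    def mcd_from_data(X):
        """Modified Cholesky decomposition of the covariance of a data
        matrix X (rows = observations): returns L (unit lower
        triangular) and d (diagonal of D) with K = L @ diag(d) @ L.T."""
        K = np.cov(X, rowvar=False)
        n = K.shape[0]
        L = np.eye(n)
        d = np.zeros(n)
        for j in range(n):
            d[j] = K[j, j] - np.sum(L[j, :j] ** 2 * d[:j])
            for i in range(j + 1, n):
                L[i, j] = (K[i, j] - np.sum(L[i, :j] * L[j, :j] * d[:j])) / d[j]
        return L, d
    ```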

  9. Efficient implementation of a 3-dimensional ADI method on the iPSC/860

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van der Wijngaart, R.F.

    1993-12-31

    A comparison is made between several domain decomposition strategies for the solution of three-dimensional partial differential equations on a MIMD distributed memory parallel computer. The grids used are structured, and the numerical algorithm is ADI. Important implementation issues regarding load balancing, storage requirements, network latency, and overlap of computations and communications are discussed. Results of the solution of the three-dimensional heat equation on the Intel iPSC/860 are presented for the three most viable methods. It is found that the Bruno-Cappello decomposition delivers optimal computational speed through an almost complete elimination of processor idle time, while providing good memory efficiency.

  10. Single Channel EEG Artifact Identification Using Two-Dimensional Multi-Resolution Analysis.

    PubMed

    Taherisadr, Mojtaba; Dehzangi, Omid; Parsaei, Hossein

    2017-12-13

    As a diagnostic monitoring approach, electroencephalogram (EEG) signals can be decoded by signal processing methodologies for various health monitoring purposes. However, EEG recordings are contaminated by other interferences, particularly facial and ocular artifacts generated by the user. This is specifically an issue during continuous EEG recording sessions, so identifying such artifacts among the useful EEG components is a key step in using EEG signals for either physiological monitoring and diagnosis or brain-computer interfaces. In this study, we design a new generic framework to process and characterize an EEG recording as a multi-component, non-stationary signal, with the aim of localizing and identifying its components (e.g., artifacts). The proposed method brings together three complementary algorithms to enhance the efficiency of the system: time-frequency (TF) analysis and representation, two-dimensional multi-resolution analysis (2D MRA), and feature extraction and classification. A combination of spectro-temporal and geometric features is extracted by combining key instantaneous TF space descriptors, which enables the system to characterize the non-stationarities in the EEG dynamics. We fit a curvelet transform (as an MRA method) to the 2D TF representation of EEG segments to decompose the given space into various levels of resolution. Such a decomposition efficiently improves the analysis of TF spaces with different characteristics (e.g., resolution). Our experimental results demonstrate that the combination of expansion to TF space, analysis using MRA, and extraction of a set of suitable features with a proper predictive model is effective in enhancing EEG artifact identification performance. We also compare the performance of the designed system with another common EEG signal processing technique, namely the 1D wavelet transform. Our experimental results reveal that the proposed method outperforms the 1D wavelet approach.
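
    The first stage of the pipeline, mapping a 1-D EEG segment to a 2-D time-frequency image, can be sketched with a short-time Fourier transform standing in for the paper's TF representation (sampling rate and window length below are illustrative):

    ```python
    import numpy as np
    from scipy.signal import stft

    def tf_image(eeg, fs=256.0, nperseg=128):
        """Spectrogram magnitude of a single-channel EEG segment; a
        2-D MRA such as the curvelet transform would then be applied
        to this image."""
        f, t, Z = stft(eeg, fs=fs, nperseg=nperseg)
        return np.abs(Z)
    ```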

  11. Domain decomposition and matching for time-domain analysis of motions of ships advancing in head sea

    NASA Astrophysics Data System (ADS)

    Tang, Kai; Zhu, Ren-chuan; Miao, Guo-ping; Fan, Ju

    2014-08-01

    A domain decomposition and matching method in the time domain is outlined for simulating the motions of ships advancing in waves. The flow field is decomposed into inner and outer domains by an imaginary control surface; the Rankine source method is applied in the inner domain while the transient Green function method is used in the outer domain. The two initial boundary value problems are matched on the control surface. The corresponding numerical codes are developed, and the added masses, wave exciting forces and motions of ships advancing in head sea are presented and verified for the Series 60 ship and the S175 containership. Good agreement is obtained when the numerical results are compared with experimental data and other references. The present method is more efficient because panel discretization is required only in the inner domain during the numerical calculation, and it shows good numerical stability, avoiding the divergence problems encountered for ships with flare.

  12. Applications and Comparisons of Four Time Series Models in Epidemiological Surveillance Data

    PubMed Central

    Young, Alistair A.; Li, Xiaosong

    2014-01-01

    Public health surveillance systems provide valuable data for reliable prediction of future epidemic events. This paper describes a study that used nine types of infectious disease data collected through a national public health surveillance system in mainland China to evaluate and compare the performances of four time series methods, namely two decomposition methods (regression and exponential smoothing), autoregressive integrated moving average (ARIMA) and support vector machine (SVM). The data obtained from 2005 to 2011 and in 2012 were used as modeling and forecasting samples, respectively. The performances were evaluated based on three metrics: mean absolute error (MAE), mean absolute percentage error (MAPE), and mean square error (MSE). The accuracy of the statistical models in forecasting future epidemic disease proved their effectiveness in epidemiological surveillance. Although the comparisons found that no single method is completely superior to the others, the present study indeed highlighted that the SVM outperforms the ARIMA model and the decomposition methods in most cases. PMID:24505382
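
    For reference, the three evaluation metrics computed as commonly defined (the MAPE form below assumes no zero observations):

    ```python
    import numpy as np

    def forecast_errors(y_true, y_pred):
        """Mean absolute error, mean absolute percentage error and
        mean square error of a forecast against observations."""
        y_true = np.asarray(y_true, float)
        y_pred = np.asarray(y_pred, float)
        err = y_true - y_pred
        mae = np.mean(np.abs(err))
        mape = np.mean(np.abs(err / y_true)) * 100.0
        mse = np.mean(err ** 2)
        return mae, mape, mse
    ```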

  13. Quantitative analysis of microbial biomass yield in aerobic bioreactor.

    PubMed

    Watanabe, Osamu; Isoda, Satoru

    2013-12-01

    We have studied the integrated model of reaction rate equations with thermal energy balance in an aerobic bioreactor for food waste decomposition, and showed that the integrated model has the capability both of monitoring microbial activity in real time and of analyzing biodegradation kinetics and thermal-hydrodynamic properties. Concerning microbial metabolism, it is known that balancing catabolic reactions with anabolic reactions in terms of energy and electron flow provides stoichiometric metabolic reactions and enables the estimation of microbial biomass yield (the stoichiometric reaction model). We have studied a method for estimating real-time microbial biomass yield in the bioreactor during food waste decomposition by combining the integrated model with the stoichiometric reaction model. As a result, it was found that the time course of microbial biomass yield in the bioreactor during decomposition can be evaluated by the combined model using the operational data of the bioreactor (weight of input food waste and bed temperature). The combined model can be applied to manage food waste decomposition, not only to keep microbial activity stable during system operation, but also to produce value-added products such as compost under optimum conditions. Copyright © 2013 The Research Centre for Eco-Environmental Sciences, Chinese Academy of Sciences. Published by Elsevier B.V. All rights reserved.

  14. Enhanced precipitation promotes decomposition and soil C stabilization in semiarid ecosystems, but seasonal timing of wetting matters

    USGS Publications Warehouse

    Campos, Xochi; Germino, Matthew; de Graaff, Marie-Anne

    2017-01-01

    Aims: Changing precipitation regimes in semiarid ecosystems will affect the balance of soil carbon (C) input and release, but the net effect on soil C storage is unclear. We asked how changes in the amount and timing of precipitation affect litter decomposition and soil C stabilization in semiarid ecosystems. Methods: The study took place at a long-term (18 years) ecohydrology experiment located in Idaho. Precipitation treatments consisted of a doubling of annual precipitation (+200 mm) added either in the cold-dormant season or in the growing season. Experimental plots were planted with big sagebrush (Artemisia tridentata) or with crested wheatgrass (Agropyron cristatum). We quantified decomposition of sagebrush leaf litter, and we assessed soil organic C (SOC) in aggregates and in silt and clay fractions. Results: We found that (1) increased precipitation applied in the growing season consistently enhanced decomposition rates relative to the ambient treatment, and (2) precipitation applied in the dormant season enhanced soil C stabilization. Conclusions: These data indicate that prolonged increases in precipitation can promote soil C storage in semiarid ecosystems, but only if these increases happen at times of the year when conditions allow precipitation to promote plant C input rates to soil.

  15. Canonical Sectors and Evolution of Firms in the US Stock Markets

    NASA Astrophysics Data System (ADS)

    Hayden, Lorien; Chachra, Ricky; Alemi, Alexander; Ginsparg, Paul; Sethna, James

    2015-03-01

    In this work, we show how unsupervised machine learning can provide a more objective and comprehensive broad-level sector decomposition of stocks. Classification of companies into sectors of the economy is important for macroeconomic analysis, and for investments into the sector-specific financial indices and exchange traded funds (ETFs). Historically, these major industrial classification systems and financial indices have been based on expert opinion and developed manually. Our method, in contrast, produces an emergent low-dimensional structure in the space of historical stock price returns. This emergent structure automatically identifies "canonical sectors" in the market, and assigns every stock a participation weight into these sectors. Furthermore, by analyzing data from different periods, we show how these weights for listed firms have evolved over time. This work was partially supported by NSF Grants DMR 1312160, OCI 0926550 and DGE-1144153 (LXH).

  16. Localised burst reconstruction from space-time PODs in a turbulent channel

    NASA Astrophysics Data System (ADS)

    Garcia-Gutierrez, Adrian; Jimenez, Javier

    2017-11-01

    The traditional proper orthogonal decomposition of the turbulent velocity fluctuations in a channel is extended to time under the assumption that the attractor is statistically stationary and can be treated as periodic for long enough times. The objective is to extract space- and time-localised eddies that optimally represent the kinetic energy (and two-event correlation) of the flow. Using time-resolved data of a small-box simulation at Re_τ = 1880, minimal for y/h ≲ 0.25, PODs are computed from the two-point spectral-density tensor Φ(k_x, k_z, y, y', ω). They are Fourier components in x, z and time, and depend on y and on the temporal frequency ω or, equivalently, on the convection velocity c = ω/k_x. Although the latter depends on y, a spatially and temporally localised 'burst' can be synthesised by adding a range of PODs with specific phases. The results are localised bursts that are amplified and tilted, in a time-periodic version of Orr-like behaviour. Funded by the ERC COTURB project.

  17. Validation of satellite-based rainfall in Kalahari

    NASA Astrophysics Data System (ADS)

    Lekula, Moiteela; Lubczynski, Maciek W.; Shemang, Elisha M.; Verhoef, Wouter

    2018-06-01

    Water resources management in arid and semi-arid areas is hampered by insufficient rainfall data, typically obtained from sparsely distributed rain gauges. Satellite-based rainfall estimates (SREs) are alternative sources of such data in these areas. In this study, daily rainfall estimates from FEWS-RFE∼11 km, TRMM-3B42∼27 km, CMORPH∼27 km and CMORPH∼8 km were evaluated against nine daily rain gauge records in the Central Kalahari Basin (CKB) over a five-year period, 01/01/2001-31/12/2005. The aims were to evaluate the daily rainfall detection capabilities of the four SRE algorithms, analyze the spatio-temporal variability of rainfall in the CKB, and perform bias correction of the four SREs. Evaluation methods included scatter plot analysis, descriptive statistics, categorical statistics and bias decomposition. The spatio-temporal variability of rainfall was assessed using the SREs' mean annual rainfall, standard deviation, coefficient of variation and spatial correlation functions. Bias correction of the four SREs was conducted using a Time-Varying Space-Fixed (TVSF) bias-correction scheme. The results underlined the importance of validating daily SREs, as they had different rainfall detection capabilities in the CKB. FEWS-RFE∼11 km performed best, providing better descriptive and categorical statistics than the other three SREs, although bias decomposition showed that all SREs underestimated rainfall. The analysis showed that the most reliable indicators of SRE performance were the frequency of "miss" rainfall events and the "miss-bias", as they directly indicate an SRE's sensitivity and bias of rainfall detection, respectively. The TVSF bias-correction scheme improved some error measures but reduced the spatial correlation distance, thus increasing the already high spatial rainfall variability of all four SREs. This study highlighted SREs as a valuable source of daily rainfall data with good spatio-temporal coverage, especially suitable for areas with limited rain gauges such as the CKB, but also emphasized the SREs' drawbacks, creating an avenue for follow-up research.
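
    The categorical statistics referred to above reduce to a 2x2 contingency table of daily rain/no-rain events; a sketch with an illustrative 0.1 mm/day threshold (the scores are the usual definitions and assume each event type actually occurs):

    ```python
    import numpy as np

    def categorical_stats(gauge, sre, threshold=0.1):
        """Hit/miss/false-alarm counts and derived scores for daily
        rainfall detection; 'miss' counts are the indicator singled
        out in the study."""
        obs = np.asarray(gauge) >= threshold
        est = np.asarray(sre) >= threshold
        hits = int(np.sum(obs & est))
        misses = int(np.sum(obs & ~est))
        false_alarms = int(np.sum(~obs & est))
        pod = hits / (hits + misses)                # probability of detection
        far = false_alarms / (hits + false_alarms)  # false alarm ratio
        bias = (hits + false_alarms) / (hits + misses)
        return pod, far, bias, misses
    ```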

  18. An hybrid neuro-wavelet approach for long-term prediction of solar wind

    NASA Astrophysics Data System (ADS)

    Napoli, Christian; Bonanno, Francesco; Capizzi, Giacomo

    2011-06-01

    Interest in space weather and solar wind forecasting is increasing, and it has become a problem of major relevance for the telecommunication industry, the military, and scientific research. Weather forecasting now extends to the space environment, where conditions can affect technological instrumentation. Interest has therefore arisen in the correct prediction of space events, such as ionized turbulence in the ionosphere or impacts from energetic particles in the Van Allen belts, and in the intensity and features of the solar wind and the magnetospheric response. The data prediction problem can be faced using hybrid computational methods such as wavelet decomposition and recurrent neural networks (RNNs). Wavelet analysis was used to reduce data redundancy, yielding a representation that expresses the intrinsic structure of the data. The main advantage of wavelets is their ability to pack the energy of a signal, and in turn the relevant information it carries, into a few significant uncoupled coefficients. Neural networks (NNs) are a promising technique to exploit the complexity of non-linear data correlation. To obtain a correct prediction of the solar wind, an RNN was designed starting from the data series. As reported in the literature, because of the temporal memory of the data, an Adaptive Amplitude Real-Time Recurrent Learning algorithm was used for a fully connected RNN with temporal delays. The inputs for the RNN were the coefficients of the biorthogonal wavelet decomposition of the solar wind velocity time series. The experimental data were collected during the NASA WIND mission, a spin-stabilized spacecraft launched in 1994 into a halo orbit around the L1 point. The data are provided by SWE, a subsystem of the main craft designed to measure the flux of thermal protons and positive ions.
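
    A minimal sketch of the wavelet front end, using PyWavelets with an illustrative biorthogonal family and decomposition depth (not necessarily the authors' choices) to build the coefficient vector fed to the RNN:

    ```python
    import numpy as np
    import pywt

    def wavelet_features(velocity, wavelet="bior3.5", level=4):
        """Biorthogonal wavelet decomposition of a solar-wind velocity
        series, concatenated into a single input vector for a network."""
        coeffs = pywt.wavedec(np.asarray(velocity, float), wavelet, level=level)
        return np.concatenate(coeffs)
    ```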

  19. Gibbsian Stationary Non-equilibrium States

    NASA Astrophysics Data System (ADS)

    De Carlo, Leonardo; Gabrielli, Davide

    2017-09-01

    We study the structure of stationary non-equilibrium states for interacting particle systems from a microscopic viewpoint. In particular we discuss two different discrete geometric constructions. We apply both of them to determine non-reversible transition rates corresponding to a fixed invariant measure. The first one uses the equivalence of this problem with the construction of divergence-free flows on the transition graph. Since divergence-free flows are characterized by cyclic decompositions, we can generate families of models from elementary cycles on the configuration space. The second construction is a functional discrete Hodge decomposition for translationally covariant discrete vector fields. According to this, for example, the instantaneous current of any interacting particle system on a finite torus can be canonically decomposed into a gradient part, a circulation term and a harmonic component. All three components are associated with functions on the configuration space. This decomposition is unique and constructive. The stationary condition can be interpreted as an orthogonality condition with respect to a harmonic discrete vector field, and we use this decomposition to construct models having a fixed invariant measure.

  20. Fourier imaging of non-linear structure formation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brandbyge, Jacob; Hannestad, Steen, E-mail: jacobb@phys.au.dk, E-mail: sth@phys.au.dk

    We perform a Fourier space decomposition of the dynamics of non-linear cosmological structure formation in ΛCDM models. From N-body simulations involving only cold dark matter we calculate 3-dimensional non-linear density, velocity divergence and vorticity Fourier realizations, and use these to calculate the fully non-linear mode coupling integrals in the corresponding fluid equations. Our approach allows for a reconstruction of the amount of mode coupling between any two wavenumbers as a function of redshift. With our Fourier decomposition method we identify the transfer of power from larger to smaller scales, the stable clustering regime, the scale where vorticity becomes important, and the suppression of the non-linear divergence power spectrum as compared to linear theory. Our results can be used to improve and calibrate semi-analytical structure formation models.

  1. Thermal engineering of FAPbI3 perovskite material via radiative thermal annealing and in situ XRD

    PubMed Central

    Pool, Vanessa L.; Dou, Benjia; Van Campen, Douglas G.; Klein-Stockert, Talysa R.; Barnes, Frank S.; Shaheen, Sean E.; Ahmad, Md I.; van Hest, Maikel F. A. M.; Toney, Michael F.

    2017-01-01

    Lead halide perovskites have emerged as successful optoelectronic materials with high photovoltaic power conversion efficiencies and low material cost. However, substantial challenges remain in the scalability, stability and fundamental understanding of the materials. Here we present the application of radiative thermal annealing, an easily scalable processing method for synthesizing formamidinium lead iodide (FAPbI3) perovskite solar absorbers. Devices fabricated from films formed via radiative thermal annealing have equivalent efficiencies to those annealed using a conventional hotplate. By coupling results from in situ X-ray diffraction using a radiative thermal annealing system with device performances, we mapped the processing phase space of FAPbI3 and corresponding device efficiencies. Our map of processing-structure-performance space suggests the commonly used FAPbI3 annealing time, 10 min at 170 °C, can be significantly reduced to 40 s at 170 °C without affecting the photovoltaic performance. The Johnson-Mehl-Avrami model was used to determine the activation energy for decomposition of FAPbI3 into PbI2. PMID:28094249

  2. Thermal engineering of FAPbI 3 perovskite material via radiative thermal annealing and in situ XRD

    DOE PAGES

    Pool, Vanessa L.; Dou, Benjia; Van Campen, Douglas G.; ...

    2017-01-17

    Lead halide perovskites have emerged as successful optoelectronic materials with high photovoltaic power conversion efficiencies and low material cost. However, substantial challenges remain in the scalability, stability and fundamental understanding of the materials. Here we present the application of radiative thermal annealing, an easily scalable processing method for synthesizing formamidinium lead iodide (FAPbI3) perovskite solar absorbers. Devices fabricated from films formed via radiative thermal annealing have equivalent efficiencies to those annealed using a conventional hotplate. By coupling results from in situ X-ray diffraction using a radiative thermal annealing system with device performances, we mapped the processing phase space of FAPbI3 and corresponding device efficiencies. Our map of processing-structure-performance space suggests the commonly used FAPbI3 annealing time, 10 min at 170 °C, can be significantly reduced to 40 s at 170 °C without affecting the photovoltaic performance. Lastly, the Johnson-Mehl-Avrami model was used to determine the activation energy for decomposition of FAPbI3 into PbI2.
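
    The Johnson-Mehl-Avrami analysis mentioned in both records can be sketched as a fit of the transformed fraction X(t) = 1 - exp(-(kt)^n) at each annealing temperature, followed by an Arrhenius slope over the fitted rate constants; all values here are illustrative, not the paper's:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def jma(t, k, n):
        """Johnson-Mehl-Avrami transformed fraction."""
        return 1.0 - np.exp(-(k * t) ** n)

    def activation_energy(temps_K, rate_constants):
        """Arrhenius estimate: the slope of ln(k) vs 1/T is -Ea/R."""
        R = 8.314  # J/(mol K)
        slope, _ = np.polyfit(1.0 / np.asarray(temps_K, float),
                              np.log(rate_constants), 1)
        return -slope * R

    # At one temperature: t in seconds, X = PbI2 phase fraction from XRD
    # (k_fit, n_fit), _ = curve_fit(jma, t, X, p0=[1e-3, 1.0])
    ```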

  3. Time-frequency analysis based on ensemble local mean decomposition and fast kurtogram for rotating machinery fault diagnosis

    NASA Astrophysics Data System (ADS)

    Wang, Lei; Liu, Zhiwen; Miao, Qiang; Zhang, Xin

    2018-03-01

    A time-frequency analysis method based on ensemble local mean decomposition (ELMD) and the fast kurtogram (FK) is proposed for rotating machinery fault diagnosis. Local mean decomposition (LMD), as an adaptive non-stationary and nonlinear signal processing method, can decompose a multicomponent modulated signal into a series of demodulated mono-components. However, mode mixing is a serious drawback. To alleviate this, ELMD, based on a noise-assisted approach, was developed. Still, environmental noise present in the raw signal remains in the product function (PF) containing the component of interest. FK performs well for impulse detection when strong environmental noise exists, but it is susceptible to non-Gaussian noise. The proposed method combines the merits of ELMD and FK to detect faults in rotating machinery. First, the raw signal is decomposed by ELMD into a set of PFs. Then, the PF that best characterizes the fault information is selected according to a kurtosis index. Finally, the selected PF is further filtered by an optimal band-pass filter based on FK to extract the impulse signal. Faults are identified by the appearance of fault characteristic frequencies in the squared envelope spectrum of the filtered signal. The advantages of ELMD over LMD and EEMD are illustrated in simulation analyses. Furthermore, the efficiency of the proposed method in fault diagnosis for rotating machinery is demonstrated through gearbox and rolling bearing case studies.
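
    The PF-selection step reduces to a kurtosis ranking; a sketch, assuming the PFs come from some existing ELMD implementation:

    ```python
    import numpy as np
    from scipy.stats import kurtosis

    def select_pf(pfs):
        """Return the index of the product function with the largest
        (Pearson) kurtosis, i.e. the one most likely to carry the
        fault impulses, plus all scores."""
        scores = [kurtosis(np.asarray(pf, float), fisher=False) for pf in pfs]
        return int(np.argmax(scores)), scores
    ```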

  4. Three-Component Decomposition of Polarimetric SAR Data Integrating Eigen-Decomposition Results

    NASA Astrophysics Data System (ADS)

    Lu, Da; He, Zhihua; Zhang, Huan

    2018-01-01

    This paper presents a novel three-component scattering power decomposition of polarimetric SAR data. The three-component decomposition method has two problems: overestimation of the volume scattering component in urban areas, and a parameter that is artificially fixed at a set value. Although volume scattering overestimation can be partly mitigated by a deorientation process, volume scattering still dominates some oriented urban areas. The speckle-like decomposition results introduced by the artificially fixed value also hinder further image interpretation. This paper integrates the results of eigen-decomposition to solve these problems. Two principal eigenvectors are used to substitute for the surface scattering model and the double-bounce scattering model. The decomposed scattering powers are obtained using a constrained linear least-squares method. The proposed method has been verified using an ESAR PolSAR image, and the results show that it performs better in urban areas.

  5. Recharge signal identification based on groundwater level observations.

    PubMed

    Yu, Hwa-Lung; Chu, Hone-Jay

    2012-10-01

    This study applied the rotated empirical orthogonal function method to directly decompose space-time groundwater level variations and determine potential recharge zones by investigating the correlation between the identified groundwater signals and observed local rainfall records. The approach is used to analyze the spatiotemporal process of piezometric heads estimated by the Bayesian maximum entropy method from monthly observations of 45 wells in 1999-2007 located in the Pingtung Plain of Taiwan. From the results, the primary potential recharge area is located at the proximal fan areas, where the recharge process accounts for 88% of the spatiotemporal variations of piezometric heads in the study area. The decomposition of groundwater levels associated with rainfall can provide information on the recharge process, since rainfall is an important contributor to groundwater recharge in semi-arid regions. Correlation analysis shows that the identified recharge closely tracks the temporal variation of local precipitation with a delay of 1-2 months in the study area.

  6. Temperature sensitivity of soil organic carbon decomposition increased with mean carbon residence time: Field incubation and data assimilation.

    PubMed

    Zhou, Xuhui; Xu, Xia; Zhou, Guiyao; Luo, Yiqi

    2018-02-01

    Temperature sensitivity of soil organic carbon (SOC) decomposition is one of the major uncertainties in predicting climate-carbon (C) cycle feedback. Results from previous studies are highly contradictory, with old soil C decomposition found to be more, similarly, or less sensitive to temperature than decomposition of young fractions. The contradictory results partly stem from difficulties in distinguishing old from young SOC and their changes over time in experiments with or without isotopic techniques. In this study, we conducted a long-term field incubation experiment with deep soil collars (PVC tubes, 0-70 cm deep, 10 cm in diameter) that exclude root C input, to examine the apparent temperature sensitivity of SOC decomposition under ambient and warming treatments from 2002 to 2008. The data from the experiment were infused into a multi-pool soil C model to estimate the intrinsic temperature sensitivity of SOC decomposition and the C residence times of three SOC fractions (i.e., active, slow, and passive) using a data assimilation (DA) technique. As active SOC with short C residence time was progressively depleted in the deep soil collars under both ambient and warming treatments, the residence times of the whole SOC became longer over time. Concomitantly, the estimated apparent and intrinsic temperature sensitivities of SOC decomposition also became gradually higher over time as more than 50% of the active SOC was depleted. Thus, the temperature sensitivity of soil C decomposition in deep soil collars was positively correlated with the mean C residence times. However, the regression slope of the temperature sensitivity against the residence time was lower under the warming treatment than under ambient temperature, indicating that other processes also regulate the temperature sensitivity of SOC decomposition. These results indicate that old SOC decomposition is more sensitive to temperature than young components, making the old C more vulnerable to a future warmer climate. © 2017 John Wiley & Sons Ltd.

  7. Time-resolved spectroscopic measurements of shock-wave induced decomposition in cyclotrimethylene trinitramine (RDX) crystals: anisotropic response.

    PubMed

    Dang, Nhan C; Dreger, Zbigniew A; Gupta, Yogendra M; Hooks, Daniel E

    2010-11-04

    Plate impact experiments on the (210), (100), and (111) planes were performed to examine the role of crystalline anisotropy on the shock-induced decomposition of cyclotrimethylenetrinitramine (RDX) crystals. Time-resolved emission spectroscopy was used to probe the decomposition of single crystals shocked to peak stresses ranging between 7 and 20 GPa. Emission produced by decomposition intermediates was analyzed in terms of induction time to emission, emission intensity, and the emission spectra shapes as a function of stress and time. Utilizing these features, we found that the shock-induced decomposition of RDX crystals exhibits considerable anisotropy. Crystals shocked on the (210) and (100) planes were more sensitive to decomposition than crystals shocked on the (111) plane. The possible sources of the observed anisotropy are discussed with regard to the inelastic deformation mechanisms of shocked RDX. Our results suggest that, despite the anisotropy observed for shock initiation, decomposition pathways for all three orientations are similar.

  8. Time-dependent quantum transport: An efficient method based on Liouville-von-Neumann equation for single-electron density matrix

    NASA Astrophysics Data System (ADS)

    Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua

    2012-07-01

    Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010), 10.1063/1.3475566], we develop an efficient and accurate numerical algorithm to solve the Liouville-von-Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each atom represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to conventional master equation approaches, our method is much more efficient, as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of the transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.

  9. A singular-value method for reconstruction of nonradial and lossy objects.

    PubMed

    Jiang, Wei; Astheimer, Jeffrey; Waag, Robert

    2012-03-01

    Efficient inverse scattering algorithms for nonradial lossy objects are presented using singular-value decomposition to form reduced-rank representations of the scattering operator. These algorithms extend eigenfunction methods that are not applicable to nonradial lossy scattering objects because the scattering operators for these objects do not have orthonormal eigenfunction decompositions. A method of local reconstruction by segregation of scattering contributions from different local regions is also presented. Scattering from each region is isolated by forming a reduced-rank representation of the scattering operator that has domain and range spaces comprised of far-field patterns with retransmitted fields that focus on the local region. Methods for the estimation of the boundary, average sound speed, and average attenuation slope of the scattering object are also given. These methods yielded approximations of scattering objects that were sufficiently accurate to allow residual variations to be reconstructed in a single iteration. Calculated scattering from a lossy elliptical object with a random background, internal features, and white noise is used to evaluate the proposed methods. Local reconstruction yielded images with spatial resolution that is finer than a half wavelength of the center frequency and reproduces sound speed and attenuation slope with relative root-mean-square errors of 1.09% and 11.45%, respectively.
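
    The reduced-rank representations at the core of these algorithms are truncated singular-value decompositions of the discretized scattering operator; a minimal sketch:

    ```python
    import numpy as np

    def reduced_rank(S, rank):
        """Best rank-`rank` approximation of a matrix S (here, a
        discretized scattering operator) by truncated SVD."""
        U, s, Vt = np.linalg.svd(S, full_matrices=False)
        return U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    ```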

  10. A practical material decomposition method for x-ray dual spectral computed tomography.

    PubMed

    Hu, Jingjing; Zhao, Xing

    2016-03-17

    X-ray dual spectral CT (DSCT) scans the measured object with two different x-ray spectra, and the acquired rawdata can be used to perform the material decomposition of the object. Direct calibration methods allow a faster material decomposition for DSCT and can be separated in two groups: image-based and rawdata-based. The image-based method is an approximative method, and beam hardening artifacts remain in the resulting material-selective images. The rawdata-based method generally obtains better image quality than the image-based method, but this method requires geometrically consistent rawdata. However, today's clinical dual energy CT scanners usually measure different rays for different energy spectra and acquire geometrically inconsistent rawdata sets, and thus cannot meet the requirement. This paper proposes a practical material decomposition method to perform rawdata-based material decomposition in the case of inconsistent measurement. This method first yields the desired consistent rawdata sets from the measured inconsistent rawdata sets, and then employs rawdata-based technique to perform material decomposition and reconstruct material-selective images. The proposed method was evaluated by use of simulated FORBILD thorax phantom rawdata and dental CT rawdata, and simulation results indicate that this method can produce highly quantitative DSCT images in the case of inconsistent DSCT measurements.
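
    For contrast with the rawdata-based approach proposed here, the image-based variant amounts to a per-pixel 2x2 linear solve; a sketch with an illustrative basis-material attenuation matrix (it ignores beam hardening, which is exactly the approximation criticized above):

    ```python
    import numpy as np

    def material_decompose(mu_low, mu_high, basis):
        """Per-pixel two-material decomposition: basis[e, m] is the
        attenuation of basis material m at energy e (0 = low, 1 = high).
        Returns the two coefficient images."""
        A = np.asarray(basis, float)
        pix = np.stack([mu_low.ravel(), mu_high.ravel()])  # 2 x Npix
        coeffs = np.linalg.solve(A, pix)
        return (coeffs[0].reshape(mu_low.shape),
                coeffs[1].reshape(mu_low.shape))
    ```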

  11. Optical characterization of shock-induced chemistry in the explosive nitromethane using DFT and time-dependent DFT

    NASA Astrophysics Data System (ADS)

    Pellouchoud, Lenson; Reed, Evan

    2014-03-01

    With continual improvements in ultrafast optical spectroscopy and new multi-scale methods for simulating chemistry over hundreds of picoseconds, it is becoming possible to connect experiments with simulations on the same timescale. We compute the optical properties of the liquid-phase energetic material nitromethane (CH3NO2) for the first 100 picoseconds behind the front of a simulated shock at 6.5 km/s, close to the experimentally observed detonation shock speed. We utilize molecular dynamics trajectories computed using the multi-scale shock technique (MSST) for time-resolved optical spectrum calculations based on both linear response time-dependent DFT (TDDFT) and the Kubo-Greenwood (KG) formula within Kohn-Sham DFT. We find that TDDFT predicts optical conductivities 25-35% lower than KG-based values and provides better agreement with the experimentally measured index of refraction of unreacted nitromethane. We investigate the influence of electronic temperature on the KG spectra and find no significant effect at optical wavelengths. With all methods, the spectra evolve non-monotonically in time as shock-induced chemistry takes place. We attribute the time-resolved absorption at optical wavelengths to time-dependent populations of molecular decomposition products, including NO, CNO, CNOH, H2O, and larger molecules. Supported by NASA Space Technology Research Fellowship (NSTRF) #NNX12AM48H.

  12. Data-driven Climate Modeling and Prediction

    NASA Astrophysics Data System (ADS)

    Kondrashov, D. A.; Chekroun, M.

    2016-12-01

    Global climate models aim to simulate a broad range of spatio-temporal scales of climate variability, with a state vector having many millions of degrees of freedom. On the other hand, while detailed weather prediction out to a few days requires high numerical resolution, it is fairly clear that a major fraction of large-scale climate variability can be predicted in a much lower-dimensional phase space. Low-dimensional models can simulate and predict this fraction of climate variability, provided they are able to account for linear and nonlinear interactions between the modes representing the large scales of climate dynamics, as well as their interactions with a much larger number of modes representing fast and small scales. This presentation will highlight several new applications of the Multilayered Stochastic Modeling (MSM) framework [Kondrashov, Chekroun and Ghil, 2015], which has abundantly proven its efficiency in the modeling and real-time forecasting of various climate phenomena. MSM is a data-driven inverse modeling technique that aims to obtain a low-order nonlinear system of prognostic equations driven by stochastic forcing, and estimates both the dynamical operator and the properties of the driving noise from multivariate time series of observations or a high-end model's simulation. MSM leads to a system of stochastic differential equations (SDEs) involving hidden (auxiliary) variables of fast-small scales ranked by layers, which interact with the macroscopic (observed) variables of large-slow scales to model the dynamics of the latter, and thus convey memory effects. New MSM climate applications focus on the development of computationally efficient low-order models using data-adaptive decomposition methods that convey memory effects by time-embedding techniques, such as Multichannel Singular Spectrum Analysis (M-SSA) [Ghil et al. 2002] and the recently developed Data-Adaptive Harmonic (DAH) decomposition method [Chekroun and Kondrashov, 2016]. In particular, new results on DAH-MSM modeling and prediction of Arctic sea ice, as well as decadal predictions of near-surface Earth temperatures, will be presented.

  13. Local Descriptors of Dynamic and Nondynamic Correlation.

    PubMed

    Ramos-Cordoba, Eloy; Matito, Eduard

    2017-06-13

    Quantitatively accurate electronic structure calculations rely on the proper description of electron correlation. A judicious choice of the approximate quantum chemistry method depends upon the importance of dynamic and nondynamic correlation, which is usually assessed by scalar measures. Existing measures of electron correlation do not consider separately the regions of the Cartesian space where dynamic or nondynamic correlation are most important. We introduce real-space descriptors of dynamic and nondynamic electron correlation that admit orbital decomposition. Integration of the local descriptors yields global numbers that can be used to quantify dynamic and nondynamic correlation. Illustrative examples over different chemical systems with varying electron correlation regimes are used to demonstrate the capabilities of the local descriptors. Since the expressions only require orbitals and occupation numbers, they can be readily applied in the context of local correlation methods, hybrid methods, density matrix functional theory, and fractional-occupancy density functional theory.

  14. Turbulent fluid motion IV-averages, Reynolds decomposition, and the closure problem

    NASA Technical Reports Server (NTRS)

    Deissler, Robert G.

    1992-01-01

    Ensemble, time, and space averages as applied to turbulent quantities are discussed, and pertinent properties of the averages are obtained. Those properties, together with Reynolds decomposition, are used to derive the averaged equations of motion and the one- and two-point moment or correlation equations. The terms in the various equations are interpreted. The closure problem of the averaged equations is discussed, and possible closure schemes are considered. Those schemes usually require an input of supplemental information unless the averaged equations are closed by calculating their terms by a numerical solution of the original unaveraged equations. The law of the wall for velocities and temperatures, the velocity- and temperature-defect laws, and the logarithmic laws for velocities and temperatures are derived. Various notions of randomness and their relation to turbulence are considered in light of ergodic theory.

  15. Investigations on the hierarchy of reference frames in geodesy and geodynamics

    NASA Technical Reports Server (NTRS)

    Grafarend, E. W.; Mueller, I. I.; Papo, H. B.; Richter, B.

    1979-01-01

    Problems related to reference directions were investigated. Space and time variant angular parameters are illustrated in hierarchic structures or towers. Using least squares techniques, model towers of triads are presented which allow the formation of linear observation equations. Translational and rotational degrees of freedom (origin and orientation) are discussed, along with the notion of length and scale degrees of freedom. According to the notion of scale parallelism, scale factors with respect to a unit length are given. Three-dimensional geodesy was constructed from the set of three base vectors (gravity, earth rotation and the ecliptic normal vector). Space and time variations are given with respect to a polar and singular value decomposition, or in terms of changes in translation, rotation, and deformation (shear, dilatation or angular and scale distortions).

  16. Separability of spatiotemporal spectra of image sequences. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Eckert, Michael P.; Buchsbaum, Gershon; Watson, Andrew B.

    1992-01-01

    The spatiotemporal power spectrum of 14 image sequences was calculated in order to determine the degree to which the spectra are separable in space and time, and to assess the validity of the commonly used exponential correlation model found in the literature. The spectrum was expanded by a Singular Value Decomposition into a sum of separable terms, and an index of spatiotemporal separability was defined as the fraction of the signal energy that can be represented by the first (largest) separable term. All spectra were found to be highly separable, with an index of separability above 0.98. The power spectra of the sequences were well fit by a separable model. The power spectrum model corresponds to a product of exponential autocorrelation functions separable in space and time.
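
    The separability index described here is directly computable from the singular values of the 2-D spectrum:

    ```python
    import numpy as np

    def separability_index(P):
        """Fraction of the energy of a 2-D spatiotemporal power
        spectrum P captured by its first (largest) separable term."""
        s = np.linalg.svd(np.asarray(P, float), compute_uv=False)
        return s[0] ** 2 / np.sum(s ** 2)
    ```

    On the sequences studied, this index exceeded 0.98.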

  17. Time-frequency analysis : mathematical analysis of the empirical mode decomposition.

    DOT National Transportation Integrated Search

    2009-01-01

    Invented over 10 years ago, empirical mode decomposition (EMD) provides a nonlinear time-frequency analysis with the ability to successfully analyze nonstationary signals. Mathematical Analysis of the Empirical Mode Decomposition is a...

  18. Initial mechanisms for the unimolecular decomposition of electronically excited bisfuroxan based energetic materials.

    PubMed

    Yuan, Bing; Bernstein, Elliot R

    2017-01-07

    Unimolecular decomposition of the energetic molecules 3,3'-diamino-4,4'-bisfuroxan (labeled A) and 4,4'-diamino-3,3'-bisfuroxan (labeled B) has been explored via 226/236 nm single photon laser excitation/decomposition. These two energetic molecules, subsequent to UV excitation, create NO as an initial decomposition product at the nanosecond excitation energies (5.0-5.5 eV) with warm vibrational temperature (1170 ± 50 K for A, 1400 ± 50 K for B) and cold rotational temperature (<55 K). Initial decomposition mechanisms for these two electronically excited, isolated molecules are explored at the complete active space self-consistent field (CASSCF(12,12)/6-31G(d)) level with and without MP2 correction. Potential energy surface calculations illustrate that conical intersections play an essential role in the calculated decomposition mechanisms. Based on experimental observations and theoretical calculations, NO product is released through opening of the furoxan ring: ring opening can occur on either the S1 excited or the S0 ground electronic state. The reaction path with the lowest energetic barrier is that for which the furoxan ring opens on the S1 state via breaking of the N1-O1 bond. Subsequently, the molecule moves to the ground S0 state through related ring-opening conical intersections, and an NO product is formed on the ground state surface with little rotational excitation at the last NO dissociation step. For the ground state ring opening decomposition mechanism, the N-O bond and C-N bond break together in order to generate dissociated NO. With the MP2 correction for the CASSCF(12,12) surface, the potential energies of molecules with dissociated NO product are in the range from 2.04 to 3.14 eV, close to the theoretical results from the density functional theory (B3LYP) and MP2 methods. The CASMP2(12,12) corrected approach is essential in order to obtain a reasonable potential energy surface that corresponds to the observed decomposition behavior of these molecules. Apparently, highly excited states are essential for an accurate representation of the kinetics and dynamics of excited state decomposition of both of these bisfuroxan energetic molecules. The experimental vibrational temperatures of the NO products of A and B are about 800-1000 K lower than those of previously studied energetic molecules with NO as a decomposition product.

  19. Adaptive wavelet collocation methods for initial value boundary problems of nonlinear PDE's

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Wang, Jian-Zhong

    1993-01-01

    We have designed a cubic spline wavelet decomposition for the Sobolev space H^2_0(I), where I is a bounded interval. Based on a special 'point-wise orthogonality' of the wavelet basis functions, a fast Discrete Wavelet Transform (DWT) is constructed. This DWT maps discrete samples of a function to its wavelet expansion coefficients in O(N log N) operations. Using this transform, we propose a collocation method for the initial value boundary problem of nonlinear PDE's. Then, we test the efficiency of the DWT and apply the collocation method to solve linear and nonlinear PDE's.

  20. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach and paper, the theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach's capabilities by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. Finally, these two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  1. Scalable and fast heterogeneous molecular simulation with predictive parallelization schemes

    DOE PAGES

    Guzman, Horacio V.; Junghans, Christoph; Kremer, Kurt; ...

    2017-11-27

    Multiscale and inhomogeneous molecular systems are challenging topics in the field of molecular simulation. In particular, modeling biological systems in the context of multiscale simulations and exploring material properties are driving a permanent development of new simulation methods and optimization algorithms. In computational terms, those methods require parallelization schemes that make productive use of computational resources for each simulation and from its genesis. Here, we introduce the heterogeneous domain decomposition approach, which is a combination of a heterogeneity-sensitive spatial domain decomposition with an a priori rearrangement of subdomain walls. Within this approach and paper, the theoretical modeling and scaling laws for the force computation time are proposed and studied as a function of the number of particles and the spatial resolution ratio. We also show the new approach's capabilities by comparing it to both static domain decomposition algorithms and dynamic load-balancing schemes. Specifically, two representative molecular systems have been simulated and compared to the heterogeneous domain decomposition proposed in this work. Finally, these two systems comprise an adaptive resolution simulation of a biomolecule solvated in water and a phase-separated binary Lennard-Jones fluid.

  2. Variational methods for direct/inverse problems of atmospheric dynamics and chemistry

    NASA Astrophysics Data System (ADS)

    Penenko, Vladimir; Penenko, Alexey; Tsvetova, Elena

    2013-04-01

    We present a variational approach for solving direct and inverse problems of atmospheric hydrodynamics and chemistry. It is important that accurate matching of numerical schemes be provided along the chain of objects: direct/adjoint problems - sensitivity relations - inverse problems, including assimilation of all available measurement data. To solve these problems we have developed a new enhanced set of cost-effective algorithms. The matched description of the multi-scale processes is provided by a specific choice of the variational principle functionals for the whole set of integrated models. All functionals of the variational principle are then approximated in space and time by splitting and decomposition methods. This approach allows us, for example, to treat the space-time problems of atmospheric chemistry separately, within decomposition schemes for the integral identity sum analogs of the variational principle, at each time step and in each 3D finite volume. To enhance efficiency, the set of chemical reactions is divided into subsets related to the operators of production and destruction. The idea of Euler's integrating factors is then applied within the local adjoint problem technique [1]-[3]. The analytical solutions of such adjoint problems play the role of integrating factors for the differential equations describing atmospheric chemistry. With their help, the system of differential equations is transformed into an equivalent system of integral equations. As a result, we avoid the construction and inversion of preconditioning operators containing the Jacobian matrices that arise in traditional implicit schemes for ODE solution; this is the main advantage of our schemes. At the same time step, but at different stages of the "global" splitting scheme, the system of atmospheric dynamic equations is solved. For the convection-diffusion equations for all state functions in the integrated models, we have developed monotone and stable discrete-analytical numerical schemes [1]-[3] that conserve the positivity of chemical substance concentrations and possess the energy- and mass-balance properties postulated in the general variational principle for integrated models. All algorithms for the solution of transport, diffusion, and transformation problems are direct (without iterations). The work is partially supported by Program No. 4 of the Presidium of RAS and Program No. 3 of the Mathematical Department of RAS, by RFBR project 11-01-00187, and by Integration Projects No. 8 and 35 of SD RAS. Our studies are in line with the goals of COST Action ES1004. References: [1] Penenko V., Tsvetova E. Discrete-analytical methods for the implementation of variational principles in environmental applications // Journal of Computational and Applied Mathematics, 2009, v. 226, 319-330. [2] Penenko A.V. Discrete-analytic schemes for solving an inverse coefficient heat conduction problem in a layered medium with gradient methods // Numerical Analysis and Applications, 2012, v. 5, pp. 326-341. [3] Penenko V., Tsvetova E. Variational methods for constructing monotone approximations for atmospheric chemistry models // Numerical Analysis and Applications, 2013 (in press).
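
    To make the integrating-factor idea concrete, here is a hedged one-species sketch: for production P and destruction L*c, the local problem dc/dt = P - L*c has an exact solution over each step, so no Jacobian construction or inversion is needed and positivity is preserved. The values of P, L, and dt are made up for illustration.

      import math

      def integrating_factor_step(c, P, L, dt):
          """Advance dc/dt = P - L*c exactly over one step of length dt."""
          e = math.exp(-L * dt)
          return c * e + (P / L) * (1.0 - e)

      c = 1.0
      for _ in range(10):
          c = integrating_factor_step(c, P=2.0, L=0.5, dt=0.1)
      print(c)   # stays positive for any dt, unlike explicit Euler at large dt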

  3. Full waveform time domain solutions for source and induced magnetotelluric and controlled-source electromagnetic fields using quasi-equivalent time domain decomposition and GPU parallelization

    NASA Astrophysics Data System (ADS)

    Imamura, N.; Schultz, A.

    2015-12-01

    Recently, a full waveform time domain solution has been developed for the magnetotelluric (MT) and controlled-source electromagnetic (CSEM) methods. The ultimate goal of this approach is to obtain a computationally tractable direct waveform joint inversion for source fields and earth conductivity structure in three and four dimensions. This is desirable on several grounds, including the improved spatial resolving power expected from the use of a multitude of source illuminations of non-zero wavenumber, the ability to operate in areas of high source-signal spatial complexity and non-stationarity, etc. This goal would not be attainable if one were to adopt the finite difference time-domain (FDTD) approach for the forward problem. This is particularly true for MT surveys, since an enormous number of degrees of freedom are required to represent the observed MT waveforms across the large frequency bandwidth: the time step of an FDTD simulation must be fine enough to represent the highest frequency, while the total number of time steps must span the lowest frequency. This leads to a linear system that is computationally burdensome to solve. We have implemented a code that addresses this situation through the use of a fictitious wave domain method and GPUs to speed up the computation. We also substantially reduce the size of the linear systems by applying concepts from successive cascade decimation, through quasi-equivalent time domain decomposition. By combining these refinements, we have made good progress toward implementing the core of a full waveform joint source field/earth conductivity inverse modeling method. We found that even a previous-generation GPU speeds computations by an order of magnitude over a parallel CPU-only approach. In part, this arises from the use of the quasi-equivalent time domain decomposition, which shrinks the size of the linear system dramatically.

  4. Nonlinear mode decomposition: A noise-robust, adaptive decomposition method

    NASA Astrophysics Data System (ADS)

    Iatsenko, Dmytro; McClintock, Peter V. E.; Stefanovska, Aneta

    2015-09-01

    The signals emanating from complex systems are usually composed of a mixture of different oscillations which, for a reliable analysis, should be separated from each other and from the inevitable background of noise. Here we introduce an adaptive decomposition tool, nonlinear mode decomposition (NMD), which decomposes a given signal into a set of physically meaningful oscillations for any waveform, simultaneously removing the noise. NMD is based on the powerful combination of time-frequency analysis techniques, which, together with the adaptive choice of their parameters, make it extremely noise-robust, and surrogate data tests used to identify interdependent oscillations and to distinguish deterministic from random activity. We illustrate the application of NMD to both simulated and real signals and demonstrate its qualitative and quantitative superiority over other approaches, such as (ensemble) empirical mode decomposition, Karhunen-Loève expansion, and independent component analysis. We point out that NMD is likely to be applicable and useful in many different areas of research, such as geophysics, finance, and the life sciences. The necessary MATLAB codes for running NMD are freely available for download.
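
    One ingredient of such surrogate tests can be sketched generically: Fourier-transform surrogates preserve a signal's power spectrum while randomizing its phases, so any genuinely deterministic structure is destroyed for comparison against the original. This is a standard construction, not NMD's specific test.

      import numpy as np

      rng = np.random.default_rng(0)

      def ft_surrogate(x):
          X = np.fft.rfft(x)
          phases = rng.uniform(0, 2 * np.pi, len(X))
          phases[0] = 0.0                   # keep the DC component real
          phases[-1] = 0.0                  # keep the Nyquist component real
          return np.fft.irfft(np.abs(X) * np.exp(1j * phases), n=len(x))

      t = np.linspace(0, 10, 2000)
      signal = np.sin(2 * np.pi * 1.3 * t) + 0.3 * rng.standard_normal(len(t))
      surr = ft_surrogate(signal)           # same spectrum, randomized phases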

  5. Agent-Based Modeling of China's Rural-Urban Migration and Social Network Structure.

    PubMed

    Fu, Zhaohao; Hao, Lingxin

    2018-01-15

    We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census microdata are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey, and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate, and these migratory acts change the social network by turning within-nonmigrant connections into between-migrant-nonmigrant connections, turning local connections into nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates the migratory propensities of well-connected nonmigrants, who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes onto the migration acceleration patterns. We conclude that network structural changes are essential for explaining the migration acceleration observed in China during the 1995-2000 period.

  6. Agent-based modeling of China's rural-urban migration and social network structure

    NASA Astrophysics Data System (ADS)

    Fu, Zhaohao; Hao, Lingxin

    2018-01-01

    We analyze China's rural-urban migration and endogenous social network structures using agent-based modeling. The agents from census microdata are located in their rural origin with an empirically estimated prior propensity to move. The population-scale social network is a hybrid one, combining observed family ties and locations of the origin with a parameter space calibrated from census, survey, and aggregate data and sampled using a stepwise Latin Hypercube Sampling method. At monthly intervals, some agents migrate, and these migratory acts change the social network by turning within-nonmigrant connections into between-migrant-nonmigrant connections, turning local connections into nonlocal connections, and adding among-migrant connections. In turn, the changing social network structure updates the migratory propensities of well-connected nonmigrants, who become more likely to move. These two processes iterate over time. Using a core-periphery method developed from the k-core decomposition method, we identify and quantify the network structural changes and map these changes onto the migration acceleration patterns. We conclude that network structural changes are essential for explaining the migration acceleration observed in China during the 1995-2000 period.
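
    The k-core building block can be sketched with networkx as a generic substitute for the authors' core-periphery method; the graph below is a stock example, not the migration network.

      import networkx as nx

      G = nx.karate_club_graph()
      core = nx.core_number(G)                  # node -> largest k-core it survives in
      kmax = max(core.values())
      core_nodes = [n for n, k in core.items() if k == kmax]
      print(kmax, core_nodes)                   # innermost "core" of the network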

  7. On-line range images registration with GPGPU

    NASA Astrophysics Data System (ADS)

    Będkowski, J.; Naruniec, J.

    2013-03-01

    This paper concerns the implementation of algorithms for two important aspects of modern 3D data processing: data registration and segmentation. The solution proposed for the first topic is based on 3D space decomposition, while that for the latter is based on image processing and local neighbourhood search. Data processing is implemented using NVIDIA compute unified device architecture (NVIDIA CUDA) parallel computation. The result of the segmentation is a coloured map where different colours correspond to different objects, such as walls, floor, and stairs. The research is related to the problem of collecting 3D data with an RGB-D camera mounted on a rotated head, to be used in mobile robot applications. The data registration algorithm is designed for on-line processing. The iterative closest point (ICP) approach is chosen as the registration method. Computations are based on a parallel fast nearest-neighbour search. This procedure decomposes 3D space into cubic buckets and, therefore, the time of the matching is deterministic. The first technique of data segmentation uses accelerometers integrated with the RGB-D sensor to obtain rotation compensation, and an image processing method for defining prerequisites of the known categories. The second technique uses the adapted nearest-neighbour search procedure for obtaining normal vectors for each range point.
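
    The bucket decomposition can be sketched as follows: points are hashed into cubic cells of side r, so a nearest-neighbour query only inspects the 27 adjacent cells, which is what makes the matching time effectively deterministic. Cell size and data are illustrative; a full implementation would widen the search when the neighbourhood is empty.

      import numpy as np
      from collections import defaultdict
      from itertools import product

      def build_buckets(points, r):
          buckets = defaultdict(list)
          for i, p in enumerate(points):
              buckets[tuple((p // r).astype(int))].append(i)
          return buckets

      def nearest(q, points, buckets, r):
          cell = tuple((q // r).astype(int))
          best, best_d = -1, np.inf          # -1 if the 27 cells are all empty
          for off in product((-1, 0, 1), repeat=3):
              key = tuple(c + o for c, o in zip(cell, off))
              for i in buckets.get(key, ()):
                  d = np.linalg.norm(points[i] - q)
                  if d < best_d:
                      best, best_d = i, d
          return best, best_d

      pts = np.random.default_rng(1).random((10000, 3))
      buckets = build_buckets(pts, r=0.05)
      print(nearest(np.array([0.5, 0.5, 0.5]), pts, buckets, 0.05))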

  8. GPR random noise reduction using BPD and EMD

    NASA Astrophysics Data System (ADS)

    Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz

    2018-04-01

    Ground-penetrating radar (GPR) is a high-frequency technology that accurately explores near-surface objects and structures. The high-frequency antenna of the GPR system makes it a high-resolution method compared to other geophysical methods. The frequency range of recorded GPR data is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources, and its correlation with adjacent traces is nearly zero. This characteristic of random noise, together with the high resolution of the GPR system, makes denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise using basis pursuit denoising (BPD) combined with empirical mode decomposition. Our results on both synthetic and real examples show that empirical mode decomposition in combination with BPD provides satisfactory outputs, owing to the sifting process, compared to a time-domain implementation of the BPD method alone. Because of the high computational cost, however, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.
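
    The BPD ingredient can be sketched generically via iterative soft thresholding (ISTA) on the Lagrangian form min 0.5*||Ax - y||^2 + lam*||x||_1; the dictionary A, lam, and iteration count are illustrative, not the paper's configuration.

      import numpy as np

      def ista(A, y, lam, n_iter=200):
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              g = x - (A.T @ (A @ x - y)) / L  # gradient step on the data term
              x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((64, 128))
      x_true = np.zeros(128); x_true[[5, 40, 90]] = [1.0, -2.0, 1.5]
      y = A @ x_true + 0.05 * rng.standard_normal(64)
      print(np.flatnonzero(np.abs(ista(A, y, lam=0.1)) > 0.2))  # recovers the support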

  9. Reactivity continuum modeling of leaf, root, and wood decomposition across biomes

    NASA Astrophysics Data System (ADS)

    Koehler, Birgit; Tranvik, Lars J.

    2015-07-01

    Large amounts of carbon dioxide are released to the atmosphere during organic matter decomposition, yet the large-scale and long-term regulation of this critical process in global carbon cycling by litter chemistry and climate remains poorly understood. We used reactivity continuum (RC) modeling to analyze the decadal data set of the "Long-term Intersite Decomposition Experiment," in which fine litter and wood decomposition was studied in eight biome types (224 time series). At 32% and 46% of all sites, the litter content of the acid-unhydrolyzable residue (AUR, formerly referred to as lignin) and the AUR/nitrogen ratio, respectively, retarded initial decomposition rates. This initial rate-retarding effect generally disappeared within the first year of decomposition, and rate-stimulating effects of nutrients and a rate-retarding effect of the carbon/nitrogen ratio became more prevalent. For needles and leaves/grasses, the influence of climate on decomposition decreased over time. For fine roots, the climatic influence was initially smaller but increased toward later-stage decomposition. The climate decomposition index was the strongest climatic predictor of decomposition. The similar variability in initial decomposition rates across litter categories as across biome types suggested that future changes in decomposition may be dominated by warming-induced changes in plant community composition. In general, the RC model parameters successfully predicted independent decomposition data for the different litter-biome combinations (196 time series). We argue that parameterization of large-scale decomposition models with RC model parameters, as opposed to the currently common discrete multiexponential models, could significantly improve their mechanistic foundation and predictive accuracy across climate zones and litter categories.
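
    For orientation, the gamma-based RC model commonly used in such analyses (assumed here to be the variant applied, following Boudreau and Ruddick) gives a closed form for the mass remaining: with initial reactivities gamma-distributed with shape alpha and scale 1/beta, m(t)/m0 = (beta/(beta + t))^alpha. Parameter values below are illustrative only.

      import numpy as np

      def rc_mass_remaining(t, alpha, beta):
          """Gamma-distributed reactivity continuum: fraction of mass remaining."""
          return (beta / (beta + t)) ** alpha

      t = np.linspace(0, 3650, 100)                 # ten years, in days
      m = rc_mass_remaining(t, alpha=0.2, beta=100.0)
      # The apparent first-order rate declines as labile material is depleted:
      k_apparent = alpha / (beta + t)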

  10. Oxygen Mass Flow Rate Generated for Monitoring Hydrogen Peroxide Stability

    NASA Technical Reports Server (NTRS)

    Ross, H. Richard

    2002-01-01

    Recent interest in propellants with non-toxic reaction products has led to a resurgence of interest in hydrogen peroxide for various propellant applications. Because peroxide is sensitive to contaminants, material interactions, and stability and storage issues, monitoring decomposition rates is important. Stennis Space Center (SSC) uses thermocouples to monitor bulk fluid temperature (heat evolution) to determine reaction rates. Unfortunately, large temperature rises are required to offset the heat lost to the surrounding fluid. Also, the tank penetration needed to accommodate a thermocouple can entail modification of a tank or line and act as a source of contamination. This paper evaluates a method for monitoring oxygen evolution as a means to determine peroxide stability. Oxygen generation is not only directly related to peroxide decomposition, but occurs immediately. Measuring peroxide temperature to monitor peroxide stability has significant limitations: a bulk decomposition rate of 1%/week in a large-volume tank can produce oxygen in excess of 30 cc/min, yet this flow rate corresponds to an equivalent temperature rise of only approximately 14 millidegrees C, which is difficult to measure reliably, and once heat transfer to the surroundings is accounted for, there is effectively no temperature rise to detect. Temperature changes from the surrounding environment and heat lost to the peroxide will also mask potential problems. The use of oxygen flow measurements provides an ultrasensitive technique for monitoring reaction events and will provide an earlier indication of an abnormal decomposition when compared to measuring temperature rise.
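
    A back-of-envelope check of the quoted figure, assuming a tank inventory of roughly 100 kg of H2O2 (the abstract does not give the tank size), using the stoichiometry 2 H2O2 -> 2 H2O + O2:

      mass_h2o2 = 100e3              # g, assumed inventory (hypothetical)
      frac_per_week = 0.01           # 1 %/week bulk decomposition
      mol_h2o2 = mass_h2o2 * frac_per_week / 34.0    # mol H2O2 lost per week
      mol_o2 = mol_h2o2 / 2.0                        # stoichiometric O2 yield
      cc_per_min = mol_o2 * 22400.0 / (7 * 24 * 60)  # ideal-gas cc at STP per minute
      print(round(cc_per_min, 1))    # ~33 cc/min, consistent with ">30 cc/min"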

  11. Accuracy assessment of a surface electromyogram decomposition system in human first dorsal interosseus muscle

    NASA Astrophysics Data System (ADS)

    Hu, Xiaogang; Rymer, William Z.; Suresh, Nina L.

    2014-04-01

    Objective. The aim of this study is to assess the accuracy of a surface electromyogram (sEMG) motor unit (MU) decomposition algorithm during low levels of muscle contraction. Approach. A two-source method was used to verify the accuracy of the sEMG decomposition system, by utilizing simultaneous intramuscular and surface EMG recordings from the human first dorsal interosseous muscle recorded during isometric trapezoidal force contractions. Spike trains from each recording type were decomposed independently utilizing two different algorithms, EMGlab and dEMG decomposition algorithms. The degree of agreement of the decomposed spike timings was assessed for three different segments of the EMG signals, corresponding to specified regions in the force task. A regression analysis was performed to examine whether certain properties of the sEMG and force signal can predict the decomposition accuracy. Main results. The average accuracy of successful decomposition among the 119 MUs that were common to both intramuscular and surface records was approximately 95%, and the accuracy was comparable between the different segments of the sEMG signals (i.e., force ramp-up versus steady state force versus combined). The regression function between the accuracy and properties of sEMG and force signals revealed that the signal-to-noise ratio of the action potential and stability in the action potential records were significant predictors of the surface decomposition accuracy. Significance. The outcomes of our study confirm the accuracy of the sEMG decomposition algorithm during low muscle contraction levels and provide confidence in the overall validity of the surface dEMG decomposition algorithm.
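
    The two-source agreement computation can be sketched generically: count decomposed surface spikes that match intramuscular spikes within a timing tolerance. The tolerance and spike trains below are invented for illustration, not the study's data.

      import numpy as np

      def agreement(t_surface, t_intramuscular, tol=0.002):
          """Percent of spikes matched between two decompositions (times in s)."""
          matched = sum(np.min(np.abs(t_intramuscular - t)) <= tol
                        for t in t_surface)
          return 100.0 * matched / max(len(t_surface), len(t_intramuscular))

      t_im = np.arange(0.0, 5.0, 0.08)        # reference intramuscular train
      t_s = t_im + 0.0005 * np.random.default_rng(0).standard_normal(t_im.size)
      print(round(agreement(t_s, t_im), 1))   # ~100 % for well-aligned trains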

  12. Adjustable vector Airy light-sheet single optical tweezers: negative radiation forces on a subwavelength spheroid and spin torque reversal

    NASA Astrophysics Data System (ADS)

    Mitri, Farid G.

    2018-01-01

    Generalized solutions of vector Airy light-sheets, adjustable by their derivative order m, are introduced stemming from the Lorenz gauge condition and Maxwell's equations using the angular spectrum decomposition method. The Cartesian components of the incident radiated electric, magnetic, and time-averaged Poynting vector fields in free space (excluding evanescent waves) are determined and computed, with particular emphasis on the derivative order of the Airy light-sheet and the polarization of the magnetic vector potential forming the beam. Negative transverse time-averaged Poynting vector components can arise, while the longitudinal counterparts are always positive. Moreover, the analysis is extended to compute the optical radiation force and spin torque vector components on a lossless dielectric prolate subwavelength spheroid in the framework of the electric dipole approximation. The results show that negative forces and spin-torque sign reversals arise depending on the derivative order of the beam, the polarization of the magnetic vector potential, and the orientation of the subwavelength prolate spheroid in space. The spin-torque sign reversal suggests that counter-clockwise or clockwise rotations around the center of mass of the subwavelength spheroid can occur. The results find useful applications in single Airy light-sheet tweezers, particle manipulation, handling, and rotation, to name a few examples.
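
    The underlying angular spectrum method (the generic tool, not the paper's vector Airy light-sheet solution) can be sketched as: decompose a field into plane waves with an FFT, advance each component by exp(i*kz*z), drop evanescent components, and resynthesise. Grid, wavelength, and source are illustrative.

      import numpy as np

      def angular_spectrum(u0, wavelength, dx, z):
          n = u0.shape[0]
          k = 2 * np.pi / wavelength
          fx = np.fft.fftfreq(n, d=dx)
          FX, FY = np.meshgrid(fx, fx, indexing='ij')
          kz2 = k**2 - (2 * np.pi * FX)**2 - (2 * np.pi * FY)**2
          kz = np.sqrt(np.maximum(kz2, 0.0))
          H = np.exp(1j * kz * z) * (kz2 > 0)   # propagating waves only
          return np.fft.ifft2(np.fft.fft2(u0) * H)

      n, dx, lam = 256, 1e-6, 0.5e-6
      x = (np.arange(n) - n // 2) * dx
      u0 = np.exp(-(x[:, None]**2 + x[None, :]**2) / (5e-6)**2)  # Gaussian source
      u_z = angular_spectrum(u0, lam, dx, z=50e-6)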

  13. Deposition of device quality, low hydrogen content, hydrogenated amorphous silicon at high deposition rates

    DOEpatents

    Mahan, Archie Harvin; Molenbroek, Edith C.; Gallagher, Alan C.; Nelson, Brent P.; Iwaniczko, Eugene; Xu, Yueqin

    2002-01-01

    A method of fabricating device quality, thin-film a-Si:H for use as semiconductor material in photovoltaic and other devices, comprising in any order: positioning a substrate in a vacuum chamber adjacent a plurality of heatable filaments with a spacing distance L between the substrate and the filaments; heating the filaments to a temperature that is high enough to obtain complete decomposition of silicohydride molecules that impinge said filaments into Si and H atomic species; providing a flow of silicohydride gas, or a mixture of silicohydride gas containing Si and H, in said vacuum chamber while maintaining a pressure P of said gas in said chamber, which, in combination with said spacing distance L, provides a P×L product in a range of 10-300 mT-cm to ensure that most of the Si atomic species react with silicohydride molecules in the gas before reaching the substrate, to thereby grow an a-Si:H film at a rate of at least 50 Å/sec; and maintaining the substrate at a temperature that balances out-diffusion of H from the growing a-Si:H film with the time needed for radical species containing Si and H to migrate to preferred bonding sites.

  14. Decomposition techniques

    USGS Publications Warehouse

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose a sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, the elements to be determined, precision and accuracy requirements, sample throughput, the technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition, along with examples of their application to geochemical analysis. The chemical properties of reagents in their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  15. Hidden discriminative features extraction for supervised high-order time series modeling.

    PubMed

    Nguyen, Ngoc Anh Thi; Yang, Hyung-Jeong; Kim, Sunhee

    2016-11-01

    In this paper, an orthogonal Tucker-decomposition-based extraction of high-order discriminative subspaces from a tensor-based time series data structure is presented, named Tensor Discriminative Feature Extraction (TDFE). TDFE relies on the employment of category information for the maximization of the between-class scatter and the minimization of the within-class scatter to extract optimal hidden discriminative feature subspaces that are simultaneously spanned by every modality for supervised tensor modeling. In this context, the proposed tensor-decomposition method provides the following benefits: i) it reduces dimensionality while robustly mining the underlying discriminative features, ii) it results in effective, interpretable features that lead to improved classification and visualization, and iii) it reduces the processing time during the training stage and the filtering of the projection by solving the generalized eigenvalue problem at each alternation step. Two real third-order tensor structures of time series datasets (an epilepsy electroencephalogram (EEG) that is modeled as channel×frequency bin×time frame and a microarray dataset that is modeled as gene×sample×time) were used for the evaluation of TDFE. The experimental results corroborate the advantages of the proposed method, with average classification accuracies of 98.26% and 89.63% for the epilepsy dataset and the microarray dataset, respectively. These averages represent an improvement over matrix-based algorithms and recent tensor-based discriminant-decomposition approaches; this is especially the case considering the small number of samples that are used in practice. Copyright © 2016 Elsevier Ltd. All rights reserved.
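
    The underlying Tucker step can be sketched with the tensorly library as a generic substitute; TDFE's supervised scatter criteria are not reproduced here, and the tensor shape and ranks are illustrative.

      import numpy as np
      import tensorly as tl
      from tensorly.decomposition import tucker

      # Third-order tensor, e.g. channel x frequency bin x time frame.
      X = tl.tensor(np.random.default_rng(0).standard_normal((8, 16, 32)))
      core, factors = tucker(X, rank=[4, 8, 8])      # reduced multilinear ranks
      X_hat = tl.tucker_to_tensor((core, factors))   # low-rank reconstruction
      print([f.shape for f in factors],
            float(tl.norm(X - X_hat) / tl.norm(X)))  # relative fit error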

  16. Accurate analytical periodic solution of the elliptical Kepler equation using the Adomian decomposition method

    NASA Astrophysics Data System (ADS)

    Alshaery, Aisha; Ebaid, Abdelhalim

    2017-11-01

    Kepler's equation is one of the fundamental equations in orbital mechanics. It is a transcendental equation in terms of the eccentric anomaly of a planet which orbits the Sun. Determining the position of a planet in its orbit around the Sun at a given time depends upon the solution of Kepler's equation, which we solve in this paper by the Adomian decomposition method (ADM). Several properties of the periodicity of the obtained approximate solutions have been proved in lemmas. Our calculations demonstrate a rapid convergence of the obtained approximate solutions, which are displayed in tables and graphs. It is also shown that only a few terms of the Adomian decomposition series are sufficient to achieve highly accurate numerical results for any number of revolutions of the Earth around the Sun, as a consequence of the periodicity property. Numerically, the four-term approximate solution coincides with the Bessel-Fourier series solution in the literature up to seven decimal places at some values of the time parameter, and nine decimal places at other values. Moreover, the absolute error approaches zero using the nine-term approximate Adomian solution. In addition, the approximate Adomian solutions for the eccentric anomaly have been used to show the convergence of the approximate radial distances of the Earth from the Sun for any number of revolutions. The minimal distance (perihelion) and maximal distance (aphelion) approach 147 million kilometers and 152.505 million kilometers, respectively, and these coincide with the well-known results in astronomical physics. Therefore, the Adomian decomposition method is validated as an effective tool to solve Kepler's equation for elliptical orbits.
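
    A cross-check sketch of E - e*sin(E) = M: Newton iteration against a truncated series in the same spirit as the paper's few-term approximants (the series below is the classical low-eccentricity expansion, not the ADM series itself).

      import math

      def kepler_newton(M, e, tol=1e-12):
          E = M
          while True:
              dE = (E - e * math.sin(E) - M) / (1.0 - e * math.cos(E))
              E -= dE
              if abs(dE) < tol:
                  return E

      def kepler_series(M, e):
          # Classical expansion to O(e^3) in the eccentricity.
          return (M + e * math.sin(M)
                    + 0.5 * e**2 * math.sin(2 * M)
                    + e**3 * (0.375 * math.sin(3 * M) - 0.125 * math.sin(M)))

      M, e = 1.0, 0.0167                    # Earth's orbital eccentricity
      print(kepler_newton(M, e) - kepler_series(M, e))   # tiny for Earth-like e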

  17. Measuring Water in Bioreactor Landfills

    NASA Astrophysics Data System (ADS)

    Han, B.; Gallagher, V. N.; Imhoff, P. T.; Yazdani, R.; Chiu, P.

    2004-12-01

    Methane is an important greenhouse gas, and landfills are the largest anthropogenic source in many developed countries. Bioreactor landfills have been proposed as one means of abating greenhouse gas emissions from landfills. Here, the decomposition of organic wastes is enhanced by the controlled addition of water or leachate to maintain optimal conditions for waste decomposition. Greenhouse gas abatement is accomplished by sequestration of photosynthetically derived carbon in wastes, CO2 offsets from energy use of waste derived gas, and mitigation of methane emission from the wastes. Maintaining optimal moisture conditions for waste degradation is perhaps the most important operational parameter in bioreactor landfills. To determine how much water is needed and where to add it, methods are required to measure water within solid waste. However, there is no reliable method that can measure moisture content simply and accurately in the heterogeneous environment typical of landfills. While well drilling and analysis of solid waste samples is sometimes used to determine moisture content, this is an expensive, time-consuming, and destructive procedure. To overcome these problems, a new technology recently developed by hydrologists for measuring water in the vadose zone --- the partitioning tracer test (PTT) --- was evaluated for measuring water in solid waste in a full-scale bioreactor landfill in Yolo County, CA. Two field tests were conducted in different regions of an aerobic bioreactor landfill, with each test measuring water in ≈ 250 ft3 of solid waste. Tracers were injected through existing tubes inserted in the landfill, and tracer breakthrough curves were measured through time from the landfill's gas collection system. Gas samples were analyzed on site using a field-portable gas chromatograph and shipped offsite for more accurate laboratory analysis. In the center of the landfill, PTT measurements indicated that the fraction of the pore space filled with water was 29%, while the moisture content, the mass of water divided by total wet mass of solid waste, was 28%. Near the sloped sides of the landfill, PTT results indicated that only 7.1% of the pore space was filled with water, while the moisture content was estimated to be 6.9%. These measurements are in close agreement with gravimetric measurements made on solid waste samples collected after each PTT: moisture content of 27% in the center of the landfill and only 6% near the edge of the landfill. We discuss these measurements in detail, the limitations of the PTT method for landfills, and operational guidelines for achieving unbiased measurements of moisture content in landfills using the PTT method.
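
    For orientation, the standard gas-phase PTT analysis (assumed here; the abstract does not spell out the formula) infers water content from the retardation of a water-partitioning tracer relative to a conservative one: R = 1 + K*Sw/(1 - Sw), where K is the water-gas partition coefficient and Sw the water-filled fraction of the pore space, so Sw = (R - 1)/(K + R - 1). The numbers below are illustrative.

      def water_saturation(t_partitioning, t_conservative, K):
          """Water-filled pore fraction from tracer travel times (standard PTT)."""
          R = t_partitioning / t_conservative     # retardation factor
          return (R - 1.0) / (K + R - 1.0)

      # Illustrative travel times (hours) and partition coefficient:
      print(water_saturation(t_partitioning=36.0, t_conservative=20.0, K=2.0))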

  18. Extracting fingerprint of wireless devices based on phase noise and multiple level wavelet decomposition

    NASA Astrophysics Data System (ADS)

    Zhao, Weichen; Sun, Zhuo; Kong, Song

    2016-10-01

    Wireless devices can be identified by a fingerprint extracted from their transmitted signals, which is useful in wireless communication security and other fields. This paper presents a method that extracts a fingerprint based on the phase noise of the signal and multiple-level wavelet decomposition. The phase of the signal is first extracted and then decomposed by multiple-level wavelet decomposition. Statistical features of each wavelet coefficient vector are used to construct the fingerprint. In addition, the relationship between the wavelet decomposition level and recognition accuracy is simulated, and a recommended decomposition level is identified. Compared with previous methods, our method is simpler, and recognition accuracy remains high when the signal-to-noise ratio (SNR) is low.
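
    A hedged sketch of the fingerprint construction: instantaneous phase, then multilevel wavelet decomposition, then per-level statistics. The wavelet, level, and statistics are illustrative; the paper's exact settings are not specified here.

      import numpy as np
      import pywt
      from scipy.signal import hilbert

      def fingerprint(signal, wavelet='db4', level=5):
          phase = np.unwrap(np.angle(hilbert(signal)))     # instantaneous phase
          coeffs = pywt.wavedec(phase, wavelet, level=level)
          # Two statistics per coefficient vector form the feature vector.
          return np.array([s for c in coeffs for s in (c.mean(), c.std())])

      rng = np.random.default_rng(0)
      sig = np.sin(2 * np.pi * 0.1 * np.arange(4096)) \
            + 0.01 * rng.standard_normal(4096)
      print(fingerprint(sig).shape)                        # (12,) feature vector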

  19. Application of Direct Parallel Methods to Reconstruction and Forecasting Problems

    NASA Astrophysics Data System (ADS)

    Song, Changgeun

    Many important physical processes in nature are represented by partial differential equations. Numerical weather prediction, in particular, requires vast computational resources. We investigate the significance of parallel processing technology for the real-world problem of atmospheric prediction. In this paper we consider the classic problem of decomposing the observed wind field into its irrotational and nondivergent components. Recognizing that on a limited domain this problem has a non-unique solution, Lynch (1989) described eight different ways to accomplish the decomposition. One set of elliptic equations is associated with the decomposition; it determines the initial nondivergent state for the forecast model. It is shown that the entire decomposition problem can be solved in a fraction of a second using a multi-vector processor such as the ALLIANT FX/8. Secondly, the barotropic model is used to track hurricanes, where one set of elliptic equations is solved to recover the streamfunction from the forecasted vorticity. A 72 h prediction of Hurricane Elena is made while it is in the Gulf of Mexico. During this time the hurricane executes a dramatic recurvature that is captured by the model. Furthermore, an improvement in the track prediction results when a simple assimilation strategy is used. This technique makes use of the wind fields in the 24 h period immediately preceding the initial time of the prediction. In this particular application, solutions to systems of elliptic equations are at the center of the computational mechanics. We demonstrate that direct, parallel methods based on accelerated block cyclic reduction (BCR) significantly reduce the computational time required to solve the elliptic equations germane to the decomposition, the forecast, and adjoint assimilation.
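
    The elliptic step can be sketched generically: recover the streamfunction psi from vorticity zeta by solving laplacian(psi) = zeta. The sketch below uses an FFT solver on a doubly periodic grid rather than the paper's block cyclic reduction; grid and forcing are illustrative.

      import numpy as np

      def poisson_fft(zeta, dx):
          n = zeta.shape[0]
          k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
          kx, ky = np.meshgrid(k, k, indexing='ij')
          k2 = kx**2 + ky**2
          k2[0, 0] = 1.0                    # avoid divide-by-zero at the mean
          psi_hat = -np.fft.fft2(zeta) / k2
          psi_hat[0, 0] = 0.0               # fix the arbitrary constant
          return np.real(np.fft.ifft2(psi_hat))

      n, dx = 128, 1.0
      x = np.arange(n) * dx
      zeta = np.sin(2 * np.pi * x[:, None] / (n * dx)) \
             * np.cos(2 * np.pi * x[None, :] / (n * dx))
      psi = poisson_fft(zeta, dx)           # nondivergent streamfunction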

  20. Application of singular value decomposition to structural dynamics systems with constraints

    NASA Technical Reports Server (NTRS)

    Juang, J.-N.; Pinson, L. D.

    1985-01-01

    Singular value decomposition is used to construct a coordinate transformation for a linear dynamic system subject to linear, homogeneous constraint equations. The method is compared with two commonly used methods, namely classical Gaussian elimination and the Walton-Steeves approach. Although the classical method requires fewer numerical operations, the singular value decomposition method is more accurate and more convenient for eliminating the dependent coordinates. Numerical examples are presented to demonstrate the application of the method.
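
    A minimal sketch of the transformation, under the usual formulation for constraints C q = 0 (the constraint matrix below is invented): the right singular vectors with zero singular values span the admissible subspace, giving q = N u with N orthonormal.

      import numpy as np

      C = np.array([[1.0, -1.0, 0.0, 0.0],      # two homogeneous constraints
                    [0.0,  0.0, 1.0, 1.0]])
      U, s, Vt = np.linalg.svd(C)
      rank = int(np.sum(s > 1e-12 * s[0]))
      N = Vt[rank:].T                            # orthonormal null-space basis
      print(np.allclose(C @ N, 0.0), N.shape)    # True (4, 2): 2 independent DOFs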

  1. FormTracer. A mathematica tracing package using FORM

    NASA Astrophysics Data System (ADS)

    Cyrol, Anton K.; Mitter, Mario; Strodthoff, Nils

    2017-10-01

    We present FormTracer, a high-performance, general-purpose, easy-to-use Mathematica tracing package which uses FORM. It supports arbitrary space and spinor dimensions as well as an arbitrary number of simple compact Lie groups. While keeping the usability of the Mathematica interface, it relies on the efficiency of FORM. An additional performance gain is achieved by a decomposition algorithm that avoids redundant traces in the product tensor spaces. FormTracer supports a wide range of syntaxes, which endows it with high flexibility. Mathematica notebooks that automatically install the package and guide the user through performing standard traces in space-time, spinor and gauge-group spaces are provided. Program files: doi:http://dx.doi.org/10.17632/7rd29h4p3m.1. Licensing provisions: GPLv3. Programming language: Mathematica and FORM. Nature of problem: efficiently compute traces of large expressions. Solution method: the expression to be traced is decomposed into its subspaces by a recursive Mathematica expansion algorithm. The result is subsequently translated to a FORM script that takes the traces. After FORM is executed, the final result is either imported into Mathematica or exported as optimized C/C++/Fortran code. Unusual features: the outstanding features of FormTracer are the simple interface, the capability to efficiently handle an arbitrary number of Lie groups in addition to Dirac and Lorentz tensors, and a customizable input syntax.

  2. A sparse grid based method for generative dimensionality reduction of high-dimensional data

    NASA Astrophysics Data System (ADS)

    Bohn, Bastian; Garcke, Jochen; Griebel, Michael

    2016-03-01

    Generative dimensionality reduction methods play an important role in machine learning applications because they construct an explicit mapping from a low-dimensional space to the high-dimensional data space. We discuss a general framework to describe generative dimensionality reduction methods, where the main focus lies on a regularized principal manifold learning variant. Since most generative dimensionality reduction algorithms exploit the representer theorem for reproducing kernel Hilbert spaces, their computational costs grow at least quadratically in the number n of data. Instead, we introduce a grid-based discretization approach which automatically scales just linearly in n. To circumvent the curse of dimensionality of full tensor product grids, we use the concept of sparse grids. Furthermore, in real-world applications, some embedding directions are usually more important than others and it is reasonable to refine the underlying discretization space only in these directions. To this end, we employ a dimension-adaptive algorithm which is based on the ANOVA (analysis of variance) decomposition of a function. In particular, the reconstruction error is used to measure the quality of an embedding. As an application, the study of large simulation data from an engineering application in the automotive industry (car crash simulation) is performed.

  3. Coupling experimental data and a prototype model to probe the physical and chemical processes of 2,4-dinitroimidazole solid-phase thermal decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Behrens, R.; Minier, L.; Bulusu, S.

    1998-12-31

    The time-dependent, solid-phase thermal decomposition behavior of 2,4-dinitroimidazole (2,4-DNI) has been measured utilizing simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) methods. The decomposition products consist of gaseous and non-volatile polymeric products. The temporal behavior of the gas formation rates of the identified products indicates that the overall thermal decomposition process is complex. In isothermal experiments with 2,4-DNI in the solid phase, four distinguishing features are observed: (1) elevated rates of gas formation are observed during the early stages of the decomposition, which appear to be correlated to the presence of exogenous water in the sample; (2) this is followed by a period of relatively constant rates of gas formation; (3) next, the rates of gas formation accelerate, characteristic of an autocatalytic reaction; (4) finally, the 2,4-DNI is depleted and gaseous decomposition products continue to evolve at a decreasing rate. A physicochemical and mathematical model of the decomposition of 2,4-DNI has been developed and applied to the experimental results. The first generation of this model is described in this paper. Differences between the first generation of the model and the experimental data collected under different conditions suggest refinements for the next generation of the model.

  4. General Relativity without paradigm of space-time covariance, and resolution of the problem of time

    NASA Astrophysics Data System (ADS)

    Soo, Chopin; Yu, Hoi-Lai

    2014-01-01

    The framework of a theory of gravity from the quantum to the classical regime is presented. The paradigm shift from full space-time covariance to spatial diffeomorphism invariance, together with clean decomposition of the canonical structure, yield transparent physical dynamics and a resolution of the problem of time. The deep divide between quantum mechanics and conventional canonical formulations of quantum gravity is overcome with a Schrödinger equation for quantum geometrodynamics that describes evolution in intrinsic time. Unitary time development with gauge-invariant temporal ordering is also viable. All Kuchar observables become physical; and classical space-time, with direct correlation between its proper times and intrinsic time intervals, emerges from constructive interference. The framework not only yields a physical Hamiltonian for Einstein's theory, but also prompts natural extensions and improvements towards a well behaved quantum theory of gravity. It is a consistent canonical scheme to discuss Horava-Lifshitz theories with intrinsic time evolution, and of the many possible alternatives that respect 3-covariance (rather than the more restrictive 4-covariance of Einstein's theory), Horava's "detailed balance" form of the Hamiltonian constraint is essentially pinned down by this framework. Issues in quantum gravity that depend on radiative corrections and the rigorous definition and regularization of the Hamiltonian operator are not addressed in this work.

  5. Domain Decomposition By the Advancing-Partition Method for Parallel Unstructured Grid Generation

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.; Zagaris, George

    2009-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  6. Domain Decomposition By the Advancing-Partition Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2008-01-01

    A new method of domain decomposition has been developed for generating unstructured grids in subdomains either sequentially or using multiple computers in parallel. Domain decomposition is a crucial and challenging step for parallel grid generation. Prior methods are generally based on auxiliary, complex, and computationally intensive operations for defining partition interfaces and usually produce grids of lower quality than those generated in single domains. The new technique, referred to as "Advancing Partition," is based on the Advancing-Front method, which partitions a domain as part of the volume mesh generation in a consistent and "natural" way. The benefits of this approach are: 1) the process of domain decomposition is highly automated, 2) partitioning of domain does not compromise the quality of the generated grids, and 3) the computational overhead for domain decomposition is minimal. The new method has been implemented in NASA's unstructured grid generation code VGRID.

  7. Photoacoustic tomography from weak and noisy signals by using a pulse decomposition algorithm in the time-domain.

    PubMed

    Liu, Liangbing; Tao, Chao; Liu, XiaoJun; Deng, Mingxi; Wang, Senhua; Liu, Jun

    2015-10-19

    Photoacoustic tomography is a promising and rapidly developing methodology of biomedical imaging. Reconstructing images from weak and noisy photoacoustic signals is an increasingly urgent problem, owing to the benefits of extending the imaging depth and decreasing the dose of laser exposure. Based on the time-domain characteristics of photoacoustic signals, a pulse decomposition algorithm is proposed to reconstruct a photoacoustic image from signals with a low signal-to-noise ratio. In this method, a photoacoustic signal is decomposed as the weighted summation of a set of pulses in the time domain. Images are reconstructed from the weight factors, which are directly related to the optical absorption coefficient. Both simulation and experiment were conducted to test the performance of the method. Numerical simulations show that when the signal-to-noise ratio is -4 dB, the proposed method decreases the reconstruction error to about 17%, in comparison with the conventional back-projection method. Moreover, it can produce acceptable images even when the signal-to-noise ratio is decreased to -10 dB. Experiments show that, when the laser fluence level is low, the proposed method achieves a relatively clean image of a hair phantom with well-preserved pattern details. The proposed method demonstrates the potential of photoacoustic tomography in expanding applications.
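
    The decomposition idea can be sketched as follows: express a measured trace as a weighted sum of time-shifted reference pulses and solve for the weights by least squares. The Gaussian pulse shape, shift grid, and noise level are stand-ins for the paper's choices.

      import numpy as np

      n = 400
      t = np.arange(n)
      pulse = lambda t0: np.exp(-0.5 * ((t - t0) / 3.0) ** 2)   # reference pulse
      # Dictionary of time-shifted pulses:
      D = np.column_stack([pulse(t0) for t0 in range(0, n, 4)])

      rng = np.random.default_rng(0)
      clean = 1.0 * pulse(100) + 0.6 * pulse(220)
      noisy = clean + 0.4 * rng.standard_normal(n)              # low-SNR trace

      w, *_ = np.linalg.lstsq(D, noisy, rcond=None)             # weight factors
      recon = D @ w                                             # denoised estimate
      print(np.linalg.norm(recon - clean) / np.linalg.norm(clean))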

  8. A Walking Method for Non-Decomposition Intersection and Union of Arbitrary Polygons and Polyhedrons

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Graham, M.; Yao, J.

    We present a method for computing the intersection and union of non-convex polyhedrons without decomposition in O(n log n) time, where n is the total number of faces of both polyhedrons. We include an accompanying Python package which addresses many of the practical issues associated with implementation and serves as a proof of concept. The key to the method is that by considering the edges of the original objects and the intersections between faces as walking routes, we can efficiently find the boundary of the intersection of arbitrary objects using directional walks, thus handling the concave case in a natural manner. The method also easily extends to plane slicing and non-convex polyhedron unions, and both the polyhedron and its constituent faces may be non-convex.

  9. Differential Decomposition Among Pig, Rabbit, and Human Remains.

    PubMed

    Dautartas, Angela; Kenyhercz, Michael W; Vidoli, Giovanna M; Meadows Jantz, Lee; Mundorff, Amy; Steadman, Dawnie Wolfe

    2018-03-30

    While nonhuman animal remains are often utilized in forensic research to develop methods to estimate the postmortem interval, systematic studies that directly validate animals as proxies for human decomposition are lacking. The current project compared decomposition rates among pigs, rabbits, and humans at the University of Tennessee's Anthropology Research Facility across three seasonal trials that spanned nearly 2 years. The Total Body Score (TBS) method was applied to quantify decomposition changes and calculate the postmortem interval (PMI) in accumulated degree days (ADD). Decomposition trajectories were analyzed by comparing the estimated and actual ADD for each seasonal trial and by fuzzy cluster analysis. The cluster analysis demonstrated that the rabbits formed one group while pigs and humans, although more similar to each other than either to rabbits, still showed important differences in decomposition patterns. The decomposition trends show that neither nonhuman model captured the pattern, rate, and variability of human decomposition. © 2018 American Academy of Forensic Sciences.
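
    The accumulated-degree-day bookkeeping used with TBS can be sketched in a few lines (the standard convention of zeroing sub-freezing days is assumed here):

      def accumulated_degree_days(daily_mean_temps_c):
          """Sum daily mean temperatures, counting days below 0 degC as zero."""
          return sum(max(t, 0.0) for t in daily_mean_temps_c)

      print(accumulated_degree_days([12.5, 15.0, -2.0, 8.0]))   # 35.5 ADD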

  10. Spatial, temporal, and hybrid decompositions for large-scale vehicle routing with time windows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bent, Russell W

    This paper studies the use of decomposition techniques to quickly find high-quality solutions to large-scale vehicle routing problems with time windows. It considers an adaptive decomposition scheme which iteratively decouples a routing problem based on the current solution. Earlier work considered vehicle-based decompositions that partition the vehicles across the subproblems. The subproblems can then be optimized independently and merged easily. This paper argues that vehicle-based decompositions, although very effective on various problem classes, also have limitations. In particular, they do not accommodate temporal decompositions and may produce spatial decompositions that are not focused enough. This paper then proposes customer-based decompositions, which generalize vehicle-based decouplings and allow for focused spatial and temporal decompositions. Experimental results on class R2 of the extended Solomon benchmarks demonstrate the benefits of the customer-based adaptive decomposition scheme and its spatial, temporal, and hybrid instantiations. In particular, they show that customer-based decompositions bring significant benefits over large neighborhood search, in contrast to vehicle-based decompositions.

  11. Improving the blind restoration of retinal images by means of point-spread-function estimation assessment

    NASA Astrophysics Data System (ADS)

    Marrugo, Andrés G.; Millán, María S.; Šorel, Michal; Kotera, Jan; Šroubek, Filip

    2015-01-01

    Retinal images often suffer from blurring, which hinders disease diagnosis and progression assessment. The restoration of the images is carried out by means of blind deconvolution, but the success of the restoration depends on the correct estimation of the point-spread function (PSF) that blurred the image. The restoration can be space-invariant or space-variant. Because a retinal image has regions without texture or sharp edges, the blind PSF estimation may fail. In this paper we propose a strategy for the correct assessment of PSF estimation in retinal images for restoration by means of space-invariant or space-variant blind deconvolution. Our method is based on a decomposition of the estimated PSFs into Zernike coefficients to identify valid PSFs. This significantly improves the quality of the image restoration, as revealed by the increased visibility of small details, such as small blood vessels, and by the lack of restoration artifacts.

  12. Motor current signature analysis for gearbox condition monitoring under transient speeds using wavelet analysis and dual-level time synchronous averaging

    NASA Astrophysics Data System (ADS)

    Bravo-Imaz, Inaki; Davari Ardakani, Hossein; Liu, Zongchang; García-Arribas, Alfredo; Arnaiz, Aitor; Lee, Jay

    2017-09-01

    This paper focuses on analyzing motor current signatures for the fault diagnosis of gearboxes operating under transient speed regimes. Two different strategies are evaluated, extensively tested, and compared for analyzing the motor current signature, in order to implement a condition monitoring system for gearboxes in industrial machinery. A specially designed test bench is used, thoroughly monitored to fully characterize the experiments, in which gears in different health states are tested. The measured signals are analyzed using discrete wavelet decomposition at different decomposition levels, using a range of mother wavelets. In addition, a dual-level time synchronous averaging analysis is performed on the same signals to compare the performance of the two methods. From both analyses, the relevant features of the signals are extracted and cataloged using a self-organizing map, which allows easy detection and classification of the diverse health states of the gears. The results demonstrate the effectiveness of both methods for diagnosing gearbox faults, with slightly better performance observed for the dual-level time synchronous averaging method. Based on the obtained results, the proposed methods can be used as effective and reliable condition monitoring procedures for gearboxes using only the motor current signature.
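
    The time-synchronous-averaging ingredient can be sketched generically: slice the current signal into revolution-length segments (a tachometer-derived period is assumed) and average them, so non-synchronous content cancels. Signal and values are illustrative.

      import numpy as np

      def tsa(signal, samples_per_rev):
          n_rev = len(signal) // samples_per_rev
          segs = signal[:n_rev * samples_per_rev].reshape(n_rev, samples_per_rev)
          return segs.mean(axis=0)

      rng = np.random.default_rng(0)
      spr = 256
      mesh = np.tile(np.sin(2 * np.pi * 8 * np.arange(spr) / spr), 100)  # mesh tone
      avg = tsa(mesh + rng.standard_normal(mesh.size), spr)
      print(np.std(avg - mesh[:spr]))       # noise reduced by ~1/sqrt(100)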

  13. Study of a two-dimension transient heat propagation in cylindrical coordinates by means of two finite difference methods

    NASA Astrophysics Data System (ADS)

    Dumencu, A.; Horbaniuc, B.; Dumitraşcu, G.

    2016-08-01

    The analytical treatment of unsteady conduction heat transfer under actual conditions represents a very difficult (if not insurmountable) problem, due to the issues related to finding analytical solutions of the conduction heat transfer equation. Various techniques have been developed to overcome these difficulties, among them the alternating-directions method and the decomposition method; both are particularly suited for two-dimensional heat propagation. The paper applies both techniques in order to verify whether the results they provide are in good accordance. The studied case consists of a long hollow cylinder, with a time-dependent temperature field that varies in both the radial and the axial directions. The implicit technique is used in both methods and involves simultaneously solving a set of equations for all of the nodes at each time step, successively for each of the two directions. Gauss elimination is used to solve the set, yielding the nodal temperatures. The two techniques show very good agreement, and since the decomposition method is easier to use in terms of computer code and running time, it appears to be the more recommendable of the two.
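
    The per-direction implicit sweep reduces to a tridiagonal solve, for which the Gauss elimination mentioned above specialises to the Thomas algorithm; the sketch below applies it to an assumed implicit 1D diffusion step.

      import numpy as np

      def thomas(a, b, c, d):
          """Solve a tridiagonal system; a = sub-, b = main, c = super-diagonal."""
          n = len(b)
          cp, dp = np.empty(n), np.empty(n)
          cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
          for i in range(1, n):
              m = b[i] - a[i] * cp[i - 1]
              cp[i] = c[i] / m
              dp[i] = (d[i] - a[i] * dp[i - 1]) / m
          x = np.empty(n)
          x[-1] = dp[-1]
          for i in range(n - 2, -1, -1):
              x[i] = dp[i] - cp[i] * x[i + 1]
          return x

      # Implicit diffusion step: (I - r*Lap) T_new = T_old, r = alpha*dt/dx^2
      n, r = 50, 0.5
      T_old = np.ones(n)
      T_new = thomas(np.full(n, -r), np.full(n, 1 + 2 * r), np.full(n, -r), T_old)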

  14. New insights into the crowd characteristics in Mina

    NASA Astrophysics Data System (ADS)

    Wang, J. Y.; Weng, W. G.; Zhang, X. L.

    2014-11-01

    The significance of studying the characteristics of crowd behavior for the safe organization of mass activities is indubitable, yet there is insufficient material for conducting such research. In this paper, the Mina crowd disaster is quantitatively re-investigated. Its instantaneous velocity field is extracted from video material based on the cross-correlation algorithm. The properties of the stop-and-go waves, including fluctuation frequencies, wave propagation speeds, characteristic speeds, and time- and space-averaged velocity variances, are analyzed in detail. The database of stop-and-go wave features is thus enriched, which is very important for crowd studies. The 'turbulent' flows are investigated with the proper orthogonal decomposition (POD) method, which is widely used in fluid mechanics, and time-series and spatial analyses are conducted to investigate their characteristics. The coherent structures and the movement process are described by the POD method, the relationship between the jamming point and the crowd path is analyzed, and the pressure buffer recognized in this paper is consistent with Helbing's high-pressure region. The results revealed here may be helpful for facility design, for modeling crowded scenarios, and for the organization of large-scale mass activities.
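
    The POD step can be sketched generically: stack velocity-field snapshots as columns and take the SVD; the left singular vectors are the spatial modes, and the leading ones capture the coherent structures. The snapshot data below is synthetic, not the Mina video data.

      import numpy as np

      rng = np.random.default_rng(0)
      n_points, n_snap = 500, 80
      mode = np.sin(np.linspace(0, 4 * np.pi, n_points))    # one coherent mode
      snaps = np.outer(mode, np.sin(0.3 * np.arange(n_snap))) \
              + 0.1 * rng.standard_normal((n_points, n_snap))

      U, s, Vt = np.linalg.svd(snaps, full_matrices=False)
      energy = s**2 / np.sum(s**2)
      print(energy[:3])        # the first mode dominates the fluctuation energy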

  15. Accelerating the reconstruction of magnetic resonance imaging by three-dimensional dual-dictionary learning using CUDA.

    PubMed

    Jiansen Li; Jianqi Sun; Ying Song; Yanran Xu; Jun Zhao

    2014-01-01

    An effective way to improve the data acquisition speed of magnetic resonance imaging (MRI) is to use under-sampled k-space data, and dictionary learning can be used to maintain reconstruction quality. A three-dimensional dictionary trains its atoms in the form of blocks, which can exploit the spatial correlation among slices. The dual-dictionary learning method includes a low-resolution dictionary and a high-resolution dictionary, for sparse coding and image updating, respectively. However, the amount of data is huge for three-dimensional reconstruction, especially when the number of slices is large, so the procedure is time-consuming. In this paper, we first utilize NVIDIA's compute unified device architecture (CUDA) programming model to design parallel algorithms on the graphics processing unit (GPU) to accelerate the reconstruction procedure. The main optimizations operate in the dictionary learning algorithm and the image updating part, namely the orthogonal matching pursuit (OMP) algorithm and the k-singular value decomposition (K-SVD) algorithm. We then develop another version of the CUDA code with algorithmic optimization. Experimental results show that a speedup of more than 324 times is achieved compared with the CPU-only code when the number of MRI slices is 24.
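
    A minimal NumPy reference for the OMP sparse-coding step (the paper's contribution is the CUDA parallelisation, which this sketch does not attempt to show); dictionary and signal are synthetic.

      import numpy as np

      def omp(D, y, n_nonzero):
          """Greedy sparse coding: pick atoms most correlated with the residual."""
          residual, support = y.copy(), []
          for _ in range(n_nonzero):
              support.append(int(np.argmax(np.abs(D.T @ residual))))
              Ds = D[:, support]
              coef, *_ = np.linalg.lstsq(Ds, y, rcond=None)
              residual = y - Ds @ coef
          x = np.zeros(D.shape[1])
          x[support] = coef
          return x

      rng = np.random.default_rng(0)
      D = rng.standard_normal((64, 256))
      D /= np.linalg.norm(D, axis=0)             # unit-norm atoms
      y = 2.0 * D[:, 10] - 1.0 * D[:, 99]
      print(np.flatnonzero(omp(D, y, 2)))        # recovers atoms 10 and 99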

  16. Comparison of three-way and four-way calibration for the real-time quantitative analysis of drug hydrolysis in complex dynamic samples by excitation-emission matrix fluorescence.

    PubMed

    Yin, Xiao-Li; Gu, Hui-Wen; Liu, Xiao-Lu; Zhang, Shan-Hui; Wu, Hai-Long

    2018-03-05

    Multiway calibration in combination with spectroscopic techniques is an attractive tool for online or real-time monitoring of target analyte(s) in complex samples. However, how to choose a suitable multiway calibration method for the resolution of spectroscopic-kinetic data is a troubling problem in practical applications. In this work, for the first time, three-way and four-way fluorescence-kinetic data arrays were generated during the real-time monitoring of the hydrolysis of irinotecan (CPT-11) in human plasma by excitation-emission matrix fluorescence. Alternating normalization-weighted error (ANWE) and alternating penalty trilinear decomposition (APTLD) were used as three-way calibration for the decomposition of the three-way kinetic data array, whereas alternating weighted residual constraint quadrilinear decomposition (AWRCQLD) and alternating penalty quadrilinear decomposition (APQLD) were applied as four-way calibration to the four-way kinetic data array. The quantitative results of the two kinds of calibration models were fully compared in terms of predicted real-time concentrations, spiked recoveries of initial concentration, and analytical figures of merit. The comparison demonstrated that both three-way and four-way calibration models can achieve real-time quantitative analysis of the hydrolysis of CPT-11 in human plasma under certain conditions, but each possesses critical advantages and shortcomings during dynamic analysis. The conclusions obtained in this paper provide helpful guidance for the reasonable selection of multiway calibration models for the real-time quantitative analysis of target analyte(s) in complex dynamic systems. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Compression of hyper-spectral images using an accelerated nonnegative tensor decomposition

    NASA Astrophysics Data System (ADS)

    Li, Jin; Liu, Zilong

    2017-12-01

    Nonnegative tensor Tucker decomposition (NTD) in a transform domain (e.g., the 2D-DWT) has been used in the compression of hyper-spectral images because it can remove redundancies between spectral bands and also exploit the spatial correlations of each band. However, an NTD has a very high computational cost. In this paper, we propose a low-complexity NTD-based compression method for hyper-spectral images, based on a pair-wise multilevel grouping approach that overcomes the high computational cost of the NTD. The proposed method has low complexity at the price of a slight decrease in coding performance compared with the conventional NTD. Experiments confirm that the method requires less processing time while maintaining better coding performance than compression without the NTD. The proposed approach has potential application in the lossy compression of hyper-spectral or multi-spectral images.
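
    The record's NTD additionally enforces nonnegativity on the core and factors; as a hedged stand-in for the Tucker compression step itself, here is a plain truncated higher-order SVD (HOSVD) sketch in NumPy (toy sizes, not the paper's method).

        import numpy as np

        def mode_dot(T, M, mode):
            """Mode-n product: multiply tensor T by matrix M along axis `mode`."""
            return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=(1, 0)), 0, mode)

        def hosvd(X, ranks):
            """Truncated higher-order SVD: one factor matrix per mode plus a small core."""
            factors = []
            for mode, r in enumerate(ranks):
                Xm = np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)
                U, _, _ = np.linalg.svd(Xm, full_matrices=False)
                factors.append(U[:, :r])
            core = X
            for mode, U in enumerate(factors):
                core = mode_dot(core, U.T, mode)
            return core, factors

        # Toy hyperspectral cube: 32x32 pixels, 64 bands, compressed to an 8x8x6 core.
        rng = np.random.default_rng(1)
        X = rng.standard_normal((32, 32, 8)) @ rng.standard_normal((8, 64))
        core, factors = hosvd(X, ranks=(8, 8, 6))
        Xhat = core
        for mode, U in enumerate(factors):
            Xhat = mode_dot(Xhat, U, mode)      # reconstruct from the compressed form
        print("relative error:", np.linalg.norm(X - Xhat) / np.linalg.norm(X))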

  18. Flash Pyrolysis of t-Butyl Hydroperoxide and Di-t-butyl Peroxide: Evidence of Roaming in the Decomposition of Organic Hydroperoxides.

    PubMed

    Jones, Paul J; Riser, Blake; Zhang, Jingsong

    2017-10-19

    Thermal decomposition of t-butyl hydroperoxide and di-t-butyl peroxide was investigated using flash pyrolysis (in a short reaction time of <100 μs) and vacuum-ultraviolet (λ = 118.2 nm) single-photon ionization time-of-flight mass spectrometry (VUV-SPI-TOFMS) at temperatures up to 1120 K and quantum computational methods. Acetone and methyl radical were detected as the predominant products in the initial decomposition of di-t-butyl peroxide via O-O bond fission. In the initial dissociation of t-butyl hydroperoxide, acetone, methyl radical, isobutylene, and isobutylene oxide products were identified. The novel detection of the unimolecular formation of isobutylene oxide, as supported by the computational study, was found to proceed via a roaming hydroxyl radical facilitated by a hydrogen-bonded intermediate. This new pathway could provide a new class of reactions to consider in the modeling of the low temperature oxidation of alkanes.

  19. Identification of Successive "Unobservable" Cyber Data Attacks in Power Systems Through Matrix Decomposition

    NASA Astrophysics Data System (ADS)

    Gao, Pengzhi; Wang, Meng; Chow, Joe H.; Ghiocel, Scott G.; Fardanesh, Bruce; Stefopoulos, George; Razanousky, Michael P.

    2016-11-01

    This paper presents a new framework of identifying a series of cyber data attacks on power system synchrophasor measurements. We focus on detecting "unobservable" cyber data attacks that cannot be detected by any existing method that purely relies on measurements received at one time instant. Leveraging the approximate low-rank property of phasor measurement unit (PMU) data, we formulate the identification problem of successive unobservable cyber attacks as a matrix decomposition problem of a low-rank matrix plus a transformed column-sparse matrix. We propose a convex-optimization-based method and provide its theoretical guarantee in the data identification. Numerical experiments on actual PMU data from the Central New York power system and synthetic data are conducted to verify the effectiveness of the proposed method.
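
    A hedged sketch of the generic decomposition this framework builds on: splitting a measurement matrix into a low-rank part plus a column-sparse part by ADMM, with singular-value thresholding for the nuclear norm and column-wise shrinkage for the l2,1 norm. Parameters are untuned and illustrative; the paper's exact formulation, with a transformed column-sparse term and theoretical guarantees, differs.

        import numpy as np

        def low_rank_plus_colsparse(M, lam=0.5, mu=1.0, iters=200):
            """ADMM sketch for  min ||L||_* + lam*||S||_{2,1}  s.t.  M = L + S.
            A column-sparse S models attacks that corrupt a few channels."""
            L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
            for _ in range(iters):
                # L-update: singular value thresholding.
                U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
                L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
                # S-update: column-wise shrinkage (prox of the l2,1 norm).
                R = M - L + Y / mu
                norms = np.linalg.norm(R, axis=0, keepdims=True)
                S = R * np.maximum(1.0 - (lam / mu) / np.maximum(norms, 1e-12), 0.0)
                # Dual ascent on the constraint M = L + S.
                Y += mu * (M - L - S)
            return L, S

        # Toy PMU-like data: rank-2 matrix with 3 attacked columns.
        rng = np.random.default_rng(2)
        M = rng.standard_normal((60, 2)) @ rng.standard_normal((2, 100))
        M[:, [10, 40, 70]] += 5 * rng.standard_normal((60, 3))
        L, S = low_rank_plus_colsparse(M)
        print("flagged columns:", np.where(np.linalg.norm(S, axis=0) > 1.0)[0])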

  20. Temporal structure of neuronal population oscillations with empirical mode decomposition

    NASA Astrophysics Data System (ADS)

    Li, Xiaoli

    2006-08-01

    Frequency analysis of neuronal oscillations is very important for understanding neural information processing and the mechanisms of brain disorders. This Letter addresses a new method to analyze neuronal population oscillations with empirical mode decomposition (EMD). Following EMD of a neuronal oscillation, a series of intrinsic mode functions (IMFs) is obtained; the Hilbert transform of the IMFs can then be used to extract the instantaneous time-frequency structure of the oscillation. The method is applied to analyze neuronal oscillations recorded in vivo from the hippocampus of epileptic rats; the results show that the oscillations have different time-frequency structure during the pre-ictal, seizure-onset, and ictal periods of the epileptic EEG in different frequency bands. This method provides a useful view of the temporal structure of neuronal oscillations.
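
    A minimal sketch of the EMD-plus-Hilbert pipeline on a toy signal, assuming the third-party PyEMD package (pip package EMD-signal) for the sifting step and SciPy for the Hilbert transform; this is not the Letter's code.

        import numpy as np
        from scipy.signal import hilbert
        from PyEMD import EMD   # third-party package; pip install EMD-signal

        fs = 1000.0
        t = np.arange(0, 2.0, 1.0 / fs)
        # Toy "oscillation": a theta-band plus a gamma-band component.
        sig = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

        imfs = EMD().emd(sig)                     # intrinsic mode functions
        for i, imf in enumerate(imfs):
            analytic = hilbert(imf)               # analytic signal of each IMF
            amp = np.abs(analytic)                # instantaneous amplitude
            phase = np.unwrap(np.angle(analytic))
            inst_freq = np.diff(phase) * fs / (2 * np.pi)   # instantaneous frequency (Hz)
            print(f"IMF {i}: mean instantaneous frequency {inst_freq.mean():.1f} Hz")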

  1. Structure-seeking multilinear methods for the analysis of fMRI data.

    PubMed

    Andersen, Anders H; Rayens, William S

    2004-06-01

    In comprehensive fMRI studies of brain function, the data structures often contain higher-order ways such as trial, task condition, subject, and group in addition to the intrinsic dimensions of time and space. While multivariate bilinear methods such as principal component analysis (PCA) have been used successfully for extracting information about spatial and temporal features in data from a single fMRI run, the need to unfold higher-order data sets into bilinear arrays has led to decompositions that are nonunique and to the loss of multiway linkages and interactions present in the data. These additional dimensions or ways can be retained in multilinear models to produce structures that are unique and which admit interpretations that are neurophysiologically meaningful. Multiway analysis of fMRI data from multiple runs of a bilateral finger-tapping paradigm was performed using the parallel factor (PARAFAC) model. A trilinear model was fitted to a data cube of dimensions voxels by time by run. Similarly, a quadrilinear model was fitted to a higher-way structure of dimensions voxels by time by trial by run. The spatial and temporal response components were extracted and validated by comparison to results from traditional SVD/PCA analyses based on scenarios of unfolding into lower-order bilinear structures.
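
    The PARAFAC fits reported here can be computed with alternating least squares; below is a minimal NumPy/SciPy sketch of rank-R CP/PARAFAC ALS on a toy voxels-by-time-by-run cube (illustrative only, not the authors' implementation).

        import numpy as np
        from scipy.linalg import khatri_rao

        def unfold(X, mode):
            # Mode-n unfolding with Fortran column ordering (Kolda convention).
            return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1, order='F')

        def parafac_als(X, R, iters=100, seed=0):
            """Rank-R CP/PARAFAC model fitted by alternating least squares."""
            rng = np.random.default_rng(seed)
            A, B, C = (rng.standard_normal((n, R)) for n in X.shape)
            for _ in range(iters):
                A = unfold(X, 0) @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
                B = unfold(X, 1) @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
                C = unfold(X, 2) @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
            return A, B, C   # spatial, temporal, and run loadings

        # Toy "fMRI" cube: 500 voxels x 80 time points x 6 runs, true rank 2.
        rng = np.random.default_rng(3)
        A0, B0, C0 = rng.random((500, 2)), rng.random((80, 2)), rng.random((6, 2))
        X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
        A, B, C = parafac_als(X, R=2)
        Xhat = np.einsum('ir,jr,kr->ijk', A, B, C)
        print("fit:", 1 - np.linalg.norm(X - Xhat) / np.linalg.norm(X))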

  2. Stability analysis of gyroscopic systems with delay via decomposition

    NASA Astrophysics Data System (ADS)

    Aleksandrov, A. Yu.; Zhabko, A. P.; Chen, Y.

    2018-05-01

    A mechanical system described by second-order linear differential equations, with a positive parameter multiplying the velocity forces and with time delay in the positional forces, is studied. Using the decomposition method and Lyapunov-Krasovskii functionals, conditions are obtained under which the asymptotic stability of two auxiliary first-order subsystems implies that, for sufficiently large values of the parameter, the original system is also asymptotically stable. Moreover, it is shown that the proposed approach can be applied to the stability investigation of linear gyroscopic systems with switched positional forces.

  3. Parallelization of combinatorial search when solving knapsack optimization problem on computing systems based on multicore processors

    NASA Astrophysics Data System (ADS)

    Rahman, P. A.

    2018-05-01

    This paper deals with a model of the knapsack optimization problem and a method for solving it based on directed combinatorial search in the Boolean space. The author's mathematical model for decomposing the search zone into separate search spheres, and the algorithm for distributing the search spheres across the cores of a multi-core processor, are also discussed. The paper provides an example of decomposing the search zone into several search spheres and distributing them across the cores of a quad-core processor (a minimal sketch of the idea follows below). Finally, a formula is given for estimating the theoretical maximum computational acceleration that can be achieved by parallelizing the search zone into search spheres on an unlimited number of processor cores.
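
    A minimal Python sketch of the prefix-splitting idea: fixing the first few item bits defines the "search spheres", which are distributed over cores with a process pool. The instance, names, and sizes are illustrative, not the author's model.

        from itertools import product
        from multiprocessing import Pool

        WEIGHTS = [12, 7, 11, 8, 9, 5, 13, 6]   # toy knapsack instance
        VALUES  = [24, 13, 23, 15, 16, 7, 25, 9]
        CAPACITY = 26
        PREFIX_BITS = 3                          # 2**3 = 8 search spheres

        def best_in_sphere(prefix):
            """Exhaustive search over one sphere: items 0..p-1 fixed by `prefix`."""
            n_rest = len(WEIGHTS) - len(prefix)
            best_v, best_x = -1, None
            for rest in product((0, 1), repeat=n_rest):
                x = prefix + rest
                w = sum(wi for wi, xi in zip(WEIGHTS, x) if xi)
                v = sum(vi for vi, xi in zip(VALUES, x) if xi)
                if w <= CAPACITY and v > best_v:
                    best_v, best_x = v, x
            return best_v, best_x

        if __name__ == "__main__":
            spheres = list(product((0, 1), repeat=PREFIX_BITS))
            with Pool() as pool:                 # spheres distributed across cores
                results = pool.map(best_in_sphere, spheres)
            value, solution = max(results, key=lambda r: r[0])
            print(value, solution)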

  4. A trimodal porous carbon as an effective catalyst for hydrogen production by methane decomposition.

    PubMed

    Shen, Yi; Lua, Aik Chong

    2016-01-15

    A new type of porous carbon with an interconnected trimodal pore system is synthesized by a nanocasting method using nanoparticulated bimodal micro-mesoporous silica particles as the template. The synthesized template and carbon material are characterized using transmission electron microscopy (TEM), field-emission scanning electron microscopy (FESEM) and nitrogen adsorption-desorption tests. The synthesized carbon material has an extremely high surface area, a large pore volume and an interconnected pore structure, which provide abundant active sites and space for chemical reactions and minimize the diffusion resistance of the reactants. The resulting carbon is used as the catalyst for hydrogen production by the thermal decomposition of methane. The catalytic results show that the as-synthesized carbon produces much higher methane conversion and hydrogen yield than commercial carbon materials. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Domain decomposition methods in aerodynamics

    NASA Technical Reports Server (NTRS)

    Venkatakrishnan, V.; Saltz, Joel

    1990-01-01

    Compressible Euler equations are solved for two-dimensional problems by a preconditioned conjugate gradient-like technique. An approximate Riemann solver is used to compute the numerical fluxes to second-order accuracy in space. Two ways to achieve parallelism are tested, one which makes use of the parallelism inherent in triangular solves and the other which employs domain decomposition techniques. The vectorization/parallelism in triangular solves is realized by the use of a reordering technique called wavefront ordering. This process involves interpreting the triangular matrix as a directed graph and analyzing the data dependencies. It is noted that the factorization can also be done in parallel with the wavefront ordering. The performance of two ways of partitioning the domain, strips and slabs, is compared. Results on a Cray Y-MP are reported for an inviscid transonic test case. The performance of linear algebra kernels is also reported.

  6. Segmented Domain Decomposition Multigrid For 3-D Turbomachinery Flows

    NASA Technical Reports Server (NTRS)

    Celestina, M. L.; Adamczyk, J. J.; Rubin, S. G.

    2001-01-01

    A Segmented Domain Decomposition Multigrid (SDDMG) procedure was developed for three-dimensional viscous flow problems as they apply to turbomachinery flows. The procedure divides the computational domain into a coarse mesh comprised of uniformly spaced cells. To resolve smaller length scales such as the viscous layer near a surface, segments of the coarse mesh are subdivided into a finer mesh. This is repeated until adequate resolution of the smallest relevant length scale is obtained. Multigrid is used to communicate information between the different grid levels. To test the procedure, simulation results will be presented for a compressor and turbine cascade. These simulations are intended to show the ability of the present method to generate grid independent solutions. Comparisons with data will also be presented. These comparisons will further demonstrate the usefulness of the present work for they allow an estimate of the accuracy of the flow modeling equations independent of error attributed to numerical discretization.

  7. Blurred Palmprint Recognition Based on Stable-Feature Extraction Using a Vese–Osher Decomposition Model

    PubMed Central

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese–Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred–PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition. PMID:24992328

  8. Blurred palmprint recognition based on stable-feature extraction using a Vese-Osher decomposition model.

    PubMed

    Hong, Danfeng; Su, Jian; Hong, Qinggen; Pan, Zhenkuan; Wang, Guodong

    2014-01-01

    As palmprints are captured using non-contact devices, image blur is inevitably generated because of the defocused status. This degrades the recognition performance of the system. To solve this problem, we propose a stable-feature extraction method based on a Vese-Osher (VO) decomposition model to recognize blurred palmprints effectively. A Gaussian defocus degradation model is first established to simulate image blur. With different degrees of blurring, stable features are found to exist in the image which can be investigated by analyzing the blur theoretically. Then, a VO decomposition model is used to obtain structure and texture layers of the blurred palmprint images. The structure layer is stable for different degrees of blurring (this is a theoretical conclusion that needs to be further proved via experiment). Next, an algorithm based on weighted robustness histogram of oriented gradients (WRHOG) is designed to extract the stable features from the structure layer of the blurred palmprint image. Finally, a normalized correlation coefficient is introduced to measure the similarity in the palmprint features. We also designed and performed a series of experiments to show the benefits of the proposed method. The experimental results are used to demonstrate the theoretical conclusion that the structure layer is stable for different blurring scales. The WRHOG method also proves to be an advanced and robust method of distinguishing blurred palmprints. The recognition results obtained using the proposed method and data from two palmprint databases (PolyU and Blurred-PolyU) are stable and superior in comparison to previous high-performance methods (the equal error rate is only 0.132%). In addition, the authentication time is less than 1.3 s, which is fast enough to meet real-time demands. Therefore, the proposed method is a feasible way of implementing blurred palmprint recognition.

  9. Application of the wavelet packet transform to vibration signals for surface roughness monitoring in CNC turning operations

    NASA Astrophysics Data System (ADS)

    García Plaza, E.; Núñez López, P. J.

    2018-01-01

    The wavelet packet transform (WPT) decomposes a time signal into several independent time-frequency signals called packets. This enables the temporal location of transient events occurring during the monitoring of cutting processes, which is advantageous in condition monitoring and fault diagnosis. This paper proposes monitoring surface roughness using a single low-cost sensor that is easily implemented in numerically controlled machine tools, in order to make on-line decisions on workpiece surface finish quality. Packet feature extraction from vibration signals was applied to correlate the sensor signals with measured surface roughness. For the successful application of the WPT method, the mother wavelet, the packet decomposition level, and an appropriate packet selection method must be considered, but these aspects are poorly understood in the literature. In this contribution, forty mother wavelets, the optimal decomposition level, and packet reduction methods were analysed, and the effective frequency range providing the best packet feature extraction for monitoring surface finish was identified. The results show that mother wavelet biorthogonal 4.4 at decomposition level L3, with the fusion of the orthogonal vibration components (ax + ay + az), was the best option for correlating the vibration signal with surface roughness. The best packets were found in the medium-high frequency DDA (6250-9375 Hz) and high frequency ADA (9375-12500 Hz) ranges, and the feed acceleration component ay was the primary source of information. The packet reduction methods discarded packets carrying features relevant to the signal, leading to poor surface roughness predictions. WPT is a robust vibration-signal processing method for monitoring surface roughness using a single sensor without other information sources, and it obtained satisfactory results at a low computational cost in comparison with other processing methods.
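
    A minimal sketch of the packet-energy feature extraction described here, assuming the third-party PyWavelets package; the sampling rate and toy signal are illustrative, not the paper's data.

        import numpy as np
        import pywt   # PyWavelets; pip install PyWavelets

        fs = 25000.0                        # sampling rate (Hz), illustrative
        t = np.arange(0, 0.2, 1.0 / fs)
        # Toy vibration: broadband noise plus a tone at 8 kHz.
        rng = np.random.default_rng(4)
        sig = 0.3 * rng.standard_normal(t.size) + np.sin(2 * np.pi * 8000 * t)

        # Level-3 wavelet packet decomposition with the bior4.4 mother wavelet.
        wp = pywt.WaveletPacket(data=sig, wavelet='bior4.4', mode='symmetric', maxlevel=3)
        nodes = wp.get_level(3, order='freq')   # 8 packets, each ~fs/16 wide

        # Normalized packet energies as features for roughness correlation.
        energy = np.array([np.sum(np.asarray(n.data) ** 2) for n in nodes])
        for n, e in zip(nodes, energy / energy.sum()):
            print(n.path, round(float(e), 3))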

  10. A Novel Two-Component Decomposition for Co-Polar Channels of GF-3 Quad-Pol Data

    NASA Astrophysics Data System (ADS)

    Kwok, E.; Li, C. H.; Zhao, Q. H.; Li, Y.

    2018-04-01

    Polarimetric target decomposition theory is the most dynamic and exploratory research area in the field of PolSAR. However, most target decomposition methods are based on fully polarimetric (quad-pol) data and seldom utilize dual-polarization data. Given this, we propose a novel two-component decomposition method for the co-polar channels of GF-3 quad-pol data. This method decomposes the data into two scattering contributions, surface and double-bounce, in the dual co-polar channels. To resolve this underdetermined problem, a criterion for determining the model is proposed. The criterion, which originates from the H/α decomposition, may be called the second-order averaged scattering angle; an alternative parameter for it is also put forward. To validate the effectiveness of the proposed decomposition, Liaodong Bay is selected as the research area. Located in northeastern China, the area hosts various wetland resources and exhibits sea ice in winter. GF-3 quad-pol data, from China's first C-band polarimetric synthetic aperture radar (PolSAR) satellite, are used as the study data. The dependencies between the features of the proposed algorithm and the comparison decompositions (Pauli decomposition, An&Yang decomposition, Yamaguchi S4R decomposition) were investigated in the study. From several aspects of the experimental discussion, we draw the following conclusions: the proposed algorithm may be suitable for special scenes with low vegetation coverage or low vegetation in the non-growing season, and the proposed decomposition features, obtained using only co-polar data, are highly correlated with the corresponding comparison decomposition features obtained from quad-polarization data. Moreover, these features could serve as input for subsequent classification or parameter inversion.

  11. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    NASA Astrophysics Data System (ADS)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which were introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered, and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap-transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
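
    For reference, a minimal NumPy sketch of classical Givens-rotation QR, the baseline against which the heap-transform construction is positioned (the paper's signal-induced transforms themselves are not reproduced here).

        import numpy as np

        def givens_qr(A):
            """QR decomposition of a real matrix by Givens rotations."""
            m, n = A.shape
            R = A.astype(float).copy()
            Q = np.eye(m)
            for j in range(n):
                for i in range(m - 1, j, -1):
                    a, b = R[i - 1, j], R[i, j]
                    r = np.hypot(a, b)
                    if r == 0.0:
                        continue
                    c, s = a / r, b / r
                    G = np.array([[c, s], [-s, c]])
                    R[[i - 1, i], :] = G @ R[[i - 1, i], :]       # zero out R[i, j]
                    Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T     # accumulate Q
            return Q, R

        A = np.random.default_rng(5).standard_normal((5, 3))
        Q, R = givens_qr(A)
        print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))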

  12. Decision Support Methods and Tools

    NASA Technical Reports Server (NTRS)

    Green, Lawrence L.; Alexandrov, Natalia M.; Brown, Sherilyn A.; Cerro, Jeffrey A.; Gumbert, Clyde r.; Sorokach, Michael R.; Burg, Cecile M.

    2006-01-01

    This paper is one of a set of papers, developed simultaneously and presented within a single conference session, that are intended to highlight systems analysis and design capabilities within the Systems Analysis and Concepts Directorate (SACD) of the National Aeronautics and Space Administration (NASA) Langley Research Center (LaRC). This paper focuses on the specific capabilities of uncertainty/risk analysis, quantification, propagation, decomposition, and management; robust/reliability design methods; and extensions of these capabilities into decision analysis methods within SACD. These disciplines are discussed together herein under the name of Decision Support Methods and Tools. Several examples are discussed which highlight the application of these methods within current or recent aerospace research at NASA LaRC. Where applicable, commercially available or government-developed software tools are also discussed.

  13. Conception of discrete systems decomposition algorithm using p-invariants and hypergraphs

    NASA Astrophysics Data System (ADS)

    Stefanowicz, Ł.

    2016-09-01

    In this article, the author presents an idea for a decomposition algorithm for discrete systems described by Petri nets using p-invariants. The decomposition process is significant from the point of view of discrete-system design, because it allows the separation of smaller sequential parts. The proposed algorithm uses a modified Martinez-Silva method as well as the author's selection algorithm. The developed method is a good complement to classical decomposition algorithms using graphs and hypergraphs.

  14. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. The construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The new modeling network is based on an additive decomposition approach together with two separate basis-function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least-squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.

  15. Assessment of a new method for the analysis of decomposition gases of polymers by combining thermogravimetric solid-phase extraction and thermal desorption gas chromatography mass spectrometry.

    PubMed

    Duemichen, E; Braun, U; Senz, R; Fabian, G; Sturm, H

    2014-08-08

    For the analysis of the gaseous thermal decomposition products of polymers, the common techniques are thermogravimetry combined with Fourier-transform infrared spectroscopy (TGA-FTIR) or mass spectrometry (TGA-MS). These methods offer a simple approach to the decomposition mechanism, especially for small decomposition molecules. Complex spectra of gaseous mixtures are often hard to identify because of overlapping signals. In this paper a new method is described in which the decomposition products are adsorbed under controlled TGA conditions onto solid-phase extraction (SPE) material (twisters). Subsequently, the twisters are analysed by thermal desorption gas chromatography mass spectrometry (TDS-GC-MS), which allows the decomposition products to be separated and identified using an MS library. The thermoplastics polyamide 66 (PA 66) and polybutylene terephthalate (PBT) were used as example polymers. The influence of the sample mass and of the purge gas flow during the decomposition process was investigated in TGA. The advantages and limitations of the method are presented in comparison with the common analysis techniques, TGA-FTIR and TGA-MS. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. Variational calculation of macrostate transition rates

    NASA Astrophysics Data System (ADS)

    Ulitsky, Alex; Shalloway, David

    1998-08-01

    We develop the macrostate variational method (MVM) for computing reaction rates of diffusive conformational transitions in multidimensional systems by a variational coarse-grained "macrostate" decomposition of the Smoluchowski equation. MVM uses multidimensional Gaussian packets to identify and focus computational effort on the "transition region," a localized, self-consistently determined region in conformational space positioned roughly between the macrostates. It also determines the "transition direction," which optimally specifies the projected potential of mean force for mean first-passage time calculations. MVM is complementary to variational transition state theory in that it can efficiently solve multidimensional problems but does not accommodate memory-friction effects. It has been tested on model 1- and 2-dimensional potentials and on the 12-dimensional conformational transition between the isoforms of a six-atom microcluster having only van der Waals interactions. Comparison with Brownian dynamics calculations shows that MVM obtains equivalent results at a fraction of the computational cost.

  17. Realization of quantum gates with multiple control qubits or multiple target qubits in a cavity

    NASA Astrophysics Data System (ADS)

    Waseem, Muhammad; Irfan, Muhammad; Qamar, Shahid

    2015-06-01

    We propose a scheme to realize a three-qubit controlled phase gate and a multi-qubit controlled-NOT gate of one qubit simultaneously controlling n target qubits with a four-level quantum system in a cavity. The implementation time for the multi-qubit controlled-NOT gate is independent of the number of qubits. The three-qubit phase gate is generalized to an n-qubit phase gate with multiple control qubits. The number of steps is reduced linearly compared with the conventional gate-decomposition method. Our scheme can be applied to various types of physical systems, such as superconducting qubits coupled to a resonator and trapped atoms in a cavity. Our scheme does not require adjustment of level spacing during the gate implementation. We also show the implementation of the Deutsch-Jozsa algorithm. Finally, we discuss the imperfections due to cavity decay and the possibility of physical implementation of our scheme.

  18. Fast multidimensional ensemble empirical mode decomposition for the analysis of big spatio-temporal datasets.

    PubMed

    Wu, Zhaohua; Feng, Jiaxin; Qiao, Fangli; Tan, Zhe-Min

    2016-04-13

    In this big data era, it is more urgent than ever to solve two major issues: (i) fast data transmission methods that can facilitate access to data from non-local sources and (ii) fast and efficient data analysis methods that can reveal the key information in the available data for particular purposes. Although approaches to these two questions differ significantly across fields, the common part must involve data compression techniques and a fast algorithm. This paper introduces the recently developed adaptive and spatio-temporally local analysis method, namely the fast multidimensional ensemble empirical mode decomposition (MEEMD), for the analysis of large spatio-temporal datasets. The original MEEMD uses ensemble empirical mode decomposition to decompose the time series at each spatial grid point and then pieces together the temporal-spatial evolution of climate variability and change on naturally separated timescales, which is computationally expensive. By taking advantage of the high efficiency of the principal component analysis/empirical orthogonal function (PCA/EOF) representation of spatio-temporally coherent data, we design a lossy compression method for climate data to facilitate its non-local transmission. We also explain the basic principle behind the fast MEEMD: decomposing principal components instead of the original grid-wise time series to speed up the computation. Using a typical climate dataset as an example, we demonstrate that our newly designed methods can (i) compress data with a compression rate of one to two orders of magnitude, and (ii) speed up the MEEMD algorithm by one to two orders of magnitude. © 2016 The Authors.
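
    A hedged sketch of the paper's central trick, decomposing a few leading principal components instead of every grid-point time series, using NumPy for the PCA/EOF step and the third-party PyEMD package (an assumption, not the authors' code) for the EMD step.

        import numpy as np
        from PyEMD import EMD   # third-party package; pip install EMD-signal

        # Toy climate-like field: 1000 time steps x 500 spatial grid points.
        rng = np.random.default_rng(6)
        t = np.linspace(0, 50, 1000)
        patterns = rng.standard_normal((3, 500))                  # spatial patterns (EOFs)
        series = np.stack([np.sin(0.5 * t), np.sin(2.5 * t), rng.standard_normal(t.size)])
        X = series.T @ patterns + 0.1 * rng.standard_normal((1000, 500))

        # Step 1: PCA/EOF compression - keep only K leading components.
        U, s, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
        K = 3
        pcs, eofs = U[:, :K] * s[:K], Vt[:K]

        # Step 2: decompose K principal components instead of 500 grid series.
        emd = EMD()
        imfs_per_pc = [emd.emd(pcs[:, k]) for k in range(K)]

        # IMF-like modes at any grid point follow by recombining with the EOFs,
        # at roughly K/500 of the cost of grid-wise EMD.
        print([im.shape for im in imfs_per_pc])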

  19. A modified multiscale peak alignment method combined with trilinear decomposition to study the volatile/heat-labile components in Ligusticum chuanxiong Hort - Cyperus rotundus rhizomes by HS-SPME-GC/MS.

    PubMed

    He, Min; Yan, Pan; Yang, Zhi-Yu; Zhang, Zhi-Min; Yang, Tian-Biao; Hong, Liang

    2018-03-15

    Head-space solid-phase micro-extraction (HS-SPME) coupled with gas chromatography/mass spectrometry (GC/MS) was used to determine the volatile/heat-labile components in Ligusticum chuanxiong Hort - Cyperus rotundus rhizomes. Facing co-eluting peaks in k samples, a trilinear structure was reconstructed to obtain the second-order advantage. Correcting the retention time (RT) shift across the multi-channel detection signals of different samples is vital for maintaining the trilinear structure, so a modified multiscale peak alignment (mMSPA) method is proposed in this paper. The peak position and peak width of a representative ion profile are first detected by mMSPA using the continuous wavelet transform with the Haar wavelet as the mother wavelet (Haar CWT). Then, the raw shift is estimated by fast Fourier transform (FFT) cross-correlation (a sketch of this step follows below). To obtain the optimal shift, Haar CWT is again used to detect subtle deviations, which are incorporated into the calculation. To ensure that there is no peak-shape alteration, the alignment is performed in local domains of the data matrices, and all data points in the peak zone are moved via linear interpolation in the non-peak parts. Finally, the chemical components of interest in Ligusticum chuanxiong Hort - Cyperus rotundus rhizomes were analyzed by HS-SPME-GC/MS with mMSPA and alternating trilinear decomposition (ATLD) resolution. The resulting concentration variation between the herbs and their pharmaceutical products can provide a scientific basis for establishing quality standards for traditional Chinese medicines. Copyright © 2018 Elsevier B.V. All rights reserved.
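
    A minimal NumPy sketch of the FFT cross-correlation step used for the raw retention-time shift (toy Gaussian peaks; the CWT-based refinement and local interpolation steps of mMSPA are not reproduced).

        import numpy as np

        def fft_shift_estimate(ref, sig):
            """Estimate how many points `sig` is delayed relative to `ref`
            using FFT-based cross-correlation (zero-padded to avoid wrap-around)."""
            n = len(ref) + len(sig) - 1
            corr = np.fft.ifft(np.fft.fft(sig, n) * np.conj(np.fft.fft(ref, n))).real
            lag = int(np.argmax(corr))
            return lag - n if lag > n // 2 else lag   # map large lags to negative shifts

        # Toy chromatographic peak delayed by 7 points in the second run.
        x = np.arange(500)
        ref = np.exp(-0.5 * ((x - 200) / 8.0) ** 2)
        sig = np.exp(-0.5 * ((x - 207) / 8.0) ** 2)
        d = fft_shift_estimate(ref, sig)              # -> 7
        aligned = np.roll(sig, -d)                    # undo the raw shift
        print(d)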

  20. Multispectral image fusion for illumination-invariant palmprint recognition

    PubMed Central

    Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied. PMID:28558064

  1. Multispectral image fusion for illumination-invariant palmprint recognition.

    PubMed

    Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng

    2017-01-01

    Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied.

  2. A Fantastic Decomposition: Unsettling the Fury of Having to Wait

    ERIC Educational Resources Information Center

    Holmes, Rachel

    2012-01-01

    This article draws on data from a single element of a larger project, which focused on the issue of how children develop a reputation as "naughty" in the early years classroom. The author draws attention to the (in)corporeal (re)formation of the line in school, undertaking a decomposition of the topological spaces of research/art/education. She…

  3. Rapid surface defect detection based on singular value decomposition using steel strips as an example

    NASA Astrophysics Data System (ADS)

    Sun, Qianlai; Wang, Yin; Sun, Zhiyi

    2018-05-01

    For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. In the SVD-based method, each image to be inspected was projected onto its own first left and right singular vectors; if there were defects in the image, there were sharp changes in the projections, so the defects could be determined and located according to sharp changes in the projections of each image to be inspected. This method was simple and practical, but the SVD had to be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it did not have a significant advantage in time consumption over image segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image acquired under the same conditions as the images to be inspected is taken as the reference image. The singular vectors of each image to be inspected are replaced by the singular vectors of the reference image, and SVD is performed only once, for the reference image, off-line before defect detection, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
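
    A minimal NumPy sketch of the improved scheme: SVD is run once, off-line, on a defect-free reference, and each inspected image is then only projected onto the reference's first singular vectors, with sharp jumps in the projections flagging candidate defect rows and columns (toy images; thresholds illustrative).

        import numpy as np

        # Reference: a defect-free strip image (toy: smooth two-way intensity ramp).
        rng = np.random.default_rng(7)
        rows, cols = 128, 256
        ref = np.outer(np.linspace(1, 2, rows), np.linspace(1, 1.5, cols))
        ref += 0.01 * rng.standard_normal((rows, cols))

        # Off-line, once: first singular vectors of the reference image.
        U, s, Vt = np.linalg.svd(ref, full_matrices=False)
        u1, v1 = U[:, 0], Vt[0]

        def detect(img, k=8.0):
            """Project `img` onto the reference singular vectors; flag sharp jumps
            in the projections as candidate defect rows/columns."""
            out = {}
            for name, p in (("rows", img @ v1), ("cols", u1 @ img)):
                d = np.abs(np.diff(p))
                dev = np.abs(d - np.median(d))
                out[name] = np.where(dev > k * np.median(dev) + 1e-12)[0]
            return out

        # Inspected image with a small bright defect patch.
        img = ref.copy()
        img[60:66, 120:128] += 2.0
        print(detect(img))      # jumps near rows 60/65 and column 120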

  4. Adaptive truncation of matrix decompositions and efficient estimation of NMR relaxation distributions

    NASA Astrophysics Data System (ADS)

    Teal, Paul D.; Eccles, Craig

    2015-04-01

    The two most successful methods of estimating the distribution of nuclear magnetic resonance relaxation times from two-dimensional data are data compression followed by application of the Butler-Reeds-Dawson algorithm, and a primal-dual interior-point method using preconditioned conjugate gradients. Both of these methods have previously been presented using a truncated singular value decomposition of matrices representing the exponential kernel. In this paper it is shown that other matrix factorizations are applicable to each of these algorithms, and that these illustrate the different fundamental principles behind the operation of the algorithms. These are the rank-revealing QR (RRQR) factorization and the LDL factorization with diagonal pivoting, also known as the Bunch-Kaufman-Parlett factorization. It is shown that both algorithms can be improved by adapting the truncation as the optimization progresses, improving the accuracy as the optimal value is approached. A variation on the interior-point method, viz. the use of a barrier function instead of the primal-dual approach, is found to offer considerable improvement in terms of speed and reliability. A third type of algorithm, related to the fast iterative shrinkage-thresholding algorithm (FISTA), is applied to the problem. This method can be efficiently formulated without the use of a matrix decomposition.
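
    A minimal NumPy sketch of the shared first step both algorithms use, compressing the exponential kernel and the measured decay with a truncated SVD before inversion; the grids, tolerance, and toy distribution are illustrative.

        import numpy as np

        # Exponential kernel: echo times t (rows) vs candidate T2 values (columns).
        t = np.linspace(1e-3, 1.0, 2000)            # seconds
        T2 = np.logspace(-3, 0, 100)                # relaxation-time grid
        K = np.exp(-t[:, None] / T2[None, :])       # 2000 x 100, severely ill-conditioned

        # Truncated SVD: keep singular values above a relative tolerance.
        U, s, Vt = np.linalg.svd(K, full_matrices=False)
        r = int(np.sum(s > 1e-6 * s[0]))            # effective rank (adaptive truncation)
        Ur, sr, Vr = U[:, :r], s[:r], Vt[:r]

        # Compress a measured decay: 2000 samples -> r coefficients.
        f_true = np.exp(-0.5 * ((np.log10(T2) + 1.5) / 0.2) ** 2)   # toy T2 distribution
        data = K @ f_true + 1e-3 * np.random.default_rng(8).standard_normal(t.size)
        data_c = Ur.T @ data                        # compressed data
        K_c = np.diag(sr) @ Vr                      # r x 100 compressed kernel
        # The inversion (BRD, interior point, FISTA, ...) now runs on K_c and data_c.
        print(K.shape, "->", K_c.shape, "effective rank:", r)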

  5. Grouping individual independent BOLD effects: a new way to ICA group analysis

    NASA Astrophysics Data System (ADS)

    Duann, Jeng-Ren; Jung, Tzyy-Ping; Sejnowski, Terrence J.; Makeig, Scott

    2009-04-01

    A new group analysis method to summarize task-related BOLD responses based on independent component analysis (ICA) is presented. In contrast to the previously proposed group ICA (gICA) method, which first combines multi-subject fMRI data in either the temporal or the spatial domain and applies a single ICA decomposition to the combined data to extract the task-related BOLD effects, the method presented here applies ICA decomposition to each individual subject's fMRI data to first find that subject's independent BOLD effects. The task-related independent BOLD component is then selected from the components resulting from the single-subject ICA decomposition and grouped across subjects to derive the group inference. In this new ICA group analysis (ICAga) method, one does not need to assume that the task-related BOLD time courses are identical across brain areas and subjects, as is done in the grand ICA decomposition of spatially concatenated fMRI data. Neither does one need to assume that, after spatial normalization, voxels at the same coordinates represent exactly the same functional or structural brain anatomy across subjects. These two assumptions have been problematic given recent BOLD activation evidence. Further, since the independent BOLD effects are obtained from each individual subject, the ICAga method can better account for individual differences in the task-related BOLD effects, unlike the gICA approach, in which the task-related BOLD effects can only be accounted for by a single unified BOLD model across subjects. As a result, the newly proposed ICAga method better fits the task-related BOLD effects at the individual level and thus allows more appropriate multi-subject BOLD effects to be grouped in the group analysis.

  6. Recovery and removal of nutrients from swine wastewater by using a novel integrated reactor for struvite decomposition and recycling

    PubMed Central

    Huang, Haiming; Xiao, Dean; Liu, Jiahui; Hou, Li; Ding, Li

    2015-01-01

    In the present study, struvite decomposition was performed by air stripping for ammonia release, and a novel integrated reactor was designed for the simultaneous removal and recovery of total ammonia-nitrogen (TAN) and total orthophosphate (PT) from swine wastewater by internal struvite recycling. Decomposition of struvite by air stripping was found to be feasible. Without supplementation with additional magnesium and phosphate sources, the removal ratio of TAN from synthetic wastewater was maintained at >80% by recycling the struvite decomposition product, formed under optimal conditions, six times. Continuous operation of the integrated reactor indicated that approximately 91% of TAN and 97% of PT in the swine wastewater could be removed and recovered by the proposed recycling process with the supplementation of bittern. Economic evaluation of the proposed system showed that the struvite precipitation cost can be reduced by approximately 54% by adopting the proposed recycling process in comparison with no recycling. PMID:25960246

  7. Electrochemical and Infrared Absorption Spectroscopy Detection of SF₆ Decomposition Products.

    PubMed

    Dong, Ming; Zhang, Chongxing; Ren, Ming; Albarracín, Ricardo; Ye, Rixin

    2017-11-15

    Sulfur hexafluoride (SF₆) gas-insulated electrical equipment is widely used in high-voltage (HV) and extra-high-voltage (EHV) power systems. Partial discharge (PD) and local heating can occur in the electrical equipment because of insulation faults, which results in SF₆ decomposition and ultimately generates several types of decomposition products. These SF₆ decomposition products can be qualitatively and quantitatively detected with relevant detection methods, and such detection contributes to diagnosing the internal faults and evaluating the security risks of the equipment. At present, multiple detection methods exist for analyzing the SF₆ decomposition products, and electrochemical sensing (ES) and infrared (IR) spectroscopy are well suited for application in online detection. In this study, the combination of ES with IR spectroscopy is used to detect SF₆ gas decomposition. First, the characteristics of these two detection methods are studied, and the data analysis matrix is established. Then, a qualitative and quantitative analysis ES-IR model is established by adopting a two-step approach. A SF₆ decomposition detector is designed and manufactured by combining an electrochemical sensor and IR spectroscopy technology. The detector is used to detect SF₆ gas decomposition and is verified to reliably and accurately detect the gas components and concentrations.

  8. Preparation, crystal structure, thermal decomposition, quantum chemical calculations on [K(ZTO)·H2O]∞ and its ligand ZTO

    NASA Astrophysics Data System (ADS)

    Ma, Cong; Huang, Jie; Ma, Hai-Xia; Xu, Kang-Zhen; Lv, Xing-Qiang; Song, Ji-Rong; Zhao, Ning-Ning; He, Jian-Yun; Zhao, Yi-Sha

    2013-03-01

    A novel potassium complex has been synthesized and characterized under non-isothermal conditions by DSC and TG-DTG methods. The ligand 4,4-azo-1,2,4-triazol-5-one (ZTO) has the molecular formula C4H4N8O2. The thermodynamic parameters, HOMO-LUMO energy gap, total energy and molecular electrostatic potential (MEP) of ZTO are computed by the density functional theory (DFT) B3LYP method with the 6-311G basis set. In the coordination polymer, with the ligand anion (ZTO-) as space linker, two types of potassium atom centers are joined together to form a three-dimensional framework. The enthalpy, apparent activation energy and pre-exponential factor of the second exothermic decomposition reaction are 85.43 kJ·mol⁻¹, 414.4 kJ·mol⁻¹ and 10^37.92 s⁻¹, respectively. The critical temperature of thermal explosion (Tb) for [K(ZTO)·H2O]∞ is 275.08 °C. [K(ZTO)·H2O]∞ CCDC: 902339.

  9. Reconciling Mechanistic Hypotheses About Rhizosphere Priming

    NASA Astrophysics Data System (ADS)

    Cheng, W.

    2016-12-01

    Rhizosphere priming of soil organic matter decomposition has emerged as a key mechanism regulating the biogeochemical cycling of carbon, nitrogen and other elements from local to global scales. The level of the rhizosphere priming effect on decomposition rates can be comparable to the levels of control exerted by soil temperature and moisture conditions. However, our understanding of the mechanisms responsible for rhizosphere priming remains rudimentary and controversial. The following individual hypotheses have been postulated in the published literature: (1) microbial activation, (2) microbial community succession, (3) aggregate turnover, (4) nitrogen mining, (5) nutrient competition, (6) preferential substrate utilization, and (7) drying-rewetting. Meshing these hypotheses with existing empirical evidence tends to support a general conclusion: each of these 7 hypotheses represents an aspect of the overall rhizosphere priming complex, while the relative contribution of each individual aspect varies depending on the actual plant-soil conditions across time and space.

  10. Kinematics of reflections in subsurface offset and angle-domain image gathers

    NASA Astrophysics Data System (ADS)

    Dafni, Raanan; Symes, William W.

    2018-05-01

    Seismic migration in the angle domain generates multiple images of the earth's interior in which reflection takes place at different scattering angles. Mechanically, the angle-dependent reflection is restricted to happen instantaneously and at a fixed point in space: an incident wave hits a discontinuity in the subsurface medium and instantly generates a scattered wave at the same common point of interaction. Alternatively, the angle-domain image may be associated with space-shift (subsurface-offset) extended migration that artificially splits the reflection geometry: incident and scattered waves interact at some offset distance. The geometric differences between the two approaches lead to contradictory angle-domain behaviour and unlike kinematic descriptions. We present a phase-space depiction of migration methods extended by this peculiar subsurface-offset split and stress its profound dissimilarity. In spite of being in radical contradiction with the general physics, the subsurface offset reveals a link to some valuable angle-domain quantities via post-migration transformations. The angle quantities are indicated by the direction normal to the subsurface-offset extended image. They specifically define the local dip and scattering angles if the velocity at the split reflection coordinates is the same for the incident and scattered wave pairs. Otherwise, the reflector normal is not a bisector of the opening angle, but of the corresponding slowness vectors. This evidence, together with the distinct geometry configuration, fundamentally differentiates the angle-domain decomposition based on the subsurface-offset split from the conventional decomposition at a common reflection point. An asymptotic simulation of angle-domain moveout curves in layered media exposes the notion of split versus common-reflection-point geometry. Traveltime inversion methods that involve subsurface-offset extended migration must accommodate the split geometry in the inversion scheme for robust and successful convergence to the optimal velocity model.

  11. Aeroelastic System Development Using Proper Orthogonal Decomposition and Volterra Theory

    NASA Technical Reports Server (NTRS)

    Lucia, David J.; Beran, Philip S.; Silva, Walter A.

    2003-01-01

    This research combines Volterra theory and proper orthogonal decomposition (POD) into a hybrid methodology for reduced-order modeling of aeroelastic systems. The outcome of the method is a set of linear ordinary differential equations (ODEs) describing the modal amplitudes associated with both the structural modes and the POD basis functions for the fluid. For this research, the structural modes are sine waves of varying frequency, and the Volterra-POD approach is applied to the fluid dynamics equations. The structural modes are treated as forcing terms which are impulsed as part of the fluid model realization. Using this approach, structural and fluid operators are coupled into a single aeroelastic operator. This coupling converts a free-boundary fluid problem into an initial value problem, while preserving the parameter (or parameters) of interest for sensitivity analysis. The approach is applied to an elastic panel in supersonic crossflow. The hybrid Volterra-POD approach provides a low-order fluid model in state-space form. The linear fluid model is tightly coupled with a nonlinear panel model using an implicit integration scheme. The resulting aeroelastic model provides correct limit-cycle oscillation prediction over a wide range of panel dynamic pressure values. Time integration of the reduced-order aeroelastic model is four orders of magnitude faster than the high-order solution procedure developed for this research using traditional fluid and structural solvers.
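
    A minimal NumPy sketch of the POD step: the left singular vectors of a snapshot matrix give the fluid basis onto which the reduced-order ODEs are projected (toy snapshot data, not the paper's solver output).

        import numpy as np

        # Snapshot matrix: each column is the flow state at one time instant.
        rng = np.random.default_rng(9)
        n_dof, n_snap = 5000, 200
        modes = np.linalg.qr(rng.standard_normal((n_dof, 3)))[0]   # hidden structures
        amps = np.stack([np.sin(0.1 * np.arange(n_snap) * (k + 1)) for k in range(3)])
        snapshots = modes @ amps + 1e-3 * rng.standard_normal((n_dof, n_snap))

        # POD: left singular vectors of the (mean-subtracted) snapshot matrix.
        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
        energy = np.cumsum(s**2) / np.sum(s**2)
        r = int(np.searchsorted(energy, 0.999)) + 1   # modes capturing 99.9% energy
        Phi = U[:, :r]                                # POD basis

        # Reduced coordinates: project any state onto the basis.
        a = Phi.T @ (snapshots - mean)                # r x n_snap modal amplitudes
        print("kept", r, "of", n_snap, "modes")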

  12. Performance of a plastic-wrapped composting system for biosecure emergency disposal of disease-related swine mortalities.

    PubMed

    Glanville, Thomas D; Ahn, Heekwon; Akdeniz, Neslihan; Crawford, Benjamin P; Koziel, Jacek A

    2016-02-01

    A passively-ventilated plastic-wrapped composting system initially developed for biosecure disposal of poultry mortalities caused by avian influenza was adapted and tested to assess its potential as an emergency disposal option for disease-related swine mortalities. Fresh air was supplied through perforated plastic tubing routed through the base of the compost pile. The combined air inlet and top vent area is ⩽∼1% of the gas exchange surface of a conventional uncovered windrow. Parameters evaluated included: (1) spatial and temporal variations in matrix moisture content (m.c.), leachate production, and matrix O2 concentrations; (2) extent of soft tissue decomposition; and (3) internal temperature and the success rate in achieving USEPA time/temperature (T) criteria for pathogen reduction. Six envelope materials (wood shavings, corn silage, ground cornstalks, ground oat straw, ground soybean straw, or ground alfalfa hay) and two initial m.c.'s (15-30% w.b. for materials stored indoors, and 45-65% w.b. to simulate materials exposed to precipitation) were tested to determine their effect on performance parameters (1-3). Results of triple-replicated field trials showed that the composting system did not accumulate moisture despite the 150 kg carcass water load (65% of the 225 kg total carcass mass) released during decomposition. Mean compost m.c. in the carcass layer declined by ∼7 percentage points during the 8-week trials, and leachate accumulation was rare. Matrix O2 concentrations for all materials other than silage were ⩾10% using the equivalent of 2 m inlet/vent spacing. In silage, O2 dropped below 5% in some cases even when 0.5 m inlet/vent spacing was used. Eight-week soft tissue decomposition ranged from 87% in cornstalks to 72% in silage. Success rates for achievement of USEPA Class B time/temperature criteria ranged from 91% for silage to 33-57% for the other materials. Companion laboratory biodegradation studies suggest that Class B success rates can be improved by slightly increasing the envelope material m.c. Moistening initially dry (15% m.c.) envelope materials to 35% m.c. nearly doubled their heat production potential, boosting it to levels ⩾ those of silage. The 'contradictory' silage test results, showing high temperatures paired with slow soft tissue degradation, are likely due to this material's high density, low gas permeability and low water vapor loss. While slow decomposition typically suggests low microbial activity and heat production, it does not rule out high internal temperatures if the heat produced is conserved. Occasional short-term odor releases during the first 2 weeks of composting were associated with top-to-bottom gas flow, which is contrary to the bottom-to-top flow typically observed in conventional compost piles. In cases where biosecurity concerns are paramount, results of this study show the plastic-wrapped passively-ventilated composting method to have good potential for above-ground swine mortality disposal. Copyright © 2015 Elsevier Ltd. All rights reserved.

  13. mm_par2.0: An object-oriented molecular dynamics simulation program parallelized using a hierarchical scheme with MPI and OPENMP

    NASA Astrophysics Data System (ADS)

    Oh, Kwang Jin; Kang, Ji Hoon; Myung, Hun Joo

    2012-02-01

    We have revised the general-purpose parallel molecular dynamics simulation program mm_par using object-oriented programming. We parallelized the revised version using a hierarchical scheme in order to utilize more processors for a given system size. Benchmark results are presented here.

    New version program summary
    Program title: mm_par2.0
    Catalogue identifier: ADXP_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXP_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC license, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 2 390 858
    No. of bytes in distributed program, including test data, etc.: 25 068 310
    Distribution format: tar.gz
    Programming language: C++
    Computer: Any system operated by Linux or Unix
    Operating system: Linux
    Classification: 7.7
    External routines: Wrappers are provided for the FFTW [1] and Intel MKL [2] FFT routines; the Numerical Recipes [3] FFT, random number generator, and eigenvalue solver routines; the SPRNG [4] and Mersenne Twister [5] random number generators; and a space-filling curve routine.
    Catalogue identifier of previous version: ADXP_v1_0
    Journal reference of previous version: Comput. Phys. Comm. 174 (2006) 560
    Does the new version supersede the previous version?: Yes
    Nature of problem: Structural, thermodynamic, and dynamical properties of fluids and solids from microscopic to mesoscopic scales.
    Solution method: Molecular dynamics simulation in the NVE, NVT, and NPT ensembles; Langevin dynamics simulation; dissipative particle dynamics simulation.
    Reasons for new version: First, object-oriented programming has been used, which is known to be open for extension and closed for modification, and is also better for maintenance. Second, version 1.0 was based on atom decomposition and domain decomposition schemes [6] for parallelization. However, atom decomposition is not popular due to its poor scalability, and while the domain decomposition scheme scales better, it is limited in utilizing a large number of cores on recent petascale computers by the requirement that the domain size be larger than the potential cutoff distance. To go beyond this limitation, a hierarchical parallelization scheme has been adopted in this new version and implemented using MPI [7] and OpenMP [8].
    Summary of revisions: (1) Object-oriented programming has been used. (2) A hierarchical parallelization scheme has been adopted. (3) The SPME routine has been fully parallelized with a parallel 3D FFT using a volumetric decomposition scheme [9].
    Acknowledgments: K.J.O. thanks Mr. Seung Min Lee for useful discussions on programming and debugging.
    Running time: Running time depends on the system size and the methods used. For a test system containing a protein (PDB id: 5DHFR) with the CHARMM22 force field [10] and 7023 TIP3P [11] waters in a simulation box of dimensions 62.23 Å × 62.23 Å × 62.23 Å, the benchmark results are given in Fig. 1. Here the potential cutoff distance was set to 12 Å and a switching function was applied from 10 Å for the force calculation in real space. For the SPME [12] calculation, K1, K2, and K3 were set to 64 and the interpolation order was set to 4. The fast Fourier transforms used the Intel MKL library. All bonds including hydrogen atoms were constrained using the SHAKE/RATTLE algorithms [13,14]. The code was compiled using Intel compiler version 11.1 and mvapich2 version 1.5. Fig. 2 shows performance gains from a CUDA-enabled version [15] of mm_par for the 5DHFR simulation in water on an Intel Core2Quad 2.83 GHz CPU and a GeForce GTX 580. Even though mm_par2.0 is not yet ported to GPU, these data are useful for estimating mm_par2.0 performance on GPU.
    [Figure captions: Fig. 1: Timing results for 1000 MD steps; 1, 2, 4, and 8 denote the number of OpenMP threads. Fig. 2: Timing results for 1000 MD steps from double-precision simulation on CPU, single-precision simulation on GPU, and double-precision simulation on GPU.]
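
    As an aside on the domain-size constraint discussed above, the following is a minimal, single-process Python sketch of the cell-list idea underlying domain decomposition (it is not code from mm_par2.0): when each cell is at least one cutoff distance wide, every interaction partner of a particle lies in its own cell or one of the 26 adjacent cells, which is precisely the property that bounds how small spatial domains can be made. The box size and cutoff echo the benchmark above; everything else is illustrative.

        import numpy as np

        def build_cells(positions, box, cutoff):
            """Assign particles to cubic cells whose side is >= cutoff."""
            n = max(1, int(box // cutoff))      # cells per box edge
            side = box / n                      # cell side, >= cutoff by construction
            idx = (positions // side).astype(int) % n
            cells = {}
            for i, key in enumerate(map(tuple, idx)):
                cells.setdefault(key, []).append(i)
            return cells, n

        def neighbor_pairs(positions, box, cutoff):
            """Yield particle pairs within cutoff, searching only adjacent cells."""
            cells, n = build_cells(positions, box, cutoff)
            for (ix, iy, iz), members in cells.items():
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            other = cells.get(((ix+dx) % n, (iy+dy) % n, (iz+dz) % n), [])
                            for i in members:
                                for j in other:
                                    if i < j:
                                        r = positions[i] - positions[j]
                                        r -= box * np.round(r / box)   # minimum image
                                        if r @ r < cutoff**2:
                                            yield i, j

        rng = np.random.default_rng(0)
        pos = rng.uniform(0.0, 62.23, size=(500, 3))    # box edge from the benchmark
        print(len(list(neighbor_pairs(pos, 62.23, 12.0))), "pairs within 12 A cutoff")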

  14. Bianchi identities and the automatic conservation of energy-momentum and angular momentum in general-relativistic field theories

    NASA Astrophysics Data System (ADS)

    Hehl, Friedrich W.; McCrea, J. Dermott

    1986-03-01

    Automatic conservation of energy-momentum and angular momentum is guaranteed in a gravitational theory if, via the field equations, the conservation laws for the material currents are reduced to the contracted Bianchi identities. We first execute an irreducible decomposition of the Bianchi identities in a Riemann-Cartan space-time. Then, starting from a Riemannian space-time with or without torsion, we determine those gravitational theories which have automatic conservation: general relativity and the Einstein-Cartan-Sciama-Kibble theory, both with cosmological constant, and the nonviable pseudoscalar model. The Poincaré gauge theory of gravity, like gauge theories of internal groups, has no automatic conservation in the sense defined above. This does not lead to any difficulties in principle. Analogies to 3-dimensional continuum mechanics are stressed throughout the article.
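
    For reference, the mechanism invoked here can be stated compactly in standard notation (a textbook identity, not a result specific to this article): the twice-contracted Bianchi identity makes the Einstein tensor divergence-free, so the field equations hand conservation over to the material currents,

        \nabla_\mu R^{\mu}{}_{\nu} = \tfrac{1}{2}\,\nabla_\nu R
        \;\Longrightarrow\;
        \nabla_\mu G^{\mu\nu} = 0,
        \qquad
        G^{\mu\nu} + \Lambda g^{\mu\nu} = \kappa\, T^{\mu\nu}
        \;\Longrightarrow\;
        \nabla_\mu T^{\mu\nu} = 0,

    where the cosmological term drops out because \nabla_\mu g^{\mu\nu} = 0.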

  15. Clustering of Multi-Temporal Fully Polarimetric L-Band SAR Data for Agricultural Land Cover Mapping

    NASA Astrophysics Data System (ADS)

    Tamiminia, H.; Homayouni, S.; Safari, A.

    2015-12-01

    Recently, the unique capabilities of Polarimetric Synthetic Aperture Radar (PolSAR) sensors have made them an important and efficient tool for natural resources and environmental applications, such as land cover and crop classification. The aim of this paper is to classify multi-temporal fully polarimetric SAR data over an agricultural region using a kernel-based fuzzy C-means clustering method. This method starts by transforming the input data into a higher-dimensional space using kernel functions and then clusters them in that feature space. The feature space, due to its inherent properties, is able to take into account the nonlinear and complex nature of polarimetric data. Several polarimetric SAR features were extracted using target decomposition algorithms; features from the Cloude-Pottier, Freeman-Durden, and Yamaguchi algorithms were used as inputs for the clustering. The method was applied to multi-temporal UAVSAR L-band images acquired over an agricultural area near Winnipeg, Canada, during June and July 2012. The results demonstrate the efficiency of this approach relative to classical methods. In addition, using multi-temporal data in the clustering process helped to investigate the phenological cycle of the plants and significantly improved the performance of agricultural land cover mapping.
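
    To make the clustering step concrete, here is a minimal kernel fuzzy C-means sketch in Python: a generic textbook formulation with a Gaussian kernel, not the authors' implementation. In the setting of the paper, each row of X would hold per-pixel features from the Cloude-Pottier, Freeman-Durden, and Yamaguchi decompositions.

        import numpy as np

        def kfcm(X, n_clusters, m=2.0, sigma=1.0, n_iter=100, seed=0):
            """Kernel fuzzy C-means; returns memberships U and prototypes V."""
            rng = np.random.default_rng(seed)
            V = X[rng.choice(len(X), n_clusters, replace=False)]   # init prototypes
            for _ in range(n_iter):
                # Gaussian kernel between every sample and every prototype
                d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=2)
                K = np.exp(-d2 / (2.0 * sigma**2))
                # Kernel-induced distance is 2*(1 - K); update fuzzy memberships
                w = np.maximum(1.0 - K, 1e-12) ** (-1.0 / (m - 1.0))
                U = w / w.sum(axis=1, keepdims=True)
                # Kernel-weighted fuzzy means give the new prototypes
                W = (U ** m) * K
                V = (W.T @ X) / W.sum(axis=0)[:, None]
            return U, V

        # Toy usage: three well-separated groups in a 2-D feature space
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(c, 0.3, size=(50, 2)) for c in (0.0, 2.0, 4.0)])
        U, V = kfcm(X, n_clusters=3)
        labels = U.argmax(axis=1)              # hard labels from fuzzy memberships
        print(V.round(2))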

  16. Spectral estimation—What is new? What is next?

    NASA Astrophysics Data System (ADS)

    Tary, Jean Baptiste; Herrera, Roberto Henry; Han, Jiajun; van der Baan, Mirko

    2014-12-01

    Spectral estimation, and the corresponding time-frequency representation of nonstationary signals, is a cornerstone of geophysical signal processing and interpretation. The last 10-15 years have seen the development of many new high-resolution decompositions that are often fundamentally different from Fourier and wavelet transforms. Conventional techniques, like the short-time Fourier transform and the continuous wavelet transform, show limitations in resolution (localization) owing to the trade-off between time and frequency localization and to smearing caused by the finite length of their templates. Well-known techniques, like autoregressive methods and basis pursuit, and recently developed techniques, such as empirical mode decomposition and the synchrosqueezing transform, can achieve higher time-frequency localization due to reduced spectral smearing and leakage. We first review the theory of various established and novel techniques, pointing out their assumptions, adaptability, and expected time-frequency localization. We then illustrate their performance on a provided collection of benchmark signals, including a laughing voice, a volcano tremor, a microseismic event, and a global earthquake, with the intention of providing a fair comparison of the pros and cons of each method. Finally, their outcomes are discussed and possible avenues for improvement are proposed.
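
    The fixed trade-off that these conventional transforms suffer from is easy to demonstrate. The sketch below (NumPy/SciPy; the window lengths are arbitrary illustrative choices) analyses the same chirp with a short and a long STFT window:

        import numpy as np
        from scipy.signal import stft

        fs = 1000.0                                    # sampling rate, Hz
        t = np.arange(0, 2.0, 1.0 / fs)
        x = np.cos(2 * np.pi * (50 * t + 40 * t**2))   # chirp sweeping 50 -> 210 Hz

        for nperseg in (64, 512):
            f, tt, Z = stft(x, fs=fs, nperseg=nperseg)
            df, dt = f[1] - f[0], tt[1] - tt[0]
            print(f"window={nperseg:4d}: freq bin = {df:6.2f} Hz, time step = {dt:5.3f} s")

    A short window localizes well in time but smears frequency; a long window does the reverse. High-resolution methods such as the synchrosqueezing transform aim to sharpen this picture beyond the fixed Fourier trade-off.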

  17. Modeling diffusion control on organic matter decomposition in unsaturated soil pore space

    NASA Astrophysics Data System (ADS)

    Vogel, Laure; Pot, Valérie; Garnier, Patricia; Vieublé-Gonod, Laure; Nunan, Naoise; Raynaud, Xavier; Chenu, Claire

    2014-05-01

    Soil organic matter decomposition is affected by soil structure and water content, but field and laboratory studies of this issue reach highly variable conclusions. The variability could be explained by the discrepancy between the scale at which the key processes occur and the scale at which measurements are made. We argue that the physical and biological interactions driving carbon transformation dynamics are best understood at the pore scale. Because of the spatial disconnection between carbon sources and decomposers, the latter rely on nutrient transport unless they can actively move. In the hydrostatic case, diffusion in the soil pore space is thus thought to regulate biological activity. In unsaturated conditions, the heterogeneous distribution of water modifies diffusion pathways and rates, and thus affects the diffusion control on decomposition. Innovative imaging and modeling tools offer new means to address these effects. We have developed a new model based on the coupling of a 3D lattice-Boltzmann model (LBM) with an adimensional decomposition module, and designed scenarios to study the impact of physical properties (geometry, saturation, decomposer position) and biological properties on decomposition. The model was applied to porous media with various morphologies. We selected three cubic images, 100 voxels per side, from µCT-scanned images of an undisturbed soil sample at 68 µm resolution. We used the LBM to perform phase separation and obtained water-phase distributions at equilibrium for different saturation indices. We then simulated the diffusion of a simple soluble substrate (glucose) and its consumption by bacteria. The same mass of glucose was added as a pulse at the beginning of all simulations. Bacteria were placed in a few voxels, either regularly spaced or concentrated close to or far from the glucose source. We modulated the physiological features of the decomposers in order to weigh them against abiotic conditions. We identified several effects creating unequal substrate-access conditions for decomposers and hence inducing contrasting decomposition kinetics: the position of bacteria relative to the substrate diffusion pathways, the diffusion rate and hydraulic connectivity between bacteria and the substrate source, and local substrate enrichment due to restricted mass transfer. Physiological characteristics had a strong impact on decomposition only when glucose diffused easily, not when diffusion limitation prevailed. This suggests that carbon dynamics should not be considered to derive from decomposer physiology alone but rather from the interaction of biological and physical processes at the microscale.
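
    A drastically simplified 1-D stand-in for this setup already shows the qualitative effect: substrate diffusing through pore water while a fixed bacterial colony consumes it with Monod-type uptake. The sketch uses explicit finite differences in place of the authors' 3-D lattice-Boltzmann model, and all parameter values are illustrative assumptions.

        import numpy as np

        def run(colony_cell, steps=50000):
            n, dx, dt = 100, 1e-4, 0.1       # cells, cell size (m), time step (s)
            D = 6.7e-10                      # glucose diffusivity in water, m^2/s
            vmax, km = 1e-4, 1e-3            # Monod uptake parameters (illustrative)
            assert D * dt / dx**2 < 0.5      # explicit-scheme stability condition
            c = np.zeros(n); c[0] = 1.0      # glucose pulse at one end (arb. units)
            consumed = 0.0
            for _ in range(steps):
                lap = np.zeros(n)            # discrete Laplacian, no-flux walls
                lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2
                c += dt * D * lap
                uptake = min(dt * vmax * c[colony_cell] / (km + c[colony_cell]),
                             c[colony_cell])
                c[colony_cell] -= uptake     # colony consumes what diffusion delivers
                consumed += uptake
            return consumed

        for cell in (10, 80):                # colony close to vs far from the source
            print(f"colony at cell {cell:2d}: consumed = {run(cell):.4f}")

    The colony far from the source decomposes far less substrate over the same time, mirroring the diffusion control described above.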

  18. GC × GC-TOFMS and supervised multivariate approaches to study human cadaveric decomposition olfactive signatures.

    PubMed

    Stefanuto, Pierre-Hugues; Perrault, Katelynn A; Stadler, Sonja; Pesesse, Romain; LeBlanc, Helene N; Forbes, Shari L; Focant, Jean-François

    2015-06-01

    In forensic thanato-chemistry, the understanding of the process of soft tissue decomposition is still limited. A better understanding of the decomposition process and the characterization of the associated volatile organic compounds (VOCs) can help to improve the training of victim recovery (VR) canines, which are used to search for trapped victims in natural disasters or to locate corpses during criminal investigations. The complexity of the matrices and the dynamic nature of the process require comprehensive analytical methods for its investigation. Moreover, variability in the environment and between individuals creates additional difficulties in terms of normalization. The resolution of the complex mixture of VOCs emitted by a decaying corpse can be improved using comprehensive two-dimensional gas chromatography (GC × GC) compared with classical one-dimensional gas chromatography (1DGC). This study combines the analytical advantages of GC × GC coupled to time-of-flight mass spectrometry (TOFMS) with the robustness of supervised multivariate statistics to investigate the VOC profile of human remains during the early stages of decomposition. Various supervised multivariate approaches are compared for interpreting the large data set. Moreover, the early decomposition stages of pig carcasses (typically used as human surrogates in field studies) are also monitored to obtain a direct comparison of the two VOC profiles and to estimate the robustness of this human decomposition analog model. In this research, we demonstrate that the pig and human decomposition processes can be described by the same trends for the major compounds produced during the early stages of soft tissue decomposition.
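
    As an illustration of the supervised multivariate step, the sketch below runs a PLS-DA-style analysis (scikit-learn's PLSRegression fitted against 0/1 class labels) on a synthetic peak-area matrix standing in for aligned GC × GC-TOFMS features; none of this is the study's data or code.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n_per_class, n_features = 20, 200      # samples per class, aligned VOC peaks
        X_a = rng.lognormal(0.0, 1.0, (n_per_class, n_features))
        X_b = rng.lognormal(0.0, 1.0, (n_per_class, n_features))
        X_b[:, :10] *= 3.0                     # pretend ten VOCs differ between groups

        X = np.log1p(np.vstack([X_a, X_b]))    # log-transform skewed peak areas
        y = np.array([0] * n_per_class + [1] * n_per_class)

        pls = PLSRegression(n_components=2)
        pls.fit(X, y)
        scores = pls.transform(X)              # sample scores on the latent variables
        pred = (pls.predict(X).ravel() > 0.5).astype(int)
        print("training accuracy:", (pred == y).mean())
        # Candidate marker compounds: features with the largest weights on LV1
        print("top features:", np.argsort(np.abs(pls.x_weights_[:, 0]))[::-1][:5])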

  19. Effect of Copper Oxide, Titanium Dioxide, and Lithium Fluoride on the Thermal Behavior and Decomposition Kinetics of Ammonium Nitrate

    NASA Astrophysics Data System (ADS)

    Vargeese, Anuj A.; Mija, S. J.; Muralidharan, Krishnamurthi

    2014-07-01

    Ammonium nitrate (AN) was crystallized along with copper oxide, titanium dioxide, and lithium fluoride. Thermal kinetic constants for the decomposition reaction of the samples were calculated by model-free (Friedman's differential and Vyazovkin's nonlinear integral) and model-fitting (Coats-Redfern) methods. To determine the decomposition mechanism, 12 solid-state mechanisms were tested using the Coats-Redfern method; the results show that the decomposition mechanism for all samples is the contracting-cylinder mechanism. The phase behavior of the samples was evaluated by differential scanning calorimetry (DSC), and structural properties were determined by X-ray powder diffraction (XRPD). The results indicate that copper oxide modifies the phase-transition behavior and can catalyze AN decomposition, whereas LiF inhibits AN decomposition and TiO2 has no influence on the rate of decomposition. Possible explanations for these results are discussed. Supplementary materials are available for this article; see the publisher's online edition of the Journal of Energetic Materials for the free supplemental file.
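
    The Coats-Redfern step lends itself to a compact worked example. The sketch below synthesizes conversion data from the contracting-cylinder model, g(alpha) = 1 - (1 - alpha)^(1/2), and recovers the activation energy from the slope of ln[g(alpha)/T^2] versus 1/T. The kinetic parameters are invented for illustration, not taken from the paper.

        import numpy as np

        R = 8.314                                  # gas constant, J/(mol K)
        Ea, A, beta = 120e3, 5e9, 10.0 / 60.0      # J/mol, 1/s, heating rate in K/s

        # Conversion from the Coats-Redfern approximation for the R2 model:
        # g(alpha) = (A R T^2)/(beta Ea) * exp(-Ea / (R T))
        T = np.linspace(450.0, 560.0, 60)          # temperature, K
        g = (A * R * T**2) / (beta * Ea) * np.exp(-Ea / (R * T))
        alpha = 1.0 - (1.0 - np.clip(g, 0.0, 0.999))**2

        # Fit: slope of ln[g(alpha)/T^2] against 1/T equals -Ea/R
        g_fit = 1.0 - np.sqrt(1.0 - alpha)
        mask = (alpha > 0.05) & (alpha < 0.95)     # mid-conversion range only
        slope, _ = np.polyfit(1.0 / T[mask], np.log(g_fit[mask] / T[mask]**2), 1)
        print(f"recovered Ea = {-slope * R / 1e3:.1f} kJ/mol (true 120)")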

  20. Development Of Polarimetric Decomposition Techniques For Indian Forest Resource Assessment Using Radar Imaging Satellite (Risat-1) Images

    NASA Astrophysics Data System (ADS)

    Sridhar, J.

    2015-12-01

    The focus of this work is to examine polarimetric decomposition techniques for forest resource assessment, primarily the Pauli decomposition and the Sphere Di-Plane Helix (SDH) decomposition. The data processing steps adopted are pre-processing (geometric correction and radiometric calibration), speckle reduction, image decomposition, and image classification. Initially, to classify forest regions, unsupervised classification was applied to determine the unknown classes; the K-means clustering method was observed to give better results than the ISODATA method. Using the algorithms developed for Radar Tools, the decomposition and classification code was written in Interactive Data Language (IDL) and applied to a RISAT-1 image of the Mysore-Mandya region of Karnataka, India. This region was chosen for studying forest vegetation and consists of agricultural lands, water, and hilly regions. Polarimetric SAR data possess high potential for classification of the Earth's surface. After applying the decomposition techniques, classification was performed by selecting regions of interest, and the post-classification overall accuracy was observed to be higher for the SDH-decomposed image, as SDH operates on individual pixels on a coherent basis and utilizes the complete intrinsic coherent nature of polarimetric SAR data, making it particularly suited to the analysis of high-resolution SAR data. The Pauli decomposition represents all the polarimetric information in a single SAR image, but the resulting image is difficult to interpret. The SDH decomposition appears to produce better results and easier interpretation than the Pauli decomposition, although further quantification and analysis of this point are in progress. Comparing polarimetric decomposition techniques with evolutionary classification techniques will be the scope of future work.
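
    Of the two techniques compared, the Pauli decomposition is simple enough to state in a few lines: each pixel's scattering matrix is projected onto the Pauli basis and the three channel magnitudes are mapped to RGB. The sketch below is a generic formulation applied to synthetic data, not RISAT-1-specific code.

        import numpy as np

        def pauli_rgb(S_hh, S_hv, S_vv):
            """Pauli components per pixel, scaled into an RGB display image."""
            a = (S_hh + S_vv) / np.sqrt(2)      # odd-bounce (surface) scattering
            b = (S_hh - S_vv) / np.sqrt(2)      # even-bounce (double-bounce)
            c = np.sqrt(2) * S_hv               # cross-pol (volume) scattering
            rgb = np.stack([np.abs(b), np.abs(c), np.abs(a)], axis=-1)
            return rgb / rgb.max()              # normalize for display

        # Synthetic complex scattering channels standing in for a PolSAR scene
        rng = np.random.default_rng(0)
        S_hh, S_hv, S_vv = (rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
                            for _ in range(3))
        img = pauli_rgb(S_hh, S_hv, S_vv)
        print(img.shape)                        # (64, 64, 3), ready for display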
