Sample records for reduced basis method

  1. A Reduced Basis Method with Exact-Solution Certificates for Symmetric Coercive Equations

    DTIC Science & Technology

    2013-11-06

    the energy associated with the infinite-dimensional weak solution of parametrized symmetric coercive partial differential equations with piecewise...builds bounds with respect to the infinite-dimensional weak solution, aims to entirely remove the issue of the “truth” within the certified reduced basis...framework. We in particular introduce a reduced basis method that provides rigorous upper and lower bounds

  2. Model's sparse representation based on reduced mixed GMsFE basis methods

    NASA Astrophysics Data System (ADS)

    Jiang, Lijian; Li, Qiuqi

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach for solving the flow problem on a coarse grid and obtaining the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.
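
    The greedy selection loop sketched in this abstract is easy to illustrate. Below is a minimal sketch (mine, not the authors' implementation), assuming a user-supplied training set, a hypothetical error indicator, and a full-order solver; each pass adds the snapshot of the currently worst-approximated parameter and orthonormalizes it into the basis.

      import numpy as np

      def greedy_basis(training_set, error_indicator, solve_fom, n_basis):
          """Greedy reduced-basis construction (schematic).

          training_set    : parameter samples scattered in the parameter space
          error_indicator : callable (mu, basis) -> estimated error (hypothetical)
          solve_fom       : callable mu -> full-order snapshot (1D array)
          """
          basis, samples = [], []
          for _ in range(n_basis):
              # pick the parameter the current basis approximates worst
              errors = [error_indicator(mu, basis) for mu in training_set]
              mu_star = training_set[int(np.argmax(errors))]
              samples.append(mu_star)
              # Gram-Schmidt the new snapshot against the existing basis
              v = solve_fom(mu_star)
              for b in basis:
                  v = v - (b @ v) * b
              basis.append(v / np.linalg.norm(v))
          return samples, np.column_stack(basis)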

  3. Model's sparse representation based on reduced mixed GMsFE basis methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Qiuqi, E-mail: qiuqili@hnu.edu.cn

    2017-06-01

    In this paper, we propose a model's sparse representation based on reduced mixed generalized multiscale finite element (GMsFE) basis methods for elliptic PDEs with random inputs. A typical application for the elliptic PDEs is the flow in heterogeneous random porous media. The mixed generalized multiscale finite element method (GMsFEM) is an accurate and efficient approach for solving the flow problem on a coarse grid and obtaining the velocity with local mass conservation. When the inputs of the PDEs are parameterized by random variables, the GMsFE basis functions usually depend on the random parameters. This leads to a large number of degrees of freedom for the mixed GMsFEM and substantially impacts the computational efficiency. In order to overcome the difficulty, we develop reduced mixed GMsFE basis methods such that the multiscale basis functions are independent of the random parameters and span a low-dimensional space. To this end, a greedy algorithm is used to find a set of optimal samples from a training set scattered in the parameter space. Reduced mixed GMsFE basis functions are constructed based on the optimal samples using two optimal sampling strategies: basis-oriented cross-validation and proper orthogonal decomposition. Although the dimension of the space spanned by the reduced mixed GMsFE basis functions is much smaller than the dimension of the original full-order model, the online computation still depends on the number of coarse degrees of freedom. To significantly improve the online computation, we integrate the reduced mixed GMsFE basis methods with sparse tensor approximation and obtain a sparse representation for the model's outputs. The sparse representation is very efficient for evaluating the model's outputs for many instances of parameters. To illustrate the efficacy of the proposed methods, we present a few numerical examples for elliptic PDEs with multiscale and random inputs. In particular, a two-phase flow model in random porous media is simulated by the proposed sparse representation method.

  4. Development and comparison of advanced reduced-basis methods for the transient structural analysis of unconstrained structures

    NASA Technical Reports Server (NTRS)

    Mcgowan, David M.; Bostic, Susan W.; Camarda, Charles J.

    1993-01-01

    The development of two advanced reduced-basis methods, the force derivative method and the Lanczos method, and two widely used modal methods, the mode displacement method and the mode acceleration method, for transient structural analysis of unconstrained structures is presented. Two example structural problems are studied: an undamped, unconstrained beam subject to a uniformly distributed load which varies as a sinusoidal function of time, and an undamped high-speed civil transport aircraft subject to a normal wing-tip load which varies as a sinusoidal function of time. These example problems are used to verify the methods and to compare the relative effectiveness of each of the four reduced-basis methods for performing transient structural analyses on unconstrained structures. The methods are verified against a solution obtained by directly integrating the full system of equations of motion, and they are compared using the number of basis vectors required to obtain a desired level of accuracy and the associated computational times as comparison criteria.
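
    All four methods share the same projection step: the physical equations of motion are projected onto a small set of basis vectors and the reduced system is integrated in time. A minimal mode-displacement-style sketch (mine, not the paper's code), assuming mass and stiffness matrices M and K and a load history f(t):

      import numpy as np
      from scipy.integrate import solve_ivp

      def reduced_transient_response(M, K, f, Phi, t_span, t_eval):
          """Transient response using a reduced basis Phi (columns = basis vectors)."""
          Mr = Phi.T @ M @ Phi                    # reduced mass matrix
          Kr = Phi.T @ K @ Phi                    # reduced stiffness matrix
          n = Phi.shape[1]

          def rhs(t, y):
              q, qd = y[:n], y[n:]
              qdd = np.linalg.solve(Mr, Phi.T @ f(t) - Kr @ q)
              return np.concatenate([qd, qdd])

          sol = solve_ivp(rhs, t_span, np.zeros(2 * n), t_eval=t_eval)
          return Phi @ sol.y[:n]                  # back-transform to physical DOFs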

  5. An intertwined method for making low-rank, sum-of-product basis functions that makes it possible to compute vibrational spectra of molecules with more than 10 atoms

    PubMed Central

    Thomas, Phillip S.

    2017-01-01

    We propose a method for solving the vibrational Schrödinger equation with which one can compute spectra for molecules with more than ten atoms. It uses sum-of-product (SOP) basis functions stored in a canonical polyadic tensor format and generated by evaluating matrix-vector products. By doing a sequence of partial optimizations, in each of which the factors in a SOP basis function for a single coordinate are optimized, the rank of the basis functions is reduced as matrix-vector products are computed. This is better than using an alternating least squares method to reduce the rank, as is done in the reduced-rank block power method. Partial optimization is better because it speeds up the calculation by about an order of magnitude and allows one to significantly reduce the memory cost. We demonstrate the effectiveness of the new method by computing vibrational spectra of two molecules, ethylene oxide (C2H4O) and cyclopentadiene (C5H6), with 7 and 11 atoms, respectively. PMID:28571348

  6. An intertwined method for making low-rank, sum-of-product basis functions that makes it possible to compute vibrational spectra of molecules with more than 10 atoms.

    PubMed

    Thomas, Phillip S; Carrington, Tucker

    2017-05-28

    We propose a method for solving the vibrational Schrödinger equation with which one can compute spectra for molecules with more than ten atoms. It uses sum-of-product (SOP) basis functions stored in a canonical polyadic tensor format and generated by evaluating matrix-vector products. By doing a sequence of partial optimizations, in each of which the factors in a SOP basis function for a single coordinate are optimized, the rank of the basis functions is reduced as matrix-vector products are computed. This is better than using an alternating least squares method to reduce the rank, as is done in the reduced-rank block power method. Partial optimization is better because it speeds up the calculation by about an order of magnitude and allows one to significantly reduce the memory cost. We demonstrate the effectiveness of the new method by computing vibrational spectra of two molecules, ethylene oxide (C2H4O) and cyclopentadiene (C5H6), with 7 and 11 atoms, respectively.

  7. Reduced basis ANOVA methods for partial differential equations with high-dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liao, Qifeng, E-mail: liaoqf@shanghaitech.edu.cn; Lin, Guang, E-mail: guanglin@purdue.edu

    2016-07-15

    In this paper we present a reduced basis ANOVA approach for partial differential equations (PDEs) with random inputs. The ANOVA method combined with stochastic collocation methods provides model reduction in high-dimensional parameter space through decomposing high-dimensional inputs into unions of low-dimensional inputs. In this work, to further reduce the computational cost, we investigate spatial low-rank structures in the ANOVA-collocation method, and develop efficient spatial model reduction techniques using hierarchically generated reduced bases. We present a general mathematical framework of the methodology, validate its accuracy and demonstrate its efficiency with numerical experiments.

  8. Adaptive h-refinement for reduced-order models

    DOE PAGES

    Carlberg, Kevin T.

    2014-11-05

    Our work presents a method to adaptively refine reduced-order models a posteriori without requiring additional full-order-model solves. The technique is analogous to mesh-adaptive h-refinement: it enriches the reduced-basis space online by ‘splitting’ a given basis vector into several vectors with disjoint support. The splitting scheme is defined by a tree structure constructed offline via recursive k-means clustering of the state variables using snapshot data. This method identifies the vectors to split online using a dual-weighted-residual approach that aims to reduce error in an output quantity of interest. The resulting method generates a hierarchy of subspaces online without requiring large-scale operations or full-order-model solves. Furthermore, it enables the reduced-order model to satisfy any prescribed error tolerance regardless of its original fidelity, as a completely refined reduced-order model is mathematically equivalent to the original full-order model. Experiments on a parameterized inviscid Burgers equation highlight the ability of the method to capture phenomena (e.g., moving shocks) not contained in the span of the original reduced basis.
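
    The offline clustering and online splitting step can be pictured with a small sketch (mine, under stated assumptions): state variables are clustered by k-means on their snapshot histories, and a basis vector is split into children supported on one cluster each, so the children's span contains the parent and refinement can only add resolution.

      import numpy as np
      from sklearn.cluster import KMeans

      def split_basis_vector(v, snapshots, n_children=2):
          """Split basis vector v into children with disjoint supports.

          snapshots : (n_dof, n_snap) array; each row is one state variable's
                      history, used to group state variables offline.
          """
          labels = KMeans(n_clusters=n_children, n_init=10).fit_predict(snapshots)
          children = []
          for c in range(n_children):
              child = np.where(labels == c, v, 0.0)   # restrict v to one cluster
              norm = np.linalg.norm(child)
              if norm > 0:
                  children.append(child / norm)
          return children   # their span contains v, so accuracy is never lost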

  9. Comparison of different eigensolvers for calculating vibrational spectra using low-rank, sum-of-product basis functions

    NASA Astrophysics Data System (ADS)

    Leclerc, Arnaud; Thomas, Phillip S.; Carrington, Tucker

    2017-08-01

    Vibrational spectra and wavefunctions of polyatomic molecules can be calculated at low memory cost using low-rank sum-of-product (SOP) decompositions to represent basis functions generated using an iterative eigensolver. Using a SOP tensor format does not determine the iterative eigensolver. The choice of the iterative eigensolver is limited by the need to restrict the rank of the SOP basis functions at every stage of the calculation. We have adapted, implemented and compared different reduced-rank algorithms based on standard iterative methods (block-Davidson algorithm, Chebyshev iteration) to calculate vibrational energy levels and wavefunctions of the 12-dimensional acetonitrile molecule. The effect of using low-rank SOP basis functions on the different methods is analysed and the numerical results are compared with those obtained with the reduced-rank block power method. Relative merits of the different algorithms are presented, showing that the advantage of using a more sophisticated method, although mitigated by the use of reduced-rank SOP functions, is noticeable in terms of CPU time.

  10. Reduced Order Methods for Prediction of Thermal-Acoustic Fatigue

    NASA Technical Reports Server (NTRS)

    Przekop, A.; Rizzi, S. A.

    2004-01-01

    The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing random nonlinear vibrations in the presence of thermal loading. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.
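
    The final damage-accumulation step in this pipeline is simple arithmetic. A sketch (mine), assuming rainflow-counted stress ranges and a hypothetical S-N power law N(S) = A * S**(-m); failure is predicted when the accumulated damage reaches one:

      import numpy as np

      def miners_rule_life(stress_ranges, A=1e12, m=3.0):
          """Linear (Miner's rule) damage accumulation over one load block.

          stress_ranges : stress ranges from rainflow counting of one block
          A, m          : assumed S-N curve constants, N(S) = A * S**(-m)
          Returns the estimated number of load blocks to failure.
          """
          S = np.asarray(stress_ranges, dtype=float)
          cycles_to_failure = A * S ** (-m)
          damage_per_block = np.sum(1.0 / cycles_to_failure)
          return 1.0 / damage_per_block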

  11. A Nonlinear Reduced Order Method for Prediction of Acoustic Fatigue

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.

    2006-01-01

    The goal of this investigation is to assess the quality of high-cycle-fatigue life estimation via a reduced order method, for structures undergoing geometrically nonlinear random vibrations. Modal reduction is performed with several different suites of basis functions. After numerically solving the reduced order system equations of motion, the physical displacement time history is obtained by an inverse transformation and stresses are recovered. Stress ranges obtained through the rainflow counting procedure are used in a linear damage accumulation method to yield fatigue estimates. Fatigue life estimates obtained using various basis functions in the reduced order method are compared with those obtained from numerical simulation in physical degrees-of-freedom.

  12. The reduced basis method for the electric field integral equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fares, M., E-mail: fares@cerfacs.f; Hesthaven, J.S., E-mail: Jan_Hesthaven@Brown.ed; Maday, Y., E-mail: maday@ann.jussieu.f

    We introduce the reduced basis method (RBM) as an efficient tool for parametrized scattering problems in computational electromagnetics for problems where field solutions are computed using a standard Boundary Element Method (BEM) for the parametrized electric field integral equation (EFIE). This combination enables an algorithmic cooperation which results in a two-step procedure. The first step consists of a computationally intense assembling of the reduced basis, that needs to be effected only once. In the second step, we compute output functionals of the solution, such as the Radar Cross Section (RCS), independently of the dimension of the discretization space, for many different parameter values in a many-query context at very little cost. Parameters include the wavenumber, the angle of the incident plane wave and its polarization.
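
    The two-step structure amounts to an offline basis construction followed by a small dense Galerkin solve online. A schematic sketch (mine, not the authors' code); in the actual method the parameter dependence is treated so that the reduced system is assembled at a cost independent of the BEM dimension, which this naive version does not show:

      import numpy as np

      def offline_basis(snapshots):
          """Step 1 (expensive, done once): orthonormalize snapshot solutions."""
          Q, _ = np.linalg.qr(snapshots)          # columns span the snapshots
          return Q

      def online_solve(V, assemble_system, assemble_rhs, params):
          """Step 2 (per parameter): Galerkin projection onto the reduced basis."""
          Z = assemble_system(params)             # BEM matrix at this parameter
          b = assemble_rhs(params)
          Zr = V.conj().T @ Z @ V                 # small N_rb x N_rb system
          br = V.conj().T @ b
          return V @ np.linalg.solve(Zr, br)      # reduced-basis EFIE solution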

  13. Fast online generalized multiscale finite element method using constraint energy minimization

    NASA Astrophysics Data System (ADS)

    Chung, Eric T.; Efendiev, Yalchin; Leung, Wing Tat

    2018-02-01

    Local multiscale methods often construct multiscale basis functions in the offline stage without taking into account input parameters, such as source terms, boundary conditions, and so on. These basis functions are then used in the online stage with a specific input parameter to solve the global problem at a reduced computational cost. Recently, online approaches have been introduced, where multiscale basis functions are adaptively constructed in some regions to reduce the error significantly. In multiscale methods, it is desirable to need only 1-2 iterations to reduce the error to a desired threshold. Using the Generalized Multiscale Finite Element Framework [10], it was shown that by choosing a sufficient number of offline basis functions, the error reduction can be made independent of physical parameters, such as scales and contrast. In this paper, our goal is to improve this. Using our recently proposed approach [4] and a special online basis construction in oversampled regions, we show that the error reduction can be made sufficiently large by appropriately selecting oversampling regions. Our numerical results show that one can achieve a three-orders-of-magnitude error reduction, which is better than our previous methods. We also develop an adaptive algorithm that enriches the basis in selected regions with large residuals. In our adaptive method, we show that the convergence rate can be determined by a user-defined parameter, and we confirm this by numerical simulations. The analysis of the method is presented.

  14. A model and variance reduction method for computing statistical outputs of stochastic elliptic partial differential equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vidal-Codina, F., E-mail: fvidal@mit.edu; Nguyen, N.C., E-mail: cuongng@mit.edu; Giles, M.B., E-mail: mike.giles@maths.ox.ac.uk

    We present a model and variance reduction method for the fast and reliable computation of statistical outputs of stochastic elliptic partial differential equations. Our method consists of three main ingredients: (1) the hybridizable discontinuous Galerkin (HDG) discretization of elliptic partial differential equations (PDEs), which allows us to obtain high-order accurate solutions of the governing PDE; (2) the reduced basis method for a new HDG discretization of the underlying PDE to enable real-time solution of the parameterized PDE in the presence of stochastic parameters; and (3) a multilevel variance reduction method that exploits the statistical correlation among the different reduced basis approximations and the high-fidelity HDG discretization to accelerate the convergence of the Monte Carlo simulations. The multilevel variance reduction method provides efficient computation of the statistical outputs by shifting most of the computational burden from the high-fidelity HDG approximation to the reduced basis approximations. Furthermore, we develop a posteriori error estimates for our approximations of the statistical outputs. Based on these error estimates, we propose an algorithm for optimally choosing both the dimensions of the reduced basis approximations and the sizes of Monte Carlo samples to achieve a given error tolerance. We provide numerical examples to demonstrate the performance of the proposed method.
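
    Ingredient (3) is the standard multilevel telescoping estimator, here applied over a hierarchy of reduced-basis fidelities. A minimal sketch (mine):

      import numpy as np

      def multilevel_estimate(sample_correction, n_samples):
          """Multilevel Monte Carlo mean of an output s.

          sample_correction : callable l -> one draw of (s_l - s_{l-1});
                              level 0 returns a draw of s_0 itself.
          n_samples         : samples per level; cheap coarse levels get many,
                              the expensive high-fidelity level gets few.
          Uses the telescoping sum E[s_L] = E[s_0] + sum_l E[s_l - s_{l-1}].
          """
          total = 0.0
          for level, n in enumerate(n_samples):
              draws = [sample_correction(level) for _ in range(n)]
              total += np.mean(draws)
          return total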

  15. On the theoretical link between LLL-reduction and Lambda-decorrelation

    NASA Astrophysics Data System (ADS)

    Lannes, A.

    2013-04-01

    The LLL algorithm, introduced by Lenstra et al. (Math Ann 261:515-534, 1982), plays a key role in many fields of applied mathematics. In particular, it is used as an effective numerical tool for preconditioning the integer least-squares problems arising in high-precision geodetic positioning and Global Navigation Satellite Systems (GNSS). In 1992, Teunissen developed a method for solving these nearest-lattice point (NLP) problems. This method is referred to as Lambda (for Least-squares AMBiguity Decorrelation Adjustment). The preconditioning stage of Lambda corresponds to its decorrelation algorithm. From an epistemological point of view, the latter was devised through an innovative statistical approach completely independent of the LLL algorithm. Recent papers pointed out some similarities between the LLL algorithm and the Lambda-decorrelation algorithm. We try to clarify this point in the paper. We first introduce a parameter measuring the orthogonality defect of the integer basis in which the NLP problem is solved, the LLL-reduced basis of the LLL algorithm, or the Λ-basis of the Lambda method. With regard to this problem, the potential qualities of these bases can then be compared. The Λ-basis is built by working at the level of the variance-covariance matrix of the float solution, while the LLL-reduced basis is built by working at the level of its inverse. As a general rule, the orthogonality defect of the Λ-basis is greater than that of the corresponding LLL-reduced basis; these bases are however very close to one another. To specify this tight relationship, we present a method that provides the dual LLL-reduced basis of a given Λ-basis. As a consequence of this basic link, all the recent developments made on the LLL algorithm can be applied to the Lambda-decorrelation algorithm. This point is illustrated in a concrete manner: we present a parallel Λ-type decorrelation algorithm derived from the parallel LLL algorithm of Luo and Qiao (Proceedings of the fourth international C* conference on computer science and software engineering. ACM Int Conf P Series. ACM Press, pp 93-101, 2012).
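
    The orthogonality-defect parameter used for this comparison has a one-line computation: the product of the basis-vector lengths divided by the lattice volume. A sketch (mine):

      import numpy as np

      def orthogonality_defect(B):
          """Orthogonality defect of a lattice basis (columns of B).

          Equals 1 exactly when the basis is orthogonal and grows as the
          basis skews; smaller is better for nearest-lattice-point search.
          """
          lengths = np.linalg.norm(B, axis=0)
          volume = np.sqrt(np.linalg.det(B.T @ B))
          return np.prod(lengths) / volume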

  16. Alternative Modal Basis Selection Procedures For Reduced-Order Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2012-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of a computationally taxing full-order analysis in physical degrees of freedom are taken as the benchmark for comparison with the results from the three reduced-order analyses. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.
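
    The proper-orthogonal-decomposition step shared by all three procedures reduces to an SVD of the snapshot matrix. A minimal sketch (mine), with a hypothetical energy-based truncation criterion:

      import numpy as np

      def pod_basis(snapshots, energy=0.999):
          """POD basis from response snapshots (one snapshot per column).

          Keeps the fewest left singular vectors whose squared singular
          values capture the requested fraction of total snapshot energy.
          """
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          captured = np.cumsum(s**2) / np.sum(s**2)
          k = int(np.searchsorted(captured, energy)) + 1
          return U[:, :k]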

  17. Toward a New Method of Decoding Algebraic Codes Using Groebner Bases

    DTIC Science & Technology

    1993-10-01

    variables over GF(2m). A celebrated algorithm by Buchberger produces a reduced Groebner basis of that ideal. It turns out that, since the common roots of...all the polynomials in the ideal are a set of isolated points, this reduced Groebner basis is in triangular form, and the univariate polynomial in that

  18. A Unified Development of Basis Reduction Methods for Rotor Blade Analysis

    NASA Technical Reports Server (NTRS)

    Ruzicka, Gene C.; Hodges, Dewey H.; Rutkowski, Michael (Technical Monitor)

    2001-01-01

    The axial foreshortening effect plays a key role in rotor blade dynamics, but approximating it accurately in reduced basis models has long posed a difficult problem for analysts. Recently, though, several methods have been shown to be effective in obtaining accurate, reduced basis models for rotor blades. These methods are the axial elongation method, the mixed finite element method, and the nonlinear normal mode method. The main objective of this paper is to demonstrate the close relationships among these methods, which are seemingly disparate at first glance. First, the difficulties inherent in obtaining reduced basis models of rotor blades are illustrated by examining the modal reduction accuracy of several blade analysis formulations. It is shown that classical, displacement-based finite elements are ill-suited for rotor blade analysis because they can't accurately represent the axial strain in modal space, and that this problem may be solved by employing the axial force as a variable in the analysis. It is shown that the mixed finite element method is a convenient means for accomplishing this, and the derivation of a mixed finite element for rotor blade analysis is outlined. A shortcoming of the mixed finite element method is that it increases the number of variables in the analysis. It is demonstrated that this problem may be rectified by solving for the axial displacements in terms of the axial forces and the bending displacements. Effectively, this procedure constitutes a generalization of the widely used axial elongation method to blades of arbitrary topology. The procedure is developed first for a single element, and then extended to an arbitrary assemblage of elements of arbitrary type. Finally, it is shown that the generalized axial elongation method is essentially an approximate solution for an invariant manifold that can be used as the basis for a nonlinear normal mode.

  19. Fast online inverse scattering with Reduced Basis Method (RBM) for a 3D phase grating with specific line roughness

    NASA Astrophysics Data System (ADS)

    Kleemann, Bernd H.; Kurz, Julian; Hetzler, Jochen; Pomplun, Jan; Burger, Sven; Zschiedrich, Lin; Schmidt, Frank

    2011-05-01

    Finite element methods (FEM) for the rigorous electromagnetic solution of Maxwell's equations are known to be very accurate. They possess a high convergence rate for the determination of near-field and far-field quantities of scattering and diffraction processes of light with structures having feature sizes in the range of the light wavelength. We are using FEM software for 3D scatterometric diffraction calculations allowing the application of a brilliant and extremely fast solution method: the reduced basis method (RBM). The RBM constructs a reduced model of the scattering problem from precalculated snapshot solutions, guided self-adaptively by an error estimator. Using RBM, we achieve an accuracy of about 10^-4 relative to the direct problem with only 35 precalculated snapshots as the reduced basis dimension. This speeds up the calculation of diffraction amplitudes by a factor of about 1000 compared to the conventional solution of Maxwell's equations by FEM. This allows us to reconstruct the three geometrical parameters of our phase grating from "measured" scattering data in a 3D parameter manifold online, in about a minute, with the full FEM accuracy available. Additionally, a sensitivity analysis or the choice of robust measurement strategies, for example, can be done online in a few minutes.

  20. Reduced-cost linear-response CC2 method based on natural orbitals and natural auxiliary functions

    PubMed Central

    Mester, Dávid

    2017-01-01

    A reduced-cost density fitting (DF) linear-response second-order coupled-cluster (CC2) method has been developed for the evaluation of excitation energies. The method is based on the simultaneous truncation of the molecular orbital (MO) basis and the auxiliary basis set used for the DF approximation. For the reduction of the size of the MO basis, state-specific natural orbitals (NOs) are constructed for each excited state using the average of the second-order Møller–Plesset (MP2) and the corresponding configuration interaction singles with perturbative doubles [CIS(D)] density matrices. After removing the NOs of low occupation number, natural auxiliary functions (NAFs) are constructed [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], and the NAF basis is also truncated. Our results show that, for a triple-zeta basis set, about 60% of the virtual MOs can be dropped, while the size of the fitting basis can be reduced by a factor of five. This results in a dramatic reduction of the computational costs of the solution of the CC2 equations, which are in our approach about as expensive as the evaluation of the MP2 and CIS(D) density matrices. All in all, an average speedup of more than an order of magnitude can be achieved at the expense of a mean absolute error of 0.02 eV in the calculated excitation energies compared to the canonical CC2 results. Our benchmark calculations demonstrate that the new approach enables the efficient computation of CC2 excitation energies for excited states of all types of medium-sized molecules composed of up to 100 atoms with triple-zeta quality basis sets. PMID:28527453

  1. Using a pruned, nondirect product basis in conjunction with the multi-configuration time-dependent Hartree (MCTDH) method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wodraszka, Robert, E-mail: Robert.Wodraszka@chem.queensu.ca; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    In this paper, we propose a pruned, nondirect product multi-configuration time-dependent Hartree (MCTDH) method for solving the Schrödinger equation. MCTDH uses optimized 1D basis functions, called single particle functions, but the size of the standard direct product MCTDH basis scales exponentially with D, the number of coordinates. We compare the pruned approach to standard MCTDH calculations for basis sizes small enough that the latter are possible and demonstrate that pruning the basis reduces the CPU cost of computing vibrational energy levels of acetonitrile (D = 12) by more than two orders of magnitude. Using the pruned method, it is possible to do calculations with larger bases, for which the cost of standard MCTDH calculations is prohibitive. Pruning the basis complicates the evaluation of matrix-vector products. In this paper, they are done term by term for a sum-of-products Hamiltonian. When no attempt is made to exploit the fact that matrices representing some of the factors of a term are identity matrices, one needs only to carefully constrain indices. In this paper, we develop new ideas that make it possible to further reduce the CPU time by exploiting identity matrices.
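
    A common way to prune a direct-product basis is to keep only multi-indices whose sum stays under a budget. A toy sketch (mine, illustration only; real MCTDH pruning conditions are more elaborate, and for D = 12 one would generate the set recursively rather than filter the full product):

      from itertools import product

      def pruned_indices(n_max, D, budget):
          """Multi-indices (n_1, ..., n_D) with 0 <= n_i < n_max and sum <= budget."""
          return [idx for idx in product(range(n_max), repeat=D)
                  if sum(idx) <= budget]

      # Small example: the 3D direct product has 5**3 = 125 indices,
      # while a budget of 4 keeps only 35 of them.
      print(len(pruned_indices(5, 3, 4)))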

  2. Alternative Modal Basis Selection Procedures for Nonlinear Random Response Simulation

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Guo, Xinyun; Rizzi, Stephen A.

    2010-01-01

    Three procedures to guide selection of an efficient modal basis in a nonlinear random response analysis are examined. One method is based only on proper orthogonal decomposition, while the other two additionally involve smooth orthogonal decomposition. Acoustic random response problems are employed to assess the performance of the three modal basis selection approaches. A thermally post-buckled beam exhibiting snap-through behavior, a shallowly curved arch in the auto-parametric response regime and a plate structure are used as numerical test articles. The results of the three reduced-order analyses are compared with the results of the computationally taxing simulation in the physical degrees of freedom. For the cases considered, all three methods are shown to produce modal bases resulting in accurate and computationally efficient reduced-order nonlinear simulations.

  3. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    NASA Astrophysics Data System (ADS)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-05-01

    Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media, by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  4. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media, by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  5. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-04

    Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media, by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  6. Decoy-state quantum key distribution with biased basis choice

    PubMed Central

    Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng

    2013-01-01

    We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with an optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From simulation results taking statistical fluctuations into account, we find that in a typical experimental setup, the proposed scheme can increase the key rate by at least 45% compared to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of signal states. PMID:23948999
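
    One reason a biased basis choice raises the key rate is arithmetic: the fraction of pulses where sender and receiver happen to use the same basis grows from 1/2 toward 1. A schematic illustration (mine; the paper's asymmetric protocol and its security analysis involve much more than this sifting factor):

      def sifting_rate(p_z):
          """Probability that both parties choose the same basis.

          Unbiased BB84 (p_z = 0.5) keeps half of the pulses after sifting;
          a strongly biased choice keeps almost all of them.
          """
          p_x = 1.0 - p_z
          return p_z**2 + p_x**2

      print(sifting_rate(0.5))   # 0.5
      print(sifting_rate(0.9))   # 0.82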

  7. Decoy-state quantum key distribution with biased basis choice.

    PubMed

    Wei, Zhengchao; Wang, Weilong; Zhang, Zhen; Gao, Ming; Ma, Zhi; Ma, Xiongfeng

    2013-01-01

    We propose a quantum key distribution scheme that combines a biased basis choice with the decoy-state method. In this scheme, Alice sends all signal states in the Z basis and decoy states in the X and Z bases with certain probabilities, and Bob measures received pulses with an optimal basis choice. This scheme simplifies the system and reduces the random number consumption. From simulation results taking statistical fluctuations into account, we find that in a typical experimental setup, the proposed scheme can increase the key rate by at least 45% compared to the standard decoy-state scheme. In the postprocessing, we also apply a rigorous method to upper bound the phase error rate of the single-photon components of signal states.

  8. Model Order Reduction for the fast solution of 3D Stokes problems and its application in geophysical inversion

    NASA Astrophysics Data System (ADS)

    Ortega Gelabert, Olga; Zlotnik, Sergio; Afonso, Juan Carlos; Díez, Pedro

    2017-04-01

    The determination of the present-day physical state of the thermal and compositional structure of the Earth's lithosphere and sub-lithospheric mantle is one of the main goals in modern lithospheric research. All these data are essential to build Earth's evolution models and to reproduce many geophysical observables (e.g. elevation, gravity anomalies, travel time data, heat flow, etc) together with understanding the relationship between them. Determining the lithospheric state involves the solution of high-resolution inverse problems and, consequently, the solution of many direct models is required. The main objective of this work is to contribute to the existing inversion techniques in terms of improving the estimation of the elevation (topography) by including a dynamic component arising from sub-lithospheric mantle flow. In order to do so, we implement an efficient Reduced Order Method (ROM) built upon classic Finite Elements. ROM allows one to reduce significantly the computational cost of solving a family of problems, for example all the direct models that are required in the solution of the inverse problem. The strategy of the method consists of creating a (reduced) basis of solutions, so that when a new problem has to be solved, its solution is sought within the basis instead of attempting to solve the problem itself. In order to check the Reduced Basis approach, we implemented the method in a 3D domain reproducing a portion of Earth that covers up to 400 km depth. Within the domain the Stokes equation is solved with realistic viscosities and densities. The different realizations (the family of problems) are created by varying viscosities and densities in a similar way as would happen in an inversion problem. The Reduced Basis method is shown to be an extremely efficient solver for the Stokes equation in this context.

  9. Localized basis functions and other computational improvements in variational nonorthogonal basis function methods for quantum mechanical scattering problems involving chemical reactions

    NASA Technical Reports Server (NTRS)

    Schwenke, David W.; Truhlar, Donald G.

    1990-01-01

    The Generalized Newton Variational Principle for 3D quantum mechanical reactive scattering is briefly reviewed. Then three techniques are described which improve the efficiency of the computations. First, the fact that the Hamiltonian is Hermitian is used to reduce the number of integrals computed, and then the properties of localized basis functions are exploited in order to eliminate redundant work in the integral evaluation. A new type of localized basis function with desirable properties is suggested. It is shown how partitioned matrices can be used with localized basis functions to reduce the amount of work required to handle the complex boundary conditions. The new techniques do not introduce any approximations into the calculations, so they may be used to obtain converged solutions of the Schroedinger equation.

  10. Assessment of multireference approaches to explicitly correlated full configuration interaction quantum Monte Carlo

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersten, J. A. F., E-mail: jennifer.kersten@cantab.net; Alavi, Ali, E-mail: a.alavi@fkf.mpg.de; Max Planck Institute for Solid State Research, Heisenbergstraße 1, 70569 Stuttgart

    2016-08-07

    The Full Configuration Interaction Quantum Monte Carlo (FCIQMC) method has proved able to provide near-exact solutions to the electronic Schrödinger equation within a finite orbital basis set, without relying on an expansion about a reference state. However, a drawback to the approach is that being based on an expansion of Slater determinants, the FCIQMC method suffers from a basis set incompleteness error that decays very slowly with the size of the employed single particle basis. The FCIQMC results obtained in a small basis set can be improved significantly with explicitly correlated techniques. Here, we present a study that assesses and compares two contrasting “universal” explicitly correlated approaches that fit into the FCIQMC framework: the [2]_R12 method of Kong and Valeev [J. Chem. Phys. 135, 214105 (2011)] and the explicitly correlated canonical transcorrelation approach of Yanai and Shiozaki [J. Chem. Phys. 136, 084107 (2012)]. The former is an a posteriori internally contracted perturbative approach, while the latter transforms the Hamiltonian prior to the FCIQMC simulation. These comparisons are made across the 55 molecules of the G1 standard set. We found that both methods consistently reduce the basis set incompleteness, for accurate atomization energies in small basis sets, reducing the error from 28 mE_h to 3-4 mE_h. While many of the conclusions hold in general for any combination of multireference approaches with these methodologies, we also consider FCIQMC-specific advantages of each approach.

  11. A fast solver for the Helmholtz equation based on the generalized multiscale finite-element method

    NASA Astrophysics Data System (ADS)

    Fu, Shubin; Gao, Kai

    2017-11-01

    Conventional finite-element methods for solving the acoustic-wave Helmholtz equation in highly heterogeneous media usually require a finely discretized mesh to represent the medium property variations with sufficient accuracy. Computational costs for solving the Helmholtz equation can therefore be considerable for complicated and large geological models. Based on the generalized multiscale finite-element theory, we develop a novel continuous Galerkin method to solve the Helmholtz equation in acoustic media with spatially variable velocity and mass density. Instead of using conventional polynomial basis functions, we use multiscale basis functions to form the approximation space on the coarse mesh. The multiscale basis functions are obtained by multiplying the eigenfunctions of a carefully designed local spectral problem with an appropriate multiscale partition of unity. These multiscale basis functions can effectively incorporate the characteristics of the heterogeneous medium's fine-scale variations, thus enabling us to obtain an accurate solution to the Helmholtz equation without directly solving the large discrete system formed on the fine mesh. Numerical results show that our new solver can significantly reduce the dimension of the discrete Helmholtz equation system, and can also markedly reduce the computational time.
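
    The basis construction described here is a small generalized eigenproblem per coarse region, with the selected eigenfunctions localized by a partition of unity. A sketch (mine), assuming the local matrices of the spectral problem are already assembled:

      import numpy as np
      from scipy.linalg import eigh

      def multiscale_basis(A_local, M_local, chi, n_modes):
          """Multiscale basis functions for one coarse region.

          A_local, M_local : matrices of the local spectral problem A v = lambda M v
          chi              : partition-of-unity values on the region's fine nodes
          The eigenvectors with the smallest eigenvalues carry the dominant
          fine-scale medium features; chi localizes them to the coarse element.
          """
          vals, vecs = eigh(A_local, M_local)       # ascending eigenvalues
          return chi[:, None] * vecs[:, :n_modes]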

  12. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    NASA Astrophysics Data System (ADS)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

    Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow reducing the error of computing the exchange field by increasing the mesh density, for structured and unstructured meshes alike. Numerical examples demonstrate the feasibility of the method.

  13. Correction of energy-dependent systematic errors in dual-energy X-ray CT using a basis material coefficients transformation method

    NASA Astrophysics Data System (ADS)

    Goh, K. L.; Liew, S. C.; Hasegawa, B. H.

    1997-12-01

    Computer simulation results from our previous studies showed that energy-dependent systematic errors exist in the values of attenuation coefficient synthesized using the basis material decomposition technique with acrylic and aluminum as the basis materials, especially when a high atomic number element (e.g., iodine from radiographic contrast media) was present in the body. The errors were reduced when a basis set was chosen from materials mimicking those found in the phantom. In the present study, we employed a basis material coefficients transformation method to correct for the energy-dependent systematic errors. In this method, the basis material coefficients were first reconstructed using the conventional basis materials (acrylic and aluminum) as the calibration basis set. The coefficients were then numerically transformed to those for a more desirable set of materials. The transformation was done at the energies of the low and high energy windows of the X-ray spectrum. With this correction method, using acrylic and an iodine-water mixture as the desired basis set, computer simulation results showed that an accuracy of better than 2% could be achieved even when iodine was present in the body at a concentration as high as 10% by mass. Simulation work was also carried out on a more inhomogeneous 2D thorax phantom of the 3D MCAT phantom, and the quantitative accuracy results are presented here.
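
    The transformation itself is a per-voxel 2x2 linear solve: the synthesized attenuation at the two energy windows is matched between the old and new basis pairs. A sketch (mine; all attenuation values are assumed inputs):

      import numpy as np

      def transform_basis_coefficients(a_old, mu_old, mu_new):
          """Convert basis-material coefficients to a new basis pair.

          a_old  : (n_voxels, 2) coefficients for the calibration basis
                   (acrylic, aluminum)
          mu_old : (2, 2) attenuation of the old basis materials at the two
                   energy windows, mu_old[e, m] = material m at energy e
          mu_new : (2, 2) same for the desired basis (e.g. acrylic and an
                   iodine-water mixture)
          Matching mu_new @ a_new = mu_old @ a_old per voxel gives a_new.
          """
          att = a_old @ mu_old.T                     # attenuation at both windows
          return np.linalg.solve(mu_new, att.T).T    # coefficients in new basis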

  14. A Study on Gröbner Basis with Inexact Input

    NASA Astrophysics Data System (ADS)

    Nagasaka, Kosaku

    Gröbner basis is one of the most important tools in recent symbolic algebraic computations. However, computing a Gröbner basis for the given polynomial ideal is not easy and it is not numerically stable if polynomials have inexact coefficients. In this paper, we study what we should get for computing a Gröbner basis with inexact coefficients and introduce a naive method to compute a Gröbner basis by reduced row echelon form, for the ideal generated by the given polynomial set having a priori errors on their coefficients.

  15. Non-intrusive reduced order modeling of nonlinear problems using neural networks

    NASA Astrophysics Data System (ADS)

    Hesthaven, J. S.; Ubbiali, S.

    2018-06-01

    We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum amount of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of the Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
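
    A compact sketch of the POD-NN pipeline (mine, not the authors' code): extract a POD basis from snapshots, train a network to map parameters to reduced coefficients, then evaluate online with no PDE solve. Note that scikit-learn's MLP trains with Adam or L-BFGS rather than the Levenberg-Marquardt algorithm used in the paper:

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      def train_pod_nn(params, snapshots, n_modes, hidden=(20, 20)):
          """Non-intrusive RB: POD basis + neural-network coefficient map."""
          U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
          V = U[:, :n_modes]                         # reduced basis
          coeffs = (V.T @ snapshots).T               # training targets
          net = MLPRegressor(hidden_layer_sizes=hidden, max_iter=5000)
          net.fit(params, coeffs)                    # parameters -> coefficients
          return V, net

      def evaluate_pod_nn(V, net, new_params):
          """Online stage: reconstruct full fields from predicted coefficients."""
          return V @ net.predict(np.atleast_2d(new_params)).T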

  16. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE PAGES

    Tencer, John; Carlberg, Kevin; Larsen, Marvin; ...

    2017-06-17

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  17. Accelerated solution of discrete ordinates approximation to the Boltzmann transport equation via model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tencer, John; Carlberg, Kevin; Larsen, Marvin

    Radiation heat transfer is an important phenomenon in many physical systems of practical interest. When participating media is important, the radiative transfer equation (RTE) must be solved for the radiative intensity as a function of location, time, direction, and wavelength. In many heat-transfer applications, a quasi-steady assumption is valid, thereby removing time dependence. The dependence on wavelength is often treated through a weighted sum of gray gases (WSGG) approach. The discrete ordinates method (DOM) is one of the most common methods for approximating the angular (i.e., directional) dependence. The DOM exactly solves for the radiative intensity for a finite number of discrete ordinate directions and computes approximations to integrals over the angular space using a quadrature rule; the chosen ordinate directions correspond to the nodes of this quadrature rule. This paper applies a projection-based model-reduction approach to make high-order quadrature computationally feasible for the DOM for purely absorbing applications. First, the proposed approach constructs a reduced basis from (high-fidelity) solutions of the radiative intensity computed at a relatively small number of ordinate directions. Then, the method computes inexpensive approximations of the radiative intensity at the (remaining) quadrature points of a high-order quadrature using a reduced-order model constructed from the reduced basis. Finally, this results in a much more accurate solution than might have been achieved using only the ordinate directions used to compute the reduced basis. One- and three-dimensional test problems highlight the efficiency of the proposed method.

  18. Patch-based image reconstruction for PET using prior-image derived dictionaries

    NASA Astrophysics Data System (ADS)

    Tahaei, Marzieh S.; Reader, Andrew J.

    2016-09-01

    In PET image reconstruction, regularization is often needed to reduce the noise in the resulting images. Patch-based image processing techniques have recently been successfully used for regularization in medical image reconstruction through a penalized likelihood framework. Re-parameterization within reconstruction is another powerful regularization technique in which the object in the scanner is re-parameterized using coefficients for spatially-extensive basis vectors. In this work, a method for extracting patch-based basis vectors from the subject’s MR image is proposed. The coefficients for these basis vectors are then estimated using the conventional MLEM algorithm. Furthermore, using the alternating direction method of multipliers, an algorithm for optimizing the Poisson log-likelihood while imposing sparsity on the parameters is also proposed. This novel method is then utilized to find sparse coefficients for the patch-based basis vectors extracted from the MR image. The results indicate the superiority of the proposed methods to patch-based regularization using the penalized likelihood framework.
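
    For the non-sparse variant, estimating the coefficients keeps the familiar MLEM multiplicative update, with the system matrix composed with the dictionary. A sketch (mine; it omits the sparsity-imposing ADMM algorithm the paper also proposes, and assumes a nonnegative dictionary D so the image stays nonnegative):

      import numpy as np

      def mlem_dictionary(A, D, y, n_iter=50):
          """MLEM for coefficients c of the model x = D @ c.

          A : (n_bins, n_voxels) system matrix
          D : (n_voxels, n_atoms) patch-based basis vectors (assumed >= 0)
          y : (n_bins,) measured counts
          """
          P = A @ D                                  # effective forward model
          sens = P.T @ np.ones(len(y))               # sensitivity (column sums)
          c = np.ones(P.shape[1])
          for _ in range(n_iter):
              ratio = y / np.clip(P @ c, 1e-12, None)
              c *= (P.T @ ratio) / np.clip(sens, 1e-12, None)
          return D @ c                               # reconstructed image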

  19. PROPERTY APPRAISAL PROVIDES CONTROL, INSURANCE BASIS, AND VALUE ESTIMATE.

    ERIC Educational Resources Information Center

    THOMSON, JACK

    A COMPLETE PROPERTY APPRAISAL SERVES AS A BASIS FOR CONTROL, INSURANCE AND VALUE ESTIMATE. A PROFESSIONAL APPRAISAL FIRM SHOULD PERFORM THIS FUNCTION BECAUSE (1) IT IS FAMILIAR WITH PROPER METHODS, (2) IT CAN PREPARE THE REPORT WITH MINIMUM CONFUSION AND INTERRUPTION OF THE COLLEGE OPERATION, (3) USE OF ITS PRICING LIBRARY REDUCES TIME NEEDED AND…

  20. Incomplete Gröbner basis as a preconditioner for polynomial systems

    NASA Astrophysics Data System (ADS)

    Sun, Yang; Tao, Yu-Hui; Bai, Feng-Shan

    2009-04-01

    Preconditioning plays a critical role in the numerical methods for large and sparse linear systems. The same is true for nonlinear algebraic systems. In this paper an incomplete Gröbner basis (IGB) is proposed as a preconditioner of homotopy methods for polynomial systems of equations, which transforms a deficient system into a system with the same finite solutions but smaller degree. The reduced system can thus be solved faster. Numerical results show the efficiency of the preconditioner.

  1. Effects of worksite health interventions involving reduced work hours and physical exercise on sickness absence costs.

    PubMed

    von Thiele Schwarz, Ulrica; Hasson, Henna

    2012-05-01

    To investigate the effects of physical exercise during work hours (PE) and reduced work hours (RWH) on direct and indirect costs associated with sickness absence (SA). Sickness absence and related costs at six workplaces, matched and randomized to three conditions (PE, RWH, and referents), were retrieved from company records and/or estimated using salary conversion methods or value-added equations on the basis of interview data. Although SA days decreased in all conditions (PE, 11.4%; RWH, 4.9%; referents, 15.9%), costs were reduced in the PE (22.2%) and RWH (4.9%) conditions but not among referents (10.2% increase). Worksite health interventions may generate savings in SA costs. Costs may not be linear in changes in SA days. Combining the friction method with indirect cost estimates on the basis of value-added productivity may help illuminate both direct and indirect SA costs.

  2. Reducing the cost of using collocation to compute vibrational energy levels: Results for CH2NH.

    PubMed

    Avila, Gustavo; Carrington, Tucker

    2017-08-14

    In this paper, we improve the collocation method for computing vibrational spectra that was presented in the work of Avila and Carrington, Jr. [J. Chem. Phys. 143, 214108 (2015)]. Known quadrature and collocation methods using a Smolyak grid require storing intermediate vectors with more elements than points on the Smolyak grid. This is due to the fact that grid labels are constrained among themselves and basis labels are constrained among themselves. We show that by using the so-called hierarchical basis functions, one can significantly reduce the memory required. In this paper, the intermediate vectors have only as many elements as the Smolyak grid. The ideas are tested by computing energy levels of CH2NH.

  3. 26 CFR 1.167(b)-1 - Straight line method.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 26 Internal Revenue 2 2014-04-01 2014-04-01 false Straight line method. 1.167(b)-1 Section 1.167(b... Straight line method. (a) In general. Under the straight line method the cost or other basis of the... may be reduced to a percentage or fraction. The straight line method may be used in determining a...
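
    A worked example of the rule, as a minimal sketch (the dollar figures are invented for illustration):

    ```python
    def straight_line_allowance(basis, salvage, useful_life_years):
        """Annual depreciation under the straight line method: the cost or other
        basis, less salvage value, is spread evenly over the estimated useful
        life; the rate may equivalently be stated as the fraction 1/useful_life."""
        return (basis - salvage) / useful_life_years

    # Example: $12,000 basis, $2,000 salvage, 10-year useful life -> $1,000/year (a 10% rate).
    print(straight_line_allowance(12_000, 2_000, 10))
    ```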

  4. Approximate techniques of structural reanalysis

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1974-01-01

    A study is made of two approximate techniques for structural reanalysis. These include Taylor series expansions for response variables in terms of design variables and the reduced-basis method. In addition, modifications to these techniques are proposed to overcome some of their major drawbacks. The modifications include a rational approach to the selection of the reduced-basis vectors and the use of Taylor series approximation in an iterative process. For the reduced basis a normalized set of vectors is chosen which consists of the original analyzed design and the first-order sensitivity analysis vectors. The use of the Taylor series approximation as a first (initial) estimate in an iterative process can lead to significant improvements in accuracy, even with one iteration cycle. Therefore, the range of applicability of the reanalysis technique can be extended. Numerical examples are presented which demonstrate the gain in accuracy obtained by using the proposed modification techniques, for a wide range of variations in the design variables.
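
    A minimal sketch of the modified reduced-basis step, using a normalized basis consisting of the original solution and its first-order sensitivity vector; the stiffness matrix, its design derivative, and the perturbation size are hypothetical stand-ins.

    ```python
    import numpy as np

    def reduced_basis_reanalysis(K0, dK, f, d_delta):
        """Approximate u solving (K0 + d_delta*dK) u = f in the span of the
        original solution and its first-order design sensitivity."""
        u0 = np.linalg.solve(K0, f)                  # original analyzed design
        du = np.linalg.solve(K0, -dK @ u0)           # first-order sensitivity vector
        V = np.stack([u0 / np.linalg.norm(u0),
                      du / np.linalg.norm(du)], axis=1)
        K = K0 + d_delta * dK                        # modified design
        q = np.linalg.solve(V.T @ K @ V, V.T @ f)    # small (2x2) reduced problem
        return V @ q

    rng = np.random.default_rng(1)
    M = rng.random((50, 50))
    K0 = M @ M.T + 50.0 * np.eye(50)                 # hypothetical SPD stiffness
    dK = 0.5 * np.eye(50)                            # hypothetical design derivative
    f = rng.random(50)
    u_approx = reduced_basis_reanalysis(K0, dK, f, d_delta=0.2)
    ```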

  5. Fast Minimum Variance Beamforming Based on Legendre Polynomials.

    PubMed

    Bae, MooHo; Park, Sung Bae; Kwon, Sung Jae

    2016-09-01

    Currently, minimum variance beamforming (MV) is actively investigated as a method that can improve the performance of an ultrasound beamformer, in terms of the lateral and contrast resolution. However, this method has the disadvantage of excessive computational complexity since the inverse spatial covariance matrix must be calculated. Some noteworthy methods among various attempts to solve this problem include beam space adaptive beamforming methods and the fast MV method based on principal component analysis, which are similar in that the original signal in the element space is transformed to another domain using an orthonormal basis matrix and the dimension of the covariance matrix is reduced by approximating it with only its important components, hence making the inversion of the matrix very simple. Recently, we proposed a new method with further reduced computational demand that uses Legendre polynomials as the basis matrix for such a transformation. In this paper, we verify the efficacy of the proposed method through Field II simulations as well as in vitro and in vivo experiments. The results show that the approximation error of this method is less than or similar to those of the above-mentioned methods and that the lateral response of point targets and the contrast-to-speckle noise in anechoic cysts are also better than or similar to those methods when the dimensionality of the covariance matrices is reduced to the same dimension.
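
    The essence of the approach (a reduced beamspace spanned by a few low-order Legendre polynomials sampled across the aperture) can be sketched as follows; the aperture size, snapshot data, diagonal loading, and the unit steering vector after delay alignment are assumptions for illustration only.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_mv_weights(X, n_poly=4, diag_load=1e-2):
        """Minimum-variance apodization computed in a reduced Legendre beamspace.
        X: (n_elements, n_snapshots) delay-aligned channel data."""
        n_el = X.shape[0]
        t = np.linspace(-1.0, 1.0, n_el)
        # Sample the first n_poly Legendre polynomials across the aperture.
        V = np.stack([legendre.legval(t, np.eye(n_poly)[k]) for k in range(n_poly)], axis=1)
        V, _ = np.linalg.qr(V)                     # orthonormalize the sampled columns
        a = np.ones(n_el)                          # steering vector after delays
        Xb = V.T @ X                               # data in the reduced beamspace
        Rb = (Xb @ Xb.conj().T) / X.shape[1]       # small (n_poly x n_poly) covariance
        Rb += diag_load * (np.trace(Rb).real / n_poly) * np.eye(n_poly)
        ab = V.T @ a
        wb = np.linalg.solve(Rb, ab)               # cheap inversion of the small matrix
        wb = wb / (ab.conj() @ wb)                 # distortionless-response constraint
        return V @ wb                              # weights back in element space

    rng = np.random.default_rng(0)
    w = legendre_mv_weights(rng.standard_normal((64, 200)))   # synthetic example
    ```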

  6. Optimization of global model composed of radial basis functions using the term-ranking approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cai, Peng; Tao, Chao, E-mail: taochao@nju.edu.cn; Liu, Xiao-Jun

    2014-03-15

    A term-ranking method is put forward to optimize the global model composed of radial basis functions to improve the predictability of the model. The effectiveness of the proposed method is examined by numerical simulation and experimental data. Numerical simulations indicate that this method can significantly lengthen the prediction time and decrease the Bayesian information criterion of the model. The application to a real voice signal shows that the optimized global model can capture more of the predictable component in chaos-like voice data and simultaneously reduce the predictable component (periodic pitch) in the residual signal.

  7. A robust component mode synthesis method for stochastic damped vibroacoustics

    NASA Astrophysics Data System (ADS)

    Tran, Quang Hung; Ouisse, Morvan; Bouhaddi, Noureddine

    2010-01-01

    In order to reduce vibration or sound levels in industrial vibroacoustic problems, a low-cost and efficient approach is to introduce visco- and poro-elastic materials either on the structure or on the cavity walls. Depending on the frequency range of interest, several numerical approaches can be used to estimate the behavior of the coupled problem. In the context of low-frequency applications related to acoustic cavities with surrounding vibrating structures, the finite element method (FEM) is one of the most efficient techniques. Nevertheless, industrial problems lead to large FE models which are time-consuming in updating or optimization processes. A classical way to reduce calculation time is the component mode synthesis (CMS) method, whose classical formulation is not always efficient in predicting the dynamical behavior of structures including visco-elastic and/or poro-elastic patches. To ensure an efficient prediction, the fluid and structural bases used for the model reduction then need to be updated as a result of changes in a parametric optimization procedure. For complex models, this leads to prohibitive numerical costs in the optimization phase or for the management and propagation of uncertainties in the stochastic vibroacoustic problem. In this paper, the formulation of an alternative CMS method is proposed and compared to the classical (u, p) CMS method: the Ritz basis is completed with static residuals associated with visco-elastic and poro-elastic behaviors. This basis is also enriched by the static response of residual forces due to structural modifications, resulting in a so-called robust basis, also adapted to Monte Carlo simulations for uncertainty propagation using reduced models.

  8. Analysis of different thermal processing methods of foodstuffs to optimize protein, calcium, and phosphorus content for dialysis patients.

    PubMed

    Vrdoljak, Ivica; Panjkota Krbavčić, Ines; Bituh, Martina; Vrdoljak, Tea; Dujmić, Zoran

    2015-05-01

    To analyze how different thermal processing methods affect the protein, calcium, and phosphorus content of hospital food served to dialysis patients and to generate recommendations for preparing menus that optimize nutritional content while minimizing the risk of hyperphosphatemia. Standard AOAC (Official Methods of Analysis) procedures were used to determine dry matter, protein, calcium, and phosphorus content in potatoes, fresh and frozen carrots, frozen green beans, chicken, beef and pork, frozen hake, pasta, and rice. These levels were determined both before and after boiling in water, steaming, stewing in oil or water, or roasting. Most of the thermal processing methods did not significantly reduce protein content. Boiling increased calcium content in all foodstuffs because of calcium absorption from the hard water. In contrast, stewing in oil containing a small amount of water decreased the calcium content of vegetables by 8% to 35% and of chicken meat by 12% to 40% on a dry weight basis. Some types of thermal processing significantly reduced the phosphorus content of the various foodstuffs, with levels decreasing by 27% to 43% for fresh and frozen vegetables, 10% to 49% for meat, 7% for pasta, and 22.8% for rice on a dry weight basis. On the basis of these results, we modified the thermal processing methods used to prepare a standard hospital menu for dialysis patients. Foodstuffs prepared according to the optimized menu were similar in protein content, higher in calcium, and significantly lower in phosphorus than foodstuffs prepared according to the standard menu. Boiling in water and stewing in oil containing some water significantly reduced phosphorus content without affecting protein content. Soaking meat in cold water for 1 h before thermal processing reduced phosphorus content even more. These results may help optimize the design of menus for dialysis patients.

  9. A method for reducing the order of nonlinear dynamic systems

    NASA Astrophysics Data System (ADS)

    Masri, S. F.; Miller, R. K.; Sassi, H.; Caughey, T. K.

    1984-06-01

    An approximate method that uses conventional condensation techniques for linear systems together with the nonparametric identification of the reduced-order model generalized nonlinear restoring forces is presented for reducing the order of discrete multidegree-of-freedom dynamic systems that possess arbitrary nonlinear characteristics. The utility of the proposed method is demonstrated by considering a redundant three-dimensional finite-element model half of whose elements incorporate hysteretic properties. A nonlinear reduced-order model, of one-third the order of the original model, is developed on the basis of wideband stationary random excitation and the validity of the reduced-order model is subsequently demonstrated by its ability to predict with adequate accuracy the transient response of the original nonlinear model under a different nonstationary random excitation.

  10. A multiscale restriction-smoothed basis method for high contrast porous media represented on unstructured grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Møyner, Olav, E-mail: olav.moyner@sintef.no; Lie, Knut-Andreas, E-mail: knut-andreas.lie@sintef.no

    2016-01-01

    A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restriction-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell-centered, conservative, finite-volume method, it is applicable to any flow model in which one can isolate a pressure equation. Herein, we only discuss single and two-phase incompressible models. Compressible flow, e.g., as modeled by the black-oil equations, is discussed in a separate paper.

  11. Hybrid Grid and Basis Set Approach to Quantum Chemistry DMRG

    NASA Astrophysics Data System (ADS)

    Stoudenmire, Edwin Miles; White, Steven

    We present a new approach for using DMRG for quantum chemistry that combines the advantages of a basis set with that of a grid approximation. Because DMRG scales linearly for quasi-one-dimensional systems, it is feasible to approximate the continuum with a fine grid in one direction while using a standard basis set approach for the transverse directions. Compared to standard basis set methods, we reach larger systems and achieve better scaling when approaching the basis set limit. The flexibility and reduced costs of our approach even make it feasible to incorporate advanced DMRG techniques such as simulating real-time dynamics. Supported by the Simons Collaboration on the Many-Electron Problem.

  12. Sparse dynamics for partial differential equations

    PubMed Central

    Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D.; Osher, Stanley

    2013-01-01

    We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms. PMID:23533273
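
    A minimal sketch of the scheme for the 1D heat equation in a Fourier basis (grid size, time step, and threshold are assumed values): each step advances the coefficients exactly and then soft-thresholds them so only the essential modes are kept.

    ```python
    import numpy as np

    def soft_threshold(c, t):
        """Shrink coefficient magnitudes by t, zeroing those below the threshold."""
        mag = np.abs(c)
        return np.where(mag > t, (1.0 - t / np.maximum(mag, 1e-30)) * c, 0.0)

    N, L, dt = 256, 2.0 * np.pi, 1e-3
    x = np.linspace(0.0, L, N, endpoint=False)
    k = 2.0 * np.pi * np.fft.fftfreq(N, d=L / N)   # angular wavenumbers
    u = np.exp(-10.0 * (x - np.pi) ** 2)           # initial condition

    for _ in range(1000):
        c = np.fft.fft(u)
        c = c * np.exp(-(k ** 2) * dt)             # exact diffusion step, diagonal in Fourier space
        c = soft_threshold(c, 1e-4 * N)            # enforce sparsity of the representation
        u = np.fft.ifft(c).real
    ```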

  13. Sparse dynamics for partial differential equations.

    PubMed

    Schaeffer, Hayden; Caflisch, Russel; Hauck, Cory D; Osher, Stanley

    2013-04-23

    We investigate the approximate dynamics of several differential equations when the solutions are restricted to a sparse subset of a given basis. The restriction is enforced at every time step by simply applying soft thresholding to the coefficients of the basis approximation. By reducing or compressing the information needed to represent the solution at every step, only the essential dynamics are represented. In many cases, there are natural bases derived from the differential equations, which promote sparsity. We find that our method successfully reduces the dynamics of convection equations, diffusion equations, weak shocks, and vorticity equations with high-frequency source terms.

  14. Ground State and Finite Temperature Lanczos Methods

    NASA Astrophysics Data System (ADS)

    Prelovšek, P.; Bonča, J.

    The present review focuses on recent developments of exact-diagonalization (ED) methods that use the Lanczos algorithm to transform large sparse matrices into tridiagonal form. We begin with a review of the basic principles of the Lanczos method for computing ground-state static as well as dynamical properties. Next, the generalization to finite temperatures in the form of the well-established finite-temperature Lanczos method is described, which allows for the evaluation of static and dynamic quantities at temperatures T>0 within various correlated models. Several extensions and modifications of the latter method introduced more recently are analysed, in particular the low-temperature Lanczos method and the microcanonical Lanczos method, the latter especially applicable within the high-T regime. In order to overcome the problem of exponentially growing Hilbert spaces that prevents ED calculations on larger lattices, different approaches based on Lanczos diagonalization within a reduced basis have been developed. In this context, a recently developed method based on ED within a limited functional space is reviewed. Finally, we briefly discuss the real-time evolution of correlated systems far from equilibrium, which can be simulated using ED and Lanczos-based methods, as well as approaches based on diagonalization in a reduced basis.

  15. Spectral CT metal artifact reduction with an optimization-based reconstruction algorithm

    NASA Astrophysics Data System (ADS)

    Gilat Schmidt, Taly; Barber, Rina F.; Sidky, Emil Y.

    2017-03-01

    Metal objects cause artifacts in computed tomography (CT) images. This work investigated the feasibility of a spectral CT method to reduce metal artifacts. Spectral CT acquisition combined with optimization-based reconstruction is proposed to reduce artifacts by modeling the physical effects that cause metal artifacts and by providing the flexibility to selectively remove corrupted spectral measurements in the spectral-sinogram space. The proposed Constrained 'One-Step' Spectral CT Image Reconstruction (cOSSCIR) algorithm directly estimates the basis material maps while enforcing convex constraints. The incorporation of constraints on the reconstructed basis material maps is expected to mitigate undersampling effects that occur when corrupted data is excluded from reconstruction. The feasibility of the cOSSCIR algorithm to reduce metal artifacts was investigated through simulations of a pelvis phantom. The cOSSCIR algorithm was investigated with and without the use of a third basis material representing metal. The effects of excluding data corrupted by metal were also investigated. The results demonstrated that the proposed cOSSCIR algorithm reduced metal artifacts and improved CT number accuracy. For example, CT number error in a bright shading artifact region was reduced from 403 HU in the reference filtered backprojection reconstruction to 33 HU using the proposed algorithm in simulation. In the dark shading regions, the error was reduced from 1141 HU to 25 HU. Of the investigated approaches, decomposing the data into three basis material maps and excluding the corrupted data demonstrated the greatest reduction in metal artifacts.

  16. Evaluation Methods Basis of Strategy Development Effectiveness of the Enterprise

    ERIC Educational Resources Information Center

    Zotova, Anna S.; Kandrashina, Elena A.; Ivliev, Andrey D.; Charaeva, Marina V.

    2016-01-01

    The urgency of this problem stems from the necessity of making management decisions under conditions of shortening product life cycles, declining profitability of production, and an increasing speed of technological progress. In this context, this article aims to identify and elaborate the tools for integrated diagnostics of strategy efficiency, taking into…

  17. BaSi2 formation mechanism in thermally evaporated films and its application to reducing oxygen impurity concentration

    NASA Astrophysics Data System (ADS)

    Hara, Kosuke O.; Yamamoto, Chiaya; Yamanaka, Junji; Arimoto, Keisuke; Nakagawa, Kiyokazu; Usami, Noritaka

    2018-04-01

    Thermal evaporation is a simple and rapid method to fabricate semiconducting BaSi2 films. In this study, to elucidate the BaSi2 formation mechanism, the microstructure of a BaSi2 epitaxial film fabricated by thermal evaporation has been investigated by transmission electron microscopy. The BaSi2 film is found to consist of three layers with different microstructural characteristics, which is well explained by assuming two stages of film deposition. In the first stage, BaSi2 forms through the diffusion of Ba atoms from the deposited Ba-rich film to the Si substrate, while in the second stage the mutual diffusion of Ba and Si atoms in the film leads to BaSi2 formation. On the basis of the BaSi2 formation mechanism, two issues are addressed. One is the as-yet unclarified reason for epitaxial growth. It is found important to quickly form BaSi2 in the first stage for the epitaxial growth of upper layers. The other issue is the high oxygen concentration in BaSi2 films around the BaSi2-Si interface. Two routes of oxygen incorporation, i.e., oxidation of the Si substrate surface and of the initially deposited Ba-rich layer by the residual gas, are identified. On the basis of this knowledge, oxygen concentration is decreased by reducing the holding time of the substrate at high temperatures and by premelting of the source. In addition, X-ray diffraction results show that the decrease in oxygen concentration can lead to an increased proportion of a-axis-oriented grains.

  18. Geminal-spanning orbitals make explicitly correlated reduced-scaling coupled-cluster methods robust, yet simple

    NASA Astrophysics Data System (ADS)

    Pavošević, Fabijan; Neese, Frank; Valeev, Edward F.

    2014-08-01

    We present a production implementation of reduced-scaling explicitly correlated (F12) coupled-cluster singles and doubles (CCSD) method based on pair-natural orbitals (PNOs). A key feature is the reformulation of the explicitly correlated terms using geminal-spanning orbitals that greatly reduce the truncation errors of the F12 contribution. For the standard S66 benchmark of weak intermolecular interactions, the cc-pVDZ-F12 PNO CCSD F12 interaction energies reproduce the complete basis set CCSD limit with mean absolute error <0.1 kcal/mol, and at a greatly reduced cost compared to the conventional CCSD F12.

  19. Reduced nicotine product standards for combustible tobacco: Building an empirical basis for effective regulation

    PubMed Central

    Donny, Eric C.; Hatsukami, Dorothy K.; Benowitz, Neal L.; Sved, Alan F.; Tidey, Jennifer W.; Cassidy, Rachel N.

    2014-01-01

    Introduction Both the Tobacco Control Act in the U.S. and Article 9 of the Framework Convention on Tobacco Control enable governments to directly address the addictiveness of combustible tobacco by reducing nicotine through product standards. Although nicotine may have some harmful effects, the detrimental health effects of smoked tobacco are primarily due to non-nicotine constituents. Hence, the health effects of nicotine reduction would likely be determined by changes in behavior that result in changes in smoke exposure. Methods Herein, we review the current evidence on nicotine reduction and discuss some of the challenges in establishing the empirical basis for regulatory decisions. Results To date, research suggests that very low nicotine content cigarettes produce a desirable set of outcomes, including reduced exposure to nicotine, reduced smoking, and reduced dependence, without significant safety concerns. However, much is still unknown, including the effects of gradual versus abrupt changes in nicotine content, effects in vulnerable populations, and impact on youth. Discussion A coordinated effort must be made to provide the best possible scientific basis for regulatory decisions. The outcome of this effort may provide the foundation for a novel approach to tobacco control that dramatically reduces the devastating health consequences of smoked tobacco. PMID:24967958

  20. Research on numerical method for multiple pollution source discharge and optimal reduction program

    NASA Astrophysics Data System (ADS)

    Li, Mingchang; Dai, Mingxin; Zhou, Bin; Zou, Bin

    2018-03-01

    In this paper, an optimal method for designing a pollutant reduction program is proposed using a nonlinear optimization algorithm, the genetic algorithm. The four main rivers in Jiangsu Province, China are selected with the aim of reducing environmental pollution in the nearshore district. Dissolved inorganic nitrogen (DIN) is studied as the only pollutant. The environmental status and the water-quality standard in the nearshore district are used to determine the required reduction of the discharge of the multiple river pollutant sources. The resulting reduction program provides a basis for marine environmental management.
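
    A toy version of such a genetic-algorithm search is sketched below; the river loads, site response coefficients, and water-quality standard are invented placeholders, and the fitness simply minimizes the total load reduction while penalizing site concentrations above the standard.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    loads = np.array([120.0, 80.0, 60.0, 40.0])   # four rivers, assumed DIN loads
    resp = rng.random((6, 4)) * 0.01              # assumed response of 6 sites to unit load
    standard = 0.5                                # assumed water-quality standard

    def fitness(x):                               # x: fraction of each load removed
        conc = resp @ (loads * (1.0 - x))
        penalty = 1e3 * np.sum(np.maximum(conc - standard, 0.0))
        return np.sum(loads * x) + penalty

    pop = rng.random((100, 4))                    # simple real-coded GA
    for _ in range(200):
        fit = np.apply_along_axis(fitness, 1, pop)
        parents = pop[np.argsort(fit)[:50]]       # truncation selection
        mates = parents[rng.permutation(50)]
        alpha = rng.random((50, 4))
        children = alpha * parents + (1.0 - alpha) * mates    # blend crossover
        children += 0.05 * rng.standard_normal((50, 4))       # Gaussian mutation
        pop = np.clip(np.vstack([parents, children]), 0.0, 1.0)
    best = pop[np.argmin(np.apply_along_axis(fitness, 1, pop))]
    ```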

  1. Preconditioned MoM Solutions for Complex Planar Arrays

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Jackson, D; Champagne, N

    2004-01-23

    The numerical analysis of large arrays is a complex problem. There are several techniques currently under development in this area. One such technique is the FAIM (Faster Adaptive Integral Method). This method uses a modification of the standard AIM approach which takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. These bases are then projected onto a regular grid of interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver. The method has been proven to greatly reduce solve time by speeding the matrix-vector product computation. The FAIM approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends FAIM by modifying it to allow for layered material Green's Functions and dielectrics. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the FAIM method is reported elsewhere; this contribution is limited to presenting new results.

  2. Calculating vibrational spectra with sum of product basis functions without storing full-dimensional vectors or matrices.

    PubMed

    Leclerc, Arnaud; Carrington, Tucker

    2014-05-07

    We propose an iterative method for computing vibrational spectra that significantly reduces the memory cost of calculations. It uses a direct product primitive basis, but does not require storing vectors with as many components as there are product basis functions. Wavefunctions are represented in a basis each of whose functions is a sum of products (SOP) and the factorizable structure of the Hamiltonian is exploited. If the factors of the SOP basis functions are properly chosen, wavefunctions are linear combinations of a small number of SOP basis functions. The SOP basis functions are generated using a shifted block power method. The factors are refined with a rank reduction algorithm to cap the number of terms in a SOP basis function. The ideas are tested on a 20-D model Hamiltonian and a realistic CH3CN (12-dimensional) potential. For the 20-D problem, to use a standard direct product iterative approach one would need to store vectors with about 10^20 components and would hence require about 8 × 10^11 GB. With the approach of this paper only 1 GB of memory is necessary. Results for CH3CN agree well with those of a previous calculation on the same potential.

  3. A parametric model order reduction technique for poroelastic finite element models.

    PubMed

    Lappano, Ettore; Polanz, Markus; Desmet, Wim; Mundo, Domenico

    2017-10-01

    This research presents a parametric model order reduction approach for vibro-acoustic problems in the frequency domain of systems containing poroelastic materials (PEM). The method is applied to the Finite Element (FE) discretization of the weak u-p integral formulation based on the Biot-Allard theory and makes use of reduced basis (RB) methods typically employed for parametric problems. The parametric reduction is obtained rewriting the Biot-Allard FE equations for poroelastic materials using an affine representation of the frequency (therefore allowing for RB methods) and projecting the frequency-dependent PEM system on a global reduced order basis generated with the proper orthogonal decomposition instead of standard modal approaches. This has proven to be better suited to describe the nonlinear frequency dependence and the strong coupling introduced by damping. The methodology presented is tested on two three-dimensional systems: in the first experiment, the surface impedance of a PEM layer sample is calculated and compared with results of the literature; in the second, the reduced order model of a multilayer system coupled to an air cavity is assessed and the results are compared to those of the reference FE model.

  4. Methods for evaluating information in managing the enterprise on the basis of a hybrid three-tier system

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. A.; Dobrynina, N. V.

    2017-01-01

    The article presents data on the influence of information on the functioning of complex systems in the process of ensuring their effective management. Ways and methods for evaluating multidimensional information are proposed that reduce the time and resources required and improve the validity of management decisions for the system under study.

  5. Desensitization And Study-Skills Training As Treatment For Two Types of Test-Anxious Students

    ERIC Educational Resources Information Center

    Osterhouse, Robert A.

    1972-01-01

    This study compared the effectiveness of systematic desensitization and training in efficient study methods for reducing test anxiety among subjects selected on the basis of two types of self reported anxiety. Desensitization offered more promise as a treatment method for test anxiety than did training in study skills. (Author)

  6. A new implementation of the CMRH method for solving dense linear systems

    NASA Astrophysics Data System (ADS)

    Heyouni, M.; Sadok, H.

    2008-04-01

    The CMRH method [H. Sadok, Methodes de projections pour les systemes lineaires et non lineaires, Habilitation thesis, University of Lille1, Lille, France, 1994; H. Sadok, CMRH: A new method for solving nonsymmetric linear systems based on the Hessenberg reduction algorithm, Numer. Algorithms 20 (1999) 303-321] is an algorithm for solving nonsymmetric linear systems in which the Arnoldi component of GMRES is replaced by the Hessenberg process, which generates Krylov basis vectors that are orthogonal to standard unit basis vectors rather than mutually orthogonal. The iterate is formed from these vectors by solving a small least squares problem involving a Hessenberg matrix. Like GMRES, this method requires one matrix-vector product per iteration. However, it can be implemented to require half as much arithmetic work and less storage. Moreover, numerical experiments show that this method performs accurately and reduces the residual about as fast as GMRES. With this new implementation, we show that the CMRH method is the only method with a long-term recurrence which does not require storing both the entire Krylov basis and the original matrix at the same time, as the GMRES algorithm does. A comparison with Gaussian elimination is provided.

  7. Planning & Priority Setting for Basic Research

    DTIC Science & Technology

    2010-05-05

    Integrated into numerous commercial codes in aerospace, automotive, semiconductor, and chemical industries: Fast Multipole Methods (ONR 31). Stated aims include using knowledge (even failures) to reduce risk in acquisition, providing the basis for future Navy and Marine Corps systems, ensuring research relevancy to the Naval S&T strategy, and transitioning promising Basic Research to applications.

  8. Dynamic Snap-Through of Thin-Walled Structures by a Reduced Order Method

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.

    2006-01-01

    The goal of this investigation is to further develop nonlinear modal numerical simulation methods for application to geometrically nonlinear response of structures exposed to combined high intensity random pressure fluctuations and thermal loadings. The study is conducted on a flat aluminum beam, which permits a comparison of results obtained by a reduced-order analysis with those obtained from a numerically intensive simulation in physical degrees-of-freedom. A uniformly distributed thermal loading is first applied to investigate the dynamic instability associated with thermal buckling. A uniformly distributed random loading is added to investigate the combined thermal-acoustic response. In the latter case, three types of response characteristics are considered, namely: (i) small amplitude vibration around one of the two stable buckling equilibrium positions, (ii) intermittent snap-through response between the two equilibrium positions, and (iii) persistent snap-through response between the two equilibrium positions. For the reduced order analysis, four categories of modal basis functions are identified including those having symmetric transverse (ST), anti-symmetric transverse (AT), symmetric in-plane (SI), and anti-symmetric in-plane (AI) displacements. The effect of basis selection on the quality of results is investigated for the dynamic thermal buckling and combined thermal-acoustic response. It is found that despite symmetric geometry, loading, and boundary conditions, the AT and SI modes must be included in the basis as they participate in the snap-through behavior.

  9. Dynamic Snap-Through of Thermally Buckled Structures by a Reduced Order Method

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.

    2007-01-01

    The goal of this investigation is to further develop nonlinear modal numerical simulation methods for application to geometrically nonlinear response of structures exposed to combined high intensity random pressure fluctuations and thermal loadings. The study is conducted on a flat aluminum beam, which permits a comparison of results obtained by a reduced-order analysis with those obtained from a numerically intensive simulation in physical degrees-of-freedom. A uniformly distributed thermal loading is first applied to investigate the dynamic instability associated with thermal buckling. A uniformly distributed random loading is added to investigate the combined thermal-acoustic response. In the latter case, three types of response characteristics are considered, namely: (i) small amplitude vibration around one of the two stable buckling equilibrium positions, (ii) intermittent snap-through response between the two equilibrium positions, and (iii) persistent snap-through response between the two equilibrium positions. For the reduced-order analysis, four categories of modal basis functions are identified including those having symmetric transverse, anti-symmetric transverse, symmetric in-plane, and anti-symmetric in-plane displacements. The effect of basis selection on the quality of results is investigated for the dynamic thermal buckling and combined thermal-acoustic response. It is found that despite symmetric geometry, loading, and boundary conditions, the anti-symmetric transverse and symmetric in-plane modes must be included in the basis as they participate in the snap-through behavior.

  10. Optimization of a protocol for myocardial perfusion scintigraphy by using an anthropomorphic phantom*

    PubMed Central

    Ramos, Susie Medeiros Oliveira; Glavam, Adriana Pereira; Kubo, Tadeu Takao Almodovar; de Sá, Lidia Vasconcellos

    2014-01-01

    Objective To develop a study aimed at optimizing myocardial perfusion imaging. Materials and Methods An anthropomorphic thorax phantom was imaged with a GE SPECT Ventri gamma camera at varied activities and acquisition times in order to evaluate the influence of these parameters on the quality of the reconstructed medical images. The 99mTc-sestamibi radiotracer was utilized, and the images were then evaluated clinically on the basis of data such as the summed stress score, as well as on technical image quality and perfusion. The software ImageJ was utilized for data quantification. Results The results demonstrated that for the standard acquisition time utilized in the procedure (15 seconds per angle), the injected activity could be reduced by 33.34%. Additionally, even if the standard scan time is reduced by 53.34% (7 seconds per angle), the standard injected activity could still be reduced by 16.67%, without impairing the image quality and the diagnostic reliability. Conclusion The described method and respective results provide a basis for the development of a clinical trial of patients in an optimized protocol. PMID:25741088

  11. Direct recovery of regional tracer kinetics from temporally inconsistent dynamic ECT projections using dimension-reduced time-activity basis

    NASA Astrophysics Data System (ADS)

    Maltz, Jonathan S.

    2000-11-01

    We present an algorithm of reduced computational cost which is able to estimate kinetic model parameters directly from dynamic ECT sinograms made up of temporally inconsistent projections. The algorithm exploits the extreme degree of parameter redundancy inherent in linear combinations of the exponential functions which represent the modes of first-order compartmental systems. The singular value decomposition is employed to find a small set of orthogonal functions, the linear combinations of which are able to accurately represent all modes within the physiologically anticipated range in a given study. The reduced-dimension basis is formed as the convolution of this orthogonal set with a measured input function. The Moore-Penrose pseudoinverse is used to find coefficients of this basis. Algorithm performance is evaluated at realistic count rates using MCAT phantom and clinical 99mTc-teboroxime myocardial study data. Phantom data are modelled as originating from a Poisson process. For estimates recovered from a single slice projection set containing 2.5×105 total counts, recovered tissue responses compare favourably with those obtained using more computationally intensive methods. The corresponding kinetic parameter estimates (coefficients of the new basis) exhibit negligible bias, while parameter variances are low, falling within 30% of the Cramér-Rao lower bound.
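
    The basis construction can be sketched as follows (the time grid, rate-constant range, truncation tolerance, and input function are hypothetical): sample the compartmental washout modes over the anticipated physiological range, extract a few orthogonal temporal functions with the SVD, and convolve them with the measured input function.

    ```python
    import numpy as np

    t = np.linspace(0.0, 60.0, 240)                  # frame mid-times (minutes)
    betas = np.logspace(-3, 0.5, 200)                # anticipated rate constants (1/min)
    modes = np.exp(-np.outer(betas, t))              # family of exponential modes

    U, s, Vt = np.linalg.svd(modes, full_matrices=False)
    energy = np.cumsum(s**2) / np.sum(s**2)
    r = int(np.searchsorted(energy, 0.9999)) + 1     # a handful of functions suffice
    temporal_basis = Vt[:r]                          # orthogonal dimension-reduced basis

    input_fn = t * np.exp(-0.3 * t)                  # stand-in measured input function
    dt = t[1] - t[0]
    reduced_basis = np.array([np.convolve(b, input_fn)[:t.size] * dt
                              for b in temporal_basis])   # basis fitted to projections
    ```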

  12. Time Domain Propagation of Quantum and Classical Systems using a Wavelet Basis Set Method

    NASA Astrophysics Data System (ADS)

    Lombardini, Richard; Nowara, Ewa; Johnson, Bruce

    2015-03-01

    The use of an orthogonal wavelet basis set (Optimized Maximum-N Generalized Coiflets) to effectively model physical systems in the time domain, in particular the electromagnetic (EM) pulse and the quantum mechanical (QM) wavefunction, is examined in this work. Although past research has demonstrated the benefits of wavelet basis sets in handling computationally expensive problems due to their multiresolution properties, the overlapping supports of neighboring wavelet basis functions pose problems when dealing with boundary conditions, especially with material interfaces in the EM case. Specifically, this talk addresses this issue using the idea of derivative matching with fictitious grid points (T.A. Driscoll and B. Fornberg), but replaces the latter element with fictitious wavelet projections in conjunction with wavelet reconstruction filters. Two-dimensional (2D) systems are analyzed, an EM pulse incident on silver cylinders and a QM electron wave packet circling the proton in a hydrogen atom system (reduced to 2D), and the new wavelet method is compared to the popular finite-difference time-domain technique.

  13. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid).

  14. The reduced space Sequential Quadratic Programming (SQP) method for calculating the worst resonance response of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Liao, Haitao; Wu, Wenwang; Fang, Daining

    2018-07-01

    A coupled approach combining the reduced space Sequential Quadratic Programming (SQP) method with the harmonic balance condensation technique for finding the worst resonance response is developed. The nonlinear equality constraints of the optimization problem are imposed on the condensed harmonic balance equations. Making use of the null space decomposition technique, the original optimization formulation in the full space is mathematically simplified and solved in the reduced space by means of the reduced SQP method. The transformation matrix that maps the full space to the null space of the constrained optimization problem is constructed via the coordinate basis scheme. The removal of the nonlinear equality constraints is accomplished, resulting in a simple optimization problem subject to bound constraints. Moreover, a second-order correction technique is introduced to overcome the Maratos effect. The combined application of the reduced SQP method and the condensation technique permits a large reduction of the computational cost. Finally, the effectiveness and applicability of the proposed methodology is demonstrated by two numerical examples.

  15. Multi-stage approach for structural damage detection problem using basis pursuit and particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Gerist, Saleheh; Maheri, Mahmoud R.

    2016-12-01

    In order to solve the structural damage detection problem, a multi-stage method using particle swarm optimization (PSO) is presented. First, a new sparse recovery method, named Basis Pursuit (BP), is utilized to preliminarily identify structural damage locations. The BP method solves a system of equations which relates the damage parameters to the structural modal responses using the sensitivity matrix. The results of this stage are subsequently refined to the exact damage locations and extents using the PSO search engine. Finally, the search space is reduced by elimination of some low-damage variables using a micro search (MS) operator embedded in the PSO algorithm. To overcome the noise present in structural responses, a method known as Basis Pursuit De-Noising (BPDN) is also used. The efficiency of the proposed method is investigated by three numerical examples: a cantilever beam, a plane truss and a portal plane frame. The frequency response is used to detect damage in the examples. The simulation results demonstrate the accuracy and efficiency of the proposed method in detecting multiple damage cases and exhibit its robustness regarding noise and its advantages compared to other reported solution algorithms.
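
    The first (sparse recovery) stage can be sketched with an l1-penalized least-squares fit, in the spirit of BPDN, using scikit-learn's Lasso; the sensitivity matrix, damage pattern, and noise level below are hypothetical.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(2)
    n_resp, n_elem = 40, 100
    S = rng.standard_normal((n_resp, n_elem))     # stand-in modal sensitivity matrix
    d_true = np.zeros(n_elem)
    d_true[[12, 57]] = [0.30, 0.15]               # assumed damage extents at two elements
    r = S @ d_true + 0.01 * rng.standard_normal(n_resp)   # noisy measured response change

    bpdn = Lasso(alpha=0.01, positive=True, max_iter=50_000).fit(S, r)
    candidates = np.flatnonzero(bpdn.coef_ > 1e-3)  # preliminary damage locations
    # ...the candidates would then seed the PSO refinement and micro-search stages.
    ```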

  16. Drude conductivity exhibited by chemically synthesized reduced graphene oxide

    NASA Astrophysics Data System (ADS)

    Younas, Daniyal; Javed, Qurat-ul-Ain; Fatima, Sabeen; Kalsoom, Riffat; Abbas, Hussain; Khan, Yaqoob

    2017-09-01

    Electrical conductance in graphene layers, with its Drude-like response due to massless Dirac fermions, has been well explained both theoretically and experimentally. In this paper, the Drude-like electrical conductivity response of reduced graphene oxide synthesized by a chemical route is presented. A method slightly different from conventional methods is used to synthesize graphene oxide, which is then converted to reduced graphene oxide. Various analytic techniques were employed to verify the successful oxidation and reduction in the process and were also used to measure parameters such as layer thickness and conductivity. The obtained graphene oxide has very thin layers, around 13 nm thick on average, and the reduced graphene oxide has an average thickness below 20 nm. The conductivity of the reduced graphene oxide was observed to have a Drude-like response, which is explained on the basis of the Drude model for conductors.

  17. Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems

    NASA Astrophysics Data System (ADS)

    Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric

    We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large and thus it allows one to investigate problems that are out of reach of other approaches. First, we test our method on the non-equilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals a rich physics including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.

  18. Reliable Real-Time Solution of Parametrized Partial Differential Equations: Reduced-Basis Output Bound Methods. Appendix 2

    NASA Technical Reports Server (NTRS)

    Prudhomme, C.; Rovas, D. V.; Veroy, K.; Machiels, L.; Maday, Y.; Patera, A. T.; Turinici, G.; Zang, Thomas A., Jr. (Technical Monitor)

    2002-01-01

    We present a technique for the rapid and reliable prediction of linear-functional outputs of elliptic (and parabolic) partial differential equations with affine parameter dependence. The essential components are (i) (provably) rapidly convergent global reduced basis approximations, Galerkin projection onto a space W_N spanned by solutions of the governing partial differential equation at N selected points in parameter space; (ii) a posteriori error estimation, relaxations of the error-residual equation that provide inexpensive yet sharp and rigorous bounds for the error in the outputs of interest; and (iii) off-line/on-line computational procedures, methods which decouple the generation and projection stages of the approximation process. The operation count for the on-line stage, in which, given a new parameter value, we calculate the output of interest and associated error bound, depends only on N (typically very small) and the parametric complexity of the problem; the method is thus ideally suited for the repeated and rapid evaluations required in the context of parameter estimation, design, optimization, and real-time control.
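
    For a problem with affine parameter dependence, say A(mu) = A0 + mu*A1 with output s(mu) = l^T u(mu), the off-line/on-line decoupling can be sketched as below; the matrices, training parameters, and output functional are hypothetical, and in practice a greedy selection driven by the error bounds would replace the fixed training set.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 400                                          # "truth" dimension
    B0 = rng.standard_normal((n, n))
    A0 = B0 @ B0.T + n * np.eye(n)                   # hypothetical SPD operator piece
    A1 = np.diag(rng.random(n))                      # parameter-dependent piece
    f = rng.random(n)
    l = rng.random(n)                                # output functional

    # Off-line: snapshots at selected parameter points, orthonormalized into W_N.
    mus_train = [0.1, 1.0, 5.0, 20.0]
    snapshots = np.stack([np.linalg.solve(A0 + mu * A1, f) for mu in mus_train], axis=1)
    V, _ = np.linalg.qr(snapshots)
    A0r, A1r = V.T @ A0 @ V, V.T @ A1 @ V            # parameter-independent projections
    fr, lr = V.T @ f, V.T @ l

    def output(mu):
        """On-line stage: cost depends only on N = len(mus_train), not on n."""
        q = np.linalg.solve(A0r + mu * A1r, fr)      # tiny N x N solve
        return lr @ q

    print(output(2.5))                               # rapid evaluation at a new parameter
    ```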

  19. A combined emitter threat assessment method based on ICW-RCM

    NASA Astrophysics Data System (ADS)

    Zhang, Ying; Wang, Hongwei; Guo, Xiaotao; Wang, Yubing

    2017-08-01

    Considering that traditional emitter threat assessment methods have difficulty intuitively reflecting the degree of target threat and suffer from deficiencies in real-time performance and complexity, an algorithm for combined emitter threat assessment based on ICW-RCM (improved combination weighting method, ICW) is proposed on the basis of the radar chart method (RCM). Coarse sorting is integrated with fine sorting in the combined threat assessment: the emitter threat levels are first sorted roughly according to radar operation mode, reducing the task priority of low-threat emitters; emitters with the same radar operation mode are then sorted on the basis of ICW-RCM, and the final results of threat assessment are obtained through the coarse and fine sorting. Simulation analyses show the correctness and effectiveness of this algorithm. Compared with the classical emitter threat assessment method based on CW-RCM, the algorithm is visually intuitive and works quickly with lower complexity.

  20. Sparse-grid, reduced-basis Bayesian inversion: Nonaffine-parametric nonlinear equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Peng, E-mail: peng@ices.utexas.edu; Schwab, Christoph, E-mail: christoph.schwab@sam.math.ethz.ch

    2016-07-01

    We extend the reduced basis (RB) accelerated Bayesian inversion methods for affine-parametric, linear operator equations which are considered in [16,17] to non-affine, nonlinear parametric operator equations. We generalize the analysis of sparsity of parametric forward solution maps in [20] and of Bayesian inversion in [48,49] to the fully discrete setting, including Petrov–Galerkin high-fidelity (“HiFi”) discretization of the forward maps. We develop adaptive, stochastic collocation based reduction methods for the efficient computation of reduced bases on the parametric solution manifold. The nonaffinity and nonlinearity with respect to (w.r.t.) the distributed, uncertain parameters and the unknown solution is collocated; specifically, by the so-called Empirical Interpolation Method (EIM). For the corresponding Bayesian inversion problems, computational efficiency is enhanced in two ways: first, expectations w.r.t. the posterior are computed by adaptive quadratures with dimension-independent convergence rates proposed in [49]; the present work generalizes [49] to account for the impact of the PG discretization in the forward maps on the convergence rates of the Quantities of Interest (QoI for short). Second, we propose to perform the Bayesian estimation only w.r.t. a parsimonious, RB approximation of the posterior density. Based on the approximation results in [49], the infinite-dimensional parametric, deterministic forward map and operator admit N-term RB and EIM approximations which converge at rates which depend only on the sparsity of the parametric forward map. In several numerical experiments, the proposed algorithms exhibit dimension-independent convergence rates which equal, at least, the currently known rate estimates for N-term approximation. We propose to accelerate Bayesian estimation by first offline construction of reduced basis surrogates of the Bayesian posterior density. The parsimonious surrogates can then be employed for online data assimilation and for Bayesian estimation. They also open a perspective for optimal experimental design.
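
    The EIM ingredient can be sketched by its standard greedy construction over a snapshot matrix (assumed given); this is a generic illustration, not the paper's implementation.

    ```python
    import numpy as np

    def eim(snapshots, n_terms):
        """Greedy Empirical Interpolation Method (minimal sketch).
        snapshots: (n_grid, n_samples) evaluations of a nonaffine parametric
        function; returns the basis Q and the interpolation (magic) point indices."""
        Q, pts = [], []
        for _ in range(n_terms):
            if Q:
                Qm = np.stack(Q, axis=1)
                # Interpolate every snapshot at the chosen points; form residuals.
                coef = np.linalg.solve(Qm[pts, :], snapshots[pts, :])
                res = snapshots - Qm @ coef
            else:
                res = snapshots
            j = np.argmax(np.max(np.abs(res), axis=0))   # worst-approximated snapshot
            i = np.argmax(np.abs(res[:, j]))             # its worst grid point
            Q.append(res[:, j] / res[i, j])              # normalized new basis function
            pts.append(i)
        return np.stack(Q, axis=1), np.array(pts)
    ```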

  1. Term Cancellations in Computing Floating-Point Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Sasaki, Tateaki; Kako, Fujio

    We discuss the term cancellation which makes floating-point Gröbner basis computation unstable, and show that error accumulation is never negligible in our previous method. We then present a new method, which removes accumulated errors as far as possible by reducing matrices constructed from coefficient vectors via Gaussian elimination. The method makes manifest the amount of term cancellation caused by the existence of approximately linearly dependent relations among the input polynomials.

  2. Groebner Basis Methods for Stationary Solutions of a Low-Dimensional Model for a Shear Flow

    NASA Astrophysics Data System (ADS)

    Pausch, Marina; Grossmann, Florian; Eckhardt, Bruno; Romanovski, Valery G.

    2014-10-01

    We use Groebner basis methods to extract all stationary solutions for the nine-mode shear flow model described in Moehlis et al. (New J Phys 6:56, 2004). Using rational approximations to irrational wave numbers and algebraic manipulation techniques we reduce the problem of determining all stationary states to finding roots of a polynomial of order 30. The coefficients differ by 30 powers of 10, so that algorithms for extended precision are needed to extract the roots reliably. We find that there are eight stationary solutions consisting of two distinct states, each of which appears in four symmetry-related phases. We discuss extensions of these results for other flows.

  3. Gaussian functional regression for output prediction: Model assimilation and experimental design

    NASA Astrophysics Data System (ADS)

    Nguyen, N. C.; Peraire, J.

    2016-03-01

    In this paper, we introduce a Gaussian functional regression (GFR) technique that integrates multi-fidelity models with model reduction to efficiently predict the input-output relationship of a high-fidelity model. The GFR method combines the high-fidelity model with a low-fidelity model to provide an estimate of the output of the high-fidelity model in the form of a posterior distribution that can characterize uncertainty in the prediction. A reduced basis approximation is constructed upon the low-fidelity model and incorporated into the GFR method to yield an inexpensive posterior distribution of the output estimate. As this posterior distribution depends crucially on a set of training inputs at which the high-fidelity models are simulated, we develop a greedy sampling algorithm to select the training inputs. Our approach results in an output prediction model that inherits the fidelity of the high-fidelity model and has the computational complexity of the reduced basis approximation. Numerical results are presented to demonstrate the proposed approach.

  4. Reducing Postharvest Losses during Storage of Grain Crops to Strengthen Food Security in Developing Countries.

    PubMed

    Kumar, Deepak; Kalita, Prasanta

    2017-01-15

    While fulfilling the food demand of an increasing population remains a major global concern, more than one-third of food is lost or wasted in postharvest operations. Reducing the postharvest losses, especially in developing countries, could be a sustainable solution to increase food availability, reduce pressure on natural resources, eliminate hunger and improve farmers' livelihoods. Cereal grains are the basis of staple food in most of the developing nations, and account for the maximum postharvest losses on a calorific basis among all agricultural commodities. As much as 50%-60% cereal grains can be lost during the storage stage due only to the lack of technical efficiency. Use of scientific storage methods can reduce these losses to as low as 1%-2%. This paper provides a comprehensive literature review of the grain postharvest losses in developing countries, the status and causes of storage losses and discusses the technological interventions to reduce these losses. The basics of hermetic storage, various technology options, and their effectiveness on several crops in different localities are discussed in detail.

  5. Reducing Postharvest Losses during Storage of Grain Crops to Strengthen Food Security in Developing Countries

    PubMed Central

    Kumar, Deepak; Kalita, Prasanta

    2017-01-01

    While fulfilling the food demand of an increasing population remains a major global concern, more than one-third of food is lost or wasted in postharvest operations. Reducing the postharvest losses, especially in developing countries, could be a sustainable solution to increase food availability, reduce pressure on natural resources, eliminate hunger and improve farmers’ livelihoods. Cereal grains are the basis of staple food in most of the developing nations, and account for the maximum postharvest losses on a calorific basis among all agricultural commodities. As much as 50%–60% cereal grains can be lost during the storage stage due only to the lack of technical efficiency. Use of scientific storage methods can reduce these losses to as low as 1%–2%. This paper provides a comprehensive literature review of the grain postharvest losses in developing countries, the status and causes of storage losses and discusses the technological interventions to reduce these losses. The basics of hermetic storage, various technology options, and their effectiveness on several crops in different localities are discussed in detail. PMID:28231087

  6. Domain decomposition for a mixed finite element method in three dimensions

    USGS Publications Warehouse

    Cai, Z.; Parashkevov, R.R.; Russell, T.F.; Wilson, J.D.; Ye, X.

    2003-01-01

    We consider the solution of the discrete linear system resulting from a mixed finite element discretization applied to a second-order elliptic boundary value problem in three dimensions. Based on a decomposition of the velocity space, these equations can be reduced to a discrete elliptic problem by eliminating the pressure through the use of substructures of the domain. The practicality of the reduction relies on a local basis, presented here, for the divergence-free subspace of the velocity space. We consider additive and multiplicative domain decomposition methods for solving the reduced elliptic problem, and their uniform convergence is established.

  7. Optimization of auxiliary basis sets for the LEDO expansion and a projection technique for LEDO-DFT.

    PubMed

    Götz, Andreas W; Kollmar, Christian; Hess, Bernd A

    2005-09-01

    We present a systematic procedure for the optimization of the expansion basis for the limited expansion of diatomic overlap density functional theory (LEDO-DFT) and report on optimized auxiliary orbitals for the Ahlrichs split valence plus polarization basis set (SVP) for the elements H, Li--F, and Na--Cl. A new method to deal with near-linear dependences in the LEDO expansion basis is introduced, which greatly reduces the computational effort of LEDO-DFT calculations. Numerical results for a test set of small molecules demonstrate the accuracy of electronic energies, structural parameters, dipole moments, and harmonic frequencies. For larger molecular systems the numerical errors introduced by the LEDO approximation can lead to uncontrollable behavior of the self-consistent field (SCF) process. A projection technique suggested by Löwdin is presented in the framework of LEDO-DFT, which guarantees SCF convergence. Numerical results on some critical test molecules suggest the general applicability of the auxiliary orbitals presented in combination with this projection technique. Timing results indicate that LEDO-DFT is competitive with conventional density fitting methods.

  8. Nonlinear spline wavefront reconstruction through moment-based Shack-Hartmann sensor measurements.

    PubMed

    Viegers, M; Brunner, E; Soloviev, O; de Visser, C C; Verhaegen, M

    2017-05-15

    We propose a spline-based aberration reconstruction method through moment measurements (SABRE-M). The method uses first and second moment information from the focal spots of the SH sensor to reconstruct the wavefront with bivariate simplex B-spline basis functions. Because it provides higher-order local wavefront estimates with quadratic and cubic basis functions, the proposed method can achieve the same accuracy on SH arrays with a reduced number of subapertures and, correspondingly, larger lenses, which can be beneficial for application in low light conditions. In numerical experiments the performance of SABRE-M is compared to that of the first-moment method SABRE for aberrations of different spatial orders and for different sizes of the SH array. The results show that SABRE-M is superior to SABRE, in particular for the higher-order aberrations, and that SABRE-M can match the performance of SABRE on an SH grid of halved sampling.

  9. Accurate D-bar Reconstructions of Conductivity Images Based on a Method of Moment with Sinc Basis.

    PubMed

    Abbasi, Mahdi

    2014-01-01

    The planar D-bar integral equation is one of the inverse scattering solution methods for complex problems, including the inverse conductivity problem considered in applications such as electrical impedance tomography (EIT). Recently, two different methodologies have been considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid; the first involves a high computational burden and the second suffers from a low convergence rate (CR). In this paper, a novel high-speed method of moments based on the sinc basis is introduced to solve the two-dimensional D-bar integral equation. In this method, all functions within the D-bar integral equation are first expanded using the sinc basis functions. The orthogonal properties of their products then dissolve the integral operator of the D-bar equation and result in a discrete convolution equation. That is, the new moment method leads to the equation's solution without direct computation of the D-bar integral. The resulting discrete convolution equation may be adapted to a suitable structure that can be solved using the fast Fourier transform. This reduces the order of computational complexity to as low as O(N^2 log N). Simulation results on solving D-bar equations arising in the EIT problem show that the proposed method is accurate with an ultra-linear CR.
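
    The computational point is that applying a discrete convolution operator on an N x N grid costs O(N^2 log N) via the 2D FFT rather than O(N^4) directly. A minimal sketch of one such operator application (periodic boundary, illustrative kernel), as might sit inside an iterative solver:

        import numpy as np

        def conv2_fft(kernel, x):
            # One periodic 2D convolution via FFT: O(N^2 log N) per
            # application on an N x N grid, versus O(N^4) done directly.
            return np.real(np.fft.ifft2(np.fft.fft2(kernel) * np.fft.fft2(x)))

        N = 64
        g = np.linspace(-3.0, 3.0, N)
        kernel = np.exp(-g[:, None]**2 - g[None, :]**2)   # illustrative kernel
        x = np.random.default_rng(0).standard_normal((N, N))
        y = conv2_fft(kernel, x)                          # operator application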

  10. A Method to Improve Electron Density Measurement of Cone-Beam CT Using Dual Energy Technique

    PubMed Central

    Men, Kuo; Dai, Jian-Rong; Li, Ming-Hui; Chen, Xin-Yuan; Zhang, Ke; Tian, Yuan; Huang, Peng; Xu, Ying-Jie

    2015-01-01

    Purpose. To develop a dual energy imaging method to improve the accuracy of electron density measurement with a cone-beam CT (CBCT) device. Materials and Methods. The imaging system is the XVI CBCT system on the Elekta Synergy linac. Projection data were acquired with high and low energy X-rays, respectively, to set up a basis material decomposition model. A virtual phantom simulation and phantom experiments were carried out for quantitative evaluation of the method. Phantoms were scanned twice, with the high and low energy X-rays, respectively. The data were decomposed into projections of the two basis material coefficients according to the model set up earlier. The two sets of decomposed projections were used to reconstruct CBCT images of the basis material coefficients. Then, the images of electron densities were calculated from these CBCT images. Results. The difference between the calculated and theoretical values was within 2% and their correlation coefficient was about 1.0. The dual energy imaging method obtained more accurate electron density values and noticeably reduced the beam hardening artifacts. Conclusion. A novel dual energy CBCT imaging method to calculate electron densities was developed. It can acquire more accurate values and potentially provides a platform for dose calculation. PMID:26346510
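
    A toy of the per-ray linear model underlying basis material decomposition, assuming two effective energies and two basis materials; the attenuation coefficients below are placeholders, and the actual method works with calibrated spectral projection data rather than this idealized 2x2 system.

        import numpy as np

        # Placeholder attenuation coefficients of two basis materials at a
        # "low" and a "high" effective energy (not calibrated values).
        M = np.array([[0.25, 0.45],    # low energy:  [mu1_low,  mu2_low ]
                      [0.18, 0.25]])   # high energy: [mu1_high, mu2_high]

        def decompose(p_low, p_high):
            # Solve the 2x2 system for equivalent thicknesses (t1, t2).
            return np.linalg.solve(M, np.array([p_low, p_high]))

        p_low, p_high = M @ np.array([2.0, 1.0])  # ray through t1=2, t2=1
        print(decompose(p_low, p_high))           # -> [2. 1.]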

  11. Computer Models Identify Methods to Reduce Bridge Cracks During Production Processes : Brief

    DOT National Transportation Integrated Search

    2012-06-01

    While most motorists see only the bridge deck, strong bridge girders are the basis for a stable system to support the traffic-handling surface. Concrete bridge girders often have two major components: the web, which is an I-shaped cross section and v...

  12. Wikipedia mining of hidden links between political leaders

    NASA Astrophysics Data System (ADS)

    Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2016-12-01

    We describe a new method, the reduced Google matrix, which makes it possible to establish direct and hidden links between a subset of nodes of a large directed network. This approach draws on parallels with quantum scattering theory, developed for processes in nuclear and mesoscopic physics and quantum chaos. The method is applied to the Wikipedia networks in different language editions, analyzing several groups of political leaders of the USA, UK, Germany, France, Russia and the G20. We demonstrate that this approach reliably recovers direct and hidden links among political leaders. We argue that the reduced Google matrix method can form the mathematical basis for studies in the social and political sciences analyzing Leader-Members eXchange (LMX).
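
    A sketch of the block-matrix construction commonly used for a reduced Google matrix, assuming a column-stochastic G partitioned into the selected nodes and the rest: G_R = G_rr + G_rs (1 - G_ss)^{-1} G_sr, so that indirect paths through the omitted "scattering" nodes are folded into the reduced matrix. The toy network and damping factor are illustrative.

        import numpy as np

        def reduced_google_matrix(G, sel):
            # Fold paths through the non-selected nodes into the subset `sel`:
            # G_R = G_rr + G_rs (1 - G_ss)^{-1} G_sr.
            rest = np.setdiff1d(np.arange(G.shape[0]), sel)
            Grr = G[np.ix_(sel, sel)]
            Grs = G[np.ix_(sel, rest)]
            Gsr = G[np.ix_(rest, sel)]
            Gss = G[np.ix_(rest, rest)]
            return Grr + Grs @ np.linalg.solve(np.eye(len(rest)) - Gss, Gsr)

        # Toy 4-node network, damping factor 0.85:
        A = np.array([[0, 1, 1, 0], [1, 0, 0, 1],
                      [0, 1, 0, 1], [1, 0, 1, 0]], float)
        S = A / A.sum(axis=0)                      # column-stochastic
        G = 0.85 * S + 0.15 / 4 * np.ones((4, 4))
        print(reduced_google_matrix(G, np.array([0, 1])))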

  13. Advanced Discontinuous Galerkin Algorithms and First Open-Field Line Turbulence Simulations

    NASA Astrophysics Data System (ADS)

    Hammett, G. W.; Hakim, A.; Shi, E. L.

    2016-10-01

    New versions of discontinuous Galerkin (DG) algorithms have interesting features that may help with challenging higher-dimensional kinetic problems. We are developing the gyrokinetic code Gkeyll based on DG. DG also has features that may help with the next generation of exascale computers. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communications costs (which are a bottleneck at exascale). DG uses efficient Gaussian quadrature like finite elements, but keeps the calculation local for the kinetic solver, also reducing communication. Sparse grid methods might further reduce the cost significantly in higher dimensions. The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from Gaussian quadrature similar to that used in popular δf gyrokinetic codes. Consistent basis functions avoid high-frequency numerical modes from electromagnetic terms. We will show our first results of 3x+2v simulations of open-field-line/SOL turbulence in a simple helical geometry (like Helimak/TORPEX), with parameters from LAPD, TORPEX, and NSTX. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.

  14. Accelerating wavefunction in density-functional-theory embedding by truncating the active basis set

    NASA Astrophysics Data System (ADS)

    Bennie, Simon J.; Stella, Martina; Miller, Thomas F.; Manby, Frederick R.

    2015-07-01

    Methods where an accurate wavefunction is embedded in a density-functional description of the surrounding environment have recently been simplified through the use of a projection operator to ensure orthogonality of orbital subspaces. Projector embedding already offers significant performance gains over conventional post-Hartree-Fock methods by reducing the number of correlated occupied orbitals. However, in our first applications of the method, we used the atomic-orbital basis for the full system, even for the correlated wavefunction calculation in a small, active subsystem. Here, we further develop our method for truncating the atomic-orbital basis to include only functions within or close to the active subsystem. The number of atomic orbitals in a calculation on a fixed active subsystem becomes asymptotically independent of the size of the environment, producing the required O(N^0) scaling of cost of the calculation in the active subsystem, and accuracy is controlled by a single parameter. The applicability of this approach is demonstrated for the embedded many-body expansion of binding energies of water hexamers and calculation of reaction barriers of SN2 substitution of fluorine by chlorine in α-fluoroalkanes.

  15. Steerable Principal Components for Space-Frequency Localized Images

    PubMed Central

    Landa, Boris; Shkolnisky, Yoel

    2017-01-01

    As modern scientific image datasets typically consist of a large number of images of high resolution, devising methods for their accurate and efficient processing is a central research task. In this paper, we consider the problem of obtaining the steerable principal components of a dataset, a procedure termed “steerable PCA” (steerable principal component analysis). The output of the procedure is the set of orthonormal basis functions which best approximate the images in the dataset and all of their planar rotations. To derive such basis functions, we first expand the images in an appropriate basis, for which the steerable PCA reduces to the eigen-decomposition of a block-diagonal matrix. If we assume that the images are well localized in space and frequency, then such an appropriate basis is the prolate spheroidal wave functions (PSWFs). We derive a fast method for computing the PSWF expansion coefficients from the images' equally spaced samples, via a specialized quadrature integration scheme, and show that the number of required quadrature nodes is similar to the number of pixels in each image. We then establish that our PSWF-based steerable PCA is both faster and more accurate than existing methods and, more importantly, provides rigorous error bounds on the entire procedure. PMID:29081879

  16. Developing germplasm resources to identify the genetic basis of resistance to common scab in potato

    USDA-ARS?s Scientific Manuscript database

    Common scab, caused mainly by the soil-borne bacterium Streptomyces scabies, produces lesions on potato tubers, reducing tuber quality and profitability. Methods to manage common scab are often expensive, impractical, and can be ineffective. Therefore, creating cultivars that are resistant to common...

  17. Establishing an index arbitrage model by applying neural networks method--a case study of Nikkei 225 index.

    PubMed

    Chen, A P; Chianglin, C Y; Chung, H P

    2001-10-01

    This paper applies the neural network method to establish an index arbitrage model and compares its arbitrage performance with that of the traditional cost-of-carry arbitrage model. From the empirical results for the Nikkei 225 stock index market, the following conclusions can be stated: (1) when the basis widens over a period of time, more profit can be obtained from the trend; (2) applying the neural network within the index arbitrage model yields roughly twice the return of the traditional arbitrage model; (3) if the T_basis follows a volatile trend, the neural network arbitrage model will ignore the peak, so although the arbitrageur loses the chance for that profit, the market impact risk is reduced.

  18. Primary decomposition of zero-dimensional ideals over finite fields

    NASA Astrophysics Data System (ADS)

    Gao, Shuhong; Wan, Daqing; Wang, Mingsheng

    2009-03-01

    A new algorithm is presented for computing the primary decomposition of zero-dimensional ideals over finite fields. Like Berlekamp's algorithm for univariate polynomials, the new method is based on the invariant subspace of the Frobenius map acting on the quotient algebra. The dimension of the invariant subspace equals the number of primary components, and a basis of the invariant subspace yields a complete decomposition. Unlike previous approaches for decomposing multivariate polynomial systems, the new method needs neither primality testing nor any generic projection; instead, it reduces the general decomposition problem directly to root finding of univariate polynomials over the ground field. It is also shown how Groebner basis structure can be used to obtain a partial primary decomposition without any root finding.

  19. Power Source Status Estimation and Drive Control Method for Autonomous Decentralized Hybrid Train

    NASA Astrophysics Data System (ADS)

    Furuya, Takemasa; Ogawa, Kenichi; Yamamoto, Takamitsu; Hasegawa, Hitoshi

    A hybrid control system has two main functions: power sharing and equipment protection. In this paper, we discuss the design, construction and testing of a drive control method for an autonomous decentralized hybrid train with 100-kW-class fuel cells (FC) and 36-kWh lithium-ion batteries (Li-Batt). The main objectives of this study are to identify the operation status of the power sources on the basis of the input voltage of the traction inverter and to estimate the maximum traction power on the basis of the power-source status. The proposed control method is useful in preventing overload operation of the onboard power sources in an autonomous decentralized hybrid system that has a flexible main circuit configuration and few control signal lines. Further, with this method, the initial cost of a hybrid system can be reduced and the retrofit design of the hybrid system can be simplified. The effectiveness of the proposed method is experimentally confirmed by using a real-scale hybrid train system.

  20. The relation between periods’ identification and noises in hydrologic series data

    NASA Astrophysics Data System (ADS)

    Sang, Yan-Fang; Wang, Dong; Wu, Ji-Chun; Zhu, Qing-Ping; Wang, Ling

    2009-04-01

    Identification of dominant periods is a typical and important issue in hydrologic series analysis, since it is the basis of building effective stochastic models and understanding complex hydrologic processes. It remains a difficult task, however, owing to the influence of many interrelated factors, such as noise in hydrologic series data. In this paper, the strong influence of noise on period identification is first analyzed. Then, based on two conventional methods of hydrologic series analysis, wavelet analysis (WA) and maximum entropy spectral analysis (MESA), a new method for period identification in hydrologic series data, main series spectral analysis (MSSA), is put forward; its main idea is to identify the periods of the main series after reducing hydrologic noise. Various methods (including the fast Fourier transform (FFT), MESA and MSSA) have been applied to both synthetic series and observed hydrologic series. Results show that the conventional methods (FFT and MESA) are not as good as expected, owing to the strong influence of noise, whereas this influence is much weaker for the new MSSA method. In addition, the results are more reasonable when using the new de-noising method proposed in this paper, which is suitable for both normal and skew noises, since the noise separated from hydrologic series data generally follows skew probability distributions. In conclusion, comprehensive analyses indicate that the proposed MSSA method can improve period identification by effectively reducing the influence of hydrologic noise.

  1. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pinski, Peter; Riplinger, Christoph; Neese, Frank, E-mail: evaleev@vt.edu, E-mail: frank.neese@cec.mpg.de

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.

  2. Sparse maps—A systematic infrastructure for reduced-scaling electronic structure methods. I. An efficient and simple linear scaling local MP2 method that uses an intermediate basis of pair natural orbitals.

    PubMed

    Pinski, Peter; Riplinger, Christoph; Valeev, Edward F; Neese, Frank

    2015-07-21

    In this work, a systematic infrastructure is described that formalizes concepts implicit in previous work and greatly simplifies computer implementation of reduced-scaling electronic structure methods. The key concept is sparse representation of tensors using chains of sparse maps between two index sets. Sparse map representation can be viewed as a generalization of compressed sparse row, a common representation of a sparse matrix, to tensor data. By combining a few elementary operations on sparse maps (inversion, chaining, intersection, etc.), complex algorithms can be developed, illustrated here by a linear-scaling transformation of three-center Coulomb integrals based on our compact code library that implements sparse maps and operations on them. The sparsity of the three-center integrals arises from spatial locality of the basis functions and the domain density fitting approximation. A novel feature of our approach is the use of differential overlap integrals computed in linear-scaling fashion for screening products of basis functions. Finally, a robust linear scaling domain based local pair natural orbital second-order Møller-Plesset (DLPNO-MP2) method is described based on the sparse map infrastructure that only depends on a minimal number of cutoff parameters that can be systematically tightened to approach 100% of the canonical MP2 correlation energy. With default truncation thresholds, DLPNO-MP2 recovers more than 99.9% of the canonical resolution of the identity MP2 (RI-MP2) energy while still showing a very early crossover with respect to the computational effort. Based on extensive benchmark calculations, relative energies are reproduced with an error of typically <0.2 kcal/mol. The efficiency of the local MP2 (LMP2) method can be drastically improved by carrying out the LMP2 iterations in a basis of pair natural orbitals. While the present work focuses on local electron correlation, it is of much broader applicability to computation with sparse tensors in quantum chemistry and beyond.
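
    A toy illustration of the sparse-map concept, assuming a sparse map is stored as a dictionary from indices to sets of "significant" target indices; the chain and intersect operations below mimic two of the elementary operations named in the abstract, not the authors' code library.

        from collections import defaultdict

        # A sparse map sends each index to the set of "connected" indices in
        # another index set (e.g. atom -> significant basis shells).
        def chain(m1, m2):
            # Compose maps: i -> union of m2[j] over all j in m1[i].
            out = defaultdict(set)
            for i, js in m1.items():
                for j in js:
                    out[i] |= m2.get(j, set())
            return dict(out)

        def intersect(m1, m2):
            # Keep only targets present in both maps.
            return {i: m1[i] & m2.get(i, set()) for i in m1}

        atom_to_shell = {0: {0, 1}, 1: {1, 2}}
        shell_to_aux = {0: {10}, 1: {10, 11}, 2: {12}}
        print(chain(atom_to_shell, shell_to_aux))  # atom -> auxiliary functions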

  3. Surrogate models for sheet metal stamping problem based on the combination of proper orthogonal decomposition and radial basis function

    NASA Astrophysics Data System (ADS)

    Dang, Van Tuan; Lafon, Pascal; Labergere, Carl

    2017-10-01

    In this work, a combination of Proper Orthogonal Decomposition (POD) and Radial Basis Function (RBF) interpolation is proposed to build a surrogate model based on the Benchmark Springback 3D bending problem from the Numisheet 2011 congress. The influence of two design parameters, the geometrical parameter of the die radius and the process parameter of the blank holder force, on the springback of the sheet after a stamping operation is analyzed. A classical design of experiments (DoE) with a full factorial design is used to sample the parameter space, providing input data for finite element method (FEM) simulations of the sheet metal stamping process. The basic idea is to consider the design parameters as additional dimensions for the solution of the displacement fields. The order of the resulting high-fidelity model is reduced through the POD method, which performs model space reduction and yields the basis functions of the low-order model. Specifically, the snapshot method is used in our work, in which the basis functions are derived from the snapshot deviations of the matrix of the final displacement fields of the FEM simulations. The obtained basis functions are then used to determine the POD coefficients, and RBF interpolation of these POD coefficients is performed over the parameter space. Finally, the presented POD-RBF approach can be used for shape optimization with high accuracy.
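
    A minimal sketch of the POD-RBF surrogate construction under stated assumptions: random data stands in for the FEM snapshots, and scipy's RBFInterpolator handles the interpolation of POD coefficients over the two-dimensional (die radius, blank-holder force) parameter space.

        import numpy as np
        from scipy.interpolate import RBFInterpolator

        # Snapshot matrix: each column is a displacement field computed at one
        # parameter sample (r, f). Random data stands in for FEM results.
        rng = np.random.default_rng(0)
        params = rng.uniform([5.0, 10.0], [15.0, 50.0], size=(20, 2))
        snapshots = rng.standard_normal((5000, 20))

        mean = snapshots.mean(axis=1, keepdims=True)
        U, s, Vt = np.linalg.svd(snapshots - mean, full_matrices=False)
        k = 5                                   # retained POD modes
        basis = U[:, :k]                        # low-order basis
        coeffs = basis.T @ (snapshots - mean)   # POD coefficients per snapshot

        # RBF interpolation of the k coefficients over the parameter space:
        rbf = RBFInterpolator(params, coeffs.T)

        def surrogate(r, f):
            return mean[:, 0] + basis @ rbf(np.array([[r, f]]))[0]

        print(surrogate(10.0, 30.0).shape)      # (5000,) reconstructed field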

  4. Reduction of gas flow nonuniformity in gas turbine engines by means of gas-dynamic methods

    NASA Astrophysics Data System (ADS)

    Matveev, V.; Baturin, O.; Kolmakova, D.; Popov, G.

    2017-08-01

    Gas flow nonuniformity is one of the main sources of rotor blade vibrations in gas turbine engines. Usually, the circumferential flow nonuniformity occurs near the annular frames located in the flow channel of the engine. This leads to increased dynamic stresses in the blades and, as a consequence, to blade damage. The goal of the research was to find an acceptable method of reducing the level of gas flow nonuniformity as a source of dynamic stresses in the rotor blades. Two different methods were investigated during this research. This study thus gives an idea of methods for improving the flow structure in gas turbine engines and, on the basis of the existing conditions (an engine under development or an existing one), allows the most suitable method for reducing gas flow nonuniformity to be selected.

  5. A simple finite element method for the Stokes equations

    DOE PAGES

    Mu, Lin; Ye, Xiu

    2017-03-21

    The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with far fewer unknowns. The numerical experiments indicate that the method is accurate.
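
    A small algebraic sketch of the reduction described above: given a basis Z for the null space of the discrete divergence B, the saddle point problem collapses to the symmetric positive definite system Z^T A Z. The matrices here are random stand-ins, not the paper's finite element matrices.

        import numpy as np
        from scipy.linalg import null_space

        rng = np.random.default_rng(1)
        n, m = 12, 4                      # velocity/pressure unknowns (toy sizes)
        A = rng.standard_normal((n, n))
        A = A @ A.T + n * np.eye(n)       # SPD velocity block
        B = rng.standard_normal((m, n))   # discrete divergence operator
        f = rng.standard_normal(n)

        Z = null_space(B)                 # basis of the divergence-free subspace
        u = Z @ np.linalg.solve(Z.T @ A @ Z, Z.T @ f)   # small SPD solve

        print(np.abs(B @ u).max())        # u is (numerically) divergence free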

  6. A simple finite element method for the Stokes equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Ye, Xiu

    The goal of this paper is to introduce a simple finite element method to solve the Stokes equations. This method is in primal velocity-pressure formulation and is so simple that both velocity and pressure are approximated by piecewise constant functions. Implementation issues as well as error analysis are investigated. A basis for a divergence-free subspace of the velocity field is constructed so that the original saddle point problem can be reduced to a symmetric and positive definite system with far fewer unknowns. The numerical experiments indicate that the method is accurate.

  7. K-Nearest Neighbor Algorithm Optimization in Text Categorization

    NASA Astrophysics Data System (ADS)

    Chen, Shufeng

    2018-01-01

    The K-nearest neighbor (KNN) classification algorithm is one of the simplest methods in data mining. It has been widely used in classification, regression and pattern recognition. The traditional KNN method has some shortcomings, such as the large amount of sample computation and a strong dependence on the sample library capacity. In this paper, a method of representative sample optimization based on the CURE algorithm is proposed. On this basis, a quick algorithm, QKNN (quick k-nearest neighbor), is presented to find the k nearest neighbor samples, which greatly reduces the similarity computation. The experimental results show that this algorithm can effectively reduce the number of samples and speed up the search for the k nearest neighbor samples, improving the performance of the algorithm.
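
    A toy of the representative-sample idea, with k-means centroids standing in for CURE representatives (an assumption, not the paper's exact procedure): the KNN search runs against a library 50 times smaller than the original sample set.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(2)
        X = np.vstack([rng.normal(0, 1, (500, 2)), rng.normal(4, 1, (500, 2))])
        y = np.array([0] * 500 + [1] * 500)

        # Replace each class by a handful of representative points, then run
        # plain KNN against the reduced sample library only.
        reps, labels = [], []
        for c in (0, 1):
            km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(X[y == c])
            reps.append(km.cluster_centers_)
            labels += [c] * 10

        knn = KNeighborsClassifier(n_neighbors=3).fit(np.vstack(reps), labels)
        print(knn.score(X, y))   # accuracy with a 50x smaller library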

  8. Reconstruction of local perturbations in periodic surfaces

    NASA Astrophysics Data System (ADS)

    Lechleiter, Armin; Zhang, Ruming

    2018-03-01

    This paper concerns the inverse scattering problem of reconstructing a local perturbation in a periodic structure. Unlike in purely periodic problems, the periodicity of the scattered field no longer holds, so classical methods, which reduce quasi-periodic fields to one periodic cell, are no longer available. Based on the Floquet-Bloch transform, a numerical method has been developed to solve the direct problem, which opens the possibility of designing an algorithm for the inverse problem. The numerical method introduced in this paper contains two steps. The first step is initialization, that is, locating the support of the perturbation by a simple method; this step reduces the inverse problem from an infinite domain to one periodic cell. The second step is to apply the Newton-CG method to solve the associated optimization problem. The perturbation is then approximated by a finite spline basis. Numerical examples are given at the end of this paper, showing the efficiency of the numerical method.

  9. Application of Non-destructive Methods of Stress-strain State at Hazardous Production Facilities

    NASA Astrophysics Data System (ADS)

    Shram, V.; Kravtsova, Ye; Selsky, A.; Bezborodov, Yu; Lysyannikova, N.; Lysyannikov, A.

    2016-06-01

    The paper deals with the sources of accidents in distillation columns, on the basis of which the most dangerous defects are identified. An analysis of currently existing methods of non-destructive testing of the stress-strain state is performed. It is proposed to apply strain measurement and acoustic emission techniques to continuously monitor dangerous objects, which helps prevent accidents and reduces the amount of inspection work required.

  10. Less is Better. Laboratory Chemical Management for Waste Reduction.

    ERIC Educational Resources Information Center

    American Chemical Society, Washington, DC.

    An objective of the American Chemical Society is to promote alternatives to landfilling for the disposal of laboratory chemical wastes. One method is to reduce the amount of chemicals that become wastes. This is the basis for the "less is better" philosophy. This bulletin discusses various techniques involved in purchasing control,…

  11. Rapidly calculated density functional theory (DFT) relaxed Iso-potential Phi Si Maps: Beta-cellobiose

    USDA-ARS?s Scientific Manuscript database

    New cellobiose Phi-H/Si-H maps are rapidly generated using a mixed basis set DFT method, found to achieve a high level of confidence while reducing computer resources dramatically. Relaxed iso-potential maps are made for different conformational states of cellobiose, showing how glycosidic bond dihe...

  12. Method for Evaluating Information to Solve Problems of Control, Monitoring and Diagnostics

    NASA Astrophysics Data System (ADS)

    Vasil'ev, V. A.; Dobrynina, N. V.

    2017-06-01

    The article describes a method for evaluating information to solve problems of control, monitoring and diagnostics. The method is needed to reduce the dimensionality of the informational indicators of situations, bring them to relative units, calculate generalized information indicators on their basis, rank them by characteristic levels, and calculate the efficiency criterion of a system functioning in real time. On its basis, the design of an information evaluation system has been developed that allows information about an object to be analyzed, processed and assessed. Such an object can be a complex technical, economic or social system. The method, and the system based on it, can find wide application in the analysis, processing and evaluation of information on the functioning of systems, regardless of their purpose, goals, tasks and complexity. For example, they can be used to assess the innovation capacities of industrial enterprises and management decisions.

  13. Calculation of smooth potential energy surfaces using local electron correlation methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mata, Ricardo A.; Werner, Hans-Joachim

    2006-11-14

    The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl- with alkyl chlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.

  14. [Optimized application of nested PCR method for detection of malaria].

    PubMed

    Yao-Guang, Z; Li, J; Zhen-Yu, W; Li, C

    2017-04-28

    Objective: To optimize the application of the nested PCR method for the detection of malaria according to working practice, so as to improve the efficiency of malaria detection. Methods: A PCR premixing solution, internal primers for further amplification, and newly designed primers aimed at the two Plasmodium ovale subspecies were employed to optimize the reaction system, reaction conditions and P. ovale-specific primers on the basis of the routine nested PCR. The specificity and sensitivity of the optimized method were then analyzed. Positive blood samples and examination samples of malaria were detected by the routine nested PCR and the optimized method simultaneously, and the detection results were compared and analyzed. Results: The optimized method showed good specificity, and its sensitivity could reach the pg to fg level. When the two methods were used to detect the same positive malarial blood samples simultaneously, the PCR products of the two methods showed no significant difference, but non-specific amplification was obviously reduced, the detection rate of P. ovale subspecies improved, and the overall specificity also increased with the optimized method. The detection results for 111 cases of malarial blood samples showed that the sensitivity and specificity of the routine nested PCR were 94.57% and 86.96%, respectively, while those of the optimized method were both 93.48%; the difference in sensitivity between the two methods was not statistically significant (P > 0.05), but the difference in specificity was (P < 0.05). Conclusion: The optimized PCR can improve the specificity without reducing the sensitivity compared with the routine nested PCR; it can also reduce costs and increase the efficiency of malaria detection owing to fewer experimental steps.

  15. Equation-of-motion coupled-cluster method for doubly ionized states with spin-orbit coupling.

    PubMed

    Wang, Zhifan; Hu, Shu; Wang, Fan; Guo, Jingwei

    2015-04-14

    In this work, we report implementation of the equation-of-motion coupled-cluster method for doubly ionized states (EOM-DIP-CC) with spin-orbit coupling (SOC) using a closed-shell reference. Double ionization potentials (DIPs) are calculated in the space spanned by 2h and 3h1p determinants with the EOM-DIP-CC approach at the CC singles and doubles level (CCSD). Time-reversal symmetry together with spatial symmetry is exploited to reduce computational effort. To circumvent the problem of unstable dianion references when diffuse basis functions are included, nuclear charges are scaled. Effect of this stabilization potential on DIPs is estimated based on results from calculations using a small basis set without diffuse basis functions. DIPs and excitation energies of some low-lying states for a series of open-shell atoms and molecules containing heavy elements with two unpaired electrons have been calculated with the EOM-DIP-CCSD approach. Results show that this approach is able to afford a reliable description on SOC splitting. Furthermore, the EOM-DIP-CCSD approach is shown to provide reasonable excitation energies for systems with a dianion reference when diffuse basis functions are not employed.

  16. Equation-of-motion coupled-cluster method for doubly ionized states with spin-orbit coupling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Zhifan; Hu, Shu; Guo, Jingwei

    2015-04-14

    In this work, we report implementation of the equation-of-motion coupled-cluster method for doubly ionized states (EOM-DIP-CC) with spin-orbit coupling (SOC) using a closed-shell reference. Double ionization potentials (DIPs) are calculated in the space spanned by 2h and 3h1p determinants with the EOM-DIP-CC approach at the CC singles and doubles level (CCSD). Time-reversal symmetry together with spatial symmetry is exploited to reduce computational effort. To circumvent the problem of unstable dianion references when diffuse basis functions are included, nuclear charges are scaled. Effect of this stabilization potential on DIPs is estimated based on results from calculations using a small basis set without diffuse basis functions. DIPs and excitation energies of some low-lying states for a series of open-shell atoms and molecules containing heavy elements with two unpaired electrons have been calculated with the EOM-DIP-CCSD approach. Results show that this approach is able to afford a reliable description on SOC splitting. Furthermore, the EOM-DIP-CCSD approach is shown to provide reasonable excitation energies for systems with a dianion reference when diffuse basis functions are not employed.

  17. [Ambient air interference in oxygen intake measurements in liquid incubating media with the use of open polarographic cells].

    PubMed

    Miniaev, M V; Voronchikhina, L I

    2007-01-01

    A model of oxygen intake by aerobic bio-objects in liquid incubating media was applied to investigate the influence of the air-media interface area on the accuracy of oxygen intake measurements and the magnitude of the error. It was shown that intrusion of atmospheric oxygen increases the relative error to 24% in open polarographic cells and to 13% in cells with a reduced interface area. The results of modeling passive media oxygenation laid the basis for proposing a method that reduces the relative error by 66% for open cells and by 15% for cells with a reduced interface area.

  18. Scanning tunneling microscopy image simulation of the rutile (110) TiO2 surface with hybrid functionals and the localized basis set approach

    NASA Astrophysics Data System (ADS)

    Di Valentin, Cristiana

    2007-10-01

    In this work we present a simplified procedure for using hybrid functionals and localized atomic basis sets to simulate scanning tunneling microscopy (STM) images of the stoichiometric, reduced and hydroxylated rutile (110) TiO2 surface. For the two defective systems it is necessary to introduce some exact Hartree-Fock exchange in the exchange functional in order to correctly describe the details of the electronic structure. Results are compared to the standard density functional theory and planewave basis set approach. Both methods have advantages and drawbacks, which are analyzed in detail. In particular, for the localized basis set approach it is necessary to introduce a number of Gaussian functions in the vacuum region above the surface in order to correctly describe the exponential decay of the integrated local density of states from the surface. In the planewave periodic approach, a thick vacuum region is required to achieve correct results. Simulated STM images are obtained for both the reduced and hydroxylated surfaces, which compare nicely with experimental findings. A direct comparison of the two defects as displayed in the simulated STM images indicates that the OH groups should appear brighter than oxygen vacancies, in perfect agreement with the experimental STM data.

  19. Chemical compositions, chromatographic fingerprints and antioxidant activities of Andrographis Herba.

    PubMed

    Zhao, Yang; Kao, Chun-Pin; Wu, Kun-Chang; Liao, Chi-Ren; Ho, Yu-Ling; Chang, Yuan-Shiun

    2014-11-10

    This paper describes the development of an HPLC-UV-MS method for quantitative determination of andrographolide and dehydroandrographolide in Andrographis Herba and establishment of its chromatographic fingerprint. The method was validated for linearity, limit of detection and quantification, inter- and intra-day precisions, repeatability, stability and recovery. All the validation results of quantitative determination and fingerprinting methods were satisfactory. The developed method was then applied to assay the contents of andrographolide and dehydroandrographolide and to acquire the fingerprints of all the collected Andrographis Herba samples. Furthermore, similarity analysis and principal component analysis were used to reveal the similarities and differences between the samples on the basis of the characteristic peaks. More importantly, the DPPH free radical-scavenging and ferric reducing capacities of the Andrographis Herba samples were assayed. By bivariate correlation analysis, we found that six compounds are positively correlated to DPPH free radical scavenging and ferric reducing capacities, and four compounds are negatively correlated to DPPH free radical scavenging and ferric reducing capacities.

  20. Basis material decomposition method for material discrimination with a new spectrometric X-ray imaging detector

    NASA Astrophysics Data System (ADS)

    Brambilla, A.; Gorecki, A.; Potop, A.; Paulus, C.; Verger, L.

    2017-08-01

    Energy-sensitive photon counting X-ray detectors provide energy-dependent information which can be exploited for material identification. The attenuation of an X-ray beam as a function of energy depends on the effective atomic number Zeff and the density. However, the measured attenuation is degraded by imperfections of the detector response such as charge sharing or pile-up. These imperfections lead to non-linearities that limit the benefits of energy-resolved imaging. This work aims to implement a basis material decomposition method which overcomes these problems. Basis material decomposition is based on the fact that the attenuation of any material or complex object can be accurately reproduced by a combination of equivalent thicknesses of basis materials. Our method relies on a calibration phase to learn the response of the detector for different combinations of thicknesses of the basis materials. The decomposition algorithm finds the thicknesses of the basis materials whose spectrum is closest to the measurement, using a maximum likelihood criterion and assuming a Poisson distribution of photon counts in each energy bin. The method was used with a ME100 linear-array spectrometric X-ray imager to decompose different plastic materials on a polyethylene and polyvinyl chloride basis. The resulting equivalent thicknesses were used to estimate the effective atomic number Zeff. The results are in good agreement with the theoretical Zeff, regardless of the plastic sample thickness. The linear behaviour of the equivalent thicknesses makes it possible to process overlapped materials. Moreover, the method was tested with a three-material basis by adding gadolinium, whose K-edge is not accounted for by the other two materials. The proposed method has the advantage that it can be used with any number of energy channels, taking full advantage of the high energy resolution of the ME100 detector. Although in principle two channels are sufficient, experimental measurements show that using a larger number of channels significantly improves the accuracy of the decomposition by reducing noise and systematic bias.
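
    A toy of the maximum-likelihood search over basis-material thicknesses, assuming an idealized forward model with placeholder attenuation values and ignoring the detector-response calibration the method relies on: minimize the Poisson negative log-likelihood of the per-bin counts.

        import numpy as np
        from scipy.optimize import minimize

        bins = 8
        mu1 = np.linspace(0.5, 0.2, bins)   # placeholder attenuation, material 1
        mu2 = np.linspace(0.9, 0.3, bins)   # placeholder attenuation, material 2
        I0 = np.full(bins, 1e4)             # open-beam counts per energy bin

        def expected(t):
            # Forward model: expected counts for thicknesses t = (t1, t2).
            return I0 * np.exp(-mu1 * t[0] - mu2 * t[1])

        def neg_log_like(t, counts):
            # Poisson negative log-likelihood (constant terms dropped).
            lam = expected(t)
            return np.sum(lam - counts * np.log(lam))

        rng = np.random.default_rng(3)
        counts = rng.poisson(expected([1.2, 0.4]))       # simulated measurement
        fit = minimize(neg_log_like, x0=[1.0, 1.0], args=(counts,),
                       bounds=[(0, None), (0, None)])
        print(fit.x)                                     # ~ [1.2, 0.4]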

  1. The Reduced Basis Method in Geosciences: Practical examples for numerical forward simulations

    NASA Astrophysics Data System (ADS)

    Degen, D.; Veroy, K.; Wellmann, F.

    2017-12-01

    Due to the highly heterogeneous character of the earth's subsurface, the complex coupling of thermal, hydrological, mechanical, and chemical processes, and the limited accessibility, we face high-dimensional problems associated with high uncertainties in the geosciences. Performing the necessary uncertainty quantifications with a reasonable number of parameters is often not possible due to the high-dimensional character of the problem. Therefore, we present the reduced basis (RB) method, a model order reduction (MOR) technique that constructs low-order approximations to, for instance, the finite element (FE) space. We use the RB method to address these computationally challenging simulations because it significantly reduces the number of degrees of freedom. The RB method is decomposed into an offline and an online stage, allowing the expensive pre-computations to be made beforehand so that real-time results are available during field campaigns. Generally, the RB approach is most beneficial in the many-query and real-time contexts. We illustrate the advantages of the RB method for the geosciences through two examples of numerical forward simulations. The first example, a geothermal conduction problem, demonstrates the implementation of the RB method for a steady-state case. The second example, a Darcy flow problem, shows the benefits for transient scenarios. In both cases, a quality evaluation of the approximations is given, and the runtimes of the FE and RB simulations are compared. We emphasize the advantages of this method for repetitive simulations by showing the speed-up of the RB solution relative to the FE solution. Finally, we demonstrate how the implementation can be used on high-performance computing (HPC) infrastructures and evaluate its performance there, pointing out in particular its scalability, which yields optimal usage on HPC infrastructures and normal workstations.
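
    A minimal sketch of the offline/online split for an affinely parametrized problem A(mu) = A0 + mu*A1: offline, snapshots at a few parameter values are orthonormalized into a reduced basis V and the operators are projected once; online, only a tiny system is assembled and solved per query. The 1D diffusion-like matrices and parameter values are illustrative, not the authors' geothermal or Darcy models.

        import numpy as np

        n = 200
        A0 = (np.diag(2 * np.ones(n)) + np.diag(-np.ones(n - 1), 1)
              + np.diag(-np.ones(n - 1), -1))       # fixed part
        A1 = np.diag(np.linspace(0.1, 1.0, n))      # parameter-dependent part
        b = np.ones(n)

        # Offline: snapshots at a few parameters, orthonormal RB basis,
        # and one-time projection of the operators.
        mus = [0.1, 1.0, 5.0, 10.0]
        S = np.column_stack([np.linalg.solve(A0 + mu * A1, b) for mu in mus])
        V, _ = np.linalg.qr(S)
        A0r, A1r, br = V.T @ A0 @ V, V.T @ A1 @ V, V.T @ b

        # Online: for any new mu, assemble and solve only a 4x4 system.
        def rb_solve(mu):
            return V @ np.linalg.solve(A0r + mu * A1r, br)

        mu = 3.0
        print(np.linalg.norm(rb_solve(mu) - np.linalg.solve(A0 + mu * A1, b)))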

  2. Antioxidant activity of food constituents: an overview.

    PubMed

    Gülçin, İlhami

    2012-03-01

    Recently, there has been growing interest in research into the role of plant-derived antioxidants in food and human health. The beneficial influence of many foodstuffs and beverages, including fruits, vegetables, tea, coffee, and cacao, on human health has recently been recognized as originating from their antioxidant activity. For this purpose, the methods most commonly used for the in vitro determination of the antioxidant capacity of food constituents are reviewed and presented, and the general chemistry underlying the assays is clarified. Hence, this overview provides a basis and rationale for developing standardized antioxidant capacity methods for the food, nutraceutical, and dietary supplement industries. In addition, the most important advantages and shortcomings of each method are identified and highlighted. The chemical principles of the following methods are outlined and critically discussed: 2,2'-azinobis-(3-ethylbenzothiazoline-6-sulphonate) radical (ABTS(·+)) scavenging, 1,1-diphenyl-2-picrylhydrazyl (DPPH(·)) radical scavenging, the Fe(3+)-Fe(2+) transformation assay, the ferric reducing antioxidant power (FRAP) assay, the cupric ion (Cu(2+)) reducing power assay (CUPRAC), the Folin-Ciocalteu reducing capacity (FCR) assay, peroxyl radical scavenging, superoxide anion radical (O(2)(·-)) scavenging, hydrogen peroxide (H(2)O(2)) scavenging, hydroxyl radical (OH(·)) scavenging, the singlet oxygen ((1)O(2)) quenching assay and the nitric oxide radical (NO(·)) scavenging assay. The general antioxidant aspects of the main food components are also discussed in terms of the methods currently used to detect their antioxidant properties. This review consists of two main sections: the first is devoted to the main components of foodstuffs and beverages; the second gives definitions of the main antioxidant methods commonly used to determine the antioxidant activity of components in foodstuffs and beverages, along with some chemical and kinetic foundations and technical details of the methods used.

  3. Cholesky-decomposed density MP2 with density fitting: Accurate MP2 and double-hybrid DFT energies for large systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maurer, Simon A.; Clin, Lucien; Ochsenfeld, Christian, E-mail: christian.ochsenfeld@uni-muenchen.de

    2014-06-14

    Our recently developed QQR-type integral screening is introduced in our Cholesky-decomposed pseudo-densities Møller-Plesset perturbation theory of second order (CDD-MP2) method. We use the resolution-of-the-identity (RI) approximation in combination with efficient integral transformations employing sparse matrix multiplications. The RI-CDD-MP2 method shows an asymptotic cubic scaling behavior with system size and a small prefactor that results in an early crossover to conventional methods for both small and large basis sets. We also explore the use of local fitting approximations, which allow us to further reduce the scaling behavior for very large systems. The reliability of our method is demonstrated on test sets for interaction and reaction energies of medium-sized systems and on a diverse selection from our own benchmark set for total energies of larger systems. Timings on DNA systems show that fast calculations for systems with more than 500 atoms are feasible using a single processor core. Parallelization extends the range of accessible system sizes on one computing node with multiple cores to more than 1000 atoms in a double-zeta basis and more than 500 atoms in a triple-zeta basis.

  4. Model and Data Reduction for Control, Identification and Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Kramer, Boris

    This dissertation focuses on problems in design, optimization and control of complex, large-scale dynamical systems from different viewpoints. The goal is to develop new algorithms and methods that solve real problems more efficiently, together with providing mathematical insight into the success of those methods. There are three main contributions in this dissertation. In Chapter 3, we provide a new method to solve large-scale algebraic Riccati equations, which arise in optimal control, filtering and model reduction. We present a projection-based algorithm utilizing proper orthogonal decomposition, which is demonstrated to produce highly accurate solutions at low rank. The method is parallelizable, easy to implement for practitioners, and is a first step towards a matrix-free approach to solve AREs. Numerical examples for n ≥ 10^6 unknowns are presented. In Chapter 4, we develop a system identification method which is motivated by tangential interpolation. This addresses the challenge of fitting linear time-invariant systems to input-output responses of complex dynamics, where the number of inputs and outputs is relatively large. The method reduces the computational burden imposed by a full singular value decomposition by carefully choosing directions on which to project the impulse response prior to assembly of the Hankel matrix. The identification and model reduction step follows from the eigensystem realization algorithm. We present three numerical examples, a mass-spring-damper system, a heat transfer problem, and a fluid dynamics system, and obtain error bounds and stability results for this method. Chapter 5 deals with control and observation design for parameter-dependent dynamical systems. We address this by using local parametric reduced order models, which can be used online. Data available from simulations of the system at various configurations (parameters, boundary conditions) is used to extract a sparse basis to represent the dynamics (via dynamic mode decomposition). Subsequently, a new, compressed-sensing-based classification algorithm is developed which incorporates the extracted dynamic information into the sensing basis. We show that this augmented classification basis makes the method more robust to noise and results in superior identification of the correct parameter. Numerical examples consist of a Navier-Stokes as well as a Boussinesq flow application.

  5. Analytic energy gradients for orbital-optimized MP3 and MP2.5 with the density-fitting approximation: An efficient implementation.

    PubMed

    Bozkaya, Uğur

    2018-03-15

    Efficient implementations of analytic gradients for the orbital-optimized MP3 and MP2.5 and their standard versions with the density-fitting approximation, denoted DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5, are presented. The DF-MP3, DF-MP2.5, DF-OMP3, and DF-OMP2.5 methods are applied to a set of alkanes and noncovalent interaction complexes to compare the computational cost with the conventional MP3, MP2.5, OMP3, and OMP2.5. Our results demonstrate that the density-fitted perturbation theory (DF-MP) methods considered substantially reduce the computational cost compared to the conventional MP methods. The efficiency of our DF-MP methods arises from the reduced input/output (I/O) time and the acceleration of gradient-related terms, such as computation of the particle density and generalized Fock matrices (PDMs and GFM), solution of the Z-vector equation, back-transformation of the PDMs and GFM, and evaluation of analytic gradients in the atomic orbital basis. Further, application results show that the errors introduced by the DF approach are negligible. The mean absolute error for bond lengths of a molecular set, with the cc-pCVQZ basis set, is 0.0001-0.0002 Å. © 2017 Wiley Periodicals, Inc.

  6. Rerating the Movie Scores in Douban through Word Embedding

    NASA Astrophysics Data System (ADS)

    Cui, Mingyu

    2018-04-01

    The movie scores on social networking service websites such as IMDb, Rotten Tomatoes and Douban are important references for evaluating movies, and they often directly influence the box office. However, public ratings carry strong biases that depend on the type of movie, its release time, and the age and background of the audience. Correcting this bias to give a movie a fair judgement is an important problem. In this paper, we focus on the movie scores on Douban, one of the most famous Chinese movie network communities. We decompose the movie score into two parts: a basis score based on the basic properties of the movie, and an extra score representing the movie's excess value. We use a word-embedding technique to embed the movies in a small dense subspace. Then, in the reduced subspace, we use the k-means method to assign similar movies a basis score.
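
    A minimal sketch of the decomposition into basis and extra scores, with random vectors standing in for the learned word embeddings: cluster the embedded movies with k-means and take the cluster-average rating as the basis score, so the extra score is the residual over similar movies.

        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(5)
        emb = rng.standard_normal((1000, 16))   # stand-in for movie embeddings
        score = rng.uniform(4, 9, 1000)         # observed Douban-style ratings

        km = KMeans(n_clusters=20, n_init=10, random_state=0).fit(emb)
        basis = np.array([score[km.labels_ == c].mean() for c in range(20)])

        basis_score = basis[km.labels_]   # expected score among similar movies
        extra_score = score - basis_score # excess value over similar movies
        print(extra_score[:5])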

  7. Image restoration for three-dimensional fluorescence microscopy using an orthonormal basis for efficient representation of depth-variant point-spread functions

    PubMed Central

    Patwary, Nurmohammed; Preza, Chrysanthe

    2015-01-01

    A depth-variant (DV) image restoration algorithm for wide-field fluorescence microscopy, using an orthonormal basis decomposition of DV point-spread functions (PSFs), is investigated in this study. The efficient PSF representation is based on a previously developed principal component analysis (PCA), which is computationally intensive. We present an approach developed to reduce the number of DV PSFs required for the PCA computation, thereby making the PCA-based approach computationally tractable for thick samples. Restoration results from both synthetic and experimental images are consistent and show that the proposed algorithm efficiently addresses depth-induced aberration using a small number of principal components. Comparison of the PCA-based algorithm with a previously developed strata-based DV restoration algorithm demonstrates that the proposed method improves accuracy by 50% and simultaneously reduces the processing time by 64% using comparable computational resources. PMID:26504634

  8. An Efficient Method Coupling Kernel Principal Component Analysis with Adjoint-Based Optimal Control and Its Goal-Oriented Extensions

    NASA Astrophysics Data System (ADS)

    Thimmisetty, C.; Talbot, C.; Tong, C. H.; Chen, X.

    2016-12-01

    The representativeness of available data poses a significant fundamental challenge to the quantification of uncertainty in geophysical systems. Furthermore, the successful application of machine learning methods to geophysical problems involving data assimilation is inherently constrained by the extent to which obtainable data represent the problem considered. We show how the adjoint method, coupled with optimization based on machine learning methods, can facilitate the minimization of an objective function defined on a space of significantly reduced dimension. By considering uncertain parameters as constituting a stochastic process, the Karhunen-Loeve expansion and its nonlinear extensions furnish an optimal basis with respect to which optimization using L-BFGS can be carried out. In particular, we demonstrate that kernel PCA can be coupled with adjoint-based optimal control methods to successfully determine the distribution of material parameter values for problems in the context of channelized deformable media governed by the equations of linear elasticity. Since certain subsets of the original data are characterized by different features, the convergence rate of the method depends in part on, and may be limited by, the observations used to furnish the kernel principal component basis. By determining appropriate weights for realizations of the stochastic random field, one may therefore accelerate the convergence of the method. To this end, we present a formulation of weighted PCA combined with a gradient-based scheme using automatic differentiation to iteratively re-weight observations concurrently with the determination of an optimal reduced set of control variables in the feature space. We demonstrate how improvements in the accuracy and computational efficiency of the weighted linear method can be achieved over existing unweighted kernel methods, and discuss nonlinear extensions of the algorithm.

  9. Quantum Dynamics with Short-Time Trajectories and Minimal Adaptive Basis Sets.

    PubMed

    Saller, Maximilian A C; Habershon, Scott

    2017-07-11

    Methods for solving the time-dependent Schrödinger equation via basis set expansion of the wave function can generally be categorized as having either static (time-independent) or dynamic (time-dependent) basis functions. We have recently introduced an alternative simulation approach which represents a middle road between these two extremes, employing dynamic (classical-like) trajectories to create a static basis set of Gaussian wavepackets in regions of phase-space relevant to future propagation of the wave function [J. Chem. Theory Comput., 11, 8 (2015)]. Here, we propose and test a modification of our methodology which aims to reduce the size of basis sets generated in our original scheme. In particular, we employ short-time classical trajectories to continuously generate new basis functions for short-time quantum propagation of the wave function; to avoid the continued growth of the basis set describing the time-dependent wave function, we employ Matching Pursuit to periodically minimize the number of basis functions required to accurately describe the wave function. Overall, this approach generates a basis set which is adapted to evolution of the wave function while also being as small as possible. In applications to challenging benchmark problems, namely a 4-dimensional model of photoexcited pyrazine and three different double-well tunnelling problems, we find that our new scheme enables accurate wave function propagation with basis sets which are around an order-of-magnitude smaller than our original trajectory-guided basis set methodology, highlighting the benefits of adaptive strategies for wave function propagation.
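
    Matching Pursuit itself is simple to state: repeatedly project the residual onto the dictionary, keep the strongest atom, and subtract its contribution. The toy below (a 1D Gaussian dictionary, not the wavepacket basis of the paper) shows how few functions survive the pruning:

      # Illustrative matching pursuit over a Gaussian dictionary.
      import numpy as np

      x = np.linspace(-10, 10, 400)
      centers = np.linspace(-10, 10, 80)
      dictionary = np.exp(-(x[:, None] - centers[None, :]) ** 2)
      dictionary /= np.linalg.norm(dictionary, axis=0)          # unit-norm atoms

      target = np.exp(-0.5 * (x - 2) ** 2) + 0.5 * np.exp(-0.5 * (x + 3) ** 2)

      residual, chosen = target.copy(), []
      while np.linalg.norm(residual) > 1e-2 * np.linalg.norm(target) and len(chosen) < 100:
          overlaps = dictionary.T @ residual
          k = int(np.argmax(np.abs(overlaps)))                  # strongest atom
          chosen.append((k, overlaps[k]))
          residual -= overlaps[k] * dictionary[:, k]
      print(f"{len(chosen)} of {len(centers)} atoms retained")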

  10. Tensor numerical methods in quantum chemistry: from Hartree-Fock to excitation energies.

    PubMed

    Khoromskaia, Venera; Khoromskij, Boris N

    2015-12-21

    We review the recent successes of the grid-based tensor numerical methods and discuss their prospects in real-space electronic structure calculations. These methods, based on the low-rank representation of the multidimensional functions and integral operators, first appeared as an accurate tensor calculus for the 3D Hartree potential using 1D complexity operations, and have evolved into an entirely grid-based, tensor-structured 3D Hartree-Fock eigenvalue solver. It benefits from tensor calculation of the core Hamiltonian and two-electron integrals (TEI) in O(n log n) complexity using the rank-structured approximation of basis functions, electron densities and convolution integral operators, all represented on 3D n × n × n Cartesian grids. The algorithm for calculating the TEI tensor in the form of a Cholesky decomposition is based on multiple factorizations using an algebraic 1D "density fitting" scheme, which yields an almost irreducible number of product basis functions involved in the 3D convolution integrals, depending on a threshold ε > 0. The basis functions are not restricted to separable Gaussians, since the analytical integration is substituted by high-precision tensor-structured numerical quadratures. The tensor approaches to post-Hartree-Fock calculations for the MP2 energy correction and for the Bethe-Salpeter excitation energies, based on using low-rank factorizations and the reduced basis method, were recently introduced. Another direction is towards the tensor-based Hartree-Fock numerical scheme for finite lattices, where one of the numerical challenges is the summation of electrostatic potentials of a large number of nuclei. The 3D grid-based tensor method for calculation of a potential sum on an L × L × L lattice manifests computational work linear in L, i.e. O(L), instead of the usual O(L^3 log L) scaling of Ewald-type approaches.

  11. Two-dimensional mesh embedding for Galerkin B-spline methods

    NASA Technical Reports Server (NTRS)

    Shariff, Karim; Moser, Robert D.

    1995-01-01

    A number of advantages result from using B-splines as basis functions in a Galerkin method for solving partial differential equations. Among them are arbitrary order of accuracy and high resolution similar to that of compact schemes but without the aliasing error. This work develops another property, namely, the ability to treat semi-structured embedded or zonal meshes for two-dimensional geometries. This can drastically reduce the number of grid points in many applications. Both integer and non-integer refinement ratios are allowed. The report begins by developing an algorithm for choosing basis functions that yield the desired mesh resolution. These functions are suitable products of one-dimensional B-splines. Finally, test cases for linear scalar equations such as the Poisson and advection equation are presented. The scheme is conservative and has uniformly high order of accuracy throughout the domain.

  12. FIBER OPTICS: Theoretical basis of the method for reducing drift of the zero level of the output signal of a fiber-optic gyroscope with the aid of a Lyot depolarizer

    NASA Astrophysics Data System (ADS)

    Alekseev, É. I.; Bazarov, E. N.

    1992-09-01

    A theoretical justification is given of the widely used method of stabilization of the output signal from a fiber-optic gyroscope with a broad-band radiation source by a Lyot depolarizer. Different variants of including a depolarizer in such a gyroscope are considered and the role of the dichroism and birefringence induced in the gyroscope system is discussed.

  13. Study of alternate methods of disposal of propellants and gases at KSC

    NASA Technical Reports Server (NTRS)

    Moore, W. I.

    1970-01-01

    A comprehensive study was conducted at KSC launch support facilities to determine the nature and extent of potential hazards from propellant and gas releases to the environment. The results of the study, alternate methods for reducing or eliminating the hazards, and recommendations pertaining to these alternatives are presented. The operational modes of the propellant or hazardous gas systems considered include: system charging, system standby, system operation, and post-test operations. The results are outlined on an area-by-area basis.

  14. Factorization in large-scale many-body calculations

    DOE PAGES

    Johnson, Calvin W.; Ormand, W. Erich; Krastev, Plamen G.

    2013-08-07

    One approach for solving interacting many-fermion systems is the configuration-interaction method, also sometimes called the interacting shell model, where one finds eigenvalues of the Hamiltonian in a many-body basis of Slater determinants (antisymmetrized products of single-particle wavefunctions). The resulting Hamiltonian matrix is typically very sparse, but for large systems the nonzero matrix elements can nonetheless require terabytes or more of storage. An alternate algorithm, applicable to a broad class of systems with symmetry, in our case rotational invariance, is to exactly factorize both the basis and the interaction using additive/multiplicative quantum numbers; such an algorithm recreates the many-body matrix elements on the fly and can reduce the storage requirements by an order of magnitude or more. Here, we discuss factorization in general and introduce a novel, generalized factorization method, essentially a ‘double-factorization’ which speeds up basis generation and set-up of required arrays. Although we emphasize techniques, we also place factorization in the context of a specific (unpublished) configuration-interaction code, BIGSTICK, which runs both on serial and parallel machines, and discuss the savings in memory due to factorization.
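
    The factorization idea is easy to demonstrate at toy scale: generate proton and neutron sub-bases grouped by an additive quantum number m, then pair only those sectors whose m-values sum to the target M, rather than storing the full product basis. This sketch is a schematic of the general idea only, not of BIGSTICK's actual data structures:

      # Toy basis factorization by an additive quantum number.
      from collections import defaultdict
      from itertools import combinations

      def sub_basis(m_values, n_particles):
          """n-particle Slater determinants grouped by total m."""
          groups = defaultdict(list)
          for occ in combinations(range(len(m_values)), n_particles):
              groups[sum(m_values[i] for i in occ)].append(occ)
          return groups

      m_sp = [-3, -1, 1, 3, -1, 1]          # toy single-particle m-values
      protons = sub_basis(m_sp, 2)
      neutrons = sub_basis(m_sp, 2)

      M_total = 0
      pairs = [(p, n) for m, ps in protons.items() for p in ps
               for n in neutrons.get(M_total - m, [])]
      full = (sum(len(v) for v in protons.values())
              * sum(len(v) for v in neutrons.values()))
      print(f"factorized basis: {len(pairs)} states vs {full} unrestricted products")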

  15. Electrochemical incineration of wastes

    NASA Technical Reports Server (NTRS)

    Bhardwaj, R. C.; Sharma, D. K.; Bockris, J. OM.

    1990-01-01

    The novel technology of waste removal in space vehicles by electrochemical methods is presented to convert wastes into chemicals that can eventually be recycled. The important consideration for waste oxidation is to select the right kind of electrode (anode) material, which should be stable under anodic conditions and also a poor electrocatalyst for oxygen and chlorine evolution. On the basis of long term electrolysis experiments on seven different electrodes and on the basis of total organic carbon reduced, the two best electrodes were identified. The effect of redox ions on the electrolyte was studied. Though most of the experiments were done in mixtures of urine and waste, the experiments with redox couples involved 2.5 M sulfuric acid in order to avoid the precipitation of redox ions by urea. Two methods for long term electrolysis of waste were investigated: (1) oxidation on Pt and lead dioxide electrodes using galvanostatic methods; and (2) the potentiostatic method on other electrodes. The advantage of the first method is the faster rate of oxidation; the chlorine evolution in the second method is ten times less than in the first. The research has shown that urine/feces mixtures can be oxidized to carbon dioxide and water, but current densities are low and must be improved. The perovskite and Ti4O7 coated with RuO2 are the best electrode materials found. Recent experiments with the redox agent improve the current density; however, sulfuric acid is required to keep the redox agent in solution to enhance oxidation effectively. It is desirable to reduce the use of acid and/or find substitutes.

  16. A quantum retrograde canon: complete population inversion in n²-state systems

    NASA Astrophysics Data System (ADS)

    Padan, Alon; Suchowski, Haim

    2018-04-01

    We present a novel approach for analytically reducing a family of time-dependent multi-state quantum control problems to two-state systems. The presented method translates between SU(2)×SU(2)-related n²-state systems and two-state systems, such that the former undergo complete population inversion (CPI) if and only if the latter reach specific states. For even n, the method translates any two-state CPI scheme to a family of CPI schemes in n²-state systems. In particular, facilitating CPI in a four-state system via real time-dependent nearest-neighbors couplings is reduced to facilitating CPI in a two-level system. Furthermore, we show that the method can be used for operator control, and provide conditions for producing several universal gates for quantum computation as an example. In addition, we indicate a basis for utilizing the method in optimal control problems.

  17. Optimization of a protocol for myocardial perfusion scintigraphy by using an anthropomorphic phantom.

    PubMed

    Ramos, Susie Medeiros Oliveira; Glavam, Adriana Pereira; Kubo, Tadeu Takao Almodovar; de Sá, Lidia Vasconcellos

    2014-01-01

    The aim of this study was to optimize myocardial perfusion imaging. An anthropomorphic thorax phantom was imaged with a GE SPECT Ventri gamma camera at varied activities and acquisition times in order to evaluate the influence of these parameters on the quality of the reconstructed medical images. The (99m)Tc-sestamibi radiotracer was utilized, and the images were then clinically evaluated on the basis of data such as the summed stress score, and on technical image quality and perfusion. The software ImageJ was utilized for data quantification. The results demonstrated that, for the standard acquisition time utilized in the procedure (15 seconds per angle), the injected activity could be reduced by 33.34%. Additionally, even if the standard scan time is reduced by 53.34% (7 seconds per angle), the standard injected activity could still be reduced by 16.67% without impairing image quality or diagnostic reliability. The described method and respective results provide a basis for the development of a clinical trial of patients in an optimized protocol.

  18. Quartic scaling MP2 for solids: A highly parallelized algorithm in the plane wave basis

    NASA Astrophysics Data System (ADS)

    Schäfer, Tobias; Ramberger, Benjamin; Kresse, Georg

    2017-03-01

    We present a low-complexity algorithm to calculate the correlation energy of periodic systems in second-order Møller-Plesset (MP2) perturbation theory. In contrast to previous approximation-free MP2 codes, our implementation possesses a quartic scaling, O(N^4), with respect to the system size N and offers an almost ideal parallelization efficiency. The general issue that the correlation energy converges slowly with the number of basis functions is eased by an internal basis set extrapolation. The key concept to reduce the scaling is to eliminate all summations over virtual orbitals, which can be elegantly achieved in the Laplace transformed MP2 formulation using plane wave basis sets and fast Fourier transforms. Analogously, this approach could allow us to calculate second order screened exchange as well as particle-hole ladder diagrams with a similar low complexity. Hence, the presented method can be considered as a step towards systematically improved correlation energies.
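
    The key identity behind the Laplace-transformed formulation is 1/x = ∫₀^∞ exp(-x t) dt, which turns an orbital-energy denominator into a short quadrature sum of exponentials and thereby decouples the occupied and virtual summations. A minimal numerical check of that identity (not the paper's implementation) using Gauss-Laguerre quadrature:

      # Approximate 1/x by a sum of exponentials via Gauss-Laguerre quadrature:
      #   1/x = ∫₀^∞ e^(-x t) dt ≈ Σ_k w_k e^(-(x-1) t_k)
      import numpy as np

      t, w = np.polynomial.laguerre.laggauss(10)   # nodes/weights for weight e^(-t)

      def inv_laplace(x):
          return float(np.sum(w * np.exp(-(x - 1.0) * t)))

      for x in (0.5, 1.0, 2.0, 5.0):               # x stands for ε_a + ε_b - ε_i - ε_j
          print(x, inv_laplace(x), abs(inv_laplace(x) - 1.0 / x))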

  19. Computational studies of metal-metal and metal-ligand interactions

    NASA Technical Reports Server (NTRS)

    Barnes, Leslie A.

    1992-01-01

    The geometric structure of Cr(CO)6 is optimized at the modified coupled-pair functional (MCPF), single and double excitation coupled-cluster (CCSD) and CCSD(T) levels of theory (including a perturbational estimate for connected triple excitations), and the force constants for the totally symmetric representation are determined. The geometry of Cr(CO)5 is partially optimized at the MCPF, CCSD and CCSD(T) levels of theory. Comparison with experimental data shows that the CCSD(T) method gives the best results for the structures and force constants, and that remaining errors are probably due to deficiencies in the one-particle basis sets used for CO. A detailed comparison of the properties of free CO is therefore given, at both the MCPF and CCSD/CCSD(T) levels of treatment, using a variety of basis sets. With very large one-particle basis sets, the CCSD(T) method gives excellent results for the bond distance, dipole moment and harmonic frequency of free CO. The total binding energies of Cr(CO)6 and Cr(CO)5 are also determined at the MCPF, CCSD and CCSD(T) levels of theory. The CCSD(T) method gives a much larger total binding energy than either the MCPF or CCSD methods. An analysis of the basis set superposition error (BSSE) at the MCPF level of treatment points out limitations in the one-particle basis used here and in a previous study. Calculations using larger basis sets reduced the BSSE, but the total binding energy of Cr(CO)6 is still significantly smaller than the experimental value, although the first CO bond dissociation energy of Cr(CO)6 is well described. An investigation of 3s3p correlation reveals only a small effect. The remaining discrepancy between the experimental and theoretical total binding energy of Cr(CO)6 is probably due to limitations in the one-particle basis, rather than limitations in the correlation treatment. In particular an additional d function and an f function on each C and O are needed to obtain quantitative results. This is underscored by the fact that even using a very large primitive set (1042 primitive functions contracted to 300 basis functions), the superposition error for the total binding energy of Cr(CO)6 is 22 kcal/mol at the MCPF level of treatment.

  20. Generalized multiscale finite-element method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Gibson, Richard L.

    It is important to develop fast yet accurate numerical methods for seismic wave propagation to characterize complex geological structures and oil and gas reservoirs. However, the computational cost of conventional numerical modeling methods, such as the finite-difference method and the finite-element method, becomes prohibitively expensive when applied to very large models. We propose a Generalized Multiscale Finite-Element Method (GMsFEM) for elastic wave propagation in heterogeneous, anisotropic media, where we construct basis functions from multiple local problems for both the boundaries and interior of a coarse node support or coarse element. The application of multiscale basis functions can capture the fine scale medium property variations, and allows us to greatly reduce the degrees of freedom that are required to implement the modeling compared with the conventional finite-element method for the wave equation, while restricting the error to low values. We formulate the continuous Galerkin and discontinuous Galerkin formulations of the multiscale method, both of which have pros and cons. Applications of the multiscale method to three heterogeneous models show that our multiscale method can effectively model the elastic wave propagation in anisotropic media with a significant reduction in the degrees of freedom in the modeling system.

  3. Multi-fidelity methods for uncertainty quantification in transport problems

    NASA Astrophysics Data System (ADS)

    Tartakovsky, G.; Yang, X.; Tartakovsky, A. M.; Barajas-Solano, D. A.; Scheibe, T. D.; Dai, H.; Chen, X.

    2016-12-01

    We compare several multi-fidelity approaches for uncertainty quantification in flow and transport simulations that have a lower computational cost than the standard Monte Carlo method. The cost reduction is achieved by combining a small number of high-resolution (high-fidelity) simulations with a large number of low-resolution (low-fidelity) simulations. We propose a new method, the re-scaled Multi Level Monte Carlo (rMLMC) method. The rMLMC is based on the idea that the statistics of quantities of interest depend on scale/resolution. We compare rMLMC with existing multi-fidelity methods such as Multi Level Monte Carlo (MLMC) and reduced basis methods and discuss the advantages of each approach.
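
    The essence of all these multi-fidelity estimators is a telescoping correction: estimate the mean from many cheap low-fidelity samples, then correct it with a few coupled high/low pairs. A two-level sketch with stand-in models (the functions below are illustrative, not the paper's simulators):

      # Two-level multi-fidelity Monte Carlo estimator.
      import numpy as np

      rng = np.random.default_rng(2)

      def low_fidelity(xi):    # cheap, biased model
          return np.sin(xi)

      def high_fidelity(xi):   # expensive reference model
          return np.sin(xi) + 0.05 * xi**2

      xi_low = rng.normal(size=100_000)     # many cheap samples
      xi_cor = rng.normal(size=500)         # few coupled correction samples

      estimate = (low_fidelity(xi_low).mean()
                  + (high_fidelity(xi_cor) - low_fidelity(xi_cor)).mean())
      print("two-level estimate:", estimate)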

  4. Estimating and Identifying Unspecified Correlation Structure for Longitudinal Data

    PubMed Central

    Hu, Jianhua; Wang, Peng; Qu, Annie

    2014-01-01

    Identifying correlation structure is important to achieving estimation efficiency in analyzing longitudinal data, and is also crucial for drawing valid statistical inference for large size clustered data. In this paper, we propose a nonparametric method to estimate the correlation structure, which is applicable for discrete longitudinal data. We utilize eigenvector-based basis matrices to approximate the inverse of the empirical correlation matrix and determine the number of basis matrices via model selection. A penalized objective function based on the difference between the empirical and model approximation of the correlation matrices is adopted to select an informative structure for the correlation matrix. The eigenvector representation of the correlation estimation is capable of reducing the risk of model misspecification, and also provides useful information on the specific within-cluster correlation pattern of the data. We show that the proposed method possesses the oracle property and selects the true correlation structure consistently. The proposed method is illustrated through simulations and two data examples on air pollution and sonar signal studies. PMID:26361433

  5. Diagnostic Performance of a Novel Coronary CT Angiography Algorithm: Prospective Multicenter Validation of an Intracycle CT Motion Correction Algorithm for Diagnostic Accuracy.

    PubMed

    Andreini, Daniele; Lin, Fay Y; Rizvi, Asim; Cho, Iksung; Heo, Ran; Pontone, Gianluca; Bartorelli, Antonio L; Mushtaq, Saima; Villines, Todd C; Carrascosa, Patricia; Choi, Byoung Wook; Bloom, Stephen; Wei, Han; Xing, Yan; Gebow, Dan; Gransar, Heidi; Chang, Hyuk-Jae; Leipsic, Jonathon; Min, James K

    2018-06-01

    Motion artifact can reduce the diagnostic accuracy of coronary CT angiography (CCTA) for coronary artery disease (CAD). The purpose of this study was to compare the diagnostic performance of an algorithm dedicated to correcting coronary motion artifact with the performance of standard reconstruction methods in a prospective international multicenter study. Patients referred for clinically indicated invasive coronary angiography (ICA) for suspected CAD prospectively underwent an investigational CCTA examination free from heart rate-lowering medications before they underwent ICA. Blinded core laboratory interpretations of motion-corrected and standard reconstructions for obstructive CAD (≥ 50% stenosis) were compared with ICA findings. Segments unevaluable owing to artifact were considered obstructive. The primary endpoint was per-subject diagnostic accuracy of the intracycle motion correction algorithm for obstructive CAD found at ICA. Among 230 patients who underwent CCTA with the motion correction algorithm and standard reconstruction, 92 (40.0%) had obstructive CAD on the basis of ICA findings. At a mean heart rate of 68.0 ± 11.7 beats/min, the motion correction algorithm reduced the number of nondiagnostic scans compared with standard reconstruction (20.4% vs 34.8%; p < 0.001). Diagnostic accuracy for obstructive CAD with the motion correction algorithm (62%; 95% CI, 56-68%) was not significantly different from that of standard reconstruction on a per-subject basis (59%; 95% CI, 53-66%; p = 0.28) but was superior on a per-vessel basis: 77% (95% CI, 74-80%) versus 72% (95% CI, 69-75%) (p = 0.02). The motion correction algorithm was superior in subgroups of patients with severely obstructive (≥ 70%) stenosis, heart rate ≥ 70 beats/min, and vessels in the atrioventricular groove. The motion correction algorithm studied reduces artifacts and improves diagnostic performance for obstructive CAD on a per-vessel basis and in selected subgroups on a per-subject basis.

  6. Development and Application of Collaborative Optimization Software for Plate - fin Heat Exchanger

    NASA Astrophysics Data System (ADS)

    Chunzhen, Qiao; Ze, Zhang; Jiangfeng, Guo; Jian, Zhang

    2017-12-01

    This paper introduces the design ideas behind, and application examples of, calculation software for plate-fin heat exchangers. Because designing and optimizing heat exchangers involves a large amount of computation, we used Visual Basic 6.0 as the development platform to build a basic calculation program that reduces this computational burden. The design condition is a plate-fin heat exchanger designed for boiler tail flue gas, and the software is based on the traditional design method for plate-fin heat exchangers. Using the software for the design and calculation of plate-fin heat exchangers effectively reduces the amount of computation while giving results comparable to traditional methods, and therefore has high practical value.

  7. Structural reanalysis via a mixed method. [using Taylor series for accuracy improvement

    NASA Technical Reports Server (NTRS)

    Noor, A. K.; Lowder, H. E.

    1975-01-01

    A study is made of the approximate structural reanalysis technique based on the use of Taylor series expansion of response variables in terms of design variables in conjunction with the mixed method. In addition, comparisons are made with two reanalysis techniques based on the displacement method. These techniques are the Taylor series expansion and the modified reduced basis. It is shown that the use of the reciprocals of the sizing variables as design variables (which is the natural choice in the mixed method) can result in a substantial improvement in the accuracy of the reanalysis technique. Numerical results are presented for a space truss structure.
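
    The role of reciprocal variables is easy to see in a toy model: for a stiffness matrix K(a) = Σ aᵢ Kᵢ, the displacements u = K⁻¹f are often nearly linear in xᵢ = 1/aᵢ, so a first-order Taylor step in x is accurate. The matrices below are random stand-ins for element stiffnesses, not a real truss model:

      # First-order Taylor reanalysis in reciprocal sizing variables.
      import numpy as np

      rng = np.random.default_rng(3)
      n, m = 8, 3
      Ki = []
      for _ in range(m):                    # random SPD "element" matrices
          B = rng.normal(size=(n, n))
          Ki.append(B @ B.T + n * np.eye(n))
      f = rng.normal(size=n)

      def solve(a):
          return np.linalg.solve(sum(ai * Kj for ai, Kj in zip(a, Ki)), f)

      a0 = np.ones(m)
      u0 = solve(a0)
      K0 = sum(a0[i] * Ki[i] for i in range(m))

      a1 = a0 * np.array([1.3, 0.8, 1.1])   # modified design
      dx = 1.0 / a1 - 1.0 / a0              # step in reciprocal variables
      u_taylor = u0.copy()
      for i in range(m):
          # dK/dx_i = -a_i^2 K_i, hence du/dx_i = K0^{-1} (a_i^2 K_i) u0
          u_taylor += dx[i] * np.linalg.solve(K0, (a0[i] ** 2) * (Ki[i] @ u0))

      u_exact = solve(a1)
      print("Taylor error:", np.linalg.norm(u_taylor - u_exact) / np.linalg.norm(u_exact))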

  8. The wind power prediction research based on mind evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Zhuang, Ling; Zhao, Xinjian; Ji, Tianming; Miao, Jingwen; Cui, Haina

    2018-04-01

    When wind power is connected to the power grid, its fluctuating, intermittent, and random characteristics affect the stability of the power system. Wind power prediction can guarantee power quality and reduce the operating cost of the power system. Several traditional wind power prediction methods have notable limitations. On this basis, a wind power prediction method based on the Mind Evolutionary Algorithm (MEA) is put forward and a prediction model is provided. The experimental results demonstrate that MEA performs efficiently in terms of wind power prediction. The MEA method has broad prospects for engineering application.

  9. Optical Profilometers Using Adaptive Signal Processing

    NASA Technical Reports Server (NTRS)

    Hall, Gregory A.; Youngquist, Robert; Mikhael, Wasfy

    2006-01-01

    A method of adaptive signal processing has been proposed as the basis of a new generation of interferometric optical profilometers for measuring surfaces. The proposed profilometers would be portable, hand-held units. Sizes could be thus reduced because the adaptive-signal-processing method would make it possible to substitute lower-power coherent light sources (e.g., laser diodes) for white light sources and would eliminate the need for most of the optical components of current white-light profilometers. The adaptive-signal-processing method would make it possible to attain scanning ranges of the order of decimeters in the proposed profilometers.

  10. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.

  11. Improved l1-SPIRiT using 3D walsh transform-based sparsity basis.

    PubMed

    Feng, Zhen; Liu, Feng; Jiang, Mingfeng; Crozier, Stuart; Guo, He; Wang, Yuxin

    2014-09-01

    l1-SPIRiT is a fast magnetic resonance imaging (MRI) method which combines parallel imaging (PI) with compressed sensing (CS) by performing a joint l1-norm and l2-norm optimization procedure. The original l1-SPIRiT method uses two-dimensional (2D) Wavelet transform to exploit the intra-coil data redundancies and a joint sparsity model to exploit the inter-coil data redundancies. In this work, we propose to stack all the coil images into a three-dimensional (3D) matrix, and then a novel 3D Walsh transform-based sparsity basis is applied to simultaneously reduce the intra-coil and inter-coil data redundancies. Both the 2D Wavelet transform-based and the proposed 3D Walsh transform-based sparsity bases were investigated in the l1-SPIRiT method. The experimental results show that the proposed 3D Walsh transform-based l1-SPIRiT method outperformed the original l1-SPIRiT in terms of image quality and computational efficiency.
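
    The coil-axis transform is the easily sketched part: stack the coil images into a 3D array and apply an orthonormal Hadamard matrix (a Walsh basis up to row reordering) along the coil dimension, concentrating the redundant inter-coil energy into a few components. The data below are synthetic, not MRI k-space:

      # Walsh/Hadamard transform along the coil axis of stacked coil images.
      import numpy as np
      from scipy.linalg import hadamard

      ny, nx, ncoils = 64, 64, 8
      rng = np.random.default_rng(4)
      base = rng.normal(size=(ny, nx))
      sens = rng.uniform(0.5, 1.0, size=ncoils)
      coil_images = np.stack([s * base for s in sens], axis=-1)  # redundant coils

      H = hadamard(ncoils) / np.sqrt(ncoils)   # orthonormal Hadamard matrix
      transformed = coil_images @ H            # transform along the last (coil) axis

      energy = np.sum(transformed**2, axis=(0, 1))
      print("coil-axis energy fractions:", np.round(energy / energy.sum(), 3))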

  12. Fast Time-Dependent Density Functional Theory Calculations of the X-ray Absorption Spectroscopy of Large Systems.

    PubMed

    Besley, Nicholas A

    2016-10-11

    The computational cost of calculations of K-edge X-ray absorption spectra using time-dependent density functional theory (TDDFT) within the Tamm-Dancoff approximation is significantly reduced through the introduction of a severe integral screening procedure that includes only integrals that involve the core s basis function of the absorbing atom(s), coupled with a reduced-quality numerical quadrature for integrals associated with the exchange and correlation functionals. The memory required for the calculations is reduced through construction of the TDDFT matrix within the absorbing core orbitals excitation space and exploiting further truncation of the virtual orbital space. The resulting method, denoted fTDDFTs, leads to much faster calculations and makes the study of large systems tractable. The capability of the method is demonstrated through calculations of the X-ray absorption spectra at the carbon K-edge of chlorophyll a, C60 and C70.

  13. Construction of Optimally Reduced Empirical Model by Spatially Distributed Climate Data

    NASA Astrophysics Data System (ADS)

    Gavrilov, A.; Mukhin, D.; Loskutov, E.; Feigin, A.

    2016-12-01

    We present an approach to empirical reconstruction of the evolution operator in stochastic form from space-distributed time series. The main problem in empirical modeling consists in choosing appropriate phase variables which can efficiently reduce the dimension of the model at minimal loss of information about the system's dynamics; this leads to a more robust model and a better-quality reconstruction. For this purpose we incorporate two key steps in the model. The first step is a standard preliminary reduction of the observed time series dimension by decomposition via a certain empirical basis (e.g., an empirical orthogonal function basis or its nonlinear or spatio-temporal generalizations). The second step is construction of an evolution operator by principal components (PCs), the time series obtained by the decomposition. In this step we introduce a new way of reducing the dimension of the embedding in which the evolution operator is constructed. It is based on choosing proper combinations of delayed PCs to take into account the most significant spatio-temporal couplings. The evolution operator is sought as a nonlinear random mapping parameterized using artificial neural networks (ANN). A Bayesian approach is used to learn the model and to find optimal hyperparameters: the number of PCs, the dimension of the embedding, and the degree of nonlinearity of the ANN. Results of applying the method to climate data (sea surface temperature, sea level pressure), and a comparison with the same method based on a non-reduced embedding, are presented. The study is supported by the Government of the Russian Federation (agreement #14.Z50.31.0033 with the Institute of Applied Physics of RAS).

  14. A POD reduced order model for resolving angular direction in neutron/photon transport problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buchan, A.G., E-mail: andrew.buchan@imperial.ac.uk; Calloo, A.A.; Goffin, M.G.

    2015-09-01

    This article presents the first Reduced Order Model (ROM) that efficiently resolves the angular dimension of the time independent, mono-energetic Boltzmann Transport Equation (BTE). It is based on Proper Orthogonal Decomposition (POD) and uses the method of snapshots to form optimal basis functions for resolving the direction of particle travel in neutron/photon transport problems. A unique element of this work is that the snapshots are formed from the vector of angular coefficients relating to a high resolution expansion of the BTE's angular dimension. In addition, the individual snapshots are not recorded through time, as in standard POD, but instead they are recorded through space. In essence this work swaps the roles of the dimensions space and time in standard POD methods, with angle and space respectively. It is shown here how the POD model can be formed from the POD basis functions in a highly efficient manner. The model is then applied to two radiation problems; one involving the transport of radiation through a shield and the other through an infinite array of pins. Both problems are selected for their complex angular flux solutions in order to provide an appropriate demonstration of the model's capabilities. It is shown that the POD model can resolve these fluxes efficiently and accurately. In comparison to high resolution models this POD model can reduce the size of a problem by up to two orders of magnitude without compromising accuracy. Solving times are also reduced by similar factors.
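
    The method of snapshots reduces to a few lines of linear algebra: arrange the recorded vectors as columns of a snapshot matrix, take an SVD, and keep the leading left singular vectors as the POD basis. The snapshots below are synthetic low-rank data, standing in for the angular-coefficient vectors described above:

      # Method-of-snapshots POD via the SVD.
      import numpy as np

      rng = np.random.default_rng(5)
      n_angles, n_snapshots = 512, 60
      modes = rng.normal(size=(n_angles, 4))                  # hidden low-rank structure
      snapshots = modes @ rng.normal(size=(4, n_snapshots))   # one column per snapshot
      snapshots += 1e-6 * rng.normal(size=snapshots.shape)    # small noise

      U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(S**2) / np.sum(S**2)
      r = int(np.searchsorted(energy, 0.9999)) + 1            # modes capturing 99.99% energy
      basis = U[:, :r]                                        # reduced POD basis
      print("retained POD modes:", r)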

  15. Reducing greenhouse gas emissions in grassland ecosystems of the Central Lithuania: multi-criteria evaluation on a basis of the ARAS method.

    PubMed

    Balezentiene, Ligita; Kusta, Albinas

    2012-01-01

    N2O, CH4, and CO2 are potential greenhouse gases (GHG) contributing to climate change; therefore, solutions have to be sought to reduce their emission from agriculture. This work evaluates GHG emission from grasslands submitted to different mineral fertilizers during the vegetation period (June-September) in two experimental sites, namely, a seminatural grassland (8 treatments of mineral fertilizers) and a cultural pasture (intensively managed) in the Training Farm of the Lithuanian University of Agriculture. The chamber method was applied for evaluation of GHG emissions on the field scale. Soil chemical composition, compactness, temperature, and gravimetric moisture, as well as fresh and dry biomass yield and botanical composition, were assessed during the research. Furthermore, a simulation of multi-criteria assessment of sustainable fertilizer management was carried out on the basis of the ARAS method. The multi-criteria analysis of the different fertilizing regimes was based on a system of environmental and productivity indices. Consequently, the agroecosystems of cultural pasture (N180P120K150) and seminatural grassland fertilizing rates N180P120K150 and N60P40K50 were evaluated as the most sustainable alternatives, leading to reduction of emissions between biosphere and atmosphere and of human-induced biogenic pollution in grassland ecosystems, thus contributing to improvement of the countryside environment.
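
    For reference, the scoring core of the ARAS method can be sketched compactly: augment the decision matrix with an ideal alternative, invert cost-type criteria, normalize column-wise, apply criteria weights, and express each alternative's utility relative to the ideal. The matrix, criteria, and weights below are illustrative only, not the study's data:

      # Compact ARAS scoring sketch on an illustrative decision matrix.
      import numpy as np

      X = np.array([[7.2, 0.8, 3.1],        # rows: alternatives (fertilizing regimes)
                    [6.1, 0.5, 2.4],        # cols: criteria (e.g. yield, GHG flux, cost)
                    [5.3, 0.3, 1.9]], dtype=float)
      benefit = np.array([True, False, True])   # False marks cost-type criteria
      weights = np.array([0.5, 0.3, 0.2])

      work = X.copy()
      work[:, ~benefit] = 1.0 / work[:, ~benefit]   # invert cost criteria
      ideal = work.max(axis=0)                      # optimal reference alternative
      D = np.vstack([ideal, work])
      D = D / D.sum(axis=0)                         # column-wise normalization
      S = (D * weights).sum(axis=1)                 # weighted optimality scores
      K = S[1:] / S[0]                              # utility relative to the ideal
      print("utility degrees:", np.round(K, 3))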

  17. Earthquake technology fights crime

    USGS Publications Warehouse

    Lahr, John C.; Ward, Peter L.; Stauffer, Peter H.; Hendley, James W.

    1996-01-01

    Scientists with the U.S. Geological Survey have adapted their methods for quickly finding the exact source of an earthquake to the problem of locating gunshots. On the basis of this work, a private company is now testing an automated gunshot-locating system in a San Francisco Bay area community. This system allows police to rapidly pinpoint and respond to illegal gunfire, helping to reduce crime in our neighborhoods.

  18. Approach to the origin of turbulence on the basis of two-point kinetic theory

    NASA Technical Reports Server (NTRS)

    Tsuge, S.

    1974-01-01

    Equations for the fluctuation correlation in an incompressible shear flow are derived on the basis of kinetic theory, utilizing the two-point distribution function which obeys the BBGKY hierarchy equation truncated with the hypothesis of 'ternary' molecular chaos. The step from the molecular to the hydrodynamic description is accomplished by a moment expansion which is a two-point version of the thirteen-moment method, and which leads to a series of correlation equations, viz., the two-point counterparts of the continuity equation, the Navier-Stokes equation, etc. For almost parallel shearing flows the two-point equation is separable and reduces to two Orr-Sommerfeld equations with different physical implications.

  19. A High-Performance Parallel Implementation of the Certified Reduced Basis Method

    DTIC Science & Technology

    2010-12-15

    point of view of model reduction due to the “curse of dimensionality”. We consider transient thermal conduction in a three-dimensional “Swiss cheese”... “Swiss cheese” problem (see Figure 7a) there are 54 unique ordered pairs in I. A histogram of 〈δµ〉 values computed for the ntrain = 10^6 case is given in... our primal-dual RB method yields a very fast and accurate output approximation for the “Swiss cheese” problem. Our goal in this final subsection is

  20. Coatings influencing thermal stress in photonic crystal fiber laser

    NASA Astrophysics Data System (ADS)

    Pang, Dongqing; Li, Yan; Li, Yao; Hu, Minglie

    2018-06-01

    We studied how coating materials influence the thermal stress in the fiber core for three holding methods by simulating the temperature distribution and the thermal stress distribution in a photonic-crystal fiber laser. The results show that coating materials strongly influence both the thermal stress in the fiber core and the stress differences caused by the holding methods. On the basis of these results, a two-coating PCF was designed. This design reduces the stress differences caused by different holding conditions to zero, so the stability of laser operation can be improved.

  1. Reduced and simplified chemical kinetics for air dissociation using Computational Singular Perturbation

    NASA Technical Reports Server (NTRS)

    Goussis, D. A.; Lam, S. H.; Gnoffo, P. A.

    1990-01-01

    The Computational Singular Perturbation (CSP) method is employed (1) in the modeling of a homogeneous isothermal reacting system and (2) in the numerical simulation of the chemical reactions in a hypersonic flowfield. Reduced and simplified mechanisms are constructed. The solutions obtained on the basis of these approximate mechanisms are shown to be in very good agreement with the exact solution based on the full mechanism. Physically meaningful approximations are derived. It is demonstrated that the deduction of these approximations from CSP is independent of the complexity of the problem and requires no intuition or experience in chemical kinetics.

  2. Complementary Reliability-Based Decodings of Binary Linear Block Codes

    NASA Technical Reports Server (NTRS)

    Fossorier, Marc P. C.; Lin, Shu

    1997-01-01

    This correspondence presents a hybrid reliability-based decoding algorithm which combines the reprocessing method based on the most reliable basis and a generalized Chase-type algebraic decoder based on the least reliable positions. It is shown that reprocessing with a simple additional algebraic decoding effort achieves significant coding gain. For long codes, the order of reprocessing required to achieve asymptotic optimum error performance is reduced by approximately 1/3. This significantly reduces the computational complexity, especially for long codes. Also, a more efficient criterion for stopping the decoding process is derived based on the knowledge of the algebraic decoding solution.

  3. Parallel, adaptive finite element methods for conservation laws

    NASA Technical Reports Server (NTRS)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  4. Statistical prediction of dynamic distortion of inlet flow using minimum dynamic measurement. An application to the Melick statistical method and inlet flow dynamic distortion prediction without RMS measurements

    NASA Technical Reports Server (NTRS)

    Schweikhard, W. G.; Chen, Y. S.

    1986-01-01

    The Melick method of inlet flow dynamic distortion prediction by statistical means is outlined. A hypothetical vortex model is used as the basis for the mathematical formulations. The main variables are identified by matching the theoretical total pressure rms ratio with the measured total pressure rms ratio. Data comparisons, using the HiMAT inlet test data set, indicate satisfactory prediction of the dynamic peak distortion for cases with boundary layer control device vortex generators. A method for the dynamic probe selection was developed. Validity of the probe selection criteria is demonstrated by comparing the reduced-probe predictions with the 40-probe predictions. It is indicated that the number of dynamic probes can be reduced to as few as two and still retain good accuracy.

  5. The structure and energetics of Cr(CO)6 and Cr(CO)5

    NASA Technical Reports Server (NTRS)

    Barnes, Leslie A.; Liu, Bowen; Lindh, Roland

    1992-01-01

    The geometric structure of Cr(CO)6 is optimized at the modified coupled pair functional (MCPF), single and double excitation coupled-cluster (CCSD) and CCSD(T) levels of theory (including a perturbational estimate for connected triple excitations), and the force constants for the totally symmetric representation are determined. The geometry of Cr(CO)5 is partially optimized at the MCPF, CCSD, and CCSD(T) levels of theory. Comparison with experimental data shows that the CCSD(T) method gives the best results for the structures and force constants, and that remaining errors are probably due to deficiencies in the one-particle basis sets used for CO. The total binding energies of Cr(CO)6 and Cr(CO)5 are also determined at the MCPF, CCSD, and CCSD(T) levels of theory. The CCSD(T) method gives a much larger total binding energy than either the MCPF or CCSD methods. An analysis of the basis set superposition error (BSSE) at the MCPF level of treatment points out limitations in the one-particle basis used. Calculations using larger basis sets reduce the BSSE, but the total binding energy of Cr(CO)6 is still significantly smaller than the experimental value, although the first CO bond dissociation energy of Cr(CO)6 is well described. An investigation of 3s3p correlation reveals only a small effect. In the largest basis set, the total CO binding energy of Cr(CO)6 is estimated to be 140 kcal/mol at the CCSD(T) level of theory, or about 86 percent of the experimental value. The remaining discrepancy between the experimental and theoretical value is probably due to limitations in the one-particle basis, rather than limitations in the correlation treatment. In particular an additional d function and an f function on each C and O are needed to obtain quantitative results. This is underscored by the fact that even using a very large primitive set (1042 primitive functions contracted to 300 basis functions), the superposition error for the total binding energy of Cr(CO)6 is 22 kcal/mol at the MCPF level of treatment.

  6. An Efficient Radial Basis Function Mesh Deformation Scheme within an Adjoint-Based Aerodynamic Optimization Framework

    NASA Astrophysics Data System (ADS)

    Poirier, Vincent

    Mesh deformation schemes play an important role in numerical aerodynamic optimization. As the aerodynamic shape changes, the computational mesh must adapt to conform to the deformed geometry. In this work, an extension to an existing fast and robust Radial Basis Function (RBF) mesh movement scheme is presented. Using a reduced set of surface points to define the mesh deformation increases the efficiency of the RBF method; however, at the cost of introducing errors into the parameterization by not recovering the exact displacement of all surface points. A secondary mesh movement is implemented, within an adjoint-based optimization framework, to eliminate these errors. The proposed scheme is tested within a 3D Euler flow by reducing the pressure drag while maintaining lift of a wing-body configured Boeing-747 and an Onera-M6 wing. As well, an inverse pressure design is executed on the Onera-M6 wing and an inverse span loading case is presented for a wing-body configured DLR-F6 aircraft.
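
    The basic RBF mesh-movement machinery is compact: solve a small interpolation system for weights from the prescribed surface displacements, then evaluate the interpolant at the volume nodes. A 2D sketch with a Gaussian kernel and random points follows (the actual scheme's kernel, greedy point selection, and secondary correction differ):

      # Minimal RBF mesh deformation: fit on surface points, evaluate in volume.
      import numpy as np

      rng = np.random.default_rng(6)
      surf = rng.uniform(-1, 1, size=(30, 2))      # reduced set of surface points
      d_surf = 0.1 * np.sin(surf)                  # prescribed surface displacements
      vol = rng.uniform(-1, 1, size=(500, 2))      # interior mesh nodes

      def phi(r):                                  # Gaussian radial basis function
          return np.exp(-(r / 0.5) ** 2)

      A = phi(np.linalg.norm(surf[:, None] - surf[None, :], axis=-1))
      A += 1e-9 * np.eye(len(surf))                # tiny ridge for numerical safety
      weights = np.linalg.solve(A, d_surf)         # one weight column per coordinate

      B = phi(np.linalg.norm(vol[:, None] - surf[None, :], axis=-1))
      new_vol = vol + B @ weights                  # smoothly deformed volume nodes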

  7. A radial basis function Galerkin method for inhomogeneous nonlocal diffusion

    DOE PAGES

    Lehoucq, Richard B.; Rowe, Stephen T.

    2016-02-01

    We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. Finally, we explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.

  8. 7 CFR 1412.47 - Planting flexibility.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... payments will not be reduced for the planting or harvesting of the fruit, vegetable, or wild rice; (2) The... the payment acres for the farm will be reduced on an acre-for-acre basis; or (3) The producer has a...; and (ii) The payment acres for the farm will be reduced on an acre-for-acre basis. (e) Double-cropping...

  9. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure is presented for wavelet regression. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge on the smoothness (or shape of the basis functions) into the basis selection procedure. The results of the method are demonstrated on sampled functions widely used in the wavelet regression literature and are contrasted with those of other published methods.

  10. Resolution-of-identity stochastic time-dependent configuration interaction for dissipative electron dynamics in strong fields.

    PubMed

    Klinkusch, Stefan; Tremblay, Jean Christophe

    2016-05-14

    In this contribution, we introduce a method for simulating dissipative, ultrafast many-electron dynamics in intense laser fields. The method is based on the norm-conserving stochastic unraveling of the dissipative Liouville-von Neumann equation in its Lindblad form. The N-electron wave functions sampling the density matrix are represented in the basis of singly excited configuration state functions. The interaction with an external laser field is treated variationally and the response of the electronic density is included to all orders in this basis. The coupling to an external environment is included via relaxation operators inducing transition between the configuration state functions. Single electron ionization is represented by irreversible transition operators from the ionizing states to an auxiliary continuum state. The method finds its efficiency in the representation of the operators in the interaction picture, where the resolution-of-identity is used to reduce the size of the Hamiltonian eigenstate basis. The zeroth-order eigenstates can be obtained either at the configuration interaction singles level or from a time-dependent density functional theory reference calculation. The latter offers an alternative to explicitly time-dependent density functional theory which has the advantage of remaining strictly valid for strong field excitations while improving the description of the correlation as compared to configuration interaction singles. The method is tested on a well-characterized toy system, the excitation of the low-lying charge transfer state in LiCN.

  12. Auxiliary basis expansions for large-scale electronic structure calculations.

    PubMed

    Jung, Yousung; Sodt, Alex; Gill, Peter M W; Head-Gordon, Martin

    2005-05-10

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems.
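
    A toy version of the fitting equations makes the role of the metric explicit: the coefficients solve M c = b, where M is the auxiliary-basis metric matrix and b the projection of the target distribution, and swapping the kernel in M switches between Coulomb-like and attenuated (short-range) fits. Everything below is a 1D grid caricature, not a real integral code:

      # Toy auxiliary-basis fit: solve the metric equations M c = b on a grid.
      import numpy as np

      x = np.linspace(-8, 8, 1601)
      dx = x[1] - x[0]

      centers = np.linspace(-4, 4, 9)
      aux = np.exp(-((x[:, None] - centers[None, :]) ** 2))   # auxiliary Gaussians
      rho = np.exp(-0.7 * (x - 0.3) ** 2)                     # distribution to fit

      def kernel(u, omega=None):
          r = np.abs(u) + 1e-3                 # softened 1/r kernel (1D caricature)
          k = 1.0 / r
          if omega is not None:                # attenuated, short-range metric
              k = k * np.exp(-omega * r)
          return k

      W = kernel(x[:, None] - x[None, :], omega=1.0)
      M = aux.T @ W @ aux * dx * dx            # metric matrix over auxiliary functions
      b = aux.T @ (W @ rho) * dx * dx
      c = np.linalg.solve(M, b)                # fitting coefficients
      print("fit residual:", np.linalg.norm(aux @ c - rho) * np.sqrt(dx))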

  13. Detecting stripe artifacts in ultrasound images.

    PubMed

    Maciak, Adam; Kier, Christian; Seidel, Günter; Meyer-Wiethe, Karsten; Hofmann, Ulrich G

    2009-10-01

    Brain perfusion diseases such as acute ischemic stroke are detectable through computed tomography (CT)- or magnetic resonance imaging (MRI)-based methods. An alternative approach makes use of ultrasound imaging. In this low-cost bedside method, noise and artifacts degrade the imaging process. Stripe artifacts in particular show signal behavior similar to that of acute stroke or other brain perfusion diseases. This document describes how stripe artifacts can be detected and eliminated in ultrasound images obtained through harmonic imaging (HI). On the basis of this new method, both proper identification of areas with critically reduced brain tissue perfusion and discrimination between brain perfusion defects and ultrasound stripe artifacts are made possible.

  14. High Throughput Method of Extracting and Counting Strontium-90 in Urine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shkrob, I.; Kaminski, M.; Mertz, C.

    2016-03-01

    A method has been developed for the rapid extraction of Sr-90 from the urine of individuals exposed to radiation in a terrorist attack. The method employs two chromatographic ion-exchange materials: Diphonix resin and Sr resin, both of which are commercially available. The Diphonix resin reduces the alkali ion concentrations below 10 mM, and the Sr resin concentrates and decontaminates strontium-90. Experimental and calculated data are given for a variety of test conditions. On the basis of these results, a flowsheet has been developed for the rapid concentration and extraction of Sr-90 from human urine samples for subsequent beta-counting.

  15. An analytic model for footprint dispersions and its application to mission design

    NASA Technical Reports Server (NTRS)

    Rao, J. R. Jagannatha; Chen, Yi-Chao

    1992-01-01

    This is the final report on our recent research activities that are complementary to those conducted by our colleagues, Professor Farrokh Mistree and students, in the context of the Taguchi method. We have studied the mathematical model that forms the basis of the Simulation and Optimization of Rocket Trajectories (SORT) program and developed an analytic method for determining mission reliability with a reduced number of flight simulations. This method can be incorporated in a design algorithm to mathematically optimize different performance measures of a mission, thus leading to a robust and easy-to-use methodology for mission planning and design.

  16. A study of extraction process and in vitro antioxidant activity of total phenols from Rhizoma Imperatae.

    PubMed

    Zhou, Xian-rong; Wang, Jian-hua; Jiang, Bo; Shang, Jin; Zhao, Chang-qiong

    2013-01-01

    The study investigated the extraction method of Rhizoma Imperatae and its antioxidant activity, and provided a basis for its rational development. The extraction method of Rhizoma Imperatae was determined using an orthogonal design test and by total phenol content; its hydroxyl radical scavenging ability was measured by the Fenton reaction, and the potassium ferricyanide reduction method was used to determine its reducing power. The results showed that the optimum extraction process for Rhizoma Imperatae was a 50-fold volume of water at 30 °C, with three extractions of 2 h each. Its IC50 for hydroxyl radical scavenging was 0.0948 mg/mL, while the IC50 of ascorbic acid was 0.1096 mg/mL; in the potassium ferricyanide reduction method, the extract exhibited reducing power comparable to that of ascorbic acid. The study concluded that the Rhizoma Imperatae extract contains a relatively large amount of polyphenols and has good antioxidant ability.

  17. [Pharmacological study on the hemostatic, analgesic and anti-inflammatory effects of the alcohol extract of Hibiscus tiliaceus].

    PubMed

    Qiu, Fen; Tian, Hui; Zhang, Zhi; Yuan, Xian-Ling; Tan, Yuan-Feng; Ning, Xiao-Qing

    2013-10-01

    To study the hemostatic, analgesic and anti-inflammatory effects of the alcohol extract of Hibiscus tiliaceus and offer a pharmacological and experimental basis for its safe and effective use in the clinic. The hemostatic effects were observed with the tail-breaking, capillary tube and slide methods; the hot plate and acetic acid-induced writhing methods were applied in the mice analgesia experiments; and mouse models of acute auricle swelling induced by dimethylbenzene and of capillary permeability induced by acetic acid were used to observe the anti-inflammatory effects. The alcohol extract of Hibiscus tiliaceus could significantly reduce the bleeding time and the clotting time, prolong the pain reaction time and reduce the number of writhes in mice, and it also inhibited ear swelling and capillary permeability in mice. These results suggest that the alcohol extract of Hibiscus tiliaceus has hemostatic, analgesic and anti-inflammatory effects.

  18. 75 FR 64071 - Basis Reporting by Securities Brokers and Basis Determination for Stock

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-18

    ... Determination--Average Basis Method a. Definition of Dividend Reinvestment Plan i. Acquisition of Stock... this provision and allow all UITs that elect to be treated as RICs to use the average basis method. The... served to limit the average basis method to stock in a DRP, the final regulations provide that, for...

  19. Optimization Design of Minimum Total Resistance Hull Form Based on CFD Method

    NASA Astrophysics Data System (ADS)

    Zhang, Bao-ji; Zhang, Sheng-long; Zhang, Hui

    2018-06-01

    In order to reduce the resistance and improve the hydrodynamic performance of a ship, two hull form design methods are proposed based on potential flow theory and viscous flow theory. The flow fields are meshed using body-fitted mesh and structured grids. The parameters of the hull modification function are the design variables. A three-dimensional modeling method is used to alter the geometry. The Non-Linear Programming (NLP) method is utilized to optimize a David Taylor Model Basin (DTMB) model 5415 ship under constraints, including a displacement constraint. The optimization results show an effective reduction of the resistance. The two hull form design methods developed in this study can provide technical support and a theoretical basis for designing green ships.
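
    As a sketch of the optimization layer only, the following toy uses SciPy's SLSQP to minimize a hypothetical smooth resistance function of two hull-modification parameters under a displacement equality constraint; in the actual study the objective is evaluated by CFD on the DTMB 5415 geometry, and both functions below are invented stand-ins.

```python
import numpy as np
from scipy.optimize import minimize

def resistance(x):
    # Hypothetical smooth stand-in for a CFD resistance evaluation as a
    # function of two hull-modification amplitudes.
    return 1.0 + 0.5 * x[0] ** 2 + 0.3 * (x[1] - 0.2) ** 2 + 0.1 * x[0] * x[1]

def displacement_change(x):
    # Stand-in for the change in displaced volume; the equality constraint
    # keeps the displacement at its baseline value.
    return 0.8 * x[0] + 1.2 * x[1]

x0 = np.array([0.1, 0.1])
res = minimize(
    resistance, x0, method="SLSQP",
    constraints=[{"type": "eq", "fun": displacement_change}],
    bounds=[(-0.5, 0.5), (-0.5, 0.5)],
)
print(res.x, res.fun)
```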

  20. A new multiscale noise tuning stochastic resonance for enhanced fault diagnosis in wind turbine drivetrains

    NASA Astrophysics Data System (ADS)

    Hu, Bingbing; Li, Bing

    2016-02-01

    It is very difficult to detect weak fault signatures due to the large amount of noise in a wind turbine system. Multiscale noise tuning stochastic resonance (MSTSR) has proved to be an effective way to extract weak signals buried in strong noise. However, the MSTSR method originally based on discrete wavelet transform (DWT) has disadvantages such as shift variance and the aliasing effects in engineering application. In this paper, the dual-tree complex wavelet transform (DTCWT) is introduced into the MSTSR method, which makes it possible to further improve the system output signal-to-noise ratio and the accuracy of fault diagnosis by the merits of DTCWT (nearly shift invariant and reduced aliasing effects). Moreover, this method utilizes the relationship between the two dual-tree wavelet basis functions, instead of matching the single wavelet basis function to the signal being analyzed, which may speed up the signal processing and be employed in on-line engineering monitoring. The proposed method is applied to the analysis of bearing outer ring and shaft coupling vibration signals carrying fault information. The results confirm that the method performs better in extracting the fault features than the original DWT-based MSTSR, the wavelet transform with post spectral analysis, and EMD-based spectral analysis methods.
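
    The core building block of any noise-tuning stochastic resonance scheme is the driven bistable system; the multiscale variants tune the noise per wavelet (or dual-tree complex wavelet) subband before feeding it to this stage. A minimal sketch of the bistable stage alone, with assumed, illustrative parameters:

```python
import numpy as np

# Classic bistable stochastic-resonance stage: a weak periodic "fault
# signature" s(t) drives the overdamped double-well system
#   dx = (a*x - b*x**3 + s(t)) dt + sigma dW.
# All parameters here are illustrative; in noise-tuning schemes sigma (or a
# rescaling of the input) is adjusted until hopping locks onto s(t).
a, b, sigma = 1.0, 1.0, 0.8
fs, f0 = 1000.0, 0.1                  # sampling rate and weak-signal frequency (Hz)
t = np.arange(0.0, 60.0, 1.0 / fs)
dt = 1.0 / fs
rng = np.random.default_rng(1)

s = 0.25 * np.sin(2.0 * np.pi * f0 * t)   # weak (subthreshold) periodic forcing

# Euler-Maruyama integration.
x = np.zeros_like(t)
for k in range(t.size - 1):
    drift = a * x[k] - b * x[k] ** 3 + s[k]
    x[k + 1] = x[k] + dt * drift + sigma * np.sqrt(dt) * rng.normal()

# With well-tuned noise, inter-well hopping of x synchronizes with s(t),
# amplifying the weak spectral line; the multiscale methods apply this
# tuning separately within each wavelet (or DTCWT) subband.
print(x.min(), x.max())
```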

  1. Thermal-hydraulic analysis capabilities and methods development at NYPA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.

    1987-01-01

    The operation of a nuclear power plant must be regularly supported by various thermal-hydraulic (T/H) analyses that may include final safety analysis report (FSAR) design basis calculations and licensing evaluations and conservative and best-estimate analyses. The development of in-house T/H capabilities provides the following advantages: (a) it leads to a better understanding of the plant design basis and operating characteristics; (b) methods developed can be used to optimize plant operations and enhance plant safety; (c) such a capability can be used for design reviews, checking vendor calculations, and evaluating proposed plant modifications; and (d) in-house capability reduces the cost of analysis. This paper gives an overview of the T/H capabilities and current methods development activity within the engineering department of the New York Power Authority (NYPA) and will focus specifically on reactor coolant system (RCS) transients and plant dynamic response for non-loss-of-coolant accident events. This paper describes NYPA experience in performing T/H analyses in support of pressurized water reactor plant operation.

  2. Features of Discontinuous Galerkin Algorithms in Gkeyll, and Exponentially-Weighted Basis Functions

    NASA Astrophysics Data System (ADS)

    Hammett, G. W.; Hakim, A.; Shi, E. L.

    2016-10-01

    There are various versions of Discontinuous Galerkin (DG) algorithms with interesting features that could help with challenging higher-dimensional kinetic problems (such as edge turbulence in tokamaks and stellarators). We are developing the gyrokinetic code Gkeyll based on DG methods. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communication costs (which are a bottleneck for exascale computing). The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which alternatively can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from similar Gaussian quadrature employed in popular δf continuum gyrokinetic codes. We show some tests for a 1D Spitzer-Härm heat flux problem, which requires good resolution for the tail. For two velocity dimensions, this approach could lead to a factor of 10 or more speedup. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.

  3. Fourier spatial frequency analysis for image classification: training the training set

    NASA Astrophysics Data System (ADS)

    Johnson, Timothy H.; Lhamo, Yigah; Shi, Lingyan; Alfano, Robert R.; Russell, Stewart

    2016-04-01

    The Directional Fourier Spatial Frequencies (DFSF) of a 2D image can identify similarity in spatial patterns within groups of related images. A Support Vector Machine (SVM) can then be used to classify images if the inter-image variance of the FSF in the training set is bounded. However, if variation in FSF increases with training set size, accuracy may decrease as the size of the training set increases. This calls for a method to identify a set of training images from among the originals that can form a vector basis for the entire class. Applying the Cauchy product method we extract the DFSF spectrum from radiographs of osteoporotic bone, and use it as a matched filter set to eliminate noise and image specific frequencies, and demonstrate that selection of a subset of superclassifiers from within a set of training images improves SVM accuracy. Central to this challenge is that the size of the search space can become computationally prohibitive for all but the smallest training sets. We are investigating methods to reduce the search space to identify an optimal subset of basis training images.

  4. Hydrologic Process Parameterization of Electrical Resistivity Imaging of Solute Plumes Using POD McMC

    NASA Astrophysics Data System (ADS)

    Awatey, M. T.; Irving, J.; Oware, E. K.

    2016-12-01

    Markov chain Monte Carlo (McMC) inversion frameworks are becoming increasingly popular in geophysics due to their ability to recover multiple equally plausible geologic features that honor the limited noisy measurements. Standard McMC methods, however, become computationally intractable with increasing dimensionality of the problem, for example, when working with spatially distributed geophysical parameter fields. We present a McMC approach based on a sparse proper orthogonal decomposition (POD) model parameterization that implicitly incorporates the physics of the underlying process. First, we generate training images (TIs) via Monte Carlo simulations of the target process constrained to a conceptual model. We then apply POD to construct basis vectors from the TIs. A small number of basis vectors can represent most of the variability in the TIs, leading to dimensionality reduction. A projection of the starting model into the reduced basis space generates the starting POD coefficients. At each iteration, only coefficients within a specified sampling window are resimulated assuming a Gaussian prior. The sampling window grows at a specified rate as the number of iterations increases, starting from the coefficients corresponding to the highest-ranked basis vectors and moving toward those of the least informative ones. We found this gradual growth of the sampling window to be more stable than resampling all the coefficients from the first iteration. We demonstrate the performance of the algorithm with both synthetic and lab-scale electrical resistivity imaging of saline tracer experiments, employing the same set of basis vectors for all inversions. We consider two scenarios of unimodal and bimodal plumes. The unimodal plume is consistent with the hypothesis underlying the generation of the TIs, whereas bimodality in plume morphology was not theorized. We show that uncertainty quantification using McMC can proceed in the reduced-dimensionality space while accounting for the physics of the underlying process.
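
    The POD parameterization step can be sketched compactly: stack the training images as columns, subtract the ensemble mean, and take the leading left singular vectors as the basis. The sketch below uses crude random fields as stand-in TIs and an assumed 95% energy cutoff; both are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in training images: each column is one flattened Monte Carlo
# realization of the target plume process (here just smooth random fields).
n_cells, n_ti = 400, 200
TI = rng.normal(size=(n_cells, n_ti)).cumsum(axis=0)

# POD: subtract the ensemble mean and take the left singular vectors.
mean = TI.mean(axis=1, keepdims=True)
U, s, _ = np.linalg.svd(TI - mean, full_matrices=False)

# Keep the few leading basis vectors capturing most of the TI variance.
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.95)) + 1
basis = U[:, :r]
print(f"retained {r} of {n_ti} basis vectors")

# Starting POD coefficients: projection of a starting model into the basis;
# McMC then perturbs coefficients within a growing sampling window.
m0 = TI[:, [0]]
coeffs0 = basis.T @ (m0 - mean)
m_rec = mean + basis @ coeffs0       # model recovered from the coefficients
```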

  5. The Fatigue Approach to Vibration and Health: is it a Practical and Viable way of Predicting the Effects on People?

    NASA Astrophysics Data System (ADS)

    Sandover, J.

    1998-08-01

    The fatigue approach assumes that the vertebral end-plates are the weak link in the spine subjected to shock and vibration, and fail as a result of material fatigue. The theory assumes that end-plate damage leads to degeneration and pain in the lumbar spine. There is evidence for both the damage predicted and the fatigue mode of failure so that the approach may provide a basis for predictive methods for use in epidemiology and standards. An available data set from a variety of heavy vehicles in practical situations was used for predictions of spinal stress and fatigue life. Although there was some disparity between the predictive methods used, the more developed methods indicated fatigue lives that appeared reasonable, taking into account the vehicles tested and our knowledge of spinal degeneration. It is argued that the modelling and fatigue approaches combined offer a basis for estimating the effects of vibration and shock on health. Although the human variables are such that the approach, as yet, only offers rough estimates, it offers a good basis for understanding. The approach indicates that peak values are important and large peaks dominate risk. The method indicates that long term r.m.s. methods probably underestimate the risk of injury. The BS 6841 Wb and ISO 2631 Wk weightings have shortcomings when used where peak values are important. A simple model may be more appropriate. The principle can be applied to continuous vibration as well as high acceleration events so that one method can be applied universally to continuous vibrations, high acceleration events and mixtures of these. An endurance limit can be hypothesised and, if this limit is sufficiently high, then the need for many measurements can be reduced.

  6. An efficient approach for the assembly of mass and stiffness matrices of structures with modifications

    NASA Astrophysics Data System (ADS)

    Wagner, Andreas; Spelsberg-Korspeter, Gottfried

    2013-09-01

    The finite element method is one of the most common tools for the comprehensive analysis of structures with applications reaching from static, often nonlinear stress-strain, to transient dynamic analyses. For single calculations the expense to generate an appropriate mesh is often insignificant compared to the analysis time even for complex geometries and therefore negligible. However, this is not the case for certain other applications, most notably structural optimization procedures, where the (re-)meshing effort is very important with respect to the total runtime of the procedure. Thus it is desirable to find methods to efficiently generate mass and stiffness matrices that reduce this effort, especially for structures with modifications of minor complexity, e.g. panels with cutouts. Therefore, a modeling approach referred to as Energy Modification Method is proposed in this paper. The underlying idea is to model and discretize the basis structure, e.g. a plate, and the modifications, e.g. holes, separately. The discretized energy expressions of the modifications are then subtracted from (or added to) the energy expressions of the basis structure and the coordinates are related to each other by kinematical constraints leading to the mass and stiffness matrices of the complete structure. This approach will be demonstrated by two simple examples, a rod with varying material properties and a rectangular plate with a rectangular or circular hole, using a finite element discretization as basis. Convergence studies of the method based on the latter example follow, demonstrating the rapid convergence and efficiency of the method. Finally, the Energy Modification Method is successfully used in the structural optimization of a circular plate with holes, with the objective to split all its double eigenfrequencies.
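
    The idea can be illustrated on the rod example with a shared mesh, where no extra kinematic constraints are needed (in the general method the basis structure and the modifications are discretized separately and coupled by constraints). A minimal sketch: assemble the uniform basis rod once, then subtract and re-add the energy contributions of the modified elements.

```python
import numpy as np

# 1D bar: n_el two-node elements on [0, L], element stiffness (EA/h)*[[1,-1],[-1,1]].
n_el, L = 10, 1.0
h = L / n_el
k_e = np.array([[1.0, -1.0], [-1.0, 1.0]]) / h

def assemble(EA_per_element):
    K = np.zeros((n_el + 1, n_el + 1))
    for e, EA in enumerate(EA_per_element):
        K[e:e + 2, e:e + 2] += EA * k_e
    return K

EA_basis = 1.0
K_basis = assemble([EA_basis] * n_el)

# "Modification": elements 4-6 actually have stiffness EA_mod. Subtract their
# basis-energy contribution and add the modified one, instead of re-meshing
# and re-assembling the whole structure.
mod_els, EA_mod = [4, 5, 6], 0.3
K = K_basis.copy()
for e in mod_els:
    K[e:e + 2, e:e + 2] += (EA_mod - EA_basis) * k_e

# Check against direct assembly of the modified rod.
K_direct = assemble([EA_mod if e in mod_els else EA_basis for e in range(n_el)])
assert np.allclose(K, K_direct)
```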

  7. The Logarithmic Tail of Néel Walls

    NASA Astrophysics Data System (ADS)

    Melcher, Christof

    We study the multiscale problem of a parametrized planar 180° rotation of magnetization states in a thin ferromagnetic film. In an appropriate scaling, and when the film thickness is comparable to the Bloch line width, the underlying variational principle involves a reduced stray-field operator that approximates (-Δ)^{1/2} as the quality factor Q tends to zero. We show that the associated Néel wall profile u exhibits a very long logarithmic tail. The proof relies on limiting elliptic regularity methods on the basis of the associated Euler-Lagrange equation and symmetrization arguments on the basis of the variational principle. Finally we study the renormalized limit behavior as Q tends to zero.

  8. Employing general fit-bases for construction of potential energy surfaces with an adaptive density-guided approach

    NASA Astrophysics Data System (ADS)

    Klinting, Emil Lund; Thomsen, Bo; Godtliebsen, Ian Heide; Christiansen, Ove

    2018-02-01

    We present an approach to treat sets of general fit-basis functions in a single uniform framework, where the functional form is supplied on input, i.e., the use of different functions does not require new code to be written. The fit-basis functions can be used to carry out linear fits to the grid of single points, which are generated with an adaptive density-guided approach (ADGA). A non-linear conjugate gradient method is used to optimize non-linear parameters if such are present in the fit-basis functions. This means that a set of fit-basis functions with the same inherent shape as the potential cuts can be requested, and no other choices with regard to the fit-basis functions need to be made. The general fit-basis framework is explored in relation to anharmonic potentials for model systems, diatomic molecules, water, and imidazole. The behaviour and performance of Morse and double-well fit-basis functions are compared to those of polynomial fit-basis functions for unsymmetrical single-minimum and symmetrical double-well potentials. Furthermore, calculations for water and imidazole were carried out using both normal coordinates and hybrid optimized and localized coordinates (HOLCs). Our results suggest that choosing a suitable set of fit-basis functions can improve the stability of the fitting routine and the overall efficiency of potential construction by lowering the number of single point calculations required for the ADGA. It is possible to reduce the number of terms in the potential by choosing the Morse and double-well fit-basis functions. These effects are substantial for normal coordinates but become even more pronounced if HOLCs are used.
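
    As an illustration of fitting a non-polynomial fit-basis function to a one-dimensional potential cut, the sketch below fits a Morse form to synthetic single points with SciPy's nonlinear least squares; the paper's framework instead optimizes nonlinear parameters with a nonlinear conjugate gradient method, so `curve_fit` here is a stand-in.

```python
import numpy as np
from scipy.optimize import curve_fit

def morse(r, D, alpha, r0, c):
    """Morse-form fit function: D*(1 - exp(-alpha*(r - r0)))**2 + c."""
    return D * (1.0 - np.exp(-alpha * (r - r0))) ** 2 + c

# Stand-in "single points" along one potential cut (e.g. a bond stretch),
# as an ADGA-style grid would supply them; small noise added for realism.
r = np.linspace(0.7, 3.0, 25)
rng = np.random.default_rng(3)
v = morse(r, 0.18, 1.9, 1.1, 0.0) + 1e-4 * rng.normal(size=r.size)

# Nonlinear fit of (D, alpha, r0, c); a Morse-shaped basis can reproduce an
# unsymmetrical single-minimum cut with far fewer terms than a polynomial.
popt, _ = curve_fit(morse, r, v, p0=(0.2, 2.0, 1.0, 0.0))
print(dict(zip(("D", "alpha", "r0", "c"), popt)))
```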

  9. The Bravyi-Kitaev transformation for quantum computation of electronic structure

    NASA Astrophysics Data System (ADS)

    Seeley, Jacob T.; Richard, Martin J.; Love, Peter J.

    2012-12-01

    Quantum simulation is an important application of future quantum computers with applications in quantum chemistry, condensed matter, and beyond. Quantum simulation of fermionic systems presents a specific challenge. The Jordan-Wigner transformation allows for representation of a fermionic operator by O(n) qubit operations. Here, we develop an alternative method of simulating fermions with qubits, first proposed by Bravyi and Kitaev [Ann. Phys. 298, 210 (2002), 10.1006/aphy.2002.6254; e-print arXiv:quant-ph/0003137v2], that reduces the simulation cost to O(log n) qubit operations for one fermionic operation. We apply this new Bravyi-Kitaev transformation to the task of simulating quantum chemical Hamiltonians, and give a detailed example for the simplest possible case of molecular hydrogen in a minimal basis. We show that the quantum circuit for simulating a single Trotter time step of the Bravyi-Kitaev derived Hamiltonian for H2 requires fewer gate applications than the equivalent circuit derived from the Jordan-Wigner transformation. Since the scaling of the Bravyi-Kitaev method is asymptotically better than the Jordan-Wigner method, this result for molecular hydrogen in a minimal basis demonstrates the superior efficiency of the Bravyi-Kitaev method for all quantum computations of electronic structure.
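
    The scaling difference is easy to see from the Jordan-Wigner side. Below is a pure-Python sketch of the qubit image of a single creation operator, showing the Z string over all preceding modes that makes its Pauli weight grow linearly; the Bravyi-Kitaev tree encoding, not reproduced here, brings that weight down to O(log n).

```python
def jordan_wigner_creation(j, n):
    """Pauli strings (coeff, label) for the creation operator a_j^dagger on
    n modes under Jordan-Wigner:
    a_j^dagger = (prod_{k<j} Z_k) * (X_j - i Y_j) / 2."""
    z_string = "Z" * j
    pad = "I" * (n - j - 1)
    return [(0.5, z_string + "X" + pad), (-0.5j, z_string + "Y" + pad)]

n = 8
for j in (0, 3, 7):
    _, label = jordan_wigner_creation(j, n)[0]
    weight = sum(p != "I" for p in label)
    print(f"a_{j}^dagger -> weight-{weight} Pauli strings, e.g. {label}")
# The weight grows linearly with j; the Bravyi-Kitaev encoding stores partial
# occupation sums in a tree so the same operator touches only O(log n) qubits.
```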

  10. Solar Activity Heading for a Maunder Minimum?

    NASA Astrophysics Data System (ADS)

    Schatten, K. H.; Tobiska, W. K.

    2003-05-01

    Long-range (few years to decades) solar activity prediction techniques vary greatly in their methods. They range from examining planetary orbits, to spectral methods (e.g. Fourier and wavelet analyses), to artificial intelligence methods, to simply using general statistical techniques. Rather than concentrate on statistical/mathematical/numerical methods, we discuss a class of methods which appears to have a "physical basis." Not only does it have a physical basis, but that basis is rooted both in "basic" physics (dynamo theory) and in solar physics (the Babcock dynamo theory). The class we discuss is referred to as "precursor methods," originally developed by Ohl, Brown and Williams and others, using geomagnetic observations. My colleagues and I have developed some understanding of how these methods work and have expanded the prediction methods using "solar dynamo precursor" methods, notably a "SODA" index (SOlar Dynamo Amplitude). These methods are now based upon an understanding of the Sun's dynamo processes, explaining a connection between how the Sun's fields are generated and how the Sun broadcasts its future activity levels to Earth. This has led to better monitoring of the Sun's dynamo fields and is leading to more accurate prediction techniques. We explain how these methods, which relate to the Sun's polar and toroidal magnetic fields, work, review past predictions and the current cycle, and present predictions of solar activity levels for the next few solar cycles. The surprising result of these long-range predictions is a rapid decline in solar activity, starting with cycle #24. If this trend continues, we may see the Sun heading towards a "Maunder" type of solar activity minimum - an extensive period of reduced levels of solar activity. For the solar physicists, who enjoy studying solar activity, we hope this isn't so, but for NASA, which must place and maintain satellites in low earth orbit (LEO), it may help with reboost problems. Space debris and other aspects of objects in LEO will also be affected. This research is supported by the NSF and NASA.

  11. Correlated natural transition orbital framework for low-scaling excitation energy calculations (CorNFLEx).

    PubMed

    Baudin, Pablo; Kristensen, Kasper

    2017-06-07

    We present a new framework for calculating coupled cluster (CC) excitation energies at a reduced computational cost. It relies on correlated natural transition orbitals (NTOs), denoted CIS(D')-NTOs, which are obtained by diagonalizing generalized hole and particle density matrices determined from configuration interaction singles (CIS) information and additional terms that represent correlation effects. A transition-specific reduced orbital space is determined based on the eigenvalues of the CIS(D')-NTOs, and a standard CC excitation energy calculation is then performed in that reduced orbital space. The new method is denoted CorNFLEx (Correlated Natural transition orbital Framework for Low-scaling Excitation energy calculations). We calculate second-order approximate CC singles and doubles (CC2) excitation energies for a test set of organic molecules and demonstrate that CorNFLEx yields excitation energies of CC2 quality at a significantly reduced computational cost, even for relatively small systems and delocalized electronic transitions. In order to illustrate the potential of the method for large molecules, we also apply CorNFLEx to calculate CC2 excitation energies for a series of solvated formamide clusters (up to 4836 basis functions).
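
    The orbital-space reduction can be sketched with plain linear algebra: build hole and particle density matrices from a singles amplitude matrix, diagonalize, and keep only eigenvectors above a threshold. The amplitudes and threshold below are synthetic stand-ins, and the correlation corrections that distinguish CIS(D')-NTOs from plain CIS NTOs are omitted.

```python
import numpy as np

rng = np.random.default_rng(4)
n_occ, n_vir = 20, 60

# Stand-in singles amplitudes T[i, a] for one transition, dominated by a
# single occupied-virtual pair plus weak background (roughly what a localized
# excitation looks like). A real calculation would use CIS amplitudes plus
# the correlation terms that define CIS(D').
T = np.outer(rng.normal(size=n_occ), rng.normal(size=n_vir))
T += 0.02 * rng.normal(size=(n_occ, n_vir))
T /= np.linalg.norm(T)

# Hole and particle density matrices and their eigenvectors (the NTOs).
hole = T @ T.T                       # occupied-occupied block
part = T.T @ T                       # virtual-virtual block
w_h, U_h = np.linalg.eigh(hole)
w_p, U_p = np.linalg.eigh(part)

# Keep only NTOs whose weights exceed a threshold; the subsequent CC2
# calculation is then run in this transition-specific reduced space.
thr = 1e-3
occ_kept = U_h[:, w_h > thr]
vir_kept = U_p[:, w_p > thr]
print(f"reduced space: {occ_kept.shape[1]} occ x {vir_kept.shape[1]} vir "
      f"(from {n_occ} x {n_vir})")
```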

  12. Estimation of the Thermal Process in the Honeycomb Panel by a Monte Carlo Method

    NASA Astrophysics Data System (ADS)

    Gusev, S. A.; Nikolaev, V. N.

    2018-01-01

    A new Monte Carlo method for estimating the thermal state of heat insulation containing honeycomb panels is proposed in the paper. The heat transfer in the honeycomb panel is described by a boundary value problem for a parabolic equation with a discontinuous diffusion coefficient and boundary conditions of the third kind. To obtain an approximate solution, it is proposed to smooth the diffusion coefficient. The resulting problem is then solved on the basis of the probability representation, which expresses the solution as the expectation of a functional of the diffusion process corresponding to the boundary value problem. Solving the problem thus reduces to the numerical statistical modelling of a large number of trajectories of the diffusion process corresponding to the parabolic problem. The Euler method was used earlier for this purpose, but it requires a large computational effort. In this paper the method is modified by combining the Euler method with the random walk on moving spheres method. The new approach allows us to significantly reduce the computational cost.
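
    The probability representation itself is easy to demonstrate on a model problem. The sketch below estimates the solution of a 1D heat equation with absorbing boundaries as an expectation over Euler-discretized diffusion trajectories, i.e. the plain Euler baseline whose cost the paper reduces by mixing in walks on moving spheres; the coefficients and geometry are invented for the demo.

```python
import numpy as np

# Estimate u(x0, t) for u_t = D u_xx on (0, 1) with u = 0 on the boundary
# and initial data g, via u(x0, t) = E[ g(X_t) 1{path never exited} ] over
# the diffusion dX = sqrt(2 D) dW started at x0 (Feynman-Kac).
D, x0, t_final = 0.1, 0.3, 0.2
g = lambda x: np.sin(np.pi * x)          # initial temperature profile

rng = np.random.default_rng(5)
n_paths, n_steps = 20000, 200
dt = t_final / n_steps

x = np.full(n_paths, x0)
alive = np.ones(n_paths, dtype=bool)     # paths that have not hit the boundary
for _ in range(n_steps):
    x[alive] += np.sqrt(2.0 * D * dt) * rng.normal(size=alive.sum())
    alive &= (x > 0.0) & (x < 1.0)       # absorbed paths contribute 0

estimate = np.mean(np.where(alive, g(x), 0.0))
exact = np.exp(-D * np.pi**2 * t_final) * np.sin(np.pi * x0)
print(estimate, exact)                   # Monte Carlo vs separation-of-variables
```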

  13. Analysis of antioxidant activities of common vegetables employing oxygen radical absorbance capacity (ORAC) and ferric reducing antioxidant power (FRAP) assays: a comparative study.

    PubMed

    Ou, Boxin; Huang, Dejian; Hampsch-Woodill, Maureen; Flanagan, Judith A; Deemer, Elizabeth K

    2002-05-22

    A total of 927 freeze-dried vegetable samples, including 111 white cabbages, 59 carrots, 51 snap beans, 57 cauliflower, 33 white onions, 48 purple onions, 130 broccoli, 169 tomatoes, 25 beets, 88 peas, 88 spinach, 18 red peppers, and 50 green peppers, were analyzed using the oxygen radical absorbance capacity (ORAC) and ferric reducing antioxidant power (FRAP) methods. The data show that the ORAC and FRAP values of vegetables are not only dependent on species, but also highly dependent on geographical origin and harvest time. The two antioxidant assay methods, ORAC and FRAP, also give different antioxidant activity trends. The discrepancy is discussed extensively based on the chemistry principles upon which these methods are built, and it is concluded that the ORAC method is chemically more relevant to chain-breaking antioxidant activity, while the FRAP method has some drawbacks such as interference, reaction kinetics, and quantitation methods. On the basis of the ORAC results, green pepper, spinach, purple onion, broccoli, beet, and cauliflower are the leading sources of antioxidant activity against peroxyl radicals.

  14. Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics

    DOE PAGES

    Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul

    2015-03-11

    Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's “Lagrangian ingredients”---the Riemannian metric, the potential-energy function, the dissipation function, and the external force---and subsequently derives reduced-order equations of motion by applying the (forced) Euler--Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.

  15. Preserving Lagrangian Structure in Nonlinear Model Reduction with Application to Structural Dynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin; Tuminaro, Ray; Boggs, Paul

    Our work proposes a model-reduction methodology that preserves Lagrangian structure and achieves computational efficiency in the presence of high-order nonlinearities and arbitrary parameter dependence. As such, the resulting reduced-order model retains key properties such as energy conservation and symplectic time-evolution maps. We focus on parameterized simple mechanical systems subjected to Rayleigh damping and external forces, and consider an application to nonlinear structural dynamics. To preserve structure, the method first approximates the system's “Lagrangian ingredients”---the Riemannian metric, the potential-energy function, the dissipation function, and the external force---and subsequently derives reduced-order equations of motion by applying the (forced) Euler--Lagrange equation with these quantities. Moreover, from the algebraic perspective, key contributions include two efficient techniques for approximating parameterized reduced matrices while preserving symmetry and positive definiteness: matrix gappy proper orthogonal decomposition and reduced-basis sparsification. Our results for a parameterized truss-structure problem demonstrate the practical importance of preserving Lagrangian structure and illustrate the proposed method's merits: it reduces computation time while maintaining high accuracy and stability, in contrast to existing nonlinear model-reduction techniques that do not preserve structure.

  16. Reduced Design Load Basis for Ultimate Blade Loads Estimation in Multidisciplinary Design Optimization Frameworks

    NASA Astrophysics Data System (ADS)

    Pavese, Christian; Tibaldi, Carlo; Larsen, Torben J.; Kim, Taeseong; Thomsen, Kenneth

    2016-09-01

    The aim is to provide a fast and reliable approach to estimating ultimate blade loads for a multidisciplinary design optimization (MDO) framework. For blade design purposes, the standards require a large number of computationally expensive simulations, which cannot be run at each cost function evaluation of an MDO process. This work describes a method that allows the calculation of blade load envelopes to be integrated inside an MDO loop. Ultimate blade load envelopes are calculated for a baseline design and for a design obtained after an iteration of an MDO. These envelopes are computed for a full standard design load basis (DLB) and for a deterministic reduced DLB. Ultimate loads extracted from the two DLBs for each of the two blade designs are compared and analyzed. Although the reduced DLB supplies ultimate loads of different magnitude, the shapes of the estimated envelopes are similar to those computed using the full DLB. This observation is used to propose a scheme that is computationally cheap and can be integrated inside an MDO framework, providing a sufficiently reliable estimation of the blade ultimate loading. The latter aspect is of key importance when design variables implementing passive control methodologies are included in the formulation of the optimization problem. An MDO of a 10 MW wind turbine blade is presented as an applied case study to show the efficacy of the reduced DLB concept.

  17. A novel design of membrane mirror with small deformation and imaging performance analysis in infrared system

    NASA Astrophysics Data System (ADS)

    Zhang, Shuqing; Wang, Yongquan; Zhi, Xiyang

    2017-05-01

    A method of diminishing the shape error of a membrane mirror is proposed in this paper. The inner inflating pressure is considerably decreased by adopting a pre-shaped membrane, and small deformation of the membrane mirror with greatly reduced shape error is thereby achieved. First, a finite element model of the pre-shaped membrane is built on the basis of its mechanical properties. Accurate shape data under different pressures are then acquired by iteratively calculating the node displacements of the model. The shape data are used to build deformed reflecting surfaces for simulative analysis in ZEMAX. Finally, ground-based imaging experiments with 4-bar targets and a natural scene are conducted. The experimental results indicate that the MTF of the infrared system can reach 0.3 at a spatial resolution of 10 lp/mm, and texture details of the natural scene are well presented. The method can provide a theoretical basis and technical support for applications in lightweight optical components with ultra-large apertures.

  18. Computation of indirect nuclear spin-spin couplings with reduced complexity in pure and hybrid density functional approximations.

    PubMed

    Luenser, Arne; Kussmann, Jörg; Ochsenfeld, Christian

    2016-09-28

    We present a (sub)linear-scaling algorithm to determine indirect nuclear spin-spin coupling constants at the Hartree-Fock and Kohn-Sham density functional levels of theory. Employing efficient integral algorithms and sparse algebra routines, an overall (sub)linear scaling behavior can be obtained for systems with a non-vanishing HOMO-LUMO gap. Calculations on systems with over 1000 atoms and 20 000 basis functions illustrate the performance and accuracy of our reference implementation. Specifically, we demonstrate that linear algebra dominates the runtime of conventional algorithms for 10 000 basis functions and above. Attainable speedups of our method exceed 6 × in total runtime and 10 × in the linear algebra steps for the tested systems. Furthermore, a convergence study of spin-spin couplings of an aminopyrazole peptide upon inclusion of the water environment is presented: using the new method it is shown that large solvent spheres are necessary to converge spin-spin coupling values.

  19. A combined representation method for use in band structure calculations. 1: Method

    NASA Technical Reports Server (NTRS)

    Friedli, C.; Ashcroft, N. W.

    1975-01-01

    A representation was described whose basis levels combine the important physical aspects of a finite set of plane waves with those of a set of Bloch tight-binding levels. The chosen combination has a particularly simple dependence on the wave vector within the Brillouin Zone, and its use in reducing the standard one-electron band structure problem to the usual secular equation has the advantage that the lattice sums involved in the calculation of the matrix elements are actually independent of the wave vector. For systems with complicated crystal structures, for which the Korringa-Kohn-Rostoker (KKR), Augmented-Plane Wave (APW) and Orthogonalized-Plane Wave (OPW) methods are difficult to apply, the present method leads to results with satisfactory accuracy and convergence.

  20. Reduced basis technique for evaluating the sensitivity coefficients of the nonlinear tire response

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Tanner, John A.; Peters, Jeanne M.

    1992-01-01

    An efficient reduced-basis technique is proposed for calculating the sensitivity of nonlinear tire response to variations in the design variables. The tire is modeled using a 2-D, moderate rotation, laminated anisotropic shell theory, including the effects of variation in material and geometric parameters. The vector of structural response and its first-order and second-order sensitivity coefficients are each expressed as a linear combination of a small number of basis vectors. The effectiveness of the basis vectors used in approximating the sensitivity coefficients is demonstrated by a numerical example involving the Space Shuttle nose-gear tire, which is subjected to uniform inflation pressure.
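
    The algebraic core of such a reduced-basis technique can be sketched on a linear stand-in problem: project a parametrized system onto a small basis, solve the reduced system for the response, and reuse the same reduced operator for the sensitivity right-hand side obtained by differentiating the governing equations. The matrices and basis below are random placeholders, not the tire model.

```python
import numpy as np

rng = np.random.default_rng(6)
n, m = 500, 6                       # full model size, reduced-basis size

# Stand-in parametrized stiffness K(p) = K0 + p*K1 (SPD for the toy data).
A0 = rng.normal(size=(n, n)); K0 = A0 @ A0.T + n * np.eye(n)
A1 = rng.normal(size=(n, n)); K1 = 0.01 * (A1 @ A1.T)
f = rng.normal(size=n)
p = 1.0
K = K0 + p * K1

# A small basis Phi (in practice built from path derivatives or snapshots).
Phi, _ = np.linalg.qr(rng.normal(size=(n, m)))

# Galerkin-reduced response: solve an m x m system instead of n x n.
Kr = Phi.T @ K @ Phi
u = Phi @ np.linalg.solve(Kr, Phi.T @ f)

# First-order sensitivity: differentiating K(p) u = f gives
# K du/dp = -(dK/dp) u = -K1 u, projected onto the same basis.
du = Phi @ np.linalg.solve(Kr, Phi.T @ (-K1 @ u))
print(u[:3], du[:3])
```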

  1. A partitioned correlation function interaction approach for describing electron correlation in atoms

    NASA Astrophysics Data System (ADS)

    Verdebout, S.; Rynkun, P.; Jönsson, P.; Gaigalas, G.; Froese Fischer, C.; Godefroid, M.

    2013-04-01

    The traditional multiconfiguration Hartree-Fock (MCHF) and configuration interaction (CI) methods are based on a single orthonormal orbital basis. For atoms with many closed core shells, or complicated shell structures, a large orbital basis is needed to saturate the different electron correlation effects such as valence, core-valence and correlation within the core shells. The large orbital basis leads to massive configuration state function (CSF) expansions that are difficult to handle, even on large computer systems. We show that it is possible to relax the orthonormality restriction on the orbital basis and break down the originally very large calculations into a series of smaller calculations that can be run in parallel. Each calculation determines a partitioned correlation function (PCF) that accounts for a specific correlation effect. The PCFs are built on optimally localized orbital sets and are added to a zero-order multireference (MR) function to form a total wave function. The expansion coefficients of the PCFs are determined from a low dimensional generalized eigenvalue problem. The interaction and overlap matrices are computed using a biorthonormal transformation technique (Verdebout et al 2010 J. Phys. B: At. Mol. Phys. 43 074017). The new method, called partitioned correlation function interaction (PCFI), converges rapidly with respect to the orbital basis and gives total energies that are lower than the ones from ordinary MCHF and CI calculations. The PCFI method is also very flexible when it comes to targeting different electron correlation effects. Focusing our attention on neutral lithium, we show that by dedicating a PCF to the single excitations from the core, spin- and orbital-polarization effects can be captured very efficiently, leading to highly improved convergence patterns for hyperfine parameters compared with MCHF calculations based on a single orthogonal radial orbital basis. By collecting separately optimized PCFs to correct the MR function, the variational degrees of freedom in the relative mixing coefficients of the CSFs building the PCFs are inhibited. The constraints on the mixing coefficients lead to small offsets in computed properties such as hyperfine structure, isotope shift and transition rates, with respect to the correct values. By (partially) deconstraining the mixing coefficients one converges to the correct limits and keeps the tremendous advantage of improved convergence rates that comes from the use of several orbital sets. Ultimately, reducing each PCF to a single CSF with its own orbital basis leads to a non-orthogonal CI approach. Various perspectives of the new method are given.
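
    The low-dimensional generalized eigenvalue problem that determines the PCF mixing coefficients has the standard form Hc = ESc with a non-trivial overlap S, since the PCFs are built on mutually non-orthogonal orbital sets. A minimal SciPy sketch with random stand-in matrices:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(7)
m = 5                                # MR function plus a few PCFs

# Stand-in interaction (H) and overlap (S) matrices between the zero-order
# multireference function and the partitioned correlation functions; in the
# PCFI method these come from a biorthonormal transformation technique.
A = rng.normal(size=(m, m)); H = (A + A.T) / 2.0
B = rng.normal(size=(m, m)); S = B @ B.T + m * np.eye(m)   # SPD overlap

# Generalized symmetric-definite eigenvalue problem H c = E S c.
E, C = eigh(H, S)
print("lowest energy:", E[0])
print("mixing coefficients:", C[:, 0])
```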

  2. Study on Resources Assessment of Coal Seams covered by Long-Distance Oil & Gas Pipelines

    NASA Astrophysics Data System (ADS)

    Han, Bing; Fu, Qiang; Pan, Wei; Hou, Hanfang

    2018-01-01

    The assessment of mineral resources covered by construction projects plays an important role in reducing the overlaying of important mineral resources and ensuring the smooth implementation of construction projects. Taking a planned long-distance gas pipeline as an example, the assessment method and principles for coal resources covered by linear projects are introduced. The areas covered by multiple coal seams are determined according to the linear projection method, and the resources covered directly and indirectly by the pipeline are estimated using the area segmentation method on the basis of the original blocks. The research results can provide a reference for route optimization and for compensation for mining rights.

  3. Anomaly detection of turbopump vibration in Space Shuttle Main Engine using statistics and neural networks

    NASA Technical Reports Server (NTRS)

    Lo, C. F.; Wu, K.; Whitehead, B. A.

    1993-01-01

    Statistical and neural network methods have been applied to investigate the feasibility of detecting anomalies in the turbopump vibration of the SSME. The anomalies are detected based on the amplitudes of peaks of the fundamental and harmonic frequencies in the power spectral density. These data are reduced to the proper format from sensor data measured by strain gauges and accelerometers. Both methods are feasible for detecting the vibration anomalies. The statistical method requires sufficient data points to establish a reasonable statistical distribution data bank; this method is applicable for on-line operation. The neural network method likewise needs enough data to train the network. The testing procedure can be utilized at any time so long as the characteristics of the components remain unchanged.

  4. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.
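
    The trial-function construction can be sketched in one dimension: build the interpolation matrix from the cubic radial basis function, solve for the coefficients, and evaluate. The nodal data below are synthetic, and the local weak-form (test function) side of MLPG is not shown; note also that a low-order polynomial term is often appended to guarantee solvability, which this toy omits.

```python
import numpy as np

# Non-compactly supported cubic radial basis function phi(r) = r**3.
phi = lambda r: r ** 3

# Interpolate nodal data on a 1D domain (stand-in nodal deflections).
nodes = np.linspace(0.0, 1.0, 9)
u_nodes = np.sin(np.pi * nodes)

# Interpolation matrix A[i, j] = phi(|x_i - x_j|); solve for coefficients.
A = phi(np.abs(nodes[:, None] - nodes[None, :]))
coeffs = np.linalg.solve(A, u_nodes)

def trial(x):
    """RBF trial function evaluated at points x."""
    x = np.atleast_1d(x)
    return phi(np.abs(x[:, None] - nodes[None, :])) @ coeffs

# The trial function reproduces the nodal values and interpolates between.
print(np.max(np.abs(trial(nodes) - u_nodes)))   # ~ machine precision
print(trial(np.array([0.25, 0.5, 0.75])))
```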

  5. Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baudin, Pablo, E-mail: baudin.pablo@gmail.com; qLEAP – Center for Theoretical Chemistry, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C; Marín, José Sánchez

    2014-03-14

    A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioned form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculations of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated, as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform an extrapolation to the complete basis set limit for the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free-base porphin are also presented.
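
    The decomposition step can be sketched with a threshold-pivoted Cholesky routine: columns are generated one at a time, pivoting on the largest remaining diagonal, and the loop stops once that diagonal falls below the decomposition threshold. The matrix below is a random low-rank stand-in for the positive semidefinite two-electron integral matrix.

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-6):
    """Threshold-pivoted Cholesky: returns L (n x k) with M ~= L @ L.T,
    stopping when the largest remaining diagonal drops below tol."""
    n = M.shape[0]
    d = np.diag(M).astype(float).copy()   # residual diagonal
    L = np.zeros((n, 0))
    while L.shape[1] < n:
        p = int(np.argmax(d))
        if d[p] < tol:
            break
        col = (M[:, p] - L @ L[p, :]) / np.sqrt(d[p])
        L = np.column_stack([L, col])
        d = d - col ** 2
    return L

rng = np.random.default_rng(8)
n, r = 40, 6
B = rng.normal(size=(n, r))
M = B @ B.T + 1e-8 * np.eye(n)    # numerically low-rank SPD "integral" matrix

L = pivoted_cholesky(M, tol=1e-6)
print(L.shape, np.max(np.abs(M - L @ L.T)))   # rank ~ r, error ~ 1e-8
```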

  6. Ensembles of radial basis function networks for spectroscopic detection of cervical precancer

    NASA Technical Reports Server (NTRS)

    Tumer, K.; Ramanujam, N.; Ghosh, J.; Richards-Kortum, R.

    1998-01-01

    The mortality related to cervical cancer can be substantially reduced through early detection and treatment. However, current detection techniques, such as Pap smear and colposcopy, fail to achieve a concurrently high sensitivity and specificity. In vivo fluorescence spectroscopy is a technique which quickly, noninvasively and quantitatively probes the biochemical and morphological changes that occur in precancerous tissue. A multivariate statistical algorithm was used to extract clinically useful information from tissue spectra acquired from 361 cervical sites from 95 patients at 337-, 380-, and 460-nm excitation wavelengths. The multivariate statistical analysis was also employed to reduce the number of fluorescence excitation-emission wavelength pairs required to discriminate healthy tissue samples from precancerous tissue samples. The use of connectionist methods such as multilayered perceptrons, radial basis function (RBF) networks, and ensembles of such networks was investigated. RBF ensemble algorithms based on fluorescence spectra potentially provide automated and near real-time implementation of precancer detection in the hands of nonexperts. The results are more reliable, direct, and accurate than those achieved by either human experts or multivariate statistical algorithms.

  7. Discontinuous Galerkin algorithms for fully kinetic plasmas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Juno, J.; Hakim, A.; TenBarge, J.

    Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.

  8. Discontinuous Galerkin algorithms for fully kinetic plasmas

    DOE PAGES

    Juno, J.; Hakim, A.; TenBarge, J.; ...

    2017-10-10

    Here, we present a new algorithm for the discretization of the non-relativistic Vlasov–Maxwell system of equations for the study of plasmas in the kinetic regime. Using the discontinuous Galerkin finite element method for the spatial discretization, we obtain a high order accurate solution for the plasma's distribution function. Time stepping for the distribution function is done explicitly with a third order strong-stability preserving Runge–Kutta method. Since the Vlasov equation in the Vlasov–Maxwell system is a high dimensional transport equation, up to six dimensions plus time, we take special care to note various features we have implemented to reduce the cost while maintaining the integrity of the solution, including the use of a reduced high-order basis set. A series of benchmarks, from simple wave and shock calculations, to a five dimensional turbulence simulation, are presented to verify the efficacy of our set of numerical methods, as well as demonstrate the power of the implemented features.

  9. Vertical intensity modulation for improved radiographic penetration and reduced exclusion zone

    NASA Astrophysics Data System (ADS)

    Bendahan, J.; Langeveld, W. G. J.; Bharadwaj, V.; Amann, J.; Limborg, C.; Nosochkov, Y.

    2016-09-01

    In the present work, a method to direct the X-ray beam in real time to desired locations in the cargo, in order to increase penetration and reduce the exclusion zone, is presented. Cargo scanners employ high energy X-rays to produce radiographic images of the cargo. Most new scanners employ dual-energy sources to produce, in addition to attenuation maps, atomic number information in order to facilitate the detection of contraband. The electron beam producing the bremsstrahlung X-ray beam is usually directed approximately to the center of the container, concentrating the highest X-ray intensity in that area. Other parts of the container are exposed to lower radiation levels due to the large drop-off of the bremsstrahlung radiation intensity as a function of angle, especially for high energies (>6 MV). This results in lower penetration in these areas, requiring higher power sources that increase the dose and the exclusion zone. The capability to modulate the X-ray source intensity on a pulse-by-pulse basis to deliver only as much radiation as required to the cargo has been reported previously. This approach is, however, controlled by the most attenuating part of the inspected slice, resulting in excessive radiation to other areas of the cargo. A method to direct a dual-energy beam has been developed to provide a more precisely controlled level of required radiation to highly attenuating areas. The present method is based on steering the dual-energy electron beam using magnetic components on a pulse-to-pulse basis to a fixed location on the X-ray production target, but incident at different angles so as to direct the maximum intensity of the produced bremsstrahlung to the desired locations. Details of the technique and subsystem, along with simulation results, are presented.

  10. Solutions to Kuessner's integral equation in unsteady flow using local basis functions

    NASA Technical Reports Server (NTRS)

    Fromme, J. A.; Halstead, D. W.

    1975-01-01

    The computational procedure and numerical results are presented for a new method to solve Kuessner's integral equation in the case of subsonic compressible flow about harmonically oscillating planar surfaces with controls. Kuessner's equation is a linear transformation from pressure to normalwash. The unknown pressure is expanded in terms of prescribed basis functions and the unknown basis function coefficients are determined in the usual manner by satisfying the given normalwash distribution either collocationally or in the complex least squares sense. The present method of solution differs from previous ones in that the basis functions are defined in a continuous fashion over a relatively small portion of the aerodynamic surface and are zero elsewhere. This method, termed the local basis function method, combines the smoothness and accuracy of distribution methods with the simplicity and versatility of panel methods. Predictions by the local basis function method for unsteady flow are shown to be in excellent agreement with other methods. Also, potential improvements to the present method and extensions to more general classes of solutions are discussed.
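
    The final algebraic step of such a method, determining the basis-function coefficients from the prescribed normalwash in the complex least-squares sense, is a small dense solve. A numpy sketch with a random stand-in for the aerodynamic kernel matrix (numpy's lstsq handles complex data directly):

```python
import numpy as np

rng = np.random.default_rng(9)
n_ctrl, n_basis = 40, 12     # control points and local pressure basis functions

# Stand-in kernel matrix: A[i, j] = normalwash induced at control point i by
# unit-amplitude local pressure basis function j (complex for harmonic motion).
A = rng.normal(size=(n_ctrl, n_basis)) + 1j * rng.normal(size=(n_ctrl, n_basis))

# Prescribed normalwash from the oscillating surface and control motion.
w = rng.normal(size=n_ctrl) + 1j * rng.normal(size=n_ctrl)

# Complex least-squares solution for the pressure coefficients; with
# n_ctrl == n_basis this reduces to plain collocation.
c, *_ = np.linalg.lstsq(A, w, rcond=None)
residual = np.linalg.norm(A @ c - w)
print(c[:3], residual)
```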

  11. Pharmacological Basis for Use of Selaginella moellendorffii in Gouty Arthritis: Antihyperuricemic, Anti-Inflammatory, and Xanthine Oxidase Inhibition

    PubMed Central

    Zhao, Ping; Chen, Ke-li; Zhang, Guo-li

    2017-01-01

    This study was aimed at evaluating the effects of Selaginella moellendorffii Hieron. (SM) on gouty arthritis and gaining insight into the possible mechanisms. An HPLC method was developed for chemical analysis. The paw oedema, neutrophil accumulation, inflammatory mediators, lipid peroxidation, and histopathological changes of the joints were analyzed in a gouty arthritis rat model, and kidney injury and serum urate were measured in hyperuricemic mice. Pharmacokinetic results demonstrated that the main apigenin glycosides might be quantitatively transformed into apigenin in the mammalian body. Among these compounds, apigenin exhibited the strongest effect on xanthine oxidase (XOD). SM aqueous extract proved active in reducing hyperuricemia in a dose-dependent manner, and the levels of blood urea nitrogen (BUN) and creatinine (Cr) in the high dose group were decreased significantly compared with the hyperuricemic control group (P < 0.01). The high dose of SM extract could significantly prevent paw swelling, reduce gouty joint inflammatory features, reduce the release of IL-1β and TNF-α, lower malondialdehyde (MDA) and myeloperoxidase (MPO) levels, and increase the superoxide dismutase (SOD) level (P < 0.01). For the first time, this study provides a rational basis for the traditional use of SM aqueous extract against gout in folk medicine. PMID:28250791

  12. Initial postbuckling analysis of elastoplastic thin-shear structures

    NASA Technical Reports Server (NTRS)

    Carnoy, E. G.; Panosyan, G.

    1984-01-01

    The design of thin shell structures with respect to elastoplastic buckling requires an extended analysis of the influence of initial imperfections. For a conservative design, the most critical defect should be assumed with the maximum allowable magnitude. This defect is closely related to the initial postbuckling behavior. An algorithm is given for the quasi-static analysis of the postbuckling behavior of structures that exhibit multiple buckling points. The algorithm, based upon an energy criterion, allows the computation of the critical perturbation, which is then employed in the definition of the critical defect. For computational efficiency, the algorithm uses the reduced basis technique with automatic updating of the modal basis. The method is applied to the axisymmetric buckling of cylindrical shells under axial compression, and conclusions are given for future research.

  13. Auxiliary basis expansions for large-scale electronic structure calculations

    PubMed Central

    Jung, Yousung; Sodt, Alex; Gill, Peter M. W.; Head-Gordon, Martin

    2005-01-01

    One way to reduce the computational cost of electronic structure calculations is to use auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually by using the variationally optimum Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. Hence it is possible to design linear scaling auxiliary basis methods without additional approximations to treat large systems. PMID:15845767

  14. Nonlinear Reduced-Order Analysis with Time-Varying Spatial Loading Distributions

    NASA Technical Reports Server (NTRS)

    Przekop, Adam

    2008-01-01

    Oscillating shocks acting in combination with high-intensity acoustic loadings present a challenge to the design of resilient hypersonic flight vehicle structures. This paper addresses some features of this loading condition and certain aspects of a nonlinear reduced-order analysis with emphasis on system identification leading to formation of a robust modal basis. The nonlinear dynamic response of a composite structure subject to the simultaneous action of locally strong oscillating pressure gradients and high-intensity acoustic loadings is considered. The reduced-order analysis used in this work has been previously demonstrated to be both computationally efficient and accurate for time-invariant spatial loading distributions, provided that an appropriate modal basis is used. The challenge of the present study is to identify a suitable basis for loadings with time-varying spatial distributions. Using a proper orthogonal decomposition and modal expansion, it is shown that such a basis can be developed. The basis is made more robust by incrementally expanding it to account for changes in the location, frequency and span of the oscillating pressure gradient.
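
    As a concrete, hedged illustration of the basis-formation idea (our sketch, not the paper's code; all data are synthetic), the following extracts a POD basis from response snapshots via the SVD and then expands it incrementally with the component of a new snapshot orthogonal to the current basis, in the spirit of the robustness strategy described above.

      # Minimal POD sketch: orthonormal modal basis from response snapshots.
      import numpy as np

      rng = np.random.default_rng(0)
      n_dof, n_snap = 200, 50
      # Synthetic snapshots: a rank-3 response plus small noise.
      snapshots = rng.standard_normal((n_dof, 3)) @ rng.standard_normal((3, n_snap))
      snapshots += 0.01 * rng.standard_normal((n_dof, n_snap))

      # Left singular vectors, ordered by energy content, are the POD modes.
      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      r = int(np.searchsorted(energy, 0.999)) + 1      # retain 99.9% of the energy
      basis = U[:, :r]

      # Incremental expansion: append the part of a new snapshot that the
      # current basis cannot represent.
      new_snap = rng.standard_normal(n_dof)
      residual = new_snap - basis @ (basis.T @ new_snap)
      if np.linalg.norm(residual) > 1e-8:
          basis = np.column_stack([basis, residual / np.linalg.norm(residual)])
      print(r, basis.shape)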

  15. Comparison of predictive control methods for high consumption industrial furnace.

    PubMed

    Stojanovski, Goran; Stankovski, Mile

    2013-01-01

    We describe several predictive control approaches for the control of high-consumption industrial furnaces. These furnaces are major consumers in production industries, and reducing their fuel consumption while optimizing product quality is one of the most important engineering tasks. In order to demonstrate the benefits of implementing advanced predictive control algorithms, we have compared several major criteria for furnace control. On the basis of the analysis, some important conclusions have been drawn.

  16. Effects of extraction methods on the antioxidant activities of polysaccharides from Agaricus blazei Murrill.

    PubMed

    Jia, Shaoyi; Li, Feng; Liu, Yong; Ren, Haitao; Gong, Guili; Wang, Yanyan; Wu, Songhai

    2013-11-01

    Five polysaccharides were obtained from Agaricus blazei Murrill (ABM) through different extraction methods, including hot water extraction, single-enzyme extraction (pectinase, cellulase or papain) and compound-enzyme extraction (cellulase:pectinase:papain). Their characteristics, such as polysaccharide yield, polysaccharide content, protein content and infrared spectra, were determined, and antioxidant activities were investigated on the basis of hydroxyl radical, DPPH free radical and ABTS free radical scavenging and reducing power. The results showed that the five extracts exhibited antioxidant activities in a concentration-dependent manner. Compared with the other methods, the compound-enzyme extraction method presented the highest polysaccharide yield (17.44%). Moreover, the compound-enzyme extracts exhibited the strongest reducing power and the highest scavenging rates for hydroxyl radicals, DPPH radicals and ABTS radicals. On the contrary, the hot water extraction method had the lowest polysaccharide yield, 11.95%, and its extracts also exhibited the lowest antioxidant activities. Overall, the available data obtained with in vitro models suggested that ABM extracts are natural antioxidants and that compound-enzyme extraction is an appropriate, mild and effective method for obtaining polysaccharide extracts from ABM.

  17. Cylinder pressure reconstruction based on complex radial basis function networks from vibration and speed signals

    NASA Astrophysics Data System (ADS)

    Johnsson, Roger

    2006-11-01

    Methods to measure and monitor the cylinder pressure in internal combustion engines can contribute to reduced fuel consumption, noise and exhaust emissions. As direct measurements of the cylinder pressure are expensive and not suitable for measurements in vehicles on the road, indirect methods which measure cylinder pressure have great potential value. In this paper, a non-linear model based on complex radial basis function (RBF) networks is proposed for the reconstruction of in-cylinder pressure pulse waveforms. Input to the network is the Fourier transforms of both engine structure vibration and crankshaft speed fluctuation. The primary reason for the use of Fourier transforms is that different frequency regions of the signals are used for the reconstruction process. This approach also makes it easier to reduce the amount of information that is used as input to the RBF network. The complex RBF network was applied to measurements from a 6-cylinder ethanol-powered diesel engine over a wide range of running conditions. Prediction accuracy was validated by comparing a number of parameters between the measured and predicted cylinder pressure waveforms, such as maximum pressure, maximum rate of pressure rise and indicated mean effective pressure. The performance of the network was also evaluated for a number of untrained running conditions that differ in both speed and load from the trained ones. The results for the validation set were comparable to those for the trained conditions.
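
    As a hedged illustration of the kind of network used here (a real-valued simplification; the paper's complex-valued formulation and engine data are not reproduced, and all inputs below are synthetic stand-ins), a Gaussian RBF network can be fitted by linear least squares once the centres are chosen:

      # Fit a Gaussian RBF network by linear least squares.
      import numpy as np

      rng = np.random.default_rng(1)
      x = rng.uniform(-1, 1, (300, 2))             # stand-ins for spectral features
      y = np.sin(3 * x[:, 0]) + 0.5 * x[:, 1]**2   # stand-in for a pressure parameter

      centers = x[rng.choice(len(x), 20, replace=False)]
      width = 0.5

      def design(xs):
          # Gaussian activations for every (sample, centre) pair.
          d2 = ((xs[:, None, :] - centers[None, :, :])**2).sum(-1)
          return np.exp(-d2 / (2 * width**2))

      w, *_ = np.linalg.lstsq(design(x), y, rcond=None)  # output-layer weights
      rmse = np.sqrt(np.mean((design(x) @ w - y)**2))
      print(f"training RMSE: {rmse:.4f}")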

  18. Optimization of incremental structure from motion combining a random k-d forest and pHash for unordered images in a complex scene

    NASA Astrophysics Data System (ADS)

    Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi

    2018-01-01

    On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
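
    The minimum-spanning-tree planning step can be sketched as follows (a hedged illustration: the relatedness matrix here is synthetic, standing in for the k-d forest/pHash values, and the thinning refinement is omitted). The MST edges are the only image pairs sent to feature matching, reducing the n(n-1)/2 exhaustive pairs to n-1.

      # Select image pairs for matching via a minimum spanning tree.
      import numpy as np
      from scipy.sparse.csgraph import minimum_spanning_tree

      rng = np.random.default_rng(2)
      n_images = 6
      sim = rng.uniform(0.1, 0.9, (n_images, n_images))
      sim = (sim + sim.T) / 2                  # symmetric relatedness values
      np.fill_diagonal(sim, 0.0)

      cost = 1.0 - sim                         # high relatedness -> low edge cost
      np.fill_diagonal(cost, 0.0)              # zeros are treated as missing edges
      mst = minimum_spanning_tree(cost).toarray()

      pairs = [(i, j) for i in range(n_images)
               for j in range(n_images) if mst[i, j] > 0]
      print(pairs)                             # the n-1 pairs actually matched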

  19. Molecular Properties by Quantum Monte Carlo: An Investigation on the Role of the Wave Function Ansatz and the Basis Set in the Water Molecule

    PubMed Central

    Zen, Andrea; Luo, Ye; Sorella, Sandro; Guidoni, Leonardo

    2014-01-01

    Quantum Monte Carlo methods are accurate and promising many-body techniques for electronic structure calculations which, in recent years, have attracted growing interest thanks to their favorable scaling with system size and their efficient parallelization, particularly suited to modern high-performance computing facilities. The ansatz of the wave function and its variational flexibility are crucial points for both the accurate description of molecular properties and the capability of the method to tackle large systems. In this paper, we extensively analyze, using different variational ansatzes, several properties of the water molecule, namely, the total energy, the dipole and quadrupole moments, the ionization and atomization energies, the equilibrium configuration, and the harmonic and fundamental frequencies of vibration. The investigation mainly focuses on variational Monte Carlo calculations, although several lattice regularized diffusion Monte Carlo calculations are also reported. Through a systematic study, we provide a useful guide to the choice of the wave function, the pseudopotential, and the basis set for QMC calculations. We also introduce a new method for the computation of forces with finite variance on open systems and a new strategy for the definition of the atomic orbitals involved in the Jastrow-Antisymmetrised Geminal power wave function, in order to drastically reduce the number of variational parameters. This scheme significantly improves the efficiency of QMC energy minimization in the case of large basis sets. PMID:24526929

  20. A path integral methodology for obtaining thermodynamic properties of nonadiabatic systems using Gaussian mixture distributions

    NASA Astrophysics Data System (ADS)

    Raymond, Neil; Iouchtchenko, Dmitri; Roy, Pierre-Nicholas; Nooijen, Marcel

    2018-05-01

    We introduce a new path integral Monte Carlo method for investigating nonadiabatic systems in thermal equilibrium and demonstrate an approach to reducing stochastic error. We derive a general path integral expression for the partition function in a product basis of continuous nuclear and discrete electronic degrees of freedom without the use of any mapping schemes. We separate our Hamiltonian into a harmonic portion and a coupling portion; the partition function can then be calculated as the product of a Monte Carlo estimator (of the coupling contribution to the partition function) and a normalization factor (that is evaluated analytically). A Gaussian mixture model is used to evaluate the Monte Carlo estimator in a computationally efficient manner. Using two model systems, we demonstrate our approach to reduce the stochastic error associated with the Monte Carlo estimator. We show that the selection of the harmonic oscillators comprising the sampling distribution directly affects the efficiency of the method. Our results demonstrate that our path integral Monte Carlo method's deviation from exact Trotter calculations is dominated by the choice of the sampling distribution. By improving the sampling distribution, we can drastically reduce the stochastic error leading to lower computational cost.

  1. Reduced-Order Modeling for Flutter/LCO Using Recurrent Artificial Neural Network

    NASA Technical Reports Server (NTRS)

    Yao, Weigang; Liou, Meng-Sing

    2012-01-01

    The present study demonstrates the efficacy of a recurrent artificial neural network in providing a high-fidelity time-dependent nonlinear reduced-order model (ROM) for flutter/limit-cycle oscillation (LCO) modeling. An artificial neural network is a relatively straightforward nonlinear method for modeling an input-output relationship from a set of known data, for which we use the radial basis function (RBF) with its parameters determined through a training process. The resulting RBF neural network, however, is only static and is not yet adequate for application to problems of a dynamic nature. The recurrent neural network method [1] is applied to construct a reduced-order model from a series of high-fidelity time-dependent data of aero-elastic simulations. Once the RBF neural network ROM is constructed properly, an accurate approximate solution can be obtained at a fraction of the cost of a full-order computation. The method derived during the study has been validated for predicting nonlinear aerodynamic forces in transonic flow and is capable of accurate flutter/LCO simulations. The obtained results indicate that the present recurrent RBF neural network is accurate and efficient for nonlinear aero-elastic system analysis.

  2. Controlled method of reducing electrophoretic mobility of various substances

    NASA Technical Reports Server (NTRS)

    Vanalstine, James M. (Inventor)

    1989-01-01

    A method of reducing the electrophoretic mobility of macromolecules, particles, cells, and the like is provided. The method comprises interacting the particles or cells with a polymer-linked affinity compound composed of: a hydrophilic neutral polymer such as polyethylene glycol, and an affinity component consisting of a hydrophobic compound such as a fatty acid ester, an immunocompound such as an antibody or active fragment thereof or similar macromolecule, or other ligands. The reduction of electrophoretic mobility achieved is directly proportional to the concentration of the polymer-linked affinity compound employed, and the mobility reduction obtainable is up to 100 percent for particular particles and cells. The present invention is advantageous in that analytical electrophoretic separation can now be achieved for macromolecules, particles, and cells whose native surface charge structure had prevented them from being separated by normal electrophoretic means. Depending on the affinity component utilized, separation can be achieved on the basis of specific/irreversible, specific/reversible, semi-specific/reversible, relatively nonspecific/reversible, or relatively nonspecific/irreversible ligand-substance interactions. The present method is also advantageous in that it can be used in a variety of standard laboratory electrophoresis equipment.

  3. A density matrix-based method for the linear-scaling calculation of dynamic second- and third-order properties at the Hartree-Fock and Kohn-Sham density functional theory levels.

    PubMed

    Kussmann, Jörg; Ochsenfeld, Christian

    2007-11-28

    A density matrix-based time-dependent self-consistent field (D-TDSCF) method for the calculation of dynamic polarizabilities and first hyperpolarizabilities using the Hartree-Fock and Kohn-Sham density functional theory approaches is presented. The D-TDSCF method allows us to reduce the asymptotic scaling behavior of the computational effort from cubic to linear for systems with a nonvanishing band gap. The linear scaling is achieved by combining a density matrix-based reformulation of the TDSCF equations with linear-scaling schemes for the formation of Fock- or Kohn-Sham-type matrices. In our reformulation only potentially linear-scaling matrices enter the formulation and efficient sparse algebra routines can be employed. Furthermore, the corresponding formulas for the first hyperpolarizabilities are given in terms of zeroth- and first-order one-particle reduced density matrices according to Wigner's (2n+1) rule. The scaling behavior of our method is illustrated for first exemplary calculations with systems of up to 1011 atoms and 8899 basis functions.

  4. Compressed Sensing Quantum Process Tomography for Superconducting Quantum Gates

    NASA Astrophysics Data System (ADS)

    Rodionov, Andrey

    An important challenge in quantum information science and quantum computing is the experimental realization of high-fidelity quantum operations on multi-qubit systems. Quantum process tomography (QPT) is a procedure devised to fully characterize a quantum operation. We first present the results of the estimation of the process matrix for superconducting multi-qubit quantum gates using the full data set employing various methods: linear inversion, maximum likelihood, and least-squares. To alleviate the problem of exponential resource scaling needed to characterize a multi-qubit system, we next investigate a compressed sensing (CS) method for QPT of two-qubit and three-qubit quantum gates. Using experimental data for two-qubit controlled-Z gates, taken with both Xmon and superconducting phase qubits, we obtain estimates for the process matrices with reasonably high fidelities compared to full QPT, despite using significantly reduced sets of initial states and measurement configurations. We show that the CS method still works when the amount of data is so small that the standard QPT would have an underdetermined system of equations. We also apply the CS method to the analysis of the three-qubit Toffoli gate with simulated noise, and similarly show that the method works well for a substantially reduced set of data. For the CS calculations we use two different bases in which the process matrix is approximately sparse (the Pauli-error basis and the singular value decomposition basis), and show that the resulting estimates of the process matrices match with reasonably high fidelity. For both two-qubit and three-qubit gates, we characterize the quantum process by its process matrix and average state fidelity, as well as by the corresponding standard deviation defined via the variation of the state fidelity for different initial states. We calculate the standard deviation of the average state fidelity both analytically and numerically, using a Monte Carlo method. Overall, we show that CS QPT offers a significant reduction in the needed amount of experimental data for two-qubit and three-qubit quantum gates.
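
    A toy analogue of the compressed-sensing step (not the QPT pipeline itself; sizes and data are synthetic, and cvxpy is assumed to be available): an underdetermined linear system is solved by L1 minimization, mirroring how CS QPT recovers a process matrix that is sparse in a suitable basis from a reduced set of measurements.

      # Sparse recovery from underdetermined measurements via L1 minimization.
      import numpy as np
      import cvxpy as cp

      rng = np.random.default_rng(3)
      n, m, k = 64, 24, 4                    # unknowns, measurements, sparsity
      x_true = np.zeros(n)
      x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

      A = rng.standard_normal((m, n)) / np.sqrt(m)
      b = A @ x_true                         # fewer equations than unknowns

      x = cp.Variable(n)
      cp.Problem(cp.Minimize(cp.norm1(x)), [A @ x == b]).solve()
      print("recovery error:", np.linalg.norm(x.value - x_true))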

  5. Utilization of group theory in studies of molecular clusters

    NASA Astrophysics Data System (ADS)

    Ocak, Mahir E.

    The structure of the molecular symmetry group of molecular clusters was analyzed, and it is shown that the molecular symmetry group of a molecular cluster can be written as direct products and semidirect products of its subgroups. Symmetry adaptation of basis functions in direct product groups and semidirect product groups was considered in general, and the sequential symmetry adaptation procedure already known for direct product groups was extended to the case of semidirect product groups. Using the sequential symmetry adaptation procedure, a new method for calculating the VRT spectra of molecular clusters, named the Monomer Basis Representation (MBR) method, was developed. In the MBR method, calculations start with a single monomer with the purpose of obtaining an optimized basis for that monomer as a linear combination of some primitive basis functions. Then, an optimized basis for each identical monomer is generated from the optimized basis of this monomer. Using the optimized bases of the monomers, a basis is generated for the solution of the full problem, and the VRT spectra of the cluster are obtained using this basis. Since an optimized basis is used for each monomer, which has a much smaller size than the primitive basis from which the optimized bases are generated, the MBR method leads to an exponential reduction in the size of the basis required for the calculations. Application of the MBR method has been illustrated by calculating the VRT spectra of the water dimer using the SAPT-5st potential surface of Groenenboom et al. The results of the calculations are in good agreement with both the original calculations of Groenenboom et al. and the experimental results. Comparing the size of the optimized basis with the size of the primitive basis, it can be said that the method works efficiently. Because of its efficiency, the MBR method can be used for studies of clusters bigger than dimers. Thus, the MBR method can be used for studying many-body terms and for deriving accurate potential surfaces.

  6. [Advances in studies on toxicity of aconite].

    PubMed

    Chen, Rong-Chang; Sun, Gui-Bo; Zhang, Qiang; Ye, Zu-Guang; Sun, Xiao-Bo

    2013-04-01

    Aconite has the efficacy of reviving yang for resuscitation and of dispelling cold and relieving pain; it is widely used in the clinic and shows unique efficacy in treating severe diseases. However, aconite has great toxicity, with obvious cardiotoxicity and neurotoxicity. Its toxicological mechanism mainly involves effects on voltage-dependent sodium channels, the release of neurotransmitters and changes in receptors, and the promotion of lipid peroxidation and cell apoptosis in the heart, liver and other tissues. Detoxification of aconite is achieved mainly through compatibility (herb combination) and processing. Besides traditional processing methods, many new modern processing techniques could also help achieve the objectives of detoxification and efficacy enhancement. In order to further develop the medicinal value of aconite and reduce its side effects in clinical application, this article gives comprehensive comments on aconite's toxicity characteristics, mechanisms and detoxification methods on the basis of relevant reports on aconite's toxicity and the authors' experimental studies.

  7. Identification of reduced-order thermal therapy models using thermal MR images: theory and validation.

    PubMed

    Niu, Ran; Skliar, Mikhail

    2012-07-01

    In this paper, we develop and validate a method to identify computationally efficient site- and patient-specific models of ultrasound thermal therapies from MR thermal images. The models of the specific absorption rate of the transduced energy and the temperature response of the therapy target are identified in the reduced basis of proper orthogonal decomposition of thermal images, acquired in response to a mild thermal test excitation. The method permits dynamic reidentification of the treatment models during the therapy by recursively utilizing newly acquired images. Such adaptation is particularly important during high-temperature therapies, which are known to substantially and rapidly change tissue properties and blood perfusion. The developed theory was validated for the case of focused ultrasound heating of a tissue phantom. The experimental and computational results indicate that the developed approach produces accurate low-dimensional treatment models despite temporal and spatial noises in MR images and slow image acquisition rate.

  8. Moving towards the goals of FP2020 - classifying contraceptives.

    PubMed

    Festin, Mario Philip R; Kiarie, James; Solo, Julie; Spieler, Jeffrey; Malarcher, Shawn; Van Look, Paul F A; Temmerman, Marleen

    2016-10-01

    With the renewed focus on family planning, a clear and transparent understanding is needed for the consistent classification of contraceptives, especially in the commonly used modern/traditional system. The World Health Organization Department of Reproductive Health and Research and the United States Agency for International Development (USAID) therefore convened a technical consultation in January 2015 to address issues related to classifying contraceptives. The consultation defined modern contraceptive methods as having a sound basis in reproductive biology, a precise protocol for correct use and evidence of efficacy under various conditions based on appropriately designed studies. Methods in country programs like Fertility Awareness Based Methods [such as the Standard Days Method (SDM) and the TwoDay Method], the Lactational Amenorrhea Method (LAM) and emergency contraception should be reported as modern. Herbs, charms and vaginal douching are not counted as contraceptive methods, as they have no scientific basis for preventing pregnancy and are not included in country programs. More research is needed on defining and measuring the use of emergency contraceptive methods, to reflect their contribution to reducing unmet need. The ideal contraceptive classification system should be simple, easy to use, clear and consistent, with greater parsimony. Measurement challenges remain but should not be the driving force in determining which methods are counted or reported as modern. Family planning programs should consider multiple attributes of contraceptive methods (e.g., level of effectiveness, need for program support, duration of labeled use, hormonal or nonhormonal) to ensure they provide a variety of methods to meet the needs of women and men.

  9. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models.

    PubMed

    Shah, A A; Xing, W W; Triantafyllidis, V

    2017-04-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach.

  10. Reduced-order modelling of parameter-dependent, linear and nonlinear dynamic partial differential equation models

    PubMed Central

    Xing, W. W.; Triantafyllidis, V.

    2017-01-01

    In this paper, we develop reduced-order models for dynamic, parameter-dependent, linear and nonlinear partial differential equations using proper orthogonal decomposition (POD). The main challenges are to accurately and efficiently approximate the POD bases for new parameter values and, in the case of nonlinear problems, to efficiently handle the nonlinear terms. We use a Bayesian nonlinear regression approach to learn the snapshots of the solutions and the nonlinearities for new parameter values. Computational efficiency is ensured by using manifold learning to perform the emulation in a low-dimensional space. The accuracy of the method is demonstrated on a linear and a nonlinear example, with comparisons with a global basis approach. PMID:28484327

  11. Size Reduction of Hamiltonian Matrix for Large-Scale Energy Band Calculations Using Plane Wave Bases

    NASA Astrophysics Data System (ADS)

    Morifuji, Masato

    2018-01-01

    We present a method of reducing the size of the Hamiltonian matrix used in calculations of electronic states. In electronic structure calculations using plane wave basis functions, a large number of plane waves are often required to obtain precise results. Even using state-of-the-art techniques, the Hamiltonian matrix often becomes very large. The large computational time and memory necessary for diagonalization limit the widespread use of band calculations. We show a procedure for deriving a reduced Hamiltonian constructed from a small number of low-energy bases by renormalizing the high-energy bases. We demonstrate numerically that a significant speedup in the evaluation of eigenstates is achieved without losing accuracy.
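
    Although the abstract does not give the exact construction, the renormalization of high-energy bases can be illustrated with Löwdin-style downfolding, in which the high-energy block is folded into an energy-dependent effective Hamiltonian on the low-energy block (a sketch under that assumption, with a random model Hamiltonian):

      # Downfold a Hamiltonian onto its low-energy block.
      import numpy as np

      rng = np.random.default_rng(4)
      n, n_low = 40, 8
      H = rng.standard_normal((n, n))
      H = (H + H.T) / 2 + np.diag(np.arange(n, dtype=float))  # high bases high in energy

      Hll, Hlh, Hhh = H[:n_low, :n_low], H[:n_low, n_low:], H[n_low:, n_low:]

      E = np.linalg.eigvalsh(H)[0]           # exact lowest eigenvalue as reference
      # Heff(E) = Hll + Hlh (E - Hhh)^(-1) Hhl reproduces E in the small block.
      Heff = Hll + Hlh @ np.linalg.solve(E * np.eye(n - n_low) - Hhh, Hlh.T)

      eigs = np.linalg.eigvalsh(Heff)
      print("exact:", E, " downfolded:", eigs[np.argmin(np.abs(eigs - E))])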

  12. Construction of energy-stable Galerkin reduced order models.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalashnikova, Irina; Barone, Matthew Franklin; Arunajatesan, Srinivasan

    2013-05-01

    This report aims to unify several approaches for building stable projection-based reduced order models (ROMs). Attention is focused on linear time-invariant (LTI) systems. The model reduction procedure consists of two steps: the computation of a reduced basis, and the projection of the governing partial differential equations (PDEs) onto this reduced basis. Two kinds of reduced bases are considered: the proper orthogonal decomposition (POD) basis and the balanced truncation basis. The projection step of the model reduction can be done in two ways: via continuous projection or via discrete projection. First, an approach for building energy-stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of PDEs using continuous projection is proposed. The idea is to apply to the set of PDEs a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The resulting ROM will be energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special weighted L2 inner product, termed the "symmetry inner product". Attention is then turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, a weighted L2 inner product termed the "Lyapunov inner product", is derived. The weighting matrix that defines the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. It is shown that a ROM constructed via discrete projection using the Lyapunov inner product will be energy-stable for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons are also made between the symmetry inner product and the Lyapunov inner product. The performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
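
    The central stability claim for discrete projection can be checked numerically. This sketch (ours, with a random stable system, not the report's code) builds the Lyapunov inner product from A'P + PA = -Q and verifies that the Galerkin ROM is stable for an arbitrary reduced basis:

      # Galerkin projection in the Lyapunov inner product preserves stability.
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov

      rng = np.random.default_rng(5)
      n, r = 30, 5
      A = rng.standard_normal((n, n)) - 2.0 * np.sqrt(n) * np.eye(n)
      assert np.max(np.linalg.eigvals(A).real) < 0          # stable LTI system

      Q = np.eye(n)
      P = solve_continuous_lyapunov(A.T, -Q)                # A'P + PA = -Q, P > 0

      Phi = np.linalg.qr(rng.standard_normal((n, r)))[0]    # arbitrary basis

      # ROM in the P-weighted inner product: (Phi'P Phi) Ar = Phi'P A Phi.
      Ar = np.linalg.solve(Phi.T @ P @ Phi, Phi.T @ P @ A @ Phi)
      print("max Re(eig) of ROM:", np.max(np.linalg.eigvals(Ar).real))  # < 0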

  13. The Laplace transformed divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation (DEC-LT-RIMP2) theory method

    NASA Astrophysics Data System (ADS)

    Kjærgaard, Thomas

    2017-01-01

    The divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation (DEC-RI-MP2) theory method introduced in Baudin et al. [J. Chem. Phys. 144, 054102 (2016)] is significantly improved by introducing the Laplace transform of the orbital energy denominator in order to construct the doubles amplitudes directly in the local basis. Furthermore, this paper introduces the auxiliary reduction procedure, which reduces the set of auxiliary functions employed in the individual fragments. The resulting Laplace transformed divide-expand-consolidate resolution of the identity second-order Møller-Plesset perturbation method is applied to the insulin molecule, where we obtain a factor of 9.5 speedup compared to the DEC-RI-MP2 method.

  14. An RBF-based compression method for image-based relighting.

    PubMed

    Leung, Chi-Sing; Wong, Tien-Tsin; Lam, Ping-Man; Choy, Kwok-Hung

    2006-04-01

    In image-based relighting, a pixel is associated with a number of sampled radiance values. This paper presents a two-level compression method. In the first level, the plenoptic property of a pixel is approximated by a spherical radial basis function (SRBF) network. That means that the spherical plenoptic function of each pixel is represented by a number of SRBF weights. In the second level, we apply a wavelet-based method to compress these SRBF weights. To reduce the visual artifact due to quantization noise, we develop a constrained method for estimating the SRBF weights. Our proposed approach is superior to JPEG, JPEG2000, and MPEG. Compared with the spherical harmonics approach, our approach has a lower complexity, while the visual quality is comparable. The real-time rendering method for our SRBF representation is also discussed.

  15. Optimization of selected molecular orbitals in group basis sets.

    PubMed

    Ferenczy, György G; Adams, William H

    2009-04-07

    We derive a local basis equation which may be used to determine the orbitals of a group of electrons in a system when the orbitals of that group are represented by a group basis set, i.e., not the basis set one would normally use but a subset suited to a specific electronic group. The group orbitals determined by the local basis equation minimize the energy of a system when a group basis set is used and the orbitals of other groups are frozen. In contrast, under the constraint of a group basis set, the group orbitals satisfying the Huzinaga equation do not minimize the energy. In a test of the local basis equation on HCl, the group basis set included only 12 of the 21 functions in a basis set one might ordinarily use, but the calculated active orbital energies were within 0.001 hartree of the values obtained by solving the Hartree-Fock-Roothaan (HFR) equation using all 21 basis functions. The total energy found was just 0.003 hartree higher than the HFR value. The errors with the group basis set approximation to the Huzinaga equation were larger by over two orders of magnitude. Similar results were obtained for PCl3 with the group basis approximation. Retaining more basis functions allows an even higher accuracy, as shown by the perfect reproduction of the HFR energy of HCl with 16 out of 21 basis functions in the valence basis set. When the core basis set was also truncated, no additional error was introduced in the calculations performed for HCl with various basis sets. The same calculations with fixed core orbitals taken from isolated heavy atoms added a small error of about 10^-4 hartree. This offers a practical way to calculate wave functions with predetermined fixed core and reduced valence basis orbitals at reduced computational cost. The local basis equation can also be used to combine the above approximations with the assignment of local basis sets to groups of localized valence molecular orbitals and to derive a priori localized orbitals. An appropriately chosen localization and basis set assignment allowed a reproduction of the energy of n-hexane with an error of 10^-5 hartree, while the energy difference between its two conformers was reproduced with similar accuracy for several combinations of localizations and basis set assignments. These calculations include localized orbitals extending over 4-5 heavy atoms and thus require solving secular equations of reduced dimension. The dimensions are not expected to increase with system size, and thus the local basis equation may find use in linear scaling electronic structure calculations.

  16. Reduced Order Model Basis Vector Generation: Generates Basis Vectors for ROMs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Arrighi, Bill

    2016-03-03

    libROM is a library that implements order reduction via singular value decomposition (SVD) of sampled state vectors. It implements 2 parallel, incremental SVD algorithms and one serial, non-incremental algorithm. It also provides a mechanism for adaptive sampling of basis vectors.
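
    One serial update step of an incremental SVD of the kind libROM implements might look as follows (a textbook-style sketch; libROM's parallel algorithms differ in detail). Each new state vector updates the basis and singular values without recomputing the SVD of all samples.

      # One incremental SVD step: update thin factors (U, s) with new column v.
      import numpy as np

      def incremental_svd_step(U, s, v):
          p = U.T @ v                        # component inside the current basis
          resid = v - U @ p
          rho = np.linalg.norm(resid)
          j = resid / rho if rho > 1e-12 else np.zeros_like(v)  # truncate in practice
          # SVD of a small core matrix rotates the enlarged basis.
          K = np.block([[np.diag(s), p[:, None]],
                        [np.zeros((1, len(s))), np.array([[rho]])]])
          Uk, sk, _ = np.linalg.svd(K)
          return np.hstack([U, j[:, None]]) @ Uk, sk

      rng = np.random.default_rng(6)
      U, s, _ = np.linalg.svd(rng.standard_normal((50, 1)), full_matrices=False)
      for _ in range(9):
          U, s = incremental_svd_step(U, s, rng.standard_normal(50))
      print(U.shape, np.round(s, 3))         # basis vectors and singular values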

  17. Particle damping applied research on mining dump truck vibration control

    NASA Astrophysics Data System (ADS)

    Song, Liming; Xiao, Wangqiang; Guo, Haiquan; Yang, Zhe; Li, Zeguang

    2018-05-01

    Vehicle vibration characteristics have become an important evaluation index for mining dump trucks. In this paper, mining dump truck vibration control based on particle damping technology was studied by combining theoretical simulation with actual testing, and particle damping technology was successfully applied to vibration control of the mining dump truck cab. Analysis of the test results shows that, with a particle damper, cab vibration was reduced noticeably; this work provides methods and a basis for vehicle vibration control research and for the application of particle damping technology.

  18. Behavioural cues of reproductive status in seahorses Hippocampus abdominalis.

    PubMed

    Whittington, C M; Musolf, K; Sommer, S; Wilson, A B

    2013-07-01

    A method is described to assess the reproductive status of male Hippocampus abdominalis on the basis of behavioural traits. The non-invasive nature of this technique minimizes handling stress and reduces sampling requirements for experimental work. It represents a useful tool to assist researchers in sample collection for studies of reproduction and development in viviparous syngnathids, which are emerging as important model species.

  19. The consumer revolution arrives. Using smart customer service to attract, educate, & retain satisfied members & lower costs.

    PubMed

    O'Connor, K

    1994-06-01

    Across the country, managed care organizations pursue ways to enhance customer service and maintain member satisfaction, without breaking the bank by authorizing unnecessary services. One method gaining popularity is reducing customer demand for inappropriate services through education. Approaches include welcome-to-the-plan calls, member education, automated and in-person answer lines, and 24-hour telephone coverage. Several firms have recognized the need for such services, and offer them to HMOs on an outsourcing basis, with generally positive results.

  20. Management of high-risk perioperative systems.

    PubMed

    Dain, Steven

    2006-06-01

    The perioperative system is a complex system that requires people, materials, and processes to come together in a highly ordered and timely manner. However, when working in this high-risk system, even well-organized, knowledgeable, vigilant, and well-intentioned individuals will eventually make errors. All systems need to be evaluated on a continual basis to reduce the risk of errors, make errors more easily recognizable, and provide methods for error mitigation. A simple approach to risk management that may be applied in clinical medicine is discussed.

  1. Application of spatial methods to identify areas with lime requirement in eastern Croatia

    NASA Astrophysics Data System (ADS)

    Bogunović, Igor; Kisic, Ivica; Mesic, Milan; Zgorelec, Zeljka; Percin, Aleksandra; Pereira, Paulo

    2016-04-01

    With more than 50% of agricultural land in Croatia on acid soils, soil acidity is recognized as a big problem. Low soil pH leads to a series of negative phenomena in plant production, and therefore liming, recommended on the basis of soil analysis, is a compulsory measure for the reclamation of acid soils. The need for liming is often erroneously determined only on the basis of soil pH, because the determination of cation exchange capacity, hydrolytic acidity and base saturation is a major cost to producers. Therefore, in Croatia, as in some other countries, the amount of liming material needed to ameliorate acid soils is calculated from their hydrolytic acidity. The purpose of this study was to test several interpolation methods to identify the best spatial predictor of hydrolytic acidity, and to determine the possibility of using multivariate geostatistics to reduce the number of samples needed to determine hydrolytic acidity, all without significantly reducing the accuracy of the spatial distribution of the liming requirement. Soil pH (in KCl) and hydrolytic acidity (Y1) were determined in 1004 samples (0-30 cm) randomly collected in agricultural fields near Orahovica in eastern Croatia. The study tested 14 univariate interpolation models (part of the ArcGIS software package) in order to provide the most accurate spatial map of hydrolytic acidity on the basis of all samples (Y1 100%) and of datasets with 15% (Y1 85%), 30% (Y1 70%) and 50% (Y1 50%) fewer samples. In parallel with the univariate interpolation methods, the precision of the spatial distribution of Y1 was tested by the co-kriging method with exchangeable acidity (pH in KCl) as a covariate. The soils in the studied area had an average pH (KCl) of 4.81 and an average Y1 of 10.52 cmol+ kg-1. These data suggest that liming is a necessary agrotechnical measure for soil conditioning. The results show that ordinary kriging was the most accurate univariate interpolation method, with the smallest error (RMSE) in all four data sets, while the least precise were the radial basis functions (thin plate spline and inverse multiquadratic). Furthermore, a trend of increasing RMSE with a reduced number of samples is noticeable for the most accurate univariate interpolation model: 3.096 (Y1 100%), 3.258 (Y1 85%), 3.317 (Y1 70%) and 3.546 (Y1 50%). The best-fit semivariograms show a strong spatial dependence for Y1 100% (Nugget/Sill 20.19%) and Y1 85% (Nugget/Sill 23.83%), while a further reduction in the number of samples resulted in moderate spatial dependence (Nugget/Sill 35.85% for Y1 70% and 32.01% for Y1 50%). The co-kriging method reduced the RMSE compared with the univariate interpolation methods for each reduced data set: 2.054, 1.731 and 1.734 for Y1 85%, Y1 70% and Y1 50%, respectively. These results show the possibility of reducing sampling costs by using the co-kriging method, which is useful from a practical viewpoint. Halving the number of samples used to determine hydrolytic acidity, in interaction with soil pH, provides higher precision for variable liming than univariate interpolation of the entire data set. These findings offer new opportunities to reduce costs in practical plant production in Croatia.
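
    For readers unfamiliar with the best-performing interpolator above, a minimal ordinary kriging predictor with an assumed exponential variogram is sketched below (locations, values and variogram parameters are synthetic; the study itself used the ArcGIS implementations, and co-kriging adds a covariate to this same system).

      # Ordinary kriging at one location with an exponential variogram model.
      import numpy as np

      def variogram(h, nugget=1.0, sill=4.0, vrange=300.0):
          return nugget + (sill - nugget) * (1.0 - np.exp(-3.0 * h / vrange))

      rng = np.random.default_rng(7)
      pts = rng.uniform(0, 1000, (40, 2))        # sampled locations (m)
      z = 10.5 + rng.standard_normal(40)         # hydrolytic acidity stand-ins

      def krige(x0):
          d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
          G = variogram(d)
          np.fill_diagonal(G, 0.0)               # gamma(0) = 0 by definition
          # Ordinary kriging system with a Lagrange multiplier for unbiasedness.
          A = np.block([[G, np.ones((40, 1))],
                        [np.ones((1, 40)), np.zeros((1, 1))]])
          g0 = variogram(np.linalg.norm(pts - x0, axis=1))
          w = np.linalg.solve(A, np.append(g0, 1.0))
          return w[:40] @ z

      print(krige(np.array([500.0, 500.0])))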

  2. 26 CFR 1.1502-31 - Stock basis after a group structure change.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... S's stock. If T's net asset basis is a negative amount, it reduces P's basis in S's stock and, if... § 1.1502-19 for rules treating P's excess loss account as negative basis, and treating a reference to...(a)(2)(D), and S provides an appreciated asset (e.g., stock of P) as partial consideration in the...

  3. A TV-constrained decomposition method for spectral CT

    NASA Astrophysics Data System (ADS)

    Guo, Xiaoyue; Zhang, Li; Xing, Yuxiang

    2017-03-01

    Spectral CT is attracting more and more attention in medicine, industrial nondestructive testing and security inspection. Material decomposition is an important issue in spectral CT for discriminating materials. Because of the spectral overlap of energy channels, as well as the correlation of basis functions, it is well acknowledged that the decomposition step in spectral CT imaging causes noise amplification and artifacts in the component coefficient images. In this work, we propose material decomposition via an optimization method to improve the quality of the decomposed coefficient images. On top of the general optimization problem, total variation minimization is imposed as a constraint on the coefficient images in our overall objective function, with adjustable weights. We solve this constrained optimization problem within the ADMM framework. Validation was carried out on both a numerical dental phantom in simulation and a real phantom of a pig leg on a practical CT system using dual-energy imaging. Both the numerical and the physical experiments give visibly better reconstructions than a general direct inversion method. SNR and SSIM are adopted to quantitatively evaluate the image quality of the decomposed component coefficients. All results demonstrate that the TV-constrained decomposition method performs well in reducing noise without losing spatial resolution, thereby improving image quality. The method can easily be incorporated into different types of spectral imaging modalities, as well as cases with more than two energy channels.
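
    A stripped-down one-dimensional analogue of the TV-constrained step, solved with ADMM as in the framework above (the spectral decomposition model itself is not reproduced; the signal and weights are illustrative):

      # 1-D total-variation denoising via ADMM.
      import numpy as np

      def tv_denoise_admm(y, lam=1.0, rho=1.0, iters=200):
          n = len(y)
          D = np.diff(np.eye(n), axis=0)       # finite-difference operator
          x, z, u = y.copy(), np.zeros(n - 1), np.zeros(n - 1)
          lhs = np.eye(n) + rho * D.T @ D      # constant x-update system
          for _ in range(iters):
              x = np.linalg.solve(lhs, y + rho * D.T @ (z - u))
              Dx = D @ x
              z = np.sign(Dx + u) * np.maximum(np.abs(Dx + u) - lam / rho, 0.0)
              u += Dx - z                      # dual update
          return x

      rng = np.random.default_rng(8)
      truth = np.repeat([0.0, 2.0, 1.0], 50)   # piecewise-constant signal
      noisy = truth + 0.3 * rng.standard_normal(len(truth))
      print(np.abs(tv_denoise_admm(noisy) - truth).mean())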

  4. Statistical inference for noisy nonlinear ecological dynamic systems.

    PubMed

    Wood, Simon N

    2010-08-26

    Chaotic ecological dynamic systems defy conventional statistical analysis. Systems with near-chaotic dynamics are little better. Such systems are almost invariably driven by endogenous dynamic processes plus demographic and environmental process noise, and are only observable with error. Their sensitivity to history means that minute changes in the driving noise realization, or the system parameters, will cause drastic changes in the system trajectory. This sensitivity is inherited and amplified by the joint probability density of the observable data and the process noise, rendering it useless as the basis for obtaining measures of statistical fit. Because the joint density is the basis for the fit measures used by all conventional statistical methods, this is a major theoretical shortcoming. The inability to make well-founded statistical inferences about biological dynamic models in the chaotic and near-chaotic regimes, other than on an ad hoc basis, leaves dynamic theory without the methods of quantitative validation that are essential tools in the rest of biological science. Here I show that this impasse can be resolved in a simple and general manner, using a method that requires only the ability to simulate the observed data on a system from the dynamic model about which inferences are required. The raw data series are reduced to phase-insensitive summary statistics, quantifying local dynamic structure and the distribution of observations. Simulation is used to obtain the mean and the covariance matrix of the statistics, given model parameters, allowing the construction of a 'synthetic likelihood' that assesses model fit. This likelihood can be explored using a straightforward Markov chain Monte Carlo sampler, but one further post-processing step returns pure likelihood-based inference. I apply the method to establish the dynamic nature of the fluctuations in Nicholson's classic blowfly experiments.
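
    The synthetic-likelihood recipe condenses to a few lines: simulate, reduce each run to summary statistics, and score the observed statistics under a Gaussian fitted to the simulated ones. The sketch below uses a Ricker-type map as a stand-in for the blowfly dynamics, with illustrative summaries.

      # Synthetic likelihood for a noisy nonlinear map.
      import numpy as np

      def simulate(r, n=100, sigma=0.3, rng=None):
          if rng is None:
              rng = np.random.default_rng()
          x = np.empty(n)
          x[0] = 1.0
          for t in range(n - 1):   # Ricker map with process noise
              x[t + 1] = x[t] * np.exp(r * (1 - x[t]) + sigma * rng.standard_normal())
          return x

      def stats(x):
          # Phase-insensitive summaries: moments plus lag-1/2 autocovariances.
          xc = x - x.mean()
          return np.array([x.mean(), x.std(),
                           (xc[:-1] * xc[1:]).mean(), (xc[:-2] * xc[2:]).mean()])

      def synthetic_loglik(r, s_obs, n_sim=200):
          S = np.array([stats(simulate(r)) for _ in range(n_sim)])
          mu, cov = S.mean(0), np.cov(S.T) + 1e-9 * np.eye(len(s_obs))
          d = s_obs - mu
          _, logdet = np.linalg.slogdet(cov)
          return -0.5 * (d @ np.linalg.solve(cov, d) + logdet)

      s_obs = stats(simulate(2.0, rng=np.random.default_rng(9)))
      for r in (1.5, 2.0, 2.5):
          print(r, round(synthetic_loglik(r, s_obs), 2))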

  5. Phenology Information Contributes to Reduce Temporal Basis Risk in Agricultural Weather Index Insurance.

    PubMed

    Dalhaus, Tobias; Musshoff, Oliver; Finger, Robert

    2018-01-08

    Weather risks are an essential and increasingly important driver of agricultural income volatility. Agricultural insurances help farmers cope with these risks. Among these insurances, weather index insurances (WII) are an innovative tool for coping with climatic risks in agriculture. Under WII, farmers are indemnified not on the basis of actual yield reductions but on the basis of a measured weather index, such as rainfall at a nearby weather station. The discrepancy between experienced losses and actual indemnification, known as basis risk, is a key challenge. In particular, the WII specifications used so far do not capture critical plant growth phases adequately. Here, we contribute to reducing basis risk by proposing novel procedures for taking into account the occurrence dates and the shifts of growth phases over time and space, and we test their risk-reducing potential. Our empirical example addresses drought risk in the critical growth phase around the anthesis stage in winter wheat production in Germany. We find that spatially explicit, public and open databases of phenology reports contribute to reducing basis risk and thus improve the attractiveness of WII. In contrast, we find that growth stage modelling based on growing degree days (thermal time) does not result in significant improvements.
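
    The thermal-time alternative mentioned above reduces to a cumulative sum over daily temperatures. A hedged sketch, with an illustrative base temperature and anthesis threshold rather than the paper's calibration:

      # Predict the anthesis window from growing degree days (thermal time).
      import numpy as np

      rng = np.random.default_rng(10)
      days = 200
      t_mean = (10.0 + 12.0 * np.sin(np.linspace(0, np.pi, days))
                + rng.standard_normal(days))   # daily mean temperature, deg C

      T_BASE = 0.0                             # assumed base temperature for wheat
      gdd = np.cumsum(np.maximum(t_mean - T_BASE, 0.0))

      ANTHESIS_GDD = 1300.0                    # assumed thermal-time threshold
      anthesis_day = int(np.searchsorted(gdd, ANTHESIS_GDD))
      print("predicted anthesis on day", anthesis_day)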

  6. A fast simulation method for radiation maps using interpolation in a virtual environment.

    PubMed

    Li, Meng-Kun; Liu, Yong-Kuo; Peng, Min-Jun; Xie, Chun-Li; Yang, Li-Qun

    2018-05-10

    In nuclear decommissioning, virtual simulation technology is a useful tool for achieving an effective work process by using virtual environments to represent the physical and logical scheme of a real decommissioning project. This technology is cost- and time-saving, with the capacity to develop various decommissioning scenarios and reduce the risk of retrofitting. The method utilises a radiation map in a virtual simulation as the basis for assessing the exposure of a virtual human. In this paper, we propose a fast simulation method using a known radiation source. The method has a unique advantage over point kernel and Monte Carlo methods because it generates the radiation map using interpolation in a virtual environment. The simulation of the radiation map, including the calculation and the visualisation, was realised using UNITY and MATLAB. The feasibility of the proposed method was tested on a hypothetical case, and the results obtained are discussed in this paper.

  7. Electron correlation by polarization of interacting densities

    NASA Astrophysics Data System (ADS)

    Whitten, Jerry L.

    2017-02-01

    Coulomb interactions that occur in electronic structure calculations are correlated by allowing basis function components of the interacting densities to polarize dynamically, thereby reducing the magnitude of the interaction. Exchange integrals of molecular orbitals are not correlated. The modified Coulomb interactions are used in single-determinant or configuration interaction calculations. The objective is to account for dynamical correlation effects without explicitly introducing higher spherical harmonic functions into the molecular orbital basis. Molecular orbital densities are decomposed into a distribution of spherical components that conserve the charge and each of the interacting components is considered as a two-electron wavefunction embedded in the system acted on by an average field Hamiltonian plus r12-1. A method of avoiding redundancy is described. Applications to atoms, negative ions, and molecules representing different types of bonding and spin states are discussed.

  8. Anticipatory control of xenon in a pressurized water reactor

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Impink, A.J. Jr.

    1987-02-10

    A method is described for automatically dampening xenon-135 spatial transients in the core of a pressurized water reactor having control rods which regulate reactor power level, comprising the steps of: measuring the neutron flux in the reactor core at a plurality of axially spaced locations on a real-time, on-line basis; repetitively generating from the neutron flux measurements, on a point-by-point basis, signals representative of the current axial distribution of xenon-135, and signals representative of the current rate of change of the axial distribution of xenon-135; generating from the xenon-135 distribution signals and the rate of change of xenon distribution signals, control signals for reducing the xenon transients; and positioning the control rods as a function of the control signals to dampen the xenon-135 spatial transients.

  9. A rapid electrochemical monitoring platform for sensitive determination of thiamethoxam based on β-cyclodextrin-graphene composite.

    PubMed

    Zhai, XingChen; Zhang, Hua; Zhang, Min; Yang, Xin; Gu, Cheng; Zhou, GuoPeng; Zhao, HaiTian; Wang, ZhenYu; Dong, AiJun; Wang, Jing

    2017-08-01

    A rapid monitoring platform for sensitive voltammetric detection of thiamethoxam residues is reported in the present study. A β-cyclodextrin-reduced graphene oxide composite was used as a reinforcing material in the electrochemical determination of thiamethoxam. Compared with bare glassy carbon electrodes, the reduction peak currents of thiamethoxam at the reduced graphene oxide/glassy carbon electrode and the β-cyclodextrin-reduced graphene oxide/glassy carbon electrode were increased by 70- and 124-fold, respectively. The experimental conditions influencing the voltammetric determination of thiamethoxam, such as the amount of β-cyclodextrin-reduced graphene oxide, solution pH, temperature, and accumulation time, were optimized. The reduction mechanism and binding affinity of this material are also discussed. Under optimal conditions, the reduction peak currents increased linearly between 0.5 µM and 16 µM thiamethoxam. The limit of detection was 0.27 µM on the basis of a signal-to-noise ratio of 3. When the proposed method was applied to brown rice in a recovery test, the recoveries were between 92.20% and 113.75%. The results were in good concordance with the high-performance liquid chromatography method. The proposed method therefore provides a promising and effective platform for sensitive and rapid determination of thiamethoxam. Environ Toxicol Chem 2017;36:1991-1997.

  10. Reduced kernel recursive least squares algorithm for aero-engine degradation prediction

    NASA Astrophysics Data System (ADS)

    Zhou, Haowen; Huang, Jinquan; Lu, Feng

    2017-10-01

    Kernel adaptive filters (KAFs) generate a linear growing radial basis function (RBF) network with the number of training samples, thereby lacking sparseness. To deal with this drawback, traditional sparsification techniques select a subset of original training data based on a certain criterion to train the network and discard the redundant data directly. Although these methods curb the growth of the network effectively, it should be noted that information conveyed by these redundant samples is omitted, which may lead to accuracy degradation. In this paper, we present a novel online sparsification method which requires much less training time without sacrificing the accuracy performance. Specifically, a reduced kernel recursive least squares (RKRLS) algorithm is developed based on the reduced technique and the linear independency. Unlike conventional methods, our novel methodology employs these redundant data to update the coefficients of the existing network. Due to the effective utilization of the redundant data, the novel algorithm achieves a better accuracy performance, although the network size is significantly reduced. Experiments on time series prediction and online regression demonstrate that RKRLS algorithm requires much less computational consumption and maintains the satisfactory accuracy performance. Finally, we propose an enhanced multi-sensor prognostic model based on RKRLS and Hidden Markov Model (HMM) for remaining useful life (RUL) estimation. A case study in a turbofan degradation dataset is performed to evaluate the performance of the novel prognostic approach.

  11. A Cubic Radial Basis Function in the MLPG Method for Beam Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.

    2002-01-01

    A non-compactly supported cubic radial basis function implementation of the MLPG method for beam problems is presented. The evaluation of the derivatives of the shape functions obtained from the radial basis function interpolation is much simpler than the evaluation of the moving least squares shape function derivatives. The radial basis MLPG yields results as accurate or better than those obtained by the conventional MLPG method for problems with discontinuous and other complex loading conditions.

  12. Orientation of N-benzoyl glycine on silver nanoparticles: SERS and DFT studies

    NASA Astrophysics Data System (ADS)

    Parameswari, A.; Asath, R. Mohamed; Premkumar, R.; Benial, A. Milton Franklin

    2017-05-01

    Surface enhanced Raman scattering (SERS) of N-benzoyl glycine (NBG) adsorbed on silver nanoparticles (AgNPs) was studied by experimental and density functional theory (DFT) approaches. Single crystals of NBG were prepared using the slow evaporation method. The AgNPs were prepared and characterized. The DFT/B3PW91 method with the LanL2DZ basis set was used to optimize the molecular structure of NBG and of NBG adsorbed on a silver cluster. The calculated and observed vibrational frequencies were assigned on the basis of potential energy distribution calculations. A reduced band-gap value was obtained for NBG adsorbed on silver nanoparticles from the frontier molecular orbital analysis. Natural bond orbital analysis was carried out to inspect the intramolecular stabilization interactions, which are responsible for the bioactivity and the nonlinear optical property of the molecule. The spectral analysis also evidenced that NBG adsorbs in a tilted orientation on the silver surface through binding sites such as the lone-pair electrons of the N atom in the amine group and the phenyl ring π system.

  13. Photoinduced dynamics to photoluminescence in Ln3+ (Ln = Ce, Pr) doped β-NaYF4 nanocrystals computed in basis of non-collinear spin DFT with spin-orbit coupling

    NASA Astrophysics Data System (ADS)

    Han, Yulun; Vogel, Dayton J.; Inerbaev, Talgat M.; May, P. Stanley; Berry, Mary T.; Kilin, Dmitri S.

    2018-03-01

    In this work, non-collinear spin DFT + U approaches with spin-orbit coupling (SOC) are applied to Ln3+ doped β-NaYF4 (Ln = Ce, Pr) nanocrystals in the Vienna ab initio Simulation Package, taking into account unpaired spin configurations using the Perdew-Burke-Ernzerhof functional in a plane wave basis set. The absorption spectra calculated with the non-collinear spin DFT + U approaches are compared with those from spin-polarised DFT + U approaches. The spectral difference indicates the importance of spin-flip transitions of Ln3+ ions. A suite of codes for nonadiabatic dynamics has been developed for 2-component spinor orbitals. On-the-fly nonadiabatic coupling calculations provide transition probabilities facilitated by nuclear motion. Relaxation rates of electrons and holes are calculated using Redfield theory in the reduced density matrix formalism cast in the basis of non-collinear spin DFT + U with SOC. The emission spectra are calculated using the time-integrated method along the excited-state trajectories based on nonadiabatic couplings.

  14. Nanocarbon coating on the basis of partially reduced graphene oxide

    NASA Astrophysics Data System (ADS)

    Bocharov, G. S.; Budaev, V. P.; Eletskii, A. V.; Fedorovich, S. D.

    2017-11-01

    An approach has been developed for producing graphene by the thermal reduction of graphene oxide (GO). GO was synthesized by a modified Hummers method using sodium nitrate and concentrated sulfuric acid. A paper-like material 40-60 μm in thickness and 1.2 g/cm3 in density was formed on a filter after deposition from dispersion. The material was cut into samples of about 15×25 mm2 in size, which were subjected to thermal treatment at various temperatures between 100 and 800 °C. This resulted in a set of GO samples reduced to various degrees. The degree of reduction was determined on the basis of conductivity measurements. In addition, the evolution of the sample density was studied as the annealing temperature was raised. Analysis of the X-ray photoelectron spectra of partially reduced GO permitted determination of how the chemical composition of the material changes in the course of the thermal treatment. Analysis of the Raman spectra of the GO samples indicates a rather high degree of disorder in the material. The possibility of using the produced material as a nanocarbon coating in experiments on the interaction of highly intense liquid flows with a wall surface is discussed.

  15. Assessing consumer responses to potential reduced-exposure tobacco products: a review of tobacco industry and independent research methods.

    PubMed

    Rees, Vaughan W; Kreslake, Jennifer M; Cummings, K Michael; O'Connor, Richard J; Hatsukami, Dorothy K; Parascandola, Mark; Shields, Peter G; Connolly, Gregory N

    2009-12-01

    Internal tobacco industry documents and the mainstream literature are reviewed to identify methods and measures for evaluating tobacco consumer response. The review aims to outline areas in which established methods exist, identify gaps in current methods for assessing consumer response, and consider how these methods might be applied to evaluate potentially reduced-exposure tobacco products and new products. The internal industry research reviewed included published articles, manuscript drafts, presentations, protocols, and instruments relating to consumer response measures. Peer-reviewed research was identified using PubMed and Scopus. Industry research on consumer response focuses on product development and marketing. To develop and refine new products, the tobacco industry has developed notable strategies for assessing consumers' sensory and subjective responses to product design characteristics. Independent research is often conducted to gauge the likelihood of future product adoption by measuring consumers' risk perceptions, responses to product, and product acceptability. A model that conceptualizes consumer response as comprising the separate, but interacting, domains of product perceptions and response to product is outlined. Industry and independent research supports the dual domain model and provides a wide range of methods for assessing the construct components of consumer response. Further research is needed to validate consumer response constructs, determine the relationship between consumer response and tobacco user behavior, and improve the reliability of consumer response measures. Scientifically rigorous consumer response assessment methods will provide a needed empirical basis for future regulation of potentially reduced-exposure tobacco products and new products, counteract tobacco industry influence on consumers, and enhance public health.

  16. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
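
    As a rough illustration of reduction option (3), the sketch below projects a 1-D native-resolution state onto a handful of Gaussian basis elements; the centers and widths are fixed by hand here, whereas in the paper they come from a Gaussian mixture model fit, so this is an analogy rather than the authors' algorithm.

```python
import numpy as np

# Sketch: native-resolution state elements projected onto a small set of
# Gaussian basis functions; centers/widths are illustrative assumptions.
x = np.linspace(0.0, 1.0, 200)                    # native-resolution grid
centers = np.linspace(0.0, 1.0, 15)               # 15 reduced elements
W = np.exp(-(x[:, None] - centers[None, :])**2 / (2 * 0.04**2))
W /= W.sum(axis=1, keepdims=True)                 # rows sum to one

s = np.sin(2 * np.pi * x) + 2.0 * (np.abs(x - 0.3) < 0.02)  # state + local feature
s_red = np.linalg.lstsq(W, s, rcond=None)[0]      # project to reduced space
s_back = W @ s_red                                # map back to native grid

# the reconstruction gap is a proxy for the aggregation error at this dimension
print(np.linalg.norm(s_back - s) / np.linalg.norm(s))
```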

  17. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  18. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  19. Reduced-cost second-order algebraic-diagrammatic construction method for excitation energies and transition moments

    NASA Astrophysics Data System (ADS)

    Mester, Dávid; Nagy, Péter R.; Kállay, Mihály

    2018-03-01

    A reduced-cost implementation of the second-order algebraic-diagrammatic construction [ADC(2)] method is presented. We introduce approximations by restricting virtual natural orbitals and natural auxiliary functions, which results, on average, in more than an order of magnitude speedup compared to conventional, density-fitting ADC(2) algorithms. The present scheme is the successor of our previous approach [D. Mester, P. R. Nagy, and M. Kállay, J. Chem. Phys. 146, 194102 (2017)], which has been successfully applied to obtain singlet excitation energies with the linear-response second-order coupled-cluster singles and doubles model. Here we report further methodological improvements and the extension of the method to compute singlet and triplet ADC(2) excitation energies and transition moments. The various approximations are carefully benchmarked, and conservative truncation thresholds are selected which guarantee errors much smaller than the intrinsic error of the ADC(2) method. Using the canonical values as reference, we find that the mean absolute error for both singlet and triplet ADC(2) excitation energies is 0.02 eV, while that for oscillator strengths is 0.001 a.u. The rigorous cutoff parameters together with the significantly reduced operation count and storage requirements allow us to obtain accurate ADC(2) excitation energies and transition properties using triple-ζ basis sets for systems of up to one hundred atoms.
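
    The virtual natural-orbital restriction mentioned above can be sketched generically: diagonalize an approximate virtual-virtual density and keep only orbitals with non-negligible occupation. The random low-rank density and the 1e-5 cutoff below are stand-ins, not the calibrated thresholds benchmarked in the paper.

```python
import numpy as np

# Schematic of virtual natural-orbital truncation; the random PSD matrix
# stands in for an MP2-like virtual-virtual density.
rng = np.random.default_rng(0)
B = rng.standard_normal((300, 40))
D_vv = B @ B.T / 1e4                      # symmetric PSD stand-in, low rank

occ, U = np.linalg.eigh(D_vv)             # natural occupations and orbitals
keep = occ > 1e-5                         # illustrative truncation threshold
U_red = U[:, keep]                        # reduced virtual space
print(f"kept {keep.sum()} of {len(occ)} virtual orbitals")
```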

  20. Nonlinear Reduced Order Random Response Analysis of Structures with Shallow Curvature

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.

    2006-01-01

    The goal of this investigation is to further develop nonlinear modal numerical simulation methods for application to geometrically nonlinear response of structures with shallow curvature under random loadings. For reduced order analysis, the modal basis selection must be capable of reflecting the coupling in both the linear and nonlinear stiffness. For the symmetric shallow arch under consideration, four categories of modal basis functions are defined. Those having symmetric transverse displacements (ST modes) can be designated as transverse dominated (ST-T) modes and in-plane dominated (ST-I) modes. Those having anti-symmetric transverse displacements (AT modes) can similarly be designated as transverse dominated (AT-T) modes and in-plane dominated (AT-I) modes. The response of an aluminum arch under a uniformly distributed transverse random loading is investigated. Results from nonlinear modal simulations made using various modal bases are compared with those obtained from a numerical simulation in physical degrees-of-freedom. While inclusion of ST-T modes is important for all response regimes, it is found that the ST-I modes become increasingly important in the nonlinear response regime, and that AT-T and AT-I modes are critical in the autoparametric regime.

  1. Nonlinear Reduced Order Random Response Analysis of Structures With Shallow Curvature

    NASA Technical Reports Server (NTRS)

    Przekop, Adam; Rizzi, Stephen A.

    2005-01-01

    The goal of this investigation is to further develop nonlinear modal numerical simulation methods for application to geometrically nonlinear response of structures with shallow curvature under random loadings. For reduced order analysis, the modal basis selection must be capable of reflecting the coupling in both the linear and nonlinear stiffness. For the symmetric shallow arch under consideration, four categories of modal basis functions are defined. Those having symmetric transverse displacements (ST modes) can be designated as transverse dominated (ST-T) modes and in-plane dominated (ST-I) modes. Those having anti-symmetric transverse displacements (AT modes) can similarly be designated as transverse dominated (AT-T) modes and in-plane dominated (AT-I) modes. The response of an aluminum arch under a uniformly distributed transverse random loading is investigated. Results from nonlinear modal simulations made using various modal bases are compared with those obtained from a numerical simulation in physical degrees-of-freedom. While inclusion of ST-T modes is important for all response regimes, it is found that the ST-I modes become increasingly important in the nonlinear response regime, and that AT-T and AT-I modes are critical in the autoparametric regime.

  2. Multipole expansion method for supernova neutrino oscillations

    DOE PAGES

    Duan, Huaiyu; Shalgar, Shashank

    2014-10-31

    Here, we demonstrate a multipole expansion method to calculate collective neutrino oscillations in supernovae using the neutrino bulb model. We show that it is much more efficient to solve multi-angle neutrino oscillations in the multipole basis than in the angle basis. The multipole expansion method also provides interesting insights into multi-angle calculations that were previously accomplished in the angle basis.
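
    The efficiency argument rests on the fact that smooth angular distributions need only a few multipoles. A minimal sketch of the basis change, using an illustrative forward-peaked distribution rather than a supernova calculation:

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

# Project an angle-basis distribution f(u), u = cos(theta), onto Legendre
# multipoles  f_l = (2l + 1)/2 * \int_{-1}^{1} f(u) P_l(u) du.
u = np.linspace(-1.0, 1.0, 2001)
f = np.exp(-4.0 * (1.0 - u))              # toy forward-peaked distribution

moments = []
for l in range(16):
    P_l = Legendre.basis(l)(u)
    moments.append((2 * l + 1) / 2 * np.trapz(f * P_l, u))

# Rapid decay of |f_l| with l is what makes truncation in the multipole
# basis far cheaper than resolving many discrete angle bins.
print(np.round(moments, 4))
```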

  3. Extraction of process zones and low-dimensional attractive subspaces in stochastic fracture mechanics

    PubMed Central

    Kerfriden, P.; Schmidt, K.M.; Rabczuk, T.; Bordas, S.P.A.

    2013-01-01

    We propose to identify process zones in heterogeneous materials by tailored statistical tools. The process zone is redefined as the part of the structure where the random process cannot be correctly approximated in a low-dimensional deterministic space. Such a low-dimensional space is obtained by a spectral analysis performed on pre-computed solution samples. A greedy algorithm is proposed to identify both process zone and low-dimensional representative subspace for the solution in the complementary region. In addition to the novelty of the tools proposed in this paper for the analysis of localised phenomena, we show that the reduced space generated by the method is a valid basis for the construction of a reduced order model. PMID:27069423

  4. Estimating the intrinsic limit of the Feller-Peterson-Dixon composite approach when applied to adiabatic ionization potentials in atoms and small molecules

    NASA Astrophysics Data System (ADS)

    Feller, David

    2017-07-01

    Benchmark adiabatic ionization potentials were obtained with the Feller-Peterson-Dixon (FPD) theoretical method for a collection of 48 atoms and small molecules. In previous studies, the FPD method demonstrated an ability to predict atomization energies (heats of formation) and electron affinities well within a 95% confidence level of ±1 kcal/mol. Large 1-particle expansions involving correlation consistent basis sets (up to aug-cc-pV8Z in many cases and aug-cc-pV9Z for some atoms) were chosen for the valence CCSD(T) starting point calculations. Despite their cost, these large basis sets were chosen in order to help minimize the residual basis set truncation error and reduce dependence on approximate basis set limit extrapolation formulas. The complementary n-particle expansion included higher order CCSDT, CCSDTQ, or CCSDTQ5 (coupled cluster theory with iterative triple, quadruple, and quintuple excitations) corrections. For all of the chemical systems examined here, it was also possible to either perform explicit full configuration interaction (CI) calculations or to otherwise estimate the full CI limit. Additionally, corrections associated with core/valence correlation, scalar relativity, anharmonic zero point vibrational energies, non-adiabatic effects, and other minor factors were considered. The root mean square deviation with respect to experiment for the ionization potentials was 0.21 kcal/mol (0.009 eV). The corresponding level of agreement for molecular enthalpies of formation was 0.37 kcal/mol and for electron affinities 0.20 kcal/mol. Similar good agreement with experiment was found in the case of molecular structures and harmonic frequencies. Overall, the combination of energetic, structural, and vibrational data (655 comparisons) reflects the consistent ability of the FPD method to achieve close agreement with experiment for small molecules using the level of theory applied in this study.
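
    For context, one common inverse-cubic form for extrapolating the correlation energy to the complete-basis-set (CBS) limit with the cardinal number n of the cc-pV(n)Z family is (a standard literature formula, not necessarily the specific variant used in the FPD study)

        E(n) = E_{\mathrm{CBS}} + A\, n^{-3},

    so two consecutive cardinal numbers fix E_CBS and A; pushing the expansion to n = 8 or 9, as above, reduces how much the final result leans on such fits.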

  5. Frequency-varying synchronous micro-vibration suppression for a MSFW with application of small-gain theorem

    NASA Astrophysics Data System (ADS)

    Peng, Cong; Fan, Yahong; Huang, Ziyuan; Han, Bangcheng; Fang, Jiancheng

    2017-01-01

    This paper presents a novel synchronous micro-vibration suppression method on the basis of the small-gain theorem to reduce the frequency-varying synchronous micro-vibration forces of a magnetically suspended flywheel (MSFW). The proposed synchronous micro-vibration suppression method not only eliminates the synchronous current fluctuations to force the rotor to spin around its inertia axis, but also compensates for the effect of the displacement stiffness in the permanent-magnet (PM)-biased magnetic bearings. Moreover, the stability of the proposed control system is rigorously analyzed using the small-gain theorem. The effectiveness of the proposed micro-vibration suppression method is demonstrated via direct measurement of the disturbance forces on an MSFW. The main merit of the proposed method is that it provides a simple and practical way to suppress the frequency-varying micro-vibration forces while preserving the nominal performance of the baseline control system.

  6. Optimized Projection Matrix for Compressive Sensing

    NASA Astrophysics Data System (ADS)

    Xu, Jianping; Pi, Yiming; Cao, Zongjie

    2010-12-01

    Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. To date, work on CS has typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. The method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Because of its complexity, the problem cannot be solved exactly; therefore, an alternating-minimization-type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the necessary number of samples for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
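
    A minimal sketch of the idea, assuming an Elad-style shrinkage loop that pushes the Gram matrix toward an ETF (a stand-in for, not a reproduction of, the authors' exact algorithm); the dictionary and dimensions are illustrative.

```python
import numpy as np

def mutual_coherence(D):
    """Largest off-diagonal entry of the normalized Gram matrix."""
    Dn = D / np.linalg.norm(D, axis=0, keepdims=True)
    G = np.abs(Dn.T @ Dn)
    np.fill_diagonal(G, 0.0)
    return G.max()

def optimize_projection(Phi, Psi, n_iter=50, t=0.2, gamma=0.6):
    """Shrink large Gram entries toward the ETF bound, then refit Phi."""
    m = Phi.shape[0]
    for _ in range(n_iter):
        D = Phi @ Psi
        D /= np.linalg.norm(D, axis=0, keepdims=True)
        G = D.T @ D
        shrink = np.abs(G) >= t                   # large off-diagonal entries
        G[shrink] = gamma * G[shrink]
        np.fill_diagonal(G, 1.0)
        w, V = np.linalg.eigh(G)                  # force rank m, recover Phi
        idx = np.argsort(w)[::-1][:m]
        S = (V[:, idx] * np.sqrt(np.clip(w[idx], 0.0, None))).T   # m x K
        Phi = S @ np.linalg.pinv(Psi)
    return Phi

rng = np.random.default_rng(0)
m, n, K = 20, 60, 120
Psi = rng.standard_normal((n, K))                 # sparsifying dictionary (toy)
Phi = rng.standard_normal((m, n))                 # random starting projection
print(mutual_coherence(Phi @ Psi))
Phi = optimize_projection(Phi, Psi)
print(mutual_coherence(Phi @ Psi))                # typically noticeably lower
```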

  7. Improving the Repair Planning System for Mining Equipment on the Basis of Non-destructive Evaluation Data

    NASA Astrophysics Data System (ADS)

    Drygin, Michael; Kuryshkin, Nicholas

    2017-11-01

    The article describes the formation of a new concept for the scheduled preventive repair system of equipment at coal mining enterprises, based on the use of modern non-destructive evaluation methods. The approach to solving this task is based on a system-oriented analysis of the regulatory documentation, non-destructive evaluation methods and means, and experimental studies with compilation of statistics and subsequent grapho-analytical analysis. The main result of the work is a feasible justification for using non-destructive evaluation methods within the current scheduled preventive repair system, their high efficiency, and the potential for a gradual transition to condition-based maintenance. In practice, wide use of non-destructive evaluation means will allow a significant reduction in the number of equipment failures and the repair of only those nodes in pre-accident condition. Considering the import phase-out policy, the solution of this task will allow the SPR system to be adapted to Russian market economy conditions and provide an opportunity for commercial advantage by reducing the expenses for maintenance of Russian-made and imported equipment.

  8. Treatment-induced mucositis: an old problem with new remedies.

    PubMed

    Symonds, R P

    1998-05-01

    Mucositis may be a painful, debilitating, dose-limiting side-effect of both chemotherapy and radiotherapy for which there is no widely accepted prophylaxis or effective treatment. The basis of management is pain relief, prevention of dehydration and adequate nutrition. When tested rigorously, most antiseptic mouthwashes and anti-ulcer agents are ineffective. Simple mechanical cleansing with saline is the most effective traditional measure. A variety of new agents are effective. Granulocyte macrophage colony-stimulating factor (GM-CSF) and granulocyte colony-stimulating factor (G-CSF) act outwith the haemopoietic system and can reduce mucositis, but the best schedule, dosage and method of administration are not known, nor is it known which growth factor best prevents this side-effect. A placebo-controlled randomized trial of antibiotic pastilles has shown a significant reduction in mucositis and weight loss during radiotherapy for head and neck cancer. Another method to reduce radiation effects in normal tissue is to stimulate cells to divide before radiotherapy using silver nitrate or interleukin 1. These methods may be particularly effective when given along with hyperfractionated radiation treatment such as CHART.

  9. Remote sensing image segmentation based on Hadoop cloud platform

    NASA Astrophysics Data System (ADS)

    Li, Jie; Zhu, Lingling; Cao, Fubin

    2018-01-01

    To address the slow speed and poor real-time performance of remote sensing image segmentation, this paper studies a method of remote sensing image segmentation based on the Hadoop platform. On the basis of analyzing the structural characteristics of the Hadoop cloud platform and its MapReduce programming component, this paper proposes an image segmentation method based on the combination of OpenCV and the Hadoop cloud platform. Firstly, the MapReduce image processing model for the Hadoop cloud platform is designed, the image input and output are customized, and the splitting method for the data file is rewritten. Then the Mean Shift image segmentation algorithm is implemented. Finally, a segmentation experiment is performed on a remote sensing image and compared against the same experiment implemented in MATLAB with the Mean Shift algorithm. The experimental results show that, while preserving segmentation quality, the segmentation speed of remote sensing image segmentation based on the Hadoop cloud platform is greatly improved compared with standalone MATLAB segmentation, and the effectiveness of image segmentation is considerably improved.
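
    A single-node sketch of the per-split segmentation step that the paper distributes with MapReduce; OpenCV's pyramid mean-shift filtering stands in for the implemented Mean Shift stage, and the file name and window radii are illustrative assumptions.

```python
import cv2

# One image split (map input); "tile.png" is an illustrative file name.
tile = cv2.imread("tile.png")
# Mean-shift filtering: sp is the spatial window radius, sr the color radius.
seg = cv2.pyrMeanShiftFiltering(tile, sp=21, sr=51)
cv2.imwrite("tile_seg.png", seg)                  # per-split result (map output)
```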

  10. Improved Algorithm For Finite-Field Normal-Basis Multipliers

    NASA Technical Reports Server (NTRS)

    Wang, C. C.

    1989-01-01

    Improved algorithm reduces complexity of calculations that must precede design of Massey-Omura finite-field normal-basis multipliers, used in error-correcting-code equipment and cryptographic devices. Algorithm represents an extension of development reported in "Algorithm To Design Finite-Field Normal-Basis Multipliers" (NPO-17109), NASA Tech Briefs, Vol. 12, No. 5, page 82.

  11. The research of collapsibility test and FEA of collapse deformation in loess collapsible under overburden pressure

    NASA Astrophysics Data System (ADS)

    yu, Zhang; hui, Li; guibo, Bao; wuyu, Zhang; ningshan, Jiang; xiaoyun, Yang

    2018-05-01

    Collapsibility tests in the field may show large errors relative to computed results [1-4]. The authors compare the single-line and double-line methods and then compare both with the field result. The aim is to reduce the error between measured and computed values; a way to decrease this error is proposed by accounting for the influence of matric suction on the unsaturated soil in the finite element analysis. A field test was completed to verify the reasonableness of this method, to obtain some regularities of the development of collapse deformation, and to supply a calculation basis for engineering design and forecasting in emergency situations.

  12. Practical implementation of spectral-intensity dispersion-canceled optical coherence tomography with artifact suppression

    NASA Astrophysics Data System (ADS)

    Shirai, Tomohiro; Friberg, Ari T.

    2018-04-01

    Dispersion-canceled optical coherence tomography (OCT) based on spectral intensity interferometry was devised as a classical counterpart of quantum OCT to enhance the basic performance of conventional OCT. In this paper, we demonstrate experimentally that an alternative method of realizing this kind of OCT by means of two optical fiber couplers and a single spectrometer is a more practical and reliable option than the existing methods proposed previously. Furthermore, we develop a recipe for reducing multiple artifacts simultaneously on the basis of simple averaging and verify experimentally that it works successfully in the sense that all the artifacts are mitigated effectively and only the true signals carrying structural information about the sample survive.

  13. Adaptive parametric model order reduction technique for optimization of vibro-acoustic models: Application to hearing aid design

    NASA Astrophysics Data System (ADS)

    Creixell-Mediante, Ester; Jensen, Jakob S.; Naets, Frank; Brunskog, Jonas; Larsen, Martin

    2018-06-01

    Finite Element (FE) models of complex structural-acoustic coupled systems can require a large number of degrees of freedom in order to capture their physical behaviour. This is the case in the hearing aid field, where acoustic-mechanical feedback paths are a key factor in the overall system performance and modelling them accurately requires a precise description of the strong interaction between the light-weight parts and the internal and surrounding air over a wide frequency range. Parametric optimization of the FE model can be used to reduce the vibroacoustic feedback in a device during the design phase; however, it requires solving the model iteratively for multiple frequencies at different parameter values, which becomes highly time consuming when the system is large. Parametric Model Order Reduction (pMOR) techniques aim at reducing the computational cost associated with each analysis by projecting the full system into a reduced space. A drawback of most of the existing techniques is that the vector basis of the reduced space is built at an offline phase where the full system must be solved for a large sample of parameter values, which can also become highly time consuming. In this work, we present an adaptive pMOR technique where the construction of the projection basis is embedded in the optimization process and requires fewer full system analyses, while the accuracy of the reduced system is monitored by a cheap error indicator. The performance of the proposed method is evaluated for a 4-parameter optimization of a frequency response for a hearing aid model, evaluated at 300 frequencies, where the objective function evaluations become more than one order of magnitude faster than for the full system.
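
    The projection mechanics can be condensed as follows: a few full solves build a basis, and every subsequent parameter evaluation costs only a small projected solve plus a cheap residual check, the latter playing the role of the error indicator that drives the adaptive basis enrichment described above. The parametric matrix below is a random illustrative system, not a hearing-aid model.

```python
import numpy as np

# Bare-bones Galerkin projection for a parametric system K(p) x = f.
n = 1000
rng = np.random.default_rng(0)
M = rng.standard_normal((n, n))
A0 = M @ M.T + n * np.eye(n)               # symmetric positive definite part
A1 = np.diag(rng.random(n))                # parameter-dependent part
f = rng.standard_normal(n)
K = lambda p: A0 + p * A1

# offline-style stage: snapshots at a few parameter values -> basis V
snaps = np.column_stack([np.linalg.solve(K(p), f) for p in (0.1, 1.0, 10.0)])
V, _ = np.linalg.qr(snaps)

# online stage: small k x k solve at a new parameter value
p = 3.7
x_r = V @ np.linalg.solve(V.T @ K(p) @ V, V.T @ f)
res = np.linalg.norm(K(p) @ x_r - f) / np.linalg.norm(f)
print(f"relative residual at p = {p}: {res:.2e}")   # cheap error indicator
```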

  14. Intelligent Photovoltaic Systems by Combining the Improved Perturbation Method of Observation and Sun Location Tracking.

    PubMed

    Wang, Yajie; Shi, Yunbo; Yu, Xiaoyu; Liu, Yongjie

    2016-01-01

    Currently, tracking in photovoltaic (PV) systems suffers from some problems such as high energy consumption, poor anti-interference performance, and large tracking errors. This paper presents a solar PV tracking system on the basis of an improved perturbation and observation method, which maximizes photoelectric conversion efficiency. According to the projection principle, we design a sensor module with a light-intensity-detection module for environmental light-intensity measurement. The effect of environmental factors on the system operation is reduced, and intelligent identification of the weather is realized. This system adopts the discrete-type tracking method to reduce power consumption. A mechanical structure with a level-pitch double-degree-of-freedom is designed, and attitude correction is performed by closed-loop control. A worm-and-gear mechanism is added, and the reliability, stability, and precision of the system are improved. Finally, the perturbation and observation method designed and improved by this study was tested by simulated experiments. The experiments verified that the photoelectric sensor resolution can reach 0.344°, the tracking error is less than 2.5°, the largest improvement in the charge efficiency can reach 44.5%, and the system steadily and reliably works.
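
    For reference, the baseline perturb-and-observe step that the paper improves upon can be written in a few lines; the P-V curve, step size, and limits below are toy assumptions, not the authors' hardware model.

```python
import numpy as np

def pv_power(v):
    """Toy P-V curve peaking near 17 V; a stand-in for a real module."""
    i = np.clip(5.0 * (1.0 - np.exp((v - 21.0) / 1.5)), 0.0, None)
    return v * i

# Perturb the operating voltage, observe the power change, and keep
# stepping in the direction that increases power.
v, dv = 12.0, 0.2
p_prev = pv_power(v)
for _ in range(200):
    v += dv
    p = pv_power(v)
    if p < p_prev:                     # power dropped: reverse direction
        dv = -dv
    p_prev = p
print(f"operating point settled near {v:.1f} V")
```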

  15. Intelligent Photovoltaic Systems by Combining the Improved Perturbation Method of Observation and Sun Location Tracking

    PubMed Central

    Wang, Yajie; Shi, Yunbo; Yu, Xiaoyu; Liu, Yongjie

    2016-01-01

    Currently, tracking in photovoltaic (PV) systems suffers from some problems such as high energy consumption, poor anti-interference performance, and large tracking errors. This paper presents a solar PV tracking system on the basis of an improved perturbation and observation method, which maximizes photoelectric conversion efficiency. According to the projection principle, we design a sensor module with a light-intensity-detection module for environmental light-intensity measurement. The effect of environmental factors on the system operation is reduced, and intelligent identification of the weather is realized. This system adopts the discrete-type tracking method to reduce power consumption. A mechanical structure with a level-pitch double-degree-of-freedom is designed, and attitude correction is performed by closed-loop control. A worm-and-gear mechanism is added, and the reliability, stability, and precision of the system are improved. Finally, the perturbation and observation method designed and improved by this study was tested by simulated experiments. The experiments verified that the photoelectric sensor resolution can reach 0.344°, the tracking error is less than 2.5°, the largest improvement in the charge efficiency can reach 44.5%, and the system steadily and reliably works. PMID:27327657

  16. Advances in locally constrained k-space-based parallel MRI.

    PubMed

    Samsonov, Alexey A; Block, Walter F; Arunachalam, Arjun; Field, Aaron S

    2006-02-01

    In this article, several theoretical and methodological developments regarding k-space-based, locally constrained parallel MRI (pMRI) reconstruction are presented. A connection between Parallel MRI with Adaptive Radius in k-Space (PARS) and GRAPPA methods is demonstrated. The analysis provides a basis for unified treatment of both methods. Additionally, a weighted PARS reconstruction is proposed, which may absorb different weighting strategies for improved image reconstruction. Next, a fast and efficient method for pMRI reconstruction of data sampled on non-Cartesian trajectories is described. In the new technique, the computational burden associated with the numerous matrix inversions in the original PARS method is drastically reduced by limiting direct calculation of reconstruction coefficients to only a few reference points. The rest of the coefficients are found by interpolating between the reference sets, which is possible due to the similar configuration of points participating in reconstruction for highly symmetric trajectories, such as radial and spiral. As a result, the time requirements are drastically reduced, which makes it practical to use pMRI with non-Cartesian trajectories in many applications. The new technique was demonstrated with simulated and actual data sampled on radial trajectories.

  17. Two methods for transmission line simulation model creation based on time domain measurements

    NASA Astrophysics Data System (ADS)

    Rinas, D.; Frei, S.

    2011-07-01

    The emission from transmission lines plays an important role in the electromagnetic compatibility of automotive electronic systems. In the frequency range below 200 MHz, radiation from cables is often the dominant emission factor. In higher frequency ranges, radiation from PCBs and their housing becomes more relevant, the main sources of this emission being the conducting traces. The established field measurement methods according to CISPR 25 for the evaluation of emissions suffer from the need to use large anechoic chambers. Furthermore, the measurement data cannot be used for simulation model creation in order to compute the overall fields radiated from a car. In this paper, a method to determine the far fields and a simulation model of radiating transmission lines, especially cable bundles and conducting traces on planar structures, is proposed. The method measures the electromagnetic near-field above the test object. Measurements are done in the time domain in order to obtain phase information and to reduce measurement time. On the basis of the near-field data, equivalent source identification can be performed. By considering correlations between sources along each conductive structure in the model creation process, the model accuracy increases and computational costs can be reduced.

  18. Customer-Specific Transaction Risk Management in E-Commerce

    NASA Astrophysics Data System (ADS)

    Ruch, Markus; Sackmann, Stefan

    Increasing potential for turnover in e-commerce is inextricably linked with an increase in risk. Online retailers (e-tailers) aiming for a company-wide value orientation should manage this risk. However, current approaches to risk management either use average retail prices elevated by an overall risk premium or restrict the payment methods offered to customers. Thus, they neglect customer-specific value and risk attributes and leave turnover potential unconsidered. To close this gap, an innovative valuation model is proposed in this contribution that integrates customer-specific risk and potential turnover. The approach presented evaluates different payment methods using their risk-turnover characteristic, provides a risk-adjusted decision basis for selecting payment methods, and allows e-tailers to derive automated risk management decisions per customer and transaction without reducing turnover potential.

  19. Statistical analysis and machine learning algorithms for optical biopsy

    NASA Astrophysics Data System (ADS)

    Wu, Binlin; Liu, Cheng-hui; Boydston-White, Susie; Beckman, Hugh; Sriramoju, Vidyasagar; Sordillo, Laura; Zhang, Chunyuan; Zhang, Lin; Shi, Lingyan; Smith, Jason; Bailin, Jacob; Alfano, Robert R.

    2018-02-01

    Analyzing spectral or imaging data collected with various optical biopsy methods is often difficult due to the complexity of the underlying biology. Robust methods that can use spectral or imaging data to detect the characteristic spectral or spatial signatures of different tissue types are challenging to develop but highly desired. In this study, we used various machine learning algorithms to analyze a spectral dataset acquired from normal and cancerous human skin tissue samples using resonance Raman spectroscopy with 532 nm excitation. The algorithms, including principal component analysis, nonnegative matrix factorization, and an autoencoder artificial neural network, are used to reduce the dimension of the dataset and detect features. A support vector machine with a linear kernel is used to classify the normal tissue and cancerous tissue samples. The efficacies of the methods are compared.
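
    A compact sketch of such a pipeline, assuming synthetic stand-in spectra rather than the actual resonance Raman dataset: dimension reduction (PCA here) feeding a linear-kernel SVM scored by cross-validation.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic "spectra": class 1 carries an extra Gaussian peak near bin 300.
rng = np.random.default_rng(0)
n, bins = 120, 800
y = rng.integers(0, 2, n)                          # 0 = normal, 1 = cancerous
peak = np.exp(-0.5 * ((np.arange(bins) - 300) / 20.0) ** 2)
X = rng.standard_normal((n, bins)) + np.outer(y, 3.0 * peak)

clf = make_pipeline(StandardScaler(), PCA(n_components=10),
                    SVC(kernel="linear"))
print(cross_val_score(clf, X, y, cv=5).mean())     # cross-validated accuracy
```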

  20. Relative Effectiveness of Worker Safety and Health Training Methods

    PubMed Central

    Burke, Michael J.; Sarpy, Sue Ann; Smith-Crowe, Kristin; Chan-Serafin, Suzanne; Salvador, Rommel O.; Islam, Gazi

    2006-01-01

    Objectives. We sought to determine the relative effectiveness of different methods of worker safety and health training aimed at improving safety knowledge and performance and reducing negative outcomes (accidents, illnesses, and injuries). Methods. Ninety-five quasi-experimental studies (n=20991) were included in the analysis. Three types of intervention methods were distinguished on the basis of learners’ participation in the training process: least engaging (lecture, pamphlets, videos), moderately engaging (programmed instruction, feedback interventions), and most engaging (training in behavioral modeling, hands-on training). Results. As training methods became more engaging (i.e., requiring trainees’ active participation), workers demonstrated greater knowledge acquisition, and reductions were seen in accidents, illnesses, and injuries. All methods of training produced meaningful behavioral performance improvements. Conclusions. Training involving behavioral modeling, a substantial amount of practice, and dialogue is generally more effective than other methods of safety and health training. The present findings challenge the current emphasis on more passive computer-based and distance training methods within the public health workforce. PMID:16380566

  1. The application of midbond basis sets in efficient and accurate ab initio calculations on electron-deficient systems

    NASA Astrophysics Data System (ADS)

    Choi, Chu Hwan

    2002-09-01

    Ab initio chemistry has shown great promise in reproducing experimental results and in its predictive power. The many complicated computational models and methods can seem impenetrable to an inexperienced scientist, and the reliability of the results is not easily judged. The application of midbond orbitals is used to determine a general method for calculating weak intermolecular interactions, especially those involving electron-deficient systems. Using the criteria of consistency, flexibility, accuracy and efficiency, we propose a supermolecular method of calculation using the full counterpoise (CP) method of Boys and Bernardi, coupled with Møller-Plesset (MP) perturbation theory as an efficient electron-correlation method. We also advocate the use of the highly efficient and reliable correlation-consistent polarized valence basis sets of Dunning. To these basis sets, we add a general set of midbond orbitals and demonstrate greatly enhanced efficiency in the calculation. The H2-H2 dimer is taken as a benchmark test case for our method, and details of the computation are elaborated. Our method reproduces with great accuracy the dissociation energies of other previous theoretical studies. The added efficiency of extending the basis sets with conventional means is compared with the performance of our midbond-extended basis sets. The improvement found with midbond functions is notably superior in every case tested. Finally, a novel application of midbond functions to the BH5 complex is presented. The system is an unusual van der Waals complex. The interaction potential curves are presented for several standard basis sets and midbond-enhanced basis sets, as well as for two popular, alternative correlation methods. We report that MP theory appears to be superior to coupled-cluster (CC) theory in speed, while it is more stable than B3LYP, a widely used density functional theory (DFT) method. Application of our general method yields excellent results for the midbond basis sets. Again they prove superior to conventionally extended basis sets. Based on these results, we recommend our general approach as a highly efficient, accurate method for calculating weakly interacting systems.
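
    For reference, the full counterpoise correction referred to above evaluates each fragment in the complete dimer basis (the standard textbook formula, with the superscript denoting the basis used):

        \Delta E_{\mathrm{int}}^{\mathrm{CP}} = E_{AB}^{\alpha\cup\beta}(AB) - E_{A}^{\alpha\cup\beta}(A) - E_{B}^{\alpha\cup\beta}(B),

    where α and β are the monomer basis sets (here further extended by the midbond functions placed between the fragments), so that the basis set superposition error largely cancels in the difference.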

  2. Microbial ecology of a crude oil contaminated aquifer

    USGS Publications Warehouse

    Bekins, B.A.; Cozzarelli, I.M.; Warren, E.; Godsy, E.M.

    2002-01-01

    Detailed microbial analyses of a glacial outwash aquifer contaminated by crude oil provide insights into the pattern of microbial succession from iron reducing to methanogenic in the anaerobic portion of the contaminant plume. We analysed sediments from this area for populations of aerobes, iron reducers, fermenters and methanogens, using the most probable number method. On the basis of the microbial data the anaerobic area can be divided into distinct physiological zones dominated by either iron-reducers or a consortium of fermenters and methanogens. Chemistry and permeability data show that methanogenic conditions develop first in areas of high hydrocarbon flux. Thus, we find methanogens both in high permeability horizons and also where separate-phase crude oil is present in either the saturated or unsaturated zone. Microbial numbers peak at the top of the separate-phase oil suggesting that growth is most rapid in locations with access to both hydrocarbons and nutrients infiltrating from the surface.

  3. Optimal charge control strategies for stationary photovoltaic battery systems

    NASA Astrophysics Data System (ADS)

    Li, Jiahao; Danzer, Michael A.

    2014-07-01

    Battery systems coupled to photovoltaic (PV) modules fulfill one major function: they locally decouple PV generation and consumption of electrical power, leading to two major effects. First, they reduce the grid load, especially at peak times, and therewith reduce the necessity of a network expansion. Second, they increase the self-consumption in households and therewith help to reduce energy expenses. For the management of PV batteries, charge control strategies need to be developed to reach the goals of both the distribution system operators and the local power producer. In this work, optimal control strategies regarding various optimization goals are developed on the basis of predicted household load and PV generation profiles using the method of dynamic programming. The resulting charge curves are compared and essential differences discussed. Finally, a multi-objective optimization shows that charge control strategies can be derived that take all optimization goals into account.
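
    A toy dynamic-programming sketch of the scheduling idea: minimize total grid import over a day by backward induction on a discretized state of charge. All profiles, capacity, and limits are illustrative assumptions, not the paper's measured household data.

```python
import numpy as np

T = 24                                            # hourly steps
soc = np.linspace(0.0, 5.0, 51)                   # state of charge [kWh]
pv = np.maximum(0.0, 3.0 * np.sin(np.linspace(0.0, np.pi, T)))  # PV [kW]
load = 1.0 + 0.5 * np.random.default_rng(0).random(T)           # load [kW]
p_max = 2.0                                       # charge/discharge limit [kW]

cost = np.zeros(len(soc))                         # terminal cost-to-go
policy = np.zeros((T, len(soc)), dtype=int)
for t in reversed(range(T)):                      # backward induction
    nxt = np.full(len(soc), np.inf)
    for i, s in enumerate(soc):
        for j, s2 in enumerate(soc):
            p_batt = s2 - s                       # energy into battery [kWh/h]
            if abs(p_batt) > p_max:
                continue
            grid = load[t] - pv[t] + p_batt       # import (+) / export (-)
            c = max(grid, 0.0) + cost[j]          # stage cost: energy bought
            if c < nxt[i]:
                nxt[i], policy[t, i] = c, j
    cost = nxt
print(f"minimum daily grid import from empty battery: {cost[0]:.2f} kWh")
```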

  4. Coupled molybdenum carbide and reduced graphene oxide electrocatalysts for efficient hydrogen evolution.

    PubMed

    Li, Ji-Sen; Wang, Yu; Liu, Chun-Hui; Li, Shun-Li; Wang, Yu-Guang; Dong, Long-Zhang; Dai, Zhi-Hui; Li, Ya-Fei; Lan, Ya-Qian

    2016-04-01

    Electrochemical water splitting is one of the most economical and sustainable methods for large-scale hydrogen production. However, the development of low-cost and earth-abundant non-noble-metal catalysts for the hydrogen evolution reaction remains a challenge. Here we report a two-dimensional coupled hybrid of molybdenum carbide and reduced graphene oxide with a ternary polyoxometalate-polypyrrole/reduced graphene oxide nanocomposite as a precursor. The hybrid exhibits outstanding electrocatalytic activity for the hydrogen evolution reaction and excellent stability in acidic media, which is, to the best of our knowledge, the best among the reported non-noble-metal catalysts. Theoretical calculations on the basis of density functional theory reveal that the active sites for hydrogen evolution stem from the pyridinic nitrogens, as well as the carbon atoms, in the graphene. In a proof-of-concept trial, an electrocatalyst for hydrogen evolution is fabricated, which may open new avenues for the design of nanomaterials utilizing POMs/conducting polymer/reduced-graphene oxide nanocomposites.

  5. Coupled molybdenum carbide and reduced graphene oxide electrocatalysts for efficient hydrogen evolution

    NASA Astrophysics Data System (ADS)

    Li, Ji-Sen; Wang, Yu; Liu, Chun-Hui; Li, Shun-Li; Wang, Yu-Guang; Dong, Long-Zhang; Dai, Zhi-Hui; Li, Ya-Fei; Lan, Ya-Qian

    2016-04-01

    Electrochemical water splitting is one of the most economical and sustainable methods for large-scale hydrogen production. However, the development of low-cost and earth-abundant non-noble-metal catalysts for the hydrogen evolution reaction remains a challenge. Here we report a two-dimensional coupled hybrid of molybdenum carbide and reduced graphene oxide with a ternary polyoxometalate-polypyrrole/reduced graphene oxide nanocomposite as a precursor. The hybrid exhibits outstanding electrocatalytic activity for the hydrogen evolution reaction and excellent stability in acidic media, which is, to the best of our knowledge, the best among the reported non-noble-metal catalysts. Theoretical calculations on the basis of density functional theory reveal that the active sites for hydrogen evolution stem from the pyridinic nitrogens, as well as the carbon atoms, in the graphene. In a proof-of-concept trial, an electrocatalyst for hydrogen evolution is fabricated, which may open new avenues for the design of nanomaterials utilizing POMs/conducting polymer/reduced-graphene oxide nanocomposites.

  6. Reducing the Anaerobic Digestion Model No. 1 for its application to an industrial wastewater treatment plant treating winery effluent wastewater.

    PubMed

    García-Diéguez, Carlos; Bernard, Olivier; Roca, Enrique

    2013-03-01

    The Anaerobic Digestion Model No. 1 (ADM1) is a complex model which is widely accepted as a common platform for anaerobic process modeling and simulation. However, it has a large number of parameters and states that hinder its calibration and use in control applications. A principal component analysis (PCA) technique was extended and applied to simplify the ADM1 using data of an industrial wastewater treatment plant processing winery effluent. The method shows that the main model features could be obtained with a minimum of two reactions. A reduced stoichiometric matrix was identified and the kinetic parameters were estimated on the basis of representative known biochemical kinetics (Monod and Haldane). The obtained reduced model takes into account the measured states in the anaerobic wastewater treatment (AWT) plant and reproduces the dynamics of the process fairly accurately. The reduced model can support on-line control, optimization and supervision strategies for AWT plants.
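
    The PCA step that suggests a two-reaction model can be sketched on synthetic data shaped like a two-reaction process (the arrays below are illustrative, not the winery-plant measurements):

```python
import numpy as np

# Synthetic (time x species) concentration matrix built from two reaction
# extents mixed through a random stoichiometry, plus small noise.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 200)
r1, r2 = np.exp(-0.3 * t), 1.0 - np.exp(-0.8 * t)        # two reaction extents
mix = rng.standard_normal((2, 6))                        # stoichiometric mixing
C = np.column_stack([r1, r2]) @ mix + 0.01 * rng.standard_normal((200, 6))

# PCA via SVD of the centered data: the singular-value spectrum tells how
# many reactions the reduced model needs to capture the dynamics.
s = np.linalg.svd(C - C.mean(axis=0), compute_uv=False)
explained = s**2 / np.sum(s**2)
print(np.round(np.cumsum(explained), 4))   # ~all variance in 2 components
```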

  7. A Human Capital Approach to Reduce Health Disparities

    PubMed Central

    Glover, Saundra H.; Xirasagar, Sudha; Jeon, Yunho; Elder, Keith T.; Piper, Crystal N.; Pastides, Harris

    2010-01-01

    Objective To introduce a human capital approach to reduce health disparities in South Carolina by increasing the number and quality of trained minority professionals in public health practice and research. Methods The conceptual basis and elements of Project EXPORT in South Carolina are described. Project EXPORT is a community-based participatory research (CBPR) translational project designed to build human capital in public health practice and research. The project involves Claflin University (CU), a Historically Black College and University (HBCU), and the African American community of Orangeburg, South Carolina, in reducing health disparities, utilizing resources from the University of South Carolina (USC), a level 1 research institution, to build expertise at a minority-serving institution. The elements of Project EXPORT were created to advance the science base of disparities reduction, increase the number of trained minority researchers, and engage the African American community at all stages of research. Conclusion Building upon past collaborations between HBCUs in South Carolina and USC, this project holds promise for a public health human capital approach to reducing health disparities. PMID:21814634

  8. Breast Cancer Detection with Reduced Feature Set.

    PubMed

    Mert, Ahmet; Kılıç, Niyazi; Bilgili, Erdem; Akan, Aydin

    2015-01-01

    This paper explores the feature reduction properties of independent component analysis (ICA) in a breast cancer decision support system. The Wisconsin diagnostic breast cancer (WDBC) dataset is reduced to a one-dimensional feature vector by computing an independent component (IC). The original data with 30 features and the reduced single feature (IC) are used to evaluate the diagnostic accuracy of classifiers such as k-nearest neighbor (k-NN), artificial neural network (ANN), radial basis function neural network (RBFNN), and support vector machine (SVM). The proposed classification using the IC is also compared with the original feature set under different validation (5/10-fold cross-validation) and partitioning (20%-40%) schemes. These classifiers are evaluated on how effectively they categorize tumors as benign or malignant in terms of specificity, sensitivity, accuracy, F-score, Youden's index, discriminant power, and the receiver operating characteristic (ROC) curve with its criterion values, including area under the curve (AUC) and 95% confidence interval (CI). This represents an improvement in the diagnostic decision support system while reducing computational complexity.
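
    The core of the approach fits in a few lines with scikit-learn, which ships the same WDBC data; the k-NN classifier and fold count below are one example configuration among the several the paper evaluates, not a reproduction of its exact setup.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import FastICA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Reduce the 30 WDBC features to a single independent component, then
# classify benign vs. malignant with k-NN under 10-fold cross-validation.
X, y = load_breast_cancer(return_X_y=True)
clf = make_pipeline(StandardScaler(),
                    FastICA(n_components=1, random_state=0),
                    KNeighborsClassifier(n_neighbors=5))
print(cross_val_score(clf, X, y, cv=10).mean())
```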

  9. A promising tool to achieve chemical accuracy for density functional theory calculations on Y-NO homolysis bond dissociation energies.

    PubMed

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol−1) is achieved for all 92 calculated organic Y-NO homolysis BDE calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol−1 to 0.15 and 0.18 kcal·mol−1, respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended to be used for minimizing the computational cost and to expand the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol−1. This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules.
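
    A minimal radial-basis-function network of the kind used for the correction step: a Gaussian hidden layer with fixed centers and a least-squares output layer. The descriptors and targets below are synthetic stand-ins, and the SOFM-based descriptor selection is not reproduced here.

```python
import numpy as np

# Fit (experiment - DFT)-style residuals from descriptor vectors with a
# Gaussian-RBF hidden layer and a linear least-squares output layer.
rng = np.random.default_rng(0)
X = rng.standard_normal((92, 5))                       # descriptor vectors
y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(92)    # residual targets

centers = X[rng.choice(len(X), 20, replace=False)]     # fixed RBF centers
d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
sigma = np.median(d)                                   # simple width heuristic
Phi = np.exp(-d**2 / (2.0 * sigma**2))                 # hidden-layer activations
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)            # output weights

rmse = np.sqrt(np.mean((Phi @ w - y) ** 2))
print(f"training RMSE of the RBF correction: {rmse:.3f}")
```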

  10. A Promising Tool to Achieve Chemical Accuracy for Density Functional Theory Calculations on Y-NO Homolysis Bond Dissociation Energies

    PubMed Central

    Li, Hong Zhi; Hu, Li Hong; Tao, Wei; Gao, Ting; Li, Hui; Lu, Ying Hua; Su, Zhong Min

    2012-01-01

    A DFT-SOFM-RBFNN method is proposed to improve the accuracy of DFT calculations on Y-NO (Y = C, N, O, S) homolysis bond dissociation energies (BDE) by combining density functional theory (DFT) and artificial intelligence/machine learning methods, which consist of self-organizing feature mapping neural networks (SOFMNN) and radial basis function neural networks (RBFNN). A descriptor refinement step including SOFMNN clustering analysis and correlation analysis is implemented. The SOFMNN clustering analysis is applied to classify descriptors, and the representative descriptors in the groups are selected as neural network inputs according to their closeness to the experimental values through correlation analysis. Redundant descriptors and intuitively biased choices of descriptors can be avoided by this newly introduced step. Using RBFNN calculation with the selected descriptors, chemical accuracy (≤1 kcal·mol−1) is achieved for all 92 calculated organic Y-NO homolysis BDE calculated by DFT-B3LYP, and the mean absolute deviations (MADs) of the B3LYP/6-31G(d) and B3LYP/STO-3G methods are reduced from 4.45 and 10.53 kcal·mol−1 to 0.15 and 0.18 kcal·mol−1, respectively. The improved results for the minimal basis set STO-3G reach the same accuracy as those of 6-31G(d), and thus B3LYP calculation with the minimal basis set is recommended to be used for minimizing the computational cost and to expand the applications to large molecular systems. Further extrapolation tests are performed with six molecules (two containing Si-NO bonds and two containing fluorine), and the accuracy of the tests was within 1 kcal·mol−1. This study shows that DFT-SOFM-RBFNN is an efficient and highly accurate method for Y-NO homolysis BDE. The method may be used as a tool to design new NO carrier molecules. PMID:22942689

  11. A Fast MoM Solver (GIFFT) for Large Arrays of Microstrip and Cavity-Backed Antennas

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fasenfest, B J; Capolino, F; Wilton, D

    2005-02-02

    A straightforward numerical analysis of large arrays of arbitrary contour (and possibly missing elements) requires large memory storage and long computation times. Several techniques are currently under development to reduce this cost. One such technique is the GIFFT (Green's function interpolation and FFT) method discussed here, which belongs to the class of fast solvers for large structures. This method uses a modification of the standard AIM approach [1] that takes into account the reusability properties of matrices that arise from identical array elements. If the array consists of planar conducting bodies, the array elements are meshed using standard subdomain basis functions, such as the RWG basis. The Green's function is then projected onto a sparse regular grid of separable interpolating polynomials. This grid can then be used in a 2D or 3D FFT to accelerate the matrix-vector product used in an iterative solver [2]. The method has been proven to greatly reduce solve time by speeding up the matrix-vector product computation. The GIFFT approach also reduces fill time and memory requirements, since only the near element interactions need to be calculated exactly. The present work extends GIFFT to layered material Green's functions and multiregion interactions via slots in ground planes. In addition, a preconditioner is implemented to greatly reduce the number of iterations required for a solution. The general scheme of the GIFFT method is reported in [2]; this contribution is limited to presenting new results for array antennas made of slot-excited patches and cavity-backed patch antennas.
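
    The core acceleration can be shown in one dimension: a translation-invariant kernel on a regular grid yields a Toeplitz interaction matrix, whose matrix-vector product costs O(N log N) via circulant embedding and FFTs. The kernel below is a toy stand-in for the interpolated Green's function, not the layered-medium kernels of the paper.

```python
import numpy as np

N = 512
g = 1.0 / (1.0 + np.arange(N))                    # g(|i - j|), toy 1-D kernel
x = np.random.default_rng(1).standard_normal(N)

# Embed the symmetric Toeplitz matrix in a circulant one of size 2N,
# then multiply by diagonalizing the circulant with the FFT.
c = np.concatenate([g, [0.0], g[:0:-1]])
y = np.fft.ifft(np.fft.fft(c) *
                np.fft.fft(np.concatenate([x, np.zeros(N)])))[:N].real

T = g[np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])]  # dense check
print(np.allclose(y, T @ x))                      # True: same product, less work
```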

  12. Impact of sensor's point spread function on land cover characterization: Assessment and deconvolution

    USGS Publications Warehouse

    Huang, C.; Townshend, J.R.G.; Liang, S.; Kalluri, S.N.V.; DeFries, R.S.

    2002-01-01

    Measured and modeled point spread functions (PSF) of sensor systems indicate that a significant portion of the recorded signal of each pixel of a satellite image originates from outside the area represented by that pixel. This hinders the ability to derive surface information from satellite images on a per-pixel basis. In this study, the impact of the PSF of the Moderate Resolution Imaging Spectroradiometer (MODIS) 250 m bands was assessed using four images representing different landscapes. Experimental results showed that though differences between pixels derived with and without PSF effects were small on the average, the PSF generally brightened dark objects and darkened bright objects. This impact of the PSF lowered the performance of a support vector machine (SVM) classifier by 5.4% in overall accuracy and increased the overall root mean square error (RMSE) by 2.4% in estimating subpixel percent land cover. An inversion method based on the known PSF model reduced the signals originating from surrounding areas by as much as 53%. This method differs from traditional PSF inversion deconvolution methods in that the PSF was adjusted with lower weighting factors for signals originating from neighboring pixels than those specified by the PSF model. By using this deconvolution method, the lost classification accuracy due to residual impact of PSF effects was reduced to only 1.66% in overall accuracy. The increase in the RMSE of estimated subpixel land cover proportions due to the residual impact of PSF effects was reduced to 0.64%. Spatial aggregation also effectively reduced the errors in estimated land cover proportion images. About 50% of the estimation errors were removed after applying the deconvolution method and aggregating derived proportion images to twice their dimensional pixel size.

  13. Construction of energy-stable projection-based reduced order models

    DOE PAGES

    Kalashnikova, Irina; Barone, Matthew F.; Arunajatesan, Srinivasan; ...

    2014-12-15

    Our paper aims to unify and extend several approaches for building stable projection-based reduced order models (ROMs) using the energy method and the concept of “energy-stability”. Attention is focused on linear time-invariant (LTI) systems. First, an approach for building energy stable Galerkin ROMs for linear hyperbolic or incompletely parabolic systems of partial differential equations (PDEs) using continuous projection is proposed. The key idea is to apply to the system a transformation induced by the Lyapunov function for the system, and to build the ROM in the transformed variables. The result of this procedure will be a ROM that is energy-stable for any choice of reduced basis. It is shown that, for many PDE systems, the desired transformation is induced by a special inner product, termed the “symmetry inner product”. Next, attention is turned to building energy-stable ROMs via discrete projection. A discrete counterpart of the continuous symmetry inner product, termed the “Lyapunov inner product”, is derived. Moreover, it is shown that the Lyapunov inner product can be computed in a black-box fashion for a stable LTI system arising from the discretization of a system of PDEs in space. Projection in this inner product guarantees a ROM that is energy-stable, again for any choice of reduced basis. Connections between the Lyapunov inner product and the inner product induced by the balanced truncation algorithm are made. Comparisons between the symmetry inner product and the Lyapunov inner product are also made. Performance of ROMs constructed using these inner products is evaluated on several benchmark test cases.
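
    The discrete-projection recipe above can be sketched in a few lines: solve a Lyapunov equation for a stable LTI operator, use its solution P as a weighted inner product, and project. The Python fragment below is a minimal illustration under that reading of the abstract, not the authors' code; the random test system and the random reduced basis are assumptions.

```python
# Sketch of the "Lyapunov inner product" idea for discrete-projection ROMs,
# assuming a stable LTI system dx/dt = A x. Sizes and data are illustrative.
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

rng = np.random.default_rng(0)
n, k = 50, 5

# A stable test matrix (shift a random matrix so all eigenvalues sit in the left half-plane).
M = rng.standard_normal((n, n))
A = M - (np.max(np.linalg.eigvals(M).real) + 1.0) * np.eye(n)

# Solve A^T P + P A = -Q for the matrix P defining the inner product.
Q = np.eye(n)
P = solve_continuous_lyapunov(A.T, -Q)

# Any reduced basis V (here: random orthonormal columns).
V, _ = np.linalg.qr(rng.standard_normal((n, k)))

# Galerkin projection in the P-inner product: A_r = (V^T P V)^{-1} V^T P A V.
Ar = np.linalg.solve(V.T @ P @ V, V.T @ P @ A @ V)

# Energy stability: the reduced operator's eigenvalues stay in the left half-plane.
print(np.max(np.linalg.eigvals(Ar).real))  # expected to be < 0
```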

  14. Pre-crash scenarios at road junctions: A clustering method for car crash data.

    PubMed

    Nitsche, Philippe; Thomas, Pete; Stuetz, Rainer; Welsh, Ruth

    2017-10-01

    Given the recent advancements in autonomous driving functions, one of the main challenges is safe and efficient operation in complex traffic situations such as road junctions. There is a need for comprehensive testing, either in virtual simulation environments or on real-world test tracks. This paper presents a novel data analysis method including the preparation, analysis and visualization of car crash data, to identify the critical pre-crash scenarios at T- and four-legged junctions as a basis for testing the safety of automated driving systems. The presented method employs k-medoids to cluster historical junction crash data into distinct partitions and then applies the association rules algorithm to each cluster to specify the driving scenarios in more detail. The dataset used consists of 1056 junction crashes in the UK, which were exported from the in-depth "On-the-Spot" database. The study resulted in thirteen crash clusters for T-junctions and six crash clusters for crossroads. Association rules revealed common crash characteristics, which were the basis for the scenario descriptions. The results support existing findings on road junction accidents and provide benchmark situations for safety performance tests in order to reduce the possible number of parameter combinations. Copyright © 2017 Elsevier Ltd. All rights reserved.
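
    To make the clustering step concrete, the sketch below implements a plain k-medoids loop (assign points to the nearest medoid, then move each medoid to the cluster member with the lowest total within-cluster distance). It is a generic illustration, not the authors' pipeline; the toy data, the value of k, and the Euclidean distance are assumptions.

```python
# Minimal k-medoids in the spirit of the crash-data clustering step.
import numpy as np

def k_medoids(X, k, n_iter=100, seed=0):
    """Plain k-medoids: alternate nearest-medoid assignment and medoid update."""
    rng = np.random.default_rng(seed)
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)   # pairwise distances
    medoids = rng.choice(len(X), size=k, replace=False)
    for _ in range(n_iter):
        labels = np.argmin(D[:, medoids], axis=1)                # assign to nearest medoid
        new_medoids = medoids.copy()
        for j in range(k):
            members = np.flatnonzero(labels == j)
            if members.size:
                cost = D[np.ix_(members, members)].sum(axis=1)   # total distance to peers
                new_medoids[j] = members[np.argmin(cost)]        # best member becomes medoid
        if np.array_equal(new_medoids, medoids):
            break
        medoids = new_medoids
    return medoids, np.argmin(D[:, medoids], axis=1)

X = np.random.default_rng(1).standard_normal((200, 4))  # stand-in for encoded crash records
medoids, labels = k_medoids(X, k=6)
print(medoids, np.bincount(labels))
```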

  15. Algebraic solution for the forward displacement analysis of the general 6-6 stewart mechanism

    NASA Astrophysics Data System (ADS)

    Wei, Feng; Wei, Shimin; Zhang, Ying; Liao, Qizheng

    2016-01-01

    The solution for the forward displacement analysis (FDA) of the general 6-6 Stewart mechanism (i.e., the connection points of the moving and fixed platforms are not restricted to lying in a plane) has been extensively studied, but the efficiency of the solution remains to be effectively addressed. To this end, an algebraic elimination method is proposed for the FDA of the general 6-6 Stewart mechanism. The kinematic constraint equations are built using conformal geometric algebra (CGA). The kinematic constraint equations are transformed by a substitution of variables into seven equations with seven unknown variables. According to the characteristics of anti-symmetric matrices, the aforementioned seven equations can be further transformed into seven equations with four unknown variables by a substitution of variables using the Gröbner basis. Its elimination weight is increased by changing the degree of one variable, and sixteen equations with four unknown variables can be obtained using the Gröbner basis. A 40th-degree univariate polynomial equation is derived by constructing a relatively small-sized 9×9 Sylvester resultant matrix. Finally, two numerical examples are employed to verify the proposed method. The results indicate that the proposed method can effectively improve the efficiency of the solution and reduce the computational burden because of the small-sized resultant matrix.
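
    The elimination idea (transform the system, then eliminate variables until a univariate polynomial remains) can be tried at toy scale with a computer algebra system. The fragment below uses SymPy's Gröbner basis routine on two invented polynomials; it only illustrates lexicographic elimination, not the paper's CGA formulation or its 40th-degree polynomial.

```python
# Toy illustration of variable elimination with a Groebner basis (SymPy).
from sympy import symbols, groebner

x, y = symbols('x y')
eqs = [x**2 + y**2 - 1, x - y**2]   # invented polynomials for illustration

# A lexicographic Groebner basis with x > y eliminates x: the last basis
# element is a univariate polynomial in y alone.
G = groebner(eqs, x, y, order='lex')
print(G.exprs[-1])  # y**4 + y**2 - 1
```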

  16. Increasing farmers' adoption of agricultural index insurance: The search for a better index

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, C. P.

    2015-12-01

    Weather index insurance promises to provide farmers with financial resilience when struck by adverse weather conditions, owing to its minimal moral hazard, low transaction costs, and swift compensation. Despite these advantages, index insurance has so far seen low levels of adoption. One of the major causes is the presence of "basis risk"—the risk of getting an insurance payoff that falls short of the actual losses. One source of this basis risk is production basis risk—the probability that the selected weather indexes and their thresholds do not correspond to actual damages. Here, we investigate how to reduce this production basis risk, using current knowledge in non-linear analysis and stochastic modeling from the fields of ecology and hydrology. We demonstrate how the inclusion of rainfall stochasticity can reduce production basis risk while identifying events that do not need to be insured. Through these findings, we show how much farmers' adoption of agricultural index insurance can be improved under different design contexts.

  17. Scattering by a groove in an impedance plane

    NASA Technical Reports Server (NTRS)

    Bindiganavale, Sunil; Volakis, John L.

    1993-01-01

    An analysis of two-dimensional scattering from a narrow groove in an impedance plane is presented. The groove is represented by an impedance surface, and the problem reduces to that of scattering from an impedance strip in an otherwise uniform impedance plane. On the basis of this model, appropriate integral equations are constructed using a form of the impedance plane Green's functions involving rapidly convergent integrals. The integral equations are solved by introducing a single basis representation of the equivalent current on the narrow impedance insert. Both transverse electric (TE) and transverse magnetic (TM) polarizations are treated. The resulting solution is validated by comparison with results from the standard boundary integral method (BIM) and a high frequency solution. It is found that the presented solution for narrow impedance inserts can be used in conjunction with the high frequency solution for the characterization of impedance inserts of any given width.

  18. Reinforcement Learning with Orthonormal Basis Adaptation Based on Activity-Oriented Index Allocation

    NASA Astrophysics Data System (ADS)

    Satoh, Hideki

    An orthonormal basis adaptation method for function approximation was developed and applied to reinforcement learning with multi-dimensional continuous state space. First, a basis used for linear function approximation of a control function is set to an orthonormal basis. Next, basis elements with small activities are replaced with other candidate elements as learning progresses. As this replacement is repeated, the number of basis elements with large activities increases. Example chaos control problems for multiple logistic maps were solved, demonstrating that the method can modify a basis while preserving orthonormality in response to changes in the environment, thereby improving the performance of reinforcement learning and eliminating the adverse effects of redundant noisy states.

  19. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing

    PubMed Central

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery has gained wide attention for its significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing subsequent computing demands, t-Distributed Stochastic Neighbor Embedding is adopted. At last, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed method based on image processing achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery. PMID:27711246
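
    The tail of this pipeline (dimensionality reduction followed by a probabilistic neural network) can be sketched as below. The random feature matrix stands in for the bi-spectrum/SURF stages, which are not reproduced here, and the small Parzen-window classifier is a generic PNN stand-in rather than the authors' implementation.

```python
# Sketch of the t-SNE + PNN tail of the fault-diagnosis pipeline.
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
features = rng.standard_normal((120, 64))          # 120 samples, 64-D "fault features"
labels = np.repeat([0, 1, 2], 40)                  # three hypothetical fault classes

# t-SNE embeds all samples jointly (it has no out-of-sample transform).
embedded = TSNE(n_components=2, random_state=0).fit_transform(features)

def pnn_predict(train_x, train_y, test_x, sigma=1.0):
    """PNN as a Parzen-window classifier: average Gaussian kernels per class."""
    d2 = ((test_x[:, None, :] - train_x[None, :, :]) ** 2).sum(-1)
    k = np.exp(-d2 / (2 * sigma**2))
    classes = np.unique(train_y)
    scores = np.stack([k[:, train_y == c].mean(axis=1) for c in classes], axis=1)
    return classes[np.argmax(scores, axis=1)]

# Hold out some samples within the joint embedding for a quick check.
test_idx = rng.choice(len(embedded), size=30, replace=False)
train_mask = np.ones(len(embedded), dtype=bool)
train_mask[test_idx] = False
pred = pnn_predict(embedded[train_mask], labels[train_mask], embedded[test_idx])
print((pred == labels[test_idx]).mean())
```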

  20. Fault Diagnosis for Rotating Machinery: A Method based on Image Processing.

    PubMed

    Lu, Chen; Wang, Yang; Ragulskis, Minvydas; Cheng, Yujie

    2016-01-01

    Rotating machinery is one of the most typical types of mechanical equipment and plays a significant role in industrial applications. Condition monitoring and fault diagnosis of rotating machinery has gained wide attention for its significance in preventing catastrophic accidents and guaranteeing sufficient maintenance. With the development of science and technology, fault diagnosis methods based on multiple disciplines are becoming the focus in the field of fault diagnosis of rotating machinery. This paper presents a multi-discipline method based on image processing for fault diagnosis of rotating machinery. Different from traditional analysis methods in one-dimensional space, this study employs computing methods from the field of image processing to realize automatic feature extraction and fault diagnosis in a two-dimensional space. The proposed method mainly includes the following steps. First, the vibration signal is transformed into a bi-spectrum contour map utilizing bi-spectrum technology, which provides a basis for the following image-based feature extraction. Then, an emerging approach in the field of image processing for feature extraction, speeded-up robust features, is employed to automatically extract fault features from the transformed bi-spectrum contour map and finally form a high-dimensional feature vector. To reduce the dimensionality of the feature vector, thus highlighting the main fault features and reducing subsequent computing demands, t-Distributed Stochastic Neighbor Embedding is adopted. At last, a probabilistic neural network is introduced for fault identification. Two typical types of rotating machinery, an axial piston hydraulic pump and a self-priming centrifugal pump, are selected to demonstrate the effectiveness of the proposed method. Results show that the proposed method based on image processing achieves high accuracy, thus providing a highly effective means of fault diagnosis for rotating machinery.

  1. Thermal highly porous insulation materials made of mineral raw materials

    NASA Astrophysics Data System (ADS)

    Mestnikov, A.

    2015-01-01

    The main objective of the study is to create insulating foam based on modified mineral binders with rapid hardening. The results of experimental studies of the composition and properties of insulating foam on the basis of rapidly hardening Portland cement (PC) and a gypsum binder composite are presented in the article. The article proposes technological methods for producing insulating foamed concrete and placing it into permanent shuttering wall enclosures in monolithic-frame construction and individual energy-efficient residential buildings, thereby reducing foam shrinkage and improving crack resistance.

  2. Satisfying positivity requirement in the Beyond Complex Langevin approach

    NASA Astrophysics Data System (ADS)

    Wyrzykowski, Adam; Ruba, Błażej

    2018-03-01

    The problem of finding a positive distribution which corresponds to a given complex density is studied. By the requirement that the moments of the positive distribution and of the complex density are equal, one can reduce the problem to solving the matching conditions. These conditions are a set of quadratic equations, so the Gröbner basis method was used to find their solutions when the problem is restricted to a few lowest-order moments. For a Gaussian complex density, these approximate solutions are compared with the exact solution, which is known in this special case.

  3. Highly excited and exotic meson spectrum from dynamical lattice QCD.

    PubMed

    Dudek, Jozef J; Edwards, Robert G; Peardon, Michael J; Richards, David G; Thomas, Christopher E

    2009-12-31

    Using a new quark-field construction algorithm and a large variational basis of operators, we extract a highly excited isovector meson spectrum on dynamical anisotropic lattices. We show how carefully constructed operators can be used to reliably identify the continuum spin of extracted states, overcoming the reduced cubic symmetry of the lattice. Using this method we extract, with confidence, excited states, states with exotic quantum numbers (0+-, 1-+, and 2+-), and states of high spin, including, for the first time in lattice QCD, spin-four states.

  4. A new electromagnetic NDI-technique based on the measurement of source-sample reaction forces

    NASA Astrophysics Data System (ADS)

    Fitzpatrick, G. L.; Skaugset, R. L.; Shih, W. C. L.

    2001-04-01

    Faraday's law of induction, Lenz's law, the Lorentz force law and Newton's third law, taken together, ensure that sources (e.g., coil sources) of time-dependent electromagnetic fields and nearby "nonmagnetic" electrical conductors (e.g., aluminum) always experience mutually repulsive (source-conductor) forces. This fact forms the basis for a new method for detecting cracks and corrosion in (aging) multi-layer airframes. The presence of cracks or corrosion (e.g., material thinning) in these structures is observed to reduce (second-harmonic) source-conductor reaction forces.

  5. Fly-by-Wireless Update

    NASA Technical Reports Server (NTRS)

    Studor, George

    2010-01-01

    The presentation reviews what is meant by the term 'fly-by-wireless', common problems and motivation, provides recent examples, and examines NASA's future and basis for collaboration. The vision is to minimize cables and connectors and increase functionality across the aerospace industry by providing reliable, lower cost, modular, and higher performance alternatives to wired data connectivity to benefit the entire vehicle/program life-cycle. Focus areas are system engineering and integration methods to reduce cables and connectors, vehicle provisions for modularity and accessibility, and a 'tool box' of alternatives to wired connectivity.

  6. Characteristics of Perforated Diffusers at Free-stream Mach Number 1.90

    NASA Technical Reports Server (NTRS)

    Hunczak, Henry R; Kremzier, Emil J

    1950-01-01

    An investigation was conducted at Mach number 1.90 to determine pressure recovery and mass-flow characteristics of series of perforated convergent-divergent supersonic diffusers. Pressure recoveries as high as 96 percent were obtained, but at reduced mass flows through the diffuser. Theoretical considerations of effect of perforation distribution on shock stability in converging section of diffuser are presented and correlated with experimental data. A method of estimating relative importance of pressure recovery and mass flow on internal thrust coefficient basis is given and a comparison of various diffusers investigated is made.

  7. Urinary tract infection in the newborn and the infant: state of the art.

    PubMed

    Cataldi, Luigi; Zaffanello, Marco; Gnarra, Maria; Fanos, Vassilios

    2010-10-01

    Urinary tract infection is one of the most common causes of infection in newborns. Obtaining a urinary tract infection (UTI) diagnosis on the basis of clinical findings alone is frequently difficult; however, because the pediatrician's goal is to reduce the risk of renal scarring, prompt diagnosis and treatment are of extreme importance. The key instrument for the diagnosis of UTIs today is the urine culture. In practice, however, caregivers and investigators are increasingly demanding fast and cheap methods for a rapid and effective diagnosis.

  8. CaI and SrI molecules for iodine determination by high-resolution continuum source graphite furnace molecular absorption spectrometry: Greener molecules for practical application.

    PubMed

    Zanatta, Melina Borges Teixeira; Nakadi, Flávio Venâncio; da Veiga, Márcia Andreia Mesquita Silva

    2018-03-01

    A new method to determine iodine in drug samples by high-resolution continuum source graphite furnace molecular absorption spectrometry (HR-CS GF MAS) has been developed. The method measures the molecular absorption of a diatomic molecule, CaI or SrI (less toxic molecule-forming reagents), at 638.904 or 677.692 nm, respectively, and uses a mixture containing 5 μg of Pd and 0.5 μg of Mg as chemical modifier. The method employs pyrolysis temperatures of 1000 and 800 °C and vaporization temperatures of 2300 and 2400 °C for CaI and SrI, respectively. The optimized amounts of Ca and Sr as molecule-forming reagents are 100 and 150 µg, respectively. On the basis of interference studies, even small chlorine concentrations reduce CaI and SrI absorbance significantly. The developed method was used to analyze different commercial drug samples, namely thyroid hormone pills with three different iodine amounts (15.88, 31.77, and 47.66 µg) and one liquid drug with 1% m/v active iodine in their compositions. The results agreed with the values reported by the manufacturers (95% confidence level) regardless of whether CaI or SrI was determined. Therefore, the developed method is useful for iodine determination on the basis of CaI or SrI molecular absorption. Copyright © 2017 Elsevier B.V. All rights reserved.

  9. Monitoring the size and protagonists of the drug market: combining supply and demand data sources and estimates.

    PubMed

    Rossi, Carla

    2013-06-01

    The size of the illicit drug market is an important indicator to assess the impact on society of an important part of the illegal economy and to evaluate drug policy and law enforcement interventions. The extent of illicit drug use and of the drug market can essentially only be estimated by indirect methods based on indirect measures and on data from various sources, such as administrative data sets and surveys. The combined use of several methodologies and data sets makes it possible to reduce the biases and inaccuracies of estimates obtained on the basis of each of them separately. This approach has been applied to Italian data. The estimation methods applied are capture-recapture methods with latent heterogeneity and multiplier methods. Several data sets have been used, both administrative and survey data sets. First, the retail dealer prevalence has been estimated on the basis of administrative data, then the user prevalence by multiplier methods. Using information about the behaviour of dealers and consumers from survey data, the average amount of a substance used or sold and the average unit cost have been estimated, which allows estimating the size of the drug market. The estimates have been obtained using a supply-side approach and a demand-side approach and have been compared. These results are in turn used for estimating the interception rate for the different substances in terms of the value of the substance seized with respect to the total value of the substance to be sold at retail prices.
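
    The multiplier logic reduces to simple arithmetic once the component estimates are available. A hypothetical example, with every number invented for illustration:

```python
# Back-of-the-envelope multiplier estimate of market size. All figures below
# are invented; real inputs come from capture-recapture prevalence estimates,
# surveys, and seizure records.
users = 250_000                  # estimated user prevalence
grams_per_user_year = 60.0       # average annual amount per user (survey-based)
retail_price_per_gram = 40.0     # average retail unit price

market_value = users * grams_per_user_year * retail_price_per_gram
seized_value = 45_000_000.0      # retail value of the substance seized
# One simple convention: seized value relative to the total value destined for retail.
interception_rate = seized_value / (market_value + seized_value)
print(f"market ~ {market_value:,.0f}; interception ~ {interception_rate:.1%}")
```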

  10. Reduction of Dynamic Loads in Mine Lifting Installations

    NASA Astrophysics Data System (ADS)

    Kuznetsov, N. K.; Eliseev, S. V.; Perelygina, A. Yu

    2018-01-01

    The article addresses the problem of reducing the dynamic loads that arise in transitional operating modes of mine lifting installations, which lead to heavy oscillating motions of lifting vessels and reduced operating efficiency and reliability. Known methods and means of reducing dynamic loads and oscillating motions in similar equipment are analysed. It is shown that an approach based on the concept of inverse problems of dynamics can be an effective method for solving this problem. The article describes the design model of a one-ended lifting installation in the form of a two-mass oscillation system, in which the inertial elements are the mass of the lifting vessel and the reduced mass of the engine, reducer, drum and pulley. A simplified mathematical model of this system is given, together with the results of a study of the efficiency of an active method of reducing the dynamic loads of the lifting installation on the basis of the concept of inverse problems of dynamics.

  11. Controlled method of reducing electrophoretic mobility of macromolecules, particles, or cells

    NASA Technical Reports Server (NTRS)

    Vanalstine, James M. (Inventor)

    1992-01-01

    A method of reducing electrophoretic mobility of macromolecules, particles, cells, and other substances is provided which comprises interacting in a conventional electrophoretic separating procedure, the substances with a polymer-linked affinity compound comprised of a hydrophilic neutral polymer such as polyethylene glycol bound to a second component such as a hydrophobic compound, an immunocompound such as an antibody or antibody active fragment, or a ligand such as a hormone, drug, antigen, or a hapten. The reduction of electrophoretic mobility achieved is directly proportional to the concentration of the polymer-linked affinity compound employed, and such reduction can comprise up to 100 percent for particular particles and cells. The present invention is advantageous in that electrophoretic separation can now be achieved for substances whose native surface charge structure had prevented them from being separated by normal electrophoretic means. Depending on the affinity component utilized, separation can be achieved on the basis of the specific/irreversible, specific/reversible, semi-specific/reversible, relatively nonspecific/reversible, or relatively nonspecific/irreversible ligand-substance interactions.

  12. Identification of Reduced-Order Thermal Therapy Models Using Thermal MR Images: Theory and Validation

    PubMed Central

    2013-01-01

    In this paper, we develop and validate a method to identify computationally efficient site- and patient-specific models of ultrasound thermal therapies from MR thermal images. The models of the specific absorption rate of the transduced energy and the temperature response of the therapy target are identified in the reduced basis of proper orthogonal decomposition of thermal images, acquired in response to a mild thermal test excitation. The method permits dynamic reidentification of the treatment models during the therapy by recursively utilizing newly acquired images. Such adaptation is particularly important during high-temperature therapies, which are known to substantially and rapidly change tissue properties and blood perfusion. The developed theory was validated for the case of focused ultrasound heating of a tissue phantom. The experimental and computational results indicate that the developed approach produces accurate low-dimensional treatment models despite temporal and spatial noises in MR images and slow image acquisition rate. PMID:22531754
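
    The reduced-basis construction at the heart of this approach is a standard snapshot POD, which can be sketched as follows. The image size, snapshot count, and energy threshold are illustrative assumptions, not values from the paper.

```python
# Minimal snapshot-POD sketch: extract a reduced basis from vectorized
# thermal images via the SVD.
import numpy as np

rng = np.random.default_rng(0)
snapshots = rng.standard_normal((4096, 40))   # 40 vectorized 64x64 "thermal images"

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
r = int(np.searchsorted(energy, 0.99)) + 1    # retain 99% of snapshot energy
basis = U[:, :r]                              # reduced POD basis

# Treatment models are then identified/evolved in these r coordinates:
coeffs = basis.T @ snapshots                  # low-dimensional representation
print(r, coeffs.shape)
```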

  13. Relative effectiveness of worker safety and health training methods.

    PubMed

    Burke, Michael J; Sarpy, Sue Ann; Smith-Crowe, Kristin; Chan-Serafin, Suzanne; Salvador, Rommel O; Islam, Gazi

    2006-02-01

    We sought to determine the relative effectiveness of different methods of worker safety and health training aimed at improving safety knowledge and performance and reducing negative outcomes (accidents, illnesses, and injuries). Ninety-five quasi-experimental studies (n=20991) were included in the analysis. Three types of intervention methods were distinguished on the basis of learners' participation in the training process: least engaging (lecture, pamphlets, videos), moderately engaging (programmed instruction, feedback interventions), and most engaging (training in behavioral modeling, hands-on training). As training methods became more engaging (i.e., requiring trainees' active participation), workers demonstrated greater knowledge acquisition, and reductions were seen in accidents, illnesses, and injuries. All methods of training produced meaningful behavioral performance improvements. Training involving behavioral modeling, a substantial amount of practice, and dialogue is generally more effective than other methods of safety and health training. The present findings challenge the current emphasis on more passive computer-based and distance training methods within the public health workforce.

  14. A Bayesian hierarchical model to detect differentially methylated loci from single nucleotide resolution sequencing data

    PubMed Central

    Feng, Hao; Conneely, Karen N.; Wu, Hao

    2014-01-01

    DNA methylation is an important epigenetic modification that has essential roles in cellular processes including gene regulation, development and disease and is widely dysregulated in most types of cancer. Recent advances in sequencing technology have enabled the measurement of DNA methylation at single nucleotide resolution through methods such as whole-genome bisulfite sequencing and reduced representation bisulfite sequencing. In DNA methylation studies, a key task is to identify differences under distinct biological contexts, for example, between tumor and normal tissue. A challenge in sequencing studies is that the number of biological replicates is often limited by the costs of sequencing. The small number of replicates leads to unstable variance estimation, which can reduce accuracy to detect differentially methylated loci (DML). Here we propose a novel statistical method to detect DML when comparing two treatment groups. The sequencing counts are described by a lognormal-beta-binomial hierarchical model, which provides a basis for information sharing across different CpG sites. A Wald test is developed for hypothesis testing at each CpG site. Simulation results show that the proposed method yields improved DML detection compared to existing methods, particularly when the number of replicates is low. The proposed method is implemented in the Bioconductor package DSS. PMID:24561809
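
    A bare-bones version of the per-site test can be written down directly; the sketch below uses a simple pooled-proportion Wald statistic and omits the lognormal-beta-binomial shrinkage that gives the actual DSS method its stability at low replicate counts.

```python
# Simplified per-CpG Wald test comparing methylation proportions between two
# groups. The counts are hypothetical; this is a stand-in, not the DSS model.
import numpy as np
from scipy.stats import norm

def wald_test(meth1, total1, meth2, total2, eps=1e-3):
    p1 = (meth1.sum() + eps) / (total1.sum() + 2 * eps)   # pooled proportion, group 1
    p2 = (meth2.sum() + eps) / (total2.sum() + 2 * eps)   # pooled proportion, group 2
    var = p1 * (1 - p1) / total1.sum() + p2 * (1 - p2) / total2.sum()
    w = (p1 - p2) / np.sqrt(var)
    return w, 2 * norm.sf(abs(w))                         # two-sided p-value

# (methylated reads, total reads) per replicate at one CpG site:
m1, t1 = np.array([18, 22, 25]), np.array([30, 35, 40])
m2, t2 = np.array([8, 10, 6]), np.array([30, 32, 28])
print(wald_test(m1, t1, m2, t2))
```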

  15. Optimized face recognition algorithm using radial basis function neural networks and its practical applications.

    PubMed

    Yoo, Sung-Hoon; Oh, Sung-Kwun; Pedrycz, Witold

    2015-09-01

    In this study, we propose a hybrid method of face recognition by using face region information extracted from the detected face region. In the preprocessing part, we develop a hybrid approach based on the Active Shape Model (ASM) and the Principal Component Analysis (PCA) algorithm. At this step, we use a CCD (Charge Coupled Device) camera to acquire a facial image by using AdaBoost and then Histogram Equalization (HE) is employed to improve the quality of the image. ASM extracts the face contour and image shape to produce a personal profile. Then we use a PCA method to reduce dimensionality of face images. In the recognition part, we consider the improved Radial Basis Function Neural Networks (RBF NNs) to identify a unique pattern associated with each person. The proposed RBF NN architecture consists of three functional modules realizing the condition phase, the conclusion phase, and the inference phase completed with the help of fuzzy rules coming in the standard 'if-then' format. In the formation of the condition part of the fuzzy rules, the input space is partitioned with the use of Fuzzy C-Means (FCM) clustering. In the conclusion part of the fuzzy rules, the connections (weights) of the RBF NNs are represented by four kinds of polynomials such as constant, linear, quadratic, and reduced quadratic. The values of the coefficients are determined by running a gradient descent method. The output of the RBF NNs model is obtained by running a fuzzy inference method. The essential design parameters of the network (including learning rate, momentum coefficient and fuzzification coefficient used by the FCM) are optimized by means of Differential Evolution (DE). The proposed P-RBF NNs (Polynomial based RBF NNs) are applied to facial recognition and its performance is quantified from the viewpoint of the output performance and recognition rate. Copyright © 2015 Elsevier Ltd. All rights reserved.

  16. An expanded calibration study of the explicitly correlated CCSD(T)-F12b method using large basis set standard CCSD(T) atomization energies.

    PubMed

    Feller, David; Peterson, Kirk A

    2013-08-28

    The effectiveness of the recently developed, explicitly correlated coupled cluster method CCSD(T)-F12b is examined in terms of its ability to reproduce atomization energies derived from complete basis set extrapolations of standard CCSD(T). Most of the standard method findings were obtained with aug-cc-pV7Z or aug-cc-pV8Z basis sets. For a few homonuclear diatomic molecules it was possible to push the basis set to the aug-cc-pV9Z level. F12b calculations were performed with the cc-pVnZ-F12 (n = D, T, Q) basis set sequence and were also extrapolated to the basis set limit using a Schwenke-style, parameterized formula. A systematic bias was observed in the F12b method with the (VTZ-F12/VQZ-F12) basis set combination. This bias resulted in the underestimation of reference values associated with small molecules (valence correlation energies < 0.5 E_h) and an even larger overestimation of atomization energies for bigger systems. Consequently, caution should be exercised in the use of F12b for high accuracy studies. Root mean square and mean absolute deviation error metrics for this basis set combination were comparable to complete basis set values obtained with standard CCSD(T) and the aug-cc-pVDZ through aug-cc-pVQZ basis set sequence. However, the mean signed deviation was an order of magnitude larger. Problems partially due to basis set superposition error were identified with second row compounds which resulted in a weak performance for the smaller VDZ-F12/VTZ-F12 combination of basis sets.
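
    For context, a generic two-point inverse-cube extrapolation of correlation energies (the kind of complete-basis-set estimate such studies build on) looks as follows; the paper's Schwenke-style formula replaces the fixed n^-3 form with fitted parameters, and the energies below are hypothetical.

```python
# Generic two-point n^-3 extrapolation of correlation energies to the
# complete-basis-set (CBS) limit. Input energies are invented.
def cbs_two_point(e_small, e_large, n_small, n_large):
    """E(n) = E_CBS + A / n^3  =>  solve the 2x2 system for E_CBS."""
    a = (e_small - e_large) / (n_small**-3 - n_large**-3)
    return e_large - a * n_large**-3

# e.g. quadruple-zeta (n=4) and quintuple-zeta (n=5) correlation energies, in hartree
print(cbs_two_point(-0.30512, -0.30871, 4, 5))
```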

  17. Adaptive radial basis function mesh deformation using data reduction

    NASA Astrophysics Data System (ADS)

    Gillebaart, T.; Blom, D. S.; van Zuijlen, A. H.; Bijl, H.

    2016-09-01

    Radial Basis Function (RBF) mesh deformation is one of the most robust mesh deformation methods available. Using the greedy (data reduction) method in combination with an explicit boundary correction results in an efficient method, as shown in the literature. However, to ensure the method remains robust, two issues are addressed: 1) how to ensure that the set of control points remains an accurate representation of the geometry in time and 2) how to use/automate the explicit boundary correction, while ensuring a high mesh quality. In this paper, we propose an adaptive RBF mesh deformation method, which ensures the set of control points always represents the geometry/displacement up to a certain (user-specified) criterion, by keeping track of the boundary error throughout the simulation and re-selecting when needed. As opposed to the unit displacement and prescribed displacement selection methods, the adaptive method is more robust, user-independent and efficient for the cases considered. Secondly, the analysis of a single high aspect ratio cell is used to formulate an equation for the correction radius needed, depending on the characteristics of the correction function used, maximum aspect ratio, minimum first cell height and boundary error. Based on this analysis, two new radial basis correction functions are derived and proposed. The proposed automated procedure is verified while varying the correction function, Reynolds number (and thus first cell height and aspect ratio) and boundary error. Finally, the parallel efficiency is studied for the two adaptive methods, unit displacement and prescribed displacement, for both the CPU and the memory formulations, with a 2D oscillating and translating airfoil with an oscillating flap, a 3D flexible locally deforming tube and a deforming wind turbine blade. Generally, the memory formulation requires less work (due to the large amount of work required for evaluating RBFs), but the parallel efficiency reduces due to the limited bandwidth available between CPU and memory. In terms of parallel efficiency/scaling the different studied methods perform similarly, with the greedy algorithm being the bottleneck. In terms of absolute computational work the adaptive methods are better for the cases studied due to their more efficient selection of the control points. By automating most of the RBF mesh deformation, a robust, efficient and almost user-independent mesh deformation method is presented.
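
    The greedy data-reduction step can be sketched independently of the full solver: fit an RBF interpolant on the current control points, evaluate the boundary error at all nodes, and add the worst node until a user-specified criterion is met. The Gaussian kernel, 1-D boundary, and tolerance below are illustrative assumptions.

```python
# Minimal greedy control-point selection for RBF boundary interpolation.
import numpy as np

def rbf_fit_eval(ctrl_x, ctrl_d, all_x, eps=2.0):
    phi = lambda r: np.exp(-(eps * r) ** 2)                 # Gaussian RBF
    A = phi(np.abs(ctrl_x[:, None] - ctrl_x[None, :]))
    w = np.linalg.solve(A, ctrl_d)                          # interpolation weights
    return phi(np.abs(all_x[:, None] - ctrl_x[None, :])) @ w

x = np.linspace(0.0, 1.0, 200)                              # boundary nodes
d = np.sin(2 * np.pi * x) * x                               # prescribed displacement

selected = [0, len(x) - 1]                                  # seed with the end points
for _ in range(len(x)):
    approx = rbf_fit_eval(x[selected], d[selected], x)
    err = np.abs(approx - d)
    if err.max() < 1e-4:                                    # user-specified criterion
        break
    selected.append(int(np.argmax(err)))                    # add the worst node (greedy)
print(len(selected), err.max())
```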

  18. Vibration Response Predictions for Heavy Panel Mounted Components from Panel Acreage Environment Specifications

    NASA Technical Reports Server (NTRS)

    Harrison, Phillip; Frady, Greg; Duvall, Lowery; Fulcher, Clay; LaVerde, Bruce

    2010-01-01

    The development of new launch vehicles in the Aerospace industry often relies on response measurements taken from previously developed vehicles during various stages of liftoff and ascent, and from wind tunnel models. These measurements include sound pressure levels, dynamic pressures in turbulent boundary layers and accelerations. Rigorous statistical scaling methods are applied to the data to derive new environments and estimate the performance of new skin panel structures. Scaling methods have proven to be reliable, particularly for designs similar to the vehicles used as the basis for scaling, and especially in regions of smooth acreage without exterior protuberances or heavy components mounted to the panel. To account for response attenuation of a panel-mounted component due to its apparent mass at higher frequencies, the vibroacoustics engineer often reduces the acreage vibration according to a weight ratio first suggested by Barrett. The accuracy of the reduction is reduced with increased weight of the panel-mounted component, and does not account for low-frequency amplification of the component/panel response as a system. A method is proposed that combines acreage vibration from scaling methods with finite element analysis to account for the frequency-dependent dynamics of heavy panel-mounted components. Since the acreage and mass-loaded skins respond to the same dynamic input pressure, such pressure may be eliminated in favor of a frequency-dependent scaling function applied to the acreage vibration to predict the mass-loaded panel response. The scaling function replaces the Barrett weight ratio, and contains all of the dynamic character of the loaded and unloaded skin panels. The solution simplifies for spatially uncorrelated and fully correlated input pressures. Since the prediction uses finite element models of the loaded and unloaded skins, a rich suite of response data are available to the design engineer, including interface forces, stress and strain, as well as acceleration and displacement. An extension of the method is also developed to incorporate the effect of a local protuberance near a heavy component. Acreage environments from traditional scaling methods with and without protuberance effects serve as the basis for the extension.

  19. Reduced nicotine product standards for combustible tobacco: building an empirical basis for effective regulation.

    PubMed

    Donny, Eric C; Hatsukami, Dorothy K; Benowitz, Neal L; Sved, Alan F; Tidey, Jennifer W; Cassidy, Rachel N

    2014-11-01

    Both the Tobacco Control Act in the U.S. and Article 9 of the Framework Convention on Tobacco Control enable governments to directly address the addictiveness of combustible tobacco by reducing nicotine through product standards. Although nicotine may have some harmful effects, the detrimental health effects of smoked tobacco are primarily due to non-nicotine constituents. Hence, the health effects of nicotine reduction would likely be determined by changes in behavior that result in changes in smoke exposure. Herein, we review the current evidence on nicotine reduction and discuss some of the challenges in establishing the empirical basis for regulatory decisions. To date, research suggests that very low nicotine content cigarettes produce a desirable set of outcomes, including reduced exposure to nicotine, reduced smoking, and reduced dependence, without significant safety concerns. However, much is still unknown, including the effects of gradual versus abrupt changes in nicotine content, effects in vulnerable populations, and impact on youth. A coordinated effort must be made to provide the best possible scientific basis for regulatory decisions. The outcome of this effort may provide the foundation for a novel approach to tobacco control that dramatically reduces the devastating health consequences of smoked tobacco. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Projected Hybrid Orbitals: A General QM/MM Method

    PubMed Central

    2015-01-01

    A projected hybrid orbital (PHO) method was described to model the covalent boundary in a hybrid quantum mechanical and molecular mechanical (QM/MM) system. The PHO approach can be used in ab initio wave function theory and in density functional theory with any basis set without introducing system-dependent parameters. In this method, a secondary basis set on the boundary atom is introduced to formulate a set of hybrid atomic orbitals. The primary basis set on the boundary atom used for the QM subsystem is projected onto the secondary basis to yield a representation that provides a good approximation to the electron-withdrawing power of the primary basis set to balance electronic interactions between QM and MM subsystems. The PHO method has been tested on a range of molecules and properties. Comparison with results obtained from QM calculations on the entire system shows that the present PHO method is a robust and balanced QM/MM scheme that preserves the structural and electronic properties of the QM region. PMID:25317748

  1. Using diffusion k-means for simple stellar population modeling of low S/N quasar host galaxy spectra

    NASA Astrophysics Data System (ADS)

    Mosby, Gregory; Tremonti, Christina A.; Hooper, Eric; Wolf, Marsha J.; Sheinis, Andrew; Richards, Joseph

    2016-01-01

    Quasar host galaxies (QHGs) represent a unique stage in galaxy evolution that can provide a glimpse into the relationship between an active supermassive black hole (SMBH) and its host galaxy. However, observing the hosts of high luminosity, unobscured quasars in the optical is complicated by the large ratio of quasar to host galaxy light. One strategy in optical spectroscopy is to use offset longslit observations of the host galaxy. This method allows the centers of QHGs to be analyzed apart from other regions of their host galaxies. But light from the accreting black hole's point spread function still enters the host galaxy observations, and where the contrast between the host and intervening quasar light is favorable, the host galaxy is faint, producing low signal-to-noise (S/N) data. This stymies traditional stellar population methods that might rely on high S/N features in galaxy spectra to recover key galaxy properties like its star formation history (SFH). In response to this challenge, we have developed a method of stellar population modeling using diffusion k-means (DFK) that can recover SFHs from rest frame optical data with S/N ~ 5 Å^-1. Specifically, we use DFK to cultivate a reduced stellar population basis set. This DFK basis set of four broad age bins is able to recover a range of SFHs. With an analytic description of the seeing, we can use this DFK basis set to simultaneously model the SFHs and the intervening quasar light of QHGs as well. We compare the results of this method with previous techniques using synthetic data and find that our new method has a clear advantage in recovering SFHs from QHGs. On average, the DFK basis set is just as accurate and decisively more precise. This new technique could be used to analyze other low S/N galaxy spectra like those from higher redshift or integral field spectroscopy surveys. This material is based upon work supported by the National Science Foundation under grant no. DGE-0718123 and the Advanced Opportunity fellowship program at the University of Wisconsin-Madison. This research was performed using the computer resources and assistance of the UW-Madison Center For High Throughput Computing (CHTC) in the Department of Computer Sciences.

  2. Generalized self-adjustment method for statistical mechanics of composite materials

    NASA Astrophysics Data System (ADS)

    Pan'kov, A. A.

    1997-03-01

    A new method is developed for the statistical mechanics of composite materials, the generalized self-adjustment method, which makes it possible to reduce the problem of predicting effective elastic properties of composites with random structures to the solution of two simpler "averaged" problems of an inclusion with transitional layers in a medium with the desired effective elastic properties. The inhomogeneous elastic properties and dimensions of the transitional layers take into account both the "approximate" order of mutual positioning and the variation in the dimensions and elastic properties of inclusions, through appropriate special averaged indicator functions of the random structure of the composite. A numerical calculation of averaged indicator functions and effective elastic characteristics is performed by the generalized self-adjustment method for a unidirectional fiberglass on the basis of various models of actual random structures in the plane of isotropy.

  3. Research on the method of information system risk state estimation based on clustering particle filter

    NASA Astrophysics Data System (ADS)

    Cui, Jia; Hong, Bei; Jiang, Xuepeng; Chen, Qinghua

    2017-05-01

    With the purpose of reinforcing the correlation analysis of risk assessment threat factors, a dynamic assessment method for safety risks based on particle filtering is proposed, which takes threat analysis as its core. Based on risk assessment standards, the method selects threat indicators, applies a particle filtering algorithm to calculate the influencing weight of the threat indicators, and determines information system risk levels by combining this with state estimation theory. In order to improve the computational efficiency of the particle filtering algorithm, the k-means clustering algorithm is introduced: by clustering all particles, the centroid of each cluster is taken as its representative, so as to reduce the computational load. Empirical results indicate that the method can reasonably embody the relations of mutual dependence and influence among risk elements. Under circumstances of limited information, it provides a scientific basis for formulating a risk management control strategy.
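
    The particle-reduction step can be illustrated in isolation: cluster the particle cloud with k-means and let each centroid carry its cluster's total weight. The toy state dimensions and cluster count below are assumptions, not values from the paper.

```python
# Sketch of k-means particle reduction for a particle filter.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
particles = rng.standard_normal((5000, 3))        # 5000 particles, 3-D risk state
weights = np.full(len(particles), 1 / len(particles))

km = KMeans(n_clusters=50, n_init=10, random_state=0).fit(particles)
centroids = km.cluster_centers_
# Each centroid inherits the total weight of its cluster.
centroid_w = np.bincount(km.labels_, weights=weights, minlength=50)

# Downstream predict/update steps now act on 50 representatives instead of
# 5000 particles, which is the claimed reduction in computational load.
print(centroids.shape, centroid_w.sum())
```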

  4. A spectral mimetic least-squares method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bochev, Pavel; Gerritsma, Marc

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  5. A spectral mimetic least-squares method

    DOE PAGES

    Bochev, Pavel; Gerritsma, Marc

    2014-09-01

    We present a spectral mimetic least-squares method for a model diffusion–reaction problem, which preserves key conservation properties of the continuum problem. Casting the model problem into a first-order system for two scalar and two vector variables shifts material properties from the differential equations to a pair of constitutive relations. We also use this system to motivate a new least-squares functional involving all four fields and show that its minimizer satisfies the differential equations exactly. Discretization of the four-field least-squares functional by spectral spaces compatible with the differential operators leads to a least-squares method in which the differential equations are also satisfied exactly. Additionally, the latter are reduced to purely topological relationships for the degrees of freedom that can be satisfied without reference to basis functions. Furthermore, numerical experiments confirm the spectral accuracy of the method and its local conservation.

  6. Adaptive DSPI phase denoising using mutual information and 2D variational mode decomposition

    NASA Astrophysics Data System (ADS)

    Xiao, Qiyang; Li, Jian; Wu, Sijin; Li, Weixian; Yang, Lianxiang; Dong, Mingli; Zeng, Zhoumo

    2018-04-01

    In digital speckle pattern interferometry (DSPI), noise interference leads to a low peak signal-to-noise ratio (PSNR) and measurement errors in the phase map. This paper proposes an adaptive DSPI phase denoising method based on two-dimensional variational mode decomposition (2D-VMD) and mutual information. Firstly, the DSPI phase map is subjected to 2D-VMD in order to obtain a series of band-limited intrinsic mode functions (BLIMFs). Then, on the basis of characteristics of the BLIMFs and in combination with mutual information, a self-adaptive denoising method is proposed to obtain noise-free components containing the primary phase information. The noise-free components are reconstructed to obtain the denoising DSPI phase map. Simulation and experimental results show that the proposed method can effectively reduce noise interference, giving a PSNR that is higher than that of two-dimensional empirical mode decomposition methods.

  7. Cut set-based risk and reliability analysis for arbitrarily interconnected networks

    DOEpatents

    Wyss, Gregory D.

    2000-01-01

    Method for computing all-terminal reliability for arbitrarily interconnected networks such as the United States public switched telephone network. The method includes an efficient search algorithm to generate minimal cut sets for nonhierarchical networks directly from the network connectivity diagram. Efficiency of the search algorithm stems in part from its basis on only link failures. The method also includes a novel quantification scheme that likewise reduces computational effort associated with assessing network reliability based on traditional risk importance measures. Vast reductions in computational effort are realized since combinatorial expansion and subsequent Boolean reduction steps are eliminated through analysis of network segmentations using a technique of assuming node failures to occur on only one side of a break in the network, and repeating the technique for all minimal cut sets generated with the search algorithm. The method functions equally well for planar and non-planar networks.
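
    For a small network, the minimal cut sets that the patented search algorithm produces can be reproduced by brute force, which makes the concept concrete (the point of the patent is to avoid this exponential enumeration). networkx and the toy graph below are assumptions for illustration.

```python
# Brute-force enumeration of minimal all-terminal link cut sets on a toy graph.
import itertools
import networkx as nx

G = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3)])    # small example network

def disconnects(edges):
    H = G.copy()
    H.remove_edges_from(edges)                    # only link failures are modeled
    return not nx.is_connected(H)

cuts = []
for size in range(1, G.number_of_edges() + 1):
    for combo in itertools.combinations(G.edges(), size):
        # Minimal: it disconnects the network and contains no smaller cut set.
        if disconnects(combo) and not any(set(c) <= set(combo) for c in cuts):
            cuts.append(combo)
print(cuts)    # e.g. the bridge (2, 3) is a minimal cut set on its own
```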

  8. Performance assessment of density functional methods with Gaussian and Slater basis sets using 7σ orbital momentum distributions of N2O

    NASA Astrophysics Data System (ADS)

    Wang, Feng; Pang, Wenning; Duffy, Patrick

    2012-12-01

    Performance of a number of commonly used density functional methods in chemistry (B3LYP, BHandH, BP86, PW91, VWN, LB94, PBE0, SAOP and X3LYP) and the Hartree-Fock (HF) method has been assessed using orbital momentum distributions of the 7σ orbital of nitrous oxide (NNO), which models electron behaviour in a chemically significant region. The density functional methods are combined with a number of Gaussian basis sets (Pople's 6-31G*, 6-311G**, DGauss TZVP and Dunning's aug-cc-pVTZ) as well as even-tempered Slater basis sets, namely, et-DZPp, et-QZ3P, et-QZ+5P and et-pVQZ. Orbital momentum distributions of the 7σ orbital in the ground electronic state of NNO, which are obtained from a Fourier transform into momentum space from single point electronic calculations employing the above models, are compared with experimental measurements of the same orbital from electron momentum spectroscopy (EMS). The present study reveals information on the performance of (a) the density functional methods, (b) Gaussian and Slater basis sets, (c) combinations of the density functional methods and basis sets, that is, the models, (d) orbital momentum distributions, rather than a group of specific molecular properties and (e) the entire region of chemical significance of the orbital. It is found that discrepancies between the measured and calculated distributions for this orbital occur in the small momentum region (i.e., the large r region). In general, Slater basis sets achieve better overall performance than the Gaussian basis sets. Performance of the Gaussian basis sets varies noticeably when combined with different Vxc functionals, but Dunning's aug-cc-pVTZ basis set achieves the best performance for the momentum distributions of this orbital. The overall performance of the B3LYP and BP86 models is similar to newer models such as X3LYP and SAOP. The present study also demonstrates that the combinations of the density functional methods and the basis sets indeed make a difference in the quality of the calculated orbitals.

  9. Neuroimaging correlates of aggression in schizophrenia: an update.

    PubMed

    Hoptman, Matthew J; Antonius, Daniel

    2011-03-01

    Aggression in schizophrenia is associated with poor treatment outcomes, hospital admissions, and stigmatization of patients. As such it represents an important public health issue. This article reviews recent neuroimaging studies of aggression in schizophrenia, focusing on PET/single photon emission computed tomography and MRI methods. The neuroimaging literature on aggression in schizophrenia is in a period of development. This is attributable in part to the heterogeneous nature and basis of that aggression. Radiological methods have consistently shown reduced activity in frontal and temporal regions. MRI brain volumetric studies have been less consistent, with some studies finding increased volumes of inferior frontal structures, and others finding reduced volumes in aggressive individuals with schizophrenia. Functional MRI studies have also had inconsistent results, with most finding reduced activity in inferior frontal and temporal regions, but some also finding increased activity in other regions. Some studies have made a distinction between types of aggression in schizophrenia in the context of antisocial traits, and this appears to be useful in understanding the neuroimaging literature. Frontal and temporal abnormalities appear to be a consistent feature of aggression in schizophrenia, but their precise nature likely differs because of the heterogeneous nature of that behavior.

  10. Efficient method for computing the maximum-likelihood quantum state from measurements with additive Gaussian noise.

    PubMed

    Smolin, John A; Gambetta, Jay M; Smith, Graeme

    2012-02-17

    We provide an efficient method for computing the maximum-likelihood mixed quantum state (with density matrix ρ) given a set of measurement outcomes in a complete orthonormal operator basis subject to Gaussian noise. Our method works by first changing basis, yielding a candidate density matrix μ which may have nonphysical (negative) eigenvalues, and then finding the nearest physical state under the 2-norm. Our algorithm takes at worst O(d^4) for the basis change plus O(d^3) for finding ρ, where d is the dimension of the quantum state. In the special case where the measurement basis is strings of Pauli operators, the basis change takes only O(d^3) as well. The workhorse of the algorithm is a new linear-time method for finding the closest probability distribution (in Euclidean distance) to a set of real numbers summing to one.
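
    The "nearest physical state" step admits a compact sketch: keep the eigenvectors of the candidate matrix μ and project its eigenvalue vector onto the probability simplex. The fragment below uses the general sorted-cumulative-sum simplex projection rather than reproducing the paper's specific linear-time routine, and μ is a toy input.

```python
# Nearest density matrix under the 2-norm: project the eigenvalues of the
# candidate matrix onto the probability simplex, keeping the eigenvectors.
import numpy as np

def nearest_density_matrix(mu):
    vals, vecs = np.linalg.eigh(mu)               # mu is Hermitian by construction
    u = np.sort(vals)[::-1]                       # eigenvalues, descending
    css = np.cumsum(u)
    idx = np.arange(1, len(u) + 1)
    rho = np.nonzero(u + (1 - css) / idx > 0)[0][-1]
    shift = (1 - css[rho]) / (rho + 1)
    new_vals = np.maximum(vals + shift, 0.0)      # Euclidean projection onto simplex
    return (vecs * new_vals) @ vecs.conj().T      # reassemble V diag(w) V^H

mu = np.array([[0.8, 0.1], [0.1, 0.3]])          # toy candidate (trace != 1)
rho = nearest_density_matrix(mu)
print(np.trace(rho), np.linalg.eigvalsh(rho))    # trace 1, nonnegative eigenvalues
```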

  11. A machine learning approach for efficient uncertainty quantification using multiscale methods

    NASA Astrophysics Data System (ADS)

    Chan, Shing; Elsheikh, Ahmed H.

    2018-02-01

    Several multiscale methods account for sub-grid scale features using coarse scale basis functions. For example, in the Multiscale Finite Volume method the coarse scale basis functions are obtained by solving a set of local problems over dual-grid cells. We introduce a data-driven approach for the estimation of these coarse scale basis functions. Specifically, we employ a neural network predictor fitted using a set of solution samples from which it learns to generate subsequent basis functions at a lower computational cost than solving the local problems. The computational advantage of this approach is realized for uncertainty quantification tasks where a large number of realizations has to be evaluated. We attribute the ability to learn these basis functions to the modularity of the local problems and the redundancy of the permeability patches between samples. The proposed method is evaluated on elliptic problems yielding very promising results.
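
    A minimal version of the surrogate can be set up with an off-the-shelf regressor: train on (permeability patch, basis values) pairs generated offline, then predict basis functions online. The shapes, the MLP choice, and the random training data below are assumptions; in practice the targets would come from solving the local problems.

```python
# Sketch of a neural-network surrogate mapping a local permeability patch to
# coarse basis-function values. All data here is synthetic placeholder input.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n_train, patch, dofs = 500, 10 * 10, 11 * 11          # local patch -> basis values
K = rng.lognormal(size=(n_train, patch))              # permeability realizations
B = rng.random((n_train, dofs))                       # stand-in for local-solve outputs

model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
model.fit(np.log(K), B)                               # log-permeability as features

# Online: predict the basis function for a new realization instead of solving
# the local problem, which is the claimed computational saving.
basis_pred = model.predict(np.log(rng.lognormal(size=(1, patch))))
print(basis_pred.shape)
```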

  12. Analysing malaria drug trials on a per-individual or per-clone basis: a comparison of methods.

    PubMed

    Jaki, Thomas; Parry, Alice; Winter, Katherine; Hastings, Ian

    2013-07-30

    There are a variety of methods used to estimate the effectiveness of antimalarial drugs in clinical trials, invariably on a per-person basis. A person, however, may have more than one malaria infection present at the time of treatment. We evaluate currently used methods for analysing malaria trials on a per-individual basis and introduce a novel method to estimate the cure rate on a per-infection (clone) basis. We used simulated and real data to highlight the differences among the various methods. We give special attention to classifying outcomes as cured, recrudescent (infections that never fully cleared) or ambiguous on the basis of genetic markers at three loci. To estimate cure rates on a per-clone basis, we used the genetic information within an individual before treatment to determine the number of clones present. We used the genetic information obtained at the time of treatment failure to classify clones as recrudescences or new infections. On the per-individual level, we find that the most accurate methods of classification label an individual as newly infected if all alleles are different at the beginning and at the time of failure, and as a recrudescence if all or some alleles are the same. The most appropriate analysis method is survival analysis, or alternatively, for complete data/per-protocol analysis, a proportion estimate that treats new infections as successes. We show that the analysis of drug effectiveness on a per-clone basis estimates the cure rate accurately and allows more detailed evaluation of the performance of the treatment. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging because of the complex uncertainty and multiple physical scales present in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • The random domain is adaptively decomposed into subdomains to obtain an adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computational efficiency and approximation accuracy under certain conditions. • Integrating MsFEM and multi-element least square HDMR can significantly reduce computational complexity.
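
    As a concrete anchor for the construction, the snippet below fits a first-order least-squares HDMR on a single (sub)domain: each component function is expanded in a few Legendre polynomials and all coefficients are fit jointly by least squares. The problem sizes, toy model output, and restriction to first order are our assumptions, not the paper's setup.

```python
import numpy as np
from numpy.polynomial import legendre

# Illustrative sizes: 4 random inputs, degree-3 Legendre expansions.
rng = np.random.default_rng(1)
dim, order, n = 4, 3, 200
X = rng.uniform(-1, 1, size=(n, dim))
y = np.sum(X**2, axis=1) + 0.3 * X[:, 1]     # toy model output

cols = [np.ones(n)]                           # constant (zeroth-order) term
for i in range(dim):                          # first-order terms f_i(x_i)
    for k in range(1, order + 1):
        c = np.zeros(k + 1)
        c[k] = 1.0                            # coefficient vector selecting P_k
        cols.append(legendre.legval(X[:, i], c))
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)  # least-squares HDMR coefficients
```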

  14. 36 CFR 223.64 - Appraisal on a lump-sum value or rate per unit of measure basis.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... costs or selling values subsequent to the rate redetermination which reduce conversion value to less... or rate per unit of measure basis. 223.64 Section 223.64 Parks, Forests, and Public Property FOREST... Contracts Appraisal and Pricing § 223.64 Appraisal on a lump-sum value or rate per unit of measure basis...

  15. Chemical control of ticks on cattle and the resistance of these parasites to acaricides.

    PubMed

    George, J E; Pound, J M; Davey, R B

    2004-01-01

    Toward the end of the nineteenth century a complex of problems related to ticks and tick-borne diseases of cattle created a demand for methods to control ticks and reduce losses of cattle. The discovery and use of arsenical solutions in dipping vats for treating cattle to protect them against ticks revolutionized tick and tick-borne disease control programmes. Arsenic dips for cattle were used for about 40 years before the evolution of resistance of ticks to the chemical, and the development and marketing of synthetic organic acaricides after World War II provided superior alternative products. Most of the major groups of organic pesticides are represented on the list of chemicals used to control ticks on cattle. Unfortunately, the successive evolution of tick resistance to the acaricides in each chemical group, with the concomitant reduction in that group's usefulness, is a major reason for this diversity of acaricides. Whether producers choose a traditional method for treating cattle with an acaricide or use a new method, they must recognize the benefits, limitations and potential problems of each application method and product. Simulation models and research were the basis of recommendations for tick control strategies advocating approaches that reduce reliance on acaricides. These recommendations for controlling ticks on cattle are in harmony with recommendations for reducing the rate of selection for acaricide resistance. There is a need to transfer knowledge about tick control and resistance-mitigation strategies to cattle producers.

  16. Exploiting the spatial locality of electron correlation within the parametric two-electron reduced-density-matrix method

    NASA Astrophysics Data System (ADS)

    DePrince, A. Eugene; Mazziotti, David A.

    2010-01-01

    The parametric variational two-electron reduced-density-matrix (2-RDM) method is applied to computing electronic correlation energies of medium-to-large molecular systems by exploiting the spatial locality of electron correlation within the framework of the cluster-in-molecule (CIM) approximation [S. Li et al., J. Comput. Chem. 23, 238 (2002); J. Chem. Phys. 125, 074109 (2006)]. The 2-RDMs of individual molecular fragments within a molecule are determined, and selected portions of these 2-RDMs are recombined to yield an accurate approximation to the correlation energy of the entire molecule. In addition to extending CIM to the parametric 2-RDM method, we (i) suggest a more systematic selection of atomic-orbital domains than that presented in previous CIM studies and (ii) generalize the CIM method for open-shell quantum systems. The resulting method is tested with a series of polyacetylene molecules, water clusters, and diazobenzene derivatives in minimal and nonminimal basis sets. Calculations show that the computational cost of the method scales linearly with system size. We also compute hydrogen-abstraction energies for a series of hydroxyurea derivatives. Abstraction of hydrogen from hydroxyurea is thought to be a key step in its treatment of sickle cell anemia; the design of hydroxyurea derivatives that oxidize more rapidly is one approach to devising more effective treatments.

  17. Dealing with Liars: Misbehavior Identification via Rényi-Ulam Games

    NASA Astrophysics Data System (ADS)

    Kozma, William; Lazos, Loukas

    We address the problem of identifying misbehaving nodes that refuse to forward packets in wireless multi-hop networks. We map the process of locating the misbehaving nodes to the classic Rényi-Ulam game of 20 questions. Compared to previous methods, our mapping allows the evaluation of node behavior on a per-packet basis, without the need for energy-expensive overhearing techniques or intensive acknowledgment schemes. Furthermore, it copes with colluding adversaries that coordinate their behavioral patterns to avoid identification and frame honest nodes. We show via simulations that our algorithms reduce the communication overhead for identifying misbehaving nodes by at least one order of magnitude compared to other methods, while increasing the identification delay logarithmically with the path size.
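
    To make the 20-questions analogy concrete, the sketch below locates a single misbehaving link on a path by binary search over audit points, where each "question" asks whether audited packets still survive up to a chosen node. It assumes truthful answers and exactly one faulty link; handling colluding liars — the Rényi-Ulam aspect — is precisely what the paper's algorithms add on top of this skeleton.

```python
def locate_bad_link(path, packets_reach):
    """path: list of node ids, source first; packets_reach(node) -> bool,
    True if audited packets still arrive at `node`. Assumes truthful
    answers and exactly one misbehaving link on the path."""
    lo, hi = 0, len(path) - 1          # invariant: bad link lies in (lo, hi]
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if packets_reach(path[mid]):
            lo = mid                   # packets survive this far: look downstream
        else:
            hi = mid                   # already lost: look upstream
    return path[lo], path[hi]          # the suspected link
```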

  18. Fast Bound Methods for Large Scale Simulation with Application for Engineering Optimization

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.; Peraire, Jaime; Zang, Thomas A. (Technical Monitor)

    2002-01-01

    In this work, we have focused on fast bound methods for large scale simulation with application for engineering optimization. The emphasis is on the development of techniques that provide both very fast turnaround and a certificate of fidelity; these attributes ensure that the results are indeed relevant to - and trustworthy within - the engineering context. The bound methodology which underlies this work has many different instantiations: finite element approximation; iterative solution techniques; and reduced-basis (parameter) approximation. In this grant we have, in fact, treated all three, but most of our effort has been concentrated on the first and third. We describe these below briefly - but with a pointer to an Appendix which describes, in some detail, the current "state of the art."

  19. Design of hybrid radial basis function neural networks (HRBFNNs) realized with the aid of hybridization of fuzzy clustering method (FCM) and polynomial neural networks (PNNs).

    PubMed

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2014-12-01

    In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of a fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering, used to form information granules, is employed to overcome a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, a genetic algorithm (GA) is exploited to optimize the essential design parameters of the network (including the fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and a collection of the specific subset of input PFNs). To reduce the dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments in which we use several modeling benchmarks of different levels of complexity (different numbers of input variables and amounts of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy in comparison to some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.
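
    The underlying RBF-network skeleton can be sketched briefly: cluster the inputs, place Gaussian units at the cluster prototypes, and fit a linear readout. In the sketch below, k-means stands in for fuzzy C-means (which plain scikit-learn does not provide), and the data, kernel width, and layer sizes are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
X = rng.random((200, 5))
y = np.sin(X).sum(axis=1)                   # toy regression target

k, width = 10, 1.0
centers = KMeans(n_clusters=k, n_init=10).fit(X).cluster_centers_
# Hidden layer: Gaussian activations around the cluster prototypes
H = np.exp(-np.sum((X[:, None, :] - centers[None]) ** 2, axis=2) / (2 * width**2))
readout = Ridge(alpha=1e-3).fit(H, y)       # linear output layer
```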

  20. Problem of unity of measurements in ensuring safety of hydraulic structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kheifits, V.Z.; Markov, A.I.; Braitsev, V.V.

    1994-07-01

    Ensuring the safety of hydraulic structures (HSs) is not only an industry but also a national and global concern, since failure of large water impounding structures can entail large losses of lives and enormous material losses related to destruction downstream. The main information on the degree of safety of a structure is obtained by comparing information about the actual state of the structure obtained on the basis of measurements in key zones of the structure with the predicted state on basis of the design model used when designing the structure for given conditions of external actions. Numerous, from hundreds tomore » thousands, string type transducers are placed in large HSs. This system of transducers monitor the stress-strain rate, seepage, and thermal regimes. These measurements are supported by the State Standards Committee which certifies the accuracy of the checking methods. To improve the instrumental monitoring of HSs, the author recommends: Calibration of methods and means of reliable diagnosis for each measuring channel in the HS, improvements to reduce measurement error, support for the system software programs, and development of appropriate standards for the design and examination of HSs.« less

  1. On 2- and 3-person games on polyhedral sets

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Belenky, A.S.

    1994-12-31

    Special classes of 3 person games are considered where the sets of players` allowable strategies are polyhedral and the payoff functions are defined as maxima, on a polyhedral set, of certain kind of sums of linear and bilinear functions. Necessary and sufficient conditions, which are easy to verify, for a Nash point in these games are established, and a finite method, based on these conditions, for calculating Nash points is proposed. It is shown that the game serves as a generalization of a model for a problem of waste products evacuation from a territory. The method makes it possible tomore » reduce calculation of a Nash point to solving some linear and quadratic programming problems formulated on the basis of the original 3-person game. A class of 2-person games on connected polyhedral sets is considered, with the payoff function being a sum of two linear functions and one bilinear function. Necessary and sufficient conditions are established for the min-max, the max-min, and for a certain equilibrium. It is shown that the corresponding points can be calculated from auxiliary linear programming problems formulated on the basis of the master game.« less

  2. A coherent discrete variable representation method on a sphere

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Hua -Gen

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  3. A coherent discrete variable representation method on a sphere

    DOE PAGES

    Yu, Hua -Gen

    2017-09-05

    Here, the coherent discrete variable representation (ZDVR) has been extended for constructing a multidimensional potential-optimized DVR basis on a sphere. In order to deal with the non-constant Jacobian in spherical angles, two direct product primitive basis methods are proposed so that the original ZDVR technique can be properly implemented. The method has been demonstrated by computing the lowest states of a two dimensional (2D) vibrational model. Results show that the extended ZDVR method gives accurate eigenvalues and exponential convergence with increasing ZDVR basis size.

  4. Maglev guideway route alignment and right-of-way requirements

    NASA Astrophysics Data System (ADS)

    Carlton, S.; Andriola, T.

    1992-12-01

    The use of existing rights-of-way (ROW) is assessed for maglev systems by estimating trip times and land acquisition requirements for potential maglev corridors while meeting passenger comfort limits. Right-of-way excursions improve trip time but incur a cost for purchasing land. The final report documents findings of the eight tasks in establishing right-of-way feasibility by examining three city-pair corridors in detail and developing an approximation method for estimating route length and travel times in 20 additional city-pair corridor portions and 21 new corridors. The use of routes independent of existing railroad or highway right-of-way has trip-time advantages and significantly reduces the need for aggressive guideway geometries on intercity corridors. Selection of the appropriate alignment is determined by many corridor-specific issues. Use of existing intercity rights-of-way may be appropriate for parts of routes on a corridor-specific basis and for urban penetration, where vehicle speeds are likely to be reduced by policy due to noise and safety considerations, and where land acquisition costs are high. Detailed aspects of available rights-of-way, land acquisition costs, geotechnical issues, land use, and population centers must be examined in more detail on a specific corridor basis before the proper or best maglev alignment can be chosen.

  5. Evolutionary optimization of radial basis function classifiers for data mining applications.

    PubMed

    Buchtala, Oliver; Klimek, Manuel; Sick, Bernhard

    2005-10-01

    In many data mining applications that address classification problems, feature and model selection are considered as key tasks. That is, appropriate input features of the classifier must be selected from a given (and often large) set of possible features and structure parameters of the classifier must be adapted with respect to these features and a given data set. This paper describes an evolutionary algorithm (EA) that performs feature and model selection simultaneously for radial basis function (RBF) classifiers. In order to reduce the optimization effort, various techniques are integrated that accelerate and improve the EA significantly: hybrid training of RBF networks, lazy evaluation, consideration of soft constraints by means of penalty terms, and temperature-based adaptive control of the EA. The feasibility and the benefits of the approach are demonstrated by means of four data mining problems: intrusion detection in computer networks, biometric signature verification, customer acquisition with direct marketing methods, and optimization of chemical production processes. It is shown that, compared to earlier EA-based RBF optimization techniques, the runtime is reduced by up to 99% while error rates are lowered by up to 86%, depending on the application. The algorithm is independent of specific applications so that many ideas and solutions can be transferred to other classifier paradigms.
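
    A toy version of the evolutionary feature-selection loop is sketched below as a (1+1)-EA over boolean feature masks, with cross-validated accuracy as fitness and acceptance of equal-or-better children. The classifier, data, and mutation rate are illustrative stand-ins; the paper's EA additionally evolves model structure and applies the acceleration techniques listed above.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 3] > 0).astype(int)      # only features 0 and 3 matter

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(KNeighborsClassifier(), X[:, mask], y, cv=3).mean()

mask = rng.random(20) < 0.5                  # random initial feature subset
best = fitness(mask)
for _ in range(50):                          # (1+1)-EA: mutate, keep if no worse
    child = mask ^ (rng.random(20) < 0.1)
    f = fitness(child)
    if f >= best:
        mask, best = child, f
```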

  6. A Flexible Method for Multi-Material Decomposition of Dual-Energy CT Images.

    PubMed

    Mendonca, Paulo R S; Lamb, Peter; Sahani, Dushyant V

    2014-01-01

    The ability of dual-energy computed-tomographic (CT) systems to determine the concentration of constituent materials in a mixture, known as material decomposition, is the basis for many of dual-energy CT's clinical applications. However, the complex composition of tissues and organs in the human body poses a challenge for many material decomposition methods, which assume the presence of only two, or at most three, materials in the mixture. We developed a flexible, model-based method that extends dual-energy CT's core material decomposition capability to handle more complex situations, in which it is necessary to disambiguate among and quantify the concentration of a larger number of materials. The proposed method, named multi-material decomposition (MMD), was used to develop two image analysis algorithms. The first was virtual unenhancement (VUE), which digitally removes the effect of contrast agents from contrast-enhanced dual-energy CT exams. VUE has the ability to reduce patient dose and improve clinical workflow, and can be used in a number of clinical applications such as CT urography and CT angiography. The second algorithm developed was liver-fat quantification (LFQ), which accurately quantifies the fat concentration in the liver from dual-energy CT exams. LFQ can form the basis of a clinical application targeting the diagnosis and treatment of fatty liver disease. Using image data collected from a cohort consisting of 50 patients and from phantoms, the application of MMD to VUE and LFQ yielded quantitatively accurate results when compared against gold standards. Furthermore, consistent results were obtained across all phases of imaging (contrast-free and contrast-enhanced). This is of particular importance since most clinical protocols for abdominal imaging with CT call for multi-phase imaging. We conclude that MMD can successfully form the basis of a number of dual-energy CT image analysis algorithms, and has the potential to improve the clinical utility of dual-energy CT in disease management.
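
    At its core, a three-material decomposition with a volume-conservation constraint is a per-voxel 3×3 linear solve, as sketched below. The attenuation coefficients and measured values are illustrative numbers, not calibrated data; MMD generalizes this step to larger material sets and mixture models.

```python
import numpy as np

# Rows: low-kVp attenuation, high-kVp attenuation, volume conservation.
# Columns: water, fat, iodine mixture (all numbers illustrative).
M = np.array([[0.20, 0.28, 0.45],
              [0.19, 0.25, 0.30],
              [1.00, 1.00, 1.00]])
voxel = np.array([0.23, 0.21, 1.0])      # measured (mu_low, mu_high, 1)
fractions = np.linalg.solve(M, voxel)    # per-voxel material fractions
```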

  7. Construction of SO(5)⊃SO(3) spherical harmonics and Clebsch-Gordan coefficients

    NASA Astrophysics Data System (ADS)

    Caprio, M. A.; Rowe, D. J.; Welsh, T. A.

    2009-07-01

    The SO(5)⊃SO(3) spherical harmonics form a natural basis for expansion of nuclear collective model angular wave functions. They underlie the recently-proposed algebraic method for diagonalization of the nuclear collective model Hamiltonian in an SU(1,1)×SO(5) basis. We present a computer code for explicit construction of the SO(5)⊃SO(3) spherical harmonics and use them to compute the Clebsch-Gordan coefficients needed for collective model calculations in an SO(3)-coupled basis. With these Clebsch-Gordan coefficients it becomes possible to compute the matrix elements of collective model observables by purely algebraic methods.
    Program summary:
    Program title: GammaHarmonic
    Catalogue identifier: AECY_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECY_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 346 421
    No. of bytes in distributed program, including test data, etc.: 16 037 234
    Distribution format: tar.gz
    Programming language: Mathematica 6
    Computer: Any which supports Mathematica
    Operating system: Any which supports Mathematica; tested under Microsoft Windows XP and Linux
    Classification: 4.2
    Nature of problem: Explicit construction of SO(5)⊃SO(3) spherical harmonics on S⁴. Evaluation of SO(3)-reduced matrix elements and SO(5)⊃SO(3) Clebsch-Gordan coefficients (isoscalar factors).
    Solution method: Construction of SO(5)⊃SO(3) spherical harmonics by orthonormalization of a generating set of functions, according to the method of Rowe, Turner, and Repka [1]. Matrix elements and Clebsch-Gordan coefficients follow by construction and integration of SO(3) scalar products.
    Running time: Depends strongly on the maximum SO(5) and SO(3) representation labels involved. A few minutes for the calculation in the Mathematica notebook.
    References: [1] D.J. Rowe, P.S. Turner, J. Repka, J. Math. Phys. 45 (2004) 2761.
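
    The orthonormalization at the heart of the quoted solution method is ordinary Gram-Schmidt under the appropriate scalar product. A discrete numerical sketch is given below, with quadrature weights standing in for the invariant measure on the sphere; the actual code works symbolically in Mathematica.

```python
import numpy as np

def gram_schmidt(funcs, weights):
    """funcs: (n_funcs, n_points) sampled generating functions;
    weights: quadrature weights approximating the invariant measure.
    Returns an orthonormal set under the weighted scalar product."""
    ortho = []
    for f in funcs:
        f = f.astype(float).copy()
        for g in ortho:
            f -= np.sum(weights * f * g) * g     # remove projection onto g
        norm = np.sqrt(np.sum(weights * f * f))
        if norm > 1e-12:                          # drop linearly dependent members
            ortho.append(f / norm)
    return np.array(ortho)
```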

  8. A method for spectral DNS of low Rm channel flows based on the least dissipative modes

    NASA Astrophysics Data System (ADS)

    Kornet, Kacper; Pothérat, Alban

    2015-10-01

    We put forward a new type of spectral method for the direct numerical simulation of flows where anisotropy or very fine boundary layers are present. The main idea is to take advantage of the fact that such structures are dissipative and that their presence should reduce the number of degrees of freedom of the flow, when, paradoxically, their fine resolution incurs extra computational cost in most current methods. The principle of this method is to use a functional basis with elements that already include these fine structures so as to avoid these extra costs. This leads us to develop an algorithm to implement a spectral method for arbitrary functional bases, and in particular, non-orthogonal ones. We construct a basic implementation of this algorithm to simulate magnetohydrodynamic (MHD) channel flows with an externally imposed, transverse magnetic field, where very thin boundary layers are known to develop along the channel walls. In this case, the sought functional basis can be built out of the eigenfunctions of the dissipation operator, which incorporate these boundary layers, and it turns out to be non-orthogonal. We validate this new scheme against numerical simulations of freely decaying MHD turbulence based on a finite volume code, and it is found to provide accurate results. Its ability to fully resolve wall-bounded turbulence with a number of modes close to that required by the dynamics is demonstrated on a simple example. This opens the way to full-blown simulations of MHD turbulence under very high magnetic fields, which until now were too computationally expensive. In contrast to traditional methods, the computational cost of the proposed method does not depend on the intensity of the magnetic field.
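
    The key ingredient for a spectral method over an arbitrary, possibly non-orthogonal basis is projection through the Gram (mass) matrix rather than plain inner products. A discrete sketch follows, with quadrature weights and sampled basis functions as our assumed representation:

```python
import numpy as np

def project_onto_basis(phi, w, u):
    """phi: (n_basis, n_points) sampled basis functions (possibly
    non-orthogonal); w: quadrature weights; u: field samples.
    Returns coefficients a such that u ~ a @ phi."""
    M = (phi * w) @ phi.T          # Gram (mass) matrix M_ij = <phi_i, phi_j>
    b = (phi * w) @ u              # right-hand side <phi_i, u>
    return np.linalg.solve(M, b)
```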

  9. A novel Gaussian-Sinc mixed basis set for electronic structure calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jerke, Jonathan L.; Lee, Young; Tymczak, C. J.

    2015-08-14

    A Gaussian-Sinc basis set methodology is presented for the calculation of the electronic structure of atoms and molecules at the Hartree–Fock level of theory. This methodology has several advantages over previous methods. The all-electron electronic structure in a Gaussian-Sinc mixed basis spans both the "localized" and "delocalized" regions. A basis set for each region is combined to make a new basis methodology: a lattice of orthonormal sinc functions is used to represent the "delocalized" regions and the atom-centered Gaussian functions are used to represent the "localized" regions to any desired accuracy. For this mixed basis, all the Coulomb integrals are definable and can be computed in a dimensionally separated methodology. Additionally, the sinc basis is translationally invariant, which allows the Coulomb singularity to be placed anywhere, including on lattice sites. Finally, boundary conditions are always satisfied with this basis. To demonstrate the utility of this method, we calculated the ground state Hartree–Fock energies for atoms up to neon, the diatomic systems H2, O2, and N2, and the multi-atom system benzene. Together, it is shown that the Gaussian-Sinc mixed basis set is a flexible and accurate method for solving the electronic structure of atomic and molecular species.
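
    The "delocalized" part of the mixed basis is a lattice of orthonormal sinc functions; the sketch below samples such a basis in one dimension (the spacing and ranges are illustrative, and the actual method works in three dimensions with dimensional separation).

```python
import numpy as np

def sinc_basis(x, centers, h):
    """Rows are orthonormal cardinal sinc functions on a uniform grid:
    S[i, j] = sinc((x_j - c_i)/h) / sqrt(h)."""
    return np.sinc((x[None, :] - centers[:, None]) / h) / np.sqrt(h)

h = 0.5                                  # illustrative lattice spacing
centers = np.arange(-5.0, 5.0 + h, h)
x = np.linspace(-6, 6, 400)
S = sinc_basis(x, centers, h)            # sampled 1D sinc lattice
```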

  10. A practical radial basis function equalizer.

    PubMed

    Lee, J; Beach, C; Tepedelenlioglu, N

    1999-01-01

    A radial basis function (RBF) equalizer design process has been developed in which the number of basis function centers used is substantially fewer than conventionally required. The reduction of centers is accomplished in two steps. First, an algorithm is used to select a reduced set of centers that lie close to the decision boundary. Then the centers in this reduced set are grouped, and an average position is chosen to represent each group. Channel order and delay, which are determining factors in setting the initial number of centers, are estimated from regression analysis. In simulation studies, an RBF equalizer with more than a 2000-to-1 reduction in centers performed as well as the RBF equalizer without reduction in centers, and better than a conventional linear equalizer.

  11. PM 2.5 and other pollutants -- Reduction of health impacts

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marrack, D.

    The 1990 CAA projected a need to reduce the adverse human health and environmental impacts of exposures to particulates by regulatory reduction of anthropogenic emissions, solely on the basis of mass reductions, at point and area sources. Ozone reduction would be by reduction of total VOC and NOx emissions. The assumptions made about ambient air pollution's biological effects were: that each observed health effect was the consequence of a single measured air pollutant, with pollutants treated as independent entities whose selective reduction would have a specific, identifiable health impact reduction; and that within the regulated classes (PM-10, PM-2.5, and VOCs) all components have equal biological impacts. Neither assumption appears to be true. If the assumptions are not true, then potentially the same reductions in health impacts could be achieved by reducing the most offensive components, possibly at less cost than that required for reducing them all. Ambient pollutants are a complex matrix of dynamically interacting chemical and particle species, and their interactions continue as they are inhaled. Pollutant measurement systems measure the predominant stable components only. Small amounts of more reactive chemicals and radicals initially present in inhaled air that contacts respiratory tract lining cells, and that contribute to the bio-effects, are lost by the time pollutant analysis is attempted. Some of the specific anthropogenic emission components contributing to adverse health effects have been identified, and their significant role in triggering cardio-pulmonary dysfunction has now been elucidated. Methods for reducing their presence in the emissions of anthropogenic processes, or for reducing their effects, will be considered. Reductions in specific reactive VOC species are another option. The basis for potential actions and the related biological processes will be discussed.

  12. [Study thought of material basis of secondary development of major traditional Chinese medicine varieties on basis of combination of in vivo and in vitro experiments].

    PubMed

    Cheng, Xu-Dong; Jia, Xiao-Bin; Feng, Liang; Jiang, Jun

    2013-12-01

    The secondary development of major traditional Chinese medicine varieties is one of the important links in the modernization, scientification and standardization of traditional Chinese medicines. How to accurately and effectively identify the pharmacodynamic material basis of the original formulae is the primary problem in secondary development, as well as the bottleneck in the modernization of traditional Chinese medicines. On the basis of existing experimental methods, and following the idea that the multi-component, complex effects of traditional Chinese medicine components require a combination of multi-disciplinary methods and technologies, we propose a study approach to the material basis of the secondary development of major traditional Chinese medicine varieties based on the combination of in vivo and in vitro experiments. It is believed that studies of the material basis need three links, namely identification, screening and verification, and that the in vivo and in vitro study methods corresponding to each link complement and verify one another. Finally, an accurate and reliable material basis is selected. This approach provides a reference for the secondary development of major traditional Chinese medicine varieties and for studies of compound material bases.

  13. Analytic Energy Gradients for Variational Two-Electron Reduced-Density-Matrix-Driven Complete Active Space Self-Consistent Field Theory.

    PubMed

    Maradzike, Elvis; Gidofalvi, Gergely; Turney, Justin M; Schaefer, Henry F; DePrince, A Eugene

    2017-09-12

    Analytic energy gradients are presented for a variational two-electron reduced-density-matrix (2-RDM)-driven complete active space self-consistent field (CASSCF) method. The active-space 2-RDM is determined using a semidefinite programming (SDP) algorithm built upon an augmented Lagrangian formalism. Expressions for analytic gradients are simplified by the fact that the Lagrangian is stationary with respect to variations in both the primal and the dual solutions to the SDP problem. Orbital response contributions to the gradient are identical to those that arise in conventional CASSCF methods in which the electronic structure of the active space is described by a full configuration interaction (CI) wave function. We explore the relative performance of variational 2-RDM (v2RDM)- and CI-driven CASSCF for the equilibrium geometries of 20 small molecules. When enforcing two-particle N-representability conditions, full-valence v2RDM-CASSCF-optimized bond lengths display a mean unsigned error of 0.0060 Å and a maximum unsigned error of 0.0265 Å, relative to those obtained from full-valence CI-CASSCF. When enforcing partial three-particle N-representability conditions, the mean and maximum unsigned errors are reduced to only 0.0006 and 0.0054 Å, respectively. For these same molecules, full-valence v2RDM-CASSCF bond lengths computed in the cc-pVQZ basis set deviate from experimentally determined ones on average by 0.017 and 0.011 Å when enforcing two- and three-particle conditions, respectively, whereas CI-CASSCF displays an average deviation of 0.010 Å. The v2RDM-CASSCF approach with two-particle conditions is also applied to the equilibrium geometry of pentacene; optimized bond lengths deviate from those derived from experiment, on average, by 0.015 Å when using a cc-pVDZ basis set and a (22e,22o) active space.

  14. Alpha image reconstruction (AIR): A new iterative CT image reconstruction approach using voxel-wise alpha blending

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hofmann, Christian; Sawall, Stefan; Knaup, Michael

    2014-06-15

    Purpose: Iterative image reconstruction gains more and more interest in clinical routine, as it promises to reduce image noise (and thereby patient dose), to reduce artifacts, or to improve spatial resolution. Among vendors and researchers, however, there is no consensus on how best to achieve these aims. The general approach is to incorporate a priori knowledge into iterative image reconstruction, for example, by adding additional constraints to the cost function, which penalize variations between neighboring voxels. However, this approach to regularization in general poses a resolution-noise trade-off, because the stronger the regularization, and thus the noise reduction, the stronger the loss of spatial resolution and thus the loss of anatomical detail. The authors propose a method which tries to improve this trade-off. The proposed reconstruction algorithm is called alpha image reconstruction (AIR). One starts by generating basis images, which emphasize certain desired image properties, like high resolution or low noise. The AIR algorithm reconstructs voxel-specific weighting coefficients that are applied to combine the basis images. By combining the desired properties of each basis image, one can generate an image with lower noise and maintained high contrast resolution, thus improving the resolution-noise trade-off. Methods: All simulations and reconstructions are performed in native fan-beam geometry. A water phantom with resolution bar patterns and low contrast disks is simulated. A filtered backprojection (FBP) reconstruction with a Ram-Lak kernel is used as a reference reconstruction. The results of AIR are compared against the FBP results and against a penalized weighted least squares reconstruction which uses total variation as regularization. The simulations are based on the geometry of the Siemens Somatom Definition Flash scanner. To quantitatively assess image quality, the authors analyze line profiles through resolution patterns to define a contrast factor for contrast-resolution plots. Furthermore, the authors calculate the contrast-to-noise ratio with the low contrast disks, and compare the agreement of the reconstructions with the ground truth by calculating the normalized cross-correlation and the root-mean-square deviation. To evaluate the clinical performance of the proposed method, the authors reconstruct patient data acquired with a Somatom Definition Flash dual source CT scanner (Siemens Healthcare, Forchheim, Germany). Results: The results of the simulation study show that among the compared algorithms AIR achieves the highest resolution and the highest agreement with the ground truth. Compared to the reference FBP reconstruction, AIR is able to reduce the relative pixel noise by up to 50% and at the same time achieve a higher resolution by maintaining the edge information from the basis images. These results are confirmed with the patient data. Conclusions: To evaluate the AIR algorithm, simulated and measured patient data of a state-of-the-art clinical CT system were processed. It is shown that generating CT images through the reconstruction of weighting coefficients has the potential to improve the resolution-noise trade-off and thus to improve the dose usage in clinical CT.
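
    The voxel-wise blending step itself is simple once the weights exist. The sketch below combines a sharp/noisy basis image with a smooth/low-noise one using per-voxel weights derived from a crude edge measure; the edge-based weighting is our illustrative stand-in, since AIR reconstructs the weighting coefficients iteratively from the raw data.

```python
import numpy as np

def alpha_blend(img_sharp, img_smooth):
    """Combine a sharp/noisy and a smooth/low-noise basis image with
    per-voxel weights alpha in [0, 1] derived from a crude edge map
    (AIR instead reconstructs alpha from the raw data)."""
    gx, gy = np.gradient(img_smooth)
    edge = np.hypot(gx, gy)
    alpha = edge / (edge.max() + 1e-12)   # favor the sharp image near edges
    return alpha * img_sharp + (1.0 - alpha) * img_smooth
```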

  15. [An ADAA model and its analysis method for agronomic traits based on the double-cross mating design].

    PubMed

    Xu, Z C; Zhu, J

    2000-01-01

    According to the double-cross mating design and using principles of Cockerham's general genetic model, a genetic model with additive, dominance and epistatic effects (ADAA model) was proposed for the analysis of agronomic traits. Components of genetic effects were derived for different generations. Monte Carlo simulation was conducted for analyzing the ADAA model and its reduced AD model by using different generations. It was indicated that genetic variance components could be estimated without bias by MINQUE(1) method and genetic effects could be predicted effectively by AUP method; at least three generations (including parent, F1 of single cross and F1 of double-cross) were necessary for analyzing the ADAA model and only two generations (including parent and F1 of double-cross) were enough for the reduced AD model. When epistatic effects were taken into account, a new approach for predicting the heterosis of agronomic traits of double-crosses was given on the basis of unbiased prediction of genotypic merits of parents and their crosses. In addition, genotype x environment interaction effects and interaction heterosis due to G x E interaction were discussed briefly.

  16. MO-FG-204-01: Improved Noise Suppression for Dual-Energy CT Through Entropy Minimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Petrongolo, M; Zhu, L

    2015-06-15

    Purpose: In dual energy CT (DECT), noise amplification during signal decomposition significantly limits the utility of basis material images. Since clinically relevant objects contain a limited number of materials, we propose to suppress noise for DECT based on image entropy minimization. An adaptive weighting scheme is employed during noise suppression to improve decomposition accuracy with limited effect on spatial resolution and image texture preservation. Methods: From decomposed images, we first generate a 2D plot of scattered data points, using basis material densities as coordinates. Data points representing the same material generate a highly asymmetric cluster. We orient an axis by minimizing the entropy in a 1D histogram of these points projected onto the axis. To suppress noise, we replace pixel values of decomposed images with center-of-mass values in the direction perpendicular to the optimal axis. To limit errors due to cluster overlap, we weight each data point's contribution based on its high and low energy CT values and location within the image. The proposed method's performance is assessed in physical phantom studies. Electron density is used as the quality metric for decomposition accuracy. Our results are compared to those without noise suppression and with a recently developed iterative method. Results: The proposed method reduces noise standard deviations of the decomposed images by at least one order of magnitude. On the Catphan phantom, this method greatly preserves the spatial resolution and texture of the CT images and limits induced error in measured electron density to below 1.2%. In the head phantom study, the proposed method performs the best in retaining fine, intricate structures. Conclusion: The entropy minimization based algorithm with adaptive weighting substantially reduces DECT noise while preserving image spatial resolution and texture. Future investigations will include extensive studies of material decomposition accuracy that go beyond the current electron density calculations. This work was supported in part by the National Institutes of Health (NIH) under Grant Number R21 EB012700.
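
    A minimal version of the entropy-driven axis search is sketched below: project the (ρ1, ρ2) pixel pairs onto candidate directions and keep the angle whose 1D histogram has minimum Shannon entropy. The bin count, angle grid, and the omission of the adaptive weighting are our simplifications.

```python
import numpy as np

def histogram_entropy(values, bins=128):
    counts, _ = np.histogram(values, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))          # Shannon entropy of the 1D histogram

def best_axis(points, n_angles=180):
    """points: (n, 2) basis-material densities per pixel. Returns the
    angle whose projection has minimum histogram entropy."""
    thetas = np.linspace(0.0, np.pi, n_angles, endpoint=False)
    ents = [histogram_entropy(points @ np.array([np.cos(t), np.sin(t)]))
            for t in thetas]
    return thetas[int(np.argmin(ents))]
```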

  17. An exact variational method to calculate rovibrational spectra of polyatomic molecules with large amplitude motion

    NASA Astrophysics Data System (ADS)

    Yu, Hua-Gen

    2016-08-01

    We report a new full-dimensional variational algorithm to calculate rovibrational spectra of polyatomic molecules using an exact quantum mechanical Hamiltonian. The rovibrational Hamiltonian of the system is derived in a set of orthogonal polyspherical coordinates in the body-fixed frame and is expressed in an explicitly Hermitian form. The Hamiltonian has a universal formulation regardless of the choice of orthogonal polyspherical coordinates and the number of atoms in the molecule, which makes it suitable for developing a general program to study the spectra of many polyatomic systems. An efficient coupled-state approach is also proposed to solve the eigenvalue problem of the Hamiltonian using a multi-layer Lanczos iterative diagonalization approach with a direct product basis set in three coordinate groups: radial coordinates, angular variables, and overall rotational angles. A simple set of symmetric top rotational functions is used for the overall rotation, whereas a potential-optimized discrete variable representation method is employed in the radial coordinates. A set of contracted vibrationally diabatic basis functions is adopted in the internal angular variables. Those diabatic functions are first computed, only once, using a neural network iterative diagonalization method based on a reduced-dimension Hamiltonian. The final rovibrational energies are computed using a modified Lanczos method for a given total angular momentum J, which is usually fast. Two numerical applications to CH4 and H2CO are given, together with a comparison with previous results.
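
    The Lanczos iteration that drives such diagonalizations touches the Hamiltonian only through matrix-vector products. A bare-bones version (no reorthogonalization or breakdown handling, which any production code would add) is sketched below.

```python
import numpy as np

def lanczos_eigs(matvec, v0, m):
    """m steps of plain Lanczos: builds a tridiagonal T whose
    eigenvalues (Ritz values) approximate extreme eigenvalues of the
    symmetric operator behind `matvec`."""
    a, b = np.zeros(m), np.zeros(m - 1)
    v = v0 / np.linalg.norm(v0)
    v_prev, beta = np.zeros_like(v), 0.0
    for j in range(m):
        w = matvec(v) - beta * v_prev      # apply H, subtract previous direction
        a[j] = v @ w
        w -= a[j] * v
        if j < m - 1:
            beta = np.linalg.norm(w)
            b[j] = beta
            v_prev, v = v, w / beta
    T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
    return np.linalg.eigvalsh(T)           # Ritz values
```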

  18. An exact variational method to calculate rovibrational spectra of polyatomic molecules with large amplitude motion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Hua-Gen, E-mail: hgy@bnl.gov

    We report a new full-dimensional variational algorithm to calculate rovibrational spectra of polyatomic molecules using an exact quantum mechanical Hamiltonian. The rovibrational Hamiltonian of the system is derived in a set of orthogonal polyspherical coordinates in the body-fixed frame and is expressed in an explicitly Hermitian form. The Hamiltonian has a universal formulation regardless of the choice of orthogonal polyspherical coordinates and the number of atoms in the molecule, which makes it suitable for developing a general program to study the spectra of many polyatomic systems. An efficient coupled-state approach is also proposed to solve the eigenvalue problem of the Hamiltonian using a multi-layer Lanczos iterative diagonalization approach with a direct product basis set in three coordinate groups: radial coordinates, angular variables, and overall rotational angles. A simple set of symmetric top rotational functions is used for the overall rotation, whereas a potential-optimized discrete variable representation method is employed in the radial coordinates. A set of contracted vibrationally diabatic basis functions is adopted in the internal angular variables. Those diabatic functions are first computed, only once, using a neural network iterative diagonalization method based on a reduced-dimension Hamiltonian. The final rovibrational energies are computed using a modified Lanczos method for a given total angular momentum J, which is usually fast. Two numerical applications to CH4 and H2CO are given, together with a comparison with previous results.

  19. Exact exchange-correlation potentials of singlet two-electron systems

    NASA Astrophysics Data System (ADS)

    Ryabinkin, Ilya G.; Ospadov, Egor; Staroverov, Viktor N.

    2017-10-01

    We suggest a non-iterative analytic method for constructing the exchange-correlation potential, v_XC(r), of any singlet ground-state two-electron system. The method is based on a convenient formula for v_XC(r) in terms of quantities determined only by the system's electronic wave function, exact or approximate, and is essentially different from the Kohn-Sham inversion technique. When applied to Gaussian-basis-set wave functions, the method yields finite-basis-set approximations to the corresponding basis-set-limit v_XC(r), whereas the Kohn-Sham inversion produces physically inappropriate (oscillatory and divergent) potentials. The effectiveness of the procedure is demonstrated by computing accurate exchange-correlation potentials of several two-electron systems (the helium isoelectronic series, H2, H3+) using common ab initio methods and Gaussian basis sets.

  20. A novel method to quantify the activity of alcohol acetyltransferase Using a SnO2-based sensor of electronic nose.

    PubMed

    Hu, Zhongqiu; Li, Xiaojing; Wang, Huxuan; Niu, Chen; Yuan, Yahong; Yue, Tianli

    2016-07-15

    Alcohol acetyltransferase (AATFase) extensively catalyzes the reactions of alcohols to acetic esters in microorganisms and plants. In this work, a novel method is proposed to quantify the activity of AATFase using a SnO2-based sensor of an electronic nose, exploiting the sensor's higher sensitivity to the reducing alcohol than to the oxidizing ester. The maximum value of the first derivative of the signal from the SnO2-based sensor was found to be an eigenvalue of the isoamyl alcohol concentration. Quadratic polynomial regression fitted the correlation between this eigenvalue and the isoamyl alcohol concentration very well. The method was used to determine the AATFase activity in this type of reaction by calculating the conversion rate of isoamyl alcohol, and it has been successfully applied to determine the AATFase activity of a cider yeast strain. Compared with GC-MS, the method shows promise, with ideal recovery and low cost. Copyright © 2016 Elsevier Ltd. All rights reserved.
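
    The calibration step described above, a quadratic fit between the sensor eigenvalue and the alcohol concentration, is a one-line polynomial regression; the numbers below are hypothetical placeholders, not data from the paper.

```python
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # concentrations (hypothetical units)
eig = np.array([0.11, 0.20, 0.35, 0.58, 0.86])  # sensor eigenvalues (hypothetical)

coeffs = np.polyfit(conc, eig, deg=2)           # quadratic calibration curve
predict_eig = np.poly1d(coeffs)                 # eigenvalue predicted from concentration
```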

  1. Evaluation of a transfinite element numerical solution method for nonlinear heat transfer problems

    NASA Technical Reports Server (NTRS)

    Cerro, J. A.; Scotti, S. J.

    1991-01-01

    Laplace transform techniques have been widely used to solve linear, transient field problems. A transform-based algorithm enables calculation of the response at selected times of interest without the need for stepping in time as required by conventional time integration schemes. The elimination of time stepping can substantially reduce computer time when transform techniques are implemented in a numerical finite element program. The coupling of transform techniques with spatial discretization techniques such as the finite element method has resulted in what are known as transfinite element methods. Recently attempts have been made to extend the transfinite element method to solve nonlinear, transient field problems. This paper examines the theoretical basis and numerical implementation of one such algorithm, applied to nonlinear heat transfer problems. The problem is linearized and solved by requiring a numerical iteration at selected times of interest. While shown to be acceptable for weakly nonlinear problems, this algorithm is ineffective as a general nonlinear solution method.

  2. Eulerian formulation of the interacting particle representation model of homogeneous turbulence

    DOE PAGES

    Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca

    2016-10-21

    The Interacting Particle Representation Model (IPRM) of homogeneous turbulence incorporates information about the morphology of turbulent structures within the confines of a one-point model. In the original formulation [Kassinos & Reynolds, Center for Turbulence Research: Annual Research Briefs, 31-51 (1996)], the IPRM was developed in a Lagrangian setting by evolving second moments of velocity conditional on a given gradient vector. In the present work, the IPRM is re-formulated in an Eulerian framework and evolution equations are developed for the marginal PDFs. Eulerian methods avoid the issues associated with statistical estimators used by Lagrangian approaches, such as slow convergence. A specific emphasis of this work is to use the IPRM to examine the long-time evolution of homogeneous turbulence. We first describe the derivation of the marginal PDF in spherical coordinates, which reduces the number of independent variables and the cost associated with Eulerian simulations of PDF models. Next, a numerical method based on radial basis functions over a spherical domain is adapted to the IPRM. Finally, results obtained with the new Eulerian solution method are thoroughly analyzed. The sensitivity of the Eulerian simulations to parameters of the numerical scheme, such as the size of the time step and the shape parameter of the radial basis functions, is examined. A comparison between Eulerian and Lagrangian simulations is performed to discern the capabilities of each of the methods. Finally, a linear stability analysis based on the eigenvalues of the discrete differential operators is carried out for both the new Eulerian solution method and the original Lagrangian approach.

  3. Min-Max Spaces and Complexity Reduction in Min-Max Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaubert, Stephane, E-mail: Stephane.Gaubert@inria.fr; McEneaney, William M., E-mail: wmceneaney@ucsd.edu

    2012-06-15

    Idempotent methods have been found to be extremely helpful in the numerical solution of certain classes of nonlinear control problems. In those methods, one uses the fact that the value function lies in the space of semiconvex functions (in the case of maximizing controllers), and approximates this value using a truncated max-plus basis expansion. In some classes, the value function is actually convex, and then one specifically approximates with suprema (i.e., max-plus sums) of affine functions. Note that the space of convex functions is a max-plus linear space, or moduloid. In extending those concepts to game problems, one finds a different function space, and a different algebra, to be appropriate. Here we consider functions which may be represented using infima (i.e., min-max sums) of max-plus affine functions. It is natural to refer to the class of functions so represented as the min-max linear space (or moduloid) of max-plus hypo-convex functions. We examine this space, the associated notion of duality, and min-max basis expansions. In using these methods for the solution of control problems, and now games, a critical step is complexity reduction. In particular, one needs to find reduced-complexity expansions which approximate the function as well as possible. We obtain a solution to this complexity-reduction problem in the case of min-max expansions.
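
    The max-plus expansion mentioned above can be illustrated compactly: a convex function is approximated as the supremum of finitely many affine minorants, here tangent lines at sample points. The 1D example and the choice of expansion points are our own illustration.

```python
import numpy as np

def maxplus_approx(f, df, xs, x):
    """Approximate a convex f as the supremum of affine minorants
    (tangent lines at expansion points xs), evaluated on grid x."""
    planes = [f(xi) + df(xi) * (x - xi) for xi in xs]
    return np.max(planes, axis=0)          # max-plus sum of affine functions

x = np.linspace(-2, 2, 201)
approx = maxplus_approx(lambda t: t**2, lambda t: 2 * t,
                        np.linspace(-2, 2, 7), x)   # 7-term expansion of t^2
```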

  4. Harmony: EEG/MEG Linear Inverse Source Reconstruction in the Anatomical Basis of Spherical Harmonics

    PubMed Central

    Petrov, Yury

    2012-01-01

    EEG/MEG source localization based on a "distributed solution" is severely underdetermined, because the number of sources is much larger than the number of measurements. In particular, this makes the solution strongly affected by sensor noise. A new way to constrain the problem is presented. By using the anatomical basis of spherical harmonics (or spherical splines) instead of single dipoles, the dimensionality of the inverse solution is greatly reduced without sacrificing the quality of the data fit. The smoothness of the resulting solution reduces the surface bias and scatter of the sources (incoherency) compared to the popular minimum-norm algorithms where a single-dipole basis is used (MNE, depth-weighted MNE, dSPM, sLORETA, LORETA, IBF) and allows the effect of sensor noise to be reduced efficiently. This approach, termed Harmony, performed well when applied to experimental data (two exemplars of early evoked potentials) and showed better localization precision and solution coherence than the other tested algorithms when applied to realistically simulated data. PMID:23071497
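
    The dimensionality-reduction step is easy to sketch: expand the source distribution in a smooth anatomical basis B and solve a small regularized least-squares problem for its coefficients, instead of one unknown per dipole. All shapes, the random stand-in matrices, and the Tikhonov regularizer are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_sensors, n_dipoles, n_basis = 64, 5000, 100   # illustrative sizes
L = rng.normal(size=(n_sensors, n_dipoles))     # stand-in lead-field matrix
B = rng.normal(size=(n_dipoles, n_basis))       # stand-in anatomical basis
m = rng.normal(size=n_sensors)                  # stand-in measurements

A = L @ B                                       # forward model in the reduced basis
lam = 1e-2                                      # Tikhonov regularization strength
c = np.linalg.solve(A.T @ A + lam * np.eye(n_basis), A.T @ m)
sources = B @ c                                 # smooth reconstructed source map
```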

  5. Transmission intensity disturbance in a rotating polarizer

    NASA Astrophysics Data System (ADS)

    Fan, J. Y.; Li, H. X.; Wu, F. Q.

    2008-01-01

    Random disturbance was observed in the transmission intensity of various rotating prism polarizers when they were used in optical systems. As a result, the transmitted intensity exhibited significant cyclic deviation from the Malus cosine-squared law with rotation of the prisms. The disturbance spoils the quality of the light transmitted through the polarizer and thus dramatically degrades the accuracy of measurements when prism polarizers are used in the light path. A rigorous model is presented on the solid basis of multi-beam interference; theoretical results show good agreement with measured values and also indicate an effective method for reducing the disturbance.

  6. A Review of Urban Low-carbon Traffic Assessment

    NASA Astrophysics Data System (ADS)

    Chen, Jing; Yao, Jingjing

    2017-12-01

    Transportation not only promotes social and economic development and improves people's living standards, but its high energy consumption and heavy pollution have also brought a series of energy and environmental problems. In order to reduce the impact on the environment, many countries have placed the development of low-carbon transport on the agenda as part of socio-economic development. On the basis of understanding the background and connotation of low-carbon transportation, this paper reviews and collates the evaluation index systems and evaluation methods of urban low-carbon transportation, to provide a reference for urban low-carbon transportation research.

  7. Advances in dental local anesthesia techniques and devices: An update

    PubMed Central

    Saxena, Payal; Gupta, Saurabh K.; Newaskar, Vilas; Chandra, Anil

    2013-01-01

    Although local anesthesia remains the backbone of pain control in dentistry, research continues to seek new and better means of managing pain. Most of this research is focused on improvements in anesthetic agents, delivery devices and the techniques involved. Newer technologies have been developed that can assist the dentist in providing enhanced pain relief with reduced injection pain and fewer adverse effects. This overview will inform practicing dentists about newer devices and methods of pain control, comparing them with those used earlier on the basis of available research and clinical studies. PMID:24163548

  8. Specific methodology for capacitance imaging by atomic force microscopy: A breakthrough towards an elimination of parasitic effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Estevez, Ivan; Concept Scientific Instruments, ZA de Courtaboeuf, 2 rue de la Terre de Feu, 91940 Les Ulis; Chrétien, Pascal

    2014-02-24

    On the basis of a home-made nanoscale impedance measurement device associated with a commercial atomic force microscope, a specific operating process is proposed in order to improve absolute (in the sense of "non-relative") capacitance imaging by drastically reducing the parasitic effects due to stray capacitance, surface topography, and sample tilt. The method, combining a two-pass image acquisition with the exploitation of approach curves, has been validated on sets of calibration samples consisting of square parallel-plate capacitors for which theoretical capacitance values were numerically calculated.

  9. A new operational approach for solving fractional variational problems depending on indefinite integrals

    NASA Astrophysics Data System (ADS)

    Ezz-Eldien, S. S.; Doha, E. H.; Bhrawy, A. H.; El-Kalaawy, A. A.; Machado, J. A. T.

    2018-04-01

    In this paper, we propose a new accurate and robust numerical technique to approximate the solutions of fractional variational problems (FVPs) depending on indefinite integrals with a type of fixed Riemann-Liouville fractional integral. The proposed technique is based on the shifted Chebyshev polynomials as basis functions for the fractional integral operational matrix (FIOM). Together with the Lagrange multiplier method, these problems are then reduced to a system of algebraic equations, which greatly simplifies the solution process. Numerical examples are carried out to confirm the accuracy, efficiency and applicability of the proposed algorithm.

  10. Volumetric calculations in an oil field: The basis method

    USGS Publications Warehouse

    Olea, R.A.; Pawlowsky, V.; Davis, J.C.

    1993-01-01

    The basis method for estimating oil reserves in place is compared to a traditional procedure that uses ordinary kriging. In the basis method, auxiliary variables that sum to the net thickness of pay are estimated by cokriging. In theory, the procedure should be more powerful because it makes full use of the cross-correlation between variables and forces the original variables to honor interval constraints. However, at least in our case study, the practical advantages of cokriging for estimating oil in place are marginal. © 1993.

  11. Optimization of Power Generation Rights Under the Requirements of Energy Conservation and Emission Reduction

    NASA Astrophysics Data System (ADS)

    Hu-ping, YANY; Chong-wei, ZHONG; Fei-fei, YAN; Cheng-yi, TANG

    2018-03-01

    In recent years, the energy crisis and the greenhouse effect have caused wide public concern; if these issues cannot be resolved quickly, they will trouble people's lives. In response, many countries around the world have implemented policies to reduce energy consumption and greenhouse gas emissions. In our country, the electric power industry has made a great contribution to people's daily lives and to the development of industry, but it is also an industry of high consumption and high emissions. In order to realize the sustainable development of society, energy conservation and emission reduction in the power industry must be an important part of achieving this goal. In this context, power generation rights trading has become a hot topic in energy conservation and emission reduction. By shifting electricity generation between units with different power efficiencies and coal consumption rates, it can achieve the targets of reducing coal consumption, reducing network loss, reducing greenhouse gas emissions, increasing social benefit, and so on. This article puts forward an optimal energy model on the basis of guaranteeing safety and environmental protection. The IEEE30, IEEE39, IEEE57 and IEEE118 node systems are used as examples, with control groups set up to prove the practicality of the presented model. The model is solved with an interior-point method.

  12. Growth of BaSi2 continuous films on Ge(111) by molecular beam epitaxy and fabrication of p-BaSi2/n-Ge heterojunction solar cells

    NASA Astrophysics Data System (ADS)

    Takabe, Ryota; Yachi, Suguru; Tsukahara, Daichi; Toko, Kaoru; Suemasu, Takashi

    2017-05-01

    We grew BaSi2 films on Ge(111) substrates by various growth methods based on molecular beam epitaxy (MBE). First, we attempted to form BaSi2 films directly on Ge(111) by MBE without templates. We next formed BaSi2 films using BaGe2 templates, as commonly used for MBE growth of BaSi2 on Si substrates. Contrary to our prediction, the lateral growth of BaSi2 was not promoted by these two methods; BaSi2 formed not a continuous film but islands. Although streaky reflection high-energy electron diffraction patterns were observed inside the growth chamber, no X-ray diffraction lines of BaSi2 were observed in samples taken out of the growth chamber. Such BaSi2 islands were easily oxidized. We finally attempted to form a continuous BaSi2 template layer on Ge(111) by solid phase epitaxy, that is, the deposition of amorphous Ba-Si layers onto MBE-grown BaSi2 epitaxial islands, followed by post-annealing. We achieved the formation of an approximately 5-nm-thick BaSi2 continuous layer by this method. Using this BaSi2 layer as a template, we succeeded in forming a-axis-oriented 520-nm-thick BaSi2 epitaxial films on Ge substrates, although (111)-oriented Si grains were included in the grown layer. We next formed a B-doped p-BaSi2(20 nm)/n-Ge(111) heterojunction solar cell. A wide spectral response from 400 to 2000 nm was achieved. At an external bias voltage of 1 V, the external quantum efficiency reached as high as 60%, demonstrating the great potential of the BaSi2/Ge combination. However, the efficiency of the solar cell under AM1.5 illumination was quite low (0.1%), and the origin of this low efficiency was examined.

  13. Elastic-wave-mode separation in TTI media with inverse-distance weighted interpolation involving position shading

    NASA Astrophysics Data System (ADS)

    Wang, Jian; Meng, Xiaohong; Zheng, Wanqiu

    2017-10-01

    The elastic-wave reverse-time migration of inhomogeneous anisotropic media has become a focus of current research. In order to ensure the accuracy of the migration, it is necessary to separate the wavefield into P-wave and S-wave modes before migration. For inhomogeneous media, the Kelvin-Christoffel equation can be solved in the wave-number domain using the anisotropic parameters of the mesh nodes, and the polarization vectors of the P-wave and S-wave at each node can be calculated and transformed into the space domain to obtain quasi-differential operators. However, this method is computationally expensive, especially the construction of the quasi-differential operators. To reduce the computational complexity, wave-mode separation in the mixed domain can be realized on the basis of reference models in the wave-number domain, but conventional interpolation methods and reference model selection methods reduce the separation accuracy. To further improve the separation, this paper introduces an inverse-distance interpolation method involving position shading and uses a random-points scheme for reference model selection. The method adds to the conventional IDW algorithm a spatial weight coefficient K that reflects the orientation of the reference points, so the interpolation takes into account the combined effects of the distance and azimuth of the reference points; a sketch of this weighting idea follows below. Numerical simulation shows that the proposed method can separate the wave modes more accurately using fewer reference models and has better practical value.
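
    The closed form of the position-shading weight K is not given in the abstract, so the following minimal Python sketch shows generic inverse-distance weighting with an added directional factor; the cosine-based form of that factor is an assumption for illustration, not the paper's formula.

        import numpy as np

        def idw_with_azimuth(query, ref_pts, ref_vals, power=2.0, k_dir=0.5):
            """Inverse-distance weights scaled by a directional (azimuth) factor.

            The direction weight below is an illustrative stand-in for the
            paper's position-shading coefficient K, which the abstract does
            not specify in closed form.
            """
            d = ref_pts - query                      # offsets to reference models
            dist = np.linalg.norm(d, axis=1) + 1e-12
            w = 1.0 / dist**power                    # classical IDW weight
            # Upweight reference points whose direction differs from the mean
            # direction, so clustered points do not dominate the estimate.
            u = d / dist[:, None]
            mean_dir = u.mean(axis=0)
            mean_dir /= np.linalg.norm(mean_dir) + 1e-12
            w *= 1.0 + k_dir * (1.0 - u @ mean_dir)  # assumed form of the K factor
            w /= w.sum()
            return w @ ref_vals

        pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        vals = np.array([1.0, 2.0, 2.0, 3.0])
        print(idw_with_azimuth(np.array([0.3, 0.4]), pts, vals))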

  14. A simple procedure for construction of the orthonormal basis vectors of irreducible representations of O(5) in the O_T(3) ⊗ O_N(2) basis

    NASA Astrophysics Data System (ADS)

    Pan, Feng; Ding, Xiaoxue; Launey, Kristina D.; Draayer, J. P.

    2018-06-01

    A simple and effective algebraic isospin projection procedure for constructing orthonormal basis vectors of irreducible representations of O(5) ⊃ O_T(3) ⊗ O_N(2) from those in the canonical O(5) ⊃ SU_Λ(2) ⊗ SU_I(2) basis is outlined. The expansion coefficients are components of null-space vectors of the projection matrix, which in general has four nonzero elements in each row. Explicit formulae for evaluating O_T(3)-reduced matrix elements of O(5) generators are derived.

  15. Paired Pulse Basis Functions for the Method of Moments EFIE Solution of Electromagnetic Problems Involving Arbitrarily-shaped, Three-dimensional Dielectric Scatterers

    NASA Technical Reports Server (NTRS)

    MacKenzie, Anne I.; Rao, Sadasiva M.; Baginski, Michael E.

    2007-01-01

    A pair of basis functions is presented for the surface-integral, method-of-moments solution of scattering by arbitrarily-shaped, three-dimensional dielectric bodies. Equivalent surface currents are represented by orthogonal unit pulse vectors in conjunction with triangular patch modeling. The electric field integral equation is employed with closed geometries for dielectric bodies; the method may also be applied to conductors. Radar cross section results are shown for dielectric bodies having canonical spherical, cylindrical, and cubic shapes. Pulse basis function results are compared to results by other methods.

  16. Reducing cylinder drag by adding a plate

    NASA Astrophysics Data System (ADS)

    Frolov, Vladimir A.; Kozlova, Anna S.

    2017-10-01

    Reducing the drag of bodies is a central problem of modern aerohydrodynamics. The paper presents theoretical and experimental studies of a new method for reducing the drag of a circular cylinder: installing a flat plate along the flow in front of the cylinder. The theoretical investigation of the drag was carried out using FlowSimulation software, and an experimental study was performed in an open wind tunnel. The drag coefficient of the cylinder was measured for different locations of the flat plate relative to the cylinder, characterized by two geometric parameters: the width of the gap between the cylinder and the plate, and the meridional angle of the plate with respect to the cylinder. On the basis of numerical and physical modeling, values of the drag coefficient for the cylinder/plate combination are presented, and the plate locations are established for which the total drag coefficient of the combination is less than that of the cylinder alone.

  17. Reduced Stress Tensor and Dissipation and the Transport of Lamb Vector

    NASA Technical Reports Server (NTRS)

    Wu, Jie-Zhi; Zhou, Ye; Wu, Jian-Ming

    1996-01-01

    We develop a methodology to ensure that the stress tensor, regardless of its number of independent components, can be reduced to an exactly equivalent one which has the same number of independent components as the surface force. It is applicable to the momentum balance if the shear viscosity is constant. A direct application of this method to the energy balance also leads to a reduction of the dissipation rate of kinetic energy. Following this procedure, significant savings in analysis and computation may be achieved. For turbulent flows, this strategy immediately implies that a given Reynolds stress model can always be replaced by a reduced one before putting it into computation. Furthermore, we show how the modeling of the Reynolds stress tensor can be reduced to that of the mean turbulent Lamb vector alone, which is much simpler. As a first step of this alternative modeling development, we derive the governing equations for the Lamb vector and its square. These equations form a basis for new second-order closure schemes and, we believe, compare favorably with the traditional Reynolds stress transport equation.

  18. A modified temporal criterion to meta-optimize the extended Kalman filter for land cover classification of remotely sensed time series

    NASA Astrophysics Data System (ADS)

    Salmon, B. P.; Kleynhans, W.; Olivier, J. C.; van den Bergh, F.; Wessels, K. J.

    2018-05-01

    Humans are transforming land cover at an ever-increasing rate. Accurate geographical maps of land cover, especially of rural and urban settlements, are essential to planning sustainable development. Time series extracted from MODerate resolution Imaging Spectroradiometer (MODIS) land surface reflectance products have been used to differentiate land cover classes by analyzing the seasonal patterns in reflectance values. The proper fitting of a parametric model to these time series usually requires several adjustments to the regression method. To reduce the workload, the parameters of the regression method are usually set globally for a geographical area. In this work we have modified a meta-optimization approach so that the regression method's parameters are set on a per-time-series basis. The standard deviation of the model parameters and the magnitude of the residuals are used as the scoring function. We successfully fitted a triply modulated model to the seasonal patterns of our study area using a non-linear extended Kalman filter (EKF). The approach uses temporal information, which significantly reduces the processing time and storage requirements for processing each time series. It also derives reliability metrics for each time series individually. The features extracted using the proposed method are classified with a support vector machine, and the performance of the method is compared to the original approach on our ground truth data.
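
    The abstract does not write out the triply modulated model; a common form in this MODIS literature is y_t = μ + α·cos(ωt + φ) with slowly varying mean, amplitude and phase. A minimal hedged EKF sketch in Python tracking those three parameters with random-walk dynamics follows; the noise levels are illustrative, not the paper's meta-optimized settings.

        import numpy as np

        def ekf_triply_modulated(y, omega, q=1e-4, r=1e-2):
            """EKF tracking mean, amplitude, phase of y_t ~ mu + a*cos(omega*t + phi)."""
            x = np.array([y.mean(), y.std(), 0.0])    # state: [mu, a, phi]
            P = np.eye(3)
            Q, R = q * np.eye(3), r
            track = []
            for t, yt in enumerate(y):
                P = P + Q                              # predict (random-walk dynamics)
                c = np.cos(omega * t + x[2])
                s = np.sin(omega * t + x[2])
                H = np.array([1.0, c, -x[1] * s])      # Jacobian of the observation
                innov = yt - (x[0] + x[1] * c)
                S = H @ P @ H + R
                K = P @ H / S                          # Kalman gain (scalar observation)
                x = x + K * innov
                P = P - np.outer(K, H @ P)
                track.append(x.copy())
            return np.array(track)

        t = np.arange(365)
        omega = 2 * np.pi / 365.0
        y = 0.3 + 0.1 * np.cos(omega * t + 0.5) + 0.01 * np.random.randn(t.size)
        states = ekf_triply_modulated(y, omega)
        print(states[-1])   # should roughly approach [0.3, 0.1, 0.5]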

  19. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles

    PubMed Central

    Wang, Xuan; Liu, Jinghong; Zhou, Qianfei

    2016-01-01

    In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions. PMID:28029145

  20. Real-Time Multi-Target Localization from Unmanned Aerial Vehicles.

    PubMed

    Wang, Xuan; Liu, Jinghong; Zhou, Qianfei

    2016-12-25

    In order to improve the reconnaissance efficiency of unmanned aerial vehicle (UAV) electro-optical stabilized imaging systems, a real-time multi-target localization scheme based on a UAV electro-optical stabilized imaging system is proposed. First, a target location model is studied. Then, the geodetic coordinates of multi-targets are calculated using the homogeneous coordinate transformation. On the basis of this, two methods which can improve the accuracy of the multi-target localization are proposed: (1) the real-time zoom lens distortion correction method; (2) a recursive least squares (RLS) filtering method based on UAV dead reckoning. The multi-target localization error model is established using Monte Carlo theory. In an actual flight, the UAV flight altitude is 1140 m. The multi-target localization results are within the range of allowable error. After we use a lens distortion correction method in a single image, the circular error probability (CEP) of the multi-target localization is reduced by 7%, and 50 targets can be located at the same time. The RLS algorithm can adaptively estimate the location data based on multiple images. Compared with multi-target localization based on a single image, the CEP of the multi-target localization using RLS is reduced by 25%. The proposed method can be implemented on a small circuit board to operate in real time. This research is expected to significantly benefit small UAVs which need multi-target geo-location functions.

  1. An all-at-once reduced Hessian SQP scheme for aerodynamic design optimization

    NASA Technical Reports Server (NTRS)

    Feng, Dan; Pulliam, Thomas H.

    1995-01-01

    This paper introduces a computational scheme for solving a class of aerodynamic design problems that can be posed as nonlinear equality constrained optimizations. The scheme treats the flow and design variables as independent variables, and solves the constrained optimization problem via reduced Hessian successive quadratic programming. It updates the design and flow variables simultaneously at each iteration and allows flow variables to be infeasible before convergence. The solution of an adjoint flow equation is never needed. In addition, a range space basis is chosen so that in a certain sense the 'cross term' ignored in reduced Hessian SQP methods is minimized. Numerical results for a nozzle design using the quasi-one-dimensional Euler equations show that this scheme is computationally efficient and robust. The computational cost of a typical nozzle design is only a fraction more than that of the corresponding analysis flow calculation. Superlinear convergence is also observed, which agrees with the theoretical properties of this scheme. All optimal solutions are obtained by starting far away from the final solution.

  2. Analysis of world terror networks from the reduced Google matrix of Wikipedia

    NASA Astrophysics Data System (ADS)

    El Zant, Samer; Frahm, Klaus M.; Jaffrès-Runser, Katia; Shepelyansky, Dima L.

    2018-01-01

    We apply the reduced Google matrix method to analyze interactions between 95 terrorist groups and determine their relationships and influence on 64 world countries. This is done on the basis of the Google matrix of the English Wikipedia (2017) composed of 5 416 537 articles which accumulate a great part of global human knowledge. The reduced Google matrix takes into account the direct and hidden links between a selection of 159 nodes (articles) appearing due to all paths of a random surfer moving over the whole network. As a result we obtain the network structure of terrorist groups and their relations with selected countries, including hidden indirect links. Using the sensitivity of PageRank to a weight variation of specific links we determine the geopolitical sensitivity and influence of specific terrorist groups on world countries. The world maps of the sensitivity of various countries to the influence of specific terrorist groups are obtained. We argue that this approach can find useful application in more extensive and detailed database analyses.

  3. Reduced order surrogate modelling (ROSM) of high dimensional deterministic simulations

    NASA Astrophysics Data System (ADS)

    Mitry, Mina

    Often, computationally expensive engineering simulations can prohibit the engineering design process. As a result, designers may turn to a less computationally demanding approximate, or surrogate, model to facilitate their design process. However, owing to the curse of dimensionality, classical surrogate models become too computationally expensive for high dimensional data. To address this limitation of classical methods, we develop linear and non-linear Reduced Order Surrogate Modelling (ROSM) techniques. Two algorithms are presented, which are based on a combination of linear/kernel principal component analysis and radial basis functions. These algorithms are applied to subsonic and transonic aerodynamic data, as well as a model for a chemical spill in a channel. The results of this thesis show that ROSM can provide a significant computational benefit over classical surrogate modelling, sometimes at the expense of a minor loss in accuracy.
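
    As a hedged sketch of the linear variant of this idea (the kernel variant would swap in sklearn's KernelPCA with fit_inverse_transform=True), the example below reduces high-dimensional snapshots with PCA and interpolates the reduced coordinates over the design space with radial basis functions; the toy snapshot generator and sizes are invented.

        import numpy as np
        from sklearn.decomposition import PCA
        from scipy.interpolate import RBFInterpolator

        # Toy stand-in for expensive simulations: each 2-D design point maps
        # to a 500-dimensional field snapshot.
        rng = np.random.default_rng(0)
        X = rng.uniform(0, 1, size=(40, 2))                   # design parameters
        grid = np.linspace(0, 1, 500)
        snapshots = np.array([np.sin(4 * x[0] + grid) * x[1] for x in X])

        pca = PCA(n_components=5).fit(snapshots)              # linear reduction step
        coeffs = pca.transform(snapshots)                     # reduced coordinates
        surrogate = RBFInterpolator(X, coeffs)                # RBF over parameters

        x_new = np.array([[0.3, 0.7]])
        field_pred = pca.inverse_transform(surrogate(x_new))  # reconstructed snapshot
        truth = np.sin(4 * 0.3 + grid) * 0.7
        print(np.max(np.abs(field_pred - truth)))             # small surrogate error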

  4. [Application of State Space model in the evaluation of the prevention and control for mumps].

    PubMed

    Luo, C; Li, R Z; Xu, Q Q; Xiong, P; Liu, Y X; Xue, F Z; Xu, Q; Li, X J

    2017-09-10

    Objective: To analyze the epidemiological characteristics of mumps in 2012 and 2014, and to explore the preventive effect of the second dose of mumps-containing vaccine (MuCV) on mumps in Shandong province. Methods: On the basis of certain model assumptions, a state space model was formulated, and iterated filtering was applied to the epidemic model to estimate its parameters. Results: The basic reproduction number (R_0) for children in schools was 4.49 (95% CI: 4.30-4.67) and 2.50 (95% CI: 2.38-2.61) for 2012 and 2014, respectively. Conclusions: The state space model seems suitable for describing mumps prevalence. The policy of 2-dose MuCV can effectively reduce the total number of patients. Children in schools are the key to reducing mumps.

  5. Calculation of wave-functions with frozen orbitals in mixed quantum mechanics/molecular mechanics methods. II. Application of the local basis equation.

    PubMed

    Ferenczy, György G

    2013-04-05

    The application of the local basis equation (Ferenczy and Adams, J. Chem. Phys. 2009, 130, 134108) in mixed quantum mechanics/molecular mechanics (QM/MM) and quantum mechanics/quantum mechanics (QM/QM) methods is investigated. This equation is suitable to derive local basis nonorthogonal orbitals that minimize the energy of the system and it exhibits good convergence properties in a self-consistent field solution. These features make the equation appropriate to be used in mixed QM/MM and QM/QM methods to optimize orbitals in the field of frozen localized orbitals connecting the subsystems. Calculations performed for several properties in diverse systems show that the method is robust with various choices of the frozen orbitals and frontier atom properties. With appropriate basis set assignment, it gives results equivalent with those of a related approach [G. G. Ferenczy, previous paper in this issue] using the Huzinaga equation. Thus, the local basis equation can be used in mixed QM/MM methods with small size quantum subsystems to calculate properties in good agreement with reference Hartree-Fock-Roothaan results. It is shown that bond charges are not necessary when the local basis equation is applied, although they are required for the self-consistent field solution of the Huzinaga equation based method. Conversely, the deformation of the wave-function near to the boundary is observed without bond charges and this has a significant effect on deprotonation energies but a less pronounced effect when the total charge of the system is conserved. The local basis equation can also be used to define a two layer quantum system with nonorthogonal localized orbitals surrounding the central delocalized quantum subsystem. Copyright © 2013 Wiley Periodicals, Inc.

  6. Acquisition of STEM Images by Adaptive Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xie, Weiyi; Feng, Qianli; Srinivasan, Ramprakash

    Compressive Sensing (CS) allows a signal to be sparsely measured first and accurately recovered later in software [1]. In scanning transmission electron microscopy (STEM), it is possible to compress an image spatially by reducing the number of measured pixels, which decreases electron dose and increases sensing speed [2,3,4]. The two requirements for CS to work are: (1) sparsity of basis coefficients and (2) incoherence of the sensing system and the representation system. However, when pixels are missing from the image, it is difficult to have an incoherent sensing matrix. Nevertheless, dictionary learning techniques such as Beta-Process Factor Analysis (BPFA) [5] are able to simultaneously discover a basis and the sparse coefficients in the case of missing pixels. On top of CS, we would like to apply active learning [6,7] to further reduce the proportion of pixels being measured, while maintaining image reconstruction quality. Suppose we initially sample 10% of random pixels. We wish to select the next 1% of pixels that are most useful in recovering the image. Now, we have 11% of pixels, and we want to decide the next 1% of “most informative” pixels. Active learning methods are online and sequential in nature. Our goal is to adaptively discover the best sensing mask during acquisition using feedback about the structures in the image. In the end, we hope to recover a high quality reconstruction with a dose reduction relative to the non-adaptive (random) sensing scheme. In doing this, we try three metrics applied to the partial reconstructions for selecting the new set of pixels: (1) variance, (2) Kullback-Leibler (KL) divergence using a Radial Basis Function (RBF) kernel, and (3) entropy. Figs. 1 and 2 display the comparison of Peak Signal-to-Noise Ratio (PSNR) using these three different active learning methods at different percentages of sampled pixels. At the 20% level, all three active learning methods underperform the original CS without active learning. However, they all beat the original CS as more of the “most informative” pixels are sampled. One can also argue that CS equipped with active learning requires fewer sampled pixels to achieve the same value of PSNR than CS with pixels randomly sampled, since all three PSNR curves with active learning grow at a faster pace than that without active learning. For this particular STEM image, by observing the reconstructed images and the sensing masks, we find that while the method based on the RBF kernel acquires samples more uniformly, the one based on entropy samples more areas of significant change, thus less uniformly. The KL-divergence method performs the best in terms of reconstruction error (PSNR) for this example [8].

  7. A numerical fragment basis approach to SCF calculations.

    NASA Astrophysics Data System (ADS)

    Hinde, Robert J.

    1997-11-01

    The counterpoise method is often used to correct for basis set superposition error in calculations of the electronic structure of bimolecular systems. One drawback of this approach is the need to specify a ``reference state'' for the system; for reactive systems, the choice of an unambiguous reference state may be difficult. An example is the reaction F^- + HCl → HF + Cl^-. Two obvious reference states for this reaction are F^- + HCl and HF + Cl^-; however, different counterpoise-corrected interaction energies are obtained using these two reference states. We outline a method for performing SCF calculations which employs numerical basis functions; this method attempts to eliminate basis set superposition errors in an a priori fashion. We test the proposed method on two one-dimensional, three-center systems and discuss the possibility of extending our approach to include electron correlation effects.

  8. Synthesized airfoil data method for prediction of dynamic stall and unsteady airloads

    NASA Technical Reports Server (NTRS)

    Gangwani, S. T.

    1983-01-01

    A detailed analysis of dynamic stall experiments has led to a set of relatively compact analytical expressions, called synthesized unsteady airfoil data, which accurately describe in the time-domain the unsteady aerodynamic characteristics of stalled airfoils. An analytical research program was conducted to expand and improve this synthesized unsteady airfoil data method using additional available sets of unsteady airfoil data. The primary objectives were to reduce these data to synthesized form for use in rotor airload prediction analyses and to generalize the results. Unsteady drag data were synthesized which provided the basis for successful expansion of the formulation to include computation of the unsteady pressure drag of airfoils and rotor blades. Also, an improved prediction model for airfoil flow reattachment was incorporated in the method. Application of this improved unsteady aerodynamics model has resulted in an improved correlation between analytic predictions and measured full scale helicopter blade loads and stress data.

  9. A basic guide to overlay design using nondestructive testing equipment data

    NASA Astrophysics Data System (ADS)

    Turner, Vernon R.

    1990-08-01

    The purpose of this paper is to provide a basic and concise guide to designing asphalt concrete (AC) overlays over existing AC pavements. The basis for these designs is deflection data obtained from nondestructive testing (NDT) equipment. These data are used in design procedures which produce the required overlay thickness or an estimate of remaining pavement life. This guide enables one to design overlays or better monitor the designs being performed by others. This paper discusses three types of NDT equipment; the Asphalt Institute overlay design procedures by deflection analysis and by the effective thickness method, as well as a method of estimating remaining pavement life; and correlations between NDT equipment, including recent correlations in Washington State. Asphalt overlays provide one of the most cost-effective methods of improving existing pavements. Asphalt overlays can be used to strengthen existing pavements, to reduce maintenance costs, to increase pavement life, to provide a smoother ride, and to improve skid resistance.

  10. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials.

    PubMed

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-11-01

    Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.
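
    A minimal textbook implementation of the Prony step is sketched below, assuming the standard linear-prediction formulation; it covers only the ARMA-fitting stage, not the authors' full pipeline (the Legendre-polynomial compression of the coefficients is omitted).

        import numpy as np
        from scipy.signal import lfilter

        def prony(h, nb, na):
            """Fit b (len nb+1) and a (len na+1) so b/a has impulse response ~ h.

            Textbook Prony: linear prediction for the denominator, then the
            numerator from the first nb+1 samples (h[n] = 0 for n < 0).
            """
            N = len(h)
            # Denominator: h[n] = -sum_k a[k] h[n-k] for n > nb.
            rows = np.array([h[n - 1:n - na - 1:-1] if n - na - 1 >= 0 else
                             np.pad(h[n - 1::-1], (0, na - n))
                             for n in range(nb + 1, N)])
            a_tail = np.linalg.lstsq(rows, -h[nb + 1:N], rcond=None)[0]
            a = np.concatenate(([1.0], a_tail))
            # Numerator: b[n] = sum_k a[k] h[n-k] for n = 0..nb.
            b = np.array([sum(a[k] * h[n - k] for k in range(min(n, na) + 1))
                          for n in range(nb + 1)])
            return b, a

        # Round-trip check on a known filter standing in for an HRIR.
        b_true, a_true = np.array([0.5, -0.2]), np.array([1.0, -0.6, 0.08])
        h = lfilter(b_true, a_true, np.eye(1, 64).ravel())   # impulse response
        b_est, a_est = prony(h, 1, 2)
        print(b_est, a_est)   # recovers b_true, a_true up to numerical noise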

  11. Natural differential operations on manifolds: an algebraic approach

    NASA Astrophysics Data System (ADS)

    Katsylo, P. I.; Timashev, D. A.

    2008-10-01

    Natural algebraic differential operations on geometric quantities on smooth manifolds are considered. A method for the investigation and classification of such operations is described, the method of IT-reduction. With it the investigation of natural operations reduces to the analysis of rational maps between k-jet spaces which are equivariant with respect to certain algebraic groups. On the basis of the method of IT-reduction a finite generation theorem is proved: for tensor bundles V, W → M all the natural differential operations D: Γ(V) → Γ(W) of degree at most d can be algebraically constructed from some finite set of such operations. Conceptual proofs of known results on the classification of natural linear operations on arbitrary and symplectic manifolds are presented. A non-existence theorem is proved for natural deformation quantizations on Poisson manifolds and symplectic manifolds. Bibliography: 21 titles.

  12. Computing Finite-Time Lyapunov Exponents with Optimally Time Dependent Reduction

    NASA Astrophysics Data System (ADS)

    Babaee, Hessam; Farazmand, Mohammad; Sapsis, Themis; Haller, George

    2016-11-01

    We present a method to compute Finite-Time Lyapunov Exponents (FTLE) of a dynamical system using the Optimally Time-Dependent (OTD) reduction recently introduced by H. Babaee and T. P. Sapsis. The OTD modes are a set of finite-dimensional, time-dependent, orthonormal basis modes {u_i(x, t)}, i = 1, ..., N, that capture the directions associated with transient instabilities. The evolution equation of the OTD modes is derived from a minimization principle that optimally approximates the most unstable directions over finite times. To compute the FTLE, we evolve a single OTD mode along with the nonlinear dynamics. We approximate the FTLE from the reduced system obtained by projecting the instantaneous linearized dynamics onto the OTD mode. This results in a significant reduction in the computational cost compared to conventional methods for computing FTLE. We demonstrate the efficiency of our method for the double-gyre and ABC flows. ARO project 66710-EG-YIP.

  13. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    NASA Astrophysics Data System (ADS)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach makes it possible to reduce the complete system to a unique polynomial equation in one variable governing all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system and recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.

  14. Motion Detection in Ultrasound Image-Sequences Using Tensor Voting

    NASA Astrophysics Data System (ADS)

    Inba, Masafumi; Yanagida, Hirotaka; Tamura, Yasutaka

    2008-05-01

    Motion detection in ultrasound image sequences using tensor voting is described. We have been developing an ultrasound imaging system adopting a combination of coded excitation and synthetic aperture focusing techniques. With our method, the frame rate of the system at a distance of 150 mm reaches 5000 frames/s. Sparse arrays and short-duration coded ultrasound signals are used for high-speed data acquisition. However, many artifacts appear in the reconstructed image sequences because of the incompleteness of the transmitted code. To reduce the artifacts, we have examined the application of tensor voting to the imaging method, which adopts both coded excitation and synthetic aperture techniques. In this study, the basis for applying tensor voting and the motion detection method to ultrasound images is derived. It was confirmed that velocity detection and feature enhancement are possible using tensor voting in the time and space of simulated ultrasound three-dimensional image sequences.

  15. Research on the analytical method about influence of gas leakage and explosion on subway

    NASA Astrophysics Data System (ADS)

    Ji, Wendong; Yang, Ligong; Chen, Lin

    2018-05-01

    With the construction and development of city subways, the cross impact of underground rail transit and the gas pipe network is becoming more and more serious, but there has been no analytical method for the impact of gas explosions on the subway. In this paper, the gas leakage is converted to an equivalent TNT charge, on the basis of which the explosive impact load is calculated. Given the concrete manifestation of a gas explosion, it is convenient for subsequent calculation to treat the explosive impact load as an equivalent uniform load within a certain range. The overlying soil of the subway station protects the subway, significantly reducing the displacement of the subway structure during the explosion. Analysis of an actual case shows that this method can be successfully applied to the quantitative analysis of such accidents.
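
    The equivalence step is standard but not spelled out in the abstract. A commonly used form, with η an empirical explosion-yield factor and ΔH the heats of combustion (the values below are typical literature figures, not the paper's), is

        m_TNT = η · m_gas · ΔH_gas / ΔH_TNT

    For instance, with η = 0.05, ΔH_gas ≈ 55.6 MJ/kg (methane) and ΔH_TNT ≈ 4.5 MJ/kg, a 100 kg leak corresponds to roughly 0.05 × 100 × 55.6 / 4.5 ≈ 62 kg of TNT, from which the blast load can be estimated.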

  16. A mixed-mode crack analysis of isotropic solids using conservation laws of elasticity

    NASA Technical Reports Server (NTRS)

    Yau, J. F.; Wang, S. S.; Corten, H. T.

    1980-01-01

    A simple and convenient method of analysis for studying two-dimensional mixed-mode crack problems is presented. The analysis is formulated on the basis of conservation laws of elasticity and of fundamental relationships in fracture mechanics. The problem is reduced to the determination of mixed-mode stress-intensity factor solutions in terms of conservation integrals involving known auxiliary solutions. One of the salient features of the present analysis is that the stress-intensity solutions can be determined directly by using information extracted in the far field. Several examples with solutions available in the literature are solved to examine the accuracy and other characteristics of the current approach. This method is demonstrated to be superior in its numerical simplicity and computational efficiency to other approaches. Solutions of more complicated and practical engineering fracture problems dealing with a crack emanating from a circular hole are also presented to illustrate the capacity of this method.

  17. Research on sparse feature matching of improved RANSAC algorithm

    NASA Astrophysics Data System (ADS)

    Kong, Xiangsi; Zhao, Xian

    2018-04-01

    In this paper, a sparse feature matching method based on a modified RANSAC algorithm is proposed to improve precision and speed. Firstly, the feature points of the images are extracted using the SIFT algorithm. Then, the image pair is roughly matched by generating SIFT feature descriptors. At last, the precision of image matching is optimized by the modified RANSAC algorithm. The RANSAC algorithm is improved in three aspects: instead of the homography matrix, this paper uses the fundamental matrix generated by the 8-point algorithm as the model; the sample is selected by a random block selecting method, which ensures uniform distribution and accuracy; and a sequential probability ratio test (SPRT) is added on the basis of standard RANSAC, which cuts down the overall running time of the algorithm. The experimental results show that this method not only achieves higher matching accuracy, but also greatly reduces computation and improves matching speed.
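
    For reference, a hedged sketch of the baseline pipeline in Python with OpenCV follows (SIFT matching, then RANSAC with the fundamental matrix as the model). The paper's random block sampling and SPRT additions are not part of OpenCV's stock RANSAC, and the file names and thresholds are illustrative.

        import cv2
        import numpy as np

        img1 = cv2.imread("left.jpg", cv2.IMREAD_GRAYSCALE)   # illustrative filenames
        img2 = cv2.imread("right.jpg", cv2.IMREAD_GRAYSCALE)

        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)

        # Rough matching on SIFT descriptors with Lowe's ratio test.
        matcher = cv2.BFMatcher()
        good = [m for m, n in matcher.knnMatch(d1, d2, k=2)
                if m.distance < 0.75 * n.distance]
        pts1 = np.float32([k1[m.queryIdx].pt for m in good])
        pts2 = np.float32([k2[m.trainIdx].pt for m in good])

        # RANSAC with the 8-point fundamental matrix as the model.
        F, inlier_mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                                ransacReprojThreshold=1.0,
                                                confidence=0.99)
        print(F, int(inlier_mask.sum()), "inliers of", len(good))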

  18. Comparison of effectiveness of convection-, transpiration-, and film-cooling methods with air as coolant

    NASA Technical Reports Server (NTRS)

    Eckert, E R G; Livingood, N B

    1954-01-01

    Various parts of aircraft propulsion engines that are in contact with hot gases often require cooling. Transpiration and film cooling, new methods that supposedly utilize cooling air more effectively than conventional convection cooling, have already been proposed. This report presents material necessary for a comparison of the cooling requirements of these three methods. Correlations that are regarded by the authors as the most reliable today are employed in evaluating each of the cooling processes. Calculations for the special case in which the gas velocity is constant along the cooled wall (flat plate) are presented. The calculations reveal that a comparison of the three cooling processes can be made on quite a general basis. The superiority of transpiration cooling is clearly shown for both laminar and turbulent flow. This superiority is reduced when the effects of radiation are included; for gas-turbine blades, however, there is evidence indicating that radiation may be neglected.

  19. A numerical method for solving a nonlinear 2-D optimal control problem with the classical diffusion equation

    NASA Astrophysics Data System (ADS)

    Mamehrashi, K.; Yousefi, S. A.

    2017-02-01

    This paper presents a numerical solution for a nonlinear 2-D optimal control problem (2DOP). The performance index of the nonlinear 2DOP is described by a state and a control function, and the dynamic constraint of the system is given by a classical diffusion equation. It is preferred to use the Ritz method for finding the numerical solution of the problem. The method is based upon the Legendre polynomial basis. By using this method, the given nonlinear 2DOP reduces to the problem of solving a system of algebraic equations. The benefit of the method is that it provides greater flexibility in the way the given initial and boundary conditions of the problem are imposed. Moreover, compared with the eigenfunction method, satisfactory results are obtained with only a small polynomial order. This numerical approach is applicable and effective for such a kind of nonlinear 2DOP. The convergence of the method is extensively discussed and finally two illustrative examples are included to show the validity and applicability of the new technique developed in the current work.
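
    A minimal hedged sketch of the reduction the method relies on, in one dimension for brevity: a Ritz expansion in a Legendre-based basis turns a variational problem into the algebraic system K c = F. The basis choice (1 - x^2)P_k(x) and the test problem are illustrative; the paper's 2-D optimal control setting is not reproduced.

        import numpy as np
        from numpy.polynomial import Polynomial, Legendre
        from numpy.polynomial.legendre import leggauss

        def ritz_poisson_1d(f, n):
            """Ritz solution of -y'' = f on [-1, 1] with y(+-1) = 0.

            Minimizes J[y] = int(0.5*y'^2 - f*y) over span{(1 - x^2) P_k(x)};
            the variational problem reduces to the linear system K c = F.
            """
            x, w = leggauss(4 * n + 8)                   # Gauss-Legendre quadrature
            bump = Polynomial([1.0, 0.0, -1.0])          # (1 - x^2) enforces the BCs
            basis = [bump * Legendre.basis(k).convert(kind=Polynomial)
                     for k in range(n)]
            phi = np.array([p(x) for p in basis])
            dphi = np.array([p.deriv()(x) for p in basis])
            K = (dphi * w) @ dphi.T                      # stiffness matrix
            F = (phi * w) @ f(x)                         # load vector
            c = np.linalg.solve(K, F)
            return lambda t: c @ np.array([p(t) for p in basis])

        # -y'' = (pi^2/4) cos(pi x / 2) has exact solution y = cos(pi x / 2).
        f = lambda x: (np.pi**2 / 4) * np.cos(np.pi * x / 2)
        y = ritz_poisson_1d(f, 8)
        print(abs(y(0.0) - 1.0))                         # small Ritz error at x = 0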

  20. The dilemma of controlling cultural eutrophication of lakes

    PubMed Central

    Schindler, David W.

    2012-01-01

    The management of eutrophication has been impeded by reliance on short-term experimental additions of nutrients to bottles and mesocosms. These measures of proximate nutrient limitation fail to account for the gradual changes in biogeochemical nutrient cycles and nutrient fluxes from sediments, and succession of communities that are important components of whole-ecosystem responses. Erroneous assumptions about ecosystem processes and lack of accounting for hysteresis during lake recovery have further confused management of eutrophication. I conclude that long-term, whole-ecosystem experiments and case histories of lake recovery provide the only reliable evidence for policies to reduce eutrophication. The only method that has had proven success in reducing the eutrophication of lakes is reducing input of phosphorus. There are no case histories or long-term ecosystem-scale experiments to support recent claims that to reduce eutrophication of lakes, nitrogen must be controlled instead of or in addition to phosphorus. Before expensive policies to reduce nitrogen input are implemented, they require ecosystem-scale verification. The recent claim that the ‘phosphorus paradigm’ for recovering lakes from eutrophication has been ‘eroded’ has no basis. Instead, the case for phosphorus control has been strengthened by numerous case histories and large-scale experiments spanning several decades. PMID:22915669

  1. Angle-dependent strong-field molecular ionization rates with tuned range-separated time-dependent density functional theory.

    PubMed

    Sissay, Adonay; Abanador, Paul; Mauger, François; Gaarde, Mette; Schafer, Kenneth J; Lopata, Kenneth

    2016-09-07

    Strong-field ionization and the resulting electronic dynamics are important for a range of processes such as high harmonic generation, photodamage, charge resonance enhanced ionization, and ionization-triggered charge migration. Modeling ionization dynamics in molecular systems from first-principles can be challenging due to the large spatial extent of the wavefunction which stresses the accuracy of basis sets, and the intense fields which require non-perturbative time-dependent electronic structure methods. In this paper, we develop a time-dependent density functional theory approach which uses a Gaussian-type orbital (GTO) basis set to capture strong-field ionization rates and dynamics in atoms and small molecules. This involves propagating the electronic density matrix in time with a time-dependent laser potential and a spatial non-Hermitian complex absorbing potential which is projected onto an atom-centered basis set to remove ionized charge from the simulation. For the density functional theory (DFT) functional we use a tuned range-separated functional LC-PBE*, which has the correct asymptotic 1/r form of the potential and a reduced delocalization error compared to traditional DFT functionals. Ionization rates are computed for hydrogen, molecular nitrogen, and iodoacetylene under various field frequencies, intensities, and polarizations (angle-dependent ionization), and the results are shown to quantitatively agree with time-dependent Schrödinger equation and strong-field approximation calculations. This tuned DFT with GTO method opens the door to predictive all-electron time-dependent density functional theory simulations of ionization and ionization-triggered dynamics in molecular systems using tuned range-separated hybrid functionals.

  2. Angle-dependent strong-field molecular ionization rates with tuned range-separated time-dependent density functional theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sissay, Adonay; Abanador, Paul; Mauger, François

    2016-09-07

    Strong-field ionization and the resulting electronic dynamics are important for a range of processes such as high harmonic generation, photodamage, charge resonance enhanced ionization, and ionization-triggered charge migration. Modeling ionization dynamics in molecular systems from first-principles can be challenging due to the large spatial extent of the wavefunction which stresses the accuracy of basis sets, and the intense fields which require non-perturbative time-dependent electronic structure methods. In this paper, we develop a time-dependent density functional theory approach which uses a Gaussian-type orbital (GTO) basis set to capture strong-field ionization rates and dynamics in atoms and small molecules. This involves propagating the electronic density matrix in time with a time-dependent laser potential and a spatial non-Hermitian complex absorbing potential which is projected onto an atom-centered basis set to remove ionized charge from the simulation. For the density functional theory (DFT) functional we use a tuned range-separated functional LC-PBE*, which has the correct asymptotic 1/r form of the potential and a reduced delocalization error compared to traditional DFT functionals. Ionization rates are computed for hydrogen, molecular nitrogen, and iodoacetylene under various field frequencies, intensities, and polarizations (angle-dependent ionization), and the results are shown to quantitatively agree with time-dependent Schrödinger equation and strong-field approximation calculations. This tuned DFT with GTO method opens the door to predictive all-electron time-dependent density functional theory simulations of ionization and ionization-triggered dynamics in molecular systems using tuned range-separated hybrid functionals.

  3. Doubly stochastic radial basis function methods

    NASA Astrophysics Data System (ADS)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

    We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distributions are determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our method. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method not only outperforms the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).

  4. Acoustical source reconstruction from non-synchronous sequential measurements by Fast Iterative Shrinkage Thresholding Algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Liang; Antoni, Jerome; Leclere, Quentin; Jiang, Weikang

    2017-11-01

    Acoustical source reconstruction is a typical inverse problem, whose minimum frequency of reconstruction hinges on the size of the array and whose maximum frequency depends on the spacing distance between the microphones. To enlarge the frequency range of reconstruction and reduce the cost of the acquisition system, Cyclic Projection (CP), a method of sequential measurements without reference, was recently investigated (JSV, 2016, 372:31-49). In this paper, the Propagation-based Fast Iterative Shrinkage Thresholding Algorithm (Propagation-FISTA) is introduced, which improves CP in two aspects: (1) the number of acoustic sources is no longer needed, the only assumption being that of a "weakly sparse" eigenvalue spectrum; (2) the construction of the spatial basis is much easier and adaptive to practical scenarios of acoustical measurements, benefiting from the introduction of a propagation-based spatial basis. The proposed Propagation-FISTA is first investigated with different simulations and experimental setups and is next illustrated with an industrial case.
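
    FISTA itself is standard (Beck and Teboulle); a minimal sketch for the generic l1-regularized least-squares problem it solves is below. The matrix A stands in for the propagation-based spatial basis, and the problem sizes and regularization weight are illustrative.

        import numpy as np

        def fista(A, y, lam, n_iter=200):
            """Minimize 0.5*||A x - y||^2 + lam*||x||_1 with FISTA."""
            L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
            x = z = np.zeros(A.shape[1])
            t = 1.0
            for _ in range(n_iter):
                g = z - (A.T @ (A @ z - y)) / L      # gradient step on the smooth part
                x_new = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
                t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
                z = x_new + ((t - 1.0) / t_new) * (x_new - x)              # momentum step
                x, t = x_new, t_new
            return x

        rng = np.random.default_rng(1)
        A = rng.standard_normal((60, 200))
        x_true = np.zeros(200)
        x_true[[5, 50, 120]] = [1.0, -2.0, 0.5]
        x_hat = fista(A, A @ x_true, lam=0.05)
        print(np.flatnonzero(np.abs(x_hat) > 0.1))   # recovers the sparse support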

  5. Cracking the Code of Human Diseases Using Next-Generation Sequencing: Applications, Challenges, and Perspectives

    PubMed Central

    Precone, Vincenza; Del Monaco, Valentina; Esposito, Maria Valeria; De Palma, Fatima Domenica Elisa; Ruocco, Anna; D'Argenio, Valeria

    2015-01-01

    Next-generation sequencing (NGS) technologies have greatly impacted every field of molecular research, mainly because they reduce the costs and increase the throughput of DNA sequencing. These features, together with the technology's flexibility, have opened the way to a variety of applications including the study of the molecular basis of human diseases. Several analytical approaches have been developed to selectively enrich regions of interest from the whole genome in order to identify germinal and/or somatic sequence variants and to study DNA methylation. These approaches are now widely used in research, and they are already being used in routine molecular diagnostics. However, some issues are still controversial, namely, standardization of methods, data analysis and storage, and ethical aspects. Besides providing an overview of the NGS-based approaches most frequently used to study the molecular basis of human diseases at DNA level, we discuss the principal challenges and applications of NGS in the field of human genomics. PMID:26665001

  6. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.; Shi, Y.

    1990-01-01

    A comprehensive boundary element method is presented for transient thermoelastic analysis of hot section Earth-to-Orbit engine components. This time-domain formulation requires discretization of only the surface of the component, and thus provides an attractive alternative to finite element analysis for this class of problems. In addition, steep thermal gradients, which often occur near the surface, can be captured more readily since with a boundary element approach there are no shape functions to constrain the solution in the direction normal to the surface. For example, the circular disc analysis indicates the high level of accuracy that can be obtained. In fact, on the basis of reduced modeling effort and improved accuracy, it appears that the present boundary element method should be the preferred approach for general problems of transient thermoelasticity.

  7. Use of 35-mm color aerial photography to acquire mallard sex ratio data

    USGS Publications Warehouse

    Ferguson, Edgar L.; Jorde, Dennis G.; Sease, John L.

    1981-01-01

    A conventional 35-mm camera equipped with an f2.8 135-mm lens and ASA 64 color film was used to acquire sex ratio data on mallards (Anas platyrhynchos) wintering in the Platte River Valley of south-central Nebraska. Preflight focusing for a distance of 30.5 metres and setting of shutter speed at 1/2000 of a second eliminated focusing and reduced image motion problems and resulted in high-resolution, large-scale aerial photography of small targets. This technique has broad application to the problem of determining sex ratios of various species of waterfowl concentrated on wintering and staging areas. The aerial photographic method was cheaper than the ground ocular method when costs were compared on a per-100-bird basis.

  8. Grinding Method and Error Analysis of Eccentric Shaft Parts

    NASA Astrophysics Data System (ADS)

    Wang, Zhiming; Han, Qiushi; Li, Qiguang; Peng, Baoying; Li, Weihua

    2017-12-01

    Eccentric shaft parts are widely used in RV reducers and various mechanical transmissions, and precision grinding technology for such parts is now in demand. In this paper, the model of the X-C linkage relation for eccentric shaft grinding is studied. By the inversion method, the contour curve of the wheel envelope is deduced, with the distance from the center of the eccentric circle held constant. Simulation software for eccentric shaft grinding is developed and the correctness of the model is proved; the influence of the X-axis feed error, the C-axis feed error and the wheel radius error on the grinding process is analyzed, and a corresponding error calculation model is proposed. The simulation analysis provides the basis for contour error compensation.

  9. Stress and the Hair Growth Cycle: Cortisol-Induced Hair Growth Disruption.

    PubMed

    Thom, Erling

    2016-08-01

    The stress hormone, cortisol, is known to affect the function and cyclic regulation of the hair follicle. When cortisol is present at high levels, it has been demonstrated to reduce the synthesis and accelerate the degradation of important skin elements, namely hyaluronan and proteoglycans, by approximately 40%. The following discussion outlines the relationship between stress, cortisol, and the effect on the normal function of the hair follicle. As a result of this connection, important correlations have been established in the literature to form a basis for novel, effective treatments of stress-related hair growth disorders. Amongst various treatment methods and substances, oral supplementation with a specific bioavailable proteoglycan stands out as a promising new therapeutic method.

    J Drugs Dermatol. 2016;15(8):1001-1004.

  10. Efficient spot size converter for higher-order mode fiber-chip coupling.

    PubMed

    Lai, Yaxiao; Yu, Yu; Fu, Songnian; Xu, Jing; Shum, Perry Ping; Zhang, Xinliang

    2017-09-15

    We propose and demonstrate a silicon-based spot size converter (SSC), composed of two identical tapered channel waveguides and a Y-junction. The SSC is designed for first-order mode fiber-to-chip coupling on the basis of mode petal separation and the recombination method. Compared with a traditional on-chip SSC, this method offers reduced coupling loss when dealing with a higher-order mode. To the best of our knowledge, we present the first experimental observations of a higher-order SSC which is fully compatible with a standard fabrication process. Average coupling losses of 3 and 5.5 dB are predicted by simulation and demonstrated experimentally, respectively. A fully covered 3 dB bandwidth over a 1515-1585 nm wavelength range is experimentally observed.

  11. Hyperspherical Sparse Approximation Techniques for High-Dimensional Discontinuity Detection

    DOE PAGES

    Zhang, Guannan; Webster, Clayton G.; Gunzburger, Max; ...

    2016-08-04

    This work proposes a hyperspherical sparse approximation framework for detecting jump discontinuities in functions in high-dimensional spaces. The need for a novel approach results from the theoretical and computational inefficiencies of well-known approaches, such as adaptive sparse grids, for discontinuity detection. Our approach constructs the hyperspherical coordinate representation of the discontinuity surface of a function. Then sparse approximations of the transformed function are built in the hyperspherical coordinate system, with values at each point estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hypersurface, the new technique can identify jump discontinuities with significantly reduced computational cost, compared to existing methods. Several approaches are used to approximate the transformed discontinuity surface in the hyperspherical system, including adaptive sparse grid and radial basis function interpolation, discrete least squares projection, and compressed sensing approximation. Moreover, hierarchical acceleration techniques are also incorporated to further reduce the overall complexity. In conclusion, rigorous complexity analyses of the new methods are provided, as are several numerical examples that illustrate the effectiveness of our approach.

  12. Minimum energy control for a two-compartment neuron to extracellular electric fields

    NASA Astrophysics Data System (ADS)

    Yi, Guo-Sheng; Wang, Jiang; Li, Hui-Yan; Wei, Xi-Le; Deng, Bin

    2016-11-01

    The energy optimization of an extracellular electric field (EF) stimulus for a neuron is considered in this paper. We employ optimal control theory to design a low-energy EF input for a reduced two-compartment model. It works by driving the neuron to closely track a prescriptive spike train. A cost function is introduced to balance the contradictory objectives, i.e., tracking errors and EF stimulus energy. By using the calculus of variations, we transform the minimization of the cost function into a six-dimensional two-point boundary value problem (BVP). Through solving the obtained BVP in the cases of three fundamental bifurcations, it is shown that the control method is able to provide an optimal EF stimulus of reduced energy for the neuron to effectively track a prescriptive spike train. Further, the feasibility of the adopted method is interpreted from the point of view of the biophysical basis of spike initiation. These investigations are conducive to designing stimulation doses for extracellular neural stimulation and are also helpful for interpreting the effects of an extracellular field on neural activity.
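
    The paper's six-dimensional BVP depends on its two-compartment model, which the abstract does not reproduce. The sketch below shows the same reduction pattern on a toy minimum-energy control problem, solved with SciPy's solve_bvp; the toy dynamics and boundary conditions are invented for illustration.

        import numpy as np
        from scipy.integrate import solve_bvp

        # Toy problem: minimize int(u^2) for x' = u, driving x from 0 to 1 on
        # [0, 1]. Pontryagin's conditions give a constant costate, u = -p/2,
        # hence the two-point BVP x'' = 0 with x(0) = 0, x(1) = 1.
        def rhs(t, y):                 # y = [x, x']
            return np.vstack([y[1], np.zeros_like(y[0])])

        def bc(ya, yb):
            return np.array([ya[0] - 0.0, yb[0] - 1.0])

        t = np.linspace(0.0, 1.0, 11)
        sol = solve_bvp(rhs, bc, t, np.zeros((2, t.size)))
        print(sol.status, sol.sol(0.5))   # optimal trajectory is x(t) = t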

  13. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion.

    PubMed

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian

    2015-04-01

    Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate these information sources for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework, on the basis of an enhanced particle swarm optimization method, to effectively fuse the information for accurate and continuous endoscope localization. The authors use the particle swarm optimization method, a stochastic evolutionary computation algorithm, to effectively fuse the multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since the evolutionary computation method is usually limited by possible premature convergence and by its evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor) observation to boost the particle swarm optimization, and also adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. The experimental results demonstrate that the authors' proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6°, while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, which is significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method using the current observation information and adaptive evolutionary factors. The authors' proposed framework greatly reduced the guidance errors from (4.3 mm, 7.8°) to (3.0 mm, 5.6°), compared to state-of-the-art methods.
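
    As a hedged reference point, a minimal global-best particle swarm optimizer in Python is sketched below; the authors' enhancements (observation-boosted updates and adaptive evolutionary factors) are omitted, and the fused objective is invented for illustration.

        import numpy as np

        def pso(objective, dim, n_particles=30, n_iter=100, bounds=(-5.0, 5.0),
                w=0.7, c1=1.5, c2=1.5):
            """Minimal global-best particle swarm optimizer (textbook updates only)."""
            rng = np.random.default_rng(0)
            lo, hi = bounds
            x = rng.uniform(lo, hi, (n_particles, dim))          # positions
            v = np.zeros_like(x)                                  # velocities
            pbest, pbest_f = x.copy(), np.apply_along_axis(objective, 1, x)
            gbest = pbest[pbest_f.argmin()].copy()
            for _ in range(n_iter):
                r1, r2 = rng.random((2, n_particles, dim))
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
                x = np.clip(x + v, lo, hi)
                f = np.apply_along_axis(objective, 1, x)
                better = f < pbest_f
                pbest[better], pbest_f[better] = x[better], f[better]
                gbest = pbest[pbest_f.argmin()].copy()
            return gbest, pbest_f.min()

        # Two quadratic "sensor" terms fused into one objective, loosely echoing
        # the paper's multimodal cost; the forms are invented for this sketch.
        obj = lambda p: np.sum((p - 1.0) ** 2) + 0.5 * np.sum((p + 0.5) ** 2)
        print(pso(obj, dim=3))   # minimizer of the fused cost, ~(0.5, 0.5, 0.5)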

  14. 40 CFR 75.22 - Reference test methods.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... certification or recertification of continuous emission monitoring systems and excepted monitoring systems under... a wet basis to a dry basis) and shall be used when relative accuracy test audits of continuous... traverse requirement of the method; (iv) Section 8.6 of the method allowing for the use of “Dynamic Spiking...

  15. Determination of many-electron basis functions for a quantum Hall ground state using Schur polynomials

    NASA Astrophysics Data System (ADS)

    Mandal, Sudhansu S.; Mukherjee, Sutirtha; Ray, Koushik

    2018-03-01

    A method for determining the ground state of a planar interacting many-electron system in a magnetic field perpendicular to the plane is described. The ground state wave-function is expressed as a linear combination of a set of basis functions. Given only the flux and the number of electrons describing an incompressible state, we use the combinatorics of partitioning the flux among the electrons to derive the basis wave-functions as linear combinations of Schur polynomials. The procedure ensures that the basis wave-functions form representations of the angular momentum algebra. We exemplify the method by deriving the basis functions for the 5/2 quantum Hall state with a few particles. We find that one of the basis functions is precisely the Moore-Read Pfaffian wave function.
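    For readers unfamiliar with Schur polynomials, the sketch below evaluates them from the classical bialternant (ratio-of-determinants) formula with sympy; it illustrates the combinatorial building blocks only, not the paper's angular-momentum basis construction.

```python
# Schur polynomial via the bialternant formula,
# s_lambda = det(x_i^(lambda_j + n - j)) / det(x_i^(n - j)),
# where the denominator is the Vandermonde determinant.
import sympy as sp

def schur(lam, xs):
    n = len(xs)
    lam = list(lam) + [0] * (n - len(lam))            # pad partition to n parts
    num = sp.Matrix(n, n, lambda i, j: xs[i] ** (lam[j] + n - 1 - j))
    den = sp.Matrix(n, n, lambda i, j: xs[i] ** (n - 1 - j))  # Vandermonde
    return sp.expand(sp.cancel(num.det() / den.det()))

x1, x2, x3 = sp.symbols("x1 x2 x3")
print(schur([2, 1], [x1, x2]))        # x1**2*x2 + x1*x2**2
print(schur([1, 1], [x1, x2, x3]))    # elementary symmetric polynomial e2
```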

  16. BASiNET-BiologicAl Sequences NETwork: a case study on coding and non-coding RNAs identification.

    PubMed

    Ito, Eric Augusto; Katahira, Isaque; Vicente, Fábio Fernandes da Rocha; Pereira, Luiz Filipe Protasio; Lopes, Fabrício Martins

    2018-06-05

With the emergence of Next Generation Sequencing (NGS) technologies, a large volume of sequence data, in particular from de novo sequencing, has been rapidly produced at relatively low cost. In this context, computational tools are increasingly important to assist in the identification of relevant information to understand the functioning of organisms. This work introduces BASiNET, an alignment-free tool for classifying biological sequences based on feature extraction from complex network measurements. The method initially transforms the sequences and represents them as complex networks. It then extracts topological measures and constructs a feature vector that is used to classify the sequences. The method was evaluated in the classification of coding and non-coding RNAs of 13 species and compared to the CNCI, PLEK and CPC2 methods. BASiNET outperformed all compared methods in all adopted organisms and datasets. BASiNET classified sequences in all organisms with high accuracy and low standard deviation, showing that the method is robust and not biased by the organism. The proposed methodology is implemented in open source in the R language and freely available for download at https://cran.r-project.org/package=BASiNET.
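    A sketch of the alignment-free idea follows: map a sequence to a network whose nodes are k-mers and whose edges link consecutive k-mers, then extract topological measures as features. The exact network construction and measures BASiNET uses are not reproduced here; this is an illustrative feature vector in Python/networkx rather than the tool's R implementation.

```python
# Build a k-mer network from a sequence and extract topological features.
import networkx as nx

def sequence_network(seq, k=3):
    g = nx.Graph()
    kmers = [seq[i:i + k] for i in range(len(seq) - k + 1)]
    for a, b in zip(kmers, kmers[1:]):     # edge between consecutive k-mers
        g.add_edge(a, b)
    return g

def features(g):
    degs = [d for _, d in g.degree()]
    return {
        "nodes": g.number_of_nodes(),
        "edges": g.number_of_edges(),
        "avg_degree": sum(degs) / len(degs),
        "clustering": nx.average_clustering(g),
        "density": nx.density(g),
    }

rna = "AUGGCCAUGGCGCCCAGAACUGAGAUCAAUAGUACCCGUAUUAACGGGUGA"
print(features(sequence_network(rna)))
```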

  17. Ultra-Low-Dose Fetal CT With Model-Based Iterative Reconstruction: A Prospective Pilot Study.

    PubMed

    Imai, Rumi; Miyazaki, Osamu; Horiuchi, Tetsuya; Asano, Keisuke; Nishimura, Gen; Sago, Haruhiko; Nosaka, Shunsuke

    2017-06-01

Prenatal diagnosis of skeletal dysplasia by means of 3D skeletal CT examination is highly accurate. However, it carries a risk of fetal exposure to radiation. Model-based iterative reconstruction (MBIR) technology can reduce radiation exposure; however, to our knowledge, the lower limit of an optimal dose is currently unknown. The objectives of this study are to establish ultra-low-dose fetal CT as a method for prenatal diagnosis of skeletal dysplasia and to evaluate the appropriate radiation dose for ultra-low-dose fetal CT. Relationships between tube current and image noise in adaptive statistical iterative reconstruction and MBIR were examined using a 32-cm CT dose index (CTDI) phantom. On the basis of the results of this examination, the recommended methods for the MBIR option, and the known relationship between noise and tube current for filtered back projection, represented by SD ∝ (mA)^(-0.5), the lower limit of the optimal dose in ultra-low-dose fetal CT with MBIR was set. The diagnostic power of the CT images obtained using the aforementioned scanning conditions was evaluated, and the radiation exposure associated with ultra-low-dose fetal CT was compared with that noted in previous reports. Noise increased in nearly inverse proportion to the square root of the dose in adaptive statistical iterative reconstruction and in inverse proportion to the fourth root of the dose in MBIR. Ultra-low-dose fetal CT was found to have a volume CTDI of 0.5 mGy. Prenatal diagnosis was accurately performed on the basis of ultra-low-dose fetal CT images that were obtained using this protocol. The level of fetal exposure to radiation was 0.7 mSv. The use of ultra-low-dose fetal CT with MBIR led to a substantial reduction in radiation exposure, compared with the CT imaging method currently used at our institution, but it still enabled diagnosis of skeletal dysplasia without reducing diagnostic power.
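    The reported noise behavior is a power law SD = a·(dose)^b, with b ≈ -0.5 for filtered back projection / adaptive statistical iterative reconstruction and b ≈ -0.25 for MBIR (the "fourth root" dependence above). A log-log least-squares fit recovers the exponent from phantom data; the numbers below are synthetic stand-ins, not the study's measurements.

```python
# Recover the noise-vs-dose exponent from (synthetic) phantom measurements.
import numpy as np

mA = np.array([10, 20, 40, 80, 160, 320], dtype=float)
sd_asir = 50.0 * mA ** -0.5 * (1 + 0.02 * np.random.default_rng(1).standard_normal(6))
sd_mbir = 20.0 * mA ** -0.25 * (1 + 0.02 * np.random.default_rng(2).standard_normal(6))

for name, sd in [("ASIR", sd_asir), ("MBIR", sd_mbir)]:
    b, log_a = np.polyfit(np.log(mA), np.log(sd), 1)   # slope = exponent
    print(f"{name}: SD ~ {np.exp(log_a):.1f} * mA^{b:.2f}")
```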

  18. Reducing Production Basis Risk through Rainfall Intensity Frequency (RIF) Indexes: Global Sensitivity Analysis' Implication on Policy Design

    NASA Astrophysics Data System (ADS)

    Muneepeerakul, Chitsomanus; Huffaker, Ray; Munoz-Carpena, Rafael

    2016-04-01

Weather index insurance promises financial resilience to farmers struck by harsh weather, with swift compensation at an affordable premium thanks to its minimal adverse selection and moral hazard. Despite these advantages, the very nature of indexing introduces "production basis risk": the selected weather indexes and their thresholds may not correspond to actual damages. To reduce basis risk without additional data collection cost, we propose the use of rain intensity and frequency as indexes, as they could offer better protection at a lower premium by avoiding the basis risk-strike trade-off inherent in the total rainfall index. We present empirical evidence and modeling results showing that, even under similar cumulative rainfall and temperature conditions, yield can differ significantly, especially for drought-sensitive crops. We further show that deriving the trigger level and payoff function from a regression between historical yield and total rainfall data may pose significant basis risk owing to their non-unique relationship in the insured range of rainfall. Lastly, we discuss the design of index insurance in terms of contract specifications based on the results from a global sensitivity analysis.

  19. Comparison of detailed and reduced kinetics mechanisms of silane oxidation in the basis of detonation wave structure problem

    NASA Astrophysics Data System (ADS)

    Fedorov, A. V.; Tropin, D. A.; Fomin, P. A.

    2018-03-01

The paper deals with the problem of the structure of detonation waves in a silane-air mixture within the framework of a mathematical model of nonequilibrium gas dynamics. A detailed kinetic scheme of silane oxidation as well as a newly developed reduced kinetic model of detonation combustion of silane are used. On this basis, the detonation wave (DW) structure in a stoichiometric silane-air mixture and the dependence of the Chapman-Jouguet parameters of the mixture on the stoichiometric ratio between the fuel (silane) and the oxidizer (air) were obtained.

  20. A reduced basis method for molecular dynamics simulation

    NASA Astrophysics Data System (ADS)

    Vincent-Finley, Rachel Elisabeth

In this dissertation, we develop a method for molecular simulation based on principal component analysis (PCA) of a molecular dynamics trajectory and least squares approximation of a potential energy function. Molecular dynamics (MD) simulation is a computational tool used to study molecular systems as they evolve through time. With respect to protein dynamics, local motions, such as bond stretching, occur within femtoseconds, while rigid-body and large-scale motions occur within a range of nanoseconds to seconds. To capture motion at all levels, time steps on the order of a femtosecond are employed when solving the equations of motion, and simulations must continue long enough to capture the desired large-scale motion. To date, simulations of solvated proteins on the order of nanoseconds have been reported. It is typically the case that simulations of a few nanoseconds do not provide adequate information for the study of large-scale motions. Thus, the development of techniques that allow longer simulation times can advance the study of protein function and dynamics. In this dissertation we use PCA to identify the dominant characteristics of an MD trajectory and to represent the coordinates with respect to these characteristics. We augment PCA with an updating scheme based on a reduced representation of a molecule and consider equations of motion with respect to the reduced representation. We apply our method to butane and BPTI and compare the results to standard MD simulations of these molecules. Our results indicate that the molecular activity in our simulation method is analogous to that observed in the standard MD simulation, for simulations on the order of picoseconds.
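    The first step of this approach, PCA of a trajectory, can be sketched directly with an SVD: frames are flattened coordinate vectors, the dominant principal components define the reduced basis, and frames are projected into that low-dimensional space. The synthetic two-mode "trajectory" below is an assumption; the dissertation's reduced equations of motion are not reproduced.

```python
# PCA of a toy MD-like trajectory via SVD of the centered frame matrix.
import numpy as np

rng = np.random.default_rng(0)
n_frames, n_atoms = 500, 50
# Synthetic trajectory: two slow collective modes plus thermal noise.
modes = rng.standard_normal((2, 3 * n_atoms))
amps = np.column_stack([np.sin(0.02 * np.arange(n_frames)),
                        np.cos(0.01 * np.arange(n_frames))])
traj = amps @ modes + 0.05 * rng.standard_normal((n_frames, 3 * n_atoms))

mean = traj.mean(axis=0)
u, s, vt = np.linalg.svd(traj - mean, full_matrices=False)
var = s**2 / np.sum(s**2)
k = int(np.searchsorted(np.cumsum(var), 0.95)) + 1     # modes for 95% variance
reduced = (traj - mean) @ vt[:k].T                     # reduced coordinates
print(f"{k} principal components capture 95% of the variance;"
      f" reduced shape = {reduced.shape}")
```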

  1. CC2 oscillator strengths within the local framework for calculating excitation energies (LoFEx).

    PubMed

    Baudin, Pablo; Kjærgaard, Thomas; Kristensen, Kasper

    2017-04-14

    In a recent work [P. Baudin and K. Kristensen, J. Chem. Phys. 144, 224106 (2016)], we introduced a local framework for calculating excitation energies (LoFEx), based on second-order approximated coupled cluster (CC2) linear-response theory. LoFEx is a black-box method in which a reduced excitation orbital space (XOS) is optimized to provide coupled cluster (CC) excitation energies at a reduced computational cost. In this article, we present an extension of the LoFEx algorithm to the calculation of CC2 oscillator strengths. Two different strategies are suggested, in which the size of the XOS is determined based on the excitation energy or the oscillator strength of the targeted transitions. The two strategies are applied to a set of medium-sized organic molecules in order to assess both the accuracy and the computational cost of the methods. The results show that CC2 excitation energies and oscillator strengths can be calculated at a reduced computational cost, provided that the targeted transitions are local compared to the size of the molecule. To illustrate the potential of LoFEx for large molecules, both strategies have been successfully applied to the lowest transition of the bivalirudin molecule (4255 basis functions) and compared with time-dependent density functional theory.

  2. Vibration Noise Modeling for Measurement While Drilling System Based on FOGs

    PubMed Central

    Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu

    2017-01-01

Aiming to improve the long-term survey accuracy of Measurement While Drilling (MWD) systems based on Fiber Optic Gyroscopes (FOGs), external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs a model of the inertial sensors' noise as the system noise model. The system noise is conventionally modeled as white Gaussian noise. However, because of vibration while drilling, the noise in the gyros is no longer white Gaussian noise. Moreover, an incorrect noise model will degrade the accuracy of the KF. This paper develops a new approach for noise modeling on the basis of the dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both white noise and colored noise. With this new noise model, the KF for the MWD system was designed. Finally, two vibration experiments were performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing the navigation results based on the different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%, the velocity error is reduced by more than 65%, and the azimuth error is reduced by more than 50%. PMID:29039815

  3. Vibration Noise Modeling for Measurement While Drilling System Based on FOGs.

    PubMed

    Zhang, Chunxi; Wang, Lu; Gao, Shuang; Lin, Tie; Li, Xianmu

    2017-10-17

Aiming to improve the long-term survey accuracy of Measurement While Drilling (MWD) systems based on Fiber Optic Gyroscopes (FOGs), external aiding sources are fused into the inertial navigation by the Kalman filter (KF) method. The KF method needs a model of the inertial sensors' noise as the system noise model. The system noise is conventionally modeled as white Gaussian noise. However, because of vibration while drilling, the noise in the gyros is no longer white Gaussian noise. Moreover, an incorrect noise model will degrade the accuracy of the KF. This paper develops a new approach for noise modeling on the basis of the dynamic Allan variance (DAVAR). In contrast to conventional white noise models, the new noise model contains both white noise and colored noise. With this new noise model, the KF for the MWD system was designed. Finally, two vibration experiments were performed. Experimental results showed that the proposed vibration noise modeling approach significantly improved the estimated accuracies of the inertial sensor drifts. Comparing the navigation results based on the different noise models, with the DAVAR noise model the position error and the toolface angle error are reduced by more than 90%, the velocity error is reduced by more than 65%, and the azimuth error is reduced by more than 50%.
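    A minimal numpy sketch of the overlapped Allan deviation evaluated in sliding windows, the core of a DAVAR-style analysis, follows. The window sizes, cluster size, and synthetic gyro data (with an injected vibration episode) are assumptions, not the paper's estimator or experimental data.

```python
# Overlapped Allan deviation plus a sliding-window ("dynamic") wrapper.
import numpy as np

def allan_deviation(y, m):
    """Overlapped Allan deviation of rate samples y at cluster size m."""
    ybar = np.convolve(y, np.ones(m) / m, mode="valid")   # cluster averages
    d = ybar[m:] - ybar[:-m]                              # averages m apart
    return np.sqrt(0.5 * np.mean(d**2))

def davar(y, m, window, step):
    """Allan deviation in sliding windows -> time-varying noise level."""
    return [allan_deviation(y[i:i + window], m)
            for i in range(0, len(y) - window + 1, step)]

rng = np.random.default_rng(0)
gyro = 0.01 * rng.standard_normal(20000)
gyro[8000:12000] += 0.05 * rng.standard_normal(4000)   # vibration episode
print(np.round(davar(gyro, m=10, window=4000, step=2000), 4))
```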

  4. Reducing Time and Increasing Sensitivity in Sample Preparation for Adherent Mammalian Cell Metabolomics

    PubMed Central

    Lorenz, Matthew A.; Burant, Charles F.; Kennedy, Robert T.

    2011-01-01

A simple, fast, and reproducible sample preparation procedure was developed for relative quantification of metabolites in adherent mammalian cells using the clonal β-cell line INS-1 as a model sample. The method was developed by evaluating the effect of different sample preparation procedures on high-performance liquid chromatography-mass spectrometry quantification of 27 metabolites involved in glycolysis and the tricarboxylic acid cycle on a directed basis as well as for all detectable chromatographic features on an undirected basis. We demonstrate that a rapid water rinse step prior to quenching of metabolism reduces components that suppress electrospray ionization, thereby increasing signal for 26 of 27 targeted metabolites and increasing the total number of detected features from 237 to 452 with no detectable change of metabolite content. A novel quenching technique is employed which involves addition of liquid nitrogen directly to the culture dish and allows for samples to be stored at −80 °C for at least 7 d before extraction. Separation of quenching and extraction steps provides the benefit of increased experimental convenience and sample stability while maintaining metabolite content similar to techniques that employ simultaneous quenching and extraction with cold organic solvent. The extraction solvent 9:1 methanol:chloroform was found to provide superior performance over acetonitrile, ethanol, and methanol with respect to metabolite recovery and extract stability. Maximal recovery was achieved using a single rapid (~1 min) extraction step. The utility of this rapid preparation method (~5 min) was demonstrated through precise metabolite measurements (11% average relative standard deviation without internal standards) associated with step changes in glucose concentration that evoke insulin secretion in the clonal β-cell line INS-1. PMID:21456517

  5. Application of the Price-Volume Approach in Cases of Innovative Drugs Where Value-Based Pricing is Inadequate: Description of Real Experiences in Italy.

    PubMed

    Messori, Andrea

    2016-08-01

Several cases of expensive drugs designed for large patient populations (e.g. sofosbuvir) have raised a complex question in terms of drug pricing. Even assuming value-based pricing, treating all eligible patients with these drugs would have an immense budgetary impact, which is unsustainable even for the richest countries. This raises the need to reduce the prices of these agents below those suggested by the value-based approach and to devise new pricing methods that can achieve this goal. The present study discusses in detail the following two methods: (i) the approach based on setting nation-wide budget thresholds for individual innovative agents, in which a fixed proportion of the historical pharmaceutical expenditure represents the maximum budget attributable to an innovative treatment; and (ii) the approach based on nation-wide price-volume agreements, in which drug prices are progressively reduced as more patients receive the treatment. The first approach has been developed in the USA by the Institute for Clinical and Economic Review and has been applied to PCSK9 inhibitors (alirocumab and evolocumab). The second approach has been designed for the Italian market and has found systematic application in managing the prices of ranibizumab, sofosbuvir, and the PCSK9 inhibitors. While, in the past, price-volume agreements were applied only on an empirical basis (i.e. in the absence of any quantitative theoretical rule), more recently some explicit mathematical models have been described. The performance of these models is now being evaluated on the basis of the real-world experiences gathered in some European countries, especially Italy.

  6. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Bo; Kowalski, Karol

The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this paper, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set N_b ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus the O(N_b^{3-4}) cost of a single CD in most other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic-orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled-cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.
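    The two-step compound decomposition can be sketched on a generic symmetric positive semidefinite matrix (standing in for the unfolded two-electron tensor): a pivoted incomplete Cholesky factorization, followed by a truncated SVD of the Cholesky factor. The toy matrix and thresholds below are assumptions, not the paper's AO integrals.

```python
# Pivoted incomplete Cholesky (A ~ B B^T) followed by truncated SVD of B.
import numpy as np

def pivoted_cholesky(a, tol=1e-8):
    d = np.diag(a).astype(float).copy()       # residual diagonal
    cols = []
    while d.max() > tol:
        i = int(np.argmax(d))                 # pivot on largest residual
        col = a[:, i] - sum(l[i] * l for l in cols)
        cols.append(col / np.sqrt(d[i]))
        d = d - cols[-1] ** 2
    return np.column_stack(cols)              # B with A ~ B @ B.T

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 12))
a = x @ x.T                                   # low-rank SPSD test matrix

b = pivoted_cholesky(a)
u, s, vt = np.linalg.svd(b, full_matrices=False)
k = int(np.sum(s > 1e-4 * s[0]))              # truncate small singular values
b_k = u[:, :k] * s[:k]
print("CD rank:", b.shape[1], "| SVD-truncated rank:", k,
      "| max error:", np.abs(a - b_k @ b_k.T).max())
```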

  7. Permeability test and slope stability analysis of municipal solid waste in Jiangcungou Landfill, Shaanxi, China.

    PubMed

    Yang, Rong; Xu, Zengguang; Chai, Junrui; Qin, Yuan; Li, Yanlong

    2016-07-01

With the rapid increase of city waste, landfills have become a major method of dealing with municipal solid waste, and the safety of landfills has become a valuable research topic. In this paper, the Jiangcungou Landfill, located in Shaanxi, China, was investigated and its slope stability was analyzed. Laboratory tests were used to obtain the permeability coefficients of municipal solid waste. Based on the results, the distribution of leachate and the stability of the landfill were computed and analyzed. The results showed that the permeability coefficient ranged from 1.0 × 10^-7 cm/s to 6.0 × 10^-3 cm/s on the basis of laboratory tests and parameters of similar landfills. Owing to the existence of intermediate cover layers in the landfill, a perched water level appeared in the landfill during heavy rain. Moreover, the waste was filled with leachate in the top layer, and the leachate level ranged from 2 m to 5 m in depth below the waste surface in the other layers; the closer to the surface of the landfill, the higher the perched leachate level. The minimum safety factors were found to be 1.516 and 0.958 for winter and summer, respectively; thus, slope failure may occur in summer. The study of seepage and stability in landfills may provide a less costly way to reduce accidents. Landslides often occur in the Jiangcungou Landfill because of the high leachate level, and measures should be implemented to reduce it. This paper investigated the seepage and slope stability of landfills by numerical methods, and the results may provide a basis for increasing the stability of landfills.

  8. Medical Data Architecture Project Status

    NASA Technical Reports Server (NTRS)

    Krihak, M.; Middour, C.; Lindsey, A.; Marker, N.; Wolfe, S.; Winther, S.; Ronzano, K.; Bolles, D.; Toscano, W.; Shaw, T.

    2017-01-01

The Medical Data Architecture (MDA) project supports the Exploration Medical Capability (ExMC) effort to minimize or reduce the risk of adverse health outcomes and decrements in performance due to limited in-flight medical capabilities on human exploration missions. To mitigate this risk, the ExMC MDA project addresses the technical limitations identified in ExMC Gap Med 07: We do not have the capability to comprehensively process medically-relevant information to support medical operations during exploration missions. This gap identifies that current International Space Station (ISS) medical data management includes a combination of data collection and distribution methods that are minimally integrated with on-board medical devices and systems. Furthermore, there is a variety of data sources and methods of data collection. For an exploration mission, the seamless management of such data will enable a more autonomous crew than under the current ISS paradigm. The MDA will develop capabilities that support automated data collection and address the functionality needed, and the challenges faced, in executing a self-contained medical system that approaches crew health care delivery without assistance from ground support. To attain this goal, the first year of the MDA project focused on reducing technical risk, developing documentation, and instituting iterative development processes that established the basis for the first version of MDA software (or Test Bed 1). Test Bed 1 is based on a nominal operations scenario authored by the ExMC Element Scientist. This narrative was decomposed into a Concept of Operations that formed the basis for Test Bed 1 requirements. These requirements were successfully vetted through the MDA Test Bed 1 System Requirements Review, which permitted the MDA project to begin software code development and component integration. This paper highlights the MDA objectives, development processes, and accomplishments, and identifies the fiscal year 2017 milestones and deliverables in the upcoming year.

  9. Highly Efficient and Scalable Compound Decomposition of Two-Electron Integral Tensor and Its Application in Coupled Cluster Calculations.

    PubMed

    Peng, Bo; Kowalski, Karol

    2017-09-12

The representation and storage of two-electron integral tensors are vital in large-scale applications of accurate electronic structure methods. Low-rank representation and efficient storage strategy of integral tensors can significantly reduce the numerical overhead and consequently time-to-solution of these methods. In this work, by combining pivoted incomplete Cholesky decomposition (CD) with a follow-up truncated singular value decomposition (SVD), we develop a decomposition strategy to approximately represent the two-electron integral tensor in terms of low-rank vectors. A systematic benchmark test on a series of 1-D, 2-D, and 3-D carbon-hydrogen systems demonstrates the high efficiency and scalability of the compound two-step decomposition of the two-electron integral tensor in our implementation. For the size of the atomic basis set, N_b, ranging from ~100 up to ~2,000, the observed numerical scaling of our implementation shows O(N_b^{2.5-3}) versus the O(N_b^{3-4}) cost of performing a single CD on the two-electron integral tensor in most of the other implementations. More importantly, this decomposition strategy can significantly reduce the storage requirement of the atomic orbital (AO) two-electron integral tensor from O(N_b^4) to O(N_b^2 log_{10}(N_b)) with moderate decomposition thresholds. The accuracy tests have been performed using ground- and excited-state formulations of coupled cluster formalism employing single and double excitations (CCSD) on several benchmark systems including the C_{60} molecule described by nearly 1,400 basis functions. The results show that the decomposition thresholds can be generally set to 10^{-4} to 10^{-3} to give an acceptable compromise between efficiency and accuracy.

  10. Feature extraction algorithm for space targets based on fractal theory

    NASA Astrophysics Data System (ADS)

    Tian, Balin; Yuan, Jianping; Yue, Xiaokui; Ning, Xin

    2007-11-01

In order to offer the potential of extending the life of satellites and reducing launch and operating costs, satellite servicing, including on-orbit repairs, upgrades, and refueling of spacecraft, is becoming much more frequent. Future space operations can be executed more economically and reliably using machine vision systems, which can meet the real-time and tracking reliability requirements of image tracking for space surveillance systems. Machine vision has been applied to the study of relative pose for spacecraft, and feature extraction algorithms are the basis of relative pose estimation. In this paper, a fractal-geometry-based edge extraction algorithm, which can be used for determining and tracking the relative pose of an observed satellite during proximity operations in a machine vision system, is presented. The method computes a fractal-dimension map of the gray-level image using the differential box-counting (DBC) approach of fractal theory to suppress noise. After this, consecutive edges are detected using mathematical morphology. The validity of the proposed method is examined by processing and analyzing images of space targets. The edge extraction method not only extracts the outline of the target but also keeps the inner details. Meanwhile, edge extraction is processed only in the moving area, greatly reducing computation. Simulation results compare edge detection using the presented method with other detection methods. The results indicate that the presented algorithm is a valid method for solving relative pose problems for spacecraft.
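    A sketch of the differential box-counting (DBC) estimate on a gray-level image follows; it illustrates the fractal measure the algorithm is built on, not the full edge-extraction pipeline with morphology. The box sizes and the particular DBC counting variant used here are assumptions.

```python
# Differential box-counting fractal-dimension estimate for a square image.
import numpy as np

def dbc_fractal_dimension(img):
    m = min(img.shape)
    g = float(img.max()) + 1.0                      # gray-level range
    sizes, counts = [], []
    for s in (2, 4, 8, 16):                         # spatial box sizes
        h = s * g / m                               # box height at this scale
        n = 0
        for i in range(0, m - m % s, s):
            for j in range(0, m - m % s, s):
                block = img[i:i + s, j:j + s]
                # boxes needed to cover the intensity surface over the block
                n += int(block.max() // h) - int(block.min() // h) + 1
        sizes.append(1.0 / s)
        counts.append(n)
    slope, _ = np.polyfit(np.log(sizes), np.log(counts), 1)
    return slope                                    # fractal dimension estimate

rng = np.random.default_rng(0)
smooth = np.add.outer(np.arange(64), np.arange(64)).astype(float)
rough = smooth + 40 * rng.standard_normal((64, 64))  # noisier, higher dimension
print(dbc_fractal_dimension(smooth), dbc_fractal_dimension(rough))
```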

  11. On the accuracy of explicitly correlated methods to generate potential energy surfaces for scattering calculations and clustering: application to the HCl-He complex.

    PubMed

    Ajili, Yosra; Hammami, Kamel; Jaidane, Nejm Eddine; Lanza, Mathieu; Kalugina, Yulia N; Lique, François; Hochlaf, Majdi

    2013-07-07

We closely compare the accuracy of multidimensional potential energy surfaces (PESs) generated by the recently developed explicitly correlated coupled cluster (CCSD(T)-F12) methods in connection with the cc-pVXZ-F12 (X = D, T) and aug-cc-pVTZ basis sets and those deduced using the well-established orbital-based coupled cluster techniques employing correlation-consistent atomic basis sets (aug-cc-pVXZ, X = T, Q, 5) and extrapolated to the complete basis set (CBS) limit. This work is performed on the benchmark rare gas-hydrogen halide interaction (HCl-He) system. These PESs are then incorporated into quantum close-coupling scattering dynamical calculations in order to check the impact of the accuracy of the PES on the scattering calculations. For this system, we deduced inelastic collisional data including (de-)excitation collisional and pressure broadening cross sections. Our work shows that the CCSD(T)-F12/aug-cc-pVTZ PES correctly describes the repulsive wall, the van der Waals minimum, and long-range internuclear distances, whereas the cc-pVXZ-F12 (X = D, T) basis sets are not diffuse enough for that purpose. Interestingly, the collision cross sections deduced from the CCSD(T)-F12/aug-cc-pVTZ PES are in excellent agreement with those obtained with the CCSD(T)/CBS methodology. The positions of the resonances and the general shape of these cross sections almost coincide. Since the cost of the electronic structure computations is reduced by several orders of magnitude when using CCSD(T)-F12/aug-cc-pVTZ compared to CCSD(T)/CBS methodology, this approach can be recommended as an alternative for the generation of PESs of molecular clusters, for the interpretation of accurate scattering experiments, and for a wide production of collisional data to be included in astrophysical and atmospheric models.

  12. SU-E-T-422: Fast Analytical Beamlet Optimization for Volumetric Intensity-Modulated Arc Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Kenny S K; Lee, Louis K Y; Xing, L

    2015-06-15

Purpose: To implement a fast optimization algorithm on a CPU/GPU heterogeneous computing platform and to obtain an optimal fluence for a given target dose distribution from pre-calculated beamlets in an analytical approach. Methods: The 2D target dose distribution was modeled as an n-dimensional vector and estimated by a linear combination of independent basis vectors. The basis set was composed of the pre-calculated beamlet dose distributions at every 6 degrees of gantry angle, and the cost function was set as the magnitude square of the vector difference between the target and the estimated dose distribution. The optimal weighting of the basis, which corresponds to the optimal fluence, was obtained analytically by the least-squares method. Those basis vectors with a positive weighting were selected for entering into the next level of optimization. In total, 7 levels of optimization were implemented in the study. Ten head-and-neck and ten prostate carcinoma cases were selected for the study and mapped to a round water phantom with a diameter of 20 cm. The Matlab computation was performed in a heterogeneous programming environment with an Intel i7 CPU and an NVIDIA Geforce 840M GPU. Results: In all selected cases, the estimated dose distribution was in good agreement with the given target dose distribution, and their correlation coefficients were found to be in the range of 0.9992 to 0.9997. Their root-mean-square error was monotonically decreasing and converging after 7 cycles of optimization. The computation took only about 10 seconds, and the optimal fluence maps at each gantry angle throughout an arc were quickly obtained. Conclusion: An analytical approach is derived for finding the optimal fluence for a given target dose distribution, and a fast optimization algorithm implemented on the CPU/GPU heterogeneous computing environment greatly reduces the optimization time.
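    A simplified version of this multi-level least-squares scheme can be sketched in a few lines: solve for weights analytically, keep the positive ones, and re-solve. The synthetic 1-D Gaussian "beamlets" and target below are assumptions standing in for the pre-calculated beamlet dose distributions.

```python
# Multi-level least-squares fluence sketch with synthetic beamlets.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_beamlets = 400, 60
centers = rng.uniform(0, 1, n_beamlets)
x = np.linspace(0, 1, n_vox)
A = np.exp(-((x[:, None] - centers[None, :]) / 0.05) ** 2)  # beamlet doses
target = np.exp(-((x - 0.5) / 0.15) ** 2)                   # target dose

active = np.arange(n_beamlets)
for level in range(7):                 # 7 optimization levels, as in the abstract
    w, *_ = np.linalg.lstsq(A[:, active], target, rcond=None)
    keep = w > 0                       # only positive weights advance
    if keep.all():
        break
    active = active[keep]

est = A[:, active] @ np.linalg.lstsq(A[:, active], target, rcond=None)[0]
print("beamlets kept:", active.size,
      "| correlation:", np.corrcoef(est, target)[0, 1].round(4))
```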

  13. Novel transform for image description and compression with implementation by neural architectures

    NASA Astrophysics Data System (ADS)

    Ben-Arie, Jezekiel; Rao, Raghunath K.

    1991-10-01

A general method for signal representation using nonorthogonal basis functions composed of Gaussians is described. The Gaussians can be combined into groups with a predetermined configuration that can approximate any desired basis function; the same configuration at different scales forms a set of self-similar wavelets. The general scheme is demonstrated by representing a natural signal with an arbitrary basis function. The basic methodology is demonstrated by two novel schemes for efficient representation of 1-D and 2-D signals using Gaussian basis functions (BFs). Special methods are required here since the Gaussian functions are nonorthogonal. The first method employs a paradigm of maximum energy reduction interlaced with the A* heuristic search. The second method uses an adaptive lattice system to find the minimum-squared-error projection of the BFs onto the signal, and a lateral-vertical suppression network to select the most efficient representation in terms of data compression.
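    A greedy maximum-energy-reduction sketch for a nonorthogonal Gaussian dictionary follows; it is a matching-pursuit stand-in for the paper's A*-search and adaptive-lattice schemes, with an assumed test signal and dictionary grid. At each step the Gaussian atom that removes the most signal energy is selected and its projection subtracted.

```python
# Greedy decomposition of a signal over unit-norm Gaussian atoms.
import numpy as np

t = np.linspace(0, 1, 512)
signal = np.exp(-((t - 0.3) / 0.04) ** 2) + 0.6 * np.exp(-((t - 0.7) / 0.1) ** 2)

# Dictionary: unit-norm Gaussians over a grid of centers and widths.
atoms, params = [], []
for c in np.linspace(0, 1, 64):
    for w in (0.02, 0.04, 0.08, 0.1):
        a = np.exp(-((t - c) / w) ** 2)
        atoms.append(a / np.linalg.norm(a))
        params.append((round(c, 3), w))
D = np.array(atoms)                      # (n_atoms, n_samples)

residual, picks = signal.copy(), []
for _ in range(4):
    corr = D @ residual                  # energy reduction per atom
    k = int(np.argmax(np.abs(corr)))     # max-energy-reduction choice
    residual = residual - corr[k] * D[k]
    picks.append(params[k])

print("chosen (center, width):", picks,
      "| residual energy:", round(float(np.sum(residual**2)), 5))
```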

  14. Applying Quantum Monte Carlo to the Electronic Structure Problem

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2016-06-01

Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed-node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC), was tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
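    The VMC idea can be shown on the textbook hydrogen-atom case: Metropolis sampling of |ψ|² for the trial wavefunction ψ = exp(-αr) and averaging the local energy E_L = -½(α² - 2α/r) - 1/r. This is a pedagogical sketch, not the CASINO or NECI workflows; step size and walker counts are assumptions. At α = 1 the estimate approaches the exact -0.5 hartree.

```python
# Minimal variational Monte Carlo for the hydrogen atom (atomic units).
import numpy as np

def vmc_energy(alpha, n_steps=200_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 0.0, 0.0])
    energies = []
    for i in range(n_steps):
        trial = x + step * rng.uniform(-1, 1, 3)
        # Metropolis acceptance with probability |psi(trial)/psi(x)|^2
        if rng.random() < np.exp(-2 * alpha * (np.linalg.norm(trial)
                                               - np.linalg.norm(x))):
            x = trial
        if i > 1000:                       # discard burn-in samples
            r = np.linalg.norm(x)
            energies.append(-0.5 * (alpha**2 - 2 * alpha / r) - 1 / r)
    return np.mean(energies)

for a in (0.8, 1.0, 1.2):
    print(f"alpha={a}: E ~ {vmc_energy(a):.4f} hartree")
```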

  15. Computerized mappings of the cerebral cortex: a multiresolution flattening method and a surface-based coordinate system

    NASA Technical Reports Server (NTRS)

    Drury, H. A.; Van Essen, D. C.; Anderson, C. H.; Lee, C. W.; Coogan, T. A.; Lewis, J. W.

    1996-01-01

    We present a new method for generating two-dimensional maps of the cerebral cortex. Our computerized, two-stage flattening method takes as its input any well-defined representation of a surface within the three-dimensional cortex. The first stage rapidly converts this surface to a topologically correct two-dimensional map, without regard for the amount of distortion introduced. The second stage reduces distortions using a multiresolution strategy that makes gross shape changes on a coarsely sampled map and further shape refinements on progressively finer resolution maps. We demonstrate the utility of this approach by creating flat maps of the entire cerebral cortex in the macaque monkey and by displaying various types of experimental data on such maps. We also introduce a surface-based coordinate system that has advantages over conventional stereotaxic coordinates and is relevant to studies of cortical organization in humans as well as non-human primates. Together, these methods provide an improved basis for quantitative studies of individual variability in cortical organization.

  16. Correlation energy extrapolation by many-body expansion

    DOE PAGES

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus; ...

    2017-01-09

Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly less computational resources.
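    The many-body expansion itself can be sketched in a few lines: the total energy of a set of fragments is approximated by one-body energies plus pairwise corrections, E ≈ Σᵢ Eᵢ + Σᵢ<ⱼ (Eᵢⱼ - Eᵢ - Eⱼ). The "energy" below is a toy pairwise-additive surrogate (for which MBE(2) is exact); real CI energies contain higher-order terms that the paper's extrapolated correction targets.

```python
# Two-body many-body expansion against a toy fragment energy function.
from itertools import combinations

def energy(frags):
    """Toy surrogate standing in for an expensive CI energy call."""
    e = -1.0 * len(frags)                          # one-body contributions
    for a, b in combinations(frags, 2):
        e += -0.1 / (1 + abs(a - b))               # pairwise interactions
    return e

fragments = [0.0, 1.0, 2.5, 4.0]
e1 = sum(energy([f]) for f in fragments)
e2 = sum(energy([a, b]) - energy([a]) - energy([b])
         for a, b in combinations(fragments, 2))
print("MBE(2) estimate:", round(e1 + e2, 6),
      "| exact:", round(energy(fragments), 6))
```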

  17. Correlation energy extrapolation by many-body expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boschen, Jeffery S.; Theis, Daniel; Ruedenberg, Klaus

Accounting for electron correlation is required for high accuracy calculations of molecular energies. The full configuration interaction (CI) approach can fully capture the electron correlation within a given basis, but it does so at a computational expense that is impractical for all but the smallest chemical systems. In this work, a new methodology is presented to approximate configuration interaction calculations at a reduced computational expense and memory requirement, namely, the correlation energy extrapolation by many-body expansion (CEEMBE). This method combines an MBE approximation of the CI energy with an extrapolated correction obtained from CI calculations using subsets of the virtual orbitals. The extrapolation approach is inspired by, and analogous to, the method of correlation energy extrapolation by intrinsic scaling. Benchmark calculations of the new method are performed on diatomic fluorine and ozone. The method consistently achieves agreement with CI calculations to within a few millihartree, and often to within ~1 millihartree or less, while requiring significantly less computational resources.

  18. Potential Energy Surface of the Chromium Dimer Re-re-revisited with Multiconfigurational Perturbation Theory.

    PubMed

    Vancoillie, Steven; Malmqvist, Per Åke; Veryazov, Valera

    2016-04-12

The chromium dimer has long been a benchmark molecule to evaluate the performance of different computational methods ranging from density functional theory to wave function methods. Among the latter, multiconfigurational perturbation theory was shown to be able to reproduce the potential energy surface of the chromium dimer accurately. However, for modest active space sizes, it was later shown that different definitions of the zeroth-order Hamiltonian have a large impact on the results. In this work, we revisit the system for the third time with multiconfigurational perturbation theory, now in order to increase the active space of the reference wave function. This reduces the impact of the choice of zeroth-order Hamiltonian and improves the shape of the potential energy surface significantly. We conclude by comparing our results for the dissociation energy and vibrational spectrum to those obtained from several highly accurate multiconfigurational methods and experiment. For a meaningful comparison, we used the extrapolation to the complete basis set for all methods involved.

  19. The hydrogen tunneling splitting in malonaldehyde: A full-dimensional time-independent quantum mechanical method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wu, Feng; Ren, Yinghui; Bian, Wensheng, E-mail: bian@iccas.ac.cn

The accurate time-independent quantum dynamics calculations on the ground-state tunneling splitting of malonaldehyde in full dimensionality are reported for the first time. This is achieved with an efficient method developed by us. In our method, the basis functions are customized for the hydrogen transfer process, which has the effect of greatly reducing the size of the final Hamiltonian matrix, and the Lanczos method and a parallel strategy are used to further overcome the memory and central processing unit time bottlenecks. The obtained ground-state tunneling splitting of 24.5 cm^-1 is in excellent agreement with the benchmark value of 23.8 cm^-1 computed with the full-dimensional, multi-configurational time-dependent Hartree approach on the same potential energy surface, and we estimate that our reported value has an uncertainty of less than 0.5 cm^-1. Moreover, the role of various vibrational modes strongly coupled to the hydrogen transfer process is revealed.
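    The role of a Lanczos-type eigensolver in extracting a tunneling splitting can be shown on a 1-D stand-in: the splitting is the gap between the two lowest eigenvalues of a symmetric double-well Hamiltonian, computed with scipy's sparse (Lanczos-based) eigsh rather than a dense diagonalization. The potential, grid, and units are arbitrary assumptions, not malonaldehyde.

```python
# Tunneling splitting of a 1-D double well via a sparse Lanczos eigensolver.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n, L = 2000, 8.0
x = np.linspace(-L, L, n)
dx = x[1] - x[0]
v = 0.05 * (x**2 - 9.0) ** 2                       # symmetric double well

# Finite-difference kinetic energy (hbar = m = 1) plus diagonal potential.
kin = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n)) / dx**2
h = -0.5 * kin + sp.diags(v)

vals = np.sort(eigsh(h, k=2, which="SA", return_eigenvectors=False))
print("tunneling splitting:", vals[1] - vals[0])
```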

  20. [Debridement- crucial procedure in the treatment of chronic wounds].

    PubMed

    Huljev, Dubravko

    2013-10-01

Debridement is the process of removing dead tissue from the wound bed. Devitalized tissue can obstruct or completely stop healing of the wound. The aim of debridement is to transform a chronic wound into an acute wound and to initiate the healing process. Debridement is the basis of every wound treatment and has to be repeated, depending on necrotic tissue formation. There are several types of debridement: mechanical, autolytic, chemical, enzymatic, biological, and new debridement techniques. With advances in technology, new types of debridement have been introduced; besides the standard methods, pulsed lavage debridement (hydro-surgery, water-jet) and ultrasound-assisted wound treatment are used ever more frequently. The method of debridement the clinician will choose depends on the amount of necrotic (devitalized) tissue in the wound bed, the size and depth of the wound, the underlying disease, possible comorbidities, and the patient's general condition. Frequently, methods of debridement are combined in order to achieve better removal of devitalized tissue. In addition, debridement significantly reduces the bacterial burden.

  1. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms.

    PubMed

    Yang, Yan-Pu; Chen, Deng-Kai; Gu, Rong; Gu, Yu-Feng; Yu, Sui-Huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design.
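    A compact sketch of the NDSM-plus-GA idea follows: chromosomes assign each Kansei adjective to a cluster, and fitness rewards heavy within-cluster link weights. The random 4-point link weights, population sizes, and GA operators below are assumptions standing in for survey data and the paper's specific configuration.

```python
# Genetic-algorithm clustering of a numerical design structure matrix.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_clusters = 12, 3
w = rng.integers(0, 4, (n_words, n_words))          # 4-point-scale NDSM
w = np.triu(w, 1) + np.triu(w, 1).T                 # symmetric, zero diagonal

def fitness(assign):
    return sum(w[i, j] for i in range(n_words) for j in range(i + 1, n_words)
               if assign[i] == assign[j])            # intra-cluster link weight

pop = rng.integers(0, n_clusters, (60, n_words))    # initial population
for gen in range(200):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-30:]]          # truncation selection
    cut = rng.integers(1, n_words, 30)
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 30][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    mutate = rng.random(children.shape) < 0.05
    children[mutate] = rng.integers(0, n_clusters, mutate.sum())
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("cluster assignment:", best, "| fitness:", fitness(best))
```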

  2. Consumers' Kansei Needs Clustering Method for Product Emotional Design Based on Numerical Design Structure Matrix and Genetic Algorithms

    PubMed Central

    Chen, Deng-kai; Gu, Rong; Gu, Yu-feng; Yu, Sui-huai

    2016-01-01

    Consumers' Kansei needs reflect their perception about a product and always consist of a large number of adjectives. Reducing the dimension complexity of these needs to extract primary words not only enables the target product to be explicitly positioned, but also provides a convenient design basis for designers engaging in design work. Accordingly, this study employs a numerical design structure matrix (NDSM) by parameterizing a conventional DSM and integrating genetic algorithms to find optimum Kansei clusters. A four-point scale method is applied to assign link weights of every two Kansei adjectives as values of cells when constructing an NDSM. Genetic algorithms are used to cluster the Kansei NDSM and find optimum clusters. Furthermore, the process of the proposed method is presented. The details of the proposed approach are illustrated using an example of electronic scooter for Kansei needs clustering. The case study reveals that the proposed method is promising for clustering Kansei needs adjectives in product emotional design. PMID:27630709

  3. Prediction of future uniform milk prices in Florida federal milk marketing order 6 from milk futures markets.

    PubMed

    De Vries, A; Feleke, S

    2008-12-01

This study assessed the accuracy of 3 methods that predict the uniform milk price in Federal Milk Marketing Order 6 (Florida). Predictions were made for 1 to 12 mo into the future. Data were from January 2003 to May 2007. The CURRENT method assumed that future uniform milk prices were equal to the last announced uniform milk price. The F+BASIS and F+UTIL methods were based on the milk futures markets because the futures prices reflect the market's expectation of the class III and class IV cash prices that are announced monthly by USDA. The F+BASIS method added an exponentially weighted moving average of the difference between the historical uniform milk price and the class III cash price (also known as the basis) to the class III futures price. The F+UTIL method used the class III and class IV futures prices, the most recently announced butter price, and historical utilizations to predict the skim milk prices, butterfat prices, and utilizations in all 4 classes. Predictions of future utilizations were made with a Holt-Winters smoothing method. Federal Milk Marketing Order 6 had high class I utilization (85 +/- 4.8%). The mean and standard deviation of the class III and class IV cash prices were $13.39 +/- 2.40/cwt (1 cwt = 45.36 kg) and $12.06 +/- 1.80/cwt, respectively. The actual uniform price in Tampa, Florida, was $16.62 +/- 2.16/cwt. The basis was $3.23 +/- 1.23/cwt. The F+BASIS and F+UTIL predictions were generally too low during the period considered because the class III cash prices were greater than the corresponding class III futures prices. For the 1- to 6-mo-ahead predictions, the roots of the mean squared prediction errors from the F+BASIS method were $1.12, $1.20, $1.55, $1.91, $2.16, and $2.34/cwt, respectively. The root of the mean squared prediction error ranged from $2.50 to $2.73/cwt for predictions up to 12 mo ahead. Results from the F+UTIL method were similar. The accuracies of the F+BASIS and F+UTIL methods for all 12 forecast horizons were not significantly different. Application of modified Diebold-Mariano tests showed that no method included all the information contained in the other methods. In conclusion, both the F+BASIS and F+UTIL methods tended to predict the future uniform milk prices more accurately than the CURRENT method, but prediction errors could be substantial even a few months into the future. The majority of the prediction error was caused by the inefficiency of the futures markets in predicting the class III cash prices.
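    The F+BASIS mechanics reduce to a few lines: exponentially smooth the historical basis (uniform price minus class III cash price, consistent with the positive $3.23 basis reported) and add it to the class III futures quote. The prices and smoothing weight below are illustrative assumptions, not Order 6 data.

```python
# F+BASIS-style forecast: futures price plus an EWMA of the basis.
import numpy as np

uniform = np.array([16.1, 16.4, 16.9, 17.2, 16.8, 16.5, 16.9, 17.4])
class3_cash = np.array([13.0, 13.2, 13.8, 14.1, 13.6, 13.1, 13.5, 14.0])
basis = uniform - class3_cash                       # historical basis

lam = 0.3                                           # smoothing weight (assumed)
ewma = basis[0]
for b in basis[1:]:
    ewma = lam * b + (1 - lam) * ewma               # exponential smoothing

class3_futures = 13.9                               # quote for the target month
print(f"F+BASIS forecast of uniform price: {class3_futures + ewma:.2f} $/cwt")
```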

  4. Wavelet-based analysis of transient electromagnetic wave propagation in photonic crystals.

    PubMed

    Shifman, Yair; Leviatan, Yehuda

    2004-03-01

    Photonic crystals and optical bandgap structures, which facilitate high-precision control of electromagnetic-field propagation, are gaining ever-increasing attention in both scientific and commercial applications. One common photonic device is the distributed Bragg reflector (DBR), which exhibits high reflectivity at certain frequencies. Analysis of the transient interaction of an electromagnetic pulse with such a device can be formulated in terms of the time-domain volume integral equation and, in turn, solved numerically with the method of moments. Owing to the frequency-dependent reflectivity of such devices, the extent of field penetration into deep layers of the device will be different depending on the frequency content of the impinging pulse. We show how this phenomenon can be exploited to reduce the number of basis functions needed for the solution. To this end, we use spatiotemporal wavelet basis functions, which possess the multiresolution property in both spatial and temporal domains. To select the dominant functions in the solution, we use an iterative impedance matrix compression (IMC) procedure, which gradually constructs and solves a compressed version of the matrix equation until the desired degree of accuracy has been achieved. Results show that when the electromagnetic pulse is reflected, the transient IMC omits basis functions defined over the last layers of the DBR, as anticipated.

  5. Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) Collocation Method for Solving Linear and Nonlinear Fokker-Planck Equations

    NASA Astrophysics Data System (ADS)

    Parand, K.; Latifi, S.; Moayeri, M. M.; Delkhosh, M.

    2018-05-01

In this study, we construct a new numerical approach for solving the time-dependent linear and nonlinear Fokker-Planck equations. The time variable is discretized with the Crank-Nicolson method, and for the space variable a numerical method based on Generalized Lagrange Jacobi Gauss-Lobatto (GLJGL) collocation is applied. This leads to solving the equation in a series of time steps, where at each time step the problem is reduced to a system of algebraic equations, which greatly simplifies the problem. The proposed method is simple and accurate. Indeed, one of its merits is that it is derivative-free: by proposing a formula for the derivative matrices, the difficulty arising in their calculation is overcome, and the generalized Lagrange basis functions and matrices need not be computed explicitly, as they possess the Kronecker property. Linear and nonlinear Fokker-Planck equations are given as examples, and the results amply demonstrate that the presented method is valid, effective, and reliable and does not require any restrictive assumptions for the nonlinear terms.
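    The Crank-Nicolson time-stepping structure can be sketched on a 1-D linear Fokker-Planck (Ornstein-Uhlenbeck) equation, p_t = (x p)_x + D p_xx. The sketch below deliberately swaps the GLJGL collocation for plain central finite differences in space; each step still reduces to one linear system, as described. Parameters and the initial density are assumptions.

```python
# Crank-Nicolson stepping for p_t = (x p)_x + D p_xx (OU process).
import numpy as np
from scipy.linalg import solve

n, L, D, dt, steps = 201, 5.0, 0.5, 0.01, 400
x = np.linspace(-L, L, n)
h = x[1] - x[0]

# Spatial operator A p = d/dx(x p) + D d2p/dx2, conservative differences.
A = np.zeros((n, n))
for i in range(1, n - 1):
    A[i, i - 1] = -x[i - 1] / (2 * h) + D / h**2
    A[i, i] = -2 * D / h**2
    A[i, i + 1] = x[i + 1] / (2 * h) + D / h**2

I = np.eye(n)
p = np.exp(-((x - 2.0) ** 2))              # density starting away from equilibrium
p /= h * p.sum()
lhs, rhs = I - 0.5 * dt * A, I + 0.5 * dt * A
for _ in range(steps):
    p = solve(lhs, rhs @ p)                # one linear system per time step

print("mass:", round(float(h * p.sum()), 4),
      "| variance:", round(float(h * (x**2 * p).sum()), 3), "(D = 0.5 at stationarity)")
```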

  6. Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray

This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling, such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages (1) the optimal ordering of POD basis vectors and (2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.
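    The flavor of the approach can be sketched as follows: compress previous solutions with a POD (SVD) basis, Galerkin-project the new system onto that basis for a strong initial guess, and finish with conjugate gradients. This is an illustrative simplification, not the paper's goal-oriented three-stage algorithm; the model matrix and right-hand-side sequence are assumptions.

```python
# POD-recycled initial guesses for a sequence of SPD solves.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg

n = 500
A = diags([-1, 2.1, -1], [-1, 0, 1], shape=(n, n)).tocsc()   # SPD model matrix

rng = np.random.default_rng(0)
base = rng.standard_normal(n)
solutions = []
for k in range(8):                         # sequence of related right-hand sides
    b = base + 0.1 * rng.standard_normal(n)
    x0 = np.zeros(n)
    if solutions:
        u, s, _ = np.linalg.svd(np.array(solutions).T, full_matrices=False)
        U = u[:, s > 1e-8 * s[0]]          # POD basis of past solutions
        y = np.linalg.solve(U.T @ (A @ U), U.T @ b)   # Galerkin projection
        x0 = U @ y
    counter = {"it": 0}
    x, info = cg(A, b, x0=x0,
                 callback=lambda _: counter.__setitem__("it", counter["it"] + 1))
    solutions.append(x)
    print(f"solve {k}: CG iterations = {counter['it']}")
```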

  7. Krylov-Subspace Recycling via the POD-Augmented Conjugate-Gradient Method

    DOE PAGES

    Carlberg, Kevin; Forstall, Virginia; Tuminaro, Ray

    2016-01-01

This paper presents a new Krylov-subspace-recycling method for efficiently solving sequences of linear systems of equations characterized by varying right-hand sides and symmetric-positive-definite matrices. As opposed to typical truncation strategies used in recycling, such as deflation, we propose a truncation method inspired by goal-oriented proper orthogonal decomposition (POD) from model reduction. This idea is based on the observation that model reduction aims to compute a low-dimensional subspace that contains an accurate solution; as such, we expect the proposed method to generate a low-dimensional subspace that is well suited for computing solutions that can satisfy inexact tolerances. In particular, we propose specific goal-oriented POD 'ingredients' that align the optimality properties of POD with the objective of Krylov-subspace recycling. To compute solutions in the resulting 'augmented' POD subspace, we propose a hybrid direct/iterative three-stage method that leverages (1) the optimal ordering of POD basis vectors and (2) well-conditioned reduced matrices. Numerical experiments performed on solid-mechanics problems highlight the benefits of the proposed method over existing approaches for Krylov-subspace recycling.

  8. Sparsity based target detection for compressive spectral imagery

    NASA Astrophysics Data System (ADS)

    Boada, David Alberto; Arguello Fuentes, Henry

    2016-09-01

Hyperspectral imagery provides significant information about the spectral characteristics of objects and materials present in a scene. It enables object and feature detection, classification, or identification based on the acquired spectral characteristics. However, it relies on sophisticated acquisition and data processing systems able to acquire, process, store, and transmit hundreds or thousands of image bands from a given area of interest, which demands enormous computational resources in terms of storage, computation, and I/O throughput. Specialized optical architectures have been developed for the compressed acquisition of spectral images using a reduced set of coded measurements, contrary to traditional architectures that need a complete set of measurements of the data cube for image acquisition, thereby addressing the storage and acquisition limitations. Despite this improvement, if any processing is desired, the image has to be reconstructed by an inverse algorithm in order to be processed, which is also an expensive task. In this paper, a sparsity-based algorithm for target detection in compressed spectral images is presented. Specifically, the target detection model adapts a sparsity-based target detector to work in a compressive domain, modifying the sparse representation basis in the compressive sensing problem by means of over-complete training dictionaries and a wavelet basis representation. Simulations show that the presented method can achieve even better detection results than state-of-the-art methods.
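    The sparsity-based detection idea, shown here in the uncompressed domain for simplicity rather than the paper's compressive formulation, is to represent a measured pixel spectrum in a dictionary of background and target signatures via orthogonal matching pursuit and compare reconstruction residuals. All signatures, the OMP sparsity level, and the decision threshold are synthetic assumptions.

```python
# Residual-ratio target detection with orthogonal matching pursuit.
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n_bands = 100
background = rng.random((n_bands, 8))            # background sub-dictionary
target = rng.random((n_bands, 3))                # target sub-dictionary

pixel = (0.7 * target[:, 0] + 0.3 * background[:, 2]
         + 0.01 * rng.standard_normal(n_bands))  # measured spectrum

def residual(D, y, k=3):
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k,
                                    fit_intercept=False).fit(D, y)
    return np.linalg.norm(y - D @ omp.coef_)

r_b = residual(background, pixel)                       # background-only fit
r_t = residual(np.hstack([background, target]), pixel)  # with target atoms
print("target detected:", r_b / r_t > 1.5, f"(residual ratio {r_b / r_t:.1f})")
```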

  9. Integrand-level reduction of loop amplitudes by computational algebraic geometry methods

    NASA Astrophysics Data System (ADS)

    Zhang, Yang

    2012-09-01

We present an algorithm for the integrand-level reduction of multi-loop amplitudes of renormalizable field theories, based on computational algebraic geometry. This algorithm uses (1) the Gröbner basis method to determine the basis for integrand-level reduction, and (2) the primary decomposition of an ideal to classify all inequivalent solutions of unitarity cuts. The resulting basis and cut solutions can be used to reconstruct the integrand from unitarity cuts via polynomial fitting techniques. The basis determination part of the algorithm has been implemented in the Mathematica package BasisDet. The primary decomposition part can be readily carried out by algebraic geometry software, with the output of the package BasisDet. The algorithm works in both D = 4 and D = 4 - 2ɛ dimensions, and we present some two- and three-loop examples of applications of this algorithm.
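    The algebraic step has a simple toy analogue: reduce a "numerator" polynomial modulo a Gröbner basis of an ideal of cut equations, leaving the remainder that survives on the cut solutions. The two-variable ideal below is a made-up stand-in for real unitarity-cut constraints, and this uses sympy rather than the Mathematica package BasisDet.

```python
# Groebner-basis reduction of a polynomial modulo toy "cut" constraints.
import sympy as sp

x, y = sp.symbols("x y")
cuts = [x**2 + y**2 - 1, x * y]               # toy cut equations
G = sp.groebner(cuts, x, y, order="lex")

numerator = x**3 * y + x * y**3 + x**2 + y**2  # equals (x^2+y^2)(xy+1)
quotients, remainder = sp.reduced(numerator, list(G.exprs), x, y, order="lex")
print("Groebner basis:", list(G.exprs))
print("remainder on the cut:", remainder)      # the irreducible piece, here 1
```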

  10. An Analytical Solution for Yaw Maneuver Optimization on the International Space Station and Other Orbiting Space Vehicles

    NASA Technical Reports Server (NTRS)

    Dobrinskaya, Tatiana

    2015-01-01

This paper suggests a new method for optimizing yaw maneuvers on the International Space Station (ISS). Yaw rotations are the most common large maneuvers on the ISS, often used for docking and undocking operations as well as for other activities. With maneuver optimization, large maneuvers that used to be performed on thrusters can be performed either using control moment gyroscopes (CMGs) or with significantly reduced thruster firings. Maneuver optimization helps to save expensive propellant and reduce structural loads - an important factor for the ISS service life. In addition, optimized maneuvers reduce contamination of critical elements of the vehicle structure, such as the solar arrays. This paper presents an analytical solution for optimizing yaw attitude maneuvers. Equations describing the pitch and roll motion needed to counteract the major torques during a yaw maneuver are obtained, and a yaw rate profile is proposed. The paper also describes the physical basis of the suggested optimization approach. In the obtained optimized case, the torques are significantly reduced. This torque reduction was compared to the existing optimization method, which utilizes a computational solution. It was shown that the attitude profiles and the torque reduction match well between the two methods of optimization. Simulations using the ISS flight software showed similar propellant consumption for both methods. The analytical solution proposed in this paper has major benefits with respect to the computational approach. In contrast to the current computational solution, which can only be calculated on the ground, the analytical solution does not require extensive computational resources and can be implemented in the onboard software, thus making the maneuver execution automatic. An automatic maneuver significantly simplifies operations and, if necessary, makes it possible to perform a maneuver without communication with the ground. It also reduces the probability of command errors. The suggested analytical solution provides a new method of maneuver optimization that is less complicated, automatic, and more universal. The maneuver optimization approach presented in this paper can be used not only for the ISS but for other orbiting space vehicles.

  11. Machine learning strategies for systems with invariance properties

    NASA Astrophysics Data System (ADS)

    Ling, Julia; Jones, Reese; Templeton, Jeremy

    2016-08-01

    In many scientific fields, empirical models are employed to facilitate computational simulations of engineering systems. For example, in fluid mechanics, empirical Reynolds stress closures enable computationally-efficient Reynolds Averaged Navier Stokes simulations. Likewise, in solid mechanics, constitutive relations between the stress and strain in a material are required in deformation analysis. Traditional methods for developing and tuning empirical models usually combine physical intuition with simple regression techniques on limited data sets. The rise of high performance computing has led to a growing availability of high fidelity simulation data. These data open up the possibility of using machine learning algorithms, such as random forests or neural networks, to develop more accurate and general empirical models. A key question when using data-driven algorithms to develop these empirical models is how domain knowledge should be incorporated into the machine learning process. This paper will specifically address physical systems that possess symmetry or invariance properties. Two different methods for teaching a machine learning model an invariance property are compared. In the first method, a basis of invariant inputs is constructed, and the machine learning model is trained upon this basis, thereby embedding the invariance into the model. In the second method, the algorithm is trained on multiple transformations of the raw input data until the model learns invariance to that transformation. Results are discussed for two case studies: one in turbulence modeling and one in crystal elasticity. It is shown that in both cases embedding the invariance property into the input features yields higher performance at significantly reduced computational training costs.
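    The two strategies can be contrasted on a toy problem. The sketch below uses a hypothetical rotation-invariant target and scikit-learn random forests (not the authors' turbulence or elasticity models): one model is trained on an invariant input, the other on rotation-augmented raw inputs.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor

      rng = np.random.default_rng(1)
      truth = lambda v: np.linalg.norm(v, axis=1)       # rotation-invariant target

      X = rng.standard_normal((200, 2))
      y = truth(X)

      # Method 1: embed the invariance by training on an invariant feature, |v|.
      inv = lambda v: np.linalg.norm(v, axis=1, keepdims=True)
      m1 = RandomForestRegressor(random_state=0).fit(inv(X), y)

      # Method 2: teach the invariance via random rotations of the raw inputs.
      ts = rng.uniform(0, 2 * np.pi, 10)
      rots = [np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]) for t in ts]
      m2 = RandomForestRegressor(random_state=0).fit(
          np.vstack([X @ R.T for R in rots]), np.tile(y, len(rots)))

      X_test = rng.standard_normal((100, 2))
      print(np.abs(m1.predict(inv(X_test)) - truth(X_test)).mean(),
            np.abs(m2.predict(X_test) - truth(X_test)).mean())
      # The invariant-input model is typically more accurate despite far less training data.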

  12. An RBF-FD closest point method for solving PDEs on surfaces

    NASA Astrophysics Data System (ADS)

    Petras, A.; Ling, L.; Ruuth, S. J.

    2018-10-01

    Partial differential equations (PDEs) on surfaces appear in many applications throughout the natural and applied sciences. The classical closest point method (Ruuth and Merriman (2008) [17]) is an embedding method for solving PDEs on surfaces using standard finite difference schemes. In this paper, we formulate an explicit closest point method using finite difference schemes derived from radial basis functions (RBF-FD). Unlike the orthogonal gradients method (Piret (2012) [22]), our proposed method uses RBF centers on regular grid nodes. This formulation not only reduces the computational cost but also avoids the ill-conditioning from point clustering on the surface and is more natural to couple with a grid-based manifold evolution algorithm (Leung and Zhao (2009) [26]). When compared to the standard finite difference discretization of the closest point method, the proposed method requires a smaller computational domain surrounding the surface, resulting in a decrease in the number of sampling points on the surface. In addition, higher-order schemes can easily be constructed by increasing the number of points in the RBF-FD stencil. Applications to a variety of examples are provided to illustrate the numerical convergence of the method.
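    The core RBF-FD ingredient, solving a small interpolation system for differentiation weights, can be sketched as follows; the Gaussian kernel, shape parameter, and 1D second-derivative test are illustrative choices, not the paper's setup.

      import numpy as np

      def rbf_fd_weights(nodes, center, eps=2.0):
          """Weights w with sum_i w_i f(x_i) ≈ f''(center), from Gaussian RBFs."""
          r = nodes[:, None] - nodes[None, :]
          A = np.exp(-(eps * r) ** 2)                   # RBF interpolation matrix
          d = center - nodes
          b = (4 * eps**4 * d**2 - 2 * eps**2) * np.exp(-(eps * d) ** 2)  # d2/dx2 of each RBF
          return np.linalg.solve(A, b)

      # Five-point stencil on a regular grid, echoing the grid-node formulation.
      h = 0.1
      stencil = 1.0 + h * np.array([-2, -1, 0, 1, 2])
      w = rbf_fd_weights(stencil, 1.0)
      print(w @ np.sin(stencil), -np.sin(1.0))          # approximate vs exact f''(1), f = sin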

  13. Bovine and porcine heparins: different drugs with similar effects on human haemodialysis

    PubMed Central

    2013-01-01

    Background Heparins from porcine and bovine intestinal mucosa differ in their structure and also in their effects on coagulation, thrombosis and bleeding. However, they are used as indistinguishable drugs. Methods We compared bovine and porcine intestinal heparin administered to patients undergoing a particular protocol of haemodialysis. We compared plasma concentrations of these two drugs and also evaluated how they affect patients and the dialyzer used. Results Compared with porcine heparin, bovine heparin achieved only 76% of the maximum plasma concentration in IU mL-1. This observation is consistent with the activities observed in the respective pharmaceutical preparations. When the plasma concentrations were expressed on a weight basis, bovine heparin achieved a maximum concentration 1.5-fold higher than porcine heparin. The reduced anticoagulant activity and the higher concentration, on a weight basis, achieved in the plasma of patients under dialysis using bovine instead of porcine heparin did not significantly affect the patients or the dialyzer used. The heparin dose remains in a range that ensures patient safety. Discussion Despite the lack of apparent difference between bovine and porcine intestinal heparins in haemodialysis practice, these two types of heparin should be used as distinct drugs due to their differences in structure and biological effects. Conclusions The reduced anticoagulant activity achieved in the plasma of patients under dialysis using bovine instead of porcine heparin did not significantly affect the patients or the dialyzer. PMID:23763719

  14. Sequential updating of a new dynamic pharmacokinetic model for caffeine in premature neonates.

    PubMed

    Micallef, Sandrine; Amzal, Billy; Bach, Véronique; Chardon, Karen; Tourneux, Pierre; Bois, Frédéric Y

    2007-01-01

    Caffeine treatment is widely used in nursing care to reduce the risk of apnoea in premature neonates. To check the therapeutic efficacy of the treatment against apnoea, the caffeine concentration in blood is an important indicator. The present study was aimed at building a pharmacokinetic model as the basis for a medical decision support tool. In the proposed model, time dependence of the physiological parameters is introduced to describe the rapid growth of neonates. To take into account the large variability in the population, the pharmacokinetic model is embedded in a population structure. The whole model is inferred within a Bayesian framework. To update caffeine concentration predictions as data on an incoming patient are collected, we propose a fast method that can be used in a medical context. This involves the sequential updating of model parameters (at the individual and population levels) via a stochastic particle algorithm. Our model provides better predictions than those obtained with previously published models. We show, through an example, that sequential updating improves predictions of the caffeine concentration in blood (reducing bias and the length of credibility intervals). The update of the pharmacokinetic model using body mass and caffeine concentration data is studied; it shows how informative caffeine concentration data are in contrast to body mass data. This study provides the methodological basis to predict the caffeine concentration in blood after a given treatment, provided data are collected on the treated neonate.
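    A minimal sketch of such a sequential particle update, using a hypothetical one-compartment model and invented observations rather than the paper's population pharmacokinetic model:

      import numpy as np

      rng = np.random.default_rng(2)

      # Hypothetical one-compartment kinetics: C(t) = dose/volume * exp(-k*t).
      def concentration(k, t, dose=10.0, volume=0.5):
          return dose / volume * np.exp(-k * t)

      # Prior particles for the elimination rate k (a population model adds hyper-parameters).
      particles = rng.lognormal(mean=np.log(0.1), sigma=0.5, size=5000)
      weights = np.full(particles.size, 1.0 / particles.size)
      sigma_obs = 1.0

      for t_obs, c_obs in [(6.0, 14.5), (24.0, 6.2), (48.0, 1.9)]:  # incoming patient data
          # Sequential update: reweight particles by the likelihood of the new observation.
          lik = np.exp(-0.5 * ((c_obs - concentration(particles, t_obs)) / sigma_obs) ** 2)
          weights *= lik
          weights /= weights.sum()
          if 1.0 / np.sum(weights**2) < particles.size / 2:   # resample on ESS collapse
              idx = rng.choice(particles.size, particles.size, p=weights)
              particles = particles[idx]
              weights = np.full(particles.size, 1.0 / particles.size)
          print(t_obs, np.sum(weights * particles))           # posterior mean of k so far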

  15. How Heterogeneity Affects the Design of Hadoop MapReduce Schedulers: A State-of-the-Art Survey and Challenges.

    PubMed

    Pandey, Vaibhav; Saini, Poonam

    2018-06-01

    The MapReduce (MR) computing paradigm and its open source implementation Hadoop have become a de facto standard for processing big data in a distributed environment. Initially, the Hadoop system was homogeneous in three significant aspects, namely, user, workload, and cluster (hardware). However, with the growing variety of MR jobs and the inclusion of differently configured nodes in existing clusters, heterogeneity has become an essential part of Hadoop systems. The heterogeneity factors adversely affect the performance of a Hadoop scheduler and limit the overall throughput of the system. To overcome this problem, various heterogeneous Hadoop schedulers have been proposed in the literature. Existing survey works in this area mostly cover homogeneous schedulers and classify them on the basis of the quality-of-service parameters they optimize. Hence, there is a need to study heterogeneous Hadoop schedulers on the basis of the various heterogeneity factors they consider. In this survey article, we first discuss the different heterogeneity factors that typically exist in a Hadoop system and then explore various challenges that arise while designing schedulers in the presence of such heterogeneity. Afterward, we present a comparative study of the heterogeneous scheduling algorithms available in the literature and classify them by the aforementioned heterogeneity factors. Lastly, we investigate the different methods and environments used to evaluate the discussed Hadoop schedulers.

  16. An Alternate Set of Basis Functions for the Electromagnetic Solution of Arbitrarily-Shaped, Three-Dimensional, Closed, Conducting Bodies Using Method of Moments

    NASA Technical Reports Server (NTRS)

    Mackenzie, Anne I.; Baginski, Michael E.; Rao, Sadasiva M.

    2008-01-01

    In this work, we present an alternate set of basis functions, each defined over a pair of planar triangular patches, for the method of moments solution of electromagnetic scattering and radiation problems associated with arbitrarily-shaped, closed, conducting surfaces. The present basis functions are point-wise orthogonal to the pulse basis functions previously defined. The prime motivation for developing the present set of basis functions is to utilize them for the electromagnetic solution of dielectric bodies using a surface integral equation formulation which involves both electric and magnetic currents. However, in the present work, only the conducting body solution is presented and compared with other data.

  17. Extended polarization in 3rd order SCC-DFTB from chemical potential equalization

    PubMed Central

    Kaminski, Steve; Giese, Timothy J.; Gaus, Michael; York, Darrin M.; Elstner, Marcus

    2012-01-01

    In this work we augment the approximate density functional method SCC-DFTB (DFTB3) with the chemical potential equalization (CPE) approach in order to improve its performance for molecular electronic polarizabilities. The CPE method, originally implemented for NDDO-type methods by Giese and York, has been shown to significantly improve the response properties of minimal basis methods, and has recently been applied to SCC-DFTB. CPE overcomes this inherent limitation of minimal basis methods by supplying an additional response density. The systematic underestimation is thereby corrected quantitatively without the need to extend the atomic orbital basis, i.e., without significantly increasing the overall computational cost. In particular, the dependence of the polarizability on the molecular charge state is significantly improved by the CPE extension of DFTB3. The empirical parameters introduced by the CPE approach were optimized for 172 organic molecules in order to match the results of density functional theory (DFT) calculations using large basis sets. However, the first-order derivatives of the molecular polarizabilities, as required, e.g., to compute Raman activities, are not improved by the current CPE implementation, i.e., Raman spectra are not improved. PMID:22894819

  18. Application effectiveness of the microtremor survey method in the exploration of geothermal resources

    NASA Astrophysics Data System (ADS)

    Tian, Baoqing; Xu, Peifen; Ling, Suqun; Du, Jianguo; Xu, Xueqiu; Pang, Zhonghe

    2017-10-01

    Geophysical techniques are critical tools in geothermal resource surveys. In recent years, the microtremor survey method, which has two branch techniques (the microtremor sounding technique and the two-dimensional (2D) microtremor profiling technique), has become a common method for geothermal resource exploration. The results of microtremor surveys provide important deep information for probing the structure of geothermal storage basins and researching heat-controlling structures, as well as providing the basis for siting geothermal wells. In this paper, the southern Jiangsu geothermal resources area is taken as a study example. By comparing microtremor survey results with drilling conclusions, and analyzing survey effectiveness together with geological and technical factors such as observation radius and sampling frequency, we study the applicability of the microtremor survey method and the optimal way of working with it to achieve better detection results. A comparative study of survey results and geothermal drilling results shows that the microtremor sounding technique effectively distinguishes sub-layers and determines the depth of geothermal reservoirs in areas with favorable layer conditions. The depth error is generally no more than 8% compared with drilling results, and greater depths can be probed by adjusting the observation radius. The 2D microtremor profiling technique accurately locates buried structures, which appear as low-velocity anomalies in the apparent S-wave velocity profile; such anomalies are the key signature by which the technique distinguishes and interprets buried geothermal structures. 2D microtremor profiling results thus provide an important basis for precisely siting geothermal wells and reducing the risk of drilling dry wells.

  19. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    NASA Astrophysics Data System (ADS)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep-water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral-element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of spectral methods. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, the GLC polynomials exhibit exponential convergence; they also allow the matrix elements to be derived analytically, which improves the modeling accuracy. Numerical 1D models show that SEM delivers accurate results and that the accuracy improves substantially with increasing order. We further compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that SEM is more effective than FD: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
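    The exponential convergence that motivates the GLC basis can be seen in a small scalar experiment at Chebyshev-Gauss-Lobatto nodes (an illustrative analogue, not the paper's curl-conforming vector basis):

      import numpy as np

      f = lambda x: np.exp(x) * np.sin(5 * x)           # a smooth test function
      for n in (4, 8, 16, 32):
          nodes = np.cos(np.pi * np.arange(n + 1) / n)  # Chebyshev-Gauss-Lobatto points
          coef = np.polynomial.chebyshev.chebfit(nodes, f(nodes), n)
          xx = np.linspace(-1, 1, 1000)
          err = np.max(np.abs(np.polynomial.chebyshev.chebval(xx, coef) - f(xx)))
          print(n, err)                                 # error drops exponentially with order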

  20. 42 CFR 413.335 - Basis of payment.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Facilities § 413.335 Basis of payment. (a) Method of payment. Under the prospective payment system, SNFs... and, during a transition period, on the basis of a blend of the Federal rate and the facility-specific...

  1. A review of surrogate models and their application to groundwater modeling

    NASA Astrophysics Data System (ADS)

    Asher, M. J.; Croke, B. F. W.; Jakeman, A. J.; Peeters, L. J. M.

    2015-08-01

    The spatially and temporally variable parameters and inputs to complex groundwater models typically result in long runtimes which hinder comprehensive calibration, sensitivity, and uncertainty analysis. Surrogate modeling aims to provide a simpler, and hence faster, model which emulates the specified output of a more complex model as a function of its inputs and parameters. In this review paper, we summarize surrogate modeling techniques in three categories: data-driven, projection-based, and hierarchical approaches. Data-driven surrogates approximate a groundwater model through an empirical model that captures the input-output mapping of the original model. Projection-based models reduce the dimensionality of the parameter space by projecting the governing equations onto a basis of orthonormal vectors. In hierarchical or multifidelity methods the surrogate is created by simplifying the representation of the physical system, for example by ignoring certain processes or reducing the numerical resolution. In discussing the application of these methods to groundwater modeling, we note several imbalances in the existing literature: a large body of work on data-driven approaches seemingly ignores major drawbacks of the methods; only a fraction of the literature focuses on creating surrogates to reproduce the outputs of fully distributed groundwater models, despite these being ubiquitous in practice; and a number of the more advanced surrogate modeling methods have yet to be fully applied in a groundwater modeling context.
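    A data-driven surrogate of the first category can be sketched in a few lines; the "expensive model" below is a stand-in for a groundwater simulator, and the Gaussian-process emulator is one common choice among many.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor

      def expensive_model(theta):            # placeholder for, e.g., head at an observation well
          return np.sin(3 * theta[:, 0]) * np.exp(-theta[:, 1])

      rng = np.random.default_rng(3)
      theta_train = rng.uniform(0, 1, (40, 2))          # a small design of full-model runs
      surrogate = GaussianProcessRegressor().fit(theta_train, expensive_model(theta_train))

      theta_new = rng.uniform(0, 1, (5, 2))
      y_hat, y_std = surrogate.predict(theta_new, return_std=True)
      print(np.abs(y_hat - expensive_model(theta_new)))  # emulator error at unseen parameters
      print(y_std)                                       # the emulator's own uncertainty estimate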

  2. Beyond maximum entropy: Fractal Pixon-based image reconstruction

    NASA Technical Reports Server (NTRS)

    Puetter, Richard C.; Pina, R. K.

    1994-01-01

    We have developed a new Bayesian image reconstruction method that has been shown to be superior to the best implementations of other competing methods, including Goodness-of-Fit methods such as Least-Squares fitting and Lucy-Richardson reconstruction, as well as Maximum Entropy (ME) methods such as those embodied in the MEMSYS algorithms. Our new method is based on the concept of the pixon, the fundamental, indivisible unit of picture information. Use of the pixon concept provides an improved image model, resulting in an image prior which is superior to that of standard ME. Our past work has shown how uniform information content pixons can be used to develop a 'Super-ME' method in which entropy is maximized exactly. Recently, however, we have developed a superior pixon basis for the image, the Fractal Pixon Basis (FPB). Unlike the Uniform Pixon Basis (UPB) of our 'Super-ME' method, the FPB basis is selected by employing fractal dimensional concepts to assess the inherent structure in the image. The Fractal Pixon Basis results in the best image reconstructions to date, superior to both UPB and the best ME reconstructions. In this paper, we review the theory of the UPB and FPB pixon and apply our methodology to the reconstruction of far-infrared imaging of the galaxy M51. The results of our reconstruction are compared to published reconstructions of the same data using the Lucy-Richardson algorithm, the Maximum Correlation Method developed at IPAC, and the MEMSYS ME algorithms. The results show that our reconstructed image has a spatial resolution a factor of two better than the best previous methods (and a factor of 20 finer than the width of the point response function), and detects sources two orders of magnitude fainter than other methods.

  3. The effect of sampling techniques used in the multiconfigurational Ehrenfest method

    NASA Astrophysics Data System (ADS)

    Symonds, C.; Kattirtzi, J. A.; Shalashilin, D. V.

    2018-05-01

    In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.

  4. Galerkin method for unsplit 3-D Dirac equation using atomically/kinetically balanced B-spline basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fillion-Gourdeau, F., E-mail: filliong@CRM.UMontreal.ca; Centre de Recherches Mathématiques, Université de Montréal, Montréal, H3T 1J4; Lorin, E., E-mail: elorin@math.carleton.ca

    2016-02-15

    A Galerkin method is developed to solve the time-dependent Dirac equation in prolate spheroidal coordinates for an electron–molecular two-center system. The initial state is evaluated from a variational principle using a kinetic/atomic balanced basis, which allows for an efficient and accurate determination of the Dirac spectrum and eigenfunctions. B-spline basis functions are used to obtain high accuracy. This numerical method is used to compute the energy spectrum of the two-center problem and then the evolution of eigenstate wavefunctions in an external electromagnetic field.

  5. The effect of sampling techniques used in the multiconfigurational Ehrenfest method.

    PubMed

    Symonds, C; Kattirtzi, J A; Shalashilin, D V

    2018-05-14

    In this paper, we compare and contrast basis set sampling techniques recently developed for use in the ab initio multiple cloning method, a direct dynamics extension to the multiconfigurational Ehrenfest approach, used recently for the quantum simulation of ultrafast photochemistry. We demonstrate that simultaneous use of basis set cloning and basis function trains can produce results which are converged to the exact quantum result. To demonstrate this, we employ these sampling methods in simulations of quantum dynamics in the spin boson model with a broad range of parameters and compare the results to accurate benchmarks.

  6. Cognitive Radios Exploiting Gray Spaces via Compressed Sensing

    NASA Astrophysics Data System (ADS)

    Wieruch, Dennis; Jung, Peter; Wirth, Thomas; Dekorsy, Armin; Haustein, Thomas

    2016-07-01

    We suggest an interweave cognitive radio system with a gray space detector, which properly identifies the small fraction of unused resources within an active band of a primary user system such as 3GPP LTE. The gray space detector can thus cope with frequency fading holes and distinguish them from inactive resources. Different approaches to the gray space detector are investigated: the conventional reduced-rank least squares method as well as the compressed sensing-based orthogonal matching pursuit and basis pursuit denoising algorithms. In addition, the gray space detector is compared with the classical energy detector. Simulation results present the receiver operating characteristic at several SNRs, as well as the detection performance as a function of further aspects such as base station system load at practical false alarm rates. The results show that, especially for practical false alarm rates, the compressed sensing algorithms are more suitable than the classical energy detector and the reduced-rank least squares approach.
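    For illustration, the OMP recovery step is available off the shelf in scikit-learn; the resource grid, occupancy pattern, and sensing matrix below are invented for the sketch and do not reproduce the paper's LTE setup.

      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      rng = np.random.default_rng(4)
      n_res, n_used = 64, 6                     # resource grid; only a few resources are active
      truth = np.zeros(n_res)
      truth[rng.choice(n_res, n_used, replace=False)] = 2 + rng.standard_normal(n_used)

      Phi = rng.standard_normal((24, n_res)) / np.sqrt(24)   # compressive measurement matrix
      y = Phi @ truth + 0.01 * rng.standard_normal(24)       # noisy compressed observation

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_used, fit_intercept=False).fit(Phi, y)
      print(sorted(np.flatnonzero(omp.coef_)))   # detected active resources ...
      print(sorted(np.flatnonzero(truth)))       # ... vs true occupancy; the rest is gray space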

  7. The tensor hypercontracted parametric reduced density matrix algorithm: coupled-cluster accuracy with O(r⁴) scaling.

    PubMed

    Shenvi, Neil; van Aggelen, Helen; Yang, Yang; Yang, Weitao; Schwerdtfeger, Christine; Mazziotti, David

    2013-08-07

    Tensor hypercontraction is a method that allows the representation of a high-rank tensor as a product of lower-rank tensors. In this paper, we show how tensor hypercontraction can be applied to both the electron repulsion integral tensor and the two-particle excitation amplitudes used in the parametric 2-electron reduced density matrix (p2RDM) algorithm. Because only O(r) auxiliary functions are needed in both of these approximations, our overall algorithm can be shown to scale as O(r⁴), where r is the number of single-particle basis functions. We apply our algorithm to several small molecules, hydrogen chains, and alkanes to demonstrate its low formal scaling and practical utility. Provided we use enough auxiliary functions, we obtain accuracy similar to that of the standard p2RDM algorithm, somewhere between that of CCSD and CCSD(T).

  8. Flow-enhanced solution printing of all-polymer solar cells

    DOE PAGES

    Diao, Ying; Zhou, Yan; Kurosawa, Tadanori; ...

    2015-08-12

    Morphology control of solution coated solar cell materials presents a key challenge limiting their device performance and commercial viability. Here we present a new concept for controlling phase separation during solution printing using an all-polymer bulk heterojunction solar cell as a model system. The key aspect of our method lies in the design of fluid flow using a microstructured printing blade, on the basis of the hypothesis of flow-induced polymer crystallization. Our flow design resulted in a ∼90% increase in the donor thin film crystallinity and reduced microphase separated donor and acceptor domain sizes. The improved morphology enhanced all metrics of solar cell device performance across various printing conditions, specifically leading to higher short-circuit current, fill factor, open circuit voltage and significantly reduced device-to-device variation. We expect our design concept to have broad applications beyond all-polymer solar cells because of its simplicity and versatility.

  9. Anomalous magnetic behavior in nanocomposite materials of reduced graphene oxide-Ni/NiFe₂O₄

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kollu, Pratap, E-mail: pk419@cam.ac.uk, E-mail: anirmalagrace@vit.ac.in, E-mail: dhirenb@iitb.ac.in; Prathapani, Sateesh; Varaprasadarao, Eswara K.

    2014-08-04

    Magnetic Reduced Graphene Oxide-Nickel/NiFe₂O₄ (RGO-Ni/NF) nanocomposite has been synthesized by a one-pot solvothermal method. The respective phase formations and their purities in the composite are confirmed by high-resolution transmission electron microscopy and X-ray diffraction, respectively. For the RGO-Ni/NF composite material, finite-size effects lead to anomalous magnetic behavior, which is corroborated in temperature- and field-dependent magnetization curves. Here, we report higher magnetization values under the zero-field-cooled condition than under the field-cooled condition for the RGO-Ni/NF nanocomposite. Also, the negative and positive moments observed in hysteresis loops at relatively small applied fields (100 Oe and 200 Oe) are explained on the basis of surface spin disorder.

  10. Flow-enhanced solution printing of all-polymer solar cells

    PubMed Central

    Diao, Ying; Zhou, Yan; Kurosawa, Tadanori; Shaw, Leo; Wang, Cheng; Park, Steve; Guo, Yikun; Reinspach, Julia A.; Gu, Kevin; Gu, Xiaodan; Tee, Benjamin C. K.; Pang, Changhyun; Yan, Hongping; Zhao, Dahui; Toney, Michael F.; Mannsfeld, Stefan C. B.; Bao, Zhenan

    2015-01-01

    Morphology control of solution coated solar cell materials presents a key challenge limiting their device performance and commercial viability. Here we present a new concept for controlling phase separation during solution printing using an all-polymer bulk heterojunction solar cell as a model system. The key aspect of our method lies in the design of fluid flow using a microstructured printing blade, on the basis of the hypothesis of flow-induced polymer crystallization. Our flow design resulted in a ∼90% increase in the donor thin film crystallinity and reduced microphase separated donor and acceptor domain sizes. The improved morphology enhanced all metrics of solar cell device performance across various printing conditions, specifically leading to higher short-circuit current, fill factor, open circuit voltage and significantly reduced device-to-device variation. We expect our design concept to have broad applications beyond all-polymer solar cells because of its simplicity and versatility. PMID:26264528

  11. Application of the dual-kinetic-balance sets in the relativistic many-body problem of atomic structure

    NASA Astrophysics Data System (ADS)

    Beloy, Kyle; Derevianko, Andrei

    2008-09-01

    The dual-kinetic-balance (DKB) finite basis set method for solving the Dirac equation for hydrogen-like ions [V.M. Shabaev et al., Phys. Rev. Lett. 93 (2004) 130405] is extended to problems with a non-local spherically-symmetric Dirac-Hartree-Fock potential. We implement the DKB method using B-spline basis sets and compare its performance with the widely-employed approach of the Notre Dame (ND) group [W.R. Johnson, S.A. Blundell, J. Sapirstein, Phys. Rev. A 37 (1988) 307-315]. We compare the performance of the ND and DKB methods by computing various properties of the Cs atom: energies, hyperfine integrals, the parity-non-conserving amplitude of the 6s-7s transition, and the second-order many-body correction to the removal energy of the valence electrons. We find that for a comparable size of the basis set the accuracy of both methods is similar for matrix elements accumulated far from the nuclear region. However, for atomic properties determined by small distances, the DKB method outperforms the ND approach. In addition, we present a strategy for optimizing the size of the basis sets by choosing a progressively smaller number of basis functions for increasingly higher partial waves. This strategy exploits the suppression of contributions of high partial waves to typical many-body correlation corrections.

  12. Improving estimates of genetic maps: a meta-analysis-based approach.

    PubMed

    Stewart, William C L

    2007-07-01

    Inaccurate genetic (or linkage) maps can reduce the power to detect linkage, increase type I error, and distort haplotype and relationship inference. To improve the accuracy of existing maps, I propose a meta-analysis-based method that combines independent map estimates into a single estimate of the linkage map. The method uses the variance of each independent map estimate to combine them efficiently, whether the map estimates use the same set of markers or not. As compared with a joint analysis of the pooled genotype data, the proposed method is attractive for three reasons: (1) it has comparable efficiency to the maximum likelihood map estimate when the pooled data are homogeneous; (2) relative to existing map estimation methods, it can have increased efficiency when the pooled data are heterogeneous; and (3) it avoids the practical difficulties of pooling human subjects data. On the basis of simulated data modeled after two real data sets, the proposed method can reduce the sampling variation of linkage maps commonly used in whole-genome linkage scans. Furthermore, when the independent map estimates are also maximum likelihood estimates, the proposed method performs as well as or better than when they are estimated by the program CRIMAP. Since variance estimates of maps may not always be available, I demonstrate the feasibility of three different variance estimators. Overall, the method should prove useful to investigators who need map positions for markers not contained in publicly available maps, and to those who wish to minimize the negative effects of inaccurate maps. Copyright 2007 Wiley-Liss, Inc.
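    The core combination rule, inverse-variance weighting of independent estimates, is compact enough to sketch directly (with invented numbers, not data from the paper):

      import numpy as np

      def combine(estimates, variances):
          """Inverse-variance weighted combination of independent estimates."""
          w = 1.0 / np.asarray(variances)
          est = np.sum(w * np.asarray(estimates)) / np.sum(w)
          return est, 1.0 / np.sum(w)            # combined estimate and its variance

      # Three studies placing the same marker (positions in cM), with differing precision.
      pos, var = combine([12.1, 11.6, 12.4], [0.30, 0.10, 0.45])
      print(pos, var)   # the combined variance is smaller than any single study's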

  13. Jacobian projection reduced-order models for dynamic systems with contact nonlinearities

    NASA Astrophysics Data System (ADS)

    Gastaldi, Chiara; Zucca, Stefano; Epureanu, Bogdan I.

    2018-02-01

    In structural dynamics, the prediction of the response of systems with localized nonlinearities, such as friction dampers, is of particular interest. This task becomes especially cumbersome when high-resolution finite element models are used. While state-of-the-art techniques such as Craig-Bampton component mode synthesis are employed to generate reduced-order models, the interface (nonlinear) degrees of freedom must still be solved in full. For this reason, a new generation of specialized techniques capable of reducing linear and nonlinear degrees of freedom alike is emerging. This paper proposes a new technique that exploits spatial correlations in the dynamics to compute a reduction basis. The basis is composed of a set of vectors obtained using the Jacobian of partial derivatives of the contact forces with respect to nodal displacements. These basis vectors correspond to specifically chosen boundary conditions at the contacts over one cycle of vibration. The technique is shown to be effective in the reduction of several models studied using multiple harmonics with a coupled static solution. In addition, this paper addresses another challenge common to all reduction techniques: it presents and validates a novel a posteriori error estimate capable of evaluating the quality of the reduced-order solution without involving a comparison with the full-order solution.
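    The idea of harvesting a reduction basis from the contact-force Jacobian can be caricatured as follows; the penalty contact law, the sampled state, and the tolerance are invented for the sketch and are not the authors' formulation.

      import numpy as np

      def contact_forces(u, gap=0.01, k=1e4):
          """Toy penalty contact forces acting on the first 3 DOFs of u."""
          f = np.zeros_like(u)
          f[:3] = -k * np.maximum(u[:3] - gap, 0.0)   # force only when the gap closes
          return f

      n = 10
      u0 = np.full(n, 0.02)                    # a sampled vibration state (contacts closed)
      eps = 1e-6
      J = np.empty((n, n))
      for j in range(n):                       # finite-difference Jacobian d f / d u
          du = np.zeros(n)
          du[j] = eps
          J[:, j] = (contact_forces(u0 + du) - contact_forces(u0 - du)) / (2 * eps)

      # An orthonormal basis for the range of the Jacobian captures the contact nonlinearity.
      U, s, _ = np.linalg.svd(J, full_matrices=False)
      basis = U[:, s > 1e-8 * s[0]]
      print(basis.shape)                       # only a few vectors are retained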

  14. Terahertz Heterodyne Receiver with an Electron-Heating Mixer and a Heterodyne Based on the Quantum-Cascade Laser

    NASA Astrophysics Data System (ADS)

    Seliverstov, S. V.; Anfertyev, V. A.; Tretyakov, I. V.; Ozheredov, I. A.; Solyankin, P. M.; Revin, L. S.; Vaks, V. L.; Rusova, A. A.; Goltsman, G. N.; Shkurinov, A. P.

    2017-12-01

    We study the characteristics of a laboratory prototype of a terahertz heterodyne receiver with an electron-heating mixer and a heterodyne based on a quantum-cascade laser. The results obtained demonstrate the possibility of using this receiver as the basis for a high-sensitivity terahertz spectrometer, which can be used in many basic and practical applications. A significant advantage of this receiver will be the possibility of placing the mixer and heterodyne in the same cryostat, which will reduce the device dimensions considerably. The experimental results obtained are analyzed, and methods for optimizing the receiver parameters are proposed.

  15. Communication: spin-boson model with diagonal and off-diagonal coupling to two independent baths: ground-state phase transition in the deep sub-Ohmic regime.

    PubMed

    Zhao, Yang; Yao, Yao; Chernyak, Vladimir; Zhao, Yang

    2014-04-28

    We investigate a spin-boson model with two boson baths that are coupled to two perpendicular components of the spin by employing the density matrix renormalization group method with an optimized boson basis. It is revealed that in the deep sub-Ohmic regime there exists a novel second-order phase transition between two types of doubly degenerate states, which is reduced to one of the usual types for nonzero tunneling. In addition, it is found that expectation values of the spin components display jumps at the phase boundary in the absence of bias and tunneling.

  16. Dynamic load-sharing characteristic analysis of face gear power-split gear system based on tooth contact characteristics

    NASA Astrophysics Data System (ADS)

    Dong, Hao; Hu, Yahui

    2018-04-01

    A bending-torsion coupled dynamic load-sharing model of the helicopter face-gear split-torque transmission system is established using a lumped-mass approach in order to analyze its dynamic load-sharing characteristics. The mathematical model includes nonlinear support stiffness, time-varying meshing stiffness, damping, and gear backlash. The results show that the errors collectively influence the load-sharing characteristics; reducing any single error alone never achieves perfect load sharing. The system's load-sharing performance can be improved through floating shaft supports. The above method provides a theoretical basis and data support for optimizing the system's dynamic performance.

  17. An Extended Kalman Filter to Assimilate Altimetric Data into a Non-Linear Model of the Tropical Pacific

    NASA Technical Reports Server (NTRS)

    Gourdeau, L.; Verron, J.; Murtugudde, R.; Busalacchi, A. J.

    1997-01-01

    A new implementation of the extended Kalman filter is developed for the purpose of assimilating altimetric observations into a primitive equation model of the tropical Pacific. Its specificity lies in defining the errors in a reduced basis that evolves in time with the model dynamics. Validation by twin experiments is conducted, and the method is shown to be efficient under quasi-realistic conditions. Data from the first 2 years of the Topex/Poseidon mission are assimilated into the Gent & Cane [1989] model. Assimilation results are evaluated against independent in situ data, namely TAO mooring observations.

  18. Efficient continuous-variable state tomography using Padua points

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    Further development of quantum technologies calls for efficient characterization methods for quantum systems. While recent work has focused on discrete systems of qubits, much remains to be done for continuous-variable systems such as a microwave mode in a cavity. We introduce a novel technique to reconstruct the full Husimi Q or Wigner function from measurements done at the Padua points in phase space, the optimal sampling points for interpolation in 2D. Our technique not only reduces the number of experimental measurements, but remarkably, also allows for the direct estimation of any density matrix element in the Fock basis, including off-diagonal elements. OLC acknowledges financial support from NSERC.

  19. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adapted from meshless methods is presented. It is found that response surface construction using Kriging and RBF interpolation yields more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.
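    As a sketch of the RBF approach (using SciPy's RBFInterpolator rather than the paper's implementations), derivatives can be estimated by differencing the constructed surface:

      import numpy as np
      from scipy.interpolate import RBFInterpolator

      f = lambda x: np.sin(x[:, 0]) + x[:, 1] ** 2      # underlying response to be fitted
      rng = np.random.default_rng(5)
      X = rng.uniform(-1, 1, (60, 2))                   # scattered design points
      surf = RBFInterpolator(X, f(X), kernel='thin_plate_spline')

      x0, h = np.array([[0.3, 0.4]]), 1e-4
      dfdx = (surf(x0 + [h, 0.0]) - surf(x0 - [h, 0.0])) / (2 * h)  # central difference
      print(dfdx, np.cos(0.3))                          # estimated vs exact partial derivative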

  20. 77 FR 73286 - Codification of Animal Testing Policy

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-12-10

    ... to post the test method on the animal testing Web site. In the final statement of policy, we refer to... case-by-case basis and, upon review, determine whether to post the test method on the animal testing... on a case-by- case basis and, upon review, determine whether to post the test method on the animal...

  1. A Textbook for a First Course in Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Zingg, D. W.; Pulliam, T. H.; Nixon, David (Technical Monitor)

    1999-01-01

    This paper describes and discusses the textbook, Fundamentals of Computational Fluid Dynamics by Lomax, Pulliam, and Zingg, which is intended for a graduate level first course in computational fluid dynamics. This textbook emphasizes fundamental concepts in developing, analyzing, and understanding numerical methods for the partial differential equations governing the physics of fluid flow. Its underlying philosophy is that the theory of linear algebra and the attendant eigenanalysis of linear systems provides a mathematical framework to describe and unify most numerical methods in common use in the field of fluid dynamics. Two linear model equations, the linear convection and diffusion equations, are used to illustrate concepts throughout. Emphasis is on the semi-discrete approach, in which the governing partial differential equations (PDE's) are reduced to systems of ordinary differential equations (ODE's) through a discretization of the spatial derivatives. The ordinary differential equations are then reduced to ordinary difference equations (OΔE's) using a time-marching method. This methodology, using the progression from PDE through ODE's to OΔE's, together with the use of the eigensystems of tridiagonal matrices and the theory of OΔE's, gives the book its distinctiveness and provides a sound basis for a deep understanding of fundamental concepts in computational fluid dynamics.
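    The semi-discrete methodology is easy to illustrate on the linear convection equation itself: discretizing the spatial derivative yields an ODE system, which a time-marching method then turns into difference equations. The sketch below uses a second-order central difference and classical RK4 with arbitrary parameters.

      import numpy as np

      n, a = 100, 1.0                               # grid size and convection speed
      x = np.linspace(0, 2 * np.pi, n, endpoint=False)
      dx = x[1] - x[0]
      u = np.exp(np.sin(x))                         # smooth periodic initial condition

      def rhs(u):                                   # semi-discrete form of u_t = -a u_x
          return -a * (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)

      dt, steps = 0.4 * dx / a, 250                 # one full advection period
      for _ in range(steps):                        # classical RK4 time march
          k1 = rhs(u)
          k2 = rhs(u + dt / 2 * k1)
          k3 = rhs(u + dt / 2 * k2)
          k4 = rhs(u + dt * k3)
          u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

      exact = np.exp(np.sin(x - a * dt * steps))    # exact solution: the advected profile
      print(np.max(np.abs(u - exact)))              # dispersion error of the spatial scheme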

  2. The use of methods of structural optimization at the stage of designing high-rise buildings with steel construction

    NASA Astrophysics Data System (ADS)

    Vasilkin, Andrey

    2018-03-01

    The more design solutions an engineer can synthesize at the search stage of high-rise building design, the more likely it is that the finally adopted variant will be the most efficient and economical. However, in modern market conditions, and taking into account the complexity and responsibility of high-rise buildings, the designer does not have the time needed to develop, analyze, and compare any significant number of options. To solve this problem, it is expedient to exploit the high potential of computer-aided design. To implement an automated search for design solutions, it is proposed to develop computing facilities whose application will significantly increase the productivity of the designer and reduce the labor intensity of designing. Methods of structural and parametric optimization were adopted as the basis of these computing facilities. Their efficiency in the synthesis of design solutions is shown, and schemes are constructed that illustrate and explain the introduction of structural optimization into the traditional design of steel frames. To solve the problem of synthesizing and comparing design solutions for steel frames, it is proposed to develop computing facilities that significantly reduce the labor intensity of search design, based on the methods of structural and parametric optimization.

  3. A simple method for measuring porcine circovirus 2 whole virion particles and standardizing vaccine formulation.

    PubMed

    Zanotti, Cinzia; Amadori, Massimo

    2015-03-01

    Porcine Circovirus 2 (PCV2) is involved in porcine circovirus-associated disease, which causes great economic losses to the livestock industry worldwide. Vaccination against PCV2 has proved to be very effective in reducing disease occurrence and is currently performed on a large scale. Starting from a previous model concerning Foot-and-Mouth Disease Virus antigens, we developed a rapid and simple method to quantify PCV2 whole virion particles in inactivated vaccines. This procedure, based on sucrose gradient analysis and fluorometric evaluation of the viral genomic content, allows for a better standardization of the antigen payload in vaccine batches. It also provides a valid indication of virion integrity. Most importantly, such a method can be applied to whole virion vaccines regardless of the production procedures, thus enabling meaningful comparisons on a common basis. In a future batch-consistency approach to PCV2 vaccine manufacture, our procedure represents a valuable tool to improve in-process controls and to guarantee conformity of the final product with pass marks for approval. This might have important repercussions in terms of reduced usage of animals for vaccine batch release, in the framework of the current 3Rs policy. Copyright © 2015 The International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  4. A new basis set for molecular bending degrees of freedom.

    PubMed

    Jutier, Laurent

    2010-07-21

    We present a new basis set as an alternative to Legendre polynomials for the variational treatment of bending vibrational degrees of freedom, in order to greatly reduce the number of basis functions. This basis set is inspired by the harmonic oscillator eigenfunctions but is defined for a bending angle in the range θ ∈ [0, π]. The aim is to bring the basis functions closer in nature to the final (ro)vibronic wave functions. Our methodology extends to complicated potential energy surfaces, such as quasilinear or multi-equilibrium geometries, by using several free parameters in the basis functions. These parameters allow several density maxima, linear or not, around which the basis functions will be mainly located. Divergences at linearity in integral computations are resolved as for generalized Legendre polynomials. All integral computations required for the evaluation of molecular Hamiltonian matrix elements are given for both the discrete variable representation and the finite basis representation. Convergence tests for the low-energy vibronic states of HCCH²⁺, HCCH⁺, and HCCS are presented.

  5. A Meshless Method Using Radial Basis Functions for Beam Bending Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2004-01-01

    A meshless local Petrov-Galerkin (MLPG) method that uses radial basis functions (RBFs) as trial functions in the study of Euler-Bernoulli beam problems is presented. RBFs, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as they are in the conventional MLPG method. Compactly and noncompactly supported RBFs are considered. Noncompactly supported cubic RBFs are found to be preferable. Patch tests, mixed boundary value problems, and problems with complex loading conditions are considered. Results obtained from the radial basis MLPG method are of comparable or better accuracy than those obtained when using the conventional MLPG method.

  6. Reducing predation by common ravens on desert tortoises in the Mojave and Colorado Deserts

    USGS Publications Warehouse

    Boarman, William I.

    2002-01-01

    intended to provide a basis for a long-term reduction in raven impacts. The recommendations fall into four basic categories. (1) Modify anthropogenic sources of food, water, and nesting substrates to reduce their use by ravens. This includes modifying landfill operations, septage containment practices, livestock management, and other commercial and private practices that help facilitate raven survival and dispersal by providing food and water. Most of these measures are long-term actions designed to reduce the carrying capacity of the desert for ravens. This action is critical and must be done over very large areas. (2) Lethal removal of ravens by shooting or euthanizing following live trapping. Specific ravens known to prey on tortoises would be targeted, as well as all ravens found foraging within specific high-priority desert tortoise management zones (e.g., Desert Tortoise Natural Areas, DTNA). These actions would primarily be deployed on a short-term emergency basis to give specific tortoise populations a necessary boost until other measures become fully implemented and achieve their goals. (3) Conduct research on raven ecology, raven behavior, and methods to reduce raven predation on tortoises. Results of these studies would be used to design future phases of the raven management program. (4) All actions should be approached within an adaptive management framework. As such, actions should be designed as experiments so that monitoring will yield reliable and scientifically sound results. Coordinating and oversight teams should be convened to facilitate cooperation and coordination among agencies and to ensure that the actions are being implemented effectively. The recommendations made herein were developed to help recover tortoise populations by reducing raven predation on juvenile tortoises. If the recommendations are implemented in concert with actions reducing other causes of mortality, ill health, and lowered reproductive output, they should aid in the long-term recovery of desert tortoise populations. Many important aspects of raven population dynamics, raven predation on tortoises, and how to manage raven populations and behavior are as yet unknown. Because of this, any raven management program must be implemented within an adaptive management framework. Doing so would allow for sufficient flexibility to modify the program as new information is gained.

  7. 26 CFR 1.734-1 - Optional adjustment to basis of undistributed partnership property.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... a long-term contract accounted for under a long-term contract method of accounting. The provisions... allocated. (e) Recovery of adjustments to basis of partnership property—(1) Increases in basis. For purposes of section 168, if the basis of a partnership's recovery property is increased as a result of the...

  8. 26 CFR 1.734-1 - Optional adjustment to basis of undistributed partnership property.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... a long-term contract accounted for under a long-term contract method of accounting. The provisions... allocated. (e) Recovery of adjustments to basis of partnership property—(1) Increases in basis. For purposes of section 168, if the basis of a partnership's recovery property is increased as a result of the...

  9. 26 CFR 1.734-1 - Optional adjustment to basis of undistributed partnership property.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... a long-term contract accounted for under a long-term contract method of accounting. The provisions... allocated. (e) Recovery of adjustments to basis of partnership property—(1) Increases in basis. For purposes of section 168, if the basis of a partnership's recovery property is increased as a result of the...

  10. 26 CFR 1.734-1 - Optional adjustment to basis of undistributed partnership property.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... a long-term contract accounted for under a long-term contract method of accounting. The provisions... allocated. (e) Recovery of adjustments to basis of partnership property—(1) Increases in basis. For purposes of section 168, if the basis of a partnership's recovery property is increased as a result of the...

  11. POD/MAC-Based Modal Basis Selection for a Reduced Order Nonlinear Response Analysis

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Przekop, Adam

    2007-01-01

    A feasibility study was conducted to explore the applicability of a POD/MAC basis selection technique to nonlinear structural response analysis. For the case studied, the application of the POD/MAC technique resulted in a substantial improvement of the reduced order simulation when compared to a classic approach utilizing only the low-frequency modes present in the excitation bandwidth. Further studies aim to expand the application of the presented technique to more complex structures, including non-planar and two-dimensional configurations. For non-planar structures the separation of different displacement components may not be necessary or desirable.
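    A minimal sketch of the two ingredients, POD modes from a snapshot matrix and the modal assurance criterion (MAC) used to match them against candidate basis vectors, on synthetic data rather than the paper's structural model:

      import numpy as np

      rng = np.random.default_rng(6)

      # Snapshot matrix: each column is a response field from the (here synthetic) simulation.
      n_dof, n_snap = 200, 50
      shapes = np.linalg.qr(rng.standard_normal((n_dof, 4)))[0]   # hidden deformation shapes
      snapshots = shapes @ rng.standard_normal((4, n_snap))

      # POD: left singular vectors of the snapshots, ordered by captured energy.
      U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
      pod_basis = U[:, :4]

      def mac(phi, psi):
          """Modal assurance criterion between two vectors (1 = same shape)."""
          return (phi @ psi) ** 2 / ((phi @ phi) * (psi @ psi))

      # A high MAC flags candidate modes already represented by the POD directions.
      candidate = shapes[:, 0]
      print([round(mac(pod_basis[:, i], candidate), 3) for i in range(4)])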

  12. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsilifis, Panagiotis, E-mail: tsilifis@usc.edu; Department of Civil Engineering, University of Southern California, Los Angeles, CA 90089; Ghanem, Roger G., E-mail: ghanem@usc.edu

    2017-07-15

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.

  13. Eat dirt and avoid atopy: the hygiene hypothesis revisited.

    PubMed

    Patki, Anil

    2007-01-01

    The explosive rise in the incidence of atopic diseases in Western developed countries can be explained on the basis of the so-called "hygiene hypothesis". In short, it attributes the rising incidence of atopic dermatitis to reduced exposure to various childhood infections and bacterial endotoxins. Reduced exposure to dirt in a clean environment results in a skewed development of the immune system, which leads to an abnormal allergic response to various environmental allergens that are otherwise innocuous. This article reviews the historical aspects and the epidemiological and immunological basis of the hygiene hypothesis, as well as its implications for Indian conditions.

  14. Reduced randomness in quantum cryptography with sequences of qubits encoded in the same basis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lamoureux, L.-P.; Cerf, N. J.; Bechmann-Pasquinucci, H.

    2006-03-15

    We consider the cloning of sequences of qubits prepared in the states used in the BB84 or six-state quantum cryptography protocol, and show that the single-qubit fidelity is unaffected even if entire sequences of qubits are prepared in the same basis. This result is only valid provided that the sequences are much shorter than the total key. It is of great importance for practical quantum cryptosystems because it reduces the need for high-speed random number generation without impairing the security against finite-size cloning attacks.

  15. Reduced Wiener Chaos representation of random fields via basis adaptation and projection

    NASA Astrophysics Data System (ADS)

    Tsilifis, Panagiotis; Ghanem, Roger G.

    2017-07-01

    A new characterization of random fields appearing in physical models is presented that is based on their well-known Homogeneous Chaos expansions. We take advantage of the adaptation capabilities of these expansions where the core idea is to rotate the basis of the underlying Gaussian Hilbert space, in order to achieve reduced functional representations that concentrate the induced probability measure in a lower dimensional subspace. For a smooth family of rotations along the domain of interest, the uncorrelated Gaussian inputs are transformed into a Gaussian process, thus introducing a mesoscale that captures intermediate characteristics of the quantity of interest.

  16. 76 FR 72382 - Atlantic Highly Migratory Species; Electronic Dealer Reporting System Workshop

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-23

    ... tuna data on a more real-time basis and more efficiently, which will reduce duplicative data... a more real-time basis, allowing for timely and efficient data collection for management of Atlantic HMS. In order to give sufficient time for dealers to adjust to implementation of the new system and...

  17. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...

  18. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...

  19. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...

  20. 46 CFR 391.6 - Tax treatment of qualified withdrawals.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... capital gain account; and third, out of the ordinary income account. Such withdrawals will reduce the... (or share therein) is made out of the capital gain account, the basis of such vessel, barge, or... the capital gain account, then the basis of the vessel, barge, or container (or share therein) with...
