Sample records for high order polynomials

  1. New Formulae for the High-Order Derivatives of Some Jacobi Polynomials: An Application to Some High-Order Boundary Value Problems

    PubMed Central

    Abd-Elhameed, W. M.

    2014-01-01

This paper is concerned with deriving new formulae that explicitly express the high-order derivatives of Jacobi polynomials whose parameter difference is one or two, of any degree and of any order, in terms of the corresponding Jacobi polynomials. The derivative formulae for Chebyshev polynomials of the third and fourth kinds, of any degree and any order, are deduced as special cases. Some new reduction formulae for summing terminating hypergeometric functions of unit argument are also deduced. As an application, and with the aid of the newly introduced derivative formulae, an algorithm for solving special sixth-order boundary value problems is implemented using the Galerkin method. A numerical example is presented to demonstrate the validity and applicability of the proposed algorithm. PMID:25386599
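
The paper's new formulae stay within a single Jacobi family; as background, the classical parameter-shift derivative identity d/dx P_n^(a,b)(x) = ((n+a+b+1)/2) P_{n-1}^(a+1,b+1)(x) can be checked numerically. A minimal sketch, assuming SciPy is available (this is the standard identity, not the paper's new results):

```python
import numpy as np
from scipy.special import eval_jacobi

def jacobi_derivative(n, a, b, x):
    """d/dx P_n^{(a,b)}(x) via the classical parameter-shift identity:
    (n + a + b + 1)/2 * P_{n-1}^{(a+1,b+1)}(x)."""
    if n == 0:
        return np.zeros_like(np.asarray(x, dtype=float))
    return 0.5 * (n + a + b + 1) * eval_jacobi(n - 1, a + 1, b + 1, x)

# Check against a central finite difference for P_5^{(1,2)}
x = np.linspace(-0.9, 0.9, 7)
h = 1e-6
fd = (eval_jacobi(5, 1.0, 2.0, x + h) - eval_jacobi(5, 1.0, 2.0, x - h)) / (2 * h)
assert np.allclose(jacobi_derivative(5, 1.0, 2.0, x), fd, atol=1e-4)
```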

  2. Development of Fast Deterministic Physically Accurate Solvers for Kinetic Collision Integral for Applications of Near Space Flight and Control Devices

    DTIC Science & Technology

    2015-08-31

    following functions were used: where are the Legendre polynomials of degree . It is assumed that the coefficient standing with has the form...enforce relaxation rates of high order moments, higher order polynomial basis functions are used. The use of high order polynomials results in strong...enforced while only polynomials up to second degree were used in the representation of the collision frequency. It can be seen that the new model

  3. Development and Evaluation of a Hydrostatic Dynamical Core Using the Spectral Element/Discontinuous Galerkin Methods

    DTIC Science & Technology

    2014-04-01

The CG and DG horizontal discretization employs high-order nodal basis functions associated with Lagrange polynomials based on Gauss-Lobatto-Legendre (GLL) points. ...Inside each element we build (N + 1) GLL quadrature points, where N indicates the polynomial order of the basis

  4. Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs. NBER Working Paper No. 20405

    ERIC Educational Resources Information Center

    Gelman, Andrew; Imbens, Guido

    2014-01-01

It is common in regression discontinuity analysis to control for high-order (third, fourth, or higher) polynomials of the forcing variable. We argue that estimators for causal effects based on such methods can be misleading, and we recommend that researchers not use them, and instead use estimators based on local linear or quadratic polynomials or…
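
The contrast between a global high-order fit and a local linear fit at the cutoff can be illustrated on simulated data. A toy sketch (synthetic data and bandwidth chosen for illustration, not the authors' examples), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-1, 1, n)                    # forcing variable, cutoff at 0
tau = 0.5                                    # true treatment effect at the cutoff
y = np.sin(2 * x) + tau * (x >= 0) + rng.normal(0.0, 0.3, n)

def fit_at_cutoff(deg, mask):
    """Fit a polynomial on one side of the cutoff and evaluate it at 0."""
    return np.polyval(np.polyfit(x[mask], y[mask], deg), 0.0)

# Global 6th-order polynomial on each side of the cutoff
est_global = fit_at_cutoff(6, x >= 0) - fit_at_cutoff(6, x < 0)

# Local linear fit within a small bandwidth around the cutoff
h = 0.2
est_local = fit_at_cutoff(1, (x >= 0) & (x < h)) - fit_at_cutoff(1, (x < 0) & (x > -h))
```

The paper's point is that the implicit observation weights of the global high-order fit are erratic, so its estimate is fragile; the local linear estimate uses only data near the cutoff.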

  5. Assessment of Hybrid High-Order methods on curved meshes and comparison with discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Botti, Lorenzo; Di Pietro, Daniele A.

    2018-10-01

    We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.

  6. Charactering baseline shift with 4th polynomial function for portable biomedical near-infrared spectroscopy device

    NASA Astrophysics Data System (ADS)

    Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting

    2018-02-01

Continuous-wave near-infrared spectroscopy (NIRS) devices have been highlighted for their clinical and health-care applications in noninvasive hemodynamic measurements. The baseline shift of the deviation measurement has attracted much attention because of its clinical importance. Nonetheless, currently published methods have low reliability or high variability. In this study, we identified a well-suited polynomial fitting function for baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopy evaluation of non-hemodynamic particles, we focused on baseline fitting and the corresponding correction method for NIRS, and found that a 4th-order polynomial fitting function outperforms the 2nd-order function reported in previous research. Through experimental tests of hemodynamic parameters of a solid phantom, we compared the fitting quality of the 4th-order and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99, significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th order are significantly smaller than those of the 2nd order. By using the highly reliable, low-variability 4th-order polynomial fitting function, we are able to remove the baseline online and obtain more accurate NIRS measurements.
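
The 4th-order baseline correction described here amounts to an ordinary least-squares polynomial fit followed by subtraction. A minimal sketch on synthetic drift data (illustrative signal and drift shapes, not the phantom measurements), assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 60, 600)                        # time, s
baseline = 0.02*t - 0.003*t**2 + 4e-5*t**3         # slow synthetic drift
signal = 0.05 * np.sin(2 * np.pi * 0.1 * t)        # hemodynamic-like oscillation
y = baseline + signal + rng.normal(0, 0.005, t.size)

coeffs = np.polyfit(t, y, 4)                       # 4th-order baseline model
corrected = y - np.polyval(coeffs, t)              # online-style baseline removal
```

Because the oscillatory component is nearly orthogonal to low-order polynomials over several cycles, the subtraction removes the drift while leaving the signal largely intact.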

  7. Estimation of Phase in Fringe Projection Technique Using High-order Instantaneous Moments Based Method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod

    2010-04-01

    For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a high-order instantaneous moments based method to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piece-wise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.
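
A drastically simplified stand-in for the piecewise-polynomial phase model can illustrate the parameterization: for a noiseless complex fringe exp(i·φ(x)) with polynomial φ, unwrapping the angle and fitting a polynomial recovers the coefficients (the HIM machinery exists precisely because real, noisy fringes do not allow this shortcut). All names and data below are illustrative:

```python
import numpy as np

# Toy complex fringe with a quadratic polynomial phase
x = np.linspace(-1, 1, 512)
phi = 2.0 + 5.0*x + 3.0*x**2
s = np.exp(1j * phi)

# Unwrap the principal-value angle, then fit the polynomial phase model
est = np.polyfit(x, np.unwrap(np.angle(s)), 2)   # highest-degree coefficient first
assert np.allclose(est, [3.0, 5.0, 2.0], atol=1e-6)
```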

  8. On the Existence of Non-Oscillatory Phase Functions for Second Order Ordinary Differential Equations in the High-Frequency Regime

    DTIC Science & Technology

    2014-08-04

Chebyshev coefficients of both r and q decay exponentially, although those of r decay at a slightly slower rate. 10.2. Evaluation of Legendre polynomials ...In this experiment, we compare the cost of evaluating Legendre polynomials of large order using the standard recurrence relation with the cost of...doing so with a non-oscillatory phase function. For any integer n ≥ 0, the Legendre polynomial Pn(x) of order n is a solution of the second order
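
The standard recurrence mentioned in the excerpt is the three-term relation (k+1)·P_{k+1}(x) = (2k+1)·x·P_k(x) − k·P_{k−1}(x). A minimal implementation (the baseline the paper compares against, not its phase-function method):

```python
def legendre(n, x):
    """Evaluate the Legendre polynomial P_n(x) via the three-term recurrence
    (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, x
    for k in range(1, n):
        p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
    return p

assert legendre(2, 0.5) == (3 * 0.5**2 - 1) / 2   # P_2(x) = (3x^2 - 1)/2
assert abs(legendre(100, 1.0) - 1.0) < 1e-12       # P_n(1) = 1
```

The cost is O(n) per evaluation, which is the scaling the non-oscillatory phase function approach aims to beat for very large n.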

  9. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distributions. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose coherence-optimal sampling: a Markov chain Monte Carlo sampling which directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  10. Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric Simulations

    DTIC Science & Technology

    2013-01-01

ξi be the Legendre-Gauss-Lobatto (LGL) points defined as the roots of (1 − ξ²)P′N(ξ) = 0, where PN(ξ) is the Nth-order Legendre polynomial. ...mesh refinement. By expanding the solution in a basis of high-order polynomials in each element, one can dynamically adjust the order of these basis...on refining the mesh while keeping the polynomial order constant across the elements. If we choose to allow non-conforming elements, the challenge in
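
The LGL points defined by (1 − ξ²)P′N(ξ) = 0 can be computed with NumPy's Legendre-series utilities; a minimal sketch:

```python
import numpy as np
from numpy.polynomial import legendre as L

def lgl_points(N):
    """Legendre-Gauss-Lobatto points: the endpoints ±1 together with
    the roots of P_N'(x), i.e. the roots of (1 - x^2) P_N'(x)."""
    c = np.zeros(N + 1)
    c[N] = 1.0                                # P_N in the Legendre basis
    interior = L.legroots(L.legder(c))        # roots of the derivative P_N'
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

assert np.allclose(lgl_points(2), [-1.0, 0.0, 1.0])   # P_2' = 3x
```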

  11. Simulation of Shallow Water Jets with a Unified Element-based Continuous/Discontinuous Galerkin Model with Grid Flexibility on the Sphere

    DTIC Science & Technology

    2013-01-01

is the derivative of the Nth-order Legendre polynomial. Given these definitions, the one-dimensional Lagrange polynomials hi(ξ) are hi(ξ) = − 1 N(N...2. Detail of one interface patch in the northern hemisphere. The high-order Legendre-Gauss-Lobatto (LGL) points are added to the linear grid by...smaller ones by a Lagrange polynomial of order nI. The number of quadrilateral elements and grid points of the final grid are then given by Np = 6(N

  12. Rows of optical vortices from elliptically perturbing a high-order beam

    NASA Astrophysics Data System (ADS)

    Dennis, Mark R.

    2006-05-01

    An optical vortex (phase singularity) with a high topological strength resides on the axis of a high-order light beam. The breakup of this vortex under elliptic perturbation into a straight row of unit-strength vortices is described. This behavior is studied in helical Ince-Gauss beams and astigmatic, generalized Hermite-Laguerre-Gauss beams, which are perturbations of Laguerre-Gauss beams. Approximations of these beams are derived for small perturbations, in which a neighborhood of the axis can be approximated by a polynomial in the complex plane: a Chebyshev polynomial for Ince-Gauss beams, and a Hermite polynomial for astigmatic beams.

  13. Interpolation Hermite Polynomials For Finite Element Method

    NASA Astrophysics Data System (ADS)

    Gusev, Alexander; Vinitsky, Sergue; Chuluunbaatar, Ochbadrakh; Chuluunbaatar, Galmandakh; Gerdt, Vladimir; Derbov, Vladimir; Góźdź, Andrzej; Krassovitskiy, Pavel

    2018-02-01

We describe a new algorithm for the analytic calculation of high-order Hermite interpolation polynomials on the simplex and give their classification. A typical example of a triangle element, to be used in high-accuracy finite element schemes, is given.
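
The record concerns Hermite interpolation on simplices; the familiar 1-D analogue, matching both values and first derivatives at the nodes, can be sketched with SciPy's CubicHermiteSpline (a 1-D illustration, not the simplex construction):

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Interpolate sin(x), matching values and first derivatives at 5 nodes
x = np.linspace(0, np.pi, 5)
spline = CubicHermiteSpline(x, np.sin(x), np.cos(x))

# Piecewise-cubic Hermite error is O(h^4); with h = pi/4 it stays below ~1e-3
xx = np.linspace(0, np.pi, 101)
err = np.max(np.abs(spline(xx) - np.sin(xx)))
assert err < 2e-3
```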

  14. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.

  15. Positivity-preserving High Order Finite Difference WENO Schemes for Compressible Euler Equations

    DTIC Science & Technology

    2011-07-15

the WENO reconstruction. We assume that there is a polynomial vector qi(x) = (ρi(x), mi(x), Ei(x))^T of degree k which is (k + 1)-th order accurate...i+1/2 = qi(xi+1/2). The existence of such polynomials can be established by interpolation for WENO schemes. For example, for the fifth-order...WENO scheme, there is a unique vector of polynomials of degree four qi(x) satisfying qi(xi−1/2) = w+i−1/2, qi(xi+1/2) = w−i+1/2 and 1/∆x ∫Ij qi

  16. On the connection coefficients and recurrence relations arising from expansions in series of Laguerre polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2003-05-01

    A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
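
One classical example of the kind of formula discussed here, expressing a Laguerre derivative as a linear combination of lower-degree Laguerre polynomials, is L_n'(x) = −Σ_{k=0}^{n−1} L_k(x). It can be verified numerically, assuming SciPy is available:

```python
import numpy as np
from scipy.special import eval_laguerre

# Classical identity: d/dx L_n(x) = -sum_{k=0}^{n-1} L_k(x)
n = 6
x = np.linspace(0.0, 5.0, 9)
h = 1e-6

# Central finite difference of L_6 versus the telescoped sum of lower L_k
fd = (eval_laguerre(n, x + h) - eval_laguerre(n, x - h)) / (2 * h)
series = -sum(eval_laguerre(k, x) for k in range(n))
assert np.allclose(fd, series, atol=1e-5)
```

The identity follows by telescoping the recurrence L_n' = L_{n−1}' − L_{n−1} down to L_0' = 0.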

  17. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.

  18. Direct calculation of modal parameters from matrix orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Guillaume, Patrick

    2011-10-01

The object of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple-input multiple-output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly. The transformation of the coefficients from the orthogonal polynomial basis to the power polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials. High-order models can be used without any numerical problems. The proposed method is compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]). For this comparative study, simulated as well as experimental data are used.
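
The numerical point, that roots/poles should be extracted in the orthogonal basis itself rather than after conversion to the power basis, can be illustrated with NumPy's colleague-matrix root finder for Chebyshev series (a scalar illustration of the principle, not the paper's matrix-polynomial method):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Roots of T_5 computed directly from its Chebyshev-basis coefficients via
# the colleague (companion-like) matrix, with no power-basis conversion.
c = np.zeros(6)
c[5] = 1.0
roots = np.sort(C.chebroots(c))

# Known zeros of T_5: cos((2k+1) * pi / 10), k = 0..4
expected = np.sort(np.cos((2 * np.arange(5) + 1) * np.pi / 10))
assert np.allclose(roots, expected)
```

For high orders, the alternative route through `cheb2poly` followed by `np.roots` degrades badly, which is the scalar analogue of the ill-conditioned transformation the abstract describes.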

  19. Characterization of high order spatial discretizations and lumping techniques for discontinuous finite element SN transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, P. G.; Ragusa, J. C.; Morel, J. E.

    2013-07-01

We examine several possible methods of mass matrix lumping for discontinuous finite element discrete ordinates transport using a Lagrange interpolatory polynomial trial space. Though positive outflow angular flux is guaranteed with traditional mass matrix lumping in a purely absorbing 1-D slab cell for the linear discontinuous approximation, we show that when used with higher degree interpolatory polynomial trial spaces, traditional lumping does not yield strictly positive outflows and does not increase in accuracy with an increase in trial space polynomial degree. As an alternative, we examine methods which are 'self-lumping'. Self-lumping methods yield diagonal mass matrices by using numerical quadrature restricted to the Lagrange interpolatory points. Using equally-spaced interpolatory points, self-lumping is achieved through the use of closed Newton-Cotes formulas, resulting in strictly positive outflows in pure absorbers for odd-power polynomials in 1-D slab geometry. By changing the interpolatory points from the traditional equally-spaced points to the quadrature points of the Gauss-Legendre or Lobatto-Gauss-Legendre quadratures, it is possible to generate solution representations with a diagonal mass matrix and a strictly positive outflow for any degree of polynomial solution representation in a pure absorber medium in 1-D slab geometry. Further, there is no inherent limit to the local truncation error order of accuracy when using interpolatory points that correspond to the quadrature points of high-order-accuracy numerical quadrature schemes. (authors)

20. Spectral/hp element methods: Recent developments, applications, and perspectives

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.

    2018-02-01

The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.
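
The p-convergence property, approximation error falling rapidly with polynomial order for smooth functions, is easy to demonstrate with Chebyshev interpolation (a 1-D illustration of the principle, not a spectral/hp element solver):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Interpolate the smooth function exp(x) at increasing polynomial degree
# and measure the max error on a fine grid: it should drop rapidly (p-convergence).
xx = np.linspace(-1, 1, 501)
errs = []
for deg in (4, 8, 16):
    coeffs = C.chebinterpolate(np.exp, deg)
    errs.append(np.max(np.abs(C.chebval(xx, coeffs) - np.exp(xx))))

assert errs[0] > errs[1] > errs[2]   # monotone improvement with degree
assert errs[2] < 1e-12               # near machine precision by degree 16
```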

  1. Stability analysis of fuzzy parametric uncertain systems.

    PubMed

    Bhiwani, R J; Patre, B M

    2011-10-01

In this paper, the determination of the stability margin and the gain and phase margin aspects of fuzzy parametric uncertain systems (FPUS) are dealt with. The stability analysis of uncertain linear systems with coefficients described by fuzzy functions is studied. A complexity-reduced technique for determining the stability margin for FPUS is proposed. The method suggested depends on the order of the characteristic polynomial. In order to find the stability margin of interval polynomials of order less than 5, it is not always necessary to determine and check all four Kharitonov polynomials. It has been shown that, for determining the stability margin of FPUS of order five, four, and three, we require only three, two, and one Kharitonov polynomials, respectively. Only for sixth- and higher-order polynomials is the complete set of Kharitonov polynomials needed to determine the stability margin. Thus, for lower-order systems, the calculations are reduced to a large extent. This idea has been extended to determine the stability margin of fuzzy interval polynomials. It is also shown that the gain and phase margin of FPUS can be determined analytically without using graphical techniques. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
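
The four Kharitonov vertex polynomials referenced here follow a fixed lower/upper coefficient pattern repeating with period four. A minimal construction sketch (the paper's contribution, the reduced sets for low-order systems, is not reproduced):

```python
def kharitonov(lower, upper):
    """Four Kharitonov vertex polynomials of an interval polynomial.
    Coefficients run from s^0 upward; each vertex picks lower (0) or
    upper (1) bounds in a pattern that repeats with period 4."""
    patterns = [(0, 0, 1, 1),   # K1: l, l, u, u, ...
                (1, 1, 0, 0),   # K2: u, u, l, l, ...
                (0, 1, 1, 0),   # K3: l, u, u, l, ...
                (1, 0, 0, 1)]   # K4: u, l, l, u, ...
    bounds = (lower, upper)
    return [[bounds[pat[i % 4]][i] for i in range(len(lower))] for pat in patterns]

lo = [1, 2, 3, 4, 5]
hi = [2, 3, 4, 5, 6]
K = kharitonov(lo, hi)
assert K[0] == [1, 2, 4, 5, 5]   # l, l, u, u, then the pattern repeats
```

Kharitonov's theorem reduces robust stability of the whole interval family to Hurwitz stability of these four vertex polynomials.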

  2. High-Order Polynomial Expansions (HOPE) for flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Chris J., Jr.

    1991-01-01

    The Van Leer flux splitting is known to produce excessive numerical dissipation for Navier-Stokes calculations. Researchers attempt to remedy this deficiency by introducing a higher order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M = 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave boundary layer interactions.
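
The baseline Van Leer mass-flux splitting that HOPE modifies is, for subsonic Mach numbers |M| ≤ 1, F± = ±(ρa/4)(M ± 1)². A minimal sketch of that baseline (the HOPE correction term itself is not reproduced here):

```python
def van_leer_mass_flux(rho, a, M):
    """Van Leer split mass fluxes for |M| <= 1:
    F+ = (rho*a/4)(M+1)^2,  F- = -(rho*a/4)(M-1)^2."""
    fp = rho * a * 0.25 * (M + 1.0) ** 2
    fm = -rho * a * 0.25 * (M - 1.0) ** 2
    return fp, fm

# Consistency: the split fluxes must sum to the physical mass flux rho*a*M = rho*u
fp, fm = van_leer_mass_flux(1.2, 340.0, 0.3)
assert abs((fp + fm) - 1.2 * 340.0 * 0.3) < 1e-10
```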

  3. Quantum solvability of a general ordered position dependent mass system: Mathews-Lakshmanan oscillator

    NASA Astrophysics Data System (ADS)

    Karthiga, S.; Chithiika Ruby, V.; Senthilvelan, M.; Lakshmanan, M.

    2017-10-01

    In position dependent mass (PDM) problems, the quantum dynamics of the associated systems have been understood well in the literature for particular orderings. However, no efforts seem to have been made to solve such PDM problems for general orderings to obtain a global picture. In this connection, we here consider the general ordered quantum Hamiltonian of an interesting position dependent mass problem, namely, the Mathews-Lakshmanan oscillator, and try to solve the quantum problem for all possible orderings including Hermitian and non-Hermitian ones. The other interesting point in our study is that for all possible orderings, although the Schrödinger equation of this Mathews-Lakshmanan oscillator is uniquely reduced to the associated Legendre differential equation, their eigenfunctions cannot be represented in terms of the associated Legendre polynomials with integral degree and order. Rather the eigenfunctions are represented in terms of associated Legendre polynomials with non-integral degree and order. We here explore such polynomials and represent the discrete and continuum states of the system. We also exploit the connection between associated Legendre polynomials with non-integral degree with other orthogonal polynomials such as Jacobi and Gegenbauer polynomials.

  4. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expanded coefficients is also given. A simple approach in order to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.

  5. Coherent orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es

    2013-08-15

We discuss a fundamental characteristic of orthogonal polynomials, namely the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we thus include, in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions, Lie algebra theory. We start here from the square integrable functions on the open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second-order Casimir C gives rise to the second-order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl-Heisenberg algebra h(1) with C=0 for Hermite polynomials and su(1,1) with C=−1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials, the Lie algebra is extended both to the whole space of L² functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L² and, in particular, generalized coherent polynomials are thus obtained. Highlights: a fundamental characteristic of orthogonal polynomials (OP) is the existence of a Lie algebra; differential recurrence relations of OP determine a unitary representation of a non-compact Lie group; the second-order Casimir originates the second-order differential equation that defines the corresponding OP family; generalized coherent polynomials are obtained from OP.

  6. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

    For real-time solution of inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, in which traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from SHM according to the approximate linear characteristic of regional disturbing potential. Firstly, deflections of vertical (DOVs) on dense grids are calculated with SHM in an external computer. And then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and applicable region of polynomial model are both updated synchronously in above computer. Compared with high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than NGM. Finally, numerical test and INS experiment show that the proposed method outperforms traditional gravity models applied for high precision free-INS. PMID:27669261

  7. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For real-time solution of inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, in which traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from SHM according to the approximate linear characteristic of regional disturbing potential. Firstly, deflections of vertical (DOVs) on dense grids are calculated with SHM in an external computer. And then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and applicable region of polynomial model are both updated synchronously in above computer. Compared with high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than NGM. Finally, numerical test and INS experiment show that the proposed method outperforms traditional gravity models applied for high precision free-INS.
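
The two-dimensional second-order polynomial model has six coefficients, which can be fitted to gridded samples by linear least squares. A minimal sketch with a synthetic surface (not actual DOV grids; the SHM-based coefficient generation is not reproduced):

```python
import numpy as np

def fit_poly2d(x, y, z):
    """Least-squares fit of z ≈ c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2."""
    A = np.column_stack([np.ones_like(x), x, y, x**2, x*y, y**2])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

# Recover a known quadratic surface from dense grid samples
gx, gy = np.meshgrid(np.linspace(0, 1, 11), np.linspace(0, 1, 11))
x, y = gx.ravel(), gy.ravel()
z = 2.0 + 0.5*x - 1.5*y + 0.3*x**2 + 0.1*x*y - 0.2*y**2
c = fit_poly2d(x, y, z)
assert np.allclose(c, [2.0, 0.5, -1.5, 0.3, 0.1, -0.2], atol=1e-8)
```

Evaluating this six-term model online is far cheaper than summing a high-degree spherical harmonic series, which is the trade-off the abstract describes.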

  8. From sequences to polynomials and back, via operator orderings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu

    2013-12-15

Bender and Dunne ["Polynomials and operator orderings," J. Math. Phys. 29, 1727-1731 (1988)] showed that linear combinations of words q^k p^n q^(n-k), where p and q are subject to the relation qp − pq = i, may be expressed as a polynomial in the symbol z = (1/2)(qp + pq). Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.

  9. Numerical Solutions of the Nonlinear Fractional-Order Brusselator System by Bernstein Polynomials

    PubMed Central

    Khan, Rahmat Ali; Tajadodi, Haleh; Johnston, Sarah Jane

    2014-01-01

In this paper we propose Bernstein polynomials to achieve numerical solutions of the nonlinear fractional-order chaotic system known as the fractional-order Brusselator system. We use operational matrices of fractional integration and multiplication of Bernstein polynomials, which turn the nonlinear fractional-order Brusselator system into a system of algebraic equations. Two illustrative examples are given in order to demonstrate the accuracy and simplicity of the proposed techniques. PMID:25485293
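
The Bernstein basis used here is B_{k,n}(x) = C(n,k)·x^k·(1−x)^(n−k), which forms a partition of unity on [0, 1]. A minimal sketch of the basis (the operational matrices themselves are not reproduced):

```python
from math import comb

def bernstein(k, n, x):
    """Bernstein basis polynomial B_{k,n}(x) = C(n,k) x^k (1-x)^(n-k)."""
    return comb(n, k) * x**k * (1 - x)**(n - k)

# Partition of unity: the n+1 basis polynomials sum to 1 everywhere on [0, 1]
n = 7
for x in (0.0, 0.25, 0.5, 0.9):
    assert abs(sum(bernstein(k, n, x) for k in range(n + 1)) - 1.0) < 1e-12
```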

  10. The s-Ordered Fock Space Projectors Gained by the General Ordering Theorem

    NASA Astrophysics Data System (ADS)

    Farid, Shähandeh; Mohammad, Reza Bazrafkan; Mahmoud, Ashrafi

    2012-09-01

    Employing the general ordering theorem (GOT), operational methods and incomplete 2-D Hermite polynomials, we derive the t-ordered expansion of Fock space projectors. Using the result, the general ordered form of the coherent state projectors is obtained. This indeed gives a new integration formula regarding incomplete 2-D Hermite polynomials. In addition, the orthogonality relation of the incomplete 2-D Hermite polynomials is derived to resolve Dattoli's failure.

  11. Matrix form of Legendre polynomials for solving linear integro-differential equations of high order

    NASA Astrophysics Data System (ADS)

    Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.

    2017-04-01

    This paper presents an effective approximate solution of high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform the FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to convert the matrix equation into a system of linear algebraic equations, which is then solved by Gaussian elimination. The accuracy and validity of the method are discussed by solving two numerical examples and by comparisons with wavelet and other methods.
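One ingredient named above, the Gauss-Legendre quadrature formula, can be illustrated in a few lines (this is only the quadrature step, not the full collocation solver):

```python
import numpy as np

# Gauss-Legendre quadrature: with m nodes, the rule is exact for
# polynomials of degree <= 2m - 1 on [-1, 1].
nodes, weights = np.polynomial.legendre.leggauss(5)  # 5-point rule

# Integrate x^4 over [-1, 1]; the exact value is 2/5.
approx = np.sum(weights * nodes**4)
print(approx)  # ~0.4, exact to machine precision
```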

  12. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and the approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Second, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, used to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley sampling, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
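The CTP construction can be sketched as follows; the grid size and the dimension are arbitrary choices for illustration:

```python
import numpy as np

# Chebyshev tensor product (CTP) sampling sketch: take the zeros of the
# Chebyshev polynomial T_n in each dimension and form the tensor product.

def chebyshev_zeros(n):
    """Zeros of T_n on [-1, 1]: cos((2k+1)*pi/(2n)), k = 0..n-1."""
    k = np.arange(n)
    return np.cos((2*k + 1) * np.pi / (2*n))

def ctp_samples(n, dim):
    """Tensor-product grid of Chebyshev zeros in `dim` dimensions."""
    grids = np.meshgrid(*[chebyshev_zeros(n)] * dim, indexing='ij')
    return np.column_stack([g.ravel() for g in grids])

pts = ctp_samples(4, 2)
print(pts.shape)  # (16, 2): 4 zeros per dimension, 2 dimensions
# Sanity check: T_4 vanishes at the 1-D sample points.
print(np.allclose(np.polynomial.chebyshev.chebval(chebyshev_zeros(4),
                                                  [0, 0, 0, 0, 1]), 0))
```

A CCM-style sample set would then be a random subset of these CTP points, which keeps the favorable interpolation properties of Chebyshev zeros while reducing the sample count.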

  13. Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples

    PubMed Central

    Chen, Andrew; Chen, Chiachung

    2013-01-01

    Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major problems in their use is the linearization of the transfer relation between temperature and the output voltage of the thermocouple. Regression analysis can be used to improve the linear calibration equation and its hardware modules to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of polynomial equations of different orders. Two quantitative criteria, the average of the absolute error values |e|ave and the standard deviation of the calibration equation estd, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of the polynomial equation differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher-degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
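The two evaluation criteria can be sketched on synthetic data (the voltage-temperature relation below is invented for illustration and is not a real thermocouple reference table):

```python
import numpy as np

# Fit polynomial calibration equations of increasing order to
# (voltage, temperature) pairs and score them by the average absolute
# error |e|_ave and the standard deviation e_std of the residuals.

rng = np.random.default_rng(0)
v = np.linspace(0.0, 5.0, 50)                     # output voltage, mV
t_true = 2.0 + 24.0*v - 0.8*v**2 + 0.05*v**3      # temperature, deg C
t_meas = t_true + rng.normal(0.0, 0.01, v.size)   # noisy readings

def calibration_metrics(order):
    """|e|_ave and e_std for a polynomial calibration of given order."""
    coeffs = np.polyfit(v, t_meas, order)
    e = t_meas - np.polyval(coeffs, v)
    return np.mean(np.abs(e)), np.std(e, ddof=1)

for order in (1, 2, 3):
    e_ave, e_std = calibration_metrics(order)
    print(order, round(e_ave, 4), round(e_std, 4))
```

On this synthetic data the cubic fit reaches the noise floor, mirroring the abstract's finding that an adequately high polynomial order improves both accuracy and precision.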

  14. Hybrid High-Order methods for finite deformations of hyperelastic materials

    NASA Astrophysics Data System (ADS)

    Abbas, Mickaël; Ern, Alexandre; Pignet, Nicolas

    2018-01-01

    We devise and evaluate numerically Hybrid High-Order (HHO) methods for hyperelastic materials undergoing finite deformations. The HHO methods use as discrete unknowns piecewise polynomials of order k≥1 on the mesh skeleton, together with cell-based polynomials that can be eliminated locally by static condensation. The discrete problem is written as the minimization of a broken nonlinear elastic energy where a local reconstruction of the displacement gradient is used. Two HHO methods are considered: a stabilized method where the gradient is reconstructed as a tensor-valued polynomial of order k and a stabilization is added to the discrete energy functional, and an unstabilized method which reconstructs a stable higher-order gradient and circumvents the need for stabilization. Both methods satisfy the principle of virtual work locally with equilibrated tractions. We present a numerical study of the two HHO methods on test cases with known solution and on more challenging three-dimensional test cases including finite deformations with strong shear layers and cavitating voids. We assess the computational efficiency of both methods, and we compare our results to those obtained with an industrial software using conforming finite elements and to results from the literature. The two HHO methods exhibit robust behavior in the quasi-incompressible regime.

  15. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data on the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables of total degree ≤ n, given the values of the polynomial and some of its derivatives at exactly as many points as the dimension of the polynomial space, is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data are available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and high-order accurate.

  16. Random regression models using different functions to model milk flow in dairy cows.

    PubMed

    Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Tonhati, H; Albuquerque, L G

    2014-09-12

    We analyzed 75,555 test-day milk flow records from 2175 primiparous Holstein cows that calved between 1997 and 2005. Milk flow was obtained by dividing the mean milk yield (kg) of the three daily milkings by the total milking time (min) and was expressed in kg/min. Milk flow was grouped into 43 weekly classes. The analyses were performed using a single-trait random regression model that included direct additive genetic, permanent environmental, and residual random effects. In addition, the contemporary group and the linear and quadratic effects of cow age at calving were included as fixed effects. A fourth-order orthogonal Legendre polynomial of days in milk was used to model the mean trend in milk flow. The additive genetic and permanent environmental covariance functions were estimated using random regression Legendre polynomials and B-spline functions of days in milk. The model using a third-order Legendre polynomial for additive genetic effects and a sixth-order polynomial for permanent environmental effects, which contained 7 residual classes, proved to be the most adequate to describe variations in milk flow, and was also the most parsimonious. The heritability of milk flow estimated by the most parsimonious model was of moderate to high magnitude.
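The Legendre covariates used in such random regression models can be built directly with NumPy; the DIM range and polynomial order below follow the abstract, but the weekly test-day grid is an illustrative assumption:

```python
import numpy as np

# Build the Legendre covariate matrix for a random regression test-day
# model: days in milk (DIM) are rescaled to [-1, 1] and the Legendre
# polynomials P_0..P_order are evaluated at each record.

def legendre_covariates(dim, dim_min, dim_max, order):
    """Vandermonde-like matrix of Legendre polynomials P_0..P_order."""
    x = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    return np.polynomial.legendre.legvander(x, order)

dim = np.arange(5, 306, 7)                # one test day per weekly class
Z = legendre_covariates(dim, 5, 305, 4)   # fourth-order polynomial
print(Z.shape)                # (43, 5): 43 weekly classes, 5 covariates
print(np.allclose(Z[:, 0], 1.0))  # P_0 is the constant term
```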

  17. Recurrences and explicit formulae for the expansion and connection coefficients in series of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2004-08-01

    A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.

  18. Higher order derivatives of R-Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Das, Sourav; Swaminathan, A.

    2016-06-01

    In this work, the R-Jacobi polynomials defined on the nonnegative real axis and related to the F-distribution are considered. Using their Sturm-Liouville system, higher-order derivatives are constructed. The orthogonality property of these higher-order R-Jacobi polynomials is obtained, together with their normal form, self-adjoint form and hypergeometric representation. Interesting results on the interpolation formula and Gaussian quadrature formulae are obtained, with numerical examples.

  19. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

    Polynomial interpolation is an essential subject in numerical analysis. On a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeros of the corresponding Chebyshev polynomial will converge. When the Newton formula is used, however, this convergence result holds only in theory. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
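A minimal sketch of Newton-form interpolation at Chebyshev zeros (the paper's specific point reordering and interval scaling for stability are not reproduced here):

```python
import numpy as np

# Newton-form interpolation: build the divided-difference table, then
# evaluate with nested (Horner-like) multiplication.

def divided_differences(x, y):
    """Coefficients of the Newton form via the divided-difference table."""
    n = len(x)
    c = np.array(y, dtype=float)
    for j in range(1, n):
        c[j:] = (c[j:] - c[j-1:-1]) / (x[j:] - x[:n-j])
    return c

def newton_eval(c, x_nodes, t):
    """Evaluate the Newton form by nested multiplication."""
    result = np.full_like(np.asarray(t, dtype=float), c[-1])
    for k in range(len(c) - 2, -1, -1):
        result = result * (t - x_nodes[k]) + c[k]
    return result

n = 8
x = np.cos((2*np.arange(n) + 1) * np.pi / (2*n))   # Chebyshev zeros
f = lambda t: 1.0 / (1.0 + 25.0*t**2)              # Runge's function
c = divided_differences(x, f(x))
print(np.allclose(newton_eval(c, x, x), f(x)))     # exact at the nodes
```

With equally spaced points the same table would already show severe cancellation at moderate n, which is the instability the paper's reordering and interval scaling address.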

  20. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high-order schemes is performed from an interpolation point of view. The analysis indicates that classical high-order finite difference schemes, which use polynomial interpolation, hold high accuracy only at the nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can achieve uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  1. Dual exponential polynomials and linear differential equations

    NASA Astrophysics Data System (ADS)

    Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne

    2018-01-01

    We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.

  2. Stabilization of an inverted pendulum-cart system by fractional PI-state feedback.

    PubMed

    Bettayeb, M; Boussalem, C; Mansouri, R; Al-Saggaf, U M

    2014-03-01

    This paper deals with pole placement PI-state feedback controller design to control an integer-order system. The fractional aspect of the control law is introduced by a dynamic state feedback of the form u(t) = K_p x(t) + K_I I^α(x(t)). The closed-loop characteristic polynomial is thus fractional, and its roots are complicated to calculate. The proposed method allows us to decompose this polynomial into a first-order fractional polynomial and an integer-order polynomial of order n-1 (n being the order of the integer system). This new stabilization control algorithm is applied to an inverted pendulum-cart test-bed, and the effectiveness and robustness of the proposed control are examined by experiments. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  3. A two-step, fourth-order method with energy preserving properties

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato

    2012-09-01

    We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a prerogative of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization obtained with the aid of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out always to be the case when the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed, and a number of test problems are presented in order to compare the behavior of the new methods with the theoretical results.

  4. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how the techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the obtained algorithms.
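For tiny cases the solvent equation can be checked by brute force; this toy sketch over GF(2) is not the paper's lambda-matrix algorithm, and the matrices are constructed so that a solvent exists by design:

```python
import numpy as np
from itertools import product

# A solvent X of the monic quadratic matrix polynomial satisfies
#   X^2 + A1 X + A0 = 0   (all arithmetic over GF(p)).

p = 2
X_true = np.array([[1, 1], [0, 1]])
A1 = np.array([[1, 0], [1, 1]])
# Choose A0 so that X_true is a solvent: A0 = -(X^2 + A1 X) mod p.
A0 = (-(X_true @ X_true + A1 @ X_true)) % p

def solvents(A1, A0, p):
    """All 2x2 solvents over GF(p), by exhaustive search."""
    found = []
    for entries in product(range(p), repeat=4):
        X = np.array(entries).reshape(2, 2)
        if np.all((X @ X + A1 @ X + A0) % p == 0):
            found.append(X)
    return found

sols = solvents(A1, A0, p)
print(any(np.array_equal(X, X_true) for X in sols))  # True
```

Exhaustive search costs p^(2n^2) candidate matrices and is hopeless beyond toy sizes, which is exactly why structured lambda-matrix methods such as the paper's are needed.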

  5. Effects of Air Drag and Lunar Third-Body Perturbations on Motion Near a Reference KAM Torus

    DTIC Science & Technology

    2011-03-01

    [Garbled text-extraction snippet from the report's nomenclature: m denotes the mass of the satellite and also the order of the associated Legendre polynomial; n denotes the mean motion and also the degree of the associated Legendre polynomial; Pmn is the associated Legendre polynomial of order m and degree n; the constants Cmn and Smn specify the shape of the gravitational field.]

  6. A comparison of Redlich-Kister polynomial and cubic spline representations of the chemical potential in phase field computations

    DOE PAGES

    Teichert, Gregory H.; Gunda, N. S. Harsha; Rudraraju, Shiva; ...

    2016-12-18

    Free energies play a central role in many descriptions of equilibrium and non-equilibrium properties of solids. Continuum partial differential equations (PDEs) of atomic transport, phase transformations and mechanics often rely on first and second derivatives of a free energy function. The stability, accuracy and robustness of numerical methods to solve these PDEs are sensitive to the particular functional representations of the free energy. In this communication we investigate the influence of different representations of thermodynamic data on phase field computations of diffusion and two-phase reactions in the solid state. First-principles statistical mechanics methods were used to generate realistic free energy data for HCP titanium with interstitially dissolved oxygen. While Redlich-Kister polynomials have formed the mainstay of thermodynamic descriptions of multi-component solids, they require high-order terms to fit oscillations in chemical potentials around phase transitions. Here, we demonstrate that high-fidelity fits to rapidly fluctuating free energy functions are obtained with spline functions. As a result, spline functions that are many degrees lower than Redlich-Kister polynomials provide equal or superior fits to chemical potential data and, when used in phase field computations, result in solution times approaching an order of magnitude speedup relative to the use of Redlich-Kister polynomials.

  7. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated with Legendre polynomials up to fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for the permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds, sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of the genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of the variance curves over DIM with sufficient accuracy for the genetic and permanent environmental parts. The genetic correlation structure was also fitted with sufficient accuracy by a third-order polynomial, but a fourth order was needed for the permanent environmental component. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for the permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.

  8. Exact traveling-wave and spatiotemporal soliton solutions to the generalized (3+1)-dimensional Schrödinger equation with polynomial nonlinearity of arbitrary order.

    PubMed

    Petrović, Nikola Z; Belić, Milivoj; Zhong, Wei-Ping

    2011-02-01

    We obtain exact traveling wave and spatiotemporal soliton solutions to the generalized (3+1)-dimensional nonlinear Schrödinger equation with variable coefficients and polynomial Kerr nonlinearity of an arbitrarily high order. Exact solutions, given in terms of Jacobi elliptic functions, are presented for the special cases of cubic-quintic and septic models. We demonstrate that the widely used method for finding exact solutions in terms of Jacobi elliptic functions is not applicable to the nonlinear Schrödinger equation with saturable nonlinearity. ©2011 American Physical Society

  9. High-Order Polynomial Expansions (HOPE) for flux-vector splitting

    NASA Technical Reports Server (NTRS)

    Liou, Meng-Sing; Steffen, Chris J., Jr.

    1991-01-01

    The Van Leer flux splitting is known to produce excessive numerical dissipation for Navier-Stokes calculations. Researchers attempt to remedy this deficiency by introducing a higher order polynomial expansion (HOPE) for the mass flux. In addition to Van Leer's splitting, a term is introduced so that the mass diffusion error vanishes at M equals 0. Several splittings for pressure are proposed and examined. The effectiveness of the HOPE scheme is illustrated for 1-D hypersonic conical viscous flow and 2-D supersonic shock-wave boundary layer interactions. Also, the authors give the weakness of the scheme and suggest areas for further investigation.

  10. Wavefront reconstruction algorithm based on Legendre polynomials for radial shearing interferometry over a square area and error analysis.

    PubMed

    Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai

    2015-08-10

    Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstructing the distorted wavefront under test of a laser beam over a square area from the phase-difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error-propagation coefficients is deduced for the case where the phase-difference data of the overlapping area contain random noise. A matrix T is proposed to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing; the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, polynomial terms and noise-propagation coefficients, and between shear ratio, sampling points and norms of the T matrix, are analyzed. These results provide theoretical reference and guidance for the optimal design of radial shearing interferometry systems.

  11. Modelling the breeding of Aedes Albopictus species in an urban area in Pulau Pinang using polynomial regression

    NASA Astrophysics Data System (ADS)

    Salleh, Nur Hanim Mohd; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Saad, Ahmad Ramli; Sulaiman, Husna Mahirah; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Polynomial regression is used to model a curvilinear relationship between a response variable and one or more predictor variables. It is a form of a least squares linear regression model that predicts a single response variable by decomposing the predictor variables into an nth order polynomial. In a curvilinear relationship, each curve has a number of extreme points equal to the highest order term in the polynomial. A quadratic model will have either a single maximum or minimum, whereas a cubic model has both a relative maximum and a minimum. This study used quadratic modeling techniques to analyze the effects of environmental factors: temperature, relative humidity, and rainfall distribution on the breeding of Aedes albopictus, a type of Aedes mosquito. Data were collected at an urban area in south-west Penang from September 2010 until January 2011. The results indicated that the breeding of Aedes albopictus in the urban area is influenced by all three environmental characteristics. The number of mosquito eggs is estimated to reach a maximum value at a medium temperature, a medium relative humidity and a high rainfall distribution.
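The vertex-of-a-quadratic idea described above can be sketched on synthetic data (the temperatures and egg counts below are invented, not the study's field observations):

```python
import numpy as np

# A quadratic fit y ~ a*x^2 + b*x + c has a single extremum at
# x* = -b / (2a); for a concave fit (a < 0) this is the maximum,
# e.g. the temperature at which the modeled egg count peaks.

rng = np.random.default_rng(1)
temp = np.linspace(24, 34, 30)                        # temperature, deg C
eggs = -2.0*(temp - 29.0)**2 + 80.0 + rng.normal(0, 1.0, temp.size)

a, b, c = np.polyfit(temp, eggs, 2)   # eggs ~ a*T^2 + b*T + c
t_star = -b / (2*a)
print(round(t_star, 1))  # maximum near 29 deg C
```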

  12. Using high-order polynomial basis in 3-D EM forward modeling based on volume integral equation method

    NASA Astrophysics Data System (ADS)

    Kruglyakov, Mikhail; Kuvshinov, Alexey

    2018-05-01

    3-D interpretation of electromagnetic (EM) data of different origin and scale becomes a common practice worldwide. However, 3-D EM numerical simulations (modeling)—a key part of any 3-D EM data analysis—with realistic levels of complexity, accuracy and spatial detail still remains challenging from the computational point of view. We present a novel, efficient 3-D numerical solver based on a volume integral equation (IE) method. The efficiency is achieved by using a high-order polynomial (HOP) basis instead of the zero-order (piecewise constant) basis that is invoked in all routinely used IE-based solvers. We demonstrate that usage of the HOP basis allows us to decrease substantially the number of unknowns (preserving the same accuracy), with corresponding speed increase and memory saving.

  13. Credible Set Estimation, Analysis, and Applications in Synthetic Aperture Radar Canonical Feature Extraction

    DTIC Science & Technology

    2015-03-26

    [Garbled list-of-figures snippet from the report: a figure depicting the CSE implementation for use with CV Domes data; validation results for N = 1 observation at 1.0 and 0.01 intervals, each with a Legendre polynomial of order Nl = 5.]

  14. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.
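A coarse single-tone frequency estimator of the kind these methods refine can be sketched with a zero-padded FFT peak search (the IFEIF interpolation stage itself, and ESPRIT/MUSIC, are not reproduced here):

```python
import numpy as np

# Coarse single-tone frequency estimation: locate the FFT magnitude peak
# on a zero-padded grid. Refinement stages (e.g. interpolation on Fourier
# coefficients) would then polish this estimate.

def estimate_tone(signal, pad_factor=8):
    """Frequency (cycles/sample) of a single complex exponential."""
    n = len(signal) * pad_factor
    spectrum = np.fft.fft(signal, n)
    k = np.argmax(np.abs(spectrum))
    f = k / n
    return f if f <= 0.5 else f - 1.0   # map to [-0.5, 0.5)

f_true = 0.1230
t = np.arange(256)
s = np.exp(2j * np.pi * f_true * t)
print(abs(estimate_tone(s) - f_true) < 1e-3)  # True
```

The grid spacing after padding is 1/(pad_factor * len(signal)), which bounds the error of this coarse stage; the interpolation-based refinements compared in the paper push well below that bound at lower cost than brute-force padding.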

  15. Robust stability of fractional order polynomials with complicated uncertainty structure

    PubMed Central

    Şenol, Bilal; Pekař, Libor

    2017-01-01

    The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173

  16. Approximating smooth functions using algebraic-trigonometric polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharapudinov, Idris I

    2011-01-14

    The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p_n(t) + τ_m(t), where p_n(t) is an algebraic polynomial of degree n and τ_m(t) = a_0 + Σ_{k=1}^m (a_k cos kπt + b_k sin kπt) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W^r_∞(M) and an upper bound for similar approximations in the class W^r_p(M) with 4/3
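The approximant family p_n(t) + τ_m(t) can be fitted by least squares; this sketch only illustrates the function class, not Sharapudinov's approximation-order analysis:

```python
import numpy as np

# Design matrix for an algebraic-trigonometric polynomial on [-1, 1]:
# columns 1, t, ..., t^n, then cos(k*pi*t), sin(k*pi*t) for k = 1..m.

def mixed_design_matrix(t, n, m):
    cols = [t**j for j in range(n + 1)]
    for k in range(1, m + 1):
        cols.append(np.cos(k * np.pi * t))
        cols.append(np.sin(k * np.pi * t))
    return np.column_stack(cols)

t = np.linspace(-1, 1, 200)
f = 0.5 + t - 2*t**2 + 0.3*np.cos(np.pi*t) - 0.7*np.sin(2*np.pi*t)

A = mixed_design_matrix(t, n=2, m=2)
coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
residual = np.max(np.abs(A @ coeffs - f))
print(residual < 1e-10)  # f lies in the span, so the fit is exact
```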

  17. A fully-coupled discontinuous Galerkin spectral element method for two-phase flow in petroleum reservoirs

    NASA Astrophysics Data System (ADS)

    Taneja, Ankur; Higdon, Jonathan

    2018-01-01

    A high-order spectral element discontinuous Galerkin method is presented for simulating immiscible two-phase flow in petroleum reservoirs. The governing equations involve a coupled system of strongly nonlinear partial differential equations for the pressure and fluid saturation in the reservoir. A fully implicit method is used with a high-order accurate time integration using an implicit Rosenbrock method. Numerical tests give the first demonstration of high order hp spatial convergence results for multiphase flow in petroleum reservoirs with industry standard relative permeability models. High order convergence is shown formally for spectral elements with up to 8th order polynomials for both homogeneous and heterogeneous permeability fields. Numerical results are presented for multiphase fluid flow in heterogeneous reservoirs with complex geometric or geologic features using up to 11th order polynomials. Robust, stable simulations are presented for heterogeneous geologic features, including globally heterogeneous permeability fields, anisotropic permeability tensors, broad regions of low-permeability, high-permeability channels, thin shale barriers and thin high-permeability fractures. A major result of this paper is the demonstration that the resolution of the high order spectral element method may be exploited to achieve accurate results utilizing a simple cartesian mesh for non-conforming geological features. Eliminating the need to mesh to the boundaries of geological features greatly simplifies the workflow for petroleum engineers testing multiple scenarios in the face of uncertainty in the subsurface geology.

  18. Polynomial expansions of single-mode motions around equilibrium points in the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Lei, Hanlun; Xu, Bo; Circi, Christian

    2018-05-01

    In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, in which a suitable pair of independent variables is taken as modal coordinates and the remaining state variables are expressed as polynomial series in them. Based on the invariant manifold approach, the general procedure for constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to tenth order for the single-mode motions around the collinear libration points, and up to order eight and six for the planar and vertical periodic motions around the triangular libration point, respectively. The polynomial expansions constructed can be used to determine the initial states for the single-mode motions around the equilibrium points. To check their validity, the accuracy of the initial states determined by the polynomial expansions is evaluated.

  19. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2017-09-01

    The paper deals with high-order discontinuous Galerkin (DG) method with the approximation order that exceeds 20 and reaches 100 and even 1000 with respect to one-dimensional case. To achieve such a high order solution, the DG method with finite difference method has to be applied. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can be easily adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements and the degrees of freedom are coefficients of linear combination of basis functions. In this sort of analysis the reference elements are needed, so the transformations of the reference element into the real one are needed as well as the transformations connected with the mesh skeleton. Due to orthogonality of the basis functions, the obtained matrices are sparse even for finite elements with more than thousands degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of benchmark examples of 1D and 2D for the elliptic problems. The example presents the great effectiveness of the method that can shorten the length of calculation over hundreds times.

  1. High Dynamic Range Imaging Using Multiple Exposures

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei

    2017-06-01

    It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by weighted summation of the individual radiance maps. After that, a novel local tone-mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of the histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of the method; by comparison, it outperforms other methods in terms of imaging quality.
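The weighted-summation merge step can be sketched as follows. This is a minimal illustration assuming a *linear* camera response, noiseless pixels, and hypothetical exposure times; the paper instead recovers a nonlinear CRF from a high-order polynomial using exposure ratios:

```python
import numpy as np

# Merge several exposures of the same scene by a weighted average of
# per-exposure radiance estimates I / t, down-weighting pixels near
# the clipping limits of the sensor.
rng = np.random.default_rng(0)
radiance = rng.uniform(0.01, 10.0, size=1000)   # ground-truth scene radiance
times = [0.05, 0.5, 5.0]                        # hypothetical exposure times (s)

def weight(I):
    # triangle ("hat") weight: trust mid-range pixel values most
    return np.maximum(1e-4, 1.0 - np.abs(2.0 * I - 1.0))

num = np.zeros_like(radiance)
den = np.zeros_like(radiance)
for t in times:
    I = np.clip(radiance * t, 0.0, 1.0)         # sensor clips at 1.0
    w = weight(I) * (I < 1.0)                   # drop saturated pixels
    num += w * (I / t)                          # per-exposure radiance estimate
    den += w
hdr = num / np.maximum(den, 1e-12)
max_rel_err = np.max(np.abs(hdr - radiance) / radiance)
print(max_rel_err)
```

With a real camera, `I / t` would be replaced by the inverse CRF applied to the pixel value before dividing by the exposure time.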

  2. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity, the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance while allowing a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
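A per-pixel third-order polynomial correction of the kind tested here can be sketched as follows. The pixel responses are simulated (gain, offset, and a mild cubic term), not the study's SWIR camera data:

```python
import numpy as np

# Calibration: each pixel views uniform flat-field flux levels; a
# third-order polynomial mapping raw response -> reference flux is
# fitted per pixel, then applied to scene data.
rng = np.random.default_rng(1)
npix = 64
flux = np.linspace(0.1, 1.0, 8)                 # calibration flux levels

# Simulated nonlinear pixel responses
gain = rng.normal(1.0, 0.05, npix)
offs = rng.normal(0.0, 0.02, npix)
cub = rng.normal(0.05, 0.01, npix)
raw = gain[:, None] * flux + offs[:, None] + cub[:, None] * flux**3

# Fit a 3rd-order polynomial raw -> flux for every pixel
coeffs = [np.polyfit(raw[p], flux, 3) for p in range(npix)]

# Apply the correction at an unseen flux level
test_flux = 0.63
raw_test = gain * test_flux + offs + cub * test_flux**3
corrected = np.array([np.polyval(coeffs[p], raw_test[p]) for p in range(npix)])
residual_nu = np.std(corrected)                 # residual non-uniformity
print(residual_nu, np.std(raw_test))
```

The residual spread of the corrected frame is far smaller than that of the raw frame, which is the effect the study quantifies at different polynomial orders and coefficient precisions.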

  3. The sensitivity of catchment hypsometry and hypsometric properties to DEM resolution and polynomial order

    NASA Astrophysics Data System (ADS)

    Liffner, Joel W.; Hewa, Guna A.; Peel, Murray C.

    2018-05-01

    Derivation of the hypsometric curve of a catchment, and of properties relating to that curve, requires both topographical data (commonly in the form of a Digital Elevation Model, or DEM) and the estimation of a functional representation of the curve. An early investigation into catchment hypsometry concluded that 3rd-order polynomials sufficiently describe the hypsometric curve, without considering higher-order polynomials or the sensitivity of the hypsometric properties relating to the curve. Another study concluded that the hypsometric integral (HI) is robust against changes in DEM resolution, a conclusion drawn from a very limited sample size. Conclusions from these earlier studies have resulted in the adoption of methods deemed "sufficient" in subsequent studies, in addition to assumptions that the robustness of the HI extends to other hypsometric properties. This study investigates and demonstrates the sensitivity of hypsometric properties to DEM resolution, DEM type, and polynomial order by assessing differences in hypsometric properties derived from 417 catchments and sub-catchments within South Australia. The sensitivity of hypsometric properties across DEM types and polynomial orders is found to be significant, which suggests that careful consideration of the methods chosen to derive catchment hypsometric information is required.
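A hypsometric curve, its integral, and a 3rd-order polynomial representation can be computed from a DEM along these general lines. The elevation grid below is synthetic; the study's DEMs and catchments are not reproduced here:

```python
import numpy as np

# Hypsometric curve: relative area a/A above each relative height h/H;
# the hypsometric integral (HI) is the area under that curve.
rng = np.random.default_rng(2)
dem = rng.gamma(shape=2.0, scale=100.0, size=(200, 200))  # toy elevations

zmin, zmax = dem.min(), dem.max()
rel_h = np.linspace(0.0, 1.0, 101)                        # relative height h/H
thresholds = zmin + rel_h * (zmax - zmin)
rel_a = np.array([(dem >= t).mean() for t in thresholds]) # relative area a/A

# HI via the trapezoid rule: integral of a(h) dh over [0, 1]
hi = float(np.sum(0.5 * (rel_a[1:] + rel_a[:-1]) * np.diff(rel_h)))

# 3rd-order polynomial representation of the curve (h as a function of a)
coef = np.polyfit(rel_a, rel_h, 3)
fit_err = np.max(np.abs(np.polyval(coef, rel_a) - rel_h))
print(hi, fit_err)
```

The fitted-curve residual `fit_err` is one way to ask whether a 3rd-order polynomial really is "sufficient" for a given catchment, which is the question the study revisits.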

  4. Trajectory Optimization for Helicopter Unmanned Aerial Vehicles (UAVs)

    DTIC Science & Technology

    2010-06-01

    the Nth-order derivative of the Legendre polynomial L_N(t). Using this method, the range of integration is transformed universally to [-1, +1]...which is the interval for Legendre polynomials. Although the LGL interpolation points are not evenly spaced, they are symmetric about the midpoint 0...the vehicle's kinematic constraints are parameterized in terms of polynomials of sufficient order, (2) A collision-free criterion is developed and

  5. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

    AFRL-RW-EG-TR-2015-108 — Stochastic Estimation via Polynomial Chaos — Douglas V. Nance, Air Force Research...Report period: 20-04-2015 – 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic

  6. Degenerate r-Stirling Numbers and r-Bell Polynomials

    NASA Astrophysics Data System (ADS)

    Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.

    2018-01-01

    The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. In particular, we express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.
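For orientation, the classical (non-degenerate, r = 0) special case reduces to the ordinary Stirling numbers of the second kind and the Bell numbers, which satisfy the familiar recurrence S(n, k) = k·S(n-1, k) + S(n-1, k-1):

```python
# Classical Stirling numbers of the second kind and Bell numbers
# (the r = 0, non-degenerate special case of the paper's objects).
def stirling2(n, k):
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def bell(n):
    # Bell number = row sum of Stirling numbers of the second kind
    return sum(stirling2(n, k) for k in range(n + 1))

print([bell(n) for n in range(6)])  # [1, 1, 2, 5, 15, 52]
```

The degenerate r-versions studied in the paper replace these recurrences with λ-deformed analogues that recover the classical numbers as λ → 0.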

  7. Numeric model to predict the location of market demand and economic order quantity for retailers of supply chain

    NASA Astrophysics Data System (ADS)

    Fradinata, Edy; Marli Kesuma, Zurnila

    2018-05-01

    Polynomial and spline regressions are the numeric models used here to compare method performance, to model distance relationships between cement retailers in Banda Aceh, to predict the market area for retailers, and to compute the economic order quantity (EOQ). These numeric models differ in accuracy, measured by the mean square error (MSE). The distance relationships between retailers are used to identify the density of retailers in the town. The dataset is collected from the sales of cement retailers with a global positioning system (GPS). The characteristics of the sales dataset are plotted to obtain the goodness of fit of the quadratic, cubic, and fourth-order polynomial methods. On the real sales dataset, the polynomials model the relationship between the x-abscissa and the y-ordinate. This research yields several results: four models useful for predicting the market area of a retailer under competition, a comparison of the performance of the methods, the distances between retailers, and, finally, an inventory policy based on the economic order quantity. The high-density retail areas indicate a growing population with construction projects. The spline is better than the quadratic, cubic, and fourth-order polynomials at predicting the points, as indicated by a smaller MSE. The inventory policy uses a periodic-review policy.
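The economic order quantity referred to above is the classical Wilson formula, EOQ = sqrt(2DS/H). A minimal sketch with hypothetical figures (not the paper's retailer data):

```python
import math

# Classical EOQ: D annual demand (units), S fixed cost per order,
# H holding cost per unit per year. The figures are illustrative.
def eoq(demand, order_cost, holding_cost):
    return math.sqrt(2.0 * demand * order_cost / holding_cost)

q = eoq(demand=1200, order_cost=50.0, holding_cost=2.4)
print(round(q, 1))  # 223.6
```

Under a periodic-review policy, as in the paper, the EOQ serves as the baseline order-up-to quantity reviewed at fixed intervals.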

  8. Unstructured High-Order Galerkin-Temporal- Boundary Methods for the Klein-Gordon Equation with Non-Reflecting Boundary Conditions

    DTIC Science & Technology

    2010-06-01

    [Table-of-contents excerpt: C. Conservation of Momentum; 1. Gravity Effects] ...describe the high-order spectral element method used to discretize the problem in space (up to 16th order polynomials) in Chapter IV. Chapter V discusses...inertial frame. Body forces are those acting on the fluid volume that are proportional to the mass. The body forces considered here are gravity and

  9. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by applying the Riemann-Liouville type operator to them. Various properties, such as the explicit representation in terms of hypergeometric functions, differential equations, and recurrence relations, are derived.

  10. A Spectral Finite Element Approach to Modeling Soft Solids Excited with High-Frequency Harmonic Loads

    PubMed Central

    Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.

    2010-01-01

    An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when excitation frequency becomes high relative to stiffness (i.e. high wave number), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
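The Gauss-Lobatto-Legendre nodes mentioned above are the endpoints ±1 together with the roots of the derivative of the Nth Legendre polynomial. They can be computed directly with NumPy (an illustrative sketch, not the authors' code):

```python
import numpy as np
from numpy.polynomial import legendre as L

# GLL points of order N: {-1} ∪ {roots of P_N'(x)} ∪ {+1} — the
# non-uniformly distributed nodal points of the spectral elements.
def gll_points(N):
    dPN = L.Legendre.basis(N).deriv()   # derivative of the Nth Legendre polynomial
    interior = dPN.roots().real
    return np.concatenate(([-1.0], np.sort(interior), [1.0]))

pts = gll_points(4)
print(pts)  # interior roots are 0 and ±sqrt(3/7) ≈ ±0.6547
```

Clustering the nodes toward the element boundaries in this way is what suppresses Runge's phenomenon for the high-order Lagrangian bases discussed in the abstract.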

  11. Advances in Highly Constrained Multi-Phase Trajectory Generation using the General Pseudospectral Optimization Software (GPOPS)

    DTIC Science & Technology

    2013-08-01

    release; distribution unlimited. PA Number 412-TW-PA-13395. [Nomenclature excerpt: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange...cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R...] ...Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the "mesh" by

  12. Finding the Best-Fit Polynomial Approximation in Evaluating Drill Data: the Application of a Generalized Inverse Matrix / Poszukiwanie Najlepszej ZGODNOŚCI W PRZYBLIŻENIU Wielomianowym Wykorzystanej do Oceny Danych Z ODWIERTÓW - Zastosowanie UOGÓLNIONEJ Macierzy Odwrotnej

    NASA Astrophysics Data System (ADS)

    Karakus, Dogan

    2013-12-01

    In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and the statistical evaluation of random variables after the 1950s rendered polynomial approximations less important, theoretically the best surface passing through the random variables can still be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials, and finding the coefficients of such high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to select the polynomial degree giving the best regression between the solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data from a practical application: the calorie values from 83 drilling points at a coal site located in southwestern Turkey. The results are discussed in the context of this study.
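The generalized-inverse approach to polynomial fitting can be sketched with NumPy's Moore-Penrose pseudoinverse. Synthetic data stand in for the 83 drill-hole calorie values:

```python
import numpy as np

# Build the Vandermonde (design) matrix for a polynomial of a given
# degree and solve for the coefficients with the generalized inverse.
rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 83)
y = 2.0 - 1.5 * x + 0.7 * x**2 + rng.normal(0, 0.01, x.size)

def pinv_polyfit(x, y, degree):
    A = np.vander(x, degree + 1, increasing=True)  # columns [1, x, x^2, ...]
    return np.linalg.pinv(A) @ y                   # Moore-Penrose pseudoinverse

coef = pinv_polyfit(x, y, 2)
print(coef)  # ≈ [2.0, -1.5, 0.7]
```

In the study's setting one would repeat this for several degrees and keep the degree whose solution best regresses against the held-out observational data.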

  13. High-order regularization in lattice-Boltzmann equations

    NASA Astrophysics Data System (ADS)

    Mattila, Keijo K.; Philippi, Paulo C.; Hegele, Luiz A.

    2017-04-01

    A lattice-Boltzmann equation (LBE) is the discrete counterpart of a continuous kinetic model. It can be derived using a Hermite polynomial expansion for the velocity distribution function. Since LBEs are characterized by discrete, finite representations of the microscopic velocity space, the expansion must be truncated and the appropriate order of truncation depends on the hydrodynamic problem under investigation. Here we consider a particular truncation where the non-equilibrium distribution is expanded on a par with the equilibrium distribution, except that the diffusive parts of high-order non-equilibrium moments are filtered, i.e., only the corresponding advective parts are retained after a given rank. The decomposition of moments into diffusive and advective parts is based directly on analytical relations between Hermite polynomial tensors. The resulting, refined regularization procedure leads to recurrence relations where high-order non-equilibrium moments are expressed in terms of low-order ones. The procedure is appealing in the sense that stability can be enhanced without local variation of transport parameters, like viscosity, or without tuning the simulation parameters based on embedded optimization steps. The improved stability properties are here demonstrated using the perturbed double periodic shear layer flow and the Sod shock tube problem as benchmark cases.

  14. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  16. Robust control with structured perturbations

    NASA Technical Reports Server (NTRS)

    Keel, Leehyun

    1988-01-01

    Two important problems in the area of control systems design and analysis are discussed. The first is robust stability using the characteristic polynomial, treated first in characteristic-polynomial coefficient space with respect to perturbations in the coefficients, and then for a control system containing perturbed parameters in the transfer function description of the plant. In coefficient space, a simple expression is first given for the l(sup 2) stability margin for both monic and non-monic cases. Following this, the method is extended to reveal a much larger stability region. This result has been extended to parameter space, so that one can determine the stability margin, in terms of ranges of parameter variations, of the closed-loop system when the nominal stabilizing controller is given. The stability margin can be enlarged by the choice of a better stabilizing controller. The second problem concerns lower-order stabilization, motivated as follows. Even though a wide range of stabilizing controller design methodologies is available in both the state-space and transfer-function domains, all of these methods produce unnecessarily high-order controllers. In practice, stabilization is only one of many requirements to be satisfied; if the order of a stabilizing controller is excessively high, one can normally expect an even higher-order controller on completion of the design, after inclusion of dynamic response requirements and the like. It is therefore reasonable to find the lowest possible order stabilizing controller first and then adjust it to meet additional requirements. An algorithm for designing a lower-order stabilizing controller is given. The algorithm does not necessarily produce the minimum-order controller; however, it is theoretically logical, and simulation results show that it works in general.

  17. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
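The conditioning benefit of a discretely orthogonal (Forsythe-type) basis over raw monomials can be illustrated as follows. This is a generic three-term-recurrence sketch over sample points, not the authors' formulation:

```python
import numpy as np

# Compare the condition number of a raw monomial (Vandermonde) design
# matrix against a basis generated to be orthogonal over the sample
# points via the standard three-term (Stieltjes/Forsythe-type) recurrence.
x = np.linspace(0.0, 1.0, 200)
deg = 12

# Monomial design matrix
V = np.vander(x, deg + 1, increasing=True)

# Discretely orthogonal basis: p_{k+1} = (x - a_k) p_k - b_k p_{k-1}
P = [np.ones_like(x)]
a = np.dot(x * P[0], P[0]) / np.dot(P[0], P[0])
P.append(x - a)
for k in range(1, deg):
    a = np.dot(x * P[k], P[k]) / np.dot(P[k], P[k])
    b = np.dot(P[k], P[k]) / np.dot(P[k - 1], P[k - 1])
    P.append((x - a) * P[k] - b * P[k - 1])
Q = np.column_stack([p / np.linalg.norm(p) for p in P])

print(np.linalg.cond(V), np.linalg.cond(Q))  # e.g. ~1e8 vs ~1
```

The normal equations assembled from `Q` are nearly diagonal, which is the decoupling effect the abstract attributes to the Forsythe polynomials.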

  18. A quadratic regression modelling on paddy production in the area of Perlis

    NASA Astrophysics Data System (ADS)

    Goh, Aizat Hanis Annas; Ali, Zalila; Nor, Norlida Mohd; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2017-08-01

    Polynomial regression models are useful when the relationship between a response variable and predictor variables is curvilinear. Polynomial regression fits the nonlinear relationship in a least-squares linear regression framework by expanding the predictor variables into a kth-order polynomial. The polynomial order determines the number of inflexions of the fitted curvilinear line: a second-order polynomial forms a quadratic expression (parabolic curve) with either a single maximum or minimum, while a third-order polynomial forms a cubic expression with both a relative maximum and a relative minimum. This study used paddy data from the area of Perlis to model paddy production based on paddy cultivation and environmental characteristics. The results indicated that a quadratic regression model best fits the data, and that paddy production is affected by urea fertilizer application and by the interaction between the amount of average rainfall and the percentage of area affected by pests and disease. Urea fertilizer application has a quadratic effect in the model: as the number of days of urea fertilizer application increases, paddy production is expected to decrease until it reaches a minimum value, and then to increase at higher numbers of days of application. The decrease in paddy production with an increase in rainfall is greater the higher the percentage of area affected by pests and disease.
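A quadratic (2nd-order polynomial) regression with a single turning point, as described for urea application, can be sketched with synthetic data (illustrative values, not the Perlis paddy records):

```python
import numpy as np

# Fit y = b0 + b1*x + b2*x^2 by least squares and locate the turning
# point at x* = -b1 / (2*b2) — a minimum when b2 > 0.
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 120)                     # e.g. days of urea application
y = 5.0 - 2.0 * x + 0.25 * x**2 + rng.normal(0, 0.05, x.size)

b2, b1, b0 = np.polyfit(x, y, 2)                # highest power first
x_turn = -b1 / (2.0 * b2)
print(round(x_turn, 2))  # true minimum of the generating curve is at x = 4.0
```

A positive fitted `b2` confirms the U-shape (decline to a minimum, then recovery) that the study reports for urea application.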

  19. A solver for General Unilateral Polynomial Matrix Equation with Second-Order Matrices Over Prime Finite Fields

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-03-01

    The paper considers the problem of finding solvents for arbitrary unilateral polynomial matrix equations with second-order matrices over prime finite fields from a practical point of view: we implement a solver for this problem. The solver's algorithm has two steps: the first finds solvents having Jordan normal form (JNF); the second finds solvents among the remaining matrices. The first step reduces to finding roots of ordinary polynomials over finite fields; the second is essentially exhaustive search. The first step's algorithms make essential use of polynomial matrix theory. We estimate the practical duration of computations using our software implementation (showing, for example, that one cannot construct a unilateral matrix polynomial over a finite field having an arbitrary predefined number of solvents) and answer some questions of theoretical value.
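The exhaustive-search second step can be sketched directly. The equation X² = A with A = I over GF(3) is an illustrative choice, not an example from the paper:

```python
import itertools
import numpy as np

# Enumerate all 2x2 matrices over GF(p) and keep those satisfying the
# unilateral equation X^2 = A (mod p). For p = 3 this is only 81 candidates.
p = 3
A = np.eye(2, dtype=int)

solvents = []
for entries in itertools.product(range(p), repeat=4):
    X = np.array(entries, dtype=int).reshape(2, 2)
    if np.array_equal((X @ X) % p, A % p):
        solvents.append(X)

print(len(solvents))  # includes X = I and X = -I = 2I (mod 3)
```

The paper's first step avoids most of this search by constructing Jordan-form candidates from polynomial roots; brute force remains only for the non-diagonalizable leftovers.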

  20. Polynomial Fitting of DT-MRI Fiber Tracts Allows Accurate Estimation of Muscle Architectural Parameters

    PubMed Central

    Damon, Bruce M.; Heemskerk, Anneriet M.; Ding, Zhaohua

    2012-01-01

    Fiber curvature is a functionally significant muscle structural property, but its estimation from diffusion-tensor MRI fiber tracking data may be confounded by noise. The purpose of this study was to investigate the use of polynomial fitting of fiber tracts for improving the accuracy and precision of fiber curvature (κ) measurements. Simulated image datasets were created in order to provide data with known values for κ and pennation angle (θ). Simulations were designed to test the effects of increasing inherent fiber curvature (3.8, 7.9, 11.8, and 15.3 m−1), signal-to-noise ratio (50, 75, 100, and 150), and voxel geometry (13.8 and 27.0 mm3 voxel volume with isotropic resolution; 13.5 mm3 volume with an aspect ratio of 4.0) on κ and θ measurements. In the originally reconstructed tracts, θ was estimated accurately under most curvature and all imaging conditions studied; however, the estimates of κ were imprecise and inaccurate. Fitting the tracts to 2nd order polynomial functions provided accurate and precise estimates of κ for all conditions except very high curvature (κ=15.3 m−1), while preserving the accuracy of the θ estimates. Similarly, polynomial fitting of in vivo fiber tracking data reduced the κ values of fitted tracts from those of unfitted tracts and did not change the θ values. Polynomial fitting of fiber tracts allows accurate estimation of physiologically reasonable values of κ, while preserving the accuracy of θ estimation. PMID:22503094
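The idea of reading curvature off a fitted 2nd-order polynomial can be sketched in 2D, using κ = |y''| / (1 + y'²)^{3/2}. The noisy points below are synthetic, not DT-MRI fiber tracts:

```python
import numpy as np

# Fit noisy points along a curved "fiber" with a 2nd-order polynomial,
# then evaluate the curvature analytically from the fitted coefficients.
rng = np.random.default_rng(5)
x = np.linspace(-20.0, 20.0, 60)                # mm along the tract
true_k = 0.01                                   # 0.01 mm^-1 = 10 m^-1
y = 0.5 * true_k * x**2 + rng.normal(0, 0.05, x.size)

c2, c1, c0 = np.polyfit(x, y, 2)
yp = c1                                         # y'(0)
ypp = 2.0 * c2                                  # y''(0)
kappa = abs(ypp) / (1.0 + yp**2) ** 1.5
print(kappa)  # ≈ 0.01 mm^-1, i.e. 10 m^-1
```

Fitting before differentiating is the point of the study: differentiating the raw noisy points directly would amplify the noise and bias κ upward.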

  1. Impacts of Sigma Coordinates on the Euler and Navier-Stokes Equations using Continuous Galerkin Methods

    DTIC Science & Technology

    2009-03-01

    the 1-D local basis functions. The 1-D Lagrange polynomial local basis function, using Legendre-Gauss-Lobatto interpolation points, was defined by...cases were run using 10th order polynomials, with contours from -0.05 to 0.525 K with an interval of 0.025 K...after 700 s for resolutions: (a) 20, (b) 10, and (c) 5 m. All cases were run using 10th order polynomials, with contours from -0.05 to 0.525 K

  2. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    PubMed

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

    Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for both additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
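The Legendre covariables used in such random regression models are evaluated on days in milk rescaled to [-1, 1]. A minimal sketch, using plain (unnormalized) Legendre polynomials and an assumed lactation range of 5-305 days:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Rescale days in milk to [-1, 1] and evaluate the Legendre basis;
# third order gives 4 covariable columns (P0..P3) per record.
dim = np.arange(5, 306)                          # days in milk
t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0
Phi = L.legvander(t, 3)                          # columns: P0(t)..P3(t)
print(Phi.shape)  # (301, 4)
```

Each animal's additive genetic and permanent environmental curves are then linear combinations of these columns with random regression coefficients.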

  3. An extended UTD analysis for the scattering and diffraction from cubic polynomial strips

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    Spline and polynomial type surfaces are commonly used in high frequency modeling of complex structures such as aircraft, ships, reflectors, etc. It is therefore of interest to develop an efficient and accurate solution to describe the scattered fields from such surfaces. An extended Uniform Geometrical Theory of Diffraction (UTD) solution for the scattering and diffraction from perfectly conducting cubic polynomial strips is derived and involves the incomplete Airy integrals as canonical functions. This new solution is universal in nature and can be used to effectively describe the scattered fields from flat, strictly concave or convex, and concave convex boundaries containing edges. The classic UTD solution fails to describe the more complicated field behavior associated with higher order phase catastrophes and therefore a new set of uniform reflection and first-order edge diffraction coefficients is derived. Also, an additional diffraction coefficient associated with a zero-curvature (inflection) point is presented. Higher order effects such as double edge diffraction, creeping waves, and whispering gallery modes are not examined. The extended UTD solution is independent of the scatterer size and also provides useful physical insight into the various scattering and diffraction processes. Its accuracy is confirmed via comparison with some reference moment method results.

  4. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presents a bicubic uniform B-spline wavefront fitting technique for deriving the analytical expression of the object wavefront used in computer-generated holograms (CGHs). In many cases, to reduce the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, the analytical expression for the object wavefront must be fitted. Zernike polynomials are well suited to fitting the wavefront of centrosymmetric optical systems, but not of axisymmetric ones. Although a high-degree polynomial fitting method achieves higher fitting precision at all fitting nodes, its greatest shortcoming is that any departure from the fitting nodes can result in large fitting error, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time when coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, the bicubic uniform B-spline wavefront is described as a product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted by bicubic uniform B-splines as well as by high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is the more competitive method for fitting the analytical expression of the object wavefront used in off-axis CGHs, owing to its higher fitting precision and C2 continuity.
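    The pulsation of high-degree polynomial fits between nodes is easy to reproduce. The sketch below (assuming SciPy is available) uses the classic Runge function as a stand-in for a wavefront section; it is an illustration of the phenomenon, not the paper's MATLAB procedure.

```python
import numpy as np
from scipy.interpolate import splrep, splev

# Cubic B-spline vs. high-degree polynomial fit of the Runge function, whose
# equispaced high-degree polynomial interpolant oscillates between the nodes.
f = lambda x: 1.0 / (1.0 + 25.0 * x**2)
nodes = np.linspace(-1.0, 1.0, 16)
dense = np.linspace(-1.0, 1.0, 1001)

tck = splrep(nodes, f(nodes), k=3, s=0)                       # cubic B-spline interpolant
coef = np.polynomial.polynomial.polyfit(nodes, f(nodes), 15)  # degree-15 polynomial

err_spline = np.max(np.abs(splev(dense, tck) - f(dense)))
err_poly = np.max(np.abs(np.polynomial.polynomial.polyval(dense, coef) - f(dense)))
print(err_spline, err_poly)   # the spline error is far smaller between nodes
```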

  5. A recursive algorithm for Zernike polynomials

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    The analysis of a function defined on a rotationally symmetric system, with either a circular or an annular pupil, is discussed. To analyze such systems numerically, it is typical to expand the given function in terms of a class of orthogonal polynomials. Because of their particular properties, the Zernike polynomials are especially suited for numerical calculations. A recursive algorithm is developed that can be used to generate the Zernike polynomials up to a given order. The algorithm is recursively defined over J, where R(J,N) is the Zernike polynomial of degree N obtained by orthogonalizing the sequence R(J), R(J+2), ..., R(J+2N) over (epsilon, 1). The terms in the preceding row - the (J-1) row - up to the N+1 term are needed for generating the (J,N)th term. Thus, the algorithm generates an upper left-triangular table. The algorithm was implemented, together with the necessary support programs.
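    For cross-checking what such a recurrence generates, the radial Zernike polynomials also have an explicit sum formula. The sketch below uses that direct formula (not the paper's orthogonalization recurrence) and verifies the defining orthogonality over the unit disk numerically.

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, r):
    """Radial Zernike polynomial R_n^m(r) from the explicit sum formula;
    requires n >= m >= 0 with n - m even."""
    assert n >= m >= 0 and (n - m) % 2 == 0
    r = np.asarray(r, dtype=float)
    out = np.zeros_like(r)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k)))
        out = out + c * r ** (n - 2 * k)
    return out

# Spot checks: R_n^m(1) = 1, and R_4^0 is orthogonal to R_2^0 w.r.t. weight r.
r = np.linspace(0.0, 1.0, 200001)
dr = 1.0 / 200000
cross = np.sum(zernike_radial(4, 0, r) * zernike_radial(2, 0, r) * r) * dr
print(float(zernike_radial(6, 2, 1.0)), float(cross))   # 1 and ~0
```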

  6. Recurrence approach and higher order polynomial algebras for superintegrable monopole systems

    NASA Astrophysics Data System (ADS)

    Hoque, Md Fazlul; Marquette, Ian; Zhang, Yao-Zhong

    2018-05-01

    We revisit the MIC-harmonic oscillator in flat space with monopole interaction and derive the polynomial algebra satisfied by the integrals of motion and its energy spectrum using the ad hoc recurrence approach. We introduce a superintegrable monopole system in a generalized Taub-Newman-Unti-Tamburino (NUT) space. The Schrödinger equation of this model is solved in spherical coordinates in the framework of Stäckel transformation. It is shown that wave functions of the quantum system can be expressed in terms of the product of Laguerre and Jacobi polynomials. We construct ladder and shift operators based on the corresponding wave functions and obtain the recurrence formulas. By applying these recurrence relations, we construct higher order algebraically independent integrals of motion. We show that the integrals form a polynomial algebra. We construct the structure functions of the polynomial algebra and obtain the degenerate energy spectra of the model.

  7. Efficient algorithms for construction of recurrence relations for the expansion and connection coefficients in series of Al-Salam-Carlitz I polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2005-12-01

    Two formulae expressing explicitly the derivatives and moments of Al-Salam-Carlitz I polynomials of any degree and for any order in terms of the Al-Salam-Carlitz I polynomials themselves are proved. Two other formulae for the expansion coefficients of general-order derivatives D_q^p f(x), and for the moments x^l D_q^p f(x), of an arbitrary function f(x) in terms of its original expansion coefficients are also obtained. Application of these formulae to solving q-difference equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) for building and solving recursively the connection coefficients between Al-Salam-Carlitz I polynomials and any system of basic hypergeometric orthogonal polynomials belonging to the q-Hahn class is described.

  8. Finding higher order Darboux polynomials for a family of rational first order ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Avellar, J.; Claudino, A. L. G. C.; Duarte, L. G. S.; da Mota, L. A. C. P.

    2015-10-01

    For the Darbouxian methods we study here, aimed at solving first order rational ordinary differential equations (1ODEs), the computationally most costly step is finding the needed Darboux polynomials. This can be so severe that it renders the whole approach impractical. Here we introduce a simple heuristic to speed up this process for a class of 1ODEs.

  9. Exploring the use of random regression models with Legendre polynomials to analyze measures of volume of ejaculate in Holstein bulls.

    PubMed

    Carabaño, M J; Díaz, C; Ugarte, C; Serrano, M

    2007-02-01

    Artificial insemination centers routinely collect records of quantity and quality of semen of bulls throughout the animals' productive period. The goal of this paper was to explore the use of random regression models with orthogonal polynomials to analyze repeated measures of semen production of Spanish Holstein bulls. A total of 8,773 records of volume of first ejaculate (VFE) collected between 12 and 30 mo of age from 213 Spanish Holstein bulls was analyzed under alternative random regression models. Legendre polynomial functions of increasing order (0 to 6) were fitted to the average trajectory, additive genetic, and permanent environmental effects. Age at collection and days in production were used as time variables. Heterogeneous and homogeneous residual variances were alternatively assumed. Analyses were carried out within a Bayesian framework. The logarithm of the marginal density and the cross-validation predictive ability of the data were used as model comparison criteria. Based on both criteria, age at collection as a time variable and heterogeneous residual models are recommended to analyze changes of VFE over time. Both criteria indicated that fitting random curves for genetic and permanent environmental components, as well as for the average trajectory, improved the quality of the models. Furthermore, models with a higher order polynomial for the permanent environmental (5 to 6) than for the genetic components (4 to 5) and the average trajectory (2 to 3) tended to perform best. High-order polynomials were needed to accommodate the highly oscillating nature of the phenotypic values. Heritability and repeatability estimates, disregarding the extremes of the studied period, ranged from 0.15 to 0.35 and from 0.20 to 0.50, respectively, indicating that selection for VFE may be effective at any stage. Small differences among models were observed.
Apart from the extremes, estimated correlations between ages decreased steadily from 0.9 and 0.4 for measures 1 mo apart to 0.4 and 0.2 for most distant measures for additive genetic and phenotypic components, respectively. Further investigation to account for environmental factors that may be responsible for the oscillating observations of VFE is needed.

  10. The spectral cell method in nonlinear earthquake modeling

    NASA Astrophysics Data System (ADS)

    Giraldo, Daniel; Restrepo, Doriam

    2017-12-01

    This study examines the applicability of the spectral cell method (SCM) to compute the nonlinear earthquake response of complex basins. SCM combines fictitious-domain concepts with the spectral version of the finite element method to solve the wave equations in heterogeneous geophysical domains. Nonlinear behavior is considered by implementing the Mohr-Coulomb and Drucker-Prager yielding criteria. We illustrate the performance of SCM with numerical examples of nonlinear basins exhibiting physically and computationally challenging conditions. The numerical experiments are benchmarked with results from overkill solutions, and with MIDAS GTS NX, finite element software for geotechnical applications. Our findings show good agreement between the two sets of results. Traditional spectral element implementations allow points per wavelength as low as PPW = 4.5 for high-order polynomials. Our results also show that in the presence of nonlinearity, high-order polynomials (p ≥ 3) require mesh resolutions of PPW ≥ 10 to ensure displacement errors below 10%.

  11. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
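    A generic way to obtain such a discrete orthonormal basis is the Stieltjes three-term recurrence on the discrete inner product; on uniformly spaced points this produces the (normalized) discrete Tchebycheff polynomials. The sketch below illustrates the idea and is not the programs the abstract mentions.

```python
import numpy as np

def discrete_orthonormal_basis(x, order):
    """Orthonormal polynomial basis on the discrete point set x, built by
    the Stieltjes (Lanczos-style) three-term recurrence."""
    x = np.asarray(x, dtype=float)
    Q = np.empty((order + 1, x.size))
    q = np.ones_like(x)
    Q[0] = q / np.linalg.norm(q)
    q_prev = np.zeros_like(x)
    beta = 0.0
    for k in range(order):
        alpha = np.dot(x * Q[k], Q[k])
        q = (x - alpha) * Q[k] - beta * q_prev   # multiply by x, orthogonalize
        beta = np.linalg.norm(q)
        q_prev = Q[k]
        Q[k + 1] = q / beta
    return Q

def lsq_fit(x, y, order):
    Q = discrete_orthonormal_basis(x, order)
    coeffs = Q @ y                # plain projections, no normal equations
    return coeffs, coeffs @ Q     # coefficients and fitted values

x = np.arange(21, dtype=float)    # uniformly spaced data
y = 2.0 + 0.5 * x - 0.03 * x**2
coeffs, yfit = lsq_fit(x, y, 2)
print(np.max(np.abs(yfit - y)))   # an exact quadratic is recovered
```

    Because the basis is orthonormal, the least-squares coefficients are simple projections Q·y; avoiding the ill-conditioned normal equations is the roundoff advantage the abstract notes.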

  12. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C0-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C0-errors of size 10^{-10} to 10^{-14}, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering, we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example, rigorous lower bounds for the topological entropy of the Hénon map are computed, which, to the best knowledge of the authors, are the largest such estimates published so far.

  13. Estimation of genetic parameters related to eggshell strength using random regression models.

    PubMed

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

    This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRM suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.

  14. Least squares polynomial chaos expansion: A review of sampling strategies

    NASA Astrophysics Data System (ADS)

    Hadigol, Mohammad; Doostan, Alireza

    2018-04-01

    As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for the least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures are discussed. We also propose a hybrid sampling method, dubbed alphabetic-coherence-optimal, that employs the so-called alphabetic optimality criteria used in the context of ODE in conjunction with coherence-optimal samples. A comparison between the empirical performance of the selected sampling methods applied to three numerical examples, including high-order PCEs, high-dimensional problems, and low oversampling ratios, is presented to provide a road map for practitioners seeking the most suitable sampling technique for a problem at hand. We observed that the alphabetic-coherence-optimal technique outperforms other sampling methods, especially when high-order ODE are employed and/or the oversampling ratio is low.
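    The baseline least-squares PCE that these sampling strategies feed into can be illustrated in a few lines. The model f(ξ) = exp(0.3ξ), sample size, and seed below are illustrative choices, not the paper's test cases.

```python
import numpy as np
from math import factorial
from numpy.polynomial import hermite_e  # probabilists' Hermite He_k, orthogonal under N(0,1)

# Least-squares PCE with plain Monte Carlo sampling (the baseline strategy the
# review compares against Latin hypercube, quadrature, coherence-optimal, etc.).
rng = np.random.default_rng(0)
a, p, n = 0.3, 6, 20000
xi = rng.standard_normal(n)                     # MC samples of the Gaussian input
Psi = np.stack([hermite_e.hermeval(xi, np.eye(p + 1)[k]) for k in range(p + 1)], axis=1)
c, *_ = np.linalg.lstsq(Psi, np.exp(a * xi), rcond=None)

# For this f the coefficients are known in closed form: c_k = a^k e^{a^2/2} / k!
exact = np.array([a**k * np.exp(a**2 / 2) / factorial(k) for k in range(p + 1)])
print(np.max(np.abs(c - exact)))                # small Monte Carlo sampling error
```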

  15. Explicit bounds for the positive root of classes of polynomials with applications

    NASA Astrophysics Data System (ADS)

    Herzberger, Jürgen

    2003-03-01

    We consider a certain type of polynomial equations for which there exists, according to Descartes' rule of signs, only one simple positive root. These equations occur in numerical analysis when calculating or estimating the R-order or Q-order of convergence of certain iterative processes with an error-recursion of special form. On the other hand, these polynomial equations are very common as defining equations for the effective rate of return for certain cashflows like bonds or annuities in finance. The effective rate of interest i* for those cashflows is i* = q* - 1, where q* is the unique positive root of such a polynomial. We construct bounds for i* for a special problem concerning an ordinary simple annuity, obtained by changing the conditions of such an annuity with given data according to the German rule (Preisangabeverordnung, PAngV for short). Moreover, we consider a number of results for such polynomial roots in numerical analysis, showing that by a simple variable transformation we can derive several formulas out of earlier results. The same is possible in finance in order to generalize results to more complicated cashflows.
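    As a concrete illustration, the annuity polynomial P q^n - R(q^(n-1) + ... + q + 1) has exactly one sign change in its coefficients, so Descartes' rule guarantees a unique positive root q* = 1 + i*. The cashflow figures below are hypothetical, chosen only to make the sketch runnable.

```python
def annuity_poly(q, P=1000.0, R=120.0, n=12):
    """P*q^n - R*(q^(n-1) + ... + q + 1): one sign change in the
    coefficients (+P, -R, ..., -R), hence a single positive root."""
    return P * q**n - R * sum(q**k for k in range(n))

def positive_root(f, lo=1.0, hi=2.0, tol=1e-12):
    """Bisection; uniqueness of the positive root makes any
    sign-changing bracket isolate it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

q_star = positive_root(annuity_poly)
print(q_star - 1.0)   # effective rate i* per period, here ~6%
```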

  16. Near constant-time optimal piecewise LDR to HDR inverse tone mapping

    NASA Astrophysics Data System (ADS)

    Chen, Qian; Su, Guan-Ming; Yin, Peng

    2015-02-01

    In backward compatible HDR image/video compression, a general approach is to reconstruct HDR from compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order or a 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least squares solution, each entry in the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search to find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while using 60 times less computation time than the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
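    The look-up-table trick can be sketched as follows: with prefix sums of x^k and x^k·y, the 3x3 normal equations of each segment's quadratic fit are assembled in O(1) per candidate pivot. The LDR-to-HDR curve below is synthetic and noiseless (kink at codeword 128), not the paper's data, and codewords are normalized to [0, 1] for conditioning.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 256)                        # normalized LDR codewords
y = np.where(x < 0.5, x**2, 0.25 + 2.0 * (x - 0.5))   # quadratic piece, then linear

S = np.cumsum(np.stack([x**k for k in range(5)]), axis=1)       # moment tables
T = np.cumsum(np.stack([x**k * y for k in range(3)]), axis=1)   # cross-moment tables
Y2 = np.cumsum(y * y)

def seg_sse(lo, hi):
    """Residual SSE of the best quadratic on samples lo..hi, from tables only."""
    s = S[:, hi] - (S[:, lo - 1] if lo > 0 else 0.0)
    t = T[:, hi] - (T[:, lo - 1] if lo > 0 else 0.0)
    yy = Y2[hi] - (Y2[lo - 1] if lo > 0 else 0.0)
    A = np.array([[s[i + j] for j in range(3)] for i in range(3)])
    c = np.linalg.solve(A, t)
    return yy - c @ t                 # = y'y - c'X'y for least squares

best = min(range(8, 249), key=lambda p: seg_sse(0, p - 1) + seg_sse(p, 255))
print(best)                           # recovers the pivot at codeword 128
```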

  17. Pulse transmission transmitter including a higher order time derivative filter

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-09-23

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission transmitter includes: a clock; a pseudorandom polynomial generator coupled to the clock, the pseudorandom polynomial generator having a polynomial load input; an exclusive-OR gate coupled to the pseudorandom polynomial generator, the exclusive-OR gate having a serial data input; a programmable delay circuit coupled to both the clock and the exclusive-OR gate; a pulse generator coupled to the programmable delay circuit; and a higher order time derivative filter coupled to the pulse generator. The systems and methods significantly reduce lower-frequency emissions from pulse transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users and also simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.

  18. Data driven discrete-time parsimonious identification of a nonlinear state-space model for a weakly nonlinear system with short data record

    NASA Astrophysics Data System (ADS)

    Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan

    2018-05-01

    Many real-world systems exhibit quasi-linear or weakly nonlinear behavior during normal operation, and a hard saturation effect for high peaks of the input signal. In this paper, a methodology to identify a parsimonious discrete-time nonlinear state-space (NLSS) model for a nonlinear dynamical system from a relatively short data record is proposed. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain a parsimonious decoupled representation of the set of multivariate real polynomials estimated during the identification of the NLSS model. Finally, the experimental verification of the model structure is performed on the cascaded water-benchmark identification problem.

  19. A point-value enhanced finite volume method based on approximate delta functions

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that has the same integral properties as the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements reduces the number of degrees of freedom compared to other compact methods of the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.
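    A minimal version of the ADF construction is a moment-matching linear solve: choose polynomial coefficients so that pairing with every monomial up to degree N reproduces evaluation at the delta location. The interval [-1, 1] and delta location x = 0 below are illustrative choices, not taken from the paper.

```python
import numpy as np

# Build d(x) = sum_k a_k x^k on [-1, 1] with ∫ d(x) x^j dx = (x^j at 0) for
# j = 0..N.  Against any polynomial integrand of degree <= N, d then acts
# exactly like the Dirac delta at x = 0.
N = 5
M = np.array([[(1.0 + (-1.0) ** (j + k)) / (j + k + 1) for k in range(N + 1)]
              for j in range(N + 1)])        # entries ∫_{-1}^{1} x^{j+k} dx
rhs = np.array([1.0] + [0.0] * N)            # monomials x^j evaluated at 0
a = np.linalg.solve(M, rhs)

# Check: pairing with a random degree-N polynomial p reproduces p(0).
rng = np.random.default_rng(0)
p = rng.normal(size=N + 1)                   # coefficients of p, low to high
pair = sum(a[k] * p[j] * (1.0 + (-1.0) ** (j + k)) / (j + k + 1)
           for k in range(N + 1) for j in range(N + 1))
print(abs(pair - p[0]))                      # ~ machine precision
```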

  20. Design-order, non-conformal low-Mach fluid algorithms using a hybrid CVFEM/DG approach

    NASA Astrophysics Data System (ADS)

    Domino, Stefan P.

    2018-04-01

    A hybrid, design-order sliding mesh algorithm, which uses a control volume finite element method (CVFEM), in conjunction with a discontinuous Galerkin (DG) approach at non-conformal interfaces, is outlined in the context of a low-Mach fluid dynamics equation set. This novel hybrid DG approach is also demonstrated to be compatible with a classic edge-based vertex centered (EBVC) scheme. For the CVFEM, element polynomial (P) promotion is used to extend the low-order P = 1 CVFEM method to higher-order, i.e., P = 2. An equal-order low-Mach pressure-stabilized methodology, with emphasis on the non-conformal interface boundary condition, is presented. A fully implicit matrix solver approach that accounts for the full stencil connectivity across the non-conformal interface is employed. A complete suite of formal verification studies using the method of manufactured solutions (MMS) is performed to verify the order of accuracy of the underlying methodology. The chosen suite of analytical verification cases ranges from a simple steady diffusion system to a traveling viscous vortex across mixed-order non-conformal interfaces. Results from all verification studies demonstrate either second- or third-order spatial accuracy and, for transient solutions, second-order temporal accuracy. Significant accuracy gains in manufactured solution error norms are noted even with modest promotion of the underlying polynomial order. The paper also demonstrates the CVFEM/DG methodology on two production-like simulation cases that include an inner block subjected to solid rotation, i.e., each of the simulations includes a sliding mesh, non-conformal interface. The first production case presented is a turbulent flow past a high-rate-of-rotation cube (Re = 4000; RPM = 3600) on like and mixed-order polynomial interfaces. The final simulation case is a full-scale Vestas V27 225 kW wind turbine (tower and nacelle omitted) in which a hybrid-topology, low-order mesh is used.
Both production simulations provide confidence in the underlying capability and demonstrate the viability of this hybrid method for deployment towards high-fidelity wind energy validation and analysis.

  1. Existence of entire solutions of some non-linear differential-difference equations.

    PubMed

    Chen, Minfeng; Gao, Zongsheng; Du, Yunfei

    2017-01-01

    In this paper, we investigate the admissible entire solutions of finite order of the differential-difference equations [Formula: see text] and [Formula: see text], where [Formula: see text], [Formula: see text] are two non-zero polynomials, [Formula: see text] is a polynomial and [Formula: see text]. In addition, we investigate the non-existence of entire solutions of finite order of the differential-difference equation [Formula: see text], where [Formula: see text], [Formula: see text] are two non-constant polynomials, [Formula: see text], m, n are positive integers and satisfy [Formula: see text] except for [Formula: see text], [Formula: see text].

  2. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the best discovered network weights by a specially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on processing benchmark time series.

  3. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated by using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial, and smoothing spline. The LOOCV pricing error results show that interpolation using a fourth-order polynomial provides the best fit to option prices, with the lowest pricing error.
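    The LOOCV selection rule itself is simple to state in code: refit with each observation held out and score its prediction. The quadratic "smile" data below are synthetic stand-ins, not the study's option prices, and the normalized moneyness axis is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 15)                   # normalized moneyness (assumed)
iv = 0.20 + 0.08 * x**2 + rng.normal(0.0, 0.002, x.size)

def loocv_error(x, y, deg):
    """Mean squared leave-one-out prediction error of a degree-deg polynomial fit."""
    errs = []
    for i in range(x.size):
        mask = np.arange(x.size) != i
        coef = np.polynomial.polynomial.polyfit(x[mask], y[mask], deg)
        errs.append((np.polynomial.polynomial.polyval(x[i], coef) - y[i]) ** 2)
    return float(np.mean(errs))

for deg in (2, 4, 8):
    print(deg, loocv_error(x, iv, deg))          # overfitting shows up at high degree
```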

  4. Model-based estimates of long-term persistence of induced HPV antibodies: a flexible subject-specific approach.

    PubMed

    Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián

    2013-01-01

    In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points where the individual titers are below the threshold value for protection. This article focuses on HPV-16/18, and uses a so-called fractional-polynomial model to this effect, derived in a data-driven fashion. Initially, model selection was done from among the second- and first-order fractional polynomials on the one hand and from the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that it fits best over the long run, and hence, caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
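    A stripped-down version of first-order fractional-polynomial selection picks the power p from the conventional set {-2, -1, -0.5, 0, 0.5, 1, 2, 3} (with 0 meaning log t) by residual error. The decay curve below is synthetic, not the HPV-16/18 titer data, and the selection is by SSE rather than the paper's full functional procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(1.0, 10.0, 40)                               # follow-up times (assumed units)
y = 5.0 + 3.0 * t ** (-0.5) + rng.normal(0.0, 0.02, t.size)  # synthetic declining titer

powers = [-2.0, -1.0, -0.5, 0.0, 0.5, 1.0, 2.0, 3.0]         # p = 0 denotes log(t)

def design(t, p):
    g = np.log(t) if p == 0.0 else t ** p
    return np.column_stack([np.ones_like(t), g])             # model y = b0 + b1 * t^p

def sse(p):
    X = design(t, p)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return float(np.sum((X @ beta - y) ** 2))

best_p = min(powers, key=sse)
print(best_p)   # the generating power -0.5 is recovered
```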

  5. Limit cycles via higher order perturbations for some piecewise differential systems

    NASA Astrophysics Data System (ADS)

    Buzzi, Claudio A.; Lima, Maurício Firmino Silva; Torregrosa, Joan

    2018-05-01

    A classical perturbation problem is the polynomial perturbation of the harmonic oscillator, (x', y') = (-y + εf(x, y, ε), x + εg(x, y, ε)). In this paper we study the limit cycles that bifurcate from the period annulus via piecewise polynomial perturbations in two zones separated by a straight line. We prove that, for polynomial perturbations of degree n, no more than Nn - 1 limit cycles appear up to a study of order N. We also show that this upper bound is reached for orders one and two. Moreover, we study this problem in some classes of piecewise Liénard differential systems, providing better upper bounds for higher order perturbations in ε and showing also when they are reached. The Poincaré-Pontryagin-Melnikov theory is the main technique used to prove all the results.

  6. Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Pazner, Will; Persson, Per-Olof

    2018-02-01

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require O(p^{2d}) storage and O(p^{3d}) computational work, where p is the degree of basis polynomials used, and d is the spatial dimension. Our SVD-based tensor-product preconditioner requires O(p^{d+1}) storage, O(p^{d+1}) work in two spatial dimensions, and O(p^{d+2}) work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in p per degree of freedom in 2D, and reduce the computational complexity from O(p^9) to O(p^5) in 3D. Numerical results are shown in 2D and 3D for the advection, Euler, and Navier-Stokes equations, using polynomials of degree up to p = 30. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees p.
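    The Kronecker approximation at the heart of such preconditioners is Van Loan's nearest-Kronecker-product construction, which reduces to a rank-1 SVD of a rearranged matrix. The sketch below applies it to a small synthetic matrix, not an actual DG Jacobian block.

```python
import numpy as np

def nearest_kron(A, m, n, p, q):
    """Nearest B ⊗ C (Frobenius norm) to an (m*p) x (n*q) matrix A via the
    leading singular pair of the block-rearranged matrix (Van Loan)."""
    R = np.empty((m * n, p * q))
    for i in range(m):
        for j in range(n):
            R[i * n + j] = A[i * p:(i + 1) * p, j * q:(j + 1) * q].ravel()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m, n)
    C = np.sqrt(s[0]) * Vt[0].reshape(p, q)
    return B, C

rng = np.random.default_rng(0)
B0, C0 = rng.normal(size=(4, 4)), rng.normal(size=(5, 5))
A = np.kron(B0, C0) + 1e-8 * rng.normal(size=(20, 20))      # nearly Kronecker
B, C = nearest_kron(A, 4, 4, 5, 5)
print(np.linalg.norm(np.kron(B, C) - A))   # tiny: the Kronecker structure is recovered
```

    Storing B and C instead of B ⊗ C is what drives the storage savings the abstract quotes: two small factors replace one large block.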

  7. Extending Romanovski polynomials in quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesne, C.

    2013-12-15

    Some extensions of the (third-class) Romanovski polynomials (also called Romanovski/pseudo-Jacobi polynomials), which appear in bound-state wavefunctions of rationally extended Scarf II and Rosen-Morse I potentials, are considered. For the former potentials, the generalized polynomials satisfy a finite orthogonality relation, while for the latter an infinite set of relations among polynomials with degree-dependent parameters is obtained. Both types of relations are counterparts of those known for conventional polynomials. In the absence of any direct information on the zeros of the Romanovski polynomials present in denominators, the regularity of the constructed potentials is checked by taking advantage of the disconjugacy properties of second-order differential equations of Schrödinger type. It is also shown that on going from Scarf I to Scarf II or from Rosen-Morse II to Rosen-Morse I potentials, the variety of rational extensions is narrowed down from types I, II, and III to type III only.

  8. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and social sciences. Several methods have been developed to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find real roots of fuzzy polynomial equations. Therefore, the ranking method is applied here to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp interval type-2 fuzzy polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  9. Orthonormal aberration polynomials for anamorphic optical imaging systems with rectangular pupils.

    PubMed

    Mahajan, Virendra N

    2010-12-20

    The classical aberrations of an anamorphic optical imaging system, representing the terms of a power-series expansion of its aberration function, are separable in the Cartesian coordinates of a point on its pupil. We discuss the balancing of a classical aberration of a certain order with one or more such aberrations of lower order to minimize its variance across a rectangular pupil of such a system. We show that the balanced aberrations are the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point. The compound Legendre polynomials are orthogonal across a rectangular pupil and, like the classical aberrations, are inherently separable in the Cartesian coordinates of the pupil point. They are different from the balanced aberrations and the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil.
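    The orthogonality of these product polynomials over a rectangular pupil is easy to check numerically: the double integral over the square factors into two one-dimensional Gauss-Legendre quadratures. A small sketch (my illustration, not the paper's code), using the unit square [-1, 1] x [-1, 1]:

    ```python
    import numpy as np
    from numpy.polynomial.legendre import legval, leggauss

    t, w = leggauss(20)        # 20-point Gauss-Legendre rule on [-1, 1]

    def L(n, x):
        """Legendre polynomial P_n evaluated at x."""
        c = np.zeros(n + 1)
        c[n] = 1.0
        return legval(x, c)

    def inner(m1, n1, m2, n2):
        """<P_m1(x) P_n1(y), P_m2(x) P_n2(y)> over the square; the 2-D integral
        separates into a product of two 1-D quadratures."""
        return (w @ (L(m1, t) * L(m2, t))) * (w @ (L(n1, t) * L(n2, t)))
    ```

    Distinct index pairs integrate to zero, while the squared norm of P_m(x) P_n(y) is (2/(2m+1)) * (2/(2n+1)), consistent with the separability emphasized in the abstract.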

  10. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor-product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
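    The core computation described here, a quadrature rule built from raw moments through the Hankel matrix, can be sketched in a few lines as a Cholesky-based Golub-Welsch construction (a simplified sketch of the idea, not the SAMBA code itself):

    ```python
    import numpy as np

    def quadrature_from_moments(mu):
        """n-point Gauss quadrature nodes/weights from raw moments mu[0..2n],
        via a Cholesky factorization of the Hankel moment matrix."""
        n = (len(mu) - 1) // 2
        H = np.array([[mu[i + j] for j in range(n + 1)] for i in range(n + 1)])
        R = np.linalg.cholesky(H).T              # upper triangular, H = R.T @ R
        # Three-term recurrence coefficients from the Cholesky factor
        a = [R[j, j + 1] / R[j, j] - (R[j - 1, j] / R[j - 1, j - 1] if j else 0.0)
             for j in range(n)]
        b = [R[j + 1, j + 1] / R[j, j] for j in range(n - 1)]
        J = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)   # Jacobi matrix
        nodes, V = np.linalg.eigh(J)
        return nodes, mu[0] * V[0] ** 2          # Golub-Welsch weights

    # Moments of the uniform weight on [-1, 1] reproduce 3-point Gauss-Legendre:
    mu = [2, 0, 2/3, 0, 2/5, 0, 2/7]
    nodes, weights = quadrature_from_moments(mu)
    ```

    As the abstract notes, this requires only a handful of matrix operations and no prior construction of the orthogonal polynomials themselves; the moment matrix must be strictly positive definite for the Cholesky step to succeed.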

  11. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor-product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.

  12. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.

  13. Neck curve polynomials in neck rupture model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul

    2012-06-06

    The Neck Rupture Model describes the scission process, in which the liquid drop attains its smallest neck radius at a certain position. In the original formulation the rupture position is chosen randomly, hence the name Random Neck Rupture Model (RNRM). Here, neck curve polynomials are employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of ^{280}X_{90}, varying both the order of the polynomials and the temperature. The polynomial approximation of the neck curve has an important effect on the shape of the fission yield curve.

  14. More on rotations as spin matrix polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtright, Thomas L.

    2015-09-15

    Any nonsingular function of spin j matrices always reduces to a matrix polynomial of order 2j. The challenge is to find a convenient form for the coefficients of the matrix polynomial. The theory of biorthogonal systems is a useful framework to meet this challenge. Central factorial numbers play a key role in the theoretical development. Explicit polynomial coefficients for rotations expressed either as exponentials or as rational Cayley transforms are considered here. Structural features of the results are discussed and compared, and large j limits of the coefficients are examined.
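    For spin j = 1, for instance, any rotation reduces to a quadratic in the spin matrix: since a spin-1 component has eigenvalues -1, 0, +1, the identity exp(i t J) = I + i sin(t) J + (cos(t) - 1) J^2 holds. A small numerical check of this special case of the general statement (my sketch, not the paper's derivation):

    ```python
    import numpy as np

    # Spin-1 matrix J_x has eigenvalues -1, 0, +1, so any function of it
    # reduces to a matrix polynomial of order 2j = 2.
    Jx = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]]) / np.sqrt(2)
    theta = 0.7

    # Exact exponential via the spectral decomposition of the Hermitian J_x
    w, V = np.linalg.eigh(Jx)
    exp_exact = (V * np.exp(1j * theta * w)) @ V.conj().T

    # The same rotation as a quadratic matrix polynomial in J_x
    exp_poly = np.eye(3) + 1j * np.sin(theta) * Jx + (np.cos(theta) - 1) * (Jx @ Jx)
    ```

    Checking the identity on the eigenvalues m in {-1, 0, 1} gives e^{i theta m} in every case, which is why the two expressions agree; the paper's contribution is a systematic formula for such polynomial coefficients at arbitrary j.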

  15. Comparison of random regression test-day models for Polish Black and White cattle.

    PubMed

    Strabel, T; Szyda, J; Ptak, E; Jamrozik, J

    2005-10-01

    Test-day milk yields of first-lactation Black and White cows were used to select the model for routine genetic evaluation of dairy cattle in Poland. The population of Polish Black and White cows is characterized by small herd size, low level of production, and relatively early peak of lactation. Several random regression models for first-lactation milk yield were initially compared using the "percentage of squared bias" criterion and the correlations between true and predicted breeding values. Models with random herd-test-date effects, fixed age-season and herd-year curves, and random additive genetic and permanent environmental curves (Legendre polynomials of different orders were used for all regressions) were chosen for further studies. Additional comparisons included analyses of the residuals and shapes of variance curves in days in milk. The low production level and early peak of lactation of the breed required the use of Legendre polynomials of order 5 to describe age-season lactation curves. For the other curves, Legendre polynomials of order 3 satisfactorily described daily milk yield variation. Fitting third-order polynomials for the permanent environmental effect made it possible to adequately account for heterogeneous residual variance at different stages of lactation.
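    As an illustration of the kind of regression involved, an order-3 Legendre fit to test-day yields on a standardized days-in-milk interval can be sketched as follows (a hypothetical Wood-type lactation curve is used here for illustration, not the Polish data):

    ```python
    import numpy as np
    from numpy.polynomial import legendre as L

    dim = np.arange(5, 306, 10)                   # test days, 5..305 days in milk
    t = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1   # rescale to [-1, 1]
    # Hypothetical Wood-type lactation curve, for illustration only
    y = 20 * dim ** 0.2 * np.exp(-0.003 * dim)
    coef = L.legfit(t, y, 3)                      # order-3 Legendre regression
    fit = L.legval(t, coef)
    ```

    In the random regression models of the abstract, a separate set of such Legendre coefficients is estimated per curve (genetic, permanent environmental, age-season), with the order chosen high enough to capture the early lactation peak.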

  16. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of color moment invariants.

  17. High order spectral difference lattice Boltzmann method for incompressible hydrodynamics

    NASA Astrophysics Data System (ADS)

    Li, Weidong

    2017-09-01

    This work presents a lattice Boltzmann equation (LBE) based high order spectral difference method for incompressible flows. In the present method, the spectral difference (SD) method is adopted to discretize the convection and collision terms of the LBE to obtain high order (≥3) accuracy. Because the SD scheme represents the solution as cell-local polynomials and the solution polynomials have a good tensor-product property, the present spectral difference lattice Boltzmann method (SD-LBM) can be implemented on arbitrary unstructured quadrilateral meshes for effective and efficient treatment of complex geometries. Because only first-order PDEs are involved in the LBE, no special techniques, such as the hybridizable discontinuous Galerkin (HDG) or local discontinuous Galerkin (LDG) methods, are needed to discretize a diffusion term, which simplifies the algorithm and implementation of the high order spectral difference method for simulating viscous flows. The proposed SD-LBM is validated with four incompressible flow benchmarks in two dimensions: (a) the Poiseuille flow driven by a constant body force; (b) the lid-driven cavity flow without singularity at the two top corners (Burggraf flow); (c) the unsteady Taylor-Green vortex flow; and (d) the Blasius boundary-layer flow past a flat plate. Computational results are compared with analytical solutions of these cases, and convergence studies are also given. The designed accuracy of the proposed SD-LBM is clearly verified.

  18. Lattice Boltzmann method for bosons and fermions and the fourth-order Hermite polynomial expansion.

    PubMed

    Coelho, Rodrigo C V; Ilha, Anderson; Doria, Mauro M; Pereira, R M; Aibe, Valter Yoshihiko

    2014-04-01

    The Boltzmann equation with the Bhatnagar-Gross-Krook collision operator is considered for the Bose-Einstein and Fermi-Dirac equilibrium distribution functions. We show that the expansion of the microscopic velocity in terms of Hermite polynomials must be carried to the fourth order to correctly describe the energy equation. The viscosity and thermal coefficients, previously obtained by Yang et al. [Shi and Yang, J. Comput. Phys. 227, 9389 (2008); Yang and Hung, Phys. Rev. E 79, 056708 (2009)] through the Uehling-Uhlenbeck approach, are also derived here. Thus the construction of a lattice Boltzmann method for the quantum fluid is possible provided that the Bose-Einstein and Fermi-Dirac equilibrium distribution functions are expanded to fourth order in the Hermite polynomials.

  19. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949

  20. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first-order polynomial function within a small window around each pixel. The estimates of the polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality-guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.

  1. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, Weibull as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this procedure is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
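    The basic construction, matching the raw moments of a polynomial density to the target moments, reduces to a linear system, since each moment of p(x) = sum_k c_k x^k is linear in the coefficients. A minimal sketch with a uniform target on [0, 1] (my example, not the paper's algorithm):

    ```python
    import numpy as np

    def polynomial_pdf(moments, a, b):
        """Coefficients c_k of p(x) = sum_k c_k x^k on [a, b] whose first
        len(moments) raw moments match `moments` (method of moments)."""
        N = len(moments)
        # A[m, k] = integral_a^b x^(m + k) dx, so (A @ c)[m] is the m-th moment of p
        A = np.array([[(b**(m + k + 1) - a**(m + k + 1)) / (m + k + 1)
                       for k in range(N)] for m in range(N)])
        return np.linalg.solve(A, np.array(moments, float))

    # The raw moments of the uniform density on [0, 1] are 1/(m + 1);
    # the recovered polynomial PDF is the constant 1.
    c = polynomial_pdf([1, 1/2, 1/3], 0.0, 1.0)
    ```

    The same linearity is what makes convolutions convenient in this framework: integrals of products of polynomial densities stay polynomial.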

  2. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems, and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved using this method by reformulating the rational problem into a polynomial form.

  3. Fast beampattern evaluation by polynomial rooting

    NASA Astrophysics Data System (ADS)

    Häcker, P.; Uhlich, S.; Yang, B.

    2011-07-01

    Current automotive radar systems measure the distance, the relative velocity and the direction of objects in their environment. This information enables the car to support the driver. The direction estimation capabilities of a sensor array depend on its beampattern. To find the array configuration leading to the best angle estimation by a global optimization algorithm, a huge number of beampatterns has to be calculated to detect their maxima. In this paper, a novel algorithm is proposed to find all maxima of an array's beampattern fast and reliably, leading to accelerated array optimizations. The algorithm works for arrays having the sensors on a uniformly spaced grid. We use a general version of the gcd (greatest common divisor) function in order to write the problem as a polynomial. We differentiate and root the polynomial to get the extrema of the beampattern. In addition, we show a method to reduce the computational burden even more by decreasing the order of the polynomial.
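    For a uniformly spaced array the differentiate-and-root idea can be sketched directly: on the unit circle z = e^{j psi}, the squared beampattern is a polynomial in z with the autocorrelation of the weights as coefficients, so its derivative is again a polynomial whose unit-circle roots are the extrema. A small sketch for a 4-element uniform array (my illustration of the general approach, not the authors' gcd-based algorithm):

    ```python
    import numpy as np

    w = np.ones(4)                         # uniform 4-element array weights
    # |A(psi)|^2 = sum_m r_m e^{j m psi}, with r the autocorrelation of w
    r = np.correlate(w, w, mode='full')    # r_m for m = -3..3 -> [1, 2, 3, 4, 3, 2, 1]
    m = np.arange(-3, 4)
    # d|A|^2/dpsi = 0 reduces to the polynomial sum_m m * r_m * z^(m+3) = 0
    c = m * r
    roots = np.roots(c[::-1])              # np.roots expects highest degree first
    psi = np.sort(np.angle(roots[np.isclose(np.abs(roots), 1.0)]))
    # Beampattern value at each extremum angle
    pattern = np.abs(np.exp(1j * np.outer(psi, np.arange(4))) @ w) ** 2
    ```

    All extrema, main lobe, nulls and sidelobe peaks, come out of a single polynomial rooting, which is what makes the approach attractive inside a global array-optimization loop.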

  4. Automatic differentiation for Fourier series and the radii polynomial approach

    NASA Astrophysics Data System (ADS)

    Lessard, Jean-Philippe; Mireles James, J. D.; Ransford, Julian

    2016-11-01

    In this work we develop a computer-assisted technique for proving existence of periodic solutions of nonlinear differential equations with non-polynomial nonlinearities. We exploit ideas from the theory of automatic differentiation in order to formulate an augmented polynomial system. We compute a numerical Fourier expansion of the periodic orbit for the augmented system, and prove the existence of a true solution nearby using an a posteriori validation scheme (the radii polynomial approach). The problems considered here are given in terms of locally analytic vector fields (i.e. the field is analytic in a neighborhood of the periodic orbit), hence the computer-assisted proofs are formulated in a Banach space of sequences satisfying a geometric decay condition. In order to illustrate the use and utility of these ideas we implement a number of computer-assisted existence proofs for periodic orbits of the Planar Circular Restricted Three-Body Problem (PCRTBP).

  5. Roots of polynomials by ratio of successive derivatives

    NASA Technical Reports Server (NTRS)

    Crouse, J. E.; Putt, C. W.

    1972-01-01

    An order of magnitude study of the ratios of successive polynomial derivatives yields information about the number of roots at an approached root point and the approximate location of a root point from a nearby point. The location approximation improves as a root is approached, so a powerful convergence procedure becomes available. These principles are developed into a computer program which finds the roots of polynomials with real number coefficients.
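    The key observation can be sketched as follows: near a root a of multiplicity m, the ratio p/p' behaves like (x - a)/m, and 1 - p*p''/p'^2 behaves like 1/m, so the ratios of successive derivatives reveal both the multiplicity and an improved root estimate. A minimal sketch of the idea (Newton's method applied to u = p/p', not the original program):

    ```python
    import numpy as np

    def root_via_derivative_ratios(coeffs, x0, tol=1e-12, maxit=50):
        """Locate a nearby root and estimate its multiplicity from ratios of
        successive polynomial derivatives."""
        p = np.poly1d(coeffs)
        dp, ddp = p.deriv(), p.deriv(2)
        x, mult = x0, 1
        for _ in range(maxit):
            px, dpx = p(x), dp(x)
            if dpx == 0.0:                       # landed (numerically) on the root
                break
            u = px / dpx                         # ~ (x - a)/m near the root
            du = 1.0 - px * ddp(x) / dpx**2      # ~ 1/m
            mult = round(1.0 / du)               # multiplicity estimate
            x -= u / du                          # Newton step on u = p/p'
            if abs(u / du) < tol:
                break
        return x, mult

    coeffs = np.poly([2.0, 2.0, 2.0, -1.0])      # (x - 2)^3 (x + 1)
    root, mult = root_via_derivative_ratios(coeffs, 2.5)
    ```

    Because u = p/p' always has simple roots, this iteration converges rapidly even at multiple roots, which is the "powerful convergence procedure" the abstract alludes to.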

  6. On method of solving third-order ordinary differential equations directly using Bernstein polynomials

    NASA Astrophysics Data System (ADS)

    Khataybeh, S. N.; Hashim, I.

    2018-04-01

    In this paper, we propose for the first time a method based on Bernstein polynomials for solving directly a class of third-order ordinary differential equations (ODEs). This method gives a numerical solution by converting the equation into a system of algebraic equations which is solved directly. Some numerical examples are given to show the applicability of the method.
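    A toy version of the conversion to an algebraic system: collocate y''' = 6 with y(0) = y'(0) = y''(0) = 0 in a degree-3 Bernstein basis on [0, 1] (my minimal sketch of the general idea, not the authors' scheme; the exact solution is y = x^3):

    ```python
    import numpy as np
    from math import comb
    from numpy.polynomial import Polynomial as P

    n = 3
    # Bernstein basis of degree 3 on [0, 1]: B_i(x) = C(n, i) x^i (1 - x)^(n - i)
    B = [comb(n, i) * P([0, 1])**i * P([1, -1])**(n - i) for i in range(n + 1)]

    # Rows: the three initial conditions plus the ODE collocated at x = 0.5,
    # all expressed in the unknown Bernstein coefficients c.
    rows = [[b(0) for b in B],
            [b.deriv(1)(0) for b in B],
            [b.deriv(2)(0) for b in B],
            [b.deriv(3)(0.5) for b in B]]
    rhs = [0, 0, 0, 6]
    c = np.linalg.solve(np.array(rows, float), np.array(rhs, float))

    y = lambda x: sum(ci * b(x) for ci, b in zip(c, B))
    ```

    The solver returns c = [0, 0, 0, 1], i.e. y = B_3(x) = x^3, illustrating how the third-order ODE is handled directly as one algebraic solve rather than being reduced to a first-order system.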

  7. Generalized Freud's equation and level densities with polynomial potential

    NASA Astrophysics Data System (ADS)

    Boobna, Akshat; Ghosh, Saugata

    2013-08-01

    We study orthogonal polynomials with weight $\exp[-NV(x)]$, where $V(x)=\sum_{k=1}^{d}a_{2k}x^{2k}/2k$ is a polynomial of order 2d. We derive the generalised Freud's equations for $d=3$, 4 and 5 and use these to obtain $R_{\mu}=h_{\mu}/h_{\mu-1}$, where $h_{\mu}$ is the normalization constant for the corresponding orthogonal polynomials. Moments of the density functions, expressed in terms of $R_{\mu}$, are obtained using Freud's equation, and from these, explicit results for the level densities as $N\rightarrow\infty$ are derived.

  8. Gaussian quadrature for multiple orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Coussement, Jonathan; van Assche, Walter

    2005-06-01

    We study multiple orthogonal polynomials of type I and type II, which have orthogonality conditions with respect to r measures. These polynomials are connected by their recurrence relation of order r+1. First we show a relation with the eigenvalue problem of a banded lower Hessenberg matrix Ln, containing the recurrence coefficients. As a consequence, we easily find that the multiple orthogonal polynomials of type I and type II satisfy a generalized Christoffel-Darboux identity. Furthermore, we explain the notion of multiple Gaussian quadrature (for proper multi-indices), which is an extension of the theory of Gaussian quadrature for orthogonal polynomials and was introduced by Borges. In particular, we show that the quadrature points and quadrature weights can be expressed in terms of the eigenvalue problem of Ln.

  9. Frequency domain system identification methods - Matrix fraction description approach

    NASA Technical Reports Server (NTRS)

    Horta, Luca G.; Juang, Jer-Nan

    1993-01-01

    This paper presents the use of matrix fraction descriptions for least-squares curve fitting of the frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step to obtain a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.

  10. Probing baryogenesis through the Higgs boson self-coupling

    NASA Astrophysics Data System (ADS)

    Reichert, M.; Eichhorn, A.; Gies, H.; Pawlowski, J. M.; Plehn, T.; Scherer, M. M.

    2018-04-01

    The link between a modified Higgs self-coupling and the strong first-order phase transition necessary for baryogenesis is well explored for polynomial extensions of the Higgs potential. We broaden this argument beyond leading polynomial expansions of the Higgs potential to higher polynomial terms and to nonpolynomial Higgs potentials. For our quantitative analysis we resort to the functional renormalization group, which allows us to evolve the full Higgs potential to higher scales and finite temperature. In all cases we find that a strong first-order phase transition manifests itself in an enhancement of the Higgs self-coupling by at least 50%, implying that such modified Higgs potentials should be accessible at the LHC.

  11. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  12. Wavefront propagation from one plane to another with the use of Zernike polynomials and Taylor monomials.

    PubMed

    Dai, Guang-ming; Campbell, Charles E; Chen, Li; Zhao, Huawei; Chernyak, Dimitri

    2009-01-20

    In wavefront-driven vision correction, ocular aberrations are often measured on the pupil plane and the correction is applied on a different plane. The problem with this practice is that any changes undergone by the wavefront as it propagates between planes are not currently included in devising customized vision correction. With some valid approximations, we have developed an analytical foundation based on geometric optics in which Zernike polynomials are used to characterize the propagation of the wavefront from one plane to another. Both the boundary and the magnitude of the wavefront change after the propagation. Taylor monomials were used to realize the propagation because of their simple form for this purpose. The method we developed to identify changes in low-order aberrations was verified with the classical vertex correction formula. The method we developed to identify changes in high-order aberrations was verified with ZEMAX ray-tracing software. Although the method may not be valid for highly irregular wavefronts and it was only proven for wavefronts with low-order or high-order aberrations, our analysis showed that changes in the propagating wavefront are significant and should, therefore, be included in calculating vision correction. This new approach could be of major significance in calculating wavefront-driven vision correction whether by refractive surgery, contact lenses, intraocular lenses, or spectacles.
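    The classical vertex correction formula used to verify the low-order case has a one-line form: an optical power F (in diopters) propagated a distance d (in meters) toward the eye becomes F/(1 - dF). A quick sketch:

    ```python
    def vertex_correction(power_d, distance_m):
        """Effective power (diopters) after translating the correction plane a
        distance distance_m (meters) toward the eye: F' = F / (1 - d * F)."""
        return power_d / (1.0 - distance_m * power_d)

    # A -5.00 D spectacle lens worn at a 12 mm vertex distance corresponds to a
    # contact-lens power of about -4.72 D.
    contact_power = vertex_correction(-5.0, 0.012)
    ```

    The paper's contribution is the generalization of this plane-to-plane propagation to high-order aberrations, where the wavefront's boundary and magnitude both change and Zernike/Taylor representations are needed.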

  13. Uniform high order spectral methods for one and two dimensional Euler equations

    NASA Technical Reports Server (NTRS)

    Cai, Wei; Shu, Chi-Wang

    1991-01-01

    Uniform high order spectral methods to solve multi-dimensional Euler equations for gas dynamics are discussed. Uniform high order spectral approximations with spectral accuracy in smooth regions of solutions are constructed by introducing the idea of the Essentially Non-Oscillatory (ENO) polynomial interpolations into the spectral methods. The authors present numerical results for the inviscid Burgers' equation, and for the one dimensional Euler equations including the interactions between a shock wave and density disturbance, Sod's and Lax's shock tube problems, and the blast wave problem. The interaction between a Mach 3 two dimensional shock wave and a rotating vortex is simulated.

  14. Adaptive optics with a magnetic deformable mirror: applications in the human eye

    NASA Astrophysics Data System (ADS)

    Fernandez, Enrique J.; Vabre, Laurent; Hermann, Boris; Unterhuber, Angelika; Povazay, Boris; Drexler, Wolfgang

    2006-10-01

    A novel deformable mirror using 52 independent magnetic actuators (MIRAO 52, Imagine Eyes) is presented and characterized for ophthalmic applications. The capabilities of the device to reproduce different surfaces, in particular Zernike polynomials up to the fifth order, are investigated in detail. The study of the influence functions of the deformable mirror reveals a significant linear response with the applied voltage. The correcting device also presents a high fidelity in the generation of surfaces. The ranges of production of Zernike polynomials fully cover those typically found in the human eye, even for the cases of highly aberrated eyes. Data from keratoconic eyes are confronted with the obtained ranges, showing that the deformable mirror is able to compensate for these strong aberrations. Ocular aberration correction with polychromatic light, using a near Gaussian spectrum of 130 nm full width at half maximum centered at 800 nm, in five subjects is accomplished by simultaneously using the deformable mirror and an achromatizing lens, in order to compensate for the monochromatic and chromatic aberrations, respectively. Results from living eyes, including one exhibiting 4.66 D of myopia and a near pathologic cornea with notable high order aberrations, show a practically perfect aberration correction. Benefits and applications of simultaneous monochromatic and chromatic aberration correction are finally discussed in the context of retinal imaging and vision.

  15. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant brightness and linear motion assumptions in traditional methods, the intensity and position of the video pixels are both modeled with high-order polynomials in terms of time. Then, the key problem of our method is to estimate the polynomial coefficients that represent the pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of intensity variation by its past samples, and the other minimizes video frame's reconstruction error along the motion trajectory. To efficiently address the optimization problem for these coefficients, we propose the dynamic filtering solution inspired by video's temporal coherence. The optimal estimation of these coefficients is reformulated into a dynamic fusion of the prior estimate from pixel's temporal predecessor and the maximum likelihood estimate from current new observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on the natural and synthesized videos demonstrate the superiority of HOMDF over the state-of-the-art methods in both subjective and objective comparisons.

  16. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials, and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomials by theoretical analysis and numerical experiments from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomials are superior to the other three because of their high accuracy and robustness, even in the case of a wavefront with incomplete data.
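    A minimal sketch of the modal method underlying such comparisons, using a 2D Chebyshev product basis fitted by least squares on a square sampling grid (the helper name and test wavefront are illustrative, not from the paper):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def cheb2d_basis(x, y, nmax):
    """Product basis T_i(x)*T_j(y) with i + j <= nmax on [-1, 1]^2."""
    cols = []
    for i in range(nmax + 1):
        for j in range(nmax + 1 - i):
            cols.append(C.chebval(x, [0]*i + [1]) * C.chebval(y, [0]*j + [1]))
    return np.stack(cols, axis=-1)

# Fit a synthetic low-order wavefront sampled on a 32x32 square grid.
x, y = np.meshgrid(np.linspace(-1, 1, 32), np.linspace(-1, 1, 32))
w = 0.5*x**2 + 0.3*x*y - 0.2*y                  # "measured" wavefront
A = cheb2d_basis(x.ravel(), y.ravel(), 3)       # design matrix
coef, *_ = np.linalg.lstsq(A, w.ravel(), rcond=None)
rms_err = np.sqrt(np.mean((A @ coef - w.ravel())**2))
```

    Because the test wavefront lies in the span of the basis, the modal fit recovers it to machine precision; the papers' comparisons concern how such bases behave with noise and incomplete data.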

  17. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.

  18. Online Removal of Baseline Shift with a Polynomial Function for Hemodynamic Monitoring Using Near-Infrared Spectroscopy.

    PubMed

    Zhao, Ke; Ji, Yaoyao; Li, Yan; Li, Ting

    2018-01-21

    Near-infrared spectroscopy (NIRS) has become widely accepted as a valuable tool for noninvasively monitoring hemodynamics for clinical and diagnostic purposes. Baseline shift has attracted great attention in the field, but there has been little quantitative study on baseline removal. Here, we aimed to study the baseline characteristics of an in-house-built portable medical NIRS device over a long time (>3.5 h). We found that the measured baselines all formed near-perfect polynomial functions in phantom tests mimicking human bodies, consistent with recent NIRS studies. More importantly, our study shows that, among the second- to sixth-order polynomials evaluated by the parameters R-square, sum of squares due to error, and residual, the fourth-order polynomial function showed distinguished performance, with stable and low-computation-burden fitting calibration (R-square >0.99 for all probes). This study provides a straightforward, efficient, and quantitatively evaluated solution for online baseline removal for hemodynamic monitoring using NIRS devices.
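    A minimal sketch of polynomial baseline removal of the kind evaluated here, using a synthetic quartic drift plus a small oscillatory signal (not the authors' device data):

```python
import numpy as np

def remove_baseline(t, signal, order=4):
    """Fit a polynomial of the given order to the signal over time t and
    return (corrected signal, fitted baseline)."""
    coeffs = np.polyfit(t, signal, order)
    baseline = np.polyval(coeffs, t)
    return signal - baseline, baseline

# Synthetic NIRS-like record: slow quartic drift plus a small hemodynamic
# oscillation that the baseline fit should leave intact.
t = np.linspace(0.0, 3.5, 500)                 # hours
drift = 0.05*t**4 - 0.2*t**3 + 0.1*t**2        # slow baseline shift
hemo = 0.01*np.sin(2*np.pi*2*t)                # signal of interest
corrected, baseline = remove_baseline(t, drift + hemo, order=4)
```

    Because the drift is exactly quartic while the oscillation is nearly orthogonal to low-order polynomials, the corrected trace is close to the oscillation alone, which is the behavior the paper quantifies with R-square and residual metrics.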

  19. Mathematics of Zernike polynomials: a review.

    PubMed

    McAlinden, Colm; McCartney, Mark; Moore, Jonathan

    2011-11-01

    Monochromatic aberrations of the eye principally originate from the cornea and the crystalline lens. Aberrometers operate via differing principles but function by either analysing the reflected wavefront from the retina or by analysing an image on the retina. Aberrations may be described as lower order or higher order aberrations, with Zernike polynomials being the most commonly employed fitting method. The complex mathematical aspects with regard to the Zernike polynomial expansion series are detailed in this review. Refractive surgery has been a key clinical application of aberrometers; however, more recently aberrometers have been used in a range of other areas of ophthalmology including corneal diseases, cataract and retinal imaging. © 2011 The Authors. Clinical and Experimental Ophthalmology © 2011 Royal Australian and New Zealand College of Ophthalmologists.

  20. An efficient algorithm for building locally refined hp-adaptive H-PCFE: Application to uncertainty quantification

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2017-12-01

    Hybrid polynomial correlated function expansion (H-PCFE) is a novel metamodel formulated by coupling polynomial correlated function expansion (PCFE) and Kriging. Unlike commonly available metamodels, H-PCFE performs a bi-level approximation and hence yields more accurate results. However, to date, it is only applicable to medium-scale problems. In order to address this apparent void, this paper presents an improved H-PCFE, referred to as locally refined hp-adaptive H-PCFE. The proposed framework computes the optimal polynomial order and important component functions of PCFE, which is an integral part of H-PCFE, by using global variance-based sensitivity analysis. The optimal number of training points is selected by using distribution-adaptive sequential experimental design. Additionally, the formulated model is locally refined by utilizing the prediction error, which is inherently obtained in H-PCFE. Applicability of the proposed approach has been illustrated with two academic and two industrial problems. To illustrate the superior performance of the proposed approach, results obtained have been compared with those obtained using hp-adaptive PCFE. It is observed that the proposed approach yields highly accurate results. Furthermore, as compared to hp-adaptive PCFE, significantly fewer actual function evaluations are required for obtaining results of similar accuracy.

  1. Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra

    NASA Astrophysics Data System (ADS)

    Karstens, William; Smith, David

    2013-03-01

    Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to 1. Use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index is an even function of photon energy), and 2. Use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
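    The parity constraint argued for above (the index must be an even function of photon energy) can be enforced by restricting a least-squares fit to even powers only; a minimal sketch with an illustrative even-parity index model (coefficients are ours, not the published silicon data):

```python
import numpy as np

def fit_even_powers(E, n, max_even_order=4):
    """Least-squares fit n(E) = a0 + a2*E^2 + a4*E^4 + ..., using only
    even powers of photon energy as required by time-reversal invariance."""
    powers = range(0, max_even_order + 1, 2)
    A = np.stack([E**p for p in powers], axis=-1)
    coef, *_ = np.linalg.lstsq(A, n, rcond=None)
    return dict(zip(powers, coef))

E = np.linspace(0.1, 1.0, 50)            # photon energy (arbitrary units)
n = 3.42 + 0.05*E**2 + 0.01*E**4         # even-parity index model
coef = fit_even_powers(E, n)
```

    Excluding odd powers both respects the physics and reduces the number of free parameters, which is the paper's point about better fits with fewer terms.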

  2. On the coefficients of integrated expansions of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2006-03-01

    A new formula expressing explicitly the integrals of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another new explicit formula relating the Bessel coefficients of an expansion for infinitely differentiable function that has been integrated an arbitrary number of times in terms of the coefficients of the original expansion of the function is also established. An application of these formulae for solving ordinary differential equations with varying coefficients is discussed.

  3. Range Image Flow using High-Order Polynomial Expansion

    DTIC Science & Technology

    2013-09-01

    included as a default algorithm in the OpenCV library [2]. The research of estimating the motion between range images, or range flow, is much more...Journal of Computer Vision, vol. 92, no. 1, pp. 1‒31. 2. G. Bradski and A. Kaehler. 2008. Learning OpenCV : Computer Vision with the OpenCV Library

  4. Symmetric polynomials in information theory: Entropy and subentropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jozsa, Richard; Mitchison, Graeme

    2015-06-15

    Entropy and other fundamental quantities of information theory are customarily expressed and manipulated as functions of probabilities. Here we study the entropy H and subentropy Q as functions of the elementary symmetric polynomials in the probabilities and reveal a series of remarkable properties. Derivatives of all orders are shown to satisfy a complete monotonicity property. H and Q themselves become multivariate Bernstein functions and we derive the density functions of their Levy-Khintchine representations. We also show that H and Q are Pick functions in each symmetric polynomial variable separately. Furthermore, we see that H and the intrinsically quantum informational quantity Q become surprisingly closely related in functional form, suggesting a special significance for the symmetric polynomials in quantum information theory. Using the symmetric polynomials, we also derive a series of further properties of H and Q.

  5. Correcting bias in the rational polynomial coefficients of satellite imagery using thin-plate smoothing splines

    NASA Astrophysics Data System (ADS)

    Shen, Xiang; Liu, Bin; Li, Qing-Quan

    2017-03-01

    The Rational Function Model (RFM) has proven to be a viable alternative to the rigorous sensor models used for geo-processing of high-resolution satellite imagery. Because of various errors in the satellite ephemeris and instrument calibration, the Rational Polynomial Coefficients (RPCs) supplied by image vendors are often not sufficiently accurate, and there is therefore a clear need to correct the systematic biases in order to meet the requirements of high-precision topographic mapping. In this paper, we propose a new RPC bias-correction method using the thin-plate spline modeling technique. Benefiting from its excellent performance and high flexibility in data fitting, the thin-plate spline model has the potential to remove complex distortions in vendor-provided RPCs, such as the errors caused by short-period orbital perturbations. The performance of the new method was evaluated by using Ziyuan-3 satellite images and was compared against the recently developed least-squares collocation approach, as well as the classical affine-transformation and quadratic-polynomial based methods. The results show that the accuracies of the thin-plate spline and the least-squares collocation approaches were better than the other two methods, which indicates that strong non-rigid deformations exist in the test data because they cannot be adequately modeled by simple polynomial-based methods. The performance of the thin-plate spline method was close to that of the least-squares collocation approach when only a few Ground Control Points (GCPs) were used, and it improved more rapidly with an increase in the number of redundant observations. In the test scenario using 21 GCPs (some of them located at the four corners of the scene), the correction residuals of the thin-plate spline method were about 36%, 37%, and 19% smaller than those of the affine transformation method, the quadratic polynomial method, and the least-squares collocation algorithm, respectively, which demonstrates that the new method can be more effective at removing systematic biases in vendor-supplied RPCs.
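    A hedged sketch of thin-plate spline correction of a smooth image-space bias, using SciPy's RBFInterpolator with hypothetical control-point data (not the Ziyuan-3 data used in the paper):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical GCP residuals: observed minus RPC-projected image
# coordinates, here generated from a smooth (affine) systematic bias.
rng = np.random.default_rng(1)
gcp_xy = rng.uniform(0, 1000, size=(21, 2))            # 21 control points
bias = 0.002*gcp_xy[:, 0] - 0.001*gcp_xy[:, 1] + 1.5   # smooth bias (pixels)

# Thin-plate spline fitted to the GCP residuals, then evaluated anywhere
# in the scene to correct the vendor RPC projection.
tps = RBFInterpolator(gcp_xy, bias, kernel='thin_plate_spline')
grid = rng.uniform(0, 1000, size=(100, 2))
correction = tps(grid)
```

    A thin-plate spline reproduces affine trends exactly (its polynomial tail is degree 1) while its radial terms absorb the non-rigid residual deformations that simple affine or quadratic polynomial corrections cannot model.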

  6. Experimental Modal Analysis and Dynamic Component Synthesis. Volume 6. Software User’s Guide.

    DTIC Science & Technology

    1987-12-01

    generate a Complex Mode Indication Function ( CMIF ) from the measurement directory, including modifications from the measurement selection option. This...reference measurements are - included in the data set to be analyzed. The peaks in the CMIF chart indicate existing modes. Thus, the order of the the...polynomials is determined by the number of peaks found in the CMIF chart. Then, the order of the polynomials can be determined before the estimation process

  7. A new order-theoretic characterisation of the polytime computable functions☆

    PubMed Central

    Avanzini, Martin; Eguchi, Naohi; Moser, Georg

    2015-01-01

    We propose a new order-theoretic characterisation of the class of polytime computable functions. To this avail we define the small polynomial path order (sPOP⁎ for short). This termination order entails a new syntactic method to analyse the innermost runtime complexity of term rewrite systems fully automatically: for any rewrite system compatible with sPOP⁎ that employs recursion up to depth d, the (innermost) runtime complexity is polynomially bounded of degree d. This bound is tight. Thus we obtain a direct correspondence between a syntactic (and easily verifiable) condition of a program and the asymptotic worst-case complexity of the program. PMID:26412933

  8. High-order local maximum principle preserving (MPP) discontinuous Galerkin finite element method for the transport equation

    NASA Astrophysics Data System (ADS)

    Anderson, R.; Dobrev, V.; Kolev, Tz.; Kuzmin, D.; Quezada de Luna, M.; Rieben, R.; Tomov, V.

    2017-04-01

    In this work we present a FCT-like Maximum-Principle Preserving (MPP) method to solve the transport equation. We use high-order polynomial spaces; in particular, we consider up to 5th order spaces in two and three dimensions and 23rd order spaces in one dimension. The method combines the concepts of positive basis functions for discontinuous Galerkin finite element spatial discretization, locally defined solution bounds, element-based flux correction, and non-linear local mass redistribution. We consider a simple 1D problem with non-smooth initial data to explain and understand the behavior of different parts of the method. Convergence tests in space indicate that high-order accuracy is achieved. Numerical results from several benchmarks in two and three dimensions are also reported.

  9. New families of superintegrable systems from Hermite and Laguerre exceptional orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marquette, Ian; Quesne, Christiane

    2013-04-15

    In recent years, many exceptional orthogonal polynomials (EOP) were introduced and used to construct new families of 1D exactly solvable quantum potentials, some of which are shape invariant. In this paper, we construct from Hermite and Laguerre EOP and their related quantum systems new 2D superintegrable Hamiltonians with higher-order integrals of motion and the polynomial algebras generated by their integrals of motion. We obtain the finite-dimensional unitary representations of the polynomial algebras and the corresponding energy spectrum. We also point out a new type of degeneracies of the energy levels of these systems that is associated with holes in sequences of EOP.

  10. New algorithms for solving high even-order differential equations using third and fourth Chebyshev-Galerkin methods

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.; Bassuony, M. A.

    2013-03-01

    This paper is concerned with spectral Galerkin algorithms for solving high even-order two point boundary value problems in one dimension subject to homogeneous and nonhomogeneous boundary conditions. The proposed algorithms are extended to solve two-dimensional high even-order differential equations. The key to the efficiency of these algorithms is to construct compact combinations of Chebyshev polynomials of the third and fourth kinds as basis functions. The algorithms lead to linear systems with specially structured matrices that can be efficiently inverted. Numerical examples are included to demonstrate the validity and applicability of the proposed algorithms, and some comparisons with some other methods are made.

  11. Zernike expansion of derivatives and Laplacians of the Zernike circle polynomials.

    PubMed

    Janssen, A J E M

    2014-07-01

    The partial derivatives and Laplacians of the Zernike circle polynomials occur in various places in the literature on computational optics. In a number of cases, the expansion of these derivatives and Laplacians in the circle polynomials is required. For the first-order partial derivatives, analytic results are scattered in the literature. Results start as early as 1942 in Nijboer's thesis and continue until the present day, with some emphasis on recursive computation schemes. A brief historic account of these results is given in the present paper. By choosing the unnormalized version of the circle polynomials, with exponential rather than trigonometric azimuthal dependence, and by a proper combination of the two partial derivatives, a concise form of the expressions emerges. This form is appropriate for the formulation and solution of a model wavefront sensing problem of reconstructing a wavefront on the level of its expansion coefficients from (measurements of the expansion coefficients of) the partial derivatives. It turns out that the least-squares estimation problem arising here decouples per azimuthal order m, and per m the generalized inverse solution assumes a concise analytic form so that singular value decompositions are avoided. The preferred version of the circle polynomials, with proper combination of the partial derivatives, also leads to a concise analytic result for the Zernike expansion of the Laplacian of the circle polynomials. From these expansions, the properties of the Laplacian as a mapping from the space of circle polynomials of maximal degree N, as required in the study of the Neumann problem associated with the transport-of-intensity equation, can be read off within a single glance. Furthermore, the inverse of the Laplacian on this space is shown to have a concise analytic form.

  12. Ricci polynomial gravity

    NASA Astrophysics Data System (ADS)

    Hao, Xin; Zhao, Liu

    2017-12-01

    We study a novel class of higher-curvature gravity models in n spacetime dimensions which we call Ricci polynomial gravity. The action consists purely of a polynomial in Ricci curvature of order N. In the absence of the second-order terms in the action, the models are ghost free around the Minkowski vacuum. By appropriately choosing the coupling coefficients in front of each term in the action, it is shown that the models can have multiple vacua with different effective cosmological constants, and can be made free of ghost and scalar degrees of freedom around at least one of the maximally symmetric vacua for any n > 2 and any N ≥ 4. We also discuss some of the physical implications of the existence of multiple vacua in the contexts of black hole physics and cosmology.

  13. Polynomial mixture method of solving ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Nallasamy, Kumaresan; Ratnavelu, Kuru; Kamali, M. Z. M.

    2017-11-01

    In this paper, a numerical solution of a fuzzy quadratic Riccati differential equation is estimated using a proposed new approach that provides a mixture of polynomials, where the right mixture is generated iteratively. This mixture provides a generalized formalism of traditional Neural Networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th order (RK4) method. This can be achieved by solving the 1st order non-linear ordinary differential equation (ODE) that is found commonly in Riccati differential equations. Research has shown improved results relative to the RK4 method. It can be said that the Polynomial Mixture Method (PMM) shows promising results, with the advantage of continuous estimation and improved accuracy over Mabood et al., RK4, Multi-Agent NN, and the Neuro Method (NM).
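    As a point of reference for the RK4 baseline mentioned above, a minimal sketch solving a crisp (non-fuzzy) Riccati-type equation y' = 1 + y², whose exact solution is y = tan(t) (the problem instance is illustrative, not the paper's):

```python
import math

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h/2, y + h*k1/2)
    k3 = f(t + h/2, y + h*k2/2)
    k4 = f(t + h, y + h*k3)
    return y + h*(k1 + 2*k2 + 2*k3 + k4)/6

# Riccati test problem: y' = 1 + y^2, y(0) = 0, exact solution tan(t).
f = lambda t, y: 1.0 + y*y
t, y, h = 0.0, 0.0, 0.01
while t < 1.0 - 1e-12:
    y = rk4_step(f, t, y, h)
    t += h
```

    After 100 steps y agrees with tan(1) to well below 1e-6, which is the kind of accuracy any alternative such as a polynomial-mixture scheme must match or beat.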

  14. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the polynomial derivative term series' ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. Solution of Fifth-order Korteweg and de Vries Equation by Homotopy perturbation Transform Method using He's Polynomial

    NASA Astrophysics Data System (ADS)

    Sharma, Dinkar; Singh, Prince; Chauhan, Shubha

    2017-06-01

    In this paper, a combined form of the Laplace transform method with the homotopy perturbation method is applied to solve nonlinear fifth order Korteweg de Vries (KdV) equations. The method is known as homotopy perturbation transform method (HPTM). The nonlinear terms can be easily handled by the use of He's polynomials. Two test examples are considered to illustrate the present scheme. Further the results are compared with Homotopy perturbation method (HPM).

  16. Dispersion analysis of the Pn -Pn-1DG mixed finite element pair for atmospheric modelling

    NASA Astrophysics Data System (ADS)

    Melvin, Thomas

    2018-02-01

    Mixed finite element methods provide a generalisation of staggered grid finite difference methods with a framework to extend the method to high orders. The ability to generate a high order method is appealing for applications on the kind of quasi-uniform grids that are popular for atmospheric modelling, so that the method retains an acceptable level of accuracy even around special points in the grid. The dispersion properties of such schemes are important to study as they provide insight into the numerical adjustment to imbalance that is an important component in atmospheric modelling. This paper extends the recent analysis of the P2 - P1DG pair, that is a quadratic continuous and linear discontinuous finite element pair, to higher polynomial orders and also spectral element type pairs. In common with the previously studied element pair, and also with other schemes such as the spectral element and discontinuous Galerkin methods, increasing the polynomial order is found to provide a more accurate dispersion relation for the well resolved part of the spectrum but at the cost of a number of unphysical spectral gaps. The effects of these spectral gaps are investigated and shown to have a varying impact depending upon the width of the gap. Finally, the tensor product nature of the finite element spaces is exploited to extend the dispersion analysis into two-dimensions.

  17. Piecewise polynomial representations of genomic tracks.

    PubMed

    Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz

    2012-01-01

    Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.

  18. Using Tutte polynomials to analyze the structure of the benzodiazepines

    NASA Astrophysics Data System (ADS)

    Cadavid Muñoz, Juan José

    2014-05-01

    Graph theory in general, and Tutte polynomials in particular, are implemented for analyzing the chemical structure of the benzodiazepines. Similarity analyses are used with the Tutte polynomials to find other molecules that are similar to the benzodiazepines and therefore might show similar psychoactive action for medical purposes, in order to avoid the drawbacks associated with benzodiazepine-based medicines. For each type of benzodiazepine, Tutte polynomials are computed and some numeric characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using Maple's GraphTheory computer algebra package. The obtained analytical results are of great importance in pharmaceutical engineering. As a future research line, the chemistry computation program Spartan will be used to extend the present results and compare them with those obtained from the Tutte polynomials of the benzodiazepines.
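    The spanning-tree count mentioned above equals the Tutte polynomial evaluated at (1, 1), and can be computed directly via Kirchhoff's matrix-tree theorem; a minimal sketch on an illustrative benzene-like ring graph (not an actual benzodiazepine structure):

```python
import numpy as np

def spanning_tree_count(adj):
    """Number of spanning trees via Kirchhoff's matrix-tree theorem;
    this equals the Tutte polynomial T(G; 1, 1)."""
    A = np.asarray(adj, dtype=float)
    L = np.diag(A.sum(axis=1)) - A       # graph Laplacian
    minor = L[1:, 1:]                    # delete any one row and column
    return int(round(np.linalg.det(minor)))

# Benzene-like 6-cycle: a cycle on n vertices has exactly n spanning trees.
ring = np.zeros((6, 6), dtype=int)
for i in range(6):
    ring[i, (i + 1) % 6] = ring[(i + 1) % 6, i] = 1
```

    This gives a quick cross-check for individual Tutte evaluations: `spanning_tree_count(ring)` returns 6, and the complete graph K4 gives 16, matching Cayley's formula n^(n-2).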

  19. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haut, T. S.; Babb, T.; Martinsson, P. G.

    2015-06-16

    Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th order Runge-Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long-time intervals, and with speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.

  20. Covariance functions for body weight from birth to maturity in Nellore cows.

    PubMed

    Boligon, A A; Mercadante, M E Z; Forni, S; Lôbo, R B; Albuquerque, L G

    2010-03-01

    The objective of this study was to estimate (co)variance functions using random regression models on Legendre polynomials for the analysis of repeated measures of BW from birth to adult age. A total of 82,064 records from 8,145 females were analyzed. Different models were compared. The models included additive direct and maternal effects, and animal and maternal permanent environmental effects as random terms. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of animal age (cubic regression) were considered as random covariables. Eight models with polynomials of third to sixth order were used to describe additive direct and maternal effects, and animal and maternal permanent environmental effects. Residual effects were modeled using 1 (i.e., assuming homogeneity of variances across all ages) or 5 age classes. The model with 5 classes was the best to describe the trajectory of residuals along the growth curve. The model including fourth- and sixth-order polynomials for additive direct and animal permanent environmental effects, respectively, and third-order polynomials for maternal genetic and maternal permanent environmental effects was the best. Estimates of (co)variance obtained with the multi-trait and random regression models were similar. Direct heritability estimates obtained with the random regression models followed a trend similar to that obtained with the multi-trait model. The largest estimates of maternal heritability were those of BW taken close to 240 d of age. In general, estimates of correlation between BW from birth to 8 yr of age decreased with increasing distance between ages.
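
    The random covariables in such models are just Legendre polynomials evaluated at ages standardized to [-1, 1]; a minimal numpy sketch (the age values and polynomial order are illustrative):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(ages, order):
    """Legendre covariables for a random regression model: ages are
    standardized to [-1, 1], then the first order+1 Legendre
    polynomials are evaluated (order 3 = a cubic regression)."""
    a_min, a_max = ages.min(), ages.max()
    x = -1.0 + 2.0 * (ages - a_min) / (a_max - a_min)
    return legendre.legvander(x, order)  # shape (n, order + 1)

ages = np.array([0.0, 240.0, 365.0, 2920.0])  # birth to 8 yr, in days
Phi = legendre_basis(ages, 3)
print(Phi.shape)  # (4, 4)
```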

  1. Causality and a-theorem constraints on Ricci polynomial and Riemann cubic gravities

    NASA Astrophysics Data System (ADS)

    Li, Yue-Zhou; Lü, H.; Wu, Jun-Bao

    2018-01-01

    In this paper, we study Einstein gravity extended with Ricci polynomials and derive the constraints on the coupling constants from the considerations of being ghost-free, exhibiting an a-theorem and maintaining causality. The salient feature is that Einstein metrics with appropriate effective cosmological constants continue to be solutions with the inclusion of such Ricci polynomials and the causality constraint is automatically satisfied. The ghost-free and a-theorem conditions can only be both met starting at the quartic order. We also study these constraints on general Riemann cubic gravities.

  2. Non-axisymmetric Aberration Patterns from Wide-field Telescopes Using Spin-weighted Zernike Polynomials

    DOE PAGES

    Kent, Stephen M.

    2018-02-15

    If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions.
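
    For reference, the ordinary (spin-weight-zero) Zernike polynomials that this construction generalizes have a closed-form radial part; a small sketch of the standard formula (not the spin-weighted version derived in the paper):

```python
from math import comb

def zernike_radial(n, m, rho):
    """Radial part R_n^m(rho) of the Zernike polynomial (m >= 0,
    n - m even), from the standard finite-sum formula."""
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1)**k * comb(n - k, k) * comb(n - 2*k, (n - m)//2 - k) * rho**(n - 2*k)
        for k in range((n - m)//2 + 1)
    )

print(zernike_radial(2, 0, 1.0))  # R_2^0(rho) = 2 rho^2 - 1 -> 1.0
print(zernike_radial(4, 0, 0.0))  # R_4^0(rho) = 6 rho^4 - 6 rho^2 + 1 -> 1.0
```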

  3. New template family for the detection of gravitational waves from comparable-mass black hole binaries

    NASA Astrophysics Data System (ADS)

    Porter, Edward K.

    2007-11-01

    In order to improve the phasing of the comparable-mass waveform as we approach the last stable orbit for a system, various resummation methods have been used to improve the standard post-Newtonian waveforms. In this work we present a new family of templates for the detection of gravitational waves from the inspiral of two comparable-mass black hole binaries. These new adiabatic templates are based on reexpressing the derivative of the binding energy and the gravitational wave flux functions in terms of shifted Chebyshev polynomials. The Chebyshev polynomials are a useful tool in numerical methods as they display the fastest convergence of any of the orthogonal polynomials. In this case they are also particularly useful as they eliminate one of the features that plagues the post-Newtonian expansion. The Chebyshev binding energy now has information at all post-Newtonian orders, compared to the post-Newtonian templates which only have information at full integer orders. In this work, we compare both the post-Newtonian and Chebyshev templates against a fiducially exact waveform. This waveform is constructed from a hybrid method of using the test-mass results combined with the mass dependent parts of the post-Newtonian expansions for the binding energy and flux functions. Our results show that the Chebyshev templates achieve extremely high fitting factors at all post-Newtonian orders and provide excellent parameter extraction. We also show that this new template family has a faster Cauchy convergence, gives a better prediction of the position of the last stable orbit and in general recovers higher Signal-to-Noise ratios than the post-Newtonian templates.
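
    The rapid convergence of Chebyshev fits that motivates this template family is easy to see numerically; a sketch with numpy, fitting a generic smooth stand-in function (not the actual binding energy or flux functions):

```python
import numpy as np
from numpy.polynomial import chebyshev

# Shifted/rescaled Chebyshev fit of a smooth function on [0, 1].
f = lambda v: 1.0 / (1.0 + 25.0 * v**2)  # stand-in function (assumption)
x = np.linspace(0.0, 1.0, 400)

errs = {}
for deg in (4, 8, 16):
    c = chebyshev.Chebyshev.fit(x, f(x), deg, domain=[0.0, 1.0])
    errs[deg] = np.max(np.abs(c(x) - f(x)))
    print(deg, errs[deg])  # the error shrinks rapidly with degree
```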

  4. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, pc(4, 6, 12) = 0.69377849... and pc(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of pc = 0.69373383... and pc = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
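
    Reading off a threshold as the root in [0, 1] of an integer-coefficient polynomial can be sketched with the classic exactly solvable case of the triangular-lattice bond threshold, whose polynomial is p^3 - 3p + 1 (a known result used here for illustration, not one of the polynomials derived in the paper):

```python
import numpy as np

# Triangular-lattice bond threshold: the root in [0, 1] of p^3 - 3p + 1,
# known exactly to be 2*sin(pi/18) ~ 0.347296.
coeffs = [1, 0, -3, 1]  # p^3 - 3p + 1, highest degree first
roots = np.roots(coeffs)
pc = [r.real for r in roots if abs(r.imag) < 1e-12 and 0 <= r.real <= 1][0]
print(pc)  # ~0.347296
```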

  5. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
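
    The implicit-constraint idea, reparameterizing so that continuity at the knots is built into the basis rather than imposed as explicit side conditions, can be sketched with a truncated-power basis (an illustrative numpy version for the fixed effects only, not the authors' SAS/S-plus implementation):

```python
import numpy as np

def spline_design(x, knots, degree=2):
    """Truncated-power basis: a piecewise polynomial of the given degree
    that is automatically continuous (with degree-1 smoothness) at the
    knots, so no explicit continuity constraints are needed."""
    cols = [x**d for d in range(degree + 1)]
    cols += [np.where(x > k, (x - k)**degree, 0.0) for k in knots]
    return np.column_stack(cols)

x = np.linspace(0.0, 10.0, 200)
X = spline_design(x, knots=[3.0, 7.0], degree=2)
beta, *_ = np.linalg.lstsq(X, np.sin(x), rcond=None)  # OLS on the basis
print(X.shape)  # (200, 5): intercept, x, x^2, and one column per knot
```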

  6. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
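
    The operation count these schemes optimize can be seen in a toy example: single-variable Horner evaluation, and a multivariate polynomial factored by the most-occurring variable first (the polynomials are hypothetical):

```python
def horner(coeffs, x):
    """Evaluate c0 + c1*x + ... + cn*x^n with only n multiplications."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc

# Multivariate case: factor out the most-occurring variable first (the
# greedy textbook scheme the paper tries to beat with MCTS).
# p(x, y) = x^2*y + x*y + y + 1  ->  y*(x*(x + 1) + 1) + 1
p_naive = lambda x, y: x*x*y + x*y + y + 1      # 4 multiplications
p_horner = lambda x, y: y*(x*(x + 1) + 1) + 1   # 2 multiplications

print(horner([1, 2, 3], 2.0))                   # 1 + 2*2 + 3*4 -> 17.0
print(p_naive(2.0, 3.0) == p_horner(2.0, 3.0))  # True
```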

  7. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

    Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. The exponential functions themselves, or their low-order diagonal Pade (rational function) approximants, are an obvious choice of interpolant. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponential fitting the trapezoidal rule. Application of these algorithms to integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
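
    The core idea, stepping with a decaying-exponential interpolant instead of a polynomial one, can be sketched on the linear model problem u' = -λ(u - u_eq); this illustrates only the A-stability property, not the CREK1D formulas:

```python
import math

def expfit_step(u, dt, lam, ueq):
    """Exponentially fitted step for u' = -lam*(u - ueq): advances with
    the exact decaying-exponential interpolant, so it is stable and
    exact for this model problem at any step size."""
    return ueq + (u - ueq) * math.exp(-lam * dt)

def euler_step(u, dt, lam, ueq):
    # Polynomial (linear) interpolant: explicit Euler.
    return u - dt * lam * (u - ueq)

lam, ueq, dt = 1.0e4, 1.0, 1.0e-3   # stiff regime: lam*dt = 10
u_exp, u_eul = 2.0, 2.0
for _ in range(10):
    u_exp = expfit_step(u_exp, dt, lam, ueq)
    u_eul = euler_step(u_eul, dt, lam, ueq)
print(u_exp)              # ~1.0: decays stably to equilibrium
print(abs(u_eul) > 1e6)   # True: explicit Euler blows up at this step size
```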

  8. Hypergeometric Series Solution to a Class of Second-Order Boundary Value Problems via Laplace Transform with Applications to Nanofluids

    NASA Astrophysics Data System (ADS)

    Ebaid, Abdelhalim; Wazwaz, Abdul-Majid; Alali, Elham; Masaedeh, Basem S.

    2017-03-01

    Very recently, it was observed that the temperature of nanofluids is ultimately governed by second-order ordinary differential equations with variable coefficients of exponential type. Such coefficients were then transformed to polynomial type by using new independent variables. In this paper, a class of second-order ordinary differential equations with variable coefficients of polynomial type has been solved analytically. The analytical solution is expressed in terms of a hypergeometric function with generalized parameters. Moreover, the present results have been applied to some selected nanofluid problems in the literature. The exact solutions in the literature were derived as special cases of our generalized analytical solution.

  9. Study on the mapping of dark matter clustering from real space to redshift space

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Song, Yong-Seon

    2016-08-01

    The mapping of dark matter clustering from real space to redshift space introduces an anisotropic property to the measured density power spectrum in redshift space, known as the redshift-space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher-order polynomials due to indefinite cross-correlations between the density and velocity fields, and by the Finger-of-God (FoG) effect due to the randomness of the peculiar velocity field. Whilst the full higher-order polynomials remain unknown, the other systematics can be controlled consistently within the same order of truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: 1) the "one-point" FoG term, which is independent of the separation vector between two different points, and 2) the "correlated" FoG term, appearing as indefinite polynomials which are expanded to the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that the Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space at the smallest scales to date, up to k ~ 0.2 Mpc^-1, considering the resolution of future experiments.

  10. Application of Statistic Experimental Design to Assess the Effect of Gamma-irradiation Pre-Treatment on the Drying Characteristics and Qualities of Wheat

    NASA Astrophysics Data System (ADS)

    Yu, Yong; Wang, Jun

    Wheat, pretreated by 60Co gamma irradiation, was dried by hot-air with irradiation dosage 0-3 kGy, drying temperature 40-60 °C, and initial moisture contents 19-25% (drying basis). The drying characteristics and dried qualities of wheat were evaluated based on drying time, average dehydration rate, wet gluten content (WGC), moisture content of wet gluten (MCWG) and titratable acidity (TA). A quadratic rotation-orthogonal composite experimental design, with three variables (at five levels) and five response functions, and an analysis method were employed to study the effect of the three variables on the individual response functions. The five response functions (drying time, average dehydration rate, WGC, MCWG, TA) were correlated with these variables by second-order polynomials consisting of linear, quadratic and interaction terms. A high correlation coefficient indicated the suitability of the second-order polynomial to predict these response functions. The linear, interaction and quadratic effects of the three variables on the five response functions were all studied.

  11. High-order rogue waves of the Benjamin-Ono equation and the nonlocal nonlinear Schrödinger equation

    NASA Astrophysics Data System (ADS)

    Liu, Wei

    2017-10-01

    High-order rogue wave solutions of the Benjamin-Ono equation and the nonlocal nonlinear Schrödinger equation are derived by employing the bilinear method, which are expressed by simple polynomials. Typical dynamics of these high-order rogue waves are studied by analytical and graphical ways. For the Benjamin-Ono equation, there are two types of rogue waves, namely, bright rogue waves and dark rogue waves. In particular, the fundamental rogue wave pattern is different from the usual fundamental rogue wave patterns in other soliton equations. For the nonlocal nonlinear Schrödinger equation, the exact explicit rogue wave solutions up to the second order are presented. Typical rogue wave patterns such as Peregrine-type, triple and fundamental rogue waves are put forward. These high-order rogue wave patterns have not been shown before in the nonlocal Schrödinger equation.

  12. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.
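
    The input-layer idea can be sketched for the simplest case of three identical atoms, where the elementary symmetric polynomials of transformed internuclear distances are the (primary) invariant polynomials; for the four-atom systems of the paper, secondary invariants are needed as well (toy code, with an assumed Morse-like transform):

```python
import numpy as np

def pip_features(r12, r13, r23, alpha=1.0):
    """Permutation-invariant inputs for 3 identical atoms: elementary
    symmetric polynomials of Morse-like variables p_ij = exp(-alpha*r_ij).
    Relabeling the atoms only permutes p, leaving e1, e2, e3 unchanged."""
    p = np.exp(-alpha * np.array([r12, r13, r23]))
    e1 = p.sum()
    e2 = p[0]*p[1] + p[0]*p[2] + p[1]*p[2]
    e3 = p.prod()
    return np.array([e1, e2, e3])

a = pip_features(1.0, 1.2, 1.4)
b = pip_features(1.4, 1.0, 1.2)  # same geometry, atoms relabeled
print(np.allclose(a, b))         # True: invariant under permutation
```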

  13. Genetic parameters for test-day yield of milk, fat and protein in buffaloes estimated by random regression models.

    PubMed

    Aspilcueta-Borquis, Rúsbel R; Araujo Neto, Francisco R; Baldi, Fernando; Santos, Daniel J A; Albuquerque, Lucia G; Tonhati, Humberto

    2012-08-01

    The test-day yields of milk, fat and protein were analysed from 1433 first lactations of buffaloes of the Murrah breed, daughters of 113 sires from 12 herds in the state of São Paulo, Brazil, born between 1985 and 2007. For the test-day yields, 10 monthly classes of lactation days were considered. The contemporary groups were defined as the herd-year-month of the test day. Random additive genetic, permanent environmental and residual effects were included in the model. The fixed effects considered were the contemporary group, number of milkings (1 or 2 milkings), linear and quadratic effects of the covariable cow age at calving and the mean lactation curve of the population (modelled by third-order Legendre orthogonal polynomials). The random additive genetic and permanent environmental effects were estimated by means of regression on third- to sixth-order Legendre orthogonal polynomials. The residual variances were modelled with a homogenous structure and various heterogeneous classes. According to the likelihood-ratio test, the best model for milk and fat production was that with four residual variance classes, while a third-order Legendre polynomial was best for the additive genetic effect for milk and fat yield, a fourth-order polynomial was best for the permanent environmental effect for milk production and a fifth-order polynomial was best for fat production. For protein yield, the best model was that with three residual variance classes and third- and fourth-order Legendre polynomials were best for the additive genetic and permanent environmental effects, respectively. The heritability estimates for the characteristics analysed were moderate, varying from 0·16±0·05 to 0·29±0·05 for milk yield, 0·20±0·05 to 0·30±0·08 for fat yield and 0·18±0·06 to 0·27±0·08 for protein yield. 
The estimates of the genetic correlations between the tests varied from 0·18±0·120 to 0·99±0·002; from 0·44±0·080 to 0·99±0·004; and from 0·41±0·080 to 0·99±0·004, for milk, fat and protein production, respectively, indicating that whatever the selection criterion used, indirect genetic gains can be expected throughout the lactation curve.

  14. Optimization and formulation design of gels of Diclofenac and Curcumin for transdermal drug delivery by Box-Behnken statistical design.

    PubMed

    Chaudhary, Hema; Kohli, Kanchan; Amin, Saima; Rathee, Permender; Kumar, Vikash

    2011-02-01

    The aim of this study was to develop and optimize a transdermal gel formulation for Diclofenac diethylamine (DDEA) and Curcumin (CRM). A 3-factor, 3-level Box-Behnken design was used to derive a second-order polynomial equation to construct contour plots for prediction of responses. Independent variables studied were the polymer concentration (X(1)), ethanol (X(2)) and propylene glycol (X(3)) and the levels of each factor were low, medium, and high. The dependent variables studied were the skin permeation rate of DDEA (Y(1)), skin permeation rate of CRM (Y(2)), and viscosity of the gels (Y(3)). Response surface plots were drawn, statistical validity of the polynomials was established to find the compositions of optimized formulation which was evaluated using the Franz-type diffusion cell. The permeation rate of DDEA increased proportionally with ethanol concentration but decreased with polymer concentration, whereas the permeation rate of CRM increased proportionally with polymer concentration. Gels showed a non-Fickian super case II (typical zero order) and non-Fickian diffusion release mechanism for DDEA and CRM, respectively. The design demonstrated the role of the derived polynomial equation and contour plots in predicting the values of dependent variables for the preparation and optimization of gel formulation for transdermal drug release. Copyright © 2010 Wiley-Liss, Inc.
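
    The second-order polynomial fitted from such a design combines intercept, linear, interaction, and pure quadratic terms; a least-squares sketch on synthetic data (the factors, design points, and coefficients are illustrative, not the DDEA/CRM model):

```python
import numpy as np

def quadratic_design(X):
    """Full second-order model matrix: intercept, linear, two-way
    interaction, and pure quadratic columns, as used for Box-Behnken
    response-surface fits."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 3))                    # 3 coded factors
y = 1 + 2*X[:, 0] - X[:, 1]*X[:, 2] + 0.5*X[:, 2]**2    # a known surface
beta, *_ = np.linalg.lstsq(quadratic_design(X), y, rcond=None)
print(np.round(beta, 3))  # recovers the generating coefficients
```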

  15. Grid Effect on Spherical Shallow Water Jets Using Continuous and Discontinuous Galerkin Methods

    DTIC Science & Technology

    2013-01-01

    The high-order Legendre-Gauss-Lobatto (LGL) points are added to the linear grid by projecting the linear elements onto the auxiliary gnomonic space ... mapping, the triangles are subdivided into smaller ones by a Lagrange polynomial of order nI. The number of quadrilateral elements and grid points of ... of the acceleration of gravity and the vertical height of the fluid), ν∇² is the artificial viscosity term with viscous coefficient ν = 1×10⁵ m² s⁻¹

  16. Sandia Higher Order Elements (SHOE) v 0.5 alpha

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2013-09-24

    SHOE is research code for characterizing and visualizing higher-order finite elements; it contains a framework for defining classes of interpolation techniques and element shapes; methods for interpolating triangular, quadrilateral, tetrahedral, and hexahedral cells using Lagrange and Legendre polynomial bases of arbitrary order; methods to decompose each element into domains of constant gradient flow (using a polynomial solver to identify critical points); and an isocontouring technique that uses this decomposition to guarantee topological correctness. Please note that this is an alpha release of research software and that some time has passed since it was actively developed; build- and run-time issues likely exist.

  17. Direct discriminant locality preserving projection with Hammerstein polynomial expansion.

    PubMed

    Chen, Xi; Zhang, Jiashu; Li, Defang

    2012-12-01

    Discriminant locality preserving projection (DLPP) is a linear approach that encodes discriminant information into the objective of locality preserving projection and improves its classification ability. To enhance the nonlinear description ability of DLPP, one can optimize the objective function of DLPP in reproducing kernel Hilbert space to form a kernel-based discriminant locality preserving projection (KDLPP). However, KDLPP suffers from the following problems: 1) a larger computational burden; 2) no explicit mapping functions, which results in more computational burden when projecting a new sample into the low-dimensional subspace; and 3) KDLPP cannot obtain the optimal discriminant vectors that best optimize the objective of DLPP. To overcome these weaknesses, in this paper, a direct discriminant locality preserving projection with Hammerstein polynomial expansion (HPDDLPP) is proposed. The proposed HPDDLPP directly implements the objective of DLPP in high-dimensional second-order Hammerstein polynomial space without matrix inversion, which extracts the optimal discriminant vectors for DLPP without a larger computational burden. Compared with some other related classical methods, experimental results for face and palmprint recognition problems indicate the effectiveness of the proposed HPDDLPP.

  18. Robust algebraic image enhancement for intelligent control systems

    NASA Technical Reports Server (NTRS)

    Lerner, Bao-Ting; Morrelli, Michael

    1993-01-01

    Robust vision capability for intelligent control systems has been an elusive goal in image processing. The computationally intensive techniques necessary for conventional image processing make real-time applications, such as object tracking and collision avoidance, difficult. In order to endow an intelligent control system with the needed vision robustness, an adequate image enhancement subsystem capable of compensating for the wide variety of real-world degradations must exist between the image capturing and the object recognition subsystems. This enhancement stage must be adaptive and must operate with consistency in the presence of both statistical and shape-based noise. To deal with this problem, we have developed an innovative algebraic approach which provides a sound mathematical framework for image representation and manipulation. Our image model provides a natural platform from which to pursue dynamic scene analysis, and its incorporation into a vision system would serve as the front-end to an intelligent control system. We have developed a unique polynomial representation of gray-level imagery and applied this representation to develop polynomial operators on complex gray-level scenes. This approach is highly advantageous since polynomials can be manipulated very easily and are readily understood, thus providing a very convenient environment for image processing. Our model presents a highly structured and compact algebraic representation of gray-level images which can be viewed as fuzzy sets.

  19. Non-axisymmetric Aberration Patterns from Wide-field Telescopes Using Spin-weighted Zernike Polynomials

    NASA Astrophysics Data System (ADS)

    Kent, Stephen M.

    2018-04-01

    If the optical system of a telescope is perturbed from rotational symmetry, the Zernike wavefront aberration coefficients describing that system can be expressed as a function of position in the focal plane using spin-weighted Zernike polynomials. Methodologies are presented to derive these polynomials to arbitrary order. This methodology is applied to aberration patterns produced by a misaligned Ritchey–Chrétien telescope and to distortion patterns at the focal plane of the DESI optical corrector, where it is shown to provide a more efficient description of distortion than conventional expansions.

  20. Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Fei; Department of Mathematics, University of California, Berkeley; Morzfeld, Matthias, E-mail: mmo@math.lbl.gov

    2015-02-01

    Polynomial chaos expansions are used to reduce the computational cost in the Bayesian solutions of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very different from the posterior, and the resulting estimates become inaccurate. One can improve the accuracy by adaptively increasing the order of the polynomial chaos, but the cost may increase too fast for this to be cost effective compared to Monte Carlo sampling without a surrogate posterior.
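
    The failure mode described here, a surrogate that is accurate under the prior but degrades where informative data would move the posterior, can be sketched in one dimension with a Hermite chaos fit (the forward model and degree are assumptions for illustration):

```python
import numpy as np
from numpy.polynomial import hermite_e

# Hermite polynomial-chaos surrogate of a forward map f under a standard
# normal prior, built by least squares on prior samples. Evaluating it far
# in the tail (where informative data would concentrate the posterior)
# shows the loss of accuracy.
f = lambda x: np.exp(0.5 * x)    # stand-in forward model (assumption)
rng = np.random.default_rng(1)
xs = rng.standard_normal(2000)   # samples from the prior
deg = 6
V = hermite_e.hermevander(xs, deg)
c, *_ = np.linalg.lstsq(V, f(xs), rcond=None)
surrogate = lambda x: hermite_e.hermeval(x, c)

for x in (0.0, 4.0):
    # Error is small near the prior bulk, larger out in the tail.
    print(x, abs(surrogate(x) - f(x)))
```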

  1. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, such as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computation expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. Alternatively, mCov-SI are then proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. 
It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.

  2. State-vector formalism and the Legendre polynomial solution for modelling guided waves in anisotropic plates

    NASA Astrophysics Data System (ADS)

    Zheng, Mingfang; He, Cunfu; Lu, Yan; Wu, Bin

    2018-01-01

    We presented a numerical method to compute phase dispersion curves in general anisotropic plates. This approach involves an exact solution to the problem in the form of Legendre polynomials of multiple integrals, which we substituted into the state-vector formalism. In order to improve the efficiency of the proposed method, we made a special effort to demonstrate the analytical methodology. Furthermore, we analyzed the algebraic symmetries of the matrices in the state-vector formalism for anisotropic plates. The basic feature of the proposed method is the expansion of field quantities by Legendre polynomials. The Legendre polynomial method avoids solving the transcendental dispersion equation, which can only be solved numerically. The state-vector formalism combined with Legendre polynomial expansion distinguishes adjacent dispersion modes clearly, even when the modes are very close. We then illustrated the theoretical solutions of the dispersion curves obtained by this method for isotropic and anisotropic plates. Finally, we compared the proposed method with the global matrix method (GMM), finding excellent agreement.

  3. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    NASA Astrophysics Data System (ADS)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state are presented. These follow a polynomial form, making it computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with a better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects about the implementation of TEOS-10 in ocean models are discussed.

  4. Disconjugacy, regularity of multi-indexed rationally extended potentials, and Laguerre exceptional polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandati, Y.; Quesne, C.

    2013-07-15

    The power of the disconjugacy properties of second-order differential equations of Schrödinger type to check the regularity of rationally extended quantum potentials connected with exceptional orthogonal polynomials is illustrated by re-examining the extensions of the isotonic oscillator (or radial oscillator) potential derived in kth-order supersymmetric quantum mechanics or the multistep Darboux-Bäcklund transformation method. The function arising in the potential denominator is proved to be a polynomial with a nonvanishing constant term, whose value is calculated by induction over k. The sign of this term being the same as that of the already known highest degree term, the potential denominator has the same sign at both extremities of the definition interval, a property that is shared by the seed eigenfunction used in the potential construction. By virtue of disconjugacy, such a property implies the nodeless character of both the eigenfunction and the resulting potential.

  5. Classical Dynamics of Fullerenes

    NASA Astrophysics Data System (ADS)

    Sławianowski, Jan J.; Kotowski, Romuald K.

    2017-06-01

    The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. The important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in classical dynamics of large molecules and fullerenes enables us to formulate their dynamics in terms of the polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximations by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.

  6. Quadratically Convergent Method for Simultaneously Approaching the Roots of Polynomial Solutions of a Class of Differential Equations

    NASA Astrophysics Data System (ADS)

    Recchioni, Maria Cristina

    2001-12-01

    This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approaching the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients to a particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations which is associated with a given class of differential equations. The roots of the nonlinear equations are related to the roots of the polynomial solutions of the differential equations considered. Newton's method is applied to find the roots of these nonlinear equations. The nonsingularity of these roots was studied in (Pasquini, 1994); in this paper, following the same lines, more favourable results are proven in the particular case of polynomial solutions with symmetrical zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out. We show that the algorithm derived by the proposed method is sometimes preferable to the classical QR type algorithms for computing the eigenvalues of the Jacobi matrices, even if these matrices are real and symmetric.

  7. The expression and comparison of healthy and ptotic upper eyelid contours using a polynomial mathematical function.

    PubMed

    Mocan, Mehmet C; Ilhan, Hacer; Gurcay, Hasmet; Dikmetas, Ozlem; Karabulut, Erdem; Erdener, Ugur; Irkec, Murat

    2014-06-01

    To derive a mathematical expression for the healthy upper eyelid (UE) contour and to use this expression to differentiate the normal UE curve from its abnormal configuration in the setting of blepharoptosis. The study was designed as a cross-sectional study. Fifty healthy subjects (26M/24F) and 50 patients with blepharoptosis (28M/22F) with a margin-reflex distance (MRD1) of ≤2.5 mm were recruited. A polynomial interpolation was used to approximate the UE curve. The polynomial coefficients were calculated from digital eyelid images of all participants using a set of operator-defined points along the UE curve. Coefficients up to the fourth-order polynomial, iris area covered by the UE, iris area covered by the lower eyelid, and total iris area covered by both the upper and the lower eyelids were defined using the polynomial function and used in statistical comparisons. The t-test, Mann-Whitney U test and Spearman's correlation test were used for statistical comparisons. The mathematical expression derived from the data of 50 healthy subjects aged 24.1 ± 2.6 years was defined as y = 22.0915 - 1.3213x + 0.0318x^2 - 0.0005x^3. The fifth and subsequent coefficients were <0.00001 in all cases and were not included in the polynomial function. None of the first four coefficients of the equation were found to be significantly different in male versus female subjects. In normal subjects, the percentage of the iris area covered by the upper and lower lids was 6.46 ± 5.17% and 0.66 ± 1.62%, respectively. All coefficients and the mean iris area covered by the UE were significantly different between healthy and ptotic eyelids. The healthy and abnormal eyelid contour can be defined and differentiated using a polynomial mathematical function.
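    The cubic-contour fit at the heart of this record can be sketched with a least-squares polynomial fit; the sample points below are synthetic, generated from the published curve rather than taken from patient images:

```python
import numpy as np

# Hedged sketch: recover a cubic upper-eyelid contour from digitized
# points, as in the paper's interpolation step. The points are synthetic,
# generated from the published healthy-eyelid curve
# y = 22.0915 - 1.3213 x + 0.0318 x^2 - 0.0005 x^3.

true_coeffs = [-0.0005, 0.0318, -1.3213, 22.0915]  # highest degree first
x = np.linspace(5.0, 45.0, 12)                     # operator-defined points
y = np.polyval(true_coeffs, x)

fit = np.polyfit(x, y, 3)    # least-squares cubic through the points
print(np.round(fit, 4))
```

    With noise-free points the fit recovers the generating coefficients; on real digitized images the least-squares step smooths operator error instead.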

  8. A Semi-Analytical Orbit Propagator Program for Highly Elliptical Orbits

    NASA Astrophysics Data System (ADS)

    Lara, M.; San-Juan, J. F.; Hautesserres, D.

    2016-05-01

    A semi-analytical orbit propagator to study the long-term evolution of spacecraft in Highly Elliptical Orbits is presented. The perturbation model taken into account includes the gravitational effects produced by the first nine zonal harmonics of Earth's gravitational potential and the main tesseral harmonics affecting the 2:1 resonance, which has an impact on Molniya-type orbits; the mass-point approximation for third-body perturbations, which only includes the Legendre polynomial of second order for the sun and the polynomials from second order to sixth order for the moon; solar radiation pressure; and atmospheric drag. Hamiltonian formalism is used to model the forces of gravitational nature; to avoid time-dependence issues, the problem is formulated in the extended phase space. The solar radiation pressure is modeled as a potential and included in the Hamiltonian, whereas the atmospheric drag is added as a generalized force. The semi-analytical theory is developed using perturbation techniques based on Lie transforms. Deprit's perturbation algorithm is applied up to the second order of the second zonal harmonic, J2, including Kozai-type terms in the mean elements Hamiltonian to get "centered" elements. The transformation is developed in closed form of the eccentricity except for tesseral resonances, and the coupling between J2 and the moon's disturbing effects is neglected. This paper describes the semi-analytical theory, the semi-analytical orbit propagator program and some of the numerical validations.

  9. Concentration of the L{sub 1}-norm of trigonometric polynomials and entire functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malykhin, Yu V; Ryutin, K S

    2014-11-30

    For any sufficiently large n, the minimal measure of a subset of [−π,π] on which some nonzero trigonometric polynomial of order ≤n attains half of its L{sub 1}-norm is shown to be π/(n+1). A similar result for entire functions of exponential type is established. Bibliography: 13 titles.

  10. Parametric correlation functions to model the structure of permanent environmental (co)variances in milk yield random regression models.

    PubMed

    Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G

    2009-09-01

    The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day, comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by a fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.
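    The Legendre covariables that such random regression models are built on can be sketched as follows; the days-in-milk range (5-308 d) and the polynomial order are illustrative assumptions, not the paper's data edits:

```python
import numpy as np
from numpy.polynomial import legendre

# Hedged sketch: days in milk (DIM) are standardized to [-1, 1] and the
# Legendre polynomials are evaluated there to form the regression
# covariables. The DIM range and order 4 are assumptions for illustration.

def legendre_covariables(dim, dim_min=5.0, dim_max=308.0, order=4):
    """Return columns P_0(t)..P_order(t) at standardized t in [-1, 1]."""
    t = 2.0 * (np.asarray(dim, float) - dim_min) / (dim_max - dim_min) - 1.0
    return legendre.legvander(t, order)   # shape (n, order + 1)

Z = legendre_covariables([5.0, 156.5, 308.0])   # start, middle, end of lactation
print(np.round(Z, 4))
```

    Each animal's random regression coefficients then multiply these columns, so the covariance between any two days in milk follows from the polynomial basis.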

  11. Random regression models using Legendre orthogonal polynomials to evaluate the milk production of Alpine goats.

    PubMed

    Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T

    2013-12-11

    The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to evaluate Alpine goats genetically and to estimate the parameters for test day milk yield. On the test day, we analyzed 20,710 records of milk yield of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models had combinations of distinct fitting orders for polynomials (2-5), random genetic (1-7), and permanent environmental (1-7) fixed curves and a number of classes for residual variance (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. A random regression model using the best Legendre orthogonal polynomial for genetic evaluation of milk yield on the test day of Alpine goats considered a fixed curve of order 4, curve of genetic additive effects of order 2, curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance because it was the most economical model among those that were equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has more genetic components in relation to the production peak and persistence. It is very important that the evaluation utilizes the best combination of fixed, genetic additive and permanent environmental regressions, and number of classes of heterogeneous residual variance for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of the estimates of parameters and prediction of genetic values.

  12. Study on the mapping of dark matter clustering from real space to redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yi; Song, Yong-Seon, E-mail: yizheng@kasi.re.kr, E-mail: ysong@kasi.re.kr

    The mapping of dark matter clustering from real space to redshift space introduces an anisotropic property to the measured density power spectrum in redshift space, known as the redshift space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher order polynomials due to indefinite cross correlations between the density and velocity fields, and by the Finger-of-God (FoG) effect due to the randomness of the peculiar velocity field. Whilst the full higher order polynomials remain unknown, the other systematics can be controlled consistently within the same order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: 1) the 'one-point' FoG term, which is independent of the separation vector between two different points, and 2) the 'correlated' FoG term, which appears as an indefinite polynomial that is expanded to the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that the Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space at the smallest scales reached so far, up to k ∼ 0.2 Mpc{sup -1}, considering the resolution of future experiments.

  13. Modeling State-Space Aeroelastic Systems Using a Simple Matrix Polynomial Approach for the Unsteady Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2008-01-01

    A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane and ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial form of the flutter equations of motion (EOM) is formed. A technique for recasting the matrix-polynomial form of the flutter EOM into a first-order form is also presented that can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM has been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are also presented, which are the following: first, the "workhorse" methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin or along the real axis, or to accurately find roots farther away from the imaginary axis of the complex plane; and, second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits of some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC-Roger's s-plane method and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.

  14. Noncommutative Differential Geometry of Generalized Weyl Algebras

    NASA Astrophysics Data System (ADS)

    Brzeziński, Tomasz

    2016-06-01

    Elements of noncommutative differential geometry of Z-graded generalized Weyl algebras A(p;q) over the ring of polynomials in two variables and their zero-degree subalgebras B(p;q), which themselves are generalized Weyl algebras over the ring of polynomials in one variable, are discussed. In particular, three classes of skew derivations of A(p;q) are constructed, and three-dimensional first-order differential calculi induced by these derivations are described. The associated integrals are computed and it is shown that the dimension of the integral space coincides with the order of the defining polynomial p(z). It is proven that the restriction of these first-order differential calculi to the calculi on B(p;q) is isomorphic to the direct sum of degree 2 and degree -2 components of A(p;q). A Dirac operator for B(p;q) is constructed from a (strong) connection with respect to this differential calculus on the (free) spinor bimodule defined as the direct sum of degree 1 and degree -1 components of A(p;q). The real structure of KO-dimension two for this Dirac operator is also described.

  15. Predicting Physical Time Series Using Dynamic Ridge Polynomial Neural Networks

    PubMed Central

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has a wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called the Dynamic Ridge Polynomial Neural Network, which combines the properties of higher order and recurrent neural networks, for the prediction of physical time series. In this study, four types of signals have been used: the Lorenz attractor, the mean value of the AE index, the sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal-to-noise ratio in comparison to a number of benchmarked higher order and feedforward neural networks. PMID:25157950

  16. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique using an interpolation approach plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performance of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatility of options with respect to strike price/delta is interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND using a fourth-order polynomial is more appropriate than using a smoothing spline, in that the fourth-order polynomial gives the lowest mean square error (MSE). The results can help market participants capture market expectations of future developments of the underlying asset.
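    The fourth-order polynomial step can be sketched as below; the strikes and implied volatilities are invented for illustration, not DJIA option data, and strikes are rescaled before fitting to keep the system well conditioned:

```python
import numpy as np

# Hedged sketch: fit a fourth-order polynomial across the implied
# volatility smile. A full RND extraction would then differentiate the
# fitted call prices twice (Breeden-Litzenberger); the data are invented.

strikes = np.array([90.0, 95.0, 100.0, 105.0, 110.0])
ivols   = np.array([0.26, 0.22, 0.20, 0.21, 0.24])   # a typical smile shape

x = (strikes - 100.0) / 10.0          # rescale strikes for conditioning
coeffs = np.polyfit(x, ivols, 4)      # degree 4: exact through 5 points
smooth = np.polyval(coeffs, 0.25)     # interpolated vol at strike 102.5
print(round(float(smooth), 4))
```

    Interpolating the volatility smile rather than raw option prices is what keeps the implied density well behaved between observed strikes.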

  17. Predicting physical time series using dynamic ridge polynomial neural networks.

    PubMed

    Al-Jumeily, Dhiya; Ghazali, Rozaida; Hussain, Abir

    2014-01-01

    Forecasting naturally occurring phenomena is a common problem in many domains of science, and this has been addressed and investigated by many scientists. The importance of time series prediction stems from the fact that it has wide range of applications, including control systems, engineering processes, environmental systems and economics. From the knowledge of some aspects of the previous behaviour of the system, the aim of the prediction process is to determine or predict its future behaviour. In this paper, we consider a novel application of a higher order polynomial neural network architecture called Dynamic Ridge Polynomial Neural Network that combines the properties of higher order and recurrent neural networks for the prediction of physical time series. In this study, four types of signals have been used, which are; The Lorenz attractor, mean value of the AE index, sunspot number, and heat wave temperature. The simulation results showed good improvements in terms of the signal to noise ratio in comparison to a number of higher order and feedforward neural networks in comparison to the benchmarked techniques.

  18. On Using Homogeneous Polynomials To Design Anisotropic Yield Functions With Tension/Compression Symmetry/Assymetry

    NASA Astrophysics Data System (ADS)

    Soare, S.; Yoon, J. W.; Cazacu, O.

    2007-05-01

    With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling the yielding properties of metallic materials with any crystal structure, i.e., both cubic and hexagonal, including those which display strength differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090-T3. We prove that a sixth-order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines in the commercial FE code ABAQUS. We were able to predict six ears on the AA2090-T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.

  19. On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland W.

    1992-01-01

    The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.
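    The standard extreme-eigenvalue approach that this record improves upon can be sketched as a Chebyshev polynomial preconditioner; the matrix, eigenvalue bounds, and step count below are toy assumptions:

```python
import numpy as np

# Hedged sketch of the non-adaptive baseline: a Chebyshev polynomial
# preconditioner built only from bounds [a, b] on the extreme eigenvalues
# of an SPD matrix A. Applying it runs m steps of Chebyshev iteration on
# A z = r starting from z = 0, i.e. z = p_m(A) r for a fixed polynomial.

def chebyshev_precondition(A, r, a, b, m=20):
    theta, delta = 0.5 * (b + a), 0.5 * (b - a)
    sigma = theta / delta
    rho_old = 1.0 / sigma
    d = r / theta                  # first Chebyshev correction
    z = d.copy()
    for _ in range(m - 1):
        rho = 1.0 / (2.0 * sigma - rho_old)
        d = rho * rho_old * d + (2.0 * rho / delta) * (r - A @ z)
        z = z + d
        rho_old = rho
    return z

A = np.diag([1.0, 2.0, 5.0])       # toy SPD matrix with known spectrum
r = np.ones(3)
z = chebyshev_precondition(A, r, a=1.0, b=5.0, m=30)
print(z)                            # approaches A^{-1} r as m grows
```

    The adaptive method in the abstract goes further: instead of this fixed Chebyshev weight from two eigenvalue bounds, it estimates the eigenvalue distribution from a few Lanczos steps and solves a weighted approximation problem.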

  20. Stable multi-domain spectral penalty methods for fractional partial differential equations

    NASA Astrophysics Data System (ADS)

    Xu, Qinwu; Hesthaven, Jan S.

    2014-01-01

    We propose stable multi-domain spectral penalty methods suitable for solving fractional partial differential equations with fractional derivatives of any order. First, a high order discretization is proposed to approximate fractional derivatives of any order on any given grids based on orthogonal polynomials. The approximation order is analyzed and verified through numerical examples. Based on the discrete fractional derivative, we introduce stable multi-domain spectral penalty methods for solving fractional advection and diffusion equations. The equations are discretized in each sub-domain separately and the global schemes are obtained by weakly imposed boundary and interface conditions through a penalty term. Stability of the schemes are analyzed and numerical examples based on both uniform and nonuniform grids are considered to highlight the flexibility and high accuracy of the proposed schemes.

  1. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    PubMed

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationships of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) were evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model can be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
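    The individual load-velocity modeling can be sketched as follows; all numbers, including the assumed minimal velocity at 1RM, are invented for illustration and are not the study's data:

```python
import numpy as np

# Hedged sketch: fit %1RM against mean concentric velocity with first-
# and second-order polynomials, then read off the predicted relative load
# at an assumed minimal velocity at 1RM (v_1rm = 0.17 m/s, an assumption).

pct_1rm  = np.array([40.0, 55.0, 70.0, 85.0, 95.0])
velocity = np.array([1.20, 0.95, 0.70, 0.45, 0.28])    # m/s, near-linear

lin  = np.polyfit(velocity, pct_1rm, 1)   # first-order model
quad = np.polyfit(velocity, pct_1rm, 2)   # second-order model

v_1rm = 0.17
print(np.polyval(lin, v_1rm), np.polyval(quad, v_1rm))
```

    With near-linear data like this, the quadratic term adds little, which mirrors the finding that the first-order model is the more reliable choice.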

  2. Reduced-order modeling with sparse polynomial chaos expansion and dimension reduction for evaluating the impact of CO2 and brine leakage on groundwater

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zheng, L.; Pau, G. S. H.

    2016-12-01

    A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there has been increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and intends to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM on which the ROM is based is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and the sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time, iteratively selecting the most contributing coefficients.
The computational complexity of predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
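    The POD step on its own can be sketched with the SVD of a snapshot matrix; the snapshots below are random stand-ins with planted low-rank structure, not reactive-transport model output:

```python
import numpy as np

# Hedged sketch of POD: columns of the snapshot matrix are flattened 3-D
# concentration fields from individual model runs (random stand-ins with
# planted rank-5 structure here). Keep the leading left singular vectors
# that capture 99% of the snapshot energy.

rng = np.random.default_rng(0)
spatial_modes = rng.standard_normal((500, 5))       # hidden structure
mode_weights  = rng.standard_normal((5, 40))
snapshots = spatial_modes @ mode_weights + 0.01 * rng.standard_normal((500, 40))

U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
energy = np.cumsum(s**2) / np.sum(s**2)
k = int(np.searchsorted(energy, 0.99)) + 1          # modes for 99% energy

basis = U[:, :k]                                    # POD basis
reconstruction = basis @ (basis.T @ snapshots)      # reduced representation
rel_err = np.linalg.norm(snapshots - reconstruction) / np.linalg.norm(snapshots)
print(k, rel_err)
```

    The surrogate then only needs to predict the handful of POD coefficients per run instead of every cell of the 3-D field.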

  3. Application of overlay modeling and control with Zernike polynomials in an HVM environment

    NASA Astrophysics Data System (ADS)

    Ju, JaeWuk; Kim, MinGyu; Lee, JuHan; Nabeth, Jeremy; Jeon, Sanghuck; Heo, Hoyoung; Robinson, John C.; Pierson, Bill

    2016-03-01

    Shrinking technology nodes and smaller process margins require improved photolithography overlay control. Generally, overlay measurement results are modeled with Cartesian polynomial functions for both intra-field and inter-field models, and the model coefficients are sent to an advanced process control (APC) system operating in an XY Cartesian basis. Dampened overlay corrections, typically via an exponentially or linearly weighted moving average in time, are then retrieved from the APC system to apply on the scanner in XY Cartesian form for subsequent lot exposure. The goal of the above method is to process lots with corrections that target the least possible overlay misregistration in steady state as well as in change-point situations. In this study, we model overlay errors on product using Zernike polynomials with the same fitting capability as the process of reference (POR) to represent the wafer-level terms, and use the standard Cartesian polynomials to represent the field-level terms. APC calculations for wafer-level correction are performed in the Zernike basis, while field-level calculations use the standard XY Cartesian basis. Finally, weighted wafer-level correction terms are converted to XY Cartesian space in order to be applied on the scanner, along with field-level corrections, for future wafer exposures. Since Zernike polynomials have the property of being orthogonal in the unit disk, we are able to reduce the amount of collinearity between terms and improve overlay stability. Our real-time Zernike modeling and feedback evaluation was performed on a 20-lot dataset in a high volume manufacturing (HVM) environment. The measured on-product results were compared to the POR and showed a 7% reduction in overlay variation, including a 22% reduction in terms variation. This led to an on-product raw overlay Mean + 3Sigma X&Y improvement of 5% and resulted in a 0.1% yield improvement.
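    The orthogonality property invoked above can be checked numerically with a few low-order Zernike terms; the term selection and grid are illustrative, not the POR model set:

```python
import numpy as np

# Hedged sketch: three low-order Zernike terms (piston, x-tilt, defocus)
# and a numerical check that they are orthogonal over the unit disk,
# which is what reduces collinearity between fitted wafer-level terms.

rho = np.linspace(1e-6, 1.0, 400)
theta = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
R, T = np.meshgrid(rho, theta)

z_piston  = np.ones_like(R)
z_tilt_x  = 2.0 * R * np.cos(T)
z_defocus = np.sqrt(3.0) * (2.0 * R**2 - 1.0)

def disk_inner(f, g):
    """Riemann-sum inner product over the unit disk (area element rho)."""
    return np.sum(f * g * R) * (rho[1] - rho[0]) * (theta[1] - theta[0])

print(disk_inner(z_piston, z_tilt_x), disk_inner(z_tilt_x, z_defocus))
```

    Because the cross inner products vanish, each Zernike coefficient can be fitted nearly independently of the others, unlike plain x, y monomial terms on the wafer.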

  4. Fast Implicit Methods For Elliptic Moving Interface Problems

    DTIC Science & Technology

    2015-12-11

    A fast algorithm was derived, analyzed, and tested for the Fourier transform of piecewise polynomials given on d-dimensional simplices in D-dimensional Euclidean space. These transforms...evaluation, and one to three orders of magnitude slower than the classical uniform Fast Fourier Transform. Second, bilinear quadratures---which...

  5. Polynomial Solutions of Nth Order Non-Homogeneous Differential Equations

    ERIC Educational Resources Information Center

    Levine, Lawrence E.; Maleh, Ray

    2002-01-01

    It was shown by Costa and Levine that the homogeneous differential equation (1 - x^N)y^(N) + A_(N-1) x^(N-1) y^(N-1) + A_(N-2) x^(N-2) y^(N-2) + ... + A_1 x y' + A_0 y = 0 has a finite polynomial solution if and only if [for…

  6. A new model for estimating total body water from bioelectrical resistance

    NASA Technical Reports Server (NTRS)

    Siconolfi, S. F.; Kear, K. T.

    1992-01-01

    Estimation of total body water (T) from bioelectrical resistance (R) is commonly done by stepwise regression models with height squared over R, H^2/R, age, sex, and weight (W). Polynomials of H^2/R have not been included in these models. We examined the validity of a model with third order polynomials and W. Methods: T was measured with oxygen-18 labeled water in 27 subjects. R at 50 kHz was obtained from electrodes placed on the hand and foot while subjects were in the supine position. A stepwise regression equation was developed with 13 subjects (age 31.5 plus or minus 6.2 years, T 38.2 plus or minus 6.6 L, W 65.2 plus or minus 12.0 kg). Correlations, standard error of estimates and mean differences were computed between T and estimated T's from the new (N) model and other models. Evaluations were completed with the remaining 14 subjects (age 32.4 plus or minus 6.3 years, T 40.3 plus or minus 8 L, W 70.2 plus or minus 12.3 kg) and two of its subgroups (high and low). Results: A regression equation was developed from the model. The only significant mean difference was between T and one of the earlier models. Conclusion: Third order polynomials in regression models may increase the accuracy of estimating total body water. Evaluating the model with a larger population is needed.
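
The model form described above can be sketched in a few lines. All values below are synthetic stand-ins for the measured data (H2R = height squared over resistance, W = weight, T = total body water in liters; the coefficients and noise level are illustrative assumptions):

```python
import numpy as np

# Entirely synthetic stand-ins for the study's 27 subjects.
rng = np.random.default_rng(1)
H2R = rng.uniform(40.0, 80.0, 27)            # height^2 / resistance
W = rng.uniform(50.0, 90.0, 27)              # weight, kg
T = 0.45 * H2R + 0.12 * W + rng.normal(0.0, 1.0, 27)  # assumed toy relation

# Design matrix with first-, second-, and third-order H2R terms plus W.
X = np.column_stack([np.ones_like(H2R), H2R, H2R**2, H2R**3, W])
beta, *_ = np.linalg.lstsq(X, T, rcond=None)

# Standard error of estimate, as reported in the abstract's evaluation.
T_hat = X @ beta
see = np.sqrt(np.sum((T - T_hat) ** 2) / (len(T) - X.shape[1]))
print(see)
```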

  7. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials and the parameters of these polynomials (i.e. coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches and are based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered as a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely a valid assumption.
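
The multicollinearity of the fourth-order band-ratio form can be illustrated directly: powers of the log band ratio are strongly correlated over any plausible range. The band-ratio values below are synthetic and the range is an assumption:

```python
import numpy as np

# Synthetic log10 blue-green band-ratio values over an assumed range.
rng = np.random.default_rng(0)
R = rng.uniform(-0.5, 0.5, 200)
X = np.column_stack([R, R**2, R**3, R**4])   # fourth-order polynomial terms

corr = np.corrcoef(X, rowvar=False)
print(corr[0, 2])            # R vs R^3: strongly correlated
print(np.linalg.cond(X))     # ill-conditioned design matrix
```

PLS sidesteps this by regressing on a small set of mutually orthogonal components extracted from these correlated columns.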

  8. High Order Accuracy Methods for Supersonic Reactive Flows

    DTIC Science & Technology

    2008-06-25

    k = 0, · · · , N and N is the polynomial order used. The positive constant M is chosen such that σ(N) becomes machine zero. Typically M ∼ 32. Table...function used in this study is the exponential filter given by σ(η) = exp(−αη^p), where α = −ln(ε) and ε is machine zero. The spectral...respectively. All numerical experiments were run on a 667 MHz Compaq Alpha machine with 1GB memory and with an Alpha internal floating point processor.

  9. Efficient conservative ADER schemes based on WENO reconstruction and space-time predictor in primitive variables

    NASA Astrophysics Data System (ADS)

    Zanotti, Olindo; Dumbser, Michael

    2016-01-01

    We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. 
In all cases we have noticed that the new ADER schemes provide less oscillatory solutions when compared to ADER finite volume schemes based on the reconstruction in conserved variables, especially for the RMHD and the Baer-Nunziato equations. For the RHD and RMHD equations, the overall accuracy is improved and the CPU time is reduced by about 25%. Because of its increased accuracy and due to the reduced computational cost, we recommend using this version of ADER as the standard one in the relativistic framework. At the end of the paper, the new approach has also been extended to ADER-DG schemes on space-time adaptive grids (AMR).
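
The single conserved-to-primitive conversion per cell center that the scheme relies on is, for the classical 1D Euler equations, the following elementary map (a textbook sketch, not the relativistic version used in the paper):

```python
import numpy as np

def cons_to_prim(U, gamma=1.4):
    """Conserved (rho, momentum, total energy) -> primitive (rho, v, p)."""
    rho, m, E = U
    v = m / rho
    p = (gamma - 1.0) * (E - 0.5 * rho * v**2)
    return np.array([rho, v, p])

def prim_to_cons(V, gamma=1.4):
    """Primitive (rho, v, p) -> conserved (rho, momentum, total energy)."""
    rho, v, p = V
    E = p / (gamma - 1.0) + 0.5 * rho * v**2
    return np.array([rho, rho * v, E])

V = np.array([1.0, 0.5, 2.0])        # rho, v, p
print(cons_to_prim(prim_to_cons(V)))
```

For relativistic MHD this map has no closed form and must be inverted iteratively, which is why reducing the number of conversions pays off there.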

  10. Higher-order Fourier analysis over finite fields and applications

    NASA Astrophysics Data System (ADS)

    Hatami, Pooya

    Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where the traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N, is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. 
We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and assuming that |F_p| is sufficiently large, they are essentially equivalent to either a Gowers norm or L_p norms.

  11. A ROM-Less Direct Digital Frequency Synthesizer Based on Hybrid Polynomial Approximation

    PubMed Central

    Omran, Qahtan Khalaf; Islam, Mohammad Tariqul; Misran, Norbahiah; Faruque, Mohammad Rashed Iqbal

    2014-01-01

    In this paper, a novel design approach for a phase to sinusoid amplitude converter (PSAC) has been investigated. Two segments have been used to approximate the first sine quadrant. A first linear segment is used to fit the region near the zero point, while a second fourth-order parabolic segment is used to approximate the rest of the sine curve. The phase sample, where the polynomial changed, was chosen in such a way as to achieve the maximum spurious free dynamic range (SFDR). The proposed direct digital frequency synthesizer (DDFS) has been encoded in VHDL and verified by post-synthesis simulation. The synthesized architecture exhibits a promising result of 90 dBc SFDR. The targeted structure is expected to offer a perceptible reduction in hardware resources and power consumption, as well as high clock speeds. PMID:24892092
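
A rough sketch of the two-segment idea: fit a line near zero and a fourth-order polynomial over the remainder of the first sine quadrant. The break point and the least-squares fitting used here are illustrative assumptions, not the SFDR-optimized design of the paper:

```python
import numpy as np

# First sine quadrant, finely sampled.
phase = np.linspace(0.0, np.pi / 2, 1000)
split = 100                 # assumed break point (not the paper's value)

# Segment 1: linear fit near zero; segment 2: fourth-order polynomial.
p_lin = np.polyfit(phase[:split], np.sin(phase[:split]), 1)
p_par = np.polyfit(phase[split:], np.sin(phase[split:]), 4)

approx = np.concatenate([np.polyval(p_lin, phase[:split]),
                         np.polyval(p_par, phase[split:])])
max_err = np.max(np.abs(approx - np.sin(phase)))
print(max_err)              # worst-case amplitude error of the hybrid fit
```

In the actual PSAC these polynomial coefficients become fixed-point constants, trading the sine ROM for a few multipliers and adders.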

  12. A High-Order Immersed Boundary Method for Acoustic Wave Scattering and Low-Mach Number Flow-Induced Sound in Complex Geometries

    PubMed Central

    Seo, Jung Hee; Mittal, Rajat

    2010-01-01

    A new sharp-interface immersed boundary method based approach for the computation of low-Mach number flow-induced sound around complex geometries is described. The underlying approach is based on a hydrodynamic/acoustic splitting technique where the incompressible flow is first computed using a second-order accurate immersed boundary solver. This is followed by the computation of sound using the linearized perturbed compressible equations (LPCE). The primary contribution of the current work is the development of a versatile, high-order accurate immersed boundary method for solving the LPCE in complex domains. This new method imposes the boundary condition on the immersed boundary to high order by combining the ghost-cell approach with a weighted least-squares error method based on a high-order approximating polynomial. The method is validated for canonical acoustic wave scattering and flow-induced noise problems. Applications of this technique to relatively complex cases of practical interest are also presented. PMID:21318129

  13. Final Report - Subcontract B623760

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bank, R.

    2017-11-17

    During my visit to LLNL during July 17-27, 2017, I worked on linear system solvers. The two-level hierarchical solver that initiated our study was developed to solve linear systems arising from hp adaptive finite element calculations, and is implemented in the PLTMG software package, version 12. This preconditioner typically requires 3-20% of the space used by the stiffness matrix for higher order elements. It has multigrid-like convergence rates for a wide variety of PDEs (self-adjoint positive definite elliptic equations, convection dominated convection-diffusion equations, and highly indefinite Helmholtz equations, among others). The convergence rate is not independent of the polynomial degree p as p → ∞, but remains strong for p ≤ 9, which is the highest polynomial degree allowed in PLTMG, due to limitations of the numerical quadrature rules implemented in the software package. A more complete description of the method and some numerical experiments illustrating its effectiveness appear elsewhere. Like traditional geometric multilevel methods, this scheme relies on knowledge of the underlying finite element space in order to construct the smoother and the coarse grid correction.

  14. Discontinuous Skeletal Gradient Discretisation methods on polytopal meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Pietro, Daniele A.; Droniou, Jérôme; Manzini, Gianmarco

    Here, in this work we develop arbitrary-order Discontinuous Skeletal Gradient Discretisations (DSGD) on general polytopal meshes. Discontinuous Skeletal refers to the fact that the globally coupled unknowns are broken polynomials on the mesh skeleton. The key ingredient is a high-order gradient reconstruction composed of two terms: (i) a consistent contribution obtained mimicking an integration by parts formula inside each element and (ii) a stabilising term for which sufficient design conditions are provided. An example of stabilisation that satisfies the design conditions is proposed based on a local lifting of high-order residuals on a Raviart–Thomas–Nédélec subspace. We prove that the novel DSGDs satisfy coercivity, consistency, limit-conformity, and compactness requirements that ensure convergence for a variety of elliptic and parabolic problems. Lastly, links with Hybrid High-Order, non-conforming Mimetic Finite Difference and non-conforming Virtual Element methods are also studied. Numerical examples complete the exposition.

  15. Discontinuous Skeletal Gradient Discretisation methods on polytopal meshes

    DOE PAGES

    Di Pietro, Daniele A.; Droniou, Jérôme; Manzini, Gianmarco

    2017-11-21

    Here, in this work we develop arbitrary-order Discontinuous Skeletal Gradient Discretisations (DSGD) on general polytopal meshes. Discontinuous Skeletal refers to the fact that the globally coupled unknowns are broken polynomials on the mesh skeleton. The key ingredient is a high-order gradient reconstruction composed of two terms: (i) a consistent contribution obtained mimicking an integration by parts formula inside each element and (ii) a stabilising term for which sufficient design conditions are provided. An example of stabilisation that satisfies the design conditions is proposed based on a local lifting of high-order residuals on a Raviart–Thomas–Nédélec subspace. We prove that the novel DSGDs satisfy coercivity, consistency, limit-conformity, and compactness requirements that ensure convergence for a variety of elliptic and parabolic problems. Lastly, links with Hybrid High-Order, non-conforming Mimetic Finite Difference and non-conforming Virtual Element methods are also studied. Numerical examples complete the exposition.

  16. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
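
A minimal Chebyshev iteration of the kind used as a polynomial smoother, written here for a 1D Poisson matrix. In an actual multigrid smoother only the upper part of the spectrum would be targeted; the exact eigenvalue bounds below are for illustration only:

```python
import numpy as np

n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # 1D Poisson matrix
b = np.ones(n)

# Exact spectral interval of this matrix (illustration; a multigrid
# smoother would damp only the oscillatory, upper part of the spectrum).
lmin = 2.0 * (1.0 - np.cos(np.pi / (n + 1)))
lmax = 2.0 * (1.0 - np.cos(n * np.pi / (n + 1)))

def chebyshev_iterate(A, b, x, lmin, lmax, k):
    theta = 0.5 * (lmax + lmin)      # center of the damped interval
    delta = 0.5 * (lmax - lmin)      # half-width of the interval
    sigma = theta / delta
    rho = 1.0 / sigma
    d = (b - A @ x) / theta
    x = x + d
    for _ in range(k - 1):
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * (b - A @ x)
        rho = rho_new
        x = x + d
    return x

x = chebyshev_iterate(A, b, np.zeros(n), lmin, lmax, k=60)
print(np.linalg.norm(b - A @ x) / np.linalg.norm(b))
```

Note that each sweep needs only matrix-vector products and vector updates, with no inner products and no sequential dependence between rows, which is the parallel advantage over Gauss-Seidel discussed above.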

  17. Two-dimensional orthonormal trend surfaces for prospecting

    NASA Astrophysics Data System (ADS)

    Sarma, D. D.; Selvaraj, J. B.

    Orthonormal polynomials have distinct advantages over conventional polynomials: the equations for evaluating trend coefficients are not ill-conditioned and the convergence power of this method is greater compared to the least-squares approximation and therefore the approach by orthonormal functions provides a powerful alternative to the least-squares method. In this paper, orthonormal polynomials in two dimensions are obtained using the Gram-Schmidt method for a polynomial series of the type: Z = 1 + x + y + x^2 + xy + y^2 + … + y^n, where x and y are the locational coordinates and Z is the value of the variable under consideration. Trend-surface analysis, which has wide applications in prospecting, has been carried out using the orthonormal polynomial approach for two sample sets of data from India concerned with gold accumulation from the Kolar Gold Field, and gravity data. A comparison of the orthonormal polynomial trend surfaces with those obtained by the classical least-squares method has been made for the two data sets. In both the situations, the orthonormal polynomial surfaces gave an improved fit to the data. A flowchart and a FORTRAN-IV computer program for deriving orthonormal polynomials of any order and for using them to fit trend surfaces is included. The program has provision for logarithmic transformation of the Z variable. If log-transformation is performed the predicted Z values are reconverted to the original units and the trend-surface map generated for use. The illustration of gold assay data related to the Champion lode system of Kolar Gold Fields, for which a 9th-degree orthonormal trend surface was fit, could be used for further prospecting the area.
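
Numerically, Gram-Schmidt orthonormalization of the monomial series over the sample locations is equivalent to a QR factorization of the monomial design matrix, which gives a compact way to sketch the idea (sample points and the trend variable below are synthetic):

```python
import numpy as np

# Synthetic sample locations and a degree-2 trend variable.
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, 60)
y = rng.uniform(0.0, 1.0, 60)
z = 1.0 + 2.0 * x - y + 0.5 * x * y

# Monomial series 1, x, y, x^2, xy, y^2 evaluated at the data points.
M = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])

# QR factorization = Gram-Schmidt: columns of Q are orthonormal over
# the sample points, so trend coefficients are plain dot products.
Q, R = np.linalg.qr(M)
c = Q.T @ z
z_trend = Q @ c
print(np.max(np.abs(z_trend - z)))   # degree-2 trend reproduced exactly
```

Because the orthonormal coefficients are independent dot products, adding a higher-order term never destabilizes the coefficients already computed, which is the conditioning advantage cited above.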

  18. Venus radar mapper attitude reference quaternion

    NASA Technical Reports Server (NTRS)

    Lyons, D. T.

    1986-01-01

    Polynomial functions of time are used to specify the components of the quaternion which represents the nominal attitude of the Venus Radar mapper spacecraft during mapping. The following constraints must be satisfied in order to obtain acceptable synthetic array radar data: the nominal attitude function must have a large dynamic range, the sensor orientation must be known very accurately, the attitude reference function must use as little memory as possible, and the spacecraft must operate autonomously. Fitting polynomials to the components of the desired quaternion function is a straightforward method for providing a very dynamic nominal attitude using a minimum amount of on-board computer resources. Although the attitude from the polynomials may not be exactly the one requested by the radar designers, the polynomial coefficients are known, so they do not contribute to the attitude uncertainty. Frequent coefficient updates are not required, so the spacecraft can operate autonomously.
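
A minimal sketch of the approach: fit a low-order polynomial in time to each quaternion component, then renormalize after evaluation, since polynomial evaluation does not preserve unit norm. The slew profile and polynomial order below are synthetic illustrations:

```python
import numpy as np

# Synthetic slow slew about the z-axis; q = [cos(a/2), 0, 0, sin(a/2)].
t = np.linspace(0.0, 1.0, 50)
angle = 0.3 * t + 0.05 * t**2
q = np.column_stack([np.cos(angle / 2), np.zeros_like(t),
                     np.zeros_like(t), np.sin(angle / 2)])

# One cubic per quaternion component: the on-board attitude reference
# reduces to 16 coefficients instead of a stored time series.
coeffs = [np.polyfit(t, q[:, i], 3) for i in range(4)]

t_eval = 0.37
q_eval = np.array([np.polyval(ci, t_eval) for ci in coeffs])
q_eval /= np.linalg.norm(q_eval)     # restore unit norm after evaluation
print(q_eval)
```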

  19. Polynomial equations for science orbits around Europa

    NASA Astrophysics Data System (ADS)

    Cinelli, Marco; Circi, Christian; Ortore, Emiliano

    2015-07-01

    In this paper, the design of science orbits for the observation of a celestial body has been carried out using polynomial equations. The effects related to the main zonal harmonics of the celestial body and the perturbation deriving from the presence of a third celestial body have been taken into account. The third body describes a circular and equatorial orbit with respect to the primary body and, for its disturbing potential, an expansion in Legendre polynomials up to the second order has been considered. These polynomial equations allow the determination of science orbits around Jupiter's satellite Europa, where the third body gravitational attraction represents one of the main forces influencing the motion of an orbiting probe. Thus, the retrieved relationships have been applied to this moon and periodic sun-synchronous and multi-sun-synchronous orbits have been determined. Finally, numerical simulations have been carried out to validate the analytical results.
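
The second-order Legendre truncation mentioned above keeps only the P_2 term of the third-body disturbing potential. As a small check of the polynomial itself against NumPy's Legendre evaluation:

```python
import numpy as np
from numpy.polynomial import legendre

def P2(c):
    """Second-degree Legendre polynomial, P_2(c) = (3c^2 - 1)/2."""
    return 0.5 * (3.0 * c**2 - 1.0)

c = np.linspace(-1.0, 1.0, 11)
# The coefficient vector [0, 0, 1] selects P_2 in numpy's Legendre basis.
print(np.max(np.abs(P2(c) - legendre.legval(c, [0, 0, 1]))))
```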

  20. A Unified Matrix Polynomial Approach to Modal Identification

    NASA Astrophysics Data System (ADS)

    Allemang, R. J.; Brown, D. L.

    1998-04-01

    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. Particularly, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  1. Using Chebyshev polynomials and approximate inverse triangular factorizations for preconditioning the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Kaporin, I. E.

    2012-02-01

    In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
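
For context, the preconditioned CG skeleton being accelerated has the familiar shape below; a simple scaled-identity (Jacobi-type) approximate inverse stands in for the paper's sparse triangular-factor approximate inverse, so this is a sketch of the framework, not the paper's preconditioner:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxit=1000):
    """Preconditioned conjugate gradients; M_inv approximates inv(A)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for k in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv @ r                  # the only preconditioner application
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

n = 100
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # SPD test matrix
b = np.ones(n)
x, iters = pcg(A, b, np.eye(n) / 2.0)                  # Jacobi stand-in
print(iters)
```

Each iteration costs one matrix-vector product, one preconditioner application, and two inner products; the Chebyshev device of the paper targets precisely those inner products, which are the synchronization points in a massively parallel setting.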

  2. Cosmographic analysis with Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    The limits of standard cosmography are here revised addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagations with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To figure this out, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and we compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which convergence of Chebyshev rational functions is better than standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques, based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.

  3. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with that of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessarily high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of the gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
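
The regression route to the expansion coefficients (gPCE-R) can be sketched in a few lines for a single standard-normal input and probabilists' Hermite polynomials; the toy model, truncation order, and sample size are illustrative assumptions:

```python
import numpy as np

# Toy model y = exp(0.3*xi) of one standard-normal input xi.
rng = np.random.default_rng(0)
xi = rng.standard_normal(400)
y = np.exp(0.3 * xi)

# Design matrix of probabilists' Hermite polynomials He_0..He_3.
H = np.column_stack([np.ones_like(xi), xi, xi**2 - 1.0, xi**3 - 3.0 * xi])
c, *_ = np.linalg.lstsq(H, y, rcond=None)      # gPCE-R: least squares

# He_k has variance k! under the standard-normal weight, so the
# metamodel variance follows directly from the coefficients.
var = c[1]**2 * 1 + c[2]**2 * 2 + c[3]**2 * 6
print(c[0], var)
```

For this lognormal toy case the exact mean is exp(0.045), which the leading coefficient should approximate; variance-based sensitivity indices are then read off the coefficients without further model runs.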

  4. A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Qiu, Jianxian

    2017-11-01

    In this paper a third order finite volume weighted essentially non-oscillatory scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed on tetrahedral meshes, the crucial advantages of this new WENO scheme are its simplicity and compactness with the application of only six unequal size spatial stencils for reconstructing unequal degree polynomials in the WENO type spatial procedures, and easy choice of the positive linear weights without considering the topology of the meshes. The original innovation of this scheme is to use a quadratic polynomial defined on a big central spatial stencil for obtaining third order numerical approximation at any points inside the target tetrahedral cell in smooth regions and switch to at least one of five linear polynomials defined on small biased/central spatial stencils for sustaining sharp shock transitions and keeping the essentially non-oscillatory property simultaneously. By performing such new procedures in spatial reconstructions and adopting a third order TVD Runge-Kutta time discretization method for solving the ordinary differential equation (ODE), the new scheme's memory occupancy is decreased and the computing efficiency is increased. So it is suitable for large scale engineering requirements on tetrahedral meshes. Some numerical results are provided to illustrate the good performance of this scheme.
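
A 1D analogue may clarify the reconstruction machinery: the quadratic polynomial matching three neighbouring cell averages yields the classic third-order interface formula, whose order of accuracy can be verified numerically. This is a sketch of the generic reconstruction idea, not the tetrahedral scheme itself:

```python
import numpy as np

def sin_interface_value(h):
    """Third-order interface value at x = h/2 from exact cell averages
    of f = sin over three uniform cells of width h."""
    centers = np.array([-h, 0.0, h])
    avg = (np.cos(centers - h / 2) - np.cos(centers + h / 2)) / h
    # quadratic-reconstruction formula at the right face of the middle cell
    return (-avg[0] + 5.0 * avg[1] + 2.0 * avg[2]) / 6.0

errs = [abs(sin_interface_value(h) - np.sin(h / 2)) for h in (0.1, 0.05)]
order = np.log2(errs[0] / errs[1])
print(order)   # observed convergence order of the reconstruction
```

Halving the mesh width should cut the error by about a factor of eight, i.e. an observed order close to three.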

  5. Geometrical Theory of Spherical Harmonics for Geosciences

    NASA Astrophysics Data System (ADS)

    Svehla, Drazen

    2010-05-01

    Spherical harmonics play a central role in the modelling of spatial and temporal processes in the system Earth. The gravity field of the Earth and its temporal variations, sea surface topography, geomagnetic field, ionosphere etc., are just a few examples where spherical harmonics are used to represent processes in the system Earth. We introduce a novel method for the computation and rotation of spherical harmonics, Legendre polynomials and associated Legendre functions without making use of recursive relations. This novel geometrical approach allows calculation of spherical harmonics without any numerical instability up to an arbitrary degree and order, e.g. up to degree and order 10^6 and beyond. The algorithm is based on the trigonometric reduction of Legendre polynomials and the geometric rotation in hyperspace. It is shown that Legendre polynomials can be computed using trigonometric series by pre-computing amplitudes and translation terms for all angular arguments. It is shown that they can be treated as vectors in the Hilbert hyperspace leading to unitary hermitian rotation matrices with geometric properties. Thus, rotation of spherical harmonics about e.g. a polar or an equatorial axis can be represented in a similar way. This novel method allows stable calculation of spherical harmonics up to an arbitrary degree and order, i.e. up to degree and order 10^6 and beyond.
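
For contrast with the non-recursive approach described above, the standard three-term (Bonnet) recurrence that it replaces is:

```python
import numpy as np

def legendre_recurrence(nmax, x):
    """Legendre polynomials P_0(x)..P_nmax(x) via the Bonnet recurrence
    (n+1) P_{n+1}(x) = (2n+1) x P_n(x) - n P_{n-1}(x)."""
    P = np.empty(nmax + 1)
    P[0] = 1.0
    if nmax >= 1:
        P[1] = x
    for n in range(1, nmax):
        P[n + 1] = ((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1)
    return P

print(legendre_recurrence(5, 0.5))
```

The unnormalized recurrences for associated Legendre functions are the ones that overflow or lose accuracy at very high degree, which motivates the trigonometric-reduction alternative of the abstract.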

  6. Establishing a direct connection between detrended fluctuation analysis and Fourier analysis

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken

    2015-10-01

    To understand methodological features of the detrended fluctuation analysis (DFA) using a higher-order polynomial fitting, we establish the direct connection between DFA and Fourier analysis. Based on an exact calculation of the single-frequency response of the DFA, the following facts are shown analytically: (1) in the analysis of stochastic processes exhibiting a power-law scaling of the power spectral density (PSD), S(f) ∼ f^(−β), a higher-order detrending in the DFA has no adverse effect in the estimation of the DFA scaling exponent α, which satisfies the scaling relation α = (β + 1)/2; (2) the upper limit of the scaling exponents detectable by the DFA depends on the order of polynomial fit used in the DFA, and is bounded by m + 1, where m is the order of the polynomial fit; (3) the relation between the time scale in the DFA and the corresponding frequency in the PSD are distorted depending on both the order of the DFA and the frequency dependence of the PSD. We can improve the scale distortion by introducing the corrected time scale in the DFA corresponding to the inverse of the frequency scale in the PSD. In addition, our analytical approach makes it possible to characterize variants of the DFA using different types of detrending. As an application, properties of the detrending moving average algorithm are discussed.
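
A compact DFA-m sketch following the definition used above: integrate the mean-removed series, detrend each window with an order-m polynomial, and average the residual variance across windows. For white noise (β = 0) the scaling relation α = (β + 1)/2 predicts α near 0.5:

```python
import numpy as np

def dfa(x, scales, m=1):
    """Fluctuation function F(s) of order-m DFA at the given scales."""
    y = np.cumsum(x - np.mean(x))            # integrated profile
    F = []
    for s in scales:
        n_win = len(y) // s
        resid = 0.0
        for i in range(n_win):
            seg = y[i * s:(i + 1) * s]
            t = np.arange(s)
            fit = np.polyval(np.polyfit(t, seg, m), t)  # local detrending
            resid += np.mean((seg - fit) ** 2)
        F.append(np.sqrt(resid / n_win))
    return np.array(F)

rng = np.random.default_rng(0)
x = rng.standard_normal(2**14)               # white noise: beta = 0
scales = np.array([16, 32, 64, 128, 256])
F = dfa(x, scales, m=1)
alpha = np.polyfit(np.log(scales), np.log(F), 1)[0]
print(alpha)   # scaling exponent estimate; near 0.5 for white noise
```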

  7. An Interpolation Approach to Optimal Trajectory Planning for Helicopter Unmanned Aerial Vehicles

    DTIC Science & Technology

    2012-06-01

    Armament Data Line DOF Degree of Freedom PS Pseudospectral LGL Legendre-Gauss-Lobatto quadrature nodes ODE Ordinary Differential Equation xiv...low order polynomials patched together in such a way that the resulting trajectory has several continuous derivatives at all points. In [7], Murray...claims that splines are ideal for optimal control problems because each segment of the spline's piecewise polynomials approximates the trajectory

  8. Least Squares Approximation By G1 Piecewise Parametric Cubes

    DTIC Science & Technology

    1993-12-01

    The views expressed in this thesis are those of the author and do not...Approved for public release; distribution is unlimited. Parametric piecewise cubic polynomials are used throughout...piecewise parametric cubic polynomial to a sequence of ordered points in the plane. Cubic Bézier curves are used as a basis. The parameterization, the

  9. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network. PMID:27959927

  10. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural network with recurrent feedback is a powerful technique that has been used successfully for time series forecasting. It maintains fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF) that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. That means that using network errors during training helps enhance the overall forecasting performance for the network.

  11. Jack Polynomials as Fractional Quantum Hall States and the Betti Numbers of the (k + 1)-Equals Ideal

    NASA Astrophysics Data System (ADS)

    Zamaere, Christine Berkesch; Griffeth, Stephen; Sam, Steven V.

    2014-08-01

    We show that for Jack parameter α = -(k + 1)/(r - 1), certain Jack polynomials studied by Feigin-Jimbo-Miwa-Mukhin vanish to order r when k + 1 of the coordinates coincide. This result was conjectured by Bernevig and Haldane, who proposed that these Jack polynomials are model wavefunctions for fractional quantum Hall states. Special cases of these Jack polynomials include the wavefunctions of Laughlin and Read-Rezayi. In fact, along these lines we prove several vanishing theorems known as clustering properties for Jack polynomials in the mathematical physics literature, special cases of which had previously been conjectured by Bernevig and Haldane. Motivated by the method of proof, which in the case r = 2 identifies the span of the relevant Jack polynomials with the S_n-invariant part of a unitary representation of the rational Cherednik algebra, we conjecture that unitary representations of the type A Cherednik algebra have graded minimal free resolutions of Bernstein-Gelfand-Gelfand type; we prove this for the ideal of the (k + 1)-equals arrangement in the case when the number of coordinates n is at most 2k + 1. In general, our conjecture predicts the graded S_n-equivariant Betti numbers of the ideal of the (k + 1)-equals arrangement with no restriction on the number of ambient dimensions.

  12. On Using Homogeneous Polynomials To Design Anisotropic Yield Functions With Tension/Compression Symmetry/Asymmetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soare, S.; Cazacu, O.; Yoon, J. W.

    With few exceptions, non-quadratic homogeneous polynomials have received little attention as possible candidates for yield functions. One reason might be that not every such polynomial is a convex function. In this paper we show that homogeneous polynomials can be used to develop powerful anisotropic yield criteria, and that imposing simple constraints on the identification process leads, a posteriori, to the desired convexity property. It is shown that combinations of such polynomials allow for modeling yielding properties of metallic materials with any crystal structure, i.e., both cubic and hexagonal, which display strength differential effects. Extensions of the proposed criteria to 3D stress states are also presented. We apply these criteria to the description of the aluminum alloy AA2090T3. We prove that a sixth order orthotropic homogeneous polynomial is capable of a satisfactory description of this alloy. Next, applications to the deep drawing of a cylindrical cup are presented. The newly proposed criteria were implemented as UMAT subroutines into the commercial FE code ABAQUS. We were able to predict six ears on the AA2090T3 cup's profile. Finally, we show that a tension/compression asymmetry in yielding can have an important effect on the earing profile.

  13. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
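    The polynomial correction-function idea in this record is closely related to the widely known polyBLEP technique: a low-order polynomial residual is added around each discontinuity of the trivial waveform to suppress aliasing. The sketch below uses the standard two-sample polyBLEP residual, not the paper's integrated Lagrange/B-spline corrections, and the names are ours.

    ```python
    import numpy as np

    def polyblep(t, dt):
        """Two-sample polynomial residual for a unit step discontinuity;
        t is the normalized phase in [0, 1), dt the phase increment."""
        if t < dt:              # sample just after the discontinuity
            t = t / dt
            return 2.0 * t - t * t - 1.0
        elif t > 1.0 - dt:      # sample just before the discontinuity
            t = (t - 1.0) / dt
            return t * t + 2.0 * t + 1.0
        return 0.0

    def sawtooth(f0, fs, n):
        """Quasi-bandlimited sawtooth: trivial ramp minus the polyBLEP
        residual at each phase wrap."""
        dt = f0 / fs
        phase = 0.0
        out = np.empty(n)
        for i in range(n):
            out[i] = 2.0 * phase - 1.0 - polyblep(phase, dt)
            phase += dt
            if phase >= 1.0:
                phase -= 1.0
        return out
    ```

    The residual vanishes outside a two-sample neighbourhood of the wrap, so the cost over the trivial oscillator is only a couple of operations per discontinuity, which is the trade-off the record quantifies for its higher-order B-spline variant.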

  14. The value of continuity: Refined isogeometric analysis and fast direct solvers

    DOE PAGES

    Garcia, Daniel; Pardo, David; Dalcin, Lisandro; ...

    2016-08-24

    Here, we propose the use of highly continuous finite element spaces interconnected with low continuity hyperplanes to maximize the performance of direct solvers. Starting from a highly continuous Isogeometric Analysis (IGA) discretization, we introduce C0-separators to reduce the interconnection between degrees of freedom in the mesh. By doing so, both the solution time and best approximation errors are simultaneously improved. We call the resulting method "refined Isogeometric Analysis (rIGA)". To illustrate the impact of the continuity reduction, we analyze the number of Floating Point Operations (FLOPs), computational times, and memory required to solve the linear system obtained by discretizing the Laplace problem with structured meshes and uniform polynomial orders. Theoretical estimates demonstrate that an optimal continuity reduction may decrease the total computational time by a factor between p^2 and p^3, with p being the polynomial order of the discretization. Numerical results indicate that our proposed refined isogeometric analysis delivers a speed-up factor proportional to p^2. In a 2D mesh with four million elements and p = 5, the linear system resulting from rIGA is solved 22 times faster than the one from highly continuous IGA. In a 3D mesh with one million elements and p = 3, the linear system is solved 15 times faster for the refined than the maximum continuity isogeometric analysis.

  15. Gravity Gradient Tensor of Arbitrary 3D Polyhedral Bodies with up to Third-Order Polynomial Horizontal and Vertical Mass Contrasts

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Zhong, Yiyuan; Chen, Chaojian; Tang, Jingtian; Kalscheuer, Thomas; Maurer, Hansruedi; Li, Yang

    2018-03-01

    During the last 20 years, geophysicists have developed great interest in using gravity gradient tensor signals to study bodies of anomalous density in the Earth. Deriving exact solutions of the gravity gradient tensor signals has become a dominant task in exploration geophysics and geodesy. In this study, we developed a compact and simple framework to derive exact solutions of gravity gradient tensor measurements for polyhedral bodies, in which the density contrast is represented by a general polynomial function. The polynomial mass contrast can continuously vary in both horizontal and vertical directions. In our framework, the original three-dimensional volume integral of gravity gradient tensor signals is transformed into a set of one-dimensional line integrals along edges of the polyhedral body by sequentially invoking the volume and surface gradient (divergence) theorems. In terms of an orthogonal local coordinate system defined on these edges, exact solutions are derived for these line integrals. We successfully derived a set of unified exact solutions of gravity gradient tensors for constant, linear, quadratic and cubic polynomial orders. The exact solutions for constant and linear cases cover all previously published vertex-type exact solutions of the gravity gradient tensor for a polygonal body, though the associated algorithms may differ in numerical stability. In addition, to the best of our knowledge, this is the first time that exact solutions of gravity gradient tensor signals have been derived for a polyhedral body with a polynomial mass contrast of order higher than one (that is, quadratic and cubic orders). Three synthetic models (a prismatic body with depth-dependent density contrasts, an irregular polyhedron with linear density contrast and a tetrahedral body with horizontally and vertically varying density contrasts) are used to verify the correctness and the efficiency of our newly developed closed-form solutions.
Excellent agreements are obtained between our solutions and other published exact solutions. In addition, stability tests are performed to demonstrate that our exact solutions can safely be used to detect shallow subsurface targets.

  16. Intricacies of cosmological bounce in polynomial metric f(R) gravity for flat FLRW spacetime

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bhattacharya, Kaushik; Chakrabarty, Saikat, E-mail: kaushikb@iitk.ac.in, E-mail: snilch@iitk.ac.in

    2016-02-01

    In this paper we present the techniques for computing cosmological bounces in polynomial f(R) theories, whose order is more than two, for spatially flat FLRW spacetime. In these cases the conformally connected Einstein frame shows up multiple scalar potentials predicting various possibilities of cosmological evolution in the Jordan frame where the f(R) theory lives. We present a reasonable way in which one can associate the various possible potentials in the Einstein frame, for cubic f(R) gravity, to the cosmological development in the Jordan frame. The issue concerning the energy conditions in f(R) theories is presented. We also point out the very important relationships between the conformal transformations connecting the Jordan frame and the Einstein frame and the various instabilities of f(R) theory. All the calculations are done for cubic f(R) gravity but we hope the results are sufficiently general for higher order polynomial gravity.

  17. Inferring genetic parameters of lactation in Tropical Milking Criollo cattle with random regression test-day models.

    PubMed

    Santellano-Estrada, E; Becerril-Pérez, C M; de Alba, J; Chang, Y M; Gianola, D; Torres-Hernández, G; Ramírez-Valverde, R

    2008-11-01

    This study inferred genetic and permanent environmental variation of milk yield in Tropical Milking Criollo cattle and compared 5 random regression test-day models using Wilmink's function and Legendre polynomials. Data consisted of 15,377 test-day records from 467 Tropical Milking Criollo cows that calved between 1974 and 2006 in the tropical lowlands of the Gulf Coast of Mexico and in southern Nicaragua. Estimated heritabilities of test-day milk yields ranged from 0.18 to 0.45, and repeatabilities ranged from 0.35 to 0.68 for the period spanning from 6 to 400 d in milk. The genetic correlation between days in milk 10 and 400 was around 0.50 but greater than 0.90 for most pairs of test days. The model that used first-order Legendre polynomials for additive genetic effects and second-order Legendre polynomials for permanent environmental effects gave the smallest residual variance and was also favored by the Akaike information criterion and likelihood ratio tests.
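    In random regression test-day models of this kind, days in milk are standardized to [-1, 1] and Legendre polynomials of the standardized scale serve as covariates. The sketch below illustrates that construction under the record's 6-400 d range; the function name and details are ours, not from the paper.

    ```python
    import numpy as np

    def legendre_covariates(dim, dmin=6, dmax=400, order=2):
        """Legendre covariates P_0..P_order evaluated at days in milk,
        standardized to [-1, 1] over [dmin, dmax], via Bonnet's recurrence
        (n + 1) P_{n+1}(x) = (2n + 1) x P_n(x) - n P_{n-1}(x)."""
        x = 2.0 * (np.asarray(dim, float) - dmin) / (dmax - dmin) - 1.0
        P = [np.ones_like(x), x]
        for n in range(1, order):
            P.append(((2 * n + 1) * x * P[n] - n * P[n - 1]) / (n + 1))
        return np.column_stack(P[:order + 1])
    ```

    A first-order additive genetic effect then uses the first two columns, while a second-order permanent environmental effect uses all three, matching the best model in the record.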

  18. Connection between quantum systems involving the fourth Painlevé transcendent and k-step rational extensions of the harmonic oscillator related to Hermite exceptional orthogonal polynomial

    NASA Astrophysics Data System (ADS)

    Marquette, Ian; Quesne, Christiane

    2016-05-01

    The purpose of this communication is to point out the connection between a 1D quantum Hamiltonian involving the fourth Painlevé transcendent PIV, obtained in the context of second-order supersymmetric quantum mechanics and third-order ladder operators, with a hierarchy of families of quantum systems called k-step rational extensions of the harmonic oscillator and related with multi-indexed X_{m1,m2,…,mk} Hermite exceptional orthogonal polynomials of type III. The connection between these exactly solvable models is established at the level of the equivalence of the Hamiltonians using rational solutions of the fourth Painlevé equation in terms of generalized Hermite and Okamoto polynomials. We also relate the different ladder operators obtained by various combinations of supersymmetric constructions involving Darboux-Crum and Krein-Adler supercharges, their zero modes and the corresponding energies. These results will demonstrate and clarify the relation observed for a particular case in previous papers.

  19. Micropolar curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-01-01

    New models for micropolar plane curved rods have been developed. The 2-D theory is developed from general 2-D equations of linear micropolar elasticity using a special curvilinear system of coordinates related to the middle line of the rod and special hypotheses based on assumptions that take into account the fact that the rod is thin. High order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, stress and strain tensors, vectors of displacements and rotation, and body forces have been expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby all equations of elasticity, including Hooke's law, have been transformed to the corresponding equations for Fourier coefficients. Then, in the same way as in the theory of elasticity, a system of differential equations in terms of displacements and boundary conditions for Fourier coefficients has been obtained. The Timoshenko's and Euler-Bernoulli theories are based on the classical hypothesis and 2-D equations of linear micropolar elasticity in a special curvilinear system. The obtained equations can be used to calculate stress-strain states and to model thin walled structures at macro, micro and nano scales when taking into account micropolar couple stress and rotation effects.

  20. Kurtosis Approach Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.
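    The higher-order statistic at the heart of this record is kurtosis, used as a non-Gaussianity contrast in ICA-style separation. A minimal sketch of the excess-kurtosis estimate (names are ours) shows why it discriminates source distributions: it vanishes for Gaussian data and is negative for sub-Gaussian data such as a uniform source.

    ```python
    import numpy as np

    def excess_kurtosis(x):
        """Fourth-order contrast used in kurtosis-based separation:
        E[(x - mean)^4] / E[(x - mean)^2]^2 - 3, which is 0 for a Gaussian."""
        x = x - x.mean()
        return np.mean(x ** 4) / np.mean(x ** 2) ** 2 - 3.0

    rng = np.random.default_rng(1)
    g = rng.standard_normal(200_000)     # Gaussian: excess kurtosis near 0
    u = rng.uniform(-1, 1, 200_000)      # uniform: excess kurtosis near -1.2
    ```

    A gradient-descent separation scheme like the one described would drive the mixing and polynomial coefficients toward extrema of such a contrast.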

  1. Spline function approximation techniques for image geometric distortion representation. [for registration of multitemporal remote sensor imagery

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1975-01-01

    Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.

  2. Algebraic solutions of shape-invariant position-dependent effective mass systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amir, Naila, E-mail: naila.amir@live.com, E-mail: naila.amir@seecs.edu.pk; Iqbal, Shahid, E-mail: sic80@hotmail.com, E-mail: siqbal@sns.nust.edu.pk

    2016-06-15

    Keeping in view the ordering ambiguity that arises due to the presence of position-dependent effective mass in the kinetic energy term of the Hamiltonian, a general scheme for obtaining algebraic solutions of quantum mechanical systems with position-dependent effective mass is discussed. We quantize the Hamiltonian of the pertaining system by using symmetric ordering of the operators concerning momentum and the spatially varying mass, initially proposed by von Roos and Lévy-Leblond. The algebraic method, used to obtain the solutions, is based on the concepts of supersymmetric quantum mechanics and shape invariance. In order to exemplify the general formalism, a class of non-linear oscillators has been considered. This class includes the particular example of a one-dimensional oscillator with different position-dependent effective mass profiles. Explicit expressions for the eigenenergies and eigenfunctions in terms of generalized Hermite polynomials are presented. Moreover, properties of these modified Hermite polynomials, like the existence of a generating function and recurrence relations among the polynomials, have also been studied. Furthermore, it has been shown that in the harmonic limit, all the results for the linear harmonic oscillator are recovered.

  3. Human evaluation in association to the mathematical analysis of arch forms: Two-dimensional study.

    PubMed

    Zabidin, Nurwahidah; Mohamed, Alizae Marny; Zaharim, Azami; Marizan Nor, Murshida; Rosli, Tanti Irawati

    2018-03-01

    To evaluate the relationship between human evaluation of the dental-arch form, to complete a mathematical analysis via two different methods in quantifying the arch form, and to establish agreement with the fourth-order polynomial equation. This study included 64 sets of digitised maxilla and mandible dental casts obtained from a sample of dental arch with normal occlusion. For human evaluation, a convenient sample of orthodontic practitioners ranked the photo images of dental cast from the most tapered to the less tapered (square). In the mathematical analysis, dental arches were interpolated using the fourth-order polynomial equation with millimetric acetate paper and AutoCAD software. Finally, the relations between human evaluation and mathematical objective analyses were evaluated. Human evaluations were found to be generally in agreement, but only at the extremes of tapered and square arch forms; this indicated general human error and observer bias. The two methods used to plot the arch form were comparable. The use of fourth-order polynomial equation may be facilitative in obtaining a smooth curve, which can produce a template for individual arch that represents all potential tooth positions for the dental arch. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.

  4. A new root-based direction-finding algorithm

    NASA Astrophysics Data System (ADS)

    Wasylkiwskyj, Wasyl; Kopriva, Ivica; Doroslovački, Miloš; Zaghloul, Amir I.

    2007-04-01

    Polynomial rooting direction-finding (DF) algorithms are a computationally efficient alternative to search-based DF algorithms and are particularly suitable for uniform linear arrays of physically identical elements provided that mutual interaction among the array elements can be either neglected or compensated for. A popular algorithm in such situations is Root Multiple Signal Classification (Root MUSIC (RM)), wherein the estimation of the directions of arrivals (DOA) requires the computation of the roots of a (2N - 2)-order polynomial, where N represents the number of array elements. The DOA are estimated from the L pairs of roots closest to the unit circle, where L represents the number of sources. In this paper we derive a modified root polynomial (MRP) algorithm requiring the calculation of only L roots in order to estimate the L DOA. We evaluate the performance of the MRP algorithm numerically and show that it is as accurate as the RM algorithm but with a significantly simpler algebraic structure. In order to demonstrate that the theoretically predicted performance can be achieved in an experimental setting, a decoupled array is emulated in hardware using phase shifters. The results are in excellent agreement with theory.

  5. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    From direct observations, facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. In these models, linear models generally lack precision because they ignore intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures in the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain's motivational circuits. Thus, the proposed method can serve as a novel one for efficiently estimating human affective states.
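    Higher-order multivariable polynomial regression of the kind described reduces to ordinary least squares on an expanded monomial basis. The sketch below (our own illustration, not the authors' model or data) builds all monomials up to a chosen total degree and recovers the coefficients of a known second-order surface.

    ```python
    import numpy as np
    from itertools import combinations_with_replacement

    def poly_design(X, degree):
        """Design matrix with an intercept and every monomial of the input
        columns up to total degree `degree`."""
        n, d = X.shape
        cols = [np.ones(n)]
        for deg in range(1, degree + 1):
            for idx in combinations_with_replacement(range(d), deg):
                cols.append(np.prod(X[:, idx], axis=1))
        return np.column_stack(cols)

    rng = np.random.default_rng(2)
    X = rng.standard_normal((300, 2))
    y = 1.0 + 2.0 * X[:, 0] + 3.0 * X[:, 0] * X[:, 1]   # known 2nd-order surface
    A = poly_design(X, 2)                               # [1, x0, x1, x0^2, x0*x1, x1^2]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)        # least-squares coefficients
    ```

    The same pattern extends to higher degrees and more predictors; the number of monomials grows combinatorially, which is the usual accuracy-versus-simplicity trade-off such models face.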

  6. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    PubMed Central

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    From direct observations, facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. In these models, linear models generally lack precision because they ignore intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, named higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures in the International Affective Picture System to induce thirty subjects' affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal have their origins in the brain's motivational circuits. Thus, the proposed method can serve as a novel one for efficiently estimating human affective states. PMID:26996254

  7. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lacks precision, this paper focuses on finding several different fitting or interpolation methods to improve the data precision, in order to meet the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public points, the elevation difference at any other point can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. The quadratic polynomial surface fitting method, the cubic polynomial surface fitting method, the V4 interpolation method in MATLAB and a neural network method are used in this paper to process elevation data from Google Earth. Internal conformity, external conformity and the cross correlation coefficient are used as evaluation indexes to assess the data processing effect. Results: There is no fitting difference at the fitting points when using the V4 interpolation method. Its external conformity is the largest and its accuracy improvement is the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to the cubic polynomial surface fitting method, but its fitting effect is better in the case of a higher elevation difference. Because the neural network method is an unmanageable fitting model, the cubic polynomial surface fitting method should be mainly used, with the neural network method as an auxiliary in the case of higher elevation differences.
    Conclusions: The cubic polynomial surface fitting method can obviously improve the data precision of Google Earth. After precision improvement, the error of data in hilly terrain areas meets the requirements of the specifications and the data can be used in the feasibility study stage of road survey and design.
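    The cubic polynomial surface fitting described here is a linear least-squares problem in the monomials x^i y^j with i + j ≤ 3. The sketch below (function names are ours) fits such a surface to synthetic elevation differences and evaluates it back; in practice z would be the measured elevation difference at the public points.

    ```python
    import numpy as np

    def fit_poly_surface(x, y, z, degree=3):
        """Least-squares fit of z(x, y) using all monomials x^i * y^j
        with i + j <= degree; returns the term list and coefficients."""
        terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
        A = np.column_stack([x ** i * y ** j for i, j in terms])
        coef, *_ = np.linalg.lstsq(A, z, rcond=None)
        return terms, coef

    def eval_poly_surface(terms, coef, x, y):
        """Evaluate the fitted surface at (x, y)."""
        return sum(c * x ** i * y ** j for (i, j), c in zip(terms, coef))

    # synthetic check: a cubic surface should be recovered exactly
    rng = np.random.default_rng(3)
    x = rng.uniform(-1, 1, 400)
    y = rng.uniform(-1, 1, 400)
    z = 0.5 + x - 2 * y + 0.3 * x ** 3 - x * y ** 2
    terms, coef = fit_poly_surface(x, y, z)
    ```

    Dropping `degree` to 2 gives the quadratic variant the record compares against; the extra cubic terms are what improve the internal and external conformity.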

  8. Characterization of bone collagen organization defects in murine hypophosphatasia using a Zernike model of optical aberrations

    NASA Astrophysics Data System (ADS)

    Tehrani, Kayvan Forouhesh; Pendleton, Emily G.; Leitmann, Bobby; Barrow, Ruth; Mortensen, Luke J.

    2018-02-01

    Bone growth and strength are severely impacted by hypophosphatasia (HPP), a genetic disease that affects the mineralization of bone. We hypothesize that it impacts the overall organization, density, and porosity of collagen fibers. Lower fiber density and higher porosity cause less absorption and scattering of light, and therefore a different regime of the transport mean free path. To find a cure for this disease, a metric for the evaluation of bone is required. Here we present an evaluation method based on our Phase Accumulation Ray Tracing (PART) method. This method uses second harmonic generation (SHG) in bone collagen fibers to model bone indices of refraction, which are used to calculate the phase retardation along the propagation path of light in bone. The calculated phase is then expanded using Zernike polynomials up to 15th order to make a quantitative analysis of tissue anomalies. Because the Zernike modes are a complete set of orthogonal polynomials, we can compare low and high order modes in HPP bone with those of healthy wild type mice to identify the differences between their geometry and structure. Larger coefficients of low order modes indicate more uniform fiber density and less porosity, whereas the opposite is shown by larger coefficients of higher order modes. Our analyses show significant differences between Zernike modes in different types of bone, evidenced by Principal Components Analysis (PCA).
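    The Zernike expansion used in this record builds on the standard radial polynomials R_n^m(r). A minimal sketch of their explicit factorial formula (our own illustration, not the PART pipeline) is:

    ```python
    import numpy as np
    from math import factorial

    def zernike_radial(n, m, r):
        """Radial Zernike polynomial R_n^m(r) on the unit disk from the
        explicit factorial sum; nonzero only when n - |m| is even."""
        m = abs(m)
        r = np.asarray(r, float)
        if (n - m) % 2:
            return np.zeros_like(r)
        R = np.zeros_like(r)
        for k in range((n - m) // 2 + 1):
            R += ((-1) ** k * factorial(n - k)
                  / (factorial(k)
                     * factorial((n + m) // 2 - k)
                     * factorial((n - m) // 2 - k))) * r ** (n - 2 * k)
        return R
    ```

    The full modes multiply R_n^m(r) by cos(mθ) or sin(mθ); their orthogonality over the unit disk is what lets the phase map be decomposed into independent low- and high-order coefficients as described above.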

  9. Recent advances in high-order WENO finite volume methods for compressible multiphase flows

    NASA Astrophysics Data System (ADS)

    Dumbser, Michael

    2013-10-01

We present two new families of better-than-second-order accurate Godunov-type finite volume methods for the solution of nonlinear hyperbolic partial differential equations with nonconservative products. One family is based on a high order Arbitrary-Lagrangian-Eulerian (ALE) formulation on moving meshes, which makes it possible to resolve the material contact wave very sharply when the mesh is moved at the speed of the material interface. The other family of methods is based on a high order Adaptive Mesh Refinement (AMR) strategy, where the mesh can be strongly refined in the vicinity of the material interface. Both classes of schemes share several building blocks, in particular: a high order WENO reconstruction operator to obtain high order of accuracy in space; an element-local space-time Galerkin predictor step, which evolves the reconstruction polynomials in time and reaches high order of accuracy in time in a single step; and a path-conservative approach to treat the nonconservative terms of the PDE. We show applications of both methods to the Baer-Nunziato model for compressible multiphase flows.
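
The WENO reconstruction building block can be illustrated with a minimal third-order (WENO3) variant; the schemes in the abstract are higher order and multidimensional, so this is only a sketch of the nonlinear-weighting idea, with illustrative names:

```python
def weno3_interface(um1, u0, up1, eps=1e-6):
    """Third-order WENO reconstruction of u at the right cell interface
    x_{i+1/2} from the cell averages on stencils {i-1, i} and {i, i+1}."""
    p0 = -0.5 * um1 + 1.5 * u0          # candidate from left-biased stencil
    p1 = 0.5 * u0 + 0.5 * up1           # candidate from centered stencil
    b0 = (u0 - um1) ** 2                # smoothness indicators
    b1 = (up1 - u0) ** 2
    a0 = (1.0 / 3.0) / (eps + b0) ** 2  # nonlinear weights from linear weights 1/3, 2/3
    a1 = (2.0 / 3.0) / (eps + b1) ** 2
    return (a0 * p0 + a1 * p1) / (a0 + a1)

# Exact for linear data: cell averages of u(x) = x on unit cells are 0, 1, 2,
# and the interface value at x_{i+1/2} is 1.5
assert abs(weno3_interface(0.0, 1.0, 2.0) - 1.5) < 1e-12
```

Near a discontinuity one smoothness indicator blows up and its stencil's weight collapses, which is what suppresses spurious oscillations.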

  10. Nonlocal theory of curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-09-01

New models for plane curved rods based on the linear nonlocal theory of elasticity have been developed. The 2-D theory is developed from the general 2-D equations of linear nonlocal elasticity using a special curvilinear system of coordinates related to the middle line of the rod, along with special hypotheses based on assumptions that take into account the fact that the rod is thin. The high order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, the stress and strain tensors and the vectors of displacements and body forces are expanded into Fourier series in terms of Legendre polynomials with respect to the thickness coordinate. Thereby, all equations of elasticity, including the nonlocal constitutive relations, are transformed into the corresponding equations for the Fourier coefficients. Then, in the same way as in the theory of local elasticity, a system of differential equations in terms of displacements is obtained for the Fourier coefficients. The first- and second-order approximations are considered in detail. The Timoshenko and Euler-Bernoulli theories are based on the classical hypotheses and on the 2-D equations of the linear nonlocal theory of elasticity, which are considered in a special curvilinear system of coordinates related to the middle line of the rod. The obtained equations can be used to calculate the stress-strain state and to model thin-walled structures at the micro- and nanoscale, taking into account size-dependent and nonlocal effects.
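
The through-thickness Fourier-Legendre expansion used above can be sketched numerically with NumPy's Legendre utilities; the field f below is a stand-in function, not one of the paper's elastic fields:

```python
import numpy as np
from numpy.polynomial import legendre

# Sample a through-thickness field on the normalized thickness coordinate [-1, 1]
z = np.linspace(-1.0, 1.0, 201)
f = np.exp(z)                          # stand-in for a displacement/stress component

# Least-squares Legendre coefficients of the truncated expansion
coef = legendre.legfit(z, f, deg=8)

# Reconstruct and check the truncation error of the 8th-order expansion
err = np.max(np.abs(legendre.legval(z, coef) - f))
assert err < 1e-6
```

In the rod theories, each such coefficient becomes an unknown of a 1-D differential system along the middle line; first- and second-order truncations keep only the lowest coefficients.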

  11. Solving the Rational Polynomial Coefficients Based on L Curve

    NASA Astrophysics Data System (ADS)

    Zhou, G.; Li, X.; Yue, T.; Huang, W.; He, C.; Huang, Y.

    2018-05-01

The rational polynomial coefficients (RPC) model is a generalized sensor model that can achieve high approximation accuracy, and it is widely used in photogrammetry and remote sensing. The least squares method is usually used to determine the optimal parameter solution of the rational function model. However, when the distribution of control points is not uniform or the model is over-parameterized, the coefficient matrix of the normal equation becomes singular and the normal equation becomes ill-conditioned, so the obtained solutions are extremely unstable or even wrong. Tikhonov regularization can effectively solve such ill-conditioned equations. In this paper, we solve the ill-conditioned equations by the regularization method and determine the regularization parameter by the L curve. The results of experiments on aerial frame photos show that the first-order RPC with equal denominators has the highest accuracy. A high-order RPC model is not necessary when dealing with frame images, as the RPC model and the projective model are almost the same. The result shows that the first-order RPC model is basically consistent with the rigorous sensor model of photogrammetry. Orthorectification results of both the first-order RPC model and the camera model (ERDAS 9.2 platform) are similar to each other, and the maximum residuals in X and Y are 0.8174 feet and 0.9272 feet, respectively. This result shows that the RPC model can be used in aerial photogrammetry as a replacement for the rigorous sensor model.
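
The Tikhonov/L-curve idea above can be sketched on a deliberately ill-conditioned least-squares problem; the design matrix and data here are synthetic illustrations, not the paper's RPC system:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 50)
A = np.vander(t, 8, increasing=True)      # high-order monomial columns: ill-conditioned
b = np.sin(2 * np.pi * t) + 0.01 * rng.standard_normal(50)

# Trace the L-curve: residual norm vs. solution norm over a lambda sweep;
# the corner of this curve picks the regularization parameter.
lams = np.logspace(-8, 1, 40)
res = [np.linalg.norm(A @ tikhonov(A, b, l) - b) for l in lams]
sol = [np.linalg.norm(tikhonov(A, b, l)) for l in lams]

# As lambda grows, the solution norm shrinks and the residual grows
assert sol[0] > sol[-1] and res[0] < res[-1]
```

Plotting log(res) against log(sol) yields the characteristic "L" whose corner balances data fit against solution stability.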

  12. Shock Capturing with PDE-Based Artificial Viscosity for an Adaptive, Higher-Order Discontinuous Galerkin Finite Element Method

    DTIC Science & Technology

    2008-06-01

Geometry Interpolation The function space VpH consists of discontinuous, piecewise-polynomials. This work used a polynomial basis for VpH such...between a piecewise-constant and smooth variation of viscosity in both a one-dimensional and multi-dimensional setting. Before continuing with the ...inviscid, transonic flow past a NACA 0012 at zero angle of attack and freestream Mach number of M∞ = 0.95. The

  13. Darboux partners of pseudoscalar Dirac potentials associated with exceptional orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulze-Halberg, Axel, E-mail: xbataxel@gmail.com; Department of Physics, Indiana University Northwest, 3400 Broadway, Gary, IN 46408; Roy, Barnana, E-mail: barnana@isical.ac.in

    2014-10-15

    We introduce a method for constructing Darboux (or supersymmetric) pairs of pseudoscalar and scalar Dirac potentials that are associated with exceptional orthogonal polynomials. Properties of the transformed potentials and regularity conditions are discussed. As an application, we consider a pseudoscalar Dirac potential related to the Schrödinger model for the rationally extended radial oscillator. The pseudoscalar partner potentials are constructed under the first- and second-order Darboux transformations.

  14. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert

Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate compared to running a high-fidelity numerical simulation: a reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir, while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified ranges of the model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are obtained by minimizing a nonlinear least-squares misfit function, based on the difference between the numerical simulation data and the reduced-order model, using the Levenberg-Marquardt algorithm. ROM-1 is constructed from polynomials up to fourth order.
ROM-1 accurately reproduces the power output of numerical simulations for low permeabilities and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and it accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, there is a considerable deviation from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3, on the other hand, provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications, because they are fast and provide reasonably good predictions for thermal power output.
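
The Levenberg-Marquardt fitting of a polynomial ROM to simulation output can be sketched as follows; the decay curve and tolerances are synthetic stand-ins, not the paper's geothermal data:

```python
import numpy as np
from scipy.optimize import least_squares

# Synthetic "thermal power output" curve standing in for high-fidelity simulation data
t = np.linspace(0.0, 10.0, 100)
power = 5.0 * np.exp(-0.3 * t) + 0.05 * np.random.default_rng(1).standard_normal(100)

def misfit(c):
    # ROM-1-style 4th-order polynomial evaluated against the data
    return np.polyval(c, t) - power

# Minimize the least-squares misfit with the Levenberg-Marquardt algorithm
fit = least_squares(misfit, x0=np.zeros(5), method="lm")
assert fit.success

rms = np.sqrt(np.mean(fit.fun ** 2))
assert rms < 0.5   # quartic ROM tracks the smooth decay down to the noise level
```

Once fitted, evaluating `np.polyval(fit.x, t)` costs microseconds, which is the source of the 10⁴ speedup claimed over full simulations.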

  15. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE PAGES

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert; ...

    2017-07-10

Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate compared to running a high-fidelity numerical simulation: a reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir, while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified ranges of the model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are obtained by minimizing a nonlinear least-squares misfit function, based on the difference between the numerical simulation data and the reduced-order model, using the Levenberg-Marquardt algorithm. ROM-1 is constructed from polynomials up to fourth order.
ROM-1 accurately reproduces the power output of numerical simulations for low permeabilities and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and it accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, there is a considerable deviation from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3, on the other hand, provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications, because they are fast and provide reasonably good predictions for thermal power output.

  16. Wilson polynomials/functions and intertwining operators for the generic quantum superintegrable system on the 2-sphere

    NASA Astrophysics Data System (ADS)

    Miller, W., Jr.; Li, Q.

    2015-04-01

The Wilson and Racah polynomials can be characterized as basis functions for irreducible representations of the quadratic symmetry algebra of the quantum superintegrable system on the 2-sphere, HΨ = EΨ, with generic 3-parameter potential. Clearly, the polynomials are expansion coefficients for one eigenbasis of a symmetry operator L2 of H in terms of an eigenbasis of another symmetry operator L1, but the exact relationship appears not to have been made explicit. We work out the details of the expansion to show, explicitly, how the polynomials arise and how the principal properties of these functions: the measure, 3-term recurrence relation, 2nd order difference equation, duality of these relations, permutation symmetry, intertwining operators and an alternate derivation of Wilson functions - follow from the symmetry of this quantum system. This paper is an exercise to show that quantum mechanical concepts and recurrence relations for Gauss hypergeometric functions alone suffice to explain these properties; we make no assumptions about the structure of Wilson polynomials/functions, but derive them from quantum principles. There is active interest in the relation between multivariable Wilson polynomials and the quantum superintegrable system on the n-sphere with generic potential, and these results should aid in the generalization. Contracting function space realizations of irreducible representations of this quadratic algebra to the other superintegrable systems, one can obtain the full Askey scheme of orthogonal hypergeometric polynomials. All of these contractions of superintegrable systems with potential are uniquely induced by Wigner Lie algebra contractions of so(3, C) and e(2, C). All of the polynomials produced are interpretable as quantum expansion coefficients. It is important to extend this process to higher dimensions.

  17. Third order maximum-principle-satisfying direct discontinuous Galerkin methods for time dependent convection diffusion equations on unstructured triangular meshes

    DOE PAGES

    Chen, Zheng; Huang, Hongying; Yan, Jue

    2015-12-21

We develop third order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection diffusion equations on unstructured triangular meshes. We carefully calculate the normal derivative numerical flux across element edges and prove that, with a proper choice of the parameter pair (β0, β1) in the numerical flux formula, the quadratic polynomial solution satisfies the strict maximum principle. The polynomial solution is bounded within the given range and third order accuracy is maintained. There is no geometric restriction on the meshes, and obtuse triangles are allowed in the partition. Finally, a sequence of numerical examples is carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.
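
A common way to enforce such bounds on a polynomial solution is a Zhang-Shu-type scaling limiter; the sketch below (with illustrative names and values, not the paper's DDG fluxes) shrinks the polynomial about its cell average until all nodal values respect the range:

```python
import numpy as np

def scale_limiter(node_vals, cell_avg, umin, umax):
    """Scaling limiter: contract the polynomial's nodal values toward the cell
    average until they lie in [umin, umax]; the cell average is preserved."""
    lo, hi = node_vals.min(), node_vals.max()
    theta = min(
        1.0,
        (umax - cell_avg) / (hi - cell_avg) if hi > umax else 1.0,
        (umin - cell_avg) / (lo - cell_avg) if lo < umin else 1.0,
    )
    return cell_avg + theta * (node_vals - cell_avg)

vals = np.array([-0.2, 0.4, 1.3])   # quadratic nodal values overshooting [0, 1]
avg = 0.5                           # cell average (here equal to the nodal mean)
limited = scale_limiter(vals, avg, 0.0, 1.0)

assert limited.min() >= 0.0 and limited.max() <= 1.0
assert abs(limited.mean() - avg) < 1e-12   # conservation of the average
```

Because the map is affine about the cell average, the limiter is conservative and does not degrade the formal order of accuracy for smooth solutions.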

  18. An analytical technique for approximating unsteady aerodynamics in the time domain

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1980-01-01

An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of the elements of a matrix Padé approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients, which ensures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing which indicate that a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Padé approximation having fourth-order numerator and second-order denominator polynomials.
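
The split described above, where the numerator is a linear least-squares problem once the denominator is fixed, can be sketched as follows; the data function and the assumed stable denominator are illustrative, not the paper's aerodynamic matrices:

```python
import numpy as np

# With denominator q(s) fixed, fitting p(s)/q(s) ~ f(s) is linear in the
# numerator coefficients: solve q(s) f(s) ~ p(s) by least squares.
s = np.linspace(0.1, 2.0, 60)
f = np.exp(-s)                        # stand-in for tabulated force data
q = np.polyval([1.0, 0.9, 0.2], s)    # assumed stable 2nd-order denominator
                                      # (roots at s = -0.4, -0.5)
V = np.vander(s, 5)                   # 4th-order numerator monomial basis
p_coef, *_ = np.linalg.lstsq(V, q * f, rcond=None)

approx = np.polyval(p_coef, s) / q
assert np.max(np.abs(approx - f)) < 0.05
```

The outer constrained gradient search of the paper would then adjust the denominator coefficients, repeating this inner linear solve at each step while keeping the denominator roots stable.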

  19. A Legendre tau-spectral method for solving time-fractional heat equation with nonlocal conditions.

    PubMed

    Bhrawy, A H; Alghamdi, M A

    2014-01-01

We develop the tau-spectral method to solve the time-fractional heat equation (T-FHE) with a nonlocal condition. In order to achieve a highly accurate solution of this problem, the operational matrix of fractional integration (described in the Riemann-Liouville sense) for shifted Legendre polynomials is investigated in conjunction with the tau-spectral scheme, and the Legendre operational polynomials are used as the basis functions. The main advantage of the presented scheme is that it converts the T-FHE with a nonlocal condition to a system of algebraic equations, which simplifies the problem. To demonstrate the validity and applicability of the developed spectral scheme, two numerical examples are presented. Logarithmic graphs of the maximum absolute errors are presented to demonstrate the exponential convergence of the proposed method. A comparison between our spectral method and other methods shows that our method is more accurate than those used to solve similar problems.
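
The Riemann-Liouville fractional integral underlying the operational matrix acts on monomials by a simple Gamma-function rule, which can be sketched directly (the helper below is an illustration, not the paper's operational-matrix construction):

```python
from math import gamma

def rl_fractional_integral_poly(coeffs, alpha, t):
    """Riemann-Liouville fractional integral of order alpha applied to the
    polynomial sum_k c_k t^k, using the identity
    I^alpha t^k = Gamma(k+1) / Gamma(k+1+alpha) * t^(k+alpha)."""
    return sum(
        c * gamma(k + 1) / gamma(k + 1 + alpha) * t ** (k + alpha)
        for k, c in enumerate(coeffs)
    )

# Sanity check: for alpha = 1 this is the ordinary antiderivative vanishing at 0,
# e.g. I^1 (2t) = t^2
t = 0.7
val = rl_fractional_integral_poly([0.0, 2.0], 1.0, t)
assert abs(val - t ** 2) < 1e-12
```

Applying this rule to each shifted Legendre basis polynomial and re-expanding the result in the same basis is what populates the operational matrix of fractional integration.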

  20. A Legendre tau-Spectral Method for Solving Time-Fractional Heat Equation with Nonlocal Conditions

    PubMed Central

    Bhrawy, A. H.; Alghamdi, M. A.

    2014-01-01

We develop the tau-spectral method to solve the time-fractional heat equation (T-FHE) with a nonlocal condition. In order to achieve a highly accurate solution of this problem, the operational matrix of fractional integration (described in the Riemann-Liouville sense) for shifted Legendre polynomials is investigated in conjunction with the tau-spectral scheme, and the Legendre operational polynomials are used as the basis functions. The main advantage of the presented scheme is that it converts the T-FHE with a nonlocal condition to a system of algebraic equations, which simplifies the problem. To demonstrate the validity and applicability of the developed spectral scheme, two numerical examples are presented. Logarithmic graphs of the maximum absolute errors are presented to demonstrate the exponential convergence of the proposed method. A comparison between our spectral method and other methods shows that our method is more accurate than those used to solve similar problems. PMID:25057507

  1. Exact Integrations of Polynomials and Symmetric Quadrature Formulas over Arbitrary Polyhedral Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1997-01-01

    This paper is concerned with two important elements in the high-order accurate spatial discretization of finite volume equations over arbitrary grids. One element is the integration of basis functions over arbitrary domains, which is used in expressing various spatial integrals in terms of discrete unknowns. The other consists of quadrature approximations to those integrals. Only polynomial basis functions applied to polyhedral and polygonal grids are treated here. Non-triangular polygonal faces are subdivided into a union of planar triangular facets, and the resulting triangulated polyhedron is subdivided into a union of tetrahedra. The straight line segment, triangle, and tetrahedron are thus the fundamental shapes that are the building blocks for all integrations and quadrature approximations. Integrals of products up to the fifth order are derived in a unified manner for the three fundamental shapes in terms of the position vectors of vertices. Results are given both in terms of tensor products and products of Cartesian coordinates. The exact polynomial integrals are used to obtain symmetric quadrature approximations of any degree of precision up to five for arbitrary integrals over the three fundamental domains. Using a coordinate-free formulation, simple and rational procedures are developed to derive virtually all quadrature formulas, including some previously unpublished. Four symmetry groups of quadrature points are introduced to derive Gauss formulas, while their limiting forms are used to derive Lobatto formulas. Representative Gauss and Lobatto formulas are tabulated. The relative efficiency of their application to polyhedral and polygonal grids is detailed. The extension to higher degrees of precision is discussed.
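
For the tetrahedron, the exact monomial integrals that such derivations build on have a closed form, which can be checked directly (a minimal illustration on the unit reference tetrahedron, not the paper's vertex-based formulas):

```python
from math import factorial

def tet_monomial_integral(a, b, c):
    """Exact integral of x^a y^b z^c over the unit reference tetrahedron
    {x, y, z >= 0, x + y + z <= 1}: a! b! c! / (a + b + c + 3)!."""
    return factorial(a) * factorial(b) * factorial(c) / factorial(a + b + c + 3)

# Volume of the reference tetrahedron is 1/6
assert abs(tet_monomial_integral(0, 0, 0) - 1 / 6) < 1e-15
# Degree-1 check: the integral of x is 1/24
assert abs(tet_monomial_integral(1, 0, 0) - 1 / 24) < 1e-15
```

A quadrature rule is then "of degree p" exactly when its weighted sum reproduces these values for all monomials with a + b + c <= p, which is how the symmetric Gauss and Lobatto formulas of the paper are derived and verified.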

  2. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

The well-established concept of Taylor Models is introduced, which offer highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the High Precision Interval datatype are developed and described in detail. The application of these operations in the implementation of High Precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period 15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, High Precision intervals and High Precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented.
Previous work done by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.
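
The verified-arithmetic building block beneath Taylor Models can be sketched with a tiny interval type; this toy uses exact rationals instead of the outward-rounded floating point of COSY INFINITY, and all names are illustrative:

```python
from fractions import Fraction

class Interval:
    """Minimal rigorous interval arithmetic; Fractions are exact, so no
    outward rounding is needed in this toy version."""
    def __init__(self, lo, hi):
        self.lo, self.hi = Fraction(lo), Fraction(hi)
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def contains(self, x):
        return self.lo <= Fraction(x) <= self.hi

x = Interval(-1, 2)
y = x * x + Interval(1, 1)   # guaranteed enclosure of x^2 + 1 on [-1, 2]
assert y.contains(1) and y.contains(5)
```

Note the overestimation: the true range of x² + 1 on [-1, 2] is [1, 5], while plain interval arithmetic returns [-1, 5] because it treats the two factors of x·x as independent. Taylor Models fight exactly this dependency problem by carrying a polynomial part symbolically and reserving intervals for the small remainder term.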

  3. An atlas of Rapp's 180-th order geopotential.

    NASA Astrophysics Data System (ADS)

    Melvin, P. J.

    1986-08-01

Deprit's 1979 approach to the summation of the spherical harmonic expansion of the geopotential has been modified to use spherical components and normalized Legendre polynomials. An algorithm has been developed which produces ten fields at the user's option: the undulations of the geoid, three anomalous components of the gravity vector, or six components of the Hessian of the geopotential (gravity gradient). The algorithm is stable to high orders in single precision and does not treat the polar regions as a special case. Eleven contour maps of components of the anomalous geopotential on the surface of the ellipsoid are presented to validate the algorithm.

  4. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
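
The key construction above, keeping only kernel PCA components with positive eigenvalues when the fractional power "kernel" yields an indefinite Gram matrix, can be sketched as follows; the signed-power Gram matrix and all names are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def fractional_kernel_pca(X, d=0.8, n_components=2):
    """Kernel-PCA-style projection with a fractional power polynomial
    similarity; the Gram matrix may be indefinite, so only components with
    positive eigenvalues are retained, mirroring the paper's construction."""
    G = X @ X.T
    K = np.sign(G) * np.abs(G) ** d               # signed fractional power (assumed form)
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                                # center in feature space
    w, V = np.linalg.eigh(Kc)
    keep = w > 1e-10                              # discard non-positive eigenvalues
    w, V = w[keep][::-1], V[:, keep][:, ::-1]     # sort descending
    return Kc @ V[:, :n_components] / np.sqrt(w[:n_components])

rng = np.random.default_rng(0)
Z = fractional_kernel_pca(rng.standard_normal((20, 5)))
assert Z.shape == (20, 2)
```

Dropping the negative-eigenvalue directions is what guarantees real-valued features even though the fractional power map is not a positive semidefinite kernel.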

  5. Polynomial expressions of electron depth dose as a function of energy in various materials: application to thermoluminescence (TL) dosimetry

    NASA Astrophysics Data System (ADS)

    Deogracias, E. C.; Wood, J. L.; Wagner, E. C.; Kearfott, K. J.

    1999-02-01

The CEPXS/ONEDANT code package was used to produce a library of depth-dose profiles for monoenergetic electrons in various materials for energies ranging from 500 keV to 5 MeV in 10 keV increments. The materials for which depth-dose functions were derived include lithium fluoride (LiF), aluminum oxide (Al₂O₃), beryllium oxide (BeO), calcium sulfate (CaSO₄), calcium fluoride (CaF₂), lithium boron oxide (LiBO), soft tissue, lens of the eye, adipose, muscle, skin, glass, and water. Each material's data set was fit to five polynomials, each covering a different range of electron energies, using a least squares method. The resulting three-dimensional, fifth-order polynomials give the dose as a function of depth and energy for monoenergetic electrons in each material. The polynomials can be used to describe an energy spectrum by summing the doses at a given depth for each energy, weighted by the spectral intensity for that energy. An application of the polynomials is demonstrated by explaining the energy dependence of thermoluminescent detectors (TLDs) and illustrating the relationship between TLD signal and actual shallow dose due to beta particles.
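
A bivariate fifth-order least-squares fit of dose against depth and energy, in the spirit of the fits above, can be sketched on a synthetic surface; the dose model, parameter ranges, and tolerances here are hypothetical stand-ins, not CEPXS/ONEDANT output:

```python
import numpy as np

rng = np.random.default_rng(2)
depth = rng.uniform(0.0, 1.0, 300)            # cm (illustrative range)
energy = rng.uniform(1.0, 5.0, 300)           # MeV (illustrative range)
dose = energy * np.exp(-3.0 * depth / energy)  # synthetic depth-dose surface

# Design matrix of all monomials depth^i * energy^j with total degree i + j <= 5
terms = [(i, j) for i in range(6) for j in range(6) if i + j <= 5]
A = np.column_stack([depth ** i * energy ** j for i, j in terms])
coef, *_ = np.linalg.lstsq(A, dose, rcond=None)

rms = np.sqrt(np.mean((A @ coef - dose) ** 2))
assert rms < 0.1   # fifth-order surface captures the smooth synthetic dose field
```

Weighting such fitted doses by a beta spectrum's intensity at each energy, and summing over energies at a fixed depth, reproduces the spectrum-folding use described in the abstract.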

  6. Functional Form of the Radiometric Equation for the SNPP VIIRS Reflective Solar Bands: An Initial Study

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Xiong, Xiaoxiong

    2016-01-01

The Visible Infrared Imaging Radiometer Suite (VIIRS) aboard the Suomi National Polar-orbiting Partnership (SNPP) satellite is a passive scanning radiometer and imager, observing radiative energy from the Earth in 22 spectral bands from 0.41 to 12 microns, which include 14 reflective solar bands (RSBs). Extending the formula used by the Moderate Resolution Imaging Spectroradiometer instruments, the VIIRS currently determines the sensor aperture spectral radiance through a quadratic polynomial of its detector digital count. It is known that for the RSBs the quadratic polynomial is not adequate over the design-specified spectral radiance region, and that using a quadratic polynomial could drastically increase the errors in the polynomial coefficients, leading to possibly large errors in the determined aperture spectral radiance. In addition, it is very desirable to be able to extend the radiance calculation formula to correctly retrieve the aperture spectral radiance at levels beyond the design-specified range. In order to determine the aperture spectral radiance more accurately from the observed digital count, we examine several polynomials of the detector digital count for calculating the sensor aperture spectral radiance.
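
The quadratic-versus-higher-order question can be illustrated on a synthetic count-to-radiance response; the coefficients below are invented for illustration and are not VIIRS calibration values:

```python
import numpy as np

dn = np.linspace(0.0, 4000.0, 200)                        # detector digital count
L_true = 1e-4 * dn + 3e-9 * dn ** 2 + 5e-13 * dn ** 3      # synthetic response with
                                                           # a mild cubic nonlinearity

u = dn / dn.max()   # normalize the count so the polynomial fit is well conditioned

c3 = np.polyfit(u, L_true, 3)
err3 = np.max(np.abs(np.polyval(c3, u) - L_true))

c2 = np.polyfit(u, L_true, 2)
err2 = np.max(np.abs(np.polyval(c2, u) - L_true))

assert err3 < 1e-12   # cubic recovers the synthetic response essentially exactly
assert err2 > 1e-4    # quadratic leaves a systematic high-count residual
```

The residual of the quadratic fit concentrates at high counts, which mirrors the abstract's concern about radiance levels at and beyond the design-specified range.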

  7. A comparison of companion matrix methods to find roots of a trigonometric polynomial

    NASA Astrophysics Data System (ADS)

    Boyd, John P.

    2013-08-01

A trigonometric polynomial is a truncated Fourier series of the form fN(t)≡∑j=0Naj cos(jt)+∑j=1N bj sin(jt). It has been previously shown by the author that zeros of such a polynomial can be computed as the eigenvalues of a companion matrix with elements which are complex valued combinations of the Fourier coefficients, the "CCM" method. However, previous work provided no examples, so one goal of this new work is to experimentally test the CCM method. A second goal is to introduce a new alternative, the elimination/Chebyshev algorithm, and experimentally compare it with the CCM scheme. The elimination/Chebyshev matrix (ECM) algorithm yields a companion matrix with real-valued elements, albeit at the price of usefulness only for real roots. The new elimination scheme first converts the trigonometric rootfinding problem to a pair of polynomial equations in the variables (c,s) where c≡cos(t) and s≡sin(t). The elimination method next reduces the system to a single univariate polynomial P(c). We show that this same polynomial is the resultant of the system and is also a generator of the Groebner basis with lexicographic ordering for the system. Both methods give very high numerical accuracy for real-valued roots, typically at least 11 decimal places in Matlab/IEEE 754 16 digit floating point arithmetic. The CCM algorithm is typically one or two decimal places more accurate, though these differences disappear if the roots are "Newton-polished" by a single Newton's iteration. The complex-valued matrix is accurate for complex-valued roots, too, though accuracy decreases with the magnitude of the imaginary part of the root. The cost of both methods scales as O(N3) floating point operations.
In spite of intimate connections of the elimination/Chebyshev scheme to two well-established technologies for solving systems of equations, resultants and Groebner bases, and the advantages of using only real-valued arithmetic to obtain a companion matrix with real-valued elements, the ECM algorithm is noticeably inferior to the complex-valued companion matrix in simplicity, ease of programming, and accuracy.
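    The core idea behind companion-matrix rootfinding for trigonometric polynomials can be sketched in a few lines: substituting z = exp(it) turns the truncated Fourier series into a Laurent polynomial in z, whose roots are eigenvalues of an ordinary companion matrix (here obtained via numpy's `roots`). This is only a generic illustration of the substitution, not Boyd's CCM or ECM algorithm, and the function name is invented.

    ```python
    import numpy as np

    def trig_poly_roots(a, b):
        """Roots t of f(t) = sum_j a[j] cos(j t) + sum_j b[j] sin(j t),
        with a = [a_0..a_N] and b = [b_1..b_N], via z = exp(i t)."""
        N = len(a) - 1
        # Laurent coefficients c_k so that f(t) = sum_{k=-N}^{N} c_k z^k
        c = np.zeros(2 * N + 1, dtype=complex)
        c[N] = a[0]
        for j in range(1, N + 1):
            c[N + j] = (a[j] - 1j * b[j - 1]) / 2.0   # coefficient of z^j
            c[N - j] = (a[j] + 1j * b[j - 1]) / 2.0   # coefficient of z^-j
        # Multiplying by z^N gives an ordinary degree-2N polynomial;
        # np.roots expects the highest-degree coefficient first.
        z = np.roots(c[::-1])
        return np.angle(z)   # t = arg z for roots on the unit circle

    # Example: f(t) = cos(t) has real roots t = ±pi/2
    roots = sorted(trig_poly_roots([0.0, 1.0], [0.0]))
    ```

    Complex roots of f correspond to roots z off the unit circle; recovering them needs the full complex logarithm rather than just the angle, which is where the accuracy considerations discussed in the abstract enter.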

  8. Computing border bases using mutant strategies

    NASA Astrophysics Data System (ADS)

    Ullah, E.; Abbas Khan, S.

    2014-01-01

    Border bases, a generalization of Gröbner bases, have been actively studied in recent years due to their applicability to industrial problems. In cryptography and coding theory, a useful application of border bases is solving zero-dimensional systems of polynomial equations over finite fields, which motivates the development of optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm which allows the computation of border bases that relate to a degree-compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower-degree polynomials in the ideal. The mutant strategies aim to distinguish special lower-degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm known as the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide both space and time efficiency. The efficiency of these newly developed hybrid algorithms is discussed using standard cryptographic examples.

  9. Statistically generated weighted curve fit of residual functions for modal analysis of structures

    NASA Technical Reports Server (NTRS)

    Bookout, P. S.

    1995-01-01

    A statistically generated weighting function for a second-order polynomial curve fit of residual functions has been developed. The residual flexibility test method, from which a residual function is generated, is a procedure for modal testing large structures in an external constraint-free environment to measure the effects of higher order modes and interface stiffness. This test method is applicable to structures with distinct degree-of-freedom interfaces to other system components. A theoretical residual function in the displacement/force domain has the characteristics of a relatively flat line in the lower frequencies and a slight upward curvature in the higher frequency range. In the test residual function, the above-mentioned characteristics can be seen in the data, but due to the present limitations in the modal parameter evaluation (natural frequencies and mode shapes) of test data, the residual function has regions of ragged data. A second order polynomial curve fit is required to obtain the residual flexibility term. A weighting function of the data is generated by examining the variances between neighboring data points. From a weighted second-order polynomial curve fit, an accurate residual flexibility value can be obtained. The residual flexibility value and free-free modes from testing are used to improve a mathematical model of the structure. The residual flexibility modal test method is applied to a straight beam with a trunnion appendage and a space shuttle payload pallet simulator.
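    The weighting idea described above can be sketched with numpy's weighted polynomial fit: the scatter between neighboring data points is used to down-weight the ragged regions before the second-order fit. The data, weight formula, and noise model below are illustrative assumptions, not the paper's procedure.

    ```python
    import numpy as np

    # Synthetic "residual function": flat-ish at low frequency with slight
    # upward curvature, plus a ragged high-frequency region (assumed data).
    rng = np.random.default_rng(0)
    f = np.linspace(1.0, 10.0, 60)               # frequency axis
    smooth = 2.0 + 0.03 * f**2                   # underlying residual function
    noise = np.where(f > 7.0, 0.3, 0.01)         # ragged region above f = 7
    y = smooth + rng.normal(0.0, noise)

    # Scatter between neighboring points -> inverse-scatter weights
    local = np.abs(np.diff(y, prepend=y[0]))
    w = 1.0 / (local + 1e-6)

    # Weighted second-order polynomial curve fit
    coef = np.polynomial.polynomial.polyfit(f, y, 2, w=w)
    ```

    With the ragged region down-weighted, the fitted coefficients track the smooth underlying curve rather than the noisy tail, which is the effect the abstract relies on to extract an accurate residual flexibility value.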

  10. Algebraic calculations for spectrum of superintegrable system from exceptional orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Hoque, Md. Fazlul; Marquette, Ian; Post, Sarah; Zhang, Yao-Zhong

    2018-04-01

    We introduce an extended Kepler-Coulomb quantum model in spherical coordinates. The Schrödinger equation of this Hamiltonian is solved in these coordinates and it is shown that the wave functions of the system can be expressed in terms of Laguerre, Legendre and exceptional Jacobi polynomials (of hypergeometric type). We construct ladder and shift operators based on the corresponding wave functions and obtain their recurrence formulas. These recurrence relations are used to construct higher-order, algebraically independent integrals of motion to prove superintegrability of the Hamiltonian. The integrals form a higher rank polynomial algebra. By constructing the structure functions of the associated deformed oscillator algebras we derive the degeneracy of energy spectrum of the superintegrable system.

  11. On High-Order Upwind Methods for Advection

    NASA Technical Reports Server (NTRS)

    Huynh, H. T.

    2017-01-01

    In the fourth installment of the celebrated series of five papers entitled "Towards the ultimate conservative difference scheme", Van Leer (1977) introduced five schemes for advection; the first three are piecewise linear, and the last two piecewise parabolic. Among the five, scheme I, which is the least accurate, extends with relative ease to systems of equations in multiple dimensions. As a result, it became the most popular and is widely known as the MUSCL scheme (monotone upstream-centered schemes for conservation laws). Schemes III and V have the same accuracy, are the most accurate, and are closely related to current high-order methods. Scheme III uses a piecewise linear approximation that is discontinuous across cells, and can be considered a precursor of the discontinuous Galerkin methods. Scheme V employs a piecewise quadratic approximation that is, as opposed to the case of scheme III, continuous across cells. This method is the basis for the ongoing "active flux scheme" developed by Roe and collaborators. Here, schemes III and V are shown to be equivalent in the sense that they yield identical (reconstructed) solutions, provided the initial condition for scheme III is defined from that of scheme V in a manner dependent on the CFL number. This equivalence is counterintuitive, since it is generally believed that piecewise linear and piecewise parabolic methods cannot produce the same solutions due to their different degrees of approximation. The finding also shows a key connection between the approaches of discontinuous and continuous polynomial approximations. In addition to the discussed equivalence, a framework using both projection and interpolation that extends schemes III and V into a single family of high-order schemes is introduced. For these high-order extensions, it is demonstrated via Fourier analysis that schemes with the same number of degrees of freedom n per cell, in spite of the different piecewise polynomial degrees, share the same sets of eigenvalues and thus have the same stability and accuracy. Moreover, these schemes are accurate to order 2n-1, which is higher than the expected order of n.
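    For readers unfamiliar with MUSCL-type schemes, the basic mechanism of limited piecewise-linear upwind advection can be sketched as follows. This is a generic minmod-limited update for u_t + a u_x = 0 with a > 0 on a periodic grid, not a reproduction of any of Van Leer's five schemes.

    ```python
    import numpy as np

    def minmod(a, b):
        """Pick the smaller-magnitude slope when the signs agree, else 0."""
        return np.where(a * b > 0.0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

    def muscl_step(u, cfl):
        """One MUSCL-type step for u_t + a u_x = 0 (a > 0, periodic grid),
        using a limited piecewise-linear reconstruction in each cell."""
        s = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited slopes
        u_face = u + 0.5 * (1.0 - cfl) * s                 # left state at i+1/2
        return u - cfl * (u_face - np.roll(u_face, 1))

    # Advect a square pulse; the limiter keeps the solution within [0, 1].
    u0 = np.zeros(50)
    u0[10:20] = 1.0
    u = u0.copy()
    for _ in range(25):
        u = muscl_step(u, 0.5)
    ```

    At CFL = 1 the update degenerates to an exact one-cell shift, and for 0 < CFL <= 1 the minmod limiter makes the scheme total-variation diminishing, so the pulse stays bounded while the piecewise-linear reconstruction keeps second-order accuracy in smooth regions.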

  12. Computing Gröbner Bases within Linear Algebra

    NASA Astrophysics Data System (ADS)

    Suzuki, Akira

    In this paper, we present an alternative algorithm to compute Gröbner bases, which is based on computations in sparse linear algebra. Both S-polynomial computations and monomial reductions are performed within linear algebra simultaneously in this algorithm, so it can be implemented on any computational system that can handle linear algebra. For a given ideal in a polynomial ring, it calculates a Gröbner basis with respect to an appropriately chosen term order.
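    As a small illustration of what a Gröbner basis buys you (using SymPy's stock `groebner`, not the linear-algebra algorithm of this paper): a lexicographic basis "triangularizes" a polynomial system so it can be solved back-to-front, one variable at a time.

    ```python
    from sympy import groebner, symbols

    x, y = symbols('x y')

    # Zero-dimensional system: x^2 + y^2 = 1 and x = y.
    G = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')
    basis = list(G.exprs)

    # With lex order x > y, one basis element involves only y, so the
    # system reduces to a univariate equation plus back-substitution.
    univariate = [g for g in basis if x not in g.free_symbols]
    ```

    This elimination property of lex bases is exactly what makes Gröbner (and border) bases useful for solving zero-dimensional polynomial systems.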

  13. Eshelby's problem of polygonal inclusions with polynomial eigenstrains in an anisotropic magneto-electro-elastic full plane

    PubMed Central

    Lee, Y.-G.; Zou, W.-N.; Pan, E.

    2015-01-01

    This paper presents a closed-form solution for the arbitrary polygonal inclusion problem with polynomial eigenstrains of arbitrary order in an anisotropic magneto-electro-elastic full plane. The additional displacements or eigendisplacements, instead of the eigenstrains, are assumed to be a polynomial with general terms of order M+N. By virtue of the extended Stroh formalism, the induced fields are expressed in terms of a group of basic functions which involve boundary integrals of the inclusion domain. For the special case of polygonal inclusions, the boundary integrals are carried out explicitly, and their averages over the inclusion are also obtained. The induced fields under quadratic eigenstrains are analysed in detail in terms of figures and tables, as are those under linear and cubic eigenstrains. The connection between the present solution and the solution via the Green's function method is established and numerically verified. The singularity at the vertices of the arbitrary polygon is further analysed via the basic functions. The general solution and the numerical results for the constant, linear, quadratic and cubic eigenstrains presented in this paper enable us to investigate the features of the inclusion and inhomogeneity problem concerning polynomial eigenstrains in semiconductors and advanced composites, while the results can further serve as benchmarks for future analyses of Eshelby's inclusion problem. PMID:26345141

  14. Prediction of zeolite-cement-sand unconfined compressive strength using polynomial neural network

    NASA Astrophysics Data System (ADS)

    MolaAbasi, H.; Shooshpasha, I.

    2016-04-01

    The improvement of local soils with cement and zeolite can provide great benefits, including strengthening slopes in slope stability problems, stabilizing problematic soils and preventing soil liquefaction. Recently, dosage methodologies are being developed for improved soils based on a rational criterion, as exists in concrete technology. Numerous earlier studies have shown the possibility of relating Unconfined Compressive Strength (UCS) and cemented sand (CS) parameters (voids/cement ratio) through power-function fits. Taking into account the fact that the existing equations are incapable of estimating the UCS of zeolite-cemented sand (ZCS) mixtures well, artificial intelligence methods are used for forecasting it. A polynomial-type neural network is applied to estimate the UCS from more simply determined index properties such as zeolite and cement content, porosity, and curing time. In order to assess the merits of the proposed approach, a total of 216 unconfined compression tests were carried out. A comparison is made between the experimentally measured UCS and the predictions in order to evaluate the performance of the current method. The results demonstrate that the generalized polynomial-type neural network has a great ability to predict the UCS. Finally, a sensitivity analysis of the polynomial model is applied to study the influence of input parameters on the model output. The sensitivity analysis reveals that cement and zeolite content have significant influence on the predicted UCS.
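    Polynomial-type (GMDH-style) networks are built from quadratic "partial descriptions" of pairs of inputs. A minimal stand-in for one such building block, fit by least squares on synthetic data (the inputs, coefficients, and response below are invented, not the paper's measurements):

    ```python
    import numpy as np

    # One quadratic partial description:
    # y = w0 + w1*x1 + w2*x2 + w3*x1*x2 + w4*x1^2 + w5*x2^2
    rng = np.random.default_rng(1)
    x1 = rng.uniform(0.0, 1.0, 200)   # e.g. normalized cement content (assumed)
    x2 = rng.uniform(0.0, 1.0, 200)   # e.g. normalized zeolite content (assumed)
    y = 0.5 + 2.0 * x1 - 1.0 * x2 + 3.0 * x1 * x2   # assumed response

    A = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2, x1**2, x2**2])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    ```

    A full GMDH network would generate many such partial descriptions over input pairs, keep the best-performing ones on validation data, and stack them in layers; the least-squares fit above is the elementary operation repeated throughout.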

  15. Kostant polynomials and the cohomology ring for G/B

    PubMed Central

    Billey, Sara C.

    1997-01-01

    The Schubert calculus for G/B can be completely determined by a certain matrix related to the Kostant polynomials introduced in section 5 of Bernstein, Gelfand, and Gelfand [Bernstein, I., Gelfand, I. & Gelfand, S. (1973) Russ. Math. Surv. 28, 1–26]. The polynomials are defined by vanishing properties on the orbit of a regular point under the action of the Weyl group. For each element w in the Weyl group the polynomials also have nonzero values on the orbit points corresponding to elements which are larger in the Bruhat order than w. The main theorem given here is an explicit formula for these values. The matrix of orbit values can be used to determine the cup product for the cohomology ring for G/B, using only linear algebra or as described by Lascoux and Schützenberger [Lascoux, A. & Schützenberger, M.-P. (1982) C. R. Seances Acad. Sci. Ser. A 294, 447–450]. Complete proofs of all the theorems will appear in a forthcoming paper. PMID:11038536

  16. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    NASA Astrophysics Data System (ADS)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-05-01

    Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss-Lobatto-Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.
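    The Gauss-Lobatto-Legendre points mentioned above are the endpoints ±1 together with the roots of the derivative of the Legendre polynomial P_n. A short numpy sketch (the function name is ours):

    ```python
    import numpy as np

    def gll_points(n):
        """Gauss-Lobatto-Legendre points for polynomial order n:
        the endpoints ±1 plus the roots of P_n'(x)."""
        dPn = np.polynomial.legendre.Legendre.basis(n).deriv()
        return np.concatenate(([-1.0], np.sort(dPn.roots()), [1.0]))

    # n = 3: interior points at ±1/sqrt(5)
    pts = gll_points(3)
    ```

    Including the endpoints is what makes GLL nodes convenient for element-based methods: adjacent elements share the boundary nodes, so continuity across elements is enforced naturally.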

  17. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  18. A high-order multiscale finite-element method for time-domain acoustic-wave modeling

    DOE PAGES

    Gao, Kai; Fu, Shubin; Chung, Eric T.

    2018-02-04

    Accurate and efficient wave equation modeling is vital for many applications in areas such as acoustics, electromagnetics, and seismology. However, solving the wave equation in large-scale and highly heterogeneous models is usually computationally expensive because the computational cost is directly proportional to the number of grid points in the model. We develop a novel high-order multiscale finite-element method to reduce the computational cost of time-domain acoustic-wave equation numerical modeling by solving the wave equation on a coarse mesh based on the multiscale finite-element theory. In contrast to existing multiscale finite-element methods that use only first-order multiscale basis functions, our new method constructs high-order multiscale basis functions from local elliptic problems which are closely related to the Gauss–Lobatto–Legendre quadrature points in a coarse element. Essentially, these basis functions are not only determined by the order of Legendre polynomials, but also by local medium properties, and therefore can effectively convey the fine-scale information to the coarse-scale solution with high-order accuracy. Numerical tests show that our method can significantly reduce the computation time while maintaining high accuracy for wave equation modeling in highly heterogeneous media by solving the corresponding discrete system only on the coarse mesh with the new high-order multiscale basis functions.

  19. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
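    The hybrid integration above leans on the exactness of high-order Gauss quadrature. As a quick generic illustration (numpy's Gauss-Legendre rule, not the paper's Jacobi-based construction): an n-point Gauss rule integrates polynomials of degree up to 2n-1 exactly.

    ```python
    import numpy as np

    # 4-point Gauss-Legendre rule: exact for polynomials of degree <= 7.
    nodes, weights = np.polynomial.legendre.leggauss(4)

    # Integral of x^6 over [-1, 1] is 2/7; degree 6 <= 7, so the rule is exact.
    approx = np.sum(weights * nodes**6)
    exact = 2.0 / 7.0
    ```

    This degree-of-exactness property is why a modest number of quadrature nodes suffices for the smooth part of the memory integral, with the singular kernel handled separately by the Jacobi-weighted construction.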

  20. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

    The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First the selected methods are applied and compared with one another in a two-design-variable problem with a known theoretical response function. Next the methods are tested in a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.
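    The moving least squares idea can be sketched in one dimension: at each evaluation point a local polynomial is fit with smooth distance-based weights, and because the weight function is infinitely differentiable, the resulting surface inherits smooth derivatives. This is a generic sketch of the technique with invented names and parameters, not the paper's implementation.

    ```python
    import numpy as np

    def mls_eval(x_eval, x, y, h=0.5):
        """1-D moving least squares: a quadratic basis with Gaussian weights
        of width h, refit locally at every evaluation point."""
        x_eval = np.atleast_1d(np.asarray(x_eval, dtype=float))
        out = np.empty_like(x_eval)
        for i, xe in enumerate(x_eval):
            w = np.exp(-((x - xe) / h) ** 2)      # smooth, C-infinity weights
            # Local quadratic basis centered at xe; coef[0] is the value there.
            A = np.column_stack([np.ones_like(x), x - xe, (x - xe) ** 2])
            coef, *_ = np.linalg.lstsq(A * w[:, None], w * y, rcond=None)
            out[i] = coef[0]
        return out
    ```

    A useful property for testing: because the local basis is quadratic, MLS reproduces any quadratic data exactly, regardless of the weight width.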

  1. Efficient spectral-Galerkin algorithms for direct solution for second-order differential equations using Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E.; Bhrawy, A.

    2006-06-01

    It is well known that spectral methods (tau, Galerkin, collocation) have a condition number of O(N^4) (N is the number of retained modes of the polynomial approximations). This paper presents some efficient spectral algorithms, which have a condition number of O(N^2), based on Jacobi–Galerkin methods for second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with (N-1)^d unknowns, while the convergence rates of the algorithms are exponential for smooth solutions.

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Zheng; Huang, Hongying; Yan, Jue

    We develop 3rd order maximum-principle-satisfying direct discontinuous Galerkin methods [8], [9], [19] and [21] for convection diffusion equations on unstructured triangular mesh. We carefully calculate the normal derivative numerical flux across element edges and prove that, with proper choice of parameter pair (β 0,β 1) in the numerical flux formula, the quadratic polynomial solution satisfies strict maximum principle. The polynomial solution is bounded within the given range and third order accuracy is maintained. There is no geometric restriction on the meshes and obtuse triangles are allowed in the partition. As a result, a sequence of numerical examples are carried out to demonstrate the accuracy and capability of the maximum-principle-satisfying limiter.

  3. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L{sup 2} (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10{sup 0})-O(10{sup 1}) random variables to O(10{sup 2}) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). 
Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.

  4. The Mathematical and Computer Aided Analysis of the Contact Stress of the Surface With 4th Order

    NASA Astrophysics Data System (ADS)

    Huran, Liu

    Inspired by gears used for heavy power transmission in the metallurgical industry, which exhibited serious plastic deformation in practical use, we believe that there must exist a gear profile that is optimal in both contact and bending fatigue strength. From careful analysis and in-depth investigation, we conclude that it is a profile of equal conjugate curvature with a high order of contact, and we analyze the forming principle of this kind of profile. Based on second-order curves and a comparative analysis of fourth-order curves, combined with Chebyshev polynomials, a contact stress formula for tooth profiles with higher-order contact is derived. The two extreme points and extreme positions of the stress in the high-contact case are identified, and a formula for the extreme contact stress is derived. Finally, a specific contact stress calculation is provided for a pair of conjugate gear tooth profiles of equal conjugate curvature.

  5. Van-der-Waals interaction of atoms in dipolar Rydberg states

    NASA Astrophysics Data System (ADS)

    Kamenski, Aleksandr A.; Mokhnenko, Sergey N.; Ovsiannikov, Vitaly D.

    2018-02-01

    An asymptotic expression for the van-der-Waals constant C6(n) ≈ -0.03 n^12 Kp(x) is derived for the long-range interaction between two highly excited hydrogen atoms A and B in their extreme Stark states of equal principal quantum numbers nA = nB = n ≫ 1 and parabolic quantum numbers n1(2) = n - 1, n2(1) = m = 0, in the case of collinear orientation of the Stark-state dipolar electric moments and the interatomic axis. The cubic polynomial K3(x) in powers of the reciprocal principal quantum number x = 1/n and the quadratic polynomial K2(y) in powers of the reciprocal principal quantum number squared y = 1/n^2 were determined from the calculated data for C6(n) on the basis of a standard curve-fitting polynomial procedure. The transformation of the attractive van-der-Waals force (C6 > 0) for low-energy states n < 23 into a repulsive force (C6 < 0) for all higher-energy states n ≥ 23 is observed in the results of numerical calculations based on second-order perturbation theory for the operator of the long-range interaction between neutral atoms. This transformation is taken into account in the asymptotic formulas (in both cases p = 2, 3) by polynomials Kp tending to unity at n → ∞ (Kp(0) = 1). The transformation from a low-n attractive van-der-Waals force into a high-n repulsive force demonstrates the gradual increase of the negative contribution to C6(n) from the lower-energy two-atomic states, of the A(B)-atom principal quantum numbers n'A(B) = n - Δn (where Δn = 1, 2, … is significantly smaller than n for the terms providing the major contribution to the second-order series), which together with the states of n″B(A) = n + Δn make a joint contribution proportional to n^12. Thus, the hydrogen-like manifold structure of the energy spectrum is responsible for the transformation of the power-11 asymptotic dependence C6(n) ∝ n^11 of the low-angular-momenta Rydberg states in many-electron atoms into the power-12 dependence C6(n) ∝ n^12 for the dipolar states of the Rydberg manifold.

  6. Reproducibility and calibration of MMC-based high-resolution gamma detectors

    DOE PAGES

    Bates, C. R.; Pies, C.; Kempf, S.; ...

    2016-07-15

    Here, we describe a prototype γ-ray detector based on a metallic magnetic calorimeter with an energy resolution of 46 eV at 60 keV and a reproducible response function that follows a simple second-order polynomial. The simple detector calibration allows adding high-resolution spectra from different pixels and different cool-downs without loss in energy resolution to determine γ-ray centroids with high accuracy. As an example of an application in nuclear safeguards enabled by such a γ-ray detector, we discuss the non-destructive assay of 242Pu in a mixed-isotope Pu sample.

  7. Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients.

    PubMed

    Solano-Altamirano, Juan Manuel; Vázquez-Otero, Alejandro; Khikhlukha, Danila; Dormido, Raquel; Duro, Natividad

    2017-11-30

    In this paper, we propose a new algorithm to reconstruct optics surfaces (aka wavefronts) from gradients, defined on a circular domain, by means of the Spherical Harmonics. The experimental results indicate that this algorithm renders the same accuracy, compared to the reconstruction based on classical Zernike polynomials, using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), wherein several polynomial sets are coded. Therefore, this library constitutes a robust software alternative for wavefront reconstruction in a high energy laser field, optical surface reconstruction, and, more generally, in surface reconstruction from gradients. The library is a candidate for being integrated in control systems for optical devices, or similarly to be used in ad hoc simulations. Moreover, it has been developed with flexibility in mind, and, as such, the implementation includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in the field of high-energy lasers production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can be easily extended to include additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features.
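    For reference, the classical Zernike radial polynomials that the spherical-harmonics reconstruction is compared against have a standard closed-form finite sum, easily coded directly (the function name is ours; this is the textbook definition, not the paper's library):

    ```python
    from math import factorial

    def zernike_radial(n, m, rho):
        """Radial Zernike polynomial R_n^m(rho) from the standard finite sum;
        zero when n - |m| is odd."""
        m = abs(m)
        if (n - m) % 2:
            return 0.0
        return sum(
            (-1) ** k * factorial(n - k)
            / (factorial(k)
               * factorial((n + m) // 2 - k)
               * factorial((n - m) // 2 - k))
            * rho ** (n - 2 * k)
            for k in range((n - m) // 2 + 1)
        )

    # Sanity checks: R_2^0(rho) = 2 rho^2 - 1 and R_n^m(1) = 1.
    defocus = zernike_radial(2, 0, 0.5)
    ```

    The full Zernike basis multiplies these radial terms by cos(mθ) or sin(mθ); orthogonality over the unit disk is what makes the coefficients of a wavefront expansion directly interpretable as aberration modes.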

  8. Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients

    PubMed Central

    Solano-Altamirano, Juan Manuel; Khikhlukha, Danila

    2017-01-01

    In this paper, we propose a new algorithm to reconstruct optics surfaces (aka wavefronts) from gradients, defined on a circular domain, by means of the Spherical Harmonics. The experimental results indicate that this algorithm renders the same accuracy, compared to the reconstruction based on classical Zernike polynomials, using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), wherein several polynomial sets are coded. Therefore, this library constitutes a robust software alternative for wavefront reconstruction in a high energy laser field, optical surface reconstruction, and, more generally, in surface reconstruction from gradients. The library is a candidate for being integrated in control systems for optical devices, or similarly to be used in ad hoc simulations. Moreover, it has been developed with flexibility in mind, and, as such, the implementation includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in the field of high-energy lasers production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can be easily extended to include additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features. PMID:29189722

  9. A high-order time-accurate interrogation method for time-resolved PIV

    NASA Astrophysics Data System (ADS)

    Lynch, Kyle; Scarano, Fulvio

    2013-03-01

    A novel method is introduced for increasing the accuracy and extending the dynamic range of time-resolved particle image velocimetry (PIV). The approach extends the concept of particle tracking velocimetry by multiple frames to the pattern tracking by cross-correlation analysis as employed in PIV. The working principle is based on tracking the patterned fluid element, within a chosen interrogation window, along its individual trajectory throughout an image sequence. In contrast to image-pair interrogation methods, the fluid trajectory correlation concept deals with variable velocity along curved trajectories and non-zero tangential acceleration during the observed time interval. As a result, the velocity magnitude and its direction are allowed to evolve in a nonlinear fashion along the fluid element trajectory. The continuum deformation (namely spatial derivatives of the velocity vector) is accounted for by adopting local image deformation. The principle offers important reductions of the measurement error based on three main points: by enlarging the temporal measurement interval, the relative error becomes reduced; secondly, the random and peak-locking errors are reduced by the use of least-squares polynomial fits to individual trajectories; finally, the introduction of high-order (nonlinear) fitting functions provides the basis for reducing the truncation error. Lastly, the instantaneous velocity is evaluated as the temporal derivative of the polynomial representation of the fluid parcel position in time. The principal features of this algorithm are compared with a single-pair iterative image deformation method. Synthetic image sequences are considered with steady flow (translation, shear and rotation) illustrating the increase of measurement precision. An experimental data set obtained by time-resolved PIV measurements of a circular jet is used to verify the robustness of the method on image sequences affected by camera noise and three-dimensional motions. 
In both cases, it is demonstrated that the measurement time interval can be significantly extended without compromising the correlation signal-to-noise ratio and without increasing the truncation error. The increase in velocity dynamic range scales more than linearly with the number of frames included in the analysis, exceeding pair correlation with window deformation by one order of magnitude. The main factors influencing the performance of the method are discussed, namely the number of images composing the sequence and the polynomial order chosen to represent the motion along the trajectory.
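
The final step of the principle, evaluating velocity as the temporal derivative of a polynomial fit to the tracked positions, can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation; the second-order fit and the synthetic uniformly accelerated trajectory are assumptions.

```python
import numpy as np

def trajectory_velocity(t, x, order=2):
    """Least-squares polynomial fit of tracked positions x(t); the velocity
    is the analytic time derivative of the fitted polynomial."""
    coeffs = np.polyfit(t, x, order)             # highest power first
    return np.polyval(np.polyder(coeffs), t)     # evaluate dx/dt at t

# Synthetic uniformly accelerated trajectory: x = 0.5 * a * t**2
t = np.linspace(0.0, 1.0, 11)
a = 3.0
v = trajectory_velocity(t, 0.5 * a * t**2)       # recovers v = a * t
```

Raising the fit order trades truncation error against sensitivity to noise, which is the dynamic-range balance the abstract discusses.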

  10. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    NASA Astrophysics Data System (ADS)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model that takes into consideration the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB. First, the new model employs a quadratic polynomial model with periodic terms to fit and extract the trend and cyclic terms of SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; finally, the results from the two models are combined to obtain the final SCB prediction values. The paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance than the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method overcomes the insufficiency of the ARIMA model in model identification and order determination.
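
The trend-plus-cyclic fitting stage can be sketched as a linear least-squares problem. This is illustrative only: the single periodic term and its period are assumptions, and the residuals returned here would be passed on to the ARIMA stage.

```python
import numpy as np

def fit_quadratic_periodic(t, y, period):
    """Fit y(t) = a0 + a1*t + a2*t**2 + b*sin(w*t) + c*cos(w*t), with
    w = 2*pi/period, by linear least squares; return the coefficients
    and the fitting residuals (the input to the ARIMA stage)."""
    w = 2.0 * np.pi / period
    A = np.column_stack([np.ones_like(t), t, t**2,
                         np.sin(w * t), np.cos(w * t)])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coeffs, y - A @ coeffs

# Synthetic clock-bias-like series with a known trend and cyclic term
t = np.linspace(0.0, 10.0, 200)
y = 1.0 + 0.5 * t + 0.01 * t**2 + 0.2 * np.sin(2.0 * np.pi * t / 5.0)
coeffs, resid = fit_quadratic_periodic(t, y, period=5.0)
```

In practice the dominant periods would be identified from the data (e.g., by spectral analysis) before fixing the basis.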

  11. An efficient higher order family of root finders

    NASA Astrophysics Data System (ADS)

    Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.

    2008-06-01

    A one-parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on the cubically convergent Hansen-Patrick family, is studied. We show that the fourth-order convergence of the basic family can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use already-calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions that provide both guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable: they depend only on the polynomial coefficients, its degree and the initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.
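
As a minimal illustration of the kind of correction involved (not the simultaneous Gauss-Seidel scheme of the paper), Halley's third-order iteration for a single simple zero of a polynomial can be written as:

```python
import numpy as np

def halley_root(p, z0, tol=1e-12, maxit=50):
    """Halley's iteration z <- z - 2*f*f' / (2*f'**2 - f*f'') for a simple
    zero of the polynomial p (NumPy convention: highest degree first)."""
    dp, ddp = np.polyder(p), np.polyder(p, 2)
    z = complex(z0)
    for _ in range(maxit):
        f = np.polyval(p, z)
        f1, f2 = np.polyval(dp, z), np.polyval(ddp, z)
        step = 2.0 * f * f1 / (2.0 * f1**2 - f * f2)
        z -= step
        if abs(step) < tol:
            break
    return z

root = halley_root([1.0, 0.0, -2.0], 1.0)   # zero of z**2 - 2 near z = 1
```

The accelerated family in the paper applies such corrections inside a simultaneous (all-zeros-at-once) iteration, reusing values already computed at each step.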

  12. Field curvature correction method for ultrashort throw ratio projection optics design using an odd polynomial mirror surface.

    PubMed

    Zhuang, Zhenfeng; Chen, Yanting; Yu, Feihong; Sun, Xiaowei

    2014-08-01

    This paper presents a field curvature correction method for designing an ultrashort throw ratio (TR) projection lens for an imaging system. The projection lens is composed of several refractive optical elements and an odd polynomial mirror surface. A curved image is formed by the refractive optical elements, in a direction away from the odd polynomial mirror surface, from the image formed on the digital micromirror device (DMD) panel, and the curved image formed is its virtual image. The odd polynomial mirror surface then enlarges the curved image so that a plane image is formed on the screen. Based on the relationship between the chief ray from the exit pupil of each field of view (FOV) and the corresponding prescribed position on the screen, the initial profile of the freeform mirror surface is calculated using hyperbolic segments according to the law of reflection. For further optimization, a high-order odd polynomial surface is used to express the freeform mirror surface through a least-squares fitting method. As an example, an ultrashort TR projection lens that projects onto a large 50 in. screen at a distance of only 510 mm is presented. The optical performance of the designed projection lens is analyzed by the ray-tracing method. Results show that an ultrashort TR projection lens with a modulation transfer function of over 60% at 0.5 cycles/mm for all optimization fields is achievable, with an f-number of 2.0, a 126° full FOV, <1% distortion, and a 0.46 TR. Moreover, comparing the proposed projection lens' optical specifications with those of traditional projection lenses, aspheric mirror projection lenses, and conventional short TR projection lenses indicates that this projection lens has the advantages of ultrashort TR, low f-number, wide full FOV, and small distortion.

  13. Finite-volume application of high order ENO schemes to multi-dimensional boundary-value problems

    NASA Technical Reports Server (NTRS)

    Casper, Jay; Dorrepaal, J. Mark

    1990-01-01

    The finite-volume approach to developing multi-dimensional, high-order accurate essentially non-oscillatory (ENO) schemes is considered. In particular, a two-dimensional extension is proposed for the Euler equations of gas dynamics. This requires a spatial reconstruction operator that attains formal high order of accuracy in two dimensions by taking account of cross gradients. Given a set of cell averages in two spatial variables, polynomial interpolation of a two-dimensional primitive function is employed to extract high-order pointwise values on cell interfaces. These points are chosen so that correspondingly high-order flux integrals are obtained through each interface by quadrature, with the flux contribution at each point calculated in an upwind fashion. The solution-in-the-small of Riemann's initial value problem (IVP) required for this pointwise flux computation is obtained using Roe's approximate Riemann solver. Issues considered in this two-dimensional extension include the implementation of boundary conditions and the application to general curvilinear coordinates. Results of numerical experiments are presented for qualitative and quantitative examination. These results contain the first successful application of ENO schemes to boundary value problems with solid walls.

  14. Analysis of the impacts of horizontal translation and scaling on wavefront approximation coefficients with rectangular pupils for Chebyshev and Legendre polynomials.

    PubMed

    Sun, Wenqing; Chen, Lei; Tuya, Wulan; He, Yong; Zhu, Rihong

    2013-12-01

    Chebyshev and Legendre polynomials are frequently used for wavefront approximation over rectangular pupils. Ideally, the dataset completely fits the polynomial basis, which provides the full-pupil approximation coefficients and the corresponding geometric aberrations. However, under horizontal translation and scaling, the terms of the original polynomials become linear combinations of the coefficients of the other terms. This paper introduces analytical expressions for two typical situations after translation and scaling. For a small translation, a first-order Taylor expansion can be used to simplify the computation. Several representative terms are selected as inputs to compute the coefficient changes before and after translation and scaling. Results show that the analytical solutions and the approximated values under discrete sampling are consistent. Using a group of randomly generated coefficients, we contrasted the changes under different translation and scaling conditions; larger ratios correlate with larger deviations of the approximated values from the original ones. Finally, we analyzed the peak-to-valley (PV) and root-mean-square (RMS) deviations arising from the first-order approximation and the direct expansion under different translation values. The results show that when the translation is less than 4%, the most deviated 5th term in the first-order 1D-Legendre expansion has a PV deviation of less than 7% and an RMS deviation of less than 2%. The analytical expressions and the computed results under discrete sampling given in this paper for multiple typical function bases under translation and scaling in rectangular areas can be applied in wavefront approximation and analysis.
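
The effect being analyzed, Legendre coefficients mixing under a small horizontal translation, can be reproduced numerically. This is a sketch under assumed values (a toy cubic wavefront profile and a 4% translation), not the paper's analytical expressions:

```python
import numpy as np
from numpy.polynomial import legendre as L

x = np.linspace(-1.0, 1.0, 201)

def wavefront(u):
    """Toy 1D wavefront profile (assumption for illustration)."""
    return 0.5 * u**3 - 0.2 * u

c0 = L.legfit(x, wavefront(x), 5)          # full-pupil Legendre coefficients
c1 = L.legfit(x, wavefront(x - 0.04), 5)   # coefficients after 4% translation
delta = c1 - c0                            # mixing across Legendre orders
```

Because the translated profile is a linear combination of shifted basis terms, `delta` is nonzero in several orders even though the underlying surface is unchanged, which is exactly the coupling the paper quantifies analytically.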

  15. Measurement of distributions of temperature and wavelength-dependent emissivity of a laminar diffusion flame using hyper-spectral imaging technique

    NASA Astrophysics Data System (ADS)

    Liu, Huawei; Zheng, Shu; Zhou, Huaichun; Qi, Chaobo

    2016-02-01

    A generalized method to estimate a two-dimensional (2D) distribution of temperature and wavelength-dependent emissivity in a sooty flame from spectroscopic radiation intensities is proposed in this paper. The method adopts a Newton-type iterative method to solve for the unknown coefficients in the polynomial relationship between emissivity and wavelength, as well as the unknown temperature. Polynomial functions of increasing order are examined, and the final result is accepted once it converges. Numerical simulation on a fictitious flame with wavelength-dependent absorption coefficients shows good performance, with relative errors of less than 0.5% in the average temperature. Moreover, a hyper-spectral imaging device is introduced to measure an ethylene/air laminar diffusion flame with the proposed method. The proper order for the polynomial function is selected to be 2, because each further one-order increase in the polynomial function brings a temperature variation smaller than 20 K. For the ethylene laminar diffusion flame with 194 ml min-1 C2H4 and 284 L min-1 air studied in this paper, the 2D distribution of average temperature estimated along the line of sight is similar to, but smoother than, the local temperature distribution given in references, and the 2D distribution of emissivity shows a cumulative effect of the absorption coefficient along the line of sight. The emissivity of the flame decreases as the wavelength increases: for a typical line of sight in the flame, the emissivity at 400 nm is about 2.5 times that at 1000 nm, with the same trend for the absorption coefficient of soot as a function of wavelength.

  16. A two-level stochastic collocation method for semilinear elliptic equations with random coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Luoping; Zheng, Bin; Lin, Guang

    In this work, we propose a novel two-level discretization for solving semilinear elliptic equations with random coefficients. Motivated by the two-grid method for deterministic partial differential equations (PDEs) introduced by Xu, our two-level stochastic collocation method utilizes a two-grid finite element discretization in the physical space and a two-level collocation method in the random domain. In particular, we solve semilinear equations on a coarse mesh $\mathcal{T}_H$ with a low-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_{P}$) and solve linearized equations on a fine mesh $\mathcal{T}_h$ using high-level stochastic collocation (corresponding to the polynomial space $\mathcal{P}_p$). We prove that the approximate solution obtained from this method achieves the same order of accuracy as that from solving the original semilinear problem directly by the stochastic collocation method with $\mathcal{T}_h$ and $\mathcal{P}_p$. The two-level method is computationally more efficient, especially for nonlinear problems with high random dimensions. Numerical experiments are also provided to verify the theoretical results.

  17. The Local Discontinuous Galerkin Method for Time-Dependent Convection-Diffusion Systems

    NASA Technical Reports Server (NTRS)

    Cockburn, Bernardo; Shu, Chi-Wang

    1997-01-01

    In this paper, we study the Local Discontinuous Galerkin methods for nonlinear, time-dependent convection-diffusion systems. These methods are an extension of the Runge-Kutta Discontinuous Galerkin methods for purely hyperbolic systems to convection-diffusion systems and share with those methods their high parallelizability, their high-order formal accuracy, and their easy handling of complicated geometries, for convection dominated problems. It is proven that for scalar equations, the Local Discontinuous Galerkin methods are L(sup 2)-stable in the nonlinear case. Moreover, in the linear case, it is shown that if polynomials of degree k are used, the methods are k-th order accurate for general triangulations; although this order of convergence is suboptimal, it is sharp for the LDG methods. Preliminary numerical examples displaying the performance of the method are shown.

  18. The time-fractional radiative transport equation—Continuous-time random walk, diffusion approximation, and Legendre-polynomial expansion

    NASA Astrophysics Data System (ADS)

    Machida, Manabu

    2017-01-01

    We consider the radiative transport equation in which the time derivative is replaced by the Caputo derivative. Such fractional-order derivatives are related to anomalous transport and anomalous diffusion. In this paper we describe how the time-fractional radiative transport equation is obtained from continuous-time random walk and see how the equation is related to the time-fractional diffusion equation in the asymptotic limit. Then we solve the equation with Legendre-polynomial expansion.

  19. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system.

    PubMed

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-21

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms, including both odd and even orders, for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer's Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26 cm diameter spherical volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomials of different orders up to the 10th order, including even- and odd-order terms, or odd-order terms only. The results showed that the model coefficients of this gradient can be successfully estimated.
The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients.

  20. High quality adaptive optics zoom with adaptive lenses

    NASA Astrophysics Data System (ADS)

    Quintavalla, M.; Santiago, F.; Bonora, S.; Restaino, S.

    2018-02-01

    We present the combined use of a large-aperture adaptive lens with large optical power modulation and a multi-actuator adaptive lens. The Multi-actuator Adaptive Lens (M-AL) can correct up to the 4th radial order of Zernike polynomials, without any obstructions (electrodes and actuators) placed inside its clear aperture. We demonstrate that using both lenses together can lead to better image quality and to the correction of aberrations of adaptive optics systems.

  1. Couple stress theory of curved rods. 2-D, high order, Timoshenko's and Euler-Bernoulli models

    NASA Astrophysics Data System (ADS)

    Zozulya, V. V.

    2017-01-01

    New models for plane curved rods based on the linear couple stress theory of elasticity have been developed. The 2-D theory is developed from the general 2-D equations of linear couple stress elasticity using a special curvilinear system of coordinates related to the middle line of the rod, together with special hypotheses based on assumptions that take into account the fact that the rod is thin. The high-order theory is based on the expansion of the equations of the theory of elasticity into Fourier series in terms of Legendre polynomials. First, the stress and strain tensors, the vectors of displacements and rotation, and the body forces are expanded into Fourier series in terms of Legendre polynomials with respect to a thickness coordinate. Thereby, all equations of elasticity, including Hooke's law, are transformed into corresponding equations for the Fourier coefficients. Then, in the same way as in the theory of elasticity, a system of differential equations in terms of displacements and boundary conditions for the Fourier coefficients is obtained. Timoshenko's and Euler-Bernoulli's theories are based on the classical hypotheses and the 2-D equations of the linear couple stress theory of elasticity in a special curvilinear system. The obtained equations can be used to calculate the stress-strain state and to model thin-walled structures at macro, micro and nano scales when taking into account couple stress and rotation effects.

  2. A high-accuracy algorithm for solving nonlinear PDEs with high-order spatial derivatives in 1 + 1 dimensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yao, Jian Hua; Gooding, R.J.

    1994-06-01

    We propose an algorithm to solve a system of partial differential equations of the type u_t(x,t) = F(x, t, u, u_x, u_xx, u_xxx, u_xxxx) in 1 + 1 dimensions using the method of lines with piecewise ninth-order Hermite polynomials, where u and F are N-dimensional vectors. Nonlinear boundary conditions are easily incorporated with this method. We demonstrate the accuracy of this method through comparisons of numerically determined solutions with the analytical ones. Then, we apply this algorithm to a complicated physical system involving nonlinear and nonlocal strain forces coupled to a thermal field. 4 refs., 5 figs., 1 tab.
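
The method of lines reduces such a PDE to a coupled ODE system in time. A minimal sketch for the linear model problem u_t = u_xx follows; second-order finite differences and forward Euler time stepping stand in for the paper's ninth-order Hermite representation, and all values are illustrative:

```python
import numpy as np

# Heat equation u_t = u_xx on [0, 1] with u = 0 at both ends.
N = 50
x = np.linspace(0.0, 1.0, N + 2)
dx = x[1] - x[0]
u = np.sin(np.pi * x)            # exact solution decays as exp(-pi**2 * t)

dt = 0.4 * dx**2                 # explicit stability limit is 0.5 * dx**2
t_end = 0.1
steps = int(round(t_end / dt))
for _ in range(steps):
    # Spatial derivative discretized on the line of interior nodes
    u[1:-1] += dt * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2

expected = np.exp(-np.pi**2 * t_end) * np.sin(np.pi * x)
```

A high-order spatial representation such as the piecewise Hermite polynomials of the paper would replace the three-point stencil, and a stiff ODE integrator would replace the explicit Euler loop.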

  3. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
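
The smoothing stage underlying such two-stage estimators, a weighted local polynomial fit that returns both the smoothed value and its derivative, can be sketched as follows. The kernel, bandwidth and degree here are illustrative choices, not the paper's constrained estimator:

```python
import numpy as np

def local_poly_fit(t, y, t0, h, degree=2):
    """Local polynomial regression at t0: weighted least squares on points
    within bandwidth h, returning the smoothed value and first derivative."""
    u = t - t0
    w = np.maximum(1.0 - (u / h)**2, 0.0)          # Epanechnikov-type weights
    A = np.vander(u, degree + 1, increasing=True)  # columns 1, u, u**2, ...
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return beta[0], beta[1]                        # estimates of y(t0), y'(t0)

t = np.linspace(0.0, 1.0, 101)
y = t**2                                           # so y(0.5) = 0.25, y'(0.5) = 1
val, deriv = local_poly_fit(t, y, t0=0.5, h=0.3)
```

The paper's contribution is to add equation constraints from the ODE model to this local fit, so the smoothed curve and its derivative stay consistent with the differential equation while the parameters are estimated.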

  4. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  5. A Subspace Semi-Definite programming-based Underestimation (SSDU) method for stochastic global optimization in protein docking*

    PubMed Central

    Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis

    2015-01-01

    We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440

  6. Reachability Analysis in Probabilistic Biological Networks.

    PubMed

    Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2015-01-01

    Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.

  7. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

    Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and one polynomial is hard to make simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the calculation of the coordinate transformation if we construct polynomials to approximate the transformation rule instead of "true" coordinates? In addition, is it preferable to compare models using such polynomials with traditional numerical models with even higher exponents? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of a large amount of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in terms of calculation efficiency, accuracy and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
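
The idea of approximating the transformation rule by per-cell linear polynomials on a graticule can be illustrated with a toy example. The Mercator northing formula and the 1-degree cell size below are assumptions for illustration, not the LRA-model itself:

```python
import numpy as np

def mercator_y(lat_deg):
    """Exact cylindrical-projection rule used as the reference."""
    phi = np.radians(lat_deg)
    return np.log(np.tan(np.pi / 4.0 + phi / 2.0))

edges = np.arange(0.0, 61.0, 1.0)       # 1-degree graticule, 0..60 N
y_edges = mercator_y(edges)

def y_linear(lat_deg):
    """Per-cell linear approximation of the transformation rule (forward
    direction); the inverse uses the same interpolation with roles swapped."""
    i = int(np.clip(np.searchsorted(edges, lat_deg) - 1, 0, len(edges) - 2))
    frac = (lat_deg - edges[i]) / (edges[i + 1] - edges[i])
    return y_edges[i] + frac * (y_edges[i + 1] - y_edges[i])
```

Because each cell stores only a linear rule, the forward and inverse transformations are equally cheap, and refining the graticule limits the error propagation, which is the design point the abstract emphasizes.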

  8. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDEs) with random inputs. In particular, we focus on time-independent linear stochastic problems with high-dimensional random inputs, where the traditional polynomial chaos methods, and most existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower-dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low-dimensional local problems and can be highly efficient. We present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  9. Characteristic of entire corneal topography and tomography for the detection of sub-clinical keratoconus with Zernike polynomials using Pentacam.

    PubMed

    Xu, Zhe; Li, Weibo; Jiang, Jun; Zhuang, Xiran; Chen, Wei; Peng, Mei; Wang, Jianhua; Lu, Fan; Shen, Meixiao; Wang, Yuanyuan

    2017-11-28

    The study aimed to characterize the entire corneal topography and tomography for the detection of sub-clinical keratoconus (KC) with a Zernike application method. Normal subjects (n = 147; 147 eyes), sub-clinical KC patients (n = 77; 77 eyes), and KC patients (n = 139; 139 eyes) were imaged with the Pentacam HR system. The entire corneal data of pachymetry and elevation of both the anterior and posterior surfaces were exported from the Pentacam HR software. Zernike polynomial fitting was used to quantify the 3D distribution of the corneal thickness and surface elevation. The root mean square (RMS) values for each order and the total high-order irregularity were calculated. Multimeric discriminant functions combined with individual indices were built using linear stepwise discriminant analysis. Receiver operating characteristic curves determined the diagnostic accuracy (area under the curve, AUC). The 3rd-order RMS of the posterior surface (AUC: 0.928) obtained the highest discriminating capability in sub-clinical KC eyes. The multimeric function, which consisted of the Zernike fitting indices of corneal posterior elevation, showed the highest discriminant ability (AUC: 0.951). Indices generated from the elevation of the posterior surface and thickness measurements over the entire cornea using the Zernike method based on the Pentacam HR system were able to identify very early KC.

  10. SOMBI: Bayesian identification of parameter relations in unstructured cosmological data

    NASA Astrophysics Data System (ADS)

    Frank, Philipp; Jasche, Jens; Enßlin, Torsten A.

    2016-11-01

    This work describes the implementation and application of a correlation determination method based on self-organizing maps and Bayesian inference (SOMBI). SOMBI aims to automatically identify relations between different observed parameters in unstructured cosmological or astrophysical surveys by identifying data clusters in high-dimensional datasets via the self-organizing map neural network algorithm. Parameter relations are then revealed by means of a Bayesian inference within the respective identified data clusters. Specifically, such relations are assumed to be parametrized as a polynomial of unknown order. The Bayesian approach yields a posterior probability distribution function for the respective polynomial coefficients. To decide which polynomial order suffices to describe the correlation structures in the data, we include a model selection method, the Bayesian information criterion, in the analysis. The performance of the SOMBI algorithm is tested with mock data. As an illustration we also provide applications of our method to cosmological data. In particular, we present results of a correlation analysis between galaxy and active galactic nucleus (AGN) properties provided by the SDSS catalog and the cosmic large-scale structure (LSS). The results indicate that the combined galaxy and LSS dataset is indeed clustered into several sub-samples of data with different average properties (for example, different stellar masses or web-type classifications). The majority of data clusters appear to have a similar correlation structure between galaxy properties and the LSS. In particular, we revealed a positive and linear dependence of the stellar mass, the absolute magnitude and the color of a galaxy on the corresponding cosmic density field. A remaining subset of the data shows inverted correlations, which might be an artifact of non-linear redshift distortions.
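
The model-selection step, choosing the polynomial order by the Bayesian information criterion, can be sketched as follows. This uses ordinary least-squares fits and a simple BIC formula as an illustration of the criterion, not the SOMBI pipeline; the test data is a synthetic, truly linear relation with a small deterministic perturbation.

```python
import numpy as np

def select_poly_order(x, y, max_order=5):
    """Pick the polynomial order minimizing BIC = n*ln(RSS/n) + k*ln(n),
    where k counts the fitted coefficients."""
    n = len(x)
    best_order, best_bic = 0, np.inf
    for m in range(max_order + 1):
        resid = y - np.polyval(np.polyfit(x, y, m), x)
        rss = float(resid @ resid) + 1e-300       # guard against log(0)
        bic = n * np.log(rss / n) + (m + 1) * np.log(n)
        if bic < best_bic:
            best_order, best_bic = m, bic
    return best_order

x = np.linspace(-1.0, 1.0, 200)
y = 1.0 + 2.0 * x + 0.05 * np.sin(37.0 * x)   # linear trend plus perturbation
order = select_poly_order(x, y)
```

The ln(n) penalty makes BIC prefer the lowest order that explains the data, mirroring the paper's choice of polynomial order for the cluster-wise parameter relations.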

  11. Non-model-based damage identification of plates using principal, mean and Gaussian curvature mode shapes

    DOE PAGES

    Xu, Yongfeng F.; Zhu, Weidong D.; Smith, Scott A.

    2017-07-01

    Mode shapes (MSs) have been extensively used to identify structural damage. This paper presents a new non-model-based method that uses principal, mean and Gaussian curvature MSs (CMSs) to identify damage in plates; the method is applicable and robust to MSs associated with low and high elastic modes on dense and coarse measurement grids. A multi-scale discrete differential-geometry scheme is proposed to calculate principal, mean and Gaussian CMSs associated with a MS of a plate, which can alleviate adverse effects of measurement noise on calculating the CMSs. Principal, mean and Gaussian CMSs of a damaged plate and those of an undamaged one are used to yield four curvature damage indices (CDIs), including Maximum-CDIs, Minimum-CDIs, Mean-CDIs and Gaussian-CDIs. Damage can be identified near regions with consistently higher values of the CDIs. It is shown that a MS of an undamaged plate can be well approximated using a polynomial with a properly determined order that fits a MS of a damaged one, provided that the undamaged plate has a smooth geometry and is made of material that has no stiffness and mass discontinuities. New fitting and convergence indices are proposed to quantify the level of approximation of a MS from a polynomial fit to that of a damaged plate and to determine the proper order of the polynomial fit, respectively. A MS of an aluminum plate with damage in the form of a machined thickness reduction area was measured to experimentally investigate the effectiveness of the proposed CDIs in damage identification; the damage on the plate was successfully identified.
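
    A one-dimensional sketch of the underlying idea: approximate the damaged mode shape by a smooth polynomial, then flag locations where the measured curvature departs from the curvature of the fit. The beam-like profile, damage position and polynomial order below are hypothetical choices for illustration only:

```python
import numpy as np

# 1-D stand-in for a plate: damaged mode shape = smooth baseline + local defect
x = np.linspace(0.0, 1.0, 401)
baseline = np.sin(np.pi * x)                          # "undamaged" mode shape
bump = 2e-3 * np.exp(-((x - 0.6) / 0.02)**2)          # hypothetical damage signature
measured = baseline + bump

# a polynomial fit smooths over the narrow defect, approximating the undamaged shape
coeffs = np.polyfit(x, measured, 7)                   # order chosen by inspection here
fit_curv = np.polyval(np.polyder(coeffs, 2), x)       # curvature of the fit
meas_curv = np.gradient(np.gradient(measured, x), x)  # curvature of the measurement

cdi = np.abs(meas_curv - fit_curv)                    # curvature-damage-index analogue
print(x[np.argmax(cdi)])                              # peaks near the defect at x = 0.6
```

    The small, narrow defect is nearly invisible in the mode shape itself but dominates the curvature difference, which is why curvature-based indices localize damage well.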

  13. Time-optimal Aircraft Pursuit-evasion with a Weapon Envelope Constraint

    NASA Technical Reports Server (NTRS)

    Menon, P. K. A.

    1990-01-01

    The optimal pursuit-evasion problem between two aircraft, including a realistic weapon envelope, is analyzed using differential game theory. Sixth-order nonlinear point-mass vehicle models are employed, and an arbitrary weapon envelope geometry is allowed. The performance index is a linear combination of flight time and the square of the vehicle acceleration. A closed-form solution to this high-order differential game is then obtained using feedback linearization. The solution is in the form of a feedback guidance law together with a quartic polynomial for time-to-go. Due to its modest computational requirements, this nonlinear guidance law is useful for on-board real-time implementation.
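
    The on-board step of solving a quartic for time-to-go can be illustrated as follows. The coefficients below are hypothetical placeholders; in the guidance law they would be functions of the relative state and the weapon envelope:

```python
import numpy as np

# hypothetical quartic in time-to-go: a4*t^4 + a3*t^3 + a2*t^2 + a1*t + a0
coeffs = [1.0, -3.0, -2.0, 4.0, -0.5]
roots = np.roots(coeffs)

# keep real roots and take the smallest positive one as time-to-go
real = roots[np.abs(roots.imag) < 1e-9].real
t_go = real[real > 0.0].min()
print(t_go)   # smallest positive real root of the quartic
```

    A quartic has closed-form roots, so this step is cheap and deterministic, consistent with the paper's point about modest computational requirements.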

  14. Application of wall-models to discontinuous Galerkin LES

    NASA Astrophysics Data System (ADS)

    Frère, Ariane; Carton de Wiart, Corentin; Hillewaert, Koen; Chatelain, Philippe; Winckelmans, Grégoire

    2017-08-01

    Wall-resolved Large-Eddy Simulations (LES) are still limited to moderate Reynolds number flows due to the high computational cost required to capture the inner part of the boundary layer. Wall-modeled LES (WMLES) provide more affordable LES by modeling the near-wall layer. Wall function-based WMLES solve LES equations up to the wall, where the coarse mesh resolution essentially renders the calculation under-resolved. This makes the accuracy of WMLES very sensitive to the behavior of the numerical method. Therefore, best practice rules regarding the use and implementation of WMLES cannot be directly transferred from one methodology to another regardless of the type of discretization approach. Whilst numerous studies present guidelines on the use of WMLES, there is a lack of knowledge for discontinuous finite-element-like high-order methods. Incidentally, these methods are increasingly used on account of their high accuracy on unstructured meshes and their strong computational efficiency. The present paper proposes best practice guidelines for the use of WMLES in these methods. The study is based on sensitivity analyses of turbulent channel flow simulations by means of a Discontinuous Galerkin approach. It appears that good results can be obtained without the use of spatial or temporal averaging. The study confirms the importance of the wall function input data location and suggests taking it at the bottom of the second off-wall element. Since these data are available through the ghost element, the suggested method prevents the loss of computational scalability experienced in unstructured WMLES. The study also highlights the influence of the polynomial degree used in the wall-adjacent element. It should preferably be of even degree, as using polynomials of degree two in the first off-wall element provides, surprisingly, better results than using polynomials of degree three.

  15. Positivity-preserving numerical schemes for multidimensional advection

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Macvean, M. K.; Lock, A. P.

    1993-01-01

    This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found, but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.
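
    As a hedged one-dimensional sketch of the ingredients (UTOPIA itself is genuinely multidimensional and, with the limiter, positivity-preserving), the following combines a conservative finite-volume update, a third-order upwind (QUICK-type) face interpolation, and a two-stage Runge-Kutta step of our own choosing:

```python
import numpy as np

def tendency(phi, courant):
    # QUICK face value at i+1/2 for a positive advection velocity:
    # phi_f = (3*phi[i+1] + 6*phi[i] - phi[i-1]) / 8
    phi_f = (3.0 * np.roll(phi, -1) + 6.0 * phi - np.roll(phi, 1)) / 8.0
    # conservative update: the same face flux leaves one cell and enters the next
    return -courant * (phi_f - np.roll(phi_f, 1))

def step(phi, courant):
    # two-stage Runge-Kutta (Heun) time integration
    k1 = tendency(phi, courant)
    k2 = tendency(phi + k1, courant)
    return phi + 0.5 * (k1 + k2)

n = 64
x = np.arange(n) / n                    # periodic grid on [0, 1)
phi = np.exp(-((x - 0.5) / 0.1)**2)     # initial profile
total0 = phi.sum()
for _ in range(100):
    phi = step(phi, 0.2)                # Courant number 0.2

print(abs(phi.sum() - total0))          # conservation: essentially zero
print(x[np.argmax(phi)])                # profile advected by 100*0.2/64 = 0.3125
```

    Because each face flux is shared by the two adjacent cells, the total of phi is conserved to round-off regardless of the interpolation used; positivity, as the abstract notes, requires the additional limiter.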

  16. Robust consensus control with guaranteed rate of convergence using second-order Hurwitz polynomials

    NASA Astrophysics Data System (ADS)

    Fruhnert, Michael; Corless, Martin

    2017-10-01

    This paper considers homogeneous networks of general, linear time-invariant, second-order systems. We consider linear feedback controllers and require that the directed graph associated with the network contains a spanning tree and systems are stabilisable. We show that consensus with a guaranteed rate of convergence can always be achieved using linear state feedback. To achieve this, we provide a new and simple derivation of the conditions for a second-order polynomial with complex coefficients to be Hurwitz. We apply this result to obtain necessary and sufficient conditions to achieve consensus with networks whose graph Laplacian matrix may have complex eigenvalues. Based on the conditions found, methods to compute feedback gains are proposed. We show that gains can be chosen such that consensus is achieved robustly over a variety of communication structures and system dynamics. We also consider the use of static output feedback.
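
    The paper derives closed-form Hurwitz conditions for second-order polynomials with complex coefficients; a numerical stand-in (not the paper's conditions) simply checks that all roots lie in the open left half-plane:

```python
import numpy as np

def is_hurwitz(coeffs):
    """True iff every root of the polynomial (complex coefficients allowed,
    highest degree first) lies in the open left half-plane."""
    return bool(np.all(np.roots(coeffs).real < 0.0))

# s^2 + (2+j)s + (1+j): roots are -1 and -1-j, so Hurwitz
print(is_hurwitz([1.0, 2.0 + 1.0j, 1.0 + 1.0j]))    # True
# s^2 + (-1+j)s + 1: the root sum is 1-j, so a root lies in the right half-plane
print(is_hurwitz([1.0, -1.0 + 1.0j, 1.0]))          # False
```

    Complex coefficients arise here because the graph Laplacian of a directed network can have complex eigenvalues, so the usual real Routh-Hurwitz test does not apply directly.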

  17. Numerical Methods for Nonlinear Fokker-Planck Collision Operator in TEMPEST

    NASA Astrophysics Data System (ADS)

    Kerbel, G.; Xiong, Z.

    2006-10-01

    Early implementations of the Fokker-Planck collision operator and moment computations in TEMPEST used low-order polynomial interpolation schemes to reuse conservative operators developed for speed/pitch-angle (v, θ) coordinates. When this approach proved to be too inaccurate, we developed an alternative higher-order interpolation scheme for the Rosenbluth potentials and a high-order finite volume method in TEMPEST (,) coordinates. The collision operator is thus generated by using the expansion technique in (v, θ) coordinates for the diffusion coefficients only, and then the fluxes for the conservative differencing are computed directly in the TEMPEST (,) coordinates. Combined with a cut-cell treatment at the turning-point boundary, this new approach is shown to have much better accuracy and conservation properties.

  18. A simple, robust and efficient high-order accurate shock-capturing scheme for compressible flows: Towards minimalism

    NASA Astrophysics Data System (ADS)

    Ohwada, Taku; Shibata, Yuki; Kato, Takuma; Nakamura, Taichi

    2018-06-01

    Developed is a high-order accurate shock-capturing scheme for the compressible Euler/Navier-Stokes equations; the formal accuracy is 5th order in space and 4th order in time. The performance and efficiency of the scheme are validated in various numerical tests. The main ingredients of the scheme are nothing special; they are variants of the standard numerical flux, MUSCL, the usual Lagrange polynomial and the conventional Runge-Kutta method. The scheme can compute a boundary layer accurately with a rational resolution and capture a stationary contact discontinuity sharply without inner points. And yet it is endowed with high resistance against shock anomalies (carbuncle phenomenon, post-shock oscillations, etc.). A good balance between high robustness and low dissipation is achieved by blending three types of numerical fluxes according to the physical situation in an intuitively easy-to-understand way. The performance of the scheme is largely comparable to that of WENO5-Rusanov, while its computational cost is 30-40% less than that of the advanced scheme.

  19. Spectral (Finite) Volume Method for Conservation Laws on Unstructured Grids II: Extension to Two Dimensional Scalar Equation

    NASA Technical Reports Server (NTRS)

    Wang, Z. J.; Liu, Yen; Kwak, Dochan (Technical Monitor)

    2002-01-01

    The framework for constructing a high-order, conservative Spectral (Finite) Volume (SV) method is presented for two-dimensional scalar hyperbolic conservation laws on unstructured triangular grids. Each triangular grid cell forms a spectral volume (SV), and the SV is further subdivided into polygonal control volumes (CVs) to support high-order data reconstruction. Cell-averaged solutions from these CVs are used to reconstruct a high-order polynomial approximation in the SV. Each CV is then updated independently with a Godunov-type finite volume method and a high-order Runge-Kutta time integration scheme. A universal reconstruction is obtained by partitioning all SVs in a geometrically similar manner. The convergence of the SV method is shown to depend on how a SV is partitioned. A criterion based on the Lebesgue constant has been developed and used successfully to determine the quality of various partitions. Symmetric, stable, and convergent linear, quadratic, and cubic SVs have been obtained, and many different types of partitions have been evaluated. The SV method is tested for both linear and non-linear model problems with and without discontinuities.

  20. Cylinder stitching interferometry: with and without overlap regions

    NASA Astrophysics Data System (ADS)

    Peng, Junzheng; Chen, Dingfu; Yu, Yingjie

    2017-06-01

    Since the cylinder surface is closed and periodic in the azimuthal direction, existing stitching methods cannot be used to yield the 360° form map. To address this problem, this paper presents two methods for stitching interferometry of a cylinder: one requires overlap regions, and the other does not. For the former, we use the first-order approximation of the cylindrical coordinate transformation to build the stitching model, with which the relative parameters between adjacent sub-apertures can be calculated. For the latter, a set of orthogonal polynomials, termed Legendre Fourier (LF) polynomials, is developed. With these polynomials, individual sub-aperture data can be expanded as a composition of the inherent form of the partial cylinder surface and additional misalignment parameters. The 360° form map can then be acquired by simultaneously fitting all sub-aperture data with LF polynomials. Finally, the two proposed methods are compared under various conditions. The merits and drawbacks of each stitching method are consequently revealed to provide guidance for acquiring the 360° form map of a precision cylinder.

  1. Mapping Landslides in Lunar Impact Craters Using Chebyshev Polynomials and DEM's

    NASA Astrophysics Data System (ADS)

    Yordanov, V.; Scaioni, M.; Brunetti, M. T.; Melis, M. T.; Zinzi, A.; Giommi, P.

    2016-06-01

    Geological slope failure processes have been observed on the Moon's surface for decades; nevertheless, a detailed and exhaustive lunar landslide inventory has not yet been produced. For a preliminary survey, WAC images and DEM maps from LROC at 100 m/pixel have been exploited in combination with the criteria applied by Brunetti et al. (2015) to detect the landslides. These criteria are based on the visual analysis of optical images to recognize mass wasting features. In the literature, Chebyshev polynomials have been applied to interpolate crater cross-sections in order to obtain a parametric characterization useful for classification into different morphological shapes. Here a new implementation of Chebyshev polynomial approximation is proposed, taking into account some statistical testing of the results obtained during least-squares estimation. The presence of landslides in lunar craters is then investigated by analyzing the absolute values of the odd coefficients of the estimated Chebyshev polynomials. A case study on the Cassini A crater has demonstrated the key points of the proposed methodology and outlined the future developments required to carry it out.
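
    The odd-coefficient idea can be sketched on synthetic cross-sections (not LROC data): a symmetric bowl has vanishing odd Chebyshev coefficients, while a one-sided deposit produces large ones. The profiles and degree below are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(-1.0, 1.0, 201)                     # normalized cross-section abscissa
symmetric = -np.exp(-(t / 0.5)**2)                  # idealized bowl-shaped crater
slumped = symmetric + 0.3 * np.exp(-((t - 0.4) / 0.2)**2)   # deposit on one wall

def odd_energy(profile, deg=12):
    """Sum of |odd Chebyshev coefficients|: an asymmetry indicator."""
    c = C.chebfit(t, profile, deg)
    return float(np.sum(np.abs(c[1::2])))

print(odd_energy(symmetric))   # essentially zero for a symmetric profile
print(odd_energy(slumped))     # clearly non-zero for the slumped profile
```

    Since odd Chebyshev polynomials are odd functions, their coefficients isolate exactly the asymmetric part of the cross-section, which is what a landslide deposit contributes.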

  2. New realisation of Preisach model using adaptive polynomial approximation

    NASA Astrophysics Data System (ADS)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasing accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model of hysteresis, and it can be represented by an infinite but countable set of first-order reversal curves (FORCs). The usage of look-up tables is one way to implement the CPM in practice; the data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using the least-squares approximation or an adaptive identification algorithm, which also enables accurate tracking of the hysteresis model parameters.
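
    A minimal sketch of the table-versus-polynomial trade-off, with a hypothetical reversal curve standing in for measured FORC data:

```python
import numpy as np

h = np.linspace(-1.0, 1.0, 500)             # field samples (the table abscissa)
forc = np.tanh(3.0 * h) + 0.05 * h          # hypothetical first-order reversal curve

deg = 7
coeffs = np.polyfit(h, forc, deg)           # 8 coefficients replace 500 table entries
max_err = np.max(np.abs(np.polyval(coeffs, h) - forc))
print(len(coeffs), max_err)                 # small storage, small approximation error
```

    The coefficient vector can also be updated online (e.g. by recursive least squares), which addresses the second problem the abstract raises about modifying a static data table.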

  3. A locally p-adaptive approach for Large Eddy Simulation of compressible flows in a DG framework

    NASA Astrophysics Data System (ADS)

    Tugnoli, Matteo; Abbà, Antonella; Bonaventura, Luca; Restelli, Marco

    2017-11-01

    We investigate the possibility of reducing the computational burden of LES models by employing local polynomial degree adaptivity in the framework of a high-order DG method. A novel degree adaptation technique, specifically designed to be effective for LES applications, is proposed, and its effectiveness is compared to that of other criteria already employed in the literature. The resulting locally adaptive approach achieves significant reductions in the computational cost of representative LES computations.

  4. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.

  5. Correction factors for on-line microprobe analysis of multielement alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Brewer, W. D.

    1977-01-01

    An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.

  6. Acute effects of static stretching on passive stiffness of the hamstring muscles calculated using different mathematical models.

    PubMed

    Nordez, Antoine; Cornu, Christophe; McNair, Peter

    2006-08-01

    The aim of this study was to assess the effects of static stretching on hamstring passive stiffness calculated using different data reduction methods. Subjects performed a maximal range of motion test, five cyclic stretching repetitions and a static stretching intervention that involved five 30-s static stretches. A computerised dynamometer allowed the measurement of torque and range of motion during passive knee extension. Stiffness was then calculated as the slope of the torque-angle relationship fitted using a second-order polynomial, a fourth-order polynomial, and an exponential model. The second-order polynomial and exponential models allowed the calculation of stiffness indices normalized to knee angle and passive torque, respectively. Prior to static stretching, stiffness levels were significantly different across the models. After stretching, while knee maximal joint range of motion increased, stiffness was shown to decrease. Stiffness decreased more at the extended knee joint angle, and the magnitude of change depended upon the model used. After stretching, the stiffness indices also varied according to the model used to fit data. Thus, the stiffness index normalized to knee angle was found to decrease whereas the stiffness index normalized to passive torque increased after static stretching. Stretching has significant effects on stiffness, but the findings highlight the need to carefully assess the effect of different models when analyzing such data.
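
    The data-reduction step can be sketched as follows: stiffness is the slope of the fitted torque-angle model at a chosen angle. The noise-free synthetic torque curve below is an assumption for illustration; with real data the fitted models, and hence the stiffness values, would differ across models as the abstract reports:

```python
import numpy as np

angle = np.linspace(0.0, 80.0, 200)                # knee angle (degrees)
torque = 0.002 * angle**2 + 0.05 * angle           # hypothetical passive torque (N*m)

# second-order polynomial model: torque = a2*q**2 + a1*q + a0
p2 = np.polyfit(angle, torque, 2)
stiffness_p2 = np.polyval(np.polyder(p2), 70.0)    # slope d(torque)/d(angle) at 70 deg

# fourth-order polynomial model fitted to the same data
p4 = np.polyfit(angle, torque, 4)
stiffness_p4 = np.polyval(np.polyder(p4), 70.0)

print(stiffness_p2, stiffness_p4)   # both recover 0.004*70 + 0.05 = 0.33 here
```

    On noisy experimental curves the second-order, fourth-order and exponential models extrapolate curvature differently, which is exactly why the study's stiffness estimates depend on the chosen model.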

  7. A coupled electro-thermal Discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Homsi, L.; Geuzaine, C.; Noels, L.

    2017-11-01

    This paper presents a Discontinuous Galerkin scheme to solve the nonlinear elliptic partial differential equations of coupled electro-thermal problems. We discuss the fundamental equations for the transport of electricity and heat in terms of macroscopic variables such as temperature and electric potential. A fully coupled nonlinear weak formulation for electro-thermal problems is developed based on continuum mechanics equations expressed in terms of an energetically conjugated pair of fluxes and fields gradients. The weak form can thus be formulated as a Discontinuous Galerkin method. The existence and uniqueness of the weak form solution are proved. The numerical properties of the nonlinear elliptic problems, i.e. consistency and stability, are demonstrated under specific conditions, namely the use of a sufficiently large stabilization parameter and of at least quadratic polynomial approximations. Moreover, the a priori error estimates in the H1-norm and in the L2-norm are shown to be optimal in the mesh size with the polynomial approximation degree.

  8. Entropy-stable summation-by-parts discretization of the Euler equations on general curved elements

    NASA Astrophysics Data System (ADS)

    Crean, Jared; Hicken, Jason E.; Del Rey Fernández, David C.; Zingg, David W.; Carpenter, Mark H.

    2018-03-01

    We present and analyze an entropy-stable semi-discretization of the Euler equations based on high-order summation-by-parts (SBP) operators. In particular, we consider general multidimensional SBP elements, building on and generalizing previous work with tensor-product discretizations. In the absence of dissipation, we prove that the semi-discrete scheme conserves entropy; significantly, this proof of nonlinear L2 stability does not rely on integral exactness. Furthermore, interior penalties can be incorporated into the discretization to ensure that the total (mathematical) entropy decreases monotonically, producing an entropy-stable scheme. SBP discretizations with curved elements remain accurate, conservative, and entropy stable provided the mapping Jacobian satisfies the discrete metric invariants; polynomial mappings at most one degree higher than the SBP operators automatically satisfy the metric invariants in two dimensions. In three dimensions, we describe an elementwise optimization that leads to suitable Jacobians in the case of polynomial mappings. The properties of the semi-discrete scheme are verified and investigated using numerical experiments.

  9. A new basis set for molecular bending degrees of freedom.

    PubMed

    Jutier, Laurent

    2010-07-21

    We present a new basis set as an alternative to Legendre polynomials for the variational treatment of bending vibrational degrees of freedom in order to greatly reduce the number of basis functions. This basis set is inspired by the harmonic oscillator eigenfunctions but is defined for a bending angle θ in the range [0, π]. The aim is to bring the basis functions closer to the nature of the final (ro)vibronic wave functions. Our methodology is extended to complicated potential energy surfaces, such as those with quasilinearity or multiple equilibrium geometries, by using several free parameters in the basis functions. These parameters allow several density maxima, linear or not, around which the basis functions will be mainly located. Divergences at linearity in integral computations are resolved as for generalized Legendre polynomials. All integral computations required for the evaluation of molecular Hamiltonian matrix elements are given for both the discrete variable representation and the finite basis representation. Convergence tests for the low-energy vibronic states of HCCH(++), HCCH(+), and HCCS are presented.

  10. Computational aspects of pseudospectral Laguerre approximations

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele

    1989-01-01

    Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. A scaling function and appropriate numerical procedures are introduced in order to limit these unpleasant phenomena.

  11. Killings, duality and characteristic polynomials

    NASA Astrophysics Data System (ADS)

    Álvarez, Enrique; Borlaf, Javier; León, José H.

    1998-03-01

    In this paper the complete geometrical setting of (lowest order) abelian T-duality is explored with the help of some new geometrical tools (the reduced formalism). In particular, all invariant polynomials (the integrands of the characteristic classes) can be explicitly computed for the dual model in terms of quantities pertaining to the original one, with the help of the canonical connection, whose intrinsic characterization is given. Using our formalism, the physically relevant, T-duality invariant result that top forms vanish when there is an isometry without fixed points is easily proved.

  12. Axisymmetric solid elements by a rational hybrid stress method

    NASA Technical Reports Server (NTRS)

    Tian, Z.; Pian, T. H. H.

    1985-01-01

    Four-node axisymmetric solid elements are derived by a new version of the hybrid method in which the assumed stresses are expressed as complete polynomials in natural coordinates. The stress equilibrium conditions are introduced through the use of additional displacements as Lagrange multipliers. A rational procedure is to choose the displacement terms such that the resulting strains are also complete polynomials of the same order. Example problems all indicate that elements obtained by this procedure lead to better results in displacements and stresses than those obtained by other finite elements.

  13. Identification of stochastic interactions in nonlinear models of structural mechanics

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2017-07-01

    In this paper, a polynomial approximation is presented by which the Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated. The input domain is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of simulation runs of the Latin Hypercube Sampling method can be applied. The method presented also makes it possible to evaluate higher-order sensitivity indices, which could not be identified in the case of the nonlinear FEM.

  14. Quantitative Boltzmann-Gibbs Principles via Orthogonal Polynomial Duality

    NASA Astrophysics Data System (ADS)

    Ayala, Mario; Carinci, Gioia; Redig, Frank

    2018-06-01

    We study fluctuation fields of orthogonal polynomials in the context of particle systems with duality. We thereby obtain a systematic orthogonal decomposition of the fluctuation fields of local functions, where the order of every term can be quantified. This implies a quantitative generalization of the Boltzmann-Gibbs principle. In the context of independent random walkers, we complete this program, also including fluctuation fields in a non-stationary context (local equilibrium). For other interacting particle systems with duality, such as the symmetric exclusion process, similar results can be obtained under precise conditions on the n-particle dynamics.

  15. Combining freeform optics and curved detectors for wide field imaging: a polynomial approach over squared aperture.

    PubMed

    Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe

    2017-06-26

    In recent years significant progress has been achieved in the design and fabrication of optical systems based on freeform optical surfaces. They provide the possibility to build fast, wide-angle and high-resolution systems that are very compact and free of obscuration. However, the field of freeform surface design techniques still remains underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, developed previously for the tasks of wavefront reconstruction, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.
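
    The Legendre-polynomial case can be sketched by building a 2-D basis of Legendre products over the square and fitting a surface with it. The synthetic sag below is an assumption; real freeform design optimizes the coefficients against optical merit functions rather than fitting a known surface:

```python
import numpy as np
from numpy.polynomial import legendre as L

n = 41
u = np.linspace(-1.0, 1.0, n)
X, Y = np.meshgrid(u, u)

def leg2d(i, j):
    """Product basis P_i(x) * P_j(y) over the square [-1, 1]^2."""
    ci = np.zeros(i + 1); ci[i] = 1.0
    cj = np.zeros(j + 1); cj[j] = 1.0
    return L.legval(X, ci) * L.legval(Y, cj)

# synthetic "freeform" sag with astigmatism-like and coma-like terms
sag = 0.5 * (X**2 - Y**2) + 0.1 * X * Y**2

# least-squares fit with all Legendre products up to degree 3 per variable
basis = [leg2d(i, j).ravel() for i in range(4) for j in range(4)]
A = np.array(basis).T
c, *_ = np.linalg.lstsq(A, sag.ravel(), rcond=None)
fit = (A @ c).reshape(n, n)
print(np.max(np.abs(fit - sag)))   # this polynomial surface is represented exactly
```

    Orthogonality of the products over the square keeps the coefficients nearly decoupled, which is the practical advantage over a raw monomial description of the mirror sag.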

  16. Nonlinear Fourier transform—towards the construction of nonlinear Fourier modes

    NASA Astrophysics Data System (ADS)

    Saksida, Pavle

    2018-01-01

    We study a version of the nonlinear Fourier transform associated with ZS-AKNS systems. This version is suitable for the construction of nonlinear analogues of Fourier modes, and for the perturbation-theoretic study of their superposition. We provide an iterative scheme for computing the inverse of our transform. The relevant formulae are expressed in terms of Bell polynomials and functions related to them. In order to prove the validity of our iterative scheme, we show that our transform has the necessary analytic properties. We show that up to order three of the perturbation parameter, the nonlinear Fourier mode is a complex sinusoid modulated by the second Bernoulli polynomial. We describe an application of the nonlinear superposition of two modes to a problem of transmission through a nonlinear medium.

  17. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely themore » exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. 
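
    The key PCE idea, expanding the response onto orthogonal polynomials of the input, can be sketched for a single standard-normal input using probabilists' Hermite polynomials (a toy model of our own, not one of the paper's applications):

```python
import numpy as np
from numpy.polynomial import hermite_e as H

rng = np.random.default_rng(1)
xi = rng.standard_normal(2000)      # samples of the standard-normal input
y = xi**2 + 2.0 * xi + 3.0          # toy "model"; note x**2 = He_2(x) + 1

deg = 3
# design matrix with columns He_0(xi), ..., He_3(xi)
A = np.column_stack([H.hermeval(xi, np.eye(deg + 1)[k]) for k in range(deg + 1)])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(coef, 6))   # [4, 2, 1, 0]: y = 4*He_0 + 2*He_1 + 1*He_2
# the PCE mean is coef[0] = 4 and the variance is sum_{k>=1} coef[k]^2 * k! = 6
```

    In many dimensions the basis becomes tensor products of such univariate polynomials, and its size grows exponentially, which is the curse of dimensionality that sparse PCE and LRA both attack.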
Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.« less
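The "curse of dimensionality" mentioned in this record can be made concrete: a full polynomial basis of total degree at most p in d input dimensions contains C(d + p, p) terms. A minimal stand-alone illustration (the function name is ours, not from the paper):

```python
from math import comb

def pce_basis_size(d, p):
    """Number of multivariate polynomial basis terms of total degree <= p
    in d input dimensions: C(d + p, p)."""
    return comb(d + p, p)

# For a fixed degree p = 5 the full basis quickly becomes intractable,
# which is what motivates sparse PCE and low-rank approximations.
for d in (2, 5, 10, 20, 50):
    print(d, pce_basis_size(d, 5))
```

For d = 50 inputs a degree-5 basis already has about 3.5 million terms, which is why sparse term selection or low-rank compression becomes necessary.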

  18. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on the maximum likelihood; (2) for the simulated dataset with a complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
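The B-spline machinery underlying such models can be sketched with the Cox-de Boor recursion. This toy snippet (our own illustration, not the authors' code) evaluates a cubic basis and checks the partition-of-unity property that makes B-splines a convenient regression basis for smooth growth trajectories:

```python
def bspline_basis(i, k, t, knots):
    """Cox-de Boor recursion: value at t of the i-th B-spline basis function
    of order k (degree k - 1) for the given knot vector."""
    if k == 1:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    left = knots[i + k - 1] - knots[i]
    if left > 0:
        out += (t - knots[i]) / left * bspline_basis(i, k - 1, t, knots)
    right = knots[i + k] - knots[i + 1]
    if right > 0:
        out += (knots[i + k] - t) / right * bspline_basis(i + 1, k - 1, t, knots)
    return out

# Cubic (order-4) basis on a uniform knot vector: inside the fully supported
# span [knots[3], knots[4]] the basis functions sum to one.
knots = [0, 1, 2, 3, 4, 5, 6, 7]
t = 3.5
total = sum(bspline_basis(i, 4, t, knots) for i in range(len(knots) - 4))
print(round(total, 12))
```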

  19. A 2 epoch proper motion catalogue from the UKIDSS Large Area Survey

    NASA Astrophysics Data System (ADS)

    Smith, Leigh; Lucas, Phil; Burningham, Ben; Jones, Hugh; Pinfield, David; Smart, Ricky; Andrei, Alexandre

    2013-04-01

The UKIDSS Large Area Survey (LAS) began in 2005, with the start of the UKIDSS program, as a 7 year effort to survey roughly 4000 square degrees at high galactic latitudes in the Y, J, H and K bands. The survey also included a significant quantity of two-epoch J band observations, with epoch baselines ranging from 2 to 7 years. We present a proper motion catalogue for the 1500 square degrees of the two-epoch LAS data, which includes some 800,000 sources with motions detected above the 5σ level. We developed a bespoke proper motion pipeline which applies a source-unique second-order polynomial transformation to the UKIDSS array coordinates of each source to counter potential local non-uniformity in the focal plane. Our catalogue agrees well with the proper motion data supplied in the current WFCAM Science Archive (WSA) DR9 catalogue where there is overlap, and with various optical catalogues, but it benefits from some improvements. One improvement is that we provide absolute proper motions, using LAS galaxies for the relative-to-absolute correction. Also, by using unique, local, second-order polynomial transformations, as opposed to the linear transformations in the WSA, we correct better for any local distortions in the focal plane, not including the radial distortion that is removed by their pipeline.
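The per-source quadratic transformation described above amounts to a least-squares fit of six coefficients per output coordinate. A self-contained sketch (our own illustration; not the pipeline's actual code) fits u = c0 + c1 x + c2 y + c3 x² + c4 xy + c5 y² by normal equations and recovers known coefficients from synthetic points:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def quad_terms(x, y):
    return [1.0, x, y, x * x, x * y, y * y]

def fit_quadratic_transform(src, dst):
    """Least-squares 2nd-order polynomial mapping (x, y) -> u via normal equations."""
    A = [[sum(quad_terms(*p)[i] * quad_terms(*p)[j] for p in src) for j in range(6)]
         for i in range(6)]
    b = [sum(quad_terms(*p)[i] * u for p, u in zip(src, dst)) for i in range(6)]
    return solve(A, b)

# Synthetic check: points generated by a known quadratic are recovered.
true_c = [0.5, 1.0, -0.2, 0.01, 0.03, -0.02]
src = [(x, y) for x in range(5) for y in range(5)]
dst = [sum(c * t for c, t in zip(true_c, quad_terms(x, y))) for x, y in src]
coef = fit_quadratic_transform(src, dst)
print([round(c, 6) for c in coef])
```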

  20. Performance analysis of 60-min to 1-min integration time rain rate conversion models in Malaysia

    NASA Astrophysics Data System (ADS)

    Ng, Yun-Yann; Singh, Mandeep Singh Jit; Thiruchelvam, Vinesh

    2018-01-01

Utilizing the frequency band above 10 GHz is in focus nowadays as a result of the fast expansion of radio communication systems in Malaysia. However, rain fade is the critical factor in the attenuation of signal propagation at frequencies above 10 GHz. Malaysia is located in a tropical and equatorial region with high rain intensity throughout the year; this study reviews rain distribution and evaluates the performance of 60-min to 1-min integration time rain rate conversion methods for Malaysia. Several conversion methods, such as Segal, Chebil & Rahman, Burgueño, Emiliani, Lavergnat and Gole (LG), Simplified Moupfouma, Joo et al., a fourth-order polynomial fit and a logarithmic model, were chosen to evaluate the ability to predict the 1-min rain rate at 10 sites in Malaysia. The results show that the Chebil & Rahman model, the Lavergnat & Gole model, the fourth-order polynomial fit and the logarithmic model perform best in 60-min to 1-min rain rate conversion over the 10 sites. No single model performs best across all 10 sites; however, averaging the RMSE and SC-RMSE over the 10 sites, the Chebil & Rahman model is the best method.

  1. Performance evaluation of an infrared thermocouple.

    PubMed

    Chen, Chiachung; Weng, Yu-Kai; Shen, Te-Ching

    2010-01-01

The measurement of the leaf temperature of forests or agricultural plants is an important technique for the monitoring of the physiological state of crops. The infrared thermometer is a convenient device due to its fast response and nondestructive measurement technique. Nowadays, a novel infrared thermocouple, developed with the same measurement principle as the infrared thermometer but using a different detector, has been commercialized for non-contact temperature measurement. The performances of two kinds of infrared thermocouples were evaluated in this study. The standard temperature was maintained by a temperature calibrator and a special black cavity device. The results indicated that both types of infrared thermocouples had good precision. The error distribution ranged from -1.8 °C to 18 °C when the reading values served as the true values. Within the range from 13 °C to 37 °C, the adequate calibration equations were high-order polynomial equations. Within the narrower range from 20 °C to 35 °C, the adequate equation was a linear equation for one sensor and a second-order polynomial equation for the other sensor. The accuracy of the two kinds of infrared thermocouple was improved by nearly 0.4 °C with the calibration equations. These devices could serve as mobile monitoring tools for in situ, real-time routine estimation of leaf temperatures.

  2. Color calibration of swine gastrointestinal tract images acquired by radial imaging capsule endoscope

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Wu, Hsien-Ming; Lin, Jyh-Hung

    2016-01-01

The type of illumination systems and color filters used typically generate varying levels of color difference in capsule endoscopes, which influence medical diagnoses. To calibrate the color difference caused by the optical system, this study applied a radial imaging capsule endoscope (RICE) to photograph standard color charts, which were then employed to calculate the color gamut of RICE. The color gamut was also measured using a spectrometer to obtain high-precision color information, and the results obtained using both methods were compared. Subsequently, two color-correction methods, polynomial transform and conformal mapping, were used to reduce the color difference. Before color calibration, the color difference value caused by the optical system in RICE was 21.45±1.09. Through the proposed polynomial transformation, the color difference could be reduced effectively to 1.53±0.07. With the proposed conformal mapping, the color difference value was further reduced to 1.32±0.11; this color difference is imperceptible to the human eye because it is <1.5. Finally, real-time color correction was achieved using this algorithm combined with a field-programmable gate array, and the results of the color correction can be viewed in real-time images.

  3. A new weak Galerkin finite element method for elliptic interface problems

    DOE PAGES

    Mu, Lin; Wang, Junping; Ye, Xiu; ...

    2016-08-26

We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in the L∞ norm for both C1 and H2 continuous solutions.

  4. A new weak Galerkin finite element method for elliptic interface problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Wang, Junping; Ye, Xiu

We introduce and analyze a new weak Galerkin (WG) finite element method in this paper for solving second order elliptic equations with discontinuous coefficients and interfaces. Compared with the existing WG algorithm for solving the same type of problems, the present WG method has a simpler variational formulation and fewer unknowns. Moreover, the new WG algorithm allows the use of finite element partitions consisting of general polytopal meshes and can be easily generalized to high orders. Optimal order error estimates in both H1 and L2 norms are established for the present WG finite element solutions. We conducted extensive numerical experiments in order to examine the accuracy, flexibility, and robustness of the proposed WG interface approach. In solving regular elliptic interface problems, high order convergences are numerically confirmed by using piecewise polynomial basis functions of high degrees. Moreover, the WG method is shown to be able to accommodate very complicated interfaces, due to its flexibility in choosing finite element partitions. Finally, in dealing with challenging problems with low regularities, the piecewise linear WG method is capable of delivering a second order of accuracy in the L∞ norm for both C1 and H2 continuous solutions.

  5. High Order Discontinuous Galerkin Methods for Convection Dominated Problems with Application to Aeroacoustics

    NASA Technical Reports Server (NTRS)

    Shu, Chi-Wang

    2000-01-01

    This project is about the investigation of the development of the discontinuous Galerkin finite element methods, for general geometry and triangulations, for solving convection dominated problems, with applications to aeroacoustics. On the analysis side, we have studied the efficient and stable discontinuous Galerkin framework for small second derivative terms, for example in Navier-Stokes equations, and also for related equations such as the Hamilton-Jacobi equations. This is a truly local discontinuous formulation where derivatives are considered as new variables. On the applied side, we have implemented and tested the efficiency of different approaches numerically. Related issues in high order ENO and WENO finite difference methods and spectral methods have also been investigated. Jointly with Hu, we have presented a discontinuous Galerkin finite element method for solving the nonlinear Hamilton-Jacobi equations. This method is based on the Runge-Kutta discontinuous Galerkin finite element method for solving conservation laws. The method has the flexibility of treating complicated geometry by using arbitrary triangulation, can achieve high order accuracy with a local, compact stencil, and is suited for efficient parallel implementation. One and two dimensional numerical examples are given to illustrate the capability of the method. Jointly with Hu, we have constructed third and fourth order WENO schemes on two dimensional unstructured meshes (triangles) in the finite volume formulation. The third order schemes are based on a combination of linear polynomials with nonlinear weights, and the fourth order schemes are based on a combination of quadratic polynomials with nonlinear weights. We have addressed several difficult issues associated with high order WENO schemes on unstructured mesh, including the choice of linear and nonlinear weights, what to do with negative weights, etc. 
Numerical examples are shown to demonstrate the accuracies and robustness of the methods for shock calculations. Jointly with P. Montarnal, we have used a recently developed energy relaxation theory by Coquel and Perthame and high order weighted essentially non-oscillatory (WENO) schemes to simulate the Euler equations of real gas. The main idea is an energy decomposition of the form epsilon = epsilon(sub 1) + epsilon(sub 2), where epsilon(sub 1) is associated with a simpler pressure law (a gamma-law in this paper) and the nonlinear deviation epsilon(sub 2) is convected with the flow. A relaxation process is performed at each time step to ensure that the original pressure law is satisfied. The necessary characteristic decomposition for the high order WENO schemes is performed on the characteristic fields based on the epsilon(sub 1) gamma-law. The algorithm only calls for the original pressure law once per grid point per time step, without the need to compute its derivatives or any Riemann solvers. Both one and two dimensional numerical examples are shown to illustrate the effectiveness of this approach.

  6. Well-conditioned fractional collocation methods using fractional Birkhoff interpolation basis

    NASA Astrophysics Data System (ADS)

    Jiao, Yujian; Wang, Li-Lian; Huang, Can

    2016-01-01

    The purpose of this paper is twofold. Firstly, we provide explicit and compact formulas for computing both Caputo and (modified) Riemann-Liouville (RL) fractional pseudospectral differentiation matrices (F-PSDMs) of any order at general Jacobi-Gauss-Lobatto (JGL) points. We show that in the Caputo case, it suffices to compute the F-PSDM of order μ ∈ (0 , 1) to obtain that of any order k + μ with integer k ≥ 0, while in the modified RL case, it is only necessary to evaluate a fractional integral matrix of order μ ∈ (0 , 1). Secondly, we introduce suitable fractional JGL Birkhoff interpolation problems leading to new interpolation polynomial basis functions with remarkable properties: (i) the matrix generated from the new basis yields the exact inverse of the F-PSDM at "interior" JGL points; (ii) the matrix of the highest fractional derivative in a collocation scheme under the new basis is diagonal; and (iii) the resulting linear system is well-conditioned in the Caputo case, while in the modified RL case, the eigenvalues of the coefficient matrix are highly concentrated. In both cases, the linear systems of the collocation schemes using the new basis can be solved by an iterative solver within a few iterations. Notably, the inverse can be computed in a very stable manner, so this offers optimal preconditioners for usual fractional collocation methods for fractional differential equations (FDEs). It is also noteworthy that the choice of certain special JGL points with parameters related to the order of the equations can ease the implementation. We highlight that the use of Bateman's fractional integral formulas and fast transforms between Jacobi polynomials with different parameters is essential for our algorithm development.

  7. Spectral-element Method for 3D Marine Controlled-source EM Modeling

    NASA Astrophysics Data System (ADS)

    Liu, L.; Yin, C.; Zhang, B., Sr.; Liu, Y.; Qiu, C.; Huang, X.; Zhu, J.

    2017-12-01

    As one of the predrill reservoir appraisal methods, marine controlled-source EM (MCSEM) has been widely used in mapping oil reservoirs to reduce the risk of deep water exploration. With the technical development of MCSEM, the need for improved forward modeling tools has become evident. We introduce in this paper the spectral element method (SEM) for 3D MCSEM modeling. It combines the flexibility of the finite-element method with the high accuracy of the spectral method. We use the Galerkin weighted residual method to discretize the vector Helmholtz equation, where curl-conforming Gauss-Lobatto-Chebyshev (GLC) polynomials are chosen as vector basis functions. As high-order complete orthogonal polynomials, the GLC polynomials have the characteristic of exponential convergence. This helps derive the matrix elements analytically and improves the modeling accuracy. Numerical 1D models using SEM with different orders show that the method delivers accurate results, and the modeling accuracy improves markedly with increasing SEM order. Further, we compare our SEM with the finite-difference (FD) method for a 3D reservoir model (Figure 1). The results show that the SEM is more effective than the FD method: only when the mesh is fine enough can FD achieve the same accuracy as SEM. Therefore, to obtain the same precision, SEM greatly reduces the degrees of freedom and cost. Numerical experiments with different models (not shown here) demonstrate that SEM is an efficient and effective tool for MCSEM modeling that has significant advantages over traditional numerical methods. This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900).
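Spectral-element bases are built on Gauss-Lobatto quadrature points. As a self-contained illustration (using the Legendre variant, GLL, rather than the Chebyshev points named in this record), the nodes are ±1 plus the roots of P_N', and the weights are 2/(N(N+1)P_N(x_i)²):

```python
import math

def legendre(n, x):
    """Legendre polynomial P_n(x) and derivative P_n'(x) via the three-term recurrence."""
    if n == 0:
        return 1.0, 0.0
    p0, p1 = 1.0, x
    for k in range(2, n + 1):
        p0, p1 = p1, ((2 * k - 1) * x * p1 - (k - 1) * p0) / k
    if x * x == 1.0:
        dp = x ** (n - 1) * n * (n + 1) / 2.0  # P_n'(+-1)
    else:
        dp = n * (x * p1 - p0) / (x * x - 1.0)
    return p1, dp

def gll_points_weights(N):
    """Gauss-Lobatto-Legendre nodes and weights on [-1, 1] for polynomial order N:
    interior nodes are the roots of P_N' (Newton iteration from Chebyshev guesses),
    and w_i = 2 / (N (N + 1) P_N(x_i)**2)."""
    nodes = [-1.0]
    for i in range(N - 1, 0, -1):          # interior nodes, left to right
        x = math.cos(math.pi * i / N)      # initial guess
        for _ in range(100):
            p, dp = legendre(N, x)
            d2p = (2.0 * x * dp - N * (N + 1) * p) / (1.0 - x * x)  # Legendre ODE
            step = dp / d2p
            x -= step
            if abs(step) < 1e-15:
                break
        nodes.append(x)
    nodes.append(1.0)
    weights = [2.0 / (N * (N + 1) * legendre(N, x)[0] ** 2) for x in nodes]
    return nodes, weights

# Order N = 4: the classic five GLL points 0, +-sqrt(3/7), +-1.
nodes, weights = gll_points_weights(4)
print([round(x, 6) for x in nodes])
print([round(w, 6) for w in weights])
```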

  8. B+ L violation at colliders and new physics

    NASA Astrophysics Data System (ADS)

    Cerdeño, David G.; Reimitz, Peter; Sakurai, Kazuki; Tamarit, Carlos

    2018-04-01

    Chiral electroweak anomalies predict baryon ( B) and lepton ( L) violating fermion interactions, which can be dressed with large numbers of Higgs and gauge bosons. The estimation of the total B + L-violating rate from an initial two-particle state — potentially observable at colliders — has been the subject of an intense discussion, mainly centered on the resummation of boson emission, which is believed to contribute to the cross-section with an exponential function of the energy, yet with an exponent (the "holy-grail" function) which is not fully known in the energy range of interest. In this article we focus instead on the effect of fermions beyond the Standard-Model (SM) in the polynomial contributions to the rate. It is shown that B + L processes involving the new fermions have a polynomial contribution that can be several orders of magnitude greater than in the SM, for high centre-of-mass energies and light enough masses. We also present calculations that hint at a simple dependence of the holy grail function on the heavy fermion masses. Thus, if anomalous B + L violating interactions are ever detected at high-energy colliders, they could be associated with new physics.

  9. Nonrelativistic Yang-Mills theory for a naturally light Higgs boson

    NASA Astrophysics Data System (ADS)

    Berthier, Laure; Grosvenor, Kevin T.; Yan, Ziqi

    2017-11-01

    We continue the study of the nonrelativistic short-distance completions of a naturally light Higgs, focusing on the interplay between the gauge symmetries and the polynomial shift symmetries. We investigate the naturalness of nonrelativistic scalar quantum electrodynamics with a dynamical critical exponent z =3 by computing leading power law divergences to the scalar propagator in this theory. We find that power law divergences exhibit a more refined structure in theories that lack boost symmetries. Finally, in this toy model, we show that it is possible to preserve a fairly large hierarchy between the scalar mass and the high-energy naturalness scale across 7 orders of magnitude, while accommodating a gauge coupling of order 0.1.

  10. Hamiltonian BVMs (HBVMs): Implementation Details and Applications

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Iavernaro, Felice; Susca, Tiziana

    2009-09-01

    Hamiltonian Boundary Value Methods are one-step schemes of high order where the internal stages are partly exploited to impose the order conditions (fundamental stages) and partly to confer on the formula the property of conserving the Hamiltonian function when this is a polynomial of a given degree v. The term "silent stages" has been coined for this latter set of extra stages to mean that their presence does not cause an increase of the dimension of the associated nonlinear system to be solved at each step. By considering a specific method in this class, we give some details about how the solution of the nonlinear system may be conveniently carried out and how to compensate for the effect of roundoff errors.
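The energy-conservation property for polynomial Hamiltonians can be illustrated with the simplest related one-step scheme: the implicit midpoint rule conserves any quadratic (degree-2) Hamiltonian exactly. A minimal sketch (ours, not an HBVM from the paper), for H = (p² + q²)/2:

```python
def midpoint_step(q, p, h):
    """One implicit midpoint step for q' = p, p' = -q (Hamiltonian H = (p^2 + q^2)/2).
    The 2x2 system (I - (h/2)A) y_new = (I + (h/2)A) y_old is solved in closed form."""
    d = 1.0 + (h / 2.0) ** 2
    q_new = ((1.0 - (h / 2.0) ** 2) * q + h * p) / d
    p_new = ((1.0 - (h / 2.0) ** 2) * p - h * q) / d
    return q_new, p_new

# The midpoint map is a Cayley transform of a skew-symmetric matrix, hence a
# rotation: the quadratic Hamiltonian is conserved to roundoff over many steps.
q, p, h = 1.0, 0.0, 0.1
H0 = 0.5 * (q * q + p * p)
for _ in range(1000):
    q, p = midpoint_step(q, p, h)
drift = abs(0.5 * (q * q + p * p) - H0)
print(drift)
```

HBVMs extend this exact-conservation property to polynomial Hamiltonians of higher degree by adding the "silent stages" described above.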

  11. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest polynomial order. Our approach extends Liang's method for dichotomous item responses (A semi-parametric approach to estimate IRFs, unpublished doctoral dissertation, 2007) to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.
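The monotonic-polynomial idea can be sketched generically: one common construction (our illustration, not necessarily this paper's exact parameterization) keeps the polynomial monotone by making its derivative a non-negative polynomial, and then substitutes the result for the linear predictor in the partial-credit category probabilities (the thresholds below are hypothetical):

```python
import math

def monotonic_poly(theta, t):
    """Monotone-increasing polynomial: its derivative t + (theta - 1)**2 is
    positive for t > 0, so the antiderivative t*theta + (theta - 1)**3 / 3
    is strictly increasing. Illustrative parameterization only."""
    return t * theta + (theta - 1.0) ** 3 / 3.0

def gpcm_probs(theta, thresholds, t=0.5):
    """Partial-credit category probabilities with the linear predictor replaced
    by the monotonic polynomial m(theta); thresholds b_j are hypothetical."""
    m = monotonic_poly(theta, t)
    z = [0.0]
    for b in thresholds:
        z.append(z[-1] + (m - b))   # cumulative sum of (m(theta) - b_j)
    e = [math.exp(v) for v in z]
    s = sum(e)
    return [v / s for v in e]

# Monotonicity check on a grid, and a four-category probability vector.
xs = [i / 10 - 3 for i in range(61)]
vals = [monotonic_poly(x, 0.5) for x in xs]
print(all(b > a for a, b in zip(vals, vals[1:])))
print([round(p, 3) for p in gpcm_probs(0.0, [-1.0, 0.5, 1.5])])
```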

  12. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider that the parameters are deterministic variables. But in practice, the transmission parameters present large variability and it is not possible to determine them exactly, and it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we will apply the approach to an obesity epidemic model. PMID:22927889

  13. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    NASA Astrophysics Data System (ADS)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of the vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach makes it possible to reduce the complete system to a unique polynomial equation in one variable that drives all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system and to recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the retained number of harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.
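The kind of polynomial system harmonic balance produces can be seen on the textbook forced Duffing oscillator (our example, not the paper's test case): with a single-harmonic ansatz x = a cos(ωt), balancing the cos(ωt) terms of x'' + x + εx³ = F cos(ωt) yields one cubic in the amplitude, (1 − ω²)a + (3/4)εa³ = F, whose real roots are the steady-state amplitudes:

```python
def hb_amplitudes(omega, eps, F, a_max=10.0, n=20000):
    """Real roots of the one-harmonic balance equation for the undamped Duffing
    oscillator x'' + x + eps*x**3 = F*cos(omega*t):
        (1 - omega**2)*a + 0.75*eps*a**3 - F = 0,
    located by sign-change scanning plus bisection."""
    g = lambda a: (1 - omega ** 2) * a + 0.75 * eps * a ** 3 - F
    roots = []
    xs = [-a_max + 2 * a_max * i / n for i in range(n + 1)]
    for lo, hi in zip(xs, xs[1:]):
        if g(lo) == 0:
            roots.append(lo)
        elif g(lo) * g(hi) < 0:
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if g(lo) * g(mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            roots.append(0.5 * (lo + hi))
    return roots

# Below the primary resonance there is a single response branch; past it, three
# coexisting steady-state amplitudes appear (the classic Duffing fold).
print(len(hb_amplitudes(0.8, 0.1, 0.2)), len(hb_amplitudes(1.2, 0.1, 0.2)))
```

With more harmonics or more degrees of freedom one gets a coupled polynomial system, which is where the Groebner-basis elimination to a single univariate polynomial becomes valuable.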

  14. Estimating the Effective Permittivity for Reconstructing Accurate Microwave-Radar Images.

    PubMed

    Lavoie, Benjamin R; Okoniewski, Michal; Fear, Elise C

    2016-01-01

    We present preliminary results from a method for estimating the optimal effective permittivity for reconstructing microwave-radar images. Using knowledge of how microwave-radar images are formed, we identify characteristics that are typical of good images, and define a fitness function to measure the relative image quality. We build a polynomial interpolant of the fitness function in order to identify the most likely permittivity values of the tissue. To make the estimation process more efficient, the polynomial interpolant is constructed using a locally and dimensionally adaptive sampling method that is a novel combination of stochastic collocation and polynomial chaos. Examples, using a series of simulated, experimental and patient data collected using the Tissue Sensing Adaptive Radar system, which is under development at the University of Calgary, are presented. These examples show how, using our method, accurate images can be reconstructed starting with only a broad estimate of the permittivity range.

  15. Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles

    NASA Astrophysics Data System (ADS)

    Dennis, S.

    2016-02-01

    Most significant ocean acoustic propagation occurs over tens of kilometers, at scales small compared to basin scales and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification (for example, transmission loss (TL) probability density functions (PDFs) within some radius), a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating the sound speed distribution over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points to determine only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF. 
This work was sponsored by the Office of Naval Research.
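The non-intrusive projection step described above can be sketched in one dimension (a toy model of ours, not the TL computation): the PC coefficients of u(ξ) = exp(ξ) with ξ ~ N(0,1) in the probabilists' Hermite basis are c_k = E[u·He_k]/k!, evaluated here by Gauss-Hermite quadrature, and the exact answer exp(1/2)/k! is recovered:

```python
import math

def hermite_e(n, x):
    """Probabilists' Hermite polynomial He_n(x): He_{k+1} = x He_k - k He_{k-1}."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def gauss_hermite_e(n, span=10.0, grid=40000):
    """n-point probabilists' Gauss-Hermite rule for the standard normal measure:
    nodes are the roots of He_n (sign scan + bisection), and
    w_i = (n - 1)! / (n * He_{n-1}(x_i)**2); the weights sum to one."""
    xs = [-span + 2.0 * span * i / grid for i in range(grid + 1)]
    nodes = []
    for lo, hi in zip(xs, xs[1:]):
        if hermite_e(n, lo) * hermite_e(n, hi) < 0:
            for _ in range(80):
                mid = 0.5 * (lo + hi)
                if hermite_e(n, lo) * hermite_e(n, mid) <= 0:
                    hi = mid
                else:
                    lo = mid
            nodes.append(0.5 * (lo + hi))
    w = [math.factorial(n - 1) / (n * hermite_e(n - 1, x) ** 2) for x in nodes]
    return nodes, w

# Non-intrusive PC projection of u(xi) = exp(xi): c_k = E[u He_k] / k!,
# with exact values exp(0.5) / k!.
nodes, w = gauss_hermite_e(20)
cks = [sum(wi * math.exp(x) * hermite_e(k, x) for x, wi in zip(nodes, w)) / math.factorial(k)
       for k in range(5)]
print([round(c, 6) for c in cks])
```

In the multi-dimensional setting of the record, the same projection is done on tensorized quadrature points, which is exactly where the Smolyak sparse-grid sub-sampling pays off.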

  16. Learning epistatic interactions from sequence-activity data to predict enantioselectivity

    NASA Astrophysics Data System (ADS)

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K.; Bodén, Mikael

    2017-12-01

    Enzymes with a high selectivity are desirable for improving the economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity, mutations are often introduced near the catalytic active site. In this compact environment, epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models, we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, the representation of epistatic interactions in predictive models improves the identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
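Why a degree-2 kernel captures epistasis while a linear one cannot is easy to demonstrate on synthetic data (our toy sketch with kernel ridge regression rather than the paper's SVM, and an invented interaction term): two binary "mutation" sites whose joint effect is non-additive are fit exactly only by the higher-order kernel:

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def kernel_ridge_fit(X, y, degree, lam=1e-6):
    """Kernel ridge regression with polynomial kernel k(u, v) = (u.v + 1)**degree;
    returns the in-sample predictions K (K + lam I)^-1 y."""
    n = len(X)
    K = [[(sum(a * b for a, b in zip(X[i], X[j])) + 1.0) ** degree for j in range(n)]
         for i in range(n)]
    reg = [[K[i][j] + (lam if i == j else 0.0) for j in range(n)] for i in range(n)]
    alpha = solve(reg, y)
    return [sum(K[i][j] * alpha[j] for j in range(n)) for i in range(n)]

# Toy "epistasis": two sites whose joint effect (the 3*x1*x2 term) is non-additive.
# The additive (degree-1) kernel cannot fit it; the degree-2 kernel can.
X = [[0, 0], [1, 0], [0, 1], [1, 1]]
y = [0.0, 1.0, 1.0, 5.0]
rmse = {}
for d in (1, 2):
    pred = kernel_ridge_fit(X, y, d)
    rmse[d] = (sum((p - t) ** 2 for p, t in zip(pred, y)) / len(y)) ** 0.5
print(rmse)
```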

  18. Learning epistatic interactions from sequence-activity data to predict enantioselectivity.

    PubMed

    Zaugg, Julian; Gumulya, Yosephine; Malde, Alpeshkumar K; Bodén, Mikael

    2017-12-01

    Enzymes with a high selectivity are desirable for improving the economics of chemical synthesis of enantiopure compounds. To improve enzyme selectivity, mutations are often introduced near the catalytic active site. In this compact environment, epistatic interactions between residues, where contributions to selectivity are non-additive, play a significant role in determining the degree of selectivity. Using support vector machine regression models, we map mutations to the experimentally characterised enantioselectivities for a set of 136 variants of the epoxide hydrolase from the fungus Aspergillus niger (AnEH). We investigate whether the influence a mutation has on enzyme selectivity can be accurately predicted through linear models, and whether prediction accuracy can be improved using higher-order counterparts. Comparing linear and polynomial degree = 2 models, mean Pearson coefficients (r) from 50 × 5-fold cross-validation increase from 0.84 to 0.91, respectively. Equivalent models tested on interaction-minimised sequences achieve values of r = 0.90 and r = 0.93. As expected, testing on a simulated control data set with no interactions results in no significant improvements from higher-order models. Additional experimentally derived AnEH mutants are tested with linear and polynomial degree = 2 models, with values increasing from r = 0.51 to r = 0.87, respectively. The study demonstrates that linear models perform well; however, the representation of epistatic interactions in predictive models improves identification of selectivity-enhancing mutations. The improvement is attributed to higher-order kernel functions that represent epistatic interactions between residues.
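    The kernel comparison above can be illustrated with a minimal NumPy sketch (this is not the authors' SVM pipeline; the two-feature data set and the multiplicative "epistatic" term are invented for illustration, and kernel ridge regression stands in for SVM regression): a degree-2 polynomial kernel can represent a product interaction between residue features that a linear kernel cannot.

    ```python
    import numpy as np

    def kernel_ridge_fit_predict(K, y, lam=1e-6):
        """Kernel ridge regression on the training set: alpha = (K + lam*I)^-1 y."""
        alpha = np.linalg.solve(K + lam * np.eye(K.shape[0]), y)
        return K @ alpha

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(120, 2))             # two hypothetical "mutation" features
    y = X[:, 0] + X[:, 1] + 3.0 * X[:, 0] * X[:, 1]   # additive + epistatic (interaction) term

    K_lin = X @ X.T                                   # linear kernel
    K_poly = (X @ X.T + 1.0) ** 2                     # polynomial kernel, degree 2

    r_lin = np.corrcoef(y, kernel_ridge_fit_predict(K_lin, y))[0, 1]
    r_poly = np.corrcoef(y, kernel_ridge_fit_predict(K_poly, y))[0, 1]
    ```

    The degree-2 kernel's feature space contains the cross term x1*x2, so r_poly approaches 1 while the linear kernel plateaus at the additive part of the signal, mirroring the r = 0.84 vs. 0.91 pattern reported in the abstract.
    
    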

  19. Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano

    2013-04-01

    We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.

  20. Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung

    2016-07-01

    In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.

  1. Data identification for improving gene network inference using computational algebra.

    PubMed

    Dimitrova, Elena; Stigler, Brandilyn

    2014-11-01

    Identification of models of gene regulatory networks is sensitive to the amount of data used as input. Considering the substantial costs in conducting experiments, it is of value to have an estimate of the amount of data required to infer the network structure. To minimize wasted resources, it is also beneficial to know which data are necessary to identify the network. Knowledge of the data and knowledge of the terms in polynomial models are often required a priori in model identification. In applications, it is unlikely that the structure of a polynomial model will be known, which may force data sets to be unnecessarily large in order to identify a model. Furthermore, none of the known results provides any strategy for constructing data sets to uniquely identify a model. We provide a specialization of an existing criterion for deciding when a set of data points identifies a minimal polynomial model when its monomial terms have been specified. Then, we relax the requirement of the knowledge of the monomials and present results for model identification given only the data. Finally, we present a method for constructing data sets that identify minimal polynomial models.

  2. Gradient nonlinearity calibration and correction for a compact, asymmetric magnetic resonance imaging gradient system

    PubMed Central

    Tao, S; Trzasko, J D; Gunter, J L; Weavers, P T; Shu, Y; Huston, J; Lee, S K; Tan, E T; Bernstein, M A

    2017-01-01

    Due to engineering limitations, the spatial encoding gradient fields in conventional magnetic resonance imaging cannot be perfectly linear and always contain higher-order, nonlinear components. If ignored during image reconstruction, gradient nonlinearity (GNL) manifests as image geometric distortion. Given an estimate of the GNL field, this distortion can be corrected to a degree proportional to the accuracy of the field estimate. The GNL of a gradient system is typically characterized using a spherical harmonic polynomial model with model coefficients obtained from electromagnetic simulation. Conventional whole-body gradient systems are symmetric in design; typically, only odd-order terms up to the 5th-order are required for GNL modeling. Recently, a high-performance, asymmetric gradient system was developed, which exhibits more complex GNL that requires higher-order terms including both odd- and even-orders for accurate modeling. This work characterizes the GNL of this system using an iterative calibration method and a fiducial phantom used in ADNI (Alzheimer’s Disease Neuroimaging Initiative). The phantom was scanned at different locations inside the 26-cm diameter-spherical-volume of this gradient, and the positions of fiducials in the phantom were estimated. An iterative calibration procedure was utilized to identify the model coefficients that minimize the mean-squared-error between the true fiducial positions and the positions estimated from images corrected using these coefficients. To examine the effect of higher-order and even-order terms, this calibration was performed using spherical harmonic polynomial of different orders up to the 10th-order including even- and odd-order terms, or odd-order only. The results showed that the model coefficients of this gradient can be successfully estimated. 
The residual root-mean-squared-error after correction using up to the 10th-order coefficients was reduced to 0.36 mm, yielding spatial accuracy comparable to conventional whole-body gradients. The even-order terms were necessary for accurate GNL modeling. In addition, the calibrated coefficients improved image geometric accuracy compared with the simulation-based coefficients. PMID:28033119
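    The calibration idea above, fitting polynomial field coefficients that minimise the error between true and estimated fiducial positions, can be sketched in a simplified 1-D analogue (hypothetical numbers; the real procedure is iterative, three-dimensional, and uses spherical harmonics rather than a 1-D power series):

    ```python
    import numpy as np

    # 1-D analogue: fiducials at known true positions, observed positions
    # displaced by a smooth polynomial field (a stand-in for the GNL)
    p_true = np.linspace(-0.13, 0.13, 41)                 # metres, within a 26 cm DSV
    c_true = np.array([0.0, 0.02, 0.0, -1.5, 0.0, 40.0])  # simulated odd+even coefficients
    p_obs = p_true + np.polynomial.polynomial.polyval(p_true, c_true)

    # calibration: least-squares fit of the coefficients mapping true -> displacement
    V = np.vander(p_true, 6, increasing=True)             # columns [1, p, p^2, ..., p^5]
    c_fit, *_ = np.linalg.lstsq(V, p_obs - p_true, rcond=None)

    residual = p_obs - (p_true + V @ c_fit)               # position error after correction
    ```

    With a noiseless simulated field the fit recovers the coefficients essentially exactly; in the paper, the analogous minimisation over measured fiducial positions drives the residual RMSE down to 0.36 mm.
    
    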

  3. Polynomial approximation of the Lense-Thirring rigid precession frequency

    NASA Astrophysics Data System (ADS)

    De Falco, Vittorio; Motta, Sara

    2018-05-01

    We propose a polynomial approximation of the global Lense-Thirring rigid precession frequency to study low-frequency quasi-periodic oscillations around spinning black holes. This high-performing approximation allows one to determine the expected frequencies of a precessing thick accretion disc with fixed inner radius and variable outer radius around a black hole with given mass and spin. We discuss the accuracy and the applicability regions of our polynomial approximation, showing that computational times in the range of minutes are reduced by a factor of ≈70.
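    The speed-up strategy described above, replacing an expensive frequency computation by a one-time polynomial fit, can be sketched generically (the power-law integrand below is a made-up stand-in, not the Lense-Thirring frequency formula):

    ```python
    import numpy as np

    def expensive_frequency(r):
        # stand-in for a costly numerical computation (hypothetical)
        return r ** -1.5

    # one-time cost: fit a Chebyshev-basis polynomial on the radial range of interest
    r_nodes = np.linspace(2.0, 50.0, 400)
    cheb = np.polynomial.chebyshev.Chebyshev.fit(r_nodes, expensive_frequency(r_nodes), deg=30)

    # afterwards, every evaluation is a cheap polynomial evaluation
    r_test = np.linspace(2.0, 50.0, 1013)
    err = np.max(np.abs(cheb(r_test) - expensive_frequency(r_test)))
    ```

    Once the coefficients are tabulated, each frequency query costs only a polynomial evaluation, which is how a factor-of-≈70 reduction over direct computation becomes plausible.
    
    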

  4. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    PubMed

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

    The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline function and multi-trait models aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions considering quadratic and cubic polynomials for each individual segment, and Legendre polynomials for age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using 2 to 4 segments and were modeled by Legendre polynomial with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials presented an underestimation of the residual variance. Lesser heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. Genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW in the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in genetic evaluation of breeding programs.

  5. Application of KAM Theorem to Earth Orbiting Satellites

    DTIC Science & Technology

    2009-03-01

    P_n^m are the associated Legendre polynomials, and r, δ and λ are the radius, geocentric latitude and east longitude of the satellite...Laskar shows that the cost-to-benefit drops off after windows of order 3-5 [11]. Higher order functions also result in wider peaks, which leads to

  6. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. 
With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditionally finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
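    Random regression on Legendre polynomials, as used in the two genetic-evaluation abstracts above, rests on evaluating a Legendre basis at ages standardised to [-1, 1] and regressing records on that basis. A minimal sketch (the ages, basis order and "growth curve" coefficients below are invented for illustration):

    ```python
    import numpy as np

    # ages standardised to [-1, 1], the natural domain of Legendre polynomials
    age = np.linspace(0, 8 * 365, 200)                    # days, birth to 8 years
    t = 2.0 * (age - age.min()) / (age.max() - age.min()) - 1.0

    Phi = np.polynomial.legendre.legvander(t, 3)          # order-4 fit: columns P0..P3

    # toy average growth curve and its least-squares Legendre representation
    w_true = np.array([300.0, 150.0, -40.0, 10.0])
    weight = Phi @ w_true
    w_fit, *_ = np.linalg.lstsq(Phi, weight, rcond=None)
    ```

    In a real random regression model the same basis matrix Phi multiplies animal-specific random coefficient vectors, and the order of fit (here 4) is the quantity chosen by the model-selection criteria discussed elsewhere in these records.
    
    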

  7. Kurtosis Approach for Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.
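    The role of higher-order statistics in the abstract above can be illustrated with a much-simplified sketch: a purely linear mixture (no post-nonlinearity), whitening, and a grid search over rotations that extremises kurtosis instead of the paper's gradient descent. All sources, mixing values and thresholds are invented for illustration.

    ```python
    import numpy as np

    def kurtosis(y):
        y = (y - y.mean()) / y.std()
        return np.mean(y ** 4) - 3.0          # excess kurtosis (4th-order statistic)

    rng = np.random.default_rng(1)
    n = 5000
    s = np.vstack([rng.uniform(-1, 1, n),     # sub-Gaussian source
                   rng.laplace(0, 1, n)])     # super-Gaussian source
    A = np.array([[0.8, 0.6], [0.4, -0.9]])   # unknown mixing matrix
    x = A @ s

    # whiten the mixtures (ZCA)
    d, E = np.linalg.eigh(np.cov(x))
    z = (E / np.sqrt(d)) @ E.T @ x

    # grid-search the rotation that maximises |kurtosis| of one extracted component
    thetas = np.linspace(0, np.pi, 361)
    ys = np.cos(thetas)[:, None] * z[0] + np.sin(thetas)[:, None] * z[1]
    k = np.array([abs(kurtosis(y)) for y in ys])
    y_best = ys[np.argmax(k)]
    ```

    For whitened mixtures of independent sources, kurtosis is extremised exactly at the separating directions, which is the statistical requirement the paper's gradient-descent update enforces while also estimating the polynomial nonlinearity coefficients.
    
    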

  8. A Polynomial-Based Nonlinear Least Squares Optimized Preconditioner for Continuous and Discontinuous Element-Based Discretizations of the Euler Equations

    DTIC Science & Technology

    2014-01-01

    system (here using left-preconditioning) (KÃ)x = Kb̃, (3.1) where K is a low-order polynomial in Ã given by K = s(Ã) = Σ_{i=0}^{m} k_i Ã^i, (3.2) and has a... system with a complex spectrum, region E in the complex plane must be some convex form (e.g., an ellipse or polygon) that approximately encloses the...preconditioners with p = 2 and p = 20 on the spectrum of the preconditioned system matrices KÃ and KH̃ for both CG Schur-complement form and DG form cases
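    The excerpt's central object, a preconditioner K = s(Ã) that is a low-order polynomial in the system matrix, can be sketched with the simplest choice of coefficients: a truncated Neumann series (a minimal 2×2 example with invented numbers, not the paper's least-squares-optimised coefficients):

    ```python
    import numpy as np

    # For A = I - N with spectral radius(N) < 1, A^{-1} = I + N + N^2 + ...,
    # so truncating gives K = s(A), a degree-2 polynomial in A, with K ~ A^{-1}.
    A = np.array([[1.0, 0.3], [0.3, 1.0]])
    N = np.eye(2) - A
    K = np.eye(2) + N + N @ N          # K = 3I - 3A + A^2, a polynomial in A

    cond_A = np.linalg.cond(A)         # 1.3 / 0.7 ~ 1.857
    cond_KA = np.linalg.cond(K @ A)    # KA = I - N^3, spectrum clustered near 1
    ```

    The preconditioned matrix KA = I - N³ has its spectrum clustered near 1, which is exactly the effect the paper seeks by choosing the coefficients k_i to shrink a convex region E enclosing the spectrum.
    
    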

  9. Finding Limit Cycles in self-excited oscillators with infinite-series damping functions

    NASA Astrophysics Data System (ADS)

    Das, Debapriya; Banerjee, Dhruba; Bhattacharjee, Jayanta K.

    2015-03-01

    In this paper we present a simple method for finding the location of limit cycles of self-excited oscillators whose damping functions can be represented by some infinite convergent series. We have used standard results of first-order perturbation theory to arrive at amplitude equations. The approach has been kept pedagogic by first working out the cases of finite polynomials using elementary algebra. Then the method has been extended to various infinite polynomials, where the fixed points of the corresponding amplitude equations cannot be found in closed form. Hopf bifurcations for systems with nonlinear powers in velocities have also been discussed.

  10. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  11. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.
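    Two of the ingredients above, sampling from the equilibrium (Chebyshev) measure of [-1, 1] and weighting by evaluations of the Christoffel function, can be sketched in NumPy. This is only a sketch: plain weighted least squares stands in for the paper's ℓ¹-minimization, and the degree, sample count and target coefficients are invented.

    ```python
    import numpy as np

    deg = 8                                          # polynomial space P0..P8
    M = 40                                           # number of samples
    # Chebyshev points ~ samples from the equilibrium measure of [-1, 1]
    x = np.cos((2 * np.arange(M) + 1) * np.pi / (2 * M))

    # orthonormal Legendre basis and Christoffel-function weights
    V = np.polynomial.legendre.legvander(x, deg)
    V *= np.sqrt((2 * np.arange(deg + 1) + 1) / 2)   # p_j = sqrt((2j+1)/2) P_j
    w = 1.0 / np.sum(V ** 2, axis=1)                 # Christoffel function at each sample

    # recover a sparse Legendre expansion by Christoffel-weighted least squares
    c_true = np.zeros(deg + 1)
    c_true[[0, 3, 7]] = [1.0, -2.0, 0.5]
    f = V @ c_true
    sw = np.sqrt(w)
    c_fit, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f, rcond=None)
    ```

    In the paper the same diagonal weights precondition an ℓ¹ solver so that far fewer samples than coefficients suffice; the least-squares variant here only demonstrates the sampling and weighting machinery.
    
    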

  12. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indices are employed for solving third- and fifth-order two point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and the test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and they can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  13. Beampattern control of a microphone array to minimize secondary source contamination.

    PubMed

    Jordan, Peter; Fitzpatrick, John A; Meskell, Craig

    2003-10-01

    A null-steering technique is adapted and applied to a linear delay-and-sum beamformer in order to measure the noise generated by one of the propellers of a 1/8 scale twin propeller aircraft model. The technique involves shading the linear array using a set of weights, which are calculated according to the locations onto which the nulls need to be steered (in this case onto the second propeller). The technique is based on an established microwave antenna theory, and uses a plane-wave, or far field formulation in order to represent the response of the array by an nth-order polynomial, where n is the number of array elements. The roots of this polynomial correspond to the minima of the array response, and so by an appropriate choice of roots, a polynomial can be generated, the coefficients of which are the weights needed to achieve the prespecified set of null positions. It is shown that, for the technique to work with actual data, the cross-spectral matrix must be conditioned before array shading is implemented. This ensures that the shading function is not distorted by the intrinsic element weighting which can occur as a result of the directional nature of aeroacoustic systems. A difference of 6 dB between measurements before and after null steering shows the technique to have been effective in eliminating the contribution from one of the propellers, thus providing a quantitative measure of the acoustic energy from the other.
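    The root-placement idea in the abstract above has a compact NumPy realisation: place roots of the array polynomial on the unit circle at the directions to be nulled, then read the shading weights off as the polynomial coefficients. The element count, spacing and null directions below are invented for illustration.

    ```python
    import numpy as np

    n = 8                                    # array elements, half-wavelength spacing
    theta_null = np.deg2rad([20.0, -35.0])   # directions to suppress (e.g. second propeller)

    # roots on the unit circle at the null directions; remaining roots spread out
    psi_null = np.pi * np.sin(theta_null)
    psi_rest = np.linspace(0.6 * np.pi, 1.4 * np.pi, n - 1 - len(psi_null))
    roots = np.exp(1j * np.concatenate([psi_null, psi_rest]))
    w = np.poly(roots)                       # shading weights = polynomial coefficients

    def array_response(w, theta):
        # evaluate the array polynomial at z = exp(j*pi*sin(theta))
        return np.polyval(w, np.exp(1j * np.pi * np.sin(theta)))

    null_gain = max(abs(array_response(w, t)) for t in theta_null)
    broadside = abs(array_response(w, 0.0))
    ```

    By construction the response vanishes at the prespecified null directions while remaining non-trivial at broadside, which is the mechanism the paper exploits to isolate one propeller's noise.
    
    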

  14. Modeling the High Speed Research Cycle 2B Longitudinal Aerodynamic Database Using Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, E. A.; Proffitt, M. S.

    1999-01-01

    The data for longitudinal non-dimensional, aerodynamic coefficients in the High Speed Research Cycle 2B aerodynamic database were modeled using polynomial expressions identified with an orthogonal function modeling technique. The discrepancy between the tabular aerodynamic data and the polynomial models was tested and shown to be less than 15 percent for drag, lift, and pitching moment coefficients over the entire flight envelope. Most of this discrepancy was traced to smoothing local measurement noise and to the omission of mass case 5 data in the modeling process. A simulation check case showed that the polynomial models provided a compact and accurate representation of the nonlinear aerodynamic dependencies contained in the HSR Cycle 2B tabular aerodynamic database.

  15. Warpage analysis on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

    This study optimises moulding parameters to reduce warpage defects in a thin shell part, analysed using Autodesk Moldflow Insight (AMI) 2012 software. The product is injected using Acrylonitrile-Butadiene-Styrene (ABS) material. The analysis varies the processing parameters of melt temperature, mould temperature, packing pressure and packing time. Design of Experiments (DOE) has been integrated to obtain a polynomial model using Response Surface Methodology (RSM). The Glowworm Swarm Optimisation (GSO) method is then used to predict the best combination of parameters to minimise the warpage defect in order to produce high quality parts.

  16. Use of high-order spectral moments in Doppler weather radar

    NASA Astrophysics Data System (ADS)

    di Vito, A.; Galati, G.; Veredice, A.

    Three techniques to estimate the skewness and kurtosis of measured precipitation spectra are evaluated. These are: (1) an extension of the pulse-pair technique, (2) fitting the autocorrelation function with a least square polynomial and differentiating it, and (3) the autoregressive spectral estimation. The third technique provides the best results but has an exceedingly large computation burden. The first technique does not supply any useful results due to the crude approximation of the derivatives of the ACF. The second technique requires further study to reduce its variance.

  17. Flux-corrected transport algorithms for continuous Galerkin methods based on high order Bernstein finite elements

    NASA Astrophysics Data System (ADS)

    Lohmann, Christoph; Kuzmin, Dmitri; Shadid, John N.; Mabuza, Sibusiso

    2017-09-01

    This work extends the flux-corrected transport (FCT) methodology to arbitrary order continuous finite element discretizations of scalar conservation laws on simplex meshes. Using Bernstein polynomials as local basis functions, we constrain the total variation of the numerical solution by imposing local discrete maximum principles on the Bézier net. The design of accuracy-preserving FCT schemes for high order Bernstein-Bézier finite elements requires the development of new algorithms and/or generalization of limiting techniques tailored for linear and multilinear Lagrange elements. In this paper, we propose (i) a new discrete upwinding strategy leading to local extremum bounded low order approximations with compact stencils, (ii) high order variational stabilization based on the difference between two gradient approximations, and (iii) new localized limiting techniques for antidiffusive element contributions. The optional use of a smoothness indicator, based on a second derivative test, makes it possible to potentially avoid unnecessary limiting at smooth extrema and achieve optimal convergence rates for problems with smooth solutions. The accuracy of the proposed schemes is assessed in numerical studies for the linear transport equation in 1D and 2D.

  18. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
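    The core of the approach above, fitting a polynomial to thermodynamic integration data dF/dλ at non-equidistant λ values and integrating the fit analytically, can be sketched with a toy integrand (the λ grid and the quadratic slope function are invented; the exact answer is ∫₀¹(3λ² + 2λ)dλ = 2):

    ```python
    import numpy as np

    # slope of the free energy dF/dlambda sampled at non-equidistant lambda values
    lam = np.array([0.0, 0.05, 0.15, 0.35, 0.6, 0.85, 1.0])
    dF = 3 * lam ** 2 + 2 * lam                  # toy thermodynamic-integration data

    # polynomial regression of the TI data, then analytic integration of the fit
    p = np.polyfit(lam, dF, 2)
    P = np.polyint(p)
    dF_est = np.polyval(P, 1.0) - np.polyval(P, 0.0)

    # comparison: trapezoidal rule on the same non-equidistant samples
    trap = np.sum(0.5 * (dF[1:] + dF[:-1]) * np.diff(lam))
    ```

    For a smooth integrand the polynomial estimate is far more accurate than trapezoidal quadrature on the same sparse, non-equidistant samples, which is the advantage the report documents for solvation free energies.
    
    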

  19. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    NASA Astrophysics Data System (ADS)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of multi-term, high-dimensional fractional-order partial differential equations with mixed partial derivatives. We transform the considered fractional-order problem into easily solvable algebraic equations with the aid of the operational matrices. Being easily solvable, the associated algebraic system leads to finding the solution of the problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is ensured by comparing our Matlab software simulation results with the exact solutions in the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.

  20. Polynomial order selection in random regression models via penalizing adaptively the likelihood.

    PubMed

    Corrales, J D; Munilla, S; Cantet, R J C

    2015-08-01

    Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified with probability of one the true LP order for the additive genetic and permanent environmental effects, but AIC tended to favour over parameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates. © 2015 Blackwell Verlag GmbH.
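    The order-selection problem above can be made concrete with AIC and BIC on a small polynomial example. This sketch uses the textbook Gaussian-likelihood forms AIC = n·ln(RSS/n) + 2p and BIC = n·ln(RSS/n) + p·ln(n), not the paper's PAL criterion; the data are constructed deterministically so that the true order is unambiguously 2 (the "noise" is built to be exactly orthogonal to all polynomials up to degree 5).

    ```python
    import numpy as np

    x = np.linspace(-1, 1, 100)
    V5 = np.vander(x, 6)
    raw = np.sin(13 * x)
    # residual of projecting onto polynomials up to degree 5: orthogonal "noise"
    coef, *_ = np.linalg.lstsq(V5, raw, rcond=None)
    noise = raw - V5 @ coef
    y = 1 + 2 * x - 3 * x ** 2 + 0.3 * noise      # true polynomial order: 2

    n = len(x)
    aic, bic = [], []
    orders = range(6)
    for k in orders:
        res = y - np.polyval(np.polyfit(x, y, k), x)
        rss = np.sum(res ** 2)
        p = k + 1                                 # number of fitted coefficients
        aic.append(n * np.log(rss / n) + 2 * p)
        bic.append(n * np.log(rss / n) + p * np.log(n))

    best_bic = orders[int(np.argmin(bic))]
    ```

    Because the residual sum of squares stops improving beyond order 2 while the penalty keeps growing, BIC (and here AIC too) selects the true order; the paper's point is that on noisy real data AIC tends to over-parameterize, motivating PAL.
    
    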

  1. Theoretical effect of modifications to the upper surface of two NACA airfoils using smooth polynomial additional thickness distributions which emphasize leading edge profile and which vary quadratically at the trailing edge. [using flow equations and a CDC 7600 computer

    NASA Technical Reports Server (NTRS)

    Merz, A. W.; Hague, D. S.

    1975-01-01

    An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions to the upper surface of the NACA 64-206 and 64 sub 1 - 212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which disappears at both the leading edge and the trailing edge. The function behaves as a polynomial of order epsilon sub 1 at the leading edge, and a polynomial of order epsilon sub 2 at the trailing edge. Epsilon sub 2 is a constant and epsilon sub 1 is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying epsilon sub 1 and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.
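    A thickness distribution with the stated properties, vanishing at both ends, behaving like a polynomial of order ε₁ at the leading edge and quadratically (ε₂ = 2) at the trailing edge, can be sketched as follows. The closed form below is one plausible choice for illustration; the report does not give the exact function.

    ```python
    import numpy as np

    def added_thickness(x, y0, eps1, eps2=2.0):
        """Smooth additional thickness on chord fraction x in [0, 1]: behaves like
        x**eps1 at the leading edge and (1 - x)**eps2 at the trailing edge."""
        t = x ** eps1 * (1.0 - x) ** eps2
        return y0 * t / t.max()          # scale so the maximum thickness equals y0

    x = np.linspace(0.0, 1.0, 501)
    t = added_thickness(x, y0=0.01, eps1=0.5)   # y and eps1 are the study's two inputs
    ```

    Sweeping eps1 and y0 over ranges of practical interest reproduces the two-parameter family the investigation explores on the NACA upper surfaces.
    
    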

  2. Thermodynamic characterization of networks using graph polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.

    2015-09-01

    In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
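    The thermodynamic recipe above can be sketched in a few lines: from the normalized Laplacian of a toy network, treat the eigenvalues as energy levels of a Boltzmann partition function and derive the average energy and entropy; the same partition function is recovered from traces of powers of the Laplacian, the trick the paper exploits to avoid explicit diagonalization. The ring network and the value of β are assumptions for illustration only.

```python
import math
import numpy as np

# A small ring network of 6 nodes (toy example, not from the paper's data).
n = 6
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0

deg = A.sum(axis=1)
Dm12 = np.diag(1.0 / np.sqrt(deg))
L = np.eye(n) - Dm12 @ A @ Dm12          # normalized Laplacian

beta = 1.0                               # inverse temperature (assumed)
lam = np.linalg.eigvalsh(L)              # "energy levels"
Z = np.exp(-beta * lam).sum()            # Boltzmann partition function
p = np.exp(-beta * lam) / Z              # occupation probabilities
U = float((p * lam).sum())               # average energy
S = float(-(p * np.log(p)).sum())        # entropy

# The spectrum need not be computed explicitly: Z is also a truncated
# Taylor series in traces of powers of L (trace of the matrix exponential).
Z_series = sum((-beta) ** k * np.trace(np.linalg.matrix_power(L, k))
               / math.factorial(k) for k in range(20))
```

    Tracking U, S and the temperature (the rate of change of S with U) over network snapshots gives the smoothed thermodynamic trajectory used in the paper to flag abrupt structural changes.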

  3. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These tests show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.

  4. A Numerical Method for Integrating Orbits

    NASA Astrophysics Data System (ADS)

    Sahakyan, Karen P.; Melkonyan, Anahit A.; Hayrapetyan, S. R.

    2007-08-01

    A numerical method based on trigonometric polynomials for integrating ordinary differential equations of first and second order is suggested. This method is a trigonometric analogue of Everhart's method and can be especially useful for periodic trajectories.

  5. Estimation of genetic parameters for milk yield in Murrah buffaloes by Bayesian inference.

    PubMed

    Breda, F C; Albuquerque, L G; Euclydes, R F; Bignardi, A B; Baldi, F; Torres, R A; Barbosa, L; Tonhati, H

    2010-02-01

    Random regression models were used to estimate genetic parameters for test-day milk yield in Murrah buffaloes using Bayesian inference. Data comprised 17,935 test-day milk records from 1,433 buffaloes. Twelve models were tested using different combinations of third-, fourth-, fifth-, sixth-, and seventh-order orthogonal polynomials of weeks of lactation for additive genetic and permanent environmental effects. All models included the fixed effects of contemporary group, number of daily milkings and age of cow at calving as covariate (linear and quadratic effect). In addition, residual variances were considered to be heterogeneous with 6 classes of variance. Models were selected based on the residual mean square error, weighted average of residual variance estimates, and estimates of variance components, heritabilities, correlations, eigenvalues, and eigenfunctions. Results indicated that changes in the order of fit for additive genetic and permanent environmental random effects influenced the estimation of genetic parameters. Heritability estimates ranged from 0.19 to 0.31. Genetic correlation estimates were close to unity between adjacent test-day records, but decreased gradually as the interval between test-days increased. Results from mean squared error and weighted averages of residual variance estimates suggested that a model considering sixth- and seventh-order Legendre polynomials for additive and permanent environmental effects, respectively, and 6 classes for residual variances, provided the best fit. Nevertheless, this model presented the largest degree of complexity. A more parsimonious model, with fourth- and sixth-order polynomials, respectively, for these same effects, yielded very similar genetic parameter estimates. Therefore, this last model is recommended for routine applications. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
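    A minimal sketch of the Legendre covariables underlying such random regression models, assuming the usual mapping of lactation weeks onto [-1, 1] and normalized Legendre polynomials; the coefficient (co)variance matrix K below is an arbitrary positive-definite stand-in, not an estimate from the Murrah data.

```python
import numpy as np
from numpy.polynomial.legendre import legval

def legendre_covariables(week, wmin=1.0, wmax=44.0, order=4):
    """Normalized Legendre covariables for a test-day in a given week.
    Weeks are mapped to [-1, 1]; phi_j(x) = sqrt((2j+1)/2) * P_j(x)."""
    x = -1.0 + 2.0 * (week - wmin) / (wmax - wmin)
    return np.array([np.sqrt((2 * j + 1) / 2.0) * legval(x, np.eye(order + 1)[j])
                     for j in range(order + 1)])

# Under an RRM, the additive genetic covariance between two test-days is
# phi(w1)' K phi(w2), where K is the (co)variance matrix of the random
# regression coefficients (a diagonal stand-in here).
phi1, phi2 = legendre_covariables(5.0), legendre_covariables(35.0)
K = np.diag([2.0, 1.0, 0.5, 0.2, 0.1])
cov_5_35 = float(phi1 @ K @ phi2)
```

    Evaluating this bilinear form over all week pairs reproduces the familiar pattern reported above: correlations near unity for adjacent test-days that decay as the interval grows.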

  6. Ferroic phase transition of tetragonal Pb0.6-xCaxBi0.4(Ti0.75Zn0.15Fe0.1)O3 ceramics: Factors determining Curie temperature

    NASA Astrophysics Data System (ADS)

    Yu, Jian; An, Fei-fei; Cao, Fei

    2014-05-01

    In this paper, ferroelectric phase transitions of Pb0.6-xCaxBi0.4(Ti0.75Zn0.15Fe0.1)O3 ceramics with x ≤ 0.20 were experimentally measured, and a change from first-order to relaxor behaviour was found at a critical composition x ˜ 0.19. With increasing Ca content up to x ≤ 0.18, the Curie temperature and tetragonality were found to decrease while the piezoelectric and dielectric constants increased, each following a quadratic polynomial in x; likewise, the ferroic Curie temperature and the ferroelastic ordering parameter (tetragonality) are correlated through a quadratic polynomial relationship. Near the critical composition of the ferroic phase transition from first-order to relaxor, the Pb0.42Ca0.18Bi0.4(Ti0.75Zn0.15Fe0.1)O3 and 1 mol % Nb + 0.5 mol % Mg co-doped Pb0.44Ca0.16Bi0.4(Ti0.75Zn0.15Fe0.1)O3 ceramics exhibit better anisotropic piezoelectric properties than commercial piezoceramics of modified PbTiO3 and PbNb2O6. Finally, the factors affecting the ferroic Curie temperature and the ferroelastic ordering parameter of tetragonal ABO3 perovskites, including the reduced mass of the unit cell and the mismatch between cation size and anion cage size, are analyzed on the basis of a first-principles effective Hamiltonian, and the reduced mass of the unit cell is argued to be a more universal variable than concentration for determining the Curie temperature, in a quadratic polynomial relationship, across various perovskite-structured solid solutions.

  7. Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into expressions for stress components. Then elimination of the dependent coefficients leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. The stress tensor components derived in this way identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved by using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method. In most cases, the performance of the Integrated Force Method is better overall.

  8. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
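    The stepwise-regression idea of retaining only the most influential polynomial terms can be sketched as a greedy forward selection on a toy basis. This is a generic illustration of sparse basis selection, not the FANISP algorithm itself; the monomial basis and sparse "true" model are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Candidate polynomial basis on 100 samples; the "true" response uses only
# three of the twelve columns (a toy stand-in for a sparse PDD expansion).
N, P = 100, 12
x = rng.uniform(-1.0, 1.0, N)
basis = np.column_stack([x ** k for k in range(P)])
y = 1.0 + 2.0 * x ** 2 - 1.5 * x ** 5 + rng.normal(0.0, 0.01, N)

def rss(cols):
    """Residual sum of squares of a least-squares fit on the given columns."""
    coef, *_ = np.linalg.lstsq(basis[:, cols], y, rcond=None)
    r = y - basis[:, cols] @ coef
    return float(r @ r)

# Greedy forward stepwise selection: keep adding the basis column that most
# reduces the RSS until the gain becomes negligible.
active, current = [], float(y @ y)
for _ in range(6):
    best = min((j for j in range(P) if j not in active),
               key=lambda j: rss(active + [j]))
    new = rss(active + [best])
    if current - new < 1e-6 * float(y @ y):
        break
    active.append(best)
    current = new
```

    The retained set stays small, so the least-squares systems solved at each step remain cheap, which is the point the abstract makes about repeated regressions being affordable.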

  9. [Job satisfaction vs. occupational stress - Quantitative analysis of 3 organizational units of a public sector institution].

    PubMed

    Rogozińska-Pawełczyk, Anna

    2018-05-22

    The influence of subjective perception of occupational stress and its individual factors on the overall level of job satisfaction was analyzed. The study also examined potential differences in these variables between managers and non-managers and across demographic factors. This article presents the results of a study conducted among 5930 people employed in 3 units of the examined public sector institution. The research was conducted using the computer-assisted web interview method. The parameters of the polynomial model of ordered categories were estimated. The results showed a statistically significant effect between the variables and differences between the groups of subjects. Analyses showed slight differences between men and women. Employees with a low level of stress and high job satisfaction were noted in the oldest group, aged over 55 years, and among managers. Low levels of stress and job satisfaction were observed in young employees with the shortest period of employment. Among those least satisfied with the work and experiencing high levels of stress were respondents with 6-15 years of employment in non-managerial positions, while the highest levels of stress combined with high satisfaction were found in people aged 46-55 years with more than 20 years of work experience. The results of the estimation of the parameters of the polynomial model of ordered categories indicate that the level of perceived stress is related to the level of job satisfaction: the lower the level of stress and stressors in the workplace, the greater the job satisfaction in the surveyed unit. Med Pr 2018;69(3):301-315. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.

  10. Modelling the growth of the brown frog (Rana dybowskii)

    PubMed Central

    Du, Xiao-peng; Hu, Zong-fu; Cui, Li-yong

    2018-01-01

    Well-controlled development leads to uniform body size and a better growth rate; therefore, the ability to determine the growth rate of frogs and their period of sexual maturity is essential for producing healthy, high-quality descendant frogs. To establish a working model that can best predict the growth performance of frogs, the present study examined the growth of one-year-old and two-year-old brown frogs (Rana dybowskii) from metamorphosis to hibernation (18 weeks) and out-hibernation to hibernation (20 weeks) under the same environmental conditions. Brown frog growth was studied and mathematically modelled using various nonlinear, linear, and polynomial functions. The model input values were statistically evaluated using parameters such as the Akaike’s information criterion. The body weight/size ratio (Kwl) and Fulton’s condition factor (K) were used to compare the weight and size of groups of frogs during the growth period. The results showed that the third- and fourth-order polynomial models provided the most consistent predictions of body weight for age 1 and age 2 brown frogs, respectively. Both the Gompertz and third-order polynomial models yielded similarly adequate results for the body size of age 1 brown frogs, while the Janoschek model produced a similarly adequate result for the body size of age 2 brown frogs. The Brody and Janoschek models yielded the highest and lowest estimates of asymptotic weight, respectively, for the body weights of all frogs. The Kwl value of all frogs increased from 0.40 to 3.18. The K value of age 1 frogs decreased from 23.81 to 9.45 in the first four weeks. The K value of age 2 frogs remained close to 10. Graphically, a sigmoidal trend was observed for body weight and body size with increasing age. The results of this study will be useful not only for amphibian research but also for frog farming management strategies and decisions.

  11. Symmetric moment problems and a conjecture of Valent

    NASA Astrophysics Data System (ADS)

    Berg, C.; Szwarc, R.

    2017-03-01

    In 1998 Valent made conjectures about the order and type of certain indeterminate Stieltjes moment problems associated with birth and death processes having polynomial birth and death rates of degree p ≥ 3. Romanov recently proved that the order is 1/p, as conjectured. We prove that the type with respect to the order is related to certain multi-zeta values and that this type belongs to an interval which also contains the conjectured value. This proves that the conjecture about the type is asymptotically correct as p → ∞. The main idea is to obtain estimates for the order and type of symmetric indeterminate Hamburger moment problems when the orthonormal polynomials P_n and those of the second kind Q_n satisfy P_{2n}^2(0) ~ c_1 n^{-1/β} and Q_{2n-1}^2(0) ~ c_2 n^{-1/α}, where 0 < α, β < 1 may be different, and c_1 and c_2 are positive constants. In this case the order of the moment problem is majorized by the harmonic mean of α and β. Here a_n ~ b_n means that a_n/b_n → 1. This also leads to a new proof of Romanov's theorem that the order is 1/p. Bibliography: 19 titles.

  12. Asymptotic safety of quantum gravity beyond Ricci scalars

    NASA Astrophysics Data System (ADS)

    Falls, Kevin; King, Callum R.; Litim, Daniel F.; Nikolakopoulos, Kostas; Rahmede, Christoph

    2018-04-01

    We investigate the asymptotic safety conjecture for quantum gravity including curvature invariants beyond Ricci scalars. Our strategy is put to work for families of gravitational actions which depend on functions of the Ricci scalar, the Ricci tensor, and products thereof. Combining functional renormalization with high order polynomial approximations and full numerical integration we derive the renormalization group flow for all couplings and analyse their fixed points, scaling exponents, and the fixed point effective action as a function of the background Ricci curvature. The theory is characterized by three relevant couplings. Higher-dimensional couplings show near-Gaussian scaling with increasing canonical mass dimension. We find that Ricci tensor invariants stabilize the UV fixed point and lead to a rapid convergence of polynomial approximations. We apply our results to models for cosmology and establish that the gravitational fixed point admits inflationary solutions. We also compare findings with those from f (R ) -type theories in the same approximation and pin-point the key new effects due to Ricci tensor interactions. Implications for the asymptotic safety conjecture of gravity are indicated.

  13. Zero-multipole summation method for efficiently estimating electrostatic interactions in molecular system.

    PubMed

    Fukuda, Ikuo

    2013-11-07

    The zero-multipole summation method has been developed to efficiently evaluate the electrostatic Coulombic interactions of a point charge system. This summation prevents the electrically non-neutral multipole states that may artificially be generated by a simple cutoff truncation, which often causes large amounts of energetic noise and significant artifacts. The resulting energy function is represented by a constant term plus a simple pairwise summation, using a damped or undamped Coulombic pair potential function along with a polynomial of the distance between each particle pair. Thus, the implementation is straightforward and enables facile applications to high-performance computations. Any higher-order multipole moment can be taken into account in the neutrality principle, and it only affects the degree and coefficients of the polynomial and the constant term. The lowest and second moments correspond respectively to the Wolf zero-charge scheme and the zero-dipole summation scheme, which was previously proposed. Relationships with other non-Ewald methods are discussed, to validate the current method in their contexts. Good numerical efficiencies were easily obtained in the evaluation of Madelung constants of sodium chloride and cesium chloride crystals.
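    The lowest member of the hierarchy, the Wolf zero-charge scheme, can be sketched as a plain pairwise sum. The damped pair term and the self-energy constant below follow the commonly quoted Wolf form (in Gaussian units) and are an assumption of this sketch, not code from the paper; higher zero-multipole orders would add polynomial terms in the pair distance.

```python
import numpy as np
from math import erfc, sqrt, pi

def wolf_energy(positions, charges, alpha=0.2, rc=10.0):
    """Damped, charge-neutralized pairwise Coulomb energy (Wolf-type sketch):
    E = sum_{i<j, r<rc} q_i q_j [erfc(a r)/r - erfc(a rc)/rc]
        - (erfc(a rc)/(2 rc) + a/sqrt(pi)) * sum_i q_i^2
    """
    shift = erfc(alpha * rc) / rc
    e = 0.0
    n = len(charges)
    for i in range(n):
        for j in range(i + 1, n):
            r = float(np.linalg.norm(positions[i] - positions[j]))
            if r < rc:
                e += charges[i] * charges[j] * (erfc(alpha * r) / r - shift)
    e -= (shift / 2.0 + alpha / sqrt(pi)) * float(np.sum(np.square(charges)))
    return e

# A single +1/-1 ion pair one length unit apart.
pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = [1.0, -1.0]
e_pair = wolf_energy(pos, q)
```

    Because only a constant plus a pairwise sum is evaluated, the scheme maps directly onto standard cutoff-based neighbour loops, which is what makes it attractive for high-performance codes.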

  14. Multifractal structures for the Russian stock market

    NASA Astrophysics Data System (ADS)

    Ikeda, Taro

    2018-02-01

    In this paper, we apply multifractal detrended fluctuation analysis (MFDFA) to Russian stock price returns. To the best of our knowledge, this paper is the first to reveal the multifractal structure of the Russian stock market and how it is shaped by financial crises. The contributions of the paper are twofold. (i) We find multifractal structures in the Russian stock market: the estimated generalized Hurst exponents depend strongly and nonlinearly on the order of the fluctuation functions. (ii) We compute the multifractality degree following Zunino et al. (2008) and find that the multifractality degree of the Russian stock market falls within the range typical of emerging markets; however, the Russian 1998 crisis and the global financial crisis dampen the degree when we account for the order of the polynomial trends in the MFDFA.
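    The MFDFA machinery behind such results can be sketched generically: build the profile, detrend it window-wise with an order-m polynomial, form the q-th order fluctuation function F_q(s), and read the generalized Hurst exponents h(q) off log-log slopes. The white-noise check below is an illustration under assumed scales and q values, not the Russian market data.

```python
import numpy as np

def mfdfa_hurst(x, scales, q_values, m=1):
    """Minimal MFDFA sketch returning generalized Hurst exponents h(q).
    m is the order of the detrending polynomial."""
    profile = np.cumsum(x - np.mean(x))
    Fq = np.empty((len(q_values), len(scales)))
    for si, s in enumerate(scales):
        nseg = len(profile) // s
        t = np.arange(s)
        f2 = np.empty(nseg)
        for v in range(nseg):
            seg = profile[v * s:(v + 1) * s]
            trend = np.polyval(np.polyfit(t, seg, m), t)   # local detrending
            f2[v] = np.mean((seg - trend) ** 2)
        for qi, q in enumerate(q_values):
            if q == 0:
                Fq[qi, si] = np.exp(0.5 * np.mean(np.log(f2)))
            else:
                Fq[qi, si] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
    # h(q): slope of log F_q(s) versus log s
    return np.array([np.polyfit(np.log(scales), np.log(Fq[qi]), 1)[0]
                     for qi in range(len(q_values))])

# Monofractal sanity check: for white noise, h(q) should be flat near 0.5;
# a spread of h(q) across q signals multifractality.
rng = np.random.default_rng(3)
h = mfdfa_hurst(rng.normal(size=4000), scales=[16, 32, 64, 128, 256],
                q_values=[-2.0, 2.0])
```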

  15. Polynomial solution of quantum Grassmann matrices

    NASA Astrophysics Data System (ADS)

    Tierz, Miguel

    2017-05-01

    We study a model of quantum mechanical fermions with matrix-like index structure (with indices N and L) and quartic interactions, recently introduced by Anninos and Silva. We compute the partition function exactly with q-deformed orthogonal polynomials (Stieltjes-Wigert polynomials), for different values of L and arbitrary N. From the explicit evaluation of the thermal partition function, the energy levels and degeneracies are determined. For a given L, the number of states of different energy is quadratic in N, which implies an exponential degeneracy of the energy levels. We also show that at high-temperature we have a Gaussian matrix model, which implies a symmetry that swaps N and L, together with a Wick rotation of the spectral parameter. In this limit, we also write the partition function, for generic L and N, in terms of a single generalized Hermite polynomial.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marquette, Ian, E-mail: i.marquette@uq.edu.au; Quesne, Christiane, E-mail: cquesne@ulb.ac.be

    The purpose of this communication is to point out the connection between a 1D quantum Hamiltonian involving the fourth Painlevé transcendent P_IV, obtained in the context of second-order supersymmetric quantum mechanics and third-order ladder operators, and a hierarchy of families of quantum systems called k-step rational extensions of the harmonic oscillator, related to multi-indexed X_{m_1,m_2,…,m_k} Hermite exceptional orthogonal polynomials of type III. The connection between these exactly solvable models is established at the level of the equivalence of the Hamiltonians, using rational solutions of the fourth Painlevé equation in terms of generalized Hermite and Okamoto polynomials. We also relate the different ladder operators obtained by various combinations of supersymmetric constructions involving Darboux-Crum and Krein-Adler supercharges, their zero modes and the corresponding energies. These results demonstrate and clarify the relation observed for a particular case in previous papers.

  17. Differential Galois theory and non-integrability of planar polynomial vector fields

    NASA Astrophysics Data System (ADS)

    Acosta-Humánez, Primitivo B.; Lázaro, J. Tomás; Morales-Ruiz, Juan J.; Pantazi, Chara

    2018-06-01

    We study a necessary condition for the integrability of polynomial vector fields in the plane by means of differential Galois theory. More concretely, a necessary condition for the existence of a rational first integral is obtained by means of the variational equations around a particular solution. The method is systematic, starting with the first-order variational equation. We illustrate this result with several families of examples. A key point is to check whether a suitable primitive is elementary or not. Using a theorem by Liouville, the problem is equivalent to the existence of a rational solution of a certain first-order linear equation, the Risch equation. This is a classical problem studied by Risch in 1969, whose solution is given by the "Risch algorithm". In this way we point out the connection of non-integrability with some higher transcendental functions, such as the error function.

  18. Investigation of modification design of the fan stage in axial compressor

    NASA Astrophysics Data System (ADS)

    Zhou, Xun; Yan, Peigang; Han, Wanjin

    2010-04-01

    The S2 flow path design method for transonic compressors is used to design a single-stage fan to replace the original design, a blade cascade with two transonic fan rotor stages. In the modified design, the camber line is parameterized by a quartic polynomial curve and the thickness distribution of the blade profile is controlled by a double-cubic polynomial. As a result, the inlet flow is pre-compressed and the location and intensity of the shock wave in the supersonic region are controlled, giving the new blade profiles better aerodynamic performance. The computational results show that the new single-stage fan rotor increases efficiency by two percent at the design condition with a total pressure ratio slightly higher than that of the original design, while also meeting the mass flow rate and geometrical size requirements of the modification design.

  19. Trade off between variable and fixed size normalization in orthogonal polynomials based iris recognition system.

    PubMed

    Krishnamoorthi, R; Anna Poorani, G

    2016-01-01

    Iris normalization is an important stage in any iris biometric, as it tends to reduce the consequences of iris distortion. To compensate for variation in iris size owing to pupil dilation during image acquisition and to varying camera-to-eyeball distance, two normalization schemes are proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model in order to avoid undersampling near the limbus border. In the second method, the iris region of interest is normalized by converting it into a fixed-size rectangular model in order to avoid dimensional discrepancies between eye images. The performance of the proposed normalization methods is evaluated with orthogonal-polynomials-based iris recognition in terms of FAR, FRR, GAR, CRR and EER.

  20. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function by its Zernike polynomial expansion. A Ramsey-Korsch ray-trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to produce a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.
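    A rotationally symmetric slice of such a Zernike expansion can be sketched with the standard closed-form radial polynomials for defocus and the primary/secondary spherical terms; the coefficients below are illustrative, not from the report.

```python
import numpy as np

# Standard closed forms of the rotationally symmetric Zernike radial
# polynomials R_n^0 (defocus, primary spherical, secondary spherical).
def R20(r): return 2.0 * r**2 - 1.0
def R40(r): return 6.0 * r**4 - 6.0 * r**2 + 1.0
def R60(r): return 20.0 * r**6 - 30.0 * r**4 + 12.0 * r**2 - 1.0

def wavefront(r, a20, a40, a60):
    """Rotationally symmetric wavefront error on the unit pupil (sketch)."""
    return a20 * R20(r) + a40 * R40(r) + a60 * R60(r)

r = np.linspace(0.0, 1.0, 2001)
w = wavefront(r, 0.10, 0.05, -0.02)
rms = float(np.sqrt(np.mean(w ** 2)))

# Orthogonality over the pupil (radial weight r) is what lets each
# aberration coefficient be read off independently from a measured
# wavefront; check it numerically with a trapezoid rule.
f = R20(r) * R40(r) * r
dr = r[1] - r[0]
overlap = float(dr * (f[0] / 2.0 + f[1:-1].sum() + f[-1] / 2.0))
```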

  1. An Approach to Stable Gradient-Descent Adaptation of Higher Order Neural Units.

    PubMed

    Bukovsky, Ivo; Homma, Noriyasu

    2017-09-01

    Stability evaluation of a weight-update system of higher order neural units (HONUs) with polynomial aggregation of neural inputs (also known as classes of polynomial neural networks), for adaptation of both feedforward and recurrent HONUs by a gradient descent method, is introduced. The core of the approach is the spectral radius of the weight-update system, which allows stability to be monitored and maintained at every adaptation step individually. Assuring the stability of the weight-update system at every single adaptation step naturally results in adaptation stability of the whole neural architecture as it adapts to the target data. As an aside, the approach highlights the fact that weight optimization of a HONU is a linear problem, so the proposed approach can be extended to any neural architecture that is linear in its adaptable parameters.
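    The per-step monitoring idea can be sketched for a quadratic HONU trained by LMS-style gradient descent. The toy data, learning rate, and the choice of the one-step weight-update matrix I - μ φφᵀ as the monitored system are assumptions of this sketch, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(2)

def honu_basis(x):
    """Quadratic HONU: polynomial aggregation [1, x_i, x_i*x_j] of inputs."""
    quad = [x[i] * x[j] for i in range(len(x)) for j in range(i, len(x))]
    return np.concatenate(([1.0], x, quad))

# Toy target: a quadratic map of two inputs (assumed for illustration).
X = rng.uniform(-1.0, 1.0, (200, 2))
y = 1.0 + X[:, 0] - 2.0 * X[:, 0] * X[:, 1]

mu = 0.1
w = np.zeros(honu_basis(X[0]).size)
radii = []
for x_k, y_k in zip(X, y):
    phi = honu_basis(x_k)
    # Per-step weight-update matrix M = I - mu * phi phi^T; keeping its
    # spectral radius <= 1 is the stability condition monitored here.
    M = np.eye(w.size) - mu * np.outer(phi, phi)
    radii.append(float(np.max(np.abs(np.linalg.eigvalsh(M)))))
    w = w + mu * (y_k - phi @ w) * phi          # gradient-descent update

pred = np.array([honu_basis(x) @ w for x in X])
mse = float(np.mean((y - pred) ** 2))
```

    Because the model is linear in w, stability reduces to keeping |1 - μ‖φ‖²| ≤ 1 at each step, so μ can be shrunk on the fly whenever the monitored radius threatens to exceed one.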

  2. Performance Evaluation of an Infrared Thermocouple

    PubMed Central

    Chen, Chiachung; Weng, Yu-Kai; Shen, Te-Ching

    2010-01-01

    The measurement of the leaf temperature of forests or agricultural plants is an important technique for monitoring the physiological state of crops. The infrared thermometer is a convenient device due to its fast response and nondestructive measurement technique. Nowadays, a novel infrared thermocouple, developed with the same measurement principle as the infrared thermometer but using a different detector, has been commercialized for non-contact temperature measurement. The performances of two kinds of infrared thermocouples were evaluated in this study. The standard temperature was maintained by a temperature calibrator and a special black-cavity device. The results indicated that both types of infrared thermocouples had good precision. The error distribution ranged from −1.8 °C to 18 °C when the reading values were taken as the true values. Within the range from 13 °C to 37 °C, the adequate calibration equations were high-order polynomial equations. Within the narrower range from 20 °C to 35 °C, a linear equation was adequate for one sensor and a second-order polynomial equation for the other. The accuracy of both kinds of infrared thermocouple was improved by nearly 0.4 °C with the calibration equations. These devices could serve as mobile monitoring tools for in situ, real-time routine estimation of leaf temperatures. PMID:22163458
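    The calibration-equation comparison can be sketched with synthetic data standing in for the measured readings (the distortion and noise below are assumed, not the sensor data): fit linear and second-order polynomial calibration curves and compare their in-sample RMSE.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic calibration set over roughly the 13-37 degC range: reference
# temperatures and mildly nonlinear, noisy sensor readings (assumed).
T_ref = np.arange(13.0, 38.0, 1.0)
reading = T_ref + 0.01 * (T_ref - 25.0) ** 2 + rng.normal(0.0, 0.05, T_ref.size)

def calibration_rmse(order):
    """Fit reading -> reference temperature with a polynomial of given order."""
    coef = np.polyfit(reading, T_ref, order)
    return float(np.sqrt(np.mean((np.polyval(coef, reading) - T_ref) ** 2)))

rmse_linear = calibration_rmse(1)
rmse_quadratic = calibration_rmse(2)
```

    When the sensor response is nonlinear over the range, the higher-order calibration equation reduces the residual error, which is the pattern the study reports for the wider 13-37 °C range.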

  3. A well-posed and stable stochastic Galerkin formulation of the incompressible Navier–Stokes equations with random data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pettersson, Per, E-mail: per.pettersson@uib.no; Nordström, Jan, E-mail: jan.nordstrom@liu.se; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2016-02-01

    We present a well-posed stochastic Galerkin formulation of the incompressible Navier–Stokes equations with uncertainty in model parameters or the initial and boundary conditions. The stochastic Galerkin method involves representation of the solution through generalized polynomial chaos expansion and projection of the governing equations onto stochastic basis functions, resulting in an extended system of equations. A relatively low-order generalized polynomial chaos expansion is sufficient to capture the stochastic solution for the problem considered. We derive boundary conditions for the continuous form of the stochastic Galerkin formulation of the velocity and pressure equations. The resulting problem formulation leads to an energy estimate for the divergence. With suitable boundary data on the pressure and velocity, the energy estimate implies zero divergence of the velocity field. Based on the analysis of the continuous equations, we present a semi-discretized system where the spatial derivatives are approximated using finite difference operators with a summation-by-parts property. With a suitable choice of dissipative boundary conditions imposed weakly through penalty terms, the semi-discrete scheme is shown to be stable. Numerical experiments in the laminar flow regime corroborate the theoretical results and we obtain high-order accurate results for the solution variables and the velocity divergence converges to zero as the mesh is refined.

  4. Impacts of the aerodynamic force representation on the stability and performance of a galloping-based energy harvester

    NASA Astrophysics Data System (ADS)

    Javed, U.; Abdelkefi, A.

    2017-07-01

    One of the challenging tasks in the analytical modeling of galloping systems is the representation of the galloping force. In this study, the impacts of using different aerodynamic load representations on the dynamics of galloping oscillations are investigated. A distributed-parameter model is considered to determine the response of a galloping energy harvester subjected to a uniform wind speed. For the same experimental data and conditions, various polynomial expressions for the galloping force are proposed in order to determine the possible differences in the variations of the harvester's outputs as well as the type of instability. For the same experimental data of the galloping force, it is demonstrated that the choice of the coefficients of the polynomial approximation may result in a change in the type of bifurcation, the tip displacement and harvested power amplitudes. A parametric study is then performed to investigate the effects of the electrical load resistance on the harvester's performance when considering different possible representations of the aerodynamic force. It is indicated that for low and high values of the electrical resistance, there is an increase in the range of wind speeds where the response of the energy harvester is not affected. The performed analysis shows the importance of accurately representing the galloping force in order to efficiently design piezoelectric energy harvesters.

  5. Effect of boundary representation on viscous, separated flows in a discontinuous-Galerkin Navier-Stokes solver

    NASA Astrophysics Data System (ADS)

    Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.

    2016-08-01

    The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and a fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous drag increase are less pronounced. The differences in aerodynamic performance between the straight-sided and curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation of the straight-sided meshes.

  6. Generation of hollow Gaussian beams by spatial filtering

    NASA Astrophysics Data System (ADS)

    Liu, Zhengjun; Zhao, Haifa; Liu, Jianlong; Lin, Jie; Ashfaq Ahmad, Muhammad; Liu, Shutian

    2007-08-01

    We demonstrate that hollow Gaussian beams can be obtained from Fourier transform of the differentials of a Gaussian beam, and thus they can be generated by spatial filtering in the Fourier domain with spatial filters that consist of binomial combinations of even-order Hermite polynomials. A typical 4f optical system and a Michelson interferometer type system are proposed to implement the proposed scheme. Numerical results have proved the validity and effectiveness of this method. Furthermore, other polynomial Gaussian beams can also be generated by using this scheme. This approach is simple and may find significant applications in generating the dark hollow beams for nanophotonic technology.

  8. Techniques to improve the accuracy of noise power spectrum measurements in digital x-ray imaging based on background trends removal.

    PubMed

    Zhou, Zhongxing; Gao, Feng; Zhao, Huijuan; Zhang, Lixin

    2011-03-01

    Noise characterization through estimation of the noise power spectrum (NPS) is a central component of the evaluation of digital x-ray systems. Extensive work has been conducted to achieve accurate and precise measurement of the NPS. One approach to improving the accuracy of the NPS measurement is to reduce the statistical variance of the NPS results by involving more data samples. However, this method is based on the assumption that the noise in a radiographic image arises from stochastic processes. In practical data, artifacts always superimpose on the stochastic noise as low-frequency background trends and prevent accurate NPS estimation. The purpose of this study was to investigate an appropriate background-detrending technique to improve the accuracy of NPS estimation for digital x-ray systems. To determine the optimal background-detrending technique, four methods of artifact removal were quantitatively studied and compared: (1) subtraction of a low-pass-filtered version of the image, (2) subtraction of a 2-D first-order fit to the image, (3) subtraction of a 2-D second-order polynomial fit to the image, and (4) subtraction of two uniform exposure images. In addition, background trend removal was applied separately within the original region of interest or its partitioned sub-blocks for all four methods. The performance of the background-detrending techniques was compared according to the statistical variance of the NPS results and the suppression of the low-frequency systematic rise. Among the four methods, subtraction of a 2-D second-order polynomial fit to the image was most effective in suppressing the low-frequency systematic rise and reducing the variance of the NPS estimate for the authors' digital x-ray system. Subtraction of a low-pass-filtered version of the image increased the NPS variance above the low-frequency components because of the side-lobe effects of the frequency response of the boxcar filtering function. Subtracting two uniform exposure images gave the worst result for the smoothness of the NPS curve, although it was effective in suppressing the low-frequency systematic rise. Subtraction of a 2-D first-order fit to the image was also effective for background detrending, but it performed worse than subtraction of a 2-D second-order polynomial fit for the authors' digital x-ray system. As a result of this study, the authors verified that it is necessary and feasible to obtain a better NPS estimate by appropriate background trend removal. Subtraction of a 2-D second-order polynomial fit to the image was the most appropriate background-detrending technique when processing time is not a consideration.
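
    The 2-D second-order polynomial detrending method can be sketched as a least-squares fit over the six monomials 1, x, y, x², xy, y²; the synthetic image below is an illustrative stand-in for real flat-field data, not the authors' measurements:

```python
import numpy as np

def detrend_2d_poly2(img):
    """Subtract a least-squares 2-D second-order polynomial fit from an image."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx].astype(float)
    # design matrix with columns 1, x, y, x^2, x*y, y^2
    A = np.stack([np.ones_like(x), x, y, x*x, x*y, y*y], axis=-1).reshape(-1, 6)
    c, *_ = np.linalg.lstsq(A, img.ravel(), rcond=None)
    return img - (A @ c).reshape(ny, nx)

rng = np.random.default_rng(0)
ny, nx = 64, 64
y, x = np.mgrid[0:ny, 0:nx].astype(float)
background = 0.001 * x**2 - 0.02 * y + 5.0   # smooth low-frequency trend
noise = rng.normal(0.0, 1.0, (ny, nx))       # stochastic noise component
residual = detrend_2d_poly2(background + noise)
print(abs(residual.mean()) < 1e-6)           # trend (incl. offset) removed
```

    The residual retains the stochastic noise (standard deviation near 1) while the low-frequency trend that would bias the low-frequency NPS bins is gone.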

  9. Graph characterization via Ihara coefficients.

    PubMed

    Ren, Peng; Wilson, Richard C; Hancock, Edwin R

    2011-02-01

    The novel contributions of this paper are twofold. First, we demonstrate how to characterize unweighted graphs in a permutation-invariant manner using the polynomial coefficients from the Ihara zeta function, i.e., the Ihara coefficients. Second, we generalize the definition of the Ihara coefficients to edge-weighted graphs. For an unweighted graph, the Ihara zeta function is the reciprocal of a quasi characteristic polynomial of the adjacency matrix of the associated oriented line graph. Since the Ihara zeta function has poles that give rise to infinities, the most convenient numerically stable representation is to work with the coefficients of the quasi characteristic polynomial. Moreover, the polynomial coefficients are invariant to vertex order permutations and also convey information concerning the cycle structure of the graph. To generalize the representation to edge-weighted graphs, we make use of the reduced Bartholdi zeta function. We prove that the computation of the Ihara coefficients for unweighted graphs is a special case of our proposed method for unit edge weights. We also present a spectral analysis of the Ihara coefficients and indicate their advantages over other graph spectral methods. We apply the proposed graph characterization method to capturing graph-class structure and clustering graphs. Experimental results reveal that the Ihara coefficients are more effective than methods based on Laplacian spectra.

  10. Short communication: Genetic variation of saturated fatty acids in Holsteins in the Walloon region of Belgium.

    PubMed

    Arnould, V M-R; Hammami, H; Soyeurt, H; Gengler, N

    2010-09-01

    Random regression test-day models using Legendre polynomials are commonly used for the estimation of genetic parameters and genetic evaluation for test-day milk production traits. However, some researchers have reported that these models present some undesirable properties, such as the overestimation of variances at the edges of lactation. Describing the genetic variation of saturated fatty acids expressed in milk fat might require the testing of different models. Therefore, 3 different functions were used and compared to take into account the lactation curve: (1) Legendre polynomials with the same order as currently applied in the genetic model for production traits; (2) linear splines with 10 knots; and (3) linear splines with the same 10 knots reduced to 3 parameters. The criteria used were Akaike's information and Bayesian information criteria, percentage square biases, and the log-likelihood function. These criteria identified the Legendre polynomial model and the linear splines with 10 knots reduced to 3 parameters as the most useful models. Reducing more complex models using eigenvalues seemed appealing because the resulting models are less time-demanding and can reduce convergence difficulties, as convergence properties also seemed to be improved. Finally, the results showed that the reduced spline model was very similar to the Legendre polynomial model. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
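
    Evaluating a Legendre basis on standardized days in milk, as random regression test-day models do, can be sketched with NumPy; the test-day grid and the quadratic order are illustrative assumptions, not the study's settings:

```python
import numpy as np
from numpy.polynomial import legendre

# map hypothetical days in milk (5..305) onto the Legendre domain [-1, 1]
dim = np.linspace(5, 305, 7)
t = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1

order = 2  # quadratic random regression, as often used for production traits
# evaluate the Legendre basis P_0..P_order at each standardized test day
basis = np.stack(
    [legendre.legval(t, [0] * k + [1]) for k in range(order + 1)], axis=1
)

print(basis.shape)   # one row per test day, one column per regression coefficient
```

    Each animal's random regression is then a linear combination of these columns, which is what makes the covariance functions (and their edge-of-lactation behaviour) depend directly on the chosen polynomial order.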

  11. A high-order 3D spectral difference solver for simulating flows about rotating geometries

    NASA Astrophysics Data System (ADS)

    Zhang, Bin; Liang, Chunlei

    2017-11-01

    Fluid flows around rotating geometries are ubiquitous. For example, a spinning ping pong ball can quickly change its trajectory in an air flow; a marine propeller can provide an enormous amount of thrust to a ship. It has been a long-standing challenge to accurately simulate these flows. In this work, we present a high-order, efficient 3D flow solver based on the unstructured spectral difference (SD) method and a novel sliding-mesh method. In the SD method, solutions and fluxes are reconstructed using tensor products of 1D polynomials and the equations are solved in differential form, which leads to high-order accuracy and high efficiency. In the sliding-mesh method, a computational domain is decomposed into non-overlapping subdomains. Each subdomain can enclose a geometry and can rotate relative to its neighbor, resulting in nonconforming sliding interfaces. A curved dynamic mortar approach is designed for communication on these interfaces. In this approach, solutions and fluxes are projected from cell faces to mortars to compute common values, which are then projected back to ensure continuity and conservation. Through theoretical analysis and numerical tests, it is shown that this solver is conservative, free-stream preserving, and high-order accurate in both space and time.

  12. De-Aliasing Through Over-Integration Applied to the Flux Reconstruction and Discontinuous Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Spiegel, Seth C.; Huynh, H. T.; DeBonis, James R.

    2015-01-01

    High-order methods are quickly becoming popular for turbulent flows as the amount of computer processing power increases. The flux reconstruction (FR) method presents a unifying framework for a wide class of high-order methods including discontinuous Galerkin (DG), Spectral Difference (SD), and Spectral Volume (SV). It offers a simple, efficient, and easy way to implement nodal-based methods that are derived via the differential form of the governing equations. Whereas high-order methods have enjoyed recent success, they have been known to introduce numerical instabilities due to polynomial aliasing when applied to under-resolved nonlinear problems. Aliasing errors have been extensively studied in reference to DG methods; however, their study regarding FR methods has mostly been limited to the selection of the nodal points used within each cell. Here, we extend some of the de-aliasing techniques used for DG methods, primarily over-integration, to the FR framework. Our results show that over-integration does remove aliasing errors but may not remove all instabilities caused by insufficient resolution (for FR as well as DG).

  13. Poly-Frobenius-Euler polynomials

    NASA Astrophysics Data System (ADS)

    Kurt, Burak

    2017-07-01

    Hamahata [3] defined poly-Euler polynomials and the generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this, we define poly-Frobenius-Euler polynomials and give some relations for these polynomials. We also prove relationships between poly-Frobenius-Euler polynomials and Stirling numbers of the second kind.

  14. Tachyon inflation in the large-N formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel

    2015-11-01

    We study tachyon inflation within the large-N formalism, which takes a prescription for the small Hubble flow slow-roll parameter ε₁ as a function of the large number of e-folds N. This leads to a classification of models through their behaviour at large N. In addition to the perturbative N class, we introduce the polynomial and exponential classes for the ε₁ parameter. With this formalism we reconstruct a large number of potentials used previously in the literature for tachyon inflation. We also obtain new families of potentials from the polynomial class. We characterize the realizations of tachyon inflation by computing the usual cosmological observables up to second order in the Hubble flow slow-roll parameters. This allows us to look at observable differences between tachyon and canonical single-field inflation. The analysis of observables in light of the Planck 2015 data shows the viability of some of these models, mostly for certain realizations of the polynomial and exponential classes.

  15. An Efficient Spectral Method for Ordinary Differential Equations with Rational Function Coefficients

    NASA Technical Reports Server (NTRS)

    Coutsias, Evangelos A.; Torres, David; Hagstrom, Thomas

    1994-01-01

    We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families convolution operators (i.e. matrix representations of multiplication by a function) are banded for polynomials, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
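
    The banded (tridiagonal) integration operator can be illustrated for the Chebyshev family, one of the classical families covered, using NumPy's Chebyshev routines; this is a sketch of the operator's structure, not the authors' implementation. For n ≥ 2 the recurrence gives ∫T_n dx = T_{n+1}/(2(n+1)) − T_{n−1}/(2(n−1)), so every column of the integration matrix is banded apart from the integration-constant row:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Build the matrix of the integration operator in the Chebyshev basis:
# column n holds the coefficients of the antiderivative of T_n (vanishing at 0).
N = 8
B = np.zeros((N + 1, N))
for n in range(N):
    e = np.zeros(N)
    e[n] = 1.0
    B[:, n] = C.chebint(e)

# Away from row 0 (the integration constant), only the sub- and super-diagonal
# entries 1/(2(n+1)) and -1/(2(n-1)) survive: the operator is tridiagonal.
banded = all(B[i, n] == 0.0
             for n in range(N) for i in range(1, N + 1) if abs(i - n) != 1)
print(banded)
```

    This bandedness is exactly what makes the inverted differential operators in the paper's method sparse, giving the O(N) operation count.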

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singh, M.; Al-Dayeh, L.; Patel, P.

    It is well known that even small movements of the head can lead to artifacts in fMRI. Corrections for these movements are usually made by a registration algorithm which accounts for translational and rotational motion of the head under a rigid-body assumption. The brain, however, is not entirely rigid, and images are prone to local deformations due to CSF motion, susceptibility effects, local changes in blood flow, and inhomogeneities in the magnetic and gradient fields. Since nonrigid body motion is not adequately corrected by approaches relying on simple rotational and translational corrections, we have investigated a general approach where an nth-order polynomial is used to map all images onto a common reference image. The coefficients of the polynomial transformation were determined through minimization of the ratio of the variance to the mean of each pixel. Simulation studies were conducted to validate the technique. Results of experimental studies using polynomial transformation for 2D and 3D registration show a lower variance-to-mean ratio compared to simple rotational and translational corrections.
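
    A minimal sketch of the two ingredients above, a second-order polynomial coordinate mapping and the variance-to-mean cost it is tuned to minimize, under an assumed coefficient layout (this is not the authors' code):

```python
import numpy as np

def poly2_warp(x, y, cx, cy):
    """Second-order polynomial coordinate mapping (x, y) -> (x', y')."""
    terms = np.stack([np.ones_like(x), x, y, x * x, x * y, y * y])
    return np.tensordot(cx, terms, axes=1), np.tensordot(cy, terms, axes=1)

def variance_mean_ratio(stack):
    """Per-pixel temporal variance-to-mean ratio, summed over the image."""
    m = stack.mean(axis=0)
    return float(np.sum(stack.var(axis=0) / np.maximum(m, 1e-9)))

# identity transform: the natural starting point of the coefficient search
cx = np.array([0.0, 1.0, 0.0, 0.0, 0.0, 0.0])
cy = np.array([0.0, 0.0, 1.0, 0.0, 0.0, 0.0])

y, x = np.mgrid[0:8, 0:8].astype(float)
xw, yw = poly2_warp(x, y, cx, cy)
print(np.allclose(xw, x), np.allclose(yw, y))

# a perfectly registered series has zero temporal variance at every pixel
stack = np.repeat((x + y)[None, :, :] + 1.0, 5, axis=0)
print(variance_mean_ratio(stack) == 0.0)
```

    In the full method, an optimizer adjusts cx and cy (and the images are resampled at the warped coordinates) until the variance-to-mean ratio across the time series is minimized.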

  17. A Generalized Framework for Reduced-Order Modeling of a Wind Turbine Wake

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hamilton, Nicholas; Viggiano, Bianca; Calaf, Marc

    A reduced-order model for a wind turbine wake is sought from large eddy simulation data. Fluctuating velocity fields are combined in the correlation tensor to form the kernel of the proper orthogonal decomposition (POD). The POD modes resulting from the decomposition represent the spatially coherent turbulence structures in the wind turbine wake; the eigenvalues delineate the relative amount of turbulent kinetic energy associated with each mode. Back-projecting the POD modes onto the velocity snapshots produces dynamic coefficients that express the amplitude of each mode in time. A reduced-order model of the wind turbine wake (wakeROM) is defined through a series of polynomial parameters that quantify mode interaction and the evolution of each POD mode coefficient. The resulting system of ordinary differential equations models the wind turbine wake composed only of the large-scale turbulent dynamics identified by the POD. Tikhonov regularization is used to recalibrate the dynamical system by adding constraints to the minimization that seeks the polynomial parameters, reducing the error in the modeled mode coefficients. The wakeROM is periodically reinitialized with new initial conditions found by relating the incoming turbulent velocity to the POD mode coefficients through a series of open-loop transfer functions. The wakeROM reproduces the mode coefficients to within 25.2%, quantified through the normalized root-mean-square error. A high-level view of the modeling approach is provided as a platform to discuss promising research directions, alternate processes that could benefit stability and efficiency, and desired extensions of the wakeROM.
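
    The snapshot POD underlying the wakeROM can be sketched via the thin SVD of a mean-subtracted snapshot matrix; the synthetic "wake" data below are an illustrative stand-in for LES fields, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)
n_pts, n_snaps = 200, 40
s = np.linspace(0, 2 * np.pi, n_pts)   # hypothetical spatial coordinate
t = np.linspace(0, 1, n_snaps)         # snapshot times

# fluctuating field dominated by two coherent structures plus weak noise
U = (np.outer(np.sin(s), np.cos(10 * t))
     + 0.3 * np.outer(np.sin(2 * s), np.sin(20 * t))
     + 0.01 * rng.normal(size=(n_pts, n_snaps)))
U = U - U.mean(axis=1, keepdims=True)  # subtract the temporal mean

# POD via thin SVD of the snapshot matrix
phi, sv, _ = np.linalg.svd(U, full_matrices=False)
energy = sv**2 / np.sum(sv**2)         # TKE fraction carried by each mode
a = phi.T @ U                          # dynamic (temporal) mode coefficients

print(energy[0] > energy[1])           # modes come ordered by energy
print(energy[:2].sum() > 0.95)         # two modes capture most of the TKE
```

    The wakeROM step then fits polynomial ODEs to the rows of `a`, truncated to the first few energetic modes.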

  18. New Families of Skewed Higher-Order Kernel Estimators to Solve the BSS/ICA Problem for Multimodal Sources Mixtures.

    PubMed

    Jabbar, Ahmed Najah

    2018-04-13

    This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The tests for these proposed HOK are accomplished using three scenarios to simulate a real environment using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that represent a challenge to the symmetrical HOK, and an environment of an adverse case (near gaussian). The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to the performance of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian and the performance of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels' performance is better than that of the standard kernels and rivals most of the symmetrical kernels' performance. The importance of these new skewed HOK is manifested in the environment of the multimodal pdf mixtures. In such an environment, the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOKs in such applications.

  19. More irregular eye shape in low myopia than in emmetropia.

    PubMed

    Tabernero, Juan; Schaeffel, Frank

    2009-09-01

    To improve the description of the peripheral eye shape in myopia and emmetropia by using a new method for continuous measurement of the peripheral refractive state. A scanning photorefractor was designed to record refractive errors in the vertical pupil meridian across the horizontal visual field (up to ±45°). The setup consists of a hot mirror that continuously projects the infrared light from a photoretinoscope under different angles of eccentricity into the eye. The movement of the mirror is controlled by using two stepping motors. Refraction in a group of 17 emmetropic subjects and 11 myopic subjects (mean, -4.3 D; SD, 1.7) was measured without spectacle correction. For the analysis of eye shape, the refractive error versus the eccentricity angles was fitted with different polynomials (from second to tenth order). The new setup presents some important advantages over previous techniques: the subject does not have to change gaze during the measurements, and a continuous profile is obtained rather than discrete points. There was a significant difference in the fitting errors between the subjects with myopia and those with emmetropia. Tenth-order polynomials were required in myopic subjects to achieve a quality of fit similar to that in emmetropic subjects fitted with only sixth-order polynomials. Apparently, the peripheral shape of the myopic eye is more "bumpy." A new setup is presented for obtaining continuous peripheral refraction profiles. It was found that the peripheral retinal shape is more irregular even in only moderately myopic eyes, perhaps because the sclera lost some rigidity even at the early stage of myopia.
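
    The order-comparison idea, a bumpy profile needing a higher polynomial order than a smooth one to reach the same fit quality, can be sketched as follows; the two profiles are invented for illustration and are not the measured data:

```python
import numpy as np

angles = np.linspace(-45, 45, 91)   # eccentricity (degrees)
t = angles / 45.0                   # normalized abscissa for conditioning

# hypothetical refraction profiles (D): a smooth "emmetropic-like" quadratic
# and a "myopic-like" profile with an added bumpy component
smooth = -0.5 + 1.5e-3 * angles**2
bumpy = smooth - 4.0 + 0.4 * np.sin(angles / 6.0)

def rms_fit_error(profile, order):
    """RMS residual of a least-squares polynomial fit of the given order."""
    c = np.polyfit(t, profile, order)
    return np.sqrt(np.mean((np.polyval(c, t) - profile)**2))

e_smooth = rms_fit_error(smooth, 6)                          # order 6 suffices
e6, e10 = rms_fit_error(bumpy, 6), rms_fit_error(bumpy, 10)  # bumpy needs more
print(e_smooth < 1e-6, e10 < e6)
```

    Comparing residuals across fit orders, as done here, is one way to quantify how much more irregular one profile is than another.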

  20. Symmetric and arbitrarily high-order Birkhoff-Hermite time integrators and their long-time behaviour for solving nonlinear Klein-Gordon equations

    NASA Astrophysics Data System (ADS)

    Liu, Changying; Iserles, Arieh; Wu, Xinyuan

    2018-03-01

    The Klein-Gordon equation with nonlinear potential occurs in a wide range of application areas in science and engineering. Its computation represents a major challenge. The main theme of this paper is the construction of symmetric and arbitrarily high-order time integrators for the nonlinear Klein-Gordon equation by integrating Birkhoff-Hermite interpolation polynomials. To this end, under the assumption of periodic boundary conditions, we begin with the formulation of the nonlinear Klein-Gordon equation as an abstract second-order ordinary differential equation (ODE) and its operator-variation-of-constants formula. We then derive a symmetric and arbitrarily high-order Birkhoff-Hermite time integration formula for the nonlinear abstract ODE. Accordingly, the stability, convergence and long-time behaviour are rigorously analysed once the spatial differential operator is approximated by an appropriate positive semi-definite matrix, subject to suitable temporal and spatial smoothness. A remarkable characteristic of this new approach is that the requirement of temporal smoothness is reduced compared with the traditional numerical methods for PDEs in the literature. Numerical results demonstrate the advantage and efficiency of our time integrators in comparison with the existing numerical approaches.

  1. Arbitrary-Lagrangian-Eulerian Discontinuous Galerkin schemes with a posteriori subcell finite volume limiting on moving unstructured meshes

    NASA Astrophysics Data System (ADS)

    Boscheri, Walter; Dumbser, Michael

    2017-10-01

    We present a new family of high order accurate fully discrete one-step Discontinuous Galerkin (DG) finite element schemes on moving unstructured meshes for the solution of nonlinear hyperbolic PDE in multiple space dimensions, which may also include parabolic terms in order to model dissipative transport processes, like molecular viscosity or heat conduction. High order piecewise polynomials of degree N are adopted to represent the discrete solution at each time level and within each spatial control volume of the computational grid, while high order of accuracy in time is achieved by the ADER approach, making use of an element-local space-time Galerkin finite element predictor. A novel nodal solver algorithm based on the HLL flux is derived to compute the velocity for each nodal degree of freedom that describes the current mesh geometry. In our algorithm the spatial mesh configuration can be defined in two different ways: either by an isoparametric approach that generates curved control volumes, or by a piecewise linear decomposition of each spatial control volume into simplex sub-elements. Each technique generates a corresponding number of geometrical degrees of freedom needed to describe the current mesh configuration and which must be considered by the nodal solver for determining the grid velocity. The connection of the old mesh configuration at time t^n with the new one at time t^{n+1} provides the space-time control volumes on which the governing equations have to be integrated in order to obtain the time evolution of the discrete solution. Our numerical method belongs to the category of so-called direct Arbitrary-Lagrangian-Eulerian (ALE) schemes, where a space-time conservation formulation of the governing PDE system is considered and which already takes into account the new grid geometry (including a possible rezoning step) directly during the computation of the numerical fluxes. 
We emphasize that our method is a moving mesh method, as opposed to total Lagrangian formulations that are based on a fixed computational grid and which instead evolve the mapping of the reference configuration to the current one. Our new Lagrangian-type DG scheme adopts the novel a posteriori sub-cell finite volume limiter method recently developed in [62] for fixed unstructured grids. In this approach, the validity of the candidate solution produced in each cell by an unlimited ADER-DG scheme is verified against a set of physical and numerical detection criteria, such as the positivity of pressure and density, the absence of floating point errors (NaN) and the satisfaction of a relaxed discrete maximum principle (DMP) in the sense of polynomials. Those cells which do not satisfy all of the above criteria are flagged as troubled cells and are recomputed with the aid of a more robust second order TVD finite volume scheme. To preserve the subcell resolution capability of the original DG scheme, the FV limiter is run on a sub-grid that is 2N + 1 times finer compared to the mesh of the original unlimited DG scheme. The new subcell averages are then gathered back into a high order DG polynomial by a usual conservative finite volume reconstruction operator. The numerical convergence rates of the new ALE ADER-DG schemes are studied up to fourth order in space and time and several test problems are simulated in order to check the accuracy and the robustness of the proposed numerical method in the context of the Euler and Navier-Stokes equations for compressible gas dynamics, considering both inviscid and viscous fluids. Finally, an application inspired by Inertial Confinement Fusion (ICF) type flows is considered by solving the Euler equations and the PDE of viscous and resistive magnetohydrodynamics (VRMHD).

  2. High Accuracy, Absolute, Cryogenic Refractive Index Measurements of Infrared Lens Materials for JWST NIRCam using CHARMS

    NASA Technical Reports Server (NTRS)

    Leviton, Douglas; Frey, Bradley

    2005-01-01

    The current refractive optical design of the James Webb Space Telescope (JWST) Near Infrared Camera (NIRCam) uses three infrared materials in its lenses: LiF, BaF2, and ZnSe. In order to provide the instrument's optical designers with accurate, heretofore unavailable data for absolute refractive index based on actual cryogenic measurements, two prismatic samples of each material were measured using the cryogenic, high accuracy, refraction measuring system (CHARMS) at NASA GSFC, densely covering the temperature range from 15 to 320 K and the wavelength range from 0.4 to 5.6 microns. Measurement methods are discussed, and graphical and tabulated data for absolute refractive index, dispersion, and thermo-optic coefficient for these three materials are presented along with estimates of uncertainty. Coefficients for second-order polynomial fits of measured index to temperature are provided for many wavelengths to allow accurate interpolation of index to other wavelengths and temperatures.

  3. Dealing with Uncertainties in Initial Orbit Determination

    NASA Technical Reports Server (NTRS)

    Armellin, Roberto; Di Lizia, Pierluigi; Zanetti, Renato

    2015-01-01

    A method to deal with uncertainties in initial orbit determination (IOD) is presented. This is based on the use of Taylor differential algebra (DA) to nonlinearly map the observation uncertainties from the observation space to the state space. When a minimum set of observations is available, DA is used to expand the solution of the IOD problem in Taylor series with respect to measurement errors. When more observations are available, high order inversion tools are exploited to obtain full state pseudo-observations at a common epoch. The mean and covariance of these pseudo-observations are nonlinearly computed by evaluating the expectation of high order Taylor polynomials. Finally, a linear scheme is employed to update the current knowledge of the orbit. Angles-only observations are considered and simplified Keplerian dynamics is adopted to ease the explanation. Three test cases of orbit determination of artificial satellites in different orbital regimes are presented to discuss the features and performance of the proposed methodology.

  4. Computational Performance of a Parallelized Three-Dimensional High-Order Spectral Element Toolbox

    NASA Astrophysics Data System (ADS)

    Bosshard, Christoph; Bouffanais, Roland; Clémençon, Christian; Deville, Michel O.; Fiétier, Nicolas; Gruber, Ralf; Kehtari, Sohrab; Keller, Vincent; Latt, Jonas

    In this paper, a comprehensive performance review of an MPI-based high-order three-dimensional spectral element method C++ toolbox is presented. The focus is put on the performance evaluation of several aspects, with a particular emphasis on the parallel efficiency. The performance evaluation is analyzed with the help of a time prediction model based on a parameterization of the application and the hardware resources. A tailor-made CFD computation benchmark case is introduced and used to carry out this review, stressing the particular interest for clusters with up to 8192 cores. Some problems in the parallel implementation have been detected and corrected. The theoretical complexities with respect to the number of elements, to the polynomial degree, and to communication needs are correctly reproduced. It is concluded that this type of code has a nearly perfect speed-up on machines with thousands of cores, and is ready to make the step to next-generation petaflop machines.

  5. Deterministic absorbed dose estimation in computed tomography using a discrete ordinates method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Norris, Edward T.; Liu, Xin, E-mail: xinliu@mst.edu; Hsieh, Jiang

    Purpose: Organ dose estimation for a patient undergoing computed tomography (CT) scanning is very important. Although Monte Carlo methods are considered the gold standard in patient dose estimation, the computation time required is formidable for routine clinical calculations. Here, the authors investigate a deterministic method for estimating the absorbed dose more efficiently. Methods: Compared with current Monte Carlo methods, a more efficient approach to estimating the absorbed dose is to solve the linear Boltzmann equation numerically. In this study, an axial CT scan was modeled with a software package, Denovo, which solved the linear Boltzmann equation using the discrete ordinates method. The CT scanning configuration included 16 x-ray source positions, beam collimators, flat filters, and bowtie filters. The phantom was the standard 32 cm CT dose index (CTDI) phantom. Four different Denovo simulations were performed with different simulation parameters, including the number of quadrature sets and the order of Legendre polynomial expansions. A Monte Carlo simulation was also performed for benchmarking the Denovo simulations. A quantitative comparison was made of the simulation results obtained by the Denovo and the Monte Carlo methods. Results: The difference in the simulation results of the discrete ordinates method and those of the Monte Carlo methods was found to be small, with a root-mean-square difference of around 2.4%. It was found that the discrete ordinates method, with a higher order of Legendre polynomial expansions, underestimated the absorbed dose near the center of the phantom (i.e., the low dose region). Simulations with the quadrature set 8 and the first order of Legendre polynomial expansions proved to be the most efficient computation method in the authors' study.
The single-thread computation time of the deterministic simulation with the quadrature set 8 and the first order of Legendre polynomial expansions was 21 min on a personal computer. Conclusions: The simulation results showed that the deterministic method can be effectively used to estimate the absorbed dose in a CTDI phantom. The accuracy of the discrete ordinates method was close to that of a Monte Carlo simulation, and the primary benefit of the discrete ordinates method lies in its rapid computation speed. It is expected that further optimization of this method in routine clinical CT dose estimation will improve its accuracy and speed.

  6. A gradient-based model parametrization using Bernstein polynomials in Bayesian inversion of surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan

    2017-10-01

    This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
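    The Bernstein-polynomial parametrization of a smooth profile can be sketched as follows (a minimal illustration; the function name and the use of a normalized depth coordinate are assumptions):

```python
from math import comb

def bernstein_profile(coeffs, z):
    """Evaluate a Bernstein-polynomial parametrization of a smooth
    profile (e.g. VS versus normalized depth z in [0, 1]).  The model
    parameters are the basis-function coefficients."""
    n = len(coeffs) - 1
    return sum(c * comb(n, k) * z**k * (1.0 - z)**(n - k)
               for k, c in enumerate(coeffs))
```

    Because the Bernstein basis functions are non-negative and sum to one, small perturbations of the coefficients produce only small perturbations of the profile, which is the stability property exploited in the inversion.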

  7. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes evident, especially for uniform meshes, compared with what has typically been considered when studying this type of problem. © 2014. The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840

  8. An Adaptive Moving Target Imaging Method for Bistatic Forward-Looking SAR Using Keystone Transform and Optimization NLCS.

    PubMed

    Li, Zhongyu; Wu, Junjie; Huang, Yulin; Yang, Haiguang; Yang, Jianyu

    2017-01-23

    Bistatic forward-looking SAR (BFSAR) is a kind of bistatic synthetic aperture radar (SAR) system that can image forward-looking terrain in the flight direction of an aircraft. Until now, BFSAR imaging theories and methods for a stationary scene have been researched thoroughly. However, for moving-target imaging with BFSAR, the non-cooperative movement of the moving target induces some new issues: (I) large and unknown range cell migration (RCM) (including range walk and high-order RCM); (II) the spatial-variances of the Doppler parameters (including the Doppler centroid and high-order Doppler) are not only unknown, but also nonlinear for different point-scatterers. In this paper, we put forward an adaptive moving-target imaging method for BFSAR. First, the large and unknown range walk is corrected by applying the keystone transform over the whole received echo; then, the relationships among the unknown high-order RCM, the nonlinear spatial-variances of the Doppler parameters, and the speed of the mover are established. After that, using an optimization nonlinear chirp scaling (NLCS) technique, not only can the unknown high-order RCM be accurately corrected, but the nonlinear spatial-variances of the Doppler parameters can also be balanced. Finally, a high-order polynomial filter is applied to compress the whole azimuth data of the moving target. Numerical simulations verify the effectiveness of the proposed method.

  9. Hybrid DG/FV schemes for magnetohydrodynamics and relativistic hydrodynamics

    NASA Astrophysics Data System (ADS)

    Núñez-de la Rosa, Jonatan; Munz, Claus-Dieter

    2018-01-01

    This paper presents a high order hybrid discontinuous Galerkin/finite volume scheme for solving the equations of magnetohydrodynamics (MHD) and of relativistic hydrodynamics (SRHD) on quadrilateral meshes. In this approach, for the spatial discretization, an arbitrary high order discontinuous Galerkin spectral element (DG) method is combined with a finite volume (FV) scheme in order to simulate complex flow problems involving strong shocks. For the time discretization, a fourth order strong stability preserving Runge-Kutta method is used. In the proposed hybrid scheme, a shock indicator is computed at the beginning of each Runge-Kutta stage in order to flag those elements containing shock waves or discontinuities. Subsequently, the DG solution in these troubled elements and in the current time step is projected onto a subdomain composed of finite volume subcells. The DG operator is then applied to the unflagged elements, which are, in principle, oscillation-free, while the troubled elements are evolved with a robust second/third order FV operator. With this approach we are able to numerically simulate very challenging problems in the context of MHD and SRHD in one and two space dimensions and with very high order polynomials. We perform convergence tests and present a comprehensive one- and two-dimensional test bench for both equation systems, focusing on problems with strong shocks. The presented hybrid approach shows that numerical schemes of very high order of accuracy are able to simulate these complex flow problems in an efficient and robust manner.

  10. Generic expansion of the Jastrow correlation factor in polynomials satisfying symmetry and cusp conditions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lüchow, Arne, E-mail: luechow@rwth-aachen.de; Jülich Aachen Research Alliance; Sturm, Alexander

    2015-02-28

    Jastrow correlation factors play an important role in quantum Monte Carlo calculations. Together with an orbital based antisymmetric function, they allow the construction of highly accurate correlation wave functions. In this paper, a generic expansion of the Jastrow correlation function in terms of polynomials that satisfy both the electron exchange symmetry constraint and the cusp conditions is presented. In particular, an expansion of the three-body electron-electron-nucleus contribution in terms of cuspless homogeneous symmetric polynomials is proposed. The polynomials can be expressed in terms of a fairly arbitrary scaling function, allowing a generic implementation of the Jastrow factor. It is demonstrated with a few examples that the new Jastrow factor achieves 85%–90% of the total correlation energy in a variational quantum Monte Carlo calculation and more than 90% of the diffusion Monte Carlo correlation energy.

  11. Explicit 2-D Hydrodynamic FEM Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jerry

    1996-08-07

    DYNA2D* is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D* contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, and temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state, including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.

  12. Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos

    DTIC Science & Technology

    2001-09-11

    The Askey scheme, represented as a tree structure in figure 1 (following [24]), classifies the hypergeometric orthogonal polynomials and indicates the limit relations between them (see "Some basic hypergeometric polynomials that generalize Jacobi polynomials," Memoirs Amer. Math. Soc.). The orthogonal polynomials associated with the generalized polynomial chaos are drawn from this scheme.
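    For Gaussian inputs, the generalized polynomial chaos uses the (probabilists') Hermite entry of the Askey scheme. A minimal sketch of computing chaos coefficients by Gauss-Hermite quadrature follows; the function name and quadrature size are illustrative assumptions:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def hermite_chaos_coeffs(f, order, nquad=40):
    """Project f(X), X ~ N(0,1), onto probabilists' Hermite polynomials
    He_k (the Gaussian entry of the Askey scheme):
        c_k = E[f(X) He_k(X)] / k!
    with the expectation approximated by Gauss-Hermite quadrature."""
    x, w = He.hermegauss(nquad)        # nodes/weights for weight exp(-x^2/2)
    w = w / sqrt(2.0 * pi)             # normalize to the N(0,1) density
    return [np.sum(w * f(x) * He.hermeval(x, [0] * k + [1])) / factorial(k)
            for k in range(order + 1)]
```

    For example, f(x) = x^2 has the exact two-term chaos expansion He_0 + He_2, which the quadrature recovers.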

  13. An arbitrary high-order Discontinuous Galerkin method for elastic waves on unstructured meshes - III. Viscoelastic attenuation

    NASA Astrophysics Data System (ADS)

    Käser, Martin; Dumbser, Michael; de la Puente, Josep; Igel, Heiner

    2007-01-01

    We present a new numerical method to solve the heterogeneous anelastic, seismic wave equations with arbitrary high order accuracy in space and time on 3-D unstructured tetrahedral meshes. Using the velocity-stress formulation provides a linear hyperbolic system of equations with source terms that is completed by additional equations for the anelastic functions including the strain history of the material. These additional equations result from the rheological model of the generalized Maxwell body and permit the incorporation of realistic attenuation properties of viscoelastic material accounting for the behaviour of elastic solids and viscous fluids. The proposed method combines the Discontinuous Galerkin (DG) finite element (FE) method with the ADER approach using Arbitrary high order DERivatives for flux calculations. The DG approach, in contrast to classical FE methods, uses a piecewise polynomial approximation of the numerical solution which allows for discontinuities at element interfaces. Therefore, the well-established theory of numerical fluxes across element interfaces obtained by the solution of Riemann problems can be applied as in the finite volume framework. The main idea of the ADER time integration approach is a Taylor expansion in time in which all time derivatives are replaced by space derivatives using the so-called Cauchy-Kovalewski procedure which makes extensive use of the governing PDE. Due to the ADER time integration technique the same approximation order in space and time is achieved automatically and the method is a one-step scheme advancing the solution for one time step without intermediate stages. To this end, we introduce a new unrolled recursive algorithm for efficiently computing the Cauchy-Kovalewski procedure by making use of the sparsity of the system matrices. 
The numerical convergence analysis demonstrates that the new schemes provide very high order accuracy even on unstructured tetrahedral meshes, while computational cost and storage space for a desired accuracy can be reduced when applying higher degree approximation polynomials. In addition, we investigate the increase in computing time when the number of relaxation mechanisms due to the generalized Maxwell body is increased. An application to a well-acknowledged test case and comparisons with analytic and reference solutions, obtained by different well-established numerical methods, confirm the performance of the proposed method. Therefore, the development of the highly accurate ADER-DG approach for tetrahedral meshes including viscoelastic material provides a novel, flexible and efficient numerical technique to approach 3-D wave propagation problems including realistic attenuation and complex geometry.

  14. Aberrated laser beams in terms of Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Alda, Javier; Alonso, Jose; Bernabeu, Eusebio

    1996-11-01

    The characterization of light beams has received a great deal of attention in the past decade. Several formalisms have been presented to treat the problem of parameter invariance and characterization in the propagation of light beams along ideal, ABCD, optical systems. Hard- and soft-apertured optical systems have been treated too, and some aberrations have been analyzed, but no formalism has appeared that is able to treat the problem as a whole. In this contribution we use a classical approach to describe the problem of aberrated, and therefore apertured, light beams. The wavefront aberration is included in a pure phase term expanded in terms of the Zernike polynomials. We can then use the relation between the lower-order Zernike polynomials and the Seidel or third-order aberrations. We analyze astigmatism, spherical aberration and coma, and we show how higher-order aberrations can be taken into account. We have calculated the divergence and the radius of curvature of such aberrated beams and the influence of these aberrations on the quality of the light beam. Some numerical simulations have been done to illustrate the method.
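    The low-order Zernike terms mentioned (defocus, astigmatism, coma, spherical aberration) can be combined into a wavefront phase as below. Normalization conventions for Zernike polynomials vary; the unnormalized polar forms used here are one common choice, not necessarily the authors':

```python
import numpy as np

def zernike_phase(rho, theta, a_defocus=0.0, a_astig=0.0, a_coma=0.0, a_sph=0.0):
    """Wavefront phase built from a few low-order Zernike terms in
    unnormalized polar form (rho in [0, 1]):
      defocus     Z(2,0) = 2*rho^2 - 1
      astigmatism Z(2,2) = rho^2 * cos(2*theta)
      coma        Z(3,1) = (3*rho^3 - 2*rho) * cos(theta)
      spherical   Z(4,0) = 6*rho^4 - 6*rho^2 + 1
    """
    return (a_defocus * (2 * rho**2 - 1)
            + a_astig * rho**2 * np.cos(2 * theta)
            + a_coma * (3 * rho**3 - 2 * rho) * np.cos(theta)
            + a_sph * (6 * rho**4 - 6 * rho**2 + 1))
```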

  15. Nonlinear secret image sharing scheme.

    PubMed

    Shin, Sang-Ho; Lee, Gil-Je; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear polynomial arithmetic in order to counter this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. The average values of PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2 m⌉ bits per pixel (bpp), respectively.
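    For context, the linear Shamir scheme these works build on can be sketched as follows (a textbook illustration over GF(p), not the nonlinear variant proposed in the paper):

```python
import random

def shamir_split(secret, t, n, p=2**31 - 1):
    """Shamir (t, n)-threshold sharing: embed the secret as the constant
    term of a random degree-(t-1) polynomial over GF(p) and hand out
    n evaluation points as shares."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t - 1)]
    shares = []
    for x in range(1, n + 1):
        y = sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p
        shares.append((x, y))
    return shares

def shamir_recover(shares, p=2**31 - 1):
    """Lagrange interpolation at x = 0 recovers the secret from any
    t shares (p is prime, so pow(den, p-2, p) is the modular inverse)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        secret = (secret + yi * num * pow(den, p - 2, p)) % p
    return secret
```

    Any t of the n shares reconstruct the secret, while fewer reveal nothing; the Tompa-Woll attack exploits the linearity of this reconstruction, which motivates the nonlinear variants discussed above.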

  16. Nonlinear Secret Image Sharing Scheme

    PubMed Central

    Shin, Sang-Ho; Yoo, Kee-Young

    2014-01-01

    Over the past decade, most secret image sharing schemes have been proposed using Shamir's technique, which is based on linear polynomial arithmetic. Although Shamir-based secret image sharing schemes are efficient and scalable for various environments, there exists a security threat such as the Tompa-Woll attack. Renvall and Ding proposed a new secret sharing technique based on nonlinear polynomial arithmetic in order to counter this threat, but it is hard to apply to secret image sharing. In this paper, we propose a (t, n)-threshold nonlinear secret image sharing scheme with a steganography concept. In order to achieve a suitable and secure secret image sharing scheme, we adapt a modified LSB embedding technique with the XOR Boolean algebra operation, define a new variable m, and change the range of the prime p in the sharing procedure. In order to evaluate the efficiency and security of the proposed scheme, we use the embedding capacity and PSNR. The average values of PSNR and embedding capacity are 44.78 dB and 1.74t⌈log2⁡m⌉ bits per pixel (bpp), respectively. PMID:25140334

  17. Rheological Analysis of Binary Eutectic Mixture of Sodium and Potassium Nitrate and the Effect of Low Concentration CuO Nanoparticle Addition to Its Viscosity.

    PubMed

    Lasfargues, Mathieu; Cao, Hui; Geng, Qiao; Ding, Yulong

    2015-08-11

    This paper is focused on the characterisation and demonstration of Newtonian behaviour, at both high and low shear rates, of the sodium and potassium nitrate eutectic mixture (60/40) from 250 °C to 500 °C. Analysis of published and experimental data was carried out to correlate all the measurements into a single 4th-order polynomial equation. Addition of a low amount of copper oxide nanoparticles to the mixture increased the viscosity by 5.0%-18.0% relative to that equation.

  18. Synthesized tissue-equivalent dielectric phantoms using salt and polyvinylpyrrolidone solutions.

    PubMed

    Ianniello, Carlotta; de Zwart, Jacco A; Duan, Qi; Deniz, Cem M; Alon, Leeor; Lee, Jae-Seung; Lattanzi, Riccardo; Brown, Ryan

    2018-07-01

    To explore the use of polyvinylpyrrolidone (PVP) for simulated materials with tissue-equivalent dielectric properties. PVP and salt were used to control, respectively, relative permittivity and electrical conductivity in a collection of 63 samples with a range of solute concentrations. Their dielectric properties were measured with a commercial probe and fitted to a 3D polynomial in order to establish an empirical recipe. The material's thermal properties and MR spectra were measured. The empirical polynomial recipe (available at https://www.amri.ninds.nih.gov/cgi-bin/phantomrecipe) provides the PVP and salt concentrations required for dielectric materials with permittivity and electrical conductivity values between approximately 45 and 78, and 0.1 to 2 siemens per meter, respectively, from 50 MHz to 4.5 GHz. The second- (solute concentrations) and seventh- (frequency) order polynomial recipe provided less than 2.5% relative error between the measured and target properties. PVP side peaks in the spectra were minor and unaffected by temperature changes. PVP-based phantoms are easy to prepare and nontoxic, and their semitransparency makes air bubbles easy to identify. The polymer can be used to create simulated material with a range of dielectric properties, negligible spectral side peaks, and long T2 relaxation time, which are favorable in many MR applications. Magn Reson Med 80:413-419, 2018. © 2017 International Society for Magnetic Resonance in Medicine.

  19. Optimization of Paclitaxel Containing pH-Sensitive Liposomes By 3 Factor, 3 Level Box-Behnken Design.

    PubMed

    Rane, Smita; Prabhakar, Bala

    2013-07-01

    The aim of this study was to investigate the combined influence of 3 independent variables in the preparation of paclitaxel-containing pH-sensitive liposomes. A 3-factor, 3-level Box-Behnken design was used to derive a second-order polynomial equation and construct contour plots to predict responses. The independent variables selected were the molar ratio of phosphatidylcholine:diolylphosphatidylethanolamine (X1), the molar concentration of cholesterylhemisuccinate (X2), and the amount of drug (X3). Fifteen batches were prepared by the thin film hydration method and evaluated for percent drug entrapment, vesicle size, and pH sensitivity. The transformed values of the independent variables and the percent drug entrapment were subjected to multiple regression to establish a full-model second-order polynomial equation. The F statistic was calculated to confirm the omission of insignificant terms from the full-model equation and to derive a reduced-model polynomial equation to predict the dependent variables. Contour plots were constructed to show the effects of X1, X2, and X3 on the percent drug entrapment. The model was validated for accurate prediction of the percent drug entrapment by performing checkpoint analysis. The computer optimization process and contour plots predicted the levels of the independent variables X1, X2, and X3 (0.99, -0.06, and 0, respectively) for a maximized response of percent drug entrapment with constraints on vesicle size and pH sensitivity.

  20. Versions of the collocation and least squares method for solving biharmonic equations in non-canonical domains

    NASA Astrophysics Data System (ADS)

    Belyaev, V. A.; Shapeev, V. P.

    2017-10-01

    New versions of the collocation and least squares (CLS) method of high-order accuracy are proposed and implemented for the numerical solution of boundary value problems for the biharmonic equation in non-canonical domains. The solution of the biharmonic equation is used for simulating the stress-strain state of an isotropic plate under the action of a transverse load. The differential problem is projected into a space of fourth-degree polynomials by the CLS method. The boundary conditions for the approximate solution are imposed exactly on the boundary of the computational domain. The versions of the CLS method are implemented on grids which are constructed in two different ways. It is shown that the approximate solution of the problems converges with high order; in numerical convergence experiments on a sequence of grids, it matches the analytical solution of the test problems with high accuracy when that solution is known.

  1. On Facilitating the use of HARDI in population studies by creating Rotation-Invariant Markers

    PubMed Central

    Caruyer, Emmanuel; Verma, Ragini

    2014-01-01

    We design and evaluate a novel method to compute rotationally invariant features using High Angular Resolution Diffusion Imaging (HARDI) data. These measures quantify the complexity of the angular diffusion profile modeled using a higher order model, thereby giving more information than classical diffusion tensor-derived parameters. The method is based on the spherical harmonic (SH) representation of the angular diffusion information, and is generalizable to a range of HARDI reconstruction models. These scalars are obtained as homogeneous polynomials of the SH representation of a HARDI reconstruction model. We show that finding such polynomials is equivalent to solving a large linear system of equations, and present a numerical method based on sparse matrices to efficiently solve this system. Among the solutions, we only keep a subset of algebraically independent polynomials, using an algorithm based on a numerical implementation of the Jacobian criterion. We compute a set of 12 or 25 rotationally invariant measures representative of the underlying white matter for the rank-4 or rank-6 spherical harmonic (SH) representation of the apparent diffusion coefficient (ADC) profile, respectively. Synthetic data was used to investigate and quantify the difference in contrast. Real data acquired with multiple repetitions showed that the within-subject variation in the invariants was less than the difference across subjects, facilitating their use to study population differences. These results demonstrate that our measures are able to characterize white matter, especially the complex white matter found in regions of fiber crossings, and hence can be used to derive new biomarkers for HARDI and for HARDI-based population analysis. PMID:25465846
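    The simplest rotation-invariant homogeneous polynomials of an SH representation are the degree-2 power-spectrum invariants, one per SH order. A minimal sketch follows (the full sets of 12 or 25 invariants in the paper also include higher-degree polynomials):

```python
import numpy as np

def sh_power_invariants(coeffs_by_order):
    """Power per SH order, P_l = sum_m c_{l,m}^2, for a real
    spherical-harmonic representation.  P_l is rotation invariant
    because rotations act orthogonally within each order l."""
    return [float(np.sum(np.asarray(c) ** 2)) for c in coeffs_by_order]
```

    Rotating the coefficients of a given order by any orthogonal mixing leaves its power unchanged, which is what makes these quantities usable as voxel-wise scalar markers.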

  2. A discontinuous Galerkin method for the shallow water equations in spherical triangular coordinates

    NASA Astrophysics Data System (ADS)

    Läuter, Matthias; Giraldo, Francis X.; Handorf, Dörthe; Dethloff, Klaus

    2008-12-01

    A global model of the atmosphere is presented governed by the shallow water equations and discretized by a Runge-Kutta discontinuous Galerkin method on an unstructured triangular grid. The shallow water equations on the sphere, a two-dimensional surface in R3, are locally represented in terms of spherical triangular coordinates, the appropriate local coordinate mappings on triangles. On every triangular grid element, this leads to a two-dimensional representation of tangential momentum and therefore only two discrete momentum equations. The discontinuous Galerkin method consists of an integral formulation which requires both area (elements) and line (element faces) integrals. Here, we use a Rusanov numerical flux to resolve the discontinuous fluxes at the element faces. A strong stability-preserving third-order Runge-Kutta method is applied for the time discretization. The polynomial space of order k on each curved triangle of the grid is characterized by a Lagrange basis and requires high-order quadrature rules for the integration over elements and element faces. For the presented method no mass matrix inversion is necessary, except in a preprocessing step. The validation of the atmospheric model has been done considering standard tests from Williamson et al. [D.L. Williamson, J.B. Drake, J.J. Hack, R. Jakob, P.N. Swarztrauber, A standard test set for numerical approximations to the shallow water equations in spherical geometry, J. Comput. Phys. 102 (1992) 211-224], unsteady analytical solutions of the nonlinear shallow water equations and a barotropic instability caused by an initial perturbation of a jet stream. A convergence rate of O(Δx) was observed in the model experiments. Furthermore, a numerical experiment is presented for which the third-order time-integration method limits the model error. Thus, the time step Δt is restricted by both the CFL-condition and accuracy demands.
Conservation of mass was shown up to machine precision, and energy conservation was shown to converge for both increasing grid resolution and increasing polynomial order k.
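    The strong stability-preserving third-order Runge-Kutta method used for the time discretization is commonly written in Shu-Osher form as convex combinations of forward Euler stages; a minimal sketch (the function name is an illustrative assumption):

```python
def ssp_rk3_step(u, dt, rhs):
    """One step of the strong stability-preserving third-order
    Runge-Kutta (SSP-RK3) scheme in Shu-Osher form.  Each stage is a
    convex combination of forward Euler updates, which carries the
    stability properties of forward Euler over to the full scheme."""
    u1 = u + dt * rhs(u)                          # forward Euler stage
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))    # convex combination
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * rhs(u2))
```

    For a linear right-hand side u' = λu this reproduces the third-order Taylor amplification factor 1 + z + z²/2 + z³/6 with z = λΔt.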

  3. Seasonality in twin birth rates, Denmark, 1936-84.

    PubMed

    Bonnelykke, B; Søgaard, J; Nielsen, J

    1987-12-01

    A study was made of seasonality in the twin birth rate in Denmark between 1936 and 1984. We studied all twin births (N = 45,550) among all deliveries (N = 3,679,932) during that period. Statistical analysis using a simple harmonic sinusoidal model provided no evidence for seasonality. However, sequential polynomial analysis disclosed a significant fit to a fifth-order polynomial curve, with peaks in twin birth rates in May-June and December and troughs in February and September. A falling trend in the twinning rate ended in Denmark around 1970, and from 1970 to 1984 an increasing trend was found. The results are discussed in terms of possible environmental influences on twinning.
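    For illustration only, a fifth-order polynomial can be fitted to monthly rates by ordinary least squares; the monthly values below are invented placeholders, not the Danish data from the study:

```python
import numpy as np

# Hypothetical monthly twin birth rates (per 1,000 deliveries) -- made-up
# numbers used only to show the fifth-order polynomial fit.
months = np.arange(1, 13)
rates = np.array([11.8, 11.5, 12.0, 12.6, 13.1, 13.0,
                  12.4, 12.0, 11.7, 12.1, 12.5, 12.9])

coeffs = np.polyfit(months, rates, deg=5)   # fifth-order polynomial fit
fitted = np.polyval(coeffs, months)
residual = np.sqrt(np.mean((rates - fitted) ** 2))
```

A nested lower-order fit can never have a smaller residual sum of squares, which is why sequential polynomial analysis compares successive orders.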

  4. Reconstruction of color biomedical images by means of quaternion generic Jacobi-Fourier moments in the framework of polar pixels

    PubMed Central

    Camacho-Bello, César; Padilla-Vivanco, Alfonso; Toxqui-Quitl, Carina; Báez-Rojas, José Javier

    2016-01-01

    Abstract. A detailed analysis of the quaternion generic Jacobi-Fourier moments (QGJFMs) for color image description is presented. To achieve numerical stability, a recursive approach is used to compute the generic Jacobi radial polynomials. Moreover, a search criterion is applied to establish the best values for the parameters α and β of the radial Jacobi polynomial families. Additionally, a polar pixel approach is used to increase the numerical accuracy in the calculation of the QGJFMs. To validate the theory, color images from optical microscopy and human retina imaging are used. Experiments and results on color image reconstruction are presented. PMID:27014716
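    The paper's recursion is not spelled out in the abstract; a minimal sketch of one common stable approach, the standard three-term recurrence for Jacobi polynomials P_n^{(a,b)}(x), might look like:

```python
def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^{(a,b)}(x) via the standard
    three-term recurrence (a generic illustration, not necessarily the
    exact recursion used in the paper)."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, (a + 1) + (a + b + 2) * (x - 1) / 2
    for k in range(2, n + 1):
        c1 = 2 * k * (k + a + b) * (2 * k + a + b - 2)
        c2 = (2 * k + a + b - 1) * (a * a - b * b)
        c3 = (2 * k + a + b - 1) * (2 * k + a + b) * (2 * k + a + b - 2)
        c4 = 2 * (k + a - 1) * (k + b - 1) * (2 * k + a + b)
        p_prev, p = p, ((c2 + c3 * x) * p - c4 * p_prev) / c1
    return p
```

At x = 1 the value reduces to the binomial coefficient C(n+a, n), a convenient sanity check.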

  5. On a self-consistent representation of earth models, with an application to the computing of internal flattening

    NASA Astrophysics Data System (ADS)

    Denis, C.; Ibrahim, A.

    Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and figures of gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using a Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm for estimating the geometric flattening of any equidense surface, identified by its fractional radius, is developed. The program can also be applied in studies of planetary and stellar models.
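    As a small illustration of the quadrature step, Gauss-Legendre integration of a polynomial density profile is exact once enough nodes are used; the toy profile below is illustrative, not one of the paper's earth models:

```python
import numpy as np

# Mass of a spherically symmetric body with a polynomial density profile,
# via Gauss-Legendre quadrature (exact for polynomial integrands of
# degree <= 2n - 1). The coefficients are toy values.
R = 6.371e6                                   # mean radius in metres

def rho(r):
    return 13.0e3 - 10.0e3 * (r / R) ** 2     # kg/m^3, illustrative profile

nodes, weights = np.polynomial.legendre.leggauss(8)
r = 0.5 * R * (nodes + 1.0)                   # map nodes from [-1, 1] to [0, R]
mass = 0.5 * R * np.sum(weights * 4.0 * np.pi * r**2 * rho(r))
```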

  6. Decomposition of algebraic sets and applications to weak centers of cubic systems

    NASA Astrophysics Data System (ADS)

    Chen, Xingwu; Zhang, Weinian

    2009-10-01

    There are many methods such as Gröbner basis, characteristic set and resultant, in computing an algebraic set of a system of multivariate polynomials. The common difficulties come from the complexity of computation, singularity of the corresponding matrices and some unnecessary factors in successive computation. In this paper, we decompose algebraic sets, stratum by stratum, into a union of constructible sets with Sylvester resultants, so as to simplify the procedure of elimination. Applying this decomposition to systems of multivariate polynomials resulted from period constants of reversible cubic differential systems which possess a quadratic isochronous center, we determine the order of weak centers and discuss the bifurcation of critical periods.
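    The Sylvester resultant at the heart of the decomposition can be sketched numerically; the two small polynomials below are illustrative examples, not the period-constant systems from the paper:

```python
import numpy as np

def sylvester(p, q):
    """Sylvester matrix of two polynomials given as coefficient lists in
    descending powers; its determinant is the resultant, which vanishes
    exactly when the polynomials share a root."""
    m, n = len(p) - 1, len(q) - 1
    S = np.zeros((m + n, m + n))
    for i in range(n):                 # n shifted copies of p
        S[i, i:i + m + 1] = p
    for i in range(m):                 # m shifted copies of q
        S[n + i, i:i + n + 1] = q
    return S

# res(x^2 - 1, x - 2) = 2^2 - 1 = 3, since the resultant equals
# lc(q)^deg(p) times the product of p over the roots of q.
p = [1.0, 0.0, -1.0]    # x^2 - 1
q = [1.0, -2.0]         # x - 2
res = np.linalg.det(sylvester(p, q))
```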

  7. Uncertainty Propagation for Turbulent, Compressible Flow in a Quasi-1D Nozzle Using Stochastic Methods

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise

    2003-01-01

    This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.
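    A minimal one-dimensional Hermite polynomial chaos example (not the paper's compressible-flow setup) shows how expansion coefficients are obtained by Gaussian quadrature:

```python
import math

import numpy as np
from numpy.polynomial import hermite_e as He

# Polynomial chaos for u(xi) = exp(xi) with xi ~ N(0, 1), expanded in
# probabilists' Hermite polynomials He_k. Coefficients come from
# Gauss-Hermite quadrature: c_k = E[u(xi) He_k(xi)] / k!.
order = 8
nodes, weights = He.hermegauss(32)
weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the N(0,1) density

coeffs = np.array([
    np.sum(weights * np.exp(nodes) * He.hermeval(nodes, np.eye(order + 1)[k]))
    / math.factorial(k)
    for k in range(order + 1)
])
# c_0 is the mean of the expansion; analytically c_k = exp(1/2) / k!.
```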

  8. Gegenbauer-solvable quantum chain model

    NASA Astrophysics Data System (ADS)

    Znojil, Miloslav

    2010-11-01

    An N-level quantum model is proposed in which the energies are represented by an N-plet of zeros of a suitable classical orthogonal polynomial. The family of Gegenbauer polynomials G(n,a,x) is selected for illustrative purposes. The main obstacle lies in the non-Hermiticity (aka crypto-Hermiticity) of Hamiltonians H≠H†. We managed to (i) start from the elementary secular equation G(N,a,En)=0, (ii) keep our H, in the nearest-neighbor-interaction spirit, tridiagonal, (iii) render it Hermitian in an ad hoc, nonunique Hilbert space endowed with metric Θ≠I, (iv) construct eligible metrics in closed forms ordered by increasing nondiagonality, and (v) interpret the model as a smeared N-site lattice.
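    The link between the zeros of an orthogonal polynomial and a tridiagonal matrix can be sketched via the classic Golub-Welsch construction; this is a generic illustration, not the paper's Hamiltonian or metric:

```python
import numpy as np

def gegenbauer_zeros(N, a):
    """Zeros of the Gegenbauer polynomial C_N^{(a)} as the eigenvalues of
    the symmetric tridiagonal Jacobi matrix built from the three-term
    recurrence (Golub-Welsch). The off-diagonal entries come from the
    monic recurrence coefficients beta_n = n(n+2a-1)/(4(n+a)(n+a-1))."""
    n = np.arange(1, N)
    beta = n * (n + 2 * a - 1) / (4.0 * (n + a) * (n + a - 1))
    J = np.diag(np.sqrt(beta), 1)
    J = J + J.T
    return np.linalg.eigvalsh(J)       # ascending order
```

For a = 1 the Gegenbauer polynomials reduce to Chebyshev polynomials of the second kind, whose zeros are cos(kπ/(N+1)).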

  9. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The approach is general in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. This new construction algorithm also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline expansion based neurofuzzy systems, Bézier-Bernstein polynomial function based neurofuzzy networks hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions, with the additional advantages of structural parsimony and a Delaunay input space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. This new modeling network is based on an additive decomposition approach together with two separate basis function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
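    The basis-function properties mentioned (nonnegativity, and the basis summing to one over the input domain) are easy to check for the Bernstein basis; a minimal sketch:

```python
from math import comb

def bernstein_basis(n, x):
    """The n+1 Bernstein basis polynomials of degree n evaluated at
    x in [0, 1]."""
    return [comb(n, k) * x**k * (1 - x)**(n - k) for k in range(n + 1)]

# Nonnegative on [0, 1] and summing to one -- the properties that let the
# basis functions be read as fuzzy membership functions.
vals = bernstein_basis(4, 0.3)
```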

  10. On the robustness of bucket brigade quantum RAM

    NASA Astrophysics Data System (ADS)

    Arunachalam, Srinivasan; Gheorghiu, Vlad; Jochym-O'Connor, Tomas; Mosca, Michele; Varshinee Srinivasan, Priyaa

    2015-12-01

    We study the robustness of the bucket brigade quantum random access memory model introduced by Giovannetti et al (2008 Phys. Rev. Lett. 100 160501). Due to a result of Regev and Schiff (ICALP ’08 733), we show that for a class of error models the error rate per gate in the bucket brigade quantum memory has to be of order o(2^{-n/2}) (where N = 2^n is the size of the memory) whenever the memory is used as an oracle for the quantum searching problem. We conjecture that this is the case for any realistic error model that will be encountered in practice, and that for algorithms with super-polynomially many oracle queries the error rate must be super-polynomially small, which further motivates the need for quantum error correction. By contrast, for algorithms such as matrix inversion Harrow et al (2009 Phys. Rev. Lett. 103 150502) or quantum machine learning Rebentrost et al (2014 Phys. Rev. Lett. 113 130503) that only require a polynomial number of queries, the error rate only needs to be polynomially small and quantum error correction may not be required. We introduce a circuit model for the quantum bucket brigade architecture and argue that quantum error correction for the circuit causes the quantum bucket brigade architecture to lose its primary advantage of a small number of ‘active’ gates, since all components have to be actively error corrected.

  11. Uncertainty Quantification in CO2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.

  12. A Thick-Restart Lanczos Algorithm with Polynomial Filtering for Hermitian Eigenvalue Problems

    DOE PAGES

    Li, Ruipeng; Xi, Yuanzhe; Vecharynski, Eugene; ...

    2016-08-16

    Polynomial filtering can provide a highly effective means of computing all eigenvalues of a real symmetric (or complex Hermitian) matrix that are located in a given interval, anywhere in the spectrum. This paper describes a technique for tackling this problem by combining a thick-restart version of the Lanczos algorithm with deflation ("locking") and a new type of polynomial filter obtained from a least-squares technique. Furthermore, the resulting algorithm can be utilized in a “spectrum-slicing” approach whereby a very large number of eigenvalues and associated eigenvectors of the matrix are computed by extracting eigenpairs located in different subintervals independently from one another.
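    As a sketch of the filtering idea (using a plain Chebyshev filter rather than the paper's least-squares filter), a polynomial in the matrix amplifies eigencomponents in a target subinterval; the matrix and interval below are illustrative:

```python
import numpy as np

# Eigenvalues outside the interval mapped to [-1, 1] are amplified
# exponentially by Chebyshev polynomials, so applying T_k(B) to a vector
# enriches it in the eigenvectors of the target subinterval.
evals = np.array([0.1, 0.3, 0.5, 0.7, 0.95])
A = np.diag(evals)                       # stand-in symmetric matrix
# Target eigenvalues in [0.8, 1]; map the unwanted part [0, 0.8] to [-1, 1].
B = 2.5 * A - np.eye(5)

v = np.ones(5) / np.sqrt(5.0)
t_prev, t = v, B @ v                     # T_0(B)v and T_1(B)v
for _ in range(19):                      # advance to T_20(B)v by recurrence
    t_prev, t = t, 2.0 * B @ t - t_prev
w = t / np.linalg.norm(t)
# w is now dominated by the eigenvector for lambda = 0.95.
```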

  13. Optimization of supercritical carbon dioxide extraction of Piper Betel Linn leaves oil and total phenolic content

    NASA Astrophysics Data System (ADS)

    Aziz, A. H. A.; Yunus, M. A. C.; Arsad, N. H.; Lee, N. Y.; Idham, Z.; Razak, A. Q. A.

    2016-11-01

    Supercritical carbon dioxide (SC-CO2) extraction was applied to extract Piper betel Linn leaves. Piper betel leaf oil is used as an antioxidant and as an anti-diabetic, anticancer, and antistroke agent. The aim of this study was to optimize the conditions of pressure, temperature, and flowrate for oil yield and total phenolic content. The operational conditions of SC-CO2 studied were pressure (10, 20, 30 MPa), temperature (40, 60, 80 °C), and carbon dioxide flowrate (4, 6, 8 mL/min). The constant parameters were average particle size (355 μm) and extraction time (3.5 hours). A first-order polynomial expression was used to model the extracted oil, while a second-order polynomial expression was used to model the total phenolic content; both fits were satisfactory. The best conditions to maximize the extraction oil yield and total phenolic content were 30 MPa, 80 °C, and 4.42 mL/min, leading to 7.32% oil, and 29.72 MPa, 67.53 °C, and 7.98 mL/min, leading to 845.085 mg GAE/g sample. In terms of an optimum condition with both high extraction yield and high total phenolic content in the extracts, the best operating conditions were 30 MPa, 78 °C, and 8 mL/min, with a 7.05% yield and 791.709 mg gallic acid equivalent (GAE)/g sample. The most dominant conditions for the extraction of oil yield and phenolic content were pressure and CO2 flowrate. The results show a good fit to the proposed model, and the optimal conditions obtained were within the experimental range, with R² values of 96.13% for percentage yield and 98.52% for total phenolic content.
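    A second-order polynomial response surface of the kind used here for the total phenolic content can be sketched with ordinary least squares; the design points and coefficients below are synthetic, not the experimental values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic design over the studied ranges: pressure (MPa),
# temperature (deg C), flowrate (mL/min).
X = rng.uniform([10, 40, 4], [30, 80, 8], size=(60, 3))

def design_matrix(X):
    """Second-order polynomial model: intercept, linear, squared, and
    two-factor interaction terms, as in a standard response surface."""
    p, t, f = X.T
    return np.column_stack([np.ones(len(X)), p, t, f,
                            p**2, t**2, f**2, p * t, p * f, t * f])

# Made-up "true" coefficients, used only to generate data to recover.
beta_true = np.array([5.0, 0.8, 0.3, -0.2, -0.01, -0.002,
                      0.05, 0.004, -0.01, 0.002])
y = design_matrix(X) @ beta_true + rng.normal(0.0, 0.01, 60)

beta_hat, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
```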

  14. Explaining variation in tropical plant community composition: influence of environmental and spatial data quality.

    PubMed

    Jones, Mirkka M; Tuomisto, Hanna; Borcard, Daniel; Legendre, Pierre; Clark, David B; Olivas, Paulo C

    2008-03-01

    The degree to which variation in plant community composition (beta-diversity) is predictable from environmental variation, relative to other spatial processes, is of considerable current interest. We addressed this question in Costa Rican rain forest pteridophytes (1,045 plots, 127 species). We also tested the effect of data quality on the results, which has largely been overlooked in earlier studies. To do so, we compared two alternative spatial models [polynomial vs. principal coordinates of neighbour matrices (PCNM)] and ten alternative environmental models (all available environmental variables vs. four subsets, and including their polynomials vs. not). Of the environmental data types, soil chemistry contributed most to explaining pteridophyte community variation, followed in decreasing order of contribution by topography, soil type and forest structure. Environmentally explained variation increased moderately when polynomials of the environmental variables were included. Spatially explained variation increased substantially when the multi-scale PCNM spatial model was used instead of the traditional, broad-scale polynomial spatial model. The best model combination (PCNM spatial model and full environmental model including polynomials) explained 32% of pteridophyte community variation, after correcting for the number of sampling sites and explanatory variables. Overall evidence for environmental control of beta-diversity was strong, and the main floristic gradients detected were correlated with environmental variation at all scales encompassed by the study (c. 100-2,000 m). Depending on model choice, however, total explained variation differed more than fourfold, and the apparent relative importance of space and environment could be reversed. Therefore, we advocate a broader recognition of the impacts that data quality has on analysis results. 
A general understanding of the relative contributions of spatial and environmental processes to species distributions and beta-diversity requires that methodological artefacts are separated from real ecological differences.

  15. Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos

    DTIC Science & Technology

    2002-07-25

    Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc., AMS … orthogonal polynomial functionals from the Askey scheme, as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection … (1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combinations. It is worth …

  16. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.

  17. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks.

    PubMed

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-03-24

    Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified an issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol attains mutual validation and the session key agreement property strongly among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
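    The symmetric-polynomial idea underlying such key establishment schemes can be sketched in a few lines; this is a generic Blundo-style construction with toy parameters, not the EKM protocol itself:

```python
# Symmetric bivariate polynomial key establishment (toy parameters).
P = 2**31 - 1          # public prime modulus

# Symmetric coefficient matrix: f(x, y) = sum A[i][j] x^i y^j with
# A[i][j] == A[j][i], so f(x, y) == f(y, x).
A = [[7, 11, 3],
     [11, 5, 13],
     [3, 13, 2]]

def share(node_id):
    """Univariate share g(y) = f(node_id, y), preloaded onto one node."""
    return [sum(A[i][j] * pow(node_id, i, P) for i in range(3)) % P
            for j in range(3)]

def key(my_share, peer_id):
    """Pairwise key: evaluate the local share at the peer's identity."""
    return sum(c * pow(peer_id, j, P) for j, c in enumerate(my_share)) % P

k_ab = key(share(17), 42)
k_ba = key(share(42), 17)
# Both nodes derive the same key because f is symmetric.
```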

  18. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks

    PubMed Central

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-01-01

    Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified an issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol attains mutual validation and the session key agreement property strongly among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure. PMID:28338632

  19. Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.

    PubMed

    Mahajan, Virendra N

    2012-06-20

    In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L(l)(x)L(m)(y), where l and m are positive integers (including zero) and L(l)(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L(l)(x)L(m)(y), there is a corresponding orthonormal polynomial L(l)(y)L(m)(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. 
We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.
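    For the rectangular-pupil case the paper builds on, the orthonormality of the 2D Legendre products is easy to verify numerically; the circular-pupil polynomials require the Gram-Schmidt step and are not reproduced here:

```python
import numpy as np
from numpy.polynomial import legendre

# Orthonormal 1-D Legendre polynomial on [-1, 1]: L_l(x) = sqrt((2l+1)/2) P_l(x).
def L(l, x):
    return np.sqrt((2 * l + 1) / 2.0) * legendre.legval(x, np.eye(l + 1)[l])

# Gauss-Legendre grid; exact for the polynomial products involved.
x, wx = legendre.leggauss(12)

def inner(l1, m1, l2, m2):
    """<L_l1(x) L_m1(y), L_l2(x) L_m2(y)> over the square [-1, 1]^2,
    which separates into a product of two 1-D integrals."""
    ix = np.sum(wx * L(l1, x) * L(l2, x))
    iy = np.sum(wx * L(m1, x) * L(m2, x))
    return ix * iy
```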

  20. Stochastic uncertainty analysis for unconfined flow systems

    USGS Publications Warehouse

    Liu, Gaisheng; Zhang, Dongxiao; Lu, Zhiming

    2006-01-01

    A new stochastic approach proposed by Zhang and Lu (2004), called the Karhunen-Loeve decomposition-based moment equation (KLME), has been extended to solving nonlinear, unconfined flow problems in randomly heterogeneous aquifers. This approach is based on an innovative combination of Karhunen-Loeve decomposition, polynomial expansion, and perturbation methods. The random log-transformed hydraulic conductivity field (lnKS) is first expanded into a series in terms of orthogonal Gaussian standard random variables with their coefficients obtained as the eigenvalues and eigenfunctions of the covariance function of lnKS. Next, head h is decomposed as a perturbation expansion series Σh(m), where h(m) represents the mth-order head term with respect to the standard deviation of lnKS. Then h(m) is further expanded into a polynomial series of m products of orthogonal Gaussian standard random variables whose coefficients hi1,i2,...,im(m) are deterministic and solved sequentially from low to high expansion orders using MODFLOW-2000. Finally, the statistics of head and flux are computed using simple algebraic operations on hi1,i2,...,im(m). A series of numerical test results in 2-D and 3-D unconfined flow systems indicated that the KLME approach is effective in estimating the mean and (co)variance of both heads and fluxes and requires much less computational effort as compared to the traditional Monte Carlo simulation technique.
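    The Karhunen-Loeve step can be sketched discretely: eigendecomposing a covariance matrix yields the KL modes and their variances; the kernel and parameters below are illustrative, not the aquifer models from the paper:

```python
import numpy as np

# Discrete Karhunen-Loeve decomposition of an exponential covariance
# C(x1, x2) = sigma^2 exp(-|x1 - x2| / eta) on a 1-D grid.
n, sigma, eta = 200, 1.0, 0.3
x = np.linspace(0.0, 1.0, n)
C = sigma**2 * np.exp(-np.abs(x[:, None] - x[None, :]) / eta)

# Eigenpairs of the covariance give the KL modes; eigh returns them in
# ascending order, so reverse to put the dominant modes first.
lam, phi = np.linalg.eigh(C)
lam, phi = lam[::-1], phi[:, ::-1]

# Fraction of total variance captured by the leading k modes: truncating
# here is what makes the subsequent polynomial expansion tractable.
captured = np.cumsum(lam) / np.sum(lam)
```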

  1. A pressure-based semi-implicit space-time discontinuous Galerkin method on staggered unstructured meshes for the solution of the compressible Navier-Stokes equations at all Mach numbers

    NASA Astrophysics Data System (ADS)

    Tavelli, Maurizio; Dumbser, Michael

    2017-07-01

    We propose a new arbitrary high order accurate semi-implicit space-time discontinuous Galerkin (DG) method for the solution of the two and three dimensional compressible Euler and Navier-Stokes equations on staggered unstructured curved meshes. The method is pressure-based and semi-implicit and is able to deal with all Mach number flows. The new DG scheme extends the seminal ideas outlined in [1], where a second order semi-implicit finite volume method for the solution of the compressible Navier-Stokes equations with a general equation of state was introduced on staggered Cartesian grids. Regarding the high order extension we follow [2], where a staggered space-time DG scheme for the incompressible Navier-Stokes equations was presented. In our scheme, the discrete pressure is defined on the primal grid, while the discrete velocity field and the density are defined on a face-based staggered dual grid. Then, the mass conservation equation, as well as the nonlinear convective terms in the momentum equation and the transport of kinetic energy in the energy equation are discretized explicitly, while the pressure terms appearing in the momentum and energy equation are discretized implicitly. Formal substitution of the discrete momentum equation into the total energy conservation equation yields a linear system for only one unknown, namely the scalar pressure. Here the equation of state is assumed linear with respect to the pressure. The enthalpy and the kinetic energy are taken explicitly and are then updated using a simple Picard procedure. Thanks to the use of a staggered grid, the final pressure system is a very sparse block five-point system for three dimensional problems and it is a block four-point system in the two dimensional case. Furthermore, for high order in space and piecewise constant polynomials in time, the system is observed to be symmetric and positive definite. This allows the use of fast linear solvers such as the conjugate gradient (CG) method.
    In addition, all the volume and surface integrals needed by the scheme depend only on the geometry and the polynomial degree of the basis and test functions and can therefore be precomputed and stored in a preprocessing stage. This leads to significant savings in terms of computational effort for the time evolution part. In this way, the extension to a fully curved isoparametric approach also becomes natural and affects only the preprocessing step. The viscous terms and the heat flux are also discretized making use of the staggered grid by defining the viscous stress tensor and the heat flux vector on the dual grid, which corresponds to the use of a lifting operator, but on the dual grid. The time step of our new numerical method is limited by a CFL condition based only on the fluid velocity and not on the sound speed. This makes the method particularly interesting for low Mach number flows. Finally, a very simple combination of artificial viscosity and the a posteriori MOOD technique allows the scheme to deal with shock waves and thus also permits the simulation of high Mach number flows. We show computational results for a large set of two and three-dimensional benchmark problems, including both low and high Mach number flows and using polynomial approximation degrees up to p = 4.

  2. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

    New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy to compare against more familiar methods from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.

  3. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately via adaptively penalizing the likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of order three for body weight (BWE) and body length (BL) and of order two for body depth (BD). By using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between single or pairwise growth time-points exceeded 0.5, although correlations between early and late growth time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of the comprehensive selection for BWE and the main morphological traits.

  4. Influence of flooding duration on the biomass growth of alder and willow.

    Treesearch

    Lewis F. Ohmann; M. Dean Knighton; Ronald McRoberts

    1990-01-01

    Simple second-order (quadratic) polynomials were used to model the relationship between 3-year biomass increase (net ovendry weight in grams) and flooding duration (days) for four combinations of shrub type (alder, willow) and soils type (fine-sand, clay-loam).

  5. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
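The comparison described above can be illustrated with a small sketch (not the article's own code): a degree-4 interpolating polynomial on equally spaced nodes in [0, 2] versus the degree-4 Taylor polynomial of e^x centred at 0, evaluated near the far end of the interval. The node placement and evaluation point are arbitrary choices for this demonstration.

```python
import math

def taylor_exp(x, n):
    # Degree-n Taylor polynomial of e^x about 0.
    return sum(x ** k / math.factorial(k) for k in range(n + 1))

def lagrange(nodes, values, x):
    # Lagrange form of the interpolating polynomial, evaluated at x.
    total = 0.0
    for j, xj in enumerate(nodes):
        basis = 1.0
        for m, xm in enumerate(nodes):
            if m != j:
                basis *= (x - xm) / (xj - xm)
        total += values[j] * basis
    return total

nodes = [0.0, 0.5, 1.0, 1.5, 2.0]              # five equally spaced nodes
values = [math.exp(t) for t in nodes]
x = 1.75                                        # far from the Taylor centre 0
err_interp = abs(lagrange(nodes, values, x) - math.exp(x))
err_taylor = abs(taylor_exp(x, 4) - math.exp(x))
```

Away from the Taylor centre the interpolant, whose nodes are spread across the whole interval, is markedly more accurate than the Taylor polynomial of the same degree.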

  6. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).
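For the simplest constant-frequency case, the zero-crossing idea behind the estimator can be sketched as follows; the adaptive-window machinery for time-varying IF is not reproduced here, and the sampling rate and test frequency are arbitrary choices.

```python
import math

def zc_frequency(samples, fs):
    """Estimate the frequency (Hz) of a real constant-frequency sinusoid
    from its zero-crossing count: the signal crosses zero twice per period."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:])
        if a * b < 0 or (a == 0 and b != 0)
    )
    duration = (len(samples) - 1) / fs
    return crossings / (2.0 * duration)

fs = 1000.0                                   # sampling rate, Hz
x = [math.sin(2 * math.pi * 50.0 * n / fs) for n in range(1000)]
f_hat = zc_frequency(x, fs)                   # close to the true 50 Hz
```

For a polynomial IF, the paper instead fits the crossing instants rather than merely counting them; the count-based estimate above is the zeroth-order version of that idea.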

  7. Third-order polynomial model for analyzing stickup state laminated structure in flexible electronics

    NASA Astrophysics Data System (ADS)

    Meng, Xianhong; Wang, Zihao; Liu, Boya; Wang, Shuodao

    2018-02-01

Laminated hard-soft integrated structures play a significant role in the fabrication and development of flexible electronic devices. Flexible electronics are soft and lightweight, and can be folded, twisted, flipped inside-out, or pasted onto other surfaces of arbitrary shape. In this paper, an analytical model is presented to study the mechanics of laminated hard-soft structures in flexible electronics under a stickup state. Third-order polynomials are used to describe the displacement field, and the principle of virtual work is adopted to derive the governing equations and boundary conditions. The normal strain and the shear stress along the thickness direction in the bi-material region are obtained analytically and agree well with the results from finite element analysis. The analytical model can be used to analyze stickup state laminated structures, and can serve as a valuable reference for the failure prediction and optimal design of flexible electronics in the future.

  8. A Reconstructed Discontinuous Galerkin Method for the Euler Equations on Arbitrary Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hong Luo; Luqing Luo; Robert Nourgaliev

    2012-11-01

A reconstruction-based discontinuous Galerkin (RDG(P1P2)) method, a variant of the P1P2 method, is presented for the solution of the compressible Euler equations on arbitrary grids. In this method, an in-cell reconstruction, designed to enhance the accuracy of the discontinuous Galerkin method, is used to obtain a quadratic polynomial solution (P2) from the underlying linear polynomial (P1) discontinuous Galerkin solution using a least-squares method. The stencils used in the reconstruction involve only the von Neumann neighborhood (face-neighboring cells) and are compact and consistent with the underlying DG method. The developed RDG method is used to compute a variety of flow problems on arbitrary meshes to demonstrate its accuracy, efficiency, robustness, and versatility. The numerical results indicate that this RDG(P1P2) method is third-order accurate, and outperforms the third-order DG method (DG(P2)) in terms of both computing costs and storage requirements.

  9. Simultaneous estimation of multiple phases in digital holographic interferometry using state space analysis

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-05-01

A new approach is proposed for multiple phase estimation from a multicomponent exponential phase signal recorded in multi-beam digital holographic interferometry. It is capable of providing multidimensional measurements simultaneously from a single recording of the exponential phase signal encoding multiple phases. Each phase within a small window around each pixel is approximated with a first-order polynomial function of the spatial coordinates. The problem of accurately estimating the polynomial coefficients, and in turn the unwrapped phases, is formulated as a state space analysis wherein the coefficients and signal amplitudes are set as the elements of a state vector. The state estimation is performed using the extended Kalman filter. An amplitude discrimination criterion is utilized to unambiguously estimate the coefficients associated with the individual signal components. The performance of the proposed method is stable over a wide range of the ratio of signal amplitudes. The pixelwise phase estimation approach allows the method to handle fringe patterns that may contain invalid regions.

  10. Second Order Boltzmann-Gibbs Principle for Polynomial Functions and Applications

    NASA Astrophysics Data System (ADS)

    Gonçalves, Patrícia; Jara, Milton; Simon, Marielle

    2017-01-01

In this paper we give a new proof of the second order Boltzmann-Gibbs principle introduced in Gonçalves and Jara (Arch Ration Mech Anal 212(2):597-644, 2014). The proof does not require knowledge of the spectral gap inequality for the underlying model and relies on a proper decomposition of the antisymmetric part of the current of the system in terms of polynomial functions. In addition, we fully derive the convergence of the equilibrium fluctuations towards (1) a trivial process in the case of super-diffusive systems, and (2) an Ornstein-Uhlenbeck process or the unique energy solution of the stochastic Burgers equation, as defined in Gubinelli and Jara (SPDEs Anal Comput (1):325-350, 2013) and Gubinelli and Perkowski (arXiv:1508.07764, 2015), in the case of weakly asymmetric diffusive systems. Examples and applications are presented for weakly and partially asymmetric exclusion processes, weakly asymmetric speed change exclusion processes, and Hamiltonian systems with exponential interactions.

  11. Lyapunov functions for a class of nonlinear systems using Caputo derivative

    NASA Astrophysics Data System (ADS)

    Fernandez-Anaya, G.; Nava-Antonio, G.; Jamous-Galante, J.; Muñoz-Vega, R.; Hernández-Martínez, E. G.

    2017-02-01

    This paper presents an extension of recent results that allow proving the stability of Caputo nonlinear and time-varying systems, by means of the fractional order Lyapunov direct method, using quadratic Lyapunov functions. This article introduces a new way of building polynomial Lyapunov functions of any positive integer order as a way of determining the stability of a greater variety of systems when the order of the derivative is 0 < α < 1. Some examples are given to validate these results.

  12. Higher-order neural networks, Pólya polynomials, and Fermi cluster diagrams

    NASA Astrophysics Data System (ADS)

    Kürten, K. E.; Clark, J. W.

    2003-09-01

The problem of controlling higher-order interactions in neural networks is addressed with techniques commonly applied in the cluster analysis of quantum many-particle systems. For multineuron synaptic weights chosen according to a straightforward extension of the standard Hebbian learning rule, we show that higher-order contributions to the stimulus felt by a given neuron can be readily evaluated via Pólya's combinatoric group-theoretical approach or, equivalently, by exploiting a precise formal analogy with fermion diagrammatics.

  13. The Ritz - Sublaminate Generalized Unified Formulation approach for piezoelectric composite plates

    NASA Astrophysics Data System (ADS)

    D'Ottavio, Michele; Dozio, Lorenzo; Vescovini, Riccardo; Polit, Olivier

    2018-01-01

This paper extends to composite plates including piezoelectric plies the variable-kinematics plate modeling approach called Sublaminate Generalized Unified Formulation (SGUF). Two-dimensional plate equations are obtained upon defining a priori the through-thickness distribution of the displacement field and electric potential. According to SGUF, independent approximations can be adopted for the four components of these generalized displacements: an Equivalent Single Layer (ESL) or Layer-Wise (LW) description over an arbitrary group of plies constituting the composite plate (the sublaminate) and the polynomial order employed in each sublaminate. The solution of the two-dimensional equations is sought in weak form by means of a Ritz method. In this work, boundary functions are used in conjunction with the domain approximation expressed by an orthogonal basis spanned by Legendre polynomials. The proposed computational tool is capable of representing electroded surfaces with equipotentiality conditions. Free-vibration problems as well as static problems involving actuator and sensor configurations are addressed. Two case studies are presented, which demonstrate the high accuracy of the proposed Ritz-SGUF approach. A model assessment shows the extent to which the SGUF approach allows a reduction of the number of unknowns with a controlled impact on the accuracy of the result.

  14. A fast, automated, polynomial-based cosmic ray spike-removal method for the high-throughput processing of Raman spectra.

    PubMed

    Schulze, H Georg; Turner, Robin F B

    2013-04-01

Raman spectra often contain undesirable, randomly positioned, intense, narrow-bandwidth, positive, unidirectional spectral features generated when cosmic rays strike charge-coupled device cameras. These must be removed prior to analysis, but doing so manually is not feasible for large data sets. We developed a quick, simple, effective, semi-automated procedure to remove cosmic ray spikes from spectral data sets that contain large numbers of relatively homogeneous spectra. Some inhomogeneous spectral data sets can also be accommodated (by replacing excessively modified spectra with the originals and removing their spikes with a median filter instead), although caution is advised when processing such data sets. In addition, the technique is suitable for interpolating missing spectra or replacing aberrant spectra with good spectral estimates. The method is applied to baseline-flattened spectra and relies on fitting a third-order (or higher) polynomial through all the spectra at every wavenumber. Pixel intensities in excess of a threshold of 3× the noise standard deviation above the fit are reduced to the threshold level. Because only two parameters (with readily specified default values) might require further adjustment, the method is easily implemented for semi-automated processing of large spectral sets.
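A minimal numpy sketch of the described procedure, assuming the data set is arranged as a spectra-by-wavenumbers matrix. The function name, the use of the plain residual standard deviation as the noise estimate, and the clipping of excess pixels down to the threshold follow the abstract's description, but the details are this sketch's assumptions, not the authors' code.

```python
import numpy as np

def despike(spectra, degree=3, k=3.0):
    """Clip cosmic-ray spikes: at every wavenumber (column), fit a
    polynomial of `degree` through the intensities across the set of
    spectra (rows), then reduce pixels more than k noise standard
    deviations above the fit down to that threshold."""
    spectra = np.asarray(spectra, dtype=float)
    idx = np.arange(spectra.shape[0])
    # One least-squares polynomial per wavenumber, fitted along the
    # spectrum index; coeffs has shape (degree + 1, n_wavenumbers).
    coeffs = np.polynomial.polynomial.polyfit(idx, spectra, degree)
    fit = np.polynomial.polynomial.polyval(idx, coeffs).T
    sigma = (spectra - fit).std(axis=0)      # per-wavenumber noise estimate
    threshold = fit + k * sigma
    return np.minimum(spectra, threshold)    # only positive spikes are clipped
```

The two tunable parameters mentioned in the abstract correspond here to `degree` and `k`.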

  15. Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.

    1999-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  16. Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.

    1998-01-01

    A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits are constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  17. Nano-transfersomes as a novel carrier for transdermal delivery.

    PubMed

    Chaudhary, Hema; Kohli, Kanchan; Kumar, Vikash

    2013-09-15

The aim of this study was to design and optimize nano-transfersomes of Diclofenac diethylamine (DDEA) and Curcumin (CRM). A 3³ factorial (Box-Behnken) design was used to derive a second-order polynomial equation and to construct 2-D (contour) and 3-D (response surface) plots for prediction of responses. The independent variables were the ratio of lipid to surfactant (X1), weight of lipid to surfactant (X2), and sonication time (X3); the dependent variables were entrapment efficiency of DDEA (Y1), entrapment efficiency of CRM (Y2), effect on particle size (Y3), flux of DDEA (Y4), and flux of CRM (Y5). The 2-D and 3-D plots were drawn and the statistical validity of the polynomials was established to find the composition of the optimized formulation. The design established the role of the derived polynomial equation and the 2-D and 3-D plots in predicting the values of dependent variables for the preparation and optimization of nano-transfersomes for transdermal drug release. Copyright © 2013 Elsevier B.V. All rights reserved.
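The second-order polynomial model underlying such response-surface designs can be illustrated with a synthetic three-factor example; the design points, response, and coefficients below are invented for the demonstration and are not the study's data.

```python
import numpy as np

def quadratic_model_matrix(X):
    """Full second-order model in three coded factors:
    intercept, linear, two-factor interaction, and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1 ** 2, x2 ** 2, x3 ** 2,
    ])

# Synthetic design points and response (illustrative only).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(30, 3))
y = 2.0 + 0.5 * X[:, 0] - 1.0 * X[:, 1] + 0.3 * X[:, 0] * X[:, 2] + 0.8 * X[:, 2] ** 2
# Least-squares estimate of the polynomial coefficients.
beta, *_ = np.linalg.lstsq(quadratic_model_matrix(X), y, rcond=None)
```

The fitted coefficients define the surface whose contour (2-D) and response-surface (3-D) plots are used to locate the optimized formulation.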

  18. Deformed oscillator algebra approach of some quantum superintegrable Lissajous systems on the sphere and of their rational extensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marquette, Ian, E-mail: i.marquette@uq.edu.au; Quesne, Christiane, E-mail: cquesne@ulb.ac.be

    2015-06-15

We extend the construction of 2D superintegrable Hamiltonians with separation of variables in spherical coordinates using combinations of shift, ladder, and supercharge operators to models involving rational extensions of the two-parameter Lissajous systems on the sphere. These new families of superintegrable systems with integrals of arbitrary order are connected with Jacobi exceptional orthogonal polynomials of type I (or II) and supersymmetric quantum mechanics. Moreover, we present an algebraic derivation of the degenerate energy spectrum for the one- and two-parameter Lissajous systems and the rationally extended models. These results are based on finitely generated polynomial algebras, Casimir operators, realizations as deformed oscillator algebras, and finite-dimensional unitary representations. Such results have only been established so far for 2D superintegrable systems separable in Cartesian coordinates, which are related to a class of polynomial algebras that display a simpler structure. We also point out how the structure function of these deformed oscillator algebras is directly related with the generalized Heisenberg algebras spanned by the nonpolynomial integrals.

  19. Stabilization of nonlinear systems using sampled-data output-feedback fuzzy controller based on polynomial-fuzzy-model-based control approach.

    PubMed

    Lam, H K

    2012-02-01

    This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant using a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, it is more challenging for the controller design and system analysis compared to the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is kept constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, which either share the same number of fuzzy rules or not, are considered. The system stability is investigated based on the Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee the system stability and synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.

  20. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    NASA Astrophysics Data System (ADS)

Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre

    2011-12-01

Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and versus the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.

  1. Sensor selection cost optimisation for tracking structurally cyclic systems: a P-order solution

    NASA Astrophysics Data System (ADS)

    Doostmohammadian, M.; Zarrabi, H.; Rabiee, H. R.

    2017-08-01

Measurements and sensing implementations impose certain costs in sensor networks. Sensor selection cost optimisation is the problem of minimising the sensing cost of monitoring a physical (or cyber-physical) system. Consider a given set of sensors tracking states of a dynamical system for estimation purposes, and assume for each sensor different costs to measure different (realisable) states. The idea is to assign sensors to measure states such that the global cost is minimised. The number and selection of sensor measurements need to ensure observability, so that the dynamic state of the system can be tracked with bounded estimation error. The main question we address is how to select the state measurements to minimise the cost while satisfying the observability conditions. Relaxing the observability condition for structurally cyclic systems, the main contribution is a graph-theoretic approach that solves the problem in polynomial time. Note that polynomial-time algorithms are suitable for large-scale systems, as their running time is upper-bounded by a polynomial expression in the size of the algorithm's input. We frame the problem as a linear sum assignment problem with polynomial solution complexity.
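The linear sum assignment framing can be illustrated with a toy cost matrix. For clarity the sketch below uses stdlib brute force over permutations; the paper's point is that the assignment problem is solvable in polynomial time (e.g. O(n³) with the Hungarian algorithm), and the cost values here are hypothetical.

```python
from itertools import permutations

# Hypothetical sensing-cost matrix: cost[i][j] is the cost for sensor i
# to measure state j.
cost = [
    [4, 1, 3],
    [2, 0, 5],
    [3, 2, 2],
]

def min_cost_assignment(cost):
    """Assign each sensor to a distinct state at minimum total cost.
    Brute force over permutations, for clarity only; a linear sum
    assignment is solvable in polynomial time by the Hungarian algorithm."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return best, sum(cost[i][best[i]] for i in range(n))

assignment, total = min_cost_assignment(cost)
```

An unrealisable sensor-state pair would be modelled by a prohibitively large cost entry.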

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned ℓ¹-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  3. Recognition of Arabic Sign Language Alphabet Using Polynomial Classifiers

    NASA Astrophysics Data System (ADS)

    Assaleh, Khaled; Al-Rousan, M.

    2005-12-01

    Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
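The appeal of polynomial classifiers noted above (no iterative training) comes from expanding the features polynomially and fitting linear weights in closed form. Below is a tiny synthetic two-class sketch; the data stand in for the paper's ArSL features and the second-order expansion is an assumption of this illustration.

```python
import numpy as np

def poly_expand(X):
    """Second-order polynomial expansion of two features:
    [1, x1, x2, x1^2, x1*x2, x2^2]."""
    x1, x2 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x1 ** 2, x1 * x2, x2 ** 2])

# Tiny synthetic two-class problem (stand-in for real feature vectors).
X = np.array([[0.0, 0.0], [0.1, 0.2], [0.9, 1.0], [1.0, 0.8]])
labels = np.array([0, 0, 1, 1])
T = np.eye(2)[labels]                 # one-hot targets, one column per class
# Closed-form least-squares weights: no iterative training required.
W, *_ = np.linalg.lstsq(poly_expand(X), T, rcond=None)
pred = np.argmax(poly_expand(X) @ W, axis=1)
```

Scalability with the number of classes follows from each class contributing just one extra column of targets to the same least-squares solve.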

  4. High-precision numerical integration of equations in dynamics

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

An important requirement in solving differential equations in Dynamics, such as the equations of motion of celestial bodies and, in particular, of cosmic robotic systems, is high accuracy over large time intervals. One effective tool for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
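For a polynomial right-hand side, the Taylor coefficients follow from a simple recursion via Cauchy products. A sketch for the scalar model problem y' = y², chosen here for illustration and not taken from the paper:

```python
def taylor_coeffs_riccati(y0, n):
    """Taylor coefficients a_0..a_n of the solution of y' = y^2, y(0) = y0.
    Because the right-hand side is polynomial in y, the Cauchy product of
    the series with itself gives the recursion
        (k + 1) * a_{k+1} = sum_{j=0..k} a_j * a_{k-j}."""
    a = [float(y0)]
    for k in range(n):
        a.append(sum(a[j] * a[k - j] for j in range(k + 1)) / (k + 1))
    return a

# y(0) = 1 gives the exact solution 1 / (1 - t): every coefficient is 1.
coeffs = taylor_coeffs_riccati(1.0, 6)
```

The same Cauchy-product bookkeeping extends to systems with polynomial right-hand sides, which is what makes the reduction advocated in the abstract so effective.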

  5. Linear precoding based on polynomial expansion: reducing complexity in massive MIMO.

    PubMed

    Mueller, Axel; Kammoun, Abla; Björnson, Emil; Debbah, Mérouane

    Massive multiple-input multiple-output (MIMO) techniques have the potential to bring tremendous improvements in spectral efficiency to future communication systems. Counterintuitively, the practical issues of having uncertain channel knowledge, high propagation losses, and implementing optimal non-linear precoding are solved more or less automatically by enlarging system dimensions. However, the computational precoding complexity grows with the system dimensions. For example, the close-to-optimal and relatively "antenna-efficient" regularized zero-forcing (RZF) precoding is very complicated to implement in practice, since it requires fast inversions of large matrices in every coherence period. Motivated by the high performance of RZF, we propose to replace the matrix inversion and multiplication by a truncated polynomial expansion (TPE), thereby obtaining the new TPE precoding scheme which is more suitable for real-time hardware implementation and significantly reduces the delay to the first transmitted symbol. The degree of the matrix polynomial can be adapted to the available hardware resources and enables smooth transition between simple maximum ratio transmission and more advanced RZF. By deriving new random matrix results, we obtain a deterministic expression for the asymptotic signal-to-interference-and-noise ratio (SINR) achieved by TPE precoding in massive MIMO systems. Furthermore, we provide a closed-form expression for the polynomial coefficients that maximizes this SINR. To maintain a fixed per-user rate loss as compared to RZF, the polynomial degree does not need to scale with the system, but it should be increased with the quality of the channel knowledge and the signal-to-noise ratio.
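The core idea of replacing a matrix inverse by a truncated polynomial expansion can be sketched with simple Neumann-series weights; the paper instead derives SINR-optimised coefficients, so the weights, the scaling parameter, and the small test matrix here are illustrative assumptions.

```python
import numpy as np

def tpe_solve(A, b, degree, alpha):
    """Approximate A^{-1} b with the truncated polynomial expansion
    A^{-1} ~ alpha * sum_{l=0}^{degree} (I - alpha * A)^l, valid when the
    spectral radius of (I - alpha * A) is below one. Only matrix-vector
    products are needed, which is the hardware motivation behind TPE."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(degree + 1):
        x = x + alpha * term
        term = term - alpha * (A @ term)   # multiply by (I - alpha * A)
    return x
```

Truncating at a low degree trades accuracy for per-symbol latency, mirroring the adaptation to hardware resources described in the abstract.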

  6. A High Order, Locally-Adaptive Method for the Navier-Stokes Equations

    NASA Astrophysics Data System (ADS)

    Chan, Daniel

    1998-11-01

I have extended the FOSLS method of Cai, Manteuffel and McCormick (1997) and implemented it within the framework of a spectral element formulation using Legendre polynomial basis functions. The FOSLS method solves the Navier-Stokes equations as a system of coupled first-order equations and provides the ellipticity that is needed for fast iterative matrix solvers like multigrid to operate efficiently. Each element is treated as an object and its properties are self-contained. Only C^0 continuity is imposed across element interfaces; this design allows local grid refinement and coarsening without the burden of an elaborate data structure, since only information along element boundaries is needed. With the FORTRAN 90 programming environment, I can maintain high computational efficiency by employing a hybrid parallel processing model: OpenMP directives provide loop-level parallelism executed on a shared-memory SMP, while the MPI protocol allows the distribution of elements to a cluster of SMPs connected via a commodity network. This talk will provide timing results and a comparison with a second-order finite difference method.

  7. Cell-Averaged discretization for incompressible Navier-Stokes with embedded boundaries and locally refined Cartesian meshes: a high-order finite volume approach

    NASA Astrophysics Data System (ADS)

    Bhalla, Amneet Pal Singh; Johansen, Hans; Graves, Dan; Martin, Dan; Colella, Phillip; Applied Numerical Algorithms Group Team

    2017-11-01

We present a consistent cell-averaged discretization for the incompressible Navier-Stokes equations on complex domains using embedded boundaries. The embedded boundary is allowed to freely cut the locally-refined background Cartesian grid. An implicit-function representation is used for the embedded boundary, which allows us to convert the required geometric moments in the Taylor series expansion (up to arbitrary order) of polynomials into an algebraic problem in lower dimensions. The computed geometric moments are then used to construct stencils for various operators like the Laplacian, divergence, and gradient by solving a least-squares system locally. We also construct the inter-level data-transfer operators, like prolongation and restriction, for multigrid solvers using the same least-squares approach. This allows us to retain a high order of accuracy near coarse-fine interfaces and near embedded boundaries. Canonical problems like Taylor-Green vortex flow and flow past bluff bodies will be presented to demonstrate the proposed method. U.S. Department of Energy, Office of Science, ASCR (Award Number DE-AC02-05CH11231).

  8. Radiometer Calibrations: Saving Time by Automating the Gathering and Analysis Procedures

    NASA Technical Reports Server (NTRS)

    Sadino, Jeffrey L.

    2005-01-01

    Mr. Abtahi custom-designs radiometers for Mr. Hook's research group. Inherently, when the radiometers report the temperature of arbitrary surfaces, the results are affected by accuracy errors. This problem can be reduced if the errors are accounted for in a polynomial. This is achieved by pointing the radiometer at a constant-temperature surface; we have been using a Hartford Scientific WaterBath. The measurements from the radiometer are collected at many different temperatures and compared with the measurements made by a Hartford Chubb thermometer with four-decimal-place resolution. The data are analyzed and fit to a fifth-order polynomial. This formula is then uploaded into the radiometer software, enabling accurate data gathering. Traditionally, Mr. Abtahi has done this by hand, spending several hours setting the temperature, waiting for stabilization, taking measurements, and then repeating for other temperatures. My program, written in Python, allows the data gathering and analysis to be handed off to a less-senior member of the team. Simply by entering several initial settings, the program simultaneously controls all three instruments and organizes the data in a form suitable for computer analysis, yielding the desired fifth-order polynomial. This will save time, allow for a more complete calibration data set, and allow base calibrations to be developed. The program can be expanded to take any type of measurement simultaneously from up to nine distinct instruments.
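    The fifth-order fit itself is a one-liner in NumPy. A hedged sketch with made-up placeholder data (the report's actual readings and code are not shown here):

```python
import numpy as np

# Illustrative calibration fit; `raw` and `true_temp` are hypothetical values,
# not the report's data.
raw = np.linspace(10.0, 60.0, 12)                      # radiometer readings, deg C
true_temp = raw + 0.02 * (raw - 35.0) ** 2 / 35.0      # reference thermometer values

coeffs = np.polyfit(raw, true_temp, deg=5)             # highest power first
calibrate = np.poly1d(coeffs)                          # callable polynomial

corrected = calibrate(raw)                             # apply the calibration
```

    In practice the coefficients would then be uploaded into the radiometer software, as the abstract describes.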

  9. Satellite Orbit Theory for a Small Computer.

    DTIC Science & Technology

    1983-12-15

    ...interpolating polynomials are established for them across the pass. Both sets of interpolating polynomials are finally used to provide osculating orbital elements at arbitrary times during the... high precision orbital elements at epoch, a corresponding set of initial mean elements must be determined for the semianalytical model. It is importan...

  10. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to demonstrate empirical advantages through consistently lower errors and faster computational times.
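    The cross-validated choice of regularization constant can be sketched generically. The code below uses a simple ISTA solver as a stand-in for the paper's solvers, and the function names are hypothetical, not from the paper:

```python
import numpy as np

def ista_lasso(A, b, lam, n_iter=500):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA) -- a simple stand-in for the paper's
    core solvers (l1_ls, SpaRSA, CGIST, FPC_AS, ADMM)."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

def cv_select_lambda(A, b, lambdas, k=5, seed=0):
    """Pick the regularization constant by k-fold cross-validation,
    minimizing held-out squared error (hypothetical helper)."""
    idx = np.random.default_rng(seed).permutation(len(b))
    folds = np.array_split(idx, k)
    errs = []
    for lam in lambdas:
        e = 0.0
        for f in folds:
            train = np.setdiff1d(idx, f)
            x = ista_lasso(A[train], b[train], lam)
            e += np.sum((A[f] @ x - b[f]) ** 2)
        errs.append(e)
    return lambdas[int(np.argmin(errs))]
```

    On noiseless sparse data, cross-validation favors small regularization and the solver recovers the underlying sparse coefficient vector.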

  11. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years, however, polynomial chaos expansion has become a popular alternative, providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is shown to be advantageous from both an accuracy and a computational point of view. As a demonstration, the uncertainty quantification of a 50% loss-of-flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
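    Non-intrusive projection onto a polynomial chaos basis can be illustrated in a few lines for a single standard-normal input. This is a generic sketch of the PC idea, not the paper's adaptive sparse-grid method:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(f, order, n_quad=40):
    """Project f(X), X ~ N(0,1), onto probabilists' Hermite polynomials:
    c_k = E[f(X) He_k(X)] / k!, computed by Gauss-Hermite quadrature.
    Generic non-intrusive PCE sketch; function name is illustrative."""
    x, w = hermegauss(n_quad)          # nodes/weights for weight exp(-x^2/2)
    w = w / np.sqrt(2 * np.pi)         # normalize to the standard normal pdf
    fx = f(x)
    coeffs = []
    kfact = 1.0
    for k in range(order + 1):
        if k > 0:
            kfact *= k                 # running k!
        e = np.zeros(k + 1)
        e[k] = 1.0                     # coefficient vector selecting He_k
        coeffs.append(np.sum(w * fx * hermeval(x, e)) / kfact)
    return np.array(coeffs)

c = pce_coefficients(np.exp, order=8)
```

    For f = exp and X ~ N(0,1), the exact coefficients are c_k = e^{1/2}/k!, which the quadrature reproduces to near machine precision; the mean of the expansion is c_0 = e^{1/2}.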

  12. Equivalences of the multi-indexed orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odake, Satoru

    2014-01-15

    Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.

  13. Rheological Analysis of Binary Eutectic Mixture of Sodium and Potassium Nitrate and the Effect of Low Concentration CuO Nanoparticle Addition to Its Viscosity

    PubMed Central

    Lasfargues, Mathieu; Cao, Hui; Geng, Qiao; Ding, Yulong

    2015-01-01

    This paper is focused on the characterisation and demonstration of the Newtonian behaviour of the sodium and potassium nitrate eutectic mixture (60/40) at both high and low shear rates, over the range 250 °C to 500 °C. Analysis of published and experimental data was carried out to correlate all the data into one meaningful 4th-order polynomial equation. Addition of a low concentration of copper oxide nanoparticles to the mixture increased the viscosity by 5.0%–18.0% relative to that correlation. PMID:28793498

  14. The constraint method: A new finite element technique. [applied to static and dynamic loads on plates

    NASA Technical Reports Server (NTRS)

    Tsai, C.; Szabo, B. A.

    1973-01-01

    An approach to the finite element method that utilizes families of conforming finite elements based on complete polynomials is presented. Finite element approximations based on this method converge with respect to progressively reduced element sizes as well as with respect to progressively increasing orders of approximation. Numerical results of static and dynamic applications to plates are presented to demonstrate the efficiency of the method. Comparisons are made with plate elements in NASTRAN and the high-precision plate element developed by Cowper and his co-workers. Some consideration is given to the implementation of the constraint method in general-purpose computer programs such as NASTRAN.

  15. Gravity investigation of the Manson impact structure, Iowa

    NASA Technical Reports Server (NTRS)

    Plescia, J. B.

    1993-01-01

    The Manson crater, of probable Cretaceous/Tertiary age, is located in northwestern Iowa (center at 42 deg 34.44 min N; 94 deg 33.60 min W). A seismic reflection profile along an east-west line across the crater and drill hole data indicate a crater about 35 km in diameter having the classic form for an impact crater: an uplifted central peak composed of uplifted Proterozoic crystalline bedrock, surrounded by a 'moat' filled with impact-produced breccia, and a ring-graben zone composed of tilted fault blocks of the Proterozoic and Paleozoic country rocks. The structure has been significantly eroded. This geologic structure would be expected to produce a significant gravity signature, and study of that signature would shed additional light on the details of the crater structure. A gravity study was therefore undertaken to better resolve the crustal structure. The regional Bouguer gravity field is characterized by a southeastward-decreasing field. To first order, the Bouguer gravity field can be understood in the context of the geology of the Precambrian basement. The high gravity at the southeast corner is associated with the mid-continent gravity high; the adjacent low to the northwest results from a basin containing low-density clastic sediments shed from the basement high. Modeling of a simple basin and adjacent high predicts much of the observed Bouguer gravity signature. A gravity signature due to structure associated with the Manson impact is not apparent in the Bouguer data. To resolve the gravity signature of the impact, a series of polynomial surfaces were fit to the Bouguer gravity field to isolate the short-wavelength residual anomalies. The residual gravity obtained after subtracting a 5th- or 6th-order polynomial removes most of the regional effects and isolates local anomalies. The pattern resolved in the residual gravity is one of a gravity high surrounded by gravity lows, in turn surrounded by isolated gravity highs. The central portion of the crater is characterized by two positive anomalies having amplitudes of about +4 mGal, separated by a gentle saddle located approximately at the crater center.
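    The regional-trend removal described above amounts to a least-squares fit of a low-order 2D polynomial surface followed by subtraction. A minimal sketch on synthetic data (illustrative only, not the study's actual processing):

```python
import numpy as np

def polynomial_residual(x, y, g, order):
    """Fit a 2D polynomial surface of total degree <= `order` to the
    values g at scattered points (x, y) by least squares and return the
    residual anomaly (observed minus regional trend). Synthetic sketch;
    function name and data are hypothetical."""
    A = np.column_stack([x**i * y**j
                         for i in range(order + 1)
                         for j in range(order + 1 - i)])
    coef, *_ = np.linalg.lstsq(A, g, rcond=None)
    return g - A @ coef
```

    A purely polynomial "regional" field is removed exactly, so whatever survives in the residual is the short-wavelength local signal, which is the quantity interpreted in the study.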

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Znojil, Miloslav

    An N-level quantum model is proposed in which the energies are represented by an N-plet of zeros of a suitable classical orthogonal polynomial. The family of Gegenbauer polynomials G(n,a,x) is selected for illustrative purposes. The main obstacle lies in the non-Hermiticity (aka crypto-Hermiticity) of Hamiltonians H ≠ H†. We managed to (i) start from the elementary secular equation G(N,a,E_n) = 0, (ii) keep our H tridiagonal, in the nearest-neighbor-interaction spirit, (iii) render it Hermitian in an ad hoc, nonunique Hilbert space endowed with metric Θ ≠ I, (iv) construct eligible metrics in closed forms ordered by increasing nondiagonality, and (v) interpret the model as a smeared N-site lattice.

  17. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-10-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems: Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture; the latter problem is itself NP-hard. In this paper, we propose a novel approximation to the closely related treewidth based on the differential-geometric concept of Ollivier-Ricci curvature. The approximation runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor-embeddable, and thus solvable, on the D-Wave architecture.

  18. Classes of exact Einstein Maxwell solutions

    NASA Astrophysics Data System (ADS)

    Komathiraj, K.; Maharaj, S. D.

    2007-12-01

    We find new classes of exact solutions to the Einstein Maxwell system of equations for a charged sphere with a particular choice of the electric field intensity and one of the gravitational potentials. The condition of pressure isotropy is reduced to a linear, second order differential equation which can be solved in general. Consequently we can find exact solutions to the Einstein Maxwell field equations corresponding to a static spherically symmetric gravitational potential in terms of hypergeometric functions. It is possible to find exact solutions which can be written explicitly in terms of elementary functions, namely polynomials and product of polynomials and algebraic functions. Uncharged solutions are regainable with our choice of electric field intensity; in particular we generate the Einstein universe for particular parameter values.

  19. A comparison between space-time video descriptors

    NASA Astrophysics Data System (ADS)

    Costantini, Luca; Capodiferro, Licia; Neri, Alessandro

    2013-02-01

    The description of space-time patches is a fundamental task in many applications such as video retrieval or classification. Each space-time patch can be described by using a set of orthogonal functions that represent a subspace, for example a sphere or a cylinder, within the patch. In this work, our aim is to investigate the differences between spherical descriptors and cylindrical descriptors. To compute the descriptors, the 3D spherical and cylindrical Zernike polynomials are employed. This comparison is meaningful because both sets of functions are based on the same family of polynomials and differ only in their symmetry. Our experimental results show that the cylindrical descriptor outperforms the spherical descriptor. However, the performances of the two descriptors are similar.
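    Both descriptors build on the Zernike family. For reference, the classical radial part of the 2D Zernike polynomials, the shared ingredient of the 3D spherical and cylindrical variants, can be evaluated directly (a standard textbook formula, not the paper's code):

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_n^m(rho) of the classical (2D) Zernike polynomials.
    Requires n >= |m| and n - |m| even; e.g. R_2^0(rho) = 2*rho**2 - 1."""
    m = abs(m)
    return sum(
        (-1) ** k * factorial(n - k)
        / (factorial(k) * factorial((n + m) // 2 - k) * factorial((n - m) // 2 - k))
        * rho ** (n - 2 * k)
        for k in range((n - m) // 2 + 1)
    )
```

    The 3D descriptors multiply such radial factors by spherical harmonics (spherical case) or by a circular harmonic and an axial polynomial (cylindrical case), which is where the symmetry difference noted in the abstract enters.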

  20. A Fast lattice-based polynomial digital signature system for m-commerce

    NASA Astrophysics Data System (ADS)

    Wei, Xinzhou; Leung, Lin; Anshel, Michael

    2003-01-01

    Privacy and data integrity are not guaranteed in current wireless communications because of a security hole inside the Wireless Application Protocol (WAP) version 1.2 gateway. One remedy is to provide end-to-end security in m-commerce by applying application-level security on top of the current WAP 1.2. Traditional security technologies such as RSA and ECC, as applied on an enterprise's servers, are not practical for wireless devices because the devices have relatively weak computational power and limited memory compared with servers. In this paper, we develop a lattice-based polynomial digital signature system based on NTRU's Polynomial Authentication and Signature Scheme (PASS), which makes it feasible to apply high-level security on both the server and the wireless-device side.
