Sample records for fully polynomial-time approximation

  1. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated within any factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  2. Efficiently approximating the Pareto frontier: Hydropower dam placement in the Amazon basin

    USGS Publications Warehouse

    Wu, Xiaojian; Gomes-Selman, Jonathan; Shi, Qinru; Xue, Yexiang; Garcia-Villacorta, Roosevelt; Anderson, Elizabeth; Sethi, Suresh; Steinschneider, Scott; Flecker, Alexander; Gomes, Carla P.

    2018-01-01

    Real-world problems are often not fully characterized by a single optimal solution, as they frequently involve multiple competing objectives; it is therefore important to identify the so-called Pareto frontier, which captures solution trade-offs. We propose a fully polynomial-time approximation scheme based on Dynamic Programming (DP) for computing a polynomially succinct curve that approximates the Pareto frontier to within an arbitrarily small ε > 0 on tree-structured networks. Given a set of objectives, our approximation scheme runs in time polynomial in the size of the instance and 1/ε. We also propose a Mixed Integer Programming (MIP) scheme to approximate the Pareto frontier. The DP and MIP Pareto frontier approaches have complementary strengths and are surprisingly effective. We provide empirical results showing that our methods outperform other approaches in efficiency and accuracy. Our work is motivated by a problem in computational sustainability concerning the proliferation of hydropower dams throughout the Amazon basin. Our goal is to support decision-makers in evaluating impacted ecosystem services on the full scale of the Amazon basin. Our work is general and can be applied to approximate the Pareto frontier of a variety of multiobjective problems on tree-structured networks.
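
    A minimal sketch of the value-rounding idea behind such DP-based approximation schemes, in Python (the two-objective setting, function names, and grid are illustrative assumptions, not taken from the paper): snapping each objective onto a geometric grid of ratio 1 + ε keeps only polynomially many representatives while losing at most a factor of 1 + ε per objective.

    ```python
    import math
    import random

    def pareto_round(points, eps):
        """Thin a set of 2-objective (minimization) points to a succinct
        (1 + eps)-approximate Pareto curve: snap each objective onto the
        geometric grid 1, (1+eps), (1+eps)^2, ... and keep one
        representative per grid cell."""
        grid = {}
        for x, y in points:
            cell = (math.floor(math.log(x, 1 + eps)),
                    math.floor(math.log(y, 1 + eps)))
            if cell not in grid or (x, y) < grid[cell]:
                grid[cell] = (x, y)
        return sorted(grid.values())

    # 10,000 candidate solutions collapse to a small approximate frontier.
    pts = [(random.uniform(1, 100), random.uniform(1, 100)) for _ in range(10000)]
    print(len(pareto_round(pts, eps=0.1)))
    ```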

  3. Fully polynomial-time approximation scheme for a special case of a quadratic Euclidean 2-clustering problem

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Khandeev, V. I.

    2016-02-01

    The strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters of given sizes (cardinalities) minimizing the sum (over both clusters) of the intracluster sums of squared distances from the elements of the clusters to their centers is considered. It is assumed that the center of one of the sought clusters is given at an arbitrary point of space (without loss of generality, at the origin), while the center of the other one is unknown and is determined as the mean value over all elements of that cluster. It is shown that, unless P = NP, there is no fully polynomial-time approximation scheme for this problem, while such a scheme is constructed for the case of a fixed space dimension.

  4. A Constant-Factor Approximation Algorithm for the Link Building Problem

    NASA Astrophysics Data System (ADS)

    Olsen, Martin; Viglas, Anastasios; Zvedeniouk, Ilia

    In this work we consider the problem of maximizing the PageRank of a given target node in a graph by adding k new links. We consider the case where the new links must point to the given target node (backlinks). Previous work [7] shows that this problem admits no fully polynomial-time approximation scheme unless P = NP. We present a polynomial-time algorithm yielding a PageRank value within a constant factor of the optimal. We also consider the naive algorithm that chooses backlinks from nodes with high PageRank values relative to their outdegree, and show that it performs much worse than the constant-factor approximation algorithm on certain graphs.

  5. Calculation of Thermal Conductivity Coefficients of Electrons in Magnetized Dense Matter

    NASA Astrophysics Data System (ADS)

    Bisnovatyi-Kogan, G. S.; Glushikhina, M. V.

    2018-04-01

    The solution of the Boltzmann equation for a plasma in a magnetic field with arbitrarily degenerate electrons and nondegenerate nuclei is obtained by the Chapman-Enskog method. Functions generalizing the Sonine polynomials are used to obtain an approximate solution. Fully ionized plasma is considered. The tensor of the heat conductivity coefficients in a nonquantized magnetic field is calculated. For nondegenerate and strongly degenerate plasmas, asymptotic analytic formulas are obtained and compared with the results of previous authors. The Lorentz approximation, which neglects electron-electron encounters, is asymptotically exact for strongly degenerate plasma. For the first time, analytical expressions for the heat conductivity tensor for nondegenerate electrons in the presence of a magnetic field are obtained in the three-polynomial approximation with account of electron-electron collisions. Including the third polynomial substantially improves the precision of the results. In the two-polynomial approximation, the obtained solution coincides with the published results. For strongly degenerate electrons, an asymptotically exact analytical solution for the heat conductivity tensor in the presence of a magnetic field is obtained for the first time. This solution has a considerably more complicated dependence on the magnetic field than those in previous publications and gives a several times smaller relative value of the thermal conductivity across the magnetic field at ωτ ≳ 0.8.

  6. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1991-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied by spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time step Δt is restricted by the CFL-like condition Δt < Const·N^(-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully-explicit spectral approximations in the nonperiodic case.

  7. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube sampling (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling-based methods when the number of uncertain model parameters is modest (≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
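
    As a toy illustration of the non-intrusive idea described above (a hedged sketch, not the HydroGeoSphere workflow; the model function, basis degree, and sample size are assumptions), chaos coefficients can be fit by least squares on sampled model runs, after which the mean and variance follow directly from the coefficients:

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermevander
    from math import factorial

    def model(xi):
        # Stand-in for an expensive simulation with one standard-normal input.
        return np.exp(0.3 * xi) + 0.1 * xi**2

    rng = np.random.default_rng(0)
    xi = rng.standard_normal(500)     # sample the random input
    y = model(xi)                     # run the "expensive" model

    # Least-squares fit on probabilists' Hermite polynomials He_0..He_6.
    degree = 6
    Psi = hermevander(xi, degree)
    coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

    # For a standard-normal input, E[He_k^2] = k!, so the statistics are
    # read directly off the coefficients: mean = c_0, var = sum_k c_k^2 * k!.
    mean = coef[0]
    var = sum(coef[k]**2 * factorial(k) for k in range(1, degree + 1))
    print(mean, var)
    ```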

  8. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial terms together with their parameters to improve the ability of the polynomial derivative-term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully capture complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems.

  9. Polynomial compensation, inversion, and approximation of discrete time linear systems

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1987-01-01

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
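
    The following toy sketch shows the flavor of this construction (illustrative assumptions: a simple first-order plant and a direct least-squares solve standing in for the paper's recursive normal-equation method). The FIR "polynomial" compensator h is chosen so that conv(g, h) matches a desired impulse response d:

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def fir_compensator(g, d, order):
        """Least-squares FIR compensator h minimizing ||conv(g, h) - d||."""
        n = len(d)
        col = np.r_[g, np.zeros(max(0, n - len(g)))][:n]
        # n x (order+1) convolution (Toeplitz) matrix: G @ h = conv(g, h)[:n]
        G = toeplitz(col, np.r_[col[0], np.zeros(order)])
        h, *_ = np.linalg.lstsq(G, d, rcond=None)
        return h

    g = 0.8 ** np.arange(20)       # plant impulse response
    d = np.zeros(20)
    d[0] = 1.0                     # desired: unit impulse, i.e. inversion
    h = fir_compensator(g, d, order=5)
    print(np.round(np.convolve(g, h)[:5], 3))   # ~[1, 0, 0, 0, 0]
    ```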

  10. Single product lot-sizing on unrelated parallel machines with non-decreasing processing times

    NASA Astrophysics Data System (ADS)

    Eremeev, A.; Kovalyov, M.; Kuznetsov, P.

    2018-01-01

    We consider a problem in which at least a given quantity of a single product has to be partitioned into lots, and the lots have to be assigned to unrelated parallel machines for processing. In one version of the problem the maximum machine completion time is to be minimized; in another version the sum of machine completion times is to be minimized. Machine-dependent lower and upper bounds on the lot size are given. The product is either assumed to be continuously divisible or discrete. The processing time of each machine is defined by an increasing function of the lot volume, given as an oracle. Setup times and costs are assumed to be negligibly small and are therefore not considered. We derive optimal polynomial-time algorithms for several special cases of the problem. An NP-hard case is shown to admit a fully polynomial-time approximation scheme. An application of the problem to energy-efficient processor scheduling is considered.

  11. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
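
    The identity underlying the G.C.D. method can be sketched in a few lines (a hedged illustration in exact symbolic arithmetic, not the paper's FORTRAN routines): dividing p by gcd(p, p') strips the multiplicities, so the resulting square-free polynomial has only simple zeros, on which Newton-type iterations behave well.

    ```python
    import sympy as sp

    x = sp.symbols('x')
    p = (x - 1)**3 * (x + 2)**2          # triple root at 1, double root at -2

    # gcd(p, p') carries each root with multiplicity reduced by one,
    # so p / gcd(p, p') is square-free with the same distinct roots.
    g = sp.gcd(p, sp.diff(p, x))
    square_free = sp.cancel(p / g)

    print(sp.expand(square_free))        # x**2 + x - 2 = (x - 1)*(x + 2)
    print(sp.solve(square_free, x))      # simple roots: [-2, 1]
    ```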

  12. Polynomial approximation of the Lense-Thirring rigid precession frequency

    NASA Astrophysics Data System (ADS)

    De Falco, Vittorio; Motta, Sara

    2018-05-01

    We propose a polynomial approximation of the global Lense-Thirring rigid precession frequency to study low-frequency quasi-periodic oscillations around spinning black holes. This high-performing approximation allows one to determine the expected frequencies of a precessing thick accretion disc with fixed inner radius and variable outer radius around a black hole with given mass and spin. We discuss the accuracy and the applicability regions of our polynomial approximation, showing that the computational times are reduced by a factor of ≈70 in the range of minutes.

  13. Long-time uncertainty propagation using generalized polynomial chaos and flow map composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.

    2014-10-01

    We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.

  14. Analytical approximate solutions for a general class of nonlinear delay differential equations.

    PubMed

    Căruntu, Bogdan; Bota, Constantin

    2014-01-01

    We use the polynomial least squares method (PLSM), which allows us to compute analytical approximate polynomial solutions for a very general class of strongly nonlinear delay differential equations. The method is tested by computing approximate solutions for several applications including the pantograph equations and a nonlinear time-delay model from biology. The accuracy of the method is illustrated by a comparison with approximate solutions previously computed using other methods.

  15. The Approximability of Partial Vertex Covers in Trees.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mkrtchyan, Vahan; Parekh, Ojas D.; Segev, Danny

    Motivated by applications in risk management of computational systems, we focus our attention on a special case of the partial vertex cover problem, where the underlying graph is assumed to be a tree. Here, we consider four possible versions of this setting, depending on whether vertices and edges are weighted or not. Two of these versions, where edges are assumed to be unweighted, are known to be polynomial-time solvable (Gandhi, Khuller, and Srinivasan, 2004). However, the computational complexity of this problem with weighted edges, and possibly with weighted vertices, has not been determined yet. The main contribution of this paper is to resolve these questions, by fully characterizing which variants of partial vertex cover remain intractable in trees, and which can be efficiently solved. In particular, we propose a pseudo-polynomial DP-based algorithm for the most general case of having weights on both edges and vertices, which is proven to be NP-hard. This algorithm provides a polynomial-time solution method when weights are limited to edges, and combined with additional scaling ideas, leads to an FPTAS for the general case. A secondary contribution of this work is to propose a novel way of using centroid decompositions in trees, which could be useful in other settings as well.

  16. Processing short-term and long-term information with a combination of polynomial approximation techniques and time-delay neural networks.

    PubMed

    Fuchs, Erich; Gruber, Christian; Reitmaier, Tobias; Sick, Bernhard

    2009-09-01

    Neural networks are often used to process temporal information, i.e., any kind of information related to time series. In many cases, time series contain short-term and long-term trends or behavior. This paper presents a new approach to capture temporal information with various reference periods simultaneously. A least squares approximation of the time series with orthogonal polynomials will be used to describe short-term trends contained in a signal (average, increase, curvature, etc.). Long-term behavior will be modeled with the tapped delay lines of a time-delay neural network (TDNN). This network takes the coefficients of the orthogonal expansion of the approximating polynomial as inputs, thus considering short-term and long-term information efficiently. The advantages of the method will be demonstrated by means of artificial data and two real-world application examples, the prediction of the number of users in a computer network and online tool wear classification in turning.
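
    A rough sketch of the short-term feature-extraction step (assumptions: Legendre polynomials as the orthogonal basis and an arbitrary window width; the TDNN that would consume these coefficients is omitted):

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def window_features(signal, width, degree=2):
        """Per sliding window, return least-squares coefficients in an
        orthogonal (Legendre) basis: roughly average, slope, curvature."""
        t = np.linspace(-1.0, 1.0, width)     # orthogonality interval
        feats = []
        for start in range(len(signal) - width + 1):
            feats.append(legendre.legfit(t, signal[start:start + width], degree))
        return np.array(feats)                # inputs for a TDNN-style model

    sig = np.sin(np.linspace(0, 6 * np.pi, 200)) + 0.05 * np.random.randn(200)
    print(window_features(sig, width=20).shape)   # (181, 3)
    ```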

  17. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
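
    A hedged numerical illustration of the comparison (the degree, interval, and node placement are arbitrary choices, not taken from the article):

    ```python
    import numpy as np
    from math import factorial

    xs = np.linspace(0, 1, 200)

    # Degree-3 Taylor polynomial of e^x about x = 0.
    taylor = sum(xs**k / factorial(k) for k in range(4))

    # Degree-3 interpolating polynomial through 4 equally spaced nodes.
    nodes = np.linspace(0, 1, 4)
    interp = np.polyval(np.polyfit(nodes, np.exp(nodes), 3), xs)

    err_taylor = np.max(np.abs(np.exp(xs) - taylor))   # error piles up near x = 1
    err_interp = np.max(np.abs(np.exp(xs) - interp))   # error spread over [0, 1]
    print(err_taylor, err_interp)   # interpolation wins by over an order of magnitude
    ```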

  18. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.

  19. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.

  1. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  2. An analytical technique for approximating unsteady aerodynamics in the time domain

    NASA Technical Reports Server (NTRS)

    Dunn, H. J.

    1980-01-01

    An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of elements of a matrix Pade approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients which insures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing which indicate a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Pade approximation having fourth order numerator and second order denominator polynomials.

  3. Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data

    NASA Astrophysics Data System (ADS)

    Bogdanova, Nina; Koleva, Mihaela

    2018-02-01

    Our original Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion approximating the curve on a non-equidistant point grid. The corridors of the given data and the criteria define the optimal behavior of the sought curve. The most important subinterval of the spectral data, where the minimum (the surface plasmon resonance absorption) is sought, is investigated. This study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.

  4. On the coefficients of integrated expansions and integrals of ultraspherical polynomials and their applications for solving differential equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2002-02-01

    An analytical formula expressing the ultraspherical coefficients of an expansion for an infinitely differentiable function that has been integrated an arbitrary number of times in terms of the coefficients of the original expansion of the function is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula is given that explicitly expresses the integrals of ultraspherical polynomials of any degree, integrated an arbitrary number of times, in terms of ultraspherical polynomials. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansion are stated and proved. Some applications of how to use ultraspherical polynomials for solving ordinary and partial differential equations are described.

  5. Solving a class of generalized fractional programming problems using the feasibility of linear programs.

    PubMed

    Shen, Peiping; Zhang, Tongli; Wang, Chunfeng

    2017-01-01

    This article presents a new approximation algorithm for globally solving a class of generalized fractional programming problems (P) whose objective functions are defined as an appropriate composition of ratios of affine functions. To solve this problem, the algorithm solves an equivalent optimization problem (Q) via an exploration of a suitably defined nonuniform grid. The main work of the algorithm involves checking the feasibility of linear programs associated with the interesting grid points. Based on a computational complexity analysis, it is proved that the proposed algorithm is a fully polynomial-time approximation scheme when the number of ratio terms in the objective function of problem (P) is fixed. In contrast to existing results in the literature, the algorithm does not require assumptions of quasi-concavity or low rank of the objective function of problem (P). Numerical results are given to illustrate the feasibility and effectiveness of the proposed algorithm.

  6. A coupled electro-thermal Discontinuous Galerkin method

    NASA Astrophysics Data System (ADS)

    Homsi, L.; Geuzaine, C.; Noels, L.

    2017-11-01

    This paper presents a Discontinuous Galerkin scheme for solving the nonlinear elliptic partial differential equations of coupled electro-thermal problems. In this paper we discuss the fundamental equations for the transport of electricity and heat, in terms of macroscopic variables such as temperature and electric potential. A fully coupled nonlinear weak formulation for electro-thermal problems is developed based on continuum mechanics equations expressed in terms of an energetically conjugated pair of fluxes and field gradients. The weak form can thus be formulated as a Discontinuous Galerkin method. The existence and uniqueness of the weak-form solution are proved. The numerical properties of the nonlinear elliptic problems, i.e., consistency and stability, are demonstrated under specific conditions, i.e., the use of a sufficiently large stabilization parameter and of at least quadratic polynomial approximations. Moreover, the a priori error estimates in the H1-norm and in the L2-norm are shown to be optimal in the mesh size for the given polynomial approximation degree.

  7. The time-fractional radiative transport equation—Continuous-time random walk, diffusion approximation, and Legendre-polynomial expansion

    NASA Astrophysics Data System (ADS)

    Machida, Manabu

    2017-01-01

    We consider the radiative transport equation in which the time derivative is replaced by the Caputo derivative. Such fractional-order derivatives are related to anomalous transport and anomalous diffusion. In this paper we describe how the time-fractional radiative transport equation is obtained from continuous-time random walk and see how the equation is related to the time-fractional diffusion equation in the asymptotic limit. Then we solve the equation with Legendre-polynomial expansion.

  8. Algorithms in Discrepancy Theory and Lattices

    NASA Astrophysics Data System (ADS)

    Ramadas, Harishchandra

    This thesis deals with algorithmic problems in discrepancy theory and lattices, and is based on two projects I worked on while at the University of Washington in Seattle. A brief overview is provided in Chapter 1 (Introduction). Chapter 2 covers joint work with Avi Levy and Thomas Rothvoss in the field of discrepancy minimization. A well-known theorem of Spencer shows that any set system with n sets over n elements admits a coloring of discrepancy O(√n). While the original proof was non-constructive, recent progress brought polynomial time algorithms by Bansal, Lovett and Meka, and Rothvoss. All those algorithms are randomized, even though Bansal's algorithm admitted a complicated derandomization. We propose an elegant deterministic polynomial time algorithm that is inspired by Lovett-Meka as well as the Multiplicative Weight Update method. The algorithm iteratively updates a fractional coloring while controlling the exponential weights that are assigned to the set constraints. A conjecture by Meka suggests that Spencer's bound can be generalized to symmetric matrices. We prove that n × n matrices that are block diagonal with block size q admit a coloring of discrepancy O(√n · √(log q)). Bansal, Dadush and Garg recently gave a randomized algorithm to find a vector x with entries in {-1, 1} with ‖Ax‖∞ ≤ O(√(log n)) in polynomial time, where A is any matrix whose columns have length at most 1. We show that our method can be used to deterministically obtain such a vector. In Chapter 3, we discuss a result in the broad area of lattices and integer optimization, in joint work with Rebecca Hoberg, Thomas Rothvoss and Xin Yang. The number balancing (NBP) problem is the following: given real numbers a_1, ..., a_n in [0, 1], find two disjoint subsets I_1, I_2 of [n] so that the difference |Σ_{i∈I_1} a_i - Σ_{i∈I_2} a_i| of their sums is minimized. An application of the pigeonhole principle shows that there is always a solution where the difference is at most O(√n/2^n). Finding the minimum, however, is NP-hard. In polynomial time, the differencing algorithm by Karmarkar and Karp from 1982 can produce a solution with difference at most n^(-Θ(log n)), but no further improvement has been made since then. We show a relationship between NBP and Minkowski's Theorem. First we show that an approximate oracle for Minkowski's Theorem gives an approximate NBP oracle. Perhaps more surprisingly, we show that an approximate NBP oracle gives an approximate Minkowski oracle. In particular, we prove that any polynomial time algorithm that guarantees a solution of difference at most 2^(√n)/2^n would give a polynomial approximation for Minkowski as well as a polynomial factor approximation algorithm for the Shortest Vector Problem.

  9. Approximating smooth functions using algebraic-trigonometric polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharapudinov, Idris I

    2011-01-14

    The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p_n(t) + τ_m(t), where p_n(t) is an algebraic polynomial of degree n and τ_m(t) = a_0 + Σ_{k=1}^m (a_k cos kπt + b_k sin kπt) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W^r_∞(M) and an upper bound for similar approximations in the class W^r_p(M) with 4/3

  10. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  11. New realisation of Preisach model using adaptive polynomial approximation

    NASA Astrophysics Data System (ADS)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasing accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model for representing hysteresis and can be described by an infinite but countable set of first-order reversal curves (FORCs). The use of look-up tables is one way to realise the CPM in practice. The data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes the idea of using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by using least-squares approximation or an adaptive identification algorithm, allowing accurate tracking of the hysteresis model parameters.

  12. Polynomial approximation of non-Gaussian unitaries by counting one photon at a time

    NASA Astrophysics Data System (ADS)

    Arzani, Francesco; Treps, Nicolas; Ferrini, Giulia

    2017-05-01

    In quantum computation with continuous-variable systems, quantum advantage can only be achieved if some non-Gaussian resource is available. Yet, non-Gaussian unitary evolutions and measurements suited for computation are challenging to realize in the laboratory. We propose and analyze two methods to apply a polynomial approximation of any unitary operator diagonal in the amplitude quadrature representation, including non-Gaussian operators, to an unknown input state. Our protocols use as a primary non-Gaussian resource a single-photon counter. We use the fidelity of the transformation with the target one on Fock and coherent states to assess the quality of the approximate gate.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H.B. III; Rosenkrantz, D.J.; Stearns, R.E.

    We study both the complexity and approximability of various graph and combinatorial problems specified using two dimensional narrow periodic specifications (see [CM93, HW92, KMW67, KO91, Or84b, Wa93]). The following two general kinds of results are presented. (1) We prove that a number of natural graph and combinatorial problems are NEXPTIME- or EXPSPACE-complete when instances are so specified; (2) In contrast, we prove that the optimization versions of several of these NEXPTIME-, EXPSPACE-complete problems have polynomial time approximation algorithms with constant performance guarantees. Moreover, some of these problems even have polynomial time approximation schemes. We also sketch how our NEXPTIME-hardness results can be used to prove analogous NEXPTIME-hardness results for problems specified using other kinds of succinct specification languages. Our results provide the first natural problems for which there is a proven exponential (and possibly doubly exponential) gap between the complexities of finding exact and approximate solutions.

  14. Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.

    PubMed

    Friedrich, Tobias; Neumann, Frank

    2015-01-01

    Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a (1/(k + δ))-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k+6) log(n)/ε).
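
    A minimal sketch of a (1 + 1) EA on a monotone submodular function, here set coverage under a uniform (cardinality) matroid constraint; the fitness function, budget, and iteration count are illustrative assumptions, not the paper's exact setup:

    ```python
    import random

    def one_plus_one_ea(sets, budget, iters=20000):
        """(1+1) EA: flip each bit with probability 1/n; accept offspring
        that stay feasible and do not decrease the coverage value."""
        n = len(sets)

        def cover(x):
            return len(set().union(*(s for s, b in zip(sets, x) if b)))

        x = [0] * n
        for _ in range(iters):
            y = [b ^ (random.random() < 1.0 / n) for b in x]
            if sum(y) <= budget and cover(y) >= cover(x):
                x = y
        return x, cover(x)

    random.seed(0)
    sets = [set(random.sample(range(30), 8)) for _ in range(12)]
    print(one_plus_one_ea(sets, budget=4))
    ```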

  15. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
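
    The essence of the scheme can be sketched as follows (a toy illustration with an assumed block length and degree, not the flight implementation):

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def compress(block, degree):
        """Fit one Chebyshev series per fitting interval; the retained
        coefficients are the compressed representation."""
        t = np.linspace(-1.0, 1.0, len(block))
        return C.chebfit(t, block, degree)

    def decompress(coef, n):
        return C.chebval(np.linspace(-1.0, 1.0, n), coef)

    # A 256-sample block stored as 9 coefficients (~28x smaller).
    t = np.linspace(0, 1, 256)
    block = np.sin(2 * np.pi * t) * np.exp(-t)
    coef = compress(block, degree=8)
    err = np.max(np.abs(block - decompress(coef, 256)))
    print(len(coef), err)   # small, near-uniform ("equal error") residual
    ```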

  16. Bin Packing, Number Balancing, and Rescaling Linear Programs

    NASA Astrophysics Data System (ADS)

    Hoberg, Rebecca

    This thesis deals with several important algorithmic questions using techniques from diverse areas including discrepancy theory, machine learning and lattice theory. In Chapter 2, we construct an improved approximation algorithm for a classical NP-complete problem, the bin packing problem. In this problem, the goal is to pack items of sizes s_i ∈ [0, 1] into as few bins as possible, where a set of items fits into a bin provided the sum of the item sizes is at most one. We give a polynomial-time rounding scheme for a standard linear programming relaxation of the problem, yielding a packing that uses at most OPT + O(log OPT) bins. This makes progress towards one of the "10 open problems in approximation algorithms" stated in the book of Shmoys and Williamson. In fact, based on related combinatorial lower bounds, Rothvoss conjectures that Θ(log OPT) may be a tight bound on the additive integrality gap of this LP relaxation. In Chapter 3, we give a new polynomial-time algorithm for linear programming. Our algorithm is based on the multiplicative weights update (MWU) method, which is a general framework that is currently of great interest in theoretical computer science. An algorithm for linear programming based on MWU was known previously, but was not polynomial time--we remedy this by alternating between a MWU phase and a rescaling phase. The rescaling methods we introduce improve upon previous methods by reducing the number of iterations needed until one can rescale, and they can be used for any algorithm with a similar rescaling structure. Finally, we note that the MWU phase of the algorithm has a simple interpretation as gradient descent of a particular potential function, and we show we can speed up this phase by walking in a direction that decreases both the potential function and its gradient. In Chapter 4, we show that an approximate oracle for Minkowski's Theorem gives an approximate oracle for the number balancing problem, and conversely. Number balancing is the problem of minimizing |⟨a, x⟩| over x ∈ {-1, 0, 1}^n \ {0}, given a ∈ [0, 1]^n. While an application of the pigeonhole principle shows that there always exists x with |⟨a, x⟩| ≤ O(√n/2^n), the best known algorithm only guarantees |⟨a, x⟩| ≤ n^(-Θ(log n)). We show that an oracle for Minkowski's Theorem with approximation factor ρ would give an algorithm for NBP that guarantees |⟨a, x⟩| ≤ 2^(-n^(Θ(1/ρ))). In particular, this would beat the bound of Karmarkar and Karp provided ρ ≤ O(log n/log log n). In the other direction, we prove that any polynomial time algorithm for NBP that guarantees a solution of difference at most 2^(√n)/2^n would give a polynomial approximation for Minkowski as well as a polynomial factor approximation algorithm for the Shortest Vector Problem.

  17. XML Reconstruction View Selection in XML Databases: Complexity Analysis and Approximation Scheme

    NASA Astrophysics Data System (ADS)

    Chebotko, Artem; Fu, Bin

    Query evaluation in an XML database requires reconstructing XML subtrees rooted at nodes found by an XML query. Since XML subtree reconstruction can be expensive, one approach to improve query response time is to use reconstruction views - materialized XML subtrees of an XML document, whose nodes are frequently accessed by XML queries. For this approach to be efficient, the principal requirement is a framework for view selection. In this work, we are the first to formalize and study the problem of XML reconstruction view selection. The input is a tree T, in which every node i has a size c_i and profit p_i, and the size limitation C. The target is to find a subset of subtrees rooted at nodes i_1, ..., i_k respectively such that c_{i_1} + ... + c_{i_k} ≤ C, and p_{i_1} + ... + p_{i_k} is maximal. Furthermore, there is no overlap between any two subtrees selected in the solution. We prove that this problem is NP-hard and present a fully polynomial-time approximation scheme (FPTAS) as a solution.
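
    The profit-scaling step that typically turns such a pseudo-polynomial dynamic program into an FPTAS can be illustrated on plain knapsack (a hedged sketch; the tree structure and non-overlap constraint of the actual problem are omitted):

    ```python
    def knapsack_fptas(sizes, profits, C, eps):
        """Round profits down to multiples of K = eps * p_max / n, then run
        the exact DP over scaled profits; the result is >= (1 - eps) * OPT."""
        n, pmax = len(profits), max(profits)
        K = eps * pmax / n
        scaled = [int(p / K) for p in profits]
        INF = float('inf')
        # dp[q] = minimum total size achieving scaled profit exactly q
        dp = [0] + [INF] * sum(scaled)
        for s, q in zip(sizes, scaled):
            for t in range(len(dp) - 1, q - 1, -1):
                if dp[t - q] + s < dp[t]:
                    dp[t] = dp[t - q] + s
        return K * max(t for t, sz in enumerate(dp) if sz <= C)

    print(knapsack_fptas([3, 4, 5], [30, 50, 60], C=8, eps=0.1))   # 90.0
    ```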

  18. Adversarial Geospatial Abduction Problems

    DTIC Science & Technology

    2011-01-01

    This work, which is new, shows that #GCD is #P-complete and, moreover, that there is no fully-polynomial random approximation scheme for #GCD unless NP equals the...

  19. Diameter-Constrained Steiner Tree

    NASA Astrophysics Data System (ADS)

    Ding, Wei; Lin, Guohui; Xue, Guoliang

    Given an edge-weighted undirected graph G = (V, E, c, w), where each edge e ∈ E has a cost c(e) and a weight w(e), a set S ⊆ V of terminals and a positive constant D_0, we seek a minimum cost Steiner tree in which all terminals appear as leaves and whose diameter is bounded by D_0. Note that the diameter of a tree is the maximum weight of a path connecting two different leaves in the tree. This problem is called the minimum cost diameter-constrained Steiner tree problem. The problem is NP-hard even when the topology of the Steiner tree is fixed. In the present paper we focus on this restricted version and present a fully polynomial time approximation scheme (FPTAS) for computing a minimum cost diameter-constrained Steiner tree under a fixed topology.

  20. On direct theorems for best polynomial approximation

    NASA Astrophysics Data System (ADS)

    Auad, A. A.; AbdulJabbar, R. S.

    2018-05-01

    This paper obtains analogues of well-known direct theorems for the degree of best approximation E_n^H(f)_{p,α} of functions that are unbounded in the weighted space L_{p,α}(A), A = [0,1], by algebraic polynomials, and for the degree of best approximation E_n^T(f)_{p,α} in the same space on the interval [0,2π] by trigonometric polynomials, in terms of averaged moduli.

  1. An Introduction to Lagrangian Differential Calculus.

    ERIC Educational Resources Information Center

    Schremmer, Francesca; Schremmer, Alain

    1990-01-01

    Illustrates how Lagrange's approach applies to the differential calculus of polynomial functions when approximations are obtained. Discusses how to obtain polynomial approximations in other cases. (YP)

  2. Towards syntactic characterizations of approximation schemes via predicate and graph decompositions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunt, H.B. III; Stearns, R.E.; Jacob, R.

    1998-12-01

    The authors present a simple extensible theoretical framework for devising polynomial time approximation schemes for problems represented using natural syntactic (algebraic) specifications endowed with natural graph theoretic restrictions on input instances. Direct application of the technique yields polynomial time approximation schemes for all the problems studied in [LT80, NC88, KM96, Ba83, DTS93, HM+94a, HM+94] as well as the first known approximation schemes for a number of additional combinatorial problems. One notable aspect of the work is that it provides insights into the structure of the syntactic specifications and the corresponding algorithms considered in [KM96, HM+94]. The understanding allows them to extend the class of syntactic specifications for which generic approximation schemes can be developed. The results can be shown to be tight in many cases, i.e. natural extensions of the specifications can be shown to yield non-approximable problems. The results provide a non-trivial characterization of a class of problems having a PTAS and extend the earlier work on this topic by [KM96, HM+94].

  3. Animating Nested Taylor Polynomials to Approximate a Function

    ERIC Educational Resources Information Center

    Mazzone, Eric F.; Piper, Bruce R.

    2010-01-01

    The way that Taylor polynomials approximate functions can be demonstrated by moving the center point while keeping the degree fixed. These animations are particularly nice when the Taylor polynomials do not intersect and form a nested family. We prove a result that shows when this nesting occurs. The animations can be shown in class or…

  4. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  5. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

    For real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the above computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS. PMID:27669261

  6. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. Firstly, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the above computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.
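
    A hedged sketch of fitting such a two-dimensional second-order polynomial by least squares (the grid, region, and synthetic deflection values below are invented for illustration; real coefficients would come from SHM-derived DOVs):

    ```python
    import numpy as np

    def fit_poly2d(lat, lon, dov):
        """Least-squares fit of d(lat, lon) ~ c0 + c1*lat + c2*lon
        + c3*lat^2 + c4*lat*lon + c5*lon^2 to gridded DOV samples."""
        A = np.column_stack([np.ones_like(lat), lat, lon,
                             lat**2, lat * lon, lon**2])
        coef, *_ = np.linalg.lstsq(A, dov, rcond=None)
        return coef

    # Dense grid over a small region; each later query then costs only six
    # multiply-adds instead of a high-degree spherical harmonic synthesis.
    lat, lon = np.meshgrid(np.linspace(30, 31, 20), np.linspace(110, 111, 20))
    lat, lon = lat.ravel(), lon.ravel()
    dov = 1e-5 * (lat - 30.5)**2 - 2e-5 * (lon - 110.5) + 3e-5   # synthetic
    print(fit_poly2d(lat, lon, dov))
    ```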

  7. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  9. Thermodynamic characterization of networks using graph polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.

    2015-09-01

    In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
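
    The trace computation can be sketched as follows, under the assumption that the normalized-Laplacian eigenvalues act as the energy levels of a Boltzmann ensemble; the paper's exact partition function and truncation order may differ in detail.

```python
import math
import numpy as np

def thermodynamic_variables(A, beta=1.0, order=8):
    """Average energy and entropy of a graph, treating the eigenvalues of
    the normalized Laplacian as Boltzmann energy levels (one plausible
    reading of the paper; its exact partition function may differ).
    A is a dense symmetric 0/1 adjacency matrix."""
    d = A.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(A)) - Dinv @ A @ Dinv          # normalized Laplacian

    # Truncated Taylor series in traces of powers of L, so no explicit
    # spectrum is needed:  Tr exp(-b*L) = sum_k (-b)**k * Tr(L**k) / k!
    Z, LZ, Lk = 0.0, 0.0, np.eye(len(A))
    for k in range(order + 1):
        c = (-beta) ** k / math.factorial(k)
        Z += c * np.trace(Lk)                     # partition function
        LZ += c * np.trace(Lk @ L)                # Tr(L * exp(-b*L)) term
        Lk = Lk @ L
    U = LZ / Z                                    # average energy
    S = math.log(Z) + beta * U                    # entropy
    return U, S
```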

  10. Polynomial-Time Approximation Algorithm for the Problem of Cardinality-Weighted Variance-Based 2-Clustering with a Given Center

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Motkova, A. V.

    2018-01-01

    A strongly NP-hard problem of partitioning a finite set of points of Euclidean space into two clusters is considered. The solution criterion is the minimum of the sum (over both clusters) of weighted sums of squared distances from the elements of each cluster to its geometric center. The weights of the sums are equal to the cardinalities of the desired clusters. The center of one cluster is given as input, while the center of the other is unknown and is determined as the point of space equal to the mean of the cluster elements. A version of the problem is analyzed in which the cardinalities of the clusters are given as input. A polynomial-time 2-approximation algorithm for solving the problem is constructed.

  11. Peculiarities of stochastic regime of Arctic ice cover time evolution over 1987-2014 from microwave satellite sounding on the basis of NASA team 2 algorithm

    NASA Astrophysics Data System (ADS)

    Raev, M. D.; Sharkov, E. A.; Tikhonov, V. V.; Repina, I. A.; Komarova, N. Yu.

    2015-12-01

    The GLOBAL-RT database (DB) is composed of long-term multichannel radiothermal observation data received from DMSP F08-F17 satellites; it is permanently supplemented with new data on the Earth's exploration from the space department of the Space Research Institute, Russian Academy of Sciences. Arctic ice-cover areas for regions above 60° N latitude were calculated using the DB polar version and the NASA Team 2 algorithm, which is widely used in the scientific literature. Based on the analysis of the variability of Arctic ice cover during 1987-2014, the two months were selected in which the Arctic ice cover was maximal (February) and minimal (September), and the average ice-cover area was calculated for these months. Confidence intervals of the average values are in the 95-98% limits. Several approximations are derived for the time dependences of the ice-cover maximum and minimum over the period under study. Regression dependences were calculated for polynomials from the first degree (linear) to the sextic. The root-mean-square error of deviation from the approximating curve decreased sharply up to the biquadratic polynomial and then varied insignificantly: from 0.5593 for the third-degree polynomial to 0.4560 for the biquadratic polynomial. Hence, the commonly used strictly linear regression with a negative time gradient for the September Arctic ice-cover minimum over 30 years should be considered incorrect.
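
    The degree comparison can be reproduced in outline with NumPy; the series below is synthetic (the GLOBAL-RT data are not reproduced in this record), so only the procedure, not the quoted error values, carries over.

```python
import numpy as np

# Synthetic September minima (10^6 km^2), standing in for the GLOBAL-RT series.
years = np.arange(1987, 2015)
rng = np.random.default_rng(0)
area = 7.5 - 0.05 * (years - 1987) + rng.normal(0.0, 0.4, years.size)

t = years - 1987.0                       # center the abscissa for conditioning
for deg in range(1, 7):                  # linear through sextic, as in the paper
    fit = np.polyfit(t, area, deg)
    rmse = np.sqrt(np.mean((np.polyval(fit, t) - area) ** 2))
    print(f"degree {deg}: RMSE = {rmse:.4f}")
```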

  12. Polynomial probability distribution estimation using the method of moments

    PubMed Central

    Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram–Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation. PMID:28394949

  13. Polynomial probability distribution estimation using the method of moments.

    PubMed

    Munkhammar, Joakim; Mattsson, Lars; Rydén, Jesper

    2017-01-01

    We suggest a procedure for estimating Nth degree polynomial approximations to unknown (or known) probability density functions (PDFs) based on N statistical moments from each distribution. The procedure is based on the method of moments and is set up algorithmically to aid applicability and to ensure rigor in use. In order to show applicability, polynomial PDF approximations are obtained for the distribution families Normal, Log-Normal, and Weibull, as well as for a bimodal Weibull distribution and a data set of anonymized household electricity use. The results are compared with results for traditional PDF series expansion methods of Gram-Charlier type. It is concluded that this is a comparatively simple procedure that could be used when traditional distribution families are not applicable or when polynomial expansions of probability distributions might be considered useful approximations. In particular this approach is practical for calculating convolutions of distributions, since such operations become integrals of polynomial expressions. Finally, in order to show an advanced applicability of the method, it is shown to be useful for approximating solutions to the Smoluchowski equation.
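
    A bare-bones version of the moment-matching step, assuming a polynomial ansatz p(x) = sum_j a_j x**j on a known support [a, b]; the paper's algorithmic safeguards (support selection, positivity checks) are omitted here.

```python
import numpy as np

def polynomial_pdf_coeffs(moments, a, b):
    """Coefficients a_j of p(x) = sum_j a_j * x**j on [a, b] such that the
    first len(moments) moments of p match the given ones (moments[0]
    should be 1 for a PDF).  Solves the linear system
    sum_j a_j * integral(x**(k+j), a, b) = m_k,  k = 0..N-1."""
    N = len(moments)
    H = np.empty((N, N))
    for k in range(N):
        for j in range(N):
            # closed form of the integral of x**(k+j) over [a, b]
            H[k, j] = (b ** (k + j + 1) - a ** (k + j + 1)) / (k + j + 1)
    return np.linalg.solve(H, np.asarray(moments, float))

# Sanity check: moments [1, 1/2, 1/3] on [0, 1] recover the uniform
# density, i.e. coefficients (1, 0, 0).
print(polynomial_pdf_coeffs([1.0, 0.5, 1.0 / 3.0], 0.0, 1.0))
```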

  14. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the S-procedure is utilized to reduce the conservatism caused by considering the whole operating region for the approximated polynomials. It is shown that well-known stability conditions can be obtained as special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  15. On the best mean-square approximations to a planet's gravitational potential

    NASA Astrophysics Data System (ADS)

    Lobkova, N. I.

    1985-02-01

    The continuous problem of approximating the gravitational potential of a planet in the form of polynomials of solid spherical functions is considered. The best mean-square polynomials, referred to different parts of space, are compared with each other. The harmonic coefficients corresponding to the surface of a planet are shown to be unstable with respect to the degree of the polynomial and to differ from the Stokes constants.

  16. Roots of polynomials by ratio of successive derivatives

    NASA Technical Reports Server (NTRS)

    Crouse, J. E.; Putt, C. W.

    1972-01-01

    An order of magnitude study of the ratios of successive polynomial derivatives yields information about the number of roots at an approached root point and the approximate location of a root point from a nearby point. The location approximation improves as a root is approached, so a powerful convergence procedure becomes available. These principles are developed into a computer program which finds the roots of polynomials with real number coefficients.

  17. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
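
    To illustrate the correction-function idea in the simplest setting, the sketch below uses the widely known two-sample polynomial correction (often called polyBLEP) rather than the paper's integrated third-order B-spline kernel; only the general mechanism, adding a short polynomial residual around each discontinuity, is shared.

```python
import numpy as np

def poly_blep(t, dt):
    """Two-sample polynomial residual around a step discontinuity.
    t is the normalized phase in [0, 1), dt the phase increment per
    sample.  This is the simplest member of the family the paper
    studies, not its third-order B-spline kernel."""
    if t < dt:                        # just after the discontinuity
        x = t / dt
        return x + x - x * x - 1.0
    if t > 1.0 - dt:                  # just before the discontinuity
        x = (t - 1.0) / dt
        return x * x + x + x + 1.0
    return 0.0

def sawtooth(freq, sr, n):
    """Approximately bandlimited sawtooth via the correction function."""
    phase, dt, out = 0.0, freq / sr, np.empty(n)
    for i in range(n):
        out[i] = 2.0 * phase - 1.0 - poly_blep(phase, dt)
        phase += dt
        if phase >= 1.0:
            phase -= 1.0
    return out
```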

  18. THEORETICAL p-MODE OSCILLATION FREQUENCIES FOR THE RAPIDLY ROTATING {delta} SCUTI STAR {alpha} OPHIUCHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deupree, Robert G., E-mail: bdeupree@ap.smu.ca

    2011-11-20

    A rotating, two-dimensional stellar model is evolved to match the approximate conditions of {alpha} Oph. Both axisymmetric and nonaxisymmetric oscillation frequencies are computed for two-dimensional rotating models which approximate the properties of {alpha} Oph. These computed frequencies are compared to the observed frequencies. Oscillation calculations are made assuming the eigenfunction can be fitted with six Legendre polynomials, but comparison calculations with eight Legendre polynomials show the frequencies agree to within about 0.26% on average. The surface horizontal shape of the eigenfunctions for the two sets of assumed number of Legendre polynomials agrees less well, but all calculations show significant departuresmore » from that of a single Legendre polynomial. It is still possible to determine the large separation, although the small separation is more complicated to estimate. With the addition of the nonaxisymmetric modes with |m| {<=} 4, the frequency space becomes sufficiently dense that it is difficult to comment on the adequacy of the fit of the computed to the observed frequencies. While the nonaxisymmetric frequency mode splitting is no longer uniform, the frequency difference between the frequencies for positive and negative values of the same m remains 2m times the rotation rate.« less

  19. Accurate spectral solutions for the parabolic and elliptic partial differential equations by the ultraspherical tau method

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.

    2005-09-01

    We present a double ultraspherical spectral methods that allow the efficient approximate solution for the parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra, and are solved by using the step-by-step method. Numerical applications of how to use these methods are described. Numerical results obtained compare favorably with those of the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that spectral approximation based on Chebyshev polynomials of the first kind is not always better than others based on ultraspherical polynomials.

  20. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    NASA Astrophysics Data System (ADS)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state are presented. These follow a polynomial form, making it computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with a better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects about the implementation of TEOS-10 in ocean models are discussed.

  1. Best uniform approximation to a class of rational functions

    NASA Astrophysics Data System (ADS)

    Zheng, Zhitong; Yong, Jun-Hai

    2007-10-01

    We explicitly determine the best uniform polynomial approximation to a class of rational functions of the form 1/(x-c)² + K(a,b,c,n)/(x-c) on [a,b], represented by their Chebyshev expansion, where a, b, and c are real numbers, n-1 denotes the degree of the best approximating polynomial, and K is a constant determined by a, b, c, and n. Our result is based on the explicit determination of a phase angle η in the representation of the approximation error by a trigonometric function. Moreover, we formulate an ansatz which offers a heuristic strategy for determining the best approximating polynomial to a function represented by its Chebyshev expansion. Combined with the phase angle method, this ansatz can be used to find the best uniform approximation to some further functions.
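
    Truncated Chebyshev expansions of this kind are easy to experiment with numerically. The following sketch (with illustrative values of c and the degree, on the canonical interval [-1, 1] rather than the paper's general [a, b]) builds a degree-n Chebyshev interpolant of 1/(x-c)² and measures its uniform error, which is near-best rather than exactly best.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Near-best uniform approximation of f(x) = 1/(x - c)^2 on [-1, 1] for a
# pole c outside the interval; the error decays geometrically with n.
c, n = 1.5, 12
f = lambda x: 1.0 / (x - c) ** 2
p = C.Chebyshev(C.chebinterpolate(f, n))     # degree-n Chebyshev interpolant

x = np.linspace(-1.0, 1.0, 2001)
print("max |f - p| on [-1, 1]:", np.max(np.abs(f(x) - p(x))))
```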

  2. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method combined with polynomial preconditioning is studied for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative, eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.

  3. Performance tradeoffs in static and dynamic load balancing strategies

    NASA Technical Reports Server (NTRS)

    Iqbal, M. A.; Saltz, J. H.; Bokhart, S. H.

    1986-01-01

    The problem of uniformly distributing the load of a parallel program over a multiprocessor system was considered. A program was analyzed whose structure permits the computation of the optimal static solution. Then four strategies for load balancing were described and their performance compared. The strategies are: (1) the optimal static assignment algorithm, which is guaranteed to yield the best static solution; (2) the static binary dissection method, which is very fast but sub-optimal; (3) the greedy algorithm, a static fully polynomial time approximation scheme, which approximates the optimal solution to arbitrary accuracy; and (4) the predictive dynamic load balancing heuristic, which uses information on the precedence relationships within the program and outperforms any of the static methods. It is also shown that the overhead incurred by the dynamic heuristic is reduced considerably if it is started off with a static assignment provided by any of the other three strategies.
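
    The abstract does not spell out the greedy scheme's rules, so the sketch below is a hedged stand-in: a standard largest-first greedy assignment that conveys the flavor of static load balancing, not the paper's algorithm.

```python
import heapq

def greedy_static_assignment(task_costs, n_procs):
    """Largest-first greedy static load balancing: assign each task, in
    decreasing cost order, to the currently least-loaded processor.
    task_costs maps task id -> cost.  Returns the assignment and the
    resulting makespan."""
    loads = [(0.0, p) for p in range(n_procs)]    # (load, processor) min-heap
    heapq.heapify(loads)
    assignment = {}
    for task, cost in sorted(task_costs.items(), key=lambda kv: -kv[1]):
        load, p = heapq.heappop(loads)
        assignment[task] = p
        heapq.heappush(loads, (load + cost, p))
    return assignment, max(l for l, _ in loads)
```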

  4. High-order computer-assisted estimates of topological entropy

    NASA Astrophysics Data System (ADS)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C^0 estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C^0 errors of size 10^-10 to 10^-14, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example rigorous lower bounds for the topological entropy of the Hénon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.

  5. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haut, T. S.; Babb, T.; Martinsson, P. G.

    2015-06-16

    Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulation over long times is required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th-order Runge–Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long-time intervals, with speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.

  6. An efficient algorithm for choosing the degree of a polynomial to approximate discrete nonoscillatory data

    NASA Technical Reports Server (NTRS)

    Hedgley, D. R.

    1978-01-01

    An efficient algorithm for selecting the degree of a polynomial that defines a curve that best approximates a data set was presented. This algorithm was applied to both oscillatory and nonoscillatory data without loss of generality.
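
    A simple stopping rule of this kind can be sketched as follows; Hedgley's actual criterion differs in detail, so this is only an illustration of degree selection by diminishing residual returns.

```python
import numpy as np

def choose_degree(x, y, max_degree=15, tol=0.05):
    """Pick the smallest polynomial degree whose RMS residual is no
    longer improved by more than the fraction `tol` when the degree is
    raised by one.  An illustrative rule, not Hedgley's exact test."""
    prev = np.inf
    for deg in range(1, max_degree + 1):
        fit = np.polyfit(x, y, deg)
        rmse = np.sqrt(np.mean((np.polyval(fit, x) - y) ** 2))
        if prev < np.inf and (prev - rmse) < tol * prev:
            return deg - 1            # last degree that still paid off
        prev = rmse
    return max_degree
```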

  7. Better approximation guarantees for job-shop scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goldberg, L.A.; Paterson, M.; Srinivasan, A.

    1997-06-01

    Job-shop scheduling is a classical NP-hard problem. Shmoys, Stein & Wein presented the first polynomial-time approximation algorithm for this problem that has a good (polylogarithmic) approximation guarantee. We improve the approximation guarantee of their work, and present further improvements for some important NP-hard special cases of this problem (e.g., in the preemptive case where machines can suspend work on operations and later resume). We also present NC algorithms with improved approximation guarantees for some NP-hard special cases.

  8. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sevast'yanov, E A; Sadekova, E Kh

    The Bulgarian mathematicians Sendov, Popov, and Boyanov have well-known results on the asymptotic behaviour of the least deviations of 2π-periodic functions in the classes H^ω from trigonometric polynomials in the Hausdorff metric. However, the asymptotics they give are not adequate to detect a difference in, for example, the rate of approximation of functions f whose moduli of continuity ω(f; δ) differ by factors of the form (log(1/δ))^β. Furthermore, a more detailed determination of the asymptotic behaviour by traditional methods becomes very difficult. This paper develops an approach based on using trigonometric snakes as approximating polynomials. The snakes of order n inscribed in the Minkowski δ-neighbourhood of the graph of the approximated function f provide, in a number of cases, the best approximation for f (for the appropriate choice of δ). The choice of δ depends on n and f and is based on constructing polynomial kernels adjusted to the Hausdorff metric and polynomials with special oscillatory properties. Bibliography: 19 titles.

  10. Polynomial approximation of functions of matrices and its application to the solution of a general system of linear equations

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1987-01-01

    During the process of solving a mathematical model numerically, there is often a need to operate on a vector v by an operator which can be expressed as f(A), where A is an N×N matrix (e.g., exp(A), sin(A), A^(-1)). Except for very simple matrices, it is impractical to construct the matrix f(A) explicitly; usually an approximation to it is used. In the present research, an algorithm is developed which uses a polynomial approximation to f(A). The task is reduced to a problem of approximating f(z) by a polynomial in z, where z belongs to a domain D in the complex plane which includes all the eigenvalues of A. This approximation problem is approached by interpolating the function f(z) in a certain set of points which is known to have some maximal properties; the approximation thus achieved is almost best. Implementation of the algorithm for some practical problems is described. Since the solution of a linear system Ax = b is x = A^(-1)b, an iterative solution of it can be regarded as a polynomial approximation to f(A) = A^(-1). Implementing the algorithm in this case is also described.
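
    The key practical point, applying a polynomial of A to a vector without ever forming f(A), reduces to Horner's rule with matrix-vector products. A sketch, using a truncated Taylor polynomial of exp as a stand-in for the paper's near-best interpolants:

```python
import math
import numpy as np

def poly_of_matrix_times_vector(coeffs, A, v):
    """Evaluate p(A) @ v by Horner's rule, using only matrix-vector
    products and never forming p(A) (or f(A)) explicitly.  coeffs holds
    the monomial coefficients c_0, ..., c_n of the polynomial p."""
    result = coeffs[-1] * v
    for c in reversed(coeffs[:-1]):
        result = A @ result + c * v
    return result

# Toy check: a truncated Taylor polynomial of exp applied to a small
# random matrix (Taylor is only a convenient stand-in here).
rng = np.random.default_rng(1)
A = 0.1 * rng.standard_normal((50, 50))
v = rng.standard_normal(50)
coeffs = [1.0 / math.factorial(k) for k in range(12)]
w = poly_of_matrix_times_vector(coeffs, A, v)    # w ~ exp(A) @ v
```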

  11. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.

  12. A comparison of polynomial approximations and artificial neural nets as response surfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.; Barthelemy, Jean-Francois M.

    1992-01-01

    Artificial neural nets and polynomial approximations were used to develop response surfaces for several test problems. Based on the number of functional evaluations required to build the approximations and the number of undetermined parameters associated with the approximations, the performance of the two types of approximations was found to be comparable. A rule of thumb is developed for determining the number of nodes to be used on a hidden layer of an artificial neural net, and the number of designs needed to train an approximation is discussed.

  13. A Boussinesq-scaled, pressure-Poisson water wave model

    NASA Astrophysics Data System (ADS)

    Donahue, Aaron S.; Zhang, Yao; Kennedy, Andrew B.; Westerink, Joannes J.; Panda, Nishant; Dawson, Clint

    2015-02-01

    Through the use of Boussinesq scaling we develop and test a model for resolving non-hydrostatic pressure profiles in nonlinear wave systems over varying bathymetry. A Green-Naghdi type polynomial expansion is used to resolve the pressure profile along the vertical axis; this is then inserted into the pressure-Poisson equation, retaining terms up to a prescribed order, and solved using a weighted residual approach. The model shows rapid convergence properties with increasing order of polynomial expansion, which can be greatly improved through the application of asymptotic rearrangement. Models of Boussinesq scaling of the fully nonlinear O(μ²) and weakly nonlinear O(μ^N) are presented, and the analytical and numerical properties of the O(μ²) and O(μ⁴) models are discussed. Optimal basis functions in the Green-Naghdi expansion are determined through manipulation of the free parameters which arise due to the Boussinesq scaling. The optimal O(μ²) model has dispersion accuracy equivalent to a Padé [2,2] approximation with one extra free parameter. The optimal O(μ⁴) model obtains dispersion accuracy equivalent to a Padé [4,4] approximation with two free parameters which can be used to optimize shoaling or nonlinear properties. The O(μ⁴) model shows excellent agreement with experimental data.

  14. Least-Squares Adaptive Control Using Chebyshev Orthogonal Polynomials

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Burken, John; Ishihara, Abraham

    2011-01-01

    This paper presents a new adaptive control approach using Chebyshev orthogonal polynomials as basis functions in a least-squares functional approximation. The use of orthogonal basis functions improves the function approximation significantly and enables better convergence of parameter estimates. Flight control simulations demonstrate the effectiveness of the proposed adaptive control approach.

  15. Design and Use of a Learning Object for Finding Complex Polynomial Roots

    ERIC Educational Resources Information Center

    Benitez, Julio; Gimenez, Marcos H.; Hueso, Jose L.; Martinez, Eulalia; Riera, Jaime

    2013-01-01

    Complex numbers are essential in many fields of engineering, but students often fail to have a natural insight into them. We present a learning object for the study of complex polynomials that graphically shows that any complex polynomial has a root and, furthermore, is useful for finding the approximate roots of a complex polynomial. Moreover, we…

  16. Recursive approach to the moment-based phase unwrapping method.

    PubMed

    Langley, Jason A; Brice, Robert G; Zhao, Qun

    2010-06-01

    The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.

  17. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
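
    The CTP sample set itself is straightforward to generate: take the zeros of a one-dimensional Chebyshev polynomial on each axis, map them to that variable's interval, and form the tensor grid. A sketch with illustrative orders:

```python
import numpy as np
from itertools import product

def chebyshev_tensor_product_samples(orders, bounds):
    """Chebyshev tensor-product (CTP) samples: the d-dimensional grid of
    the zeros of one-dimensional Chebyshev polynomials, mapped to each
    variable's interval.  orders[i] is the number of nodes on axis i,
    bounds[i] = (lo, hi) its interval."""
    axes = []
    for n, (lo, hi) in zip(orders, bounds):
        k = np.arange(1, n + 1)
        z = np.cos((2 * k - 1) * np.pi / (2 * n))   # zeros of T_n on [-1, 1]
        axes.append(0.5 * (lo + hi) + 0.5 * (hi - lo) * z)
    return np.array(list(product(*axes)))

# 5 x 5 grid on [0, 1] x [-2, 2]
pts = chebyshev_tensor_product_samples([5, 5], [(0.0, 1.0), (-2.0, 2.0)])
```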

  18. Event-Triggered Fault Detection of Nonlinear Networked Systems.

    PubMed

    Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping

    2017-04-01

    This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.

  19. Approximate tensor-product preconditioners for very high order discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Pazner, Will; Persson, Per-Olof

    2018-02-01

    In this paper, we develop a new tensor-product based preconditioner for discontinuous Galerkin methods with polynomial degrees higher than those typically employed. This preconditioner uses an automatic, purely algebraic method to approximate the exact block Jacobi preconditioner by Kronecker products of several small, one-dimensional matrices. Traditional matrix-based preconditioners require O(p^(2d)) storage and O(p^(3d)) computational work, where p is the degree of basis polynomials used, and d is the spatial dimension. Our SVD-based tensor-product preconditioner requires O(p^(d+1)) storage, O(p^(d+1)) work in two spatial dimensions, and O(p^(d+2)) work in three spatial dimensions. Combined with a matrix-free Newton-Krylov solver, these preconditioners allow for the solution of DG systems in linear time in p per degree of freedom in 2D, and reduce the computational complexity from O(p^9) to O(p^5) in 3D. Numerical results are shown in 2D and 3D for the advection, Euler, and Navier-Stokes equations, using polynomials of degree up to p = 30. For many test cases, the preconditioner results in similar iteration counts when compared with the exact block Jacobi preconditioner, and performance is significantly improved for high polynomial degrees p.
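
    The algebraic core, approximating a matrix by a single Kronecker product via an SVD of its block rearrangement (the Van Loan-Pitsianis construction), can be sketched as follows; the paper applies this idea blockwise to the DG Jacobi blocks with further structure we do not reproduce here.

```python
import numpy as np

def nearest_kronecker(A, m, n, p, q):
    """Best Frobenius-norm approximation A ~ kron(B, C), with B of shape
    (m, n) and C of shape (p, q), obtained from a rank-1 SVD of the
    block rearrangement of A (Van Loan-Pitsianis)."""
    assert A.shape == (m * p, n * q)
    R = np.empty((m * n, p * q))
    for i in range(m):
        for j in range(n):
            # row (i, j) of R is the flattened (i, j) block of A
            R[i * n + j] = A[i * p:(i + 1) * p, j * q:(j + 1) * q].ravel()
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    B = np.sqrt(s[0]) * U[:, 0].reshape(m, n)
    C = np.sqrt(s[0]) * Vt[0].reshape(p, q)
    return B, C
```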

  20. Logical definability and asymptotic growth in optimization and counting problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Compton, K.

    1994-12-31

    There has recently been a great deal of interest in the relationship between logical definability and NP-optimization problems. Let MS_n (resp. MP_n) be the class of problems to compute, for a given finite structure A, the maximum number of tuples x̄ in A satisfying a Σ_n (resp. Π_n) formula ψ(x̄, S̄) as S̄ ranges over predicates on A. Kolaitis and Thakur showed that the classes MS_n and MP_n collapse to a hierarchy of four levels. Papadimitriou and Yannakakis previously showed that problems in the two lowest levels MS_0 and MS_1 (which they called Max SNP and Max NP) are approximable to within a constant factor in polynomial time. Similarly, Saluja, Subrahmanyam, and Thakur defined SS_n (resp. SP_n) to be the class of problems to compute, for a given finite structure A, the number of tuples (T̄, S̄) satisfying a given Σ_n (resp. Π_n) formula ψ(T̄, S̄) in A. They showed that the classes SS_n and SP_n collapse to a hierarchy of five levels and that problems in the two lowest levels SS_0 and SS_1 have a fully polynomial time randomized approximation scheme. We define extended classes MSF_n, MPF_n, SSF_n, and SPF_n by allowing formulae to contain predicates definable in a logic known as least fixpoint logic. The resulting hierarchies collapse to the same number of levels, and problems in the bottom levels can be approximated as before, but now some problems descend from the highest levels in the original hierarchies to the lowest levels in the new hierarchies. We introduce a method for characterizing rates of growth of average solution sizes, thereby showing that a number of important problems do not belong to MSF_1 and SSF_1. This method is related to limit laws for logics and the probabilistic method from combinatorics.

  1. Pedestrian detection in crowded scenes with the histogram of gradients principle

    NASA Astrophysics Data System (ADS)

    Sidla, O.; Rosner, M.; Lypetskyy, Y.

    2006-10-01

    This paper describes a close-to-real-time, scale-invariant implementation of a pedestrian detector system based on the Histogram of Oriented Gradients (HOG) principle. Salient HOG features are first selected from a manually created, very large database of samples with an evolutionary optimization procedure that directly trains a polynomial Support Vector Machine (SVM). Real-time operation is achieved by a cascaded two-step classifier which first uses a very fast linear SVM (with the same features as the polynomial SVM) to reject most of the irrelevant detections and then computes the decision function with a polynomial SVM on the remaining set of candidate detections. Scale invariance is achieved by running the detector of constant size on scaled versions of the original input images and by clustering the results over all resolutions. The pedestrian detection system has been implemented in two versions: (i) full-body detection, and (ii) upper-body-only detection. The latter is especially suited to very busy and crowded scenarios. On a state-of-the-art PC it is able to run at a frequency of 8-20 frames/sec.

  2. New Bernstein type inequalities for polynomials on ellipses

    NASA Technical Reports Server (NTRS)

    Freund, Roland; Fischer, Bernd

    1990-01-01

    New and sharp estimates are derived for the growth in the complex plane of polynomials known to have a curved majorant on a given ellipse. These so-called Bernstein type inequalities are closely connected with certain constrained Chebyshev approximation problems on ellipses. Also presented are some new results for approximation problems of this type.

  3. Graphical Solution of Polynomial Equations

    ERIC Educational Resources Information Center

    Grishin, Anatole

    2009-01-01

    Graphing utilities, such as the ubiquitous graphing calculator, are often used in finding the approximate real roots of polynomial equations. In this paper the author offers a simple graphing technique that allows one to find all solutions of a polynomial equation (1) of arbitrary degree; (2) with real or complex coefficients; and (3) possessing…

  4. Extended Islands of Tractability for Parsimony Haplotyping

    NASA Astrophysics Data System (ADS)

    Fleischer, Rudolf; Guo, Jiong; Niedermeier, Rolf; Uhlmann, Johannes; Wang, Yihui; Weller, Mathias; Wu, Xi

    Parsimony haplotyping is the problem of finding a smallest size set of haplotypes that can explain a given set of genotypes. The problem is NP-hard, and many heuristic and approximation algorithms as well as polynomial-time solvable special cases have been discovered. We propose improved fixed-parameter tractability results with respect to the parameter "size of the target haplotype set" k by presenting an O*(k^(4k))-time algorithm. This also applies to the practically important constrained case, where we can only use haplotypes from a given set. Furthermore, we show that the problem becomes polynomial-time solvable if the given set of genotypes is complete, i.e., contains all possible genotypes that can be explained by the set of haplotypes.

  5. Generating the Patterns of Variation with GeoGebra: The Case of Polynomial Approximations

    ERIC Educational Resources Information Center

    Attorps, Iiris; Björk, Kjell; Radic, Mirko

    2016-01-01

    In this paper, we report a teaching experiment regarding the theory of polynomial approximations at the university mathematics teaching in Sweden. The experiment was designed by applying Variation theory and by using the free dynamic mathematics software GeoGebra. The aim of this study was to investigate if the technology-assisted teaching of…

  6. Developing a reversible rapid coordinate transformation model for the cylindrical projection

    NASA Astrophysics Data System (ADS)

    Ye, Si-jing; Yan, Tai-lai; Yue, Yan-li; Lin, Wei-yan; Li, Lin; Yao, Xiao-chuang; Mu, Qin-yun; Li, Yong-qin; Zhu, De-hai

    2016-04-01

    Numerical models are widely used for coordinate transformations. However, in most numerical models, polynomials are generated to approximate "true" geographic coordinates or plane coordinates, and it is hard to make one polynomial simultaneously appropriate for both forward and inverse transformations. As there is a transformation rule between geographic coordinates and plane coordinates, how accurate and efficient is the calculation of the coordinate transformation if we construct polynomials to approximate the transformation rule instead of the "true" coordinates? In addition, is it preferable to compare models using such polynomials with traditional numerical models with even higher exponents? Focusing on cylindrical projection, this paper reports on a grid-based rapid numerical transformation model - a linear rule approximation model (LRA-model) - that constructs linear polynomials to approximate the transformation rule and uses a graticule to alleviate error propagation. Our experiments on cylindrical projection transformation between the WGS 84 Geographic Coordinate System (EPSG 4326) and the WGS 84 UTM ZONE 50N Plane Coordinate System (EPSG 32650) with simulated data demonstrate that the LRA-model exhibits high efficiency, high accuracy, and high stability; is simple and easy to use for both forward and inverse transformations; and can be applied to the transformation of large amounts of data with a requirement of high calculation efficiency. Furthermore, the LRA-model exhibits advantages in calculation efficiency, accuracy, and stability for coordinate transformations, compared to the widely used hyperbolic transformation model.
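
    Our reading of the LRA idea, fitting a linear polynomial to the transformation rule itself inside each graticule cell, can be sketched for a simple spherical Mercator northing (a stand-in only; EPSG 32650 is actually a transverse Mercator projection, which we do not reproduce):

```python
import numpy as np

R = 6378137.0                                     # WGS 84 semi-major axis

def mercator_y(phi_deg):
    """Spherical-Mercator northing, used here as the transformation rule."""
    phi = np.radians(phi_deg)
    return R * np.log(np.tan(np.pi / 4 + phi / 2))

# Build a graticule and fit, in each cell, a linear polynomial to the
# rule itself rather than to scattered "true" coordinates.
edges = np.arange(0.0, 60.0 + 1e-9, 1.0)          # 1-degree cells
cells = []
for lo, hi in zip(edges[:-1], edges[1:]):
    phi = np.linspace(lo, hi, 11)
    a, b = np.polyfit(phi, mercator_y(phi), 1)    # y ~ a*phi + b on the cell
    cells.append((lo, hi, a, b))

def lra_forward(phi_deg):
    for lo, hi, a, b in cells:
        if lo <= phi_deg <= hi:
            return a * phi_deg + b
    raise ValueError("latitude outside graticule")

print(abs(lra_forward(42.37) - mercator_y(42.37)))   # small per-cell error
```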

  7. Cosmographic analysis with Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    The limits of standard cosmography are here revised, addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To figure this out, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which the convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques, based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.

  8. Polynomial Conjoint Analysis of Similarities: A Model for Constructing Polynomial Conjoint Measurement Algorithms.

    ERIC Educational Resources Information Center

    Young, Forrest W.

    A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…

  9. A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media

    DTIC Science & Technology

    2010-08-01

    …applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC) … represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic … of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that global polynomial interpolation cannot resolve lo…

  10. Polynomial approximation of Poincare maps for Hamiltonian system

    NASA Technical Reports Server (NTRS)

    Froeschle, Claude; Petit, Jean-Marc

    1992-01-01

    Different methods are proposed and tested for transforming a non-linear differential system, and more particularly a Hamiltonian one, into a map without integrating the whole orbit as in the well-known Poincare return map technique. We construct piecewise polynomial maps by coarse-graining the phase-space surface of section into parallelograms and using either only values of the Poincare maps at the vertices or also the gradient information at the nearest neighbors to define a polynomial approximation within each cell. The numerical experiments are in good agreement with both the real symplectic and Poincare maps.

  11. Gröbner Bases and Generation of Difference Schemes for Partial Differential Equations

    NASA Astrophysics Data System (ADS)

    Gerdt, Vladimir P.; Blinkov, Yuri A.; Mozzhilkin, Vladimir V.

    2006-05-01

    In this paper we present an algorithmic approach to the generation of fully conservative difference schemes for linear partial differential equations. The approach is based on enlargement of the equations in their integral conservation law form by extra integral relations between unknown functions and their derivatives, and on discretization of the obtained system. The structure of the discrete system depends on numerical approximation methods for the integrals occurring in the enlarged system. As a result of the discretization, a system of linear polynomial difference equations is derived for the unknown functions and their partial derivatives. A difference scheme is constructed by elimination of all the partial derivatives. The elimination can be achieved by selecting a proper elimination ranking and by computing a Gröbner basis of the linear difference ideal generated by the polynomials in the discrete system. For these purposes we use the difference form of Janet-like Gröbner bases and their implementation in Maple. As illustration of the described methods and algorithms, we construct a number of difference schemes for Burgers and Falkowich-Karman equations and discuss their numerical properties.

  12. Approximate ground states of the random-field Potts model from graph cuts

    NASA Astrophysics Data System (ADS)

    Kumar, Manoj; Kumar, Ravinder; Weigel, Martin; Banerjee, Varsha; Janke, Wolfhard; Puri, Sanjay

    2018-05-01

    While the ground-state problem for the random-field Ising model is polynomial, and can be solved using a number of well-known algorithms for maximum flow or graph cut, the analog random-field Potts model corresponds to a multiterminal flow problem that is known to be NP-hard. Hence an efficient exact algorithm is very unlikely to exist. As we show here, it is nevertheless possible to use an embedding of binary degrees of freedom into the Potts spins in combination with graph-cut methods to solve the corresponding ground-state problem approximately in polynomial time. We benchmark this heuristic algorithm using a set of quasiexact ground states found for small systems from long parallel tempering runs. For a not-too-large number q of Potts states, the method based on graph cuts finds the same solutions in a fraction of the time. We employ the new technique to analyze the breakup length of the random-field Potts model in two dimensions.

  13. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  14. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
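
    On a bounded interval the (unweighted) equilibrium measure is the arcsine density, so the algorithm can be sketched for a Legendre basis on [-1, 1] as follows; the general weighted, multivariate setting of the paper is not covered by this sketch.

```python
import numpy as np
from numpy.polynomial import legendre

def christoffel_weighted_lsq(f, degree, n_samples, seed=0):
    """Monte Carlo least squares on [-1, 1]: sample from the arcsine
    (equilibrium) measure and weight by the Christoffel function of the
    orthonormal Legendre basis.  Returns coefficients in that basis."""
    rng = np.random.default_rng(seed)
    x = np.cos(np.pi * rng.random(n_samples))        # arcsine-distributed
    # Orthonormal Legendre polynomials evaluated at the samples.
    V = np.stack([np.sqrt(j + 0.5) * legendre.Legendre.basis(j)(x)
                  for j in range(degree + 1)], axis=1)
    w = (degree + 1) / np.sum(V ** 2, axis=1)        # Christoffel weights
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * f(x), rcond=None)
    return coef
```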

  15. Quadratures with multiple nodes, power orthogonality, and moment-preserving spline approximation

    NASA Astrophysics Data System (ADS)

    Milovanovic, Gradimir V.

    2001-01-01

    Quadrature formulas with multiple nodes, power orthogonality, and some applications of such quadratures to moment-preserving approximation by defective splines are considered. An account of power orthogonality (s- and σ-orthogonal polynomials) and of generalized Gaussian quadratures with multiple nodes, including stable algorithms for the numerical construction of the corresponding polynomials and Cotes numbers, is given. In particular, the important case of the Chebyshev weight is analyzed. Finally, some applications to moment-preserving approximation of functions by defective splines are discussed.

  16. On size-constrained minimum s–t cut problems and size-constrained dense subgraph problems

    DOE PAGES

    Chen, Wenbin; Samatova, Nagiza F.; Stallmann, Matthias F.; ...

    2015-10-30

    In some application cases, the solutions of combinatorial optimization problems on graphs should satisfy an additional vertex size constraint. In this paper, we consider size-constrained minimum s–t cut problems and size-constrained dense subgraph problems. We introduce the minimum s–t cut with at-least-k vertices problem, the minimum s–t cut with at-most-k vertices problem, and the minimum s–t cut with exactly k vertices problem. We prove that they are NP-complete; thus, they are not polynomially solvable unless P = NP. On the other hand, we also study the densest at-least-k-subgraph problem (DalkS) and the densest at-most-k-subgraph problem (DamkS) introduced by Andersen and Chellapilla [1]. We present a polynomial time algorithm for DalkS when k is bounded by some constant c. We also present two approximation algorithms for DamkS. The first approximation algorithm for DamkS has an approximation ratio of (n-1)/(k-1), where n is the number of vertices in the input graph. The second approximation algorithm for DamkS has an approximation ratio of O(n^δ), for some δ < 1/3.

  17. Rational trigonometric approximations using Fourier series partial sums

    NASA Technical Reports Server (NTRS)

    Geer, James F.

    1993-01-01

    A class of approximations S_(N,M) to a periodic function f which uses the ideas of Padé, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S_(N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S_(N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Padé' approximations converge point-wise to (f(x⁺) + f(x⁻))/2 more rapidly (in some cases by a factor of 1/k^(2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.

  18. Generating the patterns of variation with GeoGebra: the case of polynomial approximations

    NASA Astrophysics Data System (ADS)

    Attorps, Iiris; Björk, Kjell; Radic, Mirko

    2016-01-01

    In this paper, we report a teaching experiment regarding the theory of polynomial approximations in university mathematics teaching in Sweden. The experiment was designed by applying Variation theory and by using the free dynamic mathematics software GeoGebra. The aim of this study was to investigate whether technology-assisted teaching of Taylor polynomials, compared with the traditional way of working at the university level, can support the teaching and learning of mathematical concepts and ideas. An engineering student group (n = 19) was taught Taylor polynomials with the assistance of GeoGebra while a control group (n = 18) was taught in a traditional way. The data were gathered by video recording of the lectures, by giving a post-test concerning Taylor polynomials in both groups and by posing one question regarding Taylor polynomials at the final exam for the course in Real Analysis in one variable. In the analysis of the lectures, we found Variation theory combined with GeoGebra to be a potentially powerful tool for revealing some critical aspects of Taylor polynomials. Furthermore, the research results indicated that applying Variation theory when planning the technology-assisted teaching supported and enriched students' learning opportunities in the study group compared with the control group.

  19. Rational approximations of f(R) cosmography through Pad'e polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    We consider high-redshift f(R) cosmography adopting the technique of polynomial reconstruction. In lieu of considering Taylor treatments, which turn out to be non-predictive as soon as z > 1, we take into account the Padé rational approximations, which consist of expansions that converge in high-redshift domains. In particular, our strategy is to reconstruct f(z) functions first, assuming the Ricci scalar to be invertible with respect to the redshift z. Having the so-obtained f(z) functions, we invert them and easily obtain the corresponding f(R) terms. We minimize error propagation, assuming no errors upon redshift data. The treatment we follow naturally leads to evaluating curvature pressure, density and equation of state, characterizing the universe evolution at redshift much higher than standard cosmographic approaches. We then match these outcomes with small-redshift constraints obtained by framing the f(R) cosmology through Taylor series around z ≃ 0. This gives rise to a calibration procedure at small redshift that enables the definition of polynomial approximations up to z ≃ 10. Last but not least, we show discrepancies with the standard cosmological model which point towards an extension of the ΛCDM paradigm, indicating an effective dark energy term evolving in time. We finally describe the evolution of our effective dark energy term by means of basic techniques of data mining.

  20. Polynomial approximations of thermodynamic properties of arbitrary gas mixtures over wide pressure and density ranges

    NASA Technical Reports Server (NTRS)

    Allison, D. O.

    1972-01-01

    Computer programs for flow fields around planetary entry vehicles require real-gas equilibrium thermodynamic properties in a simple form which can be evaluated quickly. To fill this need, polynomial approximations were found for thermodynamic properties of air and model planetary atmospheres. A coefficient-averaging technique was used for curve fitting in lieu of the usual least-squares method. The polynomials consist of terms up to the ninth degree in each of two variables (essentially pressure and density) including all cross terms. Four of these polynomials can be joined to cover, for example, a range of about 1000 to 11000 K and 0.00001 to 1 atmosphere (1 atm = 1.0133 × 10⁵ N/m²) for a given thermodynamic property. Relative errors of less than 1 percent are found over most of the applicable range.
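
    The report's fits use a coefficient-averaging technique; the sketch below substitutes ordinary least squares on a hypothetical two-variable property surface, only to show the form of a bivariate polynomial with all cross terms (degree and data are illustrative):

        import numpy as np
        from numpy.polynomial import polynomial as P

        rng = np.random.default_rng(1)
        logp = rng.uniform(-5.0, 0.0, 400)           # stand-in pressure decades
        logr = rng.uniform(-6.0, -1.0, 400)          # stand-in density decades
        h = np.exp(0.3 * logp) + 0.1 * logp * logr   # hypothetical property surface
        # All terms x^i y^j with i, j <= 5, including every cross term.
        V = P.polyvander2d(logp, logr, [5, 5])
        coef, *_ = np.linalg.lstsq(V, h, rcond=None)
        C = coef.reshape(6, 6)
        # Evaluate the fitted surface at one point and compare with the truth there.
        print(abs(P.polyval2d(-2.0, -3.0, C) - (np.exp(-0.6) + 0.1 * 6.0)))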

  1. Rows of optical vortices from elliptically perturbing a high-order beam

    NASA Astrophysics Data System (ADS)

    Dennis, Mark R.

    2006-05-01

    An optical vortex (phase singularity) with a high topological strength resides on the axis of a high-order light beam. The breakup of this vortex under elliptic perturbation into a straight row of unit-strength vortices is described. This behavior is studied in helical Ince-Gauss beams and astigmatic, generalized Hermite-Laguerre-Gauss beams, which are perturbations of Laguerre-Gauss beams. Approximations of these beams are derived for small perturbations, in which a neighborhood of the axis can be approximated by a polynomial in the complex plane: a Chebyshev polynomial for Ince-Gauss beams, and a Hermite polynomial for astigmatic beams.

  2. Superbounce and loop quantum ekpyrotic cosmologies from modified gravity: F(R) , F(G) and F(T) theories

    NASA Astrophysics Data System (ADS)

    Odintsov, S. D.; Oikonomou, V. K.; Saridakis, Emmanuel N.

    2015-12-01

    We investigate the realization of two bouncing paradigms, namely the superbounce and loop quantum cosmological ekpyrosis, in the framework of various modified gravities. In particular, we focus on the F(R), F(G) and F(T) gravities, and we reconstruct their specific subclasses which lead to such universe evolutions. These subclasses consist of power laws, polynomials, or hypergeometric ansätze, which can be approximated by power laws. The qualitative similarity of the different effective gravities which realize the above two bouncing cosmologies indicates that a universal behavior might lie behind the bounce. Finally, performing a linear perturbation analysis, we show that the obtained solutions are conditionally or fully stable.

  3. Differential geometric treewidth estimation in adiabatic quantum computation

    NASA Astrophysics Data System (ADS)

    Wang, Chi; Jonckheere, Edmond; Brun, Todd

    2016-10-01

    The D-Wave adiabatic quantum computing platform is designed to solve a particular class of problems—the Quadratic Unconstrained Binary Optimization (QUBO) problems. Due to the particular "Chimera" physical architecture of the D-Wave chip, the logical problem graph at hand needs an extra process called minor embedding in order to be solvable on the D-Wave architecture; this embedding problem is itself NP-hard. In this paper, we propose a novel polynomial-time approximation to the closely related treewidth, based on the differential geometric concept of Ollivier-Ricci curvature. The approximation runs in polynomial time and thus could significantly reduce the overall complexity of determining whether a QUBO problem is minor embeddable, and thus solvable on the D-Wave architecture.

  4. The Approximability of Learning and Constraint Satisfaction Problems

    DTIC Science & Technology

    2010-10-07

    further improved this result to NP ⊆ naPCP_{1,3/4+ε}(O(log n), 3). Around the same time, Zwick [141] showed that naPCP_{1,5/8}(O(log n), 3) ⊆ BPP by giving a randomized polynomial-time 5/8-approximation algorithm for satisfiable 3CSP. Therefore unless NP ⊆ BPP, the best s must be bigger than 5/8. … We think that Question 5.1.2 addresses an important missing part in understanding the 3-query PCP systems. In addition, as is mentioned …

  5. A faster 1.375-approximation algorithm for sorting by transpositions.

    PubMed

    Cunha, Luís Felipe I; Kowada, Luis Antonio B; Hausen, Rodrigo de A; de Figueiredo, Celina M H

    2015-11-01

    Sorting by Transpositions is an NP-hard problem for which several polynomial-time approximation algorithms have been developed. Hartman and Shamir (2006) developed a 1.5-approximation algorithm, whose running time was improved to O(n log n) by Feng and Zhu (2007) with a data structure they defined, the permutation tree. Elias and Hartman (2006) developed a 1.375-approximation O(n²) algorithm, and Firoz et al. (2011) claimed an improvement to the running time, from O(n²) to O(n log n), by using the permutation tree. We provide counter-examples to the correctness of Firoz et al.'s strategy, showing that it is not possible to reach a component by sufficient extensions using their method. In addition, we propose a 1.375-approximation algorithm, modifying Elias and Hartman's approach with the use of permutation trees and achieving O(n log n) time.

  6. Modeling State-Space Aeroelastic Systems Using a Simple Matrix Polynomial Approach for the Unsteady Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2008-01-01

    A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane and ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial of the flutter equations of motion (EOM) is formed. A technique of recasting the matrix-polynomial form of the flutter EOM into a first order form is also presented that can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM has been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are also presented, which are the following: first, the "workhorse" methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin, along the real axis, or accurately find roots farther away from the imaginary axis of the complex plane; and, second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits of some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC-Roger's s-plane method and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.

  7. Identification of stochastic interactions in nonlinear models of structural mechanics

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2017-07-01

    In this paper, a polynomial approximation is presented by which the Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated. The input space is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of simulation runs of the Latin Hypercube Sampling method can be applied. The method presented also makes it possible to evaluate higher-order sensitivity indices, which could not be identified in the case of the nonlinear FEM model.

  8. Formally biorthogonal polynomials and a look-ahead Levinson algorithm for general Toeplitz systems

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.; Zha, Hongyuan

    1992-01-01

    Systems of linear equations with Toeplitz coefficient matrices arise in many important applications. The classical Levinson algorithm computes solutions of Toeplitz systems with only O(n²) arithmetic operations, as compared to the O(n³) operations that are needed for solving general linear systems. However, the Levinson algorithm in its original form requires that all leading principal submatrices are nonsingular. An extension of the Levinson algorithm to general Toeplitz systems is presented. The algorithm uses look-ahead to skip over exactly singular, as well as ill-conditioned, leading submatrices and, at the same time, it still fully exploits the Toeplitz structure. In our derivation of this algorithm, we make use of the intimate connection of Toeplitz matrices with formally biorthogonal polynomials.

  9. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    NASA Astrophysics Data System (ADS)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited for problems under linear dynamic equations, therefore perfectly fitting the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the technique are explained and some applications related to spacecraft flying in close relative motion are shown.

  10. Local polynomial chaos expansion for linear differential equations with high dimensional random inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Yi; Jakeman, John; Gittelson, Claude

    2015-01-08

    In this paper we present a localized polynomial chaos expansion for partial differential equations (PDE) with random inputs. In particular, we focus on time independent linear stochastic problems with high dimensional random inputs, where the traditional polynomial chaos methods, and most of the existing methods, incur prohibitively high simulation cost. The local polynomial chaos method employs a domain decomposition technique to approximate the stochastic solution locally. In each subdomain, a subdomain problem is solved independently and, more importantly, in a much lower dimensional random space. In a postprocessing stage, accurate samples of the original stochastic problems are obtained from the samples of the local solutions by enforcing the correct stochastic structure of the random inputs and the coupling conditions at the interfaces of the subdomains. Overall, the method is able to solve stochastic PDEs in very large dimensions by solving a collection of low dimensional local problems and can be highly efficient. In our paper we present the general mathematical framework of the methodology and use numerical examples to demonstrate the properties of the method.

  11. Trajectory Optimization Using Adjoint Method and Chebyshev Polynomial Approximation for Minimizing Fuel Consumption During Climb

    NASA Technical Reports Server (NTRS)

    Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe

    2013-01-01

    This paper describes two methods of trajectory optimization for obtaining an optimal minimum-fuel-to-climb trajectory for an aircraft. The first method is based on the adjoint method, and the second is a direct trajectory optimization method using a Chebyshev polynomial approximation and a cubic spline approximation. The approximate optimal trajectory is compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.

  12. Spline function approximation techniques for image geometric distortion representation. [for registration of multitemporal remote sensor imagery

    NASA Technical Reports Server (NTRS)

    Anuta, P. E.

    1975-01-01

    Least squares approximation techniques were developed for use in computer aided correction of spatial image distortions for registration of multitemporal remote sensor imagery. Polynomials were first used to define image distortion over the entire two dimensional image space. Spline functions were then investigated to determine if the combination of lower order polynomials could approximate a higher order distortion with less computational difficulty. Algorithms for generating approximating functions were developed and applied to the description of image distortion in aircraft multispectral scanner imagery. Other applications of the techniques were suggested for earth resources data processing areas other than geometric distortion representation.

  13. Colored knot polynomials for arbitrary pretzel knots and links

    DOE PAGES

    Galakhov, D.; Melnikov, D.; Mironov, A.; ...

    2015-04-01

    A very simple expression is conjectured for arbitrary colored Jones and HOMFLY polynomials of a rich (g+1)-parametric family of pretzel knots and links. The answer for the Jones and HOMFLY polynomials is fully and explicitly expressed through the Racah matrix of U_q(SU_N), and looks related to a modular transformation of the toric conformal block. Knot polynomials are among the hottest topics in modern theory: they are supposed to summarize nicely the representation theory of quantum algebras and the modular properties of conformal blocks. The result reported in the present letter provides a spectacular illustration of, and support for, this general expectation.

  14. Neck curve polynomials in neck rupture model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul

    2012-06-06

    The Neck Rupture Model explains the scission process through the neck, the position of smallest radius of the liquid drop. In the older approach the rupture position is determined randomly, which is why it has been called the Random Neck Rupture Model (RNRM). Neck curve polynomials have been employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of ²⁸⁰X₉₀, varying the order of the polynomials as well as the temperature. The neck curve polynomial approximation shows important effects on the shape of the fission yield curve.

  15. Fully Decomposable Split Graphs

    NASA Astrophysics Data System (ADS)

    Broersma, Hajo; Kratsch, Dieter; Woeginger, Gerhard J.

    We discuss various questions around partitioning a split graph into connected parts. Our main result is a polynomial time algorithm that decides whether a given split graph is fully decomposable, i.e., whether it can be partitioned into connected parts of order α₁, α₂, …, αₖ for every α₁, α₂, …, αₖ summing up to the order of the graph. In contrast, we show that the decision problem whether a given split graph can be partitioned into connected parts of order α₁, α₂, …, αₖ for a given partition α₁, α₂, …, αₖ of the order of the graph, is NP-hard.

  16. Approximation for limit cycles and their isochrons.

    PubMed

    Demongeot, Jacques; Françoise, Jean-Pierre

    2006-12-01

    Local analysis of trajectories of dynamical systems near an attractive periodic orbit displays the notion of asymptotic phase and isochrons. These notions are quite useful in applications to biosciences. In this note, we give an expression for the first approximation of equations of isochrons in the setting of perturbations of polynomial Hamiltonian systems. This method can be generalized to perturbations of systems that have a polynomial integral factor (like the Lotka-Volterra equation).

  17. An Analysis of Polynomial Chaos Approximations for Modeling Single-Fluid-Phase Flow in Porous Medium Systems

    PubMed Central

    Rupert, C.P.; Miller, C.T.

    2008-01-01

    We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration over Monte Carlo to compare the quality of the polynomial models obtained by all approaches, and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519

  18. Policy Iteration for $H_\\infty $ Optimal Control of Polynomial Nonlinear Systems via Sum of Squares Programming.

    PubMed

    Zhu, Yuanheng; Zhao, Dongbin; Yang, Xiong; Zhang, Qichao

    2018-02-01

    Sum of squares (SOS) polynomials have provided a computationally tractable way to deal with inequality constraints appearing in many control problems. SOS can also act as an approximator in the framework of adaptive dynamic programming. In this paper, an approximate solution to the H∞ optimal control of polynomial nonlinear systems is proposed. Under a given attenuation coefficient, the Hamilton-Jacobi-Isaacs equation is relaxed to an optimization problem with a set of inequalities. After applying the policy iteration technique and constraining the inequalities to SOS, the optimization problem is divided into a sequence of feasible semidefinite programming problems. With the converged solution, the attenuation coefficient is further minimized to a lower value. After iterations, approximate solutions to the smallest L₂-gain and the associated H∞ optimal controller are obtained. Four examples are employed to verify the effectiveness of the proposed algorithm.

  19. Analysis of the impacts of horizontal translation and scaling on wavefront approximation coefficients with rectangular pupils for Chebyshev and Legendre polynomials.

    PubMed

    Sun, Wenqing; Chen, Lei; Tuya, Wulan; He, Yong; Zhu, Rihong

    2013-12-01

    Chebyshev and Legendre polynomials are frequently used over rectangular pupils for wavefront approximation. Ideally, the dataset fits completely with the polynomial basis, which provides the full-pupil approximation coefficients and the corresponding geometric aberrations. However, under horizontal translation and scaling, the terms in the original polynomials become linear combinations of the coefficients of the other terms. This paper introduces analytical expressions for two typical situations after translation and scaling. For a small translation, a first-order Taylor expansion can be used to simplify the computation. Several representative terms are selected as inputs to compute the coefficient changes before and after translation and scaling. Results show that the outcomes of the analytical solutions and the approximated values under discrete sampling are consistent. By computing a group of randomly generated coefficients, we contrast the changes under different translation and scaling conditions: larger ratios correlate with larger deviations of the approximated values from the original ones. Finally, we analyze the peak-to-valley (PV) and root mean square (RMS) deviations arising from the use of the first-order approximation and the direct expansion under different translation values. The results show that when the translation is less than 4%, the most deviated 5th term in the first-order 1D-Legendre expansion has a PV deviation of less than 7% and an RMS deviation of less than 2%. The analytical expressions and the computed results under discrete sampling given in this paper for the typical function bases under translation and scaling in rectangular areas can be applied in wavefront approximation and analysis.

  20. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials by using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then examined numerically on triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  1. Uncertainty Propagation for Turbulent, Compressible Flow in a Quasi-1D Nozzle Using Stochastic Methods

    NASA Technical Reports Server (NTRS)

    Zang, Thomas A.; Mathelin, Lionel; Hussaini, M. Yousuff; Bataille, Francoise

    2003-01-01

    This paper describes a fully spectral, Polynomial Chaos method for the propagation of uncertainty in numerical simulations of compressible, turbulent flow, as well as a novel stochastic collocation algorithm for the same application. The stochastic collocation method is key to the efficient use of stochastic methods on problems with complex nonlinearities, such as those associated with the turbulence model equations in compressible flow and for CFD schemes requiring solution of a Riemann problem. Both methods are applied to compressible flow in a quasi-one-dimensional nozzle. The stochastic collocation method is roughly an order of magnitude faster than the fully Galerkin Polynomial Chaos method on the inviscid problem.

  2. The cost-constrained traveling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sokkappa, P.R.

    1990-10-01

    The Cost-Constrained Traveling Salesman Problem (CCTSP) is a variant of the well-known Traveling Salesman Problem (TSP). In the TSP, the goal is to find a tour of a given set of cities such that the total cost of the tour is minimized. In the CCTSP, each city is given a value, and a fixed cost-constraint is specified. The objective is to find a subtour of the cities that achieves maximum value without exceeding the cost-constraint. Thus, unlike the TSP, the CCTSP requires both selection and sequencing. As a consequence, most results for the TSP cannot be extended to the CCTSP. We show that the CCTSP is NP-hard and that no K-approximation algorithm or fully polynomial approximation scheme exists, unless P = NP. We also show that several special cases are polynomially solvable. Algorithms for the CCTSP, which outperform previous methods, are developed in three areas: upper bounding methods, exact algorithms, and heuristics. We found that a bounding strategy based on the knapsack problem performs better, both in speed and in the quality of the bounds, than methods based on the assignment problem. Likewise, we found that a branch-and-bound approach using the knapsack bound was superior to a method based on a common branch-and-bound method for the TSP. In our study of heuristic algorithms, we found that, when selecting nodes for inclusion in the subtour, it is important to consider the "neighborhood" of the nodes. A node with low value that brings the subtour near many other nodes may be more desirable than an isolated node of high value. We found two types of repetition to be desirable: repetitions based on randomization in the subtour building process, and repetitions encouraging the inclusion of different subsets of the nodes. By varying the number and type of repetitions, we can adjust the computation time required by our method to obtain algorithms that outperform previous methods.

  3. Polynomial solutions of the Monge-Ampère equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aminov, Yu A

    2014-11-30

    The question of the existence of polynomial solutions to the Monge-Ampère equation z_{xx} z_{yy} − z_{xy}² = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.

  4. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
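
    A minimal sketch of a Chebyshev polynomial smoother for a symmetric positive definite system, assuming the common practice of targeting eigenvalues in [0.3·λ_max, λ_max]; the 1-D Poisson matrix stands in for one multigrid level, and all parameters are illustrative:

        import numpy as np

        def chebyshev_smooth(A, b, x, lam_max, iters=4, lo_frac=0.3):
            # Chebyshev iteration targeting eigenvalues in [lo_frac*lam_max, lam_max]:
            # damps the associated (high-frequency) error using only mat-vecs, so it
            # parallelizes without the sequential sweeps of Gauss-Seidel.
            theta = 0.5 * (1.0 + lo_frac) * lam_max
            delta = 0.5 * (1.0 - lo_frac) * lam_max
            sigma = theta / delta
            rho = 1.0 / sigma
            r = b - A @ x
            d = r / theta
            for _ in range(iters):
                x = x + d
                r = r - A @ d
                rho_new = 1.0 / (2.0 * sigma - rho)
                d = rho_new * (rho * d + (2.0 / delta) * r)
                rho = rho_new
            return x

        n = 64
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Poisson, lam_max < 4
        x0 = np.random.default_rng(2).standard_normal(n)      # rough initial error (b = 0)
        x = chebyshev_smooth(A, np.zeros(n), x0, lam_max=4.0)
        print(np.linalg.norm(x0), np.linalg.norm(x))          # error norm is reduced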

  5. Estimating phase synchronization in dynamical systems using cellular nonlinear networks

    NASA Astrophysics Data System (ADS)

    Sowa, Robert; Chernihovskyi, Anton; Mormann, Florian; Lehnertz, Klaus

    2005-06-01

    We propose a method for estimating phase synchronization between time series using the parallel computing architecture of cellular nonlinear networks (CNNs). Applying this method to time series of coupled nonlinear model systems and to electroencephalographic time series from epilepsy patients, we show that an accurate approximation of the mean phase coherence R, a bivariate measure of phase synchronization, can be achieved with CNNs using polynomial-type templates.
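
    The reference quantity that such a network approximates can be written down directly; a plain NumPy/SciPy sketch of the mean phase coherence R using Hilbert-transform phases of two synthetic noisy oscillators (all signal parameters invented for illustration):

        import numpy as np
        from scipy.signal import hilbert

        # Two noisy oscillators with a constant phase lag (stand-ins for EEG channels).
        t = np.linspace(0.0, 100.0, 4000)
        rng = np.random.default_rng(3)
        x = np.sin(2 * np.pi * 1.0 * t) + 0.2 * rng.standard_normal(t.size)
        y = np.sin(2 * np.pi * 1.0 * t + 0.8) + 0.2 * rng.standard_normal(t.size)
        # Instantaneous phases from the analytic signal, then the mean phase coherence.
        dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
        R = abs(np.mean(np.exp(1j * dphi)))  # R = 1 means perfect phase locking
        print(R)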

  6. On the Gibbs phenomenon 5: Recovering exponential accuracy from collocation point values of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1994-01-01

    The paper presents a method to recover exponential accuracy at all points (including at the discontinuities themselves), from the knowledge of an approximation to the interpolation polynomial (or trigonometric polynomial). We show that if we are given the collocation point values (or a highly accurate approximation) at the Gauss or Gauss-Lobatto points, we can reconstruct a uniform exponentially convergent approximation to the function f(x) in any sub-interval of analyticity. The proof covers the cases of Fourier, Chebyshev, Legendre, and more general Gegenbauer collocation methods.

  7. Optimal approximation of harmonic growth clusters by orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teodorescu, Razvan

    2008-01-01

    Interface dynamics in two-dimensional systems with a maximal number of conservation laws gives an accurate theoretical model for many physical processes, from the hydrodynamics of immiscible, viscous flows (the zero surface-tension limit of Hele-Shaw flows), to the granular dynamics of hard spheres, and even diffusion-limited aggregation. Although a complete solution for the continuum case exists, efficient approximations of the boundary evolution are very useful due to their practical applications. In this article, the approximation scheme based on orthogonal polynomials with a deformed Gaussian kernel is discussed, as well as relations to potential theory.

  8. On Bernstein type inequalities and a weighted Chebyshev approximation problem on ellipses

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    A classical inequality due to Bernstein which estimates the norm of polynomials on any given ellipse in terms of their norm on any smaller ellipse with the same foci is examined. For the uniform and a certain weighted uniform norm, and for the case that the two ellipses are not too close, sharp estimates of this type were derived and the corresponding extremal polynomials were determined. These Bernstein type inequalities are closely connected with certain constrained Chebyshev approximation problems on ellipses. Some new results were also presented for a weighted approximation problem of this type.

  9. The Western Africa ebola virus disease epidemic exhibits both global exponential and local polynomial growth rates.

    PubMed

    Chowell, Gerardo; Viboud, Cécile; Hyman, James M; Simonsen, Lone

    2015-01-21

    While many infectious disease epidemics are initially characterized by an exponential growth in time, we show that district-level Ebola virus disease (EVD) outbreaks in West Africa follow slower polynomial-based growth kinetics over several generations of the disease. We analyzed epidemic growth patterns at three different spatial scales (regional, national, and subnational) of the Ebola virus disease epidemic in Guinea, Sierra Leone and Liberia by compiling publicly available weekly time series of reported EVD case numbers from the patient database available from the World Health Organization website for the period 05-Jan to 17-Dec 2014. We found significant differences in the growth patterns of EVD cases at the scale of the country, district, and other subnational administrative divisions. The national cumulative curves of EVD cases in Guinea, Sierra Leone, and Liberia show periods of approximate exponential growth. In contrast, local epidemics are asynchronous and exhibit slow growth patterns during 3 or more EVD generations, which can be better approximated by a polynomial than an exponential function. The slower than expected growth pattern of local EVD outbreaks could result from a variety of factors, including behavior changes, success of control interventions, or intrinsic features of the disease such as a high level of clustering. Quantifying the contribution of each of these factors could help refine estimates of final epidemic size and the relative impact of different mitigation efforts in current and future EVD outbreaks.
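
    A toy sketch of this model comparison on synthetic cumulative counts: exponential growth is linear in (t, log C) while polynomial growth is linear in (log t, log C), so two-parameter fits of each model can be compared by their residuals (all numbers below are invented for illustration):

        import numpy as np

        weeks = np.arange(1, 16)
        # Synthetic cumulative incidence with polynomial-like growth plus noise.
        cases = 5.0 * weeks**2.1 + np.random.default_rng(4).normal(0.0, 8.0, weeks.size)
        log_c = np.log(np.clip(cases, 1.0, None))
        exp_fit = np.polyfit(weeks, log_c, 1)            # log C = a + r t
        poly_fit = np.polyfit(np.log(weeks), log_c, 1)   # log C = a + p log t
        exp_res = np.sum((np.polyval(exp_fit, weeks) - log_c) ** 2)
        poly_res = np.sum((np.polyval(poly_fit, np.log(weeks)) - log_c) ** 2)
        print(exp_res, poly_res)  # the smaller residual identifies the better growth model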

  10. The Western Africa Ebola Virus Disease Epidemic Exhibits Both Global Exponential and Local Polynomial Growth Rates

    PubMed Central

    Chowell, Gerardo; Viboud, Cécile; Hyman, James M; Simonsen, Lone

    2015-01-01

    Background: While many infectious disease epidemics are initially characterized by an exponential growth in time, we show that district-level Ebola virus disease (EVD) outbreaks in West Africa follow slower polynomial-based growth kinetics over several generations of the disease. Methods: We analyzed epidemic growth patterns at three different spatial scales (regional, national, and subnational) of the Ebola virus disease epidemic in Guinea, Sierra Leone and Liberia by compiling publicly available weekly time series of reported EVD case numbers from the patient database available from the World Health Organization website for the period 05-Jan to 17-Dec 2014. Results: We found significant differences in the growth patterns of EVD cases at the scale of the country, district, and other subnational administrative divisions. The national cumulative curves of EVD cases in Guinea, Sierra Leone, and Liberia show periods of approximate exponential growth. In contrast, local epidemics are asynchronous and exhibit slow growth patterns during 3 or more EVD generations, which can be better approximated by a polynomial than an exponential function. Conclusions: The slower than expected growth pattern of local EVD outbreaks could result from a variety of factors, including behavior changes, success of control interventions, or intrinsic features of the disease such as a high level of clustering. Quantifying the contribution of each of these factors could help refine estimates of final epidemic size and the relative impact of different mitigation efforts in current and future EVD outbreaks. PMID:25685633

  11. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.

  12. On the parallel solution of parabolic equations

    NASA Technical Reports Server (NTRS)

    Gallopoulos, E.; Saad, Youcef

    1989-01-01

    Parallel algorithms for the solution of linear parabolic problems are proposed. The first of these methods is based on using polynomial approximation to the exponential. It does not require solving any linear systems and is highly parallelizable. The two other methods proposed are based on Pade and Chebyshev approximations to the matrix exponential. The parallelization of these methods is achieved by using partial fraction decomposition techniques to solve the resulting systems and thus offers the potential for increased time parallelism in time dependent problems. Experimental results from the Alliant FX/8 and the Cray Y-MP/832 vector multiprocessors are also presented.
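
    A sketch of the first ingredient, polynomial approximation to the matrix exponential, assuming a symmetric matrix with a known spectral bound; only matrix-vector products are needed, which is what makes the approach parallelizable (the dense matrices here are purely illustrative):

        import numpy as np
        from scipy.linalg import expm
        from scipy.special import iv  # modified Bessel functions: Chebyshev coefficients of exp

        def expm_cheb(A, v, t, lam_max, deg=25):
            # y ~ exp(-t A) v for symmetric A with spectrum in [0, lam_max], via the
            # Chebyshev series exp(a x) = I_0(a) + 2 sum_k I_k(a) T_k(x) on [-1, 1].
            a = 0.5 * t * lam_max
            X = np.eye(A.shape[0]) - (2.0 / lam_max) * A  # maps the spectrum into [-1, 1]
            Tkm1, Tk = v, X @ v
            y = iv(0, a) * Tkm1 + 2.0 * iv(1, a) * Tk
            for k in range(2, deg + 1):
                Tkm1, Tk = Tk, 2.0 * X @ Tk - Tkm1        # Chebyshev three-term recurrence
                y = y + 2.0 * iv(k, a) * Tk
            return np.exp(-a) * y

        n = 50
        A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D Laplacian, eigenvalues in (0, 4)
        v = np.ones(n)
        print(np.linalg.norm(expm_cheb(A, v, 0.5, 4.0) - expm(-0.5 * A) @ v))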

  13. Efficient Quantum Pseudorandomness.

    PubMed

    Brandão, Fernando G S L; Harrow, Aram W; Horodecki, Michał

    2016-04-29

    Randomness is both a useful way to model natural systems and a useful tool for engineered systems, e.g., in computation, communication, and control. Fully random transformations require exponential time for either classical or quantum systems, but in many cases pseudorandom operations can emulate certain properties of truly random ones. Indeed, in the classical realm there is by now a well-developed theory regarding such pseudorandom operations. However, the construction of such objects turns out to be much harder in the quantum case. Here, we show that random quantum unitary time evolutions ("circuits") are a powerful source of quantum pseudorandomness. This gives for the first time a polynomial-time construction of quantum unitary designs, which can replace fully random operations in most applications, and shows that generic quantum dynamics cannot be distinguished from truly random processes. We discuss applications of our result to quantum information science, cryptography, and understanding the self-equilibration of closed quantum dynamics.

  14. FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    NASA Astrophysics Data System (ADS)

    Regnier, D.; Dubray, N.; Verrière, M.; Schunck, N.

    2018-04-01

    The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this paper, we present version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank-Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. We emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).

  15. Quantum digital-to-analog conversion algorithm using decoherence

    NASA Astrophysics Data System (ADS)

    SaiToh, Akira

    2015-08-01

    We consider the problem of mapping digital data encoded on a quantum register to analog amplitudes in parallel. It is shown to be unlikely that a fully unitary polynomial-time quantum algorithm exists for this problem; NP would become a subset of BQP if it existed. From a practical point of view, we propose a nonunitary linear-time algorithm using quantum decoherence. It tacitly uses an exponentially large physical resource, which is typically a huge number of identical molecules. Quantumness of the correlation appearing in the process of the algorithm is also discussed.

  16. Slave finite elements: The temporal element approach to nonlinear analysis

    NASA Technical Reports Server (NTRS)

    Gellin, S.

    1984-01-01

    A formulation method for finite elements in space and time incorporating nonlinear geometric and material behavior is presented. The method uses interpolation polynomials for approximating the behavior of various quantities over the element domain, and only explicit integration over space and time. While applications are general, the plate and shell elements that are currently being programmed are appropriate to model turbine blades, vanes, and combustor liners.

  17. Simulating Nonequilibrium Radiation via Orthogonal Polynomial Refinement

    DTIC Science & Technology

    2015-01-07

    measured by the preprocessing time, computer memory space, and average query time. In many search procedures for the number of points np of a data set, a… analytic expression for the radiative flux density is possible by the commonly accepted local thermal equilibrium (LTE) approximation. A semi… Vol. 227, pp. 9463-9476, 2008. 10. Galvez, M., Ray-Tracing model for radiation transport in three-dimensional LTE system, App. Physics, Vol. 38

  18. Eye aberration analysis with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

    New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurement of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, requiring these data to be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The optimization criterion is the nearest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of the transverse aberrations; in particular, the values of the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients are recalculated, and the value of the RMSD is computed. Optimization finishes at the minimal value of the RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
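
    A least-squares sketch of wavefront fitting with a fixed set of low-order Zernike modes; the pupil samples and aberration mix are synthetic, and the coefficient-count optimization described above would wrap a loop around this fit:

        import numpy as np

        def zernike_basis(rho, theta):
            # A few low-order Zernike modes: piston, tilts, defocus, astigmatism,
            # coma, and primary spherical aberration (unnormalized).
            return np.column_stack([
                np.ones_like(rho),
                rho * np.cos(theta), rho * np.sin(theta),
                2 * rho**2 - 1,
                rho**2 * np.cos(2 * theta), rho**2 * np.sin(2 * theta),
                (3 * rho**3 - 2 * rho) * np.cos(theta),
                (3 * rho**3 - 2 * rho) * np.sin(theta),
                6 * rho**4 - 6 * rho**2 + 1,
            ])

        rng = np.random.default_rng(5)
        rho = np.sqrt(rng.uniform(0.0, 1.0, 300))   # uniform samples over the unit pupil
        theta = rng.uniform(0.0, 2 * np.pi, 300)
        # Synthetic wavefront: defocus plus a little astigmatism.
        w = 0.4 * (2 * rho**2 - 1) + 0.1 * rho**2 * np.cos(2 * theta)
        Z = zernike_basis(rho, theta)
        coef, *_ = np.linalg.lstsq(Z, w, rcond=None)
        rms_resid = np.sqrt(np.mean((Z @ coef - w) ** 2))
        print(np.round(coef, 3), rms_resid)         # recovers the injected aberrations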

  19. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

    Due to aggressive environmental factors, variation of the dynamic load, degradation of material properties and wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties given limited information. Two methods are then presented for the dynamic response analysis of the structure under the time-variant interval process model. The first one is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second one is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be efficiently calculated, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To address the dependency phenomenon of interval arithmetic, the affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples: a spring-mass-damper system and a shell structure.
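
    A minimal sketch of the surrogate step of this kind of method for a single interval parameter: fit a Chebyshev polynomial to the response at Chebyshev nodes, then bound the response by Monte Carlo sampling of the cheap surrogate (the affine-arithmetic treatment of dependency is omitted, and the response function is a stand-in for an expensive FE solve):

        import numpy as np
        from numpy.polynomial import chebyshev as C

        def response(k):  # stand-in for an expensive FE solve (e.g., a static displacement)
            return 1.0 / (k + 0.5 * np.sin(k))

        k_lo, k_hi = 8.0, 12.0                       # interval parameter bounds
        n = 9
        # Chebyshev nodes in [k_lo, k_hi] and a degree-8 surrogate of the response.
        nodes = 0.5 * (k_hi + k_lo) + 0.5 * (k_hi - k_lo) * np.cos(np.pi * (np.arange(n) + 0.5) / n)
        to_ref = lambda k: 2.0 * (k - k_lo) / (k_hi - k_lo) - 1.0   # map to [-1, 1]
        coef = C.chebfit(to_ref(nodes), response(nodes), n - 1)
        # Monte Carlo on the cheap surrogate estimates the response interval.
        samples = np.random.default_rng(6).uniform(k_lo, k_hi, 10000)
        vals = C.chebval(to_ref(samples), coef)
        print(vals.min(), vals.max())                # estimated bounds of the response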

  20. Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.

    1998-01-01

    The use of response surface models and kriging models is compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistics-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second order polynomial response surface models.
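
    A one-dimensional sketch contrasting the two approximation types, assuming a constant underlying global model and a Gaussian correlation function for the kriging predictor; the analysis function, design sites, and correlation parameter are all illustrative:

        import numpy as np

        f = lambda x: np.sin(3.0 * x) + 0.5 * x      # stand-in deterministic computer analysis
        X = np.linspace(0.0, 3.0, 8)                 # design sites
        y = f(X)
        theta = 4.0                                  # Gaussian correlation parameter
        R = np.exp(-theta * (X[:, None] - X[None, :]) ** 2)
        Ri = np.linalg.inv(R)
        one = np.ones_like(X)
        beta = (one @ Ri @ y) / (one @ Ri @ one)     # generalized least-squares constant trend

        def kriging(x):
            r = np.exp(-theta * (x - X) ** 2)        # correlations with the design sites
            return beta + r @ Ri @ (y - beta * one)

        rs = np.polyfit(X, y, 2)                     # quadratic response surface for contrast
        xt = 1.7
        print(f(xt), kriging(xt), np.polyval(rs, xt))  # kriging interpolates the design data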

  1. On the complexity of some quadratic Euclidean 2-clustering problems

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Pyatkin, A. V.

    2016-03-01

    Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of the cluster and (2) the sum of the (multiplied by the cardinalities of the clusters) sums of squared distances from the elements of the cluster to its geometric center, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, where the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (is the variable to be optimized) as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are (1) parts of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).

  2. A Galerkin method for linear PDE systems in circular geometries with structural acoustic applications

    NASA Technical Reports Server (NTRS)

    Smith, Ralph C.

    1994-01-01

    A Galerkin method for systems of PDEs in circular geometries is presented, with motivating problems drawn from structural, acoustic, and structural acoustic applications. Depending upon the application under consideration, piecewise splines or Legendre polynomials are used when approximating the system dynamics, with modifications included to incorporate the analytic solution decay near the coordinate singularity. This provides an efficient method which retains its accuracy throughout the circular domain without degradation at the singularity. Because the problems under consideration are linear or weakly nonlinear with constant or piecewise constant coefficients, transform methods for the problems are not investigated. While the specific method is developed for the two dimensional wave equation on a circular domain and the equation of transverse motion for a thin circular plate, examples demonstrating the extension of the techniques to a fully coupled structural acoustic system are used to illustrate the flexibility of the method when approximating the dynamics of more complex systems.

  3. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.

  4. Fast template matching with polynomials.

    PubMed

    Omachi, Shinichiro; Omachi, Masako

    2007-08-01

    Template matching is widely used for many applications in image and signal processing. This paper proposes a novel template matching algorithm, called algebraic template matching. Given a template and an input image, algebraic template matching efficiently calculates similarities between the template and the partial images of the input image, for various widths and heights. The partial image most similar to the template image is detected from the input image for any location, width, and height. In the proposed algorithm, a polynomial that approximates the template image is used to match the input image instead of the template image. The proposed algorithm is effective especially when the width and height of the template image differ from the partial image to be matched. An algorithm using the Legendre polynomial is proposed for efficient approximation of the template image. This algorithm not only reduces computational costs, but also improves the quality of the approximated image. It is shown theoretically and experimentally that the computational cost of the proposed algorithm is much smaller than the existing methods.
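
    A brute-force one-dimensional sketch of the central idea, a Legendre-polynomial surrogate of the template that can be resampled to any candidate window width; the fast algebraic algorithm of the paper avoids this explicit scan:

        import numpy as np
        from numpy.polynomial import legendre as Leg

        # A degree-6 Legendre surrogate of the template; it can be resampled to any width.
        template = np.sin(np.linspace(0.0, np.pi, 40)) ** 2
        coef = Leg.legfit(np.linspace(-1.0, 1.0, template.size), template, 6)

        signal = np.zeros(300)
        signal[120:180] = np.sin(np.linspace(0.0, np.pi, 60)) ** 2  # occurrence at width 60
        best = None
        for w in (40, 50, 60, 70):                                  # candidate window widths
            probe = Leg.legval(np.linspace(-1.0, 1.0, w), coef)     # resampled template
            for i in range(signal.size - w + 1):
                score = -np.sum((signal[i:i + w] - probe) ** 2)     # similarity (negative SSD)
                if best is None or score > best[0]:
                    best = (score, i, w)
        print(best[1], best[2])  # recovered location and width (120, 60 for this signal)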

  5. Exponential-fitted methods for integrating stiff systems of ordinary differential equations: Applications to homogeneous gas-phase chemical kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.

    1984-01-01

    Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Padé (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Padé approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
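
    A scalar sketch of exponential fitting on the stiff test problem y' = -λ(y - cos t): the homogeneous decay e^{-λh} is represented exactly, so the step size is not limited by stability (forward Euler would need h < 2/λ = 0.002 here); treating the forcing as locally constant leaves only an O(h) lag:

        import numpy as np

        lam, h, T = 1000.0, 0.05, 1.0
        E = np.exp(-lam * h)          # exact decay factor over one step
        y, t = 1.0, 0.0
        while t < T - 1e-12:
            y = E * y + (1.0 - E) * np.cos(t)  # forcing frozen over the step (O(h) lag)
            t += h
        print(y, np.cos(T))  # y has relaxed onto the slow solution y ~ cos t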

  6. Grid generation and surface modeling for CFD

    NASA Technical Reports Server (NTRS)

    Connell, Stuart D.; Sober, Janet S.; Lamson, Scott H.

    1995-01-01

    When computing the flow around complex three dimensional configurations, the generation of the mesh is the most time consuming part of any calculation. With some meshing technologies this can take of the order of a man month or more. The requirement for a number of design iterations coupled with ever decreasing time allocated for design leads to the need for a significant acceleration of this process. Of the two competing approaches, block-structured and unstructured, only the unstructured approach will allow fully automatic mesh generation directly from a CAD model. Using this approach coupled with the techniques described in this paper, it is possible to reduce the mesh generation time from man months to a few hours on a workstation. The desire to closely couple a CFD code with a design or optimization algorithm requires that the changes to the geometry be performed quickly and in a smooth manner. This need for smoothness necessitates the use of Bezier polynomials in place of the more usual NURBS or cubic splines. A two dimensional Bezier polynomial based design system is described.
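
    A minimal sketch of why Bezier representations suit design loops: evaluation by de Casteljau's algorithm is numerically stable and depends smoothly on the control points (the cubic control polygon below is illustrative):

        import numpy as np

        def de_casteljau(ctrl, t):
            # Evaluate a Bezier curve at parameter t by repeated linear interpolation;
            # the result depends smoothly on the control points.
            pts = np.asarray(ctrl, dtype=float)
            while len(pts) > 1:
                pts = (1.0 - t) * pts[:-1] + t * pts[1:]
            return pts[0]

        ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.0), (4.0, 0.0)]  # cubic control polygon
        print(de_casteljau(ctrl, 0.5))  # midpoint of the curve: [2.  1.5]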

  7. Towards a PTAS for the generalized TSP in grid clusters

    NASA Astrophysics Data System (ADS)

    Khachay, Michael; Neznakhina, Katherine

    2016-10-01

    The Generalized Traveling Salesman Problem (GTSP) is a combinatorial optimization problem: find a minimum cost cycle visiting exactly one point (city) from each cluster. We consider a geometric case of this problem, where n nodes are given inside the integer grid (in the Euclidean plane) and each grid cell is a unit square. Clusters are induced by the cells "populated" by nodes of the given instance. Even in this special setting, the GTSP remains intractable, enclosing the classic Euclidean TSP on the plane. Recently, it was shown that the problem admits a (1.5 + 8√2 + ε)-approximation algorithm with complexity bound depending polynomially on n and k, where k is the number of clusters. In this paper, we propose two approximation algorithms for the Euclidean GTSP on grid clusters. For any fixed k, both algorithms are PTASs. The time complexity of the first one remains polynomial for k = O(log n), while the second one is a PTAS when k = n − O(log n).

  8. Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields

    NASA Astrophysics Data System (ADS)

    Milstead, Jonathan

    The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting-field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach combines information from different disciplines. We primarily make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials whose ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.

  9. Rigorous high-precision enclosures of fixed points and their invariant manifolds

    NASA Astrophysics Data System (ADS)

    Wittig, Alexander N.

    The well-established concept of Taylor Models is introduced, which offers highly accurate C0 enclosures of functional dependencies, combining high-order polynomial approximation of functions with rigorous estimates of the truncation error, performed using verified arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly non-linear dynamical systems. A method is proposed to extend the existing implementation of Taylor Models in COSY INFINITY from double precision coefficients to arbitrary precision coefficients. Great care is taken to maintain the highest efficiency possible by adaptively adjusting the precision of higher order coefficients in the polynomial expansion. High precision operations are based on clever combinations of elementary floating point operations yielding exact values for round-off errors. An experimental high precision interval data type is developed and implemented. Algorithms for the verified computation of intrinsic functions based on the high precision interval data type are developed and described in detail. The application of these operations in the implementation of high precision Taylor Models is discussed. An application of Taylor Model methods to the verification of fixed points is presented by verifying the existence of a period-15 fixed point in a near-standard Hénon map. Verification is performed using different verified methods such as double precision Taylor Models, high precision intervals and high precision Taylor Models. Results and performance of each method are compared. An automated rigorous fixed point finder is implemented, allowing the fully automated search for all fixed points of a function within a given domain. It returns a list of verified enclosures of each fixed point, optionally verifying uniqueness within these enclosures. An application of the fixed point finder to the rigorous analysis of beam transfer maps in accelerator physics is presented. Previous work by Johannes Grote is extended to compute very accurate polynomial approximations to invariant manifolds of discrete maps of arbitrary dimension around hyperbolic fixed points. The algorithm presented allows for automatic removal of resonances occurring during construction. A method for the rigorous enclosure of invariant manifolds of continuous systems is introduced. Using methods developed for discrete maps, polynomial approximations of invariant manifolds of hyperbolic fixed points of ODEs are obtained. These approximations are outfitted with a sharp error bound which is verified to rigorously contain the manifolds. While we focus on the three-dimensional case, verification in higher dimensions is possible using similar techniques. Integrating the resulting enclosures using the verified COSY VI integrator, the initial manifold enclosures are expanded to yield sharp enclosures of large parts of the stable and unstable manifolds. To demonstrate the effectiveness of this method, we construct enclosures of the invariant manifolds of the Lorenz system and show pictures of the resulting manifold enclosures. To the best of our knowledge, these enclosures are the largest verified enclosures of manifolds in the Lorenz system in existence.

  10. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for the test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials, namely (1) the Legendre polynomial, (2) the Chebyshev polynomial of the second kind, (3) the Chebyshev polynomial of the third kind and (4) the Chebyshev polynomial of the fourth kind. Maximum absolute error and root mean square error are calculated for the illustrated examples and presented in the form of tables for comparison. Numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.

  11. Percolation critical polynomial as a graph invariant

    DOE PAGES

    Scullard, Christian R.

    2012-10-18

    Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant, like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10^-7.
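
    To illustrate the deletion-contraction pattern invoked here, the sketch below applies the same recursion to the all-terminal reliability polynomial of a small multigraph; it is our illustration, not the critical polynomial itself, whose base cases depend on the lattice decomposition:

      import sympy as sp

      p = sp.symbols('p')

      def reliability(vertices, edges):
          # all-terminal reliability R(G; p) via deletion-contraction:
          #   R(G) = p * R(G contract e) + (1 - p) * R(G delete e)
          if not edges:
              return sp.Integer(1) if len(vertices) == 1 else sp.Integer(0)
          (u, v), rest = edges[0], edges[1:]
          if u == v:                      # self-loops never affect connectivity
              return reliability(vertices, rest)
          merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
          contracted = reliability(vertices - {v}, merged)
          deleted = reliability(vertices, rest)
          return sp.expand(p * contracted + (1 - p) * deleted)

      # a triangle graph: R = 3*p**2 - 2*p**3
      print(reliability({0, 1, 2}, [(0, 1), (1, 2), (0, 2)]))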

  12. Competitive two-agent scheduling problems to minimize the weighted combination of makespans in a two-machine open shop

    NASA Astrophysics Data System (ADS)

    Jiang, Fuhong; Zhang, Xingong; Bai, Danyu; Wu, Chin-Chia

    2018-04-01

    In this article, a competitive two-agent scheduling problem in a two-machine open shop is studied. The objective is to minimize the weighted sum of the makespans of two competitive agents. A complexity proof is presented for minimizing the weighted combination of the makespan of each agent if the weight α belonging to agent B is arbitrary. Furthermore, two pseudo-polynomial-time algorithms using the largest alternate processing time (LAPT) rule are presented. Finally, two approximation algorithms are presented if the weight is equal to one. Additionally, another approximation algorithm is presented if the weight is larger than one.

  13. Bayesian median regression for temporal gene expression data

    NASA Astrophysics Data System (ADS)

    Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.

    2007-09-01

    Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects as well as interactions between the two are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this a simple Bayes factor test is proposed to test for significance. The estimation of the median rather than the mean, and within a Bayesian framework, increases the robustness of the method compared to a Hotelling T2-test previously suggested. This is shown on simulated data and on muscular dystrophy gene expression data.

  14. Kleinberg Complex Networks

    DTIC Science & Technology

    2014-10-21

    linear combinations of paths. This project featured research on two classes of routing problems, namely traveling salesman problems and multicommodity ... flows. One highlight of this research was our discovery of a polynomial-time algorithm for the metric traveling salesman s-t path problem whose ... metric TSP would resolve one of the most venerable open problems in the theory of approximation algorithms. Our research on traveling salesman

  15. On the Complexity of the Asymmetric VPN Problem

    NASA Astrophysics Data System (ADS)

    Rothvoß, Thomas; Sanità, Laura

    We give the first constant-factor approximation algorithm for the asymmetric Virtual Private Network (VPN) problem with arbitrary concave costs. We even show the stronger result that there is always a tree solution of cost at most 2·OPT and that a tree solution of (expected) cost at most 49.84·OPT can be determined in polynomial time.

  16. FELIX-2.0: New version of the finite element solver for the time dependent generator coordinate method with the Gaussian overlap approximation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Regnier, D.; Dubray, N.; Verriere, M.

    The time-dependent generator coordinate method (TDGCM) is a powerful method to study the large amplitude collective motion of quantum many-body systems such as atomic nuclei. Under the Gaussian Overlap Approximation (GOA), the TDGCM leads to a local, time-dependent Schrödinger equation in a multi-dimensional collective space. In this study, we present version 2.0 of the code FELIX that solves the collective Schrödinger equation in a finite element basis. This new version features: (i) the ability to solve a generalized TDGCM+GOA equation with a metric term in the collective Hamiltonian, (ii) support for new kinds of finite elements and different types of quadrature to compute the discretized Hamiltonian and overlap matrices, (iii) the possibility to leverage the spectral element scheme, (iv) an explicit Krylov approximation of the time propagator for time integration instead of the implicit Crank–Nicolson method implemented in the first version, (v) an entirely redesigned workflow. We benchmark this release on an analytic problem as well as on realistic two-dimensional calculations of the low-energy fission of 240Pu and 256Fm. Low to moderate numerical precision calculations are most efficiently performed with simplex elements with a degree 2 polynomial basis. Higher precision calculations should instead use the spectral element method with a degree 4 polynomial basis. Finally, we emphasize that in a realistic calculation of fission mass distributions of 240Pu, FELIX-2.0 is about 20 times faster than its previous release (within a numerical precision of a few percent).

  17. Relaxation distribution function of intracellular dielectric zones as an indicator of tumorous transition of living cells.

    PubMed

    Thornton, B S; Hung, W T; Irving, J

    1991-01-01

    The response decay data of living cells subjected to electric polarization are associated with their relaxation distribution function (RDF), which can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than normal cells and might be used as parameters to differentiate them and their associated tissues.

  18. Computational aspects of pseudospectral Laguerre approximations

    NASA Technical Reports Server (NTRS)

    Funaro, Daniele

    1989-01-01

    Pseudospectral approximations in unbounded domains by Laguerre polynomials lead to ill-conditioned algorithms. A scaling function and appropriate numerical procedures are introduced in order to limit these unpleasant phenomena.

  19. A Lagrange-type projector on the real line

    NASA Astrophysics Data System (ADS)

    Mastroianni, G.; Notarangelo, I.

    2010-01-01

    We introduce an interpolation process based on some of the zeros of the m-th generalized Freud polynomial. Convergence results and error estimates are given. In particular, we show that, in some important function spaces, the interpolating polynomial behaves like the best approximation. Moreover, the stability and the convergence of some quadrature rules are proved.

  1. Spectral/hp element methods: Recent developments, applications, and perspectives

    NASA Astrophysics Data System (ADS)

    Xu, Hui; Cantwell, Chris D.; Monteserin, Carlos; Eskilsson, Claes; Engsig-Karup, Allan P.; Sherwin, Spencer J.

    2018-02-01

    The spectral/hp element method combines the geometric flexibility of the classical h-type finite element technique with the desirable numerical properties of spectral methods, employing high-degree piecewise polynomial basis functions on coarse finite element-type meshes. The spatial approximation is based upon orthogonal polynomials, such as Legendre or Chebyshev polynomials, modified to accommodate a C0-continuous expansion. Computationally and theoretically, by increasing the polynomial order p, high-precision solutions and fast convergence can be obtained and, in particular, under certain regularity assumptions an exponential reduction in approximation error between numerical and exact solutions can be achieved. This method has now been applied in many simulation studies of both fundamental and practical engineering flows. This paper briefly describes the formulation of the spectral/hp element method and provides an overview of its application to computational fluid dynamics. In particular, it focuses on the use of the spectral/hp element method in transitional flows and ocean engineering. Finally, some of the major challenges to be overcome in order to use the spectral/hp element method in more complex science and engineering applications are discussed.

  2. Rational approximation to e to the -x power with negative poles

    NASA Technical Reports Server (NTRS)

    Cuthill, E.

    1977-01-01

    MACSYMA was applied to the generation of an expansion in terms of Laguerre polynomials to obtain approximations to e^(-x) on [0, ∞). These approximations are compared with those developed by Saff, Schonhage, and Varga.
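
    For this expansion the coefficients are available in closed form: since ∫_0^∞ e^(-s x) L_n(x) dx = (s-1)^n / s^(n+1), the weight e^(-x) gives c_n = 2^(-(n+1)). A brief numerical check (our sketch, independent of the MACSYMA derivation):

      import numpy as np
      from numpy.polynomial import laguerre

      # e^(-x) = sum_n c_n L_n(x) with c_n = 2^(-(n+1)) under the weight e^(-x)
      n_terms = 16
      c = 0.5 ** (np.arange(n_terms) + 1)
      x = np.array([0.0, 1.0, 2.0, 5.0])
      print(laguerre.lagval(x, c))   # truncated Laguerre expansion
      print(np.exp(-x))              # target function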

  3. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach

    NASA Astrophysics Data System (ADS)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most existing control design methods for discrete-time polynomial fuzzy systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered, and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  4. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  5. Meta-Regression Approximations to Reduce Publication Selection Bias

    ERIC Educational Resources Information Center

    Stanley, T. D.; Doucouliagos, Hristos

    2014-01-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…

  6. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
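
    The fast polynomial transform itself is intricate, but its output is easy to validate: a two-dimensional cyclic convolution must agree with the convolution theorem. A hedged FFT-based reference implementation (not the FPT algorithm of the program):

      import numpy as np

      def cyclic_convolve_2d(a, b):
          # 2-D cyclic convolution via the convolution theorem; a reference
          # against which a fast polynomial transform output can be checked
          return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

      rng = np.random.default_rng(0)
      a = rng.standard_normal((8, 8))
      b = rng.standard_normal((8, 8))
      c = cyclic_convolve_2d(a, b)

      # brute-force check of a single output sample
      s = sum(a[i, j] * b[(3 - i) % 8, (5 - j) % 8]
              for i in range(8) for j in range(8))
      print(np.isclose(c[3, 5], s))            # True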

  7. A 3/2-Approximation Algorithm for Multiple Depot Multiple Traveling Salesman Problem

    NASA Astrophysics Data System (ADS)

    Xu, Zhou; Rodrigues, Brian

    As an important extension of the classical traveling salesman problem (TSP), the multiple depot multiple traveling salesman problem (MDMTSP) is to minimize the total length of a collection of tours for multiple vehicles to serve all the customers, where each vehicle must start or stay at its distinct depot. Due to the gap between the existing best approximation ratios for the TSP and for the MDMTSP in literature, which are 3/2 and 2, respectively, it is an open question whether or not a 3/2-approximation algorithm exists for the MDMTSP. We have partially addressed this question by developing a 3/2-approximation algorithm, which runs in polynomial time when the number of depots is a constant.

  8. Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra

    NASA Astrophysics Data System (ADS)

    Karstens, William; Smith, David

    2013-03-01

    Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to (1) use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index be an even function of photon energy), and (2) use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
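
    The parity constraint advocated above is straightforward to impose in a least-squares fit by restricting the design matrix to even powers. A minimal sketch on synthetic data (the coefficients and noise level are illustrative, not the silicon values of the cited papers):

      import numpy as np

      rng = np.random.default_rng(1)
      E = np.linspace(0.5, 3.0, 40)                     # photon energy grid
      n_true = 3.42 + 0.05 * E**2 + 0.002 * E**4        # even model, as parity requires
      n_data = n_true + 1e-4 * rng.standard_normal(E.size)

      A = np.column_stack([E**0, E**2, E**4])           # basis restricted to even powers
      coef, *_ = np.linalg.lstsq(A, n_data, rcond=None)
      print(coef)                                       # close to (3.42, 0.05, 0.002)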

  9. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.
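
    The study design is simple to reproduce in outline: simulate contaminated data from a transcendental model, fit a low-degree polynomial, and track the residual variance as the sample size grows. A hedged sketch (model, noise level and sizes are illustrative):

      import numpy as np
      from numpy.polynomial import polynomial as P

      rng = np.random.default_rng(0)

      def mean_residual_variance(n, degree=2, trials=500):
          # fit a polynomial to contaminated samples of a transcendental model
          # and average the residual variance over Monte Carlo trials
          out = []
          for _ in range(trials):
              x = np.linspace(0.0, 1.0, n)
              y = np.exp(x) + 0.05 * rng.standard_normal(n)
              c = P.polyfit(x, y, degree)
              r = y - P.polyval(x, c)
              out.append(r @ r / (n - degree - 1))
          return np.mean(out)

      for n in (6, 10, 20, 50, 200):
          print(n, mean_residual_variance(n))   # settles quickly as n grows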

  10. Convergence analysis of surrogate-based methods for Bayesian inverse problems

    NASA Astrophysics Data System (ADS)

    Yan, Liang; Zhang, Yuan-Xiang

    2017-12-01

    The major challenges in Bayesian inverse problems arise from the need for repeated evaluations of the forward model, as required by Markov chain Monte Carlo (MCMC) methods for posterior sampling. Many attempts at accelerating Bayesian inference have relied on surrogates for the forward model, typically constructed through repeated forward simulations that are performed in an offline phase. Although such approaches can be quite effective at reducing computation cost, there has been little analysis of the effect of the approximation on posterior inference. In this work, we prove error bounds on the Kullback-Leibler (KL) distance between the true posterior distribution and the approximation based on surrogate models. Our rigorous error analysis shows that if the forward model approximation converges at a certain rate in the prior-weighted L2 norm, then the posterior distribution generated by the approximation converges to the true posterior at least two times faster in the KL sense. An error bound on the Hellinger distance is also provided. To provide concrete examples of surrogate-model-based methods, we present an efficient technique for constructing stochastic surrogate models to accelerate Bayesian inference. The Christoffel least squares algorithms, based on generalized polynomial chaos, are used to construct a polynomial approximation of the forward solution over the support of the prior distribution. The numerical strategy and the predicted convergence rates are then demonstrated on nonlinear inverse problems involving the inference of parameters appearing in partial differential equations.

  11. A class of reduced-order models in the theory of waves and stability.

    PubMed

    Chapman, C J; Sorokin, S V

    2016-02-01

    This paper presents a class of approximations to a type of wave field for which the dispersion relation is transcendental. The approximations have two defining characteristics: (i) they give the field shape exactly when the frequency and wavenumber lie on a grid of points in the (frequency, wavenumber) plane and (ii) the approximate dispersion relations are polynomials that pass exactly through points on this grid. Thus, the method is interpolatory in nature, but the interpolation takes place in (frequency, wavenumber) space, rather than in physical space. Full details are presented for a non-trivial example, that of antisymmetric elastic waves in a layer. The method is related to partial fraction expansions and barycentric representations of functions. An asymptotic analysis is presented, involving Stirling's approximation to the psi function, and a logarithmic correction to the polynomial dispersion relation.

  12. A harmonic polynomial cell (HPC) method for 3D Laplace equation with application in marine hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Yan-Lin, E-mail: yanlin.shao@dnvgl.com; Faltinsen, Odd M.

    2014-10-01

    We propose a new efficient and accurate numerical method based on harmonic polynomials to solve boundary value problems governed by the 3D Laplace equation. The computational domain is discretized by overlapping cells. Within each cell, the velocity potential is represented by a linear superposition of a complete set of harmonic polynomials, which are the elementary solutions of the Laplace equation. Accordingly, the method is named the Harmonic Polynomial Cell (HPC) method. The accuracy and efficiency of the HPC method are demonstrated by studying analytical cases. Comparisons are made with some other existing boundary-element-based methods, e.g. the Quadratic Boundary Element Method (QBEM), the Fast Multipole Accelerated QBEM (FMA-QBEM) and a fourth order Finite Difference Method (FDM). To demonstrate the applications of the method, it is applied to some studies relevant for marine hydrodynamics. Sloshing in 3D rectangular tanks, a fully-nonlinear numerical wave tank, fully-nonlinear wave focusing on a semi-circular shoal, and the nonlinear wave diffraction of a bottom-mounted cylinder in regular waves are studied. The comparisons with the experimental results and other numerical results are all in satisfactory agreement, indicating that the present HPC method is a promising method for solving potential-flow problems. The underlying procedure of the HPC method could also be useful in fields other than marine hydrodynamics that involve solving the Laplace equation.

  13. Kurtosis Approach for Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubbemd, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximated polynomials are estimated by the gradient descent method conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation.

  14. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.

  15. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  16. Monte Carlo Solution to Find Input Parameters in Systems Design Problems

    NASA Astrophysics Data System (ADS)

    Arsham, Hossein

    2013-06-01

    Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.

  17. Kernel K-Means Sampling for Nyström Approximation.

    PubMed

    He, Li; Zhang, Hong

    2018-05-01

    A fundamental problem in Nyström-based kernel matrix approximation is the sampling method by which the training set is built. In this paper, we suggest using kernel k-means sampling, which is shown in our work to minimize the upper bound of a matrix approximation error. We first propose a unified kernel matrix approximation framework, which is able to describe most existing Nyström approximations under many popular kernels, including the Gaussian kernel and the polynomial kernel. We then show that the matrix approximation error upper bound, in terms of the Frobenius norm, is equal to the k-means error of data points in kernel space plus a constant. Thus, the k-means centers of data in kernel space, or the kernel k-means centers, are the optimal representative points with respect to the Frobenius norm error upper bound. Experimental results, with both Gaussian and polynomial kernels, on real-world data sets and image segmentation tasks show the superiority of the proposed method over the state-of-the-art methods.
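
    The construction analysed in the paper is easy to sketch: pick landmark points, form the Nyström factors, and measure the approximation error. In the illustration below, plain Lloyd k-means in input space stands in for the kernel k-means sampling of the paper (our code, Gaussian kernel only):

      import numpy as np

      def gaussian_kernel(X, Y, gamma=0.5):
          d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
          return np.exp(-gamma * d2)

      def nystrom(X, landmarks, gamma=0.5):
          # rank-m Nystrom factors: K is approximated by C @ pinv(W) @ C.T
          C = gaussian_kernel(X, landmarks, gamma)
          W = gaussian_kernel(landmarks, landmarks, gamma)
          return C @ np.linalg.pinv(W) @ C.T

      rng = np.random.default_rng(0)
      X = rng.standard_normal((200, 3))

      # plain Lloyd k-means in input space as a stand-in for kernel k-means
      m = 20
      centers = X[rng.choice(len(X), m, replace=False)]
      for _ in range(25):
          labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
          centers = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centers[j] for j in range(m)])

      K = gaussian_kernel(X, X)
      err = np.linalg.norm(K - nystrom(X, centers)) / np.linalg.norm(K)
      print(err)   # relative Frobenius error of the landmark approximation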

  18. BWM*: A Novel, Provable, Ensemble-based Dynamic Programming Algorithm for Sparse Approximations of Computational Protein Design.

    PubMed

    Jou, Jonathan D; Jain, Swati; Georgiev, Ivelin S; Donald, Bruce R

    2016-06-01

    Sparse energy functions that ignore long range interactions between residue pairs are frequently used by protein design algorithms to reduce computational cost. Current dynamic programming algorithms that fully exploit the optimal substructure produced by these energy functions only compute the GMEC. This disproportionately favors the sequence of a single, static conformation and overlooks better binding sequences with multiple low-energy conformations. Provable, ensemble-based algorithms such as A* avoid this problem, but A* cannot guarantee better performance than exhaustive enumeration. We propose a novel, provable, dynamic programming algorithm called Branch-Width Minimization* (BWM*) to enumerate a gap-free ensemble of conformations in order of increasing energy. Given a branch-decomposition of branch-width w for an n-residue protein design with at most q discrete side-chain conformations per residue, BWM* returns the sparse GMEC in O([Formula: see text]) time and enumerates each additional conformation in merely O([Formula: see text]) time. We define a new measure, Total Effective Search Space (TESS), which can be computed efficiently a priori before BWM* or A* is run. We ran BWM* on 67 protein design problems and found that TESS discriminated between BWM*-efficient and A*-efficient cases with 100% accuracy. As predicted by TESS and validated experimentally, BWM* outperforms A* in 73% of the cases and computes the full ensemble or a close approximation faster than A*, enumerating each additional conformation in milliseconds. Unlike A*, the performance of BWM* can be predicted in polynomial time before running the algorithm, which gives protein designers the power to choose the most efficient algorithm for their particular design problem.

  19. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs when evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions, while the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need for ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D (in random space) elliptic stochastic partial differential equations.

  20. Comparison Between Polynomial, Euler Beta-Function and Expo-Rational B-Spline Bases

    NASA Astrophysics Data System (ADS)

    Kristoffersen, Arnt R.; Dechevsky, Lubomir T.; Lakså, Arne; Bang, Børre

    2011-12-01

    Euler Beta-function B-splines (BFBS) are the practically most important instance of generalized expo-rational B-splines (GERBS) which are not true expo-rational B-splines (ERBS). BFBS do not enjoy the full range of the superproperties of ERBS but, while ERBS are special functions computable by very rapidly converging yet approximate numerical quadrature algorithms, BFBS are explicitly computable piecewise polynomials (for integer multiplicities), similar to classical Schoenberg B-splines. In the present communication we define, compute and visualize for the first time all possible BFBS of degree up to 3 which provide Hermite interpolation in three consecutive knots of multiplicity up to 3, i.e., the function is interpolated together with its derivatives of order up to 2. We compare the BFBS obtained for different degrees and multiplicities among themselves and against the classical Schoenberg polynomial B-splines and the true ERBS for the considered knots. The results of the graphical comparison are discussed from an analytical point of view. For the numerical computation and visualization of the new B-splines we have used Maple 12.

  1. Analytical and numerical construction of vertical periodic orbits about triangular libration points based on polynomial expansion relations among directions

    NASA Astrophysics Data System (ADS)

    Qian, Ying-Jing; Yang, Xiao-Dong; Zhai, Guan-Qiao; Zhang, Wei

    2017-08-01

    Inspired by the nonlinear modes concept in vibrational dynamics, the vertical periodic orbits around the triangular libration points are revisited for the Circular Restricted Three-body Problem. The ζ-component motion is treated as the dominant motion, and the ξ- and η-component motions are treated as the slave motions. The slave motions are in nature related to the dominant motion through approximate nonlinear polynomial expansions with respect to the ζ-position and ζ-velocity during one of the periodic orbital motions. By employing the relations among the three directions, the three-dimensional system can be transformed into a one-dimensional problem. Then the approximate three-dimensional vertical periodic solution can be analytically obtained by solving the dominant motion in the ζ-direction only. To demonstrate the effectiveness of the proposed method, an accuracy study was carried out to validate the polynomial expansion (PE) method. As one of the applications, the invariant nonlinear relations in polynomial expansion form are used as constraints to obtain numerical solutions by differential correction. The nonlinear relations among the directions provide an alternative point of view to explore the overall dynamics of periodic orbits around libration points with general rules.

  2. Polynomial-time quantum algorithm for the simulation of chemical dynamics

    PubMed Central

    Kassal, Ivan; Jordan, Stephen P.; Love, Peter J.; Mohseni, Masoud; Aspuru-Guzik, Alán

    2008-01-01

    The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born–Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits. PMID:19033207

  3. Investigation of approximate models of experimental temperature characteristics of machines

    NASA Astrophysics Data System (ADS)

    Parfenov, I. V.; Polyakov, A. N.

    2018-05-01

    This work investigates various approaches to the approximation of experimental data and the creation of simulation mathematical models of thermal processes in machines, with the aim of reducing the time of their field tests and the temperature error of the treatments. The main research methods used in this work are: full-scale thermal testing of machines; approximation of the experimental temperature characteristics of machine tools by polynomial models using various approaches; and analysis and evaluation of the modelling results (model quality) for the temperature characteristics of machines and their derivatives up to the third order in time. As a result of this research, rational methods, types, parameters and complexity of simulation mathematical models of thermal processes in machine tools are proposed.

  4. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.

  5. Estimation of Phase in Fringe Projection Technique Using High-order Instantaneous Moments Based Method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, G.; Rastogi, Pramod

    2010-04-01

    For three-dimensional (3D) shape measurement using fringe projection techniques, the information about the 3D shape of an object is encoded in the phase of a recorded fringe pattern. The paper proposes a high-order instantaneous moments based method to estimate phase from a single fringe pattern in fringe projection. The proposed method works by approximating the phase as a piece-wise polynomial and subsequently determining the polynomial coefficients using high-order instantaneous moments to construct the polynomial phase. Simulation results are presented to show the method's potential.

  6. The Use of Generalized Laguerre Polynomials in Spectral Methods for Solving Fractional Delay Differential Equations.

    PubMed

    Khader, M M

    2013-10-01

    In this paper, an efficient numerical method for solving the fractional delay differential equations (FDDEs) is considered. The fractional derivative is described in the Caputo sense. The proposed method is based on the derived approximate formula of the Laguerre polynomials. The properties of Laguerre polynomials are utilized to reduce FDDEs to a linear or nonlinear system of algebraic equations. Special attention is given to study the error and the convergence analysis of the proposed method. Several numerical examples are provided to confirm that the proposed method is in excellent agreement with the exact solution.

  7. Associating optical measurements of MEO and GEO objects using Population-Based Meta-Heuristic methods

    NASA Astrophysics Data System (ADS)

    Zittersteijn, M.; Vananti, A.; Schildknecht, T.; Dolado Perez, J. C.; Martinot, V.

    2016-11-01

    Currently several thousands of objects are being tracked in the MEO and GEO regions through optical means. The problem faced in this framework is that of Multiple Target Tracking (MTT). The MTT problem quickly becomes an NP-hard combinatorial optimization problem. This means that the effort required to solve the MTT problem increases exponentially with the number of tracked objects. In an attempt to find an approximate solution of sufficient quality, several Population-Based Meta-Heuristic (PBMH) algorithms are implemented and tested on simulated optical measurements. These first results show that one of the tested algorithms, namely the Elitist Genetic Algorithm (EGA), consistently displays the desired behavior of finding good approximate solutions before reaching the optimum. The results further suggest that the algorithm possesses a polynomial time complexity, as the computation times are consistent with a polynomial model. With the advent of improved sensors and a heightened interest in the problem of space debris, it is expected that the number of tracked objects will grow by an order of magnitude in the near future. This research aims to provide a method that can treat the association and orbit determination problems simultaneously, and is able to efficiently process large data sets with minimal manual intervention.

  8. Application of shifted Jacobi pseudospectral method for solving (in)finite-horizon min-max optimal control problems with uncertainty

    NASA Astrophysics Data System (ADS)

    Nikooeinejad, Z.; Delavarkhalafi, A.; Heydari, M.

    2018-03-01

    The difficulty of solving min-max optimal control problems (M-MOCPs) with uncertainty using generalised Euler-Lagrange equations is caused by the combination of split boundary conditions, nonlinear differential equations and the manner in which the final time is treated. In this investigation, the shifted Jacobi pseudospectral method (SJPM) is proposed as a numerical technique for solving two-point boundary value problems (TPBVPs) in M-MOCPs for several boundary states. At first, a novel framework of approximate solutions which satisfy the split boundary conditions automatically for various boundary states is presented. Then, by applying the generalised Euler-Lagrange equations and expanding the required approximate solutions as elements of shifted Jacobi polynomials, finding a solution of TPBVPs in nonlinear M-MOCPs with uncertainty is reduced to the solution of a system of algebraic equations. Moreover, the Jacobi polynomials are particularly useful for boundary value problems in unbounded domains, which allows us to solve infinite-horizon as well as finite and free final time problems by a domain truncation method. Some numerical examples are given to demonstrate the accuracy and efficiency of the proposed method. A comparative study between the proposed method and other existing methods shows that the SJPM is simple and accurate.

  9. Recursive algorithms for phylogenetic tree counting.

    PubMed

    Gavryushkina, Alexandra; Welch, David; Drummond, Alexei J

    2013-10-28

    In Bayesian phylogenetic inference we are interested in distributions over a space of trees. The number of trees in a tree space is an important characteristic of the space and is useful for specifying prior distributions. When all samples come from the same time point and no prior information is available on divergence times, the tree counting problem is easy. However, when fossil evidence is used in the inference to constrain the tree or data are sampled serially, new tree spaces arise and counting the number of trees is more difficult. We describe an algorithm, polynomial in the number of sampled individuals, for counting resolutions of a constraint tree, assuming that the number of constraints is fixed. We generalise this algorithm to counting resolutions of a fully ranked constraint tree. We describe a quadratic algorithm for counting the number of possible fully ranked trees on n sampled individuals. We introduce a new type of tree, called a fully ranked tree with sampled ancestors, and describe a cubic time algorithm for counting the number of such trees on n sampled individuals. These algorithms should be employed for Bayesian Markov chain Monte Carlo inference when fossil data are included or data are serially sampled.
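
    For the easy contemporaneous-sampling case mentioned first, the count has a simple product form: each coalescence event merges one of C(k, 2) available pairs, so the number of fully ranked labelled binary trees on n tips is the product of C(k, 2) for k = 2, ..., n, i.e. n!(n-1)!/2^(n-1). A sketch of that baseline (the serially sampled and constrained cases of the paper require the more elaborate recursions):

      from math import comb

      def ranked_tree_count(n):
          # number of fully ranked labelled binary trees on n contemporaneous tips:
          # each coalescent event joins one of C(k, 2) pairs, k = n, n-1, ..., 2
          total = 1
          for k in range(2, n + 1):
              total *= comb(k, 2)
          return total

      for n in (2, 3, 4, 10):
          print(n, ranked_tree_count(n))   # 1, 3, 18, ...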

  10. Analytical Phase Equilibrium Function for Mixtures Obeying Raoult's and Henry's Laws

    NASA Astrophysics Data System (ADS)

    Hayes, Robert

    When a mixture of two substances exists in both the liquid and gas phases at equilibrium, Raoult's and Henry's laws (the ideal solution and ideal dilute solution approximations) can be used to estimate the gas and liquid mole fractions at the extremes of either very little solute or very little solvent. By assuming that a cubic polynomial can reasonably approximate the values intermediate between these extremes as a function of mole fraction, the cubic polynomial is solved and presented. A closed-form equation approximating the pressure dependence on the mole fraction of the constituents is thereby obtained. As a first approximation, this is a very simple and potentially useful means to estimate gas and liquid mole fractions of equilibrium mixtures. Mixtures with an azeotrope require additional attention if this type of approach is to be utilized. This work was supported in part by federal grant NRC-HQ-84-14-G-0059.
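
    Matching both dilute limits pins down the cubic with no free parameters: demanding p(0) = 0, p'(0) = K_H (Henry) and p(1) = p*, p'(1) = p* (Raoult) yields p(x) = K_H x + 2(p* - K_H) x^2 + (K_H - p*) x^3. A sketch with illustrative constants (our numbers, not the paper's):

      def henry_raoult_cubic(K_H, p_star):
          # cubic p(x) = a*x + b*x**2 + c*x**3 with p(0) = 0, p'(0) = K_H
          # (Henry limit) and p(1) = p*, p'(1) = p* (Raoult limit)
          a = K_H
          b = 2.0 * (p_star - K_H)
          c = K_H - p_star
          return lambda x: a * x + b * x**2 + c * x**3

      p = henry_raoult_cubic(K_H=1.67e5, p_star=3.17e3)   # illustrative constants (Pa)
      for x in (0.0, 0.25, 0.5, 0.75, 1.0):
          print(x, p(x))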

  11. Certain approximation problems for functions on the infinite-dimensional torus: Lipschitz spaces

    NASA Astrophysics Data System (ADS)

    Platonov, S. S.

    2018-02-01

    We consider some questions about the approximation of functions on the infinite-dimensional torus by trigonometric polynomials. Our main results are analogues of the direct and inverse theorems in the classical theory of approximation of periodic functions and a description of the Lipschitz spaces on the infinite-dimensional torus in terms of the best approximation.

  12. Algebraic approach to solve ttbar dilepton equations

    NASA Astrophysics Data System (ADS)

    Sonnenschein, Lars

    2006-01-01

    The set of non-linear equations describing the Standard Model kinematics of the top quark-antiquark production system in the dilepton decay channel has at most a four-fold ambiguity due to two not fully reconstructed neutrinos. Its most precise and robust solution is of major importance for measurements of top quark properties like the top quark mass and ttbar spin correlations. Simple algebraic operations allow one to transform the non-linear equations into a system of two polynomial equations with two unknowns. These two polynomials of multidegree eight can in turn be analytically reduced to one polynomial with one unknown by means of resultants. The obtained univariate polynomial is of degree sixteen and its coefficients are free of any singularity. The number of its real solutions is determined analytically by means of Sturm's theorem, which is also used to isolate each real solution into a unique pairwise disjoint interval. The solutions are polished by seeking the sign change of the polynomial in a given interval through binary bracketing. Further, a new Ansatz, exploiting an accidental cancellation in the process of transforming the equations, is presented. It permits one to transform the initial system of equations into two polynomial equations with two unknowns. These two polynomials of multidegree two can be reduced to one univariate polynomial of degree four by means of resultants. The obtained quartic equation can be solved analytically. The analytical solution has singularities which can be circumvented by the algebraic approach described above.
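
    The elimination-plus-Sturm pipeline can be sketched with a computer algebra system. Below, a toy bivariate system stands in for the two multidegree-eight polynomials; the resultant eliminates one unknown, and sympy's real-root routines, which are based on Sturm sequences, count and isolate the real solutions:

      import sympy as sp

      x, y = sp.symbols('x y')
      # toy system standing in for the two multidegree-eight polynomials
      f = x**2 + y**2 - 4
      g = x*y - 1

      r = sp.resultant(f, g, y)            # eliminate y -> univariate in x
      print(sp.expand(r))                  # x**4 - 4*x**2 + 1
      P = sp.Poly(r, x)
      print(P.count_roots())               # number of real roots (Sturm sequences)
      print(P.intervals())                 # pairwise disjoint isolating intervals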

  13. Verifying the error bound of numerical computation implemented in computer systems

    DOEpatents

    Sawada, Jun

    2013-03-12

    A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two non-overlapping segments and converts, for each segment, the polynomial of bounded functions into a simplified formula comprising a polynomial, an inequality, and a constant. The verification tool calculates upper bounds of the polynomial for the segments, beginning with a selected segment, and reports the segments that violate the bounding condition.
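
    A toy analogue of the split-and-bound step (not the patented tool, and with a deliberately crude triangle-inequality bound in place of the simplified formulas the patent derives):

      def poly_abs_bound(coeffs, a, b):
          """Upper bound of |p(x)| on [a, b] via the triangle inequality;
          coeffs are c_0..c_n, lowest degree first."""
          m = max(abs(a), abs(b))
          return sum(abs(c) * m**k for k, c in enumerate(coeffs))

      def flag_segments(coeffs, lo, hi, n_seg, error_bound):
          """Split [lo, hi] into non-overlapping segments and report those
          whose bound violates the required error bound."""
          width = (hi - lo) / n_seg
          bad = []
          for i in range(n_seg):
              a = lo + i * width
              b = a + width
              if poly_abs_bound(coeffs, a, b) > error_bound:
                  bad.append((a, b))
          return bad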

  14. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.

  15. Kurtosis Approach Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation of post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and approximable by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, subject to higher-order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: Independent Component Analysis, Kurtosis, Higher order statistics.

  16. Spline approximation, Part 1: Basic methodology

    NASA Astrophysics Data System (ADS)

    Ezhov, Nikolaj; Neitzel, Frank; Petrovic, Svetozar

    2018-04-01

    In engineering geodesy point clouds derived from terrestrial laser scanning or from photogrammetric approaches are almost never used as final results. For further processing and analysis a curve or surface approximation with a continuous mathematical function is required. In this paper the approximation of 2D curves by means of splines is treated. Splines offer quite flexible and elegant solutions for interpolation or approximation of "irregularly" distributed data. Depending on the problem they can be expressed as a function or as a set of equations that depend on some parameter. Many different types of splines can be used for spline approximation and all of them have certain advantages and disadvantages depending on the approximation problem. In a series of three articles spline approximation is presented from a geodetic point of view. In this paper (Part 1) the basic methodology of spline approximation is demonstrated using splines constructed from ordinary polynomials and splines constructed from truncated polynomials. In the forthcoming Part 2 the notion of B-spline will be explained in a unique way, namely by using the concept of convex combinations. The numerical stability of all spline approximation approaches as well as the utilization of splines for deformation detection will be investigated on numerical examples in Part 3.
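
    A minimal sketch of a spline built from truncated polynomials (the truncated power basis), fitted by least squares; the knot placement and the test data below are illustrative assumptions:

      import numpy as np

      def truncated_power_design(x, knots, degree=3):
          """Design matrix with columns 1, x, ..., x**degree and
          (x - t)_+**degree for each interior knot t."""
          cols = [x**k for k in range(degree + 1)]
          cols += [np.clip(x - t, 0.0, None)**degree for t in knots]
          return np.column_stack(cols)

      x = np.linspace(0.0, 10.0, 200)
      rng = np.random.default_rng(0)
      y = np.sin(x) + 0.05 * rng.standard_normal(x.size)   # noisy 2D curve
      knots = np.linspace(1.0, 9.0, 7)
      A = truncated_power_design(x, knots)
      coef, *_ = np.linalg.lstsq(A, y, rcond=None)
      y_fit = A @ coef                                     # approximated curve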

  17. Synthesis of MCMC and Belief Propagation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahn, Sungsoo; Chertkov, Michael; Shin, Jinwoo

    Markov Chain Monte Carlo (MCMC) and Belief Propagation (BP) are the most popular algorithms for computational inference in Graphical Models (GM). In principle, MCMC is an exact probabilistic method which, however, often suffers from exponentially slow mixing. In contrast, BP is a deterministic method which is typically fast and empirically very successful, but which in general lacks control of accuracy over loopy graphs. In this paper, we introduce MCMC algorithms correcting the approximation error of BP, i.e., we provide a way to compensate for BP errors via a consecutive BP-aware MCMC. Our framework is based on the Loop Calculus (LC) approach, which allows one to express the BP error as a sum of weighted generalized loops. Although the full series is computationally intractable, it is known that a truncated series, summing up all 2-regular loops, is computable in polynomial time for planar pair-wise binary GMs, and it also provides a highly accurate approximation empirically. Motivated by this, we first propose a polynomial-time approximation MCMC scheme for the truncated series of general (non-planar) pair-wise binary models. Our main idea here is to use the Worm algorithm, known to provide fast mixing in other (related) problems, and then design an appropriate rejection scheme to sample 2-regular loops. Furthermore, we also design an efficient rejection-free MCMC scheme for approximating the full series. The main novelty underlying our design is in utilizing the concept of cycle basis, which provides an efficient decomposition of the generalized loops. In essence, the proposed MCMC schemes run on a transformed GM built upon the non-trivial BP solution, and our experiments show that this synthesis of BP and MCMC outperforms both direct MCMC and bare BP schemes.

  18. Uncertainty Quantification in CO2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms small in the expansion. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
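
    The MIP basis selection is beyond a short sketch, but the basic PCE regression step it feeds can be illustrated in one variable with probabilists' Hermite polynomials; the sample count, degree, and stand-in model below are assumptions:

      import numpy as np
      from numpy.polynomial.hermite_e import hermevander

      rng = np.random.default_rng(1)
      xi = rng.standard_normal(50)          # samples of the uncertain input
      y = np.exp(0.3 * xi) + 0.1 * xi**2    # stand-in for simulator output

      degree = 4
      V = hermevander(xi, degree)           # He_0..He_4 evaluated at samples
      coef, *_ = np.linalg.lstsq(V, y, rcond=None)

      # the surrogate now replaces the simulator in Monte Carlo sampling
      xi_mc = rng.standard_normal(100_000)
      y_mc = hermevander(xi_mc, degree) @ coef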

  19. Communication: Fitting potential energy surfaces with fundamental invariant neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Kejie; Chen, Jun; Zhao, Zhiqiang

    A more flexible neural network (NN) method using the fundamental invariants (FIs) as the input vector is proposed in the construction of potential energy surfaces for molecular systems involving identical atoms. Mathematically, FIs finitely generate the permutation invariant polynomial (PIP) ring. In combination with NN, the fundamental invariant neural network (FI-NN) can approximate any function to arbitrary accuracy. Because FI-NN minimizes the size of input permutation invariant polynomials, it can efficiently reduce the evaluation time of potential energy, in particular for polyatomic systems. In this work, we provide the FIs for all possible molecular systems up to five atoms. Potential energy surfaces for OH3 and CH4 were constructed with FI-NN, with the accuracy confirmed by full-dimensional quantum dynamic scattering and bound state calculations.

  20. A fully-coupled discontinuous Galerkin spectral element method for two-phase flow in petroleum reservoirs

    NASA Astrophysics Data System (ADS)

    Taneja, Ankur; Higdon, Jonathan

    2018-01-01

    A high-order spectral element discontinuous Galerkin method is presented for simulating immiscible two-phase flow in petroleum reservoirs. The governing equations involve a coupled system of strongly nonlinear partial differential equations for the pressure and fluid saturation in the reservoir. A fully implicit method is used with high-order accurate time integration using an implicit Rosenbrock method. Numerical tests give the first demonstration of high-order hp spatial convergence results for multiphase flow in petroleum reservoirs with industry-standard relative permeability models. High-order convergence is shown formally for spectral elements with up to 8th order polynomials for both homogeneous and heterogeneous permeability fields. Numerical results are presented for multiphase fluid flow in heterogeneous reservoirs with complex geometric or geologic features using up to 11th order polynomials. Robust, stable simulations are presented for heterogeneous geologic features, including globally heterogeneous permeability fields, anisotropic permeability tensors, broad regions of low permeability, high-permeability channels, thin shale barriers and thin high-permeability fractures. A major result of this paper is the demonstration that the resolution of the high-order spectral element method may be exploited to achieve accurate results utilizing a simple Cartesian mesh for non-conforming geological features. Eliminating the need to mesh to the boundaries of geological features greatly simplifies the workflow for petroleum engineers testing multiple scenarios in the face of uncertainty in the subsurface geology.

  1. Robust Algorithms for Maximum Independent Set on Minor-Free Graphs Based on the Sherali-Adams Hierarchy

    NASA Astrophysics Data System (ADS)

    Magen, Avner; Moharrami, Mohammad

    This work provides a Linear Programming-based Polynomial Time Approximation Scheme (PTAS) for two classical NP-hard problems on graphs when the input graph is guaranteed to be planar, or more generally minor-free. The algorithm applies a sufficiently large number of rounds of the so-called Sherali-Adams Lift-and-Project system; f(ε) rounds are needed to obtain a (1+ε)-approximation, where f is some function that depends only on the graph that must be avoided as a minor. The problems we discuss are the well-studied Maximum Independent Set and Vertex Cover problems. A curious fact we expose is that, in the world of minor-free graphs, one of these problems is in some sense harder than the other.

  2. Final Shape of Precision Molded Optics: Part 1 - Computational Approach, Material Definitions and the Effect of Lens Shape

    DTIC Science & Technology

    2012-05-15

    subroutine by adding time-dependence to the thermal expansion coefficient. The user subroutine was written in Intel Visual Fortran that is compatible...temperature history dependent expansion and contraction, and the molds were modeled as elastic taking into account both mechanical and thermal strain. In...behavior was approximated by assuming the thermal coefficient of expansion to be a fourth order polynomial function of temperature. The authors

  3. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  4. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solving a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  5. Finding the Best Quadratic Approximation of a Function

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2011-01-01

    This article examines the question of finding the best quadratic function to approximate a given function on an interval. The prototypical function considered is f(x) = e[superscript x]. Two approaches are considered, one based on Taylor polynomial approximations at various points in the interval under consideration, the other based on the fact…
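
    In the same spirit, a numerical comparison of a Taylor quadratic against a least-squares quadratic for f(x) = e^x on [0, 1]; the dense-grid L2 fit below is a stand-in for the exact projection:

      import numpy as np

      f = np.exp
      x = np.linspace(0.0, 1.0, 2001)

      x0 = 0.5                               # Taylor quadratic about the midpoint
      taylor = f(x0) * (1 + (x - x0) + 0.5 * (x - x0)**2)

      V = np.vander(x, 3, increasing=True)   # least-squares quadratic
      c, *_ = np.linalg.lstsq(V, f(x), rcond=None)
      lsq = V @ c

      print(np.max(np.abs(taylor - f(x))), np.max(np.abs(lsq - f(x))))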

  6. Orbital component extraction by time-variant sinusoidal modeling.

    NASA Astrophysics Data System (ADS)

    Sinnesael, Matthias; Zivanovic, Miroslav; De Vleeschouwer, David; Claeys, Philippe; Schoukens, Johan

    2016-04-01

    Accurately deciphering periodic variations in paleoclimate proxy signals is essential for cyclostratigraphy. Classical spectral analysis often relies on methods based on the (Fast) Fourier Transform. This technique has no unique solution separating variations in amplitude and frequency. This characteristic makes it difficult to correctly interpret a proxy's power spectrum or to accurately evaluate simultaneous changes in amplitude and frequency in evolutionary analyses. Here, we circumvent this drawback by using a polynomial approach to estimate instantaneous amplitude and frequency in orbital components. This approach has proven useful for characterizing audio signals (music and speech), which are non-stationary in nature (Zivanovic and Schoukens, 2010, 2012). Paleoclimate proxy signals and audio signals have similar dynamics; the only difference is the frequency relationship between the different components. A harmonic frequency relationship exists in audio signals, whereas this relation is non-harmonic in paleoclimate signals. However, the latter difference is irrelevant for the problem at hand. Using a sliding-window approach, the model captures time variations of an orbital component by modulating a stationary sinusoid centered at its mean frequency with a single polynomial. Hence, the parameters that determine the model are the mean frequency of the orbital component and the polynomial coefficients. The first parameter depends on geologic interpretation, whereas the latter are estimated by means of linear least squares. As an output, the model provides the orbital component waveform, either in the depth or time domain. Furthermore, it allows for a unique decomposition of the signal into its instantaneous amplitude and frequency. Frequency modulation patterns can be used to reconstruct changes in accumulation rate, whereas amplitude modulation can be used to reconstruct, e.g., eccentricity-modulated precession. The time-variant sinusoidal model is applied to well-established Pleistocene benthic isotope records to evaluate its performance. Zivanovic M. and Schoukens J. (2010) On the Polynomial Approximation for Time-Variant Harmonic Signal Modeling. IEEE Transactions on Audio, Speech, and Language Processing, vol. 19, no. 3, pp. 458-467. Doi: 10.1109/TASL.2010.2049673. Zivanovic M. and Schoukens J. (2012) Single and Piecewise Polynomials for Modeling of Pitched Sounds. IEEE Transactions on Audio, Speech, and Language Processing, vol. 20, no. 4, pp. 1270-1281. Doi: 10.1109/TASL.2011.2174228.
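
    A single-window, single-component sketch of such a model, with the mean frequency f0 assumed known and polynomial amplitude envelopes estimated by linear least squares (a simplification of the published method):

      import numpy as np

      def fit_modulated_sinusoid(t, s, f0, deg=2):
          """Fit s(t) ~ a(t)*cos(2*pi*f0*t) + b(t)*sin(2*pi*f0*t), with a, b
          polynomials of degree deg, and return the instantaneous amplitude
          sqrt(a(t)**2 + b(t)**2)."""
          T = np.vander(t, deg + 1, increasing=True)
          c = np.cos(2 * np.pi * f0 * t)
          si = np.sin(2 * np.pi * f0 * t)
          A = np.hstack([T * c[:, None], T * si[:, None]])
          coef, *_ = np.linalg.lstsq(A, s, rcond=None)
          a = T @ coef[:deg + 1]
          b = T @ coef[deg + 1:]
          return np.hypot(a, b)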

  7. Weierstrass method for quaternionic polynomial root-finding

    NASA Astrophysics Data System (ADS)

    Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana

    2018-01-01

    Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas which motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
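
    For complex polynomials the classical Weierstrass (Durand-Kerner) iteration reads as below; the quaternionic variant of the paper additionally requires quaternion arithmetic and care with non-commutativity, which this sketch does not attempt:

      import numpy as np

      def durand_kerner(coeffs, iters=100):
          """Simultaneously approximate all zeros of a complex polynomial
          (coefficients highest degree first) by Weierstrass' iteration."""
          c = np.array(coeffs, complex)
          c /= c[0]                              # make the polynomial monic
          n = c.size - 1
          z = (0.4 + 0.9j) ** np.arange(n)       # standard starting points
          for _ in range(iters):
              z_new = z.copy()
              for i in range(n):
                  d = np.prod(z[i] - np.delete(z, i))
                  z_new[i] = z[i] - np.polyval(c, z[i]) / d
              z = z_new
          return z

      # e.g. durand_kerner([1, 0, -1]) approximates the zeros 1 and -1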

  8. Finding the Best-Fit Polynomial Approximation in Evaluating Drill Data: the Application of a Generalized Inverse Matrix / Poszukiwanie Najlepszej ZGODNOŚCI W PRZYBLIŻENIU Wielomianowym Wykorzystanej do Oceny Danych Z ODWIERTÓW - Zastosowanie UOGÓLNIONEJ Macierzy Odwrotnej

    NASA Astrophysics Data System (ADS)

    Karakus, Dogan

    2013-12-01

    In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered the polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
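
    A compact sketch of the underlying computation, with the Moore-Penrose generalized inverse of a Vandermonde matrix supplying the least-squares coefficients, and a simple held-out-data search standing in for the paper's best-regression degree selection:

      import numpy as np

      def polyfit_pinv(x, y, degree):
          """Least-squares polynomial coefficients c_0..c_degree via the
          generalized (Moore-Penrose) inverse of the Vandermonde matrix."""
          V = np.vander(x, degree + 1, increasing=True)
          return np.linalg.pinv(V) @ y

      def best_degree(x, y, x_val, y_val, max_degree):
          """Degree with the smallest squared error on held-out points."""
          errs = []
          for d in range(1, max_degree + 1):
              c = polyfit_pinv(x, y, d)
              pred = np.vander(x_val, d + 1, increasing=True) @ c
              errs.append(np.sum((pred - y_val)**2))
          return int(np.argmin(errs)) + 1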

  9. An Interpolation Approach to Optimal Trajectory Planning for Helicopter Unmanned Aerial Vehicles

    DTIC Science & Technology

    2012-06-01

    Armament Data Line DOF Degree of Freedom PS Pseudospectral LGL Legendre-Gauss-Lobatto quadrature nodes ODE Ordinary Differential Equation xiv...low order polynomials patched together in such a way that the resulting trajectory has several continuous derivatives at all points. In [7], Murray...claims that splines are ideal for optimal control problems because each segment of the spline's piecewise polynomials approximates the trajectory

  10. Least Squares Approximation By G1 Piecewise Parametric Cubes

    DTIC Science & Technology

    1993-12-01

    Parametric piecewise cubic polynomials are used throughout...piecewise parametric cubic polynomial to a sequence of ordered points in the plane. Cubic Bézier curves are used as a basis. The parameterization, the

  11. Approximation of eigenvalues of some differential equations by zeros of orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Volkmer, Hans

    2008-04-01

    Sequences of polynomials p_n, orthogonal with respect to signed measures, are associated with a class of differential equations including the Mathieu, Lamé and Whittaker-Hill equations. It is shown that the zeros of p_n form sequences which converge to the eigenvalues of the corresponding differential equations. Moreover, interlacing properties of the zeros of p_n are found. Applications to the numerical treatment of eigenvalue problems are given.

  12. Sampling-free Bayesian inversion with adaptive hierarchical tensor representations

    NASA Astrophysics Data System (ADS)

    Eigel, Martin; Marschall, Manuel; Schneider, Reinhold

    2018-03-01

    A sampling-free approach to Bayesian inversion with an explicit polynomial representation of the parameter densities is developed, based on an affine-parametric representation of a linear forward model. This becomes feasible due to the complete treatment in function spaces, which requires an efficient model reduction technique for numerical computations. The advocated perspective yields the crucial benefit that error bounds can be derived for all occurring approximations, leading to provable convergence subject to the discretization parameters. Moreover, it enables a fully adaptive a posteriori control with automatic problem-dependent adjustments of the employed discretizations. The method is discussed in the context of modern hierarchical tensor representations, which are used for the evaluation of a random PDE (the forward model) and the subsequent high-dimensional quadrature of the log-likelihood, alleviating the 'curse of dimensionality'. Numerical experiments demonstrate the performance and confirm the theoretical results.

  13. On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland W.

    1992-01-01

    The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.

  14. A ROM-Less Direct Digital Frequency Synthesizer Based on Hybrid Polynomial Approximation

    PubMed Central

    Omran, Qahtan Khalaf; Islam, Mohammad Tariqul; Misran, Norbahiah; Faruque, Mohammad Rashed Iqbal

    2014-01-01

    In this paper, a novel design approach for a phase to sinusoid amplitude converter (PSAC) has been investigated. Two segments have been used to approximate the first sine quadrant. A first linear segment is used to fit the region near the zero point, while a second fourth-order parabolic segment is used to approximate the rest of the sine curve. The phase sample at which the polynomial changes was chosen in such a way as to achieve the maximum spurious-free dynamic range (SFDR). The proposed direct digital frequency synthesizer (DDFS) has been encoded in VHDL and post simulation was carried out. The synthesized architecture exhibits a promising result of 90 dBc SFDR. The targeted structure is expected to show advantages in perceptible reduction of hardware resources and power consumption as well as high clock speeds. PMID:24892092
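
    The two-segment idea is easy to prototype in floating point; the breakpoint, segment orders and grid below are illustrative choices, not the values of the published design:

      import numpy as np

      x = np.linspace(0.0, 1.0, 4096)
      y = np.sin(0.5 * np.pi * x)          # first sine quadrant
      x_b = 0.2                            # assumed segment boundary

      approx = np.empty_like(y)
      seg1 = x <= x_b
      approx[seg1] = (np.pi / 2) * x[seg1]            # linear near zero
      V = np.vander(x[~seg1], 5, increasing=True)     # 4th-order segment
      c, *_ = np.linalg.lstsq(V, y[~seg1], rcond=None)
      approx[~seg1] = V @ c

      print(np.max(np.abs(approx - y)))    # worst-case amplitude error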

  15. Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Saxena, Nitin

    Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to a serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n) · d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities. We design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x_1, x_2, ..., x_n) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.

  16. Solution of the mean spherical approximation for polydisperse multi-Yukawa hard-sphere fluid mixture using orthogonal polynomial expansions

    NASA Astrophysics Data System (ADS)

    Kalyuzhnyi, Yurij V.; Cummings, Peter T.

    2006-03-01

    The Blum-Høye [J. Stat. Phys. 19 317 (1978)] solution of the mean spherical approximation for a multicomponent multi-Yukawa hard-sphere fluid is extended to a polydisperse multi-Yukawa hard-sphere fluid. Our extension is based on the application of the orthogonal polynomial expansion method of Lado [Phys. Rev. E 54, 4411 (1996)]. Closed form analytical expressions for the structural and thermodynamic properties of the model are presented. They are given in terms of the parameters that follow directly from the solution. By way of illustration the method of solution is applied to describe the thermodynamic properties of the one- and two-Yukawa versions of the model.

  17. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor

    PubMed Central

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-01-01

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Features from Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor, supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series, and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square root computations. Real-time results have been achieved, and pixel streams have been processed on the fly without any need to buffer the input frame for further processing. PMID:27983714
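
    Floating-point analogues of the two approximation tricks mentioned (the FPGA implementation works in fixed point, so treat this only as a sketch of the mathematics):

      import math

      def sin_taylor(x, terms=5):
          """Truncated Taylor series for sin(x); adequate once x has been
          range-reduced to roughly [-pi/2, pi/2]."""
          acc, term = 0.0, x
          for k in range(terms):
              acc += term
              term *= -x * x / ((2 * k + 2) * (2 * k + 3))
          return acc

      def inv_sqrt(a, iters=4):
          """1/sqrt(a) by Newton's iteration y <- y*(1.5 - 0.5*a*y*y),
          seeded by halving the binary exponent of a (the same idea as
          the classic bit-level trick)."""
          m, e = math.frexp(a)             # a = m * 2**e, m in [0.5, 1)
          y = math.ldexp(1.0, -e // 2)     # rough first guess for 1/sqrt(a)
          for _ in range(iters):
              y *= 1.5 - 0.5 * a * y * y
          return y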

  18. A Real-Time Marker-Based Visual Sensor Based on a FPGA and a Soft Core Processor.

    PubMed

    Tayara, Hilal; Ham, Woonchul; Chong, Kil To

    2016-12-15

    This paper introduces a real-time marker-based visual sensor architecture for mobile robot localization and navigation. A hardware acceleration architecture for a post video processing system was implemented on a field-programmable gate array (FPGA). The pose calculation algorithm was implemented in a System on Chip (SoC) with an Altera Nios II soft-core processor. For every frame, single-pass image segmentation and Features from Accelerated Segment Test (FAST) corner detection were used for extracting the predefined markers with known geometries in the FPGA. The coplanar POSIT algorithm was implemented on the Nios II soft-core processor, supplied with floating point hardware for accelerating floating point operations. Trigonometric functions have been approximated using Taylor series, and cubic approximation using Lagrange polynomials. An inverse square root method has been implemented for approximating square root computations. Real-time results have been achieved, and pixel streams have been processed on the fly without any need to buffer the input frame for further processing.

  19. Reynolds Number Effect on Spatial Development of Viscous Flow Induced by Wave Propagation Over Bed Ripples

    NASA Astrophysics Data System (ADS)

    Dimas, Athanassios A.; Kolokythas, Gerasimos A.

    Numerical simulations of the free-surface flow developing under the propagation of nonlinear water waves over a rippled bottom are performed, assuming that the corresponding flow is two-dimensional, incompressible and viscous. The simulations are based on the numerical solution of the Navier-Stokes equations subject to the fully-nonlinear free-surface boundary conditions and appropriate bottom, inflow and outflow boundary conditions. The equations are properly transformed so that the computational domain becomes time-independent. For the spatial discretization, a hybrid scheme is used in which central finite differences, in the horizontal direction, and a pseudo-spectral approximation method with Chebyshev polynomials, in the vertical direction, are applied. A fractional time-step scheme is used for the temporal discretization. Over the rippled bed, the wave boundary layer thickness increases significantly, in comparison to that over a flat bed, due to flow separation at the ripple crests, which generates alternating circulation regions. The amplitude of the wall shear stress over the ripples increases with increasing ripple height or decreasing Reynolds number, while the corresponding friction force is insensitive to changes in ripple height. The amplitude of the form drag force due to dynamic and hydrostatic pressures increases with increasing ripple height but is insensitive to changes in the Reynolds number; therefore, the percentage of friction in the total drag force decreases with increasing ripple height or increasing Reynolds number.

  20. Isogeometric Analysis of Boundary Integral Equations

    DTIC Science & Technology

    2015-04-21

    methods, IgA relies on Non-Uniform Rational B-splines (NURBS) [43, 46], T-splines [55, 53] or subdivision surfaces [21, 48, 51] rather than piecewise...structural dynamics [25, 26], plates and shells [15, 16, 27, 28, 37, 22, 23], phase-field models [17, 32, 33], and shape optimization [40, 41, 45, 59...polynomials for approximating the geometry and field variables. Thus, by replacing piecewise polynomials with NURBS or T-splines, one can develop

  1. Non-stationary component extraction in noisy multicomponent signal using polynomial chirping Fourier transform.

    PubMed

    Lu, Wenlong; Xie, Junwei; Wang, Heming; Sheng, Chuan

    2016-01-01

    Inspired by track-before-detect technology in radar, a novel time-frequency transform, namely the polynomial chirping Fourier transform (PCFT), is exploited to extract components from a noisy multicomponent signal. The PCFT combines the advantages of the Fourier transform and the polynomial chirplet transform to accumulate component energy along a polynomial chirping curve in the time-frequency plane. The particle swarm optimization algorithm is employed to search for optimal polynomial parameters with which the PCFT achieves a most concentrated energy ridge in the time-frequency plane for the target component. The component can be well separated in the polynomial chirping Fourier domain with a narrow-band filter and then reconstructed by the inverse PCFT. Furthermore, an iterative procedure, involving parameter estimation, PCFT, filtering and recovery, is introduced to extract components from a noisy multicomponent signal successively. Simulations and experiments show that the proposed method performs better in component extraction from a noisy multicomponent signal and provides more time-frequency detail about the analyzed signal than conventional methods.
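
    The energy-accumulation step of such a transform can be sketched by demodulating with an assumed polynomial frequency law and applying the FFT; the particle-swarm search for the law's parameters is omitted here:

      import numpy as np

      def pcft_ridge(s, t, freq_coeffs):
          """Demodulate s(t) by the phase of a polynomial frequency law
          (coefficients lowest order first) and return the FFT magnitude.
          A component following that law collapses to a narrow ridge."""
          f_poly = np.polynomial.Polynomial(freq_coeffs)
          phase = 2 * np.pi * f_poly.integ()(t)
          return np.abs(np.fft.fft(s * np.exp(-1j * phase)))

      t = np.linspace(0.0, 10.0, 4096)
      law = [5.0, 2.0, 0.5]                # f(t) = 5 + 2 t + 0.5 t**2
      phase = 2 * np.pi * np.polynomial.Polynomial(law).integ()(t)
      rng = np.random.default_rng(2)
      s = np.cos(phase) + 0.5 * rng.standard_normal(t.size)
      ridge = pcft_ridge(s, t, law)        # sharp peak near the zero bin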

  2. Quadratically Convergent Method for Simultaneously Approaching the Roots of Polynomial Solutions of a Class of Differential Equations

    NASA Astrophysics Data System (ADS)

    Recchioni, Maria Cristina

    2001-12-01

    This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approaching the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients, in a particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations which is associated with a given class of differential equations. The roots of the nonlinear equations are related to the roots of the polynomial solutions of the differential equations considered. Newton's method is applied to find the roots of these nonlinear equations. The nonsingularity of the roots of these nonlinear equations was studied in (Pasquini, 1994); in this paper, following the lines of that work, more favourable results are proven in the particular case of polynomial solutions with symmetrical zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out. We show that the algorithm derived by the proposed method is sometimes preferable to the classical QR type algorithms for computing the eigenvalues of the Jacobi matrices, even if these matrices are real and symmetric.

  3. Simulated quantum computation of molecular energies.

    PubMed

    Aspuru-Guzik, Alán; Dutoi, Anthony D; Love, Peter J; Head-Gordon, Martin

    2005-09-09

    The calculation time for the energy of atoms and molecules scales exponentially with system size on a classical computer but polynomially using quantum algorithms. We demonstrate that such algorithms can be applied to problems of chemical interest using modest numbers of quantum bits. Calculations of the water and lithium hydride molecular ground-state energies have been carried out on a quantum computer simulator using a recursive phase-estimation algorithm. The recursive algorithm reduces the number of quantum bits required for the readout register from about 20 to 4. Mappings of the molecular wave function to the quantum bits are described. An adiabatic method for the preparation of a good approximate ground-state wave function is described and demonstrated for a stretched hydrogen molecule. The number of quantum bits required scales linearly with the number of basis functions, and the number of gates required grows polynomially with the number of quantum bits.

  4. Quantum Chemistry on Quantum Computers: A Polynomial-Time Quantum Algorithm for Constructing the Wave Functions of Open-Shell Molecules.

    PubMed

    Sugisaki, Kenji; Yamamoto, Satoru; Nakazawa, Shigeaki; Toyota, Kazuo; Sato, Kazunobu; Shiomi, Daisuke; Takui, Takeji

    2016-08-18

    Quantum computers are capable of efficiently performing full configuration interaction (FCI) calculations of atoms and molecules by using the quantum phase estimation (QPE) algorithm. Because the success probability of the QPE depends on the overlap between approximate and exact wave functions, efficient methods to prepare accurate initial guess wave functions that have sufficiently large overlap with the exact ones are highly desired. Here, we propose a quantum algorithm to construct the wave function consisting of one configuration state function, which is suitable as the initial guess wave function in QPE-based FCI calculations of open-shell molecules, based on the addition theorem of angular momentum. The proposed quantum algorithm enables us to prepare a wave function consisting of an exponential number of Slater determinants with only a polynomial number of quantum operations.

  5. Numeric Modified Adomian Decomposition Method for Power System Simulations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dimitrovski, Aleksandar D; Simunovic, Srdjan; Pannala, Sreekanth

    This paper investigates the applicability of the numeric Wazwaz El-Sayed modified Adomian Decomposition Method (WES-ADM) for time domain simulation of power systems. WES-ADM is a numerical method based on a modified Adomian decomposition (ADM) technique. WES-ADM is a numerical approximation method for the solution of nonlinear ordinary differential equations. The non-linear terms in the differential equations are approximated using Adomian polynomials. In this paper WES-ADM is applied to time domain simulations of multimachine power systems. The WECC 3-generator, 9-bus system and the IEEE 10-generator, 39-bus system have been used to test the applicability of the approach. Several fault scenarios have been tested. It has been found that the proposed approach is faster than the trapezoidal method with comparable accuracy.
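
    The Adomian polynomials that approximate the nonlinear terms follow from the defining formula A_k = (1/k!) d^k/dλ^k N(Σ_i u_i λ^i) at λ = 0; a generic sympy sketch (not the WES-ADM solver itself):

      import sympy as sp

      def adomian_polynomials(N, n_terms):
          """First n_terms Adomian polynomials A_0..A_{n-1} of a
          nonlinearity N(u), from the defining derivative formula."""
          lam = sp.Symbol('lambda')
          u = sp.symbols(f'u0:{n_terms}')
          series = sum(u_i * lam**i for i, u_i in enumerate(u))
          expr = N(series)
          return [sp.expand(sp.diff(expr, lam, k).subs(lam, 0) / sp.factorial(k))
                  for k in range(n_terms)]

      # N(u) = u**2 gives A0 = u0**2, A1 = 2*u0*u1, A2 = u1**2 + 2*u0*u2
      print(adomian_polynomials(lambda v: v**2, 3))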

  6. Democratic superstring field theory: gauge fixing

    NASA Astrophysics Data System (ADS)

    Kroyter, Michael

    2011-03-01

    We show that a partial gauge fixing of the NS sector of the democratic-picture superstring field theory leads to the non-polynomial theory. Moreover, by partially gauge fixing the Ramond sector we obtain a non-polynomial fully RNS theory at pictures 0 and 1/2. Within the democratic theory and in the partially gauge fixed theory the equations of motion of both sectors are derived from an action. We also discuss a representation of the non-polynomial theory analogous to a manifestly two-dimensional representation of WZW theory and the action of bosonic pure-gauge solutions. We further demonstrate that one can consistently gauge fix the NS sector of the democratic theory at picture number -1. The resulting theory is new. It is a Z_2 dual of the modified cubic theory. We construct analytical solutions of this theory and show that they possess the desired properties.

  7. Free and Forced Vibrations of Thick-Walled Anisotropic Cylindrical Shells

    NASA Astrophysics Data System (ADS)

    Marchuk, A. V.; Gnedash, S. V.; Levkovskii, S. A.

    2017-03-01

    Two approaches to studying the free and forced axisymmetric vibrations of a cylindrical shell are proposed. They are based on the three-dimensional theory of elasticity and the division of the original cylindrical shell with concentric cross-sectional circles into several coaxial cylindrical shells. One approach uses linear polynomials to approximate functions defined in plan and across the thickness. The other approach also uses linear polynomials to approximate functions defined in plan, but their variation with thickness is described by the analytical solution of a system of differential equations. Both approaches have approximation and arithmetic errors. When determining the natural frequencies by the semi-analytical finite-element method in combination with the divide and conquer method, it is convenient to find the initial frequencies by the finite-element method. The behavior of the shell during free and forced vibrations is analyzed in the case where the loading area is half the shell thickness.

  8. Solution of the two-dimensional spectral factorization problem

    NASA Technical Reports Server (NTRS)

    Lawton, W. M.

    1985-01-01

    An approximation theorem is proven which solves a classic problem in two-dimensional (2-D) filter theory. The theorem shows that any continuous two-dimensional spectrum can be uniformly approximated by the squared modulus of a recursively stable finite trigonometric polynomial supported on a nonsymmetric half-plane.

  9. Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.

    2006-01-01

    Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as "shoeboxes," with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or subject to additional analysis with the more comprehensive underlying computational tools. The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.
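
    A generic sketch of fitting such a polynomial response model, with hypothetical coded cavity factors and synthetic responses in place of the actual reentry-load computations; the interaction columns are what reveal effects such as the length-by-depth dependence noted above:

      import numpy as np

      def quadratic_rsm_design(X):
          """Full quadratic response-surface basis: intercept, linear
          terms, two-factor interactions and pure quadratics."""
          n, k = X.shape
          cols = [np.ones(n)]
          cols += [X[:, i] for i in range(k)]
          cols += [X[:, i] * X[:, j]
                   for i in range(k) for j in range(i + 1, k)]
          cols += [X[:, i]**2 for i in range(k)]
          return np.column_stack(cols)

      rng = np.random.default_rng(3)
      X = rng.uniform(-1, 1, size=(60, 3))    # coded length, width, depth
      y = 1 + 2 * X[:, 0] + X[:, 0] * X[:, 2] + rng.normal(0, 0.05, 60)
      beta, *_ = np.linalg.lstsq(quadratic_rsm_design(X), y, rcond=None)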

  10. Application of the polynomial chaos expansion to approximate the homogenised response of the intervertebral disc.

    PubMed

    Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W

    2014-10-01

    A possibility to simulate the mechanical behaviour of the human spine is given by modelling the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the softer intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite-element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included into the MBS in two different ways. They can either be computed online in a so-called co-simulation of a MBS and a FEM or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD needs to be defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step needs to be applied to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper was to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. Following this, the main challenge is to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as was used in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between. In a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Herein, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.

  11. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (the slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data and thereby improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297-2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with the use of non-equidistant λ values, improve the accuracy of ΔF estimates without demanding additional simulations. We also provide general guidelines for the use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
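
    A minimal numpy sketch of the regression variant: fit a polynomial to thermodynamic-integration slopes at non-equidistant λ values and integrate it analytically over [0, 1]; the slope data below are synthetic stand-ins, not simulation output:

      import numpy as np

      def free_energy_from_ti(lambdas, dudl, degree=4):
          """Fit dU/dlambda with a polynomial and integrate it over
          [0, 1] to estimate the free energy difference."""
          coef = np.polynomial.polynomial.polyfit(lambdas, dudl, degree)
          anti = np.polynomial.Polynomial(coef).integ()
          return anti(1.0) - anti(0.0)

      lam = np.array([0.0, 0.05, 0.15, 0.35, 0.65, 0.85, 0.95, 1.0])
      dudl = np.sin(2.5 * lam) - 1.0       # synthetic TI slope data
      dF = free_energy_from_ti(lam, dudl)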

  12. An efficient higher order family of root finders

    NASA Astrophysics Data System (ADS)

    Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.

    2008-06-01

    A one-parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on the cubically convergent Hansen-Patrick family, is studied. We show that the convergence order of the basic fourth-order family can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use already calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions which provide both guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable; they depend only on the polynomial coefficients, its degree and the initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.

  13. Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.

    PubMed

    Schneider, Martin; Iskander, D Robert; Collins, Michael J

    2009-02-01

    High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.

  14. Global collocation methods for approximation and the solution of partial differential equations

    NASA Technical Reports Server (NTRS)

    Solomonoff, A.; Turkel, E.

    1986-01-01

    Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solutions of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of the collocation points is constructed. The approximate derivative is then found by a matrix times vector multiply. The effects of several factors on the performance of these methods including the effect of different collocation points are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative are also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
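
    The derivative-matrix construction is classical; for Chebyshev collocation points it takes the familiar form below (following Trefethen's well-known "cheb" recipe, one concrete instance of the general construction discussed):

      import numpy as np

      def cheb(N):
          """Chebyshev points x_j = cos(pi*j/N) and the differentiation
          matrix D, with D @ f(x) approximating f'(x)."""
          if N == 0:
              return np.zeros((1, 1)), np.ones(1)
          j = np.arange(N + 1)
          x = np.cos(np.pi * j / N)
          c = np.hstack([2.0, np.ones(N - 1), 2.0]) * (-1.0) ** j
          X = np.tile(x, (N + 1, 1)).T
          dX = X - X.T
          D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
          D -= np.diag(D.sum(axis=1))      # negative row sums on the diagonal
          return D, x

      D, x = cheb(16)
      err = np.max(np.abs(D @ np.exp(x) - np.exp(x)))   # spectrally small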

  15. An approximation technique for predicting the transient response of a second order nonlinear equation

    NASA Technical Reports Server (NTRS)

    Laurenson, R. M.; Baumgarten, J. R.

    1975-01-01

    An approximation technique has been developed for determining the transient response of a nonlinear dynamic system. The nonlinearities in the system which has been considered appear in the system's dissipation function. This function was expressed as a second order polynomial in the system's velocity. The developed approximation is an extension of the classic Kryloff-Bogoliuboff technique. Two examples of the developed approximation are presented for comparative purposes with other approximation methods.

  17. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    NASA Astrophysics Data System (ADS)

    Ahlfeld, R.; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously introduced only as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
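
    A minimal sketch of the moment-based construction SAMBA builds on, under the stated assumption that the Hankel moment matrix is positive definite: Cholesky-factor it to obtain the three-term recurrence coefficients, then recover Gauss nodes and weights Golub-Welsch style. Function names and the demo measure are my own; the formulas follow Gautschi's standard treatment of orthogonal polynomials from moments.

    ```python
    import numpy as np

    def gauss_from_moments(m, n):
        """n-point Gauss rule for the (unknown) measure with raw moments m[0..2n].

        Cholesky-factor the Hankel moment matrix to get the three-term
        recurrence coefficients (alpha, beta), then apply Golub-Welsch.
        Requires the moment matrix to be positive definite, as in aPC.
        """
        H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
        R = np.linalg.cholesky(H).T          # H = R^T R, R upper triangular
        alpha = np.empty(n)
        beta = np.empty(n - 1)
        for k in range(n):
            alpha[k] = R[k, k + 1] / R[k, k]
            if k > 0:
                alpha[k] -= R[k - 1, k] / R[k - 1, k - 1]
                beta[k - 1] = R[k, k] / R[k - 1, k - 1]
        J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)  # Jacobi matrix
        nodes, V = np.linalg.eigh(J)
        weights = m[0] * V[0, :] ** 2
        return nodes, weights

    # Moments of the uniform measure on [-1, 1]: m_k = (1 + (-1)^k) / (2k + 2).
    m = [(1 + (-1) ** k) / (2 * k + 2) for k in range(8)]
    nodes, weights = gauss_from_moments(m, 3)   # recovers 3-point Gauss-Legendre
    print(nodes, weights)                       # weights sum to m[0] = 1
    ```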

  18. A new order-theoretic characterisation of the polytime computable functions

    PubMed Central

    Avanzini, Martin; Eguchi, Naohi; Moser, Georg

    2015-01-01

    We propose a new order-theoretic characterisation of the class of polytime computable functions. To this end we define the small polynomial path order (sPOP⁎ for short). This termination order entails a new syntactic method to analyse the innermost runtime complexity of term rewrite systems fully automatically: for any rewrite system compatible with sPOP⁎ that employs recursion up to depth d, the (innermost) runtime complexity is bounded by a polynomial of degree d. This bound is tight. Thus we obtain a direct correspondence between a syntactic (and easily verifiable) condition of a program and the asymptotic worst-case complexity of the program. PMID:26412933

  19. Parallel algorithm for computation of second-order sequential best rotations

    NASA Astrophysics Data System (ADS)

    Redif, Soydan; Kasap, Server

    2013-12-01

    Algorithms for computing an approximate polynomial matrix eigenvalue decomposition of para-Hermitian systems have emerged as a powerful, generic signal processing tool. A technique that has shown much success in this regard is the sequential best rotation (SBR2) algorithm. Proposed is a scheme for parallelising SBR2 with a view to exploiting the modern architectural features and inherent parallelism of field-programmable gate array (FPGA) technology. Experiments show that the proposed scheme can achieve low execution times while requiring minimal FPGA resources.

  20. Comparison of Implicit Collocation Methods for the Heat Equation

    NASA Technical Reports Server (NTRS)

    Kouatchou, Jules; Jezequel, Fabienne; Zukor, Dorothy (Technical Monitor)

    2001-01-01

    We combine a high-order compact finite difference scheme to approximate spatial derivatives and collocation techniques for the time component to numerically solve the two-dimensional heat equation. We use two approaches to implement the collocation methods. The first one is based on an explicit computation of the coefficients of polynomials and the second one relies on differential quadrature. We compare them by studying their merits and analyzing their numerical performance. All our computations, based on parallel algorithms, are carried out on the CRAY SV1.

  1. Some Surprising Errors in Numerical Differentiation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.

    2012-01-01

    Data analysis methods, both numerical and visual, are used to discover a variety of surprising patterns in the errors associated with successive approximations to the derivatives of sinusoidal and exponential functions based on the Newton difference-quotient. L'Hopital's rule and Taylor polynomial approximations are then used to explain why these…
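
    A quick illustration of the kind of pattern the article analyzes (the function and step sizes here are arbitrary choices): the forward difference-quotient error for sin tracks the Taylor-remainder prediction (h/2)·f''(x).

    ```python
    import numpy as np

    # (f(x+h) - f(x)) / h = f'(x) + (h/2) f''(x) + O(h^2), so for f = sin the
    # error of the difference quotient is roughly -(h/2) sin(x).
    x = 1.0
    for h in [1e-1, 1e-2, 1e-3, 1e-4]:
        approx = (np.sin(x + h) - np.sin(x)) / h
        error = approx - np.cos(x)
        print(f"h={h:7.0e}  error={error: .3e}  predicted={-0.5 * h * np.sin(x): .3e}")
    ```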

  2. Polynomial Approximation of Functions: Historical Perspective and New Tools

    ERIC Educational Resources Information Center

    Kidron, Ivy

    2003-01-01

    This paper examines the effect of applying symbolic computation and graphics to enhance students' ability to move from a visual interpretation of mathematical concepts to formal reasoning. The mathematics topics involved, Approximation and Interpolation, were taught according to their historical development, and the students tried to follow the…

  3. Dynamical error bounds for continuum discretisation via Gauss quadrature rules—A Lieb-Robinson bound approach

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, M. P.; Centre for Quantum Technologies, National University of Singapore; QuTech, Delft University of Technology, Lorentzweg 1, 2611 CJ Delft

    2016-02-15

    Instances of discrete quantum systems coupled to a continuum of oscillators are ubiquitous in physics. Often the continua are approximated by a discrete set of modes. We derive error bounds on expectation values of system observables that have been time evolved under such discretised Hamiltonians. These bounds take on the form of a function of time and the number of discrete modes, where the discrete modes are chosen according to Gauss quadrature rules. The derivation makes use of tools from the field of Lieb-Robinson bounds and the theory of orthonormal polynomials.

  4. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ₁-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  5. Selection of polynomial chaos bases via Bayesian model uncertainty methods with applications to sparse approximation of PDEs with stochastic inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios, E-mail: georgios.karagiannis@pnnl.gov; Lin, Guang, E-mail: guang.lin@pnnl.gov

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.

  6. Polynomial dual energy inverse functions for bone Calcium/Phosphorus ratio determination and experimental evaluation.

    PubMed

    Sotiropoulou, P; Fountos, G; Martini, N; Koukou, V; Michail, C; Kandarakis, I; Nikiforidis, G

    2016-12-01

    An X-ray dual energy (XRDE) method was examined, using polynomial nonlinear approximation of inverse functions for the determination of the bone Calcium-to-Phosphorus (Ca/P) mass ratio. Inverse fitting functions with least-squares estimation were used to determine calcium and phosphate thicknesses. The method was verified by measuring test bone phantoms with a dedicated dual energy system and compared with previously published dual energy data. The accuracy in the determination of the calcium and phosphate thicknesses improved with the polynomial nonlinear inverse function method introduced in this work (errors ranging from 1.4% to 6.2%), compared to the corresponding linear inverse function method (1.4% to 19.5%).

  7. Two-dimensional orthonormal trend surfaces for prospecting

    NASA Astrophysics Data System (ADS)

    Sarma, D. D.; Selvaraj, J. B.

    Orthonormal polynomials have distinct advantages over conventional polynomials: the equations for evaluating trend coefficients are not ill-conditioned, and the convergence of the method is better than that of the direct least-squares approximation; the orthonormal-function approach therefore provides a powerful alternative to the least-squares method. In this paper, orthonormal polynomials in two dimensions are obtained using the Gram-Schmidt method for a polynomial series of the type Z = 1 + x + y + x² + xy + y² + … + yⁿ, where x and y are the locational coordinates and Z is the value of the variable under consideration. Trend-surface analysis, which has wide applications in prospecting, has been carried out using the orthonormal polynomial approach for two sample sets of data from India: gold accumulation data from the Kolar Gold Field, and gravity data. A comparison of the orthonormal polynomial trend surfaces with those obtained by the classical least-squares method has been made for the two data sets. In both cases, the orthonormal polynomial surfaces gave an improved fit to the data. A flowchart and a FORTRAN-IV computer program for deriving orthonormal polynomials of any order and for using them to fit trend surfaces are included. The program has provision for logarithmic transformation of the Z variable; if log-transformation is performed, the predicted Z values are reconverted to the original units and the trend-surface map generated for use. The illustration with gold assay data from the Champion lode system of the Kolar Gold Fields, for which a 9th-degree orthonormal trend surface was fitted, could be used for further prospecting in the area.
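
    A compact stand-in for the paper's FORTRAN-IV program, hedged accordingly: QR factorization of the design matrix is numerically equivalent to Gram-Schmidt-orthonormalizing the monomial terms over the data sites, which is what avoids the ill-conditioned normal equations. The data below are synthetic.

    ```python
    import numpy as np

    def orthonormal_trend_surface(x, y, z, degree=2):
        """Fit a trend surface with Gram-Schmidt-orthonormalized 2-D polynomials.

        Orthonormalizing the monomial terms 1, x, y, x^2, xy, y^2, ... over the
        data sites (here via QR, which is equivalent) avoids the ill-conditioned
        normal equations of a direct least-squares fit.
        """
        terms = [(i, s - i) for s in range(degree + 1) for i in range(s + 1)]
        A = np.column_stack([x**i * y**j for (i, j) in terms])
        Q, R = np.linalg.qr(A)         # Q's columns: orthonormal basis over the sites
        c = Q.T @ z                    # trend coefficients, no normal equations
        return Q @ c, terms, c         # fitted surface values at the data sites

    rng = np.random.default_rng(1)
    x, y = rng.uniform(0, 10, 50), rng.uniform(0, 10, 50)
    z = 2 + 0.5 * x - 0.3 * y + 0.05 * x * y + rng.standard_normal(50)
    fit, terms, c = orthonormal_trend_surface(x, y, z, degree=2)
    print("rms residual:", np.sqrt(np.mean((z - fit) ** 2)))
    ```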

  8. Effect of design selection on response surface performance

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1993-01-01

    Artificial neural nets and polynomial approximations were used to develop response surfaces for several test problems. Based on the number of functional evaluations required to build the approximations and the number of undetermined parameters associated with the approximations, the performance of the two types of approximations was found to be comparable. A rule of thumb is developed for determining the number of nodes to be used on a hidden layer of an artificial neural net and the number of designs needed to train an approximation is discussed.

  9. Constrained Chebyshev approximations to some elementary functions suitable for evaluation with floating point arithmetic

    NASA Technical Reports Server (NTRS)

    Manos, P.; Turner, L. R.

    1972-01-01

    Approximations which can be evaluated with precision using floating-point arithmetic are presented. The particular set of approximations thus far developed are for the function TAN and the functions of USASI FORTRAN excepting SQRT and EXPONENTIATION. These approximations are, furthermore, specialized to particular forms which are especially suited to a computer with a small memory, in that all of the approximations can share one general purpose subroutine for the evaluation of a polynomial in the square of the working argument.
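
    A sketch of the shared-subroutine idea (the coefficients below are just the Taylor prefix of tan, not the report's constrained Chebyshev coefficients): every approximation reduces to one Horner evaluation of a polynomial in the square of the working argument.

    ```python
    import math

    def poly_in_square(coeffs, x):
        """Shared Horner subroutine: evaluate c0 + c1*u + c2*u^2 + ... at u = x*x.

        Odd functions such as tan admit approximations of the form x * P(x^2),
        so one general-purpose routine can serve the whole function library.
        """
        u = x * x
        acc = 0.0
        for c in reversed(coeffs):
            acc = acc * u + c
        return acc

    # Illustrative coefficients: tan(x) ~ x * (1 + x^2/3 + 2x^4/15) near 0.
    x = 0.2
    approx = x * poly_in_square([1.0, 1.0 / 3.0, 2.0 / 15.0], x)
    print(approx, math.tan(x), abs(approx - math.tan(x)))
    ```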

  10. Supervised nonlinear spectral unmixing using a postnonlinear mixing model for hyperspectral imagery.

    PubMed

    Altmann, Yoann; Halimi, Abderrahim; Dobigeon, Nicolas; Tourneret, Jean-Yves

    2012-06-01

    This paper presents a nonlinear mixing model for hyperspectral image unmixing. The proposed model assumes that the pixel reflectances are nonlinear functions of pure spectral components contaminated by an additive white Gaussian noise. These nonlinear functions are approximated using polynomial functions leading to a polynomial postnonlinear mixing model. A Bayesian algorithm and optimization methods are proposed to estimate the parameters involved in the model. The performance of the unmixing strategies is evaluated by simulations conducted on synthetic and real data.

  11. Statistics of time delay and scattering correlation functions in chaotic systems. I. Random matrix theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Novaes, Marcel

    2015-06-15

    We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = −iħ S†dS/dE, where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.

  12. Transport coefficients in ultrarelativistic kinetic theory

    NASA Astrophysics Data System (ADS)

    Ambruş, Victor E.

    2018-02-01

    A spatially periodic longitudinal wave is considered in relativistic dissipative hydrodynamics. At sufficiently small wave amplitudes, an analytic solution is obtained in the linearized limit of the macroscopic conservation equations within the first- and second-order relativistic hydrodynamics formulations. A kinetic solver is used to obtain the numerical solution of the relativistic Boltzmann equation for massless particles in the Anderson-Witting approximation for the collision term. It is found that, at small values of the Anderson-Witting relaxation time τ , the transport coefficients emerging from the relativistic Boltzmann equation agree with those predicted through the Chapman-Enskog procedure, while the relaxation times of the heat flux and shear pressure are equal to τ . These claims are further strengthened by considering a moment-type approximation based on orthogonal polynomials under which the Chapman-Enskog results for the transport coefficients are exactly recovered.

  13. Approximability of the d-dimensional Euclidean capacitated vehicle routing problem

    NASA Astrophysics Data System (ADS)

    Khachay, Michael; Dubinin, Roman

    2016-10-01

    Capacitated Vehicle Routing Problem (CVRP) is the well-known intractable combinatorial optimization problem, which remains NP-hard even in the Euclidean plane. Since the introduction of this problem in the middle of the 20th century, many researchers have been involved in the study of its approximability. Most of the results obtained in this field are based on the well-known Iterated Tour Partition heuristic proposed by M. Haimovich and A. Rinnooy Kan in their celebrated paper, where they construct the first Polynomial Time Approximation Scheme (PTAS) for the single depot CVRP in ℝ². For decades, this result was extended by many authors to numerous useful modifications of the problem taking into account multiple depots, pick-up and delivery options, time window restrictions, etc. But, to the best of our knowledge, almost none of these results go beyond the Euclidean plane. In this paper, we try to bridge this gap and propose an EPTAS for the Euclidean CVRP for any fixed dimension.
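
    For flavor, a minimal sketch of the Iterated Tour Partition heuristic the paper builds on: split a traveling-salesman tour over the customers into consecutive capacity-sized groups and route each group through the depot. The angular-sweep "tour" and all parameters are placeholders; a real ITP would start from a good TSP tour and try every cyclic offset of the partition.

    ```python
    import math
    import random

    def iterated_tour_partition(depot, customers, tour, capacity):
        """Iterated Tour Partition, in sketch form.

        Split the given tour (a permutation of customer indices) into
        consecutive groups of at most `capacity` points; each group becomes
        one vehicle route out of and back to the depot.
        """
        routes = []
        for start in range(0, len(tour), capacity):
            group = [customers[i] for i in tour[start:start + capacity]]
            routes.append([depot] + group + [depot])
        return routes

    def route_length(route):
        return sum(math.dist(a, b) for a, b in zip(route, route[1:]))

    random.seed(3)
    depot = (0.0, 0.0)
    customers = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(12)]
    # Stand-in tour: an angular sweep around the depot.
    tour = sorted(range(12), key=lambda i: math.atan2(customers[i][1], customers[i][0]))
    routes = iterated_tour_partition(depot, customers, tour, capacity=4)
    print(len(routes), "routes, total length", sum(route_length(r) for r in routes))
    ```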

  14. A new third order finite volume weighted essentially non-oscillatory scheme on tetrahedral meshes

    NASA Astrophysics Data System (ADS)

    Zhu, Jun; Qiu, Jianxian

    2017-11-01

    In this paper a third-order finite volume weighted essentially non-oscillatory (WENO) scheme is designed for solving hyperbolic conservation laws on tetrahedral meshes. Compared with other finite volume WENO schemes designed on tetrahedral meshes, the crucial advantages of this new WENO scheme are its simplicity and compactness, with the application of only six unequal-size spatial stencils for reconstructing unequal-degree polynomials in the WENO-type spatial procedures, and an easy choice of positive linear weights without considering the topology of the meshes. The original innovation of this scheme is to use a quadratic polynomial defined on a big central spatial stencil for obtaining a third-order numerical approximation at any point inside the target tetrahedral cell in smooth regions, and to switch to at least one of five linear polynomials defined on small biased/central spatial stencils for sustaining sharp shock transitions and keeping the essentially non-oscillatory property simultaneously. By performing these new procedures in the spatial reconstruction and adopting a third-order TVD Runge-Kutta time discretization method for solving the ordinary differential equation (ODE), the new scheme's memory occupancy is decreased and its computing efficiency is increased, making it suitable for large-scale engineering computations on tetrahedral meshes. Some numerical results are provided to illustrate the good performance of the scheme.

  15. On the Rate of Relaxation for the Landau Kinetic Equation and Related Models

    NASA Astrophysics Data System (ADS)

    Bobylev, Alexander; Gamba, Irene M.; Zhang, Chenglong

    2017-08-01

    We study the rate of relaxation to equilibrium for the Landau kinetic equation and some related models by considering the relatively simple case of radial solutions of the linear Landau-type equations. The well-known difficulty is that the evolution operator has no spectral gap, i.e. its spectrum is not separated from zero. Hence we do not expect purely exponential relaxation for large values of time t > 0. One of the main goals of our work is to numerically identify the large-time asymptotics of the relaxation to equilibrium. We recall the work of Strain and Guo (Arch Rat Mech Anal 187:287-339, 2008; Commun Partial Differ Equ 31:417-429, 2006), who rigorously show that the expected law of relaxation is exp(−c t^(2/3)) with some c > 0. In this manuscript, we first derive this "law of two thirds" heuristically by asymptotic methods, and then study the question numerically. More specifically, the linear Landau equation is approximated by a set of ODEs based on expansions in generalized Laguerre polynomials. We analyze the corresponding quadratic form and the solution of these ODEs in detail. It is shown that the solution has two different asymptotic stages for large values of time t and maximal polynomial order N: the first is an intermediate asymptotic regime that agrees with the "law of two thirds" for moderately large values of t; the second is an absolute, purely exponential regime for very large t, as expected for linear ODEs. We believe that the appearance of intermediate asymptotics in finite-dimensional approximations must be a generic behavior for different classes of equations in functional spaces (some PDEs, Boltzmann equations for soft potentials, etc.) and that our methods can be applied to related problems.

  16. The Fixed-Links Model in Combination with the Polynomial Function as a Tool for Investigating Choice Reaction Time Data

    ERIC Educational Resources Information Center

    Schweizer, Karl

    2006-01-01

    A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…

  17. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.

  18. The Ponzano-Regge Model and Parametric Representation

    NASA Astrophysics Data System (ADS)

    Li, Dan

    2014-04-01

    We give a parametric representation of the effective noncommutative field theory derived from a -deformation of the Ponzano-Regge model and define a generalized Kirchhoff polynomial with -correction terms, obtained in a -linear approximation. We then consider the corresponding graph hypersurfaces and the question of how the presence of the correction term affects their motivic nature. We look in particular at the tetrahedron graph, which is the basic case of relevance to quantum gravity. With the help of computer calculations, we verify that the number of points over finite fields of the corresponding hypersurface does not fit polynomials with integer coefficients, hence the hypersurface of the tetrahedron is not polynomially countable. This shows that the correction term can change significantly the motivic properties of the hypersurfaces, with respect to the classical case.

  19. Blending Velocities In Task Space In Computing Robot Motions

    NASA Technical Reports Server (NTRS)

    Volpe, Richard A.

    1995-01-01

    Blending of linear and angular velocities between sequential specified points in task space constitutes the theoretical basis of an improved method of computing trajectories followed by robotic manipulators. In this method, a generalized velocity-vector-blending technique provides a relatively simple, common conceptual framework for blending linear, angular, and other parametric velocities. Velocity vectors originate from straight-line segments connecting specified task-space points, called "via frames," which represent specified robot poses. Linear-velocity-blending functions are chosen from among first-order, third-order-polynomial, and cycloidal options. Angular velocities are blended by use of a first-order approximation of a previous orientation-matrix-blending formulation. The angular-velocity approximation yields a small residual error, which is quantified and corrected. The method offers both the relative simplicity and the speed needed for generation of robot-manipulator trajectories in real time.

  20. Finite state modeling of aeroelastic systems

    NASA Technical Reports Server (NTRS)

    Vepa, R.

    1977-01-01

    A general theory of finite state modeling of aerodynamic loads on thin airfoils and lifting surfaces performing completely arbitrary, small, time-dependent motions in an airstream is developed and presented. The nature of the behavior of the unsteady airloads in the frequency domain is explained, using as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Pade approximant, that is, a rational function of finite degree polynomials in the Laplace transform variable. The modeling technique is applied to several two dimensional and three dimensional airfoils. Circular, elliptic, rectangular and tapered planforms are considered as examples. Identical functions are also obtained for control surfaces for two and three dimensional airfoils.

  1. Investigation on imperfection sensitivity of composite cylindrical shells using the nonlinearity reduction technique and the polynomial chaos method

    NASA Astrophysics Data System (ADS)

    Liang, Ke; Sun, Qin; Liu, Xiaoran

    2018-05-01

    The theoretical buckling load of a perfect cylinder must be reduced by a knock-down factor to account for structural imperfections. The EU project DESICOS proposed a new robust design for imperfection-sensitive composite cylindrical shells using a combination of deterministic and stochastic simulations; however, the high computational complexity seriously limits its wider application in aerospace structures design. In this paper, the nonlinearity reduction technique and the polynomial chaos method are implemented into the robust design process to significantly lower computational costs. The modified Newton-type Koiter-Newton approach, which largely reduces the number of degrees of freedom in the nonlinear finite element model, serves as the nonlinear buckling solver to trace the equilibrium paths of geometrically nonlinear structures efficiently. The non-intrusive polynomial chaos method provides the buckling load with an approximate chaos response surface with respect to imperfections and uses the buckling solver codes as black boxes. A fast large-sample study can then be applied to the approximate chaos response surface to obtain the probability characteristics of the buckling loads. The performance of the method in terms of reliability, accuracy and computational effort is demonstrated with an unstiffened CFRP cylinder.

  2. Using Chebyshev polynomials and approximate inverse triangular factorizations for preconditioning the conjugate gradient method

    NASA Astrophysics Data System (ADS)

    Kaporin, I. E.

    2012-02-01

    In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
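
    A sketch of the polynomial-preconditioning ingredient, under the assumption that eigenvalue bounds for the SPD matrix are available: k steps of the classical Chebyshev iteration (Saad's formulation) started from zero apply a polynomial in A to the right-hand side, approximating A⁻¹b with matrix-vector products only and no inner products, which is the scalar-product saving the abstract describes. The demo matrix is synthetic, and in practice the spectral bounds would be estimated rather than computed exactly.

    ```python
    import numpy as np

    def chebyshev_inverse_apply(A, b, lmin, lmax, steps=8):
        """Approximate A^{-1} b by `steps` iterations of the classical
        Chebyshev iteration, assuming SPD A with spectrum in [lmin, lmax].

        Started from y = 0, the iterate is a polynomial in A applied to b,
        so this routine can serve as a polynomial preconditioner.
        """
        theta = 0.5 * (lmax + lmin)
        delta = 0.5 * (lmax - lmin)
        sigma1 = theta / delta
        rho = 1.0 / sigma1
        y = np.zeros_like(b)
        r = b.copy()                       # residual b - A y
        d = r / theta
        for _ in range(steps):
            y = y + d
            r = r - A @ d
            rho_new = 1.0 / (2.0 * sigma1 - rho)
            d = rho_new * rho * d + (2.0 * rho_new / delta) * r
            rho = rho_new
        return y

    # Tiny demo on a synthetic SPD matrix.
    rng = np.random.default_rng(2)
    M = rng.standard_normal((50, 50))
    A = M @ M.T + 50 * np.eye(50)
    b = rng.standard_normal(50)
    lmin, lmax = np.linalg.eigvalsh(A)[[0, -1]]
    y = chebyshev_inverse_apply(A, b, lmin, lmax)
    print("relative residual:", np.linalg.norm(A @ y - b) / np.linalg.norm(b))
    ```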

  3. Autonomous manipulation on a robot: Summary of manipulator software functions

    NASA Technical Reports Server (NTRS)

    Lewis, R. A.

    1974-01-01

    A six degree-of-freedom computer-controlled manipulator is examined, and the relationships between the arm's joint variables and 3-space are derived. Arm trajectories using sequences of third-degree polynomials to describe the time history of each joint variable are presented and two approaches to the avoidance of obstacles are given. The equations of motion for the arm are derived and then decomposed into time-dependent factors and time-independent coefficients. Several new and simplifying relationships among the coefficients are proven. Two sample trajectories are analyzed in detail for purposes of determining the most important contributions to total force in order that relatively simple approximations to the equations of motion can be used.
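
    The third-degree polynomial time histories mentioned above are easy to make concrete. A minimal sketch (my own formulas for the standard boundary-condition match, not the report's code):

    ```python
    import numpy as np

    def cubic_joint_trajectory(q0, qf, v0, vf, T):
        """Coefficients a0..a3 of q(t) = a0 + a1 t + a2 t^2 + a3 t^3 on [0, T],
        matching joint positions q0, qf and joint velocities v0, vf at the ends."""
        a0, a1 = q0, v0
        a2 = (3 * (qf - q0) - (2 * v0 + vf) * T) / T**2
        a3 = (-2 * (qf - q0) + (v0 + vf) * T) / T**3
        return np.array([a0, a1, a2, a3])

    # One joint: move from 0 to 90 degrees in 2 s, starting and ending at rest.
    a = cubic_joint_trajectory(0.0, 90.0, 0.0, 0.0, 2.0)
    t = np.linspace(0.0, 2.0, 5)
    q = a[0] + a[1] * t + a[2] * t**2 + a[3] * t**3
    print(q)   # smooth interpolation between the endpoint values
    ```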

  4. Comparison of techniques for approximating ocean bottom topography in a wave-refraction computer model

    NASA Technical Reports Server (NTRS)

    Poole, L. R.

    1975-01-01

    A study of the effects of using different methods for approximating bottom topography in a wave-refraction computer model was conducted. Approximation techniques involving quadratic least squares, cubic least squares, and constrained bicubic polynomial interpolation were compared for computed wave patterns and parameters in the region of Saco Bay, Maine. Although substantial local differences can be attributed to use of the different approximation techniques, results indicated that overall computed wave patterns and parameter distributions were quite similar.

  5. Adaptive Window Zero-Crossing-Based Instantaneous Frequency Estimation

    NASA Astrophysics Data System (ADS)

    Sekhar, S. Chandra; Sreenivas, TV

    2004-12-01

    We address the problem of estimating instantaneous frequency (IF) of a real-valued constant amplitude time-varying sinusoid. Estimation of polynomial IF is formulated using the zero-crossings of the signal. We propose an algorithm to estimate nonpolynomial IF by local approximation using a low-order polynomial, over a short segment of the signal. This involves the choice of window length to minimize the mean square error (MSE). The optimal window length found by directly minimizing the MSE is a function of the higher-order derivatives of the IF which are not available a priori. However, an optimum solution is formulated using an adaptive window technique based on the concept of intersection of confidence intervals. The adaptive algorithm enables minimum MSE-IF (MMSE-IF) estimation without requiring a priori information about the IF. Simulation results show that the adaptive window zero-crossing-based IF estimation method is superior to fixed window methods and is also better than adaptive spectrogram and adaptive Wigner-Ville distribution (WVD)-based IF estimators for different signal-to-noise ratio (SNR).

  6. Nonlinear dynamic macromodeling techniques for audio systems

    NASA Astrophysics Data System (ADS)

    Ogrodzki, Jan; Bieńkowski, Piotr

    2015-09-01

    This paper develops a modelling method and a model identification technique for nonlinear dynamic audio systems. Identification is performed by means of a behavioral approach based on a polynomial approximation. This approach makes use of the Discrete Fourier Transform and the Harmonic Balance Method. A model of an audio system is first created and identified, and then it is simulated in real time using an algorithm of low computational complexity. The algorithm consists of real-time emulation of the system response rather than simulation of the system itself. The proposed software is written in the Python language using object-oriented programming techniques. The code is optimized for a multithreaded environment.

  7. Data compression using Chebyshev transform

    NASA Technical Reports Server (NTRS)

    Cheng, Andrew F. (Inventor); Hawkins, III, S. Edward (Inventor); Nguyen, Lillian (Inventor); Monaco, Christopher A. (Inventor); Seagrave, Gordon G. (Inventor)

    2007-01-01

    The present invention is a method, system, and computer program product for implementation of a capable, general purpose compression algorithm that can be engaged on the fly. This invention has particular practical application with time-series data, and more particularly, time-series data obtained from a spacecraft, or similar situations where cost, size and/or power limitations are prevalent, although it is not limited to such applications. It is also particularly applicable to the compression of serial data streams and works in one, two, or three dimensions. The original input data is approximated by Chebyshev polynomials, achieving very high compression ratios on serial data streams with minimal loss of scientific information.
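
    A one-dimensional sketch of the idea, hedged: this is not the patented algorithm, just a least-squares Chebyshev fit (numpy's chebfit) that keeps a fixed number of coefficients per segment of a smooth serial stream; the segment length and coefficient count are arbitrary here.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def chebyshev_compress(samples, n_coeffs):
        """Compress a serial data segment by keeping n_coeffs Chebyshev coefficients."""
        x = np.linspace(-1.0, 1.0, len(samples))
        return C.chebfit(x, samples, n_coeffs - 1)

    def chebyshev_decompress(coeffs, n_samples):
        x = np.linspace(-1.0, 1.0, n_samples)
        return C.chebval(x, coeffs)

    t = np.linspace(0.0, 1.0, 512)
    signal = np.sin(6 * t) + 0.2 * np.exp(-3 * t)      # smooth telemetry-like series
    coeffs = chebyshev_compress(signal, 16)            # 512 samples -> 16 numbers
    restored = chebyshev_decompress(coeffs, 512)
    print("compression 32:1, max error:", np.max(np.abs(signal - restored)))
    ```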

  8. Nonlinear adaptive inverse control via the unified model neural network

    NASA Astrophysics Data System (ADS)

    Jeng, Jin-Tsong; Lee, Tsu-Tian

    1999-03-01

    In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training time in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method can use less training time to get an inverse model. Finally, we apply this proposed method to control magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides a greater flexibility and better performance in controlling magnetic bearing systems.

  9. Transition probability functions for applications of inelastic electron scattering

    PubMed Central

    Löffler, Stefan; Schattschneider, Peter

    2012-01-01

    In this work, the transition matrix elements for inelastic electron scattering are investigated which are the central quantity for interpreting experiments. The angular part is given by spherical harmonics. For the weighted radial wave function overlap, analytic expressions are derived in the Slater-type and the hydrogen-like orbital models. These expressions are shown to be composed of a finite sum of polynomials and elementary trigonometric functions. Hence, they are easy to use, require little computation time, and are significantly more accurate than commonly used approximations. PMID:22560709

  10. Solution of the nonlinear mixed Volterra-Fredholm integral equations by hybrid of block-pulse functions and Bernoulli polynomials.

    PubMed

    Mashayekhi, S; Razzaghi, M; Tripak, O

    2014-01-01

    A new numerical method for solving the nonlinear mixed Volterra-Fredholm integral equations is presented. This method is based upon hybrid functions approximation. The properties of hybrid functions consisting of block-pulse functions and Bernoulli polynomials are presented. The operational matrices of integration and product are given. These matrices are then utilized to reduce the nonlinear mixed Volterra-Fredholm integral equations to the solution of algebraic equations. Illustrative examples are included to demonstrate the validity and applicability of the technique.

  12. A Runge-Kutta discontinuous finite element method for high speed flows

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.; Oden, J. T.

    1991-01-01

    A Runge-Kutta discontinuous finite element method is developed for hyperbolic systems of conservation laws in two space variables. The discontinuous Galerkin spatial approximation to the conservation laws results in a system of ordinary differential equations which are marched in time using Runge-Kutta methods. Numerical results for the two-dimensional Burgers' equation show that the method is (p+1)-order accurate in time and space, where p is the degree of the polynomial approximation of the solution within an element, and that it is capable of capturing shocks over a single element without oscillations. Results for this problem also show that the accuracy of the solution in smooth regions is unaffected by the local projection and that the accuracy in smooth regions increases as p increases. Numerical results for the Euler equations show that the method captures shocks without oscillations and with higher resolution than a first-order scheme.

  13. Real-time absorption and scattering characterization of slab-shaped turbid samples obtained by a combination of angular and spatially resolved measurements.

    PubMed

    Dam, Jan S; Yavari, Nazila; Sørensen, Søren; Andersson-Engels, Stefan

    2005-07-10

    We present a fast and accurate method for real-time determination of the absorption coefficient, the scattering coefficient, and the anisotropy factor of thin turbid samples by using simple continuous-wave noncoherent light sources. The three optical properties are extracted from recordings of angularly resolved transmittance in addition to spatially resolved diffuse reflectance and transmittance. The applied multivariate calibration and prediction techniques are based on multiple polynomial regression in combination with a Newton-Raphson algorithm. The numerical test results based on Monte Carlo simulations showed mean prediction errors of approximately 0.5% for all three optical properties within ranges typical for biological media. Preliminary experimental results are also presented yielding errors of approximately 5%. Thus the presented methods show a substantial potential for simultaneous absorption and scattering characterization of turbid media.

  14. On the coefficients of differentiated expansions of ultraspherical polynomials

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1989-01-01

    A formula expressing the coefficients of an expansion in ultraspherical polynomials which has been differentiated an arbitrary number of times in terms of the coefficients of the original expansion is proved. The particular examples of Chebyshev and Legendre polynomials are considered.
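
    Specialized to Chebyshev polynomials, the formula takes the familiar backward-recurrence form b_{k-1} = b_{k+1} + 2k c_k (with b_0 finally halved). A short sketch, checked against numpy's own chebder:

    ```python
    import numpy as np

    def cheb_derivative_coeffs(c):
        """Coefficients of f' from Chebyshev coefficients c of f = sum c_k T_k.

        Backward recurrence b_{k-1} = b_{k+1} + 2k c_k (the ultraspherical
        formula of the paper, specialized to Chebyshev), then halve b_0.
        """
        n = len(c) - 1                    # degree of f
        b = np.zeros(n + 2)
        for k in range(n, 0, -1):
            b[k - 1] = b[k + 1] + 2 * k * c[k]
        b[0] *= 0.5
        return b[:n]                      # f' has degree n - 1

    # Check against numpy's chebder on a random series.
    rng = np.random.default_rng(4)
    c = rng.standard_normal(7)
    print(np.allclose(cheb_derivative_coeffs(c), np.polynomial.chebyshev.chebder(c)))
    ```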

  15. Optimization of the Monte Carlo code for modeling of photon migration in tissue.

    PubMed

    Zołek, Norbert S; Liebert, Adam; Maniewski, Roman

    2006-10-01

    The Monte Carlo method is frequently used to simulate light transport in turbid media because of its simplicity and flexibility, allowing complicated geometrical structures to be analyzed. Monte Carlo simulations are, however, time consuming because of the necessity to track the paths of individual photons. The time-consuming computation is mainly associated with the calculation of the logarithmic and trigonometric functions as well as the generation of pseudo-random numbers. In this paper, the Monte Carlo algorithm was developed and optimized by approximation of the logarithmic and trigonometric functions. The approximations were based on polynomial and rational functions, and the errors of these approximations are less than 1% of the values of the original functions. The proposed algorithm was verified by simulations of the time-resolved reflectance at several source-detector separations. The results of the calculation using the approximated algorithm were compared with those of the Monte Carlo simulations obtained with exact computation of the logarithmic and trigonometric functions, as well as with the solution of the diffusion equation. The errors of the moments of the simulated distributions of times of flight of photons (total number of photons, mean time of flight and variance) are less than 2% for a range of optical properties typical of living tissues. The proposed approximated algorithm speeds up the Monte Carlo simulations by a factor of 4. The developed code can be used on parallel machines, allowing for further acceleration.
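
    A sketch of the flavor of such an optimization (the coefficients and ranges are illustrative, not the paper's): split off the binary exponent with frexp and replace the logarithm of the mantissa, as used in the photon step-length sampling, by a low-order polynomial fit.

    ```python
    import numpy as np

    # Fit a degree-5 polynomial to log on [1, 2); combined with the exponent,
    # this covers all positive floats (coefficients are illustrative only).
    mant = np.linspace(1.0, 2.0, 2001)
    poly = np.polyfit(mant, np.log(mant), 5)

    def fast_log(u):
        m, e = np.frexp(u)                 # u = m * 2**e with m in [0.5, 1)
        m, e = 2.0 * m, e - 1              # renormalize m into [1, 2)
        return np.polyval(poly, m) + e * np.log(2.0)

    # Path-length sampling uses -log(u) for uniform u; check the error there.
    u = np.random.default_rng(5).uniform(1e-6, 0.99, 10000)
    rel_err = np.abs((fast_log(u) - np.log(u)) / np.log(u))
    print("max relative error:", rel_err.max())
    ```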

  16. The value of continuity: Refined isogeometric analysis and fast direct solvers

    DOE PAGES

    Garcia, Daniel; Pardo, David; Dalcin, Lisandro; ...

    2016-08-24

    Here, we propose the use of highly continuous finite element spaces interconnected with low continuity hyperplanes to maximize the performance of direct solvers. Starting from a highly continuous Isogeometric Analysis (IGA) discretization, we introduce C0-separators to reduce the interconnection between degrees of freedom in the mesh. By doing so, both the solution time and best approximation errors are simultaneously improved. We call the resulting method "refined Isogeometric Analysis (rIGA)". To illustrate the impact of the continuity reduction, we analyze the number of Floating Point Operations (FLOPs), computational times, and memory required to solve the linear system obtained by discretizing the Laplace problem with structured meshes and uniform polynomial orders. Theoretical estimates demonstrate that an optimal continuity reduction may decrease the total computational time by a factor between p² and p³, with p being the polynomial order of the discretization. Numerical results indicate that our proposed refined isogeometric analysis delivers a speed-up factor proportional to p². In a 2D mesh with four million elements and p=5, the linear system resulting from rIGA is solved 22 times faster than the one from highly continuous IGA. In a 3D mesh with one million elements and p=3, the linear system is solved 15 times faster for the refined than the maximum continuity isogeometric analysis.

  17. Scalable Prediction of Energy Consumption using Incremental Time Series Clustering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simmhan, Yogesh; Noor, Muhammad Usman

    2013-10-09

    Time series datasets are a canonical form of high velocity Big Data, and often generated by pervasive sensors, such as found in smart infrastructure. Performing predictive analytics on time series data can be computationally complex, and requires approximation techniques. In this paper, we motivate this problem using a real application from the smart grid domain. We propose an incremental clustering technique, along with a novel affinity score for determining cluster similarity, which help reduce the prediction error for cumulative time series within a cluster. We evaluate this technique, along with optimizations, using real datasets from smart meters, totaling ~700,000 data points, and show the efficacy of our techniques in improving the prediction error of time series data within polynomial time.

  18. Approximate solutions for diffusive fracture-matrix transfer: Application to storage of dissolved CO₂ in fractured rocks

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.; ...

    2017-01-05

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. In this paper, we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they depend solely on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with the highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO₂ in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for the development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.
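
    For a slab, the ingredients are classical and easy to demonstrate. The sketch below is hedged: it uses Crank's two-term early-time expansion and the leading exponential of the exact series, not the paper's fitted three-term polynomial, with the switchover placed at 0.2 inside the quoted optimal range 0.157-0.229.

    ```python
    import numpy as np

    def slab_uptake_exact(td, terms=200):
        """Fractional diffusive uptake of a slab (Crank's series), td = D t / L^2."""
        n = np.arange(terms)
        k = (2 * n + 1) * np.pi / 2
        return 1.0 - np.sum(8.0 / ((2 * n + 1) ** 2 * np.pi**2)
                            * np.exp(-np.outer(np.atleast_1d(td), k**2)), axis=1)

    def slab_uptake_approx(td, t_switch=0.2):
        """Unified-form approximation in the spirit of the paper: a short
        polynomial in sqrt(td) before the switchover, the leading exponential
        after it.  The early-time coefficient 2/sqrt(pi) reflects the slab's
        area-to-volume ratio in dimensionless form."""
        td = np.atleast_1d(td)
        early = 2.0 * np.sqrt(td / np.pi)
        late = 1.0 - (8.0 / np.pi**2) * np.exp(-np.pi**2 * td / 4.0)
        return np.where(td < t_switch, early, late)

    td = np.linspace(0.01, 1.0, 100)
    err = np.abs(slab_uptake_approx(td) - slab_uptake_exact(td))
    print("max |error| over td in [0.01, 1]:", err.max())
    ```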

  19. Use of polynomial expressions to describe the bioconcentration of hydrophobic chemicals by fish

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Connell, D.W.; Hawker, D.W.

    1988-12-01

    For the bioconcentration of hydrophobic chemicals by fish, relationships have been previously established between uptake rate constants (k1) and the octanol/water partition coefficient (Kow), and also between the clearance rate constant (k2) and Kow. These have been refined and extended on the basis of data for chlorinated hydrocarbons and closely related compounds, including polychlorinated dibenzodioxins, that covered a wider range of hydrophobicity (2.5 < log Kow < 9.5). This has allowed the development of new relationships between log Kow and various factors, including the bioconcentration factor (as log KB), equilibrium time (as log teq), and maximum biotic concentration (as log CB), which include extremely hydrophobic compounds previously not taken into account. The shapes of the curves generated by these equations are in qualitative agreement with theoretical prediction and are described by polynomial expressions which are generally approximately linear over the more limited range of log Kow values used to develop previous relationships. The influences of factors such as hydrophobicity, aqueous solubility, molecular weight, lipid solubility, and also exposure time were considered. Decreasing lipid solubilities of extremely hydrophobic chemicals were found to result in increasing clearance rate constants, as well as decreasing equilibrium times and bioconcentration factors.

  20. Particle drag history in a subcritical post-shock flow - data analysis method and uncertainty

    NASA Astrophysics Data System (ADS)

    Ding, Liuyang; Bordoloi, Ankur; Adrian, Ronald; Prestridge, Kathy; Arizona State University Team; Los Alamos National Laboratory Team

    2017-11-01

    A novel data analysis method for measuring particle drag in an 8-pulse particle tracking velocimetry-accelerometry (PTVA) experiment is described. We represented the particle drag history, CD(t), using polynomials up to third order. An analytical model for the continuous particle position history was derived by integrating an equation relating CD(t) to particle velocity and acceleration. The coefficients of CD(t) were then calculated by fitting the position history model to eight measured particle locations in the least-squares sense. A preliminary test with experimental data showed that the new method yielded physically more reasonable particle velocity and acceleration histories compared to conventionally adopted polynomial fitting. To fully assess and optimize the performance of the new method, we performed a PTVA simulation by assuming a ground truth of particle motion based on an ensemble of experimental data. The results indicated a significant reduction in the RMS error of CD. We also found that for particle locating noise between 0.1 and 3 pixels, a range encountered in our experiment, the lowest RMS error was achieved by using the quadratic CD(t) model. Furthermore, we will also discuss the optimization of the pulse timing configuration.

  1. A comparison of VLSI architectures for time and transform domain decoding of Reed-Solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, I. S.; Truong, T. K.; Deutsch, L. J.; Satorius, E. H.; Reed, I. S.

    1988-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial needed to decode a Reed-Solomon (RS) code. It is shown that this algorithm can be used for both time and transform domain decoding by replacing its initial conditions with the Forney syndromes and the erasure locator polynomial. By this means both the errata locator polynomial and the errata evaluator polynomial can be obtained with the Euclidean algorithm. With these ideas, both time and transform domain Reed-Solomon decoders for correcting errors and erasures are simplified and compared. As a consequence, the architectures of Reed-Solomon decoders for correcting both errors and erasures can be made more modular, regular, simple, and naturally suitable for VLSI implementation.

  2. Synthesized tissue-equivalent dielectric phantoms using salt and polyvinylpyrrolidone solutions.

    PubMed

    Ianniello, Carlotta; de Zwart, Jacco A; Duan, Qi; Deniz, Cem M; Alon, Leeor; Lee, Jae-Seung; Lattanzi, Riccardo; Brown, Ryan

    2018-07-01

    To explore the use of polyvinylpyrrolidone (PVP) for simulated materials with tissue-equivalent dielectric properties. PVP and salt were used to control, respectively, relative permittivity and electrical conductivity in a collection of 63 samples with a range of solute concentrations. Their dielectric properties were measured with a commercial probe and fitted to a 3D polynomial in order to establish an empirical recipe. The material's thermal properties and MR spectra were measured. The empirical polynomial recipe (available at https://www.amri.ninds.nih.gov/cgi-bin/phantomrecipe) provides the PVP and salt concentrations required for dielectric materials with permittivity and electrical conductivity values between approximately 45 and 78, and 0.1 to 2 siemens per meter, respectively, from 50 MHz to 4.5 GHz. The second- (solute concentrations) and seventh- (frequency) order polynomial recipe provided less than 2.5% relative error between the measured and target properties. PVP side peaks in the spectra were minor and unaffected by temperature changes. PVP-based phantoms are easy to prepare and nontoxic, and their semitransparency makes air bubbles easy to identify. The polymer can be used to create simulated material with a range of dielectric properties, negligible spectral side peaks, and a long T₂ relaxation time, which are favorable in many MR applications.

  3. A polynomial chaos ensemble hydrologic prediction system for efficient parameter inference and robust uncertainty assessment

    NASA Astrophysics Data System (ADS)

    Wang, S.; Huang, G. H.; Baetz, B. W.; Huang, W.

    2015-11-01

    This paper presents a polynomial chaos ensemble hydrologic prediction system (PCEHPS) for efficient and robust uncertainty assessment of model parameters and predictions, in which possibilistic reasoning is infused into probabilistic parameter inference with simultaneous consideration of randomness and fuzziness. The PCEHPS is developed through a two-stage factorial polynomial chaos expansion (PCE) framework, which consists of an ensemble of PCEs to approximate the behavior of the hydrologic model, significantly speeding up the exhaustive sampling of the parameter space. Multiple hypothesis testing is then conducted to construct an ensemble of reduced-dimensionality PCEs with only the most influential terms, which is meaningful for achieving uncertainty reduction and further acceleration of parameter inference. The PCEHPS is applied to the Xiangxi River watershed in China to demonstrate its validity and applicability. A detailed comparison between the HYMOD hydrologic model, the ensemble of PCEs, and the ensemble of reduced PCEs is performed in terms of accuracy and efficiency. Results reveal temporal and spatial variations in parameter sensitivities due to the dynamic behavior of hydrologic systems, and the effects (magnitude and direction) of parametric interactions depending on different hydrological metrics. The case study demonstrates that the PCEHPS is capable not only of capturing both expert knowledge and probabilistic information in the calibration process, but also of running more than 10 times faster than the hydrologic model without compromising predictive accuracy.

  4. Cylinder stitching interferometry: with and without overlap regions

    NASA Astrophysics Data System (ADS)

    Peng, Junzheng; Chen, Dingfu; Yu, Yingjie

    2017-06-01

    Since the cylinder surface is closed and periodic in the azimuthal direction, existing stitching methods cannot be used to yield the 360° form map. To address this problem, this paper presents two methods for stitching interferometry of cylinders: one requires overlap regions, and the other does not. For the former, we use the first-order approximation of the cylindrical coordinate transformation to build the stitching model, from which the relative parameters between adjacent sub-apertures can be calculated. For the latter, a set of orthogonal polynomials, termed Legendre Fourier (LF) polynomials, was developed. With these polynomials, individual sub-aperture data can be expanded as a composition of the inherent form of the partial cylinder surface and additional misalignment parameters. The 360° form map can then be acquired by simultaneously fitting all sub-aperture data with LF polynomials. Finally, the two proposed methods are compared under various conditions. The merits and drawbacks of each stitching method are consequently revealed, providing guidance for acquiring the 360° form map of a precision cylinder.

  5. Mapping Landslides in Lunar Impact Craters Using Chebyshev Polynomials and DEMs

    NASA Astrophysics Data System (ADS)

    Yordanov, V.; Scaioni, M.; Brunetti, M. T.; Melis, M. T.; Zinzi, A.; Giommi, P.

    2016-06-01

    Geological slope failure processes have been observed on the Moon's surface for decades; nevertheless, a detailed and exhaustive lunar landslide inventory has not yet been produced. For a preliminary survey, WAC images and DEMs from LROC at 100 m/pixel have been exploited in combination with the criteria applied by Brunetti et al. (2015) to detect the landslides. These criteria are based on the visual analysis of optical images to recognize mass wasting features. In the literature, Chebyshev polynomials have been applied to interpolate crater cross-sections in order to obtain a parametric characterization useful for classification into different morphological shapes. Here a new implementation of Chebyshev polynomial approximation is proposed, taking into account statistical testing of the results obtained during least-squares estimation. The presence of landslides in lunar craters is then investigated by analyzing the absolute values of the odd coefficients of the estimated Chebyshev polynomials. A case study on the Cassini A crater demonstrates the key points of the proposed methodology and outlines the future development required.
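
    A minimal sketch of the coefficient-based asymmetry idea: fit Chebyshev polynomials to a synthetic crater cross-section and inspect the absolute values of the odd coefficients, which vanish for a perfectly symmetric profile. The profile shape, noise level, and degree below are illustrative assumptions; the statistical testing of the least-squares estimates is omitted.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    # Hypothetical crater cross-section: radial coordinate mapped to [-1, 1],
    # with a slight cubic asymmetry standing in for a landslide deposit.
    x = np.linspace(-1.0, 1.0, 201)
    rng = np.random.default_rng(0)
    profile = 0.5 * x**2 - 0.05 * x**3 + 0.01 * rng.normal(size=x.size)

    deg = 12
    coef = C.chebfit(x, profile, deg)

    # A symmetric (landslide-free) cross-section has negligible odd
    # coefficients, so their magnitudes serve as an asymmetry indicator.
    odd_energy = np.abs(coef[1::2]).sum()
    print(f"sum of |odd Chebyshev coefficients|: {odd_energy:.4f}")
    ```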

  6. On Convergence Aspects of Spheroidal Monogenics

    NASA Astrophysics Data System (ADS)

    Georgiev, S.; Morais, J.

    2011-09-01

    Orthogonal polynomials have found wide applications in mathematical physics, numerical analysis, and other fields. Accordingly, there is an enormous variety of such polynomials and of relations describing their properties. The paper's main results are the discussion of approximation properties for monogenic functions over prolate spheroids in R3 in terms of orthogonal monogenic polynomials and their interdependences. Certain results are stated without proof for now. The motivation for the present study stems from the fact that these polynomials play an important role in the calculation of the Bergman kernel and Green's monogenic functions in a spheroid. Once these functions are known, it is possible to solve both basic boundary value and conformal mapping problems. Interestingly, most of the methods used have an n-dimensional counterpart and can be extended to arbitrary ellipsoids. But such a procedure would make the further study of the underlying ellipsoidal monogenics somewhat laborious, and for this reason we shall not discuss these general cases here. To the best of our knowledge, this does not appear to have been done in the literature before.

  7. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, p_c(4, 6, 12) = 0.69377849... and p_c(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of p_c = 0.69373383... and p_c = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
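
    To illustrate the general recipe of extracting a threshold as the root in [0, 1] of an integer-coefficient polynomial, the snippet below solves the classical (and exactly known) triangular-lattice bond case, P(p) = p^3 - 3p + 1; this textbook example is not one of the polynomials derived in the paper.

    ```python
    import numpy as np
    from scipy.optimize import brentq

    # Triangular-lattice bond percolation: the threshold is the unique
    # root in [0, 1] of the integer-coefficient polynomial p^3 - 3p + 1,
    # known in closed form as 2*sin(pi/18).
    P = lambda p: p**3 - 3 * p + 1

    p_c = brentq(P, 0.0, 1.0)            # sign change: P(0) = 1, P(1) = -1
    print(p_c, 2 * np.sin(np.pi / 18))   # both ~0.3472963
    ```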

  8. Open shop scheduling problem to minimize total weighted completion time

    NASA Astrophysics Data System (ADS)

    Bai, Danyu; Zhang, Zhihai; Zhang, Qiang; Tang, Mengqian

    2017-01-01

    A given number of jobs in an open shop scheduling environment must each be processed for given amounts of time on each of a given set of machines in an arbitrary sequence. This study aims to achieve a schedule that minimizes total weighted completion time. Owing to the strong NP-hardness of the problem, the weighted shortest processing time block (WSPTB) heuristic is presented to obtain approximate solutions for large-scale problems. Performance analysis proves the asymptotic optimality of the WSPTB heuristic in the sense of probability limits. The largest weight block rule is provided to seek optimal schedules in polynomial time for a special case. A hybrid discrete differential evolution algorithm is designed to obtain high-quality solutions for moderate-scale problems. Simulation experiments demonstrate the effectiveness of the proposed algorithms.

  9. Elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation.

    PubMed

    Li, Yan; Deng, Jianxin; Zhou, Jun; Li, Xueen

    2016-11-01

    Corresponding to pre-puncture and post-puncture insertion, the elastic and viscoelastic mechanical properties of brain tissues on the implanting trajectory of sub-thalamic nucleus stimulation are investigated, respectively. Elastic mechanical properties in pre-puncture are investigated through pre-puncture needle insertion experiments using whole porcine brains. A linear polynomial and a second-order polynomial are fitted to the average insertion force in pre-puncture. The Young's modulus in pre-puncture is calculated from the slope of the two fittings. Viscoelastic mechanical properties of brain tissues in post-puncture insertion are investigated through indentation stress relaxation tests for six regions of interest along a planned trajectory. A linear viscoelastic model with a Prony series approximation is fitted to the average load trace of each region using the Boltzmann hereditary integral. Shear relaxation moduli of each region are calculated using the parameters of the Prony series approximation. The results show that, in pre-puncture insertion, needle force increases almost linearly with needle displacement, and both fittings closely reproduce the average insertion force. The Young's moduli calculated from the slopes of the two fittings can therefore be trusted for modeling linear or nonlinear instantaneous elastic responses of brain tissues, respectively. In post-puncture insertion, both region and time significantly affect the viscoelastic behaviors. The six tested regions can be classified into three categories in stiffness. Shear relaxation moduli decay dramatically on short time scales, but equilibrium is never truly achieved. The regional and temporal viscoelastic mechanical properties in post-puncture insertion are valuable for guiding probe insertion into each region on the implanting trajectory.
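
    A minimal sketch of the post-puncture fitting step: a two-term Prony series fitted by nonlinear least squares to synthetic stress-relaxation data. The data values, units, and starting guesses are assumptions, not the paper's porcine-brain measurements.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Hypothetical stress-relaxation data (time in s, modulus in kPa).
    t = np.linspace(0.0, 60.0, 121)
    rng = np.random.default_rng(1)
    G_data = (2.0 + 1.5 * np.exp(-t / 1.2) + 0.8 * np.exp(-t / 15.0)
              + 0.02 * rng.normal(size=t.size))

    def prony(t, G_inf, g1, tau1, g2, tau2):
        """Two-term Prony series: G(t) = G_inf + g1 e^{-t/tau1} + g2 e^{-t/tau2}."""
        return G_inf + g1 * np.exp(-t / tau1) + g2 * np.exp(-t / tau2)

    popt, _ = curve_fit(prony, t, G_data, p0=(1.0, 1.0, 1.0, 1.0, 10.0),
                        maxfev=20000)
    print(dict(zip(["G_inf", "g1", "tau1", "g2", "tau2"], np.round(popt, 3))))
    ```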

  10. On Nash-Equilibria of Approximation-Stable Games

    NASA Astrophysics Data System (ADS)

    Awasthi, Pranjal; Balcan, Maria-Florina; Blum, Avrim; Sheffet, Or; Vempala, Santosh

    One reason for wanting to compute an (approximate) Nash equilibrium of a game is to predict how players will play. However, if the game has multiple equilibria that are far apart, or ɛ-equilibria that are far in variation distance from the true Nash equilibrium strategies, then this prediction may not be possible even in principle. Motivated by this consideration, in this paper we define the notion of games that are approximation stable, meaning that all ɛ-approximate equilibria are contained inside a small ball of radius Δ around a true equilibrium, and investigate a number of their properties. Many natural small games such as matching pennies and rock-paper-scissors are indeed approximation stable. We furthermore show that there exist 2-player n-by-n approximation-stable games in which the Nash equilibrium and all approximate equilibria have support Ω(log n). On the other hand, we show all (ɛ,Δ) approximation-stable games must have an ɛ-equilibrium of support O((Δ^{2-o(1)}/ɛ^2) log n), yielding an immediate n^{O((Δ^{2-o(1)}/ɛ^2) log n)}-time algorithm, improving over the bound of [11] for games satisfying this condition. We in addition give a polynomial-time algorithm for the case that Δ and ɛ are sufficiently close together. We also consider an inverse property, namely that all non-approximate equilibria are far from some true equilibrium, and give an efficient algorithm for games satisfying that condition.

  11. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    NASA Astrophysics Data System (ADS)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; Birkholzer, Jens T.

    2017-11-01

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1-D, 2-D, and 3-D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1-D isotropic (spheres, cylinders, slabs) and 2-D/3-D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1-D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2-D/3-D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.
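
    The following sketch illustrates the time-partitioning idea for the simplest case, the mean temperature of a 1-D slab: the leading error-function-series term is used before a switchover time and the leading exponential-series term after it. The series terms are standard textbook results, and the switchover value is an assumption for this sketch, not the optimized value from the paper.

    ```python
    import numpy as np

    # Mean temperature of a unit slab (zero surface temperature, unit
    # initial temperature); t_d is dimensionless time, t_d0 the switchover.
    def theta_early(td):
        # error-function series, leading term
        return 1.0 - 2.0 * np.sqrt(td / np.pi)

    def theta_late(td):
        # exponential series, leading term
        return (8.0 / np.pi**2) * np.exp(-np.pi**2 * td / 4.0)

    td0 = 0.2   # assumed switchover time; the two branches nearly agree here

    def theta(td):
        return theta_early(td) if td < td0 else theta_late(td)

    for td in (0.05, 0.1, 0.2, 0.5, 1.0):
        print(f"t_d = {td:4.2f}  mean temperature ~ {theta(td):.4f}")
    ```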

  12. A higher order numerical method for time fractional partial differential equations with nonsmooth data

    NASA Astrophysics Data System (ADS)

    Xing, Yanyuan; Yan, Yubin

    2018-03-01

    Gao et al. [11] (2014) introduced a numerical scheme to approximate the Caputo fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, by directly approximating the integer-order derivative with some finite difference quotients in the definition of the Caputo fractional derivative, see also Lv and Xu [20] (2016), where k is the time step size. Under the assumption that the solution of the time fractional partial differential equation is sufficiently smooth, Lv and Xu [20] (2016) proved by using the energy method that the corresponding numerical method for solving the time fractional partial differential equation has the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. However, in general the solution of the time fractional partial differential equation has low regularity, and in this case the numerical method fails to have the convergence rate O(k^{3-α}), 0 < α < 1, uniformly with respect to the time variable t. In this paper, we first obtain a similar approximation scheme to the Riemann-Liouville fractional derivative with the convergence rate O(k^{3-α}), 0 < α < 1, as in Gao et al. [11] (2014) by approximating the Hadamard finite-part integral with piecewise quadratic interpolation polynomials. Based on this scheme, we introduce a time discretization scheme to approximate the time fractional partial differential equation and show by using Laplace transform methods that the time discretization scheme has the convergence rate O(k^{3-α}), 0 < α < 1, for any fixed t_n > 0 for smooth and nonsmooth data in both homogeneous and inhomogeneous cases. Numerical examples are given to show that the theoretical results are consistent with the numerical results.
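
    For orientation, the sketch below implements the classical first-order L1 approximation of the Caputo derivative, which shares the weighted-sum-of-increments structure of the schemes discussed above but not their O(k^{3-α}) rate; the paper's scheme instead uses piecewise quadratic interpolation of the Hadamard finite-part integral.

    ```python
    import numpy as np
    from math import gamma

    # L1 approximation of the Caputo derivative of u(t) = t^2 at t = 1.
    alpha, k, n = 0.5, 1e-3, 1000
    t = k * np.arange(n + 1)
    u = t**2

    j = np.arange(n)
    b = (j + 1) ** (1 - alpha) - j ** (1 - alpha)  # L1 weights b_j
    incr = np.diff(u)[::-1]                        # u(t_{n-j}) - u(t_{n-j-1})
    approx = (b * incr).sum() / (gamma(2 - alpha) * k**alpha)

    exact = 2 * t[-1] ** (2 - alpha) / gamma(3 - alpha)
    print(approx, exact)                           # close agreement
    ```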

  13. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  14. Revisiting the Fundamental Analytical Solutions of Heat and Mass Transfer: The Kernel of Multirate and Multidimensional Diffusion

    DOE PAGES

    Zhou, Quanlin; Oldenburg, Curtis M.; Rutqvist, Jonny; ...

    2017-10-24

    There are two types of analytical solutions of temperature/concentration in and heat/mass transfer through boundaries of regularly shaped 1D, 2D, and 3D blocks. These infinite-series solutions with either error functions or exponentials exhibit highly irregular but complementary convergence at different dimensionless times, t_d. In this paper, approximate solutions were developed by combining the error-function-series solutions for early times and the exponential-series solutions for late times and by using time partitioning at the switchover time, t_d0. The combined solutions contain either the leading term of both series for normal-accuracy approximations (with less than 0.003 relative error) or the first two terms for high-accuracy approximations (with less than 10^-7 relative error) for 1D isotropic (spheres, cylinders, slabs) and 2D/3D rectangular blocks (squares, cubes, rectangles, and rectangular parallelepipeds). This rapid and uniform convergence for rectangular blocks was achieved by employing the same time partitioning with individual dimensionless times for different directions and the product of their combined 1D slab solutions. The switchover dimensionless time was determined to minimize the maximum approximation errors. Furthermore, the analytical solutions of first-order heat/mass flux for 2D/3D rectangular blocks were derived for normal-accuracy approximations. These flux equations contain the early-time solution with a three-term polynomial in √t_d and the late-time solution with the limited-term exponentials for rectangular blocks. The heat/mass flux equations and the combined temperature/concentration solutions form the ultimate kernel for fast simulations of multirate and multidimensional heat/mass transfer in porous/fractured media with millions of low-permeability blocks of varying shapes and sizes.

  15. Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.

    PubMed

    Mall, Susmita; Chakraverty, S

    2016-08-01

    A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, where the hidden layer is replaced by an expansion block of the input pattern using Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used for modifying the network parameters and minimizing the computed error function. The Van der Pol-Duffing and Duffing oscillator equations cannot, in general, be solved exactly. Here, approximate solutions of these types of equations have been obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life application problems of the Van der Pol-Duffing oscillator equation, extracting the features of an early mechanical failure signal and weak signal detection, are solved using the proposed HeNN method. HeNN approximate solutions have been compared with results obtained by the well-known Runge-Kutta method. Computed results are depicted in terms of graphs. After training the HeNN model, we may use it as a black box to obtain numerical results at any arbitrary point in the domain. Thus, the proposed HeNN method is efficient. The results reveal that this method is reliable and can be applied to other nonlinear problems too.
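
    A minimal sketch of the functional-link idea: the hidden layer is replaced by a Hermite-polynomial expansion block and only the output weights are learned. For brevity, the weights are obtained here in closed form by least squares rather than by the paper's error-backpropagation updates, and a generic target function stands in for the trial solution of the Van der Pol-Duffing equation.

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e as He

    x = np.linspace(-1.0, 1.0, 200)
    target = np.sin(3 * x)                     # generic stand-in target

    deg = 8
    Phi = He.hermevander(x, deg)               # expansion block: He_0(x)..He_8(x)

    # Output weights in closed form (least squares) instead of backpropagation.
    w, *_ = np.linalg.lstsq(Phi, target, rcond=None)

    print("max abs fit error:", np.max(np.abs(Phi @ w - target)))
    ```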

  16. Study of the Influence of the Orientation of a 50-Hz Magnetic Field on Fetal Exposure Using Polynomial Chaos Decomposition

    PubMed Central

    Liorni, Ilaria; Parazzini, Marta; Fiocchi, Serena; Ravazzani, Paolo

    2015-01-01

    Human exposure modelling is a complex topic, because in a realistic exposure scenario, several parameters (e.g., the source, the orientation of incident fields, the morphology of subjects) vary and influence the dose. Deterministic dosimetry, so far used to analyze human exposure to electromagnetic fields (EMF), is highly time consuming if the previously-mentioned variations are considered. Stochastic dosimetry is an alternative method to build analytical approximations of exposure at a lower computational cost. In this study, it was used to assess the influence of magnetic flux density (B) orientation on fetal exposure at 50 Hz by polynomial chaos (PC). A PC expansion of the induced electric field (E) in each fetal tissue at different gestational ages (GA) was built as a function of B orientation. Maximum E in each fetal tissue and at each GA was estimated for different exposure configurations and compared with the limits of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) Guidelines 2010. PC theory proved to be an efficient tool for building accurate approximations of E in each fetal tissue. B orientation strongly influenced E, with a variability across tissues from 10% to 43% with respect to the mean value. However, varying B orientation, maximum E in each fetal tissue was below the limits of ICNIRP 2010 at all GAs. PMID:26024363

  17. Study of the influence of the orientation of a 50-Hz magnetic field on fetal exposure using polynomial chaos decomposition.

    PubMed

    Liorni, Ilaria; Parazzini, Marta; Fiocchi, Serena; Ravazzani, Paolo

    2015-05-27

    Human exposure modelling is a complex topic, because in a realistic exposure scenario, several parameters (e.g., the source, the orientation of incident fields, the morphology of subjects) vary and influence the dose. Deterministic dosimetry, so far used to analyze human exposure to electromagnetic fields (EMF), is highly time consuming if the previously-mentioned variations are considered. Stochastic dosimetry is an alternative method to build analytical approximations of exposure at a lower computational cost. In this study, it was used to assess the influence of magnetic flux density (B) orientation on fetal exposure at 50 Hz by polynomial chaos (PC). A PC expansion of the induced electric field (E) in each fetal tissue at different gestational ages (GA) was built as a function of B orientation. Maximum E in each fetal tissue and at each GA was estimated for different exposure configurations and compared with the limits of the International Commission on Non-Ionizing Radiation Protection (ICNIRP) Guidelines 2010. PC theory proved to be an efficient tool for building accurate approximations of E in each fetal tissue. B orientation strongly influenced E, with a variability across tissues from 10% to 43% with respect to the mean value. However, varying B orientation, maximum E in each fetal tissue was below the limits of ICNIRP 2010 at all GAs.

  18. The simultaneous integration of many trajectories using nilpotent normal forms

    NASA Technical Reports Server (NTRS)

    Grayson, Matthew A.; Grossman, Robert

    1990-01-01

    Taylor's formula shows how to approximate a certain class of functions by polynomials. The approximations are arbitrarily good in some neighborhood whenever the function is analytic and they are easy to compute. The main goal is to give an efficient algorithm to approximate a neighborhood of the configuration space of a dynamical system by a nilpotent, explicitly integrable dynamical system. The major areas covered include: an approximating map; the generalized Baker-Campbell-Hausdorff formula; the Picard-Taylor method; the main theorem; simultaneous integration of trajectories; and examples.

  19. Folded concave penalized sparse linear regression: sparsity, statistical performance, and algorithmic theory for local solutions.

    PubMed

    Liu, Hongcheng; Yao, Tao; Li, Runze; Ye, Yinyu

    2017-11-01

    This paper concerns the folded concave penalized sparse linear regression (FCPSLR), a class of popular sparse recovery methods. Although FCPSLR yields desirable recovery performance when solved globally, computing a global solution is NP-complete. Despite some existing statistical performance analyses on local minimizers or on specific FCPSLR-based learning algorithms, it remains an open question whether local solutions that are known to admit fully polynomial-time approximation schemes (FPTAS) may already be sufficient to ensure the statistical performance, and whether that statistical performance can be non-contingent on the specific designs of computing procedures. To address these questions, this paper presents the following threefold results: (i) Any local solution (stationary point) is a sparse estimator, under some conditions on the parameters of the folded concave penalties. (ii) Perhaps more importantly, any local solution satisfying a significant subspace second-order necessary condition (S3ONC), which is weaker than the second-order KKT condition, yields a bounded error in approximating the true parameter with high probability. In addition, if the minimal signal strength is sufficient, the S3ONC solution likely recovers the oracle solution. This result also explicates that the goal of improving the statistical performance is consistent with the optimization criteria of minimizing the suboptimality gap in solving the non-convex programming formulation of FCPSLR. (iii) We apply (ii) to the special case of FCPSLR with minimax concave penalty (MCP) and show that under the restricted eigenvalue condition, any S3ONC solution with a better objective value than the Lasso solution entails the strong oracle property. In addition, such a solution generates a model error (ME) comparable to the optimal but exponential-time sparse estimator given a sufficient sample size, while the worst-case ME is comparable to the Lasso in general. Furthermore, computing a solution that satisfies the S3ONC admits an FPTAS.
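
    For concreteness, the snippet below writes out the minimax concave penalty (MCP) mentioned in (iii) together with its scalar thresholding (proximal) operator; these are the standard textbook formulas, with lam and gam denoting the penalty level and concavity parameter, not code from the paper.

    ```python
    import numpy as np

    def mcp_penalty(t, lam, gam):
        """MCP: lam*|t| - t^2/(2*gam) for |t| <= gam*lam, else gam*lam^2/2."""
        t = np.abs(t)
        return np.where(t <= gam * lam,
                        lam * t - t**2 / (2 * gam),
                        0.5 * gam * lam**2)

    def mcp_threshold(z, lam, gam):
        """Proximal operator of MCP for a unit-scale quadratic loss (gam > 1)."""
        soft = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)
        return np.where(np.abs(z) <= gam * lam, soft / (1.0 - 1.0 / gam), z)

    z = np.linspace(-3, 3, 7)
    print(mcp_threshold(z, lam=1.0, gam=3.0))   # unbiased for large |z|
    ```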

  20. A method for including external feed in depletion calculations with CRAM and implementation into ORIGEN

    DOE PAGES

    Isotalo, Aarno E.; Wieselquist, William A.

    2015-05-15

    A method for including external feed with polynomial time dependence in depletion calculations with the Chebyshev Rational Approximation Method (CRAM) is presented, and the implementation of CRAM in the ORIGEN module of the SCALE suite is described. In addition to being able to handle time-dependent feed rates, the new solver also adds the capability to perform adjoint calculations. Results obtained with the new CRAM solver and the original depletion solver of ORIGEN are compared to high precision reference calculations, which shows the new solver to be orders of magnitude more accurate. Lastly, in most cases, the new solver is also up to several times faster, since it does not require the substepping used by the original solver.
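
    One standard way to fold a polynomial-in-time feed into a matrix-exponential depletion solve is to augment the burnup matrix with a small nilpotent block that generates 1, t, t^2/2, ...; whether this matches the paper's formulation or ORIGEN's internals is not claimed here. The sketch uses scipy's dense expm as a stand-in for CRAM on a toy two-nuclide chain with assumed rates.

    ```python
    import numpy as np
    from scipy.linalg import expm

    A = np.array([[-0.1,  0.0],
                  [ 0.1, -0.02]])           # toy 2-nuclide burnup matrix (1/s)
    f0 = np.array([1.0, 0.0])               # constant feed term (atoms/s)
    f1 = np.array([0.01, 0.0])              # linear feed term (atoms/s^2)

    n0 = np.array([0.0, 0.0])
    dt = 100.0

    # Augmented system d/dt [n, u1, u0], with u0 = 1 and u1 = t, so that
    # dn/dt = A n + f0*u0 + f1*u1 = A n + f0 + f1*t.
    M = np.zeros((4, 4))
    M[:2, :2] = A
    M[:2, 2] = f1
    M[:2, 3] = f0
    M[2, 3] = 1.0                            # du1/dt = u0

    y0 = np.concatenate([n0, [0.0, 1.0]])
    y = expm(M * dt) @ y0                    # expm stands in for CRAM here
    print("nuclide densities after dt:", y[:2])
    ```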

  1. Uncertainty Analysis via Failure Domain Characterization: Polynomial Requirement Functions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Munoz, Cesar A.; Narkawicz, Anthony J.; Kenny, Sean P.; Giesy, Daniel P.

    2011-01-01

    This paper proposes an uncertainty analysis framework based on the characterization of the uncertain parameter space. This characterization enables the identification of worst-case uncertainty combinations and the approximation of the failure and safe domains with a high level of accuracy. Because these approximations are composed of subsets whose probability is readily computable, they enable the calculation of arbitrarily tight upper and lower bounds to the failure probability. A Bernstein expansion approach is used to size hyper-rectangular subsets, while a sum of squares programming approach is used to size quasi-ellipsoidal subsets. These methods are applicable to requirement functions whose functional dependency on the uncertainty is a known polynomial. Some of the most prominent features of the methodology are the substantial desensitization of the calculations from the uncertainty model assumed (i.e., the probability distribution describing the uncertainty) as well as the accommodation for changes in such a model with a practically insignificant amount of computational effort.

  2. A gradient-based model parametrization using Bernstein polynomials in Bayesian inversion of surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan

    2017-10-01

    This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
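
    A minimal sketch of the parametrization itself: a smooth VS(z) profile built from a handful of Bernstein basis coefficients, with the coefficient values purely illustrative. Small perturbations of any one coefficient perturb the profile only mildly, which is the stability property noted above.

    ```python
    import numpy as np
    from scipy.special import comb

    def bernstein_profile(c, z):
        """Evaluate a Bernstein-polynomial profile at normalized depths z in [0, 1];
        c are the basis-function coefficients (the inversion parameters)."""
        n = len(c) - 1
        k = np.arange(n + 1)[:, None]
        B = comb(n, k) * z[None, :] ** k * (1 - z[None, :]) ** (n - k)
        return np.asarray(c, float) @ B

    z = np.linspace(0.0, 1.0, 101)                          # normalized depth
    vs = bernstein_profile([200, 260, 300, 420, 480], z)    # hypothetical VS (m/s)
    print(vs[[0, 50, 100]])   # endpoints equal the first/last coefficients
    ```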

  3. Splines and control theory

    NASA Technical Reports Server (NTRS)

    Zhang, Zhimin; Tomlinson, John; Martin, Clyde

    1994-01-01

    In this work, the relationship between splines and control theory is analyzed. We show that spline functions can be constructed naturally from control theory. By establishing a framework based on control theory, we provide a simple and systematic way to construct splines. We have constructed the traditional spline functions, including polynomial splines and the classical exponential spline. We have also discovered some new spline functions, such as trigonometric splines and combinations of polynomial, exponential, and trigonometric splines. The method proposed in this paper is easy to implement. Some numerical experiments are performed to investigate properties of different spline approximations.

  4. A Polynomial-Based Nonlinear Least Squares Optimized Preconditioner for Continuous and Discontinuous Element-Based Discretizations of the Euler Equations

    DTIC Science & Technology

    2014-01-01

    system (here using left-preconditioning) (KÃ)x = Kb̃, where K is a low-order polynomial in Ã given by K = s(Ã) = Σ_{i=0}^{m} k_i Ã^i, and has a... system with a complex spectrum, region E in the complex plane must be some convex form (e.g., an ellipse or polygon) that approximately encloses the... preconditioners with p = 2 and p = 20 on the spectrum of the preconditioned system matrices KÃ and KH̃ for both CG Schur-complement form and DG form cases

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhou, Quanlin; Oldenburg, Curtis M.; Spangler, Lee H.

    Analytical solutions with infinite exponential series are available to calculate the rate of diffusive transfer between low-permeability blocks and high-permeability zones in the subsurface. Truncation of these series is often employed by neglecting the early-time regime. In this paper, we present unified-form approximate solutions in which the early-time and the late-time solutions are continuous at a switchover time. The early-time solutions are based on three-term polynomial functions in terms of the square root of dimensionless time, with the first coefficient dependent only on the dimensionless area-to-volume ratio. The last two coefficients are either determined analytically for isotropic blocks (e.g., spheres and slabs) or obtained by fitting the exact solutions, and they solely depend on the aspect ratios for rectangular columns and parallelepipeds. For the late-time solutions, only the leading exponential term is needed for isotropic blocks, while a few additional exponential terms are needed for highly anisotropic rectangular blocks. The optimal switchover time is between 0.157 and 0.229, with the highest relative approximation error less than 0.2%. The solutions are used to demonstrate the storage of dissolved CO2 in fractured reservoirs with low-permeability matrix blocks of single and multiple shapes and sizes. These approximate solutions are building blocks for development of analytical and numerical tools for hydraulic, solute, and thermal diffusion processes in low-permeability matrix blocks.

  6. Space Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard; Rayos, Elonsio M.; Campbell, Charles H.; Rickman, Steven L.; Larsen, Curtis E.

    2007-01-01

    Complex computer codes are used to estimate thermal and structural reentry loads on the Shuttle Orbiter induced by ice and foam debris impact during ascent. Such debris can create cavities in the Shuttle Thermal Protection System. The sizes and shapes of these cavities are approximated to accommodate a code limitation that requires simple "shoebox" geometries to describe the cavities -- rectangular areas and planar walls that are at constant angles with respect to vertical. These approximations induce uncertainty in the code results. The Modern Design of Experiments (MDOE) has recently been applied to develop a series of resource-minimal computational experiments designed to generate low-order polynomial graduating functions to approximate the more complex underlying codes. These polynomial functions were then used to propagate cavity geometry errors to estimate the uncertainty they induce in the reentry load calculations performed by the underlying code. This paper describes a methodological study focused on evaluating the application of MDOE to future operational codes in a rapid and low-cost way to assess the effects of cavity geometry uncertainty.

  7. Asymptotic safety of quantum gravity beyond Ricci scalars

    NASA Astrophysics Data System (ADS)

    Falls, Kevin; King, Callum R.; Litim, Daniel F.; Nikolakopoulos, Kostas; Rahmede, Christoph

    2018-04-01

    We investigate the asymptotic safety conjecture for quantum gravity including curvature invariants beyond Ricci scalars. Our strategy is put to work for families of gravitational actions which depend on functions of the Ricci scalar, the Ricci tensor, and products thereof. Combining functional renormalization with high order polynomial approximations and full numerical integration we derive the renormalization group flow for all couplings and analyse their fixed points, scaling exponents, and the fixed point effective action as a function of the background Ricci curvature. The theory is characterized by three relevant couplings. Higher-dimensional couplings show near-Gaussian scaling with increasing canonical mass dimension. We find that Ricci tensor invariants stabilize the UV fixed point and lead to a rapid convergence of polynomial approximations. We apply our results to models for cosmology and establish that the gravitational fixed point admits inflationary solutions. We also compare findings with those from f (R ) -type theories in the same approximation and pin-point the key new effects due to Ricci tensor interactions. Implications for the asymptotic safety conjecture of gravity are indicated.

  8. Constant-Round Concurrent Zero Knowledge From Falsifiable Assumptions

    DTIC Science & Technology

    2013-01-01

    assumptions (e.g., [DS98, Dam00, CGGM00, Gol02, PTV12, GJO+12]), or in alternative models (e.g., super-polynomial-time simulation [Pas03b, PV10]). In the...T(·)-time computations, where T(·) is some "nice" (slightly) super-polynomial function (e.g., T(n) = n^{log log log n}). We refer to such proof...put a cap on both using a (slightly) super-polynomial function, and thus to guarantee soundness of the concurrent zero-knowledge protocol, we need

  9. A Unified Methodology for Computing Accurate Quaternion Color Moments and Moment Invariants.

    PubMed

    Karakasis, Evangelos G; Papakostas, George A; Koulouriotis, Dimitrios E; Tourassis, Vassilios D

    2014-02-01

    In this paper, a general framework for computing accurate quaternion color moments and their corresponding invariants is proposed. The proposed unified scheme arose by studying the characteristics of different orthogonal polynomials. These polynomials are used as kernels in order to form moments, the invariants of which can easily be derived. The resulting scheme permits the usage of any polynomial-like kernel in a unified and consistent way. The resulting moments and moment invariants demonstrate robustness to noisy conditions and high discriminative power. Additionally, in the case of continuous moments, accurate computations take place to avoid approximation errors. Based on this general methodology, the quaternion Tchebichef, Krawtchouk, Dual Hahn, Legendre, orthogonal Fourier-Mellin, pseudo Zernike and Zernike color moments, and their corresponding invariants are introduced. A selected paradigm presents the reconstruction capability of each moment family, whereas proper classification scenarios evaluate the performance of the color moment invariants.

  10. Classical Dynamics of Fullerenes

    NASA Astrophysics Data System (ADS)

    Sławianowski, Jan J.; Kotowski, Romuald K.

    2017-06-01

    The classical mechanics of large molecules and fullerenes is studied. The approach is based on the model of collective motion of these objects. The mixed Lagrangian (material) and Eulerian (space) description of motion is used. In particular, the Green and Cauchy deformation tensors are geometrically defined. An important issue is the group-theoretical approach to describing the affine deformations of the body. The Hamiltonian description of motion based on the Poisson brackets methodology is used. The Lagrange and Hamilton approaches allow us to formulate the mechanics in the canonical form. The method of discretization in analytical continuum theory and in the classical dynamics of large molecules and fullerenes enables us to formulate their dynamics in terms of polynomial expansions of configurations. Another approach is based on the theory of analytical functions and on their approximation by finite-order polynomials. We concentrate on the extremely simplified model of affine deformations or on their higher-order polynomial perturbations.

  11. On conjugate gradient type methods and polynomial preconditioners for a class of complex non-Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1988-01-01

    Conjugate gradient type methods are considered for the solution of large linear systems Ax = b with complex coefficient matrices of the type A = T + i(sigma)I, where T is Hermitian and sigma is a real scalar. Three different conjugate gradient type approaches with iterates defined by a minimal residual property, a Galerkin type condition, and a Euclidean error minimization, respectively, are investigated. In particular, numerically stable implementations based on the ideas behind Paige and Saunders' SYMMLQ and MINRES for real symmetric matrices are proposed. Error bounds for all three methods are derived. It is shown how the special shift structure of A can be preserved by using polynomial preconditioning. Results on the optimal choice of the polynomial preconditioner are given. Also, some numerical experiments for matrices arising from finite difference approximations to the complex Helmholtz equation are reported.

  12. Exact Integrations of Polynomials and Symmetric Quadrature Formulas over Arbitrary Polyhedral Grids

    NASA Technical Reports Server (NTRS)

    Liu, Yen; Vinokur, Marcel

    1997-01-01

    This paper is concerned with two important elements in the high-order accurate spatial discretization of finite volume equations over arbitrary grids. One element is the integration of basis functions over arbitrary domains, which is used in expressing various spatial integrals in terms of discrete unknowns. The other consists of quadrature approximations to those integrals. Only polynomial basis functions applied to polyhedral and polygonal grids are treated here. Non-triangular polygonal faces are subdivided into a union of planar triangular facets, and the resulting triangulated polyhedron is subdivided into a union of tetrahedra. The straight line segment, triangle, and tetrahedron are thus the fundamental shapes that are the building blocks for all integrations and quadrature approximations. Integrals of products up to the fifth order are derived in a unified manner for the three fundamental shapes in terms of the position vectors of vertices. Results are given both in terms of tensor products and products of Cartesian coordinates. The exact polynomial integrals are used to obtain symmetric quadrature approximations of any degree of precision up to five for arbitrary integrals over the three fundamental domains. Using a coordinate-free formulation, simple and rational procedures are developed to derive virtually all quadrature formulas, including some previously unpublished. Four symmetry groups of quadrature points are introduced to derive Gauss formulas, while their limiting forms are used to derive Lobatto formulas. Representative Gauss and Lobatto formulas are tabulated. The relative efficiency of their application to polyhedral and polygonal grids is detailed. The extension to higher degrees of precision is discussed.

  13. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the best discovered network weights by a specially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on benchmark time series.

  14. Time-domain representation of frequency-dependent foundation impedance functions

    USGS Publications Warehouse

    Safak, E.

    2006-01-01

    Foundation impedance functions provide a simple means to account for soil-structure interaction (SSI) when studying seismic response of structures. Impedance functions represent the dynamic stiffness of the soil media surrounding the foundation. The fact that impedance functions are frequency dependent makes it difficult to incorporate SSI in standard time-history analysis software. This paper introduces a simple method to convert frequency-dependent impedance functions into time-domain filters. The method is based on the least-squares approximation of impedance functions by ratios of two complex polynomials. Such ratios are equivalent, in the time-domain, to discrete-time recursive filters, which are simple finite-difference equations giving the relationship between foundation forces and displacements. These filters can easily be incorporated into standard time-history analysis programs. Three examples are presented to show the applications of the method.
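
    The sketch below illustrates the core step under stated assumptions: a sampled frequency response is approximated by a ratio of two polynomials in e^{-jω} via Levi's linearized least squares, which corresponds to a discrete-time recursive filter. The target response is a made-up stand-in for a foundation impedance function, and the exact weighting and fitting choices of the paper are not reproduced.

    ```python
    import numpy as np
    from scipy.signal import freqz

    # Target frequency response H(w): a hypothetical stand-in.
    w = np.linspace(0.01, np.pi * 0.9, 200)
    H = 1.0 / (1.0 - 0.8 * np.exp(-1j * w)) + 0.3

    nb, na = 2, 2                                    # numerator/denominator orders
    E = np.exp(-1j * np.outer(w, np.arange(max(nb, na) + 1)))
    # Linearized residual: B(e^{-jw}) - H * (A(e^{-jw}) - 1) = H, with a0 = 1.
    M = np.hstack([E[:, :nb + 1], -H[:, None] * E[:, 1:na + 1]])

    # Stack real and imaginary parts so lstsq sees a real problem.
    Mr = np.vstack([M.real, M.imag])
    hr = np.concatenate([H.real, H.imag])
    x, *_ = np.linalg.lstsq(Mr, hr, rcond=None)

    b = x[:nb + 1]
    a = np.concatenate([[1.0], x[nb + 1:]])          # recursive filter coefficients
    _, Hfit = freqz(b, a, worN=w)
    print("max fit error:", np.max(np.abs(Hfit - H)))
    ```

    The resulting (b, a) pair is exactly the finite-difference relation between foundation forces and displacements described above, and can be applied in a time-history code with a standard recursive filter.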

  15. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  16. Use of dirichlet distributions and orthogonal projection techniques for the fluctuation analysis of steady-state multivariate birth-death systems

    NASA Astrophysics Data System (ADS)

    Palombi, Filippo; Toti, Simona

    2015-05-01

    Approximate weak solutions of the Fokker-Planck equation represent a useful tool to analyze the equilibrium fluctuations of birth-death systems, as they provide a quantitative knowledge lying in between numerical simulations and exact analytic arguments. In this paper, we adapt the general mathematical formalism known as the Ritz-Galerkin method for partial differential equations to the Fokker-Planck equation with time-independent polynomial drift and diffusion coefficients on the simplex. Then, we show how the method works in two examples, namely the binary and multi-state voter models with zealots.

  17. The MusIC method: a fast and quasi-optimal solution to the muscle forces estimation problem.

    PubMed

    Muller, A; Pontonnier, C; Dumont, G

    2018-02-01

    The present paper presents a fast and quasi-optimal method of muscle force estimation: the MusIC method. It consists of interpolating a first estimate from a database generated offline by solving a classical optimization problem, and then correcting it to respect the motion dynamics. Three different cost functions - two polynomial criteria and a min/max criterion - were tested on a planar musculoskeletal model. The MusIC method provides a computation frequency approximately 10 times higher than that of a classical optimization approach, with a relative mean error of 4% on cost function evaluation.

  18. A pressure-based semi-implicit space-time discontinuous Galerkin method on staggered unstructured meshes for the solution of the compressible Navier-Stokes equations at all Mach numbers

    NASA Astrophysics Data System (ADS)

    Tavelli, Maurizio; Dumbser, Michael

    2017-07-01

    We propose a new arbitrary high order accurate semi-implicit space-time discontinuous Galerkin (DG) method for the solution of the two and three dimensional compressible Euler and Navier-Stokes equations on staggered unstructured curved meshes. The method is pressure-based and semi-implicit and is able to deal with all Mach number flows. The new DG scheme extends the seminal ideas outlined in [1], where a second order semi-implicit finite volume method for the solution of the compressible Navier-Stokes equations with a general equation of state was introduced on staggered Cartesian grids. Regarding the high order extension we follow [2], where a staggered space-time DG scheme for the incompressible Navier-Stokes equations was presented. In our scheme, the discrete pressure is defined on the primal grid, while the discrete velocity field and the density are defined on a face-based staggered dual grid. Then, the mass conservation equation, as well as the nonlinear convective terms in the momentum equation and the transport of kinetic energy in the energy equation are discretized explicitly, while the pressure terms appearing in the momentum and energy equation are discretized implicitly. Formal substitution of the discrete momentum equation into the total energy conservation equation yields a linear system for only one unknown, namely the scalar pressure. Here the equation of state is assumed linear with respect to the pressure. The enthalpy and the kinetic energy are taken explicitly and are then updated using a simple Picard procedure. Thanks to the use of a staggered grid, the final pressure system is a very sparse block five-point system for three dimensional problems and it is a block four-point system in the two dimensional case. Furthermore, for high order in space and piecewise constant polynomials in time, the system is observed to be symmetric and positive definite. This allows the use of fast linear solvers such as the conjugate gradient (CG) method. In addition, all the volume and surface integrals needed by the scheme depend only on the geometry and the polynomial degree of the basis and test functions and can therefore be precomputed and stored in a preprocessing stage. This leads to significant savings in terms of computational effort for the time evolution part. In this way also the extension to a fully curved isoparametric approach becomes natural and affects only the preprocessing step. The viscous terms and the heat flux are also discretized making use of the staggered grid by defining the viscous stress tensor and the heat flux vector on the dual grid, which corresponds to the use of a lifting operator, but on the dual grid. The time step of our new numerical method is limited by a CFL condition based only on the fluid velocity and not on the sound speed. This makes the method particularly interesting for low Mach number flows. Finally, a very simple combination of artificial viscosity and the a posteriori MOOD technique allows the scheme to deal with shock waves and thus also permits the simulation of high Mach number flows. We show computational results for a large set of two and three-dimensional benchmark problems, including both low and high Mach number flows and using polynomial approximation degrees up to p = 4.

  19. Airfoil Shape Optimization based on Surrogate Model

    NASA Astrophysics Data System (ADS)

    Mukesh, R.; Lingadurai, K.; Selvakumar, U.

    2018-02-01

    Engineering design problems always require an enormous amount of real-time experiments and computational simulations in order to assess and ensure the design objectives of the problems subject to various constraints. In most cases, the computational resources and time required per simulation are large. In certain cases like sensitivity analysis and design optimisation, where thousands or millions of simulations have to be carried out, the computational burden becomes prohibitive for designers. Nowadays approximation models, otherwise called surrogate models (SM), are more widely employed in order to reduce the requirement of computational resources and time in analysing various engineering systems. Various approaches such as Kriging, neural networks, polynomials, Gaussian processes etc. are used to construct the approximation models. The primary intention of this work is to employ the k-fold cross validation approach to study and evaluate the influence of various theoretical variogram models on the accuracy of the surrogate model construction. Ordinary Kriging and design of experiments (DOE) approaches are used to construct the SMs by approximating panel and viscous solution algorithms which are primarily used to solve the flow around airfoils and aircraft wings. The method of coupling the SMs with a suitable optimisation scheme to carry out an aerodynamic design optimisation process for airfoil shapes is also discussed.
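
    A minimal sketch of surrogate construction with k-fold cross-validation, using scikit-learn's Gaussian-process regressor as the Kriging model; the cheap analytic "solver", design size, and RBF kernel are assumptions standing in for the panel/viscous solvers and variogram models studied in the paper.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, ConstantKernel
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.uniform(-1, 1, size=(60, 2))       # design sites (e.g., shape parameters)
    y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2     # hypothetical solver response

    gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=0.5),
                                  normalize_y=True)
    scores = cross_val_score(gp, X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    print("5-fold RMSE:", -scores.mean())      # compare across kernel choices
    ```

    Repeating the cross-validation with different kernels (the counterpart of different variogram models) and keeping the one with the lowest held-out error is the essence of the selection procedure described above.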

  20. Reachability Analysis in Probabilistic Biological Networks.

    PubMed

    Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2015-01-01

    Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.

  1. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch, as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear, models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity in post-corrected data when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
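
    As a rough sketch of higher-order polynomial NUC, the snippet fits, per pixel, a cubic mapping from raw response to a uniform reference at several calibration levels and then applies that mapping to a frame; the synthetic pixel model and calibration levels are assumptions, not the SWIR camera data used in the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    h, w, levels = 4, 4, 8
    ref = np.linspace(0.1, 1.0, levels)                       # reference radiances

    # Synthetic nonlinear pixel responses with per-pixel gain spread.
    gain = 1 + 0.1 * rng.normal(size=(h, w))
    raw = (gain[None] * ref[:, None, None] ** 1.1
           + 0.005 * rng.normal(size=(levels, h, w)))

    # Fit, per pixel, a cubic mapping raw -> reference (the correction).
    coef = np.empty((4, h, w))
    for i in range(h):
        for j in range(w):
            coef[:, i, j] = np.polyfit(raw[:, i, j], ref, deg=3)

    def correct(frame):
        """Apply the stored per-pixel cubic correction to one frame."""
        return sum(coef[k] * frame ** (3 - k) for k in range(4))

    print("residual non-uniformity:", np.std(correct(raw[4])))
    ```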

  2. Transfer matrix computation of generalized critical polynomials in percolation

    DOE PAGES

    Scullard, Christian R.; Jacobsen, Jesper Lykke

    2012-09-27

    Percolation thresholds have recently been studied by means of a graph polynomial P_B(p), henceforth referred to as the critical polynomial, that may be defined on any periodic lattice. The polynomial depends on a finite subgraph B, called the basis, and the way in which the basis is tiled to form the lattice. The unique root of P_B(p) in [0, 1] either gives the exact percolation threshold for the lattice, or provides an approximation that becomes more accurate with appropriately increasing size of B. Initially P_B(p) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give an alternative probabilistic definition of P_B(p), which allows for much more efficient computations, by using the transfer matrix, than was previously possible with contraction-deletion. We present bond percolation polynomials for the (4, 8^2), kagome, and (3, 12^2) lattices for bases of up to respectively 96, 162, and 243 edges, much larger than the previous limit of 36 edges using contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. For the largest bases, we obtain the thresholds p_c(4, 8^2) = 0.676 803 329 · · ·, p_c(kagome) = 0.524 404 998 · · ·, p_c(3, 12^2) = 0.740 420 798 · · ·, comparable to the best simulation results. We also show that the alternative definition of P_B(p) can be applied to study site percolation problems.

  3. Fully Dynamic Bin Packing

    NASA Astrophysics Data System (ADS)

    Ivković, Zoran; Lloyd, Errol L.

    Classic bin packing seeks to pack a given set of items of possibly varying sizes into a minimum number of identically sized bins. A number of approximation algorithms have been proposed for this NP-hard problem for both the on-line and off-line cases. In this chapter we discuss fully dynamic bin packing, where items may arrive (Insert) and depart (Delete) dynamically. In accordance with standard practice for fully dynamic algorithms, it is assumed that the packing may be arbitrarily rearranged to accommodate arriving and departing items. The goal is to maintain an approximately optimal solution of provably high quality in a total amount of time comparable to that used by an off-line algorithm delivering a solution of the same quality.
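
    A toy illustration of the Insert/Delete interface, assuming first-fit placement and a full first-fit-decreasing repack on deletion; this sketches only the problem setting, not the provably good algorithm analyzed in the chapter.

      class DynamicBins:
          """Toy fully dynamic bin packing with Insert/Delete (illustrative only)."""

          def __init__(self, capacity=1.0):
              self.capacity = capacity
              self.bins = []          # each bin is a dict: item_id -> size
              self._next_id = 0

          def _place(self, item, size):
              for b in self.bins:     # first-fit
                  if sum(b.values()) + size <= self.capacity:
                      b[item] = size
                      return
              self.bins.append({item: size})

          def insert(self, size):
              item = self._next_id
              self._next_id += 1
              self._place(item, size)
              return item

          def delete(self, item):
              # Rearrangement is allowed: drop the item, repack first-fit decreasing.
              rest = [(i, s) for b in self.bins for i, s in b.items() if i != item]
              self.bins = []
              for i, s in sorted(rest, key=lambda t: -t[1]):
                  self._place(i, s)

      packer = DynamicBins()
      ids = [packer.insert(s) for s in (0.6, 0.5, 0.4, 0.3)]
      packer.delete(ids[0])
      print(len(packer.bins))   # 2 bins remain after repacking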

  4. Near-optimal experimental design for model selection in systems biology.

    PubMed

    Busetto, Alberto Giovanni; Hauser, Alain; Krummenacher, Gabriel; Sunnåker, Mikael; Dimopoulos, Sotiris; Ong, Cheng Soon; Stelling, Jörg; Buhmann, Joachim M

    2013-10-15

    Biological systems are understood through iterations of modeling and experimentation. Not all experiments, however, are equally valuable for predictive modeling. This study introduces an efficient method for experimental design aimed at selecting dynamical models from data. Motivated by biological applications, the method enables the design of crucial experiments: it determines a highly informative selection of measurement readouts and time points. We demonstrate formal guarantees of design efficiency on the basis of previous results. By reducing our task to the setting of graphical models, we prove that the method finds a near-optimal design selection with a polynomial number of evaluations. Moreover, the method exhibits the best constant approximation factor achievable in polynomial time, unless P = NP. We measure the performance of the method in comparison with established alternatives, such as ensemble non-centrality, on example models of different complexity. Efficient design accelerates the loop between modeling and experimentation: it enables the inference of complex mechanisms, such as those controlling central metabolic operation. The toolbox 'NearOED' is available with source code under GPL on the Machine Learning Open Source Software Web site (mloss.org).
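
    Guarantees of this kind are typical of greedy selection under (approximate) submodularity. The sketch below shows generic greedy design selection with a hypothetical log-determinant utility, purely as an illustration of the idea; it is not the NearOED implementation, and all names are ours.

      import numpy as np

      def greedy_design(candidates, utility, budget):
          """Greedily pick `budget` measurements maximizing a set-utility function.
          For (approximately) submodular utilities, greedy selection enjoys
          near-optimality guarantees of the kind invoked in the paper."""
          chosen = []
          for _ in range(budget):
              best = max((c for c in candidates if c not in chosen),
                         key=lambda c: utility(chosen + [c]))
              chosen.append(best)
          return chosen

      # Toy utility: log-determinant (D-optimality style) of a Gram submatrix.
      rng = np.random.default_rng(1)
      X = rng.standard_normal((10, 3))                 # 10 candidate readouts
      util = lambda S: np.linalg.slogdet(X[S] @ X[S].T + 0.1 * np.eye(len(S)))[1]
      print(greedy_design(list(range(10)), util, budget=3))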

  5. A robust nonparametric framework for reconstruction of stochastic differential equation models

    NASA Astrophysics Data System (ADS)

    Rajabzadeh, Yalda; Rezaie, Amir Hossein; Amindavar, Hamidreza

    2016-05-01

    In this paper, we employ a nonparametric framework to robustly estimate the functional forms of drift and diffusion terms from discrete stationary time series. The proposed method significantly improves the accuracy of the parameter estimation. In this framework, the drift and diffusion coefficients are modeled through orthogonal Legendre polynomials. We employ a least-squares regression approach along with the Euler-Maruyama approximation method to learn the coefficients of the stochastic model. Next, a numerical discrete construction of the mean squared prediction error (MSPE) is established to select the order of the Legendre polynomials in the drift and diffusion terms. We show numerically that the new method is robust against variation in sample size and sampling rate. The performance of our method in comparison with the kernel-based regression (KBR) method is demonstrated through simulation and real data. In the case of real data, we test our method by discriminating healthy electroencephalogram (EEG) signals from epileptic ones. We also demonstrate the efficiency of the method through prediction on financial data. In both simulation and real data, our algorithm outperforms the KBR method.
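
    A minimal sketch of the estimation idea, assuming a simulated Ornstein-Uhlenbeck path: the Euler-Maruyama moment relations turn drift and squared-diffusion estimation into two linear least-squares problems in a Legendre design matrix. The simulated process and parameter values are our illustrative assumptions.

      import numpy as np

      # Simulate an Ornstein-Uhlenbeck process with Euler-Maruyama:
      # dX = -X dt + 0.5 dW, so drift(x) = -x and diffusion(x) = 0.5.
      rng = np.random.default_rng(2)
      dt, n = 1e-3, 50_000
      x = np.empty(n); x[0] = 0.0
      for k in range(n - 1):
          x[k + 1] = x[k] - x[k] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

      # Least-squares fits in a Legendre basis, using the Euler-Maruyama relations
      #   E[dX | x] ~ drift(x) dt   and   E[dX^2 | x] ~ diffusion(x)^2 dt.
      deg = 4
      xs = x[:-1] / np.abs(x).max()                   # map samples into [-1, 1]
      V = np.polynomial.legendre.legvander(xs, deg)   # design matrix of P_0..P_deg
      dxx = np.diff(x)
      c_drift, *_ = np.linalg.lstsq(V, dxx / dt, rcond=None)
      c_diff2, *_ = np.linalg.lstsq(V, dxx**2 / dt, rcond=None)
      print(c_drift.round(2))                          # dominated by the linear P_1 term
      print(np.sqrt(max(c_diff2[0], 0.0)).round(2))    # ~0.5 (constant diffusion)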

  6. Design of an essentially non-oscillatory reconstruction procedure in finite-element type meshes

    NASA Technical Reports Server (NTRS)

    Abgrall, Remi

    1992-01-01

    An essentially non-oscillatory reconstruction for functions defined on finite-element-type meshes is designed. Two related problems are studied: the interpolation of possibly unsmooth multivariate functions on arbitrary meshes, and the reconstruction of a function from its averages in the control volumes surrounding the nodes of the mesh. Concerning the first problem, the behavior of the highest coefficients of two polynomial interpolations of a function that may admit discontinuities across locally regular curves is studied: the Lagrange interpolation, and an approximation such that the mean of the polynomial on any control volume is equal to that of the function to be approximated. This enables the best stencil for the approximation to be chosen. The choice of the smallest possible number of stencils is addressed. Concerning the reconstruction problem, two methods were studied: one based on an adaptation of the so-called reconstruction-via-deconvolution method to irregular meshes, and one that relies on the approximation of the mean as defined above. The first method is conservative up to a quadrature formula and the second one is exactly conservative. The two methods have the expected order of accuracy, but the second one is much less expensive than the first. Some numerical examples are given which demonstrate the efficiency of the reconstruction.

  7. The TSP-approach to approximate solving the m-Cycles Cover Problem

    NASA Astrophysics Data System (ADS)

    Gimadi, Edward Kh.; Rykov, Ivan; Tsidulko, Oxana

    2016-10-01

    In the m-Cycles Cover Problem (m-CCP) it is required to find a collection of m vertex-disjoint cycles that covers all vertices of the graph such that the total weight of the edges in the cover is minimum (or maximum). The problem is a generalization of the Traveling Salesman Problem (TSP) and is strongly NP-hard. We discuss a TSP-approach that yields polynomial-time approximate solutions for this problem: it transforms an approximation TSP algorithm into an approximation m-CCP algorithm. In this paper we present a number of successful transformations with proven performance guarantees for the obtained solutions.

  8. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute developments to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.
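
    Two of the truncated polynomial spaces named above are easy to make concrete. The sketch below enumerates Total Degree and Hyperbolic Cross multi-index sets; these are the standard constructions, but the function names and the particular admissibility thresholds are our illustrative choices.

      from itertools import product

      def total_degree(dim, order):
          """Multi-indices k with |k|_1 <= order (Total Degree polynomial space)."""
          return [k for k in product(range(order + 1), repeat=dim)
                  if sum(k) <= order]

      def hyperbolic_cross(dim, order):
          """Multi-indices with prod(k_i + 1) <= order + 1 (Hyperbolic Cross space)."""
          out = []
          for k in product(range(order + 1), repeat=dim):
              p = 1
              for ki in k:
                  p *= ki + 1
              if p <= order + 1:
                  out.append(k)
          return out

      # The hyperbolic cross is much sparser in higher dimensions.
      print(len(total_degree(3, 4)), len(hyperbolic_cross(3, 4)))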

  9. From the Boltzmann to the Lattice-Boltzmann Equation:. Beyond BGK Collision Models

    NASA Astrophysics Data System (ADS)

    Philippi, Paulo Cesar; Hegele, Luiz Adolfo; Surmas, Rodrigo; Siebert, Diogo Nardelli; Dos Santos, Luís Orlando Emerich

    In this work, we present a derivation of the lattice-Boltzmann equation directly from the linearized Boltzmann equation, combining the following main features: multiple relaxation times and thermodynamic consistency in the description of non-isothermal compressible flows. The method presented here is based on the discretization of kinetic models of the Boltzmann equation of increasing order. Following a Gross-Jackson procedure, the linearized collision term is expanded in Hermite polynomial tensors and the resulting infinite series is truncated after a chosen integer N, establishing the order of approximation of the collision term. The velocity space is discretized in accordance with a quadrature method based on prescribed abscissas (Philippi et al., Phys. Rev. E 73, 056702, 2006). The problem of describing the energy transfer is discussed in relation to the order of approximation of a two-relaxation-times lattice Boltzmann model. The velocity-step, temperature-step and shock tube problems are investigated, adopting lattices with 37, 53 and 81 velocities.

  10. A 14-year dataset of in situ glacier surface velocities for a tidewater and a land-terminating glacier in Livingston Island, Antarctica

    NASA Astrophysics Data System (ADS)

    Machío, Francisco; Rodríguez-Cielos, Ricardo; Navarro, Francisco; Lapazaran, Javier; Otero, Jaime

    2017-10-01

    We present a 14-year record of in situ glacier surface velocities determined by repeated global navigation satellite system (GNSS) measurements in a dense network of 52 stakes distributed across two glaciers, Johnsons (tidewater) and Hurd (land-terminating), located on Livingston Island, South Shetland Islands, Antarctica. The measurements cover the time period 2000-2013 and were collected at the beginning and end of each austral summer season. A second-degree polynomial approximation is fitted to each stake position, which allows estimating the approximate positions and associated velocities at intermediate times. This dataset is useful as input data for numerical models of glacier dynamics or for the calibration and validation of remotely sensed velocities for a region where very scarce in situ glacier surface velocity measurements have been available so far. The link to the data repository is as follows: http://doi.pangaea.de/10.1594/PANGAEA.846791.
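
    A minimal sketch of the described processing for one stake, with invented illustrative numbers: fit a second-degree polynomial to the measured positions and differentiate it to estimate velocity at intermediate times.

      import numpy as np

      # One stake's along-flow position (m) at season start/end; time in years
      # since 2000.0. Values are illustrative, not from the dataset.
      t = np.array([0.9, 1.2, 1.9, 2.2, 2.9, 3.2])
      x = np.array([100.0, 103.1, 110.4, 113.9, 121.7, 125.4])

      c = np.polyfit(t, x, deg=2)     # x(t) ~ c[0] t^2 + c[1] t + c[2]
      dc = np.polyder(c)              # velocity polynomial dx/dt

      t_mid = 1.5                     # any intermediate time
      print(np.polyval(c, t_mid), np.polyval(dc, t_mid))  # position (m), velocity (m/a)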

  11. Spectral Element Method for the Simulation of Unsteady Compressible Flows

    NASA Technical Reports Server (NTRS)

    Diosady, Laslo Tibor; Murman, Scott M.

    2013-01-01

    This work uses a discontinuous-Galerkin spectral-element method (DGSEM) to solve the compressible Navier-Stokes equations [1-3]. The inviscid flux is computed using the approximate Riemann solver of Roe [4]. The viscous fluxes are computed using the second form of Bassi and Rebay (BR2) [5] in a manner consistent with the spectral-element approximation. The method of lines with the classical 4th-order explicit Runge-Kutta scheme is used for time integration. Results for polynomial orders up to p = 15 (16th order) are presented. The code is parallelized using the Message Passing Interface (MPI). The computations presented in this work were performed using the Sandy Bridge nodes of the NASA Pleiades supercomputer at NASA Ames Research Center. Each Sandy Bridge node consists of two eight-core Intel Xeon E5-2670 processors with a clock speed of 2.6 GHz and 2 GB of memory per core. On a Sandy Bridge node the Tau Benchmark [6] runs in a time of 7.6 s.

  12. Histogram-driven cupping correction (HDCC) in CT

    NASA Astrophysics Data System (ADS)

    Kyriakou, Y.; Meyer, M.; Lapp, R.; Kalender, W. A.

    2010-04-01

    Typical cupping correction methods are pre-processing methods which require either pre-calibration measurements or simulations of standard objects to approximate and correct for beam hardening and scatter. Some of them require knowledge of spectra, detector characteristics, etc. The aim of this work was to develop a practical histogram-driven cupping correction (HDCC) method to post-process the reconstructed images. We use a polynomial representation of the raw data generated by forward projection of the reconstructed images; forward and backprojection are performed on graphics processing units (GPU). The coefficients of the polynomial are optimized using a simplex minimization of the joint entropy of the CT image and its gradient. The algorithm was evaluated using simulations and measurements of homogeneous and inhomogeneous phantoms. For the measurements, a C-arm flat-detector CT (FD-CT) system with a 30 × 40 cm² detector, a kilovoltage on-board imager (radiation therapy simulator) and a micro-CT system were used. The algorithm reduced cupping artifacts both in simulations and measurements using a fourth-order polynomial and was in good agreement with the reference. The minimization algorithm required fewer than 70 iterations to adjust the coefficients, performing only a linear combination of basis images and thus avoiding time-consuming operations. HDCC reduced cupping artifacts without the need for pre-calibration or other scan information, enabling a retrospective improvement of CT image homogeneity. The method can also be combined with other cupping correction algorithms or used in a calibration manner.
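
    The optimization loop can be sketched in a few lines: choose weights for a linear combination of basis images by simplex (Nelder-Mead) minimization of the joint entropy of the image and its gradient magnitude. The phantom, the choice of powers of the input as basis images, and the bin count are toy assumptions, not the paper's GPU pipeline.

      import numpy as np
      from scipy.optimize import minimize

      def joint_entropy(img, bins=64):
          """Shannon entropy of the joint histogram of an image and its gradient."""
          g0, g1 = np.gradient(img)
          g = np.hypot(g0, g1)
          h, *_ = np.histogram2d(img.ravel(), g.ravel(), bins=bins)
          p = h / h.sum()
          p = p[p > 0]
          return -(p * np.log(p)).sum()

      # Synthetic cupped phantom and basis images (powers of the input).
      yy, xx = np.mgrid[-1:1:128j, -1:1:128j]
      r2 = xx**2 + yy**2
      phantom = (r2 < 0.8).astype(float)
      cupped = phantom * (1.0 - 0.3 * (0.8 - r2).clip(0))   # synthetic cupping
      basis = [cupped, cupped**2]

      def objective(w):
          img = cupped + w[0] * basis[0] + w[1] * basis[1]
          return joint_entropy(img)

      res = minimize(objective, x0=[0.0, 0.0], method='Nelder-Mead')  # simplex
      print(res.x, res.fun <= objective([0.0, 0.0]))  # entropy did not increase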

  13. A parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1993-01-01

    A parallel algorithm, called polysection, is presented for computing the eigenvalues of a symmetric tridiagonal matrix. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level, which ensures that different processors compute different zeros. The signs of the polynomials at the interval endpoints are determined a priori and used to guarantee that all zeros are found. The use of finite-precision arithmetic may result in multiple zeros; however, in this case the intervals coalesce and their number determines exactly the multiplicity of the zero. For an N × N matrix the eigenvalues can be determined in O(log² N) time with N² processors and O(N) time with N processors. The method is compared with a parallel variant of bisection that requires O(N²) time on a single processor, O(N) time with N processors, and O(log N) time with N² processors.
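
    Polysection parallelizes the classical Sturm-count bisection for this problem. The serial building block looks like the following sketch (standard LDLᵀ sign count; variable names are ours).

      import numpy as np

      def eig_count(d, e, x):
          """Number of eigenvalues of the symmetric tridiagonal matrix (diagonal d,
          off-diagonal e) that are smaller than x, via the Sturm/LDL^T sign count."""
          count, q = 0, 1.0
          for i in range(len(d)):
              q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
              if q == 0.0:
                  q = 1e-300            # guard against exact hits
              if q < 0:
                  count += 1
          return count

      def kth_eigenvalue(d, e, k, lo, hi, tol=1e-12):
          """Bisection for the k-th smallest eigenvalue inside [lo, hi]."""
          while hi - lo > tol:
              mid = 0.5 * (lo + hi)
              if eig_count(d, e, mid) >= k + 1:
                  hi = mid
              else:
                  lo = mid
          return 0.5 * (lo + hi)

      d = np.array([2.0, 2.0, 2.0, 2.0]); e = np.array([-1.0, -1.0, -1.0])
      print([round(kth_eigenvalue(d, e, k, 0.0, 4.0), 6) for k in range(4)])
      print(np.linalg.eigvalsh(np.diag(d) + np.diag(e, 1) + np.diag(e, -1)).round(6))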

  14. Pulse transmission transmitter including a higher order time derivate filter

    DOEpatents

    Dress, Jr., William B.; Smith, Stephen F.

    2003-09-23

    Systems and methods for pulse-transmission low-power communication modes are disclosed. A pulse transmission transmitter includes: a clock; a pseudorandom polynomial generator coupled to the clock, the pseudorandom polynomial generator having a polynomial load input; an exclusive-OR gate coupled to the pseudorandom polynomial generator, the exclusive-OR gate having a serial data input; a programmable delay circuit coupled to both the clock and the exclusive-OR gate; a pulse generator coupled to the programmable delay circuit; and a higher-order time-derivative filter coupled to the pulse generator. The systems and methods significantly reduce lower-frequency emissions from pulse-transmission spread-spectrum communication modes, which reduces potentially harmful interference to existing radio frequency services and users, and they simultaneously permit transmission of multiple data bits by utilizing specific pulse shapes.
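
    The pseudorandom polynomial generator is, in essence, a linear-feedback shift register defined by a feedback polynomial. A minimal Fibonacci-LFSR sketch follows; the 4-bit polynomial x⁴ + x + 1, the seed, and the XOR spreading step are illustrative assumptions, not the patented configuration.

      def lfsr_stream(state, taps, n):
          """Fibonacci LFSR: yields n pseudorandom bits. `taps` lists the stage
          positions (1-based) of the feedback polynomial's nonzero terms."""
          bits = []
          for _ in range(n):
              bits.append(state & 1)
              fb = 0
              for t in taps:
                  fb ^= (state >> (t - 1)) & 1
              state = (state >> 1) | (fb << (max(taps) - 1))
          return bits

      # x^4 + x + 1 is primitive, so the 4-bit register has maximal period 15.
      seq = lfsr_stream(state=0b1001, taps=[4, 1], n=15)
      print(seq)

      # Spreading: XOR each data bit with the pseudorandom sequence, as done by
      # the exclusive-OR gate in the transmitter.
      data = [1, 0, 1]
      chips = [d ^ c for d, c in zip(data * 5, seq)]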

  15. Matrix form of Legendre polynomials for solving linear integro-differential equations of high order

    NASA Astrophysics Data System (ADS)

    Kammuji, M.; Eshkuvatov, Z. K.; Yunus, Arif A. M.

    2017-04-01

    This paper presents an effective approximate solution method for high-order Fredholm-Volterra integro-differential equations (FVIDEs) with boundary conditions. A truncated Legendre series is used as the basis functions to estimate the unknown function. Matrix operations on Legendre polynomials are used to transform FVIDEs with boundary conditions into a matrix equation of Fredholm-Volterra type. The Gauss-Legendre quadrature formula and the collocation method are applied to transfer the matrix equation into a system of linear algebraic equations, which is solved by Gaussian elimination. The accuracy and validity of this method are discussed by solving two numerical examples and through comparisons with wavelet-based and other methods.
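
    As a reminder of the quadrature ingredient: an m-point Gauss-Legendre rule integrates polynomials up to degree 2m − 1 exactly. A short check with a degree-8 integrand:

      import numpy as np

      # 5-point Gauss-Legendre rule on [-1, 1]: exact for polynomials up to degree 9.
      nodes, weights = np.polynomial.legendre.leggauss(5)
      f = lambda t: t**8 + 2 * t**2          # degree-8 test integrand
      approx = weights @ f(nodes)
      exact = 2/9 + 4/3                       # integral of t^8 + 2 t^2 over [-1, 1]
      print(approx, exact)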

  16. On the Gibbs phenomenon 3: Recovering exponential accuracy in a sub-interval from a spectral partial sum of a piecewise analytic function

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Shu, Chi-Wang

    1993-01-01

    The investigation of overcoming the Gibbs phenomenon was continued, i.e., obtaining exponential accuracy at all points, including at the discontinuities themselves, from the knowledge of a spectral partial sum of a discontinuous but piecewise analytic function. It was shown that if we are given the first N expansion coefficients of an L₂ function f(x) in terms of either the trigonometric polynomials or the Chebyshev or Legendre polynomials, an exponentially convergent approximation to the point values of f(x) in any sub-interval in which it is analytic can be constructed.

  17. Quadrature formula for evaluating left bounded Hadamard type hypersingular integrals

    NASA Astrophysics Data System (ADS)

    Bichi, Sirajo Lawan; Eshkuvatov, Z. K.; Nik Long, N. M. A.; Okhunov, Abdurahim

    2014-12-01

    Left semi-bounded Hadamard-type hypersingular integrals (HSI) of the form H(h, x) = (1/π) √((1+x)/(1−x)) ∫₋₁¹ √((1−t)/(1+t)) h(t)/(t−x)² dt, x ∈ (−1, 1), where h(t) is a smooth function, are considered. The automatic quadrature scheme (AQS) is constructed by approximating the density function h(t) by truncated Chebyshev polynomials of the fourth kind. Numerical results reveal that the proposed AQS is highly accurate when h(t) is chosen to be a polynomial or a rational function. The results are in line with the theoretical findings.

  18. Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2015-11-01

    Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow-rate relationship at the outlets. Finally, we consider the problem of non-intrusively propagating the uncertainty in model parameters to the resulting hemodynamics, and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.

  19. Quadrature imposition of compatibility conditions in Chebyshev methods

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Streett, C. L.

    1990-01-01

    Often, in solving an elliptic equation with Neumann boundary conditions, a compatibility condition has to be imposed for well-posedness. This condition involves integrals of the forcing function. When pseudospectral Chebyshev methods are used to discretize the partial differential equation, these integrals have to be approximated by an appropriate quadrature formula. The Gauss-Chebyshev formula (or any variant of it, like the Gauss-Lobatto) cannot be used here, since the integrals under consideration do not include the weight function. A natural candidate for approximating the integrals is the Clenshaw-Curtis formula; however, it is shown that this is the wrong choice, and it may lead to divergence if time-dependent methods are used to march the solution to steady state. The correct quadrature formula is developed for these problems. This formula takes into account the degree of the polynomials involved. It is shown that this formula leads to a well-conditioned Chebyshev approximation to the differential equations and that the compatibility condition is automatically satisfied.

  20. A quasi-Lagrangian finite element method for the Navier-Stokes equations in a time-dependent domain

    NASA Astrophysics Data System (ADS)

    Lozovskiy, Alexander; Olshanskii, Maxim A.; Vassilevski, Yuri V.

    2018-05-01

    The paper develops a finite element method for the Navier-Stokes equations of incompressible viscous fluid in a time-dependent domain. The method builds on a quasi-Lagrangian formulation of the problem. The paper provides stability and convergence analysis of the fully discrete (finite-difference in time and finite-element in space) method. The analysis does not assume any CFL time-step restriction; it rather needs mild conditions of the form $\Delta t \le C$, where $C$ depends only on problem data, and $h^{2m_u+2} \le c\,\Delta t$, where $m_u$ is the polynomial degree of the velocity finite element space. Both conditions result from a numerical treatment of practically important non-homogeneous boundary conditions. The theoretically predicted convergence rate is confirmed by a set of numerical experiments. Further we apply the method to simulate a flow in a simplified model of the left ventricle of a human heart, where the ventricle wall dynamics is reconstructed from a sequence of contrast enhanced Computed Tomography images.

  1. Modular Expression Language for Ordinary Differential Equation Editing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Blake, Robert C.

    MELODEE is a system for describing systems of initial-value-problem ordinary differential equations, and a compiler for the language that produces optimized code to integrate the differential equations. Features include rational polynomial approximation for expensive functions and automatic differentiation for symbolic Jacobians.
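
    As a generic illustration of rational polynomial approximation of an expensive function (not necessarily MELODEE's mechanism), a Padé approximant of exp can be built from its Taylor coefficients with scipy:

      import numpy as np
      from scipy.interpolate import pade

      # Taylor coefficients of exp(x) up to x^5, in ascending order.
      an = [1.0, 1.0, 1/2, 1/6, 1/24, 1/120]
      p, q = pade(an, 2)              # [3/2] rational (Pade) approximant
      x = 1.0
      print(p(x) / q(x), np.exp(x))   # both close to 2.71828...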

  2. Optical computation using residue arithmetic.

    PubMed

    Huang, A; Tsunoda, Y; Goodman, J W; Ishihara, S

    1979-01-15

    Using residue arithmetic it is possible to perform additions, subtractions, multiplications, and polynomial evaluation without the necessity for carry operations. Calculations can, therefore, be performed in a fully parallel manner. Several different optical methods for performing residue arithmetic operations are described. A possible combination of such methods to form a matrix vector multiplier is considered. The potential advantages of optics in performing these kinds of operations are discussed.
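
    A minimal numerical sketch of the carry-free property: each residue channel is updated independently (this independence is what enables the optical parallelism), and the Chinese Remainder Theorem reconstructs the result. The moduli are chosen for illustration.

      from math import prod

      moduli = (7, 11, 13)                   # pairwise coprime; dynamic range = 1001
      M = prod(moduli)

      to_residues = lambda x: tuple(x % m for m in moduli)

      def from_residues(r):
          """Chinese Remainder Theorem reconstruction."""
          x = 0
          for ri, mi in zip(r, moduli):
              Mi = M // mi
              x += ri * Mi * pow(Mi, -1, mi)   # modular inverse of Mi mod mi
          return x % M

      a, b = 123, 45
      # Digit-wise, carry-free operations: each channel is independent.
      s = tuple((x + y) % m for x, y, m in zip(to_residues(a), to_residues(b), moduli))
      p = tuple((x * y) % m for x, y, m in zip(to_residues(a), to_residues(b), moduli))
      print(from_residues(s), a + b)   # 168 168
      print(from_residues(p), a * b)   # 5535 5535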

  3. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.

  4. Large-scale semidefinite programming for many-electron quantum mechanics.

    PubMed

    Mazziotti, David A

    2011-02-25

    The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop a SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004)]. We illustrate with (i) the dissociation of N₂ and (ii) the metal-to-insulator transition of H₅₀. For H₅₀ the SDP problem has 9.4×10⁶ variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics.

  5. Large-Scale Semidefinite Programming for Many-Electron Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Mazziotti, David A.

    2011-02-01

    The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop a SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004)]. We illustrate with (i) the dissociation of N₂ and (ii) the metal-to-insulator transition of H₅₀. For H₅₀ the SDP problem has 9.4×10⁶ variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics.
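
    At toy scale, the problem class is easy to state with an off-the-shelf modeling layer. The sketch below (assuming the cvxpy package is available) solves a generic SDP, minimizing the inner product of C with X over unit-trace positive semidefinite matrices, whose optimum equals the minimum eigenvalue of C. It illustrates only the problem form, not the paper's accelerated 2-RDM solver.

      import cvxpy as cp
      import numpy as np

      # Generic small SDP: minimize trace(C X) s.t. trace(X) = 1, X PSD --
      # the same problem class as the 2-RDM optimization, at vastly smaller scale.
      n = 4
      rng = np.random.default_rng(3)
      C = rng.standard_normal((n, n)); C = (C + C.T) / 2
      X = cp.Variable((n, n), PSD=True)
      prob = cp.Problem(cp.Minimize(cp.trace(C @ X)), [cp.trace(X) == 1])
      prob.solve()
      print(prob.value, np.linalg.eigvalsh(C).min())   # equal: min eigenvalue of C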

  6. Fully decoupled monolithic projection method for natural convection problems

    NASA Astrophysics Data System (ADS)

    Pan, Xiaomin; Kim, Kyoungyoun; Lee, Changhoon; Choi, Jung-Il

    2017-04-01

    To solve time-dependent natural convection problems, we propose a fully decoupled monolithic projection method. The proposed method applies the Crank-Nicolson scheme in time and the second-order central finite difference in space. To obtain a non-iterative monolithic method from the fully discretized nonlinear system, we first adopt linearizations of the nonlinear convection terms and the general buoyancy term, incurring second-order errors in time. Approximate block lower-upper decompositions, along with an approximate factorization technique, are additionally applied to the global linearly coupled system, which leads to several decoupled subsystems, i.e., a fully decoupled monolithic procedure. We establish global error estimates to verify the second-order temporal accuracy of the proposed method for velocity, pressure, and temperature in terms of a discrete l²-norm. Moreover, according to the energy evolution, the proposed method is proved to be stable if the time step is less than or equal to a constant. In addition, we provide numerical simulations of two-dimensional Rayleigh-Bénard convection and periodically forced flow. The results demonstrate that the proposed method significantly mitigates the time-step limitation, reduces the computational cost because only one Poisson equation needs to be solved, and preserves second-order temporal accuracy for velocity, pressure, and temperature. Finally, the proposed method reasonably predicts three-dimensional Rayleigh-Bénard convection for different Rayleigh numbers.

  7. An arbitrary high-order Discontinuous Galerkin method for elastic waves on unstructured meshes - III. Viscoelastic attenuation

    NASA Astrophysics Data System (ADS)

    Käser, Martin; Dumbser, Michael; de la Puente, Josep; Igel, Heiner

    2007-01-01

    We present a new numerical method to solve the heterogeneous anelastic, seismic wave equations with arbitrary high order accuracy in space and time on 3-D unstructured tetrahedral meshes. Using the velocity-stress formulation provides a linear hyperbolic system of equations with source terms that is completed by additional equations for the anelastic functions including the strain history of the material. These additional equations result from the rheological model of the generalized Maxwell body and permit the incorporation of realistic attenuation properties of viscoelastic material accounting for the behaviour of elastic solids and viscous fluids. The proposed method combines the Discontinuous Galerkin (DG) finite element (FE) method with the ADER approach using Arbitrary high order DERivatives for flux calculations. The DG approach, in contrast to classical FE methods, uses a piecewise polynomial approximation of the numerical solution which allows for discontinuities at element interfaces. Therefore, the well-established theory of numerical fluxes across element interfaces obtained by the solution of Riemann problems can be applied as in the finite volume framework. The main idea of the ADER time integration approach is a Taylor expansion in time in which all time derivatives are replaced by space derivatives using the so-called Cauchy-Kovalewski procedure, which makes extensive use of the governing PDE. Due to the ADER time integration technique the same approximation order in space and time is achieved automatically, and the method is a one-step scheme advancing the solution for one time step without intermediate stages. To this end, we introduce a new unrolled recursive algorithm for efficiently computing the Cauchy-Kovalewski procedure by making use of the sparsity of the system matrices. The numerical convergence analysis demonstrates that the new schemes provide very high order accuracy even on unstructured tetrahedral meshes, while computational cost and storage space for a desired accuracy can be reduced when applying higher-degree approximation polynomials. In addition, we investigate the increase in computing time when the number of relaxation mechanisms of the generalized Maxwell body is increased. An application to a well-established test case, and comparisons with analytic and reference solutions obtained by different well-established numerical methods, confirm the performance of the proposed method. Therefore, the development of the highly accurate ADER-DG approach for tetrahedral meshes including viscoelastic material provides a novel, flexible and efficient numerical technique to approach 3-D wave propagation problems including realistic attenuation and complex geometry.

  8. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.

  9. Unconventional Hamilton-type variational principle in phase space and symplectic algorithm

    NASA Astrophysics Data System (ADS)

    Luo, En; Huang, Weijiang; Zhang, Hexin

    2003-06-01

    By a novel approach proposed by Luo, the unconventional Hamilton-type variational principle in phase space for the elastodynamics of multi-degree-of-freedom systems is established in this paper. It not only fully characterizes the initial-value problem of these dynamics, but also has a natural symplectic structure. Based on this variational principle, a symplectic algorithm, called the symplectic time-subdomain method, is proposed. A non-difference scheme is constructed by applying a Lagrange interpolation polynomial to the time subdomain. Furthermore, it is proved that the presented symplectic algorithm is unconditionally stable. The results of two numerical examples of different types show that the accuracy and computational efficiency of the new method clearly exceed those of the widely used Wilson-θ and Newmark-β methods. Therefore, this new algorithm is a highly efficient one with better computational performance.

  10. Factorization of differential expansion for non-rectangular representations

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2018-04-01

    Factorization of the differential expansion (DE) coefficients for colored HOMFLY-PT polynomials of antiparallel double braids, originally discovered for rectangular representations R, is extended to the first non-rectangular representations R = [2, 1] and R = [3, 1]. This increases the chances that such factorization will take place for generic R, thus fixing the shape of the DE. We illustrate the power of the method by conjecturing the DE-induced expression for double-braid polynomials for all R = [r, 1]. At variance with the rectangular case, the knowledge of double braids is not fully sufficient to deduce the exclusive Racah matrix S̄, since the entries in the sectors with nontrivial multiplicities sum up and remain unseparated. Still, a considerable piece of the matrix is extracted directly and its other elements can be found by solving the unitarity constraints.

  11. Design of a wearable hand exoskeleton for exercising flexion/extension of the fingers.

    PubMed

    Jo, Inseong; Lee, Jeongsoo; Park, Yeongyu; Bae, Joonbum

    2017-07-01

    In this paper, the design of a wearable hand exoskeleton system for exercising flexion/extension of the fingers is proposed. The exoskeleton was designed with a simple and wearable structure to aid finger motions in 1 degree of freedom (DOF). A hand-grasping experiment with fully abled people was performed to investigate general hand flexion/extension motions, and a polynomial curve of general hand motions was obtained. To customize the hand exoskeleton for the user, the polynomial curve was adjusted to the joint range of motion (ROM) of the user and the optimal design of the exoskeleton structure was obtained using an optimization algorithm. A prototype divided into two parts (one part for the thumb, the other for the remaining fingers) was actuated by only two linear motors for compact size and light weight.

  12. The expression and comparison of healthy and ptotic upper eyelid contours using a polynomial mathematical function.

    PubMed

    Mocan, Mehmet C; Ilhan, Hacer; Gurcay, Hasmet; Dikmetas, Ozlem; Karabulut, Erdem; Erdener, Ugur; Irkec, Murat

    2014-06-01

    To derive a mathematical expression for the healthy upper eyelid (UE) contour and to use this expression to differentiate the normal UE curve from its abnormal configuration in the setting of blepharoptosis. The study was designed as a cross-sectional study. Fifty healthy subjects (26M/24F) and 50 patients with blepharoptosis (28M/22F) with a margin-reflex distance (MRD1) of ≤2.5 mm were recruited. A polynomial interpolation was used to approximate the UE curve. The polynomial coefficients were calculated from digital eyelid images of all participants using a set of operator-defined points along the UE curve. Coefficients up to the fourth-order polynomial, the iris area covered by the UE, the iris area covered by the lower eyelid, and the total iris area covered by both the upper and the lower eyelids were defined using the polynomial function and used in statistical comparisons. The t-test, Mann-Whitney U test and Spearman's correlation test were used for statistical comparisons. The mathematical expression derived from the data of 50 healthy subjects aged 24.1 ± 2.6 years was y = 22.0915 − 1.3213x + 0.0318x² − 0.0005x³. The fifth and subsequent coefficients were <0.00001 in all cases and were not included in the polynomial function. None of the first four coefficients of the equation were found to be significantly different in male versus female subjects. In normal subjects, the percentage of the iris area covered by the upper and lower lids was 6.46 ± 5.17% and 0.66 ± 1.62%, respectively. All coefficients and the mean iris area covered by the UE were significantly different between healthy and ptotic eyelids. The healthy and abnormal eyelid contour can be defined and differentiated using a polynomial mathematical function.

  13. A general one-dimension nonlinear magneto-elastic coupled constitutive model for magnetostrictive materials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Da-Guang; Li, Meng-Han; Zhou, Hao-Miao, E-mail: zhouhm@cjlu.edu.cn

    2015-10-15

    For magnetostrictive rods under combined axial pre-stress and magnetic field, a general one-dimensional nonlinear magneto-elastic coupled constitutive model was built in this paper. First, the elastic Gibbs free energy was expanded into a polynomial, and the stress-strain and magnetization-field relationships in polynomial form were obtained with the help of thermodynamic relations. Then, according to the microscopic magneto-elastic coupling mechanism and some physical facts about magnetostrictive materials, a nonlinear magneto-elastic constitutive model of concise form was obtained by replacing the polynomial relations for nonlinear strain and magnetization with transcendental functions. Comparisons between the predictions and the experimental data for different magnetostrictive materials, such as Terfenol-D, Metglas and Ni, showed that the predicted magnetostrictive strain and magnetization curves were consistent with experimental results under different pre-stresses, whether in the region of low and moderate fields or at high fields. Moreover, the model can fully reflect the nonlinear magneto-mechanical coupling characteristics between magnetism, magnetostriction and elasticity, and it can effectively predict the changes of material parameters with pre-stress and bias field, which is useful in practical applications.

  14. Higher-order Fourier analysis over finite fields and applications

    NASA Astrophysics Data System (ADS)

    Hatami, Pooya

    Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory with several new applications, such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list-decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types, and, assuming that |F_p| is sufficiently large, they are essentially equivalent to either a Gowers norm or an L_p norm.

  15. Comparison of polynomial approximations and artificial neural nets for response surfaces in engineering optimization

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1991-01-01

    Engineering optimization problems involve minimizing some function subject to constraints. In areas such as aircraft optimization, the constraint equations may come from numerous disciplines, and response surfaces provide a convenient means of transferring information between these disciplines and the optimization algorithm. They are also suited to problems which may require numerous re-optimizations, such as multi-objective function optimization, or to problems where the design space contains numerous local minima, thus requiring repeated optimizations from different initial designs. Their use has been limited, however, by the fact that developing a response surface requires function evaluations at randomly selected or preselected points in the design space; thus, they have been thought to be inefficient compared to algorithms which proceed directly to the optimum solution. A development has taken place in the last several years which may affect the desirability of using response surfaces: it may be possible that artificial neural nets are more efficient in developing response surfaces than the polynomial approximations which have been used in the past. This development is the concern of this work.

  16. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first-degree polynomial) approximation of genotypic and environmental effects to a second-degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad-sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113

  17. Stitching interferometry of a full cylinder without using overlap areas

    NASA Astrophysics Data System (ADS)

    Peng, Junzheng; Chen, Dingfu; Yu, Yingjie

    2017-08-01

    Traditional stitching interferometry requires finding out the overlap correspondence and computing the discrepancies in the overlap regions, which makes it complex and time-consuming to obtain the 360° form map of a cylinder. In this paper, we develop a cylinder stitching model based on a new set of orthogonal polynomials, termed Legendre Fourier (LF) polynomials. With these polynomials, individual subaperture data can be expanded as a composition of the inherent form of a partial cylinder surface and additional misalignment parameters. Then the 360° form map can be acquired by simultaneously fitting all subaperture data with the LF polynomials. A metal shaft was measured to experimentally verify the proposed method. In contrast to traditional stitching interferometry, our technique does not require overlapping of adjacent subapertures, thus significantly reducing the measurement time and making the stitching algorithm simple.

  18. Molecular Dynamics Analysis of Lysozyme Protein in Ethanol-Water Mixed Solvent Environment

    NASA Astrophysics Data System (ADS)

    Ochije, Henry Ikechukwu

    The effect of protein-solvent interaction on protein structure is widely studied using both experimental and computational techniques. Despite such extensive studies, a molecular-level understanding of the interaction between proteins and even simple solvents is still incomplete. This work focuses on detailed molecular dynamics simulations of the solvent effect on lysozyme protein, using water, alcohol and different concentrations of water-alcohol mixtures as solvents. The lysozyme protein structure in water, alcohol and alcohol-water mixtures (0-12% alcohol) was studied using the GROMACS molecular dynamics simulation code. Compared to the water environment, the lysozyme structure showed remarkable changes in solvents with increasing alcohol concentration. In particular, significant changes were observed in the protein secondary structure involving alpha helices. The influence of alcohol on the lysozyme protein was investigated by studying thermodynamic and structural properties. With increasing ethanol concentration we observed a systematic increase in total energy, enthalpy, root mean square deviation (RMSD), and radius of gyration, which we then fitted using a polynomial interpolation approach. Using the resulting polynomial equation, we could determine the above quantities for any intermediate alcohol percentage. In order to validate this approach, we selected an intermediate ethanol percentage and carried out a full MD simulation. The results from the MD simulation were in reasonably good agreement with those obtained using the polynomial approach. Hence, the polynomial-based method proposed here eliminates the need for computationally intensive full MD analysis for concentrations within the range (0-12%) studied in this work.

  19. Characterization of high order spatial discretizations and lumping techniques for discontinuous finite element SN transport

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maginot, P. G.; Ragusa, J. C.; Morel, J. E.

    2013-07-01

    We examine several possible methods of mass matrix lumping for discontinuous finite element discrete ordinates transport using a Lagrange interpolatory polynomial trial space. Though positive outflow angular flux is guaranteed with traditional mass matrix lumping in a purely absorbing 1-D slab cell for the linear discontinuous approximation, we show that when used with higher-degree interpolatory polynomial trial spaces, traditional lumping does not yield strictly positive outflows and does not increase in accuracy with an increase in trial space polynomial degree. As an alternative, we examine methods which are 'self-lumping'. Self-lumping methods yield diagonal mass matrices by using numerical quadrature restricted to the Lagrange interpolatory points. Using equally-spaced interpolatory points, self-lumping is achieved through the use of closed Newton-Cotes formulas, resulting in strictly positive outflows in pure absorbers for odd-power polynomials in 1-D slab geometry. By changing the interpolatory points from the traditional equally-spaced points to the quadrature points of the Gauss-Legendre or Lobatto-Gauss-Legendre quadratures, it is possible to generate solution representations with a diagonal mass matrix and a strictly positive outflow for any degree of polynomial solution representation in a pure-absorber medium in 1-D slab geometry. Further, there is no inherent limit to the local truncation error order of accuracy when using interpolatory points that correspond to the quadrature points of high-order-accuracy numerical quadrature schemes. (authors)

  20. An efficient method for the computation of Legendre moments.

    PubMed

    Yap, Pew-Thian; Paramesran, Raveendran

    2005-12-01

    Legendre moments are continuous moments; hence, when applied to discrete-space images, numerical approximation is involved and error occurs. This paper proposes a method to compute the exact values of the moments by mathematically integrating the Legendre polynomials over the corresponding intervals of the image pixels. Experimental results show that the values obtained match those calculated theoretically, and the image reconstructed from these moments has lower error than that of the conventional methods for the same order. Although the same set of exact Legendre moments can be obtained indirectly from the set of geometric moments, the computation time taken is much longer than with the proposed method.
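
    The exact-integration idea rests on the identity (2n+1)Pₙ = (Pₙ₊₁ − Pₙ₋₁)′, which gives closed-form integrals of Legendre polynomials over pixel intervals. A 1-D sketch follows (the function names and toy "image" are ours):

      import numpy as np
      from numpy.polynomial.legendre import legval

      def legendre_cell_integrals(n, edges):
          """Exact integrals of P_n over each interval [edges[i], edges[i+1]],
          using (2n+1) P_n = (P_{n+1} - P_{n-1})'."""
          if n == 0:
              return np.diff(edges)
          cp1 = np.zeros(n + 2); cp1[n + 1] = 1.0   # coefficients of P_{n+1}
          cm1 = np.zeros(n); cm1[n - 1] = 1.0       # coefficients of P_{n-1}
          F = (legval(edges, cp1) - legval(edges, cm1)) / (2 * n + 1)
          return np.diff(F)

      # Exact Legendre moment lambda_n of a 1-D "image" with pixel values f_i:
      # lambda_n = (2n+1)/2 * sum_i f_i * integral of P_n over pixel i.
      edges = np.linspace(-1, 1, 9)                 # 8 pixels
      f = np.arange(8, dtype=float)
      n = 2
      lam = (2 * n + 1) / 2 * (f @ legendre_cell_integrals(n, edges))
      print(lam)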

  1. Fourier-Legendre spectral methods for incompressible channel flow

    NASA Technical Reports Server (NTRS)

    Zang, T. A.; Hussaini, M. Y.

    1984-01-01

    An iterative collocation technique is described for treating viscosity implicitly in three-dimensional incompressible wall-bounded shear flow. The viscosity can vary temporally and in the vertical direction. Channel flow is modeled with a Fourier-Legendre approximation and the mean streamwise advection is treated implicitly. Explicit terms are handled with an Adams-Bashforth method to increase the allowable time step for calculation of the implicit terms. The algorithm is applied to low-amplitude unstable waves in a plane Poiseuille flow at a Reynolds number of 7500. Comparisons are made between results using the Legendre method and those using Chebyshev polynomials. Comparable accuracy is obtained for the perturbation kinetic energy predicted with both discretizations.

  2. On-Orbit Range Set Applications

    NASA Astrophysics Data System (ADS)

    Holzinger, M.; Scheeres, D.

    2011-09-01

    The history and methodology of Δv range set computation are briefly reviewed, followed by a short summary of the Δv-optimal spacecraft servicing problem literature. Service vehicle placement is approached from a Δv range set viewpoint, providing a framework under which the analysis becomes quite geometric and intuitive. The optimal servicing tour design problem is shown to be a specific instantiation of the metric Traveling Salesman Problem (TSP), which in general is an NP-hard problem. The Δv-TSP is argued to be quite similar to the Euclidean TSP, for which approximately optimal solutions may be found in polynomial time. Applications of range sets are demonstrated using analytical and simulation results.

  3. MagIC: Fluid dynamics in a spherical shell simulator

    NASA Astrophysics Data System (ADS)

    Wicht, J.; Gastine, T.; Barik, A.; Putigny, B.; Yadav, R.; Duarte, L.; Dintrans, B.

    2017-09-01

    MagIC simulates fluid dynamics in a spherical shell. It solves the Navier-Stokes equation including the Coriolis force, optionally coupled with an induction equation for magnetohydrodynamics (MHD), a temperature (or entropy) equation and an equation for chemical composition, under both the anelastic and the Boussinesq approximations. MagIC uses either Chebyshev polynomials or finite differences in the radial direction and a spherical harmonic decomposition in the azimuthal and latitudinal directions. The time-stepping scheme relies on a semi-implicit Crank-Nicolson scheme for the linear terms of the MHD equations and an Adams-Bashforth scheme for the non-linear terms and the Coriolis force.

  4. Developing the Polynomial Expressions for Fields in the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Sharma, Stephen

    2017-10-01

    The two most important problems to be solved in the development of working nuclear fusion power plants are sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be pursued. Euler-Lagrange equations of motion, in addition to geodesic formulations, generate the particle model, which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellmann formalism and Heaviside step functional forms are introduced to the fusion equations to produce simple expressions for the kinetic energy and loop currents. In conclusion, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.

  5. An Efficient Spectral Method for Ordinary Differential Equations with Rational Function Coefficients

    NASA Technical Reports Server (NTRS)

    Coutsias, Evangelos A.; Torres, David; Hagstrom, Thomas

    1994-01-01

    We present some relations that allow the efficient approximate inversion of linear differential operators with rational function coefficients. We employ expansions in terms of a large class of orthogonal polynomial families, including all the classical orthogonal polynomials. These families obey a simple three-term recurrence relation for differentiation, which implies that on an appropriately restricted domain the differentiation operator has a unique banded inverse. The inverse is an integration operator for the family, and it is simply the tridiagonal coefficient matrix for the recurrence. Since in these families convolution operators (i.e. matrix representations of multiplication by a function) are banded for polynomials, we are able to obtain a banded representation for linear differential operators with rational coefficients. This leads to a method of solution of initial or boundary value problems that, besides having an operation count that scales linearly with the order of truncation N, is computationally well conditioned. Among the applications considered is the use of rational maps for the resolution of sharp interior layers.
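
    The banded-inverse observation is easy to see in the Legendre family, where ∫Pₙ = (Pₙ₊₁ − Pₙ₋₁)/(2n+1). The sketch below builds the banded integration matrix from that recurrence and checks that term-by-term differentiation inverts it; the construction is a standard one, and the variable names are ours.

      import numpy as np
      from numpy.polynomial.legendre import legder

      N = 8
      B = np.zeros((N + 1, N))                # banded integration operator
      B[1, 0] = 1.0                           # int P_0 = P_1
      for j in range(1, N):
          B[j + 1, j] = 1.0 / (2 * j + 1)     # int P_j = (P_{j+1} - P_{j-1})/(2j+1)
          B[j - 1, j] = -1.0 / (2 * j + 1)

      c = np.random.default_rng(4).standard_normal(N)   # random Legendre series
      print(np.allclose(legder(B @ c), c))    # differentiation undoes integration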

  6. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proven successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on maximum likelihood; (2) for a simulated dataset with a complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for a simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by the Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.
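
    A minimal illustration of representing a growth trajectory in a B-spline basis (an ordinary least-squares toy fit, not the authors' Bayesian shrinkage estimation) might look as follows in SciPy:

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    rng = np.random.default_rng(1)
    t = np.sort(rng.uniform(0.0, 10.0, 80))               # measurement times
    y = 3.0 * np.log1p(t) + rng.normal(0.0, 0.2, t.size)  # noisy growth curve

    k = 3                                        # cubic B-splines
    interior = np.linspace(2.0, 8.0, 4)          # interior knots
    knots = np.r_[[0.0] * (k + 1), interior, [10.0] * (k + 1)]
    nbasis = len(knots) - k - 1

    # Design matrix: each column is one basis function evaluated at the times.
    X = np.column_stack([
        BSpline(knots, np.eye(nbasis)[i], k)(t) for i in range(nbasis)
    ])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    fitted = X @ coef                            # smoothed trajectory
    ```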

  7. Near Real-Time Closed-Loop Optimal Control Feedback for Spacecraft Attitude Maneuvers

    DTIC Science & Technology

    2009-03-01

    [Indexed excerpt garbled in extraction; recoverable items are front-matter table entries (positive and negative ωi static thrust fan characterization polynomial coefficients, coefficients for SimSAT II's air drag polynomial function, an OLOC simulation table) and the fragment: "Researchers using OCT identified that naturally occurring aerodynamic drag and gravity forces could be exploited in such a way that the CMGs …"]

  8. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting

    PubMed Central

    Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural networks with recurrent feedback are a powerful technique that has been used successfully for time series forecasting. They maintain fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF), that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and the Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. This indicates that using network error feedback during training enhances the overall forecasting performance of the network. PMID:27959927

  9. Ridge Polynomial Neural Network with Error Feedback for Time Series Forecasting.

    PubMed

    Waheeb, Waddah; Ghazali, Rozaida; Herawan, Tutut

    2016-01-01

    Time series forecasting has gained much attention due to its many practical applications. Higher-order neural networks with recurrent feedback are a powerful technique that has been used successfully for time series forecasting. They maintain fast learning and the ability to learn the dynamics of the time series over time. Network output feedback is the most common recurrent feedback for many recurrent neural network models. However, not much attention has been paid to the use of network error feedback instead of network output feedback. In this study, we propose a novel model, called Ridge Polynomial Neural Network with Error Feedback (RPNN-EF), that incorporates higher order terms, recurrence and error feedback. To evaluate the performance of RPNN-EF, we used four univariate time series with different forecasting horizons, namely star brightness, monthly smoothed sunspot numbers, daily Euro/Dollar exchange rate, and the Mackey-Glass time-delay differential equation. We compared the forecasting performance of RPNN-EF with the ordinary Ridge Polynomial Neural Network (RPNN) and the Dynamic Ridge Polynomial Neural Network (DRPNN). Simulation results showed an average 23.34% improvement in Root Mean Square Error (RMSE) with respect to RPNN and an average 10.74% improvement with respect to DRPNN. This indicates that using network error feedback during training enhances the overall forecasting performance of the network.

  10. Moment-based metrics for global sensitivity analysis of hydrological systems

    NASA Astrophysics Data System (ADS)

    Dell'Oca, Aronne; Riva, Monica; Guadagnini, Alberto

    2017-12-01

    We propose new metrics to assist global sensitivity analysis, GSA, of hydrological and Earth systems. Our approach allows assessing the impact of uncertain parameters on main features of the probability density function, pdf, of a target model output, y. These include the expected value of y, the spread around the mean and the degree of symmetry and tailedness of the pdf of y. Since reliable assessment of higher-order statistical moments can be computationally demanding, we couple our GSA approach with a surrogate model, approximating the full model response at a reduced computational cost. Here, we consider the generalized polynomial chaos expansion (gPCE), other model reduction techniques being fully compatible with our theoretical framework. We demonstrate our approach through three test cases, including an analytical benchmark, a simplified scenario mimicking pumping in a coastal aquifer and a laboratory-scale conservative transport experiment. Our results allow ascertaining which parameters can impact some moments of the model output pdf while being uninfluential to others. We also investigate the error associated with the evaluation of our sensitivity metrics by replacing the original system model through a gPCE. Our results indicate that the construction of a surrogate model with increasing level of accuracy might be required depending on the statistical moment considered in the GSA. The approach is fully compatible with (and can assist the development of) analysis techniques employed in the context of reduction of model complexity, model calibration, design of experiment, uncertainty quantification and risk assessment.
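
    The flavor of moment-based sensitivity metrics can be conveyed with a crude Monte Carlo sketch; the binned conditional-moment metric below is an illustrative stand-in for the authors' exact definitions, and no surrogate (gPCE) acceleration is attempted:

    ```python
    import numpy as np

    def moment_sensitivities(model, sampler, n=200_000, bins=40, seed=0):
        """For each input, bin its samples and measure how strongly the
        conditional mean and variance of the output vary across bins."""
        rng = np.random.default_rng(seed)
        X = sampler(rng, n)                       # shape (n, d)
        y = model(X)
        out = {}
        for i in range(X.shape[1]):
            edges = np.quantile(X[:, i], np.linspace(0, 1, bins + 1))
            idx = np.clip(np.digitize(X[:, i], edges[1:-1]), 0, bins - 1)
            cmean = np.array([y[idx == b].mean() for b in range(bins)])
            cvar = np.array([y[idx == b].var() for b in range(bins)])
            out[i] = {
                "mean_effect": np.abs(cmean - y.mean()).mean() / abs(y.mean()),
                "var_effect": np.abs(cvar - y.var()).mean() / y.var(),
            }
        return out

    # Ishigami function, a common GSA benchmark.
    model = lambda X: (np.sin(X[:, 0]) + 7 * np.sin(X[:, 1]) ** 2
                       + 0.1 * X[:, 2] ** 4 * np.sin(X[:, 0]))
    sampler = lambda rng, n: rng.uniform(-np.pi, np.pi, (n, 3))
    print(moment_sensitivities(model, sampler))
    ```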

  11. Constrained Low-Interference Relay Node Deployment for Underwater Acoustic Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Wenping

    An Underwater Acoustic Wireless Sensor Network (UA-WSN) consists of many resource-constrained Underwater Sensor Nodes (USNs), which are deployed to perform collaborative monitoring tasks over a given region. One way to preserve network connectivity while guaranteeing other network QoS requirements is to deploy some Relay Nodes (RNs) in the network; RNs are more capable than USNs but also more expensive. This paper addresses the Constrained Low-interference Relay Node Deployment (C-LRND) problem for 3-D UA-WSNs, in which the RNs are placed at a subset of candidate locations to ensure connectivity between the USNs, subject to constraints on both the number of RNs deployed and the total incremental interference. We first prove that the problem is NP-hard, then present a general approximation algorithm framework and obtain two polynomial-time O(1)-approximation algorithms.

  12. Efficient Jacobi-Gauss collocation method for solving initial value problems of Bratu type

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Bhrawy, A. H.; Baleanu, D.; Hafez, R. M.

    2013-09-01

    In this paper, we propose the shifted Jacobi-Gauss collocation spectral method for solving initial value problems of Bratu type, which arise widely in fuel ignition models in combustion theory and in heat transfer. The spatial approximation is based on shifted Jacobi polynomials J_n^(α,β)(x) with α, β ∈ (-1, ∞), x ∈ [0, 1] and n the polynomial degree. The shifted Jacobi-Gauss points are used as collocation nodes. Illustrative examples are discussed to demonstrate the validity and applicability of the proposed technique. Comparison of the numerical results of the proposed method with some well-known results shows that the method is efficient and gives excellent numerical results.
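
    For concreteness, the shifted Jacobi-Gauss collocation nodes on [0, 1] can be obtained from the standard Gauss-Jacobi nodes on [-1, 1] (a sketch of the node construction only, not the full Bratu solver):

    ```python
    import numpy as np
    from scipy.special import roots_jacobi

    def shifted_jacobi_gauss_nodes(n, alpha, beta):
        """Roots of the shifted Jacobi polynomial J_n^(alpha,beta) on [0, 1],
        with the quadrature weights rescaled for the shifted interval."""
        x, w = roots_jacobi(n, alpha, beta)   # nodes/weights on [-1, 1]
        return (x + 1.0) / 2.0, w / 2.0 ** (alpha + beta + 1)

    nodes, weights = shifted_jacobi_gauss_nodes(8, alpha=0.0, beta=0.0)
    ```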

  13. Efficient spectral-Galerkin algorithms for direct solution for second-order differential equations using Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E.; Bhrawy, A.

    2006-06-01

    It is well known that spectral methods (tau, Galerkin, collocation) for second-order differential equations lead to systems with a condition number of O(N^4), where N is the number of retained modes of the polynomial approximation. This paper presents some efficient spectral algorithms, which have a condition number of O(N^2), based on the Jacobi-Galerkin methods for second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of N^(d+1) operations for a d-dimensional domain with (N-1)^d unknowns, while the convergence rates of the algorithms are exponential for smooth solutions.

  14. A Fresh Math Perspective Opens New Possibilities for Computational Chemistry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vu, Linda; Govind, Niranjan; Yang, Chao

    2017-05-26

    By reformulating the TDDFT problem as a matrix function approximation, making use of a special transformation and taking advantage of the underlying symmetry with respect to a non-Euclidean metric, Yang and his colleagues were able to apply the Lanczos algorithm and a Kernel Polynomial Method (KPM) to approximate the absorption spectrum of several molecules. Both of these algorithms require relatively little memory compared to non-symmetric alternatives, which is the key to the computational savings.
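
    To give a sense of how a Kernel Polynomial Method approximates spectra, here is a generic textbook KPM sketch for the spectral density of a real symmetric matrix, with Jackson damping and a stochastic trace estimator; it is not the specialized non-Euclidean-metric variant used in the TDDFT work:

    ```python
    import numpy as np

    def kpm_dos(A, n_moments=100, n_probe=20, n_pts=400, seed=0):
        """Chebyshev/KPM estimate of the density of states of symmetric A,
        assuming the spectrum of A is already scaled into (-1, 1)."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        mu = np.zeros(n_moments)
        for _ in range(n_probe):                  # stochastic trace estimation
            v = rng.choice([-1.0, 1.0], size=n)
            t0, t1 = v, A @ v
            mu[0] += v @ t0
            mu[1] += v @ t1
            for k in range(2, n_moments):
                t0, t1 = t1, 2.0 * (A @ t1) - t0  # Chebyshev recurrence
                mu[k] += v @ t1
        mu /= n_probe * n
        # Jackson kernel damps Gibbs oscillations.
        k = np.arange(n_moments)
        g = ((n_moments - k + 1) * np.cos(np.pi * k / (n_moments + 1))
             + np.sin(np.pi * k / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1))
             ) / (n_moments + 1)
        x = np.linspace(-0.99, 0.99, n_pts)
        T = np.cos(np.outer(k, np.arccos(x)))     # T_k(x)
        coef = g * mu * np.where(k == 0, 1.0, 2.0)
        return x, (coef @ T) / (np.pi * np.sqrt(1.0 - x ** 2))

    # Example: random symmetric matrix scaled into (-1, 1).
    rng = np.random.default_rng(1)
    M = rng.standard_normal((300, 300))
    M = (M + M.T) / 2
    A = M / (np.linalg.norm(M, 2) * 1.05)
    x, rho = kpm_dos(A)
    ```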

  15. A Unified Matrix Polynomial Approach to Modal Identification

    NASA Astrophysics Data System (ADS)

    Allemang, R. J.; Brown, D. L.

    1998-04-01

    One important current focus of modal identification is a reformulation of modal parameter estimation algorithms into a single, consistent mathematical formulation with a corresponding set of definitions and unifying concepts. In particular, a matrix polynomial approach is used to unify the presentation with respect to current algorithms such as the least-squares complex exponential (LSCE), the polyreference time domain (PTD), Ibrahim time domain (ITD), eigensystem realization algorithm (ERA), rational fraction polynomial (RFP), polyreference frequency domain (PFD) and the complex mode indication function (CMIF) methods. Using this unified matrix polynomial approach (UMPA) allows a discussion of the similarities and differences of the commonly used methods. The use of least squares (LS), total least squares (TLS), double least squares (DLS) and singular value decomposition (SVD) methods is discussed in order to take advantage of redundant measurement data. Eigenvalue and SVD transformation methods are utilized to reduce the effective size of the resulting eigenvalue-eigenvector problem as well.

  16. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
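
    The underlying computation can be sketched numerically (an illustrative floating-point check, not the paper's formally verified exact-arithmetic algorithm): the squared horizontal separation of two polynomial trajectories is itself a polynomial in time, so a conflict within the lookahead window can be detected from its critical points.

    ```python
    import numpy as np
    from numpy.polynomial import polynomial as P

    def horizontal_conflict(px, py, qx, qy, lookahead, d_min):
        """True if trajectories p(t), q(t) (polynomial coefficient arrays,
        low degree first) come within d_min horizontally on [0, lookahead]."""
        rx = P.polysub(px, qx)                  # relative position, x component
        ry = P.polysub(py, qy)                  # relative position, y component
        d2 = P.polyadd(P.polymul(rx, rx), P.polymul(ry, ry))  # squared distance
        # Candidate minima: window endpoints plus real critical points inside.
        candidates = [0.0, lookahead]
        crit = np.roots(P.polyder(d2)[::-1])    # np.roots wants high degree first
        candidates += [t.real for t in crit
                       if abs(t.imag) < 1e-9 and 0.0 <= t.real <= lookahead]
        return min(P.polyval(t, d2) for t in candidates) < d_min ** 2

    # Head-on quadratic-free example: speeds in knots, 5-minute lookahead, 5 NM.
    print(horizontal_conflict(px=[0, 480], py=[0, 0],
                              qx=[40, -480], qy=[3, 0],
                              lookahead=5 / 60, d_min=5))   # -> True
    ```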

  17. Length and time for development of laminar flow in tubes following a step increase of volume flux

    NASA Astrophysics Data System (ADS)

    Chaudhury, Rafeed A.; Herrmann, Marcus; Frakes, David H.; Adrian, Ronald J.

    2015-01-01

    Laminar flows starting up from rest in round tubes are relevant to numerous industrial and biomedical applications. The two most common types are flows driven by an abruptly imposed constant pressure gradient or by an abruptly imposed constant volume flux. Analytical solutions are available for transient, fully developed flows, wherein streamwise development over the entrance length is absent (Szymanski in J de Mathématiques Pures et Appliquées 11:67-107, 1932; Andersson and Tiseth in Chem Eng Commun 112(1):121-133, 1992, respectively). They represent the transient responses of flows in tubes that are very long compared with the entrance length, a condition that is seldom satisfied in biomedical tube networks. This study establishes the entrance (development) length and development time of starting laminar flow in a round tube of finite length driven by a piston pump that produces a step change from zero flow to a constant volume flux for Reynolds numbers between 500 and 3,000. The flows are examined experimentally, using stereographic particle image velocimetry and computationally using computational fluid dynamics, and are then compared with the known analytical solutions for fully developed flow conditions in infinitely long tubes. Results show that step function volume flux start-up flows reach steady state and fully developed flow five times more quickly than those driven by a step function pressure gradient, a 500 % change when compared with existing estimates. Based on these results, we present new, simple guidelines for achieving experimental flows that are fully developed in space and time in realistic (finite) tube geometries. To a first approximation, the time to achieve steady spatially developing flow is nearly equal to the time needed to achieve steady, fully developed flow. Conversely, the entrance length needed to achieve fully developed transient flow is approximately equal to the length needed to achieve fully developed steady flow. Beyond this level of description, the numerical results reveal interaction between the effects of space and time development and nonlinear Reynolds number effects.

  18. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.
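
    The heart of this procedure, running the Euclidean algorithm on x^(2t) and a syndrome-style polynomial until the remainder degree drops below t, can be sketched over a prime field; this is a self-contained illustration of the key-equation mechanics (GF(7) and the syndrome values are arbitrary choices), not a complete RS errata decoder:

    ```python
    # Polynomials are coefficient lists over GF(p), lowest degree first.
    P = 7

    def deg(a):
        return max((i for i, c in enumerate(a) if c % P), default=-1)

    def polysub(a, b):
        n = max(len(a), len(b))
        return [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % P
                for i in range(n)]

    def polymul(a, b):
        out = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                out[i + j] = (out[i + j] + ai * bj) % P
        return out

    def polydivmod(a, b):
        a, q = a[:], [0] * len(a)
        inv = pow(b[deg(b)], P - 2, P)          # inverse of leading coefficient
        while deg(a) >= deg(b):
            d, c = deg(a) - deg(b), (a[deg(a)] * inv) % P
            q[d] = c
            for i, bc in enumerate(b):
                a[i + d] = (a[i + d] - c * bc) % P
        return q, a

    def key_equation(S, t):
        """Euclidean algorithm on (x^{2t}, S(x)), stopped once deg(remainder)
        < t; returns (sigma, omega) with omega = sigma * S mod x^{2t}."""
        r0, r1 = [0] * (2 * t) + [1], S[:]
        s0, s1 = [0], [1]
        while deg(r1) >= t:
            q, r = polydivmod(r0, r1)
            r0, r1 = r1, r
            s0, s1 = s1, polysub(s0, polymul(q, s1))
        return s1, r1

    sigma, omega = key_equation([3, 6, 4, 1], t=2)  # arbitrary example syndromes
    prod = polymul(sigma, [3, 6, 4, 1])
    assert deg(polysub(prod[:4], omega[:4])) == -1  # omega ≡ sigma*S (mod x^4)
    ```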

  19. Theater-Level Stochastic Air-to-Air Engagement Modeling via Event Occurrence Networks Using Piecewise Polynomial Approximation

    DTIC Science & Technology

    2001-09-01

    [Indexed excerpt garbled in extraction; only scattered keywords (diagnosis, natural language understanding, circuit fault diagnosis, pattern recognition, machine vision, financial auditing, map learning) and the fragment "… degree of command and control FCC value is assumed to be the average of all the ACC values of the aircraft in the …" are recoverable.]

  20. Analysis of spectral operators in one-dimensional domains

    NASA Technical Reports Server (NTRS)

    Maday, Y.

    1985-01-01

    Results are proven concerning certain projection operators on the space of all polynomials of degree less than or equal to N with respect to a class of one-dimensional weighted Sobolev spaces. The results are useful in the theory of the approximation of partial differential equations with spectral methods.

  1. Quantitative DLA-based compressed sensing for T1-weighted acquisitions

    NASA Astrophysics Data System (ADS)

    Svehla, Pavel; Nguyen, Khieu-Van; Li, Jing-Rebecca; Ciobanu, Luisa

    2017-08-01

    High resolution Manganese Enhanced Magnetic Resonance Imaging (MEMRI), which uses manganese as a T1 contrast agent, has great potential for functional imaging of live neuronal tissue at the single neuron scale. However, reaching high resolutions often requires long acquisition times, which can lead to reduced image quality due to sample deterioration and hardware instability. Compressed Sensing (CS) techniques offer the opportunity to significantly reduce the imaging time. The purpose of this work is to test the feasibility of CS acquisitions based on Diffusion Limited Aggregation (DLA) sampling patterns for high resolution quantitative T1-weighted imaging. Fully encoded and DLA-CS T1-weighted images of Aplysia californica neural tissue were acquired on a 17.2T MRI system. The MR signal corresponding to single, identified neurons was quantified for both versions of the T1-weighted images. For 50% undersampling, DLA-CS can accurately quantify signal intensities in T1-weighted acquisitions, with only a 1.37% difference compared to the fully encoded data and minimal impact on image spatial resolution. In addition, we compared the conventional polynomial undersampling scheme with the DLA and showed that, for the data at hand, the latter performs better. Depending on the image signal to noise ratio, higher undersampling ratios can be used to further reduce the acquisition time in MEMRI based functional studies of living tissues.

  2. A Multipixel Time Series Analysis Method Accounting for Ground Motion, Atmospheric Noise, and Orbital Errors

    NASA Astrophysics Data System (ADS)

    Jolivet, R.; Simons, M.

    2018-02-01

    Interferometric synthetic aperture radar time series methods aim to reconstruct time-dependent ground displacements over large areas from sets of interferograms in order to detect transient, periodic, or small-amplitude deformation. Because of computational limitations, most existing methods consider each pixel independently, ignoring important spatial covariances between observations. We describe a framework to reconstruct time series of ground deformation while considering all pixels simultaneously, allowing us to account for spatial covariances, imprecise orbits, and residual atmospheric perturbations. We describe spatial covariances by an exponential decay function dependent on pixel-to-pixel distance. We approximate the impact of imprecise orbit information and residual long-wavelength atmosphere as a low-order polynomial function. Tests on synthetic data illustrate the importance of incorporating full covariances between pixels in order to avoid biased parameter reconstruction. An example of application to the northern Chilean subduction zone highlights the potential of this method.
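
    A toy version of the two key ingredients (an assumed sketch, not the authors' implementation) builds the exponential spatial covariance between pixels and then estimates a low-order ramp by generalized least squares that accounts for that covariance:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n = 400
    xy = rng.uniform(0.0, 50.0, (n, 2))            # pixel coordinates (km)

    # Exponential spatial covariance: C_ij = sigma^2 * exp(-d_ij / lam).
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
    C = 2.0 ** 2 * np.exp(-d / 10.0)

    # Synthetic data: planar "orbital" ramp + spatially correlated noise.
    ramp_true = np.array([0.3, -0.2, 5.0])         # a*x + b*y + c
    noise = np.linalg.cholesky(C + 1e-8 * np.eye(n)) @ rng.standard_normal(n)
    data = xy @ ramp_true[:2] + ramp_true[2] + noise

    # GLS estimate of the ramp, using the full pixel-to-pixel covariance.
    G = np.column_stack([xy[:, 0], xy[:, 1], np.ones(n)])
    Ci_G = np.linalg.solve(C + 1e-8 * np.eye(n), G)
    ramp_hat = np.linalg.solve(G.T @ Ci_G, Ci_G.T @ data)
    ```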

  3. Spline-based Rayleigh-Ritz methods for the approximation of the natural modes of vibration for flexible beams with tip bodies

    NASA Technical Reports Server (NTRS)

    Rosen, I. G.

    1985-01-01

    Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are used to argue convergence and establish rates of convergence. An example and numerical results are discussed.

  4. A finite element formulation for scattering from electrically large 2-dimensional structures

    NASA Technical Reports Server (NTRS)

    Ross, Daniel C.; Volakis, John L.

    1992-01-01

    A finite element formulation is given using the scattered field approach with a fictitious material absorber to truncate the mesh. The formulation includes the use of arbitrary approximation functions so that more accurate results can be achieved without any modification to the software. Additionally, non-polynomial approximation functions can be used, including complex approximation functions. The banded system that results is solved with an efficient sparse/banded iterative scheme and as a consequence, large structures can be analyzed. Results are given for simple cases to verify the formulation and also for large, complex geometries.

  5. A point-value enhanced finite volume method based on approximate delta functions

    NASA Astrophysics Data System (ADS)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  6. A new ball launching system with controlled flight parameters for catching experiments.

    PubMed

    d'Avella, A; Cesqui, B; Portone, A; Lacquaniti, F

    2011-03-30

    Systematic investigation of the sensorimotor control of interceptive actions in naturalistic conditions, such as catching or hitting a ball moving in three-dimensional space, requires precise control of the projectile flight parameters and of the associated visual stimuli. Such control is challenging when air drag cannot be neglected, because the mapping of launch parameters into flight parameters cannot be computed analytically. We designed, calibrated, and experimentally validated an actuated launching apparatus that can control the average spatial position and flight duration of a ball at a given distance from a fixed launch location. The apparatus was constructed by mounting a ball launching machine with adjustable delivery speed on an actuated structure capable of changing the spatial orientation of the launch axis while projecting balls through a hole in a screen hiding the apparatus. The calibration procedure relied on tracking the balls with a motion capture system and on approximating the mapping of launch parameters into flight parameters by means of polynomial functions. Polynomials were also used to estimate the variability of the flight parameters. The coefficients of these polynomials were obtained using the launch and flight parameters of 660 launches with 65 different initial conditions. The relative accuracy and precision of the apparatus were larger than 98% for flight times and larger than 96% for ball heights at a distance of 6m from the screen. This novel apparatus, by reliably and automatically controlling desired ball flight characteristics without neglecting air drag, allows for a systematic investigation of naturalistic interceptive tasks. Copyright © 2011 Elsevier B.V. All rights reserved.

  7. Lossy Wavefield Compression for Full-Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Boehm, C.; Fichtner, A.; de la Puente, J.; Hanzich, M.

    2015-12-01

    We present lossy compression techniques, tailored to the inexact computation of sensitivity kernels, that significantly reduce the memory requirements of adjoint-based minimization schemes. Adjoint methods are a powerful tool to solve tomography problems in full-waveform inversion (FWI). Yet they face the challenge of massive memory requirements caused by the opposite directions of forward and adjoint simulations and the necessity to access both wavefields simultaneously during the computation of the sensitivity kernel. Thus, storage, I/O operations, and memory bandwidth become key topics in FWI. In this talk, we present strategies for the temporal and spatial compression of the forward wavefield. This comprises re-interpolation with coarse time steps and an adaptive polynomial degree of the spectral element shape functions. In addition, we predict the projection errors on a hierarchy of grids and re-quantize the residuals with an adaptive floating-point accuracy to improve the approximation. Furthermore, we use the first arrivals of adjoint waves to identify "shadow zones" that do not contribute to the sensitivity kernel at all. Updating and storing the wavefield within these shadow zones is skipped, which reduces memory requirements and computational costs at the same time. Compared to check-pointing, our approach has only a negligible computational overhead, utilizing the fact that a sufficiently accurate sensitivity kernel does not require a fully resolved forward wavefield. We also use adaptive compression thresholds during the FWI iterations to ensure convergence. Numerical experiments on the reservoir scale and for the Western Mediterranean demonstrate the high potential of this approach, with an effective compression factor of 500-1000. Moreover, it is computationally cheap and easy to integrate in both finite-difference and finite-element wave propagation codes.

  8. Control Synthesis of Discrete-Time T-S Fuzzy Systems via a Multi-Instant Homogenous Polynomial Approach.

    PubMed

    Xie, Xiangpeng; Yue, Dong; Zhang, Huaguang; Xue, Yusheng

    2016-03-01

    This paper deals with the problem of control synthesis of discrete-time Takagi-Sugeno fuzzy systems by employing a novel multiinstant homogenous polynomial approach. A new multiinstant fuzzy control scheme and a new class of fuzzy Lyapunov functions, which are homogenous polynomially parameter-dependent on both the current-time normalized fuzzy weighting functions and the past-time normalized fuzzy weighting functions, are proposed for implementing the object of relaxed control synthesis. Then, relaxed stabilization conditions are derived with less conservatism than existing ones. Furthermore, the relaxation quality of obtained stabilization conditions is further ameliorated by developing an efficient slack variable approach, which presents a multipolynomial dependence on the normalized fuzzy weighting functions at the current and past instants of time. Two simulation examples are given to demonstrate the effectiveness and benefits of the results developed in this paper.

  9. Analytic Regularity and Polynomial Approximation of Parametric and Stochastic Elliptic PDEs

    DTIC Science & Technology

    2010-05-31

    [Indexed excerpt consists only of reference-list fragments (Ghanem and Spanos; Todor and Schwab on finite elements and Karhunen-Loève approximation for elliptic PDEs with stochastic coefficients); no abstract text is recoverable.]

  10. Evaluation of Analytical Modeling Functions for the Phonation Onset Process.

    PubMed

    Petermann, Simon; Kniesburges, Stefan; Ziethe, Anke; Schützenberger, Anne; Döllinger, Michael

    2016-01-01

    The human voice originates from oscillations of the vocal folds in the larynx. The duration of the voice onset (VO), called the voice onset time (VOT), is currently under investigation as a clinical indicator for correct laryngeal functionality. Different analytical approaches for computing the VOT based on endoscopic imaging were compared to determine the most reliable method to quantify automatically the transient vocal fold oscillations during VO. Transnasal endoscopic imaging in combination with a high-speed camera (8000 fps) was applied to visualize the phonation onset process. Two different definitions of VO interval were investigated. Six analytical functions were tested that approximate the envelope of the filtered or unfiltered glottal area waveform (GAW) during phonation onset. A total of 126 recordings from nine healthy males and 210 recordings from 15 healthy females were evaluated. Three criteria were analyzed to determine the most appropriate computation approach: (1) reliability of the fit function for a correct approximation of VO; (2) consistency represented by the standard deviation of VOT; and (3) accuracy of the approximation of VO. The results suggest the computation of VOT by a fourth-order polynomial approximation in the interval between 32.2 and 67.8% of the saturation amplitude of the filtered GAW.
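
    The final recipe lends itself to a compact numerical sketch (synthetic data and my own reading of the stated recipe: fit a fourth-order polynomial to the glottal-area-waveform envelope and measure the time spent between 32.2% and 67.8% of the saturation amplitude):

    ```python
    import numpy as np
    from numpy.polynomial import polynomial as Poly

    fps = 8000.0
    t = np.arange(0, 0.25, 1.0 / fps)
    # Synthetic GAW envelope: smooth rise to a saturation amplitude of 1.0.
    env = 1.0 / (1.0 + np.exp(-(t - 0.08) / 0.015))

    coeffs = Poly.polyfit(t, env, deg=4)     # fourth-order polynomial fit
    fit = Poly.polyval(t, coeffs)

    a_sat = fit.max()                        # saturation amplitude of the fit
    lo, hi = 0.322 * a_sat, 0.678 * a_sat
    t_lo = t[np.argmax(fit >= lo)]           # first crossing of the lower bound
    t_hi = t[np.argmax(fit >= hi)]           # first crossing of the upper bound
    print(f"VOT ≈ {1000 * (t_hi - t_lo):.1f} ms")
    ```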

  11. Ranked solutions to a class of combinatorial optimizations - with applications in mass spectrometry based peptide sequencing

    NASA Astrophysics Data System (ADS)

    Doerr, Timothy; Alves, Gelio; Yu, Yi-Kuo

    2006-03-01

    Typical combinatorial optimizations are NP-hard; however, for a particular class of cost functions the corresponding combinatorial optimizations can be solved in polynomial time. This suggests a way to efficiently find approximate solutions: find a transformation that makes the cost function as similar as possible to one in the solvable class. After keeping many high-ranking solutions under the approximate cost function, one may then re-assess these solutions with the full cost function to find the best approximate solution. Under this approach, it is important to be able to assess the quality of the solutions obtained, e.g., by finding the true ranking of the kth best approximate solution when all possible solutions are considered exhaustively. To tackle this statistical issue, we provide a systematic method starting with a scaling function generated from the finite number of high-ranking solutions, followed by a convergent iterative mapping. This method, useful in a variant of the directed paths in random media problem proposed here, can also provide a statistical significance assessment for one of the most important proteomic tasks: peptide sequencing using tandem mass spectrometry data.

  12. Low rank approximation methods for MR fingerprinting with large scale dictionaries.

    PubMed

    Yang, Mingrui; Ma, Dan; Jiang, Yun; Hamilton, Jesse; Seiberlich, Nicole; Griswold, Mark A; McGivney, Debra

    2018-04-01

    This work proposes new low rank approximation approaches with significant memory savings for large scale MR fingerprinting (MRF) problems. We introduce a compressed MRF with randomized singular value decomposition method to significantly reduce the memory requirement for calculating a low rank approximation of large sized MRF dictionaries. We further relax this requirement by exploiting the structures of MRF dictionaries in the randomized singular value decomposition space and fitting them to low-degree polynomials to generate high resolution MRF parameter maps. In vivo 1.5T and 3T brain scan data are used to validate the approaches. T1, T2, and off-resonance maps are in good agreement with those of the standard MRF approach. Moreover, the memory savings are up to 1000-fold for the MRF-fast imaging with steady-state precession sequence and more than 15-fold for the MRF-balanced, steady-state free precession sequence. The proposed compressed MRF with randomized singular value decomposition and dictionary fitting methods are memory efficient low rank approximation methods, which can benefit the usage of MRF in clinical settings. They also have great potential in large scale MRF problems, such as problems considering multi-component MRF parameters or high resolution in the parameter space. Magn Reson Med 79:2392-2400, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
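
    The randomized SVD step that makes such dictionary compression tractable is standard and easy to sketch (a generic Halko-style randomized SVD in NumPy; the matrix sizes and rank are illustrative, not the actual MRF dictionary dimensions):

    ```python
    import numpy as np

    def randomized_svd(D, rank, n_oversample=10, seed=0):
        """Low-rank SVD of D via random range finding (Halko et al. style)."""
        rng = np.random.default_rng(seed)
        k = rank + n_oversample
        Y = D @ rng.standard_normal((D.shape[1], k))   # sample the range of D
        Q, _ = np.linalg.qr(Y)                         # orthonormal range basis
        B = Q.T @ D                                    # small projected matrix
        Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
        return (Q @ Ub)[:, :rank], s[:rank], Vt[:rank]

    # Compress a tall "dictionary" and work in the reduced space.
    rng = np.random.default_rng(1)
    D = rng.standard_normal((20000, 50)) @ rng.standard_normal((50, 500))
    U, s, Vt = randomized_svd(D, rank=50)
    D_compressed = U.T @ D        # rank x n_atoms: far smaller than D itself
    ```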

  13. Computational complexity of ecological and evolutionary spatial dynamics

    PubMed Central

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A.

    2015-01-01

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP). PMID:26644569

  14. Investigating and Modelling Effects of Climatically and Hydrologically Indicators on the Urmia Lake Coastline Changes Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Ahmadijamal, M.; Hasanlou, M.

    2017-09-01

    The study of the hydrological parameters of lakes, and of water level variations in particular, is important for managing water resources. The purpose of this study is to investigate and model the Urmia Lake water level changes due to changes in the climatic and hydrological indicators that affect the level and area of this lake. For this purpose, Landsat satellite images, hydrological data, the daily precipitation, the daily surface evaporation and the daily discharge over the lake basin during the period 2010-2016 have been used. Based on time-series analysis conducted on each dataset independently with the same procedure, we model the variation of the Urmia Lake level using polynomial regression and a polynomial combined with periodic behaviour. In the first scenario, we fit polynomials to our datasets and determine the RMSE, NRMSE and R² values; we found that a fourth-degree polynomial fits our datasets best, with the lowest RMSE of about 9 cm. In the second scenario, we combine the polynomial with a periodic term. The second scenario outperforms the first, with an RMSE of about 3 cm.
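
    The second scenario, a polynomial trend plus a periodic annual component fitted by least squares, can be sketched as follows (synthetic data; the basis, period and noise level are illustrative assumptions):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 6.0, 2000)                 # years since 2010
    level = (1271.0 - 0.4 * t + 0.02 * t ** 2
             + 0.15 * np.sin(2 * np.pi * t)
             + rng.normal(0, 0.03, t.size))         # synthetic lake level (m)

    # Design matrix: 4th-degree polynomial trend + annual sine/cosine pair.
    X = np.column_stack([t ** k for k in range(5)]
                        + [np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(X, level, rcond=None)
    fit = X @ beta
    rmse = np.sqrt(np.mean((level - fit) ** 2))
    print(f"RMSE = {100 * rmse:.1f} cm")
    ```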

  15. Reliable before-fabrication forecasting of normal and touch mode MEMS capacitive pressure sensor: modeling and simulation

    NASA Astrophysics Data System (ADS)

    Jindal, Sumit Kumar; Mahajan, Ankush; Raghuwanshi, Sanjeev Kumar

    2017-10-01

    An analytical model and numerical simulation of the performance of MEMS capacitive pressure sensors in both normal and touch modes are required to predict the behavior of the sensor prior to fabrication. Obtaining such information should be based on a complete analysis of performance parameters such as the deflection of the diaphragm, the change of capacitance when the diaphragm deflects, and the sensitivity of the sensor. In the literature, limited work has been carried out on this issue; moreover, because of the approximation factors in the polynomials used, a tolerance error cannot be ruled out. Reliable before-fabrication forecasting requires exact mathematical calculation of the parameters involved. A second-order polynomial equation is calculated mathematically for the key performance parameters of both modes. This eliminates the approximation factor, so that an exact result can be studied with high accuracy. The elimination of approximation factors and the approach to exact results are based on a new design parameter (δ) that we propose. The design parameter gives designers an initial hint of how the sensor will behave once it is fabricated. The complete work is supported by extensive mathematical detailing of all the parameters involved. Finally, we verified our claims using MATLAB® simulation. Since MATLAB® effectively provides the simulation theory for the design approach, the more complicated finite element method is not used.

  16. The background model in the energy range from 0.1 MeV up to several MeV for low altitude and high inclination satellites.

    NASA Astrophysics Data System (ADS)

    Arkhangelskaja, I. V.; Arkhangelskiy, A. I.

    2016-02-01

    The physical origin of the gamma-ray background for low-altitude orbits is defined by diffuse cosmic gamma-emission, atmospheric gamma-rays, gamma-emission formed in interactions of charged particles (both prompt and activation) and transient events such as electron precipitations and solar flares. The background conditions in the energy range from 0.1 MeV up to several MeV for low-altitude orbits differ according to the frequency of passes through the Earth Radiation Belts, ERBs (including the South Atlantic Anomaly, SAA), and the cosmic-ray rigidity. The detectors and satellite structural elements are activated by charged particles trapped in the ERBs and moving along magnetic field lines. For this case we propose a simplified polynomial model, treated separately for the polar and equatorial parts of the orbit: the background count rate temporal profile is approximated by 4th-5th order polynomials in equatorial regions, and by linear approximations, parabolas or constants in the polar caps. The polynomial coefficients are assumed to be similar for identical spectral channels in each analyzed equatorial part, up to normalization coefficients determined from a study of Kp indices over periods in which the calibration coefficients are approximately constant. The described model was successfully applied to studies of the hard X-ray and gamma-ray emission characteristics of solar flares using AVS-F apparatus data onboard the CORONAS-F satellite.

  17. Vector-valued Jack polynomials and wavefunctions on the torus

    NASA Astrophysics Data System (ADS)

    Dunkl, Charles F.

    2017-06-01

    The Hamiltonian of the quantum Calogero-Sutherland model of N identical particles on the circle with 1/r² interactions has eigenfunctions consisting of Jack polynomials times the base state. By use of the generalized Jack polynomials taking values in modules of the symmetric group and the matrix solution of a system of linear differential equations one constructs novel eigenfunctions of the Hamiltonian. Like the usual wavefunctions each eigenfunction determines a symmetric probability density on the N-torus. The construction applies to any irreducible representation of the symmetric group. The methods depend on the theory of generalized Jack polynomials due to Griffeth, and the Yang-Baxter graph approach of Luque and the author.

  18. A general theory on frequency and time-frequency analysis of irregularly sampled time series based on projection methods - Part 1: Frequency analysis

    NASA Astrophysics Data System (ADS)

    Lenoir, Guillaume; Crucifix, Michel

    2018-03-01

    We develop a general framework for the frequency analysis of irregularly sampled time series. It is based on the Lomb-Scargle periodogram, but extended to algebraic operators accounting for the presence of a polynomial trend in the model for the data, in addition to a periodic component and a background noise. Special care is devoted to the correlation between the trend and the periodic component. This new periodogram is then cast into the Welch overlapping segment averaging (WOSA) method in order to reduce its variance. We also design a test of significance for the WOSA periodogram, against the background noise. The model for the background noise is a stationary Gaussian continuous autoregressive-moving-average (CARMA) process, more general than the classical Gaussian white or red noise processes. CARMA parameters are estimated following a Bayesian framework. We provide algorithms that compute the confidence levels for the WOSA periodogram and fully take into account the uncertainty in the CARMA noise parameters. Alternatively, a theory using point estimates of CARMA parameters provides analytical confidence levels for the WOSA periodogram, which are more accurate than Markov chain Monte Carlo (MCMC) confidence levels and, below some threshold for the number of data points, less costly in computing time. We then estimate the amplitude of the periodic component with least-squares methods, and derive an approximate proportionality between the squared amplitude and the periodogram. This proportionality leads to a new extension for the periodogram: the weighted WOSA periodogram, which we recommend for most frequency analyses with irregularly sampled data. The estimated signal amplitude also permits filtering in a frequency band. Our results generalise and unify methods developed in the fields of geosciences, engineering, astronomy and astrophysics. They also constitute the starting point for an extension to the continuous wavelet transform developed in a companion article (Lenoir and Crucifix, 2018). All the methods presented in this paper are available to the reader in the Python package WAVEPAL.
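
    For a flavor of the starting point, the classical Lomb-Scargle periodogram for irregularly sampled data is available in SciPy; the sketch below shows plain Lomb-Scargle only, without the WOSA segmenting, trend handling and CARMA significance testing developed in the paper:

    ```python
    import numpy as np
    from scipy.signal import lombscargle

    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, 100.0, 300))      # irregular sampling times
    y = 1.5 * np.sin(2 * np.pi * 0.12 * t) + rng.normal(0.0, 1.0, t.size)
    y -= y.mean()                                  # remove the mean first

    freqs = np.linspace(0.01, 0.5, 2000)           # cycles per time unit
    pgram = lombscargle(t, y, 2 * np.pi * freqs)   # expects angular frequencies
    print(f"peak at f ≈ {freqs[np.argmax(pgram)]:.3f} (true 0.120)")
    ```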

  19. New class of photonic quantum error correction codes

    NASA Astrophysics Data System (ADS)

    Silveri, Matti; Michael, Marios; Brierley, R. T.; Salmilehto, Juha; Albert, Victor V.; Jiang, Liang; Girvin, S. M.

    We present a new class of quantum error correction codes for applications in quantum memories, communication and scalable computation. These codes are constructed from a finite superposition of Fock states and can exactly correct errors that are polynomial up to a specified degree in creation and destruction operators. Equivalently, they can perform approximate quantum error correction to any given order in time step for the continuous-time dissipative evolution under these errors. The codes are related to two-mode photonic codes but offer the advantage of requiring only a single photon mode to correct loss (amplitude damping), as well as the ability to correct other errors, e.g. dephasing. Our codes are also similar in spirit to photonic ''cat codes'' but have several advantages including smaller mean occupation number and exact rather than approximate orthogonality of the code words. We analyze how the rate of uncorrectable errors scales with the code complexity and discuss the unitary control for the recovery process. These codes are realizable with current superconducting qubit technology and can increase the fidelity of photonic quantum communication and memories.

  20. A Linear Kernel for Co-Path/Cycle Packing

    NASA Astrophysics Data System (ADS)

    Chen, Zhi-Zhong; Fellows, Michael; Fu, Bin; Jiang, Haitao; Liu, Yang; Wang, Lusheng; Zhu, Binhai

    Bounded-Degree Vertex Deletion is a fundamental problem in graph theory that has new applications in computational biology. In this paper, we address a special case of Bounded-Degree Vertex Deletion, the Co-Path/Cycle Packing problem, which asks to delete as few vertices as possible such that the graph of the remaining (residual) vertices is composed of disjoint paths and simple cycles. The problem falls into the well-known class of 'node-deletion problems with hereditary properties', is hence NP-complete and unlikely to admit a polynomial time approximation algorithm with approximation factor smaller than 2. In the framework of parameterized complexity, we present a kernelization algorithm that produces a kernel with at most 37k vertices, improving on the super-linear kernel of Fellows et al.'s general theorem for Bounded-Degree Vertex Deletion. Using this kernel, and the method of bounded search trees, we devise an FPT algorithm that runs in time O*(3.24^k). On the negative side, we show that the problem is APX-hard and unlikely to have a kernel smaller than 2k by a reduction from Vertex Cover.

  1. On the coefficients of integrated expansions of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2006-03-01

    A new formula expressing explicitly the integrals of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another new explicit formula relating the Bessel coefficients of an expansion for infinitely differentiable function that has been integrated an arbitrary number of times in terms of the coefficients of the original expansion of the function is also established. An application of these formulae for solving ordinary differential equations with varying coefficients is discussed.

  2. Optimal control and Galois theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zelikin, M I; Kiselev, D D; Lokutsievskiy, L V

    2013-11-30

    An important role is played in the solution of a class of optimal control problems by a certain special polynomial of degree 2(n−1) with integer coefficients. The linear independence of a family of k roots of this polynomial over the field Q implies the existence of a solution of the original problem with optimal control in the form of an irrational winding of a k-dimensional Clifford torus, which is passed in finite time. In the paper, we prove that for n≤15 one can take an arbitrary positive integer not exceeding [n/2] for k. The apparatus developed in the paper is applied to the systems of Chebyshev-Hermite polynomials and generalized Chebyshev-Laguerre polynomials. It is proved that for such polynomials of degree 2m every subsystem of [(m+1)/2] roots with pairwise distinct squares is linearly independent over the field Q. Bibliography: 11 titles.

  3. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the last case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) which can be solved via the SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.

  4. Georeferencing CAMS data: Polynomial rectification and beyond

    NASA Astrophysics Data System (ADS)

    Yang, Xinghe

    The Calibrated Airborne Multispectral Scanner (CAMS) is a sensor used in the commercial remote sensing program at NASA Stennis Space Center. In geographic applications of the CAMS data, accurate geometric rectification is essential for the analysis of the remotely sensed data and for the integration of the data into Geographic Information Systems (GIS). The commonly used rectification techniques such as the polynomial transformation and ortho rectification have been very successful in the field of remote sensing and GIS for most remote sensing data such as Landsat imagery, SPOT imagery and aerial photos. However, due to the geometric nature of the airborne line scanner which has high spatial frequency distortions, the polynomial model and the ortho rectification technique in current commercial software packages such as Erdas Imagine are not adequate for obtaining sufficient geometric accuracy. In this research, the geometric nature, especially the major distortions, of the CAMS data has been described. An analytical step-by-step geometric preprocessing has been utilized to deal with the potential high frequency distortions of the CAMS data. A generic sensor-independent photogrammetric model has been developed for the ortho-rectification of the CAMS data. Three generalized kernel classes and directional elliptical basis have been formulated into a rectification model of summation of multisurface functions, which is a significant extension to the traditional radial basis functions. The preprocessing mechanism has been fully incorporated into the polynomial, the triangle-based finite element analysis as well as the summation of multisurface functions. While the multisurface functions and the finite element analysis have the characteristics of localization, piecewise logic has been applied to the polynomial and photogrammetric methods, which can produce significant accuracy improvement over the global approach. A software module has been implemented with full integration of data preprocessing and rectification techniques under Erdas Imagine development environment. The final root mean square (RMS) errors for the test CAMS data are about two pixels which are compatible with the random RMS errors existed in the reference map coordinates.

  5. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piece-wise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. The work presents a comparative analysis of the performance of different single-tone frequency estimation techniques, like Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF) in HIM-operator-based methods for phase estimation. Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.

  6. Energy efficient data representation and aggregation with event region detection in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Banerjee, Torsha

    Unlike conventional networks, wireless sensor networks (WSNs) are limited in power, have much smaller memory buffers, and possess relatively slower processing speeds. These characteristics necessitate minimum transfer and storage of information in order to prolong the network lifetime. In this dissertation, we exploit the spatio-temporal nature of sensor data to approximate the current values of the sensors based on readings obtained from neighboring sensors and from the sensor itself. We propose a Tree-based polynomial REGression algorithm (TREG) that addresses the problem of data compression in wireless sensor networks. Instead of aggregated data, a polynomial function (P) is computed by the regression function, TREG. The coefficients of P are then passed on to achieve the following goals: (i) the sink can get attribute values in regions devoid of sensor nodes, and (ii) readings over any portion of the region can be obtained at one time by querying the root of the tree. As the size of the data packet from each tree node to its parent remains constant, the proposed scheme scales very well with growing network density or increased coverage area. Since physical attributes exhibit a gradual change over time, we propose an iterative scheme, UPDATE_COEFF, which obviates the need to perform the regression function repeatedly and uses approximations based on previous readings. Extensive simulations are performed on real world data to demonstrate the effectiveness of our proposed aggregation algorithm, TREG. Results reveal that for a network density of 0.0025 nodes/m², a complete binary tree of depth 4 keeps the absolute error below 6%. A data compression ratio of about 0.02 is achieved using our proposed algorithm, almost independently of the tree depth. In addition, our proposed updating scheme makes the aggregation process faster while maintaining the desired error bounds. We also propose a Polynomial-based scheme that addresses the problem of Event Region Detection (PERD) for WSNs. When a single event occurs, a child of the tree sends a Flagged Polynomial (FP) to its parent if the readings approximated by it fall outside the data range defining the existing phenomenon. After the aggregation process is over, the root, which holds the two polynomials P and FP, can be queried for FP (approximating the new event region) instead of flooding the whole network. For multiple such events, instead of computing a polynomial corresponding to each new event, areas with the same data range are combined by the corresponding tree nodes and the aggregated coefficients are passed on. Results reveal that a new event can be detected by PERD while the error in detection remains constant and below a threshold of 10%. As the node density increases, the accuracy and delay of event detection remain almost constant, making PERD highly scalable. Whenever an event occurs in a WSN, data is generated by nearby sensors, and relaying the data to the base station (BS) makes sensors closer to the BS run out of energy much faster than sensors in other parts of the network. This results in an unequal distribution of residual energy in the network and makes sensors with lower remaining energy die at a much faster rate than others. We propose a scheme for enhancing network lifetime using mobile cluster heads (CHs) in a WSN. To maintain the remaining energy more evenly, some energy-rich nodes are designated as CHs, which move in a controlled manner towards sensors rich in energy and data.
    This eliminates the multihop transmission required by the static sensors and thus increases the overall lifetime of the WSN. We combine the ideas of clustering and mobile CHs to first form clusters of static sensor nodes. A collaborative strategy among the CHs further increases the lifetime of the network. The time taken to transmit data to the BS is reduced further by making the CHs follow a connectivity strategy that always maintains a connected path to the BS. Spatial correlation of sensor data can be further exploited for dynamic channel selection in cellular communication. In such a scenario, wireless sensors can be deployed within a licensed band (each sensor tuned to a channel frequency at a particular time) to sense the interference power of the frequency band. In an ideal channel, the interference temperature (IT), which is directly proportional to the interference power, can be assumed to vary spatially with the frequency of the subchannel. We propose a scheme that fits the subchannel frequencies and corresponding ITs to a regression model, from which the IT of an arbitrary subchannel can be calculated for further analysis of channel interference at the base station. Our Sensor-based Dynamic Channel Selection (S-DCS) scheme operates in the extended C-band for assignment to unlicensed secondary users. S-DCS proves economical from the energy-consumption point of view and achieves accuracy within an error bound of 6.8%. Moreover, users are assigned empty subchannels without actually probing them, incurring minimal delay in the process. The overall channel throughput is maximized along with fairness to individual users.
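
    A minimal sketch of the kind of per-node least-squares fit a TREG-style scheme could perform, using a quadratic bivariate basis; the function names, basis choice, and sensor data are our own illustration, not code from the dissertation:

    ```python
    import numpy as np

    def fit_region_polynomial(xs, ys, values):
        """Least-squares fit of a quadratic surface v ~ P(x, y) over one tree
        node's region; only the six coefficients travel to the parent, so the
        per-node payload stays constant regardless of how many sensors report."""
        A = np.column_stack([np.ones_like(xs), xs, ys, xs * ys, xs**2, ys**2])
        coeffs, *_ = np.linalg.lstsq(A, values, rcond=None)
        return coeffs

    def eval_region_polynomial(coeffs, x, y):
        """Answer a sink query anywhere in the region, including points
        with no sensor coverage."""
        return np.array([1.0, x, y, x * y, x**2, y**2]) @ coeffs

    rng = np.random.default_rng(0)
    xs, ys = rng.uniform(0, 100, 20), rng.uniform(0, 100, 20)    # sensor positions
    temps = 20 + 0.05 * xs - 0.02 * ys + rng.normal(0, 0.1, 20)  # noisy readings
    c = fit_region_polynomial(xs, ys, temps)
    print(eval_region_polynomial(c, 50.0, 50.0))  # estimate at an uncovered point
    ```

    Whatever the basis, only the fixed-length coefficient vector travels up the tree, which is what keeps the per-link payload constant as the network grows.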

  7. Quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices

    NASA Astrophysics Data System (ADS)

    Chakhmakhchyan, L.; Cerf, N. J.; Garcia-Patron, R.

    2017-08-01

    We construct a quantum-inspired classical algorithm for computing the permanent of Hermitian positive semidefinite matrices by exploiting a connection between these mathematical structures and the boson sampling model. Specifically, the permanent of a Hermitian positive semidefinite matrix can be expressed in terms of the expected value of a random variable, which stands for a specific photon-counting probability when measuring a linear-optically evolved random multimode coherent state. Our algorithm then approximates the matrix permanent from the corresponding sample mean and is shown to run in polynomial time for various sets of Hermitian positive semidefinite matrices, achieving a precision that improves over known techniques. This work illustrates how quantum optics may benefit algorithm development.
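
    The sampling core of such an approach can be reproduced classically: for Hermitian positive semidefinite A, the complex Wick (Isserlis) theorem gives perm(A) = E[∏_i |w_i|²] for w ~ CN(0, A), mirroring the average of photon-counting probabilities over random coherent states. The sketch below is our own minimal rendering of that identity, not the authors' algorithm or its precision analysis:

    ```python
    import numpy as np

    def permanent_psd_mc(A, n_samples=200_000, seed=0):
        """Monte Carlo estimate of perm(A) for Hermitian PSD A, via the
        complex Wick identity E[prod_i |w_i|^2] = perm(A), w ~ CN(0, A)."""
        rng = np.random.default_rng(seed)
        n = A.shape[0]
        vals, vecs = np.linalg.eigh(A)
        L = vecs * np.sqrt(np.clip(vals, 0.0, None))  # A = L @ L.conj().T
        z = (rng.standard_normal((n_samples, n))
             + 1j * rng.standard_normal((n_samples, n))) / np.sqrt(2.0)
        w = z @ L.T  # rows are draws from CN(0, A)
        return np.prod(np.abs(w) ** 2, axis=1).mean()

    # sanity check: the permanent of the 2x2 all-ones matrix is 2
    print(permanent_psd_mc(np.ones((2, 2))))  # close to 2, up to sampling error
    ```

    The estimator is unbiased, but its variance grows with the matrix size; the paper's contribution lies in characterizing the matrix classes for which polynomially many samples achieve the stated precision.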

  8. A Note on Alternating Minimization Algorithm for the Matrix Completion Problem

    DOE PAGES

    Gamarnik, David; Misra, Sidhant

    2016-06-06

    Here, we consider the problem of reconstructing a low-rank matrix from a subset of its entries and analyze two variants of the so-called alternating minimization algorithm, which has been proposed in the past. We establish that when the underlying matrix has rank one, has positive bounded entries, and the graph underlying the revealed entries has diameter which is logarithmic in the size of the matrix, both algorithms succeed in reconstructing the matrix approximately in polynomial time starting from an arbitrary initialization. We further provide simulation results which suggest that the second variant, which is based on message-passing-type updates, performs significantly better.

  9. Constrained Surface-Level Gateway Placement for Underwater Acoustic Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Hong

    One approach to guaranteeing the performance of underwater acoustic sensor networks is to deploy multiple Surface-level Gateways (SGs) at the surface. This paper addresses the connected (or survivable) Constrained Surface-level Gateway Placement (C-SGP) problem for 3-D underwater acoustic sensor networks. Given a set of candidate locations where SGs can be placed, our objective is to place a minimum number of SGs at a subset of candidate locations such that every underwater sensor node (USN) is connected (or 2-connected) to the base station. We propose polynomial-time approximation algorithms for the connected and survivable C-SGP problems, respectively. Simulations are conducted to verify our algorithms' efficiency.

  10. Polynomial complexity despite the fermionic sign

    NASA Astrophysics Data System (ADS)

    Rossi, R.; Prokof'ev, N.; Svistunov, B.; Van Houcke, K.; Werner, F.

    2017-04-01

    It is commonly believed that in unbiased quantum Monte Carlo approaches to fermionic many-body problems, the infamous sign problem generically implies prohibitively large computational times for obtaining thermodynamic-limit quantities. We point out that for convergent Feynman diagrammatic series evaluated with a recently introduced Monte Carlo algorithm (see Rossi R., arXiv:1612.05184), the computational time increases only polynomially with the inverse error on thermodynamic-limit quantities.

  11. Analytical description of changes in the magnetic states of chromium-nickel steel under uniaxial elastic deformation

    NASA Astrophysics Data System (ADS)

    Gorkunov, E. S.; Yakushenko, E. I.; Zadvorkin, S. M.; Mushnikov, A. N.

    2017-12-01

    Dependences of the magnetization and magnetic permeability of the 15KhN4D structural steel on the value of uniaxial stresses and on the magnetic field strength are obtained. A polynomial approximation that describes the observed changes fairly accurately is proposed on the basis of the experimental data.

  12. Design Method for Numerical Function Generators Based on Polynomial Approximation for FPGA Implementation

    DTIC Science & Technology

    2007-08-01

    with a Design Specification described by Scilab [26], a MATLAB-like software application, and ends up with HDL code. The Design Specification...Conf. on Field Programmable Logic and Applications (FPL'05), Tampere, Finland, pp. 118–123, Aug. 2005. [26] Scilab 3.0, INRIA-ENPC, France, http

  13. Comparing Inference Approaches for RD Designs: A Reexamination of the Effect of Head Start on Child Mortality

    ERIC Educational Resources Information Center

    Cattaneo, Matias D.; Titiunik, Rocío; Vazquez-Bare, Gonzalo

    2017-01-01

    The regression discontinuity (RD) design is a popular quasi-experimental design for causal inference and policy evaluation. The most common inference approaches in RD designs employ "flexible" parametric and nonparametric local polynomial methods, which rely on extrapolation and large-sample approximations of conditional expectations…

  14. Sobolev-orthogonal systems of functions associated with an orthogonal system

    NASA Astrophysics Data System (ADS)

    Sharapudinov, I. I.

    2018-02-01

    For every system of functions \{\varphi_k(x)\} which is orthonormal on (a,b) with weight \rho(x), and every positive integer r, we construct a new associated system of functions \{\varphi_{r,k}(x)\}_{k=0}^\infty which is orthonormal with respect to a Sobolev-type inner product of the form \displaystyle \langle f,g\rangle=\sum_{\nu=0}^{r-1}f^{(\nu)}(a)g^{(\nu)}(a)+\int_a^b f^{(r)}(t)g^{(r)}(t)\rho(t)\,dt. We study the convergence of Fourier series in the systems \{\varphi_{r,k}(x)\}_{k=0}^\infty. In the important particular cases of such systems generated by the Haar functions and the Chebyshev polynomials T_n(x)=\cos(n\arccos x), we obtain explicit representations for the \varphi_{r,k}(x) that can be used to study their asymptotic properties as k\to\infty and the approximation properties of Fourier sums in the system \{\varphi_{r,k}(x)\}_{k=0}^\infty. Special attention is paid to the study of the approximation properties of Fourier series in systems of type \{\varphi_{r,k}(x)\}_{k=0}^\infty generated by Haar functions and Chebyshev polynomials.

  15. A three-dimensional semianalytical model of hydraulic fracture growth through weak barriers

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luiskutty, C.T.; Tomutes, L.; Palmer, I.D.

    1989-08-01

    The goal of this research was to develop a fracture model for length/height ratio ≤ 4 that includes 2D flow (and a line source corresponding to the perforated interval) but makes approximations that allow a semianalytical solution, with large computer-time savings over the fully numerical model. The height, maximum width, and pressure at the wellbore in this semianalytical model are calculated and compared with the results of the fully three-dimensional (3D) model. There is reasonable agreement in all parameters, the maximum discrepancy being 24%. Comparisons of fracture volume and leakoff volume also show reasonable agreement in volume and fluid efficiencies. The values of the length/height ratio, in the four cases in which agreement is found, vary from 1.5 to 3.7. The model offers a useful first-order (or screening) calculation of fracture-height growth through weak barriers (e.g., low stress contrasts). When coupled with the model developed for highly elongated fractures of length/height ratio ≥ 4, which is also found to be in basic agreement with the fully numerical model, this new model provides the capability for approximating fracture-height growth through barriers for vertical fracture shapes that vary from penny-shaped to highly elongated. The computer time required is estimated to be less than that of the fully numerical model by a factor of 10 or more.

  16. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, recent advances in high-performance computing have led to a strong reduction in the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansion, with least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for a universal Kriging model. Leave-one-out cross-validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. LARS-Kriging-PC appears to perform better than the two other approaches, with a significant accuracy improvement over ordinary Kriging or sparse polynomial chaos depending on the studied case; it thus seems to be a good compromise between the two classical alternatives. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
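
    A hedged sketch of the two-stage idea with scikit-learn on synthetic data: least-angle regression selects a handful of polynomial regressors, and a Gaussian process ("Kriging") is then fitted on what the selected trend misses. Fitting the GP to the trend residuals is a simplification of true universal Kriging, and the data, degrees, and sparsity levels here are illustrative:

    ```python
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import Lars
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(1)
    X = rng.uniform(-1, 1, (60, 3))    # design of experiments (synthetic)
    y = X[:, 0]**2 + 0.5 * X[:, 1] * X[:, 2] + 0.05 * rng.normal(size=60)

    # stage 1: least-angle regression keeps only the most influential polynomials
    poly = PolynomialFeatures(degree=3, include_bias=False)
    Phi = poly.fit_transform(X)
    lars = Lars(n_nonzero_coefs=8).fit(Phi, y)
    trend = lars.predict(Phi)

    # stage 2: a GP ("Kriging") models what the selected polynomial trend misses
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X, y - trend)

    X_new = rng.uniform(-1, 1, (5, 3))
    y_hat = lars.predict(poly.transform(X_new)) + gp.predict(X_new)
    print(y_hat)
    ```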

  17. A conforming spectral collocation strategy for Stokes flow through a channel contraction

    NASA Technical Reports Server (NTRS)

    Phillips, Timothy N.; Karageorghis, Andreas

    1989-01-01

    A formula is proved that expresses the coefficients of an expansion in ultraspherical polynomials, after the expansion has been differentiated an arbitrary number of times, in terms of the coefficients of the original expansion. The particular examples of Chebyshev and Legendre polynomials are considered.
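
    For the Chebyshev case, the first-derivative instance of such a relation is the classical backward recurrence b_{k-1} = b_{k+1} + 2k a_k; repeated application handles higher derivatives. A small sketch (our own illustration, checked against NumPy's chebder):

    ```python
    import numpy as np

    def cheb_derivative_coeffs(a):
        """Given coefficients a[k] of f = sum_k a[k] T_k, return the Chebyshev
        coefficients of f' via the recurrence b[k-1] = b[k+1] + 2*k*a[k]."""
        N = len(a) - 1
        b = np.zeros(N + 2)
        for k in range(N, 0, -1):
            b[k - 1] = b[k + 1] + 2 * k * a[k]
        b[0] /= 2.0      # the T_0 coefficient carries a factor 1/2
        return b[:N]     # the derivative has degree N - 1

    a = np.array([0.0, 0.0, 0.0, 1.0])         # f = T_3
    print(cheb_derivative_coeffs(a))           # [3, 0, 6], i.e. 3*T_0 + 6*T_2
    print(np.polynomial.chebyshev.chebder(a))  # agrees
    ```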

  18. Maximum likelihood decoding of Reed Solomon Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudan, M.

    We present a randomized algorithm which takes as input n distinct points ((x_i, y_i))_{i=1}^n from F × F (where F is a field) and integer parameters t and d and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = \Omega(\sqrt{nd}). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.

  19. Transfer matrix computation of critical polynomials for two-dimensional Potts models

    DOE PAGES

    Jacobsen, Jesper Lykke; Scullard, Christian R.

    2013-02-04

    We showed, in our previous work, that critical manifolds of the q-state Potts model can be studied by means of a graph polynomial P_B(q, v), henceforth referred to as the critical polynomial. This polynomial may be defined on any periodic two-dimensional lattice. It depends on a finite subgraph B, called the basis, and the manner in which B is tiled to construct the lattice. The real roots v = e^K - 1 of P_B(q, v) either give the exact critical points for the lattice, or provide approximations that, in principle, can be made arbitrarily accurate by increasing the size of B in an appropriate way. In earlier work, P_B(q, v) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give a probabilistic definition of P_B(q, v), which facilitates its computation, using the transfer matrix, on much larger B than was previously possible. We present results for the critical polynomial on the (4, 8^2), kagome, and (3, 12^2) lattices for bases of up to respectively 96, 162, and 243 edges, compared to the limit of 36 edges with contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. The critical temperatures v_c obtained for ferromagnetic (v > 0) Potts models are at least as precise as the best available results from Monte Carlo simulations or series expansions. For instance, with q = 3 we obtain v_c(4, 8^2) = 3.742489(4), v_c(kagome) = 1.8764597(2), and v_c(3, 12^2) = 5.03307849(4), the precision being comparable or superior to the best simulation results. More generally, we trace the critical manifolds in the real (q, v) plane and discuss the intricate structure of the phase diagram in the antiferromagnetic (v < 0) region.

  20. Analytic double product integrals for all-frequency relighting.

    PubMed

    Wang, Rui; Pan, Minghao; Chen, Weifeng; Ren, Zhong; Zhou, Kun; Hua, Wei; Bao, Hujun

    2013-07-01

    This paper presents a new technique for real-time relighting of static scenes with all-frequency shadows from complex lighting and highly specular reflections from spatially varying BRDFs. The key idea is to depict the boundaries of visible regions using piecewise linear functions, and convert the shading computation into double product integrals—the integral of the product of lighting and BRDF on visible regions. By representing lighting and BRDF with spherical Gaussians and approximating their product using Legendre polynomials locally in visible regions, we show that such double product integrals can be evaluated in an analytic form. Given the precomputed visibility, our technique computes the visibility boundaries on the fly at each shading point, and performs the analytic integral to evaluate the shading color. The result is a real-time all-frequency relighting technique for static scenes with dynamic, spatially varying BRDFs, which can generate more accurate shadows than the state-of-the-art real-time PRT methods.

  1. Inexact adaptive Newton methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bertiger, W.I.; Kelsey, F.J.

    1985-02-01

    The Inexact Adaptive Newton method (IAN) is a modification of the Adaptive Implicit Method (AIM) with improved Newton convergence. Both methods simplify the Jacobian at each time step by zeroing coefficients in regions where saturations are changing slowly. The methods differ in how the diagonal block terms are treated. On test problems with up to 3,000 cells, IAN consistently saves approximately 30% of the CPU time when compared to the fully implicit method. AIM shows similar savings on some problems, but takes as much CPU time as fully implicit on other test problems due to poor Newton convergence.

  2. Remarks on a New Possible Discretization Scheme for Gauge Theories

    NASA Astrophysics Data System (ADS)

    Magnot, Jean-Pierre

    2018-03-01

    We propose here a new discretization method for a class of continuum gauge theories whose action functionals are polynomials in the curvature. Based on the notion of holonomy, this discretization procedure appears gauge-invariant for discretized analogs of Yang-Mills theories, and hence gauge-fixing is fully rigorous for these discretized action functionals. Heuristic parts are forwarded to the quantization procedure via Feynman integrals, and the meaning of the heuristic infinite-dimensional Lebesgue integral is questioned.

  4. APPROXIMATION OF SOLUTIONS OF THE EQUATION \\overline\\partial^jf=0, j\\geq1, IN DOMAINS WITH QUASICONFORMAL BOUNDARY

    NASA Astrophysics Data System (ADS)

    Andrievskiĭ, V. V.; Belyĭ, V. I.; Maĭmeskul, V. V.

    1991-02-01

    This article establishes direct and inverse theorems of approximation theory (of the same type as theorems of Dzyadyk) that describe the quantitative connection between the smoothness properties of solutions of the equation \overline\partial^jf=0, j\geq1, and the rate of their approximation by "module" polynomials of the form \displaystyle P_N(z)=\sum_{n=0}^{j-1}\sum_{m=0}^{N-n}a_{m,n}z^m\overline{z}^n,\qquad N\geq j-1. In particular, a constructive characterization is obtained for generalized Hölder classes of such functions on domains with quasiconformal boundary. Bibliography: 19 titles.

  5. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

    In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed, along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and the kriging models, the latter using a constant underlying global model and a Gaussian correlation function, yield comparable results.

  6. Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.

    PubMed

    Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng

    2011-10-01

    This paper proposes a novel H∞ filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H∞ filter, which is homogenous polynomially parameter-dependent on the membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H∞ performance of the filtering error system. Second, relaxed conditions for H∞ performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H∞ filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.

  7. Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics

    NASA Astrophysics Data System (ADS)

    Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane

    2014-10-01

    This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
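
    The inversion-avoiding idea can be illustrated with a plain truncated Neumann series, a simpler, unoptimized cousin of the MSE-optimized polynomial weights described in the abstract; the toy covariances below are invented for the demo:

    ```python
    import numpy as np

    def poly_inverse_apply(M, y, L):
        """Approximate M^{-1} y by the truncated Neumann series
        alpha * sum_{l=0}^{L} (I - alpha*M)^l y, convergent for SPD M when
        0 < alpha < 2 / lambda_max(M). Only matrix-vector products are used,
        so applying it costs O(L n^2) instead of the O(n^3) inversion."""
        alpha = 1.0 / np.linalg.norm(M, 2)  # safe step size from the spectral norm
        out = np.zeros_like(y)
        term = y.copy()
        for _ in range(L + 1):
            out += term
            term = term - alpha * (M @ term)
        return alpha * out

    # toy MMSE-style estimate h_hat = R (R + S)^{-1} y with invented covariances
    rng = np.random.default_rng(2)
    n = 50
    B = rng.normal(size=(n, n))
    R = B @ B.T / n                  # "channel" covariance
    S = np.eye(n)                    # "noise" covariance
    y = rng.normal(size=n)
    exact = R @ np.linalg.solve(R + S, y)
    approx = R @ poly_inverse_apply(R + S, y, L=30)
    print(np.linalg.norm(exact - approx) / np.linalg.norm(exact))  # small
    ```

    PEACH instead optimizes the L+1 polynomial coefficients for the MSE, which is why it gets away with much smaller L than a plain Neumann truncation.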

  8. Gravitational waves from plunges into Gargantua

    NASA Astrophysics Data System (ADS)

    Compère, Geoffrey; Fransen, Kwinten; Hertog, Thomas; Long, Jiang

    2018-05-01

    We analytically compute time domain gravitational waveforms produced in the final stages of extreme mass ratio inspirals of non-spinning compact objects into supermassive nearly extremal Kerr black holes. Conformal symmetry relates all corotating equatorial orbits in the geodesic approximation to circular orbits through complex conformal transformations. We use this to obtain the time domain Teukolsky perturbations for generic equatorial corotating plunges in closed form. The resulting gravitational waveforms consist of an intermediate polynomial ringdown phase in which the decay rate depends on the impact parameters, followed by an exponential quasi-normal mode decay. The waveform amplitude exhibits critical behavior when the orbital angular momentum tends to a minimal value determined by the innermost stable circular orbit. We show that either near-critical or large angular momentum leads to a significant extension of the LISA observable volume of gravitational wave sources of this kind.

  9. Scaling and efficiency of PRISM in adaptive simulations of turbulent premixed flames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tonse, Shaheen R.; Bell, J.B.; Brown, N.J.

    1999-12-01

    The dominant computational cost in modeling turbulent combustion phenomena numerically with high-fidelity chemical mechanisms is the time required to solve the ordinary differential equations associated with chemical kinetics. One approach to reducing that computational cost is to develop an inexpensive surrogate model that accurately represents the evolution of the chemical kinetics. One such approach, PRISM, develops a polynomial representation of the chemistry evolution in a local region of chemical composition space. This representation is then stored for later use. As the computation proceeds, the chemistry evolution for other points within the same region is computed by evaluating these polynomials instead of calling an ordinary differential equation solver. If initial data for advancing the chemistry is encountered that is not in any region for which a polynomial is defined, the methodology dynamically samples that region and constructs a new representation for it. The utility of this approach is determined by the size of the regions over which the representation provides a good approximation to the kinetics and the number of these regions that are necessary to model the subset of composition space that is active during a simulation. In this paper, we assess the PRISM methodology in the context of a turbulent premixed flame in two dimensions. We consider a range of turbulent intensities, from weak turbulence that has little effect on the flame to strong turbulence that tears pockets of burning fluid from the main flame. For each case, we explore a range of sizes for the local regions and determine the scaling behavior as a function of region size and turbulent intensity.
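
    The tabulate-and-reuse pattern the abstract describes can be sketched as a cache of local polynomial surrogates keyed by composition-space hypercube; the region hashing, the diagonal quadratic basis, and the advance_exact callable below are all illustrative stand-ins for PRISM's actual machinery:

    ```python
    import numpy as np

    class PrismLikeCache:
        """Cache of local quadratic surrogates for an expensive kinetics step.
        States are binned into hypercubes; the first visit to a hypercube fits
        a surrogate from sampled exact solves, later visits just evaluate it."""
        def __init__(self, advance_exact, cell=0.1, n_fit=30):
            self.advance_exact = advance_exact  # expensive ODE advance (assumed)
            self.cell, self.n_fit = cell, n_fit
            self.models = {}

        def _basis(self, X):
            # quadratic basis without cross terms, for brevity
            return np.column_stack([np.ones(len(X)), X, X**2])

        def advance(self, x):
            key = tuple(np.floor(x / self.cell).astype(int))
            if key not in self.models:
                lo = np.array(key) * self.cell
                rng = np.random.default_rng(abs(hash(key)) % 2**32)
                Xs = lo + rng.uniform(0, self.cell, (self.n_fit, x.size))
                Ys = np.array([self.advance_exact(s) for s in Xs])
                coeffs, *_ = np.linalg.lstsq(self._basis(Xs), Ys, rcond=None)
                self.models[key] = coeffs
            return (self._basis(x[None, :]) @ self.models[key])[0]

    # toy "exact" advance standing in for a stiff kinetics integration
    cache = PrismLikeCache(lambda s: s + 0.01 * np.sin(s), cell=0.2)
    print(cache.advance(np.array([0.3, 0.7])))  # first call fits, later calls reuse
    ```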

  10. Robustness analysis of an air heating plant and control law by using polynomial chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colón, Diego; Ferreira, Murillo A. S.; Bueno, Átila M.

    2014-12-10

    This paper presents a robustness analysis of an air heating plant with a multivariable closed-loop control law by using the polynomial chaos methodology (MPC). The plant consists of a PVC tube with a fan at the air input (which forces the air through the tube) and a mass-flux sensor at the output. A heating resistance warms the air as it flows inside the tube, and a thermocouple sensor measures the air temperature. The plant thus has two inputs (the fan's rotation intensity and the heat generated by the resistance, both measured in percent of the maximum value) and two outputs (air temperature and air mass flux, also in percent of the maximum value). The mathematical model is obtained by system identification techniques. The mass-flux sensor, which is nonlinear, is linearized, and the delays in the transfer functions are properly approximated by non-minimum-phase transfer functions. The resulting model is transformed to a state-space model, which is used for control design purposes. The multivariable robust control design technique used is LQG/LTR, and the controllers are validated in simulation software and on the real plant. Finally, the MPC is applied by considering some of the system's parameters as random variables (one at a time), and the system's stochastic differential equations are solved by expanding the solution (a stochastic process) in an orthogonal basis of polynomial functions of the basic random variables. This method transforms the stochastic equations into a set of deterministic differential equations, which can be solved by traditional numerical methods (that is the MPC). Statistical data for the system (such as expected values and variances) are then calculated. The effects of randomness in the parameters are evaluated in the open-loop and closed-loop pole positions.

  11. Segmented Polynomial Models in Quasi-Experimental Research.

    ERIC Educational Resources Information Center

    Wasik, John L.

    1981-01-01

    The use of segmented polynomial models is explained. Examples of design matrices of dummy variables are given for the least squares analyses of time series and discontinuity quasi-experimental research designs. Linear combinations of dummy variable vectors appear to provide tests of effects in the two quasi-experimental designs. (Author/BW)

  12. Extinction time of a stochastic predator-prey model by the generalized cell mapping method

    NASA Astrophysics Data System (ADS)

    Han, Qun; Xu, Wei; Hu, Bing; Huang, Dongmei; Sun, Jian-Qiao

    2018-03-01

    The stochastic response and extinction time of a predator-prey model with Gaussian white noise excitations are studied by the generalized cell mapping (GCM) method based on the short-time Gaussian approximation (STGA). The methods for stochastic response probability density functions (PDFs) and extinction time statistics are developed. The Taylor expansion is used to deal with non-polynomial nonlinear terms of the model for deriving the moment equations with Gaussian closure, which are needed for the STGA in order to compute the one-step transition probabilities. The work is validated with direct Monte Carlo simulations. We have presented the transient responses showing the evolution from a Gaussian initial distribution to a non-Gaussian steady-state one. The effects of the model parameter and noise intensities on the steady-state PDFs are discussed. It is also found that the effects of noise intensities on the extinction time statistics are opposite to the effects on the limit probability distributions of the survival species.

  13. Minimal-resource computer program for automatic generation of ocean wave ray or crest diagrams in shoaling waters

    NASA Technical Reports Server (NTRS)

    Poole, L. R.; Lecroy, S. R.; Morris, W. D.

    1977-01-01

    A computer program for studying linear ocean wave refraction is described. The program features random-access modular bathymetry data storage. Three bottom topography approximation techniques are available in the program which provide varying degrees of bathymetry data smoothing. Refraction diagrams are generated automatically and can be displayed graphically in three forms: Ray patterns with specified uniform deepwater ray density, ray patterns with controlled nearshore ray density, or crest patterns constructed by using a cubic polynomial to approximate crest segments between adjacent rays.

  14. A Fast Hermite Transform

    PubMed Central

    Leibon, Gregory; Rockmore, Daniel N.; Park, Wooram; Taintor, Robert; Chirikjian, Gregory S.

    2008-01-01

    We present algorithms for fast and stable approximation of the Hermite transform of a compactly supported function on the real line, attainable via an application of a fast algebraic algorithm for computing sums associated with a three-term relation. Trade-offs between approximation in bandlimit (in the Hermite sense) and size of the support region are addressed. Numerical experiments are presented that show the feasibility and utility of our approach. Generalizations to any family of orthogonal polynomials are outlined. Applications to various problems in tomographic reconstruction, including the determination of protein structure, are discussed. PMID:20027202

  15. On the arbitrary l-wave solutions of the deformed hyperbolic Manning-Rosen potential including an improved approximation to the orbital centrifugal term

    NASA Astrophysics Data System (ADS)

    Xu, Chun-Long; Zhang, Min-Cang

    2017-01-01

    The arbitrary l-wave solutions to the Schrödinger equation for the deformed hyperbolic Manning-Rosen potential are investigated analytically by using the Nikiforov-Uvarov method; the centrifugal term is treated with an improved Greene-Aldrich approximation scheme. The wavefunctions depend on the deformation parameter q and are expressed in terms of Jacobi polynomials or hypergeometric functions. The bound state energy is obtained, and the discrete spectrum is shown to be independent of the deformation parameter q.

  16. Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl

    2014-03-01

    The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well-developed methods such as first-order perturbation theory or Monte Carlo sampling, Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time retaining a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to a further reduction in computational time, since the high-order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid- and basis-adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistently good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well-known limitation of the traditional approach. The prospect of larger-scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.

  17. On the design of recursive digital filters

    NASA Technical Reports Server (NTRS)

    Shenoi, K.; Narasimha, M. J.; Peterson, A. M.

    1976-01-01

    A change of variables is described which transforms the problem of designing a recursive digital filter to that of approximation by a ratio of polynomials on a finite interval. Some analytic techniques for the design of low-pass filters are presented, illustrating the use of the transformation. Also considered are methods for the design of phase equalizers.

  18. Latency-Efficient Communication in Wireless Mesh Networks under Consideration of Large Interference Range

    NASA Astrophysics Data System (ADS)

    Xin, Qin; Yao, Xiaolan; Engelstad, Paal E.

    2010-09-01

    Wireless Mesh Networking is an emerging communication paradigm to enable resilient, cost-efficient and reliable services for future-generation wireless networks. We study here the minimum-latency communication primitive of gossiping (all-to-all communication) in multi-hop ad-hoc Wireless Mesh Networks (WMNs). Each mesh node in the WMN is initially given a message, and the objective is to design a minimum-latency schedule by which each mesh node distributes its message to all other mesh nodes. The minimum-latency gossiping problem is well known to be NP-hard even for the scenario in which the topology of the WMN is known to all mesh nodes in advance. In this paper, we propose a new latency-efficient approximation scheme that can accomplish the gossiping task in a polynomial number of time units in any ad-hoc WMN under consideration of a Large Interference Range (LIR), e.g., an interference range much larger than the transmission range. To the best of our knowledge, this is the first time such a scenario has been investigated in ad-hoc WMNs under LIR. Our algorithm allows the labels (e.g., identifiers) of the mesh nodes to be polynomially large in terms of the size of the WMN, which is also the first treatment of large labels in ad-hoc WMNs under LIR. Furthermore, our gossiping scheme can be considered a framework that extends readily to scenarios involving mobility-related issues, since we assume that the mesh nodes have no knowledge of the network topology, not even of their neighboring mesh nodes.

  19. Charge-based MOSFET model based on the Hermite interpolation polynomial

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt

    2017-04-01

    An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
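
    For reference, third-order Hermite interpolation matches function values and first derivatives at the two endpoints of an interval. The snippet below is a generic illustration with SciPy, using invented endpoint data rather than the paper's surface-potential/charge relation:

    ```python
    import numpy as np
    from scipy.interpolate import CubicHermiteSpline

    # endpoints of a channel segment: known values q(v) and slopes dq/dv
    v = np.array([0.0, 1.0])
    q = np.array([0.02, 0.85])   # hypothetical inversion-charge values
    dq = np.array([0.10, 1.40])  # hypothetical slopes at the endpoints

    h = CubicHermiteSpline(v, q, dq)
    print(h(0.5))         # interpolated charge mid-channel
    print(h(v), h(v, 1))  # reproduces endpoint values and derivatives exactly
    ```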

  20. Novel quadrilateral elements based on explicit Hermite polynomials for bending of Kirchhoff-Love plates

    NASA Astrophysics Data System (ADS)

    Beheshti, Alireza

    2018-03-01

    The contribution addresses the finite element analysis of the bending of plates under the Kirchhoff-Love model. To analyze the static deformation of plates with different loadings and geometries, the principle of virtual work is used to extract the weak form. After deriving the strain field, the stresses and resultants may be obtained. For constructing four-node quadrilateral plate elements, the Hermite polynomials defined with respect to the variables in the parent space are applied explicitly. Based on the approximated displacement field, the stiffness matrix and the load vector of the finite element method are obtained. To demonstrate the performance of the subparametric 4-node plate elements, some well-known classical examples in structural mechanics are solved and compared with the analytical solutions available in the literature.

  1. A weighted ℓ₁-minimization approach for sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peng, Ji; Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2014-06-15

    This work proposes a method for sparse polynomial chaos (PC) approximation of high-dimensional stochastic functions based on non-adapted random sampling. We modify the standard ℓ₁-minimization algorithm, originally proposed in the context of compressive sampling, using a priori information about the decay of the PC coefficients, when available, and refer to the resulting algorithm as weighted ℓ₁-minimization. We provide conditions under which we may guarantee recovery using this weighted scheme. Numerical tests are used to compare the weighted and non-weighted methods for the recovery of solutions to two differential equations with high-dimensional random inputs: a boundary value problem with a random elliptic operator and a 2-D thermally driven cavity flow with random boundary condition.
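
    One convenient way to realize weighted ℓ₁-minimization in practice is to fold the weights into the measurement matrix, reducing it to an ordinary lasso; this penalized form is a stand-in for the paper's constrained formulation, and the weights, data, and sparsity below are invented:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def weighted_l1_fit(Psi, u, weights, lam):
        """Solve min_c (1/2n)*||Psi c - u||^2 + lam * sum_i weights[i]*|c[i]|
        by substituting d_i = weights[i] * c_i into a standard lasso."""
        Psi_scaled = Psi / weights      # column i divided by weights[i]
        model = Lasso(alpha=lam, fit_intercept=False, max_iter=50_000)
        model.fit(Psi_scaled, u)
        return model.coef_ / weights    # map d back to c

    # hypothetical recovery problem: weights grow with basis order, encoding
    # the prior that higher-order PC coefficients decay
    rng = np.random.default_rng(3)
    Psi = rng.normal(size=(40, 120))
    c_true = np.zeros(120); c_true[[2, 7, 19]] = [1.0, -0.6, 0.3]
    u = Psi @ c_true + 0.01 * rng.normal(size=40)
    weights = 1.0 + 0.1 * np.arange(120)
    c_hat = weighted_l1_fit(Psi, u, weights, lam=0.01)
    print(np.flatnonzero(np.abs(c_hat) > 1e-3))  # recovered support
    ```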

  2. Polynomial Chaos decomposition applied to stochastic dosimetry: study of the influence of the magnetic field orientation on the pregnant woman exposure at 50 Hz.

    PubMed

    Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P

    2014-01-01

    Polynomial Chaos (PC) is a decomposition method used to build a meta-model which approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to a change in the orientation of the B-field vector with respect to the human body. In detail, the analysis of the exposure of a pregnant woman at 7 months of gestational age is carried out to build up a statistical meta-model of the induced electric field, for each fetal tissue and for the fetal whole body, by means of the PC expansion as a function of the B-field orientation, considering uniform exposure at 50 Hz.

  3. A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.

    PubMed

    Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu

    2015-12-01

    Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.

  4. Computational algebraic geometry for statistical modeling FY09Q2 progress.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David C.; Rojas, Joseph Maurice; Pebay, Philippe Pierre

    2009-03-01

    This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying the chamber cone containing a polynomial system in n variables with n+k terms within polynomial time, a significant improvement over previous algorithms, all of which have exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for n+3 and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it.

  5. Direct solution for thermal stresses in a nose cap under an arbitrary axisymmetric temperature distribution

    NASA Technical Reports Server (NTRS)

    Davis, Randall C.

    1988-01-01

    The design of a nose cap for a hypersonic vehicle is an iterative process requiring a rapid, easy to use and accurate stress analysis. The objective of this paper is to develop such a stress analysis technique from a direct solution of the thermal stress equations for a spherical shell. The nose cap structure is treated as a thin spherical shell with an axisymmetric temperature distribution. The governing differential equations are solved by expressing the stress solution to the thermoelastic equations in terms of a series of derivatives of the Legendre polynomials. The process of finding the coefficients for the series solution in terms of the temperature distribution is generalized by expressing the temperature along the shell and through the thickness as a polynomial in the spherical angle coordinate. Under this generalization the orthogonality property of the Legendre polynomials leads to a sequence of integrals involving powers of the spherical shell coordinate times the derivative of the Legendre polynomials. The coefficients of the temperature polynomial appear outside of these integrals. Thus, the integrals are evaluated only once and their values tabulated for use with any arbitrary polynomial temperature distribution.

  6. Model-based estimates of long-term persistence of induced HPV antibodies: a flexible subject-specific approach.

    PubMed

    Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián

    2013-01-01

    In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points where the individual titers are below the threshold value for protection. This article focuses on HPV-16/18, and uses a so-called fractional-polynomial model to this effect, derived in a data-driven fashion. Initially, model selection was done from among the second- and first-order fractional polynomials on the one hand and from the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that it fits best over the long run, and hence, caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
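
    For orientation, a first-order fractional polynomial fits y = β₀ + β₁x^p with the power p chosen from the conventional set {-2, -1, -0.5, 0, 0.5, 1, 2, 3} (p = 0 read as log x). The sketch below selects p by residual sum of squares on invented titer-decay data; it illustrates the model family, not the paper's mixed-model analysis:

    ```python
    import numpy as np

    POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]  # standard FP1 power set

    def fp1_fit(x, y):
        """Pick the first-order fractional polynomial y ~ b0 + b1 * x^p
        minimizing the residual sum of squares over the standard power set."""
        best = None
        for p in POWERS:
            xt = np.log(x) if p == 0 else x ** p
            A = np.column_stack([np.ones_like(xt), xt])
            beta, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = rss[0] if rss.size else np.sum((A @ beta - y) ** 2)
            if best is None or rss < best[0]:
                best = (rss, p, beta)
        return best  # (rss, chosen power, coefficients)

    # hypothetical antibody-titer decay over time (months), roughly ~ t^{-0.5}
    t = np.linspace(1, 60, 80)
    titer = 10 + 30 * t ** -0.5 + np.random.default_rng(4).normal(0, 0.5, 80)
    rss, p, beta = fp1_fit(t, titer)
    print(p, beta)  # should recover a power near -0.5
    ```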

  7. Consensus seeking in a network of discrete-time linear agents with communication noises

    NASA Astrophysics Data System (ADS)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhou, Chao; Wang, Ming

    2015-07-01

    This paper studies the mean square consensus of discrete-time linear time-invariant multi-agent systems with communication noises. A distributed consensus protocol, which is composed of the agent's own state feedback and the relative states between the agent and its neighbours, is proposed. A time-varying consensus gain a[k] is applied to attenuate the effect of the noises inherent in the inaccurate measurement of relative states with neighbours. A polynomial, namely the 'parameter polynomial', is constructed, and its coefficients are the parameters in the feedback gain vector of the proposed protocol. It turns out that the parameter polynomial plays an important role in guaranteeing the consensus of linear multi-agent systems. Under the proposed protocol, necessary and sufficient conditions for mean square consensus are presented under different topology conditions: (1) if the communication topology graph has a spanning tree and every node in the graph has at least one parent node, then mean square consensus can be achieved if and only if ∑_{k=0}^∞ a[k] = ∞, ∑_{k=0}^∞ a²[k] < ∞, and all roots of the parameter polynomial are in the unit circle; (2) if the communication topology graph has a spanning tree and there exists one node without any parent node (the leader-follower case), then mean square consensus can be achieved if and only if ∑_{k=0}^∞ a[k] = ∞, lim_{k→∞} a[k] = 0, and all roots of the parameter polynomial are in the unit circle; (3) if the communication topology graph does not have a spanning tree, then mean square consensus can never be achieved. Finally, a simulation example on a multiple-aircraft system is provided to validate the theoretical analysis.
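
    A gain sequence such as a[k] = 1/(k+1) satisfies the first pair of conditions (divergent sum, convergent sum of squares). The toy below runs two noisy single-integrator agents, a far simpler special case than the paper's general LTI setting, to show the decreasing gain absorbing measurement noise:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    x = np.array([5.0, -3.0])        # two single-integrator agents
    A = np.array([[0, 1], [1, 0]])   # undirected two-node communication graph
    for k in range(20_000):
        a_k = 1.0 / (k + 1)          # sum a[k] diverges, sum a[k]^2 converges
        noisy_rel = np.array([
            sum(A[i, j] * (x[j] - x[i] + 0.5 * rng.normal()) for j in range(2))
            for i in range(2)
        ])
        x = x + a_k * noisy_rel
    print(x)  # the two states settle near a common value despite the noise
    ```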

  8. Competition in high dimensional spaces using a sparse approximation of neural fields.

    PubMed

    Quinton, Jean-Charles; Girau, Bernard; Lefort, Mathieu

    2011-01-01

    The Continuum Neural Field Theory implements competition within topologically organized neural networks with lateral inhibitory connections. However, due to the polynomial complexity of matrix-based implementations, updating dense representations of the activity becomes computationally intractable when an adaptive resolution or an arbitrary number of input dimensions is required. This paper proposes an alternative to self-organizing maps with a sparse implementation based on Gaussian mixture models, trading redundancy for higher computational efficiency and alleviating constraints on the underlying substrate. This version reproduces the emergent attentional properties of the original equations by directly applying them within a continuous approximation of a high-dimensional neural field. The model is compatible with preprocessed sensory flows but can also be interfaced with artificial systems. This is particularly important for sensorimotor systems, where decisions and motor actions must be taken and updated in real time. Preliminary tests are performed on a reactive color tracking application, using spatially distributed color features.

  9. Approximation algorithm for the problem of partitioning a sequence into clusters

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Mikhailova, L. V.; Khamidullin, S. A.; Khandeev, V. I.

    2017-08-01

    We consider the problem of partitioning a finite sequence of Euclidean points into a given number of clusters (subsequences) using the criterion of the minimal sum (over all clusters) of intercluster sums of squared distances from the elements of the clusters to their centers. It is assumed that the center of one of the desired clusters is at the origin, while the center of each of the other clusters is unknown and determined as the mean value over all elements in this cluster. Additionally, the partition obeys two structural constraints on the indices of sequence elements contained in the clusters with unknown centers: (1) the concatenation of the indices of elements in these clusters is an increasing sequence, and (2) the difference between an index and the preceding one is bounded above and below by prescribed constants. It is shown that this problem is strongly NP-hard. A 2-approximation algorithm is constructed that is polynomial-time for a fixed number of clusters.

  10. Efficient implementation of neural network deinterlacing

    NASA Astrophysics Data System (ADS)

    Seo, Guiwon; Choi, Hyunsoo; Lee, Chulhee

    2009-02-01

    Interlaced scanning has been widely used in most broadcasting systems. However, it produces undesirable artifacts such as jagged patterns, flickering, and line twitter. Moreover, most recent TV monitors utilize flat panel display technologies such as LCD or PDP, and these monitors require progressive formats. Consequently, the conversion of interlaced video into progressive video is required in many applications, and a number of deinterlacing methods have been proposed. Recently, deinterlacing methods based on neural networks have been proposed with good results. On the other hand, with high-resolution video content such as HDTV, the amount of video data to be processed is very large; as a result, processing time and hardware complexity become important issues. In this paper, we propose an efficient implementation of neural network deinterlacing using polynomial approximation of the sigmoid function. Experimental results show that these approximations provide equivalent performance with a considerable reduction in complexity. This implementation of neural network deinterlacing can be efficiently incorporated in hardware implementations.
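
    A hedged sketch of the generic idea: fit a low-degree polynomial to the sigmoid on a clipped input range, so the activation needs only multiplies and adds. The degree and fit interval here are our choices, not the paper's:

    ```python
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # fit a degree-5 Chebyshev polynomial to the sigmoid on [-6, 6]; inputs are
    # clipped to the fit range, and the polynomial replaces exp/div in hardware
    xs = np.linspace(-6, 6, 1001)
    coeffs = np.polynomial.chebyshev.chebfit(xs, sigmoid(xs), deg=5)

    def sigmoid_poly(x):
        return np.polynomial.chebyshev.chebval(np.clip(x, -6, 6), coeffs)

    x = np.linspace(-8, 8, 2001)
    print(np.max(np.abs(sigmoid(x) - sigmoid_poly(x))))  # worst-case error
    ```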

  11. Effect of boundary representation on viscous, separated flows in a discontinuous-Galerkin Navier-Stokes solver

    NASA Astrophysics Data System (ADS)

    Nelson, Daniel A.; Jacobs, Gustaaf B.; Kopriva, David A.

    2016-08-01

    The effect of curved-boundary representation on the physics of the separated flow over a NACA 65(1)-412 airfoil is thoroughly investigated. A method is presented to approximate curved boundaries with a high-order discontinuous-Galerkin spectral element method for the solution of the Navier-Stokes equations. Multiblock quadrilateral element meshes are constructed with the grid generation software GridPro. The boundary of a NACA 65(1)-412 airfoil, defined by a cubic natural spline, is piecewise-approximated by isoparametric polynomial interpolants that represent the edges of boundary-fitted elements. Direct numerical simulation of the airfoil is performed on a coarse mesh and a fine mesh with polynomial orders ranging from four to twelve. The accuracy of the curve fitting is investigated by comparing the flows computed on curved-sided meshes with those given by straight-sided meshes. Straight-sided meshes yield irregular wakes, whereas curved-sided meshes produce a regular Kármán vortex street wake. Straight-sided meshes also produce lower lift and higher viscous drag as compared with curved-sided meshes. When the mesh is refined by reducing the sizes of the elements, the lift decrease and viscous-drag increase are less pronounced. The differences in aerodynamic performance between the straight-sided and curved-sided meshes are concluded to be the result of artificial surface roughness introduced by the piecewise-linear boundary approximation of the straight-sided meshes.

  12. Comparison of permutationally invariant polynomials, neural networks, and Gaussian approximation potentials in representing water interactions through many-body expansions

    NASA Astrophysics Data System (ADS)

    Nguyen, Thuong T.; Székely, Eszter; Imbalzano, Giulio; Behler, Jörg; Csányi, Gábor; Ceriotti, Michele; Götz, Andreas W.; Paesani, Francesco

    2018-06-01

    The accurate representation of multidimensional potential energy surfaces is a necessary requirement for realistic computer simulations of molecular systems. The continued increase in computer power accompanied by advances in correlated electronic structure methods nowadays enables routine calculations of accurate interaction energies for small systems, which can then be used as references for the development of analytical potential energy functions (PEFs) rigorously derived from many-body (MB) expansions. Building on the accuracy of the MB-pol many-body PEF, we investigate here the performance of permutationally invariant polynomials (PIPs), neural networks, and Gaussian approximation potentials (GAPs) in representing water two-body and three-body interaction energies, denoting the resulting potentials PIP-MB-pol, Behler-Parrinello neural network-MB-pol, and GAP-MB-pol, respectively. Our analysis shows that all three analytical representations exhibit similar levels of accuracy in reproducing both two-body and three-body reference data as well as interaction energies of small water clusters obtained from calculations carried out at the coupled cluster level of theory, the current gold standard for chemical accuracy. These results demonstrate the synergy between interatomic potentials formulated in terms of a many-body expansion, such as MB-pol, that are physically sound and transferable, and machine-learning techniques that provide a flexible framework to approximate the short-range interaction energy terms.

  13. Investigation of Fully Three-Dimensional Helical RF Field Effects on TWT Beam/Circuit Interaction

    NASA Technical Reports Server (NTRS)

    Kory, Carol L.

    2000-01-01

    A fully three-dimensional (3D), time-dependent, helical traveling-wave-tube (TWT) interaction model has been developed using the electromagnetic particle-in-cell (PIC) code MAFIA. The model includes a short section of helical slow-wave circuit with excitation fed by RF input/output couplers, and an electron beam confined by periodic permanent magnet (PPM) focusing. All components of the model are simulated in three dimensions, allowing the effects of the fully 3D helical fields on RF circuit/beam interaction to be investigated for the first time. The development of the interaction model is presented, and predicted TWT performance using 2.5D and 3D models is compared to investigate the effect of conventional approximations used in TWT analyses.

  14. Very high order discontinuous Galerkin method in elliptic problems

    NASA Astrophysics Data System (ADS)

    Jaśkowiec, Jan

    2017-09-01

    The paper deals with the high-order discontinuous Galerkin (DG) method with approximation orders that exceed 20 and reach 100, and even 1000 in the one-dimensional case. To achieve such high-order solutions, the DG method has to be combined with the finite difference method. The basis functions of this method are high-order orthogonal Legendre or Chebyshev polynomials. These polynomials are defined in one-dimensional space (1D), but they can easily be adapted to two-dimensional space (2D) by cross products. There are no nodes in the elements, and the degrees of freedom are the coefficients of the linear combination of basis functions. In this sort of analysis reference elements are needed, so transformations of the reference element into the real one are required, as well as the transformations connected with the mesh skeleton. Due to the orthogonality of the basis functions, the resulting matrices are sparse even for finite elements with more than a thousand degrees of freedom. In consequence, the truncation errors are limited and very high-order analysis can be performed. The paper is illustrated with a set of 1D and 2D benchmark examples for elliptic problems. The examples demonstrate the great effectiveness of the method, which can shorten the computation time by a factor of several hundred.
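
    To make the cross-product construction concrete, the sketch below (my illustration, not code from the paper) evaluates a tensor-product Legendre basis on the reference square and projects a smooth function onto it; orthogonality makes the Gram matrix diagonal, which is the sparsity the abstract refers to.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def basis_2d(x, y, p):
        """Evaluate a (p+1)^2 tensor-product Legendre basis on the reference
        square [-1, 1]^2 at points (x, y); column k*(p+1)+l is P_k(x) * P_l(y)."""
        Vx = legendre.legvander(x, p)      # (npts, p+1) values of P_0..P_p at x
        Vy = legendre.legvander(y, p)
        return np.einsum('nk,nl->nkl', Vx, Vy).reshape(len(x), -1)

    # L2 projection of a smooth function via Gauss-Legendre quadrature
    p = 20
    nodes, weights = legendre.leggauss(p + 1)
    X, Y = np.meshgrid(nodes, nodes, indexing='ij')
    W = np.outer(weights, weights).ravel()
    V = basis_2d(X.ravel(), Y.ravel(), p)
    f = np.exp(X.ravel() * Y.ravel())
    # Orthogonality makes the Gram matrix diagonal: entries 2/(2k+1) * 2/(2l+1)
    k = np.arange(p + 1)
    diag = np.outer(2 / (2 * k + 1), 2 / (2 * k + 1)).ravel()
    coeffs = (V.T * W) @ f / diag
    ```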

  16. Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials

    PubMed Central

    Corteel, Sylvie; Williams, Lauren K.

    2010-01-01

    We introduce some combinatorial objects called staircase tableaux, which have cardinality 4^n n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities α and γ, and they may exit and enter at the right with probabilities β and δ. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials. PMID:20348417

  17. Staircase tableaux, the asymmetric exclusion process, and Askey-Wilson polynomials.

    PubMed

    Corteel, Sylvie; Williams, Lauren K

    2010-04-13

    We introduce some combinatorial objects called staircase tableaux, which have cardinality 4^n n!, and connect them to both the asymmetric exclusion process (ASEP) and Askey-Wilson polynomials. The ASEP is a model from statistical mechanics introduced in the late 1960s, which describes a system of interacting particles hopping left and right on a one-dimensional lattice of n sites with open boundaries. It has been cited as a model for traffic flow and translation in protein synthesis. In its most general form, particles may enter and exit at the left with probabilities alpha and gamma, and they may exit and enter at the right with probabilities beta and delta. In the bulk, the probability of hopping left is q times the probability of hopping right. Our first result is a formula for the stationary distribution of the ASEP with all parameters general, in terms of staircase tableaux. Our second result is a formula for the moments of (the weight function of) Askey-Wilson polynomials, also in terms of staircase tableaux. Since the 1980s there has been a great deal of work giving combinatorial formulas for moments of classical orthogonal polynomials (e.g. Hermite, Charlier, Laguerre); among these polynomials, the Askey-Wilson polynomials are the most important, because they are at the top of the hierarchy of classical orthogonal polynomials.
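
    For readers who want to experiment with the ASEP dynamics described above, here is a toy random-sequential simulation with open boundaries (my own sketch; the parameter conventions follow the abstract, but the paper itself works with the exact stationary distribution via staircase tableaux rather than simulation):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def asep_sweep(sites, alpha, beta, gamma, delta, q):
        """One random-sequential sweep of the open-boundary ASEP on a 0/1 array.
        Right hops have rate 1 and left hops rate q (treated here as update
        probabilities); particles enter/exit with alpha, gamma (left) and
        beta, delta (right)."""
        n = len(sites)
        for _ in range(n):
            i = rng.integers(0, n + 1)
            if i == 0:                                   # left boundary
                if sites[0] == 0 and rng.random() < alpha:
                    sites[0] = 1
                elif sites[0] == 1 and rng.random() < gamma:
                    sites[0] = 0
            elif i == n:                                 # right boundary
                if sites[-1] == 1 and rng.random() < beta:
                    sites[-1] = 0
                elif sites[-1] == 0 and rng.random() < delta:
                    sites[-1] = 1
            else:                                        # bulk pair (i-1, i)
                if sites[i - 1] == 1 and sites[i] == 0:
                    sites[i - 1], sites[i] = 0, 1        # hop right (rate 1)
                elif sites[i - 1] == 0 and sites[i] == 1 and rng.random() < q:
                    sites[i - 1], sites[i] = 1, 0        # hop left (rate q)

    sites = rng.integers(0, 2, 30)
    for _ in range(20000):
        asep_sweep(sites, alpha=0.6, beta=0.4, gamma=0.1, delta=0.1, q=0.5)
    print(sites.mean())   # crude estimate of the stationary bulk density
    ```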

  18. Polynomial-Time Algorithms for Building a Consensus MUL-Tree

    PubMed Central

    Cui, Yun; Jansson, Jesper

    2012-01-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host–parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists. PMID:22963134

  19. Polynomial-time algorithms for building a consensus MUL-tree.

    PubMed

    Cui, Yun; Jansson, Jesper; Sung, Wing-Kin

    2012-09-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host-parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists.

  20. bcc-to-hcp transformation pathways for iron versus hydrostatic pressure: Coupled shuffle and shear modes

    NASA Astrophysics Data System (ADS)

    Liu, J. B.; Johnson, D. D.

    2009-04-01

    Using density-functional theory, we calculate the potential-energy surface (PES), minimum-energy pathway (MEP), and transition state (TS) versus hydrostatic pressure σ_hyd for the reconstructive transformation in Fe from body-centered cubic (bcc) to hexagonal close-packed (hcp). At fixed σ_hyd, the PES is described by coupled shear (γ) and shuffle (η) modes and is determined from structurally minimized hcp-bcc energy differences at a set of (η, γ). We fit the PES using symmetry-adapted polynomials, permitting the MEP to be found analytically. The MEP is continuous and fully explains the transformation and its associated magnetization and volume discontinuity at the TS. We show that σ_hyd (while not able to induce shear) dramatically alters the MEP to drive reconstruction by a shuffle-only mode at ≤ 30 GPa, as observed. Finally, we relate our polynomial-based results to Landau and nudged-elastic-band approaches and show that they yield incorrect MEPs in general.

  1. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. A typical example of this type of interpolation is provided by collocation methods, which are widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, sharing some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of the elasticity equations in three dimensions are used as an interpolation basis.

  2. Second Order Boltzmann-Gibbs Principle for Polynomial Functions and Applications

    NASA Astrophysics Data System (ADS)

    Gonçalves, Patrícia; Jara, Milton; Simon, Marielle

    2017-01-01

    In this paper we give a new proof of the second-order Boltzmann-Gibbs principle introduced in Gonçalves and Jara (Arch Ration Mech Anal 212(2):597-644, 2014). The proof does not require knowledge of a spectral gap inequality for the underlying model; it relies on a proper decomposition of the antisymmetric part of the current of the system in terms of polynomial functions. In addition, we fully derive the convergence of the equilibrium fluctuations towards (1) a trivial process in the case of super-diffusive systems, and (2) an Ornstein-Uhlenbeck process or the unique energy solution of the stochastic Burgers equation, as defined in Gubinelli and Jara (SPDEs Anal Comput (1):325-350, 2013) and Gubinelli and Perkowski (arXiv:1508.07764, 2015), in the case of weakly asymmetric diffusive systems. Examples and applications are presented for weakly and partially asymmetric exclusion processes, weakly asymmetric speed-change exclusion processes, and Hamiltonian systems with exponential interactions.

  3. High-precision numerical integration of equations in dynamics

    NASA Astrophysics Data System (ADS)

    Alesova, I. M.; Babadzanjanz, L. K.; Pototskaya, I. Yu.; Pupysheva, Yu. Yu.; Saakyan, A. T.

    2018-05-01

    An important requirement in solving differential equations in Dynamics, such as the equations of motion of celestial bodies and, in particular, of cosmic robotic systems, is high accuracy over large time intervals. One of the effective tools for obtaining such solutions is the Taylor series method. In this connection, we note that it is very advantageous to reduce the given equations of Dynamics to systems with polynomial (in the unknowns) right-hand sides. This allows us to obtain effective algorithms for finding the Taylor coefficients, a priori error estimates at each step of integration, and an optimal choice of the order of the approximation used. In the paper, these questions are discussed and appropriate algorithms are considered.
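
    As a minimal illustration of the recursive Taylor-coefficient computation for a polynomial right-hand side (my own example, not taken from the paper), consider y' = y², whose coefficients follow from a Cauchy product:

    ```python
    from fractions import Fraction

    def taylor_coeffs(n_terms, y0):
        """Taylor coefficients of the solution of y' = y**2, y(0) = y0.
        The polynomial right-hand side gives the recursion
        (k+1) c_{k+1} = sum_{i=0}^{k} c_i * c_{k-i} (a Cauchy product)."""
        c = [Fraction(y0)]
        for k in range(n_terms - 1):
            conv = sum(c[i] * c[k - i] for i in range(k + 1))
            c.append(conv / (k + 1))
        return c

    # For y0 = 1 the exact solution is 1/(1-t), so every coefficient is 1.
    print([int(c) for c in taylor_coeffs(6, 1)])   # [1, 1, 1, 1, 1, 1]
    ```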

  4. Graph traversals, genes, and matroids: An efficient case of the travelling salesman problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusfield, D.; Stelling, P.; Wang, Lusheng

    1996-12-31

    In this paper the authors consider graph traversal problems that arise from a particular technology for DNA sequencing: sequencing by hybridization (SBH). They first explain the connection of the graph problems to SBH and then focus on the traversal problems. They describe a practical polynomial-time solution to the Travelling Salesman Problem in a rich class of directed graphs (including edge-weighted binary de Bruijn graphs), and provide a bounded-error approximation algorithm for the maximum-weight TSP in a superset of those directed graphs. The authors also establish the existence of a matroid structure defined on the set of Euler and Hamilton paths in the restricted class of graphs. 8 refs., 5 figs.
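
    The SBH-to-graph-traversal connection can be illustrated with a small sketch (mine, and deliberately simplified: in the idealized error-free case, reconstruction is an Eulerian path in the de Bruijn graph of the observed k-mers, found here with Hierholzer's algorithm, whereas the paper treats the weighted TSP generalization):

    ```python
    from collections import defaultdict

    def assemble(kmers):
        """Sequencing-by-hybridization toy: walk an Eulerian path in the
        de Bruijn graph whose edges are the observed k-mers (Hierholzer)."""
        graph = defaultdict(list)
        out_deg, in_deg = defaultdict(int), defaultdict(int)
        for kmer in kmers:
            u, v = kmer[:-1], kmer[1:]
            graph[u].append(v)
            out_deg[u] += 1
            in_deg[v] += 1
        # A start node has one extra outgoing edge (if none, any node works)
        start = next((u for u in graph if out_deg[u] - in_deg[u] == 1),
                     next(iter(graph)))
        stack, path = [start], []
        while stack:
            while graph[stack[-1]]:
                stack.append(graph[stack[-1]].pop())
            path.append(stack.pop())
        path.reverse()
        return path[0] + ''.join(v[-1] for v in path[1:])

    print(assemble(["ATG", "TGG", "GGC", "GCA"]))   # ATGGCA
    ```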

  5. Top-d Rank Aggregation in Web Meta-search Engine

    NASA Astrophysics Data System (ADS)

    Fang, Qizhi; Xiao, Han; Zhu, Shanfeng

    In this paper, we consider the rank aggregation problem for information retrieval over the Web, making use of a metric, the coherence, which takes into account both the normalized Kendall-τ distance and the size of the overlap between two partial rankings. In general, the top-d coherence aggregation problem is defined as follows: given a collection of partial rankings Π = {τ_1, τ_2, …, τ_K}, find a final ranking π of specified length d that maximizes the total coherence Φ(π, Π) = ∑_{i=1}^{K} Φ(π, τ_i). The corresponding complexity and algorithmic issues are discussed in this paper. Our main technical contribution is a polynomial-time approximation scheme (PTAS) for a restricted top-d coherence aggregation problem.
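
    To make the two ingredients of the coherence concrete, here is a toy scoring function (my sketch; the paper's exact normalization and weighting may differ) that combines the overlap size with the normalized Kendall-τ distance restricted to the overlap:

    ```python
    from itertools import combinations

    def coherence(pi, tau):
        """Toy coherence of two partial rankings (lists, best first): overlap
        size weighted by 1 minus the normalized Kendall-tau distance on the
        overlap. Illustrative only; not the paper's exact definition."""
        common = set(pi) & set(tau)
        if len(common) < 2:
            return float(len(common))
        r1 = {x: i for i, x in enumerate(pi)}
        r2 = {x: i for i, x in enumerate(tau)}
        discordant = sum((r1[x] - r1[y]) * (r2[x] - r2[y]) < 0
                         for x, y in combinations(common, 2))
        pairs = len(common) * (len(common) - 1) // 2
        return len(common) * (1 - discordant / pairs)

    print(coherence(["a", "b", "c", "d"], ["b", "a", "c", "e"]))   # 2.0
    ```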

  6. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model first appeared (not rigorously defined) in another journal paper by the first author, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper presents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems for smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their own ad hoc applications. This is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems; the previous publications have, in the main, been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving it from the intuitive/heuristic stage to rigorous, formal and comprehensive studies.

  7. Explicit analytical expression for the condition number of polynomials in power form

    NASA Astrophysics Data System (ADS)

    Rack, Heinz-Joachim

    2017-07-01

    In his influential papers [1-3] W. Gautschi has defined and reshaped the condition number κ∞ of polynomials Pn of degree ≤ n which are represented in power form on a zero-symmetric interval [-ω, ω]. Basically, κ∞ is expressed as the product of two operator norms: an explicit factor times an implicit one (the l∞-norm of the coefficient vector of the n-th Chebyshev polynomial of the first kind relative to [-ω, ω]). We provide a new proof, economize the second factor and express it by an explicit analytical formula.

  8. Analytical solution of tt̄ dilepton equations

    NASA Astrophysics Data System (ADS)

    Sonnenschein, Lars

    2006-03-01

    The top quark-antiquark production system in the dilepton decay channel is described by a set of equations which is nonlinear in the unknown neutrino momenta. Its most precise and least time-consuming solution is of major importance for measurements of top quark properties like the top quark mass and tt̄ spin correlations. The initial system of equations can be transformed into two polynomial equations with two unknowns by means of elementary algebraic operations. These two polynomials of multidegree two can be reduced to one univariate polynomial of degree four by means of resultants. The obtained quartic equation is solved analytically.
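
    Numerically, the final quartic can be solved with a companion-matrix root finder as a stand-in for the analytical (Ferrari-type) solution the paper derives; the coefficient values below are placeholders, not physics:

    ```python
    import numpy as np

    coeffs = [1.0, -3.0, 2.5, 4.0, -1.2]    # quartic coefficients, highest degree first
    roots = np.roots(coeffs)                # all four (possibly complex) roots
    real_roots = roots[np.abs(roots.imag) < 1e-9].real   # physical candidates
    ```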

  9. Localization of the lumbar discs using machine learning and exact probabilistic inference.

    PubMed

    Oktay, Ayse Betul; Akgul, Yusuf Sinan

    2011-01-01

    We propose a novel fully automatic approach to localize the lumbar intervertebral discs in MR images with a PHOG-based SVM and a probabilistic graphical model. At the local level, our method assigns a score to each pixel in the target image that indicates whether it is a disc center or not. At the global level, we define a chain-like graphical model that represents the lumbar intervertebral discs, and we use an exact inference algorithm to localize the discs. Our main contributions are the employment of the SVM with the PHOG-based descriptor, which is robust against variations of the discs, and a graphical model that reflects the linear nature of the vertebral column. Our inference algorithm runs in polynomial time and produces globally optimal results. The developed system is validated on a real spine MRI dataset, and the final localization results compare favorably with those reported in the literature.
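
    Exact inference on a chain-structured model of this kind reduces to the Viterbi dynamic program; the sketch below (my reconstruction of the general technique, not the authors' code) finds a globally optimal assignment in time polynomial in the number of discs and candidate centers:

    ```python
    import numpy as np

    def localize_chain(unary, pairwise):
        """Exact MAP inference on a chain: unary[d][c] scores candidate center c
        for disc d (e.g. an SVM score), pairwise[c1][c2] scores adjacent discs.
        Returns one globally optimal candidate index per disc (Viterbi)."""
        n_discs, n_cand = unary.shape
        best = unary[0].copy()
        back = np.zeros((n_discs, n_cand), dtype=int)
        for d in range(1, n_discs):
            scores = best[:, None] + pairwise      # (prev cand, cur cand)
            back[d] = scores.argmax(axis=0)
            best = scores.max(axis=0) + unary[d]
        path = [int(best.argmax())]
        for d in range(n_discs - 1, 0, -1):
            path.append(int(back[d][path[-1]]))
        return path[::-1]

    unary = np.array([[2.0, 0.1], [0.2, 1.5], [1.0, 1.0]])
    pairwise = np.array([[0.0, 1.0], [1.0, 0.0]])   # favor alternating candidates
    print(localize_chain(unary, pairwise))          # [0, 1, 0]
    ```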

  10. Robust quantum optimizer with full connectivity.

    PubMed

    Nigg, Simon E; Lörch, Niels; Tiwari, Rakesh P

    2017-04-01

    Quantum phenomena have the potential to speed up the solution of hard optimization problems. For example, quantum annealing, based on the quantum tunneling effect, has recently been shown to scale exponentially better with system size than classical simulated annealing. However, current realizations of quantum annealers with superconducting qubits face two major challenges. First, the connectivity between the qubits is limited, excluding many optimization problems from a direct implementation. Second, decoherence degrades the success probability of the optimization. We address both of these shortcomings and propose an architecture in which the qubits are robustly encoded in continuous variable degrees of freedom. By leveraging the phenomenon of flux quantization, all-to-all connectivity with sufficient tunability to implement many relevant optimization problems is obtained without overhead. Furthermore, we demonstrate the robustness of this architecture by simulating the optimal solution of a small instance of the nondeterministic polynomial-time hard (NP-hard) and fully connected number partitioning problem in the presence of dissipation.
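
    For comparison with the quantum architecture, the number-partitioning problem mentioned at the end has a textbook classical baseline: a subset-sum dynamic program that is pseudo-polynomial in the total weight (the problem remains NP-hard in general). A minimal sketch of mine:

    ```python
    def best_partition(weights):
        """Find the subset sum closest to half the total, which minimizes the
        partition imbalance; pseudo-polynomial in the total weight."""
        total = sum(weights)
        reachable = {0}
        for w in weights:
            reachable |= {s + w for s in reachable}
        best = min(reachable, key=lambda s: abs(total - 2 * s))
        return abs(total - 2 * best)   # residual imbalance (0 = perfect split)

    print(best_partition([4, 5, 6, 7, 8]))   # 0, e.g. {4, 5, 6} vs {7, 8}
    ```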

  11. Modified method of simplest equation: Powerful tool for obtaining exact and approximate traveling-wave solutions of nonlinear PDEs

    NASA Astrophysics Data System (ADS)

    Vitanov, Nikolay K.

    2011-03-01

    We discuss the class of equations ∑_{i,j=0}^{m} A_ij(u) (∂^i u/∂t^i)(∂^j u/∂t^j) + ∑_{k,l=0}^{n} B_kl(u) (∂^k u/∂x^k)(∂^l u/∂x^l) = C(u), where A_ij(u), B_kl(u) and C(u) are functions of u(x, t) as follows: (i) A_ij, B_kl and C are polynomials of u; or (ii) A_ij, B_kl and C can be reduced to polynomials of u by means of Taylor series for small values of u. In these two cases the above-mentioned class of equations consists of nonlinear PDEs with polynomial nonlinearities. We show that the modified method of simplest equation is a powerful tool for obtaining exact traveling-wave solutions of this class of equations. The balance equations for the sub-class of traveling-wave solutions of the investigated class of equations are obtained. We illustrate the method by obtaining exact traveling-wave solutions (i) of the Swift-Hohenberg equation and (ii) of the generalized Rayleigh equation for the cases when the extended tanh-equation or the equations of Bernoulli and Riccati are used as simplest equations.

  12. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors comprises those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide-dynamic-range imaging at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected because of mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
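
    A schematic of the calibration/correction split described above might look as follows (my sketch: a generic degree-2 fit of each pixel against the frame-mean response; the paper's actual FPN calibration is approximately linear, and its fixed-point implementation is more careful):

    ```python
    import numpy as np

    def calibrate_fpn(stack, degree=2):
        """Per-pixel FPN calibration in the spirit of the paper: fit a low-degree
        polynomial mapping each pixel's response to the frame-mean response over
        calibration exposures, exploiting response monotonicity.
        stack has shape (n_exposures, n_pixels)."""
        reference = stack.mean(axis=1)              # target 'true' response
        coeffs = np.empty((stack.shape[1], degree + 1))
        for p in range(stack.shape[1]):
            coeffs[p] = np.polyfit(stack[:, p], reference, degree)
        return coeffs

    def correct(frame, coeffs):
        """Apply the per-pixel correction polynomials to one raw frame."""
        return np.array([np.polyval(c, v) for c, v in zip(coeffs, frame)])

    # tiny synthetic demo: 8 exposures, 5 pixels, distorted log responses
    x = np.linspace(1.0, 100.0, 8)[:, None]
    gains = np.array([0.9, 1.0, 1.1, 0.95, 1.05])
    stack = gains * np.log(x) + 0.02 * np.random.default_rng(0).standard_normal((8, 5))
    corrected = correct(stack[3], calibrate_fpn(stack))
    ```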

  13. Geometric accuracy of LANDSAT-4 MSS image data

    NASA Technical Reports Server (NTRS)

    Welch, R.; Usery, E. L.

    1983-01-01

    Analyses of the LANDSAT-4 MSS image data of North Georgia provided by the EDC in CCT-p formats reveal that errors in the raw data can be reduced to about ±55 m by rectification procedures involving the use of 20 to 30 well-distributed GCPs and 2nd- or 3rd-degree polynomial equations. Higher-order polynomials do not appear to improve the rectification accuracy. A subscene area of 256 x 256 pixels was rectified with a 1st-degree polynomial to yield an RMSE_xy value of about ±40 m, indicating that USGS 1:24,000-scale quadrangle-sized areas of LANDSAT-4 data can be fitted to a map base with relatively few control points and simple equations. The errors in the rectification process are caused by the spatial resolution of the MSS data, by errors in the maps and the GCP digitizing process, and by displacements caused by terrain relief. Overall, owing to the improved pointing and attitude control of the spacecraft, the geometric quality of the LANDSAT-4 MSS data appears much improved over that of LANDSATs 1, 2 and 3.
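
    The rectification step itself is ordinary polynomial least squares on the GCPs; the sketch below (mine, with made-up array names) fits a 2nd-degree transform from map to image coordinates and reports the RMSE_xy residual:

    ```python
    import numpy as np

    def design_matrix(map_xy, degree=2):
        """All monomials x**i * y**j with i + j <= degree (6 terms for degree 2)."""
        x, y = map_xy[:, 0], map_xy[:, 1]
        return np.column_stack([x**i * y**j
                                for i in range(degree + 1)
                                for j in range(degree + 1 - i)])

    def fit_rectification(map_xy, img_xy, degree=2):
        """Least-squares polynomial transform from map to image coordinates."""
        A = design_matrix(map_xy, degree)
        coeffs, *_ = np.linalg.lstsq(A, img_xy, rcond=None)
        return coeffs                       # shape (n_terms, 2)

    def rmse_xy(map_xy, img_xy, coeffs, degree=2):
        resid = design_matrix(map_xy, degree) @ coeffs - img_xy
        return np.sqrt((resid**2).sum(axis=1).mean())
    ```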

  14. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification for systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The proposed approach, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics, as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
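
    The core mechanism, bounding a polynomial's range by its Bernstein coefficients, fits in a few lines; the sketch below handles the univariate case on [0, 1] (my illustration; the paper works with multivariate response metrics and tightens such enclosures):

    ```python
    from math import comb

    def bernstein_range(a):
        """Enclose the range of p(x) = sum_k a[k] * x**k on [0, 1] using its
        Bernstein coefficients b_j = sum_{k<=j} C(j,k)/C(n,k) * a_k; the min
        and max coefficient bound the polynomial (exact at the endpoints)."""
        n = len(a) - 1
        b = [sum(comb(j, k) / comb(n, k) * a[k] for k in range(j + 1))
             for j in range(n + 1)]
        return min(b), max(b)

    # p(x) = 1 - 3x + 2x^2 has range [-1/8, 1] on [0, 1]; the bounds enclose it
    print(bernstein_range([1, -3, 2]))   # (-0.5, 1.0)
    ```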

  15. Image Processing Language. Phase 1

    DTIC Science & Technology

    1988-05-01

    Only fragments of the scanned report are recoverable. They indicate that the stated requirements cannot be met in their entirety but can serve as guidelines to which the construction of a useful and comprehensive imaging algebra might aspire, and they list candidate operations including: Bernstein Polynomial Approximation; Best Plane Fit (BPF: Sobel, Roberts, Prewitt, Gradient); Boundary Finder; Boundary Segmenter; Chain Code Angle.

  16. Alternatives to the stochastic "noise vector" approach

    NASA Astrophysics Data System (ADS)

    de Forcrand, Philippe; Jäger, Benjamin

    2018-03-01

    Several important observables, like the quark condensate and the Taylor coefficients of the expansion of the QCD pressure with respect to the chemical potential, are based on the trace of the inverse Dirac operator and of its powers. Such traces are traditionally estimated with "noise vectors" sandwiching the operator. We explore alternative approaches based on polynomial approximations of the inverse Dirac operator.
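
    For context, the traditional "noise vector" estimator that the abstract contrasts with polynomial approximations is Hutchinson's stochastic trace estimator; a minimal dense-matrix sketch of mine (the lattice-QCD setting replaces the solve with a sparse Dirac-operator inversion):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def noise_trace_inverse(A, n_vectors=200):
        """Stochastic estimate of tr(A^{-1}): E[z^T A^{-1} z] = tr(A^{-1})
        for Rademacher noise vectors z (Hutchinson's estimator)."""
        n = A.shape[0]
        est = 0.0
        for _ in range(n_vectors):
            z = rng.choice([-1.0, 1.0], size=n)
            est += z @ np.linalg.solve(A, z)   # one linear solve per noise vector
        return est / n_vectors

    A = np.diag(np.linspace(1.0, 4.0, 50)) + 0.01 * rng.standard_normal((50, 50))
    A = (A + A.T) / 2                          # keep it symmetric for the demo
    print(noise_trace_inverse(A), np.trace(np.linalg.inv(A)))
    ```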

  17. The Ritz - Sublaminate Generalized Unified Formulation approach for piezoelectric composite plates

    NASA Astrophysics Data System (ADS)

    D'Ottavio, Michele; Dozio, Lorenzo; Vescovini, Riccardo; Polit, Olivier

    2018-01-01

    This paper extends the variable-kinematics plate modeling approach called Sublaminate Generalized Unified Formulation (SGUF) to composite plates that include piezoelectric plies. Two-dimensional plate equations are obtained upon defining a priori the through-thickness distribution of the displacement field and electric potential. According to SGUF, independent approximations can be adopted for the four components of these generalized displacements: an Equivalent Single Layer (ESL) or Layer-Wise (LW) description over an arbitrary group of plies constituting the composite plate (the sublaminate), and the polynomial order employed in each sublaminate. The solution of the two-dimensional equations is sought in weak form by means of a Ritz method. In this work, boundary functions are used in conjunction with a domain approximation expressed in an orthogonal basis spanned by Legendre polynomials. The proposed computational tool is capable of representing electroded surfaces with equipotentiality conditions. Free-vibration problems as well as static problems involving actuator and sensor configurations are addressed. Two case studies are presented, which demonstrate the high accuracy of the proposed Ritz-SGUF approach. A model assessment is proposed to showcase the extent to which the SGUF approach allows a reduction of the number of unknowns with a controlled impact on the accuracy of the results.

  18. Building dynamical models from data and prior knowledge: the case of the first period-doubling bifurcation.

    PubMed

    Aguirre, Luis Antonio; Furtado, Edgar Campos

    2007-10-01

    This paper reviews some aspects of nonlinear model building from data with (gray box) and without (black box) prior knowledge. The model class is very important because it determines two aspects of the final model, namely (i) the type of nonlinearity that can be accurately approximated and (ii) the type of prior knowledge that can be taken into account. Such features are usually in conflict when it comes to choosing the model class. The problem of model structure selection is also reviewed. It is argued that this problem is philosophically different depending on the model class, and it is suggested that the choice of model class should be made based on the type of a priori knowledge available. A procedure is proposed to build polynomial models from data on a Poincaré section and prior knowledge about the first period-doubling bifurcation, for which the normal form is also polynomial. The final models approximate dynamical data in a least-squares sense and, by design, present the first period-doubling bifurcation at a specified value of the parameters. The procedure is illustrated by means of simulated examples.

  19. Estimates of the absolute error and a scheme for an approximate solution to scheduling problems

    NASA Astrophysics Data System (ADS)

    Lazarev, A. A.

    2009-02-01

    An approach is proposed for estimating absolute errors and finding approximate solutions to classical NP-hard scheduling problems, such as minimizing the maximum lateness on one or many machines and minimizing the makespan. The concept of a metric (distance) between instances of the problem is introduced. The idea behind the approach is, given a problem instance, to construct another instance, at the minimum distance from the initial one in the introduced metric, for which an optimal or approximate solution can be found. Instead of solving the original problem (instance), a set of approximating polynomially/pseudopolynomially solvable problems (instances) is considered, an instance at the minimum distance from the given one is chosen, and the resulting schedule is then applied to the original instance.

  20. A Jacobi collocation approximation for nonlinear coupled viscous Burgers' equation

    NASA Astrophysics Data System (ADS)

    Doha, Eid H.; Bhrawy, Ali H.; Abdelkawy, Mohamed A.; Hafez, Ramy M.

    2014-02-01

    This article presents a numerical approximation of the initial-boundary-value problem for the nonlinear coupled viscous Burgers' equation based on spectral methods. A Jacobi-Gauss-Lobatto collocation (J-GL-C) scheme, in combination with the implicit Runge-Kutta-Nyström (IRKN) scheme, is employed to obtain highly accurate approximations to the mentioned problem. This J-GL-C method, based on Jacobi polynomials and Gauss-Lobatto quadrature integration, reduces the nonlinear coupled viscous Burgers' equation to a system of nonlinear ordinary differential equations, which is far easier to solve. The given examples show, with relatively few J-GL-C points, the accuracy of the approximations and the utility of the approach over other analytical or numerical methods. The illustrative examples demonstrate the accuracy, efficiency, and versatility of the proposed algorithm.

  1. Non-model-based damage identification of plates using principal, mean and Gaussian curvature mode shapes

    DOE PAGES

    Xu, Yongfeng F.; Zhu, Weidong D.; Smith, Scott A.

    2017-07-01

    Mode shapes (MSs) have been extensively used to identify structural damage. This paper presents a new non-model-based method that uses principal, mean and Gaussian curvature MSs (CMSs) to identify damage in plates; the method is applicable and robust to MSs associated with low and high elastic modes on dense and coarse measurement grids. A multi-scale discrete differential-geometry scheme is proposed to calculate principal, mean and Gaussian CMSs associated with a MS of a plate, which can alleviate adverse effects of measurement noise on calculating the CMSs. Principal, mean and Gaussian CMSs of a damaged plate and those of an undamaged one are used to yield four curvature damage indices (CDIs), including Maximum-CDIs, Minimum-CDIs, Mean-CDIs and Gaussian-CDIs. Damage can be identified near regions with consistently higher values of the CDIs. It is shown that a MS of an undamaged plate can be well approximated using a polynomial with a properly determined order that fits a MS of a damaged one, provided that the undamaged plate has a smooth geometry and is made of material that has no stiffness and mass discontinuities. New fitting and convergence indices are proposed to quantify the level of approximation of a MS from a polynomial fit to that of a damaged plate and to determine the proper order of the polynomial fit, respectively. A MS of an aluminum plate with damage in the form of a machined thickness reduction area was measured to experimentally investigate the effectiveness of the proposed CDIs in damage identification; the damage on the plate was successfully identified.

  2. Non-model-based damage identification of plates using principal, mean and Gaussian curvature mode shapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Yongfeng F.; Zhu, Weidong D.; Smith, Scott A.

    Mode shapes (MSs) have been extensively used to identify structural damage. This paper presents a new non-model-based method that uses principal, mean and Gaussian curvature MSs (CMSs) to identify damage in plates; the method is applicable and robust to MSs associated with low and high elastic modes on dense and coarse measurement grids. A multi-scale discrete differential-geometry scheme is proposed to calculate principal, mean and Gaussian CMSs associated with a MS of a plate, which can alleviate adverse effects of measurement noise on calculating the CMSs. Principal, mean and Gaussian CMSs of a damaged plate and those of an undamaged one are used to yield four curvature damage indices (CDIs), including Maximum-CDIs, Minimum-CDIs, Mean-CDIs and Gaussian-CDIs. Damage can be identified near regions with consistently higher values of the CDIs. It is shown that a MS of an undamaged plate can be well approximated using a polynomial with a properly determined order that fits a MS of a damaged one, provided that the undamaged plate has a smooth geometry and is made of material that has no stiffness and mass discontinuities. New fitting and convergence indices are proposed to quantify the level of approximation of a MS from a polynomial fit to that of a damaged plate and to determine the proper order of the polynomial fit, respectively. A MS of an aluminum plate with damage in the form of a machined thickness reduction area was measured to experimentally investigate the effectiveness of the proposed CDIs in damage identification; the damage on the plate was successfully identified.

  3. Efficient computation of the joint probability of multiple inherited risk alleles from pedigree data.

    PubMed

    Madsen, Thomas; Braun, Danielle; Peng, Gang; Parmigiani, Giovanni; Trippa, Lorenzo

    2018-06-25

    The Elston-Stewart peeling algorithm enables estimation of an individual's probability of harboring germline risk alleles based on pedigree data, and serves as the computational backbone of important genetic counseling tools. However, it remains limited to the analysis of risk alleles at a small number of genetic loci because its computing time grows exponentially with the number of loci considered. We propose a novel, approximate version of this algorithm, dubbed the peeling and paring algorithm, which scales polynomially in the number of loci. This allows extending peeling-based models to include many genetic loci. The algorithm creates a trade-off between accuracy and speed, and allows the user to control this trade-off. We provide exact bounds on the approximation error and evaluate it in realistic simulations. Results show that the loss of accuracy due to the approximation is negligible in important applications. This algorithm will improve genetic counseling tools by increasing the number of pathogenic risk alleles that can be addressed. To illustrate, we create an extended five-gene version of BRCAPRO, a widely used model for estimating the carrier probabilities of BRCA1 and BRCA2 risk alleles, and assess its computational properties. © 2018 WILEY PERIODICALS, INC.

  4. Exact and Approximate Stability of Solutions to Traveling Salesman Problems.

    PubMed

    Niendorf, Moritz; Girard, Anouck R

    2018-02-01

    This paper presents the stability analysis of an optimal tour for the symmetric traveling salesman problem (TSP) by obtaining stability regions. The stability region of an optimal tour is the set of all cost changes for which that solution remains optimal, and can be understood as the margin of optimality of a solution with respect to perturbations in the problem data. It is known that it is not possible to test in polynomial time whether an optimal tour remains optimal after the cost of an arbitrary set of edges changes. Therefore, this paper develops tractable methods to obtain under- and over-approximations of stability regions based on neighborhoods and relaxations. The application of the results to the two-neighborhood and the minimum 1-tree (M1T) relaxation is discussed in detail. For Euclidean TSPs, stability regions with respect to vertex location perturbations and the notions of safe radii and location criticalities are introduced. Benefits of this paper include insight into robustness properties of tours, minimum spanning trees, and M1Ts, and fast methods to evaluate optimality after perturbations occur. Numerical examples are given to demonstrate the methods and the achievable approximation quality.
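
    As an example of the neighborhood-based under-approximation, the margin of a tour within its 2-exchange neighborhood can be computed directly; a positive value certifies 2-opt local optimality and bounds how much the costs may change before that certificate fails (my sketch, symmetric costs assumed):

    ```python
    import numpy as np

    def two_opt_margin(tour, cost):
        """Smallest cost increase over all 2-exchanges of a tour; a positive
        margin certifies optimality within the 2-neighborhood."""
        n = len(tour)
        margin = np.inf
        for i in range(n - 1):
            for j in range(i + 2, n - (i == 0)):   # skip exchanges sharing an edge
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                delta = cost[a, c] + cost[b, d] - cost[a, b] - cost[c, d]
                margin = min(margin, delta)
        return margin

    cost = np.random.default_rng(0).random((6, 6))
    cost = (cost + cost.T) / 2
    print(two_opt_margin([0, 1, 2, 3, 4, 5], cost))
    ```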

  5. An Efficient Algorithm for Perturbed Orbit Integration Combining Analytical Continuation and Modified Chebyshev Picard Iteration

    NASA Astrophysics Data System (ADS)

    Elgohary, T.; Kim, D.; Turner, J.; Junkins, J.

    2014-09-01

    Several methods exist for integrating the motion in high-order gravity fields. Some recent methods use an approximate starting orbit, and an efficient method is needed for generating warm starts that account for specific low-order gravity approximations. By introducing two scalar Lagrange-like invariants and employing the Leibniz product rule, the perturbed motion is integrated by a novel recursive formulation. The Lagrange-like invariants allow exact arbitrary-order time derivatives. Restricting attention to the perturbations due to the zonal harmonics J2 through J6, we illustrate the idea. The recursively generated vector-valued time derivatives of the trajectory are used to develop a continuation series-based solution for propagating position and velocity. Numerical comparisons indicate performance improvements of ~70X over existing explicit Runge-Kutta methods while maintaining mm accuracy for the orbit predictions. The Modified Chebyshev Picard Iteration (MCPI) is an iterative path approximation method for solving nonlinear ordinary differential equations. MCPI utilizes Picard iteration with orthogonal Chebyshev polynomial basis functions to recursively update the states. The key advantages of MCPI are as follows: 1) large segments of a trajectory can be approximated by evaluating the forcing function at multiple nodes along the current approximation during each iteration; 2) it can readily handle general gravity perturbations as well as non-conservative forces; 3) parallel applications are possible. The Picard sequence converges to the solution over large time intervals when the forces are continuous and differentiable. Depending on the accuracy of the starting solution, however, MCPI may require a significant number of iterations and function evaluations compared with other integrators. In this work, we provide an efficient methodology to establish good starting solutions from the continuation series method; this warm start improves the performance of MCPI significantly and will likely be useful for other applications where efficiently computed approximate orbit solutions are needed.
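
    A minimal sketch of the Picard-iteration-with-Chebyshev-representation idea (mine; real MCPI adds node matrices, error control, segment splitting, and the warm start discussed above):

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def picard_chebyshev(f, x0, t0, t1, n_nodes=32, n_iter=30):
        """Picard iteration x_{k+1}(t) = x0 + int_{t0}^t f(s, x_k(s)) ds, with
        the iterates represented by Chebyshev fits on scaled Lobatto nodes."""
        tau = np.cos(np.pi * np.arange(n_nodes) / (n_nodes - 1))   # nodes in [-1, 1]
        t = 0.5 * (t1 - t0) * tau + 0.5 * (t1 + t0)
        x = np.full(n_nodes, float(x0))                            # cold start
        for _ in range(n_iter):
            c = C.chebfit(tau, f(t, x), n_nodes - 1)               # fit the integrand
            ci = C.chebint(c, lbnd=-1, scl=0.5 * (t1 - t0))        # scaled antiderivative
            x = x0 + C.chebval(tau, ci)                            # Picard update
        return t, x

    t, x = picard_chebyshev(lambda t, x: x, 1.0, 0.0, 1.0)   # x' = x, x(0) = 1
    print(np.max(np.abs(x - np.exp(t))))                     # ~ machine precision
    ```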

  6. O+OH-->O(2)+H: A key reaction for interstellar chemistry. New theoretical results and comparison with experiment.

    PubMed

    Lique, F; Jorfi, M; Honvault, P; Halvick, P; Lin, S Y; Guo, H; Xie, D Q; Dagdigian, P J; Kłos, J; Alexander, M H

    2009-12-14

    We report extensive, fully quantum, time-independent (TID) calculations of cross sections at low collision energies and rate constants at low temperatures for the O+OH reaction, of key importance in the production of molecular oxygen in cold, dark, interstellar clouds and in the chemistry of the Earth's atmosphere. Our calculations are compared with TID calculations within the J-shifting approximation, with wave-packet calculations, and with quasiclassical trajectory calculations. The fully quantum TID calculations yield rate constants higher than those from the more approximate methods and are qualitatively consistent with a low-temperature extrapolation of earlier experimental values but not with the most recent experiments at the lowest temperatures.

  7. On the convergence of a fully discrete scheme of LES type to physically relevant solutions of the incompressible Navier-Stokes

    NASA Astrophysics Data System (ADS)

    Berselli, Luigi C.; Spirito, Stefano

    2018-06-01

    Obtaining reliable numerical simulations of turbulent fluids is a challenging problem in computational fluid mechanics. The large eddy simulation (LES) models are efficient tools to approximate turbulent fluids, and an important step in the validation of these models is the ability to reproduce relevant properties of the flow. In this paper, we consider a fully discrete approximation of the Navier-Stokes-Voigt model by an implicit Euler algorithm (with respect to the time variable) and a Fourier-Galerkin method (in the space variables). We prove the convergence to weak solutions of the incompressible Navier-Stokes equations satisfying the natural local entropy condition, hence selecting the so-called physically relevant solutions.

  8. Cubic Polynomials with Real or Complex Coefficients: The Full Picture

    ERIC Educational Resources Information Center

    Bardell, Nicholas S.

    2016-01-01

    The cubic polynomial with real coefficients has a rich and interesting history, primarily associated with the endeavours of great mathematicians like del Ferro, Tartaglia, Cardano and Vieta, who sought a solution for the roots (Katz, 1998; see Chapter 12.3: The Solution of the Cubic Equation). Suffice it to say that since the times of the Renaissance…

  9. Complexity of Quantum Impurity Problems

    NASA Astrophysics Data System (ADS)

    Bravyi, Sergey; Gosset, David

    2017-12-01

    We give a quasi-polynomial time classical algorithm for estimating the ground state energy and for computing low energy states of quantum impurity models. Such models describe a bath of free fermions coupled to a small interacting subsystem called an impurity. The full system consists of n fermionic modes and has a Hamiltonian H = H_0 + H_imp, where H_0 is quadratic in creation-annihilation operators and H_imp is an arbitrary Hamiltonian acting on a subset of O(1) modes. We show that the ground energy of H can be approximated with an additive error 2^(-b) in time n^3 exp[O(b^3)]. Our algorithm also finds a low energy state that achieves this approximation. The low energy state is represented as a superposition of exp[O(b^3)] fermionic Gaussian states. To arrive at this result we prove several theorems concerning exact ground states of impurity models. In particular, we show that the eigenvalues of the ground state covariance matrix decay exponentially, with an exponent depending very mildly on the spectral gap of H_0. A key ingredient of our proof is Zolotarev's rational approximation to the √x function. We anticipate that our algorithms may be used in hybrid quantum-classical simulations of strongly correlated materials based on dynamical mean field theory. We implemented a simplified practical version of our algorithm and benchmarked it using the single impurity Anderson model.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deka, Deepjyoti; Backhaus, Scott N.; Chertkov, Michael

    Limited placement of real-time monitoring devices in the distribution grid, recent trends notwithstanding, has prevented the easy implementation of demand-response and other smart grid applications. Part I of this paper discusses the problem of learning the operational structure of the grid from nodal voltage measurements. In this work (Part II), the learning of the operational radial structure is coupled with the problem of estimating nodal consumption statistics and inferring the line parameters in the grid. Based on a Linear-Coupled (LC) approximation of the AC power flow equations, polynomial-time algorithms are designed to identify the structure and estimate nodal load characteristics and/or line parameters in the grid using the available nodal voltage measurements. The structure learning algorithm is then extended to cases with missing data, where available observations are limited to a fraction of the grid nodes. The efficacy of the presented algorithms is demonstrated through simulations on several distribution test cases.

  11. FAST TRACK COMMUNICATION: The unusual asymptotics of three-sided prudent polygons

    NASA Astrophysics Data System (ADS)

    Beaton, Nicholas R.; Flajolet, Philippe; Guttmann, Anthony J.

    2010-08-01

    We have studied the area-generating function of prudent polygons on the square lattice. Exact solutions are obtained for the generating function of two-sided and three-sided prudent polygons, and a functional equation is found for four-sided prudent polygons. This is used to generate series coefficients in polynomial time, and these are analysed to determine the asymptotics numerically. A careful asymptotic analysis of the three-sided polygons produces a most surprising result. A transcendental critical exponent is found, and the leading amplitude is not quite a constant, but is a constant plus a small oscillatory component with an amplitude approximately 10^-8 times that of the leading amplitude. This effect cannot be seen by any standard numerical analysis, but it may be present in other models. If so, it changes our whole view of the asymptotic behaviour of lattice models.

  12. Network of time-multiplexed optical parametric oscillators as a coherent Ising machine

    NASA Astrophysics Data System (ADS)

    Marandi, Alireza; Wang, Zhe; Takata, Kenta; Byer, Robert L.; Yamamoto, Yoshihisa

    2014-12-01

    Finding the ground states of the Ising Hamiltonian maps to various combinatorial optimization problems in biology, medicine, wireless communications, artificial intelligence and social networks. So far, no efficient classical or quantum algorithm is known for these problems, and intensive research is focused on creating physical systems, Ising machines, capable of finding the absolute or approximate ground states of the Ising Hamiltonian. Here, we report an Ising machine using a network of degenerate optical parametric oscillators (OPOs). Spins are represented by the above-threshold binary phases of the OPOs, and the Ising couplings are realized by mutual injections. The network is implemented in a single OPO ring cavity with multiple trains of femtosecond pulses and configurable mutual couplings, and operates at room temperature. We programmed a small non-deterministic polynomial-time (NP)-hard problem on a 4-OPO Ising machine and, in 1,000 runs, no computational error was detected.

  13. Study of a vibrating plate: comparison between experimental (ESPI) and analytical results

    NASA Astrophysics Data System (ADS)

    Romero, G.; Alvarez, L.; Alanís, E.; Nallim, L.; Grossi, R.

    2003-07-01

    Real-time electronic speckle pattern interferometry (ESPI) was used for tuning and visualization of natural frequencies of a trapezoidal plate. The plate was excited to resonant vibration by a sinusoidal acoustical source, which provided a continuous range of audio frequencies. Fringe patterns produced during the time-average recording of the vibrating plate—corresponding to several resonant frequencies—were registered. From these interferograms, calculations of vibrational amplitudes by means of zero-order Bessel functions were performed in some particular cases. The system was also studied analytically. The analytical approach developed is based on the Rayleigh-Ritz method and on the use of non-orthogonal right triangular co-ordinates. The deflection of the plate is approximated by a set of beam characteristic orthogonal polynomials generated by using the Gram-Schmidt procedure. A high degree of correlation between computational analysis and experimental results was observed.
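
    For reference, the zero-order Bessel analysis mentioned above conventionally rests on the time-average fringe relation for out-of-plane vibration (a standard ESPI/time-average holography result, stated here assuming near-normal illumination and observation; it is not quoted from the paper):

    ```latex
    I(x,y) \propto J_0^{2}\!\left(\frac{4\pi}{\lambda}\, A(x,y)\right)
    ```

    where A(x, y) is the local vibration amplitude and \lambda the laser wavelength; dark fringes fall at the zeros of J_0, which is how amplitudes are recovered from the interferograms.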

  14. Power of one nonclean qubit

    NASA Astrophysics Data System (ADS)

    Morimae, Tomoyuki; Fujii, Keisuke; Nishimura, Harumichi

    2017-04-01

    The one-clean-qubit model (or the DQC1 model) is a restricted model of quantum computing where only a single qubit of the initial state is pure and the others are maximally mixed. Although the model is not universal, it can efficiently solve several problems whose classical efficient solutions are not known. Furthermore, it was recently shown that if the one-clean-qubit model could be classically efficiently simulated, the polynomial hierarchy would collapse to the second level. A disadvantage of the one-clean-qubit model is, however, that the clean qubit is too clean: for example, in realistic NMR experiments, polarizations are not high enough to provide a perfectly pure qubit. In this paper, we consider a more realistic one-clean-qubit model, where the clean qubit is not clean but depolarized. We first show that, for any polarization, a multiplicative-error calculation of the output probability distribution of the model is possible in classical polynomial time if we accept an appropriately large multiplicative error. This result is in strong contrast with that of the ideal one-clean-qubit model, where a classically efficient multiplicative-error calculation (or even sampling) with the same amount of error would cause the collapse of the polynomial hierarchy. We next show that, for any polarization lower-bounded by an inverse polynomial, classically efficient sampling (in terms of a sufficiently small multiplicative error or an exponentially small additive error) of the output probability distribution of the model is impossible unless BQP (bounded-error quantum polynomial time) is contained in the second level of the polynomial hierarchy, which suggests the hardness of classically efficiently simulating the one-nonclean-qubit model.

  15. A Novel Method for Dynamic Short-Beam Shear Testing of 3D Woven Composites

    DTIC Science & Technology

    2011-08-11

    The scanned text is only partially recoverable: the specimen was homogenized as an orthotropic elastic material with properties given in Table 1 [38]; the use of a fully elastic model removes any material nonlinearity; and, after approximately 0.5 mm of deflection in the impact event, equilibrium is reached, although it is observed from Fig. 4(d) that equilibrium is never fully …

  16. A class of generalized Ginzburg-Landau equations with random switching

    NASA Astrophysics Data System (ADS)

    Wu, Zheng; Yin, George; Lei, Dongxia

    2018-09-01

    This paper focuses on a class of generalized Ginzburg-Landau equations with random switching. In our formulation, the nonlinear term is allowed to have higher polynomial growth rate than the usual cubic polynomials. The random switching is modeled by a continuous-time Markov chain with a finite state space. First, an explicit solution is obtained. Then properties such as stochastic-ultimate boundedness and permanence of the solution processes are investigated. Finally, two-time-scale models are examined leading to a reduction of complexity.

  17. Control of magnetic bearing systems via the Chebyshev polynomial-based unified model (CPBUM) neural network.

    PubMed

    Jeng, J T; Lee, T T

    2000-01-01

    A Chebyshev polynomial-based unified model (CPBUM) neural network is introduced and applied to control magnetic bearing systems. First, we show that the CPBUM neural network not only has the same universal approximation capability as conventional feedforward/recurrent neural networks, but also learns faster. This makes the CPBUM neural network more suitable for controller design than conventional feedforward/recurrent neural networks. Second, we propose an inverse system method, based on CPBUM neural networks, to control a magnetic bearing system. The proposed controller has two structures, namely off-line and on-line learning structures, and we derive a new learning algorithm for each. The experimental results show that the proposed neural network architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
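
    The appeal of fixed polynomial features is that "training" collapses to linear least squares; the toy below shows that flavor with a Chebyshev feature map (a hypothetical stand-in for illustration only, not the CPBUM architecture itself):

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    rng = np.random.default_rng(0)
    x = np.linspace(-1.0, 1.0, 200)
    target = np.tanh(3.0 * x) + 0.05 * rng.standard_normal(x.size)   # toy plant map

    Phi = C.chebvander(x, deg=9)                # fixed features T_0(x) .. T_9(x)
    w, *_ = np.linalg.lstsq(Phi, target, rcond=None)
    pred = Phi @ w                              # one-shot 'training'
    print(np.sqrt(np.mean((pred - target) ** 2)))
    ```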

  18. Data driven discrete-time parsimonious identification of a nonlinear state-space model for a weakly nonlinear system with short data record

    NASA Astrophysics Data System (ADS)

    Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan

    2018-05-01

    Many real-world systems exhibit quasi-linear or weakly nonlinear behavior during normal operation and a hard saturation effect for high peaks of the input signal. In this paper, a methodology is proposed to identify a parsimonious discrete-time nonlinear state-space (NLSS) model of such a nonlinear dynamical system from a relatively short data record. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain a parsimonious decoupled representation of the set of multivariate real polynomials estimated during the identification of the NLSS model. Finally, the model structure is verified experimentally on the cascaded water-tanks benchmark identification problem.

  19. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR method is a robust alternative to the widely used local polynomial method, and has been well studied for stationary time series. Here, we relax the stationarity restriction on the model and allow the regressors to be generated by a general Harris recurrent Markov process, which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate of the estimator in the nonstationary case is slower than that in the stationary case. Furthermore, a weighted local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite-sample performance of the developed methodology and theory. PMID:27667894

  20. Fast numerical methods for simulating large-scale integrate-and-fire neuronal networks.

    PubMed

    Rangan, Aaditya V; Cai, David

    2007-02-01

    We discuss numerical methods for simulating large-scale, integrate-and-fire (I&F) neuronal networks. Important elements in our numerical methods are (i) a neurophysiologically inspired integrating factor which casts the solution as a numerically tractable integral equation, and allows us to obtain stable and accurate individual neuronal trajectories (i.e., voltage and conductance time-courses) even when the I&F neuronal equations are stiff, such as in strongly fluctuating, high-conductance states; (ii) an iterated process of spike-spike corrections within groups of strongly coupled neurons to account for spike-spike interactions within a single large numerical time-step; and (iii) a clustering procedure for firing events in the network to take advantage of localized architectures, such as spatial scales of strong local interactions, which are often present in large-scale computational models, for example those of the primary visual cortex. (We note that the spike-spike corrections in our methods are more involved than the correction of a single neuron's spike time via polynomial interpolation, as in the modified Runge-Kutta methods commonly used in simulations of I&F neuronal networks.) Our methods can evolve networks with relatively strong local interactions in an asymptotically optimal way such that each neuron fires approximately once in [Formula: see text] operations, where N is the number of neurons in the system. We note that quantifications used in computational modeling are often statistical, since measurements in a real experiment to characterize physiological systems are typically statistical, such as firing rates, interspike interval distributions, and spike-triggered voltage distributions. We emphasize that it takes much less computational effort to resolve statistical properties of certain I&F neuronal networks than to fully resolve trajectories of each and every neuron within the system. For networks operating in realistic dynamical regimes, such as strongly fluctuating, high-conductance states, our methods are designed to achieve statistical accuracy when very large time-steps are used. Moreover, our methods can also achieve trajectory-wise accuracy when small time-steps are used.
