Sample records for two-variable link polynomials

  1. The Fixed-Links Model in Combination with the Polynomial Function as a Tool for Investigating Choice Reaction Time Data

    ERIC Educational Resources Information Center

    Schweizer, Karl

    2006-01-01

    A model with fixed relations between manifest and latent variables is presented for investigating choice reaction time data. The numbers for fixation originate from the polynomial function. Two options are considered: the component-based (1 latent variable for each component of the polynomial function) and composite-based options (1 latent…

  2. Where are the roots of the Bethe Ansatz equations?

    NASA Astrophysics Data System (ADS)

    Vieira, R. S.; Lima-Santos, A.

    2015-10-01

    Changing the variables in the Bethe Ansatz equations (BAE) for the XXZ six-vertex model, we obtained a coupled system of polynomial equations. This provided a direct link between the BAE deduced from the algebraic Bethe Ansatz (ABA) and the BAE arising from the coordinate Bethe Ansatz (CBA). For two-magnon states this polynomial system could be decoupled and the solutions given in terms of the roots of certain self-inversive polynomials. From theorems concerning the distribution of the roots of self-inversive polynomials we made a thorough analysis of the two-magnon states, which allowed us to find the location and multiplicity of the Bethe roots in the complex plane, to discuss the completeness and singularities of Bethe's equations and the ill-founded string hypothesis concerning the location of their roots, and to find an interesting connection between the BAE and Salem's polynomials.

  3. HOMFLYPT polynomial is the best quantifier for topological cascades of vortex knots

    NASA Astrophysics Data System (ADS)

    Ricca, Renzo L.; Liu, Xin

    2018-02-01

    In this paper we derive and compare numerical sequences obtained by adapted polynomials such as HOMFLYPT, Jones and Alexander-Conway for the topological cascade of vortex torus knots and links that progressively untie by a single reconnection event at a time. Two cases are considered: the alternate sequence of knots and co-oriented links (with positive crossings) and the sequence of two-component links with oppositely oriented components (negative crossings). New recurrence equations are derived and sequences of numerical values are computed. In all cases the adapted HOMFLYPT polynomial proves to be the best quantifier for the topological cascade of torus knots and links.

  4. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, is included.
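
    The kind of fit this record describes can be sketched in a few lines of Python: an ordinary least-squares fit of a full two-variable polynomial basis. This is only an illustrative sketch, not the FIT program itself, and the data and helper names are invented for the example.

```python
import numpy as np

def fit_surface(x, y, z, deg):
    # Design matrix with every term x**i * y**j for i + j <= deg.
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return terms, coef

def eval_surface(terms, coef, x, y):
    return sum(c * x**i * y**j for (i, j), c in zip(terms, coef))

rng = np.random.default_rng(0)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 1 + 2*x - y + 0.5*x*y                   # synthetic "user-supplied data"
terms, coef = fit_surface(x, y, z, deg=2)   # data lie in the basis, so the fit is exact
```

    Comparing the fitted surface against the data points, as FIT does graphically, reduces here to evaluating `eval_surface` on the sample locations.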

  5. Knotted optical vortices in exact solutions to Maxwell's equations

    NASA Astrophysics Data System (ADS)

    de Klerk, Albertus J. J. M.; van der Veen, Roland I.; Dalhuisen, Jan Willem; Bouwmeester, Dirk

    2017-05-01

    We construct a family of exact solutions to Maxwell's equations in which the points of zero intensity form knotted lines topologically equivalent to a given but arbitrary algebraic link. These lines of zero intensity, more commonly referred to as optical vortices, and their topology are preserved as time evolves and the fields have finite energy. To derive explicit expressions for these new electromagnetic fields that satisfy the nullness property, we make use of the Bateman variables for the Hopf field as well as complex polynomials in two variables whose zero sets give rise to algebraic links. The class of algebraic links includes not only all torus knots and links thereof, but also more intricate cable knots. While the unknot has been considered before, the solutions presented here show that more general knotted structures can also arise as optical vortices in exact solutions to Maxwell's equations.

  6. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  7. Polynomial decay rate of a thermoelastic Mindlin-Timoshenko plate model with Dirichlet boundary conditions

    NASA Astrophysics Data System (ADS)

    Grobbelaar-Van Dalsen, Marié

    2015-02-01

    In this article, we are concerned with the polynomial stabilization of a two-dimensional thermoelastic Mindlin-Timoshenko plate model with no mechanical damping. The model is subject to Dirichlet boundary conditions on the elastic as well as the thermal variables. The work complements our earlier work in Grobbelaar-Van Dalsen (Z Angew Math Phys 64:1305-1325, 2013) on the polynomial stabilization of a Mindlin-Timoshenko model in a radially symmetric domain under Dirichlet boundary conditions on the displacement and thermal variables and free boundary conditions on the shear angle variables. In particular, our aim is to investigate the effect of the Dirichlet boundary conditions on all the variables on the polynomial decay rate of the model. By once more applying a frequency domain method in which we make critical use of an inequality for the trace of Sobolev functions on the boundary of a bounded, open connected set we show that the decay is slower than in the model considered in the cited work. A comparison of our result with our polynomial decay result for a magnetoelastic Mindlin-Timoshenko model subject to Dirichlet boundary conditions on the elastic variables in Grobbelaar-Van Dalsen (Z Angew Math Phys 63:1047-1065, 2012) also indicates a correlation between the robustness of the coupling between parabolic and hyperbolic dynamics and the polynomial decay rate in the two models.

  8. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
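
    As a small illustration of the Legendre-chaos machinery the abstract refers to, the sketch below expands a hypothetical quadratic response of a uniform variable on [-1, 1] in Legendre polynomials and reads the moments off the coefficients; the response and all numbers are invented for the example, not taken from the paper.

```python
import numpy as np
from numpy.polynomial import legendre as L

power_coefs = [2.0, 1.0, 3.0]      # hypothetical response y(xi) = 2 + xi + 3*xi**2
pce = L.poly2leg(power_coefs)      # same polynomial in the P0, P1, P2 basis

# For xi ~ U(-1, 1) the Legendre polynomials satisfy E[Pk*Pl] = delta_kl/(2k+1),
# so the response moments fall out of the PCE coefficients directly.
mean = pce[0]                                                     # = 3.0
var = sum(c**2 / (2*k + 1) for k, c in enumerate(pce) if k > 0)   # = 17/15
```

    The same orthogonality argument is what lets a fuzzy variable, handled level by level as an interval, reuse the Legendre basis.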

  9. Improving multivariate Horner schemes with Monte Carlo tree search

    NASA Astrophysics Data System (ADS)

    Kuipers, J.; Plaat, A.; Vermaseren, J. A. M.; van den Herik, H. J.

    2013-11-01

    Optimizing the cost of evaluating a polynomial is a classic problem in computer science. For polynomials in one variable, Horner's method provides a scheme for producing a computationally efficient form. For multivariate polynomials it is possible to generalize Horner's method, but this leaves freedom in the order of the variables. Traditionally, greedy schemes like most-occurring variable first are used. This simple textbook algorithm has given remarkably efficient results. Finding better algorithms has proved difficult. In trying to improve upon the greedy scheme we have implemented Monte Carlo tree search, a recent search method from the field of artificial intelligence. This results in better Horner schemes and reduces the cost of evaluating polynomials, sometimes by factors up to two.
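
    The greedy "most-occurring variable first" scheme mentioned above can be written directly on a sparse exponent-tuple representation. This is a toy sketch for intuition, not the authors' implementation, and it counts multiplications to make the cost visible.

```python
def horner_eval(poly, point):
    """Evaluate a sparse polynomial {exponent-tuple: coefficient} at `point`
    with a greedy Horner scheme; returns (value, multiplication count)."""
    poly = {m: c for m, c in poly.items() if c != 0}
    if not poly:
        return 0, 0
    n = len(point)
    counts = [sum(1 for m in poly if m[i] > 0) for i in range(n)]
    v = max(range(n), key=lambda i: counts[i])   # most-occurring variable first
    if counts[v] == 0:                           # constant polynomial
        return poly[(0,) * n], 0
    # Factor poly = q * x_v + r, with r collecting the monomials free of x_v.
    r = {m: c for m, c in poly.items() if m[v] == 0}
    q = {tuple(e - (i == v) for i, e in enumerate(m)): c
         for m, c in poly.items() if m[v] > 0}
    qv, qm = horner_eval(q, point)
    rv, rm = horner_eval(r, point)
    return qv * point[v] + rv, qm + rm + 1

# x**2*y + x*y + x at (x, y) = (2, 3): value 20, three multiplications.
value, mults = horner_eval({(2, 1): 1, (1, 1): 1, (1, 0): 1}, (2, 3))
```

    Monte Carlo tree search, as in the paper, explores alternative variable orderings instead of always taking the greedy choice.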

  10. Segmented Polynomial Models in Quasi-Experimental Research.

    ERIC Educational Resources Information Center

    Wasik, John L.

    1981-01-01

    The use of segmented polynomial models is explained. Examples of design matrices of dummy variables are given for the least squares analyses of time series and discontinuity quasi-experimental research designs. Linear combinations of dummy variable vectors appear to provide tests of effects in the two quasi-experimental designs. (Author/BW)

  11. Two-dimensional orthonormal trend surfaces for prospecting

    NASA Astrophysics Data System (ADS)

    Sarma, D. D.; Selvaraj, J. B.

    Orthonormal polynomials have distinct advantages over conventional polynomials: the equations for evaluating trend coefficients are not ill-conditioned, and the method converges faster than the least-squares approximation, so the orthonormal-function approach provides a powerful alternative to the least-squares method. In this paper, orthonormal polynomials in two dimensions are obtained using the Gram-Schmidt method for a polynomial series of the type Z = 1 + x + y + x^2 + xy + y^2 + … + y^n, where x and y are the locational coordinates and Z is the value of the variable under consideration. Trend-surface analysis, which has wide applications in prospecting, has been carried out using the orthonormal polynomial approach for two sample data sets from India: gold accumulation from the Kolar Gold Field, and gravity data. For both data sets the orthonormal polynomial trend surfaces are compared with those obtained by the classical least-squares method, and in both cases the orthonormal polynomial surfaces gave an improved fit to the data. A flowchart and a FORTRAN-IV computer program for deriving orthonormal polynomials of any order and for using them to fit trend surfaces are included. The program has provision for logarithmic transformation of the Z variable; if a log transformation is performed, the predicted Z values are reconverted to the original units before the trend-surface map is generated. The 9th-degree orthonormal trend surface fitted to gold assay data from the Champion lode system of the Kolar Gold Fields could be used for further prospecting in the area.
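
    The Gram-Schmidt construction described here is easy to sketch numerically: orthonormalize the monomial columns evaluated at the sample locations, and the trend coefficients become plain projections. The sketch below uses synthetic data and is not the paper's FORTRAN-IV program.

```python
import numpy as np

def orthonormal_basis(x, y, degree):
    # Monomial columns 1, x, y, x^2, xy, y^2, ... evaluated at the data sites.
    cols = [x**(d - j) * y**j for d in range(degree + 1) for j in range(d + 1)]
    Q = []
    for v in cols:
        v = v.astype(float)
        for q in Q:
            v = v - (q @ v) * q            # Gram-Schmidt: remove earlier components
        Q.append(v / np.linalg.norm(v))
    return np.column_stack(Q)

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 1, 50), rng.uniform(0, 1, 50)
z = 3 + 2*x - y                            # hypothetical linear "assay" trend
Q = orthonormal_basis(x, y, 1)             # orthonormal columns spanning 1, x, y
coefs = Q.T @ z                            # trend coefficients by projection
trend = Q @ coefs                          # fitted trend surface at the data sites
```

    Because the basis is orthonormal with respect to the data sites, no normal-equation system has to be solved, which is the conditioning advantage the abstract points to.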

  12. Some Curious Properties and Loci Problems Associated with Cubics and Other Polynomials

    ERIC Educational Resources Information Center

    de Alwis, Amal

    2012-01-01

    The article begins with a well-known property regarding tangent lines to a cubic polynomial that has distinct, real zeros. We were then able to generalize this property to any polynomial with distinct, real zeros. We also considered a certain family of cubics with two fixed zeros and one variable zero, and explored the loci of centroids of…

  13. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the last case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) which can be solved via the SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.

  14. Thermodynamic characterization of networks using graph polynomials

    NASA Astrophysics Data System (ADS)

    Ye, Cheng; Comin, César H.; Peron, Thomas K. DM.; Silva, Filipi N.; Rodrigues, Francisco A.; Costa, Luciano da F.; Torsello, Andrea; Hancock, Edwin R.

    2015-09-01

    In this paper, we present a method for characterizing the evolution of time-varying complex networks by adopting a thermodynamic representation of network structure computed from a polynomial (or algebraic) characterization of graph structure. Commencing from a representation of graph structure based on a characteristic polynomial computed from the normalized Laplacian matrix, we show how the polynomial is linked to the Boltzmann partition function of a network. This allows us to compute a number of thermodynamic quantities for the network, including the average energy and entropy. Assuming that the system does not change volume, we can also compute the temperature, defined as the rate of change of entropy with energy. All three thermodynamic variables can be approximated using low-order Taylor series that can be computed using the traces of powers of the Laplacian matrix, avoiding explicit computation of the normalized Laplacian spectrum. These polynomial approximations allow a smoothed representation of the evolution of networks to be constructed in the thermodynamic space spanned by entropy, energy, and temperature. We show how these thermodynamic variables can be computed in terms of simple network characteristics, e.g., the total number of nodes and node degree statistics for nodes connected by edges. We apply the resulting thermodynamic characterization to real-world time-varying networks representing complex systems in the financial and biological domains. The study demonstrates that the method provides an efficient tool for detecting abrupt changes and characterizing different stages in network evolution.
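
    For a concrete picture of the quantities involved, the sketch below computes the partition function, average energy, and entropy directly from the normalized Laplacian spectrum of a small hypothetical graph; the paper's trace-based Taylor approximations exist precisely to avoid this explicit diagonalization for large networks.

```python
import numpy as np

# Hypothetical 4-node graph (edges 0-1, 0-2, 1-2, 2-3).
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
d = A.sum(axis=1)
Ln = np.eye(4) - np.diag(d**-0.5) @ A @ np.diag(d**-0.5)   # normalized Laplacian
lam = np.linalg.eigvalsh(Ln)                               # its spectrum

beta = 1.0                            # inverse temperature
w = np.exp(-beta * lam)
Z = w.sum()                           # Boltzmann partition function
p = w / Z                             # occupation probabilities of the "energy levels"
energy = (p * lam).sum()              # average energy
entropy = -(p * np.log(p)).sum()      # entropy; temperature would be dS/dU
```

    Tracking these scalars over the snapshots of a time-varying network gives the thermodynamic trajectory the paper uses to flag abrupt structural changes.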

  15. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
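
    The varying-order idea can be illustrated with a fixed-knot truncated power basis: linear below the knot, quadratic above, with continuity built into the basis itself. The sketch fits by ordinary least squares on synthetic data rather than inside a mixed model, so it only shows the design-matrix side of the method.

```python
import numpy as np

def spline_design(t, knot):
    tp = np.clip(t - knot, 0.0, None)     # truncated power term (t - knot)_+
    # Columns: intercept, linear trend, then post-knot linear and quadratic
    # terms, letting the polynomial order rise from 1 to 2 across the knot.
    return np.column_stack([np.ones_like(t), t, tp, tp**2])

t = np.linspace(0.0, 10.0, 101)
yv = np.where(t < 4, 1 + 0.5*t, 1 + 0.5*t + 0.3*(t - 4)**2)   # C1 at the knot
X = spline_design(t, 4.0)
beta, *_ = np.linalg.lstsq(X, yv, rcond=None)   # recovers [1.0, 0.5, 0.0, 0.3]
```

    In the mixed-model setting of the paper, columns of this matrix would be split between fixed and random effects and reparameterized to impose the smoothness constraints implicitly.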

  16. Evolution method and ``differential hierarchy'' of colored knot polynomials

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.; Morozov, And.

    2013-10-01

    We consider braids with repeating patterns inside arbitrary knots, which provides a multi-parametric family of knots depending on an "evolution" parameter that controls the number of repetitions. The dependence of knot (super)polynomials on such evolution parameters is very easy to find. We apply this evolution method to the study of families of knots and links that include the cases with just two parallel or anti-parallel strands in the braid, such as the ordinary twist and 2-strand torus knots/links and counter-oriented 2-strand links. Where answers were available before, they are immediately reproduced, and an essentially new example is added: the "double braid", a combination of parallel and anti-parallel 2-strand braids. This study helps us to reveal with full clarity, and partly investigate, a mysterious hierarchical structure of the colored HOMFLY polynomials, at least in (anti)symmetric representations, which extends the original observation for the figure-eight knot to many (presumably all) knots. We demonstrate that this structure is typically respected by the t-deformation to the superpolynomials.

  17. Maximal aggregation of polynomial dynamical systems

    PubMed Central

    Cardelli, Luca; Tschaikowski, Max

    2017-01-01

    Ordinary differential equations (ODEs) with polynomial derivatives are a fundamental tool for understanding the dynamics of systems across many branches of science, but our ability to gain mechanistic insight and effectively conduct numerical evaluations is critically hindered when dealing with large models. Here we propose an aggregation technique that rests on two notions of equivalence relating ODE variables whenever they have the same solution (backward criterion) or if a self-consistent system can be written for describing the evolution of sums of variables in the same equivalence class (forward criterion). A key feature of our proposal is to encode a polynomial ODE system into a finitary structure akin to a formal chemical reaction network. This enables the development of a discrete algorithm to efficiently compute the largest equivalence, building on approaches rooted in computer science to minimize basic models of computation through iterative partition refinements. The physical interpretability of the aggregation is shown on polynomial ODE systems for biochemical reaction networks, gene regulatory networks, and evolutionary game theory. PMID:28878023

  18. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data of the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤n given the values of the polynomial and some of its derivatives at exactly the same number of points as the dimension of the polynomial space is sometimes impossible, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data is available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high-order.

  19. Orbifold E-functions of dual invertible polynomials

    NASA Astrophysics Data System (ADS)

    Ebeling, Wolfgang; Gusein-Zade, Sabir M.; Takahashi, Atsushi

    2016-08-01

    An invertible polynomial is a weighted homogeneous polynomial with the number of monomials coinciding with the number of variables and such that the weights of the variables and the quasi-degree are well defined. In the framework of the search for mirror symmetric orbifold Landau-Ginzburg models, P. Berglund and M. Henningson considered a pair (f, G) consisting of an invertible polynomial f and an abelian group G of its symmetries, together with a dual pair (f~, G~). We consider the so-called orbifold E-function of such a pair (f, G), which is a generating function for the exponents of the monodromy action on an orbifold version of the mixed Hodge structure on the Milnor fibre of f. We prove that the orbifold E-functions of Berglund-Henningson dual pairs coincide up to a sign depending on the number of variables and a simple change of variables. The proof is based on a relation between monomials (say, elements of a monomial basis of the Milnor algebra of an invertible polynomial) and elements of the whole symmetry group of the dual polynomial.

  20. An algorithmic approach to solving polynomial equations associated with quantum circuits

    NASA Astrophysics Data System (ADS)

    Gerdt, V. P.; Zinin, M. V.

    2009-12-01

    In this paper we present two algorithms for reducing systems of multivariate polynomial equations over the finite field F_2 to the canonical triangular form called a lexicographical Gröbner basis. This triangular form is the most appropriate for finding solutions of the system. On the other hand, a system of polynomials over F_2 whose variables also take values in F_2 (Boolean polynomials) completely describes the unitary matrix generated by a quantum circuit. In particular, the matrix itself can be computed by counting the number of solutions (roots) of the associated polynomial system. Thereby, efficient construction of lexicographical Gröbner bases over F_2 associated with quantum circuits gives a method for computing their circuit matrices that is an alternative to the direct numerical method based on linear algebra. We compare our implementation of both algorithms with some other software packages available for computing Gröbner bases over F_2.
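
    The root-counting connection can be shown with a brute-force toy: over F_2 a small Boolean system can simply be evaluated on all 2^n assignments, which is exactly the exhaustive work a lexicographical Gröbner basis makes unnecessary for circuit-sized systems. The two-equation system below is hypothetical.

```python
from itertools import product

# Toy Boolean system over F_2 (XOR is +, AND is *):
#   x*y + z = 0
#   x + y + 1 = 0
polys = [
    lambda x, y, z: (x * y + z) % 2,
    lambda x, y, z: (x + y + 1) % 2,
]

roots = [p for p in product((0, 1), repeat=3)
         if all(f(*p) == 0 for f in polys)]
print(len(roots))   # prints 2: the roots are (0, 1, 0) and (1, 0, 0)
```

    In the quantum-circuit setting, such root counts determine the entries of the circuit's unitary matrix.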

  1. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    PubMed

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m; body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model could be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
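
    The modelling step being compared can be sketched as follows: fit first- and second-order polynomials of load against velocity and extrapolate each to a minimal-velocity threshold to predict the 1RM. All numbers below, including the threshold, are hypothetical and not taken from the study.

```python
import numpy as np

# Hypothetical submaximal loads and mean concentric velocities for one lifter.
load = np.array([40.0, 50.0, 60.0, 70.0, 80.0])   # kg
vel = np.array([1.10, 0.95, 0.78, 0.60, 0.45])    # m/s
v1rm = 0.17                                       # assumed minimal velocity at 1RM, m/s

lin = np.polyfit(vel, load, 1)       # first-order (linear) model
quad = np.polyfit(vel, load, 2)      # second-order model
rm_lin = np.polyval(lin, v1rm)       # predicted 1RM from each model
rm_quad = np.polyval(quad, v1rm)
```

    The study's reliability comparison amounts to asking which of these two fits reproduces the same individual curve across sessions with the smaller coefficient of variation.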

  2. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully make up for complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Noncommutative Differential Geometry of Generalized Weyl Algebras

    NASA Astrophysics Data System (ADS)

    Brzeziński, Tomasz

    2016-06-01

    Elements of noncommutative differential geometry of Z-graded generalized Weyl algebras A(p;q) over the ring of polynomials in two variables and their zero-degree subalgebras B(p;q), which themselves are generalized Weyl algebras over the ring of polynomials in one variable, are discussed. In particular, three classes of skew derivations of A(p;q) are constructed, and three-dimensional first-order differential calculi induced by these derivations are described. The associated integrals are computed and it is shown that the dimension of the integral space coincides with the order of the defining polynomial p(z). It is proven that the restriction of these first-order differential calculi to the calculi on B(p;q) is isomorphic to the direct sum of degree 2 and degree -2 components of A(p;q). A Dirac operator for B(p;q) is constructed from a (strong) connection with respect to this differential calculus on the (free) spinor bimodule defined as the direct sum of degree 1 and degree -1 components of A(p;q). The real structure of KO-dimension two for this Dirac operator is also described.

  4. Classifying quantum entanglement through topological links

    NASA Astrophysics Data System (ADS)

    Quinta, Gonçalo M.; André, Rui

    2018-04-01

    We propose an alternative classification scheme for quantum entanglement based on topological links. This is done by identifying a nonrigid ring to a particle, attributing the act of cutting and removing a ring to the operation of tracing out the particle, and associating linked rings to entangled particles. This analogy naturally leads us to a classification of multipartite quantum entanglement based on all possible distinct links for a given number of rings. To determine all different possibilities, we develop a formalism that associates any link to a polynomial, with each polynomial thereby defining a distinct equivalence class. To demonstrate the use of this classification scheme, we choose qubit quantum states as our example of physical system. A possible procedure to obtain qubit states from the polynomials is also introduced, providing an example state for each link class. We apply the formalism for the quantum systems of three and four qubits and demonstrate the potential of these tools in a context of qubit networks.

  5. Heat transfer of phase-change materials in two-dimensional cylindrical coordinates

    NASA Technical Reports Server (NTRS)

    Labdon, M. B.; Guceri, S. I.

    1981-01-01

    A two-dimensional phase-change problem is numerically solved in cylindrical coordinates (r and z) by utilizing two Taylor series expansions for the temperature distributions in the neighborhood of the interface location. These two expansions form two polynomials, in the r and z directions. For regions sufficiently far from the interface, the temperature field equations are solved numerically in the usual way and the results are coupled with the polynomials. The main advantages of this efficient approach include the ability to accept arbitrarily time-dependent boundary conditions of all types and arbitrarily specified initial temperature distributions. A modified approach using a single Taylor series expansion in two variables is also suggested.

  6. Significantly Reduced Blood Pressure Measurement Variability for Both Normotensive and Hypertensive Subjects: Effect of Polynomial Curve Fitting of Oscillometric Pulses

    PubMed Central

    Zhu, Mingping; Chen, Aiqing

    2017-01-01

    This study aimed to compare within-subject blood pressure (BP) variability across different measurement techniques. Cuff pressures from three repeated BP measurements were obtained from 30 normotensive and 30 hypertensive subjects. Automatic BPs were determined from the pulses with normalised peak amplitude larger than a threshold (0.5 for SBP, 0.7 for DBP, and 1.0 for MAP). They were also determined from the cuff pressures associated with the above thresholds on a fitted polynomial curve of the oscillometric pulse peaks. Finally, the standard deviation (SD) of the three repeats and its coefficient of variability (CV) were compared between the two automatic techniques. For the normotensive group, polynomial curve fitting significantly reduced the SD of repeats from 3.6 to 2.5 mmHg for SBP and from 3.7 to 2.1 mmHg for MAP, and reduced the CV from 3.0% to 2.2% for SBP and from 4.3% to 2.4% for MAP (all P < 0.01). For the hypertensive group, the SD of repeats decreased from 6.5 to 5.5 mmHg for SBP and from 6.7 to 4.2 mmHg for MAP, and the CV decreased from 4.2% to 3.6% for SBP and from 5.8% to 3.8% for MAP (all P < 0.05). In conclusion, polynomial curve fitting of oscillometric pulses had the ability to reduce automatic BP measurement variability. PMID:28785580
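
    The envelope-fitting step can be sketched as follows: fit a polynomial to the normalised pulse-peak amplitudes against cuff pressure, take MAP at the curve maximum, and read SBP and DBP where the curve crosses the 0.5 and 0.7 amplitude ratios. The thresholds come from the abstract; the Gaussian envelope and every other number below are synthetic stand-ins for real cuff data.

```python
import numpy as np

cuff = np.linspace(160, 40, 60)                 # deflating cuff pressure, mmHg
env = np.exp(-((cuff - 95) / 30.0)**2)          # synthetic pulse-peak envelope
env = env / env.max()                           # normalised peak amplitudes

s = (cuff - 100) / 60                           # scale pressures for a stable fit
c = np.polyfit(s, env, 6)                       # polynomial fit of the envelope
grid = np.linspace(40, 160, 2001)
fit = np.polyval(c, (grid - 100) / 60)

map_p = grid[np.argmax(fit)]                    # MAP at the envelope maximum
above, below = grid[grid > map_p], grid[grid < map_p]
sbp = above[np.polyval(c, (above - 100) / 60) >= 0.5].max()  # ratio 0.5, high side
dbp = below[np.polyval(c, (below - 100) / 60) >= 0.7].min()  # ratio 0.7, low side
```

    Reading the thresholds off a smooth fitted curve, rather than off individual noisy pulses, is what reduces the between-repeat variability reported in the study.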

  7. Polynomial approximations of thermodynamic properties of arbitrary gas mixtures over wide pressure and density ranges

    NASA Technical Reports Server (NTRS)

    Allison, D. O.

    1972-01-01

    Computer programs for flow fields around planetary entry vehicles require real-gas equilibrium thermodynamic properties in a simple form which can be evaluated quickly. To fill this need, polynomial approximations were found for thermodynamic properties of air and model planetary atmospheres. A coefficient-averaging technique was used for curve fitting in lieu of the usual least-squares method. The polynomials consist of terms up to the ninth degree in each of two variables (essentially pressure and density) including all cross terms. Four of these polynomials can be joined to cover, for example, a range of about 1000 to 11000 K and 0.00001 to 1 atmosphere (1 atm = 1.0133 × 10^5 N/m^2) for a given thermodynamic property. Relative errors of less than 1 percent are found over most of the applicable range.

  8. Colored knot polynomials for arbitrary pretzel knots and links

    DOE PAGES

    Galakhov, D.; Melnikov, D.; Mironov, A.; ...

    2015-04-01

    A very simple expression is conjectured for arbitrary colored Jones and HOMFLY polynomials of a rich (g+1)-parametric family of pretzel knots and links. The answer for the Jones and HOMFLY polynomials is fully and explicitly expressed through the Racah matrix of U_q(SU_N), and looks related to a modular transformation of the toric conformal block. Knot polynomials are among the hottest topics in modern theory. They are supposed to summarize nicely the representation theory of quantum algebras and the modular properties of conformal blocks. The result reported in the present letter provides a spectacular illustration of, and support for, this general expectation.

  9. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

    The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another on a problem with two design variables and a known theoretical response function. Next, the methods are tested on a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher order derivative continuity produce less error in the response prediction. The MLS method was found to be superior for response surface construction among the methods evaluated.

  10. Efficient conservative ADER schemes based on WENO reconstruction and space-time predictor in primitive variables

    NASA Astrophysics Data System (ADS)

    Zanotti, Olindo; Dumbser, Michael

    2016-01-01

    We present a new version of conservative ADER-WENO finite volume schemes, in which both the high order spatial reconstruction as well as the time evolution of the reconstruction polynomials in the local space-time predictor stage are performed in primitive variables, rather than in conserved ones. To obtain a conservative method, the underlying finite volume scheme is still written in terms of the cell averages of the conserved quantities. Therefore, our new approach performs the spatial WENO reconstruction twice: the first WENO reconstruction is carried out on the known cell averages of the conservative variables. The WENO polynomials are then used at the cell centers to compute point values of the conserved variables, which are subsequently converted into point values of the primitive variables. This is the only place where the conversion from conservative to primitive variables is needed in the new scheme. Then, a second WENO reconstruction is performed on the point values of the primitive variables to obtain piecewise high order reconstruction polynomials of the primitive variables. The reconstruction polynomials are subsequently evolved in time with a novel space-time finite element predictor that is directly applied to the governing PDE written in primitive form. The resulting space-time polynomials of the primitive variables can then be directly used as input for the numerical fluxes at the cell boundaries in the underlying conservative finite volume scheme. Hence, the number of necessary conversions from the conserved to the primitive variables is reduced to just one single conversion at each cell center. We have verified the validity of the new approach over a wide range of hyperbolic systems, including the classical Euler equations of gas dynamics, the special relativistic hydrodynamics (RHD) and ideal magnetohydrodynamics (RMHD) equations, as well as the Baer-Nunziato model for compressible two-phase flows. 
In all cases we have noticed that the new ADER schemes provide less oscillatory solutions when compared to ADER finite volume schemes based on the reconstruction in conserved variables, especially for the RMHD and the Baer-Nunziato equations. For the RHD and RMHD equations, the overall accuracy is improved and the CPU time is reduced by about 25%. Because of its increased accuracy and reduced computational cost, we recommend using this version of ADER as the standard one in the relativistic framework. At the end of the paper, the new approach has also been extended to ADER-DG schemes on space-time adaptive grids (AMR).
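    For the classical Euler equations mentioned above, the pointwise conversion between conserved and primitive variables that the scheme performs once per cell center can be sketched as follows (one space dimension; the ideal-gas closure and the sample state are assumptions for illustration).

```python
import numpy as np

GAMMA = 1.4  # ideal-gas ratio of specific heats (an assumed closure)

def cons_to_prim(U):
    """(rho, rho*u, E) -> (rho, u, p): the pointwise conversion performed
    at each cell center in the scheme described above (1D Euler)."""
    rho, mom, E = U
    u = mom / rho
    p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
    return np.array([rho, u, p])

def prim_to_cons(V):
    """Inverse map (rho, u, p) -> (rho, rho*u, E)."""
    rho, u, p = V
    E = p / (GAMMA - 1.0) + 0.5 * rho * u**2
    return np.array([rho, rho * u, E])

V = np.array([1.0, 0.5, 2.0])   # sample primitive state: rho, u, p
U = prim_to_cons(V)
V_back = cons_to_prim(U)
```

    Performing this conversion only at cell centers, instead of at every flux evaluation, is the cost saving the abstract describes.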

  11. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ,' in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L^2 (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10^0)-O(10^1) random variables to O(10^2) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). 
Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
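    A minimal sketch of nonintrusive PCE in a single standard-normal variable, projecting a toy quantity of interest onto probabilists' Hermite polynomials via Gauss-Hermite quadrature. The model function is hypothetical, not a VIPRE quantity; the fast convergence for smooth functions noted above is what makes a low order suffice here.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

# Hypothetical smooth QOI in one standard-normal variable xi.
f = lambda xi: np.exp(0.3 * xi)

order = 8
nodes, weights = hermegauss(order + 1)     # probabilists' Gauss-Hermite rule
weights = weights / np.sqrt(2 * np.pi)     # normalise to the N(0,1) measure

# Project f onto the basis: c_k = E[f * He_k] / E[He_k^2], with E[He_k^2] = k!
coeffs = []
for k in range(order + 1):
    basis = np.zeros(order + 1)
    basis[k] = 1.0
    ck = np.sum(weights * f(nodes) * hermeval(nodes, basis)) / math.factorial(k)
    coeffs.append(ck)

# Integrated statistics follow directly from the expansion coefficients.
mean = coeffs[0]
variance = sum(c**2 * math.factorial(k)
               for k, c in enumerate(coeffs[1:], start=1))
```

    For f(xi) = exp(a*xi) the exact mean is exp(a^2/2) and the exact variance is exp(a^2)*(exp(a^2)-1), which the order-8 expansion reproduces to many digits.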

  12. Explaining variation in tropical plant community composition: influence of environmental and spatial data quality.

    PubMed

    Jones, Mirkka M; Tuomisto, Hanna; Borcard, Daniel; Legendre, Pierre; Clark, David B; Olivas, Paulo C

    2008-03-01

    The degree to which variation in plant community composition (beta-diversity) is predictable from environmental variation, relative to other spatial processes, is of considerable current interest. We addressed this question in Costa Rican rain forest pteridophytes (1,045 plots, 127 species). We also tested the effect of data quality on the results, which has largely been overlooked in earlier studies. To do so, we compared two alternative spatial models [polynomial vs. principal coordinates of neighbour matrices (PCNM)] and ten alternative environmental models (the full set of available environmental variables vs. four subsets, each with and without their polynomial terms). Of the environmental data types, soil chemistry contributed most to explaining pteridophyte community variation, followed in decreasing order of contribution by topography, soil type and forest structure. Environmentally explained variation increased moderately when polynomials of the environmental variables were included. Spatially explained variation increased substantially when the multi-scale PCNM spatial model was used instead of the traditional, broad-scale polynomial spatial model. The best model combination (PCNM spatial model and full environmental model including polynomials) explained 32% of pteridophyte community variation, after correcting for the number of sampling sites and explanatory variables. Overall evidence for environmental control of beta-diversity was strong, and the main floristic gradients detected were correlated with environmental variation at all scales encompassed by the study (c. 100-2,000 m). Depending on model choice, however, total explained variation differed more than fourfold, and the apparent relative importance of space and environment could be reversed. Therefore, we advocate a broader recognition of the impacts that data quality has on analysis results. 
A general understanding of the relative contributions of spatial and environmental processes to species distributions and beta-diversity requires that methodological artefacts are separated from real ecological differences.

  13. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
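    The first of the two approximation models, a quadratic polynomial fitted by least squares, can be sketched for the two-variable case. The data and coefficients below are synthetic stand-ins for the paper's test problems.

```python
import numpy as np

# Quadratic response surface in two design variables, fitted by least squares.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))                     # sampled design points
true = [1.0, 2.0, -3.0, 0.5, 1.0, 0.2]                   # assumed coefficients
y = (true[0] + true[1] * X[:, 0] + true[2] * X[:, 1]
     + true[3] * X[:, 0]**2 + true[4] * X[:, 0] * X[:, 1]
     + true[5] * X[:, 1]**2)

# Design matrix with the six quadratic basis terms: 1, x1, x2, x1^2, x1*x2, x2^2.
A = np.column_stack([np.ones(len(X)), X[:, 0], X[:, 1],
                     X[:, 0]**2, X[:, 0] * X[:, 1], X[:, 1]**2])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
```

    With noise-free quadratic data the least-squares fit recovers the coefficients exactly; with multiple local extrema in the response, no choice of the six coefficients can follow the data, which is the limitation the abstract contrasts against kriging.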

  14. Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.

    PubMed

    Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng

    2011-10-01

    This paper proposes a novel H(∞) filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H(∞) filter, which is homogenous polynomially parameter dependent on membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H(∞) performance of the filtering error system. Second, relaxed conditions for H(∞) performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H(∞) filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.

  15. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The presented study is to compare these four orthogonal polynomials by theoretical analysis and numerical experiments from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness even in the case of a wavefront with incomplete data.
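    The product construction of the 2D Chebyshev basis described above can be sketched as follows, with a numerical orthogonality check. Note the check is under the Chebyshev weight (via Gauss-Chebyshev nodes); the grid size and basis indices are arbitrary choices for illustration.

```python
import numpy as np
from numpy.polynomial.chebyshev import chebval

# 2D Chebyshev basis element T_i(x) * T_j(y): the product construction
# described in the abstract.
def cheb2d(i, j, x, y):
    ci = np.zeros(i + 1); ci[i] = 1.0
    cj = np.zeros(j + 1); cj[j] = 1.0
    return chebval(x, ci) * chebval(y, cj)

# Gauss-Chebyshev quadrature: integral of f(x)/sqrt(1-x^2) ~ (pi/n) * sum f(x_k),
# exact for polynomials up to degree 2n-1.
n = 64
nodes = np.cos((2 * np.arange(1, n + 1) - 1) * np.pi / (2 * n))
w = np.pi / n
X, Y = np.meshgrid(nodes, nodes)

inner = np.sum(cheb2d(2, 1, X, Y) * cheb2d(3, 1, X, Y)) * w * w   # distinct: ~0
norm11 = np.sum(cheb2d(1, 1, X, Y) ** 2) * w * w                  # (pi/2)^2
```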

  16. Entanglement entropy and the colored Jones polynomial

    NASA Astrophysics Data System (ADS)

    Balasubramanian, Vijay; DeCross, Matthew; Fliss, Jackson; Kar, Arjun; Leigh, Robert G.; Parrikar, Onkar

    2018-05-01

    We study the multi-party entanglement structure of states in Chern-Simons theory created by performing the path integral on 3-manifolds with linked torus boundaries, called link complements. For gauge group SU(2), the wavefunctions of these states (in a particular basis) are the colored Jones polynomials of the corresponding links. We first review the case of U(1) Chern-Simons theory where these are stabilizer states, a fact we use to re-derive an explicit formula for the entanglement entropy across a general link bipartition. We then present the following results for SU(2) Chern-Simons theory: (i) The entanglement entropy for a bipartition of a link gives a lower bound on the genus of surfaces in the ambient S^3 separating the two sublinks. (ii) All torus links (namely, links which can be drawn on the surface of a torus) have a GHZ-like entanglement structure — i.e., partial traces leave a separable state. By contrast, through explicit computation, we test in many examples that hyperbolic links (namely, links whose complements admit hyperbolic structures) have W-like entanglement — i.e., partial traces leave a non-separable state. (iii) Finally, we consider hyperbolic links in the complexified SL(2,C) Chern-Simons theory, which is closely related to 3d Einstein gravity with a negative cosmological constant. In the limit of small Newton constant, we discuss how the entanglement structure is controlled by the Neumann-Zagier potential on the moduli space of hyperbolic structures on the link complement.

  17. Multivariable Hermite polynomials and phase-space dynamics

    NASA Technical Reports Server (NTRS)

    Dattoli, G.; Torre, Amalia; Lorenzutta, S.; Maino, G.; Chiccoli, C.

    1994-01-01

    The phase-space approach to classical and quantum systems demands advanced analytical tools. Such an approach characterizes the evolution of a physical system through a set of variables, reducing to the canonically conjugate variables in the classical limit. It often happens that phase-space distributions can be written in terms of quadratic forms involving the above quoted variables. A significant analytical tool to treat these problems may come from the generalized many-variable Hermite polynomials, defined on quadratic forms in R^n. They form an orthonormal system in many dimensions and seem the natural tool to treat the harmonic oscillator dynamics in phase-space. In this contribution we discuss the properties of these polynomials and present some applications to physical problems.

  18. Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Saxena, Nitin

    Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to a serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n)·d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities. We design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x_1, x_2, ..., x_n) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.

  19. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy for ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation is provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
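    The polynomial-fitting idea can be sketched on synthetic thermodynamic integration data with a known analytic integral. The non-equidistant λ grid and the integrand below are illustrative, not values from the study.

```python
import numpy as np

# Synthetic TI data: <dU/dlambda> averages at non-equidistant lambda values.
lam = np.array([0.0, 0.1, 0.3, 0.6, 0.85, 1.0])
dudl = 3.0 * lam**2 - 2.0 * lam + 1.0          # stand-in TI averages

# Polynomial regression of the TI data, then analytic integration of the fit:
# the free energy difference is the integral of dU/dlambda over [0, 1].
coeffs = np.polyfit(lam, dudl, 3)
F_anti = np.polyint(coeffs)                    # antiderivative coefficients
dF = np.polyval(F_anti, 1.0) - np.polyval(F_anti, 0.0)
# Exact integral of 3*l**2 - 2*l + 1 over [0, 1] is 1.
```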

  20. Humeral development from neonatal period to skeletal maturity--application in age and sex assessment.

    PubMed

    Rissech, Carme; López-Costas, Olalla; Turbón, Daniel

    2013-01-01

    The goal of the present study is to examine cross-sectional information on the growth of the humerus based on the analysis of four measurements, namely, diaphyseal length, transversal diameter of the proximal (metaphyseal) end of the shaft, epicondylar breadth and vertical diameter of the head. This analysis was performed in 181 individuals (90 ♂ and 91 ♀) ranging from birth to 25 years of age and belonging to three documented Western European skeletal collections (Coimbra, Lisbon and St. Bride). After testing the homogeneity of the sample, the existence of sexual differences (Student's t- and Mann-Whitney U-test) and the growth of the variables (polynomial regression) were evaluated. The results showed the presence of sexual differences in epicondylar breadth above 20 years of age and vertical diameter of the head from 15 years of age, thus indicating that these two variables may be of use in determining sex from that age onward. The growth pattern of the variables showed a continuous increase and followed first- and second-degree polynomials. However, growth of the transversal diameter of the proximal end of the shaft followed a fourth-degree polynomial. Strong correlation coefficients were identified between humeral size and age for each of the four metric variables. These results indicate that any of the humeral measurements studied herein is likely to serve as a useful means of estimating sub-adult age in forensic samples.

  1. Testing Informant Discrepancies as Predictors of Early Adolescent Psychopathology: Why Difference Scores Cannot Tell You What You Want to Know and How Polynomial Regression May

    ERIC Educational Resources Information Center

    Laird, Robert D.; De Los Reyes, Andres

    2013-01-01

    Multiple informants commonly disagree when reporting child and family behavior. In many studies of informant discrepancies, researchers take the difference between two informants' reports and seek to examine the link between this difference score and external constructs (e.g., child maladjustment). In this paper, we review two reasons why…

  2. Modelling the breeding of Aedes Albopictus species in an urban area in Pulau Pinang using polynomial regression

    NASA Astrophysics Data System (ADS)

    Salleh, Nur Hanim Mohd; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Saad, Ahmad Ramli; Sulaiman, Husna Mahirah; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Polynomial regression is used to model a curvilinear relationship between a response variable and one or more predictor variables. It is a form of a least squares linear regression model that predicts a single response variable by decomposing the predictor variables into an nth order polynomial. In a curvilinear relationship, each curve has at most one fewer extreme point than the order of the polynomial. A quadratic model will have either a single maximum or minimum, whereas a cubic model has both a relative maximum and a minimum. This study used quadratic modeling techniques to analyze the effects of environmental factors: temperature, relative humidity, and rainfall distribution on the breeding of Aedes albopictus, a type of Aedes mosquito. Data were collected in an urban area in south-west Penang from September 2010 until January 2011. The results indicated that the breeding of Aedes albopictus in the urban area is influenced by all three environmental characteristics. The number of mosquito eggs is estimated to reach a maximum value at a medium temperature, a medium relative humidity and a high rainfall distribution.
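    A one-variable slice of such a quadratic model, locating the temperature at which the fitted egg count peaks, can be sketched as follows. The numbers are invented for illustration; the study's observations are not reproduced here.

```python
import numpy as np

# Invented egg-count data that peak at a medium temperature.
temp = np.array([24, 26, 27, 28, 29, 30, 31, 33], dtype=float)
eggs = np.array([40, 58, 66, 70, 69, 63, 52, 25], dtype=float)

# Quadratic fit: eggs ~ b0 + b1*T + b2*T**2.
b2, b1, b0 = np.polyfit(temp, eggs, 2)
t_opt = -b1 / (2.0 * b2)   # vertex of the parabola: the estimated optimum
```

    A negative b2 makes the parabola concave, so its single extreme point is a maximum, matching the abstract's "maximum value at a medium temperature".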

  3. Soliton interactions and Bäcklund transformation for a (2+1)-dimensional variable-coefficient modified Kadomtsev-Petviashvili equation in fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xiao, Zi-Jian; Tian, Bo; Sun, Yan

    2018-01-01

    In this paper, we investigate a (2+1)-dimensional variable-coefficient modified Kadomtsev-Petviashvili (mKP) equation in fluid dynamics. With the binary Bell-polynomial and an auxiliary function, bilinear forms for the equation are constructed. Based on the bilinear forms, multi-soliton solutions and Bell-polynomial-type Bäcklund transformation for such an equation are obtained through the symbolic computation. Soliton interactions are presented. Based on the graphic analysis, parametric conditions for the existence of the shock waves, elevation solitons and depression solitons are given, and it is shown that, with the wave vectors kept invariant, a change of α(t) and β(t) can change the solitonic velocities, while the shape of each soliton remains unchanged, where α(t) and β(t) are the variable coefficients in the equation. Oblique elastic interactions can exist between the (i) two shock waves, (ii) two elevation solitons, and (iii) elevation and depression solitons. However, oblique interactions between (i) shock waves and elevation solitons, (ii) shock waves and depression solitons are inelastic.

  4. Polynomial expansions of single-mode motions around equilibrium points in the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Lei, Hanlun; Xu, Bo; Circi, Christian

    2018-05-01

    In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, which states that a suitable pair of independent variables are taken as modal coordinates and the remaining state variables are expressed as polynomial series of them. Based on the invariant manifold approach, the general procedure for constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to the tenth order for the single-mode motions around collinear libration points, and up to order eight and six for the planar and vertical-periodic motions around the triangular libration point, respectively. The polynomial expansions constructed can be used to determine the initial states for the single-mode motions around equilibrium points. To check their validity, the accuracy of initial states determined by the polynomial expansions is evaluated.

  5. A quadratic regression modelling on paddy production in the area of Perlis

    NASA Astrophysics Data System (ADS)

    Goh, Aizat Hanis Annas; Ali, Zalila; Nor, Norlida Mohd; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2017-08-01

    Polynomial regression models are useful in situations in which the relationship between a response variable and predictor variables is curvilinear. Polynomial regression fits the nonlinear relationship into a least squares linear regression model by decomposing the predictor variables into a kth order polynomial. The polynomial order determines the number of inflexions on the curvilinear fitted line. A second order polynomial forms a quadratic expression (parabolic curve) with either a single maximum or minimum; a third order polynomial forms a cubic expression with both a relative maximum and a minimum. This study used paddy data in the area of Perlis to model paddy production based on paddy cultivation characteristics and environmental characteristics. The results indicated that a quadratic regression model best fits the data and paddy production is affected by urea fertilizer application and the interaction between the amount of average rainfall and the percentage of area affected by pest and disease. Urea fertilizer application has a quadratic effect in the model, which indicated that as the number of days of urea fertilizer application increased, paddy production is expected to decrease until it reaches a minimum value, and then to increase at a higher number of days of urea application. The decrease in paddy production with an increase in rainfall is greater, the higher the percentage of area affected by pest and disease.

  6. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation

    PubMed Central

    Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t′, n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely. PMID:27792784
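    The underlying polynomial-interpolation construction that these threshold-changeable schemes build on is Shamir's (t, n) secret sharing, which can be sketched as follows. The prime, threshold, participant count and secret are illustrative choices only.

```python
import random

PRIME = 2**127 - 1          # a Mersenne prime defining the finite field
t, n = 3, 5                 # threshold and number of participants
secret = 123456789

# Random degree-(t-1) polynomial with the secret as constant term.
rng = random.Random(42)
coeffs = [secret] + [rng.randrange(PRIME) for _ in range(t - 1)]

def f(x):
    return sum(c * pow(x, k, PRIME) for k, c in enumerate(coeffs)) % PRIME

shares = [(x, f(x)) for x in range(1, n + 1)]   # one share per participant

def reconstruct(points):
    """Lagrange interpolation at x = 0 over GF(PRIME): any t shares
    recover f(0) = secret; fewer than t reveal nothing."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return total
```

    Changing the threshold amounts to redistributing shares of a new polynomial of different degree, which is what the schemes above arrange without a dealer.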

  7. The algebra of two dimensional generalized Chebyshev-Koornwinder oscillator

    NASA Astrophysics Data System (ADS)

    Borzov, V. V.; Damaskinsky, E. V.

    2014-10-01

    In the previous works of Borzov and Damaskinsky ["Chebyshev-Koornwinder oscillator," Theor. Math. Phys. 175(3), 765-772 (2013)] and ["Ladder operators for Chebyshev-Koornwinder oscillator," in Proceedings of the Days on Diffraction, 2013], the authors have defined the oscillator-like system that is associated with the two variable Chebyshev-Koornwinder polynomials. We call this system the generalized Chebyshev-Koornwinder oscillator. In this paper, we study the properties of infinite-dimensional Lie algebra that is analogous to the Heisenberg algebra for the Chebyshev-Koornwinder oscillator. We construct the exact irreducible representation of this algebra in a Hilbert space H of functions that are defined on a region which is bounded by the Steiner hypocycloid. The functions are square-integrable with respect to the orthogonality measure for the Chebyshev-Koornwinder polynomials and these polynomials form an orthonormalized basis in the space H. The generalized oscillator which is studied in the work can be considered as the simplest nontrivial example of multiboson quantum system that is composed of three interacting oscillators.

  8. Novel Threshold Changeable Secret Sharing Schemes Based on Polynomial Interpolation.

    PubMed

    Yuan, Lifeng; Li, Mingchu; Guo, Cheng; Choo, Kim-Kwang Raymond; Ren, Yizhi

    2016-01-01

    After any distribution of secret sharing shadows in a threshold changeable secret sharing scheme, the threshold may need to be adjusted to deal with changes in the security policy and adversary structure. For example, when employees leave the organization, it is not realistic to expect departing employees to ensure the security of their secret shadows. Therefore, in 2012, Zhang et al. proposed (t → t', n) and ({t1, t2,⋯, tN}, n) threshold changeable secret sharing schemes. However, their schemes suffer from a number of limitations such as strict limit on the threshold values, large storage space requirement for secret shadows, and significant computation for constructing and recovering polynomials. To address these limitations, we propose two improved dealer-free threshold changeable secret sharing schemes. In our schemes, we construct polynomials to update secret shadows, and use two-variable one-way function to resist collusion attacks and secure the information stored by the combiner. We then demonstrate our schemes can adjust the threshold safely.

  9. Semiparametric methods for estimation of a nonlinear exposure-outcome relationship using instrumental variables with application to Mendelian randomization.

    PubMed

    Staley, James R; Burgess, Stephen

    2017-05-01

    Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure-outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure-outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure-outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic blood pressure and diastolic blood pressure. © 2017 The Authors Genetic Epidemiology Published by Wiley Periodicals, Inc.

  10. Semiparametric methods for estimation of a nonlinear exposure‐outcome relationship using instrumental variables with application to Mendelian randomization

    PubMed Central

    Staley, James R.

    2017-01-01

    ABSTRACT Mendelian randomization, the use of genetic variants as instrumental variables (IV), can test for and estimate the causal effect of an exposure on an outcome. Most IV methods assume that the function relating the exposure to the expected value of the outcome (the exposure‐outcome relationship) is linear. However, in practice, this assumption may not hold. Indeed, often the primary question of interest is to assess the shape of this relationship. We present two novel IV methods for investigating the shape of the exposure‐outcome relationship: a fractional polynomial method and a piecewise linear method. We divide the population into strata using the exposure distribution, and estimate a causal effect, referred to as a localized average causal effect (LACE), in each stratum of the population. The fractional polynomial method performs metaregression on these LACE estimates. The piecewise linear method estimates a continuous piecewise linear function, the gradient of which is the LACE estimate in each stratum. Both methods were demonstrated in a simulation study to estimate the true exposure‐outcome relationship well, particularly when the relationship was a fractional polynomial (for the fractional polynomial method) or was piecewise linear (for the piecewise linear method). The methods were used to investigate the shape of the relationship of body mass index with systolic blood pressure and diastolic blood pressure. PMID:28317167

  11. Optimization of Turbine Blade Design for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1998-01-01

    To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations of flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained, and the performances of two design criteria for radial-basis networks are compared: one based on an accuracy requirement, the other on a limit on the network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate for large data sets (up to 765 simulations have been used) than the polynomial-based response surface method. For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of the two network types, a radial-basis network and a back-propagation network, depends on the number of input data, the radial-basis network requires fewer iterations than the back-propagation network.

  12. Control design and robustness analysis of a ball and plate system by using polynomial chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colón, Diego; Balthazar, José M.; Reis, Célia A. dos

    2014-12-10

    In this paper, we present a mathematical model of a ball and plate system together with a control law, and analyze its robustness properties by using the polynomial chaos method. The ball rolls without slipping. An auxiliary robot vision system determines the bodies' positions and velocities and is used for control purposes. The actuators are two orthogonal DC motors that change the plate's angles with respect to the ground. The model is an extension of the ball and beam system and is highly nonlinear. The system is decoupled into two independent equations for the coordinates x and y. Finally, the resulting nonlinear closed-loop systems are analyzed by the polynomial chaos methodology, which treats some system parameters as random variables and generates statistical data that can be used in the robustness analysis.

  13. Control design and robustness analysis of a ball and plate system by using polynomial chaos

    NASA Astrophysics Data System (ADS)

    Colón, Diego; Balthazar, José M.; dos Reis, Célia A.; Bueno, Átila M.; Diniz, Ivando S.; de S. R. F. Rosa, Suelia

    2014-12-01

    In this paper, we present a mathematical model of a ball and plate system together with a control law, and analyze its robustness properties by using the polynomial chaos method. The ball rolls without slipping. An auxiliary robot vision system determines the bodies' positions and velocities and is used for control purposes. The actuators are two orthogonal DC motors that change the plate's angles with respect to the ground. The model is an extension of the ball and beam system and is highly nonlinear. The system is decoupled into two independent equations for the coordinates x and y. Finally, the resulting nonlinear closed-loop systems are analyzed by the polynomial chaos methodology, which treats some system parameters as random variables and generates statistical data that can be used in the robustness analysis.
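
    Non-intrusive polynomial chaos for a single Gaussian parameter can be sketched as below; the toy response g stands in for the closed-loop system output, and the truncation order and quadrature size are arbitrary choices:

```python
# Sketch: non-intrusive polynomial chaos for one standard-normal
# parameter xi; the response g is a stand-in for the closed-loop model.
import math
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

g = lambda xi: np.sin(1.0 + 0.3 * xi)     # toy response vs. xi ~ N(0, 1)

nodes, weights = hermegauss(20)           # weight function exp(-xi^2/2)
weights = weights / np.sqrt(2 * np.pi)    # normalize to a probability

# Project g onto probabilists' Hermite polynomials He_k (E[He_k^2] = k!)
coeffs = [np.sum(weights * g(nodes) * hermeval(nodes, [0.0] * k + [1.0]))
          / math.factorial(k) for k in range(6)]

mean = coeffs[0]
var = sum(c**2 * math.factorial(k) for k, c in enumerate(coeffs[1:], 1))

# Cross-check the chaos statistics against plain Monte Carlo
mc = g(np.random.default_rng(2).standard_normal(200_000))
assert abs(mean - mc.mean()) < 1e-2
assert abs(var - mc.var()) < 1e-2
```

    The same projection applied to each output of an ODE or closed-loop simulation yields the first- and second-order statistics used in such robustness analyses.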

  14. Equivalent Colorings with "Maple"

    ERIC Educational Resources Information Center

    Cecil, David R.; Wang, Rongdong

    2005-01-01

    Many counting problems can be modeled as "colorings" and solved by considering symmetries and Polya's cycle index polynomial. This paper presents a "Maple 7" program link http://users.tamuk.edu/kfdrc00/ that, given Polya's cycle index polynomial, determines all possible associated colorings and their partitioning into equivalence classes. These…
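
    For the cyclic group, Polya/Burnside counting reduces to a one-line evaluation of the cycle index; a minimal sketch (counting classes only, unlike the cited Maple program, which also lists them):

```python
# Sketch: Burnside/Polya counting of necklace colorings under the
# cyclic group C_n (counts equivalence classes; does not list them).
from math import gcd

def necklaces(n, colors):
    """Cycle index of C_n evaluated at `colors`:
    (1/n) * sum over rotations r of colors**gcd(n, r)."""
    return sum(colors ** gcd(n, r) for r in range(n)) // n

assert necklaces(4, 2) == 6    # the 6 binary necklaces of length 4
assert necklaces(3, 3) == 11
```
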

  15. A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis

    DTIC Science & Technology

    2012-01-01

    probability distribution for the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre for uniformly distributed parameters) ... parameters and windfields will drive our simulations. We will use uncertainty quantification methodology, polynomial chaos quadrature in combination with data integration, to complete the DDDAS loop.

  16. Learning Activity Package, Algebra.

    ERIC Educational Resources Information Center

    Evans, Diane

    A set of ten teacher-prepared Learning Activity Packages (LAPs) in beginning algebra and nine in intermediate algebra, these units cover sets, properties of operations, number systems, open expressions, solution sets of equations and inequalities in one and two variables, exponents, factoring and polynomials, relations and functions, radicals,…

  17. Control Synthesis of Discrete-Time T-S Fuzzy Systems via a Multi-Instant Homogenous Polynomial Approach.

    PubMed

    Xie, Xiangpeng; Yue, Dong; Zhang, Huaguang; Xue, Yusheng

    2016-03-01

    This paper deals with the problem of control synthesis of discrete-time Takagi-Sugeno fuzzy systems by employing a novel multiinstant homogenous polynomial approach. A new multiinstant fuzzy control scheme and a new class of fuzzy Lyapunov functions, which are homogenous polynomially parameter-dependent on both the current-time normalized fuzzy weighting functions and the past-time normalized fuzzy weighting functions, are proposed to achieve relaxed control synthesis. Then, relaxed stabilization conditions are derived with less conservatism than existing ones. Furthermore, the relaxation quality of the obtained stabilization conditions is further improved by developing an efficient slack variable approach, which presents a multipolynomial dependence on the normalized fuzzy weighting functions at the current and past instants of time. Two simulation examples are given to demonstrate the effectiveness and benefits of the results developed in this paper.

  18. Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs. NBER Working Paper No. 20405

    ERIC Educational Resources Information Center

    Gelman, Andrew; Imbens, Guido

    2014-01-01

    It is common in regression discontinuity analysis to control for high order (third, fourth, or higher) polynomials of the forcing variable. We argue that estimators for causal effects based on such methods can be misleading, and we recommend that researchers not use them, and instead use estimators based on local linear or quadratic polynomials or…
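
    A local linear estimate of the kind recommended can be sketched on synthetic data; the cutoff, bandwidth, and effect size below are made up, and a rectangular kernel is assumed:

```python
# Sketch: local linear regression-discontinuity estimate inside a
# bandwidth, on synthetic data (cutoff, bandwidth, and jump are made
# up; a rectangular kernel is assumed).
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-1, 1, 5000)              # forcing variable, cutoff at 0
tau = 0.7                                 # true jump at the cutoff
y = 0.4 * x + tau * (x >= 0) + rng.normal(0, 0.3, x.size)

h = 0.25                                  # bandwidth
left = (-h < x) & (x < 0)
right = (0 <= x) & (x < h)
b1l, b0l = np.polyfit(x[left], y[left], 1)    # line left of the cutoff
b1r, b0r = np.polyfit(x[right], y[right], 1)  # line right of the cutoff
effect = b0r - b0l                            # intercept gap at x = 0

assert abs(effect - tau) < 0.1
```
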

  19. A Constant-Factor Approximation Algorithm for the Link Building Problem

    NASA Astrophysics Data System (ADS)

    Olsen, Martin; Viglas, Anastasios; Zvedeniouk, Ilia

    In this work we consider the problem of maximizing the PageRank of a given target node in a graph by adding k new links. We consider the case in which the new links must point to the given target node (backlinks). Previous work [7] shows that this problem admits no fully polynomial-time approximation scheme unless P = NP. We present a polynomial-time algorithm yielding a PageRank value within a constant factor of the optimal. We also consider the naive algorithm, in which we choose backlinks from nodes with high PageRank values relative to their outdegree, and show that the naive algorithm performs much worse on certain graphs than the constant-factor approximation algorithm.
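
    The naive heuristic can be sketched with a small power-iteration PageRank; the example graph and the greedy single-backlink step below are illustrative, and this is not the paper's constant-factor algorithm:

```python
# Sketch: PageRank by power iteration plus the naive single-backlink
# choice (source with high PageRank relative to out-degree); the graph
# is illustrative, not from the paper.
import numpy as np

def pagerank(adj, d=0.85, iters=200):
    n = len(adj)
    out = adj.sum(axis=1)
    p = np.full(n, 1.0 / n)
    for _ in range(iters):
        share = np.where(out > 0, p / np.maximum(out, 1.0), 0.0)
        # dangling mass (out == 0) is spread uniformly
        p = (1 - d) / n + d * (adj.T @ share + p[out == 0].sum() / n)
    return p

adj = np.zeros((5, 5))
for u, v in [(0, 1), (1, 2), (2, 0), (3, 2), (4, 2)]:
    adj[u, v] = 1.0

target = 4
p = pagerank(adj)
scores = p / np.maximum(adj.sum(axis=1), 1.0)     # naive ranking
best = int(np.argmax(np.where(np.arange(5) != target, scores, -1.0)))
adj[best, target] = 1.0                           # add one backlink

assert pagerank(adj)[target] > p[target]          # the backlink helped
```
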

  20. Polynomial approximation of non-Gaussian unitaries by counting one photon at a time

    NASA Astrophysics Data System (ADS)

    Arzani, Francesco; Treps, Nicolas; Ferrini, Giulia

    2017-05-01

    In quantum computation with continuous-variable systems, quantum advantage can only be achieved if some non-Gaussian resource is available. Yet, non-Gaussian unitary evolutions and measurements suited for computation are challenging to realize in the laboratory. We propose and analyze two methods to apply a polynomial approximation of any unitary operator diagonal in the amplitude quadrature representation, including non-Gaussian operators, to an unknown input state. Our protocols use as a primary non-Gaussian resource a single-photon counter. We use the fidelity of the transformation with the target one on Fock and coherent states to assess the quality of the approximate gate.

  1. Multi-soliton solutions and Bäcklund transformation for a two-mode KdV equation in a fluid

    NASA Astrophysics Data System (ADS)

    Xiao, Zi-Jian; Tian, Bo; Zhen, Hui-Ling; Chai, Jun; Wu, Xiao-Yu

    2017-01-01

    In this paper, we investigate a two-mode Korteweg-de Vries equation, which describes the one-dimensional propagation of shallow water waves with two modes in a weakly nonlinear and dispersive fluid system. With the binary Bell polynomial and an auxiliary variable, bilinear forms, multi-soliton solutions in the two wave modes and a Bell polynomial-type Bäcklund transformation for this equation are obtained through symbolic computation. Soliton propagation and collisions between the two solitons are presented. Based on the graphic analysis, it is shown that an increase in s leads to an increase in the soliton velocities under the condition ?, while the soliton amplitudes remain unchanged as s changes, where s is the difference between the phase velocities of the two wave modes, and ? and ? are the nonlinearity and dispersion parameters, respectively. Elastic collisions between the two solitons in both modes are analyzed with the help of graphic analysis.

  2. Deformed oscillator algebra approach of some quantum superintegrable Lissajous systems on the sphere and of their rational extensions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marquette, Ian, E-mail: i.marquette@uq.edu.au; Quesne, Christiane, E-mail: cquesne@ulb.ac.be

    2015-06-15

    We extend the construction of 2D superintegrable Hamiltonians with separation of variables in spherical coordinates using combinations of shift, ladder, and supercharge operators to models involving rational extensions of the two-parameter Lissajous systems on the sphere. These new families of superintegrable systems with integrals of arbitrary order are connected with Jacobi exceptional orthogonal polynomials of type I (or II) and supersymmetric quantum mechanics. Moreover, we present an algebraic derivation of the degenerate energy spectrum for the one- and two-parameter Lissajous systems and the rationally extended models. These results are based on finitely generated polynomial algebras, Casimir operators, realizations as deformed oscillator algebras, and finite-dimensional unitary representations. Such results have only been established so far for 2D superintegrable systems separable in Cartesian coordinates, which are related to a class of polynomial algebras that display a simpler structure. We also point out how the structure function of these deformed oscillator algebras is directly related with the generalized Heisenberg algebras spanned by the nonpolynomial integrals.

  3. Trade off between variable and fixed size normalization in orthogonal polynomials based iris recognition system.

    PubMed

    Krishnamoorthi, R; Anna Poorani, G

    2016-01-01

    Iris normalization is an important stage in any iris biometric, as it tends to reduce the consequences of iris distortion. To compensate for variation in the size of the iris, owing to pupil dilation or contraction during iris acquisition and to changes in the camera-to-eye distance, two normalization schemes are proposed in this work. In the first method, the iris region of interest is normalized by converting the iris into a variable-size rectangular model in order to avoid under-sampling near the limbus border. In the second method, the iris region of interest is normalized by converting it into a fixed-size rectangular model in order to avoid dimensional discrepancies between the eye images. The performance of the proposed normalization methods is evaluated with orthogonal-polynomials-based iris recognition in terms of FAR, FRR, GAR, CRR and EER.
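
    The fixed-size rectangular model corresponds to a polar "rubber sheet" resampling of the iris annulus. A minimal coordinate-only sketch (the center and radii are assumed; a real system would interpolate image intensities at these points):

```python
# Sketch: coordinates for fixed-size "rubber sheet" unwrapping of an
# iris annulus (center and radii are assumed; a real system would
# sample image intensities at these points).
import numpy as np

def unwrap_coords(cx, cy, r_pupil, r_iris, rows=64, cols=256):
    """Map a fixed rows x cols rectangle onto the iris annulus."""
    r = np.linspace(r_pupil, r_iris, rows)[:, None]          # radial
    theta = np.linspace(0, 2 * np.pi, cols, endpoint=False)  # angular
    xs = cx + r * np.cos(theta)
    ys = cy + r * np.sin(theta)
    return xs, ys

xs, ys = unwrap_coords(cx=120, cy=100, r_pupil=30, r_iris=80)
assert xs.shape == (64, 256)        # fixed-size rectangular model
rad = np.hypot(xs - 120, ys - 100)  # every sample stays in the annulus
assert rad.min() >= 30 - 1e-9 and rad.max() <= 80 + 1e-9
```
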

  4. Some rules for polydimensional squeezing

    NASA Technical Reports Server (NTRS)

    Manko, Vladimir I.

    1994-01-01

    The review of the following results is presented: For mixed-state light of an N-mode electromagnetic field described by a Wigner function of generic Gaussian form, the photon distribution function is obtained and expressed explicitly in terms of Hermite polynomials of 2N variables. The moments of this distribution are calculated and expressed as functions of matrix invariants of the dispersion matrix. The role of a new uncertainty relation depending on the photon-state mixing parameter is elucidated. New sum rules for Hermite polynomials of several variables are found. The photon statistics of polymode even and odd coherent light and of squeezed polymode Schroedinger cat light are given explicitly. The photon distribution for polymode squeezed number states, expressed in terms of multivariable Hermite polynomials, is discussed.

  5. Independence polynomial and matching polynomial of the Koch network

    NASA Astrophysics Data System (ADS)

    Liao, Yunhua; Xie, Xiaoliang

    2015-11-01

    The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that computing them is “intractable”. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.

  6. Charactering baseline shift with 4th polynomial function for portable biomedical near-infrared spectroscopy device

    NASA Astrophysics Data System (ADS)

    Zhao, Ke; Ji, Yaoyao; Pan, Boan; Li, Ting

    2018-02-01

    Continuous-wave near-infrared spectroscopy (NIRS) devices have been highlighted for their clinical and health-care applications in noninvasive hemodynamic measurements. Baseline shift in these measurements has attracted much attention owing to its clinical importance, yet currently published correction methods have low reliability or high variability. In this study, we identified a well-suited polynomial fitting function for baseline removal in NIRS. Unlike previous studies on baseline correction for near-infrared spectroscopic evaluation of non-hemodynamic particles, we focused on baseline fitting and the corresponding correction method for NIRS, and found that a polynomial fitting function of 4th order outperforms the 2nd-order function reported in previous research. Through experimental tests of hemodynamic parameters of a solid phantom, we compared the fitting quality of the 4th-order and 2nd-order polynomials by recording and analyzing the R values and the SSE (sum of squares due to error) values. The R values of the 4th-order polynomial fits are all higher than 0.99, significantly higher than the corresponding 2nd-order values, while the SSE values of the 4th order are significantly smaller than those of the 2nd order. By using the reliable, low-variability 4th-order polynomial fitting function, we are able to remove the baseline online and obtain more accurate NIRS measurements.
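
    Baseline removal by a 4th-order polynomial fit can be sketched on a synthetic drifting signal; the quartic drift and the sinusoidal "hemodynamic" ripple below are made up, not phantom data:

```python
# Sketch: 4th-order polynomial baseline removal on a synthetic signal
# (the quartic drift and sinusoidal ripple are made up).
import numpy as np

t = np.linspace(0, 1, 500)
baseline = 0.8 - 1.5 * t + 2.0 * t**2 - 1.2 * t**3 + 0.6 * t**4
ripple = 0.05 * np.sin(2 * np.pi * 8 * t)
raw = baseline + ripple

fit4 = np.polyval(np.polyfit(t, raw, 4), t)   # 4th-order baseline fit
fit2 = np.polyval(np.polyfit(t, raw, 2), t)   # 2nd-order, for contrast
sse4 = np.sum((raw - fit4) ** 2)
sse2 = np.sum((raw - fit2) ** 2)

corrected = raw - fit4
assert sse4 < sse2                      # the quartic tracks the drift
assert np.abs(corrected).max() < 0.1    # what survives is the ripple
```
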

  7. Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.

    PubMed

    Mahajan, Virendra N

    2012-06-20

    In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L(l)(x)L(m)(y), where l and m are positive integers (including zero) and L(l)(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L(l)(x)L(m)(y), there is a corresponding orthonormal polynomial L(l)(y)L(m)(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. 
We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.

  8. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models treat the parameters as deterministic. In practice, however, the transmission parameters exhibit large variability and cannot be determined exactly, so it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we apply the approach to an obesity epidemic model. PMID:22927889

  9. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Schöbi, Roland; Sudret, Bruno

    2017-06-01

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  10. Uncertainty propagation of p-boxes using sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schöbi, Roland, E-mail: schoebi@ibk.baug.ethz.ch; Sudret, Bruno, E-mail: sudret@ibk.baug.ethz.ch

    2017-06-15

    In modern engineering, physical processes are modelled and analysed using advanced computer simulations, such as finite element models. Furthermore, concepts of reliability analysis and robust design are becoming popular, hence, making efficient quantification and propagation of uncertainties an important aspect. In this context, a typical workflow includes the characterization of the uncertainty in the input variables. In this paper, input variables are modelled by probability-boxes (p-boxes), accounting for both aleatory and epistemic uncertainty. The propagation of p-boxes leads to p-boxes of the output of the computational model. A two-level meta-modelling approach is proposed using non-intrusive sparse polynomial chaos expansions to surrogate the exact computational model and, hence, to facilitate the uncertainty quantification analysis. The capabilities of the proposed approach are illustrated through applications using a benchmark analytical function and two realistic engineering problem settings. They show that the proposed two-level approach allows for an accurate estimation of the statistics of the response quantity of interest using a small number of evaluations of the exact computational model. This is crucial in cases where the computational costs are dominated by the runs of high-fidelity computational models.

  11. Efficient spectral-Galerkin algorithms for direct solution for second-order differential equations using Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E.; Bhrawy, A.

    2006-06-01

    It is well known that spectral methods (tau, Galerkin, collocation) have a condition number that grows rapidly with the number of retained modes of the polynomial approximation. This paper presents some efficient spectral algorithms with much smaller condition numbers, based on Jacobi-Galerkin methods for second-order elliptic equations in one and two space variables. The key to the efficiency of these algorithms is to construct appropriate base functions, which lead to systems with specially structured matrices that can be efficiently inverted. The complexities of the algorithms are a small multiple of the number of unknowns in a d-dimensional domain, while the convergence rates of the algorithms are exponential for smooth solutions.
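
    The construction can be sketched for the Legendre special case with the compact basis phi_k = L_k - L_{k+2}, which satisfies homogeneous Dirichlet boundary conditions and yields a diagonal stiffness matrix; the test problem and quadrature sizes below are arbitrary choices, not the paper's Jacobi setting:

```python
# Sketch: spectral-Galerkin solve of -u'' = f on (-1, 1), u(+-1) = 0,
# with the compact Legendre basis phi_k = L_k - L_{k+2} (a special
# case of the Jacobi construction; problem and sizes are arbitrary).
import numpy as np
from numpy.polynomial.legendre import leggauss, legval, legder

N = 16                                   # retained modes
xq, wq = leggauss(64)                    # Gauss-Legendre quadrature

def basis_coeffs(k):
    c = np.zeros(k + 3)
    c[k], c[k + 2] = 1.0, -1.0           # L_k - L_{k+2}, zero at x = +-1
    return c

phi = [legval(xq, basis_coeffs(k)) for k in range(N)]
dphi = [legval(xq, legder(basis_coeffs(k))) for k in range(N)]

# Weak form: stiffness S[j, k] = integral of phi_j' * phi_k'
S = np.array([[np.sum(wq * dphi[j] * dphi[k]) for k in range(N)]
              for j in range(N)])        # diagonal for this basis
f = np.pi**2 * np.sin(np.pi * xq)        # so that u = sin(pi x)
b = np.array([np.sum(wq * f * phi[j]) for j in range(N)])
c = np.linalg.solve(S, b)

u = sum(ck * pk for ck, pk in zip(c, phi))
assert np.max(np.abs(u - np.sin(np.pi * xq))) < 1e-6  # spectral accuracy
```
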

  12. Laguerre-Freud Equations for the Recurrence Coefficients of Some Discrete Semi-Classical Orthogonal Polynomials of Class Two

    NASA Astrophysics Data System (ADS)

    Hounga, C.; Hounkonnou, M. N.; Ronveaux, A.

    2006-10-01

    In this paper, we give Laguerre-Freud equations for the recurrence coefficients of discrete semi-classical orthogonal polynomials of class two, when the polynomials in the Pearson equation are of the same degree. The case of generalized Charlier polynomials is also presented.

  13. On the coefficients of integrated expansions and integrals of ultraspherical polynomials and their applications for solving differential equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2002-02-01

    An analytical formula expressing the ultraspherical coefficients of an expansion for an infinitely differentiable function that has been integrated an arbitrary number of times, in terms of the coefficients of the original expansion of the function, is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula is given that explicitly expresses the integrals of ultraspherical polynomials of any degree, integrated an arbitrary number of times, in terms of ultraspherical polynomials. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansion are stated and proved. Some applications of how to use ultraspherical polynomials for solving ordinary and partial differential equations are described.

  14. Generalized clustering conditions of Jack polynomials at negative Jack parameter {alpha}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bernevig, B. Andrei; Department of Physics, Princeton University, Princeton, New Jersey 08544; Haldane, F. D. M.

    We present several conjectures on the behavior and clustering properties of Jack polynomials at a negative parameter {alpha}=-(k+1)/(r-1), with partitions that violate the (k,r,N)-admissibility rule of Feigin et al. [Int. Math. Res. Notices 23, 1223 (2002)]. We find that the "highest weight" Jack polynomials of specific partitions represent the minimum-degree polynomials in N variables that vanish when s distinct clusters of k+1 particles are formed, where s and k are positive integers. Explicit counting formulas are conjectured. The generalized clustering conditions are useful in a forthcoming description of fractional quantum Hall quasiparticles.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borzov, V. V., E-mail: borzov.vadim@yandex.ru; Damaskinsky, E. V., E-mail: evd@pdmi.ras.ru

    In the previous works of Borzov and Damaskinsky [“Chebyshev-Koornwinder oscillator,” Theor. Math. Phys. 175(3), 765–772 (2013)] and [“Ladder operators for Chebyshev-Koornwinder oscillator,” in Proceedings of the Days on Diffraction, 2013], the authors have defined the oscillator-like system that is associated with the two variable Chebyshev-Koornwinder polynomials. We call this system the generalized Chebyshev-Koornwinder oscillator. In this paper, we study the properties of infinite-dimensional Lie algebra that is analogous to the Heisenberg algebra for the Chebyshev-Koornwinder oscillator. We construct the exact irreducible representation of this algebra in a Hilbert space H of functions that are defined on a region which is bounded by the Steiner hypocycloid. The functions are square-integrable with respect to the orthogonality measure for the Chebyshev-Koornwinder polynomials and these polynomials form an orthonormalized basis in the space H. The generalized oscillator which is studied in the work can be considered as the simplest nontrivial example of multiboson quantum system that is composed of three interacting oscillators.

  16. Euler polynomials and identities for non-commutative operators

    NASA Astrophysics Data System (ADS)

    De Angelis, Valerio; Vignat, Christophe

    2015-12-01

    Three kinds of identities involving non-commuting operators and Euler and Bernoulli polynomials are studied. The first identity, as given by Bender and Bettencourt [Phys. Rev. D 54(12), 7710-7723 (1996)], expresses the nested commutator of the Hamiltonian and momentum operators as the commutator of the momentum and the shifted Euler polynomial of the Hamiltonian. The second one, by Pain [J. Phys. A: Math. Theor. 46, 035304 (2013)], links the commutators and anti-commutators of the monomials of the position and momentum operators. The third appears in a work by Figueira de Morisson Faria and Fring [J. Phys. A: Math. Gen. 39, 9269 (2006)] in the context of non-Hermitian Hamiltonian systems. In each case, we provide several proofs and extensions of these identities that highlight the role of Euler and Bernoulli polynomials.
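
    The Appell-type property underlying such operator identities can be checked numerically through the classical relation E_n(x+1) + E_n(x) = 2x^n; a sketch with low-order Euler polynomials hard-coded:

```python
# Sketch: numerical check of the Appell-type identity
# E_n(x + 1) + E_n(x) = 2 * x**n, with low-order Euler polynomials
# hard-coded (coefficients in increasing powers of x).
import numpy as np

euler = {
    0: [1.0],                    # E_0(x) = 1
    1: [-0.5, 1.0],              # E_1(x) = x - 1/2
    2: [0.0, -1.0, 1.0],         # E_2(x) = x^2 - x
    3: [0.25, 0.0, -1.5, 1.0],   # E_3(x) = x^3 - (3/2) x^2 + 1/4
}

def E(n, x):
    return sum(c * x**k for k, c in enumerate(euler[n]))

xs = np.linspace(-2.0, 2.0, 9)
for n in range(4):
    assert np.allclose(E(n, xs + 1) + E(n, xs), 2 * xs**n)
```
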

  17. Angular shear plate

    DOEpatents

    Ruda, Mitchell C [Tucson, AZ; Greynolds, Alan W [Tucson, AZ; Stuhlinger, Tilman W [Tucson, AZ

    2009-07-14

    One or more disc-shaped angular shear plates each include a region thereon having a thickness that varies with a nonlinear function. For the case of two such shear plates, they are positioned in a facing relationship and rotated relative to each other. Light passing through the variable thickness regions in the angular plates is refracted. By properly timing the relative rotation of the plates and by the use of an appropriate polynomial function for the thickness of the shear plate, light passing therethrough can be focused at variable positions.

  18. CKP Hierarchy, Bosonic Tau Function and Bosonization Formulae

    NASA Astrophysics Data System (ADS)

    van de Leur, Johan W.; Orlov, Alexander Yu.; Shiota, Takahiro

    2012-06-01

    We develop the theory of CKP hierarchy introduced in the papers of Kyoto school [Date E., Jimbo M., Kashiwara M., Miwa T., J. Phys. Soc. Japan 50 (1981), 3806-3812] (see also [Kac V.G., van de Leur J.W., Adv. Ser. Math. Phys., Vol. 7, World Sci. Publ., Teaneck, NJ, 1989, 369-406]). We present appropriate bosonization formulae. We show that in the context of the CKP theory certain orthogonal polynomials appear. These polynomials are polynomial both in even and odd (in Grassmannian sense) variables.

  19. Kinematics and dynamics of robotic systems with multiple closed loops

    NASA Astrophysics Data System (ADS)

    Zhang, Chang-De

    The kinematics and dynamics of robotic systems with multiple closed loops, such as Stewart platforms, walking machines, and hybrid manipulators, are studied. In the study of kinematics, the focus is on closed-form solutions of the forward position analysis of different parallel systems. A closed-form solution means that the solution is expressed as a polynomial in one variable. If the order of the polynomial is less than or equal to four, the solution has an analytical closed form. First, the conditions for obtaining analytical closed-form solutions are studied. For a Stewart platform, the condition is found to be that one rotational degree of freedom of the output link is decoupled from the other five. Based on this condition, a class of Stewart platforms that has analytical closed-form solutions is formulated. Conditions for analytical closed-form solutions of other parallel systems are also studied. Closed-form solutions of forward kinematics for walking machines and multi-fingered grippers are then studied. For a parallel system with three three-degree-of-freedom subchains, there are 84 possible ways to select six independent joints among the nine joints. These 84 ways can be classified into three categories: Category 3:3:0, Category 3:2:1, and Category 2:2:2. It is shown that the first category has no solutions; the solutions of the second category have analytical closed form; and the solutions of the last category are higher-order polynomials. The study is then extended to a nearly general Stewart platform. The solution is a 20th-order polynomial and the Stewart platform has a maximum of 40 possible configurations. The study is also extended to a new class of hybrid manipulators consisting of two serially connected parallel mechanisms. In the study of dynamics, a computationally efficient method for the inverse dynamics of manipulators based on the virtual work principle is developed. 
Although this method is comparable with the recursive Newton-Euler method for serial manipulators, its advantage is more noteworthy when applied to parallel systems. An approach to the inverse dynamics of a walking machine is also developed, which includes inverse dynamic modeling, foot force distribution, and joint force/torque allocation.

  20. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transform (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. A two-dimensional cyclic convolution can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program decomposes a cyclic polynomial into polynomial convolutions of the same length, so that only FPTs and fast Fourier transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transform, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8-bit bytes. This program was developed in 1986.
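
    The core operation here, two-dimensional cyclic convolution, can be checked against a direct implementation. The sketch below is not the program's polynomial-transform algorithm; it is a minimal reference in Python (assuming NumPy), with the FFT route standing in for the transform machinery:

```python
import numpy as np

def cyclic_conv2d_direct(a, b):
    """2D cyclic convolution by definition (O(N^4) for N x N inputs)."""
    n, m = a.shape
    out = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for k in range(n):
                for l in range(m):
                    out[i, j] += a[k, l] * b[(i - k) % n, (j - l) % m]
    return out

def cyclic_conv2d_fft(a, b):
    """Same convolution via 2D FFTs, which diagonalize cyclic convolution."""
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 4))
assert np.allclose(cyclic_conv2d_direct(a, b), cyclic_conv2d_fft(a, b))
```

    Any transform-based scheme, including the FPT decomposition described above, must reproduce the direct result up to rounding.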

  1. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    PubMed

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
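
    The abstract's central recommendation, treating the degree of the power-law polynomial as a fitted parameter rather than a constant, can be illustrated numerically. A minimal sketch (assuming NumPy; the model form constants and variable names are illustrative, not the paper's code):

```python
import numpy as np

# Hypothetical model form: metabolic rate = SMR + a * U**b, with the
# degree b treated as a parameter to be estimated, not fixed at 1.
def fit_power_law_poly(u, r, degrees):
    """Grid-search the degree b; solve SMR and a by linear least squares."""
    best = None
    for b in degrees:
        X = np.column_stack([np.ones_like(u), u**b])
        coef = np.linalg.lstsq(X, r, rcond=None)[0]
        sse = np.sum((r - X @ coef) ** 2)
        if best is None or sse < best[0]:
            best = (sse, coef[0], coef[1], b)
    return best[1:]                       # (SMR, a, b)

u = np.linspace(0.5, 3.0, 40)             # swimming speeds (arbitrary units)
rng = np.random.default_rng(1)
r = 0.2 + 0.5 * u**2.4 + 0.01 * rng.standard_normal(u.size)
smr, a, b = fit_power_law_poly(u, r, np.linspace(1.0, 4.0, 301))
# Forcing b = 1 (the first-degree assumption) would bias SMR and a.
```

    On this synthetic data the recovered degree lands near the true 2.4, illustrating why fixing the degree at one can misestimate the standard metabolic rate.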

  2. Polynomial Size Formulations for the Distance and Capacity Constrained Vehicle Routing Problem

    NASA Astrophysics Data System (ADS)

    Kara, Imdat; Derya, Tusan

    2011-09-01

    The Distance and Capacity Constrained Vehicle Routing Problem (DCVRP) is an extension of the well-known Traveling Salesman Problem (TSP) and arises in distribution and logistics. Constructing new formulations for it is the main motivation and contribution of this paper. We focus on two-index integer programming formulations for DCVRP, presenting one node-based and one arc (flow)-based formulation. Both formulations have O(n^2) binary variables and O(n^2) constraints, i.e., the number of decision variables and constraints grows as a polynomial function of the number of nodes of the underlying graph. It is shown that the proposed arc-based formulation produces a better lower bound than the existing one (Water's formulation, as referred to in the paper). Finally, various problems from the literature are solved with the node-based and arc-based formulations using CPLEX 8.0. Preliminary computational analysis shows that the arc-based formulation outperforms the node-based formulation in terms of linear programming relaxation.

  3. Symmetric polynomials in information theory: Entropy and subentropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jozsa, Richard; Mitchison, Graeme

    2015-06-15

    Entropy and other fundamental quantities of information theory are customarily expressed and manipulated as functions of probabilities. Here we study the entropy H and subentropy Q as functions of the elementary symmetric polynomials in the probabilities and reveal a series of remarkable properties. Derivatives of all orders are shown to satisfy a complete monotonicity property. H and Q themselves become multivariate Bernstein functions and we derive the density functions of their Lévy-Khintchine representations. We also show that H and Q are Pick functions in each symmetric polynomial variable separately. Furthermore, we see that H and the intrinsically quantum informational quantity Q become surprisingly closely related in functional form, suggesting a special significance for the symmetric polynomials in quantum information theory. Using the symmetric polynomials, we also derive a series of further properties of H and Q.
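
    As a concrete companion to the abstract, the sketch below (assuming NumPy; the natural-log convention is an assumption) computes H, the Jozsa-Robb-Wootters subentropy Q, and the elementary symmetric polynomials e_k for a distribution with distinct probabilities:

```python
import numpy as np

def entropy(p):
    """Shannon entropy H in nats."""
    p = np.asarray(p, float)
    return -np.sum(p * np.log(p))

def subentropy(p):
    """Subentropy Q (Jozsa-Robb-Wootters form), for distinct p_i."""
    p = np.asarray(p, float)
    n = len(p)
    q = 0.0
    for i in range(n):
        denom = np.prod([p[i] - p[j] for j in range(n) if j != i])
        q -= p[i]**n * np.log(p[i]) / denom
    return q

def elementary_symmetric(p):
    """Elementary symmetric polynomials e_1..e_n of the probabilities."""
    # prod (x + p_i) = x^n + e_1 x^(n-1) + ... + e_n, so read off coefficients.
    coeffs = np.poly(-np.asarray(p, float))
    return coeffs[1:]

p = [0.5, 0.3, 0.2]
H, Q = entropy(p), subentropy(p)
e = elementary_symmetric(p)   # e[0] = sum of probabilities = 1
```

    For any distribution e_1 = 1, and Q lies strictly between 0 and H, consistent with the known bound Q <= H.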

  4. On Certain Wronskians of Multiple Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Lun; Filipuk, Galina

    2014-11-01

    We consider determinants of Wronskian type whose entries are multiple orthogonal polynomials associated with a path connecting two multi-indices. By assuming that the weight functions form an algebraic Chebyshev (AT) system, we show that the polynomials represented by the Wronskians keep a constant sign in some cases, while in some other cases oscillatory behavior appears, which generalizes classical results for orthogonal polynomials due to Karlin and Szegő. There are two applications of our results. The first application arises from the observation that the m-th moment of the average characteristic polynomials for multiple orthogonal polynomial ensembles can be expressed as a Wronskian of the type II multiple orthogonal polynomials. Hence, it is straightforward to obtain the distinct behavior of the moments for odd and even m in a special multiple orthogonal ensemble - the AT ensemble. As the second application, we derive some Turán type inequalities for multiple Hermite and multiple Laguerre polynomials (of two kinds). Finally, we study numerically the geometric configuration of zeros for the Wronskians of these multiple orthogonal polynomials. We observe that the zeros have regular configurations in the complex plane, which might be of independent interest.

  5. Computer program ETC improves computation of elastic transfer matrices of Legendre polynomials P/0/ and P/1/

    NASA Technical Reports Server (NTRS)

    Gibson, G.; Miller, M.

    1967-01-01

    Computer program ETC improves the computation of elastic transfer matrices of the Legendre polynomials P/0/ and P/1/. Rather than carrying out a double integration numerically, the program performs one of the integrations analytically, so that the numerical integration need only be carried out over one variable.

  6. An Artificial Intelligence Approach to the Symbolic Factorization of Multivariable Polynomials. Technical Report No. CS74019-R.

    ERIC Educational Resources Information Center

    Claybrook, Billy G.

    A new heuristic factorization scheme uses learning to improve the efficiency of determining the symbolic factorization of multivariable polynomials with integer coefficients and an arbitrary number of variables and terms. The factorization scheme makes extensive use of artificial intelligence techniques (e.g., model-building, learning, and…

  7. Kinematics and design of a class of parallel manipulators

    NASA Astrophysics Data System (ADS)

    Hertz, Roger Barry

    1998-12-01

    This dissertation is concerned with the kinematic analysis and design of a class of three degree-of-freedom, spatial parallel manipulators. The class of manipulators is characterized by two platforms, between which are three legs, each possessing a succession of revolute, spherical, and revolute joints. The class is termed the "revolute-spherical-revolute" class of parallel manipulators. Two members of this class are examined. The first mechanism is a double-octahedral variable-geometry truss, and the second is termed a double tripod. The history of the mechanisms is explored---the variable-geometry truss dates back to 1984, while predecessors of the double tripod mechanism date back to 1869. This work centers on the displacement analysis of these three-degree-of-freedom mechanisms. Two types of problem are solved: the forward displacement analysis (forward kinematics) and the inverse displacement analysis (inverse kinematics). The kinematic model of the class of mechanism is general in nature. A classification scheme for the revolute-spherical-revolute class of mechanism is introduced, which uses dominant geometric features to group designs into 8 different sub-classes. The forward kinematics problem is discussed: given a set of independently controllable input variables, solve for the relative position and orientation between the two platforms. For the variable-geometry truss, the controllable input variables are assumed to be the linear (prismatic) joints. For the double tripod, the controllable input variables are the three revolute joints adjacent to the base (proximal) platform. Multiple solutions are presented to the forward kinematics problem, indicating that there are many different positions (assemblies) that the manipulator can assume with equivalent inputs. 
For the double tripod these solutions can be expressed as a 16th-degree polynomial in one unknown, while for the variable-geometry truss there exist two 16th-degree polynomials, giving rise to 256 solutions. For special cases of the double tripod, the forward kinematics problem is shown to have a closed-form solution. Numerical examples are presented for the solution to the forward kinematics. A double tripod is presented that admits 16 unique and real forward kinematics solutions. Another example, for a variable-geometry truss, is given that possesses 64 real solutions: 8 for each 16th-order polynomial. The inverse kinematics problem is also discussed: given the relative position of the hand (end-effector), which is rigidly attached to one platform, solve for the independently controlled joint variables. Iterative solutions are proposed for both the variable-geometry truss and the double tripod. For special cases of both mechanisms, closed-form solutions are given. The practical problems of designing, building, and controlling a double-tripod manipulator are addressed. The resulting manipulator is a first-of-its-kind prototype of a tapered (asymmetric) double-tripod manipulator. Real-time forward and inverse kinematics algorithms on an industrial robot controller are presented. The resulting performance of the prototype is impressive: it achieved a maximum tool-tip speed of 4064 mm/s, a maximum acceleration of 5 g, and a cycle time of 1.2 seconds for a typical pick-and-place pattern.

  8. Polynomial approximation of the Lense-Thirring rigid precession frequency

    NASA Astrophysics Data System (ADS)

    De Falco, Vittorio; Motta, Sara

    2018-05-01

    We propose a polynomial approximation of the global Lense-Thirring rigid precession frequency to study low-frequency quasi-periodic oscillations around spinning black holes. This high-performing approximation allows one to determine the expected frequencies of a precessing thick accretion disc with fixed inner radius and variable outer radius around a black hole with given mass and spin. We discuss the accuracy and the applicability regions of our polynomial approximation, showing that computational times are reduced by a factor of ≈70, bringing them into the range of minutes.

  9. Shuttle Debris Impact Tool Assessment Using the Modern Design of Experiments

    NASA Technical Reports Server (NTRS)

    DeLoach, R.; Rayos, E. M.; Campbell, C. H.; Rickman, S. L.

    2006-01-01

    Computational tools have been developed to estimate thermal and mechanical reentry loads experienced by the Space Shuttle Orbiter as the result of cavities in the Thermal Protection System (TPS). Such cavities can be caused by impact from ice or insulating foam debris shed from the External Tank (ET) on liftoff. The reentry loads depend on cavity geometry and certain Shuttle state variables, among other factors. Certain simplifying assumptions have been made in the tool development about the cavity geometry variables. For example, the cavities are all modeled as "shoeboxes," with rectangular cross-sections and planar walls. So an actual cavity is typically approximated with an idealized cavity described in terms of its length, width, and depth, as well as its entry angle, exit angle, and side angles (assumed to be the same for both sides). As part of a comprehensive assessment of the uncertainty in reentry loads estimated by the debris impact assessment tools, an effort has been initiated to quantify the component of the uncertainty that is due to imperfect geometry specifications for the debris impact cavities. The approach is to compute predicted loads for a set of geometry factor combinations sufficient to develop polynomial approximations to the complex, nonparametric underlying computational models. Such polynomial models are continuous and feature estimable, continuous derivatives, conditions that facilitate the propagation of independent variable errors. As an additional benefit, once the polynomial models have been developed, they require fewer computational resources to execute than the underlying finite element and computational fluid dynamics codes, and can generate reentry loads estimates in significantly less time. This provides a practical screening capability, in which a large number of debris impact cavities can be quickly classified either as harmless, or subject to additional analysis with the more comprehensive underlying computational tools. 
The polynomial models also provide useful insights into the sensitivity of reentry loads to various cavity geometry variables, and reveal complex interactions among those variables that indicate how the sensitivity of one variable depends on the level of one or more other variables. For example, the effect of cavity length on certain reentry loads depends on the depth of the cavity. Such interactions are clearly displayed in the polynomial response models.
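
    The approach described, replacing an expensive code with a fitted polynomial response surface whose cross terms expose interactions, can be sketched generically. The "load model" below is a synthetic stand-in for the underlying finite element or CFD codes, not the NASA tool, and the factor names are illustrative:

```python
import numpy as np

# Hypothetical black-box load model: any expensive code could stand here.
# Note the built-in length*depth interaction we hope the surrogate recovers.
def load_model(length, depth):
    return 1.0 + 0.8 * length + 0.5 * depth + 1.2 * length * depth

# Sample a small factorial design, then least-squares fit a polynomial
# response surface with an interaction term: y ~ 1, L, D, L*D.
L, D = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
l, d = L.ravel(), D.ravel()
y = load_model(l, d)
X = np.column_stack([np.ones_like(l), l, d, l * d])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
# beta[3] estimates the L*D interaction: the effect of length depends
# on the level of depth, exactly the kind of coupling discussed above.
```

    Once fitted, evaluating the polynomial is essentially free, which is what enables the screening use described in the abstract.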

  10. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the algorithm of aPC and extends the method, which was previously only introduced as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach provides the possibility to propagate continuous or discrete probability density functions and also histograms (data sets) as long as their moments exist, are finite and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is solely based on the moments of the input probability distributions. It is referred to as SAMBA (PC), which is short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. 
SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
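
    The moment-to-quadrature step at the heart of aPC can be illustrated in a few lines. This is a hedged sketch (assuming NumPy), not the SAMBA implementation: it builds the Hankel matrix of raw moments, extracts three-term recurrence coefficients via a Cholesky factorization, and reads nodes and weights off the Jacobi matrix, Golub-Welsch style:

```python
import numpy as np

def quadrature_from_moments(moments, n):
    """n-point Gaussian quadrature from raw moments m_0..m_{2n},
    via Cholesky of the Hankel moment matrix."""
    m = np.asarray(moments, float)
    H = np.array([[m[i + j] for j in range(n + 1)] for i in range(n + 1)])
    R = np.linalg.cholesky(H).T          # H = R^T R, R upper triangular
    # Recurrence coefficients of the monic orthogonal polynomials.
    alpha = np.zeros(n)
    beta = np.zeros(n - 1)
    for k in range(n):
        prev = R[k - 1, k] / R[k - 1, k - 1] if k > 0 else 0.0
        alpha[k] = R[k, k + 1] / R[k, k] - prev
    for k in range(1, n):
        beta[k - 1] = R[k, k] / R[k - 1, k - 1]
    # Nodes are eigenvalues of the Jacobi matrix; weights come from the
    # first components of its eigenvectors.
    J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    nodes, vecs = np.linalg.eigh(J)
    weights = m[0] * vecs[0, :] ** 2
    return nodes, weights

# Moments of the uniform weight on [-1, 1]: recovers Gauss-Legendre.
mom = [2.0 / (k + 1) if k % 2 == 0 else 0.0 for k in range(5)]
nodes, weights = quadrature_from_moments(mom, 2)   # nodes ±1/sqrt(3)
```

    Feeding in the moments of a histogram instead of an analytic distribution gives data-driven collocation points, which is the aPC idea in miniature.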

  12. Enhanced production of medicinal polysaccharide by submerged fermentation of Lingzhi or Reishi medicinal mushroom Ganoderma lucidum (W.Curt.:Fr.) P. Karst. using statistical and evolutionary optimization methods.

    PubMed

    Baskar, Gurunathan; Sathya, Shree Rajesh K

    2011-01-01

    Statistical and evolutionary optimization of media composition was employed for the production of medicinal exopolysaccharide (EPS) by Lingzhi or Reishi medicinal mushroom Ganoderma lucidum MTCC 1039 using soybean meal flour as a low-cost substrate. Soybean meal flour, ammonium chloride, glucose, and pH were identified as the most important variables for EPS yield using the two-level Plackett-Burman design and further optimized using the central composite design (CCD) and the artificial neural network (ANN)-linked genetic algorithm (GA). The high value of the coefficient of determination of the ANN (R² = 0.982) indicates that the ANN model was more accurate than the second-order polynomial model of CCD (R² = 0.91) for representing the effect of media composition on EPS yield. The predicted optimum media composition using ANN-linked GA was soybean meal flour 2.98%, glucose 3.26%, ammonium chloride 0.25%, and initial pH 7.5 for a maximum predicted EPS yield of 1005.55 mg/L. The experimental EPS yield obtained using the predicted optimum media composition was 1012.36 mg/L, which validates the high degree of accuracy of evolutionary optimization for enhanced production of EPS by submerged fermentation of G. lucidum.

  13. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    PubMed

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the variability of the source water total organic carbon (TOC) concentration can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach can capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at the locations with the most anthropogenic influence on their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
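
    A local polynomial regression of the kind described can be sketched in a few lines. This is a generic illustration (assuming NumPy, with a tricube kernel as a common choice), not the authors' model:

```python
import numpy as np

def local_linear(x_train, y_train, x0, bandwidth):
    """Local polynomial (degree-1) regression at x0 with tricube weights."""
    d = np.abs(x_train - x0) / bandwidth
    w = np.where(d < 1, (1 - d**3) ** 3, 0.0)        # tricube kernel
    X = np.column_stack([np.ones_like(x_train), x_train - x0])
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_train)
    return beta[0]                                    # fitted value at x0

# Nonlinear synthetic relation: a single global line would miss the curvature,
# but the locally fitted line tracks it.
rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0, 2 * np.pi, 200))
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)
yhat = local_linear(x, y, np.pi / 2, bandwidth=0.8)   # true value: 1.0
```

    Because each prediction refits a small weighted least-squares problem near the query point, no global functional form (Gaussian or otherwise) is imposed, which is the property the abstract emphasizes.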

  14. Probing baryogenesis through the Higgs boson self-coupling

    NASA Astrophysics Data System (ADS)

    Reichert, M.; Eichhorn, A.; Gies, H.; Pawlowski, J. M.; Plehn, T.; Scherer, M. M.

    2018-04-01

    The link between a modified Higgs self-coupling and the strong first-order phase transition necessary for baryogenesis is well explored for polynomial extensions of the Higgs potential. We broaden this argument beyond leading polynomial expansions of the Higgs potential to higher polynomial terms and to nonpolynomial Higgs potentials. For our quantitative analysis we resort to the functional renormalization group, which allows us to evolve the full Higgs potential to higher scales and finite temperature. In all cases we find that a strong first-order phase transition manifests itself in an enhancement of the Higgs self-coupling by at least 50%, implying that such modified Higgs potentials should be accessible at the LHC.

  15. PolyWaTT: A polynomial water travel time estimator based on Derivative Dynamic Time Warping and Perceptually Important Points

    NASA Astrophysics Data System (ADS)

    Claure, Yuri Navarro; Matsubara, Edson Takashi; Padovani, Carlos; Prati, Ronaldo Cristiano

    2018-03-01

    Traditional methods for estimating timing parameters in hydrological science require a rigorous study of the relations of flow resistance, slope, flow regime, watershed size, water velocity, and other local variables. These studies are mostly based on empirical observations, where the timing parameter is estimated using empirically derived formulas. The application of these studies to other locations is not always direct: the locations where the equations are used should have characteristics comparable to the locations from which the equations were derived. To overcome this barrier, in this work we developed a data-driven approach to estimate timing parameters such as travel time. Our proposal estimates timing parameters using historical data of the location, without the need to adapt or use empirical formulas from other locations. The proposal uses only one variable measured at two different locations on the same river (for instance, two river-level measurements, one upstream and the other downstream). The recorded data from each location generate two time series. Our method aligns these two time series using derivative dynamic time warping (DDTW) and perceptually important points (PIP). Using data from timing parameters, a polynomial function generalizes the data by inducing a polynomial water travel time estimator, called PolyWaTT. To evaluate the potential of our proposal, we applied PolyWaTT to three different watersheds: a floodplain ecosystem located in the part of Brazil known as the Pantanal, the world's largest tropical wetland area; and the Missouri River and the Pearl River, in the United States of America. We compared our proposal with empirical formulas and a data-driven state-of-the-art method. 
The experimental results demonstrate that PolyWaTT showed a lower mean absolute error than all other methods tested in this study, and for longer distances the mean absolute error achieved by PolyWaTT is three times smaller than that of the empirical formulas.
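
    The alignment step can be illustrated with a minimal derivative dynamic time warping (DDTW) sketch. This is a generic textbook version (assuming NumPy), not the PolyWaTT implementation, and it omits the PIP reduction step:

```python
import numpy as np

def dtw_cost(a, b):
    """Classic dynamic-programming DTW cost between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = abs(a[i - 1] - b[j - 1])
            D[i, j] = c + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def ddtw_cost(a, b):
    """Derivative DTW: align estimated derivatives instead of raw values,
    emphasizing shape (rises and falls) over absolute level."""
    return dtw_cost(np.gradient(a), np.gradient(b))

t = np.linspace(0, 1, 50)
upstream = np.sin(2 * np.pi * t)            # e.g., upstream river level
shifted_level = upstream + 5.0              # same shape, offset datum
# DDTW ignores the constant level offset entirely, while plain DTW
# accumulates a large cost from it.
```

    This level-invariance is why aligning derivatives suits paired gauge records whose datums differ.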

  16. Explorations of the Gauss-Lucas Theorem

    ERIC Educational Resources Information Center

    Brilleslyper, Michael A.; Schaubroeck, Beth

    2017-01-01

    The Gauss-Lucas Theorem is a classical complex analysis result that states the critical points of a single-variable complex polynomial lie inside the closed convex hull of the zeros of the polynomial. Although the result is well-known, it is not typically presented in a first course in complex analysis. The ease with which modern technology allows…
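
    The theorem is easy to check numerically. For a polynomial with real zeros, the convex hull of the zeros is an interval, so the Gauss-Lucas containment reduces to a range check (a minimal sketch, assuming NumPy):

```python
import numpy as np

# p(x) = (x - 1)(x - 2)(x - 5); the convex hull of the zeros is [1, 5].
zeros = np.array([1.0, 2.0, 5.0])
coeffs = np.poly(zeros)                      # x^3 - 8x^2 + 17x - 10
crit = np.roots(np.polyder(coeffs)).real     # critical points: roots of p'
# Gauss-Lucas: every critical point lies in the hull of the zeros.
assert np.all((crit >= zeros.min()) & (crit <= zeros.max()))
```

    For complex zeros the same experiment works with a genuine convex-hull test, which is the visualization the article builds on.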

  17. Toward a New Method of Decoding Algebraic Codes Using Groebner Bases

    DTIC Science & Technology

    1993-10-01

    variables over GF(2^m). A celebrated algorithm by Buchberger produces a reduced Groebner basis of that ideal. It turns out that, since the common roots of

  18. Hypergeometric Series Solution to a Class of Second-Order Boundary Value Problems via Laplace Transform with Applications to Nanofluids

    NASA Astrophysics Data System (ADS)

    Ebaid, Abdelhalim; Wazwaz, Abdul-Majid; Alali, Elham; Masaedeh, Basem S.

    2017-03-01

    Very recently, it was observed that the temperature of nanofluids is ultimately governed by second-order ordinary differential equations with variable coefficients of exponential order. Such coefficients were transformed to polynomial type by using new independent variables. In this paper, a class of second-order ordinary differential equations with variable coefficients of polynomial type is solved analytically. The analytical solution is expressed in terms of a hypergeometric function with generalized parameters. Moreover, the present results are applied to selected nanofluid problems from the literature, whose exact solutions are recovered as special cases of our generalized analytical solution.

  19. Colorimetric characterization models based on colorimetric characteristics evaluation for active matrix organic light emitting diode panels.

    PubMed

    Gong, Rui; Xu, Haisong; Tong, Qingfen

    2012-10-20

    The colorimetric characterization of active matrix organic light emitting diode (AMOLED) panels suffers from their poor channel independence. Based on a colorimetric evaluation of channel independence and chromaticity constancy, an accurate colorimetric characterization method, namely the polynomial compensation model (PC model), which accounts for channel interactions, was proposed for AMOLED panels. In this model, polynomial expressions are employed to capture the relationship between the prediction errors of the XYZ tristimulus values and the digital inputs, compensating for the XYZ prediction errors of the conventional piecewise linear interpolation assuming variable chromaticity coordinates (PLVC) model. The experimental results indicated that the proposed PC model outperformed other typical characterization models for the two tested AMOLED smart-phone displays, as well as for a professional liquid crystal display monitor.
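
    The core idea, fitting a polynomial of the digital inputs (including cross terms for channel interactions) to the first-stage prediction errors, can be sketched generically. The error model below is synthetic, not measured display data, and the coefficients are purely illustrative:

```python
import numpy as np

# Hypothetical residuals of a first-stage display model: suppose the
# leftover error in one XYZ component depends on channel interactions.
rng = np.random.default_rng(4)
rgb = rng.uniform(0, 1, (300, 3))          # normalized digital inputs
r, g, b = rgb.T
err_true = 0.3 * r * g - 0.2 * g * b + 0.1 * r * b

# Second-degree polynomial of the digital inputs, including cross terms,
# fitted by least squares to predict (and thus compensate) the error.
X = np.column_stack([np.ones_like(r), r, g, b,
                     r * g, g * b, r * b, r**2, g**2, b**2])
beta, *_ = np.linalg.lstsq(X, err_true, rcond=None)
residual = err_true - X @ beta             # error left after compensation
```

    A purely channel-independent model cannot represent the r*g, g*b, r*b terms at all, which is why the cross terms matter for panels with strong channel interactions.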

  20. Distribution functions of probabilistic automata

    NASA Technical Reports Server (NTRS)

    Vatan, F.

    2001-01-01

    Each probabilistic automaton M over an alphabet A defines a probability measure Prob sub(M) on the set of all finite and infinite words over A. We can identify a k-letter alphabet A with the set {0, 1,..., k-1} and, hence, consider every finite or infinite word w over A as a radix-k expansion of a real number X(w) in the interval [0, 1]. This makes X(w) a random variable, and the distribution function of M is defined as usual: F(x) := Prob sub(M) { w: X(w) < x }. Utilizing the fixed-point (denotational) semantics, extended to probabilistic computations, we investigate the distribution functions of probabilistic automata in detail. Automata with continuous distribution functions are characterized. By a new and much simpler method, it is shown that the distribution function F(x) is an analytic function if it is a polynomial. Finally, answering a question posed by D. Knuth and A. Yao, we show that a polynomial distribution function F(x) on [0, 1] can be generated by a probabilistic automaton iff all the roots of F'(x) = 0 in this interval, if any, are rational numbers. For this, we define two dynamical systems on the set of polynomial distributions and study attracting fixed points of random compositions of these two systems.
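
    The radix-k value X(w) of a finite word can be sketched directly in a few lines (the function name is ours, chosen for illustration):

```python
def radix_value(w, k):
    """Map a finite word w over {0, ..., k-1} to X(w) in [0, 1]
    by reading it as the radix-k expansion 0.w1 w2 w3 ..."""
    return sum(d / k ** (i + 1) for i, d in enumerate(w))

# The binary word 101 encodes 1/2 + 0/4 + 1/8 = 0.625
print(radix_value([1, 0, 1], 2))
```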

  1. Hadamard Factorization of Stable Polynomials

    NASA Astrophysics Data System (ADS)

    Loredo-Villalobos, Carlos Arturo; Aguirre-Hernández, Baltazar

    2011-11-01

    The stable (Hurwitz) polynomials are important in the study of systems of differential equations and control theory (see [7] and [19]). A property of these polynomials is related to the Hadamard product. Consider two polynomials p, q ∈ R[x]: p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 and q(x) = b_m x^m + b_{m-1} x^{m-1} + ... + b_1 x + b_0. The Hadamard product p × q is defined as (p × q)(x) = a_k b_k x^k + a_{k-1} b_{k-1} x^{k-1} + ... + a_1 b_1 x + a_0 b_0, where k = min(m, n). Some results (see [16]) show that if p, q ∈ R[x] are stable polynomials then p × q is stable as well, i.e., the Hadamard product is closed; however, the converse is not always true: not every stable polynomial of degree n > 4 has a factorization into the Hadamard product of two stable polynomials of the same degree (see [15]). In this work we give some conditions for the existence of a Hadamard factorization of stable polynomials.
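
    The coefficient-wise definition above is immediate to compute; a minimal sketch using ascending coefficient lists (the function name is ours):

```python
def hadamard_product(p, q):
    """Coefficient-wise (Hadamard) product of two polynomials given as
    ascending coefficient lists [a0, a1, ..., an]; the result has
    degree k = min(deg p, deg q)."""
    k = min(len(p), len(q))
    return [p[i] * q[i] for i in range(k)]

# p(x) = 1 + 2x + 3x^2, q(x) = 4 + 5x  ->  (p x q)(x) = 4 + 10x
print(hadamard_product([1, 2, 3], [4, 5]))
```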

  2. Computing multiple periodic solutions of nonlinear vibration problems using the harmonic balance method and Groebner bases

    NASA Astrophysics Data System (ADS)

    Grolet, Aurelien; Thouverez, Fabrice

    2015-02-01

    This paper is devoted to the study of vibration of mechanical systems with geometric nonlinearities. The harmonic balance method is used to derive systems of polynomial equations whose solutions give the frequency components of the possible steady states. Groebner basis methods are used for computing all solutions of the polynomial systems. This approach makes it possible to reduce the complete system to a unique polynomial equation in one variable driving all solutions of the problem. In addition, in order to decrease the number of variables, we propose to first work on the undamped system, and recover solutions of the damped system using a continuation on the damping parameter. The search for multiple solutions is illustrated on a simple system, where the influence of the number of retained harmonics is studied. Finally, the procedure is applied to a simple cyclic system and we give a representation of the multiple states versus frequency.

  3. Algebraic approach to solve ttbar dilepton equations

    NASA Astrophysics Data System (ADS)

    Sonnenschein, Lars

    2006-01-01

    The set of non-linear equations describing the Standard Model kinematics of the top quark antiquark production system in the dilepton decay channel has at most a four-fold ambiguity due to two not fully reconstructed neutrinos. Its most precise and robust solution is of major importance for measurements of top quark properties like the top quark mass and ttbar spin correlations. Simple algebraic operations allow one to transform the non-linear equations into a system of two polynomial equations with two unknowns. These two polynomials of multidegree eight can in turn be analytically reduced to one polynomial with one unknown by means of resultants. The obtained univariate polynomial is of degree sixteen and its coefficients are free of any singularity. The number of its real solutions is determined analytically by means of Sturm's theorem, which is also used to isolate each real solution into a unique pairwise disjoint interval. The solutions are polished by seeking the sign change of the polynomial in a given interval through binary bracketing. Further, a new ansatz - exploiting an accidental cancellation in the process of transforming the equations - is presented. It permits transforming the initial system of equations into two polynomial equations with two unknowns. These two polynomials of multidegree two can be reduced to one univariate polynomial of degree four by means of resultants. The obtained quartic equation can be solved analytically. The analytical solution has singularities which can be circumvented by the algebraic approach described above.

  4. A comparison of companion matrix methods to find roots of a trigonometric polynomial

    NASA Astrophysics Data System (ADS)

    Boyd, John P.

    2013-08-01

    A trigonometric polynomial is a truncated Fourier series of the form f_N(t) ≡ ∑_{j=0}^{N} a_j cos(jt) + ∑_{j=1}^{N} b_j sin(jt). It has been previously shown by the author that zeros of such a polynomial can be computed as the eigenvalues of a companion matrix with elements which are complex-valued combinations of the Fourier coefficients, the "CCM" method. However, previous work provided no examples, so one goal of this new work is to experimentally test the CCM method. A second goal is to introduce a new alternative, the elimination/Chebyshev algorithm, and experimentally compare it with the CCM scheme. The elimination/Chebyshev matrix (ECM) algorithm yields a companion matrix with real-valued elements, albeit at the price of usefulness only for real roots. The new elimination scheme first converts the trigonometric rootfinding problem to a pair of polynomial equations in the variables (c, s) where c ≡ cos(t) and s ≡ sin(t). The elimination method next reduces the system to a single univariate polynomial P(c). We show that this same polynomial is the resultant of the system and is also a generator of the Groebner basis with lexicographic ordering for the system. Both methods give very high numerical accuracy for real-valued roots, typically at least 11 decimal places in Matlab/IEEE 754 16-digit floating point arithmetic. The CCM algorithm is typically one or two decimal places more accurate, though these differences disappear if the roots are "Newton-polished" by a single Newton's iteration. The complex-valued matrix is accurate for complex-valued roots, too, though accuracy decreases with the magnitude of the imaginary part of the root. The cost of both methods scales as O(N^3) floating point operations.
In spite of intimate connections of the elimination/Chebyshev scheme to two well-established technologies for solving systems of equations, resultants and Groebner bases, and the advantages of using only real-valued arithmetic to obtain a companion matrix with real-valued elements, the ECM algorithm is noticeably inferior to the complex-valued companion matrix in simplicity, ease of programming, and accuracy.
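
    The substitution behind the complex companion-matrix idea can be sketched as follows. This is an illustrative reimplementation (the function name is ours, and the companion-matrix eigenvalue step is delegated to numpy.roots, which itself computes eigenvalues of a companion matrix), not the paper's code:

```python
import numpy as np

def trig_poly_roots(a, b):
    """Zeros of f(t) = sum_{j=0}^{N} a[j] cos(j t) + sum_{j=1}^{N} b[j] sin(j t)
    (b[0] is ignored).  Substituting z = exp(i t) turns z**N * f(t) into a
    degree-2N polynomial h(z) whose roots give t = -i log z."""
    N = len(a) - 1
    h = np.zeros(2 * N + 1, dtype=complex)      # h[k] multiplies z**k
    h[N] = a[0]
    for j in range(1, N + 1):
        h[N + j] += (a[j] - 1j * b[j]) / 2      # from cos and sin expansions
        h[N - j] += (a[j] + 1j * b[j]) / 2
    z = np.roots(h[::-1])                       # numpy wants descending powers
    return -1j * np.log(z)                      # real part is t modulo 2*pi

# f(t) = cos(t) vanishes at t = +/- pi/2:
roots = trig_poly_roots([0.0, 1.0], [0.0, 0.0])
print(np.sort(roots.real))
```

A complex root shows up as a nonzero imaginary part of the returned t, matching the remark above that the complex-valued matrix handles complex roots as well.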

  5. Nano-transfersomes as a novel carrier for transdermal delivery.

    PubMed

    Chaudhary, Hema; Kohli, Kanchan; Kumar, Vikash

    2013-09-15

    The aim of this study was to design and optimize nano-transfersomes of Diclofenac diethylamine (DDEA) and Curcumin (CRM). A 3^3 factorial (Box-Behnken) design was used to derive a second-order polynomial equation and to construct 2-D (contour) and 3-D (response surface) plots for prediction of responses. The independent variables were the ratio of lipid to surfactant (X1), the weight of lipid to surfactant (X2) and the sonication time (X3); the dependent variables were the entrapment efficiency of DDEA (Y1), the entrapment efficiency of CRM (Y2), the effect on particle size (Y3), the flux of DDEA (Y4), and the flux of CRM (Y5). The 2-D and 3-D plots were drawn and the statistical validity of the polynomials was established to find the composition of the optimized formulation. The design established the role of the derived polynomial equation and the 2-D and 3-D plots in predicting the values of the dependent variables for the preparation and optimization of nano-transfersomes for transdermal drug release. Copyright © 2013 Elsevier B.V. All rights reserved.
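
    A full second-order polynomial of the kind derived from such factorial designs can be fitted by ordinary least squares; a minimal sketch for two coded factors (all data and coefficient values below are made up for illustration, not taken from the study):

```python
import numpy as np

def quadratic_design_matrix(X):
    """Columns for the full second-order model
    y = b0 + sum_i bi xi + sum_i bii xi^2 + sum_{i<j} bij xi xj."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]          # linear terms
    cols += [X[:, i] ** 2 for i in range(k)]     # pure quadratic terms
    cols += [X[:, i] * X[:, j]                   # two-factor interactions
             for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(15, 2))             # 15 runs, 2 coded factors
y = 3 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] ** 2 + X[:, 0] * X[:, 1]
coef, *_ = np.linalg.lstsq(quadratic_design_matrix(X), y, rcond=None)
print(np.round(coef, 3))   # approximately [3, 2, -1, 0.5, 0, 1]
```

With noiseless data the least-squares fit recovers the generating coefficients exactly; with real experimental responses one would also inspect residuals and an F test before trusting the reduced model.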

  6. Robustness analysis of an air heating plant and control law by using polynomial chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colón, Diego; Ferreira, Murillo A. S.; Bueno, Átila M.

    2014-12-10

    This paper presents a robustness analysis of an air heating plant with a multivariable closed-loop control law by using the polynomial chaos methodology (MPC). The plant consists of a PVC tube with a fan in the air input (that forces the air through the tube) and a mass flux sensor in the output. A heating resistance warms the air as it flows inside the tube, and a thermocouple sensor measures the air temperature. The plant thus has two inputs (the fan's rotation intensity and the heat generated by the resistance, both measured in percent of the maximum value) and two outputs (air temperature and air mass flux, also in percent of the maximum value). The mathematical model is obtained by system identification techniques. The mass flux sensor, which is nonlinear, is linearized and the delays in the transfer functions are properly approximated by non-minimum phase transfer functions. The resulting model is transformed to a state-space model, which is used for control design purposes. The multivariable robust control design technique used is LQG/LTR, and the controllers are validated in simulation software and in the real plant. Finally, the MPC is applied by considering some of the system's parameters as random variables (one at a time), and the system's stochastic differential equations are solved by expanding the solution (a stochastic process) in an orthogonal basis of polynomial functions of the basic random variables. This method transforms the stochastic equations into a set of deterministic differential equations, which can be solved by traditional numerical methods (that is the MPC). Statistical data for the system (like expected values and variances) are then calculated. The effects of randomness in the parameters are evaluated in the open-loop and closed-loop pole positions.

  7. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
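
    The core of the G.C.D. method is that p / gcd(p, p') has the same zeros as p but all with multiplicity one, so a standard rootfinder can then be applied safely. A sketch in exact rational arithmetic (our own illustration, not the FORTRAN programs from the report):

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (ascending-order lists)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def polyrem(a, b):
    """Remainder of a divided by b; ascending coefficient lists."""
    a = a[:]
    while len(a) >= len(b) and trim(a):
        f = a[-1] / b[-1]
        for i in range(len(b)):
            a[len(a) - len(b) + i] -= f * b[i]
        a.pop()
    return trim(a)

def polygcd(a, b):
    """Euclidean algorithm; result is made monic."""
    while b:
        a, b = b, polyrem(a, b)
    return [c / a[-1] for c in a]

def deriv(p):
    return [i * c for i, c in enumerate(p)][1:]

def squarefree(p):
    """p / gcd(p, p'): same roots as p, all simple."""
    g = polygcd(p, deriv(p))
    q, r = [], p[:]
    while len(r) >= len(g):                  # exact long division by g
        f = r[-1] / g[-1]
        q.insert(0, f)
        for i in range(len(g)):
            r[len(r) - len(g) + i] -= f * g[i]
        r.pop()
    return q

# p(x) = (x - 1)^2 (x - 2) = -2 + 5x - 4x^2 + x^3 has a double zero at 1.
p = [Fraction(c) for c in (-2, 5, -4, 1)]
print(squarefree(p))   # coefficients of (x - 1)(x - 2) = 2 - 3x + x^2
```

Exact rational arithmetic sidesteps the numerical fragility of gcd computations in floating point, which is the practical difficulty the repeated-G.C.D. variant addresses.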

  8. Equivalences of the multi-indexed orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odake, Satoru

    2014-01-15

    Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.

  9. Fusion Products of { s} { l} N Symmetric Power Representations and Kostka Polynomials

    NASA Astrophysics Data System (ADS)

    Kedem, Rinat

    2004-10-01

    We explain the relation between the Feigin-Loktev fusion product and the graded multiplicities of Specht modules in the integer cohomology ring of the GLN generalized flag manifold. We use only very basic notions, most notably the Schur-Weyl duality and the description of the cohomology ring as a quotient of the polynomial ring in N variables.

  10. Computational algebraic geometry for statistical modeling FY09Q2 progress.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David C.; Rojas, Joseph Maurice; Pebay, Philippe Pierre

    2009-03-01

    This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying the chamber cone containing a polynomial system in n variables with n+k terms within polynomial time - a significant improvement over previous algorithms, all having exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for n+3 and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it.

  11. Magnetic zero-modes, vortices and Cartan geometry

    NASA Astrophysics Data System (ADS)

    Ross, Calum; Schroers, Bernd J.

    2018-04-01

    We exhibit a close relation between vortex configurations on the 2-sphere and magnetic zero-modes of the Dirac operator on R^3 which obey an additional nonlinear equation. We show that both are best understood in terms of the geometry induced on the 3-sphere via pull-back of the round geometry with bundle maps of the Hopf fibration. We use this viewpoint to deduce a manifestly smooth formula for square-integrable magnetic zero-modes in terms of two homogeneous polynomials in two complex variables.

  12. Enhancing sparsity of Hermite polynomial expansions by iterative rotations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Xiu; Lei, Huan; Baker, Nathan A.

    2016-02-01

    Compressive sensing has become a powerful addition to uncertainty quantification in recent years. This paper identifies new bases for random variables through linear mappings such that the representation of the quantity of interest is more sparse with new basis functions associated with the new random variables. This sparsity increases both the efficiency and accuracy of the compressive sensing-based uncertainty quantification method. Specifically, we consider rotation-based linear mappings which are determined iteratively for Hermite polynomial expansions. We demonstrate the effectiveness of the new method with applications in solving stochastic partial differential equations and high-dimensional (O(100)) problems.

  13. Explaining Support Vector Machines: A Color Based Nomogram

    PubMed Central

    Van Belle, Vanya; Van Calster, Ben; Van Huffel, Sabine; Suykens, Johan A. K.; Lisboa, Paulo

    2016-01-01

    Problem setting Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods are used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models. Objective In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables. Results Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of polynomial kernel, width of RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable. Conclusions This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package and two apps and a movie are provided to illustrate the possibilities offered by the method. PMID:27723811

  14. A combinatorial model for the Macdonald polynomials.

    PubMed

    Haglund, J

    2004-11-16

    We introduce a polynomial C(mu)[Z; q, t], depending on a set of variables Z = z(1), z(2),..., a partition mu, and two extra parameters q, t. The definition of C(mu) involves a pair of statistics (maj(sigma, mu), inv(sigma, mu)) on words sigma of positive integers, and the coefficients of the z(i) are manifestly in N[q,t]. We conjecture that C(mu)[Z; q, t] is none other than the modified Macdonald polynomial H(mu)[Z; q, t]. We further introduce a general family of polynomials F(T)[Z; q, S], where T is an arbitrary set of squares in the first quadrant of the xy plane, and S is an arbitrary subset of T. The coefficients of the F(T)[Z; q, S] are in N[q], and C(mu)[Z; q, t] is a sum of certain F(T)[Z; q, S] times nonnegative powers of t. We prove F(T)[Z; q, S] is symmetric in the z(i) and satisfies other properties consistent with the conjecture. We also show how the coefficient of a monomial in F(T)[Z; q, S] can be expressed recursively. maple calculations indicate the F(T)[Z; q, S] are Schur-positive, and we present a combinatorial conjecture for their Schur coefficients when the set T is a partition with at most three columns.

  15. Optimization of Paclitaxel Containing pH-Sensitive Liposomes By 3 Factor, 3 Level Box-Behnken Design.

    PubMed

    Rane, Smita; Prabhakar, Bala

    2013-07-01

    The aim of this study was to investigate the combined influence of 3 independent variables in the preparation of paclitaxel containing pH-sensitive liposomes. A 3-factor, 3-level Box-Behnken design was used to derive a second-order polynomial equation and construct contour plots to predict responses. The independent variables selected were the molar ratio phosphatidylcholine:diolylphosphatidylethanolamine (X1), the molar concentration of cholesterylhemisuccinate (X2), and the amount of drug (X3). Fifteen batches were prepared by the thin film hydration method and evaluated for percent drug entrapment, vesicle size, and pH sensitivity. The transformed values of the independent variables and the percent drug entrapment were subjected to multiple regression to establish a full-model second-order polynomial equation. The F statistic was calculated to confirm the omission of insignificant terms from the full-model equation to derive a reduced-model polynomial equation to predict the dependent variables. Contour plots were constructed to show the effects of X1, X2, and X3 on the percent drug entrapment. The model was validated for accurate prediction of the percent drug entrapment by performing checkpoint analysis. The computer optimization process and contour plots predicted the levels of independent variables X1, X2, and X3 (0.99, -0.06, 0, respectively), for maximized response of percent drug entrapment with constraints on vesicle size and pH sensitivity.

  16. Autonomous manipulation on a robot: Summary of manipulator software functions

    NASA Technical Reports Server (NTRS)

    Lewis, R. A.

    1974-01-01

    A six degree-of-freedom computer-controlled manipulator is examined, and the relationships between the arm's joint variables and 3-space are derived. Arm trajectories using sequences of third-degree polynomials to describe the time history of each joint variable are presented and two approaches to the avoidance of obstacles are given. The equations of motion for the arm are derived and then decomposed into time-dependent factors and time-independent coefficients. Several new and simplifying relationships among the coefficients are proven. Two sample trajectories are analyzed in detail for purposes of determining the most important contributions to total force in order that relatively simple approximations to the equations of motion can be used.

  17. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
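
    The uniqueness claim is easy to check numerically: fitting a degree-2 polynomial through three noncollinear points recovers exactly one quadratic. A small sketch (the sample points are ours):

```python
import numpy as np

# Three noncollinear points determine a unique quadratic.
x = np.array([0.0, 1.0, 2.0])
y = np.array([1.0, 2.0, 5.0])

coeffs = np.polyfit(x, y, deg=2)    # highest power first
print(np.round(coeffs, 6))          # approximately [1, 0, 1], i.e. y = x**2 + 1

# The interpolant reproduces the data exactly.
print(np.allclose(np.polyval(coeffs, x), y))   # True
```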

  18. Generating the patterns of variation with GeoGebra: the case of polynomial approximations

    NASA Astrophysics Data System (ADS)

    Attorps, Iiris; Björk, Kjell; Radic, Mirko

    2016-01-01

    In this paper, we report a teaching experiment regarding the theory of polynomial approximations in university mathematics teaching in Sweden. The experiment was designed by applying Variation theory and by using the free dynamic mathematics software GeoGebra. The aim of this study was to investigate whether technology-assisted teaching of Taylor polynomials, compared with the traditional way of working at the university level, can support the teaching and learning of mathematical concepts and ideas. An engineering student group (n = 19) was taught Taylor polynomials with the assistance of GeoGebra while a control group (n = 18) was taught in a traditional way. The data were gathered by video recording the lectures, by administering a post-test concerning Taylor polynomials in both groups and by posing one question regarding Taylor polynomials in the final exam for the course in Real Analysis in one variable. In the analysis of the lectures, we found Variation theory combined with GeoGebra to be a potentially powerful tool for revealing some critical aspects of Taylor polynomials. Furthermore, the research results indicated that applying Variation theory when planning the technology-assisted teaching supported and enriched students' learning opportunities in the study group compared with the control group.

  19. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  20. The Coulomb problem on a 3-sphere and Heun polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bellucci, Stefano; Yeghikyan, Vahagn; Yerevan State University, Alex-Manoogian st. 1, 00025 Yerevan

    2013-08-15

    The paper studies the quantum mechanical Coulomb problem on a 3-sphere. We present a special parametrization of the ellipto-spheroidal coordinate system suitable for the separation of variables. After quantization we get the explicit form of the spectrum and present an algebraic equation for the eigenvalues of the Runge-Lenz vector. We also present the wave functions expressed via Heun polynomials.

  1. Geometrization and Generalization of the Kowalevski Top

    NASA Astrophysics Data System (ADS)

    Dragović, Vladimir

    2010-08-01

    A new view on the Kowalevski top and the Kowalevski integration procedure is presented. For more than a century, the Kowalevski case of 1889 has attracted the full attention of a wide community as the highlight of the classical theory of integrable systems. Despite hundreds of papers on the subject, the Kowalevski integration is still understood as a magic recipe, an unbelievable sequence of skillful tricks, unexpected identities and smart changes of variables. The novelty of our present approach is based on four observations. The first is that the so-called fundamental Kowalevski equation is an instance of a pencil equation from the theory of conics, which leads us to a new geometric interpretation of the Kowalevski variables w, x1, x2 as the pencil parameter and the Darboux coordinates, respectively. The second is the observation of a key algebraic property of the pencil equation, which is followed by the introduction and study of a new class of discriminantly separable polynomials. All steps of the Kowalevski integration procedure are now derived as easy and transparent logical consequences of our theory of discriminantly separable polynomials. The third observation connects the Kowalevski integration and the pencil equation with the theory of multi-valued groups. The Kowalevski change of variables is now recognized as an example of a two-valued group operation and its action. The final observation is the surprising equivalence of the associativity of the two-valued group operation and its action to the n = 3 case of the Great Poncelet Theorem for pencils of conics.

  2. Combined mixture-process variable approach: a suitable statistical tool for nanovesicular systems optimization.

    PubMed

    Habib, Basant A; AbouGhaly, Mohamed H H

    2016-06-01

    This study aims to illustrate the applicability of combined mixture-process variable (MPV) design and modeling for optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were glycerol amount in the hydration mixture (D) and sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R(2) values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn for responses representation. The optimized formulation, A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s, had desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values with maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for optimization of transfersomal formulations as an example of nanovesicular systems.

  3. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genest, Vincent X.; Vinet, Luc; Zhedanov, Alexei

    The algebra H of the dual -1 Hahn polynomials is derived and shown to arise in the Clebsch-Gordan problem of sl_{-1}(2). The dual -1 Hahn polynomials are the bispectral polynomials of a discrete argument obtained from the q → -1 limit of the dual q-Hahn polynomials. The Hopf algebra sl_{-1}(2) has four generators including an involution; it is also a q → -1 limit of the quantum algebra sl_q(2) and, furthermore, the dynamical algebra of the parabose oscillator. The algebra H, a two-parameter generalization of u(2) with an involution as additional generator, is first derived from the recurrence relation of the dual -1 Hahn polynomials. It is then shown that H can be realized in terms of the generators of two added sl_{-1}(2) algebras, so that the Clebsch-Gordan coefficients of sl_{-1}(2) are dual -1 Hahn polynomials. An irreducible representation of H involving five-diagonal matrices and connected to the difference equation of the dual -1 Hahn polynomials is constructed.

  4. A concatenated coding scheme for error control

    NASA Technical Reports Server (NTRS)

    Lin, S.

    1985-01-01

    A concatenated coding scheme for error control in data communications is analyzed. The inner code is used for both error correction and detection, while the outer code is used only for error detection. A retransmission is requested if the outer code detects the presence of errors after the inner code decoding. The probability of undetected error of the above error control scheme is derived and upper bounded. Two specific examples are analyzed. In the first example, the inner code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^6+X+1) = X^7+X^6+X^2+1 and the outer code is a distance-4 shortened Hamming code with generator polynomial (X+1)(X^15+X^14+X^13+X^12+X^4+X^3+X^2+X+1) = X^16+X^12+X^5+1, which is the X.25 standard for packet-switched data networks. This example is proposed for error control on NASA telecommand links. In the second example, the inner code is the same as that in the first example but the outer code is a shortened Reed-Solomon code with symbols from GF(2^8) and generator polynomial (X+1)(X+alpha), where alpha is a primitive element in GF(2^8).
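
    The generator polynomials above act by division over GF(2); a minimal bit-level sketch of that arithmetic for the X.25 outer code's generator X^16+X^12+X^5+1 (our own illustration of the remainder computation, not the concatenated scheme itself; the message value is arbitrary):

```python
G = 0b1_0001_0000_0010_0001   # x^16 + x^12 + x^5 + 1, bits = exponents

def gf2_mod(m, g):
    """Remainder of m(x) modulo g(x), with polynomials over GF(2)
    represented as integer bit masks."""
    while m.bit_length() >= g.bit_length():
        m ^= g << (m.bit_length() - g.bit_length())
    return m

msg = 0b1011_0111_0001        # arbitrary message polynomial
r = gf2_mod(msg << 16, G)     # remainder after appending 16 zero bits
codeword = (msg << 16) | r    # systematic codeword: message + check bits

print(gf2_mod(codeword, G))   # a valid codeword leaves remainder 0
```

The receiver repeats the same division; any nonzero remainder flags the block and triggers the retransmission request described above.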

  5. From Chebyshev to Bernstein: A Tour of Polynomials Small and Large

    ERIC Educational Resources Information Center

    Boelkins, Matthew; Miller, Jennifer; Vugteveen, Benjamin

    2006-01-01

    Consider the family of monic polynomials of degree n having zeros at -1 and +1 and all their other real zeros in between these two values. This article explores the size of these polynomials using the supremum of the absolute value on [-1, 1], showing that scaled Chebyshev and Bernstein polynomials give the extremes.
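A closely related classical fact underlies the extremal role Chebyshev polynomials play here: the monic Chebyshev polynomial T_n(x)/2^(n-1) attains the minimal sup norm 2^(1-n) on [-1, 1] among all monic polynomials of degree n. The article's family (zeros pinned at ±1) is different, but the following numerical sketch of the classical bound gives the flavor:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

n = 6
# Chebyshev coefficients of T_n, converted to power-series form and scaled
# by 2^(1-n) so the leading coefficient is 1 (monic).
coeffs = np.zeros(n + 1)
coeffs[n] = 1.0
monic_cheb = C.cheb2poly(coeffs) / 2 ** (n - 1)

# Sup norm on [-1, 1] via a dense grid; |T_n| attains its maximum 1 at x = ±1.
x = np.linspace(-1, 1, 20001)
sup = np.max(np.abs(np.polyval(monic_cheb[::-1], x)))
assert abs(sup - 2 ** (1 - n)) < 1e-9
```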

  6. Orthogonal basis with a conicoid first mode for shape specification of optical surfaces.

    PubMed

    Ferreira, Chelo; López, José L; Navarro, Rafael; Sinusía, Ester Pérez

    2016-03-07

    A rigorous and powerful theoretical framework is proposed to obtain systems of orthogonal functions (or shape modes) to represent optical surfaces. The method is general, so it can be applied to different initial shapes and different polynomials. Here we present results for surfaces with circular apertures when the first basis function (mode) is a conicoid. The system for aspheres with rotational symmetry is obtained by applying an appropriate change of variables to Legendre polynomials, whereas the system for the general freeform case is obtained by applying a similar procedure to spherical harmonics. Numerical comparisons with standard systems, such as Forbes and Zernike polynomials, are performed and discussed.
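The construction can be sketched as Gram-Schmidt orthonormalization on a radial grid, taking a conicoid sag as the first mode. The sag formula below is the standard conic expression; the grid, weight, conic parameters and candidate higher modes are simplifying assumptions, not the paper's actual basis.

```python
import numpy as np

r = np.linspace(0.0, 1.0, 2001)
w = r                                   # area weight for a circular aperture
dr = r[1] - r[0]

def inner(f, g):
    """Discrete radial inner product over the aperture (rectangle rule)."""
    return float(np.sum(f * g * w)) * dr

R, K = 2.0, -0.5                        # hypothetical radius of curvature, conic constant
conicoid = r**2 / (R + np.sqrt(R**2 - (1 + K) * r**2))   # standard conic sag

basis = []
for f in [conicoid, r**2, r**4]:        # orthonormalize successive candidate modes
    for b in basis:
        f = f - inner(f, b) * b
    basis.append(f / np.sqrt(inner(f, f)))

# Gram matrix of the resulting modes should be the identity.
G = np.array([[inner(a, b) for b in basis] for a in basis])
assert np.allclose(G, np.eye(3), atol=1e-6)
```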

  7. Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data

    NASA Astrophysics Data System (ADS)

    Bogdanova, Nina; Koleva, Mihaela

    2018-02-01

    Our original Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion approximating the curve on a non-equidistant point grid. Corridors around the given data, together with the chosen criteria, define the optimal behavior of the fitted curve. The most important subinterval of the spectral data, where the minimum (surface plasmon resonance absorption) is sought, is investigated. This study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming an AgNPs/ZnO nanocomposite heterostructure.

  8. Research on the Diesel Engine with Sliding Mode Variable Structure Theory

    NASA Astrophysics Data System (ADS)

    Ma, Zhexuan; Mao, Xiaobing; Cai, Le

    2018-05-01

    This study constructs a nonlinear mathematical model of the diesel engine high-pressure common rail (HPCR) system through polynomial fitting and treats it as an affine nonlinear system. Based on sliding-mode variable structure control (SMVSC) theory, a sliding-mode controller for affine nonlinear systems is designed to control the common rail pressure and the diesel engine's rotational speed. Finally, the designed nonlinear HPCR system is simulated on the MATLAB platform. The simulation results demonstrate that the sliding-mode variable structure control algorithm achieves favourable control performance, overcoming the shortcomings of traditional PID control in overshoot, parameter adjustment, system precision, settling time and rise time.
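As a minimal illustration of the sliding-mode idea (a toy double integrator stands in for the HPCR model; the gains and surface below are assumptions, not the paper's design): the control forces the state onto the sliding surface s = ė + c·e, after which the error decays along the surface.

```python
import numpy as np

c, k, dt = 2.0, 5.0, 1e-3   # surface slope, switching gain, time step (all illustrative)
x, v = 1.0, 0.0             # tracking error and its derivative

for _ in range(20000):      # simulate 20 s of the closed loop x'' = u
    s = v + c * x           # sliding variable
    u = -k * np.sign(s)     # discontinuous switching control
    v += u * dt
    x += v * dt

# After the reaching phase, x decays roughly like exp(-c*t) along s = 0.
assert abs(x) < 0.05
```

The hard `sign` switch is what produces the chattering that, in practice, motivates boundary-layer or higher-order sliding-mode refinements.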

  9. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.

  10. Improved multivariate polynomial factoring algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, P.S.

    1978-10-01

    A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading-coefficient problem, the bad-zero problem, and the occurrence of extraneous factors. It has an algorithm for correctly predetermining the leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically, it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included.

  11. Frequency domain system identification methods - Matrix fraction description approach

    NASA Technical Reports Server (NTRS)

    Horta, Luca G.; Juang, Jer-Nan

    1993-01-01

    This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum-order state-space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.
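A scalar sketch of the polynomial curve-fitting step, under the assumption of a Levy-style linearization (the paper fits matrix polynomials; here a single-input single-output fraction b0/(s + a0) is fitted to synthetic frequency data, so the residual equation is linear in the unknown coefficients):

```python
import numpy as np

# Synthetic "measured" frequency response of H(s) = 1/(s + 1).
w = np.linspace(0.1, 10.0, 50)
s = 1j * w
H = 1.0 / (s + 1.0)

# Fit H(s) ~ b0 / (s + a0): rearranged as  b0 - H*a0 = H*s, linear in (b0, a0).
A = np.column_stack([np.ones_like(s), -H])
b = H * s
# Stack real and imaginary parts so ordinary least squares applies.
A_ri = np.vstack([A.real, A.imag])
b_ri = np.concatenate([b.real, b.imag])
b0, a0 = np.linalg.lstsq(A_ri, b_ri, rcond=None)[0]

assert abs(b0 - 1.0) < 1e-8 and abs(a0 - 1.0) < 1e-8
```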

  12. Extending a Property of Cubic Polynomials to Higher-Degree Polynomials

    ERIC Educational Resources Information Center

    Miller, David A.; Moseley, James

    2012-01-01

    In this paper, the authors examine a property that holds for all cubic polynomials given two zeros. This property is discovered after reviewing a variety of ways to determine the equation of a cubic polynomial given specific conditions through algebra and calculus. At the end of the article, they will connect the property to a very famous method…

  13. A Geometric Method for Model Reduction of Biochemical Networks with Polynomial Rate Functions.

    PubMed

    Samal, Satya Swarup; Grigoriev, Dima; Fröhlich, Holger; Weber, Andreas; Radulescu, Ovidiu

    2015-12-01

    Model reduction of biochemical networks relies on the knowledge of slow and fast variables. We provide a geometric method, based on the Newton polytope, to identify slow variables of a biochemical network with polynomial rate functions. The gist of the method is the notion of tropical equilibration that provides approximate descriptions of slow invariant manifolds. Compared to extant numerical algorithms such as the intrinsic low-dimensional manifold method, our approach is symbolic and utilizes orders of magnitude instead of precise values of the model parameters. Application of this method to a large collection of biochemical network models supports the idea that the number of dynamical variables in minimal models of cell physiology can be small, in spite of the large number of molecular regulatory actors.

  14. Viewing the Roots of Polynomial Functions in Complex Variable: The Use of Geogebra and the CAS Maple

    ERIC Educational Resources Information Center

    Alves, Francisco Regis Vieira

    2013-01-01

    Admittedly, the Fundamental Theorem of Algebra (TFA) holds an important role in Complex Analysis (CA), as well as in other mathematical branches. In this article, we bring a discussion about the TFA, Rouché's theorem and the winding number, with the intention of analyzing the roots of a polynomial equation. We also propose a description for a…

  15. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.

  16. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  17. From Jack to Double Jack Polynomials via the Supersymmetric Bridge

    NASA Astrophysics Data System (ADS)

    Lapointe, Luc; Mathieu, Pierre

    2015-07-01

    The Calogero-Sutherland model occurs in a large number of physical contexts, either directly or via its eigenfunctions, the Jack polynomials. The supersymmetric counterpart of this model, although much less ubiquitous, has an equally rich structure. In particular, its eigenfunctions, the Jack superpolynomials, appear to share the very same remarkable combinatorial and structural properties as their non-supersymmetric version. These super-functions are parametrized by superpartitions with fixed bosonic and fermionic degrees. Now, a truly amazing feature pops out when the fermionic degree is sufficiently large: the Jack superpolynomials stabilize and factorize. Their stability is with respect to their expansion in terms of an elementary basis where, in the stable sector, the expansion coefficients become independent of the fermionic degree. Their factorization is seen when the fermionic variables are stripped off in a suitable way, which results in a product of two ordinary Jack polynomials (somewhat modified by plethystic transformations), dubbed the double Jack polynomials. Here, in addition to spelling out these results, which were first obtained in the context of Macdonald superpolynomials, we provide a heuristic derivation of the Jack superpolynomial case by performing simple manipulations on the supersymmetric eigen-operators, rendering them independent of the number of particles and of the fermionic degree. In addition, we work out the expression of the Hamiltonian which characterizes the double Jacks. This Hamiltonian, which defines a new integrable system, involves not only the expected Calogero-Sutherland pieces but also combinations of the generators of an underlying affine \widehat{sl}_2 algebra.

  18. Nonlinear channel equalization for QAM signal constellation using artificial neural networks.

    PubMed

    Patra, J C; Pal, R N; Baliarsingh, R; Panda, G

    1999-01-01

    Application of artificial neural networks (ANN's) to adaptive channel equalization in a digital communication system with 4-QAM signal constellation is reported in this paper. A novel computationally efficient single layer functional link ANN (FLANN) is proposed for this purpose. This network has a simple structure in which the nonlinearity is introduced by functional expansion of the input pattern by trigonometric polynomials. Because of input pattern enhancement, the FLANN is capable of forming arbitrarily nonlinear decision boundaries and can perform complex pattern classification tasks. Considering channel equalization as a nonlinear classification problem, the FLANN has been utilized for nonlinear channel equalization. The performance of the FLANN is compared with two other ANN structures [a multilayer perceptron (MLP) and a polynomial perceptron network (PPN)] along with a conventional linear LMS-based equalizer for different linear and nonlinear channel models. The effect of eigenvalue ratio (EVR) of input correlation matrix on the equalizer performance has been studied. The comparison of computational complexity involved for the three ANN structures is also provided.
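The trigonometric functional expansion at the heart of the FLANN can be sketched as follows; the expansion order, bias term and feature layout are illustrative choices, not taken from the paper:

```python
import numpy as np

def flann_expand(x: np.ndarray, order: int = 2) -> np.ndarray:
    """Enhance an input pattern x with trigonometric polynomial terms.

    The expanded vector [1, x, sin(k*pi*x), cos(k*pi*x), ...] feeds a single
    linear layer, so the network stays single-layer yet its decision
    boundaries in the original input space are nonlinear.
    """
    feats = [x]
    for k in range(1, order + 1):
        feats += [np.sin(np.pi * k * x), np.cos(np.pi * k * x)]
    return np.concatenate([[1.0], np.concatenate(feats)])

phi = flann_expand(np.array([0.5, -0.25]), order=2)
# 1 bias + 2 inputs + 2 inputs * 2 orders * (sin, cos) = 11 features
assert phi.shape == (11,)
```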

  19. Colour calibration of a laboratory computer vision system for quality evaluation of pre-sliced hams.

    PubMed

    Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul

    2009-01-01

    Due to the high variability and complex colour distribution in meats and meat products, colour signal calibration of any computer vision system used for colour quality evaluation is an essential condition for objective and consistent analyses. This paper compares two methods for CIE colour characterization using a computer vision system (CVS) based on digital photography, namely the polynomial transform procedure and the transform proposed by the sRGB standard. It also presents a procedure for evaluating the colour appearance and the presence of pores and fat-connective tissue on pre-sliced hams made from pork, turkey and chicken. Our results showed high precision in colour matching for device characterization when the polynomial transform was used to match the CIE tristimulus values, in comparison with the sRGB standard approach, as indicated by their ΔE*ab values. The [3×20] polynomial transfer matrix yielded a modelling accuracy averaging below 2.2 ΔE*ab units. Using the sRGB transform, high variability was observed among the computed ΔE*ab values (8.8±4.2). The calibrated laboratory CVS, implemented with a low-cost digital camera, exhibited reproducible colour signals in a wide range of colours, capable of pinpointing regions-of-interest, and allowed the extraction of quantitative information from the overall ham slice surface with high accuracy. The extracted colour and morphological features showed potential for characterizing the appearance of ham slice surfaces. CVS is a tool that can objectively specify colour and appearance properties of non-uniformly coloured commercial ham slices.
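The polynomial-transform step can be sketched as an ordinary least-squares fit from an expanded RGB term vector to tristimulus values. The 10-term basis and the synthetic data below are assumptions for illustration only (the paper's transfer matrix uses 20 terms per channel):

```python
import numpy as np

def expand(rgb: np.ndarray) -> np.ndarray:
    """Illustrative 10-term polynomial basis: constant, linear, quadratic, cross terms."""
    r, g, b = rgb
    return np.array([1, r, g, b, r*r, g*g, b*b, r*g, r*b, g*b])

rng = np.random.default_rng(0)
RGB = rng.random((200, 3))                  # synthetic camera RGB samples
M_true = rng.random((3, 10))                # hypothetical ground-truth [3 x 10] transform
XYZ = np.array([M_true @ expand(p) for p in RGB])

# Least-squares fit of the transfer matrix from the (RGB, XYZ) training pairs.
Phi = np.array([expand(p) for p in RGB])    # 200 x 10 design matrix
M_fit, *_ = np.linalg.lstsq(Phi, XYZ, rcond=None)   # 10 x 3 solution

assert np.allclose(M_fit.T, M_true, atol=1e-6)
```

With real measurements the fit is evaluated by the residual ΔE*ab of held-out colour patches rather than by recovering a known matrix.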

  20. Application of the polynomial chaos expansion to approximate the homogenised response of the intervertebral disc.

    PubMed

    Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W

    2014-10-01

    A possibility to simulate the mechanical behaviour of the human spine is given by modelling the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite-element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included in the MBS in two different ways. It can either be computed online in a so-called co-simulation of an MBS and an FEM, or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD needs to be defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step needs to be applied to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper is to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. The main challenge is then to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between. In a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Herein, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.
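The PCE idea in one dimension, as a hedged sketch: a response that depends polynomially on a standard normal input is expanded in probabilists' Hermite polynomials, and the coefficients are recovered by regression. (The paper's expansions are high-dimensional and fitted to finite-element pre-computations; the cubic response below is invented for the example.)

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(1)
xi = rng.standard_normal(2000)        # samples of the standard normal input
y = xi**3 - 2.0*xi + 0.5              # hypothetical model response (a cubic)

# Design matrix of probabilists' Hermite polynomials He_0..He_3 at the samples.
V = hermevander(xi, 3)
c, *_ = np.linalg.lstsq(V, y, rcond=None)

# Since He_3 = x^3 - 3x, the exact expansion is 0.5*He_0 + 1*He_1 + 0*He_2 + 1*He_3.
assert np.allclose(c, [0.5, 1.0, 0.0, 1.0], atol=1e-8)
```

Once the coefficients are in hand, evaluating the polynomial replaces the expensive model inside any outer loop (sampling, optimization), which is exactly the surrogate role PCE plays here.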

  1. Application of polynomial su(1, 1) algebra to Pöschl-Teller potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong-Biao, E-mail: zhanghb017@nenu.edu.cn; Lu, Lu

    2013-12-15

    Two novel polynomial su(1, 1) algebras for the physical systems with the first and second Pöschl-Teller (PT) potentials are constructed, and their specific representations are presented. Meanwhile, these polynomial su(1, 1) algebras are used as an algebraic technique to solve for the eigenvalues and eigenfunctions of the Hamiltonians associated with the first and second PT potentials. The algebraic approach explores an appropriate new pair of raising and lowering operators K̂_± of the polynomial su(1, 1) algebra as a pair of shift operators of our Hamiltonians. In addition, two usual su(1, 1) algebras associated with the first and second PT potentials are derived naturally from the polynomial su(1, 1) algebras built by us.

  2. Entanglement of coherent superposition of photon-subtraction squeezed vacuum

    NASA Astrophysics Data System (ADS)

    Liu, Cun-Jin; Ye, Wei; Zhou, Wei-Dong; Zhang, Hao-Liang; Huang, Jie-Hui; Hu, Li-Yun

    2017-10-01

    A new kind of non-Gaussian quantum state is introduced by applying a nonlocal coherent superposition (τa + sb)^m of photon subtraction to two single-mode squeezed vacuum states, and the properties of entanglement are investigated according to the degree of entanglement and the average fidelity of quantum teleportation. The state can be seen as a single-variable Hermite-polynomial excited squeezed vacuum state, and its normalization factor is related to the Legendre polynomial. It is shown that, for τ = s, the maximum fidelity can be achieved, even over the classical limit (1/2), only for even-order operation m and equivalent squeezing parameters in a certain region. However, the maximum entanglement can be achieved for squeezing parameters with a π phase difference. These results indicate that the optimal realizations of fidelity and entanglement can differ from one another. In addition, the parameter τ/s has an obvious effect on entanglement and fidelity.

  3. Potts-model critical manifolds revisited

    DOE PAGES

    Scullard, Christian R.; Jacobsen, Jesper Lykke

    2016-02-11

    We compute the critical polynomials for the q-state Potts model on all Archimedean lattices, using a parallel implementation of the algorithm of Ref. [1] that gives us access to larger sizes than previously possible. The exact polynomials are computed for bases of size 6×6 unit cells, and the root in the temperature variable v = e^K − 1 is determined numerically at q = 1 for bases of size 8×8. This leads to improved results for bond percolation thresholds, and for the Potts-model critical manifolds in the real (q, v) plane. In the two most favourable cases, we now find the kagome-lattice threshold to eleven digits and that of the (3, 12^2) lattice to thirteen. Our critical manifolds reveal many interesting features in the antiferromagnetic region of the Potts model, and determine accurately the extent of the Berker-Kadanoff phase for the lattices studied.
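As a one-line sanity check of the critical-polynomial framework on the exactly solved square lattice (not one of the hard cases treated in the paper): the self-dual criticality condition v^2 = q, with v = e^K − 1, reproduces the bond percolation threshold p_c = 1/2 in the q → 1 limit.

```python
import numpy as np

# Square-lattice Potts criticality: v_c = sqrt(q).  At q = 1 (bond percolation),
# the occupation probability is p = v / (1 + v), giving the exact p_c = 1/2.
q = 1.0
v_c = np.sqrt(q)
p_c = v_c / (1 + v_c)
assert abs(p_c - 0.5) < 1e-12
```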

  4. Isogeometric Analysis of Boundary Integral Equations

    DTIC Science & Technology

    2015-04-21

    methods, IgA relies on Non-Uniform Rational B-splines (NURBS) [43, 46], T-splines [55, 53] or subdivision surfaces [21, 48, 51] rather than piecewise…structural dynamics [25, 26], plates and shells [15, 16, 27, 28, 37, 22, 23], phase-field models [17, 32, 33], and shape optimization [40, 41, 45, 59]…polynomials for approximating the geometry and field variables. Thus, by replacing piecewise polynomials with NURBS or T-splines, one can develop

  5. A study of the orthogonal polynomials associated with the quantum harmonic oscillator on constant curvature spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vignat, C.; Lamberti, P. W.

    2009-10-15

    Recently, Carinena et al. [Ann. Phys. 322, 434 (2007)] introduced a new family of orthogonal polynomials that appear in the wave functions of the quantum harmonic oscillator in two-dimensional constant curvature spaces. They are a generalization of the Hermite polynomials and will be called curved Hermite polynomials in the following. We show that these polynomials are naturally related to the relativistic Hermite polynomials introduced by Aldaya et al. [Phys. Lett. A 156, 381 (1991)], and thus are Jacobi polynomials. Moreover, we exhibit a natural bijection between the solutions of the quantum harmonic oscillator on negative curvature spaces and on positive curvature spaces. Finally, we show a maximum entropy property for the ground states of these oscillators.

  6. Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Choon-Lin, E-mail: hcl@mail.tku.edu.tw

    2011-04-15

    Research Highlights: > Physical examples involving exceptional orthogonal polynomials. > Exceptional polynomials as deformations of classical orthogonal polynomials. > Exceptional polynomials from Darboux-Crum transformations. - Abstract: An interesting discovery in the last two years in the field of mathematical physics has been the exceptional X_l Laguerre and Jacobi polynomials. Unlike the well-known classical orthogonal polynomials, which start with constant terms, these new polynomials have lowest degree l = 1, 2, ..., and yet they form a complete set with respect to some positive-definite measure. While the mathematical properties of these new X_l polynomials deserve further analysis, it is also of interest to see if they play any role in physical systems. In this paper we indicate some physical models in which these new polynomials appear as the main part of the eigenfunctions. The systems we consider include the Dirac equations coupled minimally and non-minimally with some external fields, and the Fokker-Planck equations. The systems presented here enlarge the number of exactly solvable physical systems known so far.

  7. New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.

    PubMed

    Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María

    2017-08-01

    In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. To perform the validation process, simulated worn modern human molars were employed. The errors associated with the measurements were also estimated, applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, in comparison with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This significantly improves the results of other methodologies, both in the interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record and forensic sciences. © 2017 Wiley Periodicals, Inc.
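The core idea can be sketched as fitting a polynomial regression to the preserved portion of a tooth profile and extrapolating it over the worn region. The profile, wear boundary and polynomial degree below are hypothetical, purely to illustrate the mechanics:

```python
import numpy as np

# Hypothetical full crown profile (height as a function of normalized position).
x = np.linspace(0.0, 1.0, 50)
profile = 1.0 - 0.8 * x**2 + 0.1 * x**3

# Simulate wear: the region x >= 0.7 is missing and must be reconstructed.
preserved = x < 0.7

# Fit a cubic regression to the preserved part, extrapolate over the worn part.
coeffs = np.polyfit(x[preserved], profile[preserved], deg=3)
reconstructed = np.polyval(coeffs, x[~preserved])

# Here the true profile is itself cubic, so the reconstruction is essentially exact.
assert np.max(np.abs(reconstructed - profile[~preserved])) < 1e-6
```

With real enamel profiles the regression degree and the reconstruction error are, of course, exactly what the validation step in the paper quantifies.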

  8. Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murcia, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay

    Polynomial surrogates are used to characterize the energy production and lifetime-equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating independent surrogates for the mean and standard deviation of each output with respect to the inflow realizations. A global sensitivity analysis shows that the turbulent inflow realization has a bigger impact on the total distribution of equivalent fatigue loads than the shear coefficient or yaw misalignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables such as damage-equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other workflows, such as estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces. The surrogates are thus a way to obtain power and load estimates under site-specific characteristics without sharing the proprietary aeroelastic design.

  10. Analytical solution of tt¯ dilepton equations

    NASA Astrophysics Data System (ADS)

    Sonnenschein, Lars

    2006-03-01

    The top quark antiquark production system in the dilepton decay channel is described by a set of equations which is nonlinear in the unknown neutrino momenta. Its most precise and least time-consuming solution is of major importance for measurements of top quark properties such as the top quark mass and tt¯ spin correlations. The initial system of equations can be transformed into two polynomial equations with two unknowns by means of elementary algebraic operations. These two polynomials of multidegree two can be reduced to one univariate polynomial of degree four by means of resultants. The obtained quartic equation is solved analytically.
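A toy version of the elimination step (not the actual tt¯ kinematics): from the invented bivariate system x² + y − 3 = 0, x + y² − 3 = 0, eliminating x (here simply x = 3 − y² from the second equation, which agrees with the resultant) yields a single univariate quartic, (3 − y²)² + y − 3 = y⁴ − 6y² + y + 6 = 0, whose roots recover every common solution of the pair.

```python
import numpy as np

# Univariate quartic obtained by eliminating x from the two bivariate equations.
quartic = [1.0, 0.0, -6.0, 1.0, 6.0]    # y^4 - 6y^2 + y + 6

# Each quartic root, back-substituted, must satisfy BOTH original equations.
for y in np.roots(quartic):
    x = 3 - y**2
    assert abs(x**2 + y - 3) < 1e-8
    assert abs(x + y**2 - 3) < 1e-8
```

In the physics problem the same pattern holds: the quartic in one neutrino-momentum component is solved analytically, and back-substitution restores the full solution set.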

  11. Uncertainty Quantification in CO 2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms in the expansion small. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.

  12. Combinatorial theory of Macdonald polynomials I: proof of Haglund's formula.

    PubMed

    Haglund, J; Haiman, M; Loehr, N

    2005-02-22

    Haglund recently proposed a combinatorial interpretation of the modified Macdonald polynomials H(mu). We give a combinatorial proof of this conjecture, which establishes the existence and integrality of H(mu). As corollaries, we obtain the cocharge formula of Lascoux and Schützenberger for Hall-Littlewood polynomials, a formula of Sahi and Knop for Jack's symmetric functions, a generalization of this result to the integral Macdonald polynomials J(mu), a formula for H(mu) in terms of Lascoux-Leclerc-Thibon polynomials, and combinatorial expressions for the Kostka-Macdonald coefficients K(lambda,mu) when mu is a two-column shape.

  13. Multi-indexed (q-)Racah polynomials

    NASA Astrophysics Data System (ADS)

    Odake, Satoru; Sasaki, Ryu

    2012-09-01

    As the second stage of the project multi-indexed orthogonal polynomials, we present, in the framework of ‘discrete quantum mechanics’ with real shifts in one dimension, the multi-indexed (q-)Racah polynomials. They are obtained from the (q-)Racah polynomials by the multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of ‘virtual state’ vectors, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier. The virtual state vectors are the ‘solutions’ of the matrix Schrödinger equation with negative ‘eigenvalues’, except for one of the two boundary points.

  14. Solitons interaction and integrability for a (2+1)-dimensional variable-coefficient Broer-Kaup system in water waves

    NASA Astrophysics Data System (ADS)

    Zhao, Xue-Hui; Tian, Bo; Guo, Yong-Jiang; Li, Hui-Min

    2018-03-01

    Under investigation in this paper is a (2+1)-dimensional variable-coefficient Broer-Kaup system in water waves. Via symbolic computation, Bell polynomials and the Hirota method, the Bäcklund transformation, Lax pair, bilinear forms, and one- and two-soliton solutions are derived. Propagation and interaction of the solitons are illustrated: the amplitude and shape of the one soliton remain invariant during propagation, which implies that the transport of energy is stable for the (2+1)-dimensional water waves; inelastic interactions between the two solitons are also discussed. Elastic interactions between the two parabolic-, cubic- and periodic-type solitons are displayed, where the solitonic amplitudes and shapes remain unchanged except for certain phase shifts. Inelastically, however, the amplitudes of the two solitons show a linear superposition after each interaction, which is called a soliton resonance phenomenon.

  15. The Ritz - Sublaminate Generalized Unified Formulation approach for piezoelectric composite plates

    NASA Astrophysics Data System (ADS)

    D'Ottavio, Michele; Dozio, Lorenzo; Vescovini, Riccardo; Polit, Olivier

    2018-01-01

    This paper extends the variable-kinematics plate modeling approach called Sublaminate Generalized Unified Formulation (SGUF) to composite plates including piezoelectric plies. Two-dimensional plate equations are obtained upon defining a priori the through-thickness distribution of the displacement field and electric potential. According to SGUF, independent approximations can be adopted for the four components of these generalized displacements: an Equivalent Single Layer (ESL) or Layer-Wise (LW) description over an arbitrary group of plies constituting the composite plate (the sublaminate), and the polynomial order employed in each sublaminate. The solution of the two-dimensional equations is sought in weak form by means of a Ritz method. In this work, boundary functions are used in conjunction with the domain approximation expressed by an orthogonal basis spanned by Legendre polynomials. The proposed computational tool is capable of representing electroded surfaces with equipotentiality conditions. Free-vibration problems as well as static problems involving actuator and sensor configurations are addressed. Two case studies are presented, which demonstrate the high accuracy of the proposed Ritz-SGUF approach. A model assessment shows the extent to which the SGUF approach allows a reduction of the number of unknowns with a controlled impact on the accuracy of the results.

  16. Hermite Functional Link Neural Network for Solving the Van der Pol-Duffing Oscillator Equation.

    PubMed

    Mall, Susmita; Chakraverty, S

    2016-08-01

    A Hermite polynomial-based functional link artificial neural network (FLANN) is proposed here to solve the Van der Pol-Duffing oscillator equation. A single-layer Hermite neural network (HeNN) model is used, where the hidden layer is replaced by an expansion block that maps the input pattern through Hermite orthogonal polynomials. A feedforward neural network model with the unsupervised error backpropagation principle is used for modifying the network parameters and minimizing the computed error function. The Van der Pol-Duffing and Duffing oscillator equations may not be solvable exactly. Here, approximate solutions of these types of equations have been obtained by applying the HeNN model for the first time. Three mathematical example problems and two real-life application problems of the Van der Pol-Duffing oscillator equation, extracting the features of an early mechanical failure signal and weak signal detection, are solved using the proposed HeNN method. The HeNN approximate solutions have been compared with results obtained by the well-known Runge-Kutta method. Computed results are depicted in terms of graphs. After training the HeNN model, we may use it as a black box to get numerical results at any arbitrary point in the domain. Thus, the proposed HeNN method is efficient. The results reveal that this method is reliable and can be applied to other nonlinear problems too.
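The expansion block of such a FLANN can be sketched as follows; this is a generic illustration, not the authors' exact architecture (the number of terms and the use of physicists' Hermite polynomials H_n are assumptions):

```python
def hermite_features(x, n_terms=5):
    """FLANN expansion block: physicists' Hermite polynomials H_0..H_{n-1}
    of the scalar input, via the recurrence H_{k+1} = 2x*H_k - 2k*H_{k-1}."""
    feats = [1.0, 2.0 * x]
    for k in range(1, n_terms - 1):
        feats.append(2.0 * x * feats[-1] - 2.0 * k * feats[-2])
    return feats[:n_terms]

def henn_forward(x, weights):
    """Single linear output neuron over the expanded input pattern."""
    return sum(w * f for w, f in zip(weights, hermite_features(x, len(weights))))
```

Training then amounts to adjusting only the output weights, e.g. by gradient descent on the residual of the differential equation, since the expansion block itself has no adjustable parameters.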

  17. Soliton interactions, Bäcklund transformations, Lax pair for a variable-coefficient generalized dispersive water-wave system

    NASA Astrophysics Data System (ADS)

    Liu, Lei; Tian, Bo; Zhen, Hui-Ling; Liu, De-Yin; Xie, Xi-Yang

    2018-04-01

    Under investigation in this paper is a variable-coefficient generalized dispersive water-wave system, which can simulate the propagation of long weakly non-linear and weakly dispersive surface waves of variable depth in shallow water. Under certain variable-coefficient constraints, by virtue of the Bell polynomials, Hirota method and symbolic computation, the bilinear forms and one- and two-soliton solutions are obtained. Bäcklund transformations and a new Lax pair are also obtained; our Lax pair is different from that previously reported. Based on asymptotic and graphic analysis, with different forms of the variable coefficients, we find that the interactions are elastic for u, while either elastic or inelastic for v, with u and v as the horizontal velocity field and the deviation height from the equilibrium position of the water, respectively. When the interactions are inelastic, we see fission and fusion phenomena.

  18. Application of Statistical Experimental Design to Assess the Effect of Gamma-Irradiation Pre-Treatment on the Drying Characteristics and Qualities of Wheat

    NASA Astrophysics Data System (ADS)

    Yu, Yong; Wang, Jun

    Wheat, pretreated by 60Co gamma irradiation, was dried by hot air with irradiation dosage 0-3 kGy, drying temperature 40-60 °C, and initial moisture contents 19-25% (drying basis). The drying characteristics and dried qualities of wheat were evaluated based on drying time, average dehydration rate, wet gluten content (WGC), moisture content of wet gluten (MCWG) and titratable acidity (TA). A quadratic rotation-orthogonal composite experimental design, with three variables (at five levels) and five response functions, and the corresponding analysis method were employed to study the effect of the three variables on the individual response functions. The five response functions (drying time, average dehydration rate, WGC, MCWG, TA) were correlated with these variables by second-order polynomials consisting of linear, quadratic and interaction terms. A high correlation coefficient indicated the suitability of the second-order polynomial to predict these response functions. The linear, interaction and quadratic effects of the three variables on the five response functions were all studied.

  19. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves, are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expansion coefficients is also given. A simple approach to constructing and recursively solving for the connection coefficients between two families of Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.

  20. Constructing a polynomial whose nodal set is the three-twist knot 5₂

    NASA Astrophysics Data System (ADS)

    Dennis, Mark R.; Bode, Benjamin

    2017-06-01

    We describe a procedure that creates an explicit complex-valued polynomial function of three-dimensional space whose nodal lines are the three-twist knot 5₂. The construction generalizes a similar approach for lemniscate knots: a braid representation is engineered from finite Fourier series and then considered as the nodal set of a certain complex polynomial which depends on an additional parameter. For sufficiently small values of this parameter, the nodal lines form the three-twist knot. Further mathematical properties of this map are explored, including the relationship of the phase critical points with the Morse-Novikov number, which is nonzero as this knot is not fibred. We also find analogous functions for other simple knots and links. The particular function we find, and the general procedure, should be useful for designing knotted fields of particular knot types in various physical systems.

  1. Poisson traces, D-modules, and symplectic resolutions

    NASA Astrophysics Data System (ADS)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  2. Poisson traces, D-modules, and symplectic resolutions.

    PubMed

    Etingof, Pavel; Schedler, Travis

    2018-01-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  3. Neural Network and Response Surface Methodology for Rocket Engine Component Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar; Papita, Nilay; Shyy, Wei; Tucker, P. Kevin; Griffin, Lisa W.; Haftka, Raphael; Fitz-Coy, Norman; McConnaughey, Helen (Technical Monitor)

    2000-01-01

    The goal of this work is to compare the performance of response surface methodology (RSM) and two types of neural networks (NN) to aid preliminary design of two rocket engine components. A data set of 45 training points and 20 test points obtained from a semi-empirical model based on three design variables is used for a shear coaxial injector element. Data for supersonic turbine design is based on six design variables, with 76 training data and 18 test data obtained from simplified aerodynamic analysis. Several RS and NN are first constructed using the training data. The test data are then employed to select the best RS or NN. Quadratic and cubic response surfaces, radial basis neural networks (RBNN) and back-propagation neural networks (BPNN) are compared. Two-layered RBNN are generated using two different training algorithms, namely solverbe and solverb. A two-layered BPNN is generated with a Tan-Sigmoid transfer function. Various issues related to the training of the neural networks are addressed, including the number of neurons, error goals, spread constants and the accuracy of different models in representing the design space. A search for the optimum design is carried out using a standard gradient-based optimization algorithm over the response surfaces represented by the polynomials and trained neural networks. Usually the cubic polynomial performs better than the quadratic polynomial, but exceptions have been noticed. Among the NN choices, the RBNN designed using solverb yields more consistent performance for both engine components considered. The training of RBNN is easier as it requires linear regression. This, coupled with its consistency in performance, promises the possibility of its use as an optimization strategy for engineering design problems.

  4. Explicit bounds for the positive root of classes of polynomials with applications

    NASA Astrophysics Data System (ADS)

    Herzberger, Jürgen

    2003-03-01

    We consider a certain type of polynomial equations for which, according to Descartes' rule of signs, there exists only one simple positive root. These equations occur in Numerical Analysis when calculating or estimating the R-order or Q-order of convergence of certain iterative processes with an error-recursion of special form. On the other hand, these polynomial equations are very common as defining equations for the effective rate of return of certain cashflows, like bonds or annuities, in finance. The effective rate of interest i* for those cashflows is i* = q* - 1, where q* is the unique positive root of such a polynomial. We construct bounds for i* for a special problem concerning an ordinary simple annuity which is obtained by changing the conditions of such an annuity with given data, applying the German rule (Preisangabeverordnung, PAngV for short). Moreover, we consider a number of results for such polynomial roots in Numerical Analysis, showing that by a simple variable transformation several formulas can be derived from earlier results. The same is possible in finance in order to generalize results to more complicated cashflows.
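A minimal sketch of the finance side, with hypothetical cashflow numbers (not the PAngV example of the paper): the effective-rate polynomial of an ordinary annuity has exactly one sign change in its coefficients, so Descartes' rule guarantees a unique positive root, which bisection then finds reliably:

```python
def g(q, pv, r, n):
    """Defining polynomial of the effective-rate equation for an ordinary
    annuity: pv*q^n - r*(q^(n-1) + ... + q + 1). Its coefficient sequence
    has exactly one sign change, so by Descartes' rule of signs there is
    exactly one simple positive root q*, and i* = q* - 1."""
    return pv * q**n - r * sum(q**k for k in range(n))

def positive_root(pv, r, n, lo=1.0, hi=2.0, tol=1e-12):
    # simple bisection; assumes the root lies in (lo, hi), i.e. g(lo) < 0 < g(hi)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if g(mid, pv, r, n) > 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# hypothetical cashflow: price 1000, ten yearly payments of 120
q_star = positive_root(1000.0, 120.0, 10)
i_star = q_star - 1.0   # effective rate of interest
```

The bounds constructed in the paper would bracket i_star without iterating; bisection is shown here only as the simplest certain method for the unique positive root.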

  5. Maximum likelihood decoding of Reed Solomon Codes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sudan, M.

    We present a randomized algorithm which takes as input n distinct points (x_i, y_i), i = 1, ..., n, from F × F (where F is a field) and integer parameters t and d and returns a list of all univariate polynomials f over F in the variable x of degree at most d which agree with the given set of points in at least t places (i.e., y_i = f(x_i) for at least t values of i), provided t = Ω(√(nd)). The running time is bounded by a polynomial in n. This immediately provides a maximum likelihood decoding algorithm for Reed Solomon Codes, which works in a setting with a larger number of errors than any previously known algorithm. To the best of our knowledge, this is the first efficient (i.e., polynomial time bounded) algorithm which provides some maximum likelihood decoding for any efficient (i.e., constant or even polynomial rate) code.

  6. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
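A minimal sketch of a Chebyshev polynomial smoother, assuming eigenvalue bounds for the SPD matrix are known (here from Gershgorin discs); each sweep is a Richardson step whose relaxation parameter is a root of the Chebyshev polynomial shifted to the eigenvalue interval, so only matrix-vector products are needed, unlike the sequential sweeps of Gauss-Seidel:

```python
import math

def matvec(A, x):
    return [sum(a * xj for a, xj in zip(row, x)) for row in A]

def chebyshev_smooth(A, b, x, lmin, lmax, steps):
    """Polynomial smoother: Richardson sweeps whose relaxation parameters
    are the roots of the degree-`steps` Chebyshev polynomial shifted to
    [lmin, lmax]; the error is multiplied by the Chebyshev-optimal
    polynomial p with p(0) = 1, small on the whole interval."""
    theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
    for k in range(steps):
        root = theta + delta * math.cos(math.pi * (2 * k + 1) / (2 * steps))
        r = [bi - axi for bi, axi in zip(b, matvec(A, x))]
        x = [xi + ri / root for xi, ri in zip(x, r)]
    return x

# small SPD test matrix; Gershgorin discs give eigenvalue bounds [1, 5]
A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x_new = chebyshev_smooth(A, b, [0.0, 0.0, 0.0], 1.0, 5.0, steps=4)
```

Four such sweeps on this example already shrink the residual by more than an order of magnitude; in a multigrid setting only the first few steps would be used, since the smoother needs to damp high-frequency error rather than solve the system.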

  7. Squeezed states and Hermite polynomials in a complex variable

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ali, S. Twareque, E-mail: twareque.ali@concordia.ca; Górska, K., E-mail: katarzyna.gorska@ifj.edu.pl; Horzela, A., E-mail: andrzej.horzela@ifj.edu.pl

    2014-01-15

    Following the lines of the recent paper of J.-P. Gazeau and F. H. Szafraniec [J. Phys. A: Math. Theor. 44, 495201 (2011)], we construct here three types of coherent states, related to the Hermite polynomials in a complex variable which are orthogonal with respect to a non-rotationally invariant measure. We investigate relations between these coherent states and obtain the relationship between them and the squeezed states of quantum optics. We also obtain a second realization of the canonical coherent states in the Bargmann space of analytic functions, in terms of a squeezed basis. All this is done in the flavor of the classical approach of V. Bargmann [Commun. Pure Appl. Math. 14, 187 (1961)].

  8. Monograph on the use of the multivariate Gram Charlier series Type A

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hatayodom, T.; Heydt, G.

    1978-01-01

    The Gram-Charlier series is an infinite series expansion for a probability density function (pdf) in which the terms of the series are Hermite polynomials. There are several Gram-Charlier series; the best known is Type A. The Gram-Charlier series, Type A (GCA) exists for both univariate and multivariate random variables. This monograph introduces the multivariate GCA and illustrates its use through several examples. A brief bibliography and discussion of Hermite polynomials is also included. 9 figures, 2 tables.
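In the univariate case, the Type A series truncated after the fourth term takes the following form; this sketch (with illustrative skewness and excess-kurtosis values) is generic and not taken from the monograph:

```python
import math

def gca_density(x, skew=0.0, exkurt=0.0):
    """Univariate Gram-Charlier Type A pdf truncated after the fourth term:
    phi(x) * (1 + skew/6 * He3(x) + exkurt/24 * He4(x)) for a standardized
    variable, with probabilists' Hermite polynomials He3, He4."""
    phi = math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)
    he3 = x**3 - 3.0 * x
    he4 = x**4 - 6.0 * x**2 + 3.0
    return phi * (1.0 + skew / 6.0 * he3 + exkurt / 24.0 * he4)
```

Because the Hermite correction terms integrate to zero against phi, the truncated series still integrates to one, though it can go slightly negative in the tails for large skewness or kurtosis.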

  9. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the weights of the best discovered network by a specially derived backpropagation algorithm for higher-order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs which considerably outperform some previous constructive polynomial network algorithms on processing benchmark time series.

  10. Orthogonal Polynomials Associated with Complementary Chain Sequences

    NASA Astrophysics Data System (ADS)

    Behera, Kiran Kumar; Sri Ranga, A.; Swaminathan, A.

    2016-07-01

    Using the minimal parameter sequence of a given chain sequence, we introduce the concept of complementary chain sequences, which we view as perturbations of chain sequences. Using the relation between these complementary chain sequences and the corresponding Verblunsky coefficients, the para-orthogonal polynomials and the associated Szegő polynomials are analyzed. Two illustrations, one involving Gaussian hypergeometric functions and the other involving Carathéodory functions are also provided. A connection between these two illustrations by means of complementary chain sequences is also observed.

  11. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decision sheds light on key regulators, facilitates understanding the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we proposed a polynomial-based model to predict cell fate. This model was derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Then polynomials of different degrees were used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resultant cell fate prediction model by evaluating the ranges of the parameters, as well as assessing the variances of the predicted values at randomly selected points. Results show that, with both of the considered gene selection methods, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree-1 polynomial) is more stable than the others. When comparing the linear polynomials based on the two gene selection methods, it turns out that although the accuracy of the linear polynomial that uses the correlation analysis outcomes is a little higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both the prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical studies of cell-development-related diseases.

  12. Wetlands explain most in the genetic divergence pattern of Oncomelania hupensis.

    PubMed

    Liang, Lu; Liu, Yang; Liao, Jishan; Gong, Peng

    2014-10-01

    Understanding the divergence patterns of hosts could shed light on the prediction of their parasite transmission. No effort has been devoted to understanding the drivers of the genetic divergence pattern of Oncomelania hupensis, the only intermediate host of Schistosoma japonicum. Based on a compilation of two O. hupensis gene datasets covering a wide geographic range in China and an array of geographical distance and environmental dissimilarity metrics built from earth observation data and ecological niche modeling, we conducted causal modeling analysis via simple and partial Mantel tests and local polynomial fitting to understand the interactions among isolation-by-distance, isolation-by-environment, and genetic divergence. We found that geography contributes more to genetic divergence than environmental isolation, and among all variables involved, wetland showed the strongest correlation with the genetic pairwise distances. These results suggest that in China, O. hupensis dispersal is strongly linked to the distribution of wetlands, and the current divergence pattern of both O. hupensis and schistosomiasis might be altered by the changed wetland pattern following the completion of the Three Gorges Dam and the South-to-North water transfer project. Copyright © 2014 Elsevier B.V. All rights reserved.

  13. Interbasis expansions in the Zernike system

    NASA Astrophysics Data System (ADS)

    Atakishiyev, Natig M.; Pogosyan, George S.; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation with free boundary conditions on the unit disk that was proposed by Frits Zernike in 1934 to find Jacobi polynomial solutions (indicated as I) serves to define a classical system and a quantum system which have been found to be superintegrable. We have determined two new orthogonal polynomial solutions (indicated as II and III) that are separable and involve Legendre and Gegenbauer polynomials. Here we report on their three interbasis expansion coefficients: between the I-II and I-III bases, they are given by hypergeometric 3F2(⋯|1) polynomials that are also special su(2) Clebsch-Gordan coefficients and Hahn polynomials. Between the II-III bases, we find an expansion expressed by 4F3(⋯|1)'s and Racah polynomials that are related to the Wigner 6j coefficients.

  14. Evaluation of alternative model selection criteria in the analysis of unimodal response curves using CART

    USGS Publications Warehouse

    Ribic, C.A.; Miller, T.W.

    1998-01-01

    We investigated CART performance with a unimodal response curve for one continuous response and four continuous explanatory variables, where two variables were important (i.e., directly related to the response) and the other two were not. We explored performance under three relationship strengths and two explanatory variable conditions: equal importance, and one variable four times as important as the other. We compared CART variable selection performance using three tree-selection rules ('minimum risk', 'minimum risk complexity', 'one standard error') to stepwise polynomial ordinary least squares (OLS) under four sample size conditions. The one-standard-error and minimum-risk-complexity methods performed about as well as stepwise OLS with large sample sizes when the relationship was strong. With weaker relationships, equally important explanatory variables and larger sample sizes, the one-standard-error and minimum-risk-complexity rules performed better than stepwise OLS. With weaker relationships and explanatory variables of unequal importance, tree-structured methods did not perform as well as stepwise OLS. Comparing performance within the tree-structured methods, the one-standard-error rule was more likely to choose the correct model than the other tree-selection rules: 1) with a strong relationship and equally important explanatory variables; 2) with weaker relationships and equally important explanatory variables; and 3) under all relationship strengths when explanatory variables were of unequal importance and sample sizes were lower.
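The tree-selection rules compared above can be sketched generically (with illustrative numbers, not the study's data): minimum risk picks the subtree with the smallest cross-validated error, while the one-standard-error rule picks the smallest subtree whose error is within one standard error of that minimum:

```python
def select_tree(candidates):
    """candidates: list of (n_leaves, cv_error, se), one per pruned subtree."""
    best = min(candidates, key=lambda c: c[1])        # minimum-risk rule
    threshold = best[1] + best[2]                     # min CV error + its SE
    one_se = min((c for c in candidates if c[1] <= threshold),
                 key=lambda c: c[0])                  # smallest tree within 1 SE
    return best, one_se

# hypothetical pruning sequence: larger trees fit marginally better
trees = [(2, 0.40, 0.05), (4, 0.31, 0.04), (7, 0.30, 0.04), (12, 0.29, 0.05)]
min_risk, one_se = select_tree(trees)
```

Here the minimum-risk rule keeps the 12-leaf tree, while the one-standard-error rule prefers the 4-leaf tree, trading a statistically insignificant increase in error for a much simpler model.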

  15. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP; and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  16. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.
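The orthonormalization step at the heart of this construction can be illustrated numerically: sample a few gradient fields on a grid over the unit disk and apply Gram-Schmidt under the vector inner product. This is a discrete sketch of the general idea, not the paper's closed-form matrix formulation:

```python
import numpy as np

# Sample three vector fields (gradients of the low-order polynomials
# x, y, and x^2 + y^2) on a grid over the unit disk and orthonormalize
# them with Gram-Schmidt under <V,W> = mean(Vx*Wx + Vy*Wy) over the disk.
n = 201
x, y = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
inside = x**2 + y**2 <= 1.0

def inner(V, W):
    return np.mean(V[0][inside] * W[0][inside] + V[1][inside] * W[1][inside])

fields = [
    (np.ones_like(x), np.zeros_like(x)),   # grad of Z = x
    (np.zeros_like(x), np.ones_like(x)),   # grad of Z = y
    (2 * x, 2 * y),                        # grad of Z = x^2 + y^2
]

ortho = []
for V in fields:
    Vx, Vy = V[0].copy(), V[1].copy()
    for U in ortho:
        c = inner((Vx, Vy), U)
        Vx -= c * U[0]
        Vy -= c * U[1]
    nrm = np.sqrt(inner((Vx, Vy), (Vx, Vy)))
    ortho.append((Vx / nrm, Vy / nrm))

# Off-diagonal inner products vanish, diagonals are 1.
print(inner(ortho[0], ortho[2]), inner(ortho[2], ortho[2]))
```

The paper replaces this numerical procedure with a closed-form gradient matrix and the outer product form of the Cholesky decomposition, which is exact and far more efficient.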

  17. A method for deriving lower bounds for the complexity of monotone arithmetic circuits computing real polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gashkov, Sergey B; Sergeev, Igor' S

    2012-10-31

    This work suggests a method for deriving lower bounds for the complexity of polynomials with positive real coefficients implemented by circuits of functional elements over the monotone arithmetic basis {x + y, x · y} ∪ {a · x | a ∈ ℝ₊}. Using this method, several new results are obtained. In particular, we construct examples of polynomials of degree m − 1 in each of the n variables with coefficients 0 and 1 having additive monotone complexity m^((1−o(1))n) and multiplicative monotone complexity m^((1/2−o(1))n) as m^n → ∞. In this form, the lower bounds derived here are sharp. Bibliography: 72 titles.

  18. Finding the Best-Fit Polynomial Approximation in Evaluating Drill Data: the Application of a Generalized Inverse Matrix / Poszukiwanie Najlepszej ZGODNOŚCI W PRZYBLIŻENIU Wielomianowym Wykorzystanej do Oceny Danych Z ODWIERTÓW - Zastosowanie UOGÓLNIONEJ Macierzy Odwrotnej

    NASA Astrophysics Data System (ADS)

    Karakus, Dogan

    2013-12-01

    In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered the polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
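The core computation (least-squares polynomial coefficients via a generalized inverse, with the degree chosen by regression fit) can be sketched with numpy's Moore-Penrose pseudoinverse. The data and the R²-based degree criterion below are illustrative assumptions, not the paper's drill data or exact criterion:

```python
import numpy as np

def fit_poly_pinv(z, v, degree):
    """Least-squares polynomial coefficients via the Moore-Penrose
    generalized inverse: c = pinv(A) @ v, with A the Vandermonde matrix."""
    A = np.vander(z, degree + 1, increasing=True)
    return np.linalg.pinv(A) @ v

def best_degree(z, v, max_degree=10):
    """Pick the degree with the best R^2 on the data (a simple stand-in
    for the paper's regression criterion)."""
    best, best_r2 = 0, -np.inf
    for d in range(1, max_degree + 1):
        c = fit_poly_pinv(z, v, d)
        pred = np.vander(z, d + 1, increasing=True) @ c
        ss_res = np.sum((v - pred) ** 2)
        ss_tot = np.sum((v - v.mean()) ** 2)
        r2 = 1.0 - ss_res / ss_tot
        if r2 > best_r2 + 1e-12:   # require a real improvement
            best, best_r2 = d, r2
    return best

z = np.linspace(0, 1, 40)
v = 2 + 3 * z - 5 * z**2        # a noiseless quadratic "drill log"
print(best_degree(z, v))         # → 2
```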

  19. Optimization and formulation design of gels of Diclofenac and Curcumin for transdermal drug delivery by Box-Behnken statistical design.

    PubMed

    Chaudhary, Hema; Kohli, Kanchan; Amin, Saima; Rathee, Permender; Kumar, Vikash

    2011-02-01

    The aim of this study was to develop and optimize a transdermal gel formulation of Diclofenac diethylamine (DDEA) and Curcumin (CRM). A 3-factor, 3-level Box-Behnken design was used to derive a second-order polynomial equation and to construct contour plots for the prediction of responses. The independent variables studied were the polymer concentration (X(1)), ethanol (X(2)) and propylene glycol (X(3)), and the levels of each factor were low, medium, and high. The dependent variables studied were the skin permeation rate of DDEA (Y(1)), the skin permeation rate of CRM (Y(2)), and the viscosity of the gels (Y(3)). Response surface plots were drawn, and the statistical validity of the polynomials was established to find the composition of the optimized formulation, which was evaluated using a Franz-type diffusion cell. The permeation rate of DDEA increased proportionally with ethanol concentration but decreased with polymer concentration, whereas the permeation rate of CRM increased proportionally with polymer concentration. The gels showed a non-Fickian super case II (typical zero-order) release mechanism for DDEA and a non-Fickian diffusion release mechanism for CRM. The design demonstrated the role of the derived polynomial equation and contour plots in predicting the values of dependent variables for the preparation and optimization of a gel formulation for transdermal drug release. Copyright © 2010 Wiley-Liss, Inc.
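The second-order polynomial underlying a Box-Behnken analysis, Y = b0 + Σ bi·Xi + Σ bii·Xi² + Σ bij·Xi·Xj, can be fitted by ordinary least squares on coded factor levels. A sketch with synthetic responses and hypothetical coefficients (a full 3-level factorial stands in for the Box-Behnken design points):

```python
import numpy as np
from itertools import product

# Fit the 3-factor second-order polynomial
#   Y = b0 + sum(bi*Xi) + sum(bii*Xi^2) + sum(bij*Xi*Xj)
# by least squares. Design points and responses are synthetic.

def quad_design(X):
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1 * x1, x2 * x2, x3 * x3,
                            x1 * x2, x1 * x3, x2 * x3])

# Coded levels -1, 0, 1 for each factor (full factorial as stand-in design).
X = np.array(list(product([-1, 0, 1], repeat=3)), dtype=float)
true_b = np.array([5.0, 1.2, -0.8, 0.5, 0.3, 0.0, -0.2, 0.6, 0.0, 0.1])
y = quad_design(X) @ true_b          # noiseless synthetic responses

b, *_ = np.linalg.lstsq(quad_design(X), y, rcond=None)
print(np.round(b, 3))
```

With real data the fitted coefficients carry noise, and their statistical validity is what the response-surface diagnostics in the study assess.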

  20. Are We All in the Same Boat? The Role of Perceptual Distance in Organizational Health Interventions.

    PubMed

    Hasson, Henna; von Thiele Schwarz, Ulrica; Nielsen, Karina; Tafvelin, Susanne

    2016-10-01

    The study investigates how agreement between leaders' and their team's perceptions influences intervention outcomes in a leadership-training intervention aimed at improving organizational learning. Agreement, i.e., perceptual distance, was calculated for the organizational learning dimensions at baseline. Changes in the dimensions from pre-intervention to post-intervention were evaluated using polynomial regression analysis with response surface analysis. The general pattern of the results indicated that organizational learning improved when leaders and their teams agreed on the level of organizational learning prior to the intervention. The improvement was greatest when the leader's and the team's perceptions at baseline were aligned and high rather than aligned and low. The least beneficial scenario was when the leader's perceptions were higher than the team's perceptions. These results give insights into the importance of comparing leaders' and their team's perceptions in intervention research. Polynomial regression analysis with response surface methodology allows three-dimensional examination of the relationship between two predictor variables and an outcome. This contributes knowledge on how combinations of predictor variables may affect the outcome and allows studies of potential non-linearity relating to the outcome. Future studies could use these methods in the process evaluation of interventions. Copyright © 2016 John Wiley & Sons, Ltd.

  1. From r-spin intersection numbers to Hodge integrals

    NASA Astrophysics Data System (ADS)

    Ding, Xiang-Mao; Li, Yuping; Meng, Lingxian

    2016-01-01

    The Generalized Kontsevich Matrix Model (GKMM) with a certain given potential is the partition function of r-spin intersection numbers. We represent this GKMM in terms of fermions, expand it in terms of the Schur polynomials by the boson-fermion correspondence, and link it with a Hurwitz partition function and a Hodge partition function via operators in a \widehat{GL}(∞) group. Then, from a W_{1+∞} constraint on the partition function of r-spin intersection numbers, we get a W_{1+∞} constraint for the Hodge partition function. The W_{1+∞} constraint completely determines the Schur polynomial expansion of the Hodge partition function.

  2. Evaluation of more general integrals involving universal associated Legendre polynomials

    NASA Astrophysics Data System (ADS)

    You, Yuan; Chen, Chang-Yuan; Tahir, Farida; Dong, Shi-Hai

    2017-05-01

    We find that the solution of the polar angular differential equation can be written as the universal associated Legendre polynomials. We present a popular integral formula which includes universal associated Legendre polynomials, and we also evaluate some important integrals involving the product of two universal associated Legendre polynomials P_{l'}^{m'}(x), P_{k'}^{n'}(x) and the weights x^{2a}(1−x²)^{−p−1}, x^b(1±x²)^{−p}, and x^c(1−x²)^{−p}(1±x)^{−1}, where l' ≠ k' and m' ≠ n'. Their selection rules are also mentioned.
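The paper's integrals generalize the classical associated-Legendre orthogonality relation, which is easy to verify numerically. The sketch below checks only the classical integer-degree case with scipy (the universal associated Legendre polynomials themselves are not implemented here):

```python
import numpy as np
from scipy.special import lpmv

# Numerical check of the classical associated-Legendre orthogonality
#   ∫_{-1}^{1} P_l^m(x) P_k^m(x) dx = 2(l+m)! / ((2l+1)(l-m)!) δ_{lk},
# to which the universal associated Legendre polynomials reduce for
# integer degrees. Gauss-Legendre quadrature is exact here because the
# integrands are polynomials of modest degree.
nodes, weights = np.polynomial.legendre.leggauss(60)

def inner(l, k, m):
    return np.sum(weights * lpmv(m, l, nodes) * lpmv(m, k, nodes))

print(inner(3, 5, 2))   # l ≠ k: vanishes
print(inner(3, 3, 2))   # equals 2*(3+2)!/((2*3+1)*(3-2)!) = 240/7
```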

  3. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method is studied combined with polynomial preconditioning for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive eigenvalues of A around 1 and the negative eigenvalues around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems, and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.

  4. Coherent orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es

    2013-08-15

    We discuss a fundamental characteristic of orthogonal polynomials, namely the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we thus include, in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions, Lie algebra theory. We start here from the square integrable functions on an open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second-order Casimir C gives rise to the second-order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl–Heisenberg algebra h(1) with C = 0 for Hermite polynomials and su(1,1) with C = −1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials, the Lie algebra is extended both to the whole space of the L² functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L² and, in particular, generalized coherent polynomials are thus obtained. Highlights: •Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. •Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. •2nd-order Casimir originates a 2nd-order differential equation that defines the corresponding OP family. •Generalized coherent polynomials are obtained from OP.
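The 'differential recurrence relations' that generate the ladder structure can be checked concretely for the Hermite case, where H_n'(x) = 2n·H_{n−1}(x) is the lowering relation. A numpy sketch, independent of the paper's group-theoretic machinery:

```python
import numpy as np
from numpy.polynomial import hermite as H

# Check the differential recurrence H_n'(x) = 2n H_{n-1}(x) for the
# (physicists') Hermite polynomials -- the lowering half of the ladder
# structure. Coefficient vectors are expressed in the Hermite basis,
# so the basis vector e_n represents H_n itself.
for n in range(1, 8):
    e_n = np.zeros(n + 1)
    e_n[n] = 1.0                                # coefficients of H_n
    deriv = H.hermder(e_n)                      # coefficients of H_n'
    expect = np.zeros(n)
    expect[n - 1] = 2.0 * n                     # 2n * H_{n-1}
    assert np.allclose(deriv, expect)
print("ladder relation verified for n = 1..7")
```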

  5. Verifying the error bound of numerical computation implemented in computer systems

    DOEpatents

    Sawada, Jun

    2013-03-12

    A verification tool receives a finite precision definition for an approximation of an infinite precision numerical function implemented in a processor in the form of a polynomial of bounded functions. The verification tool receives a domain for verifying outputs of segments associated with the infinite precision numerical function. The verification tool splits the domain into at least two segments, wherein each segment is non-overlapping with any other segment and converts, for each segment, a polynomial of bounded functions for the segment to a simplified formula comprising a polynomial, an inequality, and a constant for a selected segment. The verification tool calculates upper bounds of the polynomial for the at least two segments, beginning with the selected segment and reports the segments that violate a bounding condition.
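The segment-splitting-and-bounding idea can be sketched as follows: re-center the polynomial at each segment midpoint and bound it by p(mid) + Σ|q_k|·r^k, where r is the segment half-width. This is a toy version under simplifying assumptions, not the patented verification tool:

```python
import numpy as np
from math import factorial

def shifted_coeffs(coeffs, m):
    """Coefficients of q(t) = p(m + t) via Taylor: q_k = p^(k)(m)/k!."""
    c = np.array(coeffs, dtype=float)
    out = []
    for k in range(len(c)):
        deriv = np.polynomial.polynomial.polyder(c, k) if k else c
        out.append(np.polynomial.polynomial.polyval(m, deriv) / factorial(k))
    return np.array(out)

def upper_bound_on_segment(coeffs, lo, hi):
    """Upper bound of p on [lo, hi]: p(mid) + sum_k |q_k| r^k, k >= 1."""
    m, r = (lo + hi) / 2.0, (hi - lo) / 2.0
    q = shifted_coeffs(coeffs, m)
    return q[0] + np.sum(np.abs(q[1:]) * r ** np.arange(1, len(q)))

def upper_bound(coeffs, lo, hi, segments=8):
    """Split the domain into non-overlapping segments and take the max
    of the per-segment bounds (toy version of the splitting strategy)."""
    edges = np.linspace(lo, hi, segments + 1)
    return max(upper_bound_on_segment(coeffs, a, b)
               for a, b in zip(edges[:-1], edges[1:]))

p = [0.0, 0.0, 1.0]                      # p(x) = x^2, true max on [-1,1] is 1
print(upper_bound(p, -1.0, 1.0, 8))      # a valid bound that tightens with more segments
```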

  6. Heun Polynomials and Exact Solutions for the Massless Dirac Particle in the C-Metric

    NASA Astrophysics Data System (ADS)

    Kar, Priyasri; Singh, Ritesh K.; Dasgupta, Ananda; Panigrahi, Prasanta K.

    2018-03-01

    The equation of motion of a massless Dirac particle in the C-metric leads to the general Heun equation (GHE) for the radial and the polar variables. The GHE, under certain parametric conditions, is cast in terms of a new set of su(1, 1) generators involving differential operators of degrees ±1/2 and 0. Additional Heun polynomials are obtained using this new algebraic structure and are used to construct some exact solutions for the radial and the polar parts of the Dirac equation.

  7. Towards spinning Mellin amplitudes

    NASA Astrophysics Data System (ADS)

    Chen, Heng-Yu; Kuo, En-Jui; Kyono, Hideki

    2018-06-01

    We construct the Mellin representation of the four-point conformal correlation function with external primary operators of arbitrary integer spacetime spin, and obtain a natural proposal for spinning Mellin amplitudes. By restricting to the exchange of symmetric traceless primaries, we generalize the Mellin transform for the scalar case to introduce discrete Mellin variables for incorporating spin degrees of freedom. Based on the structure of spinning three- and four-point Witten diagrams, we also obtain a generalization of the Mack polynomial, which can be regarded as a natural kinematical polynomial basis for computing spinning Mellin amplitudes using different choices of interaction vertices.

  8. Modeling of price and profit in coupled-ring networks

    NASA Astrophysics Data System (ADS)

    Tangmongkollert, Kittiwat; Suwanna, Sujin

    2016-06-01

    We study the behaviors of magnetization, price, and profit profiles in ring networks in the presence of the external magnetic field. The Ising model is used to determine the state of each node, which is mapped to the buy-or-sell state in a financial market, where +1 is identified as the buying state, and -1 as the selling state. Price and profit mechanisms are modeled based on the assumption that price should increase if demand is larger than supply, and it should decrease otherwise. We find that the magnetization can be induced between two rings via coupling links, where the induced magnetization strength depends on the number of the coupling links. Consequently, the price behaves linearly with time, where its rate of change depends on the magnetization. The profit grows like a quadratic polynomial with coefficients dependent on the magnetization. If two rings have opposite direction of net spins, the price flows in the direction of the majority spins, and the network with the minority spins gets a loss in profit.

  9. Recurrence relations for orthogonal polynomials for PDEs in polar and cylindrical geometries.

    PubMed

    Richardson, Megan; Lambers, James V

    2016-01-01

    This paper introduces two families of orthogonal polynomials on the interval (-1,1), with weight function [Formula: see text]. The first family satisfies the boundary condition [Formula: see text], and the second one satisfies the boundary conditions [Formula: see text]. These boundary conditions arise naturally from PDEs defined on a disk with Dirichlet boundary conditions and the requirement of regularity in Cartesian coordinates. The families of orthogonal polynomials are obtained by orthogonalizing short linear combinations of Legendre polynomials that satisfy the same boundary conditions. Then, the three-term recurrence relations are derived. Finally, it is shown that from these recurrence relations, one can efficiently compute the corresponding recurrences for generalized Jacobi polynomials that satisfy the same boundary conditions.
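The construction can be imitated numerically: since P_n(1) = 1 for every Legendre polynomial, the short combinations P_{n+1} − P_n vanish at x = 1, and Gram-Schmidt then yields an orthogonal family satisfying that boundary condition. The unit weight used below is a simplifying assumption (the paper's weight function is more specific):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Short combinations q_n = P_{n+1} - P_n satisfy q_n(1) = 0 because
# P_n(1) = 1 for all n. Orthonormalize them by Gram-Schmidt with
# Gauss-Legendre quadrature (weight 1 assumed here for simplicity).
nodes, weights = np.polynomial.legendre.leggauss(50)

def inner(u, v):
    return np.sum(weights * u * v)

basis = []
for n in range(5):
    c = np.zeros(n + 2)
    c[n + 1], c[n] = 1.0, -1.0           # P_{n+1} - P_n in Legendre basis
    u = L.legval(nodes, c)               # values on the quadrature nodes
    for b in basis:
        u = u - inner(u, b) * b
    basis.append(u / np.sqrt(inner(u, u)))

# Linear combinations preserve the boundary condition at x = 1, and the
# Gram matrix of the orthonormalized family is (numerically) the identity.
gram = np.array([[inner(u, v) for v in basis] for u in basis])
print(np.round(gram, 8))
```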

  10. A new numerical treatment based on Lucas polynomials for 1D and 2D sinh-Gordon equation

    NASA Astrophysics Data System (ADS)

    Oruç, Ömer

    2018-04-01

    In this paper, a new mixed method based on Lucas and Fibonacci polynomials is developed for the numerical solution of 1D and 2D sinh-Gordon equations. First, the time variable is discretized by central finite differences, and then the unknown function and its derivatives are expanded in a Lucas series. With the help of these series expansions and Fibonacci polynomials, differentiation matrices are derived. With this approach, finding the solution of the sinh-Gordon equation is transformed into finding the solution of an algebraic system of equations. The Lucas series coefficients are acquired by solving this system of algebraic equations. Then, by plugging these coefficients into the Lucas series expansion, numerical solutions can be obtained consecutively. The main objective of this paper is to demonstrate that the Lucas polynomial based method is convenient for 1D and 2D nonlinear problems. By calculating the L2 and L∞ error norms of some 1D and 2D test problems, the efficiency and performance of the proposed method are monitored. The accurate results acquired confirm the applicability of the method.
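Both polynomial families obey the recurrence p_{n+1}(x) = x·p_n(x) + p_{n−1}(x), differing only in their seeds (Lucas: 2, x; Fibonacci: 1, x), and the standard identity L_n'(x) = n·F_n(x) is what ties differentiation of a Lucas series to Fibonacci polynomials. A sketch verifying this identity (the paper's differentiation matrices are not reproduced):

```python
import numpy as np
from numpy.polynomial import polynomial as P

def lucas(n):
    """Power-basis coefficients of the Lucas polynomial L_n."""
    a, b = np.array([2.0]), np.array([0.0, 1.0])   # L_0 = 2, L_1 = x
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, P.polyadd(P.polymulx(b), a)       # L_{k+1} = x L_k + L_{k-1}
    return b

def fibonacci(n):
    """Power-basis coefficients of the Fibonacci polynomial F_n."""
    a, b = np.array([1.0]), np.array([0.0, 1.0])   # F_1 = 1, F_2 = x
    if n == 1:
        return a
    for _ in range(n - 2):
        a, b = b, P.polyadd(P.polymulx(b), a)
    return b

# The identity L_n'(x) = n * F_n(x) connects the two families when
# building differentiation matrices for a Lucas series.
for n in range(1, 7):
    lhs = P.polyder(lucas(n))
    rhs = n * fibonacci(n)
    assert np.allclose(P.polyadd(lhs, -rhs), 0.0)
print("L_n' = n F_n holds for n = 1..6")
```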

  11. Polynomial chaos representation of databases on manifolds

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soize, C., E-mail: christian.soize@univ-paris-est.fr; Ghanem, R., E-mail: ghanem@usc.edu

    2017-04-15

    Characterizing the polynomial chaos expansion (PCE) of a vector-valued random variable with probability distribution concentrated on a manifold is a relevant problem in data-driven settings. The probability distribution of such random vectors is multimodal in general, leading to potentially very slow convergence of the PCE. In this paper, we build on a recent development for estimating and sampling from probabilities concentrated on a diffusion manifold. The proposed methodology constructs a PCE of the random vector together with an associated generator that samples from the target probability distribution, which is estimated from data concentrated in the neighborhood of the manifold. The method is robust and remains efficient for high dimension and large datasets. The resulting polynomial chaos construction on manifolds permits the adaptation of many uncertainty quantification and statistical tools to emerging questions motivated by data-driven queries.
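For orientation, the basic one-dimensional polynomial chaos expansion looks as follows: expand Y = g(X), X ~ N(0,1), in probabilists' Hermite polynomials with coefficients c_n = E[g(X)·He_n(X)]/n!. The paper's contribution addresses the much harder multimodal, manifold-concentrated setting; this sketch shows only the textbook unimodal case:

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# PCE of Y = exp(X), X ~ N(0,1), in probabilists' Hermite polynomials:
#   Y ≈ sum_n c_n He_n(X),  c_n = E[exp(X) He_n(X)] / n!,
# with expectations computed by Gauss-Hermite quadrature.
nodes, weights = He.hermegauss(40)          # weight exp(-x^2/2)
weights = weights / sqrt(2.0 * pi)          # normalize to the N(0,1) density

def pce_coeffs(g, order):
    cs = []
    for n in range(order + 1):
        e = np.zeros(n + 1)
        e[n] = 1.0                           # basis vector for He_n
        cs.append(np.sum(weights * g(nodes) * He.hermeval(nodes, e))
                  / factorial(n))
    return np.array(cs)

c = pce_coeffs(np.exp, 12)
approx = He.hermeval(0.5, c)                 # evaluate the truncated PCE
print(abs(approx - np.exp(0.5)))             # small truncation error
```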

  12. Correction factors for on-line microprobe analysis of multielement alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Brewer, W. D.

    1977-01-01

    An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.

  13. Introduction to methodology of dose-response meta-analysis for binary outcome: With application on software.

    PubMed

    Zhang, Chao; Jia, Pengli; Yu, Liu; Xu, Chang

    2018-05-01

    Dose-response meta-analysis (DRMA) is widely applied to investigate the dose-specific relationship between independent and dependent variables. Such methods have been in use for over 30 years and are increasingly employed in healthcare and clinical decision-making. In this article, we give an overview of the methodology used in DRMA. We summarize the commonly used regression models and the pooling methods in DRMA, and use an example to illustrate how to employ a DRMA with these methods. Five regression models (linear regression, piecewise regression, natural polynomial regression, fractional polynomial regression, and restricted cubic spline regression) are illustrated in this article for fitting the dose-response relationship. Two types of pooling approaches, the one-stage approach and the two-stage approach, are illustrated for pooling the dose-response relationship across studies. The example showed similar results among these models. Several dose-response meta-analysis methods can be used for investigating the relationship between exposure level and the risk of an outcome. However, the methodology of DRMA still needs to be improved. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
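A first-degree fractional polynomial, y = b0 + b1·x^(p), selects the power p from a fixed set (with p = 0 conventionally denoting log x) by best fit. A minimal sketch on synthetic data (not the article's worked example):

```python
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]   # p = 0 denotes log(x)

def fp_transform(x, p):
    return np.log(x) if p == 0 else x ** p

def best_fp1(x, y):
    """First-degree fractional polynomial y = b0 + b1 * x^(p): pick the
    power with the smallest residual sum of squares (illustrative only)."""
    best_p, best_sse, best_beta = None, np.inf, None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_transform(x, p)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((y - X @ beta) ** 2)
        if sse < best_sse:
            best_p, best_sse, best_beta = p, sse, beta
    return best_p, best_beta

dose = np.linspace(0.5, 10, 50)
risk = 0.2 + 0.05 * np.log(dose)        # synthetic log-shaped dose-response
p, beta = best_fp1(dose, risk)
print(p)   # → 0  (the log transform wins on this synthetic data)
```

Second-degree fractional polynomials and restricted cubic splines extend the same idea with two transformed terms or piecewise-cubic bases.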

  14. First Instances of Generalized Expo-Rational Finite Elements on Triangulations

    NASA Astrophysics Data System (ADS)

    Dechevsky, Lubomir T.; Zanaty, Peter; Lakså, Arne; Bang, Børre

    2011-12-01

    In this communication we consider a construction of simplicial finite elements on triangulated two-dimensional polygonal domains. This construction is, in some sense, dual to the construction of generalized expo-rational B-splines (GERBS). The main result is the construction of new polynomial simplicial patches of the first several lowest possible total polynomial degrees which exhibit Hermite interpolatory properties. The derivation of these results is based on the theory of piecewise polynomial GERBS called Euler Beta-function B-splines. We also provide 3-dimensional visualization of the graphs of the new polynomial simplicial patches and their control polygons.

  15. Analytic complexity of functions of two variables

    NASA Astrophysics Data System (ADS)

    Beloshapka, V. K.

    2007-09-01

    The definition of the analytic complexity of an analytic function of two variables is given. It is proved that the class of functions of a chosen complexity is a differential-algebraic set. A differential polynomial defining the functions of the first class is constructed. An algorithm for obtaining relations defining an arbitrary class is described. Examples of functions are given whose order of complexity is equal to zero, one, two, and infinity. It is shown that the formal order of complexity of the Cardano and Ferrari formulas is significantly higher than their analytic complexity. The complexity classes turn out to be invariant with respect to a certain infinite-dimensional transformation pseudogroup. In this connection, we describe the orbits of the action of this pseudogroup in the jets of orders one, two, and three. The notion of complexity order is extended to plane (or "planar") 3-webs. It is discovered that webs of complexity order one are the hexagonal webs. Some problems are posed.

  16. Three-dimensional trend mapping from wire-line logs

    USGS Publications Warehouse

    Doveton, J.H.; Ke-an, Z.

    1985-01-01

    Mapping of lithofacies and porosities of stratigraphic units is complicated because these properties vary in three dimensions. The method of moments was proposed by Krumbein and Libby (1957) as a technique to aid in resolving this problem. Moments are easily computed from wireline logs and are simple statistics which summarize vertical variation in a log trace. Combinations of moment maps have proved useful in understanding vertical and lateral changes in lithology of sedimentary rock units. Although moments have meaning both as statistical descriptors and as mechanical properties, they also define polynomial curves which approximate lithologic changes as a function of depth. These polynomials can be fitted by least-squares methods, partitioning major trends in rock properties from fine-scale fluctuations. Analysis of variance yields the degree of fit of any polynomial and measures the proportion of vertical variability expressed by any moment or combination of moments. In addition, polynomial curves can be differentiated to determine depths at which pronounced expressions of facies occur and to determine the locations of boundaries between major lithologic subdivisions. Moments can be estimated at any location in an area by interpolating from log moments at control wells. A matrix algebra operation then converts moment estimates to coefficients of a polynomial function which describes a continuous curve of lithologic variation with depth. If this procedure is applied to a grid of geographic locations, the result is a model of variability in three dimensions. Resolution of the model is determined largely by number of moments used in its generation. The method is illustrated with an analysis of lithofacies in the Simpson Group of south-central Kansas; the three-dimensional model is shown as cross sections and slice maps. In this study, the gamma-ray log is used as a measure of shaliness of the unit. 
However, the method is general and can be applied, for example, to suites of neutron, density, or sonic logs to produce three-dimensional models of porosity in reservoir rocks. © 1985 Plenum Publishing Corporation.

  17. SPSS macros to compare any two fitted values from a regression model.

    PubMed

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests, particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
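The matrix-algebra method the macros implement can be sketched in numpy: for covariate rows x1 and x2, the difference of fitted values is d'b with d = x1 − x2, and its standard error is √(d' Cov(b) d), where Cov(b) = σ̂²(X'X)⁻¹ under OLS. This is an illustrative reimplementation, not the SPSS code:

```python
import numpy as np

def compare_fitted(X, y, x1, x2):
    """Difference between two fitted values from an OLS model and its
    standard error, via d' Cov(b) d with d = x1 - x2.
    (Numpy sketch of the matrix-algebra method; not the SPSS macros.)"""
    n, k = X.shape
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    sigma2 = resid @ resid / (n - k)            # residual variance estimate
    cov_b = sigma2 * np.linalg.inv(X.T @ X)     # Cov(b) under OLS
    d = np.asarray(x1, float) - np.asarray(x2, float)
    return d @ b, np.sqrt(d @ cov_b @ d)

rng = np.random.default_rng(1)
x = rng.normal(size=100)
y = 1.0 + 2.0 * x + rng.normal(size=100)
X = np.column_stack([np.ones(100), x])

# Fitted values at x = 1 vs x = 0 differ by the slope; in this
# first-order model the SE of the difference equals the slope's own SE.
diff, se = compare_fitted(X, y, [1.0, 1.0], [1.0, 0.0])
print(diff, se)
```

The same function handles comparisons no single coefficient captures, e.g. rows that differ in several columns at once in a model with interactions or polynomial terms.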

  18. Existence of entire solutions of some non-linear differential-difference equations.

    PubMed

    Chen, Minfeng; Gao, Zongsheng; Du, Yunfei

    2017-01-01

    In this paper, we investigate the admissible entire solutions of finite order of the differential-difference equations [Formula: see text] and [Formula: see text], where [Formula: see text], [Formula: see text] are two non-zero polynomials, [Formula: see text] is a polynomial and [Formula: see text]. In addition, we investigate the non-existence of entire solutions of finite order of the differential-difference equation [Formula: see text], where [Formula: see text], [Formula: see text] are two non-constant polynomials, [Formula: see text], m , n are positive integers and satisfy [Formula: see text] except for [Formula: see text], [Formula: see text].

  19. Orthonormal aberration polynomials for anamorphic optical imaging systems with rectangular pupils.

    PubMed

    Mahajan, Virendra N

    2010-12-20

    The classical aberrations of an anamorphic optical imaging system, representing the terms of a power-series expansion of its aberration function, are separable in the Cartesian coordinates of a point on its pupil. We discuss the balancing of a classical aberration of a certain order with one or more such aberrations of lower order to minimize its variance across a rectangular pupil of such a system. We show that the balanced aberrations are the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point. The compound Legendre polynomials are orthogonal across a rectangular pupil and, like the classical aberrations, are inherently separable in the Cartesian coordinates of the pupil point. They are different from the balanced aberrations and the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil.
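The orthonormality of the compound Legendre polynomials over the rectangular pupil is straightforward to check by quadrature: with the normalization √((2l+1)(2m+1))/2, the products P_l(x)P_m(y) form an orthonormal set on [−1,1]². A numerical sketch of that property (not Mahajan's aberration-balancing derivation):

```python
import numpy as np
from numpy.polynomial import legendre as L

# Orthonormality of products of Legendre polynomials over the square
# [-1,1]^2:  Q_{lm}(x,y) = sqrt((2l+1)(2m+1))/2 * P_l(x) P_m(y).
# The double integral factorizes, so two 1D Gauss-Legendre quadratures
# (exact for these polynomial degrees) suffice.
nodes, weights = np.polynomial.legendre.leggauss(20)

def P(l, t):
    c = np.zeros(l + 1)
    c[l] = 1.0
    return L.legval(t, c)

def inner(l1, m1, l2, m2):
    gx = np.sum(weights * P(l1, nodes) * P(l2, nodes))
    gy = np.sum(weights * P(m1, nodes) * P(m2, nodes))
    norm = np.sqrt((2*l1 + 1) * (2*m1 + 1)) * np.sqrt((2*l2 + 1) * (2*m2 + 1)) / 4.0
    return norm * gx * gy

print(inner(2, 3, 2, 3))   # same indices: → 1
print(inner(2, 3, 1, 3))   # different indices: → 0
```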

  20. Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples

    PubMed Central

    Chen, Andrew; Chen, Chiachung

    2013-01-01

    Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between temperature and output voltage of thermocouples. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of different-order polynomial equations. Two quantitative criteria, the average of the absolute error values |e|ave and the standard deviation of the calibration equation estd, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of polynomial equations differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
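The evaluation the abstract describes can be sketched as follows; the function and variable names are ours, and a plain numpy polynomial fit stands in for the paper's regression analysis.

```python
import numpy as np

def evaluate_polynomial_orders(voltage, temperature, max_degree=5):
    """Fit calibration polynomials T = f(V) of increasing degree and report
    the two criteria named in the abstract: the mean absolute error |e|_ave
    and the standard deviation of the errors e_std."""
    results = {}
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(voltage, temperature, deg)
        err = temperature - np.polyval(coeffs, voltage)
        results[deg] = (np.mean(np.abs(err)), np.std(err, ddof=1))
    return results
```

Comparing the two criteria across degrees, as in the study, shows where raising the order stops improving the calibration for a given temperature range.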

  1. Experimental injury study of children seated behind collapsing front seats in rear impacts.

    PubMed

    Saczalski, Kenneth J; Sances, Anthony; Kumaresan, Srirangam; Burton, Joseph L; Lewis, Paul R

    2003-01-01

    In the mid-1990s, the U.S. Department of Transportation recommended placing children and infants in the rear seating areas of motor vehicles to avoid front-seat airbag-induced injuries and fatalities. In most rear impacts, however, the adult-occupied front seats will collapse into the rear occupant area and pose another potentially serious injury hazard to rear-seated children. Since rear impacts involve a wide range of speeds, impact severities, and various sizes of adults in collapsing front seats, a multi-variable experimental method was employed in conjunction with a multi-level "factorial analysis" technique to study the injury potential of rear-seated children. Various sizes of Hybrid III adult surrogates, seated in a "typical" average-strength collapsing type of front seat, and a three-year-old Hybrid III child surrogate, seated on a built-in booster seat located directly behind the front adult occupant, were tested at various impact severity levels in a popular "minivan" sled-buck test setup. A total of five test configurations were utilized in this study. Three levels of velocity change, ranging from 22.5 to 42.5 kph, were used. The average of peak accelerations in the sled-buck tests ranged from approximately 8.2 G up to about 11.1 G, with absolute peak values of just over 14 G at the higher velocity change. The parameters of the test configuration enabled the experimental data to be combined into a polynomial "injury" function of the two primary independent variables (i.e., front-seat adult occupant weight and velocity change) so that the "likelihood" of rear-child "injury potential" could be determined over a wide range of the key parameters. The experimentally derived head injury data were used to obtain a preliminary HIC (Head Injury Criterion) polynomial fit at the 900 level for the rear-seated child. Several actual accident cases were compared with the preliminary polynomial fit. This study provides a test-efficient, multi-variable method for comparing injury biomechanics data with actual accident cases.
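A hedged sketch of the kind of two-variable polynomial "injury" fit the abstract describes; the degree, the variable ranges, and the synthetic HIC relation below are our illustrative assumptions, not the study's published fit.

```python
import numpy as np

def fit_injury_surface(weight, delta_v, hic, deg=2):
    """Fit a two-variable polynomial 'injury' function HIC ~ f(adult weight,
    velocity change), mirroring the abstract's approach. The basis contains
    all monomials weight**i * delta_v**j with i + j <= deg."""
    def basis(w, v):
        return np.column_stack([np.asarray(w)**i * np.asarray(v)**j
                                for i in range(deg + 1)
                                for j in range(deg + 1 - i)])
    coef, *_ = np.linalg.lstsq(basis(weight, delta_v), hic, rcond=None)

    def predict(w, v):
        return basis(w, v) @ coef
    return predict
```

Given such a surface, the "likelihood" of exceeding a threshold (e.g. HIC 900) can be mapped over the weight / velocity-change plane, which is how the abstract compares accident cases to the fit.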

  2. Algorithms for Solvents and Spectral Factors of Matrix Polynomials

    DTIC Science & Technology

    1981-01-01

    spectral factors of matrix polynomials. Leang S. Shieh, Yih T. Tsay and Norman P. Coleman. A generalized Newton method, based on the contracted gradient ... of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial ... estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right

  3. Polynomial Interpolation and Sums of Powers of Integers

    ERIC Educational Resources Information Center

    Cereceda, José Luis

    2017-01-01

    In this note, we revisit the problem of polynomial interpolation and explicitly construct two polynomials in n of degree k + 1, P_k(n) and Q_k(n), such that P_k(n) = Q_k(n) = f_k(n) for n = 1, 2, …, k, where f_k(1), f_k(2), …, f_k(k) are k arbitrarily chosen…
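The truncated abstract leaves the construction implicit; as a hedged illustration of the title's theme, the sketch below builds the unique degree-(k+1) polynomial interpolating the power sum 1^k + 2^k + … + n^k by exact Lagrange interpolation (our construction, not necessarily the authors' P_k and Q_k).

```python
from fractions import Fraction

def power_sum_polynomial(k):
    """Coefficients (ascending order) of the degree-(k+1) polynomial p with
    p(n) = 1^k + 2^k + ... + n^k, obtained by exact Lagrange interpolation
    through the k+2 points n = 0, 1, ..., k+1."""
    xs = list(range(k + 2))
    ys = [sum(Fraction(i)**k for i in range(1, n + 1)) for n in xs]
    coeffs = [Fraction(0)] * (k + 2)
    for j, xj in enumerate(xs):
        # Lagrange basis l_j(x) = prod_{m != j} (x - x_m) / (x_j - x_m)
        basis = [Fraction(1)]
        denom = Fraction(1)
        for m, xm in enumerate(xs):
            if m == j:
                continue
            denom *= xj - xm
            new = [Fraction(0)] * (len(basis) + 1)   # multiply basis by (x - xm)
            for i, c in enumerate(basis):
                new[i] -= c * xm
                new[i + 1] += c
            basis = new
        for i, c in enumerate(basis):
            coeffs[i] += ys[j] * c / denom
    return coeffs
```

For k = 1 this recovers n(n+1)/2, i.e. the coefficient list [0, 1/2, 1/2].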

  4. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that least-squares fits uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
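The error-tolerance mode described above can be sketched in a few lines; this Python version (not the original Quick Basic, and using an ordinary polynomial fit rather than orthogonal factorial polynomials) shows the degree-raising loop.

```python
import numpy as np

def lsq_fit_to_tolerance(x, y, tol, max_degree=100):
    """Sketch of AKLSQF's error-driven strategy: raise the polynomial degree
    from 1 until the summed squared fitting error meets the user tolerance,
    returning the degree, the coefficients and the achieved error."""
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        err = float(np.sum((y - np.polyval(coeffs, x))**2))
        if err <= tol:
            return deg, coeffs, err
    return max_degree, coeffs, err   # tolerance not met within max_degree
```

Like the original program, the loop reports both the polynomial and the least-squares error actually incurred, so the caller can see when the tolerance drove the degree up.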

  5. Tsallis p, q-deformed Touchard polynomials and Stirling numbers

    NASA Astrophysics Data System (ADS)

    Herscovici, O.; Mansour, T.

    2017-01-01

    In this paper, we develop and investigate a new two-parametrized deformation of the Touchard polynomials, based on the definition of the NEXT q-exponential function of Tsallis. We obtain new generalizations of the Stirling numbers of the second kind and of the binomial coefficients and represent two new statistics for the set partitions.
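For reference, the classical (undeformed) Stirling numbers of the second kind that the paper generalizes satisfy the standard recurrence S(n, k) = k·S(n−1, k) + S(n−1, k−1); the p,q-deformed version reduces to these in the undeformed limit.

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Classical Stirling numbers of the second kind S(n, k): the number of
    ways to partition an n-element set into k non-empty blocks, computed via
    the standard recurrence S(n, k) = k*S(n-1, k) + S(n-1, k-1)."""
    if n == k:
        return 1          # includes S(0, 0) = 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)
```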

  6. Explicit formulae for Chern-Simons invariants of the twist-knot orbifolds and edge polynomials of twist knots

    NASA Astrophysics Data System (ADS)

    Ham, J.-Y.; Lee, J.

    2016-09-01

    We calculate the Chern-Simons invariants of twist-knot orbifolds using the Schläfli formula for the generalized Chern-Simons function on the family of twist knot cone-manifold structures. Following the general instruction of Hilden, Lozano, and Montesinos-Amilibia, we here present concrete formulae and calculations. We use the Pythagorean Theorem, which was used by Ham, Mednykh and Petrov, to relate the complex length of the longitude and the complex distance between the two axes fixed by two generators. As an application, we calculate the Chern-Simons invariants of cyclic coverings of the hyperbolic twist-knot orbifolds. We also derive some interesting results. The explicit formulae of the A-polynomials of twist knots are obtained from the complex distance polynomials. Hence the edge polynomials corresponding to the edges of the Newton polygons of the A-polynomials of twist knots can be obtained. In particular, the number of boundary components of every incompressible surface corresponding to slope -4n+2 turns out to be 2. Bibliography: 39 titles.

  7. The Extrapolar SWIFT model (version 1.0): fast stratospheric ozone chemistry for global climate models

    NASA Astrophysics Data System (ADS)

    Kreyling, Daniel; Wohltmann, Ingo; Lehmann, Ralph; Rex, Markus

    2018-03-01

    The Extrapolar SWIFT model is a fast ozone chemistry scheme for interactive calculation of the extrapolar stratospheric ozone layer in coupled general circulation models (GCMs). In contrast to the widely used prescribed ozone, the SWIFT ozone layer interacts with the model dynamics and can respond to atmospheric variability or climatological trends. The Extrapolar SWIFT model employs a repro-modelling approach, in which algebraic functions are used to approximate the numerical output of a full stratospheric chemistry and transport model (ATLAS). The full model solves a coupled chemical differential equation system with 55 initial and boundary conditions (mixing ratio of various chemical species and atmospheric parameters). Hence the rate of change of ozone over 24 h is a function of 55 variables. Using covariances between these variables, we can find linear combinations in order to reduce the parameter space to the following nine basic variables: latitude, pressure altitude, temperature, overhead ozone column and the mixing ratio of ozone and of the ozone-depleting families (Cly, Bry, NOy and HOy). We will show that these nine variables are sufficient to characterize the rate of change of ozone. An automated procedure fits a polynomial function of fourth degree to the rate of change of ozone obtained from several simulations with the ATLAS model. One polynomial function is determined per month, which yields the rate of change of ozone over 24 h. A key aspect for the robustness of the Extrapolar SWIFT model is to include a wide range of stratospheric variability in the numerical output of the ATLAS model, also covering atmospheric states that will occur in a future climate (e.g. temperature and meridional circulation changes or reduction of stratospheric chlorine loading). For validation purposes, the Extrapolar SWIFT model has been integrated into the ATLAS model, replacing the full stratospheric chemistry scheme.
Simulations with SWIFT in ATLAS have proven that the systematic error is small and does not accumulate during the course of a simulation. In the context of a 10-year simulation, the ozone layer simulated by SWIFT shows a stable annual cycle, with inter-annual variations comparable to the ATLAS model. The application of Extrapolar SWIFT requires the evaluation of polynomial functions with 30-100 terms. Computers can currently calculate such polynomial functions at thousands of model grid points in seconds. SWIFT provides the desired numerical efficiency and computes the ozone layer 10^4 times faster than the chemistry scheme in the ATLAS CTM.

  8. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. 
The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.
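A hedged sketch of the kernel PCA step: the fractional power "kernel" form below is one plausible reading of the abstract (the paper's exact definition may differ), but it illustrates the stated remedy of keeping only eigenvectors associated with positive eigenvalues when the Gram matrix is not positive semidefinite.

```python
import numpy as np

def fractional_poly_kernel_pca(X, d=0.8, n_components=2):
    """Kernel PCA with a fractional power polynomial similarity
    k(x, y) = sign(x.y) * |x.y|**d (our illustrative choice). Since this
    need not define a positive semidefinite Gram matrix, only eigenvectors
    with positive eigenvalues are retained, as the abstract describes."""
    G = X @ X.T
    K = np.sign(G) * np.abs(G) ** d            # fractional power, sign-preserving
    n = K.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    Kc = J @ K @ J                             # double-centering in feature space
    w, V = np.linalg.eigh(Kc)
    keep = w > 1e-10                           # discard non-positive eigenvalues
    w, V = w[keep][::-1], V[:, keep][:, ::-1]  # sort descending
    return V[:, :n_components] * np.sqrt(w[:n_components])
```

In the paper's pipeline these projections would be computed on Gabor wavelet features of face images rather than raw vectors.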

  9. Development of reaching during mid-childhood from a Developmental Systems perspective.

    PubMed

    Golenia, Laura; Schoemaker, Marina M; Otten, Egbert; Mouton, Leonora J; Bongers, Raoul M

    2018-01-01

    Inspired by the Developmental Systems perspective, we studied the development of reaching during mid-childhood (5-10 years of age) not just at the performance level (i.e., endpoint movements), as commonly done in earlier studies, but also at the joint angle level. Because the endpoint position (i.e., the tip of the index finger) at the reaching target can be achieved with multiple joint angle combinations, we partitioned variability in joint angles over trials into variability that does not (goal-equivalent variability, GEV) and that does (non-goal-equivalent variability, NGEV) influence the endpoint position, using the Uncontrolled Manifold method. Quantifying this structure in joint angle variability allowed us to examine whether and how spatial variability of the endpoint at the reaching target is related to variability in joint angles and how this changes over development. 6-, 8- and 10-year-old children and young adults performed reaching movements to a target with the index finger. Polynomial trend analysis revealed a linear and a quadratic decreasing trend for the variable error. Linear decreasing and cubic trends were found for joint angle standard deviations at movement end. GEV and NGEV decreased gradually with age, but interestingly, the decrease of GEV was steeper than the decrease of NGEV, showing that the different parts of the joint angle variability changed differently over age. We interpreted these changes in the structure of variability as indicating changes over age in exploration for synergies (a family of task solutions), a concept that links the performance level with the joint angle level. Our results suggest changes in the search for synergies during mid-childhood development.
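The Uncontrolled Manifold partition of joint-angle variability can be sketched as follows, assuming a linearized Jacobian around the mean configuration; the per-degree-of-freedom normalization is one common convention, and the paper's implementation may differ in detail.

```python
import numpy as np

def ucm_variance_partition(joint_angles, jacobian):
    """Split per-trial joint-angle deviations into components that do not
    change the endpoint (GEV, lying in the Jacobian's null space) and
    components that do (NGEV), each normalized per degree of freedom.

    joint_angles : (trials, joints) array
    jacobian     : (endpoint_dims, joints) linearized endpoint Jacobian
    """
    dev = joint_angles - joint_angles.mean(axis=0)
    _, s, Vt = np.linalg.svd(jacobian)
    rank = int(np.sum(s > 1e-12))
    null_basis = Vt[rank:].T                   # joints x (joints - rank)
    par = dev @ null_basis                     # projection onto the UCM
    gev = np.mean(np.sum(par**2, axis=1)) / null_basis.shape[1]
    orth = dev - par @ null_basis.T            # endpoint-relevant component
    ngev = np.mean(np.sum(orth**2, axis=1)) / (dev.shape[1] - null_basis.shape[1])
    return gev, ngev
```

GEV > NGEV then indicates that most joint-angle variability leaves the fingertip position unchanged, the signature of a synergy discussed in the abstract.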

  10. Convex optimisation approach to constrained fuel optimal control of spacecraft in close relative motion

    NASA Astrophysics Data System (ADS)

    Massioni, Paolo; Massari, Mauro

    2018-05-01

    This paper describes an interesting and powerful approach to the constrained fuel-optimal control of spacecraft in close relative motion. The proposed approach is well suited for problems under linear dynamic equations, therefore perfectly fitting the case of spacecraft flying in close relative motion. If the solution of the optimisation is approximated as a polynomial with respect to the time variable, then the problem can be approached with a technique developed in the control engineering community, known as "Sum Of Squares" (SOS), and the constraints can be reduced to bounds on the polynomials. Such a technique allows rewriting polynomial bounding problems in the form of convex optimisation problems, at the cost of a certain amount of conservatism. The principles of the technique are explained and some applications related to spacecraft flying in close relative motion are shown.

  11. Numerical Solutions of the Nonlinear Fractional-Order Brusselator System by Bernstein Polynomials

    PubMed Central

    Khan, Rahmat Ali; Tajadodi, Haleh; Johnston, Sarah Jane

    2014-01-01

    In this paper we propose the Bernstein polynomials to achieve the numerical solutions of nonlinear fractional-order chaotic system known by fractional-order Brusselator system. We use operational matrices of fractional integration and multiplication of Bernstein polynomials, which turns the nonlinear fractional-order Brusselator system to a system of algebraic equations. Two illustrative examples are given in order to demonstrate the accuracy and simplicity of the proposed techniques. PMID:25485293

  12. Microencapsulation of citronella oil for mosquito-repellent application: formulation and in vitro permeation studies.

    PubMed

    Solomon, B; Sahle, F F; Gebre-Mariam, T; Asres, K; Neubert, R H H

    2012-01-01

    Citronella oil (CO) has been reported to possess a mosquito-repellent action. However, its application in topical preparations is limited due to its rapid volatility. The objective of this study was therefore to reduce the rate of evaporation of the oil via microencapsulation. Microcapsules (MCs) were prepared using gelatin simple coacervation method and sodium sulfate (20%) as a coacervating agent. The MCs were hardened with a cross-linking agent, formaldehyde (37%). The effects of three variables, stirring rate, oil loading and the amount of cross-linking agent, on encapsulation efficiency (EE, %) were studied. Response surface methodology was employed to optimize the EE (%), and a polynomial regression model equation was generated. The effect of the amount of cross-linker was insignificant on EE (%). The response surface plot constructed for the polynomial equation provided an optimum area. The MCs under the optimized conditions provided EE of 60%. The optimized MCs were observed to have a sustained in vitro release profile (70% of the content was released at the 10th hour of the study) with minimum initial burst effect. Topical formulations of the microencapsulated oil and non-microencapsulated oil were prepared with different bases, white petrolatum, wool wax alcohol, hydrophilic ointment (USP) and PEG ointment (USP). In vitro membrane permeation of CO from the ointments was evaluated in Franz diffusion cells using cellulose acetate membrane at 32 °C, with the receptor compartment containing a water-ethanol solution (50:50). The receptor phase samples were analyzed with GC/MS, using citronellal as a reference standard. The results showed that microencapsulation decreased membrane permeation of the CO by at least 50%. The amount of CO permeated was dependent on the type of ointment base used; PEG base exhibited the highest degree of release. Therefore, microencapsulation reduces membrane permeation of CO while maintaining a constant supply of the oil. 

  13. Spillover in the Academy: Marriage Stability and Faculty Evaluations.

    ERIC Educational Resources Information Center

    Ludlow, Larry H.; Alvarez-Salvat, Rose M.

    2001-01-01

    Studied the spillover between family and work by examining the link between marital status and work performance across marriage, divorce, and remarriage. A polynomial regression model was fit to the data from 78 evaluations of an individual professor, and a cubic curve through the 3 periods was statistically significant. (SLD)

  14. The Promise and Pitfalls of Making Connections in Mathematics

    ERIC Educational Resources Information Center

    Fyfe, Emily R.; Alibali, Martha W.; Nathan, Mitchell J.

    2017-01-01

    Making connections during math instruction is a recommended practice, but may increase the difficulty of the lesson. We used an avatar video instructor to qualitatively examine the role of linking multiple representations for 24 middle school students learning algebra. Students were taught how to solve polynomial multiplication problems, such as…

  15. Efficient algorithms for construction of recurrence relations for the expansion and connection coefficients in series of Al-Salam Carlitz I polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2005-12-01

    Two formulae expressing explicitly the derivatives and moments of Al-Salam-Carlitz I polynomials of any degree and for any order in terms of the Al-Salam-Carlitz I polynomials themselves are proved. Two other formulae for the expansion coefficients of general-order derivatives D_q^p f(x), and for the moments x^ℓ D_q^p f(x), of an arbitrary function f(x) in terms of its original expansion coefficients are also obtained. Application of these formulae for solving q-difference equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Al-Salam-Carlitz I polynomials and any system of basic hypergeometric orthogonal polynomials, belonging to the q-Hahn class, is described.

  16. Exact traveling-wave and spatiotemporal soliton solutions to the generalized (3+1)-dimensional Schrödinger equation with polynomial nonlinearity of arbitrary order.

    PubMed

    Petrović, Nikola Z; Belić, Milivoj; Zhong, Wei-Ping

    2011-02-01

    We obtain exact traveling wave and spatiotemporal soliton solutions to the generalized (3+1)-dimensional nonlinear Schrödinger equation with variable coefficients and polynomial Kerr nonlinearity of an arbitrarily high order. Exact solutions, given in terms of Jacobi elliptic functions, are presented for the special cases of cubic-quintic and septic models. We demonstrate that the widely used method for finding exact solutions in terms of Jacobi elliptic functions is not applicable to the nonlinear Schrödinger equation with saturable nonlinearity.

  17. Algebraic criteria for positive realness relative to the unit circle.

    NASA Technical Reports Server (NTRS)

    Siljak, D. D.

    1973-01-01

    A definition is presented of the circle positive realness of real rational functions relative to the unit circle in the complex variable plane. The problem of testing this kind of positive realness is reduced to the algebraic problem of determining the distribution of zeros of a real polynomial with respect to and on the unit circle. Such reformulation of the problem avoids the search for explicit information about imaginary poles of rational functions. The stated algebraic problem is solved by applying the polynomial criteria of Marden (1966) and Jury (1964), and a completely recursive algorithm for circle positive realness is obtained.
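As a numeric stand-in for the algebraic Marden/Jury criteria the paper applies, the zero distribution of a real polynomial with respect to the unit circle can be checked directly from its roots:

```python
import numpy as np

def unit_circle_zero_distribution(coeffs, tol=1e-9):
    """Count zeros of a real polynomial inside, on, and outside the unit
    circle numerically (a simple stand-in for the recursive algebraic
    criteria in the paper). coeffs are given in descending powers."""
    mags = np.abs(np.roots(coeffs))
    inside = int(np.sum(mags < 1 - tol))
    on = int(np.sum(np.abs(mags - 1) <= tol))
    outside = int(np.sum(mags > 1 + tol))
    return inside, on, outside
```

Unlike the tabular Jury/Marden tests, a root-based check cannot certify boundary zeros exactly in floating point, which is precisely why the algebraic criteria matter for zeros on the circle.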

  18. Active Subspaces of Airfoil Shape Parameterizations

    NASA Astrophysics Data System (ADS)

    Grey, Zachary J.; Constantine, Paul G.

    2018-05-01

    Design and optimization benefit from understanding the dependence of a quantity of interest (e.g., a design objective or constraint function) on the design variables. A low-dimensional active subspace, when present, identifies important directions in the space of design variables; perturbing a design along the active subspace associated with a particular quantity of interest changes that quantity more, on average, than perturbing the design orthogonally to the active subspace. This low-dimensional structure provides insights that characterize the dependence of quantities of interest on design variables. Airfoil design in a transonic flow field with a parameterized geometry is a popular test problem for design methodologies. We examine two particular airfoil shape parameterizations, PARSEC and CST, and study the active subspaces present in two common design quantities of interest, transonic lift and drag coefficients, under each shape parameterization. We mathematically relate the two parameterizations with a common polynomial series. The active subspaces enable low-dimensional approximations of lift and drag that relate to physical airfoil properties. In particular, we obtain and interpret a two-dimensional approximation of both transonic lift and drag, and we show how these approximations inform a multi-objective design problem.
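The active-subspace construction referenced above is standard: eigendecompose the average outer product of sampled gradients of the quantity of interest and keep the leading eigenvectors. A minimal sketch:

```python
import numpy as np

def active_subspace(gradients, k=1):
    """Estimate a k-dimensional active subspace from sampled gradients:
    form C = (1/N) * sum_i g_i g_i^T and return the k leading eigenpairs.
    Directions with large eigenvalues are those along which the quantity
    of interest changes most, on average."""
    G = np.asarray(gradients)            # N x m matrix of gradient samples
    C = G.T @ G / G.shape[0]             # average outer product of gradients
    w, V = np.linalg.eigh(C)
    order = np.argsort(w)[::-1]          # sort eigenvalues descending
    return w[order][:k], V[:, order][:, :k]
```

A gap after the k-th eigenvalue justifies approximating lift or drag as a function of only k linear combinations of the shape parameters, as the paper does with k = 2.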

  19. A bispectral q-hypergeometric basis for a class of quantum integrable models

    NASA Astrophysics Data System (ADS)

    Baseilhac, Pascal; Martin, Xavier

    2018-01-01

    For the class of quantum integrable models generated from the q-Onsager algebra, a basis of bispectral multivariable q-orthogonal polynomials is exhibited. In the first part, it is shown that the multivariable Askey-Wilson polynomials with N variables and N + 3 parameters introduced by Gasper and Rahman [Dev. Math. 13, 209 (2005)] generate a family of infinite dimensional modules for the q-Onsager algebra, whose fundamental generators are realized in terms of the multivariable q-difference and difference operators proposed by Iliev [Trans. Am. Math. Soc. 363, 1577 (2011)]. Raising and lowering operators extending those of Sahi [SIGMA 3, 002 (2007)] are also constructed. In the second part, finite dimensional modules are constructed and studied for a certain class of parameters and if the N variables belong to a discrete support. In this case, the bispectral property finds a natural interpretation within the framework of tridiagonal pairs. In the third part, eigenfunctions of the q-Dolan-Grady hierarchy are considered in the polynomial basis. In particular, invariant subspaces are identified for certain conditions generalizing Nepomechie's relations. In the fourth part, the analysis is extended to the special case q = 1. This framework provides a q-hypergeometric formulation of quantum integrable models such as the open XXZ spin chain with generic integrable boundary conditions (q ≠ 1).

  20. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, local polynomial fitting is applied to estimate the heteroscedastic function, then the coefficients of the regression model are obtained by using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Due to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function. Therefore, we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
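A hedged sketch of the two-stage idea: a Nadaraya-Watson smoother of squared residuals stands in here for the paper's local polynomial estimator of the variance function, followed by the generalized (weighted) least-squares refit.

```python
import numpy as np

def two_stage_wls(x, X, y, bandwidth=0.5):
    """Two-stage heteroscedastic regression sketch:
    (1) estimate the variance function nonparametrically from squared OLS
        residuals with a Gaussian-kernel smoother (a local-constant stand-in
        for the paper's local polynomial fit);
    (2) refit the coefficients by weighted (generalized) least squares."""
    beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
    r2 = (y - X @ beta_ols) ** 2                 # squared residuals
    d = (x[:, None] - x[None, :]) / bandwidth
    W = np.exp(-0.5 * d**2)
    sigma2 = W @ r2 / W.sum(axis=1)              # smoothed variance estimate
    w = 1.0 / np.maximum(sigma2, 1e-12)          # WLS weights
    Xw = X * w[:, None]
    beta_wls = np.linalg.solve(X.T @ Xw, Xw.T @ y)
    return beta_wls, sigma2
```

Because the variance function is estimated rather than assumed, no preliminary heteroscedasticity test is needed, which is the improvement over the traditional two-stage method that the abstract highlights.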

  1. Recursive approach to the moment-based phase unwrapping method.

    PubMed

    Langley, Jason A; Brice, Robert G; Zhao, Qun

    2010-06-01

    The moment-based phase unwrapping algorithm approximates the phase map as a product of Gegenbauer polynomials, but the weight function for the Gegenbauer polynomials generates artificial singularities along the edge of the phase map. A method is presented to remove the singularities inherent to the moment-based phase unwrapping algorithm by approximating the phase map as a product of two one-dimensional Legendre polynomials and applying a recursive property of derivatives of Legendre polynomials. The proposed phase unwrapping algorithm is tested on simulated and experimental data sets. The results are then compared to those of PRELUDE 2D, a widely used phase unwrapping algorithm, and a Chebyshev-polynomial-based phase unwrapping algorithm. It was found that the proposed phase unwrapping algorithm provides results that are comparable to those obtained by using PRELUDE 2D and the Chebyshev phase unwrapping algorithm.
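The abstract's representation of the phase map as a product of one-dimensional Legendre polynomials can be sketched with numpy's Legendre utilities; a plain least-squares fit on the 2-D Legendre Vandermonde matrix stands in here for the paper's moment and recursion machinery.

```python
import numpy as np
from numpy.polynomial import legendre as leg

def legendre_surface_fit(x, y, phase, deg=(4, 4)):
    """Approximate a (unwrapped) phase map as a sum of products of 1-D
    Legendre polynomials in x and y. Coordinates are assumed scaled to
    [-1, 1]; returns the coefficient matrix c[i, j] multiplying
    P_i(x) * P_j(y) and the fitted surface at the input points."""
    A = leg.legvander2d(x, y, deg)                 # basis evaluated at points
    coef, *_ = np.linalg.lstsq(A, phase, rcond=None)
    c = coef.reshape(deg[0] + 1, deg[1] + 1)
    return c, leg.legval2d(x, y, c)
```

Working on [-1, 1] with Legendre products avoids the boundary singularities of the Gegenbauer weight that the abstract describes.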

  2. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
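The CTP construction described above is straightforward to sketch: take the zeros of a one-dimensional Chebyshev polynomial and form their full tensor grid (CCM, per the abstract, would then randomly subsample these points).

```python
import numpy as np
from itertools import product

def chebyshev_zeros(n):
    """Zeros of the degree-n Chebyshev polynomial of the first kind on [-1, 1]."""
    k = np.arange(1, n + 1)
    return np.cos((2 * k - 1) * np.pi / (2 * n))

def ctp_samples(n, dim):
    """Chebyshev tensor product (CTP) sampling: the full n**dim grid formed
    by the per-dimension Chebyshev zeros."""
    nodes = chebyshev_zeros(n)
    return np.array(list(product(nodes, repeat=dim)))
```

The grid grows as n**dim, which is why the abstract's CCM subsampling (and comparisons against Smolyak sparse grids and Hammersley sequences) matter in higher dimensions.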

  3. A solver for General Unilateral Polynomial Matrix Equation with Second-Order Matrices Over Prime Finite Fields

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-03-01

    The paper first considers, from a practical point of view, the problem of finding solvents for arbitrary unilateral polynomial matrix equations with second-order matrices over prime finite fields: we implement a solver for this problem. The solver's algorithm has two steps: the first finds solvents having Jordan normal form (JNF); the second finds solvents among the remaining matrices. The first step reduces to finding the roots of ordinary polynomials over finite fields; the second is essentially an exhaustive search. The first step's algorithms make essential use of the theory of polynomial matrices. We estimate the practical duration of computations using our software implementation and answer some questions of theoretical interest (for example, that one cannot construct a unilateral matrix polynomial over a finite field having an arbitrary predefined number of solvents).

  4. Quantitative Tomography for Continuous Variable Quantum Systems

    NASA Astrophysics Data System (ADS)

    Landon-Cardinal, Olivier; Govia, Luke C. G.; Clerk, Aashish A.

    2018-03-01

    We present a continuous variable tomography scheme that reconstructs the Husimi Q function (Wigner function) by Lagrange interpolation, using measurements of the Q function (Wigner function) at the Padua points, conjectured to be optimal sampling points for two-dimensional reconstruction. Our approach drastically reduces the number of measurements required compared to using equidistant points on a regular grid, although reanalysis of such experiments is possible. The reconstruction algorithm produces a reconstructed function with exponentially decreasing error and quasilinear runtime in the number of Padua points. Moreover, using the interpolating polynomial of the Q function, we present a technique to directly estimate the density matrix elements of the continuous variable state, with only a linear propagation of input measurement error. Furthermore, we derive a state-independent analytical bound on this error, such that our estimate of the density matrix is accompanied by a measure of its uncertainty.

  5. Recurrences and explicit formulae for the expansion and connection coefficients in series of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2004-08-01

    A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.
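As a quick numerical companion, Bessel polynomials can be generated from their standard three-term recurrence, which gives a ground truth against which explicit derivative or connection formulae like those above could be checked (the helper name is ours):

```python
from numpy.polynomial import Polynomial as P

def bessel_poly(n):
    """Bessel polynomial y_n from the three-term recurrence
    y_{n+1}(x) = (2n+1) x y_n(x) + y_{n-1}(x), with y_0 = 1, y_1 = x + 1."""
    y_prev, y_curr = P([1]), P([1, 1])
    if n == 0:
        return y_prev
    for k in range(1, n):
        y_prev, y_curr = y_curr, P([0, 2 * k + 1]) * y_curr + y_prev
    return y_curr

# y_3(x) = 15x^3 + 15x^2 + 6x + 1 (coefficients in ascending order)
print(bessel_poly(3).coef.tolist())  # [1.0, 6.0, 15.0, 15.0]
```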

  6. The hit problem for symmetric polynomials over the Steenrod algebra

    NASA Astrophysics Data System (ADS)

    Janfada, A. S.; Wood, R. M. W.

    2002-09-01

    We cite [18] for references to work on the hit problem for the polynomial algebra P(n) = F_2[x_1, …, x_n] = ⊕_{d≥0} P^d(n), viewed as a graded left module over the Steenrod algebra A at the prime 2. The grading is by the homogeneous polynomials P^d(n) of degree d in the n variables x_1, …, x_n of grading 1. The present article investigates the hit problem for the A-submodule of symmetric polynomials B(n) = P(n)^{Σ_n}, where Σ_n denotes the symmetric group on n letters acting on the right of P(n). Among the main results is the symmetric version of the well-known Peterson conjecture. For a positive integer d, let μ(d) denote the smallest value of k for which d = Σ_{i=1}^{k} (2^{λ_i} − 1), where λ_i ≥ 0.
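The function μ(d) in the statement above is simple to compute by dynamic programming over the allowed summands 2^λ − 1 (a small sketch; summands with λ = 0 contribute nothing, so only λ ≥ 1 is used):

```python
def mu(d):
    """Smallest k such that d is a sum of k numbers of the form 2**v - 1.
    (Summands with v = 0 contribute nothing, so only v >= 1 is used.)"""
    parts, v = [], 1
    while 2 ** v - 1 <= d:
        parts.append(2 ** v - 1)
        v += 1
    best = [0] + [d] * d            # best[r] = fewest parts summing to r
    for r in range(1, d + 1):
        best[r] = 1 + min(best[r - p] for p in parts if p <= r)
    return best[d]

print([mu(d) for d in range(1, 11)])  # [1, 2, 1, 2, 3, 2, 1, 2, 3, 2]
```

For example μ(5) = 3 because 5 = 3 + 1 + 1 and no two summands of the form 2^λ − 1 add up to 5.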

  7. Design of hybrid radial basis function neural networks (HRBFNNs) realized with the aid of hybridization of fuzzy clustering method (FCM) and polynomial neural networks (PNNs).

    PubMed

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2014-12-01

    In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering used to form information granulation is employed to overcome a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, genetic algorithm (GA) is exploited here to optimize the essential design parameters of the model (including fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and a collection of the specific subset of input PFNs) of the network. To reduce dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments, in which we use several modeling benchmarks of different levels of complexity (different number of input variables and the number of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy in comparison to the accuracy produced by some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.

  8. On the Computation of Comprehensive Boolean Gröbner Bases

    NASA Astrophysics Data System (ADS)

    Inoue, Shutaro

    We show that a comprehensive Boolean Gröbner basis of an ideal I in a Boolean polynomial ring B(bar A, bar X) with main variables bar X and parameters bar A can be obtained by simply computing a usual Boolean Gröbner basis of I, regarding both bar X and bar A as variables under a certain block term order with bar X ≫ bar A. This result, together with the fact that a finite Boolean ring is isomorphic to a direct product of the Galois field GF(2), enables us to compute a comprehensive Boolean Gröbner basis by computing only the corresponding Gröbner bases in a polynomial ring over GF(2). Our implementation in the computer algebra system Risa/Asir shows that our method is extremely efficient compared with existing algorithms for computing comprehensive Boolean Gröbner bases.
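A rough flavour of the construction can be reproduced with a general-purpose system: a Boolean ring satisfies v² = v for every variable, so one can adjoin these field equations and compute an ordinary Gröbner basis over GF(2) with an order placing the main variables above the parameters. The sketch below uses sympy as a stand-in for Risa/Asir, with a hypothetical example ideal; lex order here merely imitates the block order bar X ≫ bar A.

```python
from sympy import symbols, groebner

# Boolean polynomial ring sketch: work in GF(2)[x, y, a] and adjoin the
# field equations v^2 + v = 0, so every variable is idempotent (Boolean).
x, y, a = symbols('x y a')
ideal = [x*y + a*x, x + a]                 # hypothetical example ideal
field_eqs = [v**2 + v for v in (x, y, a)]  # v^2 = v over GF(2)
G = groebner(ideal + field_eqs, x, y, a,   # lex with x > y > a: main
             modulus=2, order='lex')       # variables precede parameter a
print(list(G.exprs))
```

Reducing any element of the ideal by G gives remainder zero, which is the membership test a Gröbner basis provides.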

  9. Comparison of yellow poplar growth models on the basis of derived growth analysis variables

    Treesearch

    Keith F. Jensen; Daniel A. Yaussy

    1986-01-01

    Quadratic and cubic polynomials, and Gompertz and Richards asymptotic models were fitted to yellow poplar growth data. These data included height, leaf area, leaf weight and new shoot height for 23 weeks. Seven growth analysis variables were estimated from each function. The Gompertz and Richards models fitted the data best and provided the most accurate derived...

  10. Fock expansion of multimode pure Gaussian states

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cariolaro, Gianfranco; Pierobon, Gianfranco, E-mail: gianfranco.pierobon@unipd.it

    2015-12-15

    The Fock expansion of multimode pure Gaussian states is derived starting from their representation as displaced and squeezed multimode vacuum states. The approach is new and appears to be simpler and more general than previous ones starting from the phase-space representation given by the characteristic or Wigner function. Fock expansion is performed in terms of easily evaluable two-variable Hermite–Kampé de Fériet polynomials. A relatively simple and compact expression for the joint statistical distribution of the photon numbers in the different modes is obtained. In particular, this result enables one to give a simple characterization of separable and entangled states, as shown for two-mode and three-mode Gaussian states.

  11. Affine theory of gravitation

    NASA Astrophysics Data System (ADS)

    Popławski, Nikodem

    2014-01-01

    We propose a theory of gravitation, in which the affine connection is the only dynamical variable describing the gravitational field. We construct a simple dynamical Lagrangian density that is entirely composed from the connection, via its curvature and torsion, and is a polynomial function of its derivatives. It is given by the contraction of the Ricci tensor with a tensor which is inverse to the symmetric, contracted square of the torsion tensor, . We vary the total action for the gravitational field and matter with respect to the affine connection, assuming that the matter fields couple to the connection only through . We derive the resulting field equations and show that they are identical with the Einstein equations of general relativity with a nonzero cosmological constant if the tensor is regarded as proportional to the metric tensor. The cosmological constant is simply a constant of proportionality between the two tensors, which together with and provides a natural system of units in gravitational physics. This theory therefore provides a physical construction of the metric as a polynomial function of the connection, and explains dark energy as an intrinsic property of spacetime.

  12. Polynomials for crystal frameworks and the rigid unit mode spectrum

    PubMed Central

    Power, S. C.

    2014-01-01

    To each discrete translationally periodic bar-joint framework in , we associate a matrix-valued function defined on the d-torus. The rigid unit mode (RUM) spectrum of is defined in terms of the multi-phases of phase-periodic infinitesimal flexes and is shown to correspond to the singular points of the function and also to the set of wavevectors of harmonic excitations which have vanishing energy in the long wavelength limit. To a crystal framework in Maxwell counting equilibrium, which corresponds to being square, the determinant of gives rise to a unique multi-variable polynomial . For ideal zeolites, the algebraic variety of zeros of on the d-torus coincides with the RUM spectrum. The matrix function is related to other aspects of idealized framework rigidity and flexibility, and in particular leads to an explicit formula for the number of supercell-periodic floppy modes. In the case of certain zeolite frameworks in dimensions two and three, direct proofs are given to show the maximal floppy mode property (order N). In particular, this is the case for the cubic symmetry sodalite framework and some other idealized zeolites. PMID:24379422

  13. New Formulae for the High-Order Derivatives of Some Jacobi Polynomials: An Application to Some High-Order Boundary Value Problems

    PubMed Central

    Abd-Elhameed, W. M.

    2014-01-01

    This paper is concerned with deriving some new formulae that express explicitly the high-order derivatives of Jacobi polynomials whose parameter difference is one or two, of any degree and of any order, in terms of the corresponding Jacobi polynomials. The derivative formulae for Chebyshev polynomials of the third and fourth kinds, of any degree and of any order, in terms of the corresponding Chebyshev polynomials are deduced as special cases. Some new reduction formulae for summing some terminating hypergeometric functions of unit argument are also deduced. As an application, and with the aid of the newly introduced derivative formulae, an algorithm for solving special sixth-order boundary value problems is implemented by applying the Galerkin method. A numerical example is presented in the hope of ascertaining the validity and applicability of the proposed algorithms. PMID:25386599

  14. Combining freeform optics and curved detectors for wide field imaging: a polynomial approach over squared aperture.

    PubMed

    Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe

    2017-06-26

    In recent years, significant progress has been achieved in the design and fabrication of optical systems based on freeform optical surfaces. They make it possible to build fast, wide-angle and high-resolution systems that are very compact and free of obscuration. However, design techniques for freeform surfaces remain underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, developed earlier for the tasks of wavefront reconstruction, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.
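The orthogonality over the square that motivates the Legendre choice is easy to verify numerically. The sketch below checks that two distinct 2D Legendre modes are orthogonal on [-1, 1]² under Gauss-Legendre quadrature (illustrative only; the telescope design itself is of course not reproduced):

```python
import numpy as np
from numpy.polynomial import legendre as L

def mode(m, n, x, y):
    """2D Legendre mode P_m(x) * P_n(y)."""
    cm = np.zeros(m + 1); cm[m] = 1
    cn = np.zeros(n + 1); cn[n] = 1
    return L.legval(x, cm) * L.legval(y, cn)

# Gauss-Legendre quadrature over the square [-1, 1]^2
nodes, weights = L.leggauss(10)
X, Y = np.meshgrid(nodes, nodes)
W = np.outer(weights, weights)

ip = np.sum(W * mode(2, 3, X, Y) * mode(1, 3, X, Y))  # distinct modes
nrm = np.sum(W * mode(2, 3, X, Y) ** 2)               # squared norm of one mode
print(round(abs(ip), 12), round(nrm, 6))  # 0.0 0.114286 (= 4/35)
```

The squared norm is (2/5)(2/7) = 4/35, the product of the 1D Legendre norms, exactly as the tensor-product construction predicts.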

  15. Quantization of gauge fields, graph polynomials and graph homology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kreimer, Dirk, E-mail: kreimer@physik.hu-berlin.de; Sars, Matthias; Suijlekom, Walter D. van

    2013-09-15

    We review quantization of gauge fields using algebraic properties of 3-regular graphs. We derive the Feynman integrand at n loops for a non-abelian gauge theory quantized in a covariant gauge from scalar integrands for connected 3-regular graphs, obtained from the two Symanzik polynomials. The transition to the full gauge theory amplitude is obtained by the use of a third, new, graph polynomial, the corolla polynomial. This effectively implies a covariant quantization without ghosts, where all the relevant signs of the ghost sector are incorporated in a double complex furnished by the corolla polynomial (we call it cycle homology) and by graph homology. Highlights: •We derive gauge theory Feynman rules from scalar field theory with 3-valent vertices. •We clarify the role of graph homology and cycle homology. •We use parametric renormalization and the new corolla polynomial.

  16. Polynomial interpolation and sums of powers of integers

    NASA Astrophysics Data System (ADS)

    Cereceda, José Luis

    2017-02-01

    In this note, we revisit the problem of polynomial interpolation and explicitly construct two polynomials in n of degree k + 1, P_k(n) and Q_k(n), such that P_k(n) = Q_k(n) = f_k(n) for n = 1, 2,… , k, where f_k(1), f_k(2),… , f_k(k) are k arbitrarily chosen (real or complex) values. Then, we focus on the case that f_k(n) is given by the sum of powers of the first n positive integers, S_k(n) = 1^k + 2^k + ⋯ + n^k, and show that S_k(n) admits the polynomial representations S_k(n) = P_k(n) and S_k(n) = Q_k(n) for all n = 1, 2,… , and k ≥ 1, where the first representation involves the Eulerian numbers, and the second one the Stirling numbers of the second kind. Finally, we consider yet another polynomial formula for S_k(n) alternative to the well-known formula of Bernoulli.
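The Stirling-number representation mentioned above can be checked directly. The sketch below implements the standard identity S_k(n) = Σ_j S(k, j)·j!·C(n+1, j+1), consistent with the abstract (function names are ours):

```python
from math import comb, factorial

def stirling2(k, j):
    """Stirling number of the second kind S(k, j), by inclusion-exclusion."""
    return sum((-1) ** (j - i) * comb(j, i) * i ** k
               for i in range(j + 1)) // factorial(j)

def power_sum(k, n):
    """S_k(n) = 1^k + 2^k + ... + n^k via the Stirling-number representation."""
    return sum(stirling2(k, j) * factorial(j) * comb(n + 1, j + 1)
               for j in range(1, k + 1))

print([power_sum(3, n) for n in range(1, 6)])  # [1, 9, 36, 100, 225] = (n(n+1)/2)^2
```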

  17. Assessment of Hybrid High-Order methods on curved meshes and comparison with discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Botti, Lorenzo; Di Pietro, Daniele A.

    2018-10-01

    We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.

  18. Multi-indexed Meixner and little q-Jacobi (Laguerre) polynomials

    NASA Astrophysics Data System (ADS)

    Odake, Satoru; Sasaki, Ryu

    2017-04-01

    As the fourth stage of the project multi-indexed orthogonal polynomials, we present the multi-indexed Meixner and little q-Jacobi (Laguerre) polynomials in the framework of ‘discrete quantum mechanics’ with real shifts defined on the semi-infinite lattice in one dimension. They are obtained, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier, from the quantum mechanical systems corresponding to the original orthogonal polynomials by multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of virtual state vectors. The virtual state vectors are the solutions of the matrix Schrödinger equation on all the lattice points having negative energies and infinite norm. This is in good contrast to the (q-)Racah systems defined on a finite lattice, in which the ‘virtual state’ vectors satisfy the matrix Schrödinger equation except for one of the two boundary points.

  19. Regularization of the Perturbed Spatial Restricted Three-Body Problem by L-Transformations

    NASA Astrophysics Data System (ADS)

    Poleshchikov, S. M.

    2018-03-01

    Equations of motion for the perturbed circular restricted three-body problem have been regularized in canonical variables in a moving coordinate system. Two different L-matrices of the fourth order are used in the regularization. Conditions for generalized symplecticity of the constructed transform have been checked. In the unperturbed case, the regular equations have a polynomial structure. The regular equations have been numerically integrated using the Runge-Kutta-Fehlberg method. The results of numerical experiments are given for the Earth-Moon system parameters taking into account the perturbation of the Sun for different L-matrices.

  20. Enumerative Algebraic Geometry of Conics

    DTIC Science & Technology

    2008-10-01

    polynomial defining the conic factors into a product of linear polynomials, then the conic is just the union of two lines. Such a conic is said to be...corresponds to the union of two varieties, so [H ] + [H ] will be the class representing the union of two hyperplanes. But the union of two...sets form a topology, the union S′ = S ∪ [(P5)5 × E] is also closed. Now one great fact about projective varieties is that if we have a projection

  1. Nodal-line dynamics via exact polynomial solutions for coherent waves traversing aberrated imaging systems.

    PubMed

    Paganin, David M; Beltran, Mario A; Petersen, Timothy C

    2018-03-01

    We obtain exact polynomial solutions for two-dimensional coherent complex scalar fields propagating through arbitrary aberrated shift-invariant linear imaging systems. These solutions are used to model nodal-line dynamics of coherent fields output by such systems.

  2. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...

  3. Anomalous negative magnetoresistance of two-dimensional electrons

    NASA Astrophysics Data System (ADS)

    Kanter, Jesse; Vitkalov, Sergey; Bykov, A. A.

    2018-05-01

    Effects of temperature T (6-18 K) and variable in situ static disorder on the dissipative resistance of two-dimensional electrons are investigated in GaAs quantum wells placed in a perpendicular magnetic field B⊥. Quantum contributions to the magnetoresistance, leading to quantum positive magnetoresistance (QPMR), are separated by application of an in-plane magnetic field. QPMR decreases considerably with both the temperature and the static disorder and is in good quantitative agreement with theory. The remaining resistance R decreases with the magnetic field, exhibiting an anomalous polynomial dependence on B⊥: [R(B⊥) − R(0)] = A(T, τq) B⊥^η, where the power is η ≈ 1.5 ± 0.1 in a broad range of temperatures and disorder. The disorder is characterized by the electron quantum lifetime τq. The scaling factor A(T, τq) ∼ [κ(τq) + β(τq) T²]⁻¹ depends significantly on both τq and T, where the first term κ ∼ τq^(−1/2) decreases with τq. The second term is proportional to the square of the temperature and diverges with increasing static disorder. Above a critical disorder the anomalous magnetoresistance is absent, and only a positive magnetoresistance, exhibiting no distinct polynomial behavior with the magnetic field, is observed. The presented model accounts for memory effects and yields η = 3/2.
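Extracting an anomalous exponent η from data of this kind is a one-line log-log regression. The sketch below does so on synthetic data generated with η = 1.5 (purely illustrative; the experimental data are not available here):

```python
import numpy as np

# Synthetic magnetoresistance data with the anomalous exponent eta = 1.5
# (illustrative only): R(B) - R(0) = A * B**eta, plus 1% multiplicative noise.
rng = np.random.default_rng(0)
B = np.linspace(0.1, 1.0, 50)
A_true, eta_true = 2.0, 1.5
dR = A_true * B ** eta_true * (1 + 0.01 * rng.standard_normal(B.size))

# The slope of log(dR) versus log(B) estimates the power eta.
eta_fit, log_A_fit = np.polyfit(np.log(B), np.log(dR), 1)
print(round(eta_fit, 2))  # close to 1.5
```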

  4. Theoretical Analysis of Local Search and Simple Evolutionary Algorithms for the Generalized Travelling Salesperson Problem.

    PubMed

    Pourhassan, Mojgan; Neumann, Frank

    2018-06-22

    The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.

  5. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, the recent advances in high performance computing led to a strong reduction of the required computational time to assess the specific absorption rate (SAR) characterizing the human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can request several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. The leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performances of the LARS-Kriging-PC are compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. The LARS-Kriging-PC appears to have better performances than the two other approaches. A significant accuracy improvement is observed compared to the ordinary Kriging or to the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
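The least-angle-regression selection step can be sketched in isolation. Below, probabilists' Hermite polynomials serve as a 1D polynomial chaos basis in a single standard-normal input, and sklearn's Lars picks out the influential terms; the coupling with the universal Kriging model and the leave-one-out selection of the paper are not reproduced, and the toy model and names are ours.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from sklearn.linear_model import Lars

# Toy model: y = 1 + 2*He_2(xi), so only the He_2 column should matter.
rng = np.random.default_rng(1)
xi = rng.standard_normal(200)
y = 1.0 + 2.0 * hermeval(xi, [0.0, 0.0, 1.0])

degree = 5
Phi = np.column_stack([hermeval(xi, np.eye(degree + 1)[d])
                       for d in range(1, degree + 1)])  # columns He_1 .. He_5
sel = Lars(n_nonzero_coefs=2).fit(Phi, y)
print(np.flatnonzero(sel.coef_), round(sel.coef_[1], 3))  # He_2 column dominates
```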

  6. Modeling continuous covariates with a "spike" at zero: Bivariate approaches.

    PubMed

    Jenkner, Carolin; Lorenz, Eva; Becher, Heiko; Sauerbrei, Willi

    2016-07-01

    In epidemiology and clinical research, predictors often take value zero for a large amount of observations while the distribution of the remaining observations is continuous. These predictors are called variables with a spike at zero. Examples include smoking or alcohol consumption. Recently, an extension of the fractional polynomial (FP) procedure, a technique for modeling nonlinear relationships, was proposed to deal with such situations. To indicate whether or not a value is zero, a binary variable is added to the model. In a two stage procedure, called FP-spike, the necessity of the binary variable and/or the continuous FP function for the positive part are assessed for a suitable fit. In univariate analyses, the FP-spike procedure usually leads to functional relationships that are easy to interpret. This paper introduces four approaches for dealing with two variables with a spike at zero (SAZ). The methods depend on the bivariate distribution of zero and nonzero values. Bi-Sep is the simplest of the four bivariate approaches. It uses the univariate FP-spike procedure separately for the two SAZ variables. In Bi-D3, Bi-D1, and Bi-Sub, proportions of zeros in both variables are considered simultaneously in the binary indicators. Therefore, these strategies can account for correlated variables. The methods can be used for arbitrary distributions of the covariates. For illustration and comparison of results, data from a case-control study on laryngeal cancer, with smoking and alcohol intake as two SAZ variables, is considered. In addition, a possible extension to three or more SAZ variables is outlined. A combination of log-linear models for the analysis of the correlation in combination with the bivariate approaches is proposed. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
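The encoding at the heart of a spike-at-zero model is just two design columns: a binary indicator for the zeros and a fractional-polynomial term on the positive part. A minimal sketch (the power 0.5 is an arbitrary member of the usual FP set, and the function name is ours):

```python
import numpy as np

def spike_at_zero_design(x, power=0.5):
    """Design columns for a spike-at-zero covariate: a binary indicator for
    x == 0 plus a fractional-polynomial term x**power on the positive part
    (power 0.5 is an arbitrary choice from the usual FP set)."""
    x = np.asarray(x, dtype=float)
    indicator = (x == 0).astype(float)
    fp_term = np.where(x > 0, x, 1.0) ** power * (x > 0)  # 0 where x == 0
    return np.column_stack([indicator, fp_term])

X = spike_at_zero_design([0.0, 0.0, 4.0, 9.0])
print(X.tolist())  # [[1.0, 0.0], [1.0, 0.0], [0.0, 2.0], [0.0, 3.0]]
```

The bivariate Bi-Sep strategy of the paper amounts to building such a pair of columns separately for each of the two SAZ variables.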

  7. A sequential method for spline approximation with variable knots. [recursive piecewise polynomial signal processing

    NASA Technical Reports Server (NTRS)

    Mier Muth, A. M.; Willsky, A. S.

    1978-01-01

    In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.
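The batch counterpart of this approximation problem is easy to demonstrate: a cubic spline with an interior knot placed at the change point exactly represents a waveform whose highest derivative jumps there. The sketch below shows only this non-sequential fit with a prescribed knot, not the paper's sequential estimation machinery:

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

# A waveform whose highest (third) derivative jumps at t = 0.5: exactly
# representable by a cubic spline with a single interior knot there.
x = np.linspace(0.0, 1.0, 201)
y = np.where(x < 0.5, x ** 3, x ** 3 + 4 * (x - 0.5) ** 3)

spl = LSQUnivariateSpline(x, y, [0.5], k=3)  # prescribed interior knot at 0.5
err = np.max(np.abs(spl(x) - y))
print(err < 1e-6)  # True: the fit recovers the piecewise cubic exactly
```

The sequential problem of the paper is harder precisely because the knot location (here given as 0.5) must itself be detected online.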

  8. Exploring the use of random regression models with legendre polynomials to analyze measures of volume of ejaculate in Holstein bulls.

    PubMed

    Carabaño, M J; Díaz, C; Ugarte, C; Serrano, M

    2007-02-01

    Artificial insemination centers routinely collect records of quantity and quality of semen of bulls throughout the animals' productive period. The goal of this paper was to explore the use of random regression models with orthogonal polynomials to analyze repeated measures of semen production of Spanish Holstein bulls. A total of 8,773 records of volume of first ejaculate (VFE) collected between 12 and 30 mo of age from 213 Spanish Holstein bulls was analyzed under alternative random regression models. Legendre polynomial functions of increasing order (0 to 6) were fitted to the average trajectory, additive genetic and permanent environmental effects. Age at collection and days in production were used as time variables. Heterogeneous and homogeneous residual variances were alternatively assumed. Analyses were carried out within a Bayesian framework. The logarithm of the marginal density and the cross-validation predictive ability of the data were used as model comparison criteria. Based on both criteria, age at collection as a time variable and heterogeneous residuals models are recommended to analyze changes of VFE over time. Both criteria indicated that fitting random curves for genetic and permanent environmental components as well as for the average trajectory improved the quality of models. Furthermore, models with a higher order polynomial for the permanent environmental (5 to 6) than for the genetic components (4 to 5) and the average trajectory (2 to 3) tended to perform best. High-order polynomials were needed to accommodate the highly oscillating nature of the phenotypic values. Heritability and repeatability estimates, disregarding the extremes of the studied period, ranged from 0.15 to 0.35 and from 0.20 to 0.50, respectively, indicating that selection for VFE may be effective at any stage. Small differences among models were observed.
Apart from the extremes, estimated correlations between ages decreased steadily from 0.9 and 0.4 for measures 1 mo apart to 0.4 and 0.2 for most distant measures for additive genetic and phenotypic components, respectively. Further investigation to account for environmental factors that may be responsible for the oscillating observations of VFE is needed.
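The Legendre covariables used by such random regression models are straightforward to construct: the age range is mapped onto [-1, 1] and the polynomials are evaluated there. A minimal sketch (unnormalized Legendre polynomials; many animal-breeding implementations use a normalized variant):

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariables(age, age_min=12.0, age_max=30.0, order=3):
    """Legendre covariables for a random regression model: map age (months)
    onto [-1, 1], then evaluate P_0 .. P_order at the standardized age."""
    z = 2.0 * (np.asarray(age, dtype=float) - age_min) / (age_max - age_min) - 1.0
    return legendre.legvander(z, order)  # columns P_0(z) .. P_order(z)

Phi = legendre_covariables([12, 21, 30])  # standardized ages z = -1, 0, 1
print(np.round(Phi, 3).tolist())
# [[1.0, -1.0, 1.0, -1.0], [1.0, 0.0, -0.5, 0.0], [1.0, 1.0, 1.0, 1.0]]
```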

  9. A model-based 3D phase unwrapping algorithm using Gegenbauer polynomials.

    PubMed

    Langley, Jason; Zhao, Qun

    2009-09-07

    The application of a two-dimensional (2D) phase unwrapping algorithm to a three-dimensional (3D) phase map may result in an unwrapped phase map that is discontinuous in the direction normal to the unwrapped plane. This work investigates the problem of phase unwrapping for 3D phase maps. The phase map is modeled as a product of three one-dimensional Gegenbauer polynomials. The orthogonality of Gegenbauer polynomials and their derivatives on the interval [-1, 1] are exploited to calculate the expansion coefficients. The algorithm was implemented using two well-known Gegenbauer polynomials: Chebyshev polynomials of the first kind and Legendre polynomials. Both implementations of the phase unwrapping algorithm were tested on 3D datasets acquired from a magnetic resonance imaging (MRI) scanner. The first dataset was acquired from a homogeneous spherical phantom. The second dataset was acquired using the same spherical phantom but magnetic field inhomogeneities were introduced by an external coil placed adjacent to the phantom, which provided an additional burden to the phase unwrapping algorithm. Then Gaussian noise was added to generate a low signal-to-noise ratio dataset. The third dataset was acquired from the brain of a human volunteer. The results showed that the Chebyshev implementation and the Legendre implementation of the phase unwrapping algorithm give similar results on the 3D datasets. Both implementations of the phase unwrapping algorithm compare well to PRELUDE 3D, a 3D phase unwrapping software package widely used in functional MRI.
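A one-dimensional caricature of model-based unwrapping conveys the idea: wrapped phase differences are insensitive to wrapping when sampling is dense, so a polynomial model (here a Chebyshev series, a Gegenbauer special case) can be fitted without ever running a conventional unwrapper. This sketch is not the paper's orthogonality-based coefficient estimator, and the global phase constant, being unobservable, is fixed by hand:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Model phase: a quadratic, i.e. a degree-2 Chebyshev (Gegenbauer) series.
x = np.linspace(-1, 1, 400)
true_phase = 8.0 * x ** 2 + 3.0 * x
wrapped = np.angle(np.exp(1j * true_phase))       # wrapped into (-pi, pi]

# Wrapped first differences equal the true ones when sampling is dense,
# so the model can be fitted without any explicit unwrapping step.
d = np.angle(np.exp(1j * np.diff(wrapped)))
est = np.concatenate([[0.0], np.cumsum(d)]) + true_phase[0]

coef = C.chebfit(x, est, 2)                       # degree-2 Chebyshev fit
err = np.max(np.abs(C.chebval(x, coef) - true_phase))
print(err < 1e-6)  # True
```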

  10. Polynomials to model the growth of young bulls in performance tests.

    PubMed

    Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B

    2014-03-01

    The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais+11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit with the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. The fitting of random regression models with different types of polynomials (Legendre polynomials or B-spline) affected neither the genetic parameters estimates nor the ranking of the Nellore young bulls. However, fitting different type of polynomials affected the genetic parameters estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.

  11. Cylinder surface test with Chebyshev polynomial fitting method

    NASA Astrophysics Data System (ADS)

    Yu, Kui-bang; Guo, Pei-ji; Chen, Xi

    2017-10-01

    The Zernike polynomial fitting method is often applied in the testing of optical components and systems to represent the wavefront and surface error over a circular domain. Zernike polynomials are not orthogonal over a rectangular region, which makes them unsuitable for testing optical elements with rectangular apertures, such as cylinder surfaces. Applying Chebyshev polynomials, which are orthogonal over a rectangular area, as a substitute basis in the fitting method solves this problem. For a cylinder surface with a diameter of 50 mm and an F-number of 1/7, a measuring system based on Fizeau interferometry has been designed in Zemax. The expressions of the two-dimensional Chebyshev polynomials have been given, and their relationship with the aberrations has been presented. Furthermore, Chebyshev polynomials are used as basis terms to analyze the rectangular-aperture test data. The coefficients of the different terms are obtained from the test data through the method of least squares. Comparing the Chebyshev spectra under different misalignments shows that each misalignment is independent and has a definite relationship with certain Chebyshev terms. The simulation results show that the Chebyshev polynomial fitting method greatly improves the efficiency of the detection and adjustment in the cylinder surface test.
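
    The least-squares fitting over a rectangular aperture can be sketched with NumPy's 2D Chebyshev basis. The surface below is a hypothetical stand-in (tilt, power, and a cross term), not the paper's measurement data; `chebvander2d` builds the design matrix whose columns are products T_i(x)T_j(y).

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Sketch of rectangular-aperture fitting (assumed setup, not the paper's code):
# least-squares fit of surface data on a rectangle using a 2D Chebyshev basis,
# which remains orthogonal on [-1,1]x[-1,1], unlike Zernike polynomials.
ny, nx = 41, 81
y, x = np.meshgrid(np.linspace(-1, 1, ny), np.linspace(-1, 1, nx), indexing="ij")

# Hypothetical "surface error": tilt + power + a misalignment-like cross term.
surf = 0.3 * x + 0.1 * (2 * y**2 - 1) + 0.05 * x * y

deg = (3, 3)                                   # max Chebyshev degree in x and y
V = C.chebvander2d(x.ravel(), y.ravel(), deg)  # design matrix, columns T_i(x)*T_j(y)
coef, *_ = np.linalg.lstsq(V, surf.ravel(), rcond=None)

# Column ordering is (deg[1]+1)*i + j, so coef[4] is the tilt term T1(x)*T0(y).
residual = np.max(np.abs(V @ coef - surf.ravel()))
```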

  12. Classification of Phylogenetic Profiles for Protein Function Prediction: An SVM Approach

    NASA Astrophysics Data System (ADS)

    Kotaru, Appala Raju; Joshi, Ramesh C.

    Predicting the function of an uncharacterized protein is a major challenge in the post-genomic era due to the problem's complexity and scale. Knowledge of protein function is a crucial link in the development of new drugs, better crops, and even biochemicals such as biofuels. Recently, numerous high-throughput experimental procedures have been invented to investigate the mechanisms leading to the accomplishment of a protein's function, and the phylogenetic profile is one of them. A phylogenetic profile is a representation of a protein that encodes its evolutionary history. In this paper we propose a method for classifying phylogenetic profiles using a supervised machine learning method, support vector machine classification with a radial basis function kernel, to identify functionally linked proteins. We experimentally evaluated the performance of the classifier with the linear kernel and the polynomial kernel, and compared the results with the existing tree kernel. In our study we used proteins of the budding yeast Saccharomyces cerevisiae genome. We generated the phylogenetic profiles of 2465 yeast genes and used the functional annotations available in the MIPS database. Our experiments show that the performance of the radial basis kernel is similar to that of the polynomial kernel in some functional classes, both are better than the linear and tree kernels, and overall the radial basis kernel outperforms the polynomial, linear, and tree kernels. From these results we conclude that it is feasible to use an SVM classifier with a radial basis function kernel to predict gene function from phylogenetic profiles.
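
    The kernel computation underlying such a classifier is compact. The toy profiles below are invented for illustration (not the paper's 2465-gene dataset); a full SVM would pass this kernel to a solver such as scikit-learn's `SVC(kernel='rbf')`.

```python
import numpy as np

# Toy sketch (illustrative, not the paper's pipeline): phylogenetic profiles are
# binary vectors over reference genomes; an RBF kernel scores profile similarity.
profiles = np.array([
    [1, 1, 0, 1, 0, 1],   # hypothetical protein A
    [1, 1, 0, 1, 0, 0],   # hypothetical protein B, co-evolving with A
    [0, 0, 1, 0, 1, 0],   # hypothetical protein C, unrelated pattern
])

def rbf_kernel(X, gamma=0.5):
    """Gram matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-gamma * sq)

K = rbf_kernel(profiles)
# Functionally linked proteins (similar profiles) receive higher kernel values,
# which is what the SVM's decision function exploits.
```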

  13. Low-Complexity Polynomial Channel Estimation in Large-Scale MIMO With Arbitrary Statistics

    NASA Astrophysics Data System (ADS)

    Shariati, Nafiseh; Bjornson, Emil; Bengtsson, Mats; Debbah, Merouane

    2014-10-01

    This paper considers pilot-based channel estimation in large-scale multiple-input multiple-output (MIMO) communication systems, also known as massive MIMO, where there are hundreds of antennas at one side of the link. Motivated by the fact that computational complexity is one of the main challenges in such systems, a set of low-complexity Bayesian channel estimators, coined Polynomial ExpAnsion CHannel (PEACH) estimators, are introduced for arbitrary channel and interference statistics. While the conventional minimum mean square error (MMSE) estimator has cubic complexity in the dimension of the covariance matrices, due to an inversion operation, our proposed estimators significantly reduce this to square complexity by approximating the inverse by an L-degree matrix polynomial. The coefficients of the polynomial are optimized to minimize the mean square error (MSE) of the estimate. We show numerically that near-optimal MSEs are achieved with low polynomial degrees. We also derive the exact computational complexity of the proposed estimators, in terms of floating-point operations (FLOPs), by which we prove that the proposed estimators outperform the conventional estimators in large-scale MIMO systems of practical dimensions while providing reasonable MSEs. Moreover, we show that L need not scale with the system dimensions to maintain a certain normalized MSE. By analyzing different interference scenarios, we observe that the relative MSE loss of using the low-complexity PEACH estimators is smaller in realistic scenarios with pilot contamination. On the other hand, PEACH estimators are not well suited for noise-limited scenarios with high pilot power; therefore, we also introduce the low-complexity diagonalized estimator that performs well in this regime. Finally, we ...
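
    The core trick of replacing a matrix inversion by a low-degree matrix polynomial can be illustrated with a truncated Neumann series. This is an assumed, simplified form of the idea, not the paper's optimized PEACH coefficients; in practice the scaling factor would come from a cheap norm bound rather than an exact spectral norm.

```python
import numpy as np

# Minimal sketch: approximate inv(A) @ b with an L-degree matrix polynomial,
# using only matrix-vector products (square rather than cubic cost per use).
rng = np.random.default_rng(0)
B = rng.normal(size=(50, 50))
A = B @ B.T / 50 + np.eye(50)        # well-conditioned SPD "covariance-like" matrix
b = rng.normal(size=50)

alpha = 1.0 / np.linalg.norm(A, 2)   # scaling so the series below converges
x_exact = np.linalg.solve(A, b)

def poly_inverse_apply(A, b, L, alpha):
    """Evaluate alpha * sum_{k=0}^{L} (I - alpha*A)^k @ b (truncated Neumann series)."""
    term = b.copy()
    acc = b.copy()
    for _ in range(L):
        term = term - alpha * (A @ term)   # next power applied to b: mat-vec only
        acc = acc + term
    return alpha * acc

# Error shrinks as the polynomial degree L grows.
err = [np.linalg.norm(poly_inverse_apply(A, b, L, alpha) - x_exact)
       for L in (2, 8, 32)]
```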

  14. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The regression model comprises several equations, and each equation has two components, one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component. The model can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modelling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households that have internet access and the percentage of households that own a personal computer, and the predictor variables are the percentage of literate people, the percentage of electrification, and the percentage of economic growth. Based on identification of the relationships between the response and predictor variables, economic growth is treated as the nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression applies well here, as indicated by the high coefficient of determination of 90 percent.
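
    For a single response, the semiparametric design matrix combines linear columns for the parametric predictors with truncated-power spline columns for the nonparametric one. The data, knots, and coefficients below are invented for illustration, not the regional socio-economic dataset.

```python
import numpy as np

# Sketch of the semiparametric design (assumed form, not the authors' code):
# a linear term for a parametric predictor plus a linear truncated-power spline
# in the nonparametric predictor, fit by ordinary least squares.
rng = np.random.default_rng(1)
n = 200
x1 = rng.uniform(0, 1, n)                 # parametric predictor (e.g. literacy)
z = rng.uniform(0, 10, n)                 # nonparametric predictor (e.g. growth)
y = 2.0 * x1 + np.where(z > 5, 0.8 * (z - 5), 0.0) + rng.normal(0, 0.05, n)

knots = [2.5, 5.0, 7.5]                   # assumed knot placement
X = np.column_stack(
    [np.ones(n), x1, z]
    + [np.clip(z - k, 0.0, None) for k in knots]   # truncated-power terms (z - k)_+
)
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
r2 = 1 - np.sum((y - X @ beta) ** 2) / np.sum((y - y.mean()) ** 2)
```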

  15. multiUQ: An intrusive uncertainty quantification tool for gas-liquid multiphase flows

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2017-11-01

    Uncertainty quantification (UQ) can improve our understanding of the sensitivity of gas-liquid multiphase flows to variability about inflow conditions and fluid properties, creating a valuable tool for engineers. While non-intrusive UQ methods (e.g., Monte Carlo) are simple and robust, the cost associated with these techniques can render them unrealistic. In contrast, intrusive UQ techniques modify the governing equations by replacing deterministic variables with stochastic variables, adding complexity, but making UQ cost effective. Our numerical framework, called multiUQ, introduces an intrusive UQ approach for gas-liquid flows, leveraging a polynomial chaos expansion of the stochastic variables: density, momentum, pressure, viscosity, and surface tension. The gas-liquid interface is captured using a conservative level set approach, including a modified reinitialization equation which is robust and quadrature free. A least-squares method is leveraged to compute the stochastic interface normal and curvature needed in the continuum surface force method for surface tension. The solver is tested by applying uncertainty to one or two variables and verifying results against the Monte Carlo approach. NSF Grant #1511325.
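
    The polynomial chaos representation at the heart of intrusive UQ can be sketched for a single scalar. This toy uses probabilists' Hermite polynomials in a standard-normal germ with invented coefficients; it is not the multiUQ solver, which carries such expansions through the full flow equations.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as H

# Toy sketch: a stochastic quantity u(xi) stored as Hermite coefficients.
coeffs = np.array([1.2, 0.3, 0.05])      # u = 1.2*He0 + 0.3*He1 + 0.05*He2 (hypothetical)

# Orthogonality E[He_m * He_n] = n! * delta_mn gives moments in closed form,
# which is why intrusive methods avoid repeated Monte Carlo sampling.
mean_pce = coeffs[0]
var_pce = sum(c**2 * math.factorial(k) for k, c in enumerate(coeffs) if k > 0)

# Cross-check against brute-force sampling of the germ.
xi = np.random.default_rng(2).normal(size=200_000)
u = H.hermeval(xi, coeffs)
```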

  16. Chemical Equilibrium and Polynomial Equations: Beware of Roots.

    ERIC Educational Resources Information Center

    Smith, William R.; Missen, Ronald W.

    1989-01-01

    Describes two easily applied mathematical theorems, Budan's rule and Rolle's theorem, that in addition to Descartes's rule of signs and intermediate-value theorem, are useful in chemical equilibrium. Provides examples that illustrate the use of all four theorems. Discusses limitations of the polynomial equation representation of chemical…
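
    Of the four theorems, Descartes's rule of signs is the easiest to mechanize: the number of positive real roots is at most the number of sign changes in the coefficient sequence, and differs from it by an even number. The cubic below is a hypothetical example, not one from the article.

```python
# Sketch of Descartes's rule of signs applied to a coefficient list.
def sign_changes(coeffs):
    """Count sign changes in a coefficient sequence, ignoring zeros."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(a != b for a, b in zip(signs, signs[1:]))

# Hypothetical equilibrium-style cubic: x^3 - 6x^2 + 11x - 6 = (x-1)(x-2)(x-3),
# which indeed has three positive roots, matching its three sign changes.
changes = sign_changes([1, -6, 11, -6])
```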

  17. Nonbinary Tree-Based Phylogenetic Networks.

    PubMed

    Jetten, Laura; van Iersel, Leo

    2018-01-01

    Rooted phylogenetic networks are used to describe evolutionary histories that contain non-treelike evolutionary events such as hybridization and horizontal gene transfer. In some cases, such histories can be described by a phylogenetic base-tree with additional linking arcs, which can, for example, represent gene transfer events. Such phylogenetic networks are called tree-based. Here, we consider two possible generalizations of this concept to nonbinary networks, which we call tree-based and strictly-tree-based nonbinary phylogenetic networks. We give simple graph-theoretic characterizations of tree-based and strictly-tree-based nonbinary phylogenetic networks. Moreover, we show for each of these two classes that it can be decided in polynomial time whether a given network is contained in the class. Our approach also provides a new view on tree-based binary phylogenetic networks. Finally, we discuss two examples of nonbinary phylogenetic networks in biology and show how our results can be applied to them.

  18. Filtrations on Springer fiber cohomology and Kostka polynomials

    NASA Astrophysics Data System (ADS)

    Bellamy, Gwyn; Schedler, Travis

    2018-03-01

    We prove a conjecture which expresses the bigraded Poisson-de Rham homology of the nilpotent cone of a semisimple Lie algebra in terms of the generalized (one-variable) Kostka polynomials, via a formula suggested by Lusztig. This allows us to construct a canonical family of filtrations on the flag variety cohomology, and hence on irreducible representations of the Weyl group, whose Hilbert series are given by the generalized Kostka polynomials. We deduce consequences for the cohomology of all Springer fibers. In particular, this computes the grading on the zeroth Poisson homology of all classical finite W-algebras, as well as the filtration on the zeroth Hochschild homology of all quantum finite W-algebras, and we generalize to all homology degrees. As a consequence, we deduce a conjecture of Proudfoot on symplectic duality, relating in type A the Poisson homology of Slodowy slices to the intersection cohomology of nilpotent orbit closures. In the last section, we give an analogue of our main theorem in the setting of mirabolic D-modules.

  19. Bell-polynomial approach and Wronskian determinant solutions for three sets of differential-difference nonlinear evolution equations with symbolic computation

    NASA Astrophysics Data System (ADS)

    Qin, Bo; Tian, Bo; Wang, Yu-Feng; Shen, Yu-Jia; Wang, Ming

    2017-10-01

    Under investigation in this paper are the Belov-Chaltikian (BC), Leznov and Blaszak-Marciniak (BM) lattice equations, which are associated with the conformal field theory, UToda(m_1,m_2) system and r-matrix, respectively. With symbolic computation, the Bell-polynomial approach is developed to directly bilinearize those three sets of differential-difference nonlinear evolution equations (NLEEs). This Bell-polynomial approach does not rely on any dependent variable transformation, which constitutes the key step and main difficulty of the Hirota bilinear method, and thus has the advantage in the bilinearization of the differential-difference NLEEs. Based on the bilinear forms obtained, the N-soliton solutions are constructed in terms of the N × N Wronskian determinant. Graphic illustrations demonstrate that those solutions, more general than the existing results, permit some new properties, such as the solitonic propagation and interactions for the BC lattice equations, and the nonnegative dark solitons for the BM lattice equations.

  20. Bounding Averages Rigorously Using Semidefinite Programming: Mean Moments of the Lorenz System

    NASA Astrophysics Data System (ADS)

    Goluskin, David

    2018-04-01

    We describe methods for proving bounds on infinite-time averages in differential dynamical systems. The methods rely on the construction of nonnegative polynomials with certain properties, similarly to the way nonlinear stability can be proved using Lyapunov functions. Nonnegativity is enforced by requiring the polynomials to be sums of squares, a condition which is then formulated as a semidefinite program (SDP) that can be solved computationally. Although such computations are subject to numerical error, we demonstrate two ways to obtain rigorous results: using interval arithmetic to control the error of an approximate SDP solution, and finding exact analytical solutions to relatively small SDPs. Previous formulations are extended to allow for bounds depending analytically on parametric variables. These methods are illustrated using the Lorenz equations, a system with three state variables (x, y, z) and three parameters (β, σ, r). Bounds are reported for infinite-time averages of all eighteen moments x^l y^m z^n up to quartic degree that are symmetric under (x, y) ↦ (-x, -y). These bounds apply to all solutions regardless of stability, including chaotic trajectories, periodic orbits, and equilibrium points. The analytical approach yields two novel bounds that are sharp: the mean of z^3 can be no larger than its value of (r-1)^3 at the nonzero equilibria, and the mean of xy^3 must be nonnegative. The interval arithmetic approach is applied at the standard chaotic parameters to bound eleven average moments that all appear to be maximized on the shortest periodic orbit. Our best upper bound on each such average exceeds its value on the maximizing orbit by less than 1%. Many bounds reported here are much tighter than would be possible without computer assistance.

  1. Elucidating the functional relationship between working memory capacity and psychometric intelligence: a fixed-links modeling approach for experimental repeated-measures designs.

    PubMed

    Thomas, Philipp; Rammsayer, Thomas; Schweizer, Karl; Troche, Stefan

    2015-01-01

    Numerous studies reported a strong link between working memory capacity (WMC) and fluid intelligence (Gf), although views differ in respect to how close these two constructs are related to each other. In the present study, we used a WMC task with five levels of task demands to assess the relationship between WMC and Gf by means of a new methodological approach referred to as fixed-links modeling. Fixed-links models belong to the family of confirmatory factor analysis (CFA) and are of particular interest for experimental, repeated-measures designs. With this technique, processes systematically varying across task conditions can be disentangled from processes unaffected by the experimental manipulation. Proceeding from the assumption that experimental manipulation in a WMC task leads to increasing demands on WMC, the processes systematically varying across task conditions can be assumed to be WMC-specific. Processes not varying across task conditions, on the other hand, are probably independent of WMC. Fixed-links models allow for representing these two kinds of processes by two independent latent variables. In contrast to traditional CFA, where a common latent variable is derived from the different task conditions, fixed-links models facilitate a more precise or purified representation of the WMC-related processes of interest. By using fixed-links modeling to analyze data of 200 participants, we identified a non-experimental latent variable, representing processes that remained constant irrespective of the WMC task conditions, and an experimental latent variable which reflected processes that varied as a function of experimental manipulation. This latter variable represents the increasing demands on WMC and, hence, was considered a purified measure of WMC controlled for the constant processes. Fixed-links modeling showed that both the purified measure of WMC (β = .48) and the constant processes involved in the task (β = .45) were related to Gf. Taken together, these two latent variables explained the same portion of variance of Gf as a single latent variable obtained by traditional CFA (β = .65), indicating that traditional CFA causes an overestimation of the effective relationship between WMC and Gf. Thus, fixed-links modeling provides a feasible method for a more valid investigation of the functional relationship between specific constructs.

  2. THEORETICAL p-MODE OSCILLATION FREQUENCIES FOR THE RAPIDLY ROTATING δ SCUTI STAR α OPHIUCHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deupree, Robert G., E-mail: bdeupree@ap.smu.ca

    2011-11-20

    A rotating, two-dimensional stellar model is evolved to match the approximate conditions of α Oph. Both axisymmetric and nonaxisymmetric oscillation frequencies are computed for two-dimensional rotating models which approximate the properties of α Oph. These computed frequencies are compared to the observed frequencies. Oscillation calculations are made assuming the eigenfunction can be fitted with six Legendre polynomials, but comparison calculations with eight Legendre polynomials show the frequencies agree to within about 0.26% on average. The surface horizontal shape of the eigenfunctions for the two assumed numbers of Legendre polynomials agrees less well, but all calculations show significant departures from that of a single Legendre polynomial. It is still possible to determine the large separation, although the small separation is more complicated to estimate. With the addition of the nonaxisymmetric modes with |m| ≤ 4, the frequency space becomes sufficiently dense that it is difficult to comment on the adequacy of the fit of the computed to the observed frequencies. While the nonaxisymmetric frequency mode splitting is no longer uniform, the frequency difference between the frequencies for positive and negative values of the same m remains 2m times the rotation rate.

  3. Integrand reduction for two-loop scattering amplitudes through multivariate polynomial division

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano

    2013-04-01

    We describe the application of a novel approach for the reduction of scattering amplitudes, based on multivariate polynomial division, which we have recently presented. This technique yields the complete integrand decomposition for arbitrary amplitudes, regardless of the number of loops. It allows for the determination of the residue at any multiparticle cut, whose knowledge is a mandatory prerequisite for applying the integrand-reduction procedure. By using the division modulo Gröbner basis, we can derive a simple integrand recurrence relation that generates the multiparticle pole decomposition for integrands of arbitrary multiloop amplitudes. We apply the new reduction algorithm to the two-loop planar and nonplanar diagrams contributing to the five-point scattering amplitudes in N=4 super Yang-Mills and N=8 supergravity in four dimensions, whose numerator functions contain up to rank-two terms in the integration momenta. We determine all polynomial residues parametrizing the cuts of the corresponding topologies and subtopologies. We obtain the integral basis for the decomposition of each diagram from the polynomial form of the residues. Our approach is well suited for a seminumerical implementation, and its general mathematical properties provide an effective algorithm for the generalization of the integrand-reduction method to all orders in perturbation theory.

  4. Photoelectric absorption cross sections with variable abundances

    NASA Technical Reports Server (NTRS)

    Balucinska-Church, Monika; Mccammon, Dan

    1992-01-01

    Polynomial fit coefficients have been obtained for the energy dependences of the photoelectric absorption cross sections of 17 astrophysically important elements. These results allow the calculation of X-ray absorption in the energy range 0.03-10 keV in material with noncosmic abundances.
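
    The mechanics of such a calculation, combining per-element polynomial fits with variable abundances, can be sketched as follows. The coefficients and abundances below are made up purely for illustration; the paper tabulates the real fit coefficients for 17 elements.

```python
import numpy as np

# Mechanics sketch with hypothetical numbers (NOT the paper's coefficients):
# each element's cross section is a polynomial in log10(E), and user-supplied
# abundances weight the elemental contributions to the total absorption.
E_keV = np.array([0.1, 0.5, 1.0, 5.0])
logE = np.log10(E_keV)

# Hypothetical fit coefficients for log10(sigma), highest power first.
coeffs = {"H": [-0.5, -2.0], "O": [-0.8, -1.0]}
abundances = {"H": 1.0, "O": 8.5e-4}       # illustrative abundances relative to H

sigma_total = sum(
    abundances[el] * 10.0 ** np.polyval(c, logE) for el, c in coeffs.items()
)
# Changing the abundance dictionary models absorption in noncosmic material.
```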

  5. Polynomial fuzzy observer designs: a sum-of-squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O

    2012-10-01

    This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be designed separately while still guaranteeing the stability of the overall control system, in addition to convergence of the state-estimation error (via the observer) to zero. Although the separation principle does not hold for the last class (Class III), we propose an algorithm to design a polynomial fuzzy controller and observer satisfying the stability of the overall control system, in addition to convergence of the state-estimation error to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approach over the existing LMI approaches to T-S fuzzy observer designs.

  6. A color-coded vision scheme for robotics

    NASA Technical Reports Server (NTRS)

    Johnson, Kelley Tina

    1991-01-01

    Most vision systems for robotic applications rely entirely on the extraction of information from gray-level images. Humans, however, regularly depend on color to discriminate between objects. Therefore, the inclusion of color in a robot vision system seems a natural extension of the existing gray-level capabilities. A method for robot object recognition using a color-coding classification scheme is discussed. The scheme is based on an algebraic system in which a two-dimensional color image is represented as a polynomial of two variables. The system is then used to find the color contour of objects. In a controlled environment, such as that of the in-orbit space station, a particular class of objects can thus be quickly recognized by its color.
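
    The algebraic idea of treating a color channel as a polynomial in the two image coordinates can be sketched with NumPy's bivariate polynomial evaluator. The coefficients and image size below are assumed for illustration, not taken from the paper's classification scheme.

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Sketch (assumed form, not the paper's scheme): a color channel as a
# two-variable polynomial c(x, y), so smooth color variation across an object
# is captured by a handful of coefficients.
h, w = 32, 32
y, x = np.mgrid[0:h, 0:w] / 31.0      # normalized pixel coordinates in [0, 1]
coeffs = np.array([[0.2, 0.5],        # c(x, y) = 0.2 + 0.5*y + 0.3*x + 0.1*x*y
                   [0.3, 0.1]])
channel = P.polyval2d(x, y, coeffs)   # synthetic single-channel image
```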

  7. Finite state modeling of aeroelastic systems

    NASA Technical Reports Server (NTRS)

    Vepa, R.

    1977-01-01

    A general theory of finite state modeling of aerodynamic loads on thin airfoils and lifting surfaces performing completely arbitrary, small, time-dependent motions in an airstream is developed and presented. The nature of the behavior of the unsteady airloads in the frequency domain is explained, using as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Padé approximant, that is, a rational function of finite-degree polynomials in the Laplace transform variable. The modeling technique is applied to several two-dimensional and three-dimensional airfoils. Circular, elliptic, rectangular and tapered planforms are considered as examples. Identical functions are also obtained for control surfaces for two- and three-dimensional airfoils.
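
    The generic Padé construction behind such rational fits can be sketched from Taylor coefficients. The example recovers the [1/1] approximant of exp(s); this illustrates the mathematical device, not the paper's aerodynamic fitting procedure.

```python
import numpy as np

# Minimal Padé [m/n] construction from Taylor coefficients (generic sketch;
# the paper fits rational functions of the Laplace variable in the same spirit).
def pade(taylor, m, n):
    """Return numerator p (deg m) and denominator q (deg n, q[0] = 1),
    low-order coefficients first, matching the series through degree m + n."""
    c = np.asarray(taylor, dtype=float)
    # Linear conditions on degrees m+1 .. m+n determine q[1..n].
    A = np.array([[c[m + i - j] for j in range(1, n + 1)] for i in range(1, n + 1)])
    rhs = -c[m + 1 : m + n + 1]
    q = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator coefficients follow from the low-degree conditions.
    p = np.array([sum(c[k - j] * q[j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q

# exp(s): Taylor coefficients 1, 1, 1/2, 1/6 give the [1/1] Padé
# approximant (1 + s/2) / (1 - s/2).
p, q = pade([1.0, 1.0, 0.5, 1.0 / 6.0], 1, 1)
```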

  8. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    NASA Astrophysics Data System (ADS)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state are presented. These follow a polynomial form, making it computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with a better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects about the implementation of TEOS-10 in ocean models are discussed.

  9. Scheduling Jobs and a Variable Maintenance on a Single Machine with Common Due-Date Assignment

    PubMed Central

    Wan, Long

    2014-01-01

    We investigate a common due-date assignment scheduling problem with a variable maintenance on a single machine. The goal is to minimize the total earliness, tardiness, and due-date cost. We derive some properties of an optimal solution for our problem. For a special case with identical jobs we propose an optimal polynomial time algorithm, followed by a numerical example. PMID:25147861

  10. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much.

    PubMed

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance.
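
    The two scan orders can be contrasted on a toy target. The bivariate normal below is illustrative only; the paper's results are general bounds, and this sketch just shows that both scans sample the same distribution while ordering updates differently.

```python
import numpy as np

# Toy Gibbs sampler on a bivariate normal with correlation rho.
rho = 0.8
rng = np.random.default_rng(3)

def gibbs(scan, n_sweeps):
    x = y = 0.0
    out = np.empty((n_sweeps, 2))
    for t in range(n_sweeps):
        # Two single-variable updates per sweep: fixed order (systematic scan)
        # or a uniformly random coordinate each time (random scan).
        order = (0, 1) if scan == "systematic" else rng.integers(0, 2, size=2)
        for v in order:
            if v == 0:
                x = rho * y + np.sqrt(1 - rho**2) * rng.normal()  # x | y
            else:
                y = rho * x + np.sqrt(1 - rho**2) * rng.normal()  # y | x
        out[t] = x, y
    return out

# Both scans should recover the target correlation after burn-in.
corrs = {s: np.corrcoef(gibbs(s, 20_000)[2_000:].T)[0, 1]
         for s in ("systematic", "random")}
```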

  11. Scan Order in Gibbs Sampling: Models in Which it Matters and Bounds on How Much

    PubMed Central

    He, Bryan; De Sa, Christopher; Mitliagkas, Ioannis; Ré, Christopher

    2016-01-01

    Gibbs sampling is a Markov Chain Monte Carlo sampling technique that iteratively samples variables from their conditional distributions. There are two common scan orders for the variables: random scan and systematic scan. Due to the benefits of locality in hardware, systematic scan is commonly used, even though most statistical guarantees are only for random scan. While it has been conjectured that the mixing times of random scan and systematic scan do not differ by more than a logarithmic factor, we show by counterexample that this is not the case, and we prove that the mixing times do not differ by more than a polynomial factor under mild conditions. To prove these relative bounds, we introduce a method of augmenting the state space to study systematic scan using conductance. PMID:28344429

  12. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear, models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis on polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
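
    A per-pixel polynomial NUC can be sketched with a simulated calibration: fit a third-order polynomial per pixel that maps its raw response at several flat-field levels onto the common target. The sensor model below (gains, offsets, mild quadratic nonlinearity) is assumed for illustration, not the study's SWIR data.

```python
import numpy as np

# Sketch of per-pixel third-order polynomial non-uniformity correction.
rng = np.random.default_rng(4)
levels = np.linspace(0.1, 0.9, 6)             # calibration flat-field levels
gain = rng.normal(1.0, 0.1, size=16)          # 16 pixels with mismatched gains
offset = rng.normal(0.0, 0.05, size=16)
resp = gain[:, None] * levels[None, :]
raw = offset[:, None] + resp + 0.2 * resp**2  # mild nonlinear detector response

# Per pixel: cubic fit mapping raw counts back to the target flat-field level.
coeffs = np.array([np.polyfit(raw[p], levels, deg=3) for p in range(16)])
corrected = np.array([np.polyval(coeffs[p], raw[p]) for p in range(16)])

# After correction every pixel should reproduce the common target response.
residual_nu = np.max(np.abs(corrected - levels[None, :]))
```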

  13. Sharing Teaching Ideas.

    ERIC Educational Resources Information Center

    Mathematics Teacher, 1985

    1985-01-01

    Discusses: (1) use of matrix techniques to write secret codes (includes ready-to-duplicate worksheets); (2) a method of multiplication and division of polynomials in one variable that is not tedious, time-consuming, or dependent on guesswork; and (3) adding and subtracting rational expressions and solving rational equations. (JN)

  14. Design of state-feedback controllers including sensitivity reduction, with applications to precision pointing

    NASA Technical Reports Server (NTRS)

    Hadass, Z.

    1974-01-01

    The design procedure of feedback controllers is described, and the considerations for the selection of the design parameters are given. The frequency domain properties of single-input single-output systems using state feedback controllers are analyzed, and desirable phase and gain margin properties are demonstrated. Special consideration is given to the design of controllers for tracking systems, especially those designed to track polynomial commands. As an example, a controller was designed for a tracking telescope with a polynomial tracking requirement and some special features such as actuator saturation and multiple measurements, one of which is sampled. The resulting system has a tracking performance comparing favorably with a much more complicated digital aided tracker. The parameter sensitivity reduction was treated by considering the variable parameters as random variables. A performance index is defined as a weighted sum of the state and control covariances that arise from both the random system disturbances and the parameter uncertainties, and is minimized numerically by adjusting a set of free parameters.

  15. Uncertainty analysis for the steady-state flows in a dual throat nozzle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, Q.-Y.; Gottlieb, David; Hesthaven, Jan S.

    2005-03-20

    It is well known that the steady state of an isentropic flow in a dual-throat nozzle with equal throat areas is not unique. In particular there is a possibility that the flow contains a shock wave, whose location is determined solely by the initial condition. In this paper, we consider cases with uncertainty in this initial condition and use generalized polynomial chaos methods to study the steady-state solutions for stochastic initial conditions. Special interest is given to the statistics of the shock location. The polynomial chaos (PC) expansion modes are shown to be smooth functions of the spatial variable x, although each solution realization is discontinuous in the spatial variable x. When the variance of the initial condition is small, the probability density function of the shock location is computed with high accuracy. Otherwise, many terms are needed in the PC expansion to produce reasonable results due to the slow convergence of the PC expansion, caused by non-smoothness in random space.
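
    A minimal numerical sketch of the observation above (no nozzle physics is modeled): a step function whose jump location depends on a standard-normal random input stands in for the shock, and its PC modes come out smooth in x even though every realization is discontinuous:

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

# Toy "shock": u(x, xi) = sign(x - 0.3*xi), xi a standard-normal random input.
nodes, weights = He.hermegauss(40)       # Gauss-Hermite quadrature, probabilists' weight
weights = weights / weights.sum()        # normalize so sums approximate expectations

x = np.linspace(-2.0, 2.0, 81)
u = np.sign(x[None, :] - 0.3 * nodes[:, None])   # realizations: discontinuous in x

# Project onto Hermite polynomials: u_k(x) = E[u He_k] / E[He_k^2], with E[He_k^2] = k!.
order = 8
modes = np.empty((order + 1, x.size))
for k in range(order + 1):
    c = np.zeros(k + 1); c[k] = 1.0
    hk = He.hermeval(nodes, c)
    modes[k] = (weights * hk) @ u / math.factorial(k)

mean = modes[0]    # smooth, monotone in x, though every realization is a step function
```

    The quadrature order, expansion order, and the 0.3 scaling of the shock location are arbitrary illustration choices.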

  16. Cylinder stitching interferometry: with and without overlap regions

    NASA Astrophysics Data System (ADS)

    Peng, Junzheng; Chen, Dingfu; Yu, Yingjie

    2017-06-01

    Since the cylinder surface is closed and periodic in the azimuthal direction, existing stitching methods cannot be used to yield the 360° form map. To address this problem, this paper presents two methods for stitching interferometry of cylinders: one requires overlap regions, and the other does not. For the former, we use the first-order approximation of the cylindrical coordinate transformation to build the stitching model, with which the relative parameters between adjacent sub-apertures can be calculated. For the latter, a set of orthogonal polynomials, termed Legendre-Fourier (LF) polynomials, is developed. With these polynomials, individual sub-aperture data can be expanded as a composition of the inherent form of the partial cylinder surface and additional misalignment parameters. The 360° form map can then be acquired by simultaneously fitting all sub-aperture data with LF polynomials. Finally, the two proposed methods are compared under various conditions. The merits and drawbacks of each stitching method are revealed to provide guidance for acquiring the 360° form map of a precision cylinder.
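
    The paper's exact LF polynomials are not reproduced here, but the general construction they rest on can be sketched (degrees, harmonics, grid, and the synthetic form error below are assumptions for illustration): Legendre polynomials along the normalized axis multiplied by Fourier harmonics in azimuth, fitted to cylinder data by least squares:

```python
import numpy as np
from numpy.polynomial import legendre as L

# Grid on the cylinder: axial coordinate z scaled to [-1, 1], azimuth theta in [0, 2*pi).
nz, nt = 21, 36
z, th = np.meshgrid(np.linspace(-1, 1, nz),
                    np.linspace(0, 2 * np.pi, nt, endpoint=False), indexing='ij')

def lf_basis(z, th, max_deg=3, max_harm=2):
    # Legendre-Fourier-type terms: P_k(z), P_k(z)*cos(m*theta), P_k(z)*sin(m*theta).
    cols = []
    for k in range(max_deg + 1):
        c = np.zeros(k + 1); c[k] = 1.0
        pk = L.legval(z, c)
        cols.append(pk.ravel())
        for m in range(1, max_harm + 1):
            cols.append((pk * np.cos(m * th)).ravel())
            cols.append((pk * np.sin(m * th)).ravel())
    return np.column_stack(cols)

A = lf_basis(z, th)
surf = 0.2 * z ** 2 + 0.05 * np.cos(2 * th) + 0.03 * z * np.sin(th)  # synthetic form error
coef, *_ = np.linalg.lstsq(A, surf.ravel(), rcond=None)
recon = (A @ coef).reshape(z.shape)
```

    The azimuthal harmonics keep the representation periodic, which is why such a basis can close the 360° map where planar stitching bases cannot.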

  17. Slice regular functions of several Clifford variables

    NASA Astrophysics Data System (ADS)

    Ghiloni, R.; Perotti, A.

    2012-11-01

    We introduce a class of slice regular functions of several Clifford variables. Our approach to the definition of slice functions is based on the concept of stem functions of several variables and on the introduction, on real Clifford algebras, of a family of commuting complex structures. The class of slice regular functions includes, in particular, the family of (ordered) polynomials in several Clifford variables. We prove some basic properties of slice and slice regular functions and give examples to illustrate this function theory. In particular, we give integral representation formulas for slice regular functions and a Hartogs-type extension result.

  18. Optimization of isolation of cellulose from orange peel using sodium hydroxide and chelating agents.

    PubMed

    Bicu, Ioan; Mustata, Fanica

    2013-10-15

    Response surface methodology was used to optimize cellulose recovery from orange peel using sodium hydroxide (NaOH) as isolation reagent, and to minimize its ash content using ethylenediaminetetraacetic acid (EDTA) as chelating agent. The independent variables were NaOH charge, EDTA charge and cooking time. Two other parameters were held constant: cooking temperature (98 °C) and liquid-to-solid ratio (7.5). The dependent variables were cellulose yield and ash content. A second-order polynomial model was used for plotting response surfaces and for determining optimum cooking conditions. The analysis of coefficient values for independent variables in the regression equation showed that NaOH and EDTA charges were major factors influencing the cellulose yield and ash content, respectively. Optimum conditions were defined by: NaOH charge 38.2%, EDTA charge 9.56%, and cooking time 317 min. The predicted cellulose yield was 24.06% and ash content 0.69%. A good agreement between the experimental and predicted values was observed. Copyright © 2013 Elsevier Ltd. All rights reserved.
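
    A hedged sketch of the response-surface step (the data below are synthetic; only the rough magnitudes echo the study): fit a second-order polynomial in two of the factors and locate its stationary point, the candidate optimum:

```python
import numpy as np

# Synthetic yield surface: a quadratic in NaOH charge (x1, %) and cooking time (x2, min),
# peaking near x1 = 38, x2 = 320 to echo the reported optimum; all numbers invented.
rng = np.random.default_rng(1)
x1 = rng.uniform(20, 50, 30)
x2 = rng.uniform(100, 400, 30)
y = 24 - 0.01 * (x1 - 38) ** 2 - 0.0001 * (x2 - 320) ** 2 + rng.normal(0, 0.05, 30)

# Second-order polynomial model: y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x2 ** 2, x1 * x2])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# Stationary point of the fitted quadratic = candidate optimum cooking condition.
A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
x_opt = np.linalg.solve(A, -b[1:3])
```

    Negative fitted curvature coefficients confirm the stationary point is a maximum of the fitted surface.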

  19. Explicit analytical expression for the condition number of polynomials in power form

    NASA Astrophysics Data System (ADS)

    Rack, Heinz-Joachim

    2017-07-01

    In his influential papers [1-3] W. Gautschi has defined and reshaped the condition number κ∞ of polynomials Pn of degree ≤ n which are represented in power form on a zero-symmetric interval [-ω, ω]. Basically, κ∞ is expressed as the product of two operator norms: an explicit factor times an implicit one (the l∞-norm of the coefficient vector of the n-th Chebyshev polynomial of the first kind relative to [-ω, ω]). We provide a new proof, economize the second factor and express it by an explicit analytical formula.

  20. How many invariant polynomials are needed to decide local unitary equivalence of qubit states?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maciążek, Tomasz; Faculty of Physics, University of Warsaw, ul. Hoża 69, 00-681 Warszawa; Oszmaniec, Michał

    2013-09-15

    Given L-qubit states with fixed spectra of the reduced one-qubit density matrices, we find a formula for the minimal number of invariant polynomials needed to solve the local unitary (LU) equivalence problem, that is, the problem of deciding whether two states can be connected by local unitary operations. Interestingly, this number is not the same for every collection of spectra. Some spectra require fewer polynomials to solve the LU equivalence problem than others. The result is obtained using geometric methods, i.e., by calculating the dimensions of reduced spaces stemming from the symplectic reduction procedure.

  1. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin hypercube sampling (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS) represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics, as well as local and global sensitivity measures, is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. 
It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest (≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
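
    A small stand-in example of non-intrusive PCE (the hydrologic model is replaced by a cheap analytic function of two uniform inputs; basis degree and sample count are arbitrary choices):

```python
import numpy as np
from itertools import product
from numpy.polynomial import legendre as L

# Cheap stand-in for the hydrologic model: a smooth function of two uniform(-1, 1) inputs.
def model(z1, z2):
    return np.exp(0.3 * z1) * (1 + 0.5 * z2)

rng = np.random.default_rng(2)
Z = rng.uniform(-1, 1, (200, 2))
y = model(Z[:, 0], Z[:, 1])

def phi(k, z):
    # Legendre polynomial of degree k, orthonormal for the uniform density on [-1, 1].
    c = np.zeros(k + 1); c[k] = 1.0
    return L.legval(z, c) * np.sqrt(2 * k + 1)

# Total-degree-3 tensor-product basis; fit coefficients by least-squares regression.
idx = [(i, j) for i, j in product(range(4), repeat=2) if i + j <= 3]
Phi = np.column_stack([phi(i, Z[:, 0]) * phi(j, Z[:, 1]) for i, j in idx])
c, *_ = np.linalg.lstsq(Phi, y, rcond=None)

mean = c[0]                   # coefficient of the constant basis term
var = np.sum(c[1:] ** 2)      # orthonormality: variance = sum of squared coefficients
```

    This illustrates why post-processing is cheap: with an orthonormal basis the mean and variance are read directly off the coefficients, with no further model runs.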

  2. Supersymmetric quantum mechanics: Engineered hierarchies of integrable potentials and related orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balondo Iyela, Daddy; Centre for Cosmology, Particle Physics and Phenomenology; Département de Physique, Université de Kinshasa

    2013-09-15

    Within the context of supersymmetric quantum mechanics and its related hierarchies of integrable quantum Hamiltonians and potentials, a general programme is outlined and applied to its first two simplest illustrations. Going beyond the usual restriction of shape invariance for intertwined potentials, it is suggested to require a similar relation for Hamiltonians in the hierarchy separated by an arbitrary number of levels, N. By requiring further that these two Hamiltonians be in fact identical up to an overall shift in energy, a periodic structure is installed in the hierarchy which should allow for its resolution. Specific classes of orthogonal polynomials characteristic of such periodic hierarchies are thereby generated, while the methods of supersymmetric quantum mechanics then lead to generalised Rodrigues formulae and recursion relations for such polynomials. The approach also offers the practical prospect of quantum modelling through the engineering of quantum potentials from experimental energy spectra. In this paper, these ideas are presented and solved explicitly for the cases N = 1 and N = 2. The latter case is related to the generalised Laguerre polynomials, for which indeed new results are thereby obtained. In the context of dressing chains and deformed polynomial Heisenberg algebras, some partial results for N ⩾ 3 also exist in the literature, which should be relevant to a complete study of the N ⩾ 3 general periodic hierarchies.

  3. Limitations of the paraxial Debye approximation.

    PubMed

    Sheppard, Colin J R

    2013-04-01

    In the paraxial form of the Debye integral for focusing, higher order defocus terms are ignored, which can result in errors in dealing with aberrations, even for low numerical aperture. These errors can be avoided by using a different integration variable. The aberrations of a glass slab, such as a coverslip, are expanded in terms of the new variable, and expressed in terms of Zernike polynomials to assist with aberration balancing. Tube length error is also discussed.

  4. Simple and practical approach for computing the ray Hessian matrix in geometrical optics.

    PubMed

    Lin, Psang Dain

    2018-02-01

    A method is proposed for simplifying the computation of the ray Hessian matrix in geometrical optics by replacing the angular variables in the system variable vector with their equivalent cosine and sine functions. The variable vector of a boundary surface is similarly defined in such a way as to exclude any angular variables. It is shown that the proposed formulations reduce the computation time of the Hessian matrix by around 10 times compared to the previous method reported by the current group in Advanced Geometrical Optics (2016). Notably, the method proposed in this study involves only polynomial differentiation, i.e., trigonometric function calls are not required. As a consequence, the computation complexity is significantly reduced. Five illustrative examples are given. The first three examples show that the proposed method is applicable to the determination of the Hessian matrix for any pose matrix, irrespective of the order in which the rotation and translation motions are specified. The last two examples demonstrate the use of the proposed Hessian matrix in determining the axial and lateral chromatic aberrations of a typical optical system.

  5. Comparison of three methods for registration of abdominal/pelvic volume data sets from functional-anatomic scans

    NASA Astrophysics Data System (ADS)

    Mahmoud, Faaiza; Ton, Anthony; Crafoord, Joakim; Kramer, Elissa L.; Maguire, Gerald Q., Jr.; Noz, Marilyn E.; Zeleznik, Michael P.

    2000-06-01

    The purpose of this work was to evaluate three volumetric registration methods in terms of technique, user-friendliness and time requirements. CT and SPECT data from 11 patients were interactively registered using: a 3D method involving only affine transformation; a mixed 3D - 2D non-affine (warping) method; and a 3D non-affine (warping) method. In the first method representative isosurfaces are generated from the anatomical images. Registration proceeds through translation, rotation, and scaling in all three space variables. Resulting isosurfaces are fused and quantitative measurements are possible. In the second method, the 3D volumes are rendered co-planar by performing an oblique projection. Corresponding landmark pairs are chosen on matching axial slice sets. A polynomial warp is then applied. This method has undergone extensive validation and was used to evaluate the results. The third method employs visualization tools. The data model allows images to be localized within two separate volumes. Landmarks are chosen on separate slices. Polynomial warping coefficients are generated and data points from one volume are moved to the corresponding new positions. The two landmark methods were the least time-consuming (10 to 30 minutes from start to finish), but did demand a good knowledge of anatomy. The affine method was tedious and required a fair understanding of 3D geometry.

  6. Capacity planning of a wide-sense nonblocking generalized survivable network

    NASA Astrophysics Data System (ADS)

    Ho, Kwok Shing; Cheung, Kwok Wai

    2006-06-01

    Generalized survivable networks (GSNs) have two interesting properties that are essential attributes for future backbone networks: full survivability against link failures and support for dynamic traffic demands. GSNs incorporate the nonblocking network concept into survivable network models. Given a set of nodes and a topology that is at least two-edge connected, a certain minimum capacity is required on each edge to form a GSN. The edge capacity is bounded because each node has an input-output capacity limit that serves as a constraint for any allowable traffic demand matrix. The GSN capacity planning problem is NP-hard. We first give a rigorous mathematical framework; then we offer two different solution approaches. The two-phase approach is fast, but the joint optimization approach yields a better bound. We carried out numerical computations for eight networks with different topologies and found that the cost of a GSN is only a fraction (from 52% to 89%) more than that of a static survivable network.

  7. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    For optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), the radiometric response in the Reflective Solar Bands (RSB) is assumed to be described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weight. The weight not only has a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but also is affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
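
    The flight-calibration details are not reproducible here, but the "both variables are noisy" weighting idea can be sketched with an iteratively reweighted least-squares fit, in which the DN noise is folded into the residual variance through the model slope (all constants below are hypothetical):

```python
import numpy as np

# Hypothetical quadratic response L = c0 + c1*DN + c2*DN^2 with noise in BOTH variables.
rng = np.random.default_rng(3)
c_true = (0.5, 2.0, 0.01)
dn_true = np.linspace(10, 100, 25)
L_true = c_true[0] + c_true[1] * dn_true + c_true[2] * dn_true ** 2

sig_L, sig_dn = 0.5, 0.3            # radiance noise; DN noise including digitization
dn = dn_true + rng.normal(0, sig_dn, dn_true.size)
Lm = L_true + rng.normal(0, sig_L, dn_true.size)

c = np.polyfit(dn, Lm, 2)[::-1]     # ordinary fit as a starting point (lowest order first)
for _ in range(5):                  # iterate, since the weights depend on the fit slope
    slope = c[1] + 2 * c[2] * dn
    w = 1.0 / (sig_L ** 2 + (slope * sig_dn) ** 2)   # effective variance per point
    X = np.column_stack([np.ones_like(dn), dn, dn ** 2])
    WX = X * w[:, None]
    c = np.linalg.solve(X.T @ WX, WX.T @ Lm)
```

    The weight shrinks where the response is steep, because there DN noise translates into larger radiance error, which is the dependence on the model expression described above.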

  8. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    PubMed

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by using an echocardiographical method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing the values of arithmetical means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the determination coefficient (criterion 1), comparing the residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). In the H-group, 47% had pulmonary hypertension that was completely reversible upon reaching euthyroidism. The factors causing pulmonary hypertension were identified: the previously known ones (level of free thyroxine, pulmonary vascular resistance, cardiac output) and new factors identified in this study (pretreatment period, age, systolic blood pressure). According to the four criteria and to clinical judgment, we consider the polynomial model (graphically, parabola-type) better than the linear one. 
The better model showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is thus a second-degree polynomial equation whose graph is a parabola.
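
    A toy version of the model comparison (synthetic data with genuine curvature; criteria 1 and 3 of the four listed above):

```python
import numpy as np

# Synthetic data with genuine curvature, standing in for PAPs versus one causal factor.
rng = np.random.default_rng(4)
x = np.linspace(0, 10, 53)                       # 53 points, like the H-group size
y = 30 + 2.5 * x - 0.18 * x ** 2 + rng.normal(0, 1.0, x.size)

def fit_stats(deg):
    c = np.polyfit(x, y, deg)
    resid = y - np.polyval(c, x)
    rss = float(resid @ resid)
    r2 = 1 - rss / float(((y - y.mean()) ** 2).sum())
    k = deg + 1                                   # number of fitted parameters
    aic = x.size * np.log(rss / x.size) + 2 * k   # Gaussian AIC up to a constant
    return r2, aic

r2_lin, aic_lin = fit_stats(1)     # first-degree (linear) model
r2_quad, aic_quad = fit_stats(2)   # second-degree (polynomial) model
```

    When the underlying relation is curved, the quadratic model wins on both counts: higher determination coefficient and lower AIC despite the extra parameter.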

  9. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution to an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distribution. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose the coherence-optimal sampling: a Markov Chain Monte Carlo sampling, which directly uses the basis functions under consideration to achieve a statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional, manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.

  10. Long-time uncertainty propagation using generalized polynomial chaos and flow map composition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Luchtenburg, Dirk M., E-mail: dluchten@cooper.edu; Brunton, Steven L.; Rowley, Clarence W.

    2014-10-01

    We present an efficient and accurate method for long-time uncertainty propagation in dynamical systems. Uncertain initial conditions and parameters are both addressed. The method approximates the intermediate short-time flow maps by spectral polynomial bases, as in the generalized polynomial chaos (gPC) method, and uses flow map composition to construct the long-time flow map. In contrast to the gPC method, this approach has spectral error convergence for both short and long integration times. The short-time flow map is characterized by small stretching and folding of the associated trajectories and hence can be well represented by a relatively low-degree basis. The composition of these low-degree polynomial bases then accurately describes the uncertainty behavior for long integration times. The key to the method is that the degree of the resulting polynomial approximation increases exponentially in the number of time intervals, while the number of polynomial coefficients either remains constant (for an autonomous system) or increases linearly in the number of time intervals (for a non-autonomous system). The findings are illustrated on several numerical examples including a nonlinear ordinary differential equation (ODE) with an uncertain initial condition, a linear ODE with an uncertain model parameter, and a two-dimensional, non-autonomous double gyre flow.
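
    The idea of flow map composition can be sketched in one dimension (the ODE, time step, and polynomial degree below are illustrative choices, not the paper's examples):

```python
import numpy as np

# 1-D toy system x' = -x**3 with an uncertain initial condition in [0.2, 1.0].
def rk4_step(x, dt, f=lambda x: -x ** 3):
    k1 = f(x); k2 = f(x + dt / 2 * k1); k3 = f(x + dt / 2 * k2); k4 = f(x + dt * k3)
    return x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(x, t, dt=1e-3):
    x = np.asarray(x, dtype=float)
    for _ in range(int(round(t / dt))):
        x = rk4_step(x, dt)
    return x

# Fit a low-degree polynomial to the SHORT-time flow map x0 -> x(0.1) ...
xs = np.linspace(0.2, 1.0, 9)
short_map = np.polyfit(xs, integrate(xs, 0.1), 5)

# ... and COMPOSE it 20 times to approximate the long-time (t = 2) flow map.
def composed(x0, n=20):
    x = np.asarray(x0, dtype=float)
    for _ in range(n):
        x = np.polyval(short_map, x)
    return x

x0 = np.array([0.3, 0.6, 0.9])
err = np.max(np.abs(composed(x0) - integrate(x0, 2.0)))
```

    A single degree-5 polynomial would struggle to represent the t = 2 map accurately as trajectories fold, while the composition of short-time maps stays accurate with a fixed, small coefficient count.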

  11. Wavefront analysis from its slope data

    NASA Astrophysics Data System (ADS)

    Mahajan, Virendra N.; Acosta, Eva

    2017-08-01

    In the aberration analysis of a wavefront over a certain domain, the polynomials used are those that are orthogonal over that domain and represent balanced wave aberrations for it. For example, Zernike circle polynomials are used for the analysis of a circular wavefront. Similarly, annular polynomials are used to analyze annular wavefronts for systems with annular pupils, as in a rotationally symmetric two-mirror system such as the Hubble Space Telescope. However, when the data available for analysis are the slopes of a wavefront, as, for example, in a Shack-Hartmann sensor, we can integrate the slope data to obtain the wavefront data, and then use the orthogonal polynomials to obtain the aberration coefficients. An alternative is to find vector functions that are orthogonal to the gradients of the wavefront polynomials, and obtain the aberration coefficients directly as the inner products of these functions with the slope data. In this paper, we show that an infinite number of vector functions can be obtained in this manner. We show further that the vector functions that are irrotational are unique and propagate minimum uncorrelated additive random noise from the slope data to the aberration coefficients.
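
    The paper's orthogonal vector functions are not reconstructed here; the following sketch shows the baseline alternative mentioned above: a modal least-squares fit of measured slopes to the analytic gradients of a few low-order Zernike-style terms (grid size and aberration content are hypothetical):

```python
import numpy as np

# Grid over the unit pupil; slopes are known analytically for the synthetic wavefront.
n = 41
y, x = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n), indexing='ij')
mask = x ** 2 + y ** 2 <= 1.0

# Basis: tilt-x, tilt-y, defocus 2(x^2+y^2)-1, astigmatism x^2-y^2, via their gradients.
grads = [(np.ones_like(x), np.zeros_like(x)),
         (np.zeros_like(x), np.ones_like(x)),
         (4 * x, 4 * y),
         (2 * x, -2 * y)]

a_true = np.array([0.3, -0.2, 0.15, 0.05])               # hypothetical aberration content
sx = sum(a * gx for a, (gx, gy) in zip(a_true, grads))   # measured x-slopes
sy = sum(a * gy for a, (gx, gy) in zip(a_true, grads))   # measured y-slopes

# Stack the in-pupil slopes and solve for the coefficients by least squares.
A = np.column_stack([np.concatenate([gx[mask], gy[mask]]) for gx, gy in grads])
b = np.concatenate([sx[mask], sy[mask]])
a_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

    The inner-product approach with orthogonal vector functions replaces this least-squares solve with direct projections, which is what controls how measurement noise propagates into the coefficients.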

  12. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  14. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using the wrapped phase measurement. The true phase is approximated as a two-dimensional first-order polynomial function within a small window around each pixel. The estimates of the polynomial coefficients provide the measurement of the phase and the local fringe frequencies. A state space representation of the spatial phase evolution and the wrapped phase measurement is considered, with the state vector consisting of the polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for state estimation, we propose to use the linear Kalman filter operating directly on the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between computation time and noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality-guided pixel-selection strategy is used, depending on whether the underlying phase distribution is continuous or discontinuous, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.
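
    For contrast with the proposed Kalman approach, the classical line-scanning baseline can be sketched in one dimension (this is not the paper's algorithm): wrapped differences of the wrapped phase recover the true differences whenever the phase changes by less than π per sample:

```python
import numpy as np

def wrap(p):
    # Wrap phase into [-pi, pi).
    return (p + np.pi) % (2 * np.pi) - np.pi

# Smooth synthetic phase spanning several wraps; sampling is fine enough that the
# true phase changes by less than pi between neighboring samples.
true_phase = 0.5 * np.linspace(0.0, 20.0, 400) ** 1.3
wrapped = wrap(true_phase)

d = wrap(np.diff(wrapped))       # wrapped differences equal the true differences here
unwrapped = np.concatenate(([wrapped[0]], wrapped[0] + np.cumsum(d)))
```

    This cumulative-sum scheme fails as soon as noise or undersampling pushes a sample-to-sample change past π, which is the failure mode the paper's filtered approach is designed to resist.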

  15. Orthonormal vector polynomials in a unit circle, Part I: Basis set derived from gradients of Zernike polynomials.

    PubMed

    Zhao, Chunyu; Burge, James H

    2007-12-24

    Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.

  16. Limit cycles via higher order perturbations for some piecewise differential systems

    NASA Astrophysics Data System (ADS)

    Buzzi, Claudio A.; Lima, Maurício Firmino Silva; Torregrosa, Joan

    2018-05-01

    A classical perturbation problem is the polynomial perturbation of the harmonic oscillator, (x', y') = (-y + εf(x, y, ε), x + εg(x, y, ε)). In this paper we study the limit cycles that bifurcate from the period annulus via piecewise polynomial perturbations in two zones separated by a straight line. We prove that, for polynomial perturbations of degree n, no more than Nn - 1 limit cycles appear up to a study of order N. We also show that this upper bound is reached for orders one and two. Moreover, we study this problem in some classes of piecewise Liénard differential systems, providing better upper bounds for higher order perturbation in ε, and showing also when they are reached. The Poincaré-Pontryagin-Melnikov theory is the main technique used to prove all the results.

  17. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Amongst the wide range of sensitivity analyses in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
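
    A sketch of the mixed-kernel idea with a kernel ridge regression stand-in for SVR (the kernel weights, degree, and bandwidth below are arbitrary choices, not the paper's):

```python
import numpy as np

def mixed_kernel(A, B, w=0.5, degree=2, gamma=2.0):
    # Mixed kernel: w * polynomial kernel + (1 - w) * Gaussian RBF kernel.
    poly = (A @ B.T + 1.0) ** degree
    sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return w * poly + (1 - w) * np.exp(-gamma * sq)

rng = np.random.default_rng(5)
X = rng.uniform(-1, 1, (200, 2))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 1])        # a global trend plus local oscillation

lam = 1e-6                                    # small ridge term for numerical stability
alpha = np.linalg.solve(mixed_kernel(X, X) + lam * np.eye(len(X)), y)

Xt = rng.uniform(-1, 1, (20, 2))
pred = mixed_kernel(Xt, X) @ alpha
err = np.max(np.abs(pred - (Xt[:, 0] ** 2 + np.sin(3 * Xt[:, 1]))))
```

    The polynomial part captures the smooth global trend while the RBF part absorbs local behavior, which is the complementarity the abstract attributes to the MKF.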

  18. Further studies on stability analysis of nonlinear Roesser-type two-dimensional systems

    NASA Astrophysics Data System (ADS)

    Dai, Xiao-Lin

    2014-04-01

    This paper is concerned with further relaxations of the stability analysis of nonlinear Roesser-type two-dimensional (2D) systems in the Takagi-Sugeno fuzzy form. To achieve the goal, a novel slack matrix variable technique, which is homogeneous polynomially parameter-dependent on the normalized fuzzy weighting functions with arbitrary degree, is developed and the algebraic properties of the normalized fuzzy weighting functions are collected into a set of augmented matrices. Consequently, more information about the normalized fuzzy weighting functions is involved and the relaxation quality of the stability analysis is significantly improved. Moreover, the obtained result is formulated in the form of linear matrix inequalities, which can be easily solved via standard numerical software. Finally, a numerical example is provided to demonstrate the effectiveness of the proposed result.

  19. A polyhedral study of production ramping

    DOE PAGES

    Damci-Kurt, Pelin; Kucukyavuz, Simge; Rajan, Deepak; ...

    2015-06-12

    Here, we give strong formulations of ramping constraints—used to model the maximum change in production level for a generator or machine from one time period to the next—and production limits. For the two-period case, we give a complete description of the convex hull of the feasible solutions. The two-period inequalities can be readily used to strengthen ramping formulations without the need for separation. For the general case, we define exponential classes of multi-period variable upper bound and multi-period ramping inequalities, and give conditions under which these inequalities define facets of ramping polyhedra. Finally, we present exact polynomial separation algorithms for the inequalities and report computational experiments on using them in a branch-and-cut algorithm to solve unit commitment problems in power generation.
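
    The facet-defining inequalities are not reproduced here, but the basic ramping constraints they strengthen are simple to state. A minimal sketch, assuming a toy model with a single generator, constant capacity bounds, and no start-up or shut-down ramp rates (all simplifications relative to the paper):

```python
def feasible(p, u, p_min, p_max, ramp_up, ramp_down):
    """Check a generation schedule against capacity and ramping limits.
    p[t] is production, u[t] the on/off status in period t. Start-up and
    shut-down ramp rates, which the paper's formulations also model, are
    ignored in this toy check."""
    for t in range(len(p)):
        # production limits: p_min*u[t] <= p[t] <= p_max*u[t]
        if not (p_min * u[t] <= p[t] <= p_max * u[t]):
            return False
        # ramping: the change from one period to the next is bounded
        if t > 0 and not (-ramp_down <= p[t] - p[t-1] <= ramp_up):
            return False
    return True

print(feasible([0, 50, 80, 100], [0, 1, 1, 1], 20, 120, 50, 60))   # True
```

    Schedules that jump by more than the ramp-up limit (e.g. 50 to 120 with a limit of 50) are rejected; the paper's inequalities tighten the linear relaxation of exactly this kind of mixed-integer feasible set.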

  20. PRECONDITIONED CONJUGATE-GRADIENT 2 (PCG2), a computer program for solving ground-water flow equations

    USGS Publications Warehouse

    Hill, Mary C.

    1990-01-01

    This report documents PCG2: a numerical code to be used with the U.S. Geological Survey modular three-dimensional, finite-difference, ground-water flow model. PCG2 uses the preconditioned conjugate-gradient method to solve the equations produced by the model for hydraulic head. Linear or nonlinear flow conditions may be simulated. PCG2 includes two preconditioning options: modified incomplete Cholesky preconditioning, which is efficient on scalar computers; and polynomial preconditioning, which requires less computer storage and, with modifications that depend on the computer used, is most efficient on vector computers. Convergence of the solver is determined using both head-change and residual criteria. Nonlinear problems are solved using Picard iterations. This documentation provides a description of the preconditioned conjugate-gradient method and the two preconditioners, detailed instructions for linking PCG2 to the modular model, sample data inputs, a brief description of PCG2, and a FORTRAN listing.
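
    PCG2's modified incomplete Cholesky and polynomial preconditioners are not reproduced here; the sketch below shows the general preconditioned conjugate-gradient iteration with a simple Jacobi (diagonal) preconditioner on a 1D Poisson-type matrix, as an illustration of the method the report documents.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for a symmetric positive definite A.
    M_inv applies the preconditioner inverse (here Jacobi, i.e. 1/diag(A));
    PCG2's incomplete Cholesky and polynomial options are more elaborate."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:        # residual convergence criterion
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# 1D Poisson-like SPD test matrix (a stand-in for the flow equations)
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
d = np.diag(A)
x = pcg(A, b, lambda r: r / d)
print(np.linalg.norm(A @ x - b))           # residual near machine precision
```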

  1. Analytic Expressions for the Gravity Gradient Tensor of 3D Prisms with Depth-Dependent Density

    NASA Astrophysics Data System (ADS)

    Jiang, Li; Liu, Jie; Zhang, Jianzhong; Feng, Zhibing

    2017-12-01

    Variable-density sources have received increasing attention in gravity modeling. In this paper we compute the gravity gradient tensor of mass sources with variable density. 3D rectangular prisms, as simple building blocks, can be used to approximate 3D irregular-shaped sources well. A polynomial function of depth can flexibly represent the complicated density variations in each prism. Hence, we derive closed-form analytic expressions for all components of the gravity gradient tensor due to a 3D right rectangular prism with an arbitrary-order polynomial density function of depth. The singularity of the expressions is analyzed. The singular points are located at the corners of the prism or on some of the lines through the edges of the prism in the lower half-space containing the prism. The expressions are validated, and their numerical stability is evaluated through numerical tests. Numerical examples with variable-density prism and basin models show that the expressions, within their range of numerical stability, are superior in computational accuracy and efficiency to the common solution that sums the effects of a collection of uniform subprisms, and they provide an effective method for computing the gravity gradient tensor of 3D irregular-shaped sources with complicated density variation. In addition, the tensor computed with variable density differs in magnitude from that computed with constant density, demonstrating the importance of modeling the gravity gradient tensor with variable density.

  2. Current advances on polynomial resultant formulations

    NASA Astrophysics Data System (ADS)

    Sulaiman, Surajo; Aris, Nor'aini; Ahmad, Shamsatun Nahar

    2017-08-01

    The availability of computer algebra systems (CAS) has led to a resurgence of the resultant method for eliminating one or more variables from a system of polynomials. The resultant matrix method has advantages over the Gröbner basis and Ritt-Wu methods, which suffer from high complexity and storage requirements. This paper focuses on current resultant matrix formulations and investigates their ability, or otherwise, to produce optimal resultant matrices. A determinantal formula that gives the exact resultant, or a formulation that minimizes the presence of extraneous factors, is often sought when conditions for its existence can be determined. We present some applications of elimination theory via resultant formulations, and examples are given to explain each of the presented settings.
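
    As a concrete instance of a determinantal resultant formulation, the sketch below builds the classical Sylvester matrix of two univariate polynomials and evaluates the resultant as its determinant, in exact rational arithmetic; the example polynomials are illustrative.

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f and g, given as coefficient lists with the
    highest-degree coefficient first; the resultant is its determinant."""
    m, n = len(f) - 1, len(g) - 1
    size = m + n
    rows = []
    for i in range(n):                      # n shifted copies of f
        rows.append([0] * i + f + [0] * (size - m - 1 - i))
    for i in range(m):                      # m shifted copies of g
        rows.append([0] * i + g + [0] * (size - n - 1 - i))
    return rows

def det(M):
    """Exact determinant by fraction-based Gaussian elimination."""
    M = [[Fraction(x) for x in row] for row in M]
    n, sign, prod = len(M), 1, Fraction(1)
    for c in range(n):
        pivot = next((r for r in range(c, n) if M[r][c] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            sign = -sign
        prod *= M[c][c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= factor * M[c][k]
    return sign * prod

# Res(x^2 - 1, x - 2) = g(1) * g(-1) = (1 - 2)(-1 - 2) = 3
print(det(sylvester([1, 0, -1], [1, -2])))   # 3
```

    The Sylvester matrix is the oldest determinantal formulation; the optimal (smaller, extraneous-factor-free) matrices discussed in the abstract generalize this construction to several variables.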

  3. Rational integrability of trigonometric polynomial potentials on the flat torus

    NASA Astrophysics Data System (ADS)

    Combot, Thierry

    2017-07-01

    We consider a lattice ℒ ⊂ ℝⁿ and a trigonometric potential V with frequencies k ∈ ℒ. We prove a strong rational integrability condition on V, using the support of its Fourier transform, and then use this condition to prove that a real trigonometric polynomial potential is rationally integrable if and only if it separates up to rotation of the coordinates. Removing the reality condition, we also classify rationally integrable potentials in dimensions 2 and 3 and recover several integrable cases. After a complex change of variables, these potentials become real and correspond to generalized Toda integrable potentials. Moreover, along the way, some of these potentials, with high-degree first integrals, are explicitly integrated.

  4. Planar harmonic polynomials of type B

    NASA Astrophysics Data System (ADS)

    Dunkl, Charles F.

    1999-11-01

    The hyperoctahedral group acting on ℝ^N is the Weyl group of type B and is associated with a two-parameter family of differential-difference operators {T_i : 1 ≤ i ≤ N}. These operators are analogous to partial derivative operators. This paper finds all the polynomials h on ℝ^N which are harmonic, Δ_B h = 0, and annihilated by T_i for i > 2, where the Laplacian Δ_B = T_1² + … + T_N². They are given explicitly in terms of a novel basis of polynomials, defined by generating functions. The harmonic polynomials can be used to find wavefunctions for the quantum many-body spin Calogero model.

  5. A Formally Verified Conflict Detection Algorithm for Polynomial Trajectories

    NASA Technical Reports Server (NTRS)

    Narkawicz, Anthony; Munoz, Cesar

    2015-01-01

    In air traffic management, conflict detection algorithms are used to determine whether or not aircraft are predicted to lose horizontal and vertical separation minima within a time interval assuming a trajectory model. In the case of linear trajectories, conflict detection algorithms have been proposed that are both sound, i.e., they detect all conflicts, and complete, i.e., they do not present false alarms. In general, for arbitrary nonlinear trajectory models, it is possible to define detection algorithms that are either sound or complete, but not both. This paper considers the case of nonlinear aircraft trajectory models based on polynomial functions. In particular, it proposes a conflict detection algorithm that precisely determines whether, given a lookahead time, two aircraft flying polynomial trajectories are in conflict. That is, it has been formally verified that, assuming that the aircraft trajectories are modeled as polynomial functions, the proposed algorithm is both sound and complete.
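
    The formally verified algorithm itself (developed in a proof assistant) is not reproduced here. Below is a purely numeric sketch of the underlying idea, assuming relative horizontal motion given by polynomial coordinates and checking the minimum separation over the lookahead interval at its endpoints and critical points.

```python
import numpy as np

def in_conflict(dx, dy, D, T):
    """dx, dy: coefficients (highest degree first) of the relative horizontal
    position polynomials; report a conflict if the distance drops below D for
    some t in [0, T]. A numeric sketch, not the formally verified procedure."""
    # squared separation s(t) = dx(t)^2 + dy(t)^2, itself a polynomial
    s = np.polyadd(np.polymul(dx, dx), np.polymul(dy, dy))
    # candidate minimizers: the endpoints and the real critical points of s
    crit = np.roots(np.polyder(s))
    cand = [0.0, T] + [r.real for r in crit
                       if abs(r.imag) < 1e-9 and 0.0 <= r.real <= T]
    return min(np.polyval(s, t) for t in cand) < D**2

# head-on encounter: relative position x(t) = 10 - 2t, y(t) = 0
print(in_conflict(np.array([-2.0, 10.0]), np.array([0.0]), D=3.0, T=10.0))  # True
```

    Because the minimum of a polynomial over a closed interval is attained at an endpoint or a critical point, the check is exhaustive in exact arithmetic; the verified algorithm makes this sound and complete without floating-point root finding.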

  6. Polynomial stability of a magneto-thermoelastic Mindlin-Timoshenko plate model

    NASA Astrophysics Data System (ADS)

    Ferreira, Marcio V.; Muñoz Rivera, Jaime E.

    2018-02-01

    In this paper, we consider the magneto-thermoelastic interactions in a two-dimensional Mindlin-Timoshenko plate. Our main result concerns the strong asymptotic stabilization of the model. In particular, we determine the rate of polynomial decay of the associated energy. In contrast with related articles, this study imposes neither geometrical hypotheses on the plate configuration (such as radial symmetry) nor any kind of frictional damping mechanism. A suitable multiplier is instrumental in establishing the polynomial stability, with the aid of a recent result due to Borichev and Tomilov (Math Ann 347(2):455-478, 2010).

  7. Size-segregated particle number concentrations and respiratory emergency room visits in Beijing, China.

    PubMed

    Leitte, Arne Marian; Schlink, Uwe; Herbarth, Olf; Wiedensohler, Alfred; Pan, Xiao-Chuan; Hu, Min; Richter, Matthia; Wehner, Birgit; Tuch, Thomas; Wu, Zhijun; Yang, Minjuan; Liu, Liqun; Breitner, Susanne; Cyrys, Josef; Peters, Annette; Wichmann, H-Erich; Franck, Ulrich

    2011-04-01

    The link between concentrations of particulate matter (PM) and respiratory morbidity has been investigated in numerous studies. The aim of this study was to analyze the role of different particle size fractions with respect to respiratory health in Beijing, China. Data on particle size distributions from 3 nm to 1 µm; PM10 (PM ≤ 10 µm), nitrogen dioxide (NO2), and sulfur dioxide concentrations; and meteorologic variables were collected daily from March 2004 to December 2006. Concurrently, daily counts of emergency room visits (ERV) for respiratory diseases were obtained from the Peking University Third Hospital. We estimated pollutant effects in single- and two-pollutant generalized additive models, controlling for meteorologic and other time-varying covariates. Time-delayed associations were estimated using polynomial distributed lag, cumulative effects, and single lag models. Associations of respiratory ERV with NO2 concentrations and 100-1,000 nm particle number or surface area concentrations were of similar magnitude; that is, approximately a 5% increase in respiratory ERV per interquartile-range increase in air pollution concentration. In general, particles < 50 nm were not positively associated with ERV, whereas particles 50-100 nm were adversely associated with respiratory ERV, both being fractions of ultrafine particles. Effect estimates from two-pollutant models were most consistent for NO2. Present levels of air pollution in Beijing were adversely associated with respiratory ERV. NO2 concentrations seemed to be a better surrogate for evaluating overall respiratory health effects of ambient air pollution than PM10 or particle number concentrations in Beijing.

  8. A Semiparametric Approach for Composite Functional Mapping of Dynamic Quantitative Traits

    PubMed Central

    Yang, Runqing; Gao, Huijiang; Wang, Xin; Zhang, Ji; Zeng, Zhao-Bang; Wu, Rongling

    2007-01-01

    Functional mapping has emerged as a powerful tool for mapping quantitative trait loci (QTL) that control developmental patterns of complex dynamic traits. Original functional mapping has been constructed within the context of simple interval mapping, without consideration of separate multiple linked QTL for a dynamic trait. In this article, we present a statistical framework for mapping QTL that affect dynamic traits by capitalizing on the strengths of functional mapping and composite interval mapping. Within this so-called composite functional-mapping framework, functional mapping models the time-dependent genetic effects of a QTL tested within a marker interval using a biologically meaningful parametric function, whereas composite interval mapping models the time-dependent genetic effects of the markers outside the test interval to control the genome background using a flexible nonparametric approach based on Legendre polynomials. Such a semiparametric framework was formulated by a maximum-likelihood model and implemented with the EM algorithm, allowing for the estimation and testing of the mathematical parameters that define the QTL effects and the regression coefficients of the Legendre polynomials that describe the marker effects. Simulation studies were performed to investigate the statistical behavior of composite functional mapping and to compare its advantage in separating multiple linked QTL with that of functional mapping. We used the new mapping approach to analyze a genetic mapping example in rice, leading to the identification of multiple QTL, some of which are linked on the same chromosome, that control the developmental trajectory of leaf age. PMID:17947431
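
    The EM-based estimation machinery is not reproduced here, but the Legendre-polynomial description of a time-varying marker effect can be sketched directly. Below, an illustrative effect curve built from the first three Legendre polynomials is recovered by least squares; the data and the degree are assumptions for the example.

```python
import numpy as np
from numpy.polynomial import legendre as L

# illustrative time-varying effect sampled at 11 ages rescaled to [-1, 1]
t = np.linspace(-1, 1, 11)
effect = 0.5 + 1.2 * t - 0.8 * (1.5 * t**2 - 0.5)   # P0, P1, and P2 = (3t^2 - 1)/2

coef = L.legfit(t, effect, deg=2)    # least-squares Legendre coefficients
print(np.round(coef, 6))             # recovers [0.5, 1.2, -0.8]
```

    In the mapping framework, coefficients like these (one set per marker) describe how a marker's effect evolves over development, with the low degree keeping the background model parsimonious.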

  9. Human salmonellosis: estimation of dose-illness from outbreak data.

    PubMed

    Bollaerts, Kaatje; Aerts, Marc; Faes, Christel; Grijspeerdt, Koen; Dewulf, Jeroen; Mintiens, Koen

    2008-04-01

    The quantification of the relationship between the amount of microbial organisms ingested and a specific outcome such as infection, illness, or mortality is a key aspect of quantitative risk assessment. A main problem in determining such dose-response models is the availability of appropriate data. Human feeding trials have been criticized because only young healthy volunteers are selected to participate and low doses, as often occurring in real life, are typically not considered. Epidemiological outbreak data are considered to be more valuable, but are more subject to data uncertainty. In this article, we model the dose-illness relationship based on data of 20 Salmonella outbreaks, as discussed by the World Health Organization. In particular, we model the dose-illness relationship using generalized linear mixed models and fractional polynomials of dose. The fractional polynomial models are modified to satisfy the properties of different types of dose-illness models as proposed by Teunis et al. Within these models, differences in host susceptibility (susceptible versus normal population) are modeled as fixed effects whereas differences in serovar type and food matrix are modeled as random effects. In addition, two bootstrap procedures are presented. A first procedure accounts for stochastic variability whereas a second procedure accounts for both stochastic variability and data uncertainty. The analyses indicate that the susceptible population has a higher probability of illness at low dose levels when the combination pathogen-food matrix is extremely virulent and at high dose levels when the combination is less virulent. Furthermore, the analyses suggest that immunity exists in the normal population but not in the susceptible population.
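
    The generalized linear mixed models are beyond a short sketch, but the fractional-polynomial ingredient can be illustrated. Below, a first-degree fractional polynomial (FP1) power is selected from the conventional candidate set by least squares on an illustrative noiseless dose-response; the data are made up, and p = 0 denotes the log transform by convention.

```python
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]     # conventional FP1 candidate set

def fp_term(x, p):
    return np.log(x) if p == 0 else x**p     # p = 0 denotes log(x) by convention

def best_fp1(x, y):
    """Select the first-degree fractional-polynomial power by least squares."""
    best = None
    for p in POWERS:
        X = np.column_stack([np.ones_like(x), fp_term(x, p)])
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = float(rss[0]) if rss.size else 0.0
        if best is None or rss < best[0]:
            best = (rss, p, beta)
    return best

doses = np.array([10.0, 100.0, 1e3, 1e4, 1e5])
resp = 0.1 + 0.05 * np.log(doses)            # noiseless log-shaped dose response
rss, p, beta = best_fp1(doses, resp)
print(p)                                      # 0: the log transform fits best
```

    The article's models extend this idea with link functions and random effects for serovar and food matrix, but the power-selection step works the same way.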

  10. Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.

  11. Molecular Isotopic Distribution Analysis (MIDAs) with adjustable mass accuracy.

    PubMed

    Alves, Gelio; Ogurtsov, Aleksey Y; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.

  12. Topology of Large-Scale Structures of Galaxies in two Dimensions—Systematic Effects

    NASA Astrophysics Data System (ADS)

    Appleby, Stephen; Park, Changbom; Hong, Sungwook E.; Kim, Juhan

    2017-02-01

    We study the two-dimensional topology of the galactic distribution when projected onto two-dimensional spherical shells. Using the latest Horizon Run 4 simulation data, we construct the genus of the two-dimensional field and consider how this statistic is affected by late-time nonlinear effects, principally gravitational collapse and redshift space distortion (RSD). We also consider systematic and numerical artifacts, such as shot noise, galaxy bias, and finite pixel effects. We model the systematics using a Hermite polynomial expansion and perform a comprehensive analysis of known effects on the two-dimensional genus, with a view toward using the statistic for cosmological parameter estimation. We find that the finite pixel effect is dominated by an amplitude drop and can be made less than 1% by adopting pixels smaller than 1/3 of the angular smoothing length. Nonlinear gravitational evolution introduces time-dependent coefficients of the zeroth, first, and second Hermite polynomials, but the genus amplitude changes by less than 1% between z = 1 and z = 0 for smoothing scales R_G > 9 Mpc/h. Non-zero terms are measured up to third order in the Hermite polynomial expansion when studying RSD. Differences in the shapes of the genus curves in real and redshift space are small when we adopt thick redshift shells, but the amplitude change remains a significant ∼O(10%) effect. The combined effects of galaxy biasing and shot noise produce systematic effects up to the second Hermite polynomial. It is shown that, when sampling, the use of galaxy mass cuts significantly reduces the effect of shot noise relative to random sampling.

  13. Precision measurement of the η → π⁺π⁻π⁰ Dalitz plot distribution with the KLOE detector

    NASA Astrophysics Data System (ADS)

    Anastasi, A.; Babusci, D.; Bencivenni, G.; Berlowski, M.; Bloise, C.; Bossi, F.; Branchini, P.; Budano, A.; Caldeira Balkeståhl, L.; Cao, B.; Ceradini, F.; Ciambrone, P.; Curciarello, F.; Czerwinski, E.; D'Agostini, G.; Danè, E.; De Leo, V.; De Lucia, E.; De Santis, A.; De Simone, P.; Di Cicco, A.; Di Domenico, A.; Di Salvo, R.; Domenici, D.; D'Uffizi, A.; Fantini, A.; Felici, G.; Fiore, S.; Gajos, A.; Gauzzi, P.; Giardina, G.; Giovannella, S.; Graziani, E.; Happacher, F.; Heijkenskjöld, L.; Ikegami Andersson, W.; Johansson, T.; Kaminska, D.; Krzemien, W.; Kupsc, A.; Loffredo, S.; Mandaglio, G.; Martini, M.; Mascolo, M.; Messi, R.; Miscetti, S.; Morello, G.; Moricciani, D.; Moskal, P.; Papenbrock, M.; Passeri, A.; Patera, V.; Perez del Rio, E.; Ranieri, A.; Santangelo, P.; Sarra, I.; Schioppa, M.; Silarski, M.; Sirghi, F.; Tortora, L.; Venanzoni, G.; Wislicki, W.; Wolke, M.

    2016-05-01

    Using 1.6 fb⁻¹ of e⁺e⁻ → φ → ηγ data collected with the KLOE detector at DAΦNE, the Dalitz plot distribution for the η → π⁺π⁻π⁰ decay is studied with the world's largest sample of ∼4.7 × 10⁶ events. The Dalitz plot density is parametrized as a polynomial expansion up to cubic terms in the normalized dimensionless variables X and Y. The experiment is sensitive to all charge conjugation conserving terms of the expansion, including a gX²Y term. The statistical uncertainty of all parameters is improved by a factor of two with respect to earlier measurements.
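
    A sketch of the kind of polynomial Dalitz-plot parametrization described above, keeping only charge-conjugation-conserving terms (even powers of X). The parameter names follow the common convention; the numeric values below are purely illustrative, not the KLOE results.

```python
def dalitz_density(X, Y, a, b, d, f, g):
    """Normalized Dalitz-plot density up to cubic terms; odd powers of X are
    absent because they would violate charge-conjugation invariance."""
    return 1 + a*Y + b*Y**2 + d*X**2 + f*Y**3 + g*X**2*Y

# illustrative parameter values (NOT the measured KLOE results)
print(dalitz_density(0.0, 0.0, -1.1, 0.15, 0.08, 0.14, -0.04))   # 1.0 at the plot center
```

    Fitting such a density to the binned event yields is what determines the parameters a, b, d, f, and g quoted by the experiment.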

  14. Poly-Frobenius-Euler polynomials

    NASA Astrophysics Data System (ADS)

    Kurt, Burak

    2017-07-01

    Hamahata [3] defined poly-Euler polynomials and the generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this, we define poly-Frobenius-Euler polynomials and give some relations for these polynomials. We also prove relationships between poly-Frobenius-Euler polynomials and Stirling numbers of the second kind.

  15. Necessary and sufficient conditions for the complete controllability and observability of systems in series using the coprime factorization of a rational matrix

    NASA Technical Reports Server (NTRS)

    Callier, F. M.; Nahum, C. D.

    1975-01-01

    The series connection of two linear time-invariant systems that have minimal state space system descriptions is considered. From these descriptions, strict-system-equivalent polynomial matrix system descriptions in the manner of Rosenbrock are derived. They are based on the factorization of the transfer matrix of the subsystems as a ratio of two right or left coprime polynomial matrices. They give rise to a simple polynomial matrix system description of the tandem connection. Theorem 1 states that for the complete controllability and observability of the state space system description of the series connection, it is necessary and sufficient that certain 'denominator' and 'numerator' groups are coprime. Consequences for feedback systems are drawn in Corollary 1. The role of pole-zero cancellations is explained by Lemma 3 and Corollaries 2 and 3.

  16. Computing border bases using mutant strategies

    NASA Astrophysics Data System (ADS)

    Ullah, E.; Abbas Khan, S.

    2014-01-01

    Border bases, a generalization of Gröbner bases, have been actively studied in recent years due to their applicability to industrial problems. In cryptography and coding theory, a useful application of border bases is to solve zero-dimensional systems of polynomial equations over finite fields, which motivates the development of optimizations of the algorithms that compute border bases. In 2006, Kehrein and Kreuzer formulated the Border Basis Algorithm (BBA), an algorithm which allows the computation of border bases that relate to a degree-compatible term ordering. In 2007, J. Ding et al. introduced mutant strategies based on finding special lower-degree polynomials in the ideal. The mutant strategies aim to distinguish special lower-degree polynomials (mutants) from the other polynomials and give them priority in the process of generating new polynomials in the ideal. In this paper we develop hybrid algorithms that use the ideas of J. Ding et al. involving the concept of mutants to optimize the Border Basis Algorithm for solving systems of polynomial equations over finite fields. In particular, we recall a version of the Border Basis Algorithm known as the Improved Border Basis Algorithm and propose two hybrid algorithms, called MBBA and IMBBA. The new mutant variants provide both space efficiency and time efficiency. The efficiency of these newly developed hybrid algorithms is discussed using standard cryptographic examples.

  17. Joint two-dimensional inversion of magnetotelluric and gravity data using correspondence maps

    NASA Astrophysics Data System (ADS)

    Carrillo, Jonathan; Gallardo, Luis A.

    2018-05-01

    An accurate characterization of subsurface targets relies on the interpretation of multiple geophysical properties and their relationships. There are mainly two ways to link different geophysical parameters in a joint inversion: structural and petrophysical relationships. Structural approaches aim at minimizing topological differences and are widely popular since they need only a few assumptions about the models. Conversely, methods based on petrophysical links rely mostly on the property values themselves and can provide a strong coupling between models, but they need to be treated carefully because a specific direct relationship must be known or assumed. While some petrophysical relationships are widely accepted, the question remains whether they can be detected directly from the geophysical data. Currently, there is no reported development that takes full advantage of the flexibility of jointly estimating in-situ empirical relationships and geophysical models for a given geological scenario. We thus developed an algorithm for the two-dimensional joint inversion of gravity and magnetotelluric data that simultaneously seeks a density-resistivity relationship, described through a polynomial function, that is optimal for each studied site. The iterative two-dimensional scheme is tested using synthetic and field data from Cerro Prieto, Mexico. The resulting models show enhanced resolution with increased structural and petrophysical correlation. We show that by fitting a functional relationship we significantly increase the coupled geological sense of the models at little cost in terms of data misfit.
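
    The joint-inversion scheme is not reproduced here, but the per-site polynomial correspondence map can be sketched: given colocated density and log-resistivity model values, a low-degree polynomial link is fitted by least squares. The values and the degree below are illustrative assumptions.

```python
import numpy as np

# hypothetical colocated model values from one iteration of a joint inversion
log_res = np.array([0.5, 1.0, 1.5, 2.0, 2.5])    # log10 resistivity (ohm·m)
dens = np.array([2.0, 2.15, 2.3, 2.45, 2.6])     # density (g/cm^3)

# degree-1 polynomial correspondence map, fitted per site by least squares
c = np.polyfit(log_res, dens, deg=1)
print(np.round(c, 3))    # [slope, intercept] = [0.3, 1.85] for this synthetic link
```

    Re-estimating such a map at every iteration is what lets the algorithm couple the two models without assuming a fixed petrophysical law in advance.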

  18. Stochastic Analysis of the Efficiency of a Wireless Power Transfer System Subject to Antenna Variability and Position Uncertainties

    PubMed Central

    Rossi, Marco; Stockman, Gert-Jan; Rogier, Hendrik; Vande Ginste, Dries

    2016-01-01

    The efficiency of a wireless power transfer (WPT) system in the radiative near-field is inevitably affected by the variability in the design parameters of the deployed antennas and by uncertainties in their mutual position. Therefore, we propose a stochastic analysis that combines the generalized polynomial chaos (gPC) theory with an efficient model for the interaction between devices in the radiative near-field. This framework enables us to investigate the impact of random effects on the power transfer efficiency (PTE) of a WPT system. More specifically, the WPT system under study consists of a transmitting horn antenna and a receiving textile antenna operating in the Industrial, Scientific and Medical (ISM) band at 2.45 GHz. First, we model the impact of the textile antenna’s variability on the WPT system. Next, we include the position uncertainties of the antennas in the analysis in order to quantify the overall variations in the PTE. The analysis is carried out by means of polynomial-chaos-based macromodels, whereas a Monte Carlo simulation validates the complete technique. It is shown that the proposed approach is very accurate, more flexible and more efficient than a straightforward Monte Carlo analysis, with demonstrated speedup factors up to 2500. PMID:27447632

  19. Stochastic Analysis of the Efficiency of a Wireless Power Transfer System Subject to Antenna Variability and Position Uncertainties.

    PubMed

    Rossi, Marco; Stockman, Gert-Jan; Rogier, Hendrik; Vande Ginste, Dries

    2016-07-19

    The efficiency of a wireless power transfer (WPT) system in the radiative near-field is inevitably affected by the variability in the design parameters of the deployed antennas and by uncertainties in their mutual position. Therefore, we propose a stochastic analysis that combines the generalized polynomial chaos (gPC) theory with an efficient model for the interaction between devices in the radiative near-field. This framework enables us to investigate the impact of random effects on the power transfer efficiency (PTE) of a WPT system. More specifically, the WPT system under study consists of a transmitting horn antenna and a receiving textile antenna operating in the Industrial, Scientific and Medical (ISM) band at 2.45 GHz. First, we model the impact of the textile antenna's variability on the WPT system. Next, we include the position uncertainties of the antennas in the analysis in order to quantify the overall variations in the PTE. The analysis is carried out by means of polynomial-chaos-based macromodels, whereas a Monte Carlo simulation validates the complete technique. It is shown that the proposed approach is very accurate, more flexible and more efficient than a straightforward Monte Carlo analysis, with demonstrated speedup factors up to 2500.
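
    The gPC machinery used above can be illustrated in one dimension. The sketch below projects a toy response with one standard-Gaussian design parameter onto probabilists' Hermite polynomials by Gauss-Hermite quadrature and reads off the mean and variance from the coefficients; this is a non-intrusive illustration, not the authors' macromodelling pipeline, and the response function is an assumption.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def gpc_coeffs(f, order, n_quad=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials
    via Gauss-Hermite quadrature."""
    x, w = He.hermegauss(n_quad)        # nodes/weights for the weight exp(-x^2/2)
    w = w / np.sqrt(2 * np.pi)          # renormalize to the standard Gaussian
    fx = f(x)
    return np.array([np.sum(w * fx * He.hermeval(x, [0] * k + [1]))
                     / math.factorial(k)            # E[He_k^2] = k!
                     for k in range(order + 1)])

# toy "efficiency" response depending on one Gaussian design parameter
c = gpc_coeffs(lambda xi: np.exp(0.1 + 0.2 * xi), order=6)
mean = c[0]                              # gPC mean is the 0th coefficient
var = sum(math.factorial(k) * c[k]**2 for k in range(1, 7))
print(mean, var)                         # close to the analytic lognormal moments
```

    The statistics of the full WPT model follow the same pattern with several random inputs, which is where gPC's speedup over plain Monte Carlo comes from.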

  20. Algebraic solution for the forward displacement analysis of the general 6-6 stewart mechanism

    NASA Astrophysics Data System (ADS)

    Wei, Feng; Wei, Shimin; Zhang, Ying; Liao, Qizheng

    2016-01-01

    The solution of the forward displacement analysis (FDA) of the general 6-6 Stewart mechanism (i.e., the connection points of the moving and fixed platforms are not restricted to lying in a plane) has been extensively studied, but the efficiency of the solution remains to be effectively addressed. To this end, an algebraic elimination method is proposed for the FDA of the general 6-6 Stewart mechanism. The kinematic constraint equations are built using conformal geometric algebra (CGA) and transformed by a substitution of variables into seven equations with seven unknown variables. According to the characteristics of anti-symmetric matrices, these seven equations can be further transformed into seven equations with four unknown variables by a substitution of variables using the Gröbner basis. The elimination weight is increased by changing the degree of one variable, and sixteen equations with four unknown variables are obtained using the Gröbner basis. A 40th-degree univariate polynomial equation is derived by constructing a relatively small 9×9 Sylvester resultant matrix. Finally, two numerical examples are employed to verify the proposed method. The results indicate that the proposed method can effectively improve the efficiency of the solution and reduce the computational burden because of the small size of the resultant matrix.

  1. Advanced reliability methods for structural evaluation

    NASA Technical Reports Server (NTRS)

    Wirsching, P. H.; Wu, Y.-T.

    1985-01-01

    Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
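
    A minimal sketch of the strategy described above, with a hypothetical two-variable response in place of a real structural code: run the routine at k perturbed points, fit an explicit polynomial, then evaluate the failure probability cheaply on the surrogate. Plain Monte Carlo is used on the surrogate here for simplicity, where true FPI would apply fast analytic integration to the explicit polynomial form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical "computer routine": response Y of two random variables
def response(x1, x2):
    return 10.0 - x1**2 - 0.5 * x2

# Step 1: k = 9 perturbed runs on a 3x3 grid around the mean point
x1_pts = np.repeat([-1.0, 0.0, 1.0], 3)
x2_pts = np.tile([-1.0, 0.0, 1.0], 3)
y_pts = response(x1_pts, x2_pts)

# Step 2: fit the quadratic surface Y ~ a + b*x1 + c*x2 + d*x1^2 + e*x2^2
A = np.column_stack([np.ones(9), x1_pts, x2_pts, x1_pts**2, x2_pts**2])
coef, *_ = np.linalg.lstsq(A, y_pts, rcond=None)

def surrogate(x1, x2):
    return coef @ np.array([np.ones_like(x1), x1, x2, x1**2, x2**2])

# Step 3: probability of the event Y < 7, evaluated on the *cheap* surrogate
x1s = rng.normal(0, 1, 500_000)
x2s = rng.normal(0, 1, 500_000)
pf = np.mean(surrogate(x1s, x2s) < 7.0)
print(pf)
```

Because the toy response is itself quadratic, the fitted surface reproduces it exactly; for a real structural code the fit quality would have to be checked before trusting the tail probability.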

  2. Intrusive Method for Uncertainty Quantification in a Multiphase Flow Solver

    NASA Astrophysics Data System (ADS)

    Turnquist, Brian; Owkes, Mark

    2016-11-01

    Uncertainty quantification (UQ) is a necessary, interesting, and often neglected aspect of fluid flow simulations. To determine the significance of uncertain initial and boundary conditions, a multiphase flow solver is being created which extends a single-phase, intrusive, polynomial chaos scheme to multiphase flows. Reliably estimating the impact of input uncertainty on design criteria can help identify and minimize unwanted variability in critical areas, and has the potential to advance knowledge of atomizing jets, jet engines, pharmaceuticals, and food processing. Use of an intrusive polynomial chaos method has been shown to significantly reduce computational cost over non-intrusive approaches such as Monte Carlo sampling. The method requires transforming the model equations into a weak form through substitution of stochastic (random) variables. Ultimately, the model deploys a stochastic Navier-Stokes equation, a stochastic conservative level set approach including reinitialization, and stochastic normals and curvature. By implementing these approaches together in one framework, basic problems may be investigated which shed light on model expansion, uncertainty theory, and fluid flow in general. NSF Grant Number 1511325.
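
    The intrusive (Galerkin) projection the abstract refers to can be demonstrated on a scalar model problem rather than the Navier-Stokes system: a decay ODE with an uncertain Gaussian rate, expanded in probabilists' Hermite polynomials. This is a textbook sketch under assumed parameters, not the authors' solver.

```python
import numpy as np

# Intrusive Galerkin PC for du/dt = -k*u with uncertain rate
# k = k0 + k1*xi, xi ~ N(0,1), and u expanded in Hermite polynomials He_i.
# Projection yields the coupled linear system
#   du_i/dt = -k0*u_i - k1*(u_{i-1} + (i+1)*u_{i+1})
# from the triple products E[He_1 He_{i-1} He_i] = i!, E[He_1 He_{i+1} He_i] = (i+1)!.
k0, k1, u_init, T = 1.0, 0.3, 1.0, 1.0
P = 8                                   # truncation order

A = np.zeros((P + 1, P + 1))
for i in range(P + 1):
    A[i, i] = -k0
    if i >= 1:
        A[i, i - 1] = -k1
    if i + 1 <= P:
        A[i, i + 1] = -k1 * (i + 1)

u = np.zeros(P + 1)
u[0] = u_init
dt, nsteps = 1e-3, 1000                 # nsteps * dt = T
for _ in range(nsteps):                 # classical RK4 on the Galerkin system
    s1 = A @ u
    s2 = A @ (u + 0.5 * dt * s1)
    s3 = A @ (u + 0.5 * dt * s2)
    s4 = A @ (u + dt * s3)
    u += dt / 6 * (s1 + 2 * s2 + 2 * s3 + s4)

mean_pc = u[0]                          # E[u(T)] is the zeroth coefficient
mean_exact = u_init * np.exp(-k0 * T + 0.5 * (k1 * T) ** 2)
print(mean_pc, mean_exact)
```

The single deterministic solve of the coupled system replaces many random samples, which is the cost argument for the intrusive route.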

  3. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  4. Generalized neurofuzzy network modeling algorithms using Bézier-Bernstein polynomial functions and additive decomposition.

    PubMed

    Hong, X; Harris, C J

    2000-01-01

    This paper introduces a new neurofuzzy model construction algorithm for nonlinear dynamic systems based upon basis functions that are Bézier-Bernstein polynomial functions. The algorithm is generalized in that it copes with n-dimensional inputs by utilising an additive decomposition construction to overcome the curse of dimensionality associated with high n. It also introduces univariate Bézier-Bernstein polynomial functions for the completeness of the generalized procedure. Like B-spline-expansion-based neurofuzzy systems, neurofuzzy networks based on Bézier-Bernstein polynomial functions hold desirable properties such as nonnegativity of the basis functions, unity of support, and interpretability of the basis functions as fuzzy membership functions; in addition, they offer the advantages of structural parsimony and a Delaunay input-space partition, essentially overcoming the curse of dimensionality associated with conventional fuzzy and RBF networks. The modeling network is based on an additive decomposition approach together with two separate basis-function formation approaches for the univariate and bivariate Bézier-Bernstein polynomial functions used in model construction. The overall network weights are then learnt using conventional least-squares methods. Numerical examples are included to demonstrate the effectiveness of this new data-based modeling approach.
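
    The nonnegativity and sum-to-one (partition-of-unity) properties cited above are easy to verify numerically for the Bernstein basis; this small sketch is independent of the paper's construction algorithm.

```python
import numpy as np
from math import comb

def bernstein_basis(n, x):
    """All degree-n Bernstein basis functions B_{i,n}(x) = C(n,i) x^i (1-x)^(n-i)."""
    x = np.asarray(x, dtype=float)
    return np.array([comb(n, i) * x**i * (1 - x)**(n - i) for i in range(n + 1)])

x = np.linspace(0.0, 1.0, 101)
B = bernstein_basis(4, x)

print(np.all(B >= 0))                     # nonnegativity on [0, 1]
print(np.allclose(B.sum(axis=0), 1.0))    # the basis sums to one everywhere
```

These two properties are what allow each basis function to be read as a fuzzy membership function, as the abstract notes.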

  5. [Hyperspectral Remote Sensing Estimation Models for Pasture Quality].

    PubMed

    Ma, Wei-wei; Gong, Cai-lan; Hu, Yong; Wei, Yong-lin; Li, Long; Liu, Feng-yi; Meng, Peng

    2015-10-01

    Crude protein (CP), crude fat (CFA) and crude fiber (CFI) are key indicators of the quality and feeding value of pasture. Hence, identification of these biological contents is an essential practice for animal husbandry. As current approaches to pasture quality estimation are time-consuming and costly, and even generate hazardous waste, a real-time and non-destructive method is developed in this study using pasture canopy hyperspectral data. A field campaign was carried out in August 2013 around Qinghai Lake to obtain field spectral properties of 19 types of natural pasture using the ASD Field Spec 3, a field spectrometer that works in the optical region (350-2 500 nm) of the electromagnetic spectrum. In addition to the spectral data, pasture samples were collected from the field and examined in the laboratory to measure the relative concentrations of CP (%), CFA (%) and CFI (%). After spectral denoising and smoothing, the relationships of the pasture quality parameters with the reflectance spectrum, the first derivatives of reflectance (FDR), band ratios and the wavelet coefficients (WCs) were analyzed. The concentrations of CP, CFA and CFI were found to be closely correlated with FDR at wavebands centered at 424, 1 668, and 918 nm, as well as with the low-scale (scale = 2, 4) Morlet, Coiflets and Gaussian WCs. Accordingly, linear, exponential, and polynomial equations between each pasture variable and FDR or WCs were developed. Validation of these equations indicated that the polynomial model with an independent variable of Coiflets WCs (scale = 4, wavelength = 1 209 nm), the polynomial model with an independent variable of FDR, and the exponential model with an independent variable of FDR were the optimal models for predicting the concentrations of CP, CFA and CFI, respectively. The R2 of the pasture quality estimation models was between 0.646 and 0.762 at the 0.01 significance level. 
Results suggest that the first derivatives or the wavelet coefficients of hyperspectral reflectance in visible and near-infrared regions can be used for pasture quality estimation, and that it will provide a basis for real-time prediction of pasture quality using remote sensing techniques.

  6. Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos

    DTIC Science & Technology

    2001-09-11

    The report draws on Askey's memoir "Some basic hypergeometric polynomials that generalize Jacobi polynomials" (Memoirs Amer. Math. Soc.). The Askey scheme, represented as a tree structure in figure 1 (following [24]), classifies the hypergeometric orthogonal polynomials; the orthogonal polynomials associated with the generalized polynomial chaos are drawn from this scheme.

  7. Evaluate More General Integrals Involving Universal Associated Legendre Polynomials via Taylor’s Theorem

    NASA Astrophysics Data System (ADS)

    Yañez-Navarro, G.; Sun, Guo-Hua; Sun, Dong-Sheng; Chen, Chang-Yuan; Dong, Shi-Hai

    2017-08-01

    A few important integrals involving the product of two universal associated Legendre polynomials P_{l'}^{m'}(x), P_{k'}^{n'}(x) and the factors x^{2a}(1 - x^2)^{-p-1}, x^b(1 ± x)^{-p-1} and x^c(1 - x^2)^{-p-1}(1 ± x) are evaluated using the operator form of Taylor's theorem and an integral over a single universal associated Legendre polynomial. These integrals are more general since the quantum numbers are unequal, i.e. l' ≠ k' and m' ≠ n'. Their selection rules are also given. We also verify the correctness of those integral formulas numerically. Supported by 20170938-SIP-IPN, Mexico

  8. On the complexity of some quadratic Euclidean 2-clustering problems

    NASA Astrophysics Data System (ADS)

    Kel'manov, A. V.; Pyatkin, A. V.

    2016-03-01

    Some problems of partitioning a finite set of points of Euclidean space into two clusters are considered. In these problems, the following criteria are minimized: (1) the sum over both clusters of the sums of squared pairwise distances between the elements of the cluster and (2) the sum of the (multiplied by the cardinalities of the clusters) sums of squared distances from the elements of the cluster to its geometric center, where the geometric center (or centroid) of a cluster is defined as the mean value of the elements in that cluster. Additionally, another problem close to (2) is considered, where the desired center of one of the clusters is given as input, while the center of the other cluster is unknown (is the variable to be optimized) as in problem (2). Two variants of the problems are analyzed, in which the cardinalities of the clusters are (1) parts of the input or (2) optimization variables. It is proved that all the considered problems are strongly NP-hard and that, in general, there is no fully polynomial-time approximation scheme for them (unless P = NP).

  9. Peculiarities of stochastic regime of Arctic ice cover time evolution over 1987-2014 from microwave satellite sounding on the basis of NASA team 2 algorithm

    NASA Astrophysics Data System (ADS)

    Raev, M. D.; Sharkov, E. A.; Tikhonov, V. V.; Repina, I. A.; Komarova, N. Yu.

    2015-12-01

    The GLOBAL-RT database (DB) is composed of long-term multichannel microwave radiometry data received from the DMSP F08-F17 satellites; it is permanently supplemented with new data by the Earth-exploration-from-space department of the Space Research Institute, Russian Academy of Sciences. Arctic ice-cover areas for regions north of 60° N latitude were calculated using the DB polar version and the NASA Team 2 algorithm, which is widely used in the foreign scientific literature. Based on the analysis of the variability of the Arctic ice cover during 1987-2014, the two months when the ice cover is maximal (February) and minimal (September) were selected, and the average ice-cover area was calculated for these months. Confidence intervals of the average values are within the 95-98% limits. Several approximations are derived for the time dependences of the ice-cover maximum and minimum over the period under study. Regression dependences were calculated for polynomials from the first degree (linear) to the sextic. The minimal root-mean-square error of deviation from the approximating curve decreased sharply up to the biquadratic (fourth-degree) polynomial and then varied insignificantly, from 0.5593 for the third-degree polynomial to 0.4560 for the biquadratic polynomial. Hence, the commonly used strictly linear regression with a negative time gradient for the September Arctic ice-cover minimum over 30 years should be considered incorrect.
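
    The degree-selection exercise (fit polynomials of degree 1 through 6, compare root-mean-square errors) is easy to reproduce on synthetic data. The series below is made up and only mimics the shape of the analysis, not the actual ice-cover record.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for an annual series over 1987-2014 (NOT the real data):
# a quadratic trend plus noise, fitted by polynomials of degree 1 through 6.
years = np.arange(1987, 2015, dtype=float)
t = years - years.mean()              # centering keeps high-degree fits stable
y = 7.0 - 0.004 * t**2 + rng.normal(0, 0.1, t.size)

rmses = []
for deg in range(1, 7):
    coeffs = np.polyfit(t, y, deg)
    rmses.append(np.sqrt(np.mean((np.polyval(coeffs, t) - y) ** 2)))
    print(deg, round(rmses[-1], 4))
```

In-sample RMSE can only decrease as the degree grows; the informative signal is where it stops decreasing sharply, which is the criterion the abstract applies when it rejects the purely linear fit.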

  10. Groebner Basis Solutions to Satellite Trajectory Control by Pole Placement

    NASA Astrophysics Data System (ADS)

    Kukelova, Z.; Krsek, P.; Smutny, V.; Pajdla, T.

    2013-09-01

    Satellites play an important role, e.g., in telecommunication, navigation and weather monitoring. Controlling their trajectories is an important problem. In [1], an approach to pole placement for the synthesis of a linear controller has been presented. It leads to solving five polynomial equations in nine unknown elements of the state space matrices of a compensator. This is an underconstrained system, and therefore four of the unknown elements need to be considered as free parameters and set to some prior values to obtain a system of five equations in five unknowns. In [1], this system was solved for one chosen set of free parameters with the help of Dixon resultants. In this work, we study and present Groebner basis solutions to this problem of computing a dynamic compensator for the satellite for different combinations of input free parameters. We show that the Groebner basis method for solving systems of polynomial equations leads to very simple solutions for all combinations of free parameters. These solutions require only the Gauss-Jordan elimination of a small matrix and the computation of the roots of a single-variable polynomial. The maximum degree of this polynomial is not greater than six in general, and for most combinations of the input free parameters its degree is even lower. [1] B. Palancz. Application of Dixon resultant to satellite trajectory control by pole placement. Journal of Symbolic Computation, Volume 50, March 2013, Pages 79-99, Elsevier.
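
    The pattern described, a lexicographic Groebner basis reducing a polynomial system to a triangular form whose univariate member is solved by root finding, can be shown with sympy on a toy system (not the actual compensator equations):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Toy polynomial system standing in for the five compensator equations
F = [x**2 + y**2 - 5, x - y - 1]

G = sp.groebner(F, y, x, order='lex')   # lex order with y > x eliminates y
print(G.exprs)                          # one member is univariate in x

sols = sp.solve(F, [x, y])
print(sols)                             # the two solutions: (-1, -2) and (2, 1)
```

Solving the univariate member and back-substituting through the triangular basis is exactly the "roots of a single-variable polynomial" step the abstract highlights.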

  11. Novel quadrilateral elements based on explicit Hermite polynomials for bending of Kirchhoff-Love plates

    NASA Astrophysics Data System (ADS)

    Beheshti, Alireza

    2018-03-01

    The contribution addresses the finite element analysis of bending of plates given the Kirchhoff-Love model. To analyze the static deformation of plates with different loadings and geometries, the principle of virtual work is used to extract the weak form. Following deriving the strain field, stresses and resultants may be obtained. For constructing four-node quadrilateral plate elements, the Hermite polynomials defined with respect to the variables in the parent space are applied explicitly. Based on the approximated field of displacement, the stiffness matrix and the load vector in the finite element method are obtained. To demonstrate the performance of the subparametric 4-node plate elements, some known, classical examples in structural mechanics are solved and there are comparisons with the analytical solutions available in the literature.

  12. Formal methods for modeling and analysis of hybrid systems

    NASA Technical Reports Server (NTRS)

    Tiwari, Ashish (Inventor); Lincoln, Patrick D. (Inventor)

    2009-01-01

    A technique based on the use of a quantifier elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata is taught. The resulting abstractions are always discrete transition systems which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. The technique works on linear and non-linear polynomial hybrid systems: the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. An exemplar tool in the SAL environment built over the theorem prover PVS is detailed. The technique scales well to large and complex hybrid systems.

  13. Polynomial Chaos decomposition applied to stochastic dosimetry: study of the influence of the magnetic field orientation on the pregnant woman exposure at 50 Hz.

    PubMed

    Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P

    2014-01-01

    Polynomial Chaos (PC) is a decomposition method used to build a meta-model, which approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to changes in the orientation of the B-field vector with respect to the human body. In detail, the analysis of the exposure of a pregnant woman at 7 months of gestational age is carried out to build up, by means of the PC expansion, a statistical meta-model of the induced electric field for each fetal tissue and for the fetal whole body as a function of the B-field orientation, considering a uniform exposure at 50 Hz.

  14. Efficient evaluation of the material response of tissues reinforced by statistically oriented fibres

    NASA Astrophysics Data System (ADS)

    Hashlamoun, Kotaybah; Grillo, Alfio; Federico, Salvatore

    2016-10-01

    For several classes of soft biological tissues, modelling complexity is in part due to the arrangement of the collagen fibres. In general, the arrangement of the fibres can be described by defining, at each point in the tissue, the structure tensor (i.e. the tensor product of the unit vector of the local fibre arrangement with itself) and a probability distribution of orientation. In this approach, assuming that the fibres do not interact with each other, the overall contribution of the collagen fibres to a given mechanical property of the tissue can be estimated by means of an averaging integral, over the set of all possible directions in space, of the constitutive function describing the mechanical property under study. Except for the particular case of fibre constitutive functions that are polynomial in the transversely isotropic invariants of the deformation, the averaging integral cannot be evaluated directly in a single calculation because, in general, the integrand depends both on deformation and on fibre orientation in a non-separable way. The problem is thus, in a sense, analogous to that of integrating a function of two variables that cannot be split into the product of two functions, each depending on only one of the variables. Although numerical schemes can be used to evaluate the integral at each deformation increment, this is computationally expensive. With the purpose of containing computational costs, this work proposes approximation methods that are based on the direct integrability of polynomial functions and that do not require the step-by-step evaluation of the averaging integrals. 
Three different methods are proposed: (a) a Taylor expansion of the fibre constitutive function in the transversely isotropic invariants of the deformation; (b) a Taylor expansion of the fibre constitutive function in the structure tensor; (c) for the case of a fibre constitutive function having a polynomial argument, an approximation in which the directional average of the constitutive function is replaced by the constitutive function evaluated at the directional average of the argument. Each of the proposed methods approximates the averaged constitutive function in such a way that it is multiplicatively decomposed into the product of a function of the deformation only and a function of the structure tensors only. In order to assess the accuracy of these methods, we evaluate the constitutive functions of the elastic potential and the Cauchy stress, for a biaxial test, under different conditions, i.e. different fibre distributions and different ratios of the nominal strains in the two directions. The results are then compared against those obtained for an averaging method available in the literature, as well as against the integration made at each increment of deformation.

  15. An adaptive sampling method for variable-fidelity surrogate models using improved hierarchical kriging

    NASA Astrophysics Data System (ADS)

    Hu, Jiexiang; Zhou, Qi; Jiang, Ping; Shao, Xinyu; Xie, Tingli

    2018-01-01

    Variable-fidelity (VF) modelling methods have been widely used in complex engineering system design to mitigate the computational burden. Building a VF model generally includes two parts: design of experiments and metamodel construction. In this article, an adaptive sampling method based on improved hierarchical kriging (ASM-IHK) is proposed to refine the VF model. First, an improved hierarchical kriging model is developed as the metamodel, in which the low-fidelity model is scaled through a polynomial response surface function to capture the characteristics of the high-fidelity model. Secondly, to reduce local approximation errors, an active learning strategy based on sequential sampling is introduced to make full use of the information already acquired at the current sampling points and to guide the sampling process of the high-fidelity model. Finally, two numerical examples and the modelling of the aerodynamic coefficient of an aircraft are provided to demonstrate the approximation capability of the proposed approach in comparison with three other metamodelling methods and two sequential sampling methods. The results show that ASM-IHK provides a more accurate metamodel at the same simulation cost, which is very important in metamodel-based engineering design problems.
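
    The trend idea in hierarchical kriging, scaling a cheap low-fidelity model through a polynomial response surface so that it matches a few expensive high-fidelity samples, can be sketched without the kriging machinery. The 1-D model pair below is a standard analytic test used here as an assumption; no actual kriging correlation model is built.

```python
import numpy as np

def hf(x):  # "expensive" high-fidelity model
    return (6 * x - 2) ** 2 * np.sin(12 * x - 4)

def lf(x):  # cheap low-fidelity model: a scaled, shifted version of hf
    return 0.5 * hf(x) + 10 * (x - 0.5) - 5

# A few expensive HF samples
x_hf = np.array([0.0, 0.4, 0.6, 1.0])
y_hf = hf(x_hf)

# Trend with a polynomial scaling of the LF model:
#   y_hf(x) ~ (a + b*x) * lf(x) + (c + d*x)
A = np.column_stack([lf(x_hf), x_hf * lf(x_hf), np.ones_like(x_hf), x_hf])
beta, *_ = np.linalg.lstsq(A, y_hf, rcond=None)

def vf(x):
    return beta[0] * lf(x) + beta[1] * x * lf(x) + beta[2] + beta[3] * x

xs = np.linspace(0, 1, 201)
rmse_vf = np.sqrt(np.mean((vf(xs) - hf(xs)) ** 2))
rmse_lf = np.sqrt(np.mean((lf(xs) - hf(xs)) ** 2))
print(rmse_vf, rmse_lf)   # the corrected model beats the raw LF model
```

Here the correction recovers the HF model essentially exactly because the LF model is an affine transformation of it; a full IHK model would additionally place a kriging correction on the residual.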

  16. Characterization of the spatial variability of channel morphology

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    2002-01-01

    The spatial variability of two fundamental morphological variables is investigated for rivers having a wide range of discharge (five orders of magnitude). The variables, water-surface width and average depth, were measured at 58 to 888 equally spaced cross-sections in channel links (river reaches between major tributaries). These measurements provide data to characterize the two-dimensional structure of a channel link which is the fundamental unit of a channel network. The morphological variables have nearly log-normal probability distributions. A general relation was determined which relates the means of the log-transformed variables to the logarithm of discharge similar to previously published downstream hydraulic geometry relations. The spatial variability of the variables is described by two properties: (1) the coefficient of variation which was nearly constant (0.13-0.42) over a wide range of discharge; and (2) the integral length scale in the downstream direction which was approximately equal to one to two mean channel widths. The joint probability distribution of the morphological variables in the downstream direction was modelled as a first-order, bivariate autoregressive process. This model accounted for up to 76 per cent of the total variance. The two-dimensional morphological variables can be scaled such that the channel width-depth process is independent of discharge. The scaling properties will be valuable to modellers of both basin and channel dynamics. Published in 2002 John Wiley and Sons, Ltd.
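
    The first-order autoregressive description of downstream variability can be sketched with a univariate AR(1) model of log-width; the paper's model is bivariate, and the parameter values below are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(3)

# AR(1) model of log(width) at equally spaced cross-sections:
#   w[i] - mu = phi * (w[i-1] - mu) + eps[i]
# Correlation decays as phi**lag, so the integral length scale is roughly
# -1/ln(phi) in units of the cross-section spacing.
mu, phi, sigma = np.log(50.0), 0.8, 0.1
n = 20_000

w = np.empty(n)
w[0] = mu
for i in range(1, n):
    w[i] = mu + phi * (w[i - 1] - mu) + rng.normal(0, sigma)

# The empirical lag-1 autocorrelation recovers phi
d = w - w.mean()
r1 = np.sum(d[1:] * d[:-1]) / np.sum(d * d)
print(r1)          # close to 0.8
```

Fitting phi to measured cross-section series is how the integral length scale (one to two channel widths in the paper) would be estimated in practice.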

  17. Multiple and Single Green Area Measurements and Classification Using Phantom Images in Comparison with Derived Experimental Law

    NASA Astrophysics Data System (ADS)

    Abu-Zaid, N. A. M.

    2017-11-01

    In many circumstances it is difficult for humans to reach certain areas, due to topography, personal safety, or security regulations. Governments and individuals need to measure those areas and classify the green parts for reclamation. To solve this problem, this research proposes using a Phantom aircraft (drone) to capture a digital image of the targeted area, then using a segmentation algorithm to separate the green space and calculate its area. Two problems had to be addressed. The first is the variable elevation at which an image is taken, which changes the physical area covered by each pixel; to overcome this, a fourth-degree polynomial was fitted to experimental data. The second is the presence of several unconnected green areas in a single image when only one of them is of interest; to solve this, the probability of classifying the targeted area as green was increased, while the probability for untargeted sections was decreased by labelling parts of them as non-green. A practical law was also devised to measure the target area in the digital image for comparison with practical measurements and the polynomial fit.

  18. Constraint analysis of two-dimensional quadratic gravity from { BF} theory

    NASA Astrophysics Data System (ADS)

    Valcárcel, C. E.

    2017-01-01

    Quadratic gravity in two dimensions can be formulated as a background field (BF) theory plus an interaction term that is polynomial in both the gauge and background fields. This formulation is similar to the one given by Freidel and Starodubtsev to obtain MacDowell-Mansouri gravity in four dimensions. In this article we use Dirac's Hamiltonian formalism to analyze the constraint structure of the two-dimensional polynomial BF action. After obtaining the constraints of the theory, we proceed with the Batalin-Fradkin-Vilkovisky procedure to obtain the transition amplitude. We also compare our results with those obtained from generalized dilaton gravity.

  19. Spectral solver for multi-scale plasma physics simulations with dynamically adaptive number of moments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vencels, Juris; Delzanno, Gian Luca; Johnson, Alec

    2015-06-01

    A spectral method for kinetic plasma simulations based on the expansion of the velocity distribution function in a variable number of Hermite polynomials is presented. The method is based on a set of non-linear equations that is solved to determine the coefficients of the Hermite expansion satisfying the Vlasov and Poisson equations. In this paper, we first show that this technique combines the fluid and kinetic approaches into one framework. Second, we present an adaptive strategy to increase and decrease the number of Hermite functions dynamically during the simulation. The technique is applied to the Landau damping and two-stream instability test problems. Performance results show 21% and 47% savings of total simulation time in the Landau and two-stream instability test cases, respectively.

  20. Zernike expansion of derivatives and Laplacians of the Zernike circle polynomials.

    PubMed

    Janssen, A J E M

    2014-07-01

    The partial derivatives and Laplacians of the Zernike circle polynomials occur in various places in the literature on computational optics. In a number of cases, the expansion of these derivatives and Laplacians in the circle polynomials are required. For the first-order partial derivatives, analytic results are scattered in the literature. Results start as early as 1942 in Nijboer's thesis and continue until present day, with some emphasis on recursive computation schemes. A brief historic account of these results is given in the present paper. By choosing the unnormalized version of the circle polynomials, with exponential rather than trigonometric azimuthal dependence, and by a proper combination of the two partial derivatives, a concise form of the expressions emerges. This form is appropriate for the formulation and solution of a model wavefront sensing problem of reconstructing a wavefront on the level of its expansion coefficients from (measurements of the expansion coefficients of) the partial derivatives. It turns out that the least-squares estimation problem arising here decouples per azimuthal order m, and per m the generalized inverse solution assumes a concise analytic form so that singular value decompositions are avoided. The preferred version of the circle polynomials, with proper combination of the partial derivatives, also leads to a concise analytic result for the Zernike expansion of the Laplacian of the circle polynomials. From these expansions, the properties of the Laplacian as a mapping from the space of circle polynomials of maximal degree N, as required in the study of the Neumann problem associated with the transport-of-intensity equation, can be read off within a single glance. Furthermore, the inverse of the Laplacian on this space is shown to have a concise analytic form.
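
    For readers wanting to experiment with the circle polynomials themselves, the radial part has a standard closed form that is straightforward to evaluate; this is the generic textbook formula, not the paper's derivative expansions.

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_n^m(rho) of the Zernike circle polynomials
    (requires |m| <= n and n - |m| even)."""
    m = abs(m)
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

# Spot checks against closed forms: R_2^0 = 2*rho^2 - 1, R_4^0 = 6*rho^4 - 6*rho^2 + 1
print(zernike_radial(2, 0, 0.5))   # 2*0.25 - 1 = -0.5
print(zernike_radial(4, 0, 0.5))   # 6*0.0625 - 6*0.25 + 1 = -0.125
```

The full circle polynomial is this radial factor times an azimuthal factor (trigonometric, or exponential in the unnormalized convention the paper prefers).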

  1. Transfer matrix computation of critical polynomials for two-dimensional Potts models

    DOE PAGES

    Jacobsen, Jesper Lykke; Scullard, Christian R.

    2013-02-04

    In our previous work, we showed that critical manifolds of the q-state Potts model can be studied by means of a graph polynomial P_B(q, v), henceforth referred to as the critical polynomial. This polynomial may be defined on any periodic two-dimensional lattice. It depends on a finite subgraph B, called the basis, and on the manner in which B is tiled to construct the lattice. The real roots v = e^K - 1 of P_B(q, v) either give the exact critical points for the lattice, or provide approximations that, in principle, can be made arbitrarily accurate by increasing the size of B in an appropriate way. In earlier work, P_B(q, v) was defined by a contraction-deletion identity, similar to that satisfied by the Tutte polynomial. Here, we give a probabilistic definition of P_B(q, v), which facilitates its computation, using the transfer matrix, on much larger B than was previously possible. We present results for the critical polynomial on the (4, 8^2), kagome, and (3, 12^2) lattices for bases of up to respectively 96, 162, and 243 edges, compared to the limit of 36 edges with contraction-deletion. We discuss in detail the role of the symmetries and the embedding of B. The critical temperatures v_c obtained for ferromagnetic (v > 0) Potts models are at least as precise as the best available results from Monte Carlo simulations or series expansions. For instance, with q = 3 we obtain v_c(4, 8^2) = 3.742 489 (4), v_c(kagome) = 1.876 459 7 (2), and v_c(3, 12^2) = 5.033 078 49 (4), the precision being comparable or superior to the best simulation results. More generally, we trace the critical manifolds in the real (q, v) plane and discuss the intricate structure of the phase diagram in the antiferromagnetic (v < 0) region.

  2. An Online Gravity Modeling Method Applied for High Precision Free-INS

    PubMed Central

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-01-01

    For real-time solution of inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is not applicable because of its time and space complexity, in which traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from SHM according to the approximate linear characteristic of regional disturbing potential. Firstly, deflections of vertical (DOVs) on dense grids are calculated with SHM in an external computer. And then, the polynomial coefficients are obtained using these DOVs. To achieve global navigation, the coefficients and applicable region of polynomial model are both updated synchronously in above computer. Compared with high-degree SHM, the polynomial model takes less storage and computational time at the expense of minor precision. Meanwhile, the model is more accurate than NGM. Finally, numerical test and INS experiment show that the proposed method outperforms traditional gravity models applied for high precision free-INS. PMID:27669261
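
    The regional-fit step, a two-dimensional second-order polynomial fitted to gridded values by least squares, can be sketched as follows; the surface below is synthetic, not real deflection-of-the-vertical data.

```python
import numpy as np

# Fit a 2-D second-order polynomial to gridded "DOV-like" values
lat, lon = np.meshgrid(np.linspace(0, 1, 21), np.linspace(0, 1, 21), indexing='ij')
truth = 3.0 + 1.5 * lat - 0.8 * lon + 0.6 * lat**2 + 0.4 * lat * lon - 0.2 * lon**2

x, y, z = lat.ravel(), lon.ravel(), truth.ravel()
A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
c, *_ = np.linalg.lstsq(A, z, rcond=None)
print(np.round(c, 6))    # recovers [3, 1.5, -0.8, 0.6, 0.4, -0.2]
```

Evaluating the six-coefficient polynomial in flight is what replaces the costly high-degree spherical harmonic synthesis; the coefficients are simply refreshed when the vehicle leaves the fitted region.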

  3. An Online Gravity Modeling Method Applied for High Precision Free-INS.

    PubMed

    Wang, Jing; Yang, Gongliu; Li, Jing; Zhou, Xiao

    2016-09-23

    For real-time solution of an inertial navigation system (INS), the high-degree spherical harmonic gravity model (SHM) is impractical because of its time and space complexity, so the traditional normal gravity model (NGM) has been the dominant technique for gravity compensation. In this paper, a two-dimensional second-order polynomial model is derived from the SHM according to the approximately linear character of the regional disturbing potential. First, deflections of the vertical (DOVs) on dense grids are calculated with the SHM in an external computer. Then the polynomial coefficients are obtained from these DOVs. To achieve global navigation, the coefficients and the applicable region of the polynomial model are both updated synchronously in the external computer. Compared with the high-degree SHM, the polynomial model takes less storage and computational time at the expense of a minor loss of precision. Meanwhile, the model is more accurate than the NGM. Finally, a numerical test and an INS experiment show that the proposed method outperforms traditional gravity models applied for high-precision free-INS.

  4. Fibonacci chain polynomials: Identities from self-similarity

    NASA Technical Reports Server (NTRS)

    Lang, Wolfdieter

    1995-01-01

    Fibonacci chains are special diatomic, harmonic chains with uniform nearest neighbor interaction and two kinds of atoms (mass-ratio r) arranged according to the self-similar binary Fibonacci sequence ABAABABA..., which is obtained by repeated substitution of A yields AB and B yields A. The implications of the self-similarity of this sequence for the associated orthogonal polynomial systems which govern these Fibonacci chains with fixed mass-ratio r are studied.

  5. Segmented polynomial taper equation incorporating years since thinning for loblolly pine plantations

    Treesearch

    A. Gordon Holley; Thomas B. Lynch; Charles T. Stiff; William Stansfield

    2010-01-01

    Data from 108 trees felled from 16 loblolly pine stands owned by Temple-Inland Forest Products Corp. were used to determine effects of years since thinning (YST) on stem taper using the Max–Burkhart type segmented polynomial taper model. Sample tree YST ranged from two to nine years prior to destructive sampling. In an effort to equalize sample sizes, tree data were...

  6. Creating a non-linear total sediment load formula using polynomial best subset regression model

    NASA Astrophysics Data System (ADS)

    Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali

    2016-08-01

    The aim of this study is to derive a new total sediment load formula that is more accurate and has fewer application constraints than the well-known formulae of the literature. The five best-known stream-power-concept sediment formulae approved by ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called polynomial best subset regression (PBSR) analysis. The aim of the PBSR analysis is to fit and test all possible combinations of the input variables and to select the best subset. All input variables, together with their second and third powers, are included in the regression to test possible relations between the explanatory variables and the dependent variable. The best subset is selected with a multistep approach that depends on significance values as well as the degree of multicollinearity of the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab subsets within this holdout data. Different goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After these detailed comparisons, the most accurate equation applicable to both flume and river data was identified. On the field dataset in particular, the proposed formula outperformed the benchmark formulations.
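The best-subset idea can be sketched with an exhaustive search over polynomial terms. The adjusted-R² selection criterion and the synthetic data below are assumptions for illustration; the paper's multistep procedure additionally screens significance values and multicollinearity.

```python
import numpy as np
from itertools import combinations

# Expand each input with its 2nd and 3rd powers, fit every small subset
# of candidate terms by least squares, keep the best adjusted R^2.

def best_subset(X, y, max_terms=3):
    n = len(y)
    # candidate columns: x, x^2, x^3 for every input variable
    C = np.column_stack([X[:, j] ** p
                         for j in range(X.shape[1]) for p in (1, 2, 3)])
    best_idx, best_adj = None, -np.inf
    for k in range(1, max_terms + 1):
        for idx in combinations(range(C.shape[1]), k):
            A = np.column_stack([np.ones(n), C[:, list(idx)]])
            beta, *_ = np.linalg.lstsq(A, y, rcond=None)
            resid = y - A @ beta
            r2 = 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
            adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
            if adj > best_adj:
                best_idx, best_adj = idx, adj
    return best_idx, best_adj

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = 3.0 * X[:, 0]**2 - 2.0 * X[:, 1] + rng.normal(0.0, 0.01, 200)
idx, adj = best_subset(X, y)
print(idx, round(adj, 4))
```

The exhaustive search is exponential in the number of candidate terms, which is why real applications restrict the subset size or prune candidates first.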

  7. [Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].

    PubMed

    Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min

    2014-01-01

    To study the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis on the national scale, and thereby to determine the safety threshold of fluoride in drinking water, meta-regression analysis was applied to the 2001-2002 national endemic fluorosis survey data of key wards. First, a fractional polynomial (FP) was adopted to establish a fixed-effect model and determine the best FP structure; then restricted maximum likelihood (REML) was adopted to estimate the between-study variance, and the best random-effect model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random-effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L. Fluoride in drinking water explained only 35.8% of the variability in prevalence; among the other influencing factors, ward type was significant, while temperature condition and altitude were not. The fractional-polynomial-based meta-regression method is simple and practical and provides a good fit; based on it, the safety threshold of fluoride in drinking water of our country is determined as 0.8 mg/L.
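A first-order logarithmic fractional polynomial on the logit scale, and the benchmark-dose calculation it permits, can be sketched as follows. The coefficients b0 and b1 are made up for demonstration; the paper's fitted national model gave BMD = 0.98 mg/L and BMDL = 0.78 mg/L.

```python
import numpy as np

# Dose-response logit(p) = b0 + b1*ln(dose), plus the benchmark dose
# for a chosen benchmark response (BMR). Coefficients are hypothetical.

def p_of_dose(dose, b0, b1):
    """Prevalence under logit(p) = b0 + b1*ln(dose)."""
    eta = b0 + b1 * np.log(dose)
    return 1.0 / (1.0 + np.exp(-eta))

def benchmark_dose(b0, b1, bmr=0.1):
    # for b1 > 0 the background risk tends to 0 as dose -> 0, so the
    # extra risk equals p(d) and the BMD solves p(BMD) = bmr exactly
    logit_bmr = np.log(bmr / (1.0 - bmr))
    return np.exp((logit_bmr - b0) / b1)

b0, b1 = -1.0, 2.0          # hypothetical fitted coefficients
bmd = benchmark_dose(b0, b1)
print(round(float(bmd), 3))
```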

  8. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Estimating human affective states from direct observations of facial, vocal, gestural, physiological, and central nervous signals through computational models such as multivariate linear regression, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly require complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce affective states in thirty subjects, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method obtains correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal have origins in the brain's motivational circuits. Thus, the proposed method can serve as a novel and efficient means of estimating human affective states.

  9. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance (ANOVA), PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD, with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
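The reason an ANOVA-type decomposition such as PDD makes Sobol' indices direct can be shown in a few lines: with an orthonormal basis, each term's squared coefficient is its variance contribution, so an index is just a ratio of sums of squares. The term list and coefficients below are illustrative, not from the paper.

```python
# key = tuple of input variables a basis term involves; () is the mean term
coeffs = {(): 1.2, (1,): 0.8, (2,): 0.5, (1, 2): 0.2, (3,): 0.1}

total_var = sum(c ** 2 for t, c in coeffs.items() if t)  # mean excluded

def sobol_first(i):
    """First-order index: the term involving variable i alone."""
    return coeffs.get((i,), 0.0) ** 2 / total_var

def sobol_total(i):
    """Total index: every term in which variable i participates."""
    return sum(c ** 2 for t, c in coeffs.items() if i in t) / total_var

print(round(sobol_first(1), 3), round(sobol_total(1), 3))
```

A sparse-PDD strategy of the kind described then amounts to deciding, term by term, which keys to keep in such a dictionary based on their variance contribution.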

  10. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    PubMed Central

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    From direct observations, facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural network, have been proposed in the past decade. In these models, linear models are generally lack of precision because of ignoring intrinsic nonlinearities of complex psychophysiological processes; and nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify model, we introduce a new computational modeling method named as higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures in the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain efficient correlation coefficients of 0.98 and 0.96 for estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidences that valence and arousal have their brain’s motivational circuit origins. Thus, the proposed method can serve as a novel one for efficiently estimating human affective states. PMID:26996254

  11. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
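The Legendre basis underlying such random-regression test-day models can be sketched briefly: days in milk (DIM) are rescaled to [-1, 1] and each curve is a linear combination of Legendre polynomials. The DIM range, the third order, and the coefficients below are illustrative choices, not estimates from the paper.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_basis(dim, dim_min=5.0, dim_max=305.0, order=3):
    t = -1.0 + 2.0 * (dim - dim_min) / (dim_max - dim_min)  # map to [-1, 1]
    # column j holds P_j(t); np.eye row j selects the j-th polynomial
    return np.column_stack([legendre.legval(t, np.eye(order + 1)[j])
                            for j in range(order + 1)])

dim = np.array([5.0, 50.0, 155.0, 260.0, 305.0])
Z = legendre_basis(dim)
# a lactation curve is then Z @ a, with a the fitted regression coefficients
a = np.array([20.0, -2.0, 1.0, 0.5])      # made-up coefficients
curve = Z @ a
print(Z.shape, round(curve[2], 2))
```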

  12. By any other name: when will preschoolers produce several labels for a referent?

    PubMed

    Deák, G O; Yen, L; Pettit, J

    2001-10-01

    Two experiments investigated why preschool children sometimes produce multiple words for a referent (i.e. polynomy), but other times seem to allow only one word. In Experiment 1, 40 three- and four-year-olds completed a modification of Deák & Maratsos' (1998) naming task. Although social demands to produce multiple words were reduced, children produced, on average, more than two words per object. Number of words produced was predicted by receptive vocabulary. Lexical insight (i.e. knowing that a word refers to function or appearance) and metalexical beliefs (i.e. that a hypothetical referent has one label, or more than one) were not preconditions of polynomy. Polynomy was independent of bias to map novel words to unfamiliar referents. In Experiment 2, 40 three- and four-year-olds learned new words for nameable objects. Children showed a correction effect, yet produced more than two words per object. Children do not have a generalized one-word-per-object bias, even during word learning. Other explanations (e.g. contextual restriction of lexical access) are discussed.

  13. On the design of recursive digital filters

    NASA Technical Reports Server (NTRS)

    Shenoi, K.; Narasimha, M. J.; Peterson, A. M.

    1976-01-01

    A change of variables is described which transforms the problem of designing a recursive digital filter to that of approximation by a ratio of polynomials on a finite interval. Some analytic techniques for the design of low-pass filters are presented, illustrating the use of the transformation. Also considered are methods for the design of phase equalizers.

  14. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. 
We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
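A hedged sketch of the first-order Runge-Kutta-Legendre recursion applied to the 1D heat equation follows. The stage coefficients follow the published RKL1 scheme (μ_j = (2j−1)/j, ν_j = (1−j)/j, scaled by 2/(s²+s)); the grid, diffusivity, and the half-of-maximum superstep are illustrative choices, not values from the paper.

```python
import numpy as np

def rkl1_superstep(u, s, tau, rhs):
    """Advance u by one superstep tau using s Runge-Kutta-Legendre stages."""
    w = 2.0 / (s * s + s)
    y_prev, y = u, u + w * tau * rhs(u)               # Y_0 and Y_1
    for j in range(2, s + 1):
        mu, nu = (2 * j - 1) / j, (1 - j) / j
        y, y_prev = mu * y + nu * y_prev + mu * w * tau * rhs(y), y
    return y                                          # Y_s = u(t + tau)

D, n = 1.0, 101
x = np.linspace(0.0, 1.0, n)
dx = x[1] - x[0]
u = np.sin(np.pi * x)            # single mode, decays like exp(-pi^2 D t)

def rhs(v):                      # second difference, Dirichlet boundaries
    out = np.zeros_like(v)
    out[1:-1] = D * (v[2:] - 2.0 * v[1:-1] + v[:-2]) / dx**2
    return out

s = 8
dt_expl = 0.5 * dx**2 / D                     # explicit stability limit
tau = 0.5 * (s * s + s) / 2.0 * dt_expl       # half the allowed superstep
t = 0.0
while t < 0.05:
    u = rkl1_superstep(u, s, tau, rhs)
    t += tau
print(round(u.max(), 4), round(np.exp(-np.pi**2 * D * t), 4))
```

The point of the method is visible in the time-step sizes: each superstep covers s(s+1)/2 explicit steps' worth of time while requiring only s evaluations of the right-hand side.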

  15. A stabilized Runge-Kutta-Legendre method for explicit super-time-stepping of parabolic and mixed equations

    NASA Astrophysics Data System (ADS)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-01

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge-Kutta-like time-steps to advance the parabolic terms by a time-step that is s² times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge-Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems - a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. 
We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.

  16. Minimizing Higgs potentials via numerical polynomial homotopy continuation

    NASA Astrophysics Data System (ADS)

    Maniatis, M.; Mehta, D.

    2012-08-01

    The study of models with extended Higgs sectors requires minimizing the corresponding Higgs potentials, which is in general very difficult. Here, we apply a recently developed method, called numerical polynomial homotopy continuation (NPHC), which is guaranteed to find all the stationary points of Higgs potentials with polynomial-like nonlinearity. The detection of all stationary points reveals the structure of the potential, with its maxima, metastable minima, and saddle points besides the global minimum. We apply the NPHC method to the most general Higgs potential having two complex Higgs-boson doublets and up to five real Higgs-boson singlets. Moreover, the method is applicable to even more involved potentials. Hence the NPHC method allows one to go far beyond the limits of the Gröbner-basis approach.
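A toy illustration of the completeness idea behind NPHC: for a polynomial potential the stationary-point conditions are themselves polynomial equations, so all solutions can be found, not just the one a local minimizer happens to land on. With a single real field, `numpy.roots` already delivers this; NPHC extends the guarantee to multivariate systems. The double-well quartic below is illustrative, not a physical Higgs potential.

```python
import numpy as np

mu2, lam = 4.0, 1.0                   # V(phi) = -mu2/2 phi^2 + lam/4 phi^4
dV = np.array([lam, 0.0, -mu2, 0.0])  # dV/dphi = lam*phi^3 - mu2*phi

phis = np.roots(dV)                   # all stationary points at once
stationary = np.sort(phis[np.abs(phis.imag) < 1e-9].real)

d2V = lambda p: 3.0 * lam * p ** 2 - mu2
kinds = ["min" if d2V(p) > 0 else "max" for p in stationary]
print(list(np.round(stationary, 6)), kinds)
```

The second-derivative check classifies each point, exposing the full structure of the potential (here: two degenerate minima at ±2 separated by a maximum at the origin) exactly as the abstract describes for the multivariate case.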

  17. Revision of the Phenomenological Characteristics of the Algol-Type Stars Using the Nav Algorithm

    NASA Astrophysics Data System (ADS)

    Tkachenko, M. G.; Andronov, I. L.; Chinarova, L. L.

    Phenomenological characteristics of a sample of Algol-type stars are revised using the recently developed NAV ("New Algol Variable") algorithm (2012Ap.....55..536A, 2012arXiv1212.6707A) and compared to those obtained using the common methods of trigonometric polynomial (TP) fit or local algebraic polynomial (A) fit of a fixed or (alternately) statistically optimal degree (1994OAP.....7...49A, 2003ASPC..292..391A). The computer program NAV is introduced, which determines the best fit with 7 "linear" and 5 "nonlinear" parameters and their error estimates. The number of parameters is much smaller than for the TP fit (typically 20-40, depending on the width of the eclipse, and 5-20 for the W UMa and β Lyrae-type stars). This yields a smoother approximation that takes into account the reflection and ellipsoidal effects (TP2) and the generally different shapes of the primary and secondary eclipses. An application of the method to two-color CCD photometry of the recently discovered eclipsing variable 2MASS J18024395+4003309 = VSX J180243.9+400331 (2015JASS...32..101A) allowed estimates of the physical parameters of the binary system based on the phenomenological parameters of the light curve. The phenomenological parameters of the light curves were determined for a sample of newly discovered EA- and EW-type stars (VSX J223429.3+552903, VSX J223421.4+553013, VSX J223416.2+553424, USNO-B1.0 1347-0483658, UCAC3-191-085589, VSX J180755.6+074711 = UCAC3 196-166827). Although we used the original observations published by the discoverers, the period uncertainties obtained with the NAV method are typically smaller than the original ones.

  18. Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos

    DTIC Science & Technology

    2002-07-25

    Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc., AMS... orthogonal polynomial functionals from the Askey scheme, as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection... (1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combinations. It is worth...

  19. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.
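The volumetric-dewarping idea can be sketched as a least-squares polynomial mapping from initially reconstructed (thin-lens) coordinates back to true object-space coordinates, fitted from known calibration points. A second-order mapping and a synthetic distortion are used here for brevity; the degree and the warp are assumptions, not the paper's calibration.

```python
import numpy as np

def poly_terms(p):
    """Second-order polynomial terms in (x, y, z)."""
    x, y, z = p.T
    return np.column_stack([np.ones_like(x), x, y, z,
                            x * y, x * z, y * z, x**2, y**2, z**2])

rng = np.random.default_rng(1)
true = rng.uniform(-1.0, 1.0, size=(300, 3))               # dot-card points
distorted = true + 0.05 * true**2 - 0.02 * true[:, [2]]    # synthetic warp

# least-squares mapping from distorted coordinates back to object space,
# one coefficient column per output axis
M, *_ = np.linalg.lstsq(poly_terms(distorted), true, rcond=None)
corrected = poly_terms(distorted) @ M

err_before = np.abs(distorted - true).max()
err_after = np.abs(corrected - true).max()
print(round(err_before, 4), round(err_after, 5))
```

Because the mapping is fitted from observed point pairs, no knowledge of the specific lens parameters is needed, which is the property the abstract emphasizes.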

  20. Volumetric calibration of a plenoptic camera

    DOE PAGES

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert; ...

    2018-02-01

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  1. Stability analysis of spectral methods for hyperbolic initial-boundary value systems

    NASA Technical Reports Server (NTRS)

    Gottlieb, D.; Lustman, L.; Tadmor, E.

    1986-01-01

    A constant-coefficient hyperbolic system in one space variable, with zero initial data, is discussed. Dissipative boundary conditions are imposed at the two points x = ±1. This problem is discretized by a spectral approximation in space. Sufficient conditions under which the spectral numerical solution is stable are demonstrated; moreover, these conditions have to be checked only for scalar equations. The stability theorems take the form of explicit bounds for the norm of the solution in terms of the boundary data. The dependence of these bounds on N, the number of points in the domain (or equivalently the degree of the polynomials involved), is investigated for a class of standard spectral methods, including Chebyshev and Legendre collocations.

  2. Large-scale semidefinite programming for many-electron quantum mechanics.

    PubMed

    Mazziotti, David A

    2011-02-25

    The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop an SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004)]. We illustrate with (i) the dissociation of N₂ and (ii) the metal-to-insulator transition of H₅₀. For H₅₀ the SDP problem has 9.4×10⁶ variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics. © 2011 American Physical Society

  3. Large-Scale Semidefinite Programming for Many-Electron Quantum Mechanics

    NASA Astrophysics Data System (ADS)

    Mazziotti, David A.

    2011-02-01

    The energy of a many-electron quantum system can be approximated by a constrained optimization of the two-electron reduced density matrix (2-RDM) that is solvable in polynomial time by semidefinite programming (SDP). Here we develop an SDP method for computing strongly correlated 2-RDMs that is 10-20 times faster than previous methods [D. A. Mazziotti, Phys. Rev. Lett. 93, 213001 (2004)]. We illustrate with (i) the dissociation of N₂ and (ii) the metal-to-insulator transition of H₅₀. For H₅₀ the SDP problem has 9.4×10⁶ variables. This advance also expands the feasibility of large-scale applications in quantum information, control, statistics, and economics.

  4. Analysis of aircraft tires via semianalytic finite elements

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Kyun O.; Tanner, John A.

    1990-01-01

    A computational procedure is presented for the geometrically nonlinear analysis of aircraft tires. The tire was modeled by using a two-dimensional laminated anisotropic shell theory with the effects of variation in material and geometric parameters included. The four key elements of the procedure are: (1) semianalytic finite elements in which the shell variables are represented by Fourier series in the circumferential direction and piecewise polynomials in the meridional direction; (2) a mixed formulation with the fundamental unknowns consisting of strain parameters, stress-resultant parameters, and generalized displacements; (3) multilevel operator splitting to effect successive simplifications, and to uncouple the equations associated with different Fourier harmonics; and (4) multilevel iterative procedures and reduction techniques to generate the response of the shell.

  5. Nonclassicality of Photon-Added Displaced Thermal State via Quantum Phase-Space Distributions

    NASA Astrophysics Data System (ADS)

    Zhang, Ran; Meng, Xiang-Guo; Du, Chuan-Xun; Wang, Ji-Suo

    2018-02-01

    We introduce a new kind of nonclassical mixed state generated by adding arbitrary photons to a displaced thermal state, i.e., the photon-added displaced thermal state (PADTS), and obtain the normalization factor, which is simply related to two-variable Hermite polynomials. We also discuss the nonclassicality of the PADTS by considering quantum phase-space distributions. The results indicate that the value of the photon count statistics is maximum when the number of detected photons is equal to the number of added photons, and that the photon-added operation has a similar modulation effect with increasing displacement. Moreover, the negative volume of the Wigner function for the PADTS takes a maximal value for a specific photon-added number.

  6. Traveling wave solutions to a reaction-diffusion equation

    NASA Astrophysics Data System (ADS)

    Feng, Zhaosheng; Zheng, Shenzhou; Gao, David Y.

    2009-07-01

In this paper, we restrict our attention to traveling wave solutions of a reaction-diffusion equation. First, we apply the Divisor Theorem for two variables in the complex domain, which is based on the ring theory of commutative algebra, to find a quasi-polynomial first integral of an explicit form of an equivalent autonomous system. Then, through this first integral, we reduce the reaction-diffusion equation to a first-order integrable ordinary differential equation, and a class of traveling wave solutions is obtained accordingly. Comparisons with the existing results in the literature are also provided, which indicate that some analytical results in the literature contain errors. We clarify the errors and instead give a refined result in a simple and straightforward manner.

  7. Novel algebraic aspects of Liouvillian integrability for two-dimensional polynomial dynamical systems

    NASA Astrophysics Data System (ADS)

    Demina, Maria V.

    2018-05-01

    The general structure of irreducible invariant algebraic curves for a polynomial dynamical system in C2 is found. Necessary conditions for existence of exponential factors related to an invariant algebraic curve are derived. As a consequence, all the cases when the classical force-free Duffing and Duffing-van der Pol oscillators possess Liouvillian first integrals are obtained. New exact solutions for the force-free Duffing-van der Pol system are constructed.

  8. Inflection point in running kinetic term inflation

    NASA Astrophysics Data System (ADS)

    Gao, Tie-Jun; Xiu, Wu-Tao; Yang, Xiu-Yi

    2017-04-01

    In this work, we calculate the general form of the scalar potential with polynomial superpotential in the framework of running kinetic term inflation, then focus on a polynomial superpotential with two terms and obtain the inflection point inflationary model. We study the inflationary dynamics and show that the predicted value of the scalar spectral index and tensor-to-scalar ratio can lie within the 1σ confidence region allowed by the result of Planck 2015.

  9. The Cauchy Two-Matrix Model, C-Toda Lattice and CKP Hierarchy

    NASA Astrophysics Data System (ADS)

    Li, Chunxia; Li, Shi-Hao

    2018-06-01

This paper studies the Cauchy two-matrix model and its corresponding integrable hierarchy with the help of orthogonal polynomial theory and Toda-type equations. Starting from the symmetric reduction in Cauchy biorthogonal polynomials, we derive the Toda equation of CKP type (or the C-Toda lattice) as well as its Lax pair by introducing time flows. Then, matrix integral solutions to the C-Toda lattice are extended to give solutions to the CKP hierarchy, which reveals that the time-dependent partition function of the Cauchy two-matrix model is nothing but the τ-function of the CKP hierarchy. Finally, the connection between the Cauchy two-matrix model and the Bures ensemble is established from the point of view of integrable systems.

  10. Developing the Polynomial Expressions for Fields in the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Sharma, Stephen

    2017-10-01

The two most important problems to be solved in the development of working nuclear fusion power plants are: sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion in addition to geodesic formulations generate the particle model which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellman formalism and Heaviside step functional forms are introduced to the fusion equations to produce simple expressions for the kinetic energy and loop currents. In conclusion, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.

  11. A two-step, fourth-order method with energy preserving properties

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato

    2012-09-01

We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a prerogative of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization obtained by the aid of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out to be always the case in the event that the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed and a number of test problems are finally presented in order to compare the behavior of the new methods to the theoretical results.
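The exactness mechanism is easy to check in isolation: along a straight segment, the line integral of ∇H of a polynomial Hamiltonian has a polynomial integrand in the path parameter, so a quadrature rule of sufficient degree of precision reproduces the energy difference exactly. A sketch with an illustrative quartic Hamiltonian (not one of the paper's test problems):

```python
import numpy as np

# Illustrative polynomial Hamiltonian H(q, p) = q^4/4 + p^2/2 (degree 4).
def H(y):
    q, p = y
    return q**4 / 4 + p**2 / 2

def gradH(y):
    q, p = y
    return np.array([q**3, p])

y0 = np.array([0.3, -0.2])
y1 = np.array([1.1, 0.7])
delta = y1 - y0

# Integrand of the line integral along the segment y(t) = y0 + t*delta:
# f(t) = gradH(y(t)) . delta, a cubic polynomial in t.
f = lambda t: gradH(y0 + t * delta) @ delta

# Simpson's rule has degree of precision 3, so it is exact for this integrand:
line_integral = (f(0.0) + 4.0 * f(0.5) + f(1.0)) / 6.0
energy_diff = H(y1) - H(y0)
print(line_integral, energy_diff)  # agree to machine precision
```

With a quadrature of high enough degree, the discrete line integral matches the true energy difference exactly, which is the conservation mechanism described above.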

  12. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
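The comparison can be sketched numerically; the interval, degree, and node placement below are illustrative choices, not taken from the article:

```python
import numpy as np

# Approximate exp on [0, 1] with degree-4 polynomials: Taylor at 0 vs interpolation.
xs = np.linspace(0.0, 1.0, 1000)

# Taylor polynomial of exp about 0: 1 + x + x^2/2 + x^3/6 + x^4/24.
taylor = np.array([1 / 24, 1 / 6, 1 / 2, 1.0, 1.0])  # highest degree first
err_taylor = np.max(np.abs(np.polyval(taylor, xs) - np.exp(xs)))

# Interpolating polynomial through 5 equally spaced nodes on [0, 1]
# (5 points and degree 4, so polyfit interpolates exactly).
nodes = np.linspace(0.0, 1.0, 5)
interp = np.polyfit(nodes, np.exp(nodes), deg=4)
err_interp = np.max(np.abs(np.polyval(interp, xs) - np.exp(xs)))

print(err_taylor, err_interp)
```

The Taylor polynomial concentrates its accuracy near the expansion point, so its worst-case error over the interval is substantially larger than that of the interpolant, which spreads its error across the nodes.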

  13. Due-Window Assignment Scheduling with Variable Job Processing Times

    PubMed Central

    Wu, Yu-Bin

    2015-01-01

We consider a common due-window assignment scheduling problem with jobs having variable processing times on a single machine, where the processing time of a job is a function of its position in the sequence (i.e., a learning effect) or of its starting time (i.e., a deteriorating effect). The problem is to determine the optimal due-window and the processing sequence simultaneously so as to minimize a cost function that includes earliness, tardiness, the window location, the window size, and the weighted number of tardy jobs. We prove that the problem can be solved in polynomial time. PMID:25918745

  14. Spike-adding in parabolic bursters: The role of folded-saddle canards

    NASA Astrophysics Data System (ADS)

    Desroches, Mathieu; Krupa, Martin; Rodrigues, Serafim

    2016-09-01

The present work develops a new approach to studying parabolic bursting, and also proposes a novel four-dimensional canonical and polynomial-based parabolic burster. In addition to this new polynomial system, we also consider the conductance-based model of the Aplysia R15 neuron known as the Plant model, and a reduction of this prototypical biophysical parabolic burster to three variables, including one phase variable, namely the Baer-Rinzel-Carillo (BRC) phase model. Revisiting these models from the perspective of slow-fast dynamics reveals that the number of spikes per burst may vary upon parameter changes; however, the spike-adding process occurs in an explosive fashion that involves special solutions called canards. This spike-adding canard explosion phenomenon is analysed by using tools from geometric singular perturbation theory in tandem with numerical bifurcation techniques. We find that the bifurcation structure persists across all considered systems, that is, spikes within the burst are incremented via the crossing of an excitability threshold given by a particular type of canard orbit, namely the true canard of a folded-saddle singularity. However, there can be a difference in the spike-adding transitions in parameter space from one case to another, according to whether the process is continuous or discontinuous, which depends upon the geometry of the folded-saddle canard. Using these findings, we construct a new polynomial approximation of the Plant model, which retains all the key elements for parabolic bursting, including the spike-adding transitions mediated by folded-saddle canards. Finally, we briefly investigate the presence of spike-adding via canards in planar phase models of parabolic bursting, namely the theta model by Ermentrout and Kopell.
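The planar theta model mentioned at the end is concrete enough for a quick numerical sketch of parabolic bursting under a slowly oscillating drive; the parameters below are illustrative, not taken from the paper:

```python
import numpy as np

# Theta model with a slowly oscillating drive I(t):
#   theta' = 1 - cos(theta) + (1 + cos(theta)) * I(t).
# A spike is a crossing of theta through pi (mod 2*pi); letting theta grow
# unboundedly, spikes occur at thresholds pi + 2*pi*k.
dt, T = 1e-3, 400.0
eps, amp = 0.05, 0.1            # slow modulation frequency and amplitude (illustrative)
theta, next_spike = 0.0, np.pi
spike_times = []

for step in range(int(T / dt)):
    t = step * dt
    I = amp * np.sin(eps * t)   # slow drive: firing when I > 0, quiescence when I < 0
    theta += dt * (1.0 - np.cos(theta) + (1.0 + np.cos(theta)) * I)
    if theta >= next_spike:
        spike_times.append(t)
        next_spike += 2.0 * np.pi

print(len(spike_times))  # spikes cluster into bursts during the I > 0 half-cycles
```

The slow sinusoid plays the role of the slow subsystem periodically pushing the fast spiking variable across its excitability threshold, which is the basic parabolic-bursting scenario the paper revisits.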

  15. Assessing the effects of pharmacological agents on respiratory dynamics using time-series modeling.

    PubMed

    Wong, Kin Foon Kevin; Gong, Jen J; Cotten, Joseph F; Solt, Ken; Brown, Emery N

    2013-04-01

Developing quantitative descriptions of how stimulant and depressant drugs affect the respiratory system is an important focus in medical research. Respiratory variables (respiratory rate, tidal volume, and end-tidal carbon dioxide) have prominent temporal dynamics that make it inappropriate to use standard hypothesis-testing methods that assume independent observations to assess the effects of these pharmacological agents. We present a polynomial signal plus autoregressive noise model for analysis of continuously recorded respiratory variables. We use a cyclic descent algorithm to maximize the conditional log likelihood of the parameters and the corrected Akaike's information criterion to choose simultaneously the orders of the polynomial and the autoregressive models. In an analysis of respiratory rates recorded from anesthetized rats before and after administration of the respiratory stimulant methylphenidate, we use the model to construct within-animal z-tests of the drug effect that take account of the time-varying nature of the mean respiratory rate and the serial dependence in rate measurements. We correct for the effect of model lack-of-fit on our inferences by also computing bootstrap confidence intervals for the average difference in respiratory rate pre- and post-methylphenidate treatment. Our time-series modeling quantifies within each animal the substantial increase in mean respiratory rate and respiratory dynamics following methylphenidate administration. This paradigm can be readily adapted to analyze the dynamics of other respiratory variables before and after pharmacologic treatments.
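A heavily simplified sketch of the model class, polynomial signal plus autoregressive noise, fitted here in two naive stages rather than by the paper's cyclic-descent maximum likelihood with AICc order selection:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a respiratory-rate-like series: quadratic trend plus AR(1) noise.
n = 500
t = np.arange(n)
trend = 90.0 + 0.04 * t - 6e-5 * t**2
phi_true = 0.7
noise = np.zeros(n)
for i in range(1, n):
    noise[i] = phi_true * noise[i - 1] + rng.normal(scale=1.0)
y = trend + noise

# Stage 1: fit the polynomial signal (order fixed at 2 here; the paper selects
# the polynomial and AR orders jointly via corrected AIC instead).
coeffs = np.polyfit(t, y, deg=2)
resid = y - np.polyval(coeffs, t)

# Stage 2: estimate the AR(1) coefficient from the residuals via the lag-1
# autocorrelation (the paper maximizes the conditional likelihood instead).
phi_hat = np.corrcoef(resid[:-1], resid[1:])[0, 1]
print(phi_hat)  # close to the true value 0.7
```

This separates the two model components the abstract describes: a smooth polynomial mean and serially dependent noise that standard independent-observation tests would ignore.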

  16. Examining Impulse-Variability in Kicking.

    PubMed

    Chappell, Andrew; Molina, Sergio L; McKibben, Jonathon; Stodden, David F

    2016-07-01

    This study examined variability in kicking speed and spatial accuracy to test the impulse-variability theory prediction of an inverted-U function and the speed-accuracy trade-off. Twenty-eight 18- to 25-year-old adults kicked a playground ball at various percentages (50-100%) of their maximum speed at a wall target. Speed variability and spatial error were analyzed using repeated-measures ANOVA with built-in polynomial contrasts. Results indicated a significant inverse linear trajectory for speed variability (p < .001, η2= .345) where 50% and 60% maximum speed had significantly higher variability than the 100% condition. A significant quadratic fit was found for spatial error scores of mean radial error (p < .0001, η2 = .474) and subject-centroid radial error (p < .0001, η2 = .453). Findings suggest variability and accuracy of multijoint, ballistic skill performance may not follow the general principles of impulse-variability theory or the speed-accuracy trade-off.

  17. An Arrhenius-type viscosity function to model sintering using the Skorohod Olevsky viscous sintering model within a finite element code.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ewsuk, Kevin Gregory; Arguello, Jose Guadalupe, Jr.; Reiterer, Markus W.

    2006-02-01

The ease and ability to predict sintering shrinkage and densification with the Skorohod-Olevsky viscous sintering (SOVS) model within a finite-element (FE) code have been improved with the use of an Arrhenius-type viscosity function. The need for a better viscosity function was identified by evaluating SOVS model predictions made using a previously published polynomial viscosity function. Predictions made using the original, polynomial viscosity function do not accurately reflect experimentally observed sintering behavior. To more easily and better predict sintering behavior using FE simulations, a thermally activated viscosity function based on creep theory was used with the SOVS model. In comparison with the polynomial viscosity function, SOVS model predictions made using the Arrhenius-type viscosity function are more representative of experimentally observed viscosity and sintering behavior. Additionally, the effects of changes in heating rate on densification can easily be predicted with the Arrhenius-type viscosity function. Another attribute of the Arrhenius-type viscosity function is that it provides the potential to link different sintering models. For example, the apparent activation energy, Q, for densification used in the construction of the master sintering curve for a low-temperature cofire ceramic dielectric has been used as the apparent activation energy for material flow in the Arrhenius-type viscosity function to predict heating rate-dependent sintering behavior using the SOVS model.
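The viscosity function itself is compact; a sketch with illustrative parameter values (not the fitted values from the report):

```python
import math

# Arrhenius-type viscosity: eta(T) = eta0 * exp(Q / (R * T)).
# Parameter values below are illustrative placeholders only.
R = 8.314        # gas constant, J/(mol K)
eta0 = 1.0e2     # pre-exponential factor, Pa*s
Q = 4.0e5        # apparent activation energy, J/mol

def viscosity(T_kelvin):
    """Thermally activated viscosity; drops steeply as temperature rises."""
    return eta0 * math.exp(Q / (R * T_kelvin))

for T in (900.0, 1000.0, 1100.0):
    print(T, viscosity(T))
```

Because η falls steeply with temperature, densification accelerates on heating, and heating-rate effects follow directly by evaluating η along a temperature schedule T(t), which is how the same Q can tie the SOVS model to a master sintering curve.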

  18. Entropy of orthogonal polynomials with Freud weights and information entropies of the harmonic oscillator potential

    NASA Astrophysics Data System (ADS)

    Van Assche, W.; Yáñez, R. J.; Dehesa, J. S.

    1995-08-01

The information entropy of the harmonic oscillator potential V(x) = (1/2)λx² in both position and momentum spaces can be expressed in terms of the so-called ``entropy of Hermite polynomials,'' i.e., the quantity S_n(H) := -∫_{-∞}^{+∞} H_n²(x) log H_n²(x) e^{-x²} dx. These polynomials are instances of the polynomials orthogonal with respect to the Freud weights w(x) = exp(-|x|^m), m > 0. Here, a very precise and general result on the entropy of Freud polynomials recently established by Aptekarev et al. [J. Math. Phys. 35, 4423-4428 (1994)], specialized to the Hermite kernel (case m = 2), leads to an important refined asymptotic expression for the information entropies of very excited states (i.e., for large n) in both position and momentum spaces, to be denoted by S_ρ and S_γ, respectively. Briefly, it is shown that, for large values of n, S_ρ + (1/2) log λ ≈ log(π√(2n)/e) + o(1) and S_γ - (1/2) log λ ≈ log(π√(2n)/e) + o(1), so that S_ρ + S_γ ≈ log(2π²n/e²) + o(1), in agreement with the generalized indetermination relation of Bialynicki-Birula and Mycielski [Commun. Math. Phys. 44, 129-132 (1975)]. Finally, the rate of convergence of these two information entropies is numerically analyzed. In addition, using a Rakhmanov result, we describe a totally new proof of the leading term of the entropy of Freud polynomials which, naturally, is just a weak version of the aforementioned general result.

  19. Improved Remapping Processor For Digital Imagery

    NASA Technical Reports Server (NTRS)

    Fisher, Timothy E.

    1991-01-01

Proposed digital image processor is an improved version of the Programmable Remapper, which performs geometric and radiometric transformations on digital images. Features include overlapping and variably sized preimages. Overcomes some limitations of image-warping circuit boards that implement only those geometric transformations expressible in terms of polynomials of limited order. Also overcomes limitations of the existing Programmable Remapper and performs transformations at video rate.

  20. Notes on the boundaries of quadrature domains

    NASA Astrophysics Data System (ADS)

    Verma, Kaushal

    2018-03-01

    We highlight an intrinsic connection between classical quadrature domains and the well-studied theme of removable singularities of analytic sets in several complex variables. Exploiting this connection provides a new framework to recover several basic properties of such domains, namely the algebraicity of their boundary, a better understanding of the associated defining polynomial and the possible boundary singularities that can occur.

  1. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows

    PubMed Central

    Wang, Di; Kleinberg, Robert D.

    2009-01-01

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C2, C3, C4,…. It is known that C2 can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing Ck (k > 2) require solving a linear program. In this paper we prove that C3 can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}n, this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network. PMID:20161596
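At toy scale the quantities involved, the minimum of the quadratic polynomial and the persistencies, can be computed by brute force rather than by the paper's flow constructions; the instance below is made up for illustration:

```python
from itertools import product

# A tiny QUBO instance (made up):
# f(x) = 2*x0 - 3*x1 + 2*x0*x1 + x2 - x1*x2, over x in {0,1}^3.
def f(x):
    x0, x1, x2 = x
    return 2 * x0 - 3 * x1 + 2 * x0 * x1 + x2 - x1 * x2

assignments = list(product((0, 1), repeat=3))
best_val = min(f(x) for x in assignments)
minimizers = [x for x in assignments if f(x) == best_val]

# Persistencies: pairs of variables that agree (or disagree) in every minimizer.
same = [(i, j) for i in range(3) for j in range(i + 1, 3)
        if all(x[i] == x[j] for x in minimizers)]
diff = [(i, j) for i in range(3) for j in range(i + 1, 3)
        if all(x[i] != x[j] for x in minimizers)]
print(best_val, minimizers, same, diff)
```

Brute force is exponential in n, of course; the point of the paper is that lower bounds like C3, and the persistency information, can be extracted in polynomial time from (multicommodity) flow problems instead.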

  2. Analyzing Quadratic Unconstrained Binary Optimization Problems Via Multicommodity Flows.

    PubMed

    Wang, Di; Kleinberg, Robert D

    2009-11-28

    Quadratic Unconstrained Binary Optimization (QUBO) problems concern the minimization of quadratic polynomials in n {0, 1}-valued variables. These problems are NP-complete, but prior work has identified a sequence of polynomial-time computable lower bounds on the minimum value, denoted by C(2), C(3), C(4),…. It is known that C(2) can be computed by solving a maximum-flow problem, whereas the only previously known algorithms for computing C(k) (k > 2) require solving a linear program. In this paper we prove that C(3) can be computed by solving a maximum multicommodity flow problem in a graph constructed from the quadratic function. In addition to providing a lower bound on the minimum value of the quadratic function on {0, 1}(n), this multicommodity flow problem also provides some information about the coordinates of the point where this minimum is achieved. By looking at the edges that are never saturated in any maximum multicommodity flow, we can identify relational persistencies: pairs of variables that must have the same or different values in any minimizing assignment. We furthermore show that all of these persistencies can be detected by solving single-commodity flow problems in the same network.

  3. Wind Tunnel Database Development using Modern Experiment Design and Multivariate Orthogonal Functions

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.; DeLoach, Richard

    2003-01-01

A wind tunnel experiment for characterizing the aerodynamic and propulsion forces and moments acting on a research model airplane is described. The model airplane, called the Free-flying Airplane for Sub-scale Experimental Research (FASER), is a modified off-the-shelf radio-controlled model airplane, with 7 ft wingspan, a tractor propeller driven by an electric motor, and aerobatic capability. FASER was tested in the NASA Langley 12-foot Low-Speed Wind Tunnel, using a combination of traditional sweeps and modern experiment design. Power level was included as an independent variable in the wind tunnel test, to allow characterization of power effects on aerodynamic forces and moments. A modeling technique that employs multivariate orthogonal functions was used to develop accurate analytic models for the aerodynamic and propulsion force and moment coefficient dependencies from the wind tunnel data. Efficient methods for generating orthogonal modeling functions, expanding the orthogonal modeling functions in terms of ordinary polynomial functions, and analytical orthogonal blocking were developed and discussed. The resulting models comprise a set of smooth, differentiable functions for the non-dimensional aerodynamic force and moment coefficients in terms of ordinary polynomials in the independent variables, suitable for nonlinear aircraft simulation.
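The orthogonal-modeling-function idea can be sketched with a QR factorization of ordinary polynomial regressors; the response model below is made up for illustration and is not the FASER database:

```python
import numpy as np

# Noiseless "wind tunnel" responses from a known polynomial model (made up):
# C(x) = 1 + 2*x + 3*x**2, sampled at the test points x.
x = np.linspace(-1.0, 1.0, 25)
y = 1.0 + 2.0 * x + 3.0 * x**2

# Ordinary polynomial regressors (columns: 1, x, x^2) ...
X = np.column_stack([np.ones_like(x), x, x**2])

# ... orthogonalized by a QR factorization: the columns of Q are orthonormal
# modeling functions, so each coefficient is just an inner product with y
# and can be retained or dropped independently of the others.
Q, R = np.linalg.qr(X)
coef_orth = Q.T @ y

# Expanding the orthogonal-function model back into ordinary polynomial
# coefficients, as described in the abstract:
beta = np.linalg.solve(R, coef_orth)
print(beta)  # recovers [1, 2, 3]
```

Orthogonality is what makes term selection clean: adding or removing an orthogonal modeling function does not change the other fitted coefficients, yet the final model can still be reported as an ordinary polynomial for use in simulation.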

  4. Application of ANNs approach for wave-like and heat-like equations

    NASA Astrophysics Data System (ADS)

    Jafarian, Ahmad; Baleanu, Dumitru

    2017-12-01

Artificial neural networks are data processing systems which originate from studies of human brain tissue. The remarkable abilities of these networks help us to derive desired results from complicated raw data. In this study, we apply an efficient iterative method to the numerical solution of two famous partial differential equations, namely the wave-like and heat-like problems. It should be noted that many physical phenomena, such as coupling currents in a flat multi-strand two-layer superconducting cable, non-homogeneous elastic waves in soils, and earthquake stresses, are described by initial-boundary value wave and heat partial differential equations with variable coefficients. For the numerical solution of these equations, a combination of the power series method and the artificial neural networks approach is used to seek an appropriate bivariate polynomial solution of the mentioned initial-boundary value problem. Finally, several computer simulations confirmed the theoretical results and demonstrated the applicability of the method.

  5. Multigrid methods for isogeometric discretization

    PubMed Central

    Gahalaut, K.P.S.; Kraus, J.K.; Tomar, S.K.

    2013-01-01

We present (geometric) multigrid methods for isogeometric discretization of scalar second order elliptic problems. The smoothing property of the relaxation method, and the approximation property of the intergrid transfer operators are analyzed. These properties, when used in the framework of classical multigrid theory, imply uniform convergence of two-grid and multigrid methods. Supporting numerical results are provided for the smoothing property, the approximation property, the convergence factor and iteration counts for V-, W- and F-cycles, and the linear dependence of V-cycle convergence on the smoothing steps. For two dimensions, numerical results include problems with variable coefficients, simple multi-patch geometry, a quarter annulus, and the dependence of convergence behavior on refinement levels ℓ, whereas for three dimensions, only the constant coefficient problem in a unit cube is considered. The numerical results are complete up to polynomial order p = 4, and for C⁰ and C^(p-1) smoothness. PMID:24511168
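The smoothing plus coarse-grid-correction mechanism behind the two-grid analysis can be sketched in the simplest possible setting, a finite-difference 1D Poisson problem; everything below is a generic illustration, not the paper's isogeometric discretization:

```python
import numpy as np

# Two-grid cycle for -u'' = f on (0,1) with zero boundary values.
n = 31                      # fine-grid interior points (odd, so coarsening nests)
h = 1.0 / (n + 1)
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

nc = (n - 1) // 2           # coarse-grid interior points
hc = 2.0 * h
Ac = (np.diag(2.0 * np.ones(nc)) - np.diag(np.ones(nc - 1), 1)
      - np.diag(np.ones(nc - 1), -1)) / hc**2

def smooth(u, f, sweeps):
    """Weighted Jacobi relaxation, omega = 2/3."""
    for _ in range(sweeps):
        u = u + (2.0 / 3.0) * (f - A @ u) * (h**2 / 2.0)
    return u

def restrict(r):
    """Full weighting: coarse point i sits at fine index 2i+1."""
    return 0.25 * (r[0:-2:2] + 2.0 * r[1:-1:2] + r[2::2])

def prolong(ec):
    """Linear interpolation back to the fine grid."""
    e = np.zeros(n)
    e[1::2] = ec
    e[2:-1:2] = 0.5 * (ec[:-1] + ec[1:])
    e[0] = 0.5 * ec[0]
    e[-1] = 0.5 * ec[-1]
    return e

def twogrid(u, f):
    u = smooth(u, f, 2)                              # pre-smoothing
    ec = np.linalg.solve(Ac, restrict(f - A @ u))    # exact coarse correction
    return smooth(u + prolong(ec), f, 2)             # post-smoothing

rng = np.random.default_rng(0)
u = rng.standard_normal(n)      # with f = 0 the iterate IS the error
f = np.zeros(n)
before = np.linalg.norm(u)
factor = np.linalg.norm(twogrid(u, f)) / before
print(factor)   # well below 1, i.e., uniform two-grid convergence
```

The smoother damps oscillatory error components while the coarse solve removes the smooth ones, which is exactly the smoothing/approximation split the paper analyzes for isogeometric discretizations.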

  6. A Runge-Kutta discontinuous finite element method for high speed flows

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.; Oden, J. T.

    1991-01-01

A Runge-Kutta discontinuous finite element method is developed for hyperbolic systems of conservation laws in two space variables. The discontinuous Galerkin spatial approximation to the conservation laws results in a system of ordinary differential equations which is marched in time using Runge-Kutta methods. Numerical results for the two-dimensional Burgers equation show that the method is (p+1)-order accurate in time and space, where p is the degree of the polynomial approximation of the solution within an element, and that it is capable of capturing shocks over a single element without oscillations. Results for this problem also show that the accuracy of the solution in smooth regions is unaffected by the local projection and that the accuracy in smooth regions increases as p increases. Numerical results for the Euler equations show that the method captures shocks without oscillations and with higher resolution than a first-order scheme.

  7. Routh's algorithm - A centennial survey

    NASA Technical Reports Server (NTRS)

    Barnett, S.; Siljak, D. D.

    1977-01-01

    One hundred years have passed since the publication of Routh's fundamental work on determining the stability of constant linear systems. The paper presents an outline of the algorithm and considers such aspects of it as the distribution of zeros and applications of it that relate to the greatest common divisor, the abscissa of stability, continued fractions, canonical forms, the nonnegativity of polynomials and polynomial matrices, the absolute stability, optimality and passivity of dynamic systems, and the stability of two-dimensional circuits.
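The core of Routh's algorithm is easy to sketch. The basic version below assumes the regular case (no zero appears in the first column and no row vanishes); the singular cases surveyed in the paper need extra rules:

```python
import numpy as np

def routh_first_column(coeffs):
    """First column of the Routh array for a polynomial given by its
    coefficients in descending powers (regular case only)."""
    n = len(coeffs) - 1
    width = (n // 2) + 1
    rows = [np.zeros(width), np.zeros(width)]
    rows[0][: len(coeffs[0::2])] = coeffs[0::2]
    rows[1][: len(coeffs[1::2])] = coeffs[1::2]
    for i in range(2, n + 1):
        prev, prev2 = rows[i - 1], rows[i - 2]
        row = np.zeros(width)
        for j in range(width - 1):
            row[j] = (prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1]) / prev[0]
        rows.append(row)
    return [r[0] for r in rows]

def rhp_roots(coeffs):
    """Number of right-half-plane roots = sign changes in the first column."""
    col = routh_first_column(coeffs)
    return sum(1 for a, b in zip(col, col[1:]) if a * b < 0)

print(rhp_roots([1, 2, 3, 1]))  # 0 -> stable
print(rhp_roots([1, 1, 1, 3]))  # 2 -> two unstable roots
```

For the cubic a₃s³ + a₂s² + a₁s + a₀ this reproduces the classical criterion a₂a₁ > a₃a₀: it holds for s³ + 2s² + 3s + 1 (stable) and fails for s³ + s² + s + 3 (two roots in the right half-plane).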

  8. Coping and back problems: analysis of multiple data sources on an entire cross-sectional cohort of Swedish military recruits.

    PubMed

    Leboeuf-Yde, Charlotte; Larsen, Kristian; Ahlstrand, Ingvar; Volinn, Ernest

    2006-05-03

    As the literature now stands, a bewildering number and variety of biological, psychological and social factors are, apparently, implicated in back problems. However, if and how these have a direct influence on back problems is not clear. Obesity, for example, has in many studies been shown to be associated with back problems but there is no evidence for a causal link. This could be explained by a dearth of suitably designed studies but also because obesity may be but a proxy for some other, truly explanatory variable. Coping has been linked with, particularly, persistent back problems as well as with health in general. The question is, whether coping could be the explanatory link between, for example, these two variables. A cross-sectional study was undertaken using data from the Swedish Army, consisting of the entire cohort of males (N = 48,502) summoned in 1998 to serve in the military. The purpose of the study was to investigate the relation between five independent variables and two dependent variables ("outcome variables"). The independent variables were two anthropomorphic variables (height and body mass index), two psychological variables (intellectual capacity and coping in relation to stress), and one social variable (type of education). The two outcome variables were back problems and ill health. In particular, we wanted to determine whether controlling for coping would affect the associations between the other four independent variables and the two outcome variables. Data for the analysis come from a battery of standardized examinations, including medical examinations, a test of intellectual capacity, and a test of coping in relation to stress. Each of these examinations was conducted independently of the others. Unadjusted and adjusted odds ratios were calculated for the outcome variables of back problems and ill health. 
The associations between height, body mass index, intellectual capacity, type of education and the two outcome variables (back problems and ill health) were weak to moderate. Additionally, there were strong associations between coping and the two outcome variables, and when controlling for coping the previously noted associations diminished or disappeared, whereas none of the other variables had a large effect on the association between coping and the two outcome variables. Coping emerged as strongly associated with both back problems and ill health, and coping had a leveling effect on the associations between the other independent variables and the two outcome variables. This study is noteworthy particularly because the association with coping is so robust. It is a retrospective, cross-sectional study, however, and, as such, it raises questions of causality: which, if any, came first, inability to cope or back pain? The results of this study call attention to the need for a prospective study, in which coping is clearly defined. Such a study has been undertaken and will be presented separately. Index terms: back pain, coping, education, height, BMI, intellectual capacity, bio-psycho-social model, epidemiology, cohort, cross-sectional study.

  9. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.
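For context, the smoothing-based two-stage pseudo-least-squares baseline that the constrained estimator improves on can be sketched as follows; a global polynomial fit stands in for the local polynomial smoother, and the toy ODE is made up:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ODE x'(t) = theta * x(t) with theta = 0.5, observed with noise.
theta_true = 0.5
t = np.linspace(0.0, 2.0, 100)
x_obs = np.exp(theta_true * t) + rng.normal(scale=0.01, size=t.size)

# Stage 1: smooth the data with a polynomial fit and differentiate the fit.
p = np.polyfit(t, x_obs, deg=5)
x_fit = np.polyval(p, t)
dx_fit = np.polyval(np.polyder(p), t)

# Stage 2: least squares for theta in x' = theta * x using the smoothed values.
theta_hat = np.sum(dx_fit * x_fit) / np.sum(x_fit**2)
print(theta_hat)  # close to 0.5
```

The proposed method differs by constraining the local polynomial smoother with the differential equation itself, which is what improves on this unconstrained two-stage estimate.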

  10. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  11. An Exactly Solvable Spin Chain Related to Hahn Polynomials

    NASA Astrophysics Data System (ADS)

    Stoilova, Neli I.; van der Jeugt, Joris

    2011-03-01

    We study a linear spin chain which was originally introduced by Shi et al. [Phys. Rev. A 71 (2005), 032309, 5 pages], for which the coupling strength contains a parameter α and depends on the parity of the chain site. Extending the model by a second parameter β, it is shown that the single fermion eigenstates of the Hamiltonian can be computed in explicit form. The components of these eigenvectors turn out to be Hahn polynomials with parameters (α,β) and (α+1,β-1). The construction of the eigenvectors relies on two new difference equations for Hahn polynomials. The explicit knowledge of the eigenstates leads to a closed form expression for the correlation function of the spin chain. We also discuss some aspects of a q-extension of this model.
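The single-fermion reduction amounts to diagonalizing the tridiagonal hopping matrix of the chain. The sketch below uses illustrative parity-alternating couplings rather than the paper's explicit (α, β)-dependent formula, and checks the spectral symmetry that any such bipartite chain must exhibit:

```python
import numpy as np

# Nearest-neighbour hopping chain: couplings J_k alternate with the parity
# of the site, as in the model; the values 1.0 / 0.6 are illustrative only.
N = 8
J = np.array([1.0 if k % 2 == 0 else 0.6 for k in range(N - 1)])

H = np.zeros((N, N))
for k in range(N - 1):
    H[k, k + 1] = H[k + 1, k] = J[k]

energies = np.sort(np.linalg.eigvalsh(H))
print(energies)  # symmetric about zero: the chain is bipartite
```

In the paper, the components of the corresponding eigenvectors are identified in closed form as Hahn polynomials with parameters (α, β) and (α+1, β-1), which is what makes the correlation function explicitly computable.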

  12. The Julia sets of basic uniCremer polynomials of arbitrary degree

    NASA Astrophysics Data System (ADS)

    Blokh, Alexander; Oversteegen, Lex

Let P be a polynomial of degree d with a Cremer point p and no repelling or parabolic periodic bi-accessible points. We show that there are two types of such Julia sets J_P. The red dwarf J_P are nowhere connected im kleinen and such that the intersection of all impressions of external angles is a continuum containing p and the orbits of all critical images. The solar J_P are such that every angle with dense orbit has a degenerate impression disjoint from other impressions, and J_P is connected im kleinen at its landing point. We study bi-accessible points and locally connected models of J_P and show that such sets J_P appear through polynomial-like maps for generic polynomials with Cremer points. Since known tools break down for d>2 (if d>2, it is not known whether there are small cycles near p, while if d=2 this result is due to Yoccoz), we introduce wandering ray continua in J_P and provide a new application of Thurston laminations.

  13. A discrete method for modal analysis of overhead line conductor bundles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migdalovici, M.A.; Sireteanu, T.D.; Albrecht, A.A.

    The paper presents a mathematical model and a semi-analytical procedure for calculating the vibration modes and eigenfrequencies of single or bundled conductors with spacers, which are needed to evaluate wind-induced conductor vibration and to optimize spacer-damper placement. The method consists of decomposing the conductors into modules and expanding the unknown displacements on each module in polynomial series. A complete system of polynomials is deduced for this purpose from Legendre polynomials. For each module, either the boundary conditions at the extremity of the module or the continuity conditions between modules are imposed, together with a number of projections of the module equilibrium equation onto the polynomials of the expansion series of the unknown displacement. The global system for the eigenmodes and eigenfrequencies takes the matrix form A X + ω² M X = 0. The theoretical considerations are exemplified on a single conductor and on a bundle of two conductors with spacers. A method for calculating the forced vibration of single or bundled conductors is also presented.

  14. Guaranteed cost control of polynomial fuzzy systems via a sum of squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Wang, Hua O

    2009-04-01

    This paper presents the guaranteed cost control of polynomial fuzzy systems via a sum of squares (SOS) approach. First, we present a polynomial fuzzy model and controller that are more general representations of the well-known Takagi-Sugeno (T-S) fuzzy model and controller, respectively. Second, we derive a guaranteed cost control design condition based on polynomial Lyapunov functions. Hence, the design approach discussed in this paper is more general than the existing LMI approaches (to T-S fuzzy control system designs) based on quadratic Lyapunov functions. The design condition realizes a guaranteed cost control by minimizing the upper bound of a given performance function. In addition, the design condition in the proposed approach can be represented in terms of SOS and is numerically (partially symbolically) solved via the recently developed SOSTOOLS. To illustrate the validity of the design approach, two design examples are provided. The first example deals with a complicated nonlinear system. The second example presents micro helicopter control. Both examples show that our approach provides more extensive design results than the existing LMI approach.

  15. New realisation of Preisach model using adaptive polynomial approximation

    NASA Astrophysics Data System (ADS)

    Liu, Van-Tsai; Lin, Chun-Liang; Wing, Home-Young

    2012-09-01

    Modelling systems with hysteresis has received considerable attention recently due to increasing accuracy requirements in engineering applications. The classical Preisach model (CPM) is the most popular model for describing hysteresis and can be represented by infinitely many, but countable, first-order reversal curves (FORCs). The usage of look-up tables is one way to approach the CPM in actual practice. The data in those tables correspond to samples of a finite number of FORCs. This approach, however, faces two major problems: firstly, it requires a large amount of memory space to obtain an accurate prediction of hysteresis; secondly, it is difficult to derive efficient ways to modify the data table to reflect the timing effect of elements with hysteresis. To overcome these problems, this article proposes using a set of polynomials to emulate the CPM instead of table look-up. The polynomial approximation requires less memory space for data storage. Furthermore, the polynomial coefficients can be obtained accurately by least-squares approximation or by an adaptive identification algorithm, which opens the possibility of accurately tracking the parameters of the hysteresis model.
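    A minimal sketch of the table-free idea on an invented synthetic FORC (the full model would fit one polynomial per reversal curve; curve shape, degree and noise level are all assumptions, not the article's data):

```python
import numpy as np

# Hypothetical illustration: approximate one sampled first-order reversal
# curve (FORC) by a polynomial fitted with least squares, instead of
# storing all the samples in a look-up table.
rng = np.random.default_rng(0)
x = np.linspace(-1.0, 1.0, 200)                               # input field samples
forc = np.tanh(2.0 * x) + 0.01 * rng.standard_normal(x.size)  # synthetic noisy FORC

degree = 7
coeffs = np.polynomial.polynomial.polyfit(x, forc, degree)    # least-squares fit
approx = np.polynomial.polynomial.polyval(x, coeffs)

rmse = np.sqrt(np.mean((approx - forc) ** 2))
print(f"{degree + 1} coefficients replace {x.size} table entries, RMSE = {rmse:.4f}")
```

    The memory saving is the point: a handful of coefficients stand in for a full row of the look-up table, and the coefficient vector can be updated online by an adaptive identification scheme.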

  16. Information entropy of Gegenbauer polynomials and Gaussian quadrature

    NASA Astrophysics Data System (ADS)

    Sánchez-Ruiz, Jorge

    2003-05-01

    In a recent paper (Buyarov V S, López-Artés P, Martínez-Finkelshtein A and Van Assche W 2000 J. Phys. A: Math. Gen. 33 6549-60), an efficient method was provided for evaluating in closed form the information entropy of the Gegenbauer polynomials C_n^(λ)(x) in the case when λ = l ∈ ℕ. For given values of n and l, this method requires the computation by means of recurrence relations of two auxiliary polynomials, P(x) and H(x), of degrees 2l - 2 and 2l - 4, respectively. Here it is shown that P(x) is related to the coefficients of the Gaussian quadrature formula for the Gegenbauer weights w_l(x) = (1 - x²)^(l-1/2), and this fact is used to obtain the explicit expression of P(x). From this result, an explicit formula is also given for the polynomial S(x) = lim_(n→∞) P(1 - x/(2n²)), which is relevant to the study of the asymptotic (n → ∞ with l fixed) behaviour of the entropy.
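    The Gaussian quadrature rule for the Gegenbauer weight that enters this construction can be produced numerically; a small sanity check (not the paper's computation) using SciPy's Gauss-Gegenbauer rule with l = 1, for which the weight is the semicircle weight (1 - x²)^(1/2):

```python
import numpy as np
from scipy.special import roots_gegenbauer

# Gauss-Gegenbauer nodes/weights for w_l(x) = (1 - x^2)^(l - 1/2), l = 1.
l = 1
x, w = roots_gegenbauer(6, l)       # 6-point rule, parameter lambda = l

# The weights must sum to the integral of the weight itself:
# int_{-1}^{1} (1 - x^2)^{1/2} dx = pi/2.
print(w.sum(), np.pi / 2)

# A 6-point Gauss rule is exact for polynomials up to degree 11;
# int_{-1}^{1} x^2 (1 - x^2)^{1/2} dx = pi/8.
print(np.allclose(np.sum(w * x**2), np.pi / 8))
```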

  17. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into a system of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then considered numerically for triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  18. Human evaluation in association to the mathematical analysis of arch forms: Two-dimensional study.

    PubMed

    Zabidin, Nurwahidah; Mohamed, Alizae Marny; Zaharim, Azami; Marizan Nor, Murshida; Rosli, Tanti Irawati

    2018-03-01

    The aims were to evaluate the relationship between human evaluation of the dental-arch form and two different mathematical methods of quantifying the arch form, and to establish agreement with the fourth-order polynomial equation. This study included 64 sets of digitised maxilla and mandible dental casts obtained from a sample of dental arches with normal occlusion. For the human evaluation, a convenience sample of orthodontic practitioners ranked photographic images of the dental casts from most tapered to least tapered (square). In the mathematical analysis, dental arches were interpolated using the fourth-order polynomial equation with millimetric acetate paper and AutoCAD software. Finally, the relations between the human evaluation and the objective mathematical analyses were evaluated. Human evaluations were found to be generally in agreement, but only at the extremes of tapered and square arch forms; this indicated general human error and observer bias. The two methods used to plot the arch form were comparable. The use of the fourth-order polynomial equation may facilitate obtaining a smooth curve, which can produce a template for an individual arch that represents all potential tooth positions for the dental arch. Copyright © 2018 CEO. Published by Elsevier Masson SAS. All rights reserved.
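    As a toy illustration of the curve-fitting step (the landmark coordinates below are invented, not study data), a fourth-order polynomial y = c0 + c1·x + … + c4·x⁴ can be fitted to digitised arch points by least squares:

```python
import numpy as np

# Invented, roughly arch-shaped landmark coordinates in millimetres.
x = np.array([-25.0, -18.0, -10.0, 0.0, 10.0, 18.0, 25.0])   # left-right position
y = np.array([0.0, 14.0, 22.0, 25.0, 22.0, 14.0, 0.0])       # arch depth

coeffs = np.polynomial.polynomial.polyfit(x, y, 4)  # degree-4 least-squares fit
curve = np.polynomial.polynomial.polyval(x, coeffs)

print("max fit error (mm):", np.max(np.abs(curve - y)))   # small for smooth arches
```

    The fitted coefficient vector is the smooth template: evaluating the polynomial on a dense grid gives the continuous arch curve through the digitised points.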

  19. Development of Finite Elements for Two-Dimensional Structural Analysis Using the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method has been developed in recent years for the analysis of structural mechanics problems. This method treats all independent internal forces as unknown variables that can be calculated by simultaneously imposing equations of equilibrium and compatibility conditions. In this paper a finite element library for analyzing two-dimensional problems by the Integrated Force Method is presented. Triangular- and quadrilateral-shaped elements capable of modeling arbitrary domain configurations are presented. The element equilibrium and flexibility matrices are derived by discretizing the expressions for potential and complementary energies, respectively. The displacement and stress fields within the finite elements are independently approximated. The displacement field is interpolated as it is in the standard displacement method, and the stress field is approximated by using complete polynomials of the correct order. A procedure that uses the definitions of stress components in terms of an Airy stress function is developed to derive the stress interpolation polynomials. Such derived stress fields identically satisfy the equations of equilibrium. Moreover, the resulting element matrices are insensitive to the orientation of local coordinate systems. A method is devised to calculate the number of rigid body modes, and the present elements are shown to be free of spurious zero-energy modes. A number of example problems are solved by using the present library, and the results are compared with corresponding analytical solutions and with results from the standard displacement finite element method. The Integrated Force Method not only gives results that agree well with analytical and displacement method results but also outperforms the displacement method in stress calculations.

  20. Stabilizing and destabilizing effects of damping in non-conservative systems: Some new results

    NASA Astrophysics Data System (ADS)

    Abdullatif, Mahmoud; Mukherjee, Ranjan; Hellum, Aren

    2018-01-01

    Previous work has amply demonstrated that non-conservative systems can be made unstable by the application of damping. Systems with two neutrally-stable damping levels, whereby the system initially gains stability but later loses stability as the level of damping is increased, have also been observed. The phenomenon of three damping-induced stability transitions has not been reported in the literature. Here we show that the addition of damping can cause non-conservative systems to become stable, then unstable, then stable again at the same value of the non-conservative forcing variable. This combination of stability transitions is found to exist for several example systems, including linkages with follower end forces and fluid-conveying pipes. Another interesting observation is that a given system can exhibit different forms of stability transitions in different regions of its parameter space. In a particular example, the neutral stability curves corresponding to two different modes are observed to intersect, such that the boundary separating the stable and unstable regions is piecewise continuous. This observation requires that the accepted definitions of "stabilizing" and "destabilizing" roles of damping be revised. All of these stability transition behaviors were found by applying the Routh-Hurwitz procedure, whereby the traditional procedure is first applied to the characteristic polynomial of the system, and then again to guarantee the existence of a second-order auxiliary polynomial in the Routh array. This procedure is developed in the context of examples, each of which concerns a classical apparatus whose dynamics are more interesting than previously believed.
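    The basic Routh-Hurwitz step the authors build on can be sketched as follows. This is the standard textbook construction (degenerate zero-pivot rows are not handled), not the paper's extended procedure with the auxiliary polynomial:

```python
import numpy as np

def routh_array(coeffs):
    """Build the Routh array for a polynomial given by its coefficients in
    descending order, e.g. [1, 6, 11, 6] for s^3 + 6s^2 + 11s + 6.
    Degenerate (zero-pivot) cases are not handled in this sketch."""
    n = len(coeffs)
    cols = (n + 1) // 2
    rows = np.zeros((n, cols))
    rows[0, :len(coeffs[0::2])] = coeffs[0::2]   # first row: a0, a2, a4, ...
    rows[1, :len(coeffs[1::2])] = coeffs[1::2]   # second row: a1, a3, a5, ...
    for i in range(2, n):
        for j in range(cols - 1):
            rows[i, j] = (rows[i - 1, 0] * rows[i - 2, j + 1]
                          - rows[i - 2, 0] * rows[i - 1, j + 1]) / rows[i - 1, 0]
    return rows

def is_hurwitz_stable(coeffs):
    # Stable iff every first-column entry of the Routh array is positive.
    return bool(np.all(routh_array(coeffs)[:, 0] > 0))

# s^3 + 6s^2 + 11s + 6 = (s+1)(s+2)(s+3): all roots in the left half-plane.
print(is_hurwitz_stable([1, 6, 11, 6]))   # True
# s^3 + s^2 + 2s + 8 = (s+2)(s^2 - s + 4): a right-half-plane pair.
print(is_hurwitz_stable([1, 1, 2, 8]))    # False
```

    Sweeping the damping coefficient inside `coeffs` and watching the sign pattern of the first column is the mechanism behind counting the damping-induced stability transitions described above.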

  1. Flat bases of invariant polynomials and P-matrices of E{sub 7} and E{sub 8}

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Talamini, Vittorino

    2010-02-15

    Let G be a compact group of linear transformations of a Euclidean space V. The G-invariant C{sup {infinity}} functions can be expressed as C{sup {infinity}} functions of a finite basic set of G-invariant homogeneous polynomials, sometimes called an integrity basis. The mathematical description of the orbit space V/G depends on the integrity basis too: it is realized through polynomial equations and inequalities expressing rank and positive semidefiniteness conditions of the P-matrix, a real symmetric matrix determined by the integrity basis. The choice of the basic set of G-invariant homogeneous polynomials forming an integrity basis is not unique, so the mathematical description of the orbit space is not unique either. If G is an irreducible finite reflection group, Saito et al. [Commun. Algebra 8, 373 (1980)] characterized some special basic sets of G-invariant homogeneous polynomials that they called flat. They also found explicitly the flat basic sets of invariant homogeneous polynomials of all the irreducible finite reflection groups except for the two largest groups E{sub 7} and E{sub 8}. In this paper the flat basic sets of invariant homogeneous polynomials of E{sub 7} and E{sub 8} and the corresponding P-matrices are determined explicitly. Using the results reported here, one can easily determine the P-matrices corresponding to any other integrity basis of E{sub 7} or E{sub 8}. From the P-matrices one may then write down the equations and inequalities defining the orbit spaces of E{sub 7} and E{sub 8} relative to a flat basis or to any other integrity basis. The results obtained here may be employed to study analytically the symmetry breaking in all theories where the symmetry group is one of the finite reflection groups E{sub 7} and E{sub 8} or one of the Lie groups E{sub 7} and E{sub 8} in their adjoint representations.

  2. Operator identities involving the bivariate Rogers-Szegö polynomials and their applications to the multiple q-series identities

    NASA Astrophysics Data System (ADS)

    Zhang, Zhizheng; Wang, Tianze

    2008-07-01

    In this paper, we first give several operator identities involving the bivariate Rogers-Szegö polynomials. By applying the technique of parameter augmentation to the multiple q-binomial theorems given by Milne [S.C. Milne, Balanced summation theorems for U(n) basic hypergeometric series, Adv. Math. 131 (1997) 93-187], we obtain several new multiple q-series identities involving the bivariate Rogers-Szegö polynomials. These include multiple extensions of Mehler's formula and Rogers's formula. Our U(n+1) generalizations are quite natural as they are also a direct and immediate consequence of their (often classical) known one-variable cases and Milne's fundamental theorem for An or U(n+1) basic hypergeometric series in Theorem 1E49 of [S.C. Milne, An elementary proof of the Macdonald identities for , Adv. Math. 57 (1985) 34-70], as rewritten in Lemma 7.3 on p. 163 of [S.C. Milne, Balanced summation theorems for U(n) basic hypergeometric series, Adv. Math. 131 (1997) 93-187] or Corollary 4.4 on pp. 768-769 of [S.C. Milne, M. Schlosser, A new An extension of Ramanujan's summation with applications to multilateral An series, Rocky Mountain J. Math. 32 (2002) 759-792].

  3. Simple Proof of Jury Test for Complex Polynomials

    NASA Astrophysics Data System (ADS)

    Choo, Younseok; Kim, Dongmin

    Recently some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided based on Rouché's Theorem and a single-parameter characterization of the Schur stability property for complex polynomials.
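    For context, the property the Jury test decides can be checked by brute force on small examples, by computing roots numerically rather than running the tabular test itself (a sanity-check sketch, not the letter's proof or procedure):

```python
import numpy as np

def is_schur_stable(coeffs):
    """A polynomial (real or complex coefficients, descending order) is
    Schur stable iff all of its roots lie strictly inside the unit disk."""
    return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

print(is_schur_stable([1, 0, 0.25]))    # True: z^2 + 0.25 has roots +/- 0.5j
print(is_schur_stable([1, -1.2j]))      # False: z - 1.2j has its root outside
```

    The tabular Jury test reaches the same verdict from the coefficients alone, without computing roots, which is what makes it useful for symbolic and parameter-dependent stability analysis.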

  4. Temporal Context in Concurrent Chains: I. Terminal-Link Duration

    ERIC Educational Resources Information Center

    Grace, Randolph C.

    2004-01-01

    Two experiments are reported in which the ratio of the average times spent in the terminal and initial links ("Tt/Ti") in concurrent chains was varied. In Experiment 1, pigeons responded in a three-component procedure in which terminal-link variable-interval schedules were in constant ratio, but their average duration increased across components…

  5. "Asymptotic Parabola" Fits for Smoothing Generally Asymmetric Light Curves

    NASA Astrophysics Data System (ADS)

    Andrych, K. D.; Andronov, I. L.; Chinarova, L. L.; Marsakova, V. I.

    A computer program is introduced which allows one to determine a statistically optimal approximation using the "Asymptotic Parabola" fit; in other words, a spline consisting of polynomials of orders 1, 2, 1: two lines ("asymptotes") connected by a parabola. The function and its derivative are continuous. There are five parameters: the two points where a line switches to the parabola and vice versa, the slopes of the lines and the curvature of the parabola. Extreme cases are the parabola without lines (i.e. a parabola spanning the whole interval), lines without a parabola (zero width of the parabola), or "line + parabola" without a second line. Such an approximation is especially effective for pulsating variables, for which the slopes of the ascending and descending branches are generally different, so the maxima and minima have asymmetric shapes. The method was initially introduced by Marsakova and Andronov (1996OAP.....9...127M) and realized as a computer program written in QBasic under DOS. It was used for dozens of variable stars, particularly for the catalogs of the individual pulsation characteristics of the Mira (1998OAP....11...79M) and semi-regular (2000OAP....13..116C) pulsating variables. For eclipsing variables with nearly symmetric minima, we use a "symmetric" version of the "Asymptotic Parabola". Here we introduce a Windows-based program, which does not have the DOS limitations on memory (number of observations) and screen resolution. The program has a user-friendly interface and is illustrated by application to a test signal and to the pulsating variable AC Her.
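    One possible parameterization of the 1-2-1 spline can be sketched directly from the description above (this is our guess at the construction, not the authors' code): the parabola is continued outside its interval by its tangent lines, which makes the value and the first derivative continuous at the switch points by construction.

```python
import numpy as np

def asymptotic_parabola(x, x1, x2, a, b, c):
    """Evaluate a 1-2-1 spline: the parabola q(x) = a + b*x + c*x**2 on
    [x1, x2], continued outside by its tangent lines at x1 and x2."""
    x = np.asarray(x, dtype=float)
    q = a + b * x + c * x**2
    left = (a + b * x1 + c * x1**2) + (b + 2 * c * x1) * (x - x1)   # tangent at x1
    right = (a + b * x2 + c * x2**2) + (b + 2 * c * x2) * (x - x2)  # tangent at x2
    return np.where(x < x1, left, np.where(x > x2, right, q))

# Numerical continuity check across one switch point.
x1, x2, eps = -0.3, 0.5, 1e-6
f = lambda x: float(asymptotic_parabola(x, x1, x2, 1.0, -0.8, 2.0))
print(abs(f(x1 - eps) - f(x1 + eps)) < 1e-4)                        # value continuous
print(abs((f(x1 + eps) - f(x1)) / eps
          - (f(x1) - f(x1 - eps)) / eps) < 1e-3)                    # slope continuous
```

    Fitting then amounts to a least-squares search over the five parameters (x1, x2, a, b, c); the extreme cases listed above correspond to x1, x2 hitting the ends of the data interval or coinciding.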

  6. On the connection coefficients and recurrence relations arising from expansions in series of Laguerre polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2003-05-01

    A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
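    A classical special case of the derivative formula described above is d/dx L_n(x) = -(L_0 + L_1 + … + L_{n-1})(x), i.e. the derivative of a Laguerre polynomial is a linear combination of lower-degree Laguerre polynomials with coefficients -1. This can be checked numerically with NumPy's Laguerre series tools (an illustration of the flavour of the result, not the paper's general formula):

```python
import numpy as np
from numpy.polynomial import laguerre as L

n = 5
cn = np.zeros(n + 1)
cn[n] = 1.0                    # L_n represented in the Laguerre basis
deriv = L.lagder(cn)           # derivative, again as Laguerre coefficients

print(deriv)                   # [-1. -1. -1. -1. -1.]
```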

  7. Polynomial filter estimation of range and range rate for terminal rendezvous

    NASA Technical Reports Server (NTRS)

    Philips, R.

    1970-01-01

    A study was made of a polynomial filter for computing range rate information from CSM VHF range data. The filter's performance during the terminal phase of the rendezvous is discussed. Two modifications of the filter were also made and tested. A manual terminal rendezvous was simulated and desired accuracies were achieved for vehicles on an intercept trajectory, except for short periods following each braking maneuver when the estimated range rate was initially in error by the magnitude of the burn.
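    The underlying idea — estimating range rate by differentiating a locally fitted polynomial — can be sketched on synthetic data. Window length, noise level and the constant range rate below are all invented for illustration; this is not the flight filter:

```python
import numpy as np

dt, n, half = 0.1, 201, 10                 # sample period (s), samples, half-window
t = np.arange(n) * dt
rng = np.random.default_rng(4)
rr = -25.0                                 # true range rate, m/s (invented)
range_m = 5000.0 + rr * t + rng.normal(0, 2.0, n)   # noisy range measurements, m

# Slide a least-squares quadratic over the samples and take the fitted
# polynomial's derivative at the window centre as the range-rate estimate.
est = np.full(n, np.nan)
for i in range(half, n - half):
    w = slice(i - half, i + half + 1)
    c = np.polynomial.polynomial.polyfit(t[w] - t[i], range_m[w], 2)
    est[i] = c[1]                          # d(range)/dt at the window centre

print(np.nanmean(est))                     # close to the true rate, -25
```

    After a braking maneuver the true rate jumps, so a window straddling the burn mixes two regimes — the same transient error the abstract reports immediately after each maneuver.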

  8. Solution of Fifth-order Korteweg and de Vries Equation by Homotopy perturbation Transform Method using He's Polynomial

    NASA Astrophysics Data System (ADS)

    Sharma, Dinkar; Singh, Prince; Chauhan, Shubha

    2017-06-01

    In this paper, a combined form of the Laplace transform method and the homotopy perturbation method is applied to solve nonlinear fifth-order Korteweg-de Vries (KdV) equations. The method is known as the homotopy perturbation transform method (HPTM). The nonlinear terms can be easily handled by the use of He's polynomials. Two test examples are considered to illustrate the present scheme. Further, the results are compared with the homotopy perturbation method (HPM).

  9. A class of generalized Ginzburg-Landau equations with random switching

    NASA Astrophysics Data System (ADS)

    Wu, Zheng; Yin, George; Lei, Dongxia

    2018-09-01

    This paper focuses on a class of generalized Ginzburg-Landau equations with random switching. In our formulation, the nonlinear term is allowed to have higher polynomial growth rate than the usual cubic polynomials. The random switching is modeled by a continuous-time Markov chain with a finite state space. First, an explicit solution is obtained. Then properties such as stochastic-ultimate boundedness and permanence of the solution processes are investigated. Finally, two-time-scale models are examined leading to a reduction of complexity.
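    The switching mechanism alone can be illustrated with a two-state continuous-time Markov chain simulated via exponential holding times (the rates and time horizon are invented; the PDE dynamics between switches are omitted):

```python
import numpy as np

rng = np.random.default_rng(3)
rates = {0: 2.0, 1: 1.0}          # rate of leaving each regime (invented)
T, t, state = 1000.0, 0.0, 0      # horizon, clock, initial regime
occupation = np.zeros(2)          # time spent in each regime

while t < T:
    hold = rng.exponential(1.0 / rates[state])   # exponential holding time
    hold = min(hold, T - t)                      # truncate at the horizon
    occupation[state] += hold
    t += hold
    state = 1 - state                            # alternate between the two regimes

print(occupation / T)   # long-run fractions, roughly [1/3, 2/3]
```

    The long-run fractions are proportional to the mean holding times 1/2 and 1, matching the stationary distribution of the chain; in the paper, the Ginzburg-Landau dynamics simply follow whichever regime the chain currently occupies.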

  10. Supersymmetric Casimir energy and the anomaly polynomial

    NASA Astrophysics Data System (ADS)

    Bobev, Nikolay; Bullimore, Mathew; Kim, Hee-Cheol

    2015-09-01

    We conjecture that for superconformal field theories in even dimensions, the supersymmetric Casimir energy on a space with topology S^1 × S^(D-1) is equal to an equivariant integral of the anomaly polynomial. The equivariant integration is defined with respect to the Cartan subalgebra of the global symmetry algebra that commutes with a given supercharge. We test our proposal extensively by computing the supersymmetric Casimir energy for large classes of superconformal field theories, with and without known Lagrangian descriptions, in two, four and six dimensions.

  11. Uniform versus Gaussian Beams: A Comparison of the Effects of Diffraction, Obscuration, and Aberrations.

    DTIC Science & Technology

    1985-12-16

    balancing is discussed for the two types of beams. Zernike polynomials representing balanced primary aberration for uniform and Gaussian annular beams... plotted on a logarithmic scale (Figs. 3c and 3d). The positions of maxima and minima and the corresponding irradiance and encircled-power values are... aberration (representing a term in the expansion of the aberration in terms of a set of "Zernike" polynomials which are orthonormal over the amplitude

  12. Piecewise Polynomial Aggregation as Preprocessing for Data Numerical Modeling

    NASA Astrophysics Data System (ADS)

    Dobronets, B. S.; Popova, O. A.

    2018-05-01

    Data aggregation issues for numerical modeling are reviewed in the present study. The authors discuss data aggregation procedures as preprocessing for subsequent numerical modeling. To calculate the data aggregation, the authors propose using numerical probabilistic analysis (NPA). An important feature of this study is how the authors represent the aggregated data. The study shows that the proposed approach to data aggregation can be interpreted as the frequency distribution of a variable, whose properties are studied through its density function. For this purpose, the authors propose using piecewise polynomial models; a suitable example of such an approach is the spline. The authors show that their approach to data aggregation reduces the level of data uncertainty and significantly increases the efficiency of numerical calculations. To demonstrate how well the proposed methods correspond to reality, the authors developed a theoretical framework and considered numerical examples devoted to time series aggregation.

  13. Investigation of the Process Conditions for Hydrogen Production by Steam Reforming of Glycerol over Ni/Al₂O₃ Catalyst Using Response Surface Methodology (RSM).

    PubMed

    Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed

    2014-03-19

    In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. The effects of five independent variables, namely the temperature (X₁), the flow rate (X₂), the catalyst weight (X₃), the catalyst loading (X₄) and the glycerol-water molar ratio (X₅), on the H₂ yield (Y₁) and the conversion of glycerol to gaseous products (Y₂) were explored. Using multiple regression analysis, the experimental results for the H₂ yield and the glycerol conversion to gases were fit to quadratic polynomial models. The proposed mathematical models correlated the dependent factors well within the limits examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20% and a glycerol-water molar ratio of approximately 12, where the H₂ yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables that were studied.
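    The regression step of RSM amounts to an ordinary least-squares fit of a full quadratic polynomial in the coded factors. A two-variable sketch on invented design data (the study used five factors; coefficients below are made up for illustration):

```python
import numpy as np

# Quadratic response-surface model:
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
rng = np.random.default_rng(1)
x1 = rng.uniform(-1, 1, 80)            # coded factor levels
x2 = rng.uniform(-1, 1, 80)
y = 57.0 + 3.0*x1 - 2.0*x2 - 5.0*x1**2 + 1.5*x1*x2 + rng.normal(0, 0.1, 80)

X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1*x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))               # close to [57, 3, -2, -5, 0, 1.5]
```

    The fitted surface is then optimized over the coded region to locate the best factor settings, which is how the reported optimum (≈600 °C, 0.05 mL/min, …) is obtained in an RSM workflow.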

  14. Nodal Statistics for the Van Vleck Polynomials

    NASA Astrophysics Data System (ADS)

    Bourget, Alain

    The Van Vleck polynomials naturally arise from the generalized Lamé equation as the polynomials of degree for which Eq. (1) has a polynomial solution of some degree k. In this paper, we compute the limiting distribution, as well as the limiting mean level spacings distribution of the zeros of any Van Vleck polynomial as N → ∞.

  15. Ordinal probability effect measures for group comparisons in multinomial cumulative link models.

    PubMed

    Agresti, Alan; Kateri, Maria

    2017-03-01

    We consider simple ordinal model-based probability effect measures for comparing distributions of two groups, adjusted for explanatory variables. An "ordinal superiority" measure summarizes the probability that an observation from one distribution falls above an independent observation from the other distribution, adjusted for explanatory variables in a model. The measure applies directly to normal linear models and to a normal latent variable model for ordinal response variables. It equals Φ(β/2) for the corresponding ordinal model that applies a probit link function to cumulative multinomial probabilities, for standard normal cdf Φ and effect β that is the coefficient of the group indicator variable. For the more general latent variable model for ordinal responses that corresponds to a linear model with other possible error distributions and corresponding link functions for cumulative multinomial probabilities, the ordinal superiority measure equals exp(β)/[1+exp(β)] with the log-log link and equals approximately exp(β/2)/[1+exp(β/2)] with the logit link, where β is the group effect. Another ordinal superiority measure generalizes the difference of proportions from binary to ordinal responses. We also present related measures directly for ordinal models for the observed response that need not assume corresponding latent response models. We present confidence intervals for the measures and illustrate with an example. © 2016, The International Biometric Society.
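    The closed-form measures quoted above are easy to tabulate; a small helper (the function name is ours) implementing the three link-specific formulas from the abstract:

```python
from math import erf, exp, sqrt

def Phi(z):
    """Standard normal cdf."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ordinal_superiority(beta, link):
    """Ordinal superiority P(Y1 > Y2) from the group effect beta, using the
    formulas quoted in the abstract for each cumulative link."""
    if link == "probit":
        return Phi(beta / 2.0)
    if link == "loglog":
        return exp(beta) / (1.0 + exp(beta))
    if link == "logit":   # approximation, as noted in the abstract
        return exp(beta / 2.0) / (1.0 + exp(beta / 2.0))
    raise ValueError(link)

# beta = 0 means no group effect: every measure reduces to 1/2.
print(ordinal_superiority(0.0, "probit"))   # 0.5
```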

  16. A Classroom Note on: Bounds on Integer Solutions of xy = k(x + y) and xyz = k(xy + xz + yz)

    ERIC Educational Resources Information Center

    Umar, Abdullahi; Alassar, Rajai

    2011-01-01

    Diophantine equations constitute a rich mathematical field. This article may be useful as a basis for a student math club project. There are several situations in which one needs to find a solution of indeterminate polynomial equations that allow the variables to be integers only. These indeterminate equations are fewer than the involved unknown…

  17. Regression Simulation of Turbine Engine Performance - Accuracy Improvement (TASK IV)

    DTIC Science & Technology

    1978-09-30

    Generalized Form of the Regression Equation for the Optimized Polynomial Exponent Method... altitude, Mach number and power setting combinations were generated during the ARES evaluation. The orthogonal Latin Square selection procedure... pattern. In data generation, the low (L), mid (M), and high (H) values of a variable are not always the same. At some of the corner points where

  18. Legendre modified moments for Euler's constant

    NASA Astrophysics Data System (ADS)

    Prévost, Marc

    2008-10-01

    Polynomial moments are often used for the computation of Gauss quadrature to stabilize the numerical calculation of the orthogonal polynomials; see [W. Gautschi, Computational aspects of orthogonal polynomials, in: P. Nevai (Ed.), Orthogonal Polynomials: Theory and Practice, NATO ASI Series C: Mathematical and Physical Sciences, vol. 294, Kluwer, Dordrecht, 1990, pp. 181-216 [6]; W. Gautschi, On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48(4) (1986) 369-382 [5]; W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput. 3(3) (1982) 289-317 [4]].

  19. Data driven discrete-time parsimonious identification of a nonlinear state-space model for a weakly nonlinear system with short data record

    NASA Astrophysics Data System (ADS)

    Relan, Rishi; Tiels, Koen; Marconato, Anna; Dreesen, Philippe; Schoukens, Johan

    2018-05-01

    Many real world systems exhibit a quasi linear or weakly nonlinear behavior during normal operation, and a hard saturation effect for high peaks of the input signal. In this paper, a methodology to identify a parsimonious discrete-time nonlinear state space model (NLSS) for the nonlinear dynamical system with relatively short data record is proposed. The capability of the NLSS model structure is demonstrated by introducing two different initialisation schemes, one of them using multivariate polynomials. In addition, a method using first-order information of the multivariate polynomials and tensor decomposition is employed to obtain the parsimonious decoupled representation of the set of multivariate real polynomials estimated during the identification of NLSS model. Finally, the experimental verification of the model structure is done on the cascaded water-benchmark identification problem.

  20. Explicit 2-D Hydrodynamic FEM Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jerry

    1996-08-07

    DYNA2D* is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D* contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, and temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state, including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.

  1. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated so that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
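As a rough illustration of the with/without linear cross products distinction (a minimal numpy sketch on synthetic data, not the programs described in the report):

```python
import numpy as np

def design_matrix(x1, x2, degree, cross_products=False):
    """Build a polynomial design matrix in two variables.

    Without cross products: 1, x1, ..., x1^degree, x2, ..., x2^degree.
    With cross products: additionally the linear cross term x1*x2.
    """
    cols = [np.ones_like(x1)]
    for p in range(1, degree + 1):
        cols.append(x1 ** p)
    for p in range(1, degree + 1):
        cols.append(x2 ** p)
    if cross_products:
        cols.append(x1 * x2)  # linear cross product
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 200)
x2 = rng.uniform(-1, 1, 200)
y = 2.0 + 0.5 * x1 - 1.5 * x2 ** 2 + 3.0 * x1 * x2   # true surface

for cp in (False, True):
    X = design_matrix(x1, x2, degree=2, cross_products=cp)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rms = np.sqrt(np.mean((y - X @ beta) ** 2))
    print(f"cross_products={cp}: RMS error = {rms:.4f}")
```

Since the synthetic surface contains an x1*x2 term, only the model with cross products can fit it exactly; the purely additive model leaves a substantial residual.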

  2. Parametric synthesis of a robust controller on a base of mathematical programming method

    NASA Astrophysics Data System (ADS)

    Khozhaev, I. V.; Gayvoronskiy, S. A.; Ezangina, T. A.

    2018-05-01

This paper is dedicated to deriving, on the basis of the mathematical programming method, sufficient conditions linking root indices of robust control quality with the coefficients of an interval characteristic polynomial. On the basis of these conditions, a method was developed for synthesizing PI- and PID-controllers that provide an aperiodic transient process with an acceptable stability degree and, consequently, an acceptable settling time. The method was applied to the problem of synthesizing a controller for the depth control system of an unmanned underwater vehicle.

  3. Exact models for isotropic matter

    NASA Astrophysics Data System (ADS)

    Thirukkanesh, S.; Maharaj, S. D.

    2006-04-01

    We study the Einstein-Maxwell system of equations in spherically symmetric gravitational fields for static interior spacetimes. The condition for pressure isotropy is reduced to a recurrence equation with variable, rational coefficients. We demonstrate that this difference equation can be solved in general using mathematical induction. Consequently, we can find an explicit exact solution to the Einstein-Maxwell field equations. The metric functions, energy density, pressure and the electric field intensity can be found explicitly. Our result contains models found previously, including the neutron star model of Durgapal and Bannerji. By placing restrictions on parameters arising in the general series, we show that the series terminate and there exist two linearly independent solutions. Consequently, it is possible to find exact solutions in terms of elementary functions, namely polynomials and algebraic functions.

  4. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

Atomic force microscopy (AFM) images normally exhibit various artifacts, so image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground are automatically segmented with accurate boundary detection, and the extracted foreground features are taken as exclusion masks. In the second step, data points in the background are fitted as polynomial curves/surfaces, which are then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme is presented, followed by an investigation of the influence of the sliding-window size and the polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images is verified with the proposed method.
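A line-by-line version of the mask-assisted polynomial flattening step can be sketched as follows (synthetic image and mask; the segmentation step that produces the mask is assumed, not implemented):

```python
import numpy as np

def flatten_lines(image, mask, degree=2):
    """Fit a polynomial to the background of each scan line and subtract it.

    image : 2-D array of height data (one row per scan line)
    mask  : boolean array, True where foreground features were segmented
            (those pixels are excluded from the fit, as in mask-assisted flattening)
    """
    flattened = np.empty_like(image, dtype=float)
    x = np.arange(image.shape[1])
    for i, row in enumerate(image):
        bg = ~mask[i]                                 # background pixels only
        coeffs = np.polyfit(x[bg], row[bg], degree)
        flattened[i] = row - np.polyval(coeffs, x)    # subtract fitted trend
    return flattened

# synthetic scan: parabolic bow per line plus a raised square "feature"
x = np.arange(128)
img = np.array([5e-4 * (x - 64) ** 2 + 0.1 * i for i in range(64)])
mask = np.zeros_like(img, dtype=bool)
mask[20:40, 50:70] = True
img[mask] += 3.0                      # convex foreground feature, 3 units high

flat = flatten_lines(img, mask)
print("background std:", flat[~mask].std())
print("feature height:", flat[mask].mean())
```

Because the feature pixels are excluded from the fit, the background trend is removed while the feature height is preserved.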

  5. AN IMPROVED STRATEGY FOR REGRESSION OF BIOPHYSICAL VARIABLES AND LANDSAT ETM+ DATA. (R828309)

    EPA Science Inventory

    Empirical models are important tools for relating field-measured biophysical variables to remote sensing data. Regression analysis has been a popular empirical method of linking these two types of data to provide continuous estimates for variables such as biomass, percent wood...

  6. Meixner Class of Non-commutative Generalized Stochastic Processes with Freely Independent Values II. The Generating Function

    NASA Astrophysics Data System (ADS)

    Bożejko, Marek; Lytvynov, Eugene

    2011-03-01

Let T be an underlying space with a non-atomic measure σ on it. In [Comm. Math. Phys. 292, 99-129 (2009)] the Meixner class of non-commutative generalized stochastic processes with freely independent values, ω = (ω(t))_{t∈T}, was characterized through the continuity of the corresponding orthogonal polynomials. In this paper, we derive a generating function for these orthogonal polynomials. The first question we have to answer is: What should serve as a generating function for a system of polynomials of infinitely many non-commuting variables? We construct a class of operator-valued functions Z = (Z(t))_{t∈T} such that Z(t) commutes with ω(s) for any s, t ∈ T. Then a generating function can be understood as G(Z, ω) = Σ_{n=0}^∞ ∫_{T^n} P^{(n)}(ω(t_1), …, ω(t_n)) Z(t_1) ⋯ Z(t_n) σ(dt_1) ⋯ σ(dt_n), where P^{(n)}(ω(t_1), …, ω(t_n)) is (the kernel of) the n-th orthogonal polynomial. We derive an explicit form of G(Z, ω), which has a resolvent form and resembles the generating function in the classical case, albeit it involves integrals of non-commuting operators. We finally discuss a related problem of the action of the annihilation operators ∂_t, t ∈ T. In contrast to the classical case, we prove that the operators ∂_t related to the free Gaussian and Poisson processes have a property of globality. This result is genuinely infinite-dimensional, since in one dimension one loses the notion of globality.

  7. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are fourth-order polynomials whose parameters (i.e. coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions, yet polynomial coefficients obtained from it are used to estimate chlorophyll content in all ocean regions, despite differing properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption: GWR solves a regression model at each sample point using the observations within its neighbourhood. The results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of the GWR coefficients also shows that the spatial stationarity assumption in the empirical models is likely not valid.
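The way PLS sidesteps multicollinearity can be sketched with a minimal NIPALS implementation for a single response (synthetic collinear predictors, not the authors' pipeline; production work would use a tested library implementation):

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS PLS regression for a single response (PLS1).

    Successively extracts components t = X w that maximise covariance with y,
    deflating X and y after each one; the extracted components are mutually
    orthogonal, which avoids the multicollinearity of the raw predictors.
    Returns regression coefficients for centred X.
    """
    X = X - X.mean(axis=0)
    Xd, yd = X.copy(), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xd.T @ yd
        w /= np.linalg.norm(w)        # weight vector
        t = Xd @ w                    # score (component)
        tt = t @ t
        p = Xd.T @ t / tt             # X loading
        qk = yd @ t / tt              # y loading
        Xd -= np.outer(t, p)          # deflation
        yd -= qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    # standard PLS coefficient formula in the centred X space
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(1)
base = rng.normal(size=(300, 2))
# four highly collinear predictors built from two latent factors
X = np.column_stack([base[:, 0], base[:, 0] + 1e-3 * rng.normal(size=300),
                     base[:, 1], base[:, 1] + 1e-3 * rng.normal(size=300)])
y = 2.0 * base[:, 0] - 1.0 * base[:, 1]

B = pls1_fit(X, y, n_components=2)
pred = (X - X.mean(axis=0)) @ B + y.mean()
print("RMS error:", np.sqrt(np.mean((y - pred) ** 2)))
```

Two components suffice here because the four collinear predictors carry only two latent factors, which is exactly the situation that defeats ordinary least squares on the raw terms.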

  8. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the curse of dimensionality, this work proposes variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
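The stepwise-regression level of adaptivity can be illustrated with a greedy forward selection over a monomial basis (a simplified stand-in on synthetic data; the paper's PDD uses orthonormal polynomial bases and ANOVA truncation on top of this idea):

```python
import numpy as np
from itertools import product

def stepwise_poly_surrogate(X, y, max_degree=3, n_terms=3):
    """Greedy forward selection of multivariate monomial terms.

    Builds a sparse polynomial surrogate by repeatedly adding the candidate
    basis term that most reduces the least-squares residual.
    """
    n, d = X.shape
    # candidate multi-indices with total degree <= max_degree
    cands = [a for a in product(range(max_degree + 1), repeat=d)
             if 0 < sum(a) <= max_degree]
    basis = [np.ones(n)]           # constant term always included
    chosen = [(0,) * d]
    for _ in range(n_terms):
        best = None
        for a in cands:
            if a in chosen:
                continue
            A = np.column_stack(basis + [np.prod(X ** np.array(a), axis=1)])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef) ** 2)
            if best is None or rss < best[0]:
                best = (rss, a)
        chosen.append(best[1])
        basis.append(np.prod(X ** np.array(best[1]), axis=1))
    A = np.column_stack(basis)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return chosen, coef, np.sum((y - A @ coef) ** 2)

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(400, 3))
y = 1.0 + 2.0 * X[:, 0] ** 2 + 0.5 * X[:, 1] * X[:, 2]   # sparse true model

terms, coef, rss = stepwise_poly_surrogate(X, y)
print("selected multi-indices:", terms)
print("residual sum of squares:", rss)
```

With only three greedy picks out of 19 candidate terms, the two active terms of the sparse true model are recovered, mirroring how a sparse expansion needs far fewer model evaluations than the full one.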

  9. Rational Ruijsenaars Schneider hierarchy and bispectral difference operators

    NASA Astrophysics Data System (ADS)

    Iliev, Plamen

    2007-05-01

    We show that a monic polynomial in a discrete variable n, with coefficients depending on time variables t1,t2,…, is a τ-function for the discrete Kadomtsev-Petviashvili hierarchy if and only if the motion of its zeros is governed by a hierarchy of Ruijsenaars-Schneider systems. These τ-functions were considered in [L. Haine, P. Iliev, Commutative rings of difference operators and an adelic flag manifold, Int. Math. Res. Not. 2000 (6) (2000) 281-323], where it was proved that they parametrize rank one solutions to a difference-differential version of the bispectral problem.

  10. Discontinuous Skeletal Gradient Discretisation methods on polytopal meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Di Pietro, Daniele A.; Droniou, Jérôme; Manzini, Gianmarco

Here, in this work we develop arbitrary-order Discontinuous Skeletal Gradient Discretisations (DSGD) on general polytopal meshes. Discontinuous Skeletal refers to the fact that the globally coupled unknowns are broken polynomials on the mesh skeleton. The key ingredient is a high-order gradient reconstruction composed of two terms: (i) a consistent contribution obtained mimicking an integration by parts formula inside each element and (ii) a stabilising term for which sufficient design conditions are provided. An example of stabilisation that satisfies the design conditions is proposed based on a local lifting of high-order residuals on a Raviart–Thomas–Nédélec subspace. We prove that the novel DSGDs satisfy coercivity, consistency, limit-conformity, and compactness requirements that ensure convergence for a variety of elliptic and parabolic problems. Lastly, links with Hybrid High-Order, non-conforming Mimetic Finite Difference and non-conforming Virtual Element methods are also studied. Numerical examples complete the exposition.

  11. Aberration corrections for free-space optical communications in atmosphere turbulence using orbital angular momentum states.

    PubMed

    Zhao, S M; Leach, J; Gong, L Y; Ding, J; Zheng, B Y

    2012-01-02

The effect of atmospheric turbulence on light's spatial structure compromises the information capacity of photons carrying orbital angular momentum (OAM) in free-space optical (FSO) communications. In this paper, we study two aberration correction methods to mitigate this effect. The first is the Shack-Hartmann wavefront correction method, which is based on the Zernike polynomials, and the second is a phase correction method specific to OAM states. Our numerical results show that the phase correction method for OAM states outperforms the Shack-Hartmann wavefront correction method, although both methods significantly improve the purity of a single OAM state and the channel capacity of the FSO communication link. At the same time, our experimental results show that the values of the participation functions decrease under the phase correction method for OAM states, i.e., the correction method effectively mitigates the detrimental effect of atmospheric turbulence.
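The Zernike polynomials underlying the Shack-Hartmann method have a standard radial part that is easy to evaluate; the following sketch (not the authors' code) implements the textbook formula:

```python
import numpy as np
from math import factorial

def zernike_radial(n, m, rho):
    """Radial part R_n^m of the Zernike polynomials used in
    Shack-Hartmann wavefront reconstruction (requires n - |m| even)."""
    m = abs(m)
    if (n - m) % 2:
        return np.zeros_like(rho)
    # standard finite sum over k = 0 .. (n - m) / 2
    return sum((-1) ** k * factorial(n - k)
               / (factorial(k) * factorial((n + m) // 2 - k)
                  * factorial((n - m) // 2 - k))
               * rho ** (n - 2 * k)
               for k in range((n - m) // 2 + 1))

rho = np.linspace(0, 1, 5)
print(zernike_radial(2, 0, rho))   # defocus term: 2*rho^2 - 1
```

The full Zernike mode multiplies this radial part by cos(mθ) or sin(mθ) and a normalisation factor, conventions for which vary between references.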

  12. Discontinuous Skeletal Gradient Discretisation methods on polytopal meshes

    DOE PAGES

    Di Pietro, Daniele A.; Droniou, Jérôme; Manzini, Gianmarco

    2017-11-21

Here, in this work we develop arbitrary-order Discontinuous Skeletal Gradient Discretisations (DSGD) on general polytopal meshes. Discontinuous Skeletal refers to the fact that the globally coupled unknowns are broken polynomials on the mesh skeleton. The key ingredient is a high-order gradient reconstruction composed of two terms: (i) a consistent contribution obtained mimicking an integration by parts formula inside each element and (ii) a stabilising term for which sufficient design conditions are provided. An example of stabilisation that satisfies the design conditions is proposed based on a local lifting of high-order residuals on a Raviart–Thomas–Nédélec subspace. We prove that the novel DSGDs satisfy coercivity, consistency, limit-conformity, and compactness requirements that ensure convergence for a variety of elliptic and parabolic problems. Lastly, links with Hybrid High-Order, non-conforming Mimetic Finite Difference and non-conforming Virtual Element methods are also studied. Numerical examples complete the exposition.

  13. Direct calculation of modal parameters from matrix orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Guillaume, Patrick

    2011-10-01

The object of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple input multiple output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly: the transformation of the coefficients from the orthogonal polynomial basis to the power polynomial basis is known to be an ill-conditioned transformation. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials, so that high order models can be used without numerical problems. The proposed method is compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]) using simulated as well as experimental data.
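One standard well-conditioned route from polynomial coefficients to poles is to take eigenvalues of a companion matrix; the sketch below illustrates the idea for a scalar power-basis polynomial (the paper works with matrix orthogonal polynomials, for which the analogue is a comrade/colleague matrix):

```python
import numpy as np

def poles_from_polynomial(coeffs):
    """Poles (roots) of a monic polynomial via its companion matrix.

    coeffs = [a_{n-1}, ..., a_1, a_0] for
    p(z) = z^n + a_{n-1} z^{n-1} + ... + a_1 z + a_0.
    Eigenvalue routines on the companion matrix avoid explicit
    basis transformation and root deflation.
    """
    a = np.asarray(coeffs, dtype=complex)
    n = a.size
    C = np.zeros((n, n), dtype=complex)
    C[1:, :-1] = np.eye(n - 1)        # subdiagonal of ones
    C[:, -1] = -a[::-1]               # last column: -a_0, ..., -a_{n-1}
    return np.linalg.eigvals(C)

# p(z) = z^2 - 3z + 2 = (z - 1)(z - 2)
print(np.sort(poles_from_polynomial([-3, 2]).real))
```

This is also what `np.roots` does internally; modal frequencies and damping would then follow from the pole locations.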

  14. Study on the mapping of dark matter clustering from real space to redshift space

    NASA Astrophysics Data System (ADS)

    Zheng, Yi; Song, Yong-Seon

    2016-08-01

The mapping of dark matter clustering from real space to redshift space introduces an anisotropic property to the measured density power spectrum in redshift space, known as the redshift space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher order polynomials due to indefinite cross correlations between the density and velocity fields, and by the Finger-of-God (FoG) effect due to the randomness of the peculiar velocity field. Whilst the full higher order polynomials remain unknown, the other systematics can be controlled consistently within the same order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: 1) the "one-point" FoG term, which is independent of the separation vector between two different points, and 2) the "correlated" FoG term, which appears as indefinite polynomials and is expanded to the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that a Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space at the smallest scales by far, up to k ~ 0.2 Mpc^-1, considering the resolution of future experiments.

  15. An Adaptive Prediction-Based Approach to Lossless Compression of Floating-Point Volume Data.

    PubMed

    Fout, N; Ma, Kwan-Liu

    2012-12-01

    In this work, we address the problem of lossless compression of scientific and medical floating-point volume data. We propose two prediction-based compression methods that share a common framework, which consists of a switched prediction scheme wherein the best predictor out of a preset group of linear predictors is selected. Such a scheme is able to adapt to different datasets as well as to varying statistics within the data. The first method, called APE (Adaptive Polynomial Encoder), uses a family of structured interpolating polynomials for prediction, while the second method, which we refer to as ACE (Adaptive Combined Encoder), combines predictors from previous work with the polynomial predictors to yield a more flexible, powerful encoder that is able to effectively decorrelate a wide range of data. In addition, in order to facilitate efficient visualization of compressed data, our scheme provides an option to partition floating-point values in such a way as to provide a progressive representation. We compare our two compressors to existing state-of-the-art lossless floating-point compressors for scientific data, with our data suite including both computer simulations and observational measurements. The results demonstrate that our polynomial predictor, APE, is comparable to previous approaches in terms of speed but achieves better compression rates on average. ACE, our combined predictor, while somewhat slower, is able to achieve the best compression rate on all datasets, with significantly better rates on most of the datasets.
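The switched-predictor idea shared by APE and ACE can be sketched on integer data with two linear predictors (a toy illustration, not the paper's floating-point partitioning or entropy coding):

```python
import numpy as np

def encode_switched(x, block=16):
    """Switched linear prediction on integer data: per block, pick the
    predictor (order-0 'previous sample' or order-1 linear extrapolation)
    whose residuals are smallest, and store (choice, residuals).
    Lossless: the decoder re-derives the same predictions."""
    x = np.asarray(x, dtype=np.int64)
    out = []
    for s in range(0, len(x), block):
        b = x[s:s + block]
        a = x[s - 1] if s >= 1 else 0       # previous sample
        bb = x[s - 2] if s >= 2 else 0      # sample before that
        p0, p1 = np.empty_like(b), np.empty_like(b)
        for i, v in enumerate(b):
            p0[i] = a                       # order 0 prediction
            p1[i] = 2 * a - bb              # order 1 linear extrapolation
            bb, a = a, v
        r0, r1 = b - p0, b - p1
        if np.abs(r0).sum() <= np.abs(r1).sum():
            out.append((0, r0))
        else:
            out.append((1, r1))
    return out

def decode_switched(blocks):
    x = []
    for choice, r in blocks:
        for res in r:
            a = x[-1] if len(x) >= 1 else 0
            bb = x[-2] if len(x) >= 2 else 0
            pred = a if choice == 0 else 2 * a - bb
            x.append(pred + int(res))
    return np.array(x, dtype=np.int64)

t = np.arange(64)
signal = (50 * np.sin(t / 5)).astype(np.int64) + t  # smooth ramp + oscillation
enc = encode_switched(signal)
assert np.array_equal(decode_switched(enc), signal)  # exact round trip
print("chosen predictor per block:", [c for c, _ in enc])
```

In a real codec the small residuals would then be entropy coded; the compression gain comes from the residuals having a much narrower distribution than the raw samples.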

  16. Geostatistical interpolation model selection based on ArcGIS and spatio-temporal variability analysis of groundwater level in piedmont plains, northwest China.

    PubMed

    Xiao, Yong; Gu, Xiaomin; Yin, Shiyang; Shao, Jingli; Cui, Yali; Zhang, Qiulan; Niu, Yong

    2016-01-01

Based on geostatistical theory and the ArcGIS geostatistical module, data from 30 groundwater level observation wells were used to estimate the decline of the groundwater level in the Beijing piedmont. Seven different interpolation methods (inverse distance weighted interpolation, global polynomial interpolation, local polynomial interpolation, tension spline interpolation, ordinary Kriging interpolation, simple Kriging interpolation and universal Kriging interpolation) were used for interpolating the groundwater level between 2001 and 2013. Cross-validation, absolute error and the coefficient of determination (R(2)) were applied to evaluate the accuracy of the different methods. The result shows that the simple Kriging method gave the best fit. The analysis of spatial and temporal variability suggests that the nugget effects from 2001 to 2013 were increasing, which means the spatial correlation weakened gradually under the influence of human activities. The spatial variability in the middle areas of the alluvial-proluvial fan is relatively higher than in the top and bottom areas. Owing to changes in land use, the groundwater level also shows temporal variation: the average decline rate of the groundwater level between 2007 and 2013 increased compared with 2001-2006. Urban development and population growth cause over-exploitation in residential and industrial areas. The decline rate of the groundwater level in residential, industrial and river areas is relatively high, while the decrease in farmland area and the development of water-saving irrigation have reduced the quantity of water used by agriculture, so the decline rate of the groundwater level in agricultural areas is not significant.
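The simplest of the seven interpolators, inverse distance weighting, can be sketched as follows (synthetic well coordinates and levels, purely illustrative):

```python
import numpy as np

def idw(xy_obs, z_obs, xy_query, power=2.0):
    """Inverse distance weighted interpolation.

    Exact at observation points; elsewhere a weighted mean of the
    observations with weights 1 / d^power."""
    z = np.empty(len(xy_query))
    for i, q in enumerate(xy_query):
        d = np.linalg.norm(xy_obs - q, axis=1)
        if np.any(d == 0):                 # query coincides with a well
            z[i] = z_obs[np.argmin(d)]
        else:
            w = 1.0 / d ** power
            z[i] = np.sum(w * z_obs) / np.sum(w)
    return z

# synthetic groundwater levels at 30 observation wells
rng = np.random.default_rng(3)
wells = rng.uniform(0, 10, size=(30, 2))
levels = 40.0 - 0.8 * wells[:, 0] + 0.3 * wells[:, 1]

grid = np.array([[5.0, 5.0], wells[0]])
print(idw(wells, levels, grid))
```

Unlike Kriging, IDW ignores the spatial covariance structure (the nugget and range discussed above), which is why the Kriging variants can outperform it when a variogram fits the data well.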

  17. A Bayesian analysis of trends in ozone sounding data series from 9 Nordic stations

    NASA Astrophysics Data System (ADS)

    Christiansen, Bo; Jepsen, Nis; Larsen, Niels; Korsholm, Ulrik S.

    2016-04-01

Ozone soundings from 9 Nordic stations have been homogenized and interpolated to standard pressure levels. The different stations have very different data coverage; the longest period with data extends from the end of the 1980s to 2013. We apply a model which includes low-frequency variability in the form of a polynomial, an annual cycle with harmonics, the possibility of low-frequency variability in the annual amplitude and phasing, and either white noise or AR1 noise. The fitting of the parameters is performed with a Bayesian approach, giving not only the posterior mean values but also credible intervals. We find that all stations agree on a well-defined annual cycle in the free troposphere with a relatively confined maximum in early summer. Regarding the low-frequency variability, we find that Scoresbysund, Ny Aalesund, and Sodankyla show similar structures, with a maximum near 2005 followed by a decrease. However, these results are only weakly significant. A significant change in the amplitude of the annual cycle was found only for Ny Aalesund, where the peak-to-peak amplitude changes from 0.9 to 0.8 mhPa between 1995-2000 and 2007-2012. The results are shown to be robust to different settings of the model parameters (order of the polynomial, number of harmonics in the annual cycle, type of noise, etc.), and to be characteristic of all pressure levels in the free troposphere.
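The deterministic part of such a model (polynomial trend plus annual cycle with harmonics) can be fitted by ordinary least squares, as in this sketch with synthetic monthly data (the Bayesian estimation, time-varying amplitude, and AR1 noise of the paper are omitted):

```python
import numpy as np

def fit_trend_cycle(t, y, poly_order=2, n_harmonics=2):
    """Least-squares fit of a low-order polynomial trend plus an annual
    cycle with harmonics (t in years)."""
    cols = [t ** p for p in range(poly_order + 1)]
    for h in range(1, n_harmonics + 1):
        # a cos/sin pair per harmonic lets amplitude and phase be fitted linearly
        cols += [np.cos(2 * np.pi * h * t), np.sin(2 * np.pi * h * t)]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef, A @ coef

t = np.arange(0, 20, 1 / 12)                       # monthly data, 20 years
y = 3.0 + 0.05 * t - 0.002 * t ** 2 + 1.2 * np.sin(2 * np.pi * t + 0.4)
coef, fit = fit_trend_cycle(t, y)
print("RMS residual:", np.sqrt(np.mean((y - fit) ** 2)))
```

The sin(2πt + 0.4) term is captured exactly by the first cos/sin pair, which is why phase never needs to enter the design matrix non-linearly.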

  18. Algorithm for Compressing Time-Series Data

    NASA Technical Reports Server (NTRS)

    Hawkins, S. Edward, III; Darlington, Edward Hugo

    2012-01-01

    An algorithm based on Chebyshev polynomials effects lossy compression of time-series data or other one-dimensional data streams (e.g., spectral data) that are arranged in blocks for sequential transmission. The algorithm was developed for use in transmitting data from spacecraft scientific instruments to Earth stations. In spite of its lossy nature, the algorithm preserves the information needed for scientific analysis. The algorithm is computationally simple, yet compresses data streams by factors much greater than two. The algorithm is not restricted to spacecraft or scientific uses: it is applicable to time-series data in general. The algorithm can also be applied to general multidimensional data that have been converted to time-series data, a typical example being image data acquired by raster scanning. However, unlike most prior image-data-compression algorithms, this algorithm neither depends on nor exploits the two-dimensional spatial correlations that are generally present in images. In order to understand the essence of this compression algorithm, it is necessary to understand that the net effect of this algorithm and the associated decompression algorithm is to approximate the original stream of data as a sequence of finite series of Chebyshev polynomials. For the purpose of this algorithm, a block of data or interval of time for which a Chebyshev polynomial series is fitted to the original data is denoted a fitting interval. Chebyshev approximation has two properties that make it particularly effective for compressing serial data streams with minimal loss of scientific information: The errors associated with a Chebyshev approximation are nearly uniformly distributed over the fitting interval (this is known in the art as the "equal error property"); and the maximum deviations of the fitted Chebyshev polynomial from the original data have the smallest possible values (this is known in the art as the "min-max property").
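A block-wise Chebyshev fit of this kind can be sketched with numpy's Chebyshev routines (synthetic signal; the flight algorithm's coefficient quantization and error control are omitted):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def compress_blocks(data, block=64, degree=7):
    """Fit a Chebyshev series to each block of a data stream.

    Keeping degree+1 coefficients per block of `block` samples gives a
    compression factor of block / (degree + 1), at the cost of the
    fitting error on each fitting interval."""
    x = np.linspace(-1, 1, block)         # fitting interval mapped to [-1, 1]
    return [C.chebfit(x, data[s:s + block], degree)
            for s in range(0, len(data) - block + 1, block)]

def decompress_blocks(coeffs, block=64):
    x = np.linspace(-1, 1, block)
    return np.concatenate([C.chebval(x, c) for c in coeffs])

t = np.linspace(0, 4, 256)
stream = np.exp(-t) * np.sin(5 * t)       # smooth "instrument" signal
coeffs = compress_blocks(stream)
recon = decompress_blocks(coeffs)
print("compression factor:", 256 / (len(coeffs) * 8))
print("max abs error:", np.max(np.abs(stream - recon)))
```

For smooth data the near-uniform error of the Chebyshev fit (the equal error property mentioned above) keeps the worst-case loss small even at a compression factor of 8.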

  19. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.

  20. The parameters of death: a consideration of the quantity of information in a life table using a polynomial representation of the survivorship curve.

    PubMed

    Anson, J

    1988-08-01

    How much unique information is contained in any life table? The logarithmic survivorship (lx) columns of 360 empirical life tables were fitted by a weighted fifth degree polynomial, and it is shown that six parameters are adequate to reproduce these curves almost flawlessly. However, these parameters are highly intercorrelated, so that a two-dimensional representation would be adequate to express the similarities and differences among life tables. It is thus concluded that a life table contains but two unique pieces of information, these being the level of mortality in the population which it represents, and the relative shape of the underlying mortality curve.
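The fitting step can be sketched as follows (a synthetic Gompertz-type survivorship curve rather than the 360 empirical tables, and the weighting scheme is omitted):

```python
import numpy as np

def survivorship_parameters(ages, lx, degree=5):
    """Fit a fifth-degree polynomial to the logarithmic survivorship
    curve; the six coefficients summarise the life table."""
    x = ages / ages.max()                  # normalise age to [0, 1]
    return np.polyfit(x, np.log(lx), degree)

# synthetic survivorship: radix 100000 with Gompertz-like mortality
ages = np.arange(0, 101, 5, dtype=float)
mu = 0.0005 * np.exp(0.08 * ages)          # force of mortality
lx = 100000 * np.exp(-np.cumsum(mu) * 5)
coeffs = survivorship_parameters(ages, lx)
fit = np.polyval(coeffs, ages / ages.max())
print("coefficients:", coeffs.round(3))
print("max log-lx error:", np.max(np.abs(np.log(lx) - fit)))
```

The six fitted coefficients are the "parameters of death" of the title; the paper's point is that across real life tables they are so intercorrelated that only two dimensions of variation remain.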

  1. The link between eddy-driven jet variability and weather regimes in the North Atlantic-European sector

    NASA Astrophysics Data System (ADS)

    Madonna, E.; Li, C.; Grams, C. M.; Woollings, T.

    2017-12-01

    Understanding the variability of the North Atlantic eddy-driven jet is key to unravelling the dynamics, predictability and climate change response of extratropical weather in the region. This study aims to 1) reconcile two perspectives on wintertime variability in the North Atlantic-European sector and 2) clarify their link to atmospheric blocking. Two common views of wintertime variability in the North Atlantic are the zonal-mean framework comprising three preferred locations of the eddy-driven jet (southern, central, northern), and the weather regime framework comprising four classical North Atlantic-European regimes (Atlantic ridge AR, zonal ZO, European/Scandinavian blocking BL, Greenland anticyclone GA). We use a k-means clustering algorithm to characterize the two-dimensional variability of the eddy-driven jet stream, defined by the lower tropospheric zonal wind in the ERA-Interim reanalysis. The first three clusters capture the central jet and northern jet, along with a new mixed jet configuration; a fourth cluster is needed to recover the southern jet. The mixed cluster represents a split or strongly tilted jet, neither of which is well described in the zonal-mean framework, and has a persistence of about one week, similar to the other clusters. Connections between the preferred jet locations and weather regimes are corroborated - southern to GA, central to ZO, and northern to AR. In addition, the new mixed cluster is found to be linked to European/Scandinavian blocking, whose relation to the eddy-driven jet was previously unclear. The results highlight the necessity of bridging from weather to climate scales for a deeper understanding of atmospheric circulation variability.

  2. Asymptotically extremal polynomials with respect to varying weights and application to Sobolev orthogonality

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2008-10-01

    We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e-[phi](x), giving a unified treatment for the so-called Freud (i.e., when [phi] has polynomial growth at infinity) and Erdös (when [phi] grows faster than any polynomial at infinity) cases. In addition, we provide a new proof for the bound of the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.

  3. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach

    NASA Astrophysics Data System (ADS)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems which is represented by a discrete-time polynomial fuzzy model. Most of the existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee their Lyapunov function to be a radially unbounded polynomial function; hence, global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  4. Random complex dynamics and devil's coliseums

    NASA Astrophysics Data System (ADS)

    Sumi, Hiroki

    2015-04-01

    We investigate the random dynamics of polynomial maps on the Riemann sphere Ĉ and the dynamics of semigroups of polynomial maps on Ĉ. In particular, the dynamics of a semigroup G of polynomials whose planar postcritical set is bounded, and the associated random dynamics, are studied. In general, the Julia set of such a G may be disconnected. We show that if G is such a semigroup, then, regarding the associated random dynamics, the chaos of the averaged system disappears in the C0 sense, and the function T∞ giving the probability of tending to ∞ ∈ Ĉ is Hölder continuous on Ĉ and varies only on the Julia set of G. Moreover, the function T∞ has a kind of monotonicity. It turns out that T∞ is a complex analogue of the devil's staircase, and we call T∞ a 'devil's coliseum'. We investigate the details of T∞ when G is generated by two polynomials. In this case, T∞ varies precisely on the Julia set of G, which is a thin fractal set. Moreover, under this condition, we investigate the pointwise Hölder exponents of T∞.

  5. A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.

    NASA Technical Reports Server (NTRS)

    Harris, J. D.

    1971-01-01

    The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.
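
    The Newton-Bairstow iteration for extracting quadratic factors can be sketched in a few lines. The sketch below is a generic (non-interval) Bairstow iteration in Python, not the thesis's interval-arithmetic program; names and starting values are illustrative.

```python
import cmath

def bairstow(a, u, v, tol=1e-12, max_iter=100):
    """Extract a quadratic factor x^2 + u*x + v from the polynomial with
    descending coefficients a[0]*x^n + ... + a[n] via Newton iteration."""
    n = len(a) - 1
    for _ in range(max_iter):
        b = [0.0] * (n + 1)   # synthetic division of a by x^2 + u x + v
        c = [0.0] * (n + 1)   # second division: gives the partial derivatives
        b[0] = a[0]
        b[1] = a[1] - u * b[0]
        for i in range(2, n + 1):
            b[i] = a[i] - u * b[i - 1] - v * b[i - 2]
        c[0] = b[0]
        c[1] = b[1] - u * c[0]
        for i in range(2, n + 1):
            c[i] = b[i] - u * c[i - 1] - v * c[i - 2]
        # Newton step: the remainder coefficients b[n-1], b[n] should vanish
        det = c[n - 2] ** 2 - c[n - 3] * c[n - 1]
        du = (b[n - 1] * c[n - 2] - b[n] * c[n - 3]) / det
        dv = (b[n] * c[n - 2] - b[n - 1] * c[n - 1]) / det
        u, v = u + du, v + dv
        if abs(du) + abs(dv) < tol:
            break
    return u, v

def quadratic_roots(u, v):
    """Roots of x^2 + u*x + v (possibly complex)."""
    d = cmath.sqrt(u * u - 4 * v)
    return (-u + d) / 2, (-u - d) / 2
```

    For example, p(x) = x⁴ + 4x³ + 6x² + 5x + 2 factors as (x² + 3x + 2)(x² + x + 1); starting from a nearby guess, the iteration converges to one of these quadratic factors, whose roots are then eigenvalue candidates.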

  6. Epidemics in networks: a master equation approach

    NASA Astrophysics Data System (ADS)

    Cotacallapa, M.; Hase, M. O.

    2016-02-01

    A problem closely related to epidemiology, where a subgraph of ‘infected’ links is defined inside a larger network, is investigated. This subgraph is generated from the underlying network by a random variable, which decides whether a link is able to propagate a disease/information. The relaxation timescale of this random variable is examined in both annealed and quenched limits, and the effectiveness of propagation of disease/information is analyzed. The dynamics of the model is governed by a master equation and two types of underlying network are considered: one is scale-free and the other has exponential degree distribution. We have shown that the relaxation timescale of the contagion variable has a major influence on the topology of the subgraph of infected links, which determines the efficiency of spreading of disease/information over the network.

  7. Comparison of polynomial and neural fuzzy models as applied to the ethanolamine pulping of vine shoots.

    PubMed

    Jiménez, L; Angulo, V; Caparrós, S; Ariza, J

    2007-12-01

    The influence of operational variables in the pulping of vine shoots with ethanolamine [viz. temperature (155-185 °C), cooking time (30-90 min) and ethanolamine concentration (50-70% v/v)] on the properties of the resulting pulp (viz. yield, kappa index, viscosity and drainability) was studied. A central composite factorial design was used in conjunction with the software BMDP and ANFIS Edit Matlab 6.5 to develop polynomial and neural fuzzy models that reproduced the experimental results for the dependent variables with errors of less than 10%. Both types of models are therefore effective for simulating the ethanolamine pulping process. Based on the proposed equations, the best choice is to use values of the operational variables resulting in near-optimal pulp properties while saving energy and immobilized capital in industrial facilities by using lower temperatures and shorter processing times. One combination leading to near-optimal properties with reduced costs is a temperature of 180 °C and an ethanolamine concentration of 60% for 60 min, which yields pulp with a viscosity 6.13% below the maximum value (932.8 ml/g) and a drainability 5.49% below the maximum value (71 °SR).

  8. Hybrid High-Order methods for finite deformations of hyperelastic materials

    NASA Astrophysics Data System (ADS)

    Abbas, Mickaël; Ern, Alexandre; Pignet, Nicolas

    2018-01-01

    We devise and evaluate numerically Hybrid High-Order (HHO) methods for hyperelastic materials undergoing finite deformations. The HHO methods use as discrete unknowns piecewise polynomials of order k≥1 on the mesh skeleton, together with cell-based polynomials that can be eliminated locally by static condensation. The discrete problem is written as the minimization of a broken nonlinear elastic energy where a local reconstruction of the displacement gradient is used. Two HHO methods are considered: a stabilized method where the gradient is reconstructed as a tensor-valued polynomial of order k and a stabilization is added to the discrete energy functional, and an unstabilized method which reconstructs a stable higher-order gradient and circumvents the need for stabilization. Both methods satisfy the principle of virtual work locally with equilibrated tractions. We present a numerical study of the two HHO methods on test cases with known solution and on more challenging three-dimensional test cases including finite deformations with strong shear layers and cavitating voids. We assess the computational efficiency of both methods, and we compare our results to those obtained with an industrial software using conforming finite elements and to results from the literature. The two HHO methods exhibit robust behavior in the quasi-incompressible regime.

  9. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and the numerical solution is then obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. In special cases, the Jacobi polynomials reduce to well-known polynomials, namely (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind and (4) the Chebyshev polynomials of the fourth kind. The maximum absolute error and root mean square error are calculated for the illustrated examples and presented in the form of tables for comparison purposes. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better.
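
    The reduction to the classical special cases is easy to check numerically from the three-term recurrence for the Jacobi polynomials P_n^{(α,β)}. The sketch below (plain Python, illustrative only, not the paper's operational-matrix method) verifies that α = β = 0 recovers the Legendre polynomials.

```python
def jacobi(n, a, b, x):
    """Evaluate the Jacobi polynomial P_n^{(a,b)}(x) by the standard
    three-term recurrence."""
    if n == 0:
        return 1.0
    p_prev, p = 1.0, (a + b + 2) / 2 * x + (a - b) / 2
    for k in range(2, n + 1):
        c1 = 2 * k * (k + a + b) * (2 * k + a + b - 2)
        c2 = (2 * k + a + b - 1) * (
            (2 * k + a + b) * (2 * k + a + b - 2) * x + a * a - b * b)
        c3 = 2 * (k + a - 1) * (k + b - 1) * (2 * k + a + b)
        p_prev, p = p, (c2 * p - c3 * p_prev) / c1
    return p
```

    With α = β = 0 this reproduces Legendre's P₂(x) = (3x² − 1)/2 and P₃(x) = (5x³ − 3x)/2; the Chebyshev cases correspond to α = β = ±1/2 up to normalization.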

  10. Percolation critical polynomial as a graph invariant

    DOE PAGES

    Scullard, Christian R.

    2012-10-18

    Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10^-7.
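
    The deletion-contraction recursion referred to above is the same scheme used to compute the Tutte polynomial. A minimal illustrative sketch (for the Tutte polynomial of a small multigraph, not the critical polynomial itself) is:

```python
def _connected(u, v, edges):
    """Is v reachable from u using the given undirected edges?"""
    reach, frontier = {u}, [u]
    while frontier:
        w = frontier.pop()
        for a, b in edges:
            for x, y in ((a, b), (b, a)):
                if x == w and y not in reach:
                    reach.add(y)
                    frontier.append(y)
    return v in reach

def _add(p, q):
    out = dict(p)
    for k, c in q.items():
        out[k] = out.get(k, 0) + c
    return out

def _shift(p, dx, dy):
    """Multiply a polynomial {(x_exp, y_exp): coeff} by x^dx * y^dy."""
    return {(i + dx, j + dy): c for (i, j), c in p.items()}

def tutte(edges):
    """Tutte polynomial T(x, y) of a connected multigraph, returned as a
    dict {(x_exponent, y_exponent): coefficient}."""
    if not edges:
        return {(0, 0): 1}
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                  # loop: multiply by y
        return _shift(tutte(rest), 0, 1)
    contracted = [(u if a == v else a, u if b == v else b) for a, b in rest]
    if _connected(u, v, rest):                  # ordinary edge: delete + contract
        return _add(tutte(rest), tutte(contracted))
    return _shift(tutte(contracted), 1, 0)      # bridge: x times contraction
```

    For the triangle K3 this yields T(x, y) = x² + x + y; the critical polynomial of the paper is computed on finite subgraphs by an analogous recursion.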

  11. T1 mapping with the variable flip angle technique: A simple correction for insufficient spoiling of transverse magnetization.

    PubMed

    Baudrexel, Simon; Nöth, Ulrike; Schüre, Jan-Rüdiger; Deichmann, Ralf

    2018-06-01

    The variable flip angle method derives T1 maps from radiofrequency-spoiled gradient-echo data sets acquired with different flip angles α. Because the method assumes validity of the Ernst equation, insufficient spoiling of transverse magnetization yields errors in T1 estimation, depending on the chosen radiofrequency-spoiling phase increment (Δϕ). This paper presents a versatile correction method that uses modified flip angles α' to restore the validity of the Ernst equation. Spoiled gradient-echo signals were simulated for three commonly used phase increments Δϕ (50°/117°/150°), different values of α, repetition time (TR) and T1, and a T2 of 85 ms. For each parameter combination, α' (for which the Ernst equation yielded the same signal) and a correction factor C_Δϕ(α, TR, T1) = α'/α were determined. C_Δϕ was found to be independent of T1 and was fitted as a polynomial C_Δϕ(α, TR), allowing α' to be calculated for any protocol using this Δϕ. The accuracy of the correction method for T2 values deviating from 85 ms was also determined. The method was tested in vitro and in vivo for variable flip angle scans with different acquisition parameters. The technique considerably improved the accuracy of variable flip angle-based T1 maps in vitro and in vivo. The proposed method allows for a simple correction of insufficient spoiling in gradient-echo data. The required polynomial parameters are supplied for the three common Δϕ. Magn Reson Med 79:3082-3092, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
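
    The underlying Ernst-equation fit itself (assuming ideal spoiling, i.e. without the paper's correction) is commonly linearized as S/sin α = E1 · S/tan α + M0(1 − E1), with E1 = exp(−TR/T1). A small illustrative sketch of this DESPOT1-style fit, with hypothetical parameter values, is:

```python
import math

def ernst_signal(t1_ms, tr_ms, flip_deg, m0=1.0):
    """Ideal spoiled gradient-echo signal given by the Ernst equation."""
    e1 = math.exp(-tr_ms / t1_ms)
    a = math.radians(flip_deg)
    return m0 * math.sin(a) * (1 - e1) / (1 - e1 * math.cos(a))

def vfa_t1(signals, flips_deg, tr_ms):
    """Estimate T1 from spoiled gradient-echo signals at several flip angles
    by linear regression of S/sin(a) on S/tan(a); the slope equals E1."""
    xs = [s / math.tan(math.radians(a)) for s, a in zip(signals, flips_deg)]
    ys = [s / math.sin(math.radians(a)) for s, a in zip(signals, flips_deg)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -tr_ms / math.log(slope)   # slope = exp(-TR/T1)
```

    On signals simulated with perfect spoiling the input T1 is recovered exactly; the paper's contribution is the polynomial correction α → α' that restores this behavior when spoiling is imperfect.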

  12. Democratic superstring field theory: gauge fixing

    NASA Astrophysics Data System (ADS)

    Kroyter, Michael

    2011-03-01

    We show that a partial gauge fixing of the NS sector of the democratic-picture superstring field theory leads to the non-polynomial theory. Moreover, by partially gauge fixing the Ramond sector we obtain a non-polynomial fully RNS theory at pictures 0 and 1/2. Within the democratic theory and in the partially gauge fixed theory the equations of motion of both sectors are derived from an action. We also discuss a representation of the non-polynomial theory analogous to a manifestly two-dimensional representation of WZW theory, and the action of bosonic pure-gauge solutions. We further demonstrate that one can consistently gauge fix the NS sector of the democratic theory at picture number -1. The resulting theory is new: it is a ℤ₂ dual of the modified cubic theory. We construct analytical solutions of this theory and show that they possess the desired properties.

  13. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by applying the Riemann-Liouville type operator to them. Various properties, such as the explicit representation in terms of hypergeometric functions, differential equations and recurrence relations, are derived.

  14. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Chang, J. J.; Shyu, H. C.; Reed, I. S.

    1986-01-01

    A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-pw technology.
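
    The two-multiplication trick can be illustrated with a toy example (illustrative only; the VLSI design itself is not reproduced here). Modulo the Fermat number F₃ = 257, the element j = 16 satisfies j² ≡ −1 (mod 257), so the complex integer a + bi maps to the residue pair (a + 16b, a − 16b) mod 257, and complex multiplication becomes componentwise, costing two integer multiplications instead of four.

```python
F = 257            # Fermat number F_3 = 2^8 + 1
J = 16             # 16^2 = 256 ≡ -1 (mod 257), so J plays the role of i

def to_qfns(a, b):
    """Map the complex integer a + b*i to its two-residue representation."""
    return ((a + J * b) % F, (a - J * b) % F)

def from_qfns(u, v):
    """Invert the map: recover (a, b) modulo F."""
    inv2 = pow(2, -1, F)
    inv2j = pow(2 * J, -1, F)
    return ((u + v) * inv2 % F, (u - v) * inv2j % F)

def qfns_mul(z1, z2):
    """Complex multiplication mod F using only two integer multiplications."""
    u1, v1 = to_qfns(*z1)
    u2, v2 = to_qfns(*z2)
    return from_qfns(u1 * u2 % F, v1 * v2 % F)
```

    For instance, (3 + 5i)(7 + 2i) = 11 + 41i, and qfns_mul((3, 5), (7, 2)) returns (11, 41).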

  15. A new VLSI complex integer multiplier which uses a quadratic-polynomial residue system with Fermat numbers

    NASA Technical Reports Server (NTRS)

    Shyu, H. C.; Reed, I. S.; Truong, T. K.; Hsu, I. S.; Chang, J. J.

    1987-01-01

    A quadratic-polynomial Fermat residue number system (QFNS) has been used to compute complex integer multiplications. The advantage of such a QFNS is that a complex integer multiplication requires only two integer multiplications. In this article, a new type of Fermat number multiplier is developed which eliminates the initialization condition of the previous method. It is shown that the new complex multiplier can be implemented on a single VLSI chip. Such a chip is designed and fabricated in CMOS-Pw technology.

  16. Exploring the potential energy landscape over a large parameter-space

    NASA Astrophysics Data System (ADS)

    He, Yang-Hui; Mehta, Dhagash; Niemerg, Matthew; Rummel, Markus; Valeanu, Alexandru

    2013-07-01

    Large polynomial systems with coefficient parameters are ubiquitous and constitute an important class of problems. We demonstrate the computational power of two methods for solving them, a symbolic one (the comprehensive Gröbner basis) and a numerical one (coefficient-parameter polynomial continuation), applied to studying both potential energy landscapes and a variety of questions arising from geometry and phenomenology. Particular attention is paid to an example in flux compactification where important physical quantities such as the gravitino and moduli masses and the string coupling can be efficiently extracted.

  17. Non-polynomial closed string field theory: loops and conformal maps

    NASA Astrophysics Data System (ADS)

    Hua, Long; Kaku, Michio

    1990-11-01

    Recently, we proposed the complete classical action for the non-polynomial closed string field theory, which successfully reproduced all closed string tree amplitudes. (The action was simultaneously proposed by the Kyoto group.) In this paper, we analyze the structure of the theory. We (a) compute the explicit conformal map for all g-loop, p-puncture diagrams, (b) compute all one-loop, two-puncture maps in terms of hyper-elliptic functions, and (c) analyze their modular structure. We analyze, but do not resolve, the question of modular invariance.

  18. A comparison of polynomial approximations and artificial neural nets as response surfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.; Barthelemy, Jean-Francois M.

    1992-01-01

    Artificial neural nets and polynomial approximations were used to develop response surfaces for several test problems. Based on the number of functional evaluations required to build the approximations and the number of undetermined parameters associated with the approximations, the performance of the two types of approximations was found to be comparable. A rule of thumb is developed for determining the number of nodes to be used on a hidden layer of an artificial neural net, and the number of designs needed to train an approximation is discussed.

  19. Two new templates for epidemiology applications: linked micromap plots and conditioned choropleth maps.

    PubMed

    Carr, D B; Wallin, J F; Carr, D A

    This paper describes two interactive templates for representing spatially indexed estimates. Both templates use a matrix layout of small panels. The first template, called linked micromap plots, can represent multivariate estimates associated with each spatially indexed study unit. The second template, called conditioned choropleth maps, shows the connection between a dependent variable, as represented in a classed choropleth map, and two explanatory variables. The paper describes the cognitive considerations that motivate the layouts and representation details. The discussion also addresses topics of data quality and access, hypothesis generation, and interactive features such as pan and zoom and dynamic conditioning via sliders. The examples show epidemiological (mortality rates) and environmental (toxic concentrations) applications. Copyright 2000 John Wiley & Sons, Ltd.

  20. The Gibbs Phenomenon for Series of Orthogonal Polynomials

    ERIC Educational Resources Information Center

    Fay, T. H.; Kloppers, P. Hendrik

    2006-01-01

    This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…

  1. Determinants with orthogonal polynomial entries

    NASA Astrophysics Data System (ADS)

    Ismail, Mourad E. H.

    2005-06-01

    We use moment representations of orthogonal polynomials to evaluate the corresponding Hankel determinants formed by the orthogonal polynomials. We also study the Hankel determinants which have p_n in the top left-hand corner. As examples we evaluate the Hankel determinants whose entries are q-ultraspherical or Al-Salam-Chihara polynomials.
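
    Hankel determinants built from a moment sequence can be checked directly in a few lines. A classic illustrative example (not taken from the paper) is that every Hankel determinant det[C_{i+j}] of the Catalan numbers equals 1; the sketch below computes such determinants exactly with rational arithmetic.

```python
from fractions import Fraction

def catalan(n):
    """Catalan numbers C_0, ..., C_n via C_k = C_{k-1} * 2(2k-1) / (k+1)."""
    c = [1] * (n + 1)
    for k in range(1, n + 1):
        c[k] = c[k - 1] * 2 * (2 * k - 1) // (k + 1)
    return c

def hankel_det(moments, n):
    """Determinant of the n x n Hankel matrix H[i][j] = moments[i + j],
    computed by exact fraction-free-safe Gaussian elimination."""
    m = [[Fraction(moments[i + j]) for j in range(n)] for i in range(n)]
    det = Fraction(1)
    for col in range(n):
        pivot = next((r for r in range(col, n) if m[r][col] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != col:
            m[col], m[pivot] = m[pivot], m[col]
            det = -det
        det *= m[col][col]
        for r in range(col + 1, n):
            factor = m[r][col] / m[col][col]
            for c in range(col, n):
                m[r][c] -= factor * m[col][c]
    return det
```

    For example, hankel_det(catalan(8), 5) evaluates the 5 × 5 Catalan Hankel determinant, which is 1.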

  2. From sequences to polynomials and back, via operator orderings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu

    2013-12-15

    Bender and Dunne ["Polynomials and operator orderings," J. Math. Phys. 29, 1727-1731 (1988)] showed that linear combinations of words q^k p^n q^{n-k}, where p and q are subject to the relation qp − pq = i, may be expressed as a polynomial in the symbol z = (qp + pq)/2. Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.

  3. Advances of the reverse lactate threshold test: Non-invasive proposal based on heart rate and effect of previous cycling experience

    PubMed Central

    2018-01-01

    Our first aim was to compare the anaerobic threshold (AnT) determined by the incremental protocol with that from the reverse lactate threshold test (RLT), investigating the effect of previous cycling experience. Secondarily, an alternative RLT application based on heart rate was proposed. Two groups (12 per group, according to cycling experience) were evaluated on a cycle ergometer. The incremental protocol started at 25 W with increments of 25 W every 3 minutes, and the AnT was calculated by the bissegmentation, onset of blood lactate concentration and maximal deviation methods. The RLT was applied in two phases: a) a lactate priming segment; and b) a reverse segment; the AnT (AnTRLT) was calculated based on a second-order polynomial function. The AnT based on heart rate (AnTRLT-HR) was likewise calculated from the RLT by a second-order polynomial function. Regarding Study 1, most of the statistical procedures indicated similarity between the AnT determined from the bissegmentation method and AnTRLT. For 83% of non-experienced and 75% of experienced subjects the bias was 4% and 2%, respectively. In Study 2, no difference was found between the AnTRLT and AnTRLT-HR. For 83% of non-experienced and 91% of experienced subjects, the bias between AnTRLT and AnTRLT-HR was similar (i.e. 6%). In summary, the AnT determined by the incremental protocol and the RLT are consistent. The AnT can be determined during the RLT via heart rate, improving its applicability. However, future studies are required to improve the agreement between variables. PMID:29534108

  4. Compatible Models of Carbon Content of Individual Trees on a Cunninghamia lanceolata Plantation in Fujian Province, China

    PubMed Central

    Zhuo, Lin; Tao, Hong; Wei, Hong; Chengzhen, Wu

    2016-01-01

    We sought to establish compatible carbon content models of individual trees for a Chinese fir (Cunninghamia lanceolata (Lamb.) Hook.) plantation in Fujian province in southeast China. In general, compatibility requires that the sum of the components equal the whole tree, meaning that the sum of percentages calculated from the component equations should equal 100%. Thus, we used multiple approaches to simulate the carbon content in boles, branches, foliage leaves, roots and whole individual trees. The approaches included (i) single optimal fitting (SOF), (ii) nonlinear adjustment in proportion (NAP) and (iii) nonlinear seemingly unrelated regression (NSUR). These approaches were used in combination with variables based on diameter at breast height (D) and tree height (H), namely D, D²H, DH and D&H (where D&H denotes two separate variables in a bivariate model). Power, exponential and polynomial functions were tested, and a new general function model was proposed in this study. Weighted least squares regression models were employed to eliminate heteroscedasticity. Model performance was evaluated using mean residuals, residual variance, mean square error and the coefficient of determination. The results indicated that models with combined variables (DH, D²H and D&H) were always superior to those with a single variable (D). The D&H variable combination was found to be the most useful predictor. Of all the approaches, SOF could establish a single optimal model separately, but there were deviations in the estimates due to incompatibilities, while NAP and NSUR ensured the compatibility of the predictions. At the same time, we found that the new general model had better accuracy than the others. In conclusion, we recommend that the new general model be used to estimate carbon content for Chinese fir, and that it be considered for other vegetation types as well. PMID:26982054

  5. Extending Romanovski polynomials in quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesne, C.

    2013-12-15

    Some extensions of the (third-class) Romanovski polynomials (also called Romanovski/pseudo-Jacobi polynomials), which appear in bound-state wavefunctions of rationally extended Scarf II and Rosen-Morse I potentials, are considered. For the former potentials, the generalized polynomials satisfy a finite orthogonality relation, while for the latter an infinite set of relations among polynomials with degree-dependent parameters is obtained. Both types of relations are counterparts of those known for conventional polynomials. In the absence of any direct information on the zeros of the Romanovski polynomials present in denominators, the regularity of the constructed potentials is checked by taking advantage of the disconjugacy properties of second-order differential equations of Schrödinger type. It is also shown that on going from Scarf I to Scarf II or from Rosen-Morse II to Rosen-Morse I potentials, the variety of rational extensions is narrowed down from types I, II, and III to type III only.

  6. Polynomial solutions of the Monge-Ampère equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aminov, Yu A

    2014-11-30

    The question of the existence of polynomial solutions to the Monge-Ampère equation z_xx z_yy − z_xy² = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x,y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found, and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.

  7. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences, and several methods have been developed to solve such equations. In this study we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method for fuzzy numbers. The ranking method concept was first proposed for finding real roots of fuzzy polynomial equations; here, it is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations using the ranking method for fuzzy numbers, which is based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  8. Characteristics of solitary waves, quasiperiodic solutions, homoclinic breather solutions and rogue waves in the generalized variable-coefficient forced Kadomtsev-Petviashvili equation

    NASA Astrophysics Data System (ADS)

    Yan, Xue-Wei; Tian, Shou-Fu; Dong, Min-Jie; Zou, Li

    2017-12-01

    In this paper, the generalized variable-coefficient forced Kadomtsev-Petviashvili (gvcfKP) equation is investigated, which can be used to characterize water waves of long wavelength relating to nonlinear restoring forces. Using a dependent variable transformation and the Bell polynomials, we derive the bilinear form of the gvcfKP equation. By virtue of the bilinear form, its solitary waves are computed in a very direct way. Using the Riemann theta function, we derive the quasiperiodic solutions of the equation under some limiting conditions. Besides, an effective way to calculate its homoclinic breather waves and rogue waves, respectively, is presented by using an extended homoclinic test function. We hope that our results can help enrich the understanding of the dynamical behavior of nonlinear wave equations with variable coefficients.

  9. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  10. A note on the zeros of Freud-Sobolev orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Moreno-Balcazar, Juan J.

    2007-10-01

    We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function e^{-x⁴} on ℝ are real, simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function e^{-x⁴}. Some numerical examples are shown.

  11. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  12. Does preprocessing change nonlinear measures of heart rate variability?

    PubMed

    Gomes, Murilo E D; Guimarães, Homero N; Ribeiro, Antônio L P; Aguirre, Luis A

    2002-11-01

    This work investigated whether methods used to produce a uniformly sampled heart rate variability (HRV) time series significantly change the deterministic signature underlying the dynamics of such signals and some nonlinear measures of HRV. Two methods of preprocessing were used: convolution of inverse interval function values with a rectangular window, and cubic polynomial interpolation. The HRV time series were obtained from 33 Wistar rats submitted to autonomic blockade protocols and from 17 healthy adults. The analysis of determinism was carried out by the method of surrogate data sets and nonlinear autoregressive moving average modelling and prediction. The scaling exponents α, α1 and α2 derived from detrended fluctuation analysis were calculated from the raw HRV time series and the respective preprocessed signals. It was shown that cubic interpolation of the HRV time series did not significantly change any nonlinear characteristic studied in this work, while the convolution method affected only the α1 index. The results suggest that preprocessed time series may be used to study HRV in the field of nonlinear dynamics.
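
    Cubic polynomial interpolation of an irregularly sampled series (such as RR intervals) onto a uniform grid can be sketched as follows. This is a generic four-point Lagrange cubic, illustrative rather than the paper's exact implementation; since a four-point Lagrange interpolant reproduces cubics exactly, the test data use a cubic function.

```python
from bisect import bisect_right

def cubic_interp(t, times, values):
    """Interpolate at time t using the Lagrange cubic through the four
    samples surrounding t (times must be strictly increasing, len >= 4)."""
    i = bisect_right(times, t)
    lo = min(max(i - 2, 0), len(times) - 4)   # clamp a window of 4 points
    ts, vs = times[lo:lo + 4], values[lo:lo + 4]
    out = 0.0
    for j in range(4):
        w = 1.0
        for k in range(4):
            if k != j:
                w *= (t - ts[k]) / (ts[j] - ts[k])
        out += w * vs[j]
    return out

def resample(times, values, dt):
    """Uniformly resample an irregularly sampled series with step dt."""
    n = int((times[-1] - times[0]) / dt)
    return [cubic_interp(times[0] + i * dt, times, values)
            for i in range(n + 1)]
```

    Resampling samples of a cubic signal with this scheme reproduces the signal exactly on the uniform grid.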

  13. Beyond Euler angles: exploiting the angle-axis parametrization in a multipole expansion of the rotation operator.

    PubMed

    Siemens, Mark; Hancock, Jason; Siminovitch, David

    2007-02-01

    Euler angles (α, β, γ) are cumbersome from a computational point of view, and their link to experimental parameters is oblique. The angle-axis {Φ, n} parametrization, especially in the form of quaternions (or Euler-Rodrigues parameters), has served as the most promising alternative, and it has enjoyed considerable success in rf pulse design and optimization. We focus on the benefits of angle-axis parameters by considering a multipole operator expansion of the rotation operator D(Φ, n), and a Clebsch-Gordan expansion of the rotation matrices D^J_{MM'}(Φ, n). Each of the coefficients in the Clebsch-Gordan expansion is proportional to the product of a spherical harmonic of the vector n specifying the axis of rotation, Y_{λμ}(n), with a fixed function of the rotation angle Φ, a Gegenbauer polynomial C^{λ+1}_{2J−λ}(cos Φ/2). Several application examples demonstrate that this Clebsch-Gordan expansion gives easy and direct access to many of the parameters of experimental interest, including coherence-order changes (isolated in the Clebsch-Gordan coefficients) and the rotation angle (isolated in the Gegenbauer polynomials).
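
    The angle-axis parametrization {Φ, n} maps directly onto a unit quaternion q = (cos Φ/2, sin(Φ/2) n). A minimal illustrative sketch of rotating a vector this way (generic quaternion algebra, not the paper's multipole machinery) is:

```python
import math

def quat_from_angle_axis(phi, n):
    """Unit quaternion (w, x, y, z) for rotation by phi about unit axis n."""
    s = math.sin(phi / 2)
    return (math.cos(phi / 2), s * n[0], s * n[1], s * n[2])

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw * bw - ax * bx - ay * by - az * bz,
            aw * bx + ax * bw + ay * bz - az * by,
            aw * by - ax * bz + ay * bw + az * bx,
            aw * bz + ax * by - ay * bx + az * bw)

def rotate(v, phi, n):
    """Rotate vector v by angle phi about axis n via q v q*."""
    q = quat_from_angle_axis(phi, n)
    qc = (q[0], -q[1], -q[2], -q[3])
    w = quat_mul(quat_mul(q, (0.0, *v)), qc)
    return w[1:]
```

    For example, rotating (1, 0, 0) by π/2 about the z axis gives (0, 1, 0); note that only Φ/2 and n appear, the feature the multipole expansion exploits.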

  14. Continuous-variable quantum cryptography with an untrusted relay: Detailed security analysis of the symmetric configuration

    NASA Astrophysics Data System (ADS)

    Ottaviani, Carlo; Spedalieri, Gaetana; Braunstein, Samuel L.; Pirandola, Stefano

    2015-02-01

    We consider the continuous-variable protocol of Pirandola et al. [arXiv:1312.4104] where the secret key is established by the measurement of an untrusted relay. In this network protocol, two authorized parties are connected to an untrusted relay by insecure quantum links. Secret correlations are generated by a continuous-variable Bell detection performed on incoming coherent states. In the present work we provide a detailed study of the symmetric configuration, where the relay is midway between the parties. We analyze symmetric eavesdropping strategies against the quantum links explicitly showing that, at fixed transmissivity and thermal noise, two-mode coherent attacks are optimal, manifestly outperforming one-mode collective attacks based on independent entangling cloners. Such an advantage is shown both in terms of security threshold and secret-key rate.

  15. An improved strategy for regression of biophysical variables and Landsat ETM+ data.

    Treesearch

    Warren B. Cohen; Thomas K. Maiersperger; Stith T. Gower; David P. Turner

    2003-01-01

    Empirical models are important tools for relating field-measured biophysical variables to remote sensing data. Regression analysis has been a popular empirical method of linking these two types of data to provide continuous estimates for variables such as biomass, percent woody canopy cover, and leaf area index (LAI). Traditional methods of regression are not...

  16. Study on the mapping of dark matter clustering from real space to redshift space

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zheng, Yi; Song, Yong-Seon, E-mail: yizheng@kasi.re.kr, E-mail: ysong@kasi.re.kr

    The mapping of dark matter clustering from real space to redshift space introduces an anisotropic property to the measured density power spectrum in redshift space, known as the redshift space distortion effect. The mapping formula is intrinsically non-linear, which is complicated by the higher order polynomials due to indefinite cross correlations between the density and velocity fields, and by the Finger-of-God (FoG) effect due to the randomness of the peculiar velocity field. Whilst the full higher order polynomials remain unknown, the other systematics can be controlled consistently within the same order truncation in the expansion of the mapping formula, as shown in this paper. The systematic due to the unknown non-linear density and velocity fields is removed by separately measuring all terms in the expansion directly using simulations. The uncertainty caused by the velocity randomness is controlled by splitting the FoG term into two pieces: 1) the "one-point" FoG term, which is independent of the separation vector between two different points, and 2) the "correlated" FoG term, which appears as an indefinite polynomial that is expanded in the same order as all other perturbative polynomials. Using 100 realizations of simulations, we find that the Gaussian FoG function with only one scale-independent free parameter works quite well, and that our new mapping formulation accurately reproduces the observed 2-dimensional density power spectrum in redshift space at the smallest scales by far, up to k ∼ 0.2 Mpc^-1, considering the resolution of future experiments.
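    For orientation, the simplest textbook version of such a model is the Kaiser term multiplied by a one-parameter Gaussian FoG damping (a sketch only; b, f and sigma_v are placeholder values, and this is far simpler than the paper's full expansion):

```python
# Kaiser-plus-Gaussian-FoG sketch of the redshift-space power spectrum
# (illustrative; b, f, sigma_v are placeholder values).
import numpy as np

def p_redshift(k, mu, p_real, b=1.0, f=0.5, sigma_v=3.0):
    """(b + f mu^2)^2 P(k) times a one-parameter Gaussian FoG damping."""
    return (b + f * mu**2) ** 2 * p_real * np.exp(-((k * mu * sigma_v) ** 2))
```

    Along the line of sight (mu close to 1) the exponential suppresses small-scale power; perpendicular to it (mu = 0) the damping disappears.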

  17. New algorithms for solving third- and fifth-order two point boundary value problems based on nonsymmetric generalized Jacobi Petrov–Galerkin method

    PubMed Central

    Doha, E.H.; Abd-Elhameed, W.M.; Youssri, Y.H.

    2014-01-01

    Two families of certain nonsymmetric generalized Jacobi polynomials with negative integer indices are employed for solving third- and fifth-order two-point boundary value problems governed by homogeneous and nonhomogeneous boundary conditions using a dual Petrov–Galerkin method. The idea behind our method is to use trial functions satisfying the underlying boundary conditions of the differential equations and test functions satisfying the dual boundary conditions. The resulting linear systems from the application of our method are specially structured and can be efficiently inverted. The use of generalized Jacobi polynomials simplifies the theoretical and numerical analysis of the method and also leads to accurate and efficient numerical algorithms. The presented numerical results indicate that the proposed numerical algorithms are reliable and very efficient. PMID:26425358

  18. Two Meanings of Algorithmic Mathematics.

    ERIC Educational Resources Information Center

    Maurer, Stephen B.

    1984-01-01

    Two mathematical topics are interpreted from the viewpoints of traditional (performing algorithms) and contemporary (creating algorithms and thinking in terms of them for solving problems and developing theory) algorithmic mathematics. The two topics are Horner's method for evaluating polynomials and Gauss's method for solving systems of linear…
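    Horner's method, the first of the two topics, rewrites a_n x^n + ... + a_0 as the nested form (((a_n x + a_{n-1}) x + ...) x + a_0), needing only n multiplications. A minimal implementation:

```python
# Horner's method: evaluate a polynomial with one multiply-add per coefficient.
def horner(coeffs, x):
    """coeffs ordered from highest to lowest degree."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3: 2*27 - 6*9 + 2*3 - 1 = 5
assert horner([2, -6, 2, -1], 3) == 5
```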

  19. Scheduling Jobs with Variable Job Processing Times on Unrelated Parallel Machines

    PubMed Central

    Zhang, Guang-Qian; Wang, Jian-Jun; Liu, Ya-Jing

    2014-01-01

    m unrelated parallel machine scheduling problems with variable job processing times are considered, where the processing time of a job is a function of its position in a sequence, its starting time, and its resource allocation. The objective is to determine the optimal resource allocation and the optimal schedule to minimize a total cost function that depends on the total completion (waiting) time, the total machine load, the total absolute differences in completion (waiting) times on all machines, and the total resource cost. If the number of machines is a given constant, we propose a polynomial time algorithm to solve the problem. PMID:24982933

  20. An identification method for damping ratio in rotor systems

    NASA Astrophysics Data System (ADS)

    Wang, Weimin; Li, Qihang; Gao, Jinji; Yao, Jianfei; Allaire, Paul

    2016-02-01

    Centrifugal compressor testing with magnetic bearing excitations is the last step in assuring compressor rotordynamic stability under the designed operating conditions. To meet the challenges of stability evaluation, a new method combining the rational polynomials method (RPM) with a weighted instrumental variables (WIV) estimator to fit the directional frequency response function (dFRF) is presented. Numerical simulation results show that the method suggested in this paper can identify the damping ratio of the first forward and backward modes with high accuracy, even in a severe noise environment. Experimental tests were conducted to study the effect of different bearing configurations on the stability of the rotor. Furthermore, two example centrifugal compressors (a nine-stage straight-through and a six-stage back-to-back) were employed to verify the feasibility of the identification method in industrial configurations as well.

  1. Eigenvalues of normalized Laplacian matrices of fractal trees and dendrimers: Analytical results and applications

    NASA Astrophysics Data System (ADS)

    Julaiti, Alafate; Wu, Bin; Zhang, Zhongzhi

    2013-05-01

    The eigenvalues of the normalized Laplacian matrix of a network play an important role in its structural and dynamical properties. In this paper, we study the spectra of the normalized Laplacian matrices of a family of fractal trees and of dendrimers modeled by Cayley trees, both of which are built in an iterative way, together with their applications. For the fractal trees, we apply the spectral decimation approach to determine analytically all the eigenvalues and their corresponding multiplicities, with the eigenvalues provided by a recursive relation governing the eigenvalues of networks at two successive generations. For Cayley trees, we show that all their eigenvalues can be obtained by computing the roots of several small-degree polynomials defined recursively. By using the relation between normalized Laplacian spectra and the eigentime identity, we derive the explicit solution to the eigentime identity for random walks on the two treelike networks, the leading scalings of which follow quite different behaviors. In addition, we corroborate the obtained eigenvalues and their degeneracies through the link between them and the number of spanning trees.
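    For intuition, the normalized Laplacian spectrum of the smallest star-shaped tree can be computed directly (a NumPy sketch, not the paper's spectral-decimation or recursive-polynomial machinery):

```python
# Direct spectrum of the normalized Laplacian L = I - D^{-1/2} A D^{-1/2}
# for the star K_{1,3}, whose known spectrum is {0, 1, 1, 2}.
import numpy as np

def normalized_laplacian_eigs(adj):
    a = np.asarray(adj, dtype=float)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    lap = np.eye(len(a)) - d_inv_sqrt @ a @ d_inv_sqrt
    return np.sort(np.linalg.eigvalsh(lap))

star = [[0, 1, 1, 1],
        [1, 0, 0, 0],
        [1, 0, 0, 0],
        [1, 0, 0, 0]]
eigs = normalized_laplacian_eigs(star)
```

    The eigentime identity mentioned above is then obtained (up to normalization) from the sum of reciprocals of the nonzero eigenvalues.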

  2. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable because the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools from probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.
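    A stripped-down, dense (non-sparse) version of non-intrusive regression-based PC in one standard-normal variable, for orientation only (the paper's stepwise selection procedure is not reproduced here):

```python
# Dense OLS polynomial-chaos fit in one standard-normal variable xi
# (illustrative only; the stepwise sparse selection is not shown).
import numpy as np
from numpy.polynomial.hermite_e import hermeval

def pc_ols_fit(model, xi, degree):
    """Regress model(xi) onto probabilists' Hermite polynomials He_0..He_degree."""
    basis = np.column_stack(
        [hermeval(xi, np.eye(degree + 1)[p]) for p in range(degree + 1)]
    )
    coeffs, *_ = np.linalg.lstsq(basis, model(xi), rcond=None)
    return coeffs

rng = np.random.default_rng(0)
xi = rng.standard_normal(200)
# u(xi) = 1 + 2 xi + xi^2 expands exactly as 2 He_0 + 2 He_1 + 1 He_2
c = pc_ols_fit(lambda x: 1 + 2 * x + x**2, xi, 3)
```

    The leading coefficient c[0] is the PC estimate of the output mean, here E[u] = 2; a sparse scheme would keep only the few basis terms the search procedure flags as significant.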

  3. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT testing, the empirical information, needed for POD methods, can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.

  4. Existence and energy decay of a nonuniform Timoshenko system with second sound

    NASA Astrophysics Data System (ADS)

    Hamadouche, Taklit; Messaoudi, Salim A.

    2018-02-01

    In this paper, we consider a linear thermoelastic Timoshenko system with variable physical parameters, where the heat conduction is given by Cattaneo's law and the coupling is via the displacement equation. We discuss the well-posedness and the regularity of the solution using semigroup theory. Moreover, we establish an exponential decay result provided that the stability function χ_r(x) = 0. Otherwise, we show that the solution decays polynomially.

  5. Global a priori estimates for the inhomogeneous Landau equation with moderately soft potentials

    NASA Astrophysics Data System (ADS)

    Cameron, Stephen; Silvestre, Luis; Snelson, Stanley

    2018-05-01

    We establish a priori upper bounds for solutions to the spatially inhomogeneous Landau equation in the case of moderately soft potentials, with arbitrary initial data, under the assumption that mass, energy and entropy densities stay under control. Our pointwise estimates decay polynomially in the velocity variable. We also show that if the initial data satisfies a Gaussian upper bound, this bound is propagated for all positive times.

  6. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

    AFRL-RW-EG-TR-2015-108, Stochastic Estimation via Polynomial Chaos. Douglas V. Nance, Air Force Research... Report period: 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic...

  7. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    ...distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999)... developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, and extended...

  8. Degenerate r-Stirling Numbers and r-Bell Polynomials

    NASA Astrophysics Data System (ADS)

    Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.

    2018-01-01

    The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. In particular, we express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.

  9. Computational tools for multi-linked flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.

    1990-01-01

    A software module which designs and tests controllers and filters in Kalman Estimator form, based on a polynomial state-space model is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided which address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.

  10. Rapidity correlations in the RHIC Beam Energy Scan Data

    NASA Astrophysics Data System (ADS)

    Jowzaee, Sedigheh; STAR Collaboration

    2017-11-01

    A pair-normalized two-particle covariance versus the rapidity of the two particles, called R2, was originally studied in ISR and FNAL data in the 1970s. This variable has recently seen renewed interest for the study of the dynamics of heavy-ion collisions in the longitudinal direction. These rapidity correlations can be decomposed into a basis set of Legendre polynomials with prefactors ⟨a_mn⟩, which can be considered the rapidity analog of the decomposition of azimuthal anisotropies into a set of cosine functions with prefactors v_n. The ⟨a_mn⟩ values have been suggested to be sensitive to the number of particle emitting sources, baryon stopping, viscosities, and critical behavior. The rapidity correlations have been measured by the STAR collaboration as a function of the beam energy for 0-5% central Au+Au collisions with beam energies ranging from 7.7 to 200 GeV. The experimental results and comparisons to the UrQMD model are presented.
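    In one dimension, extracting Legendre prefactors from a correlation shape is a plain projection; an illustrative sketch (not the STAR analysis code, and a 1-D analogue of the two-particle ⟨a_mn⟩ extraction):

```python
# Project a toy correlation signal on the rapidity acceptance [-Y, Y] onto
# Legendre polynomials in y/Y (illustrative only).
import numpy as np
from numpy.polynomial import legendre as leg

Y = 2.0
y = np.linspace(-Y, Y, 401)
c2 = leg.legval(y / Y, [0.0, 0.0, 0.3])  # a pure P2 "correlation" signal
a = leg.legfit(y / Y, c2, deg=3)         # least-squares Legendre coefficients
```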

  11. Airborne laser ranging system for monitoring regional crustal deformation

    NASA Technical Reports Server (NTRS)

    Degnan, J. J.

    1981-01-01

    Alternate approaches for making the atmospheric correction without benefit of a ground-based meteorological network are discussed. These include (1) a two-color channel that determines the atmospheric correction by measuring the time delay induced by dispersion between pulses at two optical frequencies; (2) single-color range measurements supported by an onboard temperature sounder, pressure altimeter readings, and surface measurements by a few existing meteorological facilities; and (3) inclusion of the quadratic polynomial coefficients as variables to be solved for, along with target coordinates, in the reduction of the single-color range data. It is anticipated that the initial Airborne Laser Ranging System (ALRS) experiments will be carried out in Southern California in a region bounded by Santa Barbara on the north and the Mexican border on the south. The target area will be bounded by the Pacific Ocean to the west and will extend eastward for approximately 400 km. The unique ability of the ALRS to provide a geodetic 'snapshot' of such a large area will make it a valuable geophysical tool.

  12. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram to analyze the spatial structure and patterns of organisms. Although an optimal fit cannot be obtained when simulating the variogram over a wide range, an interactive human-computer procedure can be used to optimize the parameters of the spherical models. In this paper, this method and weighted polynomial regression were used to fit the one-step spherical model, the two-step spherical model, and a linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides a best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between the estimated and measured values for the various theoretical models were computed, and the corresponding graphs are shown. The fit based on the two-step spherical model was the best, and the one-step spherical model fit better than the linear function model.
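    The one-step spherical model referred to above has a standard closed form; a minimal sketch (parameter names nugget/sill/range follow the usual geostatistics conventions rather than the paper's notation):

```python
# One-step spherical variogram model: gamma rises as 1.5(h/a) - 0.5(h/a)^3
# and is flat at the sill beyond the range a (standard form, illustrative).
import numpy as np

def spherical_variogram(h, nugget, sill, a):
    """Semivariance at lag distance h (capped at the sill for h >= a)."""
    h = np.asarray(h, dtype=float)
    gamma = nugget + (sill - nugget) * (1.5 * h / a - 0.5 * (h / a) ** 3)
    return np.where(h >= a, sill, gamma)
```

    Fitting such a model to an empirical variogram (e.g. by weighted least squares) supplies the weights used by the ordinary Kriging estimator.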

  13. Inverting Monotonic Nonlinearities by Entropy Maximization

    PubMed Central

    López-de-Ipiña Pena, Karmele; Caiafa, Cesar F.

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results. PMID:27780261
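    A hedged sketch of the Gaussianization baseline that MaxEnt generalizes: when the mixture output is approximately Gaussian, mapping the ranks of the distorted observations through the inverse normal CDF undoes any monotonic distortion up to a monotone rescaling. The tanh-plus-linear sensor model here is hypothetical:

```python
# Rank-based Gaussianization, the baseline idea that MaxEnt generalizes
# (illustrative; the distortion y = tanh(s) + 0.1 s is hypothetical).
import numpy as np
from scipy.stats import norm, rankdata

def gaussianize(y):
    """Map data through its empirical CDF, then the inverse normal CDF."""
    u = rankdata(y) / (len(y) + 1.0)  # empirical CDF values in (0, 1)
    return norm.ppf(u)

rng = np.random.default_rng(1)
s = rng.standard_normal(2000)   # mixture output, near-Gaussian by the CLT
y = np.tanh(s) + 0.1 * s        # unknown monotonic distortion
s_hat = gaussianize(y)          # recovers s up to a monotone rescaling
r = np.corrcoef(s, s_hat)[0, 1]
```

    MaxEnt replaces the Gaussian target with entropy maximization, which is what relaxes the Gaussianity assumption on the mixture.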

  14. Inverting Monotonic Nonlinearities by Entropy Maximization.

    PubMed

    Solé-Casals, Jordi; López-de-Ipiña Pena, Karmele; Caiafa, Cesar F

    2016-01-01

    This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of the proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results.

  15. Investigation of the Process Conditions for Hydrogen Production by Steam Reforming of Glycerol over Ni/Al2O3 Catalyst Using Response Surface Methodology (RSM)

    PubMed Central

    Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed

    2014-01-01

    In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. The effects of five independent variables, namely the temperature (X1), the flow rate (X2), the catalyst weight (X3), the catalyst loading (X4), and the glycerol-water molar ratio (X5), on the H2 yield (Y1) and the conversion of glycerol to gaseous products (Y2) were explored. Using multiple regression analysis, the experimental results for the H2 yield and the glycerol conversion to gases were fit to quadratic polynomial models. The proposed mathematical models correlated well with the dependent responses within the limits examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20%, and a glycerol-water molar ratio of approximately 12, where the H2 yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables studied. PMID:28788567
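    Fitting a full quadratic response surface is ordinary least squares on an augmented design matrix; a two-factor sketch (illustrative, not the paper's five-factor model, with synthetic data):

```python
# Two-factor quadratic response-surface fit by ordinary least squares
# (illustrative; the paper uses five factors and real experimental data).
import numpy as np

def fit_quadratic_rsm(X, y):
    """Fit y ~ b0 + b1 x1 + b2 x2 + b11 x1^2 + b22 x2^2 + b12 x1 x2."""
    x1, x2 = X[:, 0], X[:, 1]
    design = np.column_stack(
        [np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2]
    )
    coeffs, *_ = np.linalg.lstsq(design, y, rcond=None)
    return coeffs

g = np.linspace(-1.0, 1.0, 5)
x1g, x2g = np.meshgrid(g, g)
X = np.column_stack([x1g.ravel(), x2g.ravel()])
y = 3 + 2 * X[:, 0] - X[:, 1] + 0.5 * X[:, 0] ** 2 + X[:, 0] * X[:, 1]
beta = fit_quadratic_rsm(X, y)
```

    The fitted surface can then be maximized over the coded factor ranges to locate the optimal operating point, which is how the best process values above are obtained in RSM.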

  16. Variation, Repetition, and Choice

    ERIC Educational Resources Information Center

    Abreu-Rodrigues, Josele; Lattal, Kennon A.; dos Santos, Cristiano V.; Matos, Ricardo A.

    2005-01-01

    Experiment 1 investigated the controlling properties of variability contingencies on choice between repeated and variable responding. Pigeons were exposed to concurrent-chains schedules with two alternatives. In the REPEAT alternative, reinforcers in the terminal link depended on a single sequence of four responses. In the VARY alternative, a…

  17. Packet-Based Protocol Efficiency for Aeronautical and Satellite Communications

    NASA Technical Reports Server (NTRS)

    Carek, David A.

    2005-01-01

    This paper examines the relation between bit error ratios and the effective link efficiency when transporting data with a packet-based protocol. Relations are developed to quantify the impact of a protocol's packet size and header size relative to the bit error ratio of the underlying link. These relations are examined in the context of radio transmissions that exhibit variable error conditions, such as those used in satellite, aeronautical, and other wireless networks. A comparison of two packet sizing methodologies is presented. From these relations, the true ability of a link to deliver user data, or information, is determined. Relations are developed to calculate the optimal protocol packet size for given link error characteristics. These relations could be useful in future research for developing an adaptive protocol layer. They can also be used for sizing protocols in the design of static links, where bit error ratios have small variability.
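    Under the usual independent-bit-error model (an assumption here; the paper's exact relations may differ), efficiency is the payload share times the probability the whole packet arrives error-free, and the optimal payload follows from setting the derivative of its logarithm to zero:

```python
# Packet efficiency and optimal payload size under independent bit errors
# (assumed model; the 48-bit header in the usage below is just an example).
import math

def efficiency(payload, header, ber):
    """User-data fraction delivered: (L / (L+H)) * (1 - ber)^(L+H)."""
    n = payload + header
    return (payload / n) * (1.0 - ber) ** n

def optimal_payload(header, ber):
    """Payload L (bits) maximizing efficiency: root of 1/L - 1/(L+H) + ln(1-ber) = 0."""
    a = math.log(1.0 - ber)  # negative for ber > 0
    return (-header + math.sqrt(header**2 - 4.0 * header / a)) / 2.0
```

    Worse channels favor shorter packets: with a 48-bit header, the optimum drops by roughly an order of magnitude as the bit error ratio rises from 1e-5 to 1e-3.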

  18. Spatial patterns of simulated transpiration response to climate variability in a snow dominated mountain ecosystem

    USGS Publications Warehouse

    Christensen, L.; Tague, C.L.; Baron, Jill S.

    2008-01-01

    Transpiration is an important component of soil water storage and streamflow and is linked with ecosystem productivity, species distribution, and ecosystem health. In mountain environments, complex topography creates heterogeneity in key controls on transpiration as well as logistical challenges for collecting representative measurements. In these settings, ecosystem models can be used to account for variation in space and time of the dominant controls on transpiration and to provide estimates of transpiration patterns and their sensitivity to climate variability and change. The Regional Hydro-Ecological Simulation System (RHESSys) model was used to assess elevational differences in the sensitivity of transpiration rates to the spatiotemporal variability of climate variables across the Upper Merced River watershed, Yosemite Valley, California, USA. At the basin scale, predicted annual transpiration was lowest in the driest and wettest years and greatest in moderate precipitation years (R2 = 0.32 and 0.29, based on polynomial regression of maximum snow depth and annual precipitation, respectively). At finer spatial scales, the responsiveness of transpiration rates to climate differed along an elevational gradient. Low elevations (1200-1800 m) showed little interannual variation in transpiration due to topographically controlled high soil moisture along the river corridor. Annual conifer stand transpiration at intermediate elevations (1800-2150 m) responded more strongly to precipitation, resulting in a unimodal relationship between transpiration and precipitation where the highest transpiration occurred at moderate precipitation levels, regardless of annual air temperatures. Higher elevations (2150-2600 m) maintained this trend, but air temperature sensitivities were greater. At these elevations, snowfall provides enough moisture for growth, and increased temperatures influenced transpiration. Transpiration at the highest elevations (2600-4000 m) showed strong sensitivity to air temperature and little sensitivity to precipitation. Model results suggest that elevational differences in vegetation water use and sensitivity to climate were significant and will likely play a key role in controlling the responses and vulnerability of Sierra Nevada ecosystems to climate change. Copyright © 2008 John Wiley & Sons, Ltd.

  19. Umbral orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Sendino, J. E.; del Olmo, M. A.

    2010-12-23

    We present an umbral operator version of the classical orthogonal polynomials. We obtain three families which are the umbral counterpart of the Jacobi, Laguerre and Hermite polynomials in the classical case.

  20. Trends and annual cycles in soundings of Arctic tropospheric ozone

    NASA Astrophysics Data System (ADS)

    Christiansen, Bo; Jepsen, Nis; Kivi, Rigel; Hansen, Georg; Larsen, Niels; Smith Korsholm, Ulrik

    2017-08-01

    Ozone soundings from nine Nordic stations have been homogenized and interpolated to standard pressure levels. The different stations have very different data coverage; the longest period with data extends from the end of the 1980s to 2014. At each pressure level the homogenized ozone time series have been analysed with a model that includes low-frequency variability in the form of a polynomial, an annual cycle with harmonics, the possibility of low-frequency variability in the annual amplitude and phasing, and either white noise or noise given by a first-order autoregressive process. The fitting of the parameters is performed with a Bayesian approach, giving not only the mean values but also confidence intervals. The results show that all stations agree on a well-defined annual cycle in the free troposphere with a relatively confined maximum in early summer. Regarding the low-frequency variability, it is found that Scoresbysund, Ny Ålesund, Sodankylä, Eureka, and Ørland show similar, significant signals with a maximum near 2005 followed by a decrease. This change is characteristic of all pressure levels in the free troposphere. A significant change in the annual cycle was found for Ny Ålesund, Scoresbysund, and Sodankylä. The changes at these stations are consistent with the interpretation that the early summer maximum is appearing earlier in the year. The results are shown to be robust to the different settings of the model parameters, such as the order of the polynomial, the number of harmonics in the annual cycle, and the type of noise.
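    A least-squares stand-in for the trend-plus-annual-cycle part of such a model (the Bayesian machinery, the time-varying amplitude, and the AR(1) noise are omitted; all numbers are synthetic):

```python
# Polynomial trend plus annual-cycle harmonics, fit by least squares
# (a simplified stand-in for the Bayesian model; data are synthetic).
import numpy as np

def seasonal_design(t, poly_order=2, n_harmonics=2):
    """Columns: polynomial trend in t (years) plus annual-cycle harmonics."""
    cols = [t**k for k in range(poly_order + 1)]
    for m in range(1, n_harmonics + 1):
        cols += [np.sin(2 * np.pi * m * t), np.cos(2 * np.pi * m * t)]
    return np.column_stack(cols)

t = np.arange(0.0, 10.0, 1.0 / 12.0)                   # monthly, 10 years
ozone = 30.0 + 0.2 * t + 5.0 * np.cos(2 * np.pi * t)   # trend + annual cycle
beta, *_ = np.linalg.lstsq(seasonal_design(t), ozone, rcond=None)
# beta: intercept, linear trend, curvature, then sin/cos harmonic amplitudes
```

    The Bayesian fit in the paper delivers the same kinds of coefficients but with full posterior uncertainty, which is what supports the confidence statements about the 2005 maximum and the shifting early-summer peak.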
