Sample records for multi-step polynomial regression

  1. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included contemporary group as a fixed effect and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
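
    To make the covariate construction concrete, the sketch below (a generic Python illustration, not the authors' software; the 0-2920 day range and the order-4 choice are assumptions echoing the direct genetic effect described above) maps ages to [-1, 1] and evaluates the Legendre covariates used by such random regression models.

      import numpy as np
      from numpy.polynomial import legendre

      def legendre_covariates(age_days, order, age_min=0.0, age_max=2920.0):
          """Columns phi_0..phi_order of Legendre polynomials evaluated at
          ages standardized to [-1, 1] (here birth to roughly 8 years)."""
          x = 2.0 * (np.asarray(age_days, float) - age_min) / (age_max - age_min) - 1.0
          return legendre.legvander(x, order)  # one column per degree 0..order

      ages = np.array([0, 205, 365, 550, 730, 1825, 2920])  # record ages in days
      Phi = legendre_covariates(ages, order=4)              # 7 x 5 covariate matrix
      print(Phi.shape)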

  2. Random regression models on Legendre polynomials to estimate genetic parameters for weights from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Albuquerque, L G; Alencar, M M

    2010-08-01

    The objective of this work was to estimate covariance functions for direct and maternal genetic effects, animal and maternal permanent environmental effects, and subsequently, to derive relevant genetic parameters for growth traits in Canchim cattle. Data comprised 49,011 weight records on 2435 females from birth to adult age. The model of analysis included fixed effects of contemporary groups (year and month of birth and at weighing) and age of dam as a quadratic covariable. Mean trends were taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were allowed to vary and were modelled by a step function with 1, 4 or 11 classes based on animal's age. The model fitting four classes of residual variances was the best. A total of 12 random regression models from second to seventh order were used to model direct and maternal genetic effects, animal and maternal permanent environmental effects. The model with direct and maternal genetic effects, animal and maternal permanent environmental effects fitted by quartic, cubic, quintic and linear Legendre polynomials, respectively, was the most adequate to describe the covariance structure of the data. Estimates of direct and maternal heritability obtained by multi-trait (seven traits) and random regression models were very similar. Selection for higher weight at any age, especially after weaning, will produce an increase in mature cow weight. The possibility to modify the growth curve in Canchim cattle to obtain animals with rapid growth at early ages and moderate to low mature cow weight is limited.

  3. USING LINEAR AND POLYNOMIAL MODELS TO EXAMINE THE ENVIRONMENTAL STABILITY OF VIRUSES

    EPA Science Inventory

    The article presents the development of model equations for describing the fate of viral infectivity in environmental samples. Most of the models were based upon the use of a two-step linear regression approach. The first step employs regression of log base 10 transformed viral t...
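
    The two-step approach can be sketched generically (synthetic data; the per-sample decay fits and the temperature covariate in the second step are illustrative assumptions, not the EPA models): first regress log base 10 transformed titer on time within each sample, then regress the fitted decay rates on an environmental factor.

      import numpy as np

      rng = np.random.default_rng(0)
      days = np.arange(0, 30, 3, dtype=float)
      temps = np.array([5.0, 15.0, 25.0])              # hypothetical incubation temps

      # Step 1: per-sample linear regression of log10 titer on time.
      slopes = []
      for T in temps:
          true_rate = -0.02 - 0.004 * T                # synthetic ground truth
          log_titer = 6.0 + true_rate * days + rng.normal(0, 0.05, days.size)
          slope, _ = np.polyfit(days, log_titer, 1)    # first-step decay rate
          slopes.append(slope)

      # Step 2: regress the fitted decay rates on temperature.
      b, a = np.polyfit(temps, np.array(slopes), 1)
      print(f"decay rate ~ {a:.3f} + {b:.4f} * temperature (log10 units/day)")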

  4. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, since simple low-order polynomials cannot fully reproduce complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  5. Covariance functions for body weight from birth to maturity in Nellore cows.

    PubMed

    Boligon, A A; Mercadante, M E Z; Forni, S; Lôbo, R B; Albuquerque, L G

    2010-03-01

    The objective of this study was to estimate (co)variance functions using random regression models on Legendre polynomials for the analysis of repeated measures of BW from birth to adult age. A total of 82,064 records from 8,145 females were analyzed. Different models were compared. The models included additive direct and maternal effects, and animal and maternal permanent environmental effects as random terms. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of animal age (cubic regression) were considered as random covariables. Eight models with polynomials of third to sixth order were used to describe additive direct and maternal effects, and animal and maternal permanent environmental effects. Residual effects were modeled using 1 (i.e., assuming homogeneity of variances across all ages) or 5 age classes. The model with 5 classes was the best to describe the trajectory of residuals along the growth curve. The model including fourth- and sixth-order polynomials for additive direct and animal permanent environmental effects, respectively, and third-order polynomials for maternal genetic and maternal permanent environmental effects was the best. Estimates of (co)variance obtained with the multi-trait and random regression models were similar. Direct heritability estimates obtained with the random regression models followed a trend similar to that obtained with the multi-trait model. The largest estimates of maternal heritability were those of BW taken close to 240 d of age. In general, estimates of correlation between BW from birth to 8 yr of age decreased with increasing distance between ages.

  6. Advances in Highly Constrained Multi-Phase Trajectory Generation using the General Pseudospectral Optimization Software (GPOPS)

    DTIC Science & Technology

    2013-08-01

    Approved for public release; distribution unlimited. PA Number 412-TW-PA-13395. [Record excerpt: a nomenclature list (f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, ...) and a fragment noting that in the Radau Pseudospectral Method (RPM) the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions, and that GPOPS also automatically refines the mesh.]

  7. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    PubMed

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

    The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline functions and multi-trait models aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions considering quadratic and cubic polynomials for each individual segment, and Legendre polynomials for age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using two to four B-spline segments, or by Legendre polynomials with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials presented an underestimation of the residual variance. Lesser heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. Genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW in the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in genetic evaluation of breeding programs.
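
    For readers unfamiliar with the B-spline side, the self-contained Cox-de Boor sketch below evaluates a quadratic basis over three segments of a 0-35 day axis (the knot placement and ages are illustrative assumptions, not the study's); the resulting columns play the role of the random-regression covariates.

      import numpy as np

      def bspline_basis(x, knots, degree):
          """All B-spline basis functions of `degree` at x, via the Cox-de Boor
          recursion; `knots` repeats the boundary knots degree+1 times."""
          x, t = np.asarray(x, float), np.asarray(knots, float)
          n = len(t) - degree - 1                      # number of basis functions
          B = np.zeros((x.size, len(t) - 1))
          for i in range(len(t) - 1):                  # degree-0 indicators
              B[:, i] = (t[i] <= x) & (x < t[i + 1])
          last = max(i for i in range(len(t) - 1) if t[i] < t[i + 1])
          B[x == t[-1], last] = 1.0                    # close the right endpoint
          for d in range(1, degree + 1):               # raise the degree
              for i in range(len(t) - d - 1):
                  den1, den2 = t[i + d] - t[i], t[i + d + 1] - t[i + 1]
                  left = (x - t[i]) / den1 * B[:, i] if den1 > 0 else 0.0
                  right = (t[i + d + 1] - x) / den2 * B[:, i + 1] if den2 > 0 else 0.0
                  B[:, i] = left + right
          return B[:, :n]

      knots = [0.0] * 3 + [11.7, 23.3] + [35.0] * 3    # quadratic, ~3 segments
      Z = bspline_basis(np.linspace(0.0, 35.0, 8), knots, degree=2)
      print(Z.sum(axis=1))                             # partition of unity: all 1.0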

  8. Equivalences of the multi-indexed orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odake, Satoru

    2014-01-15

    Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.

  9. Multi-criteria manufacturability indices for ranking high-concentration monoclonal antibody formulations.

    PubMed

    Yang, Yang; Velayudhan, Ajoy; Thornhill, Nina F; Farid, Suzanne S

    2017-09-01

    The need for high-concentration formulations for subcutaneous delivery of therapeutic monoclonal antibodies (mAbs) can present manufacturability challenges for the final ultrafiltration/diafiltration (UF/DF) step. Viscosity levels and the propensity to aggregate are key considerations for high-concentration formulations. This work presents novel frameworks for deriving a set of manufacturability indices related to viscosity and thermostability to rank high-concentration mAb formulation conditions in terms of their ease of manufacture. This is illustrated by analyzing published high-throughput biophysical screening data that explores the influence of different formulation conditions (pH, ions, and excipients) on the solution viscosity and product thermostability. A decision tree classification method, CART (Classification and Regression Tree), is used to identify the critical formulation conditions that influence the viscosity and thermostability. In this work, three different multi-criteria data analysis frameworks were investigated to derive manufacturability indices from analysis of the stress maps and the process conditions experienced in the final UF/DF step. Polynomial regression techniques were used to transform the experimental data into a set of stress maps that show viscosity and thermostability as functions of the formulation conditions. A mathematical filtrate flux model was used to capture the time profiles of protein concentration and flux decay behavior during UF/DF. Multi-criteria decision-making analysis was used to identify the optimal formulation conditions that minimize the potential for both viscosity and aggregation issues during UF/DF. Biotechnol. Bioeng. 2017;114: 2043-2056. © 2017 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc.

  10. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics, generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated and real-world data was performed, comparing it with two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  11. Connection between quantum systems involving the fourth Painlevé transcendent and k-step rational extensions of the harmonic oscillator related to Hermite exceptional orthogonal polynomial

    NASA Astrophysics Data System (ADS)

    Marquette, Ian; Quesne, Christiane

    2016-05-01

    The purpose of this communication is to point out the connection between a 1D quantum Hamiltonian involving the fourth Painlevé transcendent P_{IV}, obtained in the context of second-order supersymmetric quantum mechanics and third-order ladder operators, with a hierarchy of families of quantum systems called k-step rational extensions of the harmonic oscillator and related with multi-indexed X_{m1,m2,…,mk} Hermite exceptional orthogonal polynomials of type III. The connection between these exactly solvable models is established at the level of the equivalence of the Hamiltonians using rational solutions of the fourth Painlevé equation in terms of generalized Hermite and Okamoto polynomials. We also relate the different ladder operators obtained by various combinations of supersymmetric constructions involving Darboux-Crum and Krein-Adler supercharges, their zero modes and the corresponding energies. These results will demonstrate and clarify the relation observed for a particular case in previous papers.

  12. Multi-indexed (q-)Racah polynomials

    NASA Astrophysics Data System (ADS)

    Odake, Satoru; Sasaki, Ryu

    2012-09-01

    As the second stage of the project multi-indexed orthogonal polynomials, we present, in the framework of ‘discrete quantum mechanics’ with real shifts in one dimension, the multi-indexed (q-)Racah polynomials. They are obtained from the (q-)Racah polynomials by the multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of ‘virtual state’ vectors, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier. The virtual state vectors are the ‘solutions’ of the matrix Schrödinger equation with negative ‘eigenvalues’, except for one of the two boundary points.

  13. Parametric correlation functions to model the structure of permanent environmental (co)variances in milk yield random regression models.

    PubMed

    Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G

    2009-09-01

    The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.

  14. Automatic Road Gap Detection Using Fuzzy Inference System

    NASA Astrophysics Data System (ADS)

    Hashemi, S.; Valadan Zoej, M. J.; Mokhtarzadeh, M.

    2011-09-01

    Automatic feature extraction from aerial and satellite images is a high-level data processing task which is still one of the most important research topics of the field. In this area, most research is focused on the early step of road detection, where road tracking methods, morphological analysis, dynamic programming and snakes, multi-scale and multi-resolution methods, stereoscopic and multi-temporal analysis, and hyperspectral experiments are some of the mature methods in this field. Although most research is focused on detection algorithms, none of them can extract the road network perfectly. On the other hand, post-processing algorithms, which focus on refining road detection results, are not as well developed. In this article, the main aim is to design an intelligent method to detect and compensate for road gaps remaining in the early results of road detection algorithms. The proposed algorithm consists of five main steps, as follows: 1) Short gap coverage: in this step, a multi-scale morphological operator is designed that covers short gaps in a hierarchical scheme. 2) Long gap detection: in this step, the long gaps that could not be covered in the previous stage are detected using a fuzzy inference system; for this purpose, a knowledge base consisting of expert rules is designed and fired on gap candidates from the road detection results. 3) Long gap coverage: in this stage, detected long gaps are compensated by two strategies, linear and polynomial: shorter gaps are filled by line fitting, while longer ones are compensated by polynomials. 4) Accuracy assessment: in order to evaluate the obtained results, some accuracy assessment criteria are proposed. These criteria are obtained by comparing the obtained results with truly compensated ones produced by a human expert. The complete evaluation of the obtained results, with their technical discussion, is the material of the full paper.

  15. Automatic Registration of GF4 PMS: a High Resolution Multi-Spectral Sensor on Board a Satellite in Geostationary Orbit

    NASA Astrophysics Data System (ADS)

    Gao, M.; Li, J.

    2018-04-01

    Geometric correction is an important preprocessing step in the application of GF4 PMS images. The method of geometric correction that is based on the manual selection of control points is time-consuming and laborious. The more common method, based on a reference image, is automatic image registration. This method involves several steps and parameters. For the multi-spectral sensor GF4 PMS, it is necessary to identify the best combination of parameters and steps. This study mainly focuses on the following issues: the necessity of Rational Polynomial Coefficients (RPC) correction before automatic registration, the choice of base band in the automatic registration, and the configuration of GF4 PMS spatial resolution.

  16. Multi-indexed Meixner and little q-Jacobi (Laguerre) polynomials

    NASA Astrophysics Data System (ADS)

    Odake, Satoru; Sasaki, Ryu

    2017-04-01

    As the fourth stage of the project multi-indexed orthogonal polynomials, we present the multi-indexed Meixner and little q-Jacobi (Laguerre) polynomials in the framework of ‘discrete quantum mechanics’ with real shifts defined on the semi-infinite lattice in one dimension. They are obtained, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier, from the quantum mechanical systems corresponding to the original orthogonal polynomials by multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of virtual state vectors. The virtual state vectors are the solutions of the matrix Schrödinger equation on all the lattice points having negative energies and infinite norm. This is in good contrast to the (q-)Racah systems defined on a finite lattice, in which the ‘virtual state’ vectors satisfy the matrix Schrödinger equation except for one of the two boundary points.

  17. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained using the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so the estimation precision can be improved when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
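
    A minimal sketch of the two-stage idea, assuming synthetic data and substituting a kernel-weighted local-linear smooth of squared residuals for the full multivariate local polynomial estimator: estimate the variance function from first-stage OLS residuals, then refit by weighted least squares.

      import numpy as np

      rng = np.random.default_rng(1)
      n = 400
      x = rng.uniform(0.0, 10.0, n)
      sigma = 0.3 + 0.2 * x                        # heteroscedastic noise level
      y = 1.0 + 2.0 * x + rng.normal(0.0, sigma)   # true coefficients (1, 2)

      # Stage 1: OLS, then local-linear smoothing of the squared residuals
      # (Gaussian kernel, fixed bandwidth) to estimate the variance function.
      X = np.column_stack([np.ones(n), x])
      beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
      r2 = (y - X @ beta_ols) ** 2
      h = 1.0                                      # bandwidth, chosen by eye here

      def var_hat(x0):
          k = np.exp(-0.5 * ((x - x0) / h) ** 2)   # kernel weights around x0
          _slope, intercept = np.polyfit(x - x0, r2, 1, w=np.sqrt(k))
          return max(intercept, 1e-6)              # local-linear estimate at x0

      # Stage 2: weighted (generalized) least squares with estimated variances.
      w = 1.0 / np.sqrt([var_hat(xi) for xi in x])
      beta_gls, *_ = np.linalg.lstsq(X * w[:, None], y * w, rcond=None)
      print(beta_ols.round(3), beta_gls.round(3))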

  18. STATLIB: NSWC Library of Statistical Programs and Subroutines

    DTIC Science & Technology

    1989-08-01

    [Record excerpt: table-of-contents entries (Uncorrelated Weighted Polynomial Regression, p. 41; WEPORC, Correlated Weighted Polynomial Regression, p. 45; MROP, Multiple Regression Using Orthogonal Polynomials) and body fragments of NSWC TR 89-97 noting that some programs could not and should not be converted to the new general-purpose computer (the current CDC 995), and that the personal-computer packages SPSSPC+, BMDPC, and SASPC are in general less comprehensive than their mainframe counterparts.]

  19. Random regression models using different functions to model test-day milk yield of Brazilian Holstein cows.

    PubMed

    Bignardi, A B; El Faro, L; Torres Júnior, R A A; Cardoso, V L; Machado, P F; Albuquerque, L G

    2011-10-31

    We analyzed 152,145 test-day records from 7317 first lactations of Holstein cows recorded from 1995 to 2003. Our objective was to model variations in test-day milk yield during the first lactation of Holstein cows by random regression model (RRM), using various functions in order to obtain adequate and parsimonious models for the estimation of genetic parameters. Test-day milk yields were grouped into weekly classes of days in milk, ranging from 1 to 44 weeks. The contemporary groups were defined as herd-test-day. The analyses were performed using a single-trait RRM, including the direct additive, permanent environmental and residual random effects. In addition, contemporary group and linear and quadratic effects of the age of cow at calving were included as fixed effects. The mean trend of milk yield was modeled with a fourth-order orthogonal Legendre polynomial. The additive genetic and permanent environmental covariance functions were estimated by random regression on two parametric functions, Ali and Schaeffer and Wilmink, and on B-spline functions of days in milk. The covariance components and the genetic parameters were estimated by the restricted maximum likelihood method. Results from RRM parametric and B-spline functions were compared to RRM on Legendre polynomials and with a multi-trait analysis, using the same data set. Heritability estimates presented similar trends during mid-lactation (13 to 31 weeks) and between week 37 and the end of lactation, for all RRM. Heritabilities obtained by multi-trait analysis were of a lower magnitude than those estimated by RRM. The RRMs with a higher number of parameters were more useful to describe the genetic variation of test-day milk yield throughout the lactation. RRM using B-spline and Legendre polynomials as base functions appears to be the most adequate to describe the covariance structure of the data.
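
    The two parametric functions named here have standard published forms, sketched below; the exp(-0.05 t) exponent in the Wilmink function and the 305-day scaling in the Ali and Schaeffer function are the conventional choices and may differ from the exact values used in this study.

      import numpy as np

      def wilmink(dim, k=0.05):
          """Wilmink covariates: 1, t, exp(-k t); k is conventionally near 0.05."""
          t = np.asarray(dim, float)
          return np.column_stack([np.ones_like(t), t, np.exp(-k * t)])

      def ali_schaeffer(dim, lactation_length=305.0):
          """Ali and Schaeffer covariates: 1, c, c^2, ln(1/c), ln(1/c)^2
          with c = days in milk / 305."""
          c = np.asarray(dim, float) / lactation_length
          lg = np.log(1.0 / c)
          return np.column_stack([np.ones_like(c), c, c ** 2, lg, lg ** 2])

      dim = np.arange(5, 306, 30)                  # sample test days in milk
      print(wilmink(dim).shape, ali_schaeffer(dim).shape)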

  20. Random regression analyses using B-spline functions to model growth from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Alencar, M M; Albuquerque, L G

    2010-12-01

    The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from different models of analyses were compared using the REML form of the Akaike information criterion and the Schwarz Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for direct additive genetic effect and animal permanent environmental effect and two knots for maternal additive genetic effect and maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, but a higher number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.

  21. Penalized Multi-Way Partial Least Squares for Smooth Trajectory Decoding from Electrocorticographic (ECoG) Recording

    PubMed Central

    Eliseyev, Andrey; Aksenova, Tetiana

    2016-01-01

    In the current paper the decoding algorithms for motor-related BCI systems for continuous upper limb trajectory prediction are considered. Two methods for the smooth prediction, namely Sobolev and Polynomial Penalized Multi-Way Partial Least Squares (PLS) regressions, are proposed. The methods are compared to the Multi-Way Partial Least Squares and Kalman Filter approaches. The comparison demonstrated that the proposed methods combined the prediction accuracy of the algorithms of the PLS family and trajectory smoothness of the Kalman Filter. In addition, the prediction delay is significantly lower for the proposed algorithms than for the Kalman Filter approach. The proposed methods could be applied in a wide range of applications beyond neuroscience. PMID:27196417

  22. Application of the polynomial chaos expansion to approximate the homogenised response of the intervertebral disc.

    PubMed

    Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W

    2014-10-01

    A possibility to simulate the mechanical behaviour of the human spine is given by modelling the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the softer intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite-element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included into the MBS in two different ways. They can either be computed online in a so-called co-simulation of a MBS and a FEM or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD needs to be defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step needs to be applied to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper was to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. Following this, the main challenge is to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as was used in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between. In a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Herein, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.

  23. A quadratic regression modelling on paddy production in the area of Perlis

    NASA Astrophysics Data System (ADS)

    Goh, Aizat Hanis Annas; Ali, Zalila; Nor, Norlida Mohd; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2017-08-01

    Polynomial regression models are useful in situations in which the relationship between a response variable and predictor variables is curvilinear. Polynomial regression fits the nonlinear relationship into a least squares linear regression model by decomposing the predictor variables into a kth order polynomial. The polynomial order determines the number of inflexions on the curvilinear fitted line. A second order polynomial forms a quadratic expression (parabolic curve) with either a single maximum or minimum, while a third order polynomial forms a cubic expression with both a relative maximum and a minimum. This study used paddy data in the area of Perlis to model paddy production based on paddy cultivation characteristics and environmental characteristics. The results indicated that a quadratic regression model best fits the data and that paddy production is affected by urea fertilizer application and the interaction between the amount of average rainfall and the percentage of area affected by pest and disease. Urea fertilizer application has a quadratic effect in the model, indicating that as the number of days of urea fertilizer application increases, paddy production is expected to decrease until it achieves a minimum value, and then to increase at higher numbers of days of urea application. The decrease in paddy production with an increase in rainfall is greater, the higher the percentage of area affected by pest and disease.
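
    A generic version of such a fit is ordinary least squares on a design matrix containing the linear term, its square, and the interaction; the sketch below uses synthetic data with invented coefficients (urea_days, rain, and pest are placeholder names, not the study's variables). The turning point of the fitted quadratic urea effect sits at -b1 / (2 b2).

      import numpy as np

      rng = np.random.default_rng(2)
      n = 120
      urea_days = rng.uniform(1, 30, n)            # days of urea application
      rain = rng.uniform(100, 300, n)              # average rainfall (mm)
      pest = rng.uniform(0, 20, n)                 # % area affected by pest/disease

      # Synthetic response: quadratic urea effect plus rain x pest interaction.
      y = (50 - 3.0 * urea_days + 0.08 * urea_days ** 2
           + 0.05 * rain - 0.004 * rain * pest + rng.normal(0, 2, n))

      X = np.column_stack([np.ones(n), urea_days, urea_days ** 2, rain, rain * pest])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print(dict(zip(["const", "urea", "urea^2", "rain", "rain:pest"], beta.round(4))))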

  24. Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs. NBER Working Paper No. 20405

    ERIC Educational Resources Information Center

    Gelman, Andrew; Imbens, Guido

    2014-01-01

    It is common in regression discontinuity analysis to control for high order (third, fourth, or higher) polynomials of the forcing variable. We argue that estimators for causal effects based on such methods can be misleading, and we recommend researchers do not use them, and instead use estimators based on local linear or quadratic polynomials or…
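
    The recommended alternative is easy to sketch: keep only observations within a bandwidth of the cutoff and fit separate linear regressions on each side, taking the difference of the intercepts as the effect estimate (synthetic data; the bandwidth and jump size below are assumptions).

      import numpy as np

      rng = np.random.default_rng(3)
      n = 2000
      x = rng.uniform(-1, 1, n)                    # forcing variable, cutoff at 0
      tau = 0.5                                    # true jump at the cutoff
      y = 0.8 * x + tau * (x >= 0) + rng.normal(0, 0.3, n)

      def rdd_local_linear(x, y, h):
          """Difference of intercepts from separate linear fits on (-h,0) and [0,h)."""
          left = (x > -h) & (x < 0)
          right = (x >= 0) & (x < h)
          a_left = np.polyfit(x[left], y[left], 1)[1]     # value at cutoff, left fit
          a_right = np.polyfit(x[right], y[right], 1)[1]  # value at cutoff, right fit
          return a_right - a_left

      print(rdd_local_linear(x, y, h=0.25))        # close to the true jump of 0.5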

  25. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for the nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR regression method is a robust alternative to the widely-used local polynomial method, and has been well studied in stationary time series. In this paper, we relax the stationarity restriction on the model, and allow that the regressors are generated by a general Harris recurrent Markov process which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate for the estimator in nonstationary case is slower than that in stationary case. Furthermore, a weighted type local polynomial CQR estimator is provided to improve the estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we give some numerical studies to examine the finite sample performance of the developed methodology and theory. PMID:27667894

  26. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.

  27. Design of polynomial fuzzy observer-controller for nonlinear systems with state delay: sum of squares approach

    NASA Astrophysics Data System (ADS)

    Gassara, H.; El Hajjaji, A.; Chaabane, M.

    2017-07-01

    This paper investigates the problem of observer-based control for two classes of polynomial fuzzy systems with time-varying delay. The first class concerns a special case where the polynomial matrices do not depend on the estimated state variables. The second one is the general case where the polynomial matrices could depend on unmeasurable system states that will be estimated. For the last case, two design procedures are proposed. The first one gives the polynomial fuzzy controller and observer gains in two steps. In the second procedure, the designed gains are obtained using a single-step approach to overcome the drawback of a two-step procedure. The obtained conditions are presented in terms of sum of squares (SOS) which can be solved via the SOSTOOLS and a semi-definite program solver. Illustrative examples show the validity and applicability of the proposed results.

  28. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.

  29. Adaptive nonlinear polynomial neural networks for control of boundary layer/structural interaction

    NASA Technical Reports Server (NTRS)

    Parker, B. Eugene, Jr.; Cellucci, Richard L.; Abbott, Dean W.; Barron, Roger L.; Jordan, Paul R., III; Poor, H. Vincent

    1993-01-01

    The acoustic pressures developed in a boundary layer can interact with an aircraft panel to induce significant vibration in the panel. Such vibration is undesirable due to the aerodynamic drag and structure-borne cabin noises that result. The overall objective of this work is to develop effective and practical feedback control strategies for actively reducing this flow-induced structural vibration. This report describes the results of initial evaluations using polynomial, neural network-based, feedback control to reduce flow induced vibration in aircraft panels due to turbulent boundary layer/structural interaction. Computer simulations are used to develop and analyze feedback control strategies to reduce vibration in a beam as a first step. The key differences between this work and that going on elsewhere are as follows: first, that turbulent and transitional boundary layers represent broadband excitation and thus present a more complex stochastic control scenario than that of narrow band (e.g., laminar boundary layer) excitation; and secondly, that the proposed controller structures are adaptive nonlinear infinite impulse response (IIR) polynomial neural networks, as opposed to the traditional adaptive linear finite impulse response (FIR) filters used in most studies to date. The controllers implemented in this study achieved vibration attenuation of 27 to 60 dB depending on the type of boundary layer established by laminar, turbulent, and intermittent laminar-to-turbulent transitional flows. Application of multi-input, multi-output, adaptive, nonlinear feedback control of vibration in aircraft panels based on polynomial neural networks appears to be feasible today. Plans are outlined for Phase 2 of this study, which will include extending the theoretical investigation conducted in Phase 1 and verifying the results in a series of laboratory experiments involving both beam and plate models.

  30. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to the specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirmed that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model before estimating the amount of precipitation separately on wet days. This process generated the precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by three different interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
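
    A minimal sketch of the proposed two-step estimation, assuming scikit-learn is available and using invented covariates (elevation and distance) on synthetic data: a logistic model for precipitation occurrence followed by a regression for (log) amounts fitted on wet days only.

      import numpy as np
      from sklearn.linear_model import LinearRegression, LogisticRegression

      rng = np.random.default_rng(4)
      n = 500
      elev = rng.uniform(200, 3000, n)             # hypothetical station covariates
      dist = rng.uniform(0, 50, n)
      X = np.column_stack([elev, dist])

      p_wet = 1 / (1 + np.exp(-(-2.0 + 0.001 * elev)))   # synthetic occurrence
      wet = rng.random(n) < p_wet
      amount = np.where(wet, np.exp(0.5 + 0.0005 * elev + rng.normal(0, 0.3, n)), 0.0)

      occ = LogisticRegression(max_iter=1000).fit(X, wet)        # step 1: occurrence
      amt = LinearRegression().fit(X[wet], np.log(amount[wet]))  # step 2: amounts

      # Expected precipitation = P(wet) * E[amount | wet] at each target point.
      est = occ.predict_proba(X)[:, 1] * np.exp(amt.predict(X))
      print(est[:5].round(2))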

  31. A successful backward step correlates with hip flexion moment of supporting limb in elderly people.

    PubMed

    Takeuchi, Yahiko

    2018-01-01

    The objective of this study was to determine the positional relationship between the center of mass (COM) and the center of pressure (COP) at the time of step landing, and to examine their relationship with the joint moments exerted by the supporting limb, with regard to factors of a successful backward step response. The study population comprised 8 community-dwelling elderly people who were observed to take successive multiple steps after the landing of a backward step. Using a motion capture system and force plate, we measured the COM, COP and COM-COP deviation distance on landing during backward stepping. In addition, we measured the moment of the supporting limb joint during backward stepping. The multi-step data were compared with data from instances when only one step was taken (single-step). Variables that differed significantly between the single- and multi-step data were used as objective variables and the joint moments of the supporting limb were used as explanatory variables in single regression analyses. The COM-COP deviation in the anteroposterior direction was significantly larger in the single-step condition. A regression analysis with COM-COP deviation as the objective variable obtained a significant regression equation in the hip flexion moment (R2 = 0.74). The hip flexion moment of the supporting limb was shown to be a significant explanatory variable in both the PS and SS phases for the relationship with COM-COP distance. This study found that to create an appropriate backward step response after an external disturbance (i.e. the ability to stop after 1 step), posterior braking of the COM by a hip flexion moment is important during the single-limbed standing phase.

  32. Translation of Bernstein Coefficients Under an Affine Mapping of the Unit Interval

    NASA Technical Reports Server (NTRS)

    Alford, John A., II

    2012-01-01

    We derive an expression connecting the coefficients of a polynomial expanded in the Bernstein basis to the coefficients of an equivalent expansion of the polynomial under an affine mapping of the domain. The expression may be useful in the calculation of bounds for multi-variate polynomials.

  33. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
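
    The fixed-order, fixed-knot case that this method generalizes can be sketched with the standard truncated power basis, which yields a piecewise polynomial with continuous derivatives up to degree-1 at each knot (the knots, degree, and data below are illustrative assumptions; the paper's contribution is letting the polynomial order vary between segments and embedding the basis in a mixed model).

      import numpy as np

      def truncated_power_basis(t, knots, degree=2):
          """Columns 1, t, ..., t^degree and (t - k)_+^degree for each knot k."""
          t = np.asarray(t, float)
          cols = [t ** j for j in range(degree + 1)]
          cols += [np.clip(t - k, 0.0, None) ** degree for k in knots]
          return np.column_stack(cols)

      t = np.linspace(0, 24, 100)                  # e.g. hours, for a 24-h profile
      X = truncated_power_basis(t, knots=[8.0, 16.0], degree=2)
      y = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(5).normal(size=t.size)
      beta, *_ = np.linalg.lstsq(X, y, rcond=None) # fixed-effects spline fit
      print(beta.round(3))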

  34. A Novel Multi-Receiver Signcryption Scheme with Complete Anonymity.

    PubMed

    Pang, Liaojun; Yan, Xuxia; Zhao, Huiyang; Hu, Yufei; Li, Huixian

    2016-01-01

    Anonymity, which is more and more important to multi-receiver schemes, has been taken into consideration by many researchers recently. To protect receiver anonymity, in 2010, the first multi-receiver scheme based on the Lagrange interpolating polynomial was proposed. To ensure the sender's anonymity, the concept of the ring signature was proposed in 2005, but this scheme was later proven to have some weaknesses, and at the same time a completely anonymous multi-receiver signcryption scheme was proposed. In this completely anonymous scheme, sender anonymity is achieved by improving the ring signature, and receiver anonymity is achieved by also using the Lagrange interpolating polynomial. Unfortunately, the Lagrange interpolation method was proven unable to protect the anonymity of receivers, because each authorized receiver could judge whether anyone else is authorized or not. Therefore, the completely anonymous multi-receiver signcryption mentioned above can only protect sender anonymity. In this paper, we propose a new completely anonymous multi-receiver signcryption scheme with a new polynomial technology used to replace the Lagrange interpolating polynomial, which can mix the identity information of receivers to save it as a ciphertext element and prevent the authorized receivers from verifying others. Along with receiver anonymity, the proposed scheme also provides sender anonymity. Meanwhile, decryption fairness and public verification are also provided.

  35. Unconditionally energy stable time stepping scheme for Cahn–Morral equation: Application to multi-component spinodal decomposition and optimal space tiling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tavakoli, Rouhollah, E-mail: rtavakoli@sharif.ir

    An unconditionally energy stable time stepping scheme is introduced to solve Cahn–Morral-like equations in the present study. It is constructed based on the combination of David Eyre's time stepping scheme and the Schur complement approach. Although the presented method is general and independent of the choice of homogeneous free energy density function term, logarithmic and polynomial energy functions are specifically considered in this paper. The method is applied to study the spinodal decomposition in multi-component systems and optimal space tiling problems. A penalization strategy is developed, in the case of the latter problem, to avoid trivial solutions. Extensive numerical experiments demonstrate the success and performance of the presented method. According to the numerical results, the method is convergent and energy stable, independent of the choice of time stepsize. Its MATLAB implementation is included in the appendix for the numerical evaluation of the algorithm and reproduction of the presented results. Highlights: • Extension of Eyre's convex–concave splitting scheme to multiphase systems. • Efficient solution of spinodal decomposition in multi-component systems. • Efficient solution of the least perimeter periodic space partitioning problem. • Development of a penalization strategy to avoid trivial solutions. • Presentation of a MATLAB implementation of the introduced algorithm.

  36. A solver for General Unilateral Polynomial Matrix Equation with Second-Order Matrices Over Prime Finite Fields

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-03-01

    The paper first considers the problem of finding solvents for arbitrary unilateral polynomial matrix equations with second-order matrices over prime finite fields from a practical point of view: we implement a solver for this problem. The solver's algorithm has two steps: the first is finding solvents having Jordan Normal Form (JNF); the second is finding solvents among the remaining matrices. The first step reduces to finding the roots of ordinary polynomials over finite fields; the second is essentially exhaustive search. The first step's algorithms make essential use of the theory of polynomial matrices. We estimate the practical duration of computations using our software implementation (showing, for example, that one cannot construct a unilateral matrix polynomial over a finite field having an arbitrary predefined number of solvents) and answer some questions of theoretical value.
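
    The exhaustive-search step is simple to sketch for small primes: enumerate all p^4 second-order matrices over GF(p) and keep those satisfying the monic unilateral equation (the example equation and p = 3 are assumptions for illustration, not the paper's test cases).

      import itertools
      import numpy as np

      p = 3                                        # prime field GF(3)

      def solvents(coeffs):
          """Brute-force 2x2 solvents X of X^m + A_{m-1} X^{m-1} + ... + A_0 = 0
          over GF(p); `coeffs` lists [A_0, ..., A_{m-1}] as 2x2 integer arrays."""
          m = len(coeffs)
          found = []
          for entries in itertools.product(range(p), repeat=4):
              X = np.array(entries).reshape(2, 2)
              acc = np.linalg.matrix_power(X, m) % p          # monic term X^m
              for k, A in enumerate(coeffs):
                  acc = (acc + A @ np.linalg.matrix_power(X, k)) % p
              if not acc.any():                    # all entries are 0 mod p
                  found.append(X)
          return found

      A0 = np.array([[2, 0], [0, 2]])              # X^2 + A0 = 0, i.e. X^2 = I mod 3
      sols = solvents([A0])
      print(len(sols), "solvents found; one of them:\n", sols[0])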

  37. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.
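
    For reference, the computation the FPT accelerates, two-dimensional cyclic convolution, can be reproduced with an ordinary FFT (a different algorithm than the polynomial transform, shown here only to define the target operation and check one output element directly).

      import numpy as np

      def cyclic_convolve_2d(a, b):
          """2-D cyclic convolution via the convolution theorem."""
          return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(b)))

      rng = np.random.default_rng(6)
      a = rng.integers(0, 10, (8, 8)).astype(float)
      b = rng.integers(0, 10, (8, 8)).astype(float)
      c = cyclic_convolve_2d(a, b)

      i, j = 3, 5                                  # direct O(n^4) check of one entry
      direct = sum(a[k, l] * b[(i - k) % 8, (j - l) % 8]
                   for k in range(8) for l in range(8))
      print(np.isclose(c[i, j], direct))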

  38. Perceptually informed synthesis of bandlimited classical waveforms using integrated polynomial interpolation.

    PubMed

    Välimäki, Vesa; Pekonen, Jussi; Nam, Juhan

    2012-01-01

    Digital subtractive synthesis is a popular music synthesis method, which requires oscillators that are aliasing-free in a perceptual sense. It is a research challenge to find computationally efficient waveform generation algorithms that produce similar-sounding signals to analog music synthesizers but which are free from audible aliasing. A technique for approximately bandlimited waveform generation is considered that is based on a polynomial correction function, which is defined as the difference of a non-bandlimited step function and a polynomial approximation of the ideal bandlimited step function. It is shown that the ideal bandlimited step function is equivalent to the sine integral, and that integrated polynomial interpolation methods can successfully approximate it. Integrated Lagrange interpolation and B-spline basis functions are considered for polynomial approximation. The polynomial correction function can be added onto samples around each discontinuity in a non-bandlimited waveform to suppress aliasing. Comparison against previously known methods shows that the proposed technique yields the best tradeoff between computational cost and sound quality. The superior method amongst those considered in this study is the integrated third-order B-spline correction function, which offers perceptually aliasing-free sawtooth emulation up to the fundamental frequency of 7.8 kHz at the sample rate of 44.1 kHz. © 2012 Acoustical Society of America.
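
    The correction-function family described here includes the widely used polyBLEP; the sketch below is the standard two-sample polynomial variant from the literature, not the paper's integrated B-spline correction, and the frequency and sample rate are arbitrary choices.

      import numpy as np

      def polyblep(t, dt):
          """Two-sample polynomial correction around a discontinuity at phase 0/1."""
          if t < dt:                               # just after the wrap
              x = t / dt
              return x + x - x * x - 1.0
          if t > 1.0 - dt:                         # just before the wrap
              x = (t - 1.0) / dt
              return x * x + x + x + 1.0
          return 0.0

      def saw(freq, sr, n):
          """Naive sawtooth with a polyBLEP correction subtracted at each wrap."""
          dt = freq / sr                           # phase increment per sample
          phase, out = 0.0, np.empty(n)
          for i in range(n):
              out[i] = 2.0 * phase - 1.0 - polyblep(phase, dt)
              phase += dt
              if phase >= 1.0:
                  phase -= 1.0
          return out

      y = saw(440.0, 44100, 512)                   # far less audible aliasing
      print(y[:4].round(3))                        # than the naive sawtooth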

  20. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  1. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that least-squares fits uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit with a polynomial of degree up to 100. All computations in the program are carried out in double-precision format for real numbers and long-integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
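
    A rough Python re-creation of the error-tolerance mode (not the original BASIC, and with illustrative data) might look like this: the degree is raised until the least-squares error falls below the user's tolerance.

    ```python
    import numpy as np

    def fit_to_tolerance(x, y, tol, max_degree=100):
        # Raise the degree until the RMS least-squares error meets the tolerance.
        for degree in range(1, max_degree + 1):
            coeffs = np.polyfit(x, y, degree)
            err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
            print(f"degree {degree}: rms error {err:.3e}")
            if err <= tol:
                return coeffs, err
        return coeffs, err

    x = np.linspace(0.0, 1.0, 50)          # uniformly spaced data, as in AKLSQF
    y = np.sin(2 * np.pi * x)
    coeffs, err = fit_to_tolerance(x, y, tol=1e-3)
    ```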

  2. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy, at a small additional computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.

  3. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy, at a small additional computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093
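
    The smoothing building block that both records refer to, kernel-weighted local polynomial regression, can be sketched as follows; the ODE-derived equality constraints that distinguish the proposed estimator are omitted, and the bandwidth, kernel, and data are illustrative.

    ```python
    import numpy as np

    def local_poly(x0, x, y, h, degree=2):
        # Kernel-weighted polynomial fit centered at x0.
        w = np.exp(-0.5 * ((x - x0) / h) ** 2)       # Gaussian kernel weights
        sw = np.sqrt(w)
        X = np.vander(x - x0, degree + 1, increasing=True)
        beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
        return beta[0], beta[1]      # fitted value and derivative at x0

    t = np.linspace(0.0, 10.0, 200)
    y = np.exp(-0.3 * t) + 0.02 * np.random.default_rng(0).standard_normal(200)
    fit, dfit = local_poly(5.0, t, y, h=0.6)
    print(fit, dfit)    # compare with exp(-1.5) and -0.3*exp(-1.5)
    ```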

  4. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.
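
    The closed form rests on writing conditional expectations of polynomial terms in terms of the estimate m and error variance P under a Gaussian closure. A quick numerical check of the identities used for second- to fourth-degree terms (illustrative values):

    ```python
    import numpy as np

    # Gaussian closure identities used to close the filtering equations:
    #   E[x^2] = m^2 + P
    #   E[x^3] = m^3 + 3 m P
    #   E[x^4] = m^4 + 6 m^2 P + 3 P^2
    m, P = 1.3, 0.4
    x = np.random.default_rng(0).normal(m, np.sqrt(P), 1_000_000)
    for k, closed in [(2, m**2 + P),
                      (3, m**3 + 3*m*P),
                      (4, m**4 + 6*m**2*P + 3*P**2)]:
        print(f"E[x^{k}]: sampled {np.mean(x**k):.4f}, closed form {closed:.4f}")
    ```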

  5. Random regression analyses using B-spline functions to model growth of Nellore cattle.

    PubMed

    Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G

    2012-02-01

    The objective of this study was to estimate (co)variance components using random regression on B-spline functions applied to weight records obtained from birth to adulthood. A total of 82,064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as a random covariate. The random effects were modeled using B-spline functions considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and the animal permanent environmental effect and two knots for the maternal additive genetic effect and the maternal permanent environmental effect, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight, such as at young ages, should be performed taking into account an increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions. The scope for modifying the growth curve of Nellore cattle so as to select for rapid growth at young ages while maintaining constant adult weight is limited.
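
    The covariate construction behind such a model can be sketched as follows: a quadratic B-spline basis with four knots (three segments, as in the selected model) evaluated at each recorded age. Knot placement is illustrative, and scipy >= 1.8 is assumed for BSpline.design_matrix.

    ```python
    import numpy as np
    from scipy.interpolate import BSpline

    age = np.linspace(1.0, 2920.0, 200)              # days, birth to ~8 years
    degree = 2                                       # quadratic segments
    knots = np.linspace(age.min(), age.max(), 4)     # 4 knots -> 3 segments
    t = np.r_[[knots[0]] * degree, knots, [knots[-1]] * degree]   # clamped
    Z = BSpline.design_matrix(age, t, degree).toarray()
    print(Z.shape)   # (200, 5): five random-regression covariates per record
    ```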

  6. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    PubMed

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, as well as (3) the within-participants vs. the between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationship of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m, body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) was evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model can be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
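
    A minimal sketch of the two competing models (illustrative data, and an assumed minimal-velocity threshold of 0.17 m/s): regress velocity on load with first- and second-order polynomials, then invert each fit at the threshold to predict the 1RM load.

    ```python
    import numpy as np

    load = np.array([20, 30, 40, 50, 60, 70], dtype=float)        # kg
    vel = np.array([1.30, 1.11, 0.92, 0.71, 0.50, 0.33])          # m/s
    v_1rm = 0.17      # assumed minimal velocity at 1RM (illustrative)

    lin = np.polyfit(load, vel, 1)       # first-order polynomial
    quad = np.polyfit(load, vel, 2)      # second-order polynomial

    rm_lin = (v_1rm - lin[1]) / lin[0]
    roots = np.roots([quad[0], quad[1], quad[2] - v_1rm])
    rm_quad = min((r.real for r in roots if abs(r.imag) < 1e-9),
                  key=lambda r: abs(r - rm_lin))   # root nearest the linear one
    print(f"predicted 1RM: linear {rm_lin:.1f} kg, quadratic {rm_quad:.1f} kg")
    ```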

  7. [Optimization of one-step pelletization technology of Biqiu granules by Plackett-Burman design and Box-Behnken response surface methodology].

    PubMed

    Zhang, Yan-jun; Liu, Li-li; Hu, Jun-hua; Wu, Yun; Chao, En-xiang; Xiao, Wei

    2015-11-01

    First, with the qualified rate of granules as the evaluation index, significant influencing factors were screened by Plackett-Burman design. Then, with the qualified rate and moisture content as the evaluation indexes, the significant factors that affect the one-step pelletization technology were further optimized by Box-Behnken design; the experimental data were fitted by multiple regression to a second-order polynomial equation, and response surface methodology was used for predictive analysis of the optimal technology. The best conditions were as follows: inlet air temperature of 85 degrees C, sample introduction speed of 33 r x min(-1), and relative density of the concentrated extract of 1.10. The one-step pelletization technology of Biqiu granules optimized by Plackett-Burman design and Box-Behnken response surface methodology was stable and feasible, with good predictability, providing a reliable basis for the industrialized production of Biqiu granules.
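
    A minimal sketch of the response-surface step, with invented factor levels and responses: fit the full second-order polynomial by multiple regression and locate the stationary point of the fitted surface.

    ```python
    import numpy as np

    # Face-centered design over two factors (levels are invented):
    # x1 = inlet air temperature (deg C), x2 = sample introduction speed (r/min).
    x1 = np.array([75, 95, 75, 95, 75, 95, 85, 85, 85], dtype=float)
    x2 = np.array([25, 25, 41, 41, 33, 33, 25, 41, 33], dtype=float)
    y = 90 - 0.03 * (x1 - 86)**2 - 0.06 * (x2 - 33)**2   # illustrative response

    X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)

    # Stationary point of the fitted surface: solve grad = 0.
    H = np.array([[2*b[4], b[3]], [b[3], 2*b[5]]])
    opt = np.linalg.solve(H, -b[1:3])
    print("predicted optimum (x1, x2):", opt)    # ~ (86, 33)
    ```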

  8. Modelling the breeding of Aedes Albopictus species in an urban area in Pulau Pinang using polynomial regression

    NASA Astrophysics Data System (ADS)

    Salleh, Nur Hanim Mohd; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Saad, Ahmad Ramli; Sulaiman, Husna Mahirah; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Polynomial regression is used to model a curvilinear relationship between a response variable and one or more predictor variables. It is a form of least squares linear regression that predicts a single response variable by decomposing the predictor variables into an nth-order polynomial. In a curvilinear relationship, each curve can have a number of extreme points up to one less than the order of the polynomial: a quadratic model has either a single maximum or minimum, whereas a cubic model can have both a relative maximum and a minimum. This study used quadratic modeling techniques to analyze the effects of environmental factors: temperature, relative humidity, and rainfall distribution on the breeding of Aedes albopictus, a type of Aedes mosquito. Data were collected at an urban area in south-west Penang from September 2010 until January 2011. The results indicated that the breeding of Aedes albopictus in the urban area is influenced by all three environmental characteristics. The number of mosquito eggs is estimated to reach a maximum value at a medium temperature, a medium relative humidity and a high rainfall distribution.

  9. STEP and STEPSPL: Computer programs for aerodynamic model structure determination and parameter estimation

    NASA Technical Reports Server (NTRS)

    Batterson, J. G.

    1986-01-01

    The successful parametric modeling of the aerodynamics for an airplane operating at high angles of attack or sideslip is performed in two phases. First the aerodynamic model structure must be determined, and second the associated aerodynamic parameters (stability and control derivatives) must be estimated for that model. The purpose of this paper is to document two versions of a stepwise regression computer program which were developed for the determination of airplane aerodynamic model structure and to provide two examples of their use on computer-generated data. References are provided for the application of the programs to real flight data. The two computer programs that are the subject of this report, STEP and STEPSPL, are written in FORTRAN IV (ANSI 1966) compatible with a CDC FTN4 compiler. Both programs are adaptations of a standard forward stepwise regression algorithm. The purpose of the adaptation is to facilitate the selection of an adequate mathematical model of the aerodynamic force and moment coefficients of an airplane from flight test data. The major difference between STEP and STEPSPL is in the basis for the model. The basis for the model in STEP is the standard polynomial Taylor series expansion of the aerodynamic function about some steady-state trim condition. Program STEPSPL utilizes a set of spline basis functions.
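
    A minimal forward stepwise selection in the spirit of STEP (not the NASA code; a real implementation would use an F-test entry criterion rather than a fixed number of terms): at each pass, add the candidate polynomial term that most reduces the residual sum of squares.

    ```python
    import numpy as np

    def rss_with(X, y, cols):
        # Residual sum of squares of an OLS fit on the selected columns.
        A = np.column_stack([np.ones(len(y)), X[:, cols]])
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        return np.sum((y - A @ beta) ** 2)

    def forward_stepwise(X, y, n_select):
        chosen, pool = [], list(range(X.shape[1]))
        for _ in range(n_select):
            best = min(pool, key=lambda j: rss_with(X, y, chosen + [j]))
            chosen.append(best)
            pool.remove(best)
        return chosen

    # Candidate pool: polynomial (Taylor-series-like) terms in angle of attack
    # (alpha) and sideslip (beta); the true model uses alpha and alpha*beta.
    rng = np.random.default_rng(1)
    alpha, beta = rng.uniform(-1, 1, (2, 300))
    X = np.column_stack([alpha, beta, alpha**2, alpha*beta, beta**2, alpha**3])
    y = 0.8*alpha - 0.3*alpha*beta + 0.05*rng.standard_normal(300)
    print(forward_stepwise(X, y, n_select=2))   # expected: [0, 3]
    ```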

  10. [Application of ordinary Kriging method in entomologic ecology].

    PubMed

    Zhang, Runjie; Zhou, Qiang; Chen, Cuixian; Wang, Shousong

    2003-01-01

    Geostatistics is a statistical method based on regionalized variables that uses the variogram as a tool to analyze the spatial structure and patterns of organisms. When fitting the variogram over a wide range, an optimal fit cannot be obtained automatically, but an interactive human-computer procedure can be used to optimize the parameters of the spherical models. In this paper, the method mentioned above and weighted polynomial regression were used to fit the one-step spherical model, the two-step spherical model and the linear function model, and the available nearby samples were used in the ordinary Kriging procedure, which provides a best linear unbiased estimate under the unbiasedness constraint. The sums of squared deviations between estimated and measured values were computed for the various theoretical models, and the corresponding graphs were shown. The results showed that the fit based on the two-step spherical model was the best, and that the one-step spherical model was better than the linear function model.
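
    A sketch of the model-fitting ingredient, with invented empirical semivariances: weighted least-squares fitting of the (one-step) spherical variogram model, using pair counts as weights.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def spherical(h, c0, c, a):
        # gamma(h) = c0 + c*(1.5*h/a - 0.5*(h/a)^3) for h <= a, c0 + c beyond a.
        h = np.asarray(h, dtype=float)
        g = c0 + c * (1.5 * h / a - 0.5 * (h / a) ** 3)
        return np.where(h <= a, g, c0 + c)

    lags = np.array([1, 2, 3, 4, 5, 6, 7, 8], dtype=float)
    gamma_hat = np.array([0.30, 0.52, 0.68, 0.80, 0.86, 0.88, 0.89, 0.88])
    pairs = np.array([400, 380, 350, 300, 260, 220, 180, 150])  # weights

    (c0, c, a), _ = curve_fit(spherical, lags, gamma_hat,
                              p0=[0.1, 0.8, 5.0], sigma=1.0 / np.sqrt(pairs))
    print(f"nugget={c0:.3f}, sill={c0 + c:.3f}, range={a:.2f}")
    ```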

  11. Change with age in regression construction of fat percentage for BMI in school-age children.

    PubMed

    Fujii, Katsunori; Mishima, Takaaki; Watanabe, Eiji; Seki, Kazuyoshi

    2011-01-01

    In this study, curvilinear regression was applied to the relationship between BMI and body fat percentage, and an analysis was done to see whether there are characteristic changes in that curvilinear regression from elementary to middle school. Then, by simultaneously investigating the changes with age in BMI and body fat percentage, the essential differences between BMI and body fat percentage were demonstrated. The subjects were 789 boys and girls (469 boys, 320 girls) aged 7.5 to 14.5 years from all parts of Japan who participated in regular sports activities. Body weight, total body water (TBW), soft lean mass (SLM), body fat percentage, and fat mass were measured with a body composition analyzer (Tanita BC-521 Inner Scan), using segmental and multi-frequency bioelectrical impedance analysis. Height was measured with a digital height measurer. Body mass index (BMI) was calculated as body weight (kg) divided by the square of height (m). The results for the validity of regression polynomials of body fat percentage against BMI showed that, for both boys and girls, first-order polynomials were valid in all school years. With regard to changes with age in BMI and body fat percentage, the results showed a temporary drop at 9 years in the aging distance curve in boys, followed by an increasing trend. Peaks were seen in the velocity curve at 9.7 and 11.9 years, but the MPV was presumed to be at 11.9 years. Among girls, a decreasing trend was seen in the aging distance curve, which was opposite to the changes in the aging distance curve for body fat percentage.

  12. Genome-wide association study on legendre random regression coefficients for the growth and feed intake trajectory on Duroc Boars.

    PubMed

    Howard, Jeremy T; Jiao, Shihui; Tiezzi, Francesco; Huang, Yijian; Gray, Kent A; Maltecca, Christian

    2015-05-30

    Feed intake and growth are economically important traits in swine production. Previous genome-wide association studies (GWAS) have utilized average daily gain or daily feed intake to identify regions that impact growth and feed intake across time. The use of longitudinal models in GWAS, such as random regression, allows SNPs with a heterogeneous effect across the trajectory to be characterized. The objective of this study is therefore to conduct a single-step GWAS (ssGWAS) on the animal polynomial coefficients for feed intake and growth. Corrected daily feed intake (DFIAdj) and average daily weight (DBWAvg) measurements on 8981 (n=525,240 observations) and 5643 (n=283,607 observations) animals were utilized in a random regression model using Legendre polynomials (order=2) and a relationship matrix that included genotyped and un-genotyped animals. A ssGWAS was conducted on the animal polynomial coefficients (intercept, linear and quadratic) for animals with genotypes (DFIAdj: n=855; DBWAvg: n=590). Regions were characterized based on the variance of 10-SNP sliding-window GEBV (WGEBV). A bootstrap analysis (n=1000) was conducted to declare significance. Heritability estimates across the trajectory ranged from 0.34 to 0.52 for DBWAvg and from 0.07 to 0.23 for DFIAdj. Genetic correlations across age classes were large and positive for both DBWAvg and DFIAdj, although age classes at the beginning of the trajectory had small to moderate genetic correlations with age classes towards the end for both traits. The WGEBV variance explained by significant regions (P<0.001) for each polynomial coefficient ranged from 0.2 to 0.9% for DBWAvg and from 0.3 to 1.01% for DFIAdj. The WGEBV variance explained by significant regions for the whole trajectory was 1.54 and 1.95% for DBWAvg and DFIAdj, respectively. Both traits identified candidate genes with functions related to metabolite and energy homeostasis, glucose and insulin signaling, and behavior. We have identified regions of the genome that have an impact on the intercept, linear and quadratic terms for DBWAvg and DFIAdj. These results provide preliminary evidence that individual growth and feed intake trajectories are impacted by different regions of the genome at different times.
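
    The windowed statistic used to characterize regions can be sketched as follows, with simulated genotypes and SNP effects standing in for the real data: the variance of each 10-SNP sliding window's contribution to the genomic breeding value, expressed as a percentage of the total.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    M = rng.integers(0, 3, size=(500, 2000)).astype(float)  # animals x SNPs
    u = rng.normal(0, 0.01, 2000)                           # SNP effects
    total_var = np.var(M @ u)

    # Variance of each 10-SNP sliding window's GEBV contribution.
    win_var = np.array([np.var(M[:, i:i+10] @ u[i:i+10])
                        for i in range(2000 - 9)])
    pct = 100 * win_var / total_var
    print("top window:", pct.argmax(), f"{pct.max():.2f}% of WGEBV variance")
    ```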

  13. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

    This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data to improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest that these polynomial techniques, especially with use of non-equidistant λ values, improve the accuracy of ΔF estimates without demanding additional simulations. We also provide general guidelines for use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
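
    The central idea reduces to a few lines: fit the thermodynamic integration data with a polynomial, then integrate the fit analytically over [0, 1]. The lambda values and slopes below are illustrative; non-equidistant lambda values are used, as the paper recommends.

    ```python
    import numpy as np

    # Illustrative <dF/dlambda> data at non-equidistant lambda values.
    lam = np.array([0.0, 0.05, 0.15, 0.35, 0.6, 0.85, 1.0])
    dfdl = np.array([12.1, 9.8, 6.4, 2.2, -1.5, -3.9, -4.8])

    coeffs = np.polyfit(lam, dfdl, deg=4)     # polynomial regression
    F = np.polyint(coeffs)                    # antiderivative coefficients
    delta_F = np.polyval(F, 1.0) - np.polyval(F, 0.0)
    print(f"estimated Delta-F = {delta_F:.3f} (units of dF/dlambda)")
    ```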

  14. Comparison of random regression test-day models for Polish Black and White cattle.

    PubMed

    Strabel, T; Szyda, J; Ptak, E; Jamrozik, J

    2005-10-01

    Test-day milk yields of first-lactation Black and White cows were used to select the model for routine genetic evaluation of dairy cattle in Poland. The population of Polish Black and White cows is characterized by small herd size, low level of production, and relatively early peak of lactation. Several random regression models for first-lactation milk yield were initially compared using the "percentage of squared bias" criterion and the correlations between true and predicted breeding values. Models with random herd-test-date effects, fixed age-season and herd-year curves, and random additive genetic and permanent environmental curves (Legendre polynomials of different orders were used for all regressions) were chosen for further studies. Additional comparisons included analyses of the residuals and shapes of variance curves in days in milk. The low production level and early peak of lactation of the breed required the use of Legendre polynomials of order 5 to describe age-season lactation curves. For the other curves, Legendre polynomials of order 3 satisfactorily described daily milk yield variation. Fitting third-order polynomials for the permanent environmental effect made it possible to adequately account for heterogeneous residual variance at different stages of lactation.
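
    The covariates these models share can be sketched as follows: normalized Legendre polynomials (here order 3) evaluated at days in milk standardized to [-1, 1], one row per test-day record. The DIM range is illustrative.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    dim = np.arange(5, 306)                              # days in milk 5..305
    x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0
    order = 3
    # Column j evaluates the normalized Legendre polynomial P_j at x.
    Phi = np.column_stack([legendre.legval(x, np.eye(order + 1)[j])
                           * np.sqrt((2 * j + 1) / 2.0)
                           for j in range(order + 1)])
    print(Phi.shape)   # (301, 4): covariates for a third-order random regression
    ```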

  15. Random regression models using different functions to model milk flow in dairy cows.

    PubMed

    Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Tonhati, H; Albuquerque, L G

    2014-09-12

    We analyzed 75,555 test-day milk flow records from 2175 primiparous Holstein cows that calved between 1997 and 2005. Milk flow was obtained by dividing the mean milk yield (kg) of the 3 daily milkings by the total milking time (min) and was expressed as kg/min. Milk flow was grouped into 43 weekly classes. The analyses were performed using single-trait random regression models that included direct additive genetic, permanent environmental, and residual random effects. In addition, the contemporary group and linear and quadratic effects of cow age at calving were included as fixed effects. A fourth-order orthogonal Legendre polynomial of days in milk was used to model the mean trend in milk flow. The additive genetic and permanent environmental covariance functions were estimated using random regression on Legendre polynomials and B-spline functions of days in milk. The model using a third-order Legendre polynomial for additive genetic effects and a sixth-order polynomial for permanent environmental effects, with 7 residual classes, proved to be the most adequate to describe variations in milk flow, and was also the most parsimonious. The heritability of milk flow estimated by the most parsimonious model was of moderate to high magnitude.

  16. Multi-species beam hardening calibration device for x-ray microtomography

    NASA Astrophysics Data System (ADS)

    Evershed, Anthony N. Z.; Mills, David; Davis, Graham

    2012-10-01

    Impact-source X-ray microtomography (XMT) is a widely-used benchtop alternative to synchrotron radiation microtomography. Since X-rays from a tube are polychromatic, however, greyscale 'beam hardening' artefacts are produced by the preferential absorption of low-energy photons in the beam path. A multi-material 'carousel' test piece was developed to offer a wider range of X-ray attenuations from well-characterised filters than single-material step wedges can produce practically, and optimization software was developed to produce a beam hardening correction by use of the Nelder-Mead optimization method, tuned for specimens composed of other materials (such as hydroxyapatite [HA] or barium for dental applications). The carousel test piece produced calibration polynomials reliably and with a significantly smaller discrepancy between the calculated and measured attenuations than the calibration step wedge previously in use. An immersion tank was constructed and used to simplify multi-material samples in order to negate the beam hardening effect of low atomic number materials within the specimen when measuring mineral concentration of higher-Z regions. When scanned in water at an acceleration voltage of 90 kV, a Scanco AG hydroxyapatite / poly(methyl methacrylate) calibration phantom closely approximates a single-material system, producing accurate hydroxyapatite concentration measurements. This system can then be corrected for beam hardening for the material of interest.

  17. Assessing the Multidimensional Relationship Between Medication Beliefs and Adherence in Older Adults With Hypertension Using Polynomial Regression.

    PubMed

    Dillon, Paul; Phillips, L Alison; Gallagher, Paul; Smith, Susan M; Stewart, Derek; Cousins, Gráinne

    2018-02-05

    The Necessity-Concerns Framework (NCF) is a multidimensional theory describing the relationship between patients' positive and negative evaluations of their medication, which interplay to influence adherence. Most studies evaluating the NCF have failed to account for the multidimensional nature of the theory, placing the separate dimensions of medication "necessity beliefs" and "concerns" onto a single dimension (e.g., the Beliefs about Medicines Questionnaire-difference score model). To assess the multidimensional effect of patient medication beliefs (concerns and necessity beliefs) on medication adherence using polynomial regression with response surface analysis. Community-dwelling older adults >65 years (n = 1,211) presenting their own prescription for antihypertensive medication to 106 community pharmacies in the Republic of Ireland rated their concerns and necessity beliefs regarding antihypertensive medications at baseline and their adherence to antihypertensive medication at 12 months via structured telephone interview. Confirmatory polynomial regression found the difference-score model to be inaccurate; subsequent exploratory analysis identified a quadratic model to be the best-fitting polynomial model. Adherence was lowest among those with strong medication concerns and weak necessity beliefs, and adherence was greatest for those with weak concerns and strong necessity beliefs (slope β = -0.77, p<.001; curvature β = -0.26, p = .004). However, novel nonreciprocal effects were also observed; patients with simultaneously high concerns and necessity beliefs had lower adherence than those with simultaneously low concerns and necessity beliefs (slope β = -0.36, p = .004; curvature β = -0.25, p = .003). The difference-score model fails to account for the potential nonreciprocal effects. Results extend evidence supporting the use of polynomial regression to assess the multidimensional effect of medication beliefs on adherence.
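
    A minimal sketch of confirmatory polynomial regression with response-surface indices, on simulated data: fit the full quadratic in necessity beliefs (N) and concerns (C), then derive the slope and curvature along the congruence (N = C) and incongruence (N = -C) lines, the quantities reported above.

    ```python
    import numpy as np

    # Simulated stand-in data; the fitted model is
    #   A = b0 + b1*N + b2*C + b3*N^2 + b4*N*C + b5*C^2
    rng = np.random.default_rng(3)
    N = rng.normal(0, 1, 1211)
    C = rng.normal(0, 1, 1211)
    A = 0.4*N - 0.5*C - 0.1*N*C + rng.normal(0, 0.5, 1211)

    X = np.column_stack([np.ones_like(N), N, C, N**2, N*C, C**2])
    b, *_ = np.linalg.lstsq(X, A, rcond=None)

    # Response-surface indices along the two reference lines.
    print("congruence (N =  C): slope", b[1] + b[2],
          "curvature", b[3] + b[4] + b[5])
    print("incongruence (N = -C): slope", b[1] - b[2],
          "curvature", b[3] - b[4] + b[5])
    ```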

  18. Detailed analysis of an optimized FPP-based 3D imaging system

    NASA Astrophysics Data System (ADS)

    Tran, Dat; Thai, Anh; Duong, Kiet; Nguyen, Thanh; Nehmetallah, Georges

    2016-05-01

    In this paper, we present a detailed analysis and a step-by-step implementation of an optimized fringe projection profilometry (FPP) based 3D shape measurement system. First, we propose a multi-frequency and multi-phase-shifting sinusoidal fringe pattern reconstruction approach to increase the accuracy and sensitivity of the system. Second, compensation of the phase error caused by the nonlinear transfer function of the projector and camera is performed through polynomial approximation. Third, phase unwrapping is performed using spatial and temporal techniques, and the tradeoff between processing speed and high accuracy is discussed in detail. Fourth, generalized camera and system calibration are developed for phase to real-world coordinate transformation. The calibration coefficients are estimated accurately using a reference plane and several gauge blocks with precisely known heights, employing a nonlinear least squares fitting method. Fifth, a texture is attached to the height profile by registering a 2D photograph to the 3D height map. The last step is to perform 3D image fusion and registration using an iterative closest point (ICP) algorithm for a full field of view reconstruction. The system is experimentally constructed using compact, portable, and low-cost off-the-shelf components. A MATLAB® based GUI is developed to control and synchronize the whole system.
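
    The phase-extraction step for N equally shifted patterns I_k = a + b*cos(phi + 2*pi*k/N) has a compact closed form; the sketch below recovers the wrapped phase from the discrete sine and cosine projections (synthetic single-pixel example, illustrative values).

    ```python
    import numpy as np

    def wrapped_phase(I):
        # I: shape (N, ...) stack of equally phase-shifted patterns.
        N = I.shape[0]
        k = np.arange(N).reshape(-1, *([1] * (I.ndim - 1)))
        s = np.sum(I * np.sin(2 * np.pi * k / N), axis=0)
        c = np.sum(I * np.cos(2 * np.pi * k / N), axis=0)
        return np.arctan2(-s, c)          # phase wrapped to (-pi, pi]

    phi_true = 1.234
    I = np.array([5 + 2*np.cos(phi_true + 2*np.pi*k/4) for k in range(4)])
    print(wrapped_phase(I), phi_true)     # the two values agree
    ```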

  19. Learning to Predict Combinatorial Structures

    NASA Astrophysics Data System (ADS)

    Vembu, Shankar

    2009-12-01

    The major challenge in designing a discriminative learning algorithm for predicting structured data is to address the computational issues arising from the exponential size of the output space. Existing algorithms make different assumptions to ensure efficient, polynomial time estimation of model parameters. For several combinatorial structures, including cycles, partially ordered sets, permutations and other graph classes, these assumptions do not hold. In this thesis, we address the problem of designing learning algorithms for predicting combinatorial structures by introducing two new assumptions: (i) The first assumption is that a particular counting problem can be solved efficiently. The consequence is a generalisation of the classical ridge regression for structured prediction. (ii) The second assumption is that a particular sampling problem can be solved efficiently. The consequence is a new technique for designing and analysing probabilistic structured prediction models. These results can be applied to solve several complex learning problems including but not limited to multi-label classification, multi-category hierarchical classification, and label ranking.

  20. Gaussian quadrature for multiple orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Coussement, Jonathan; van Assche, Walter

    2005-06-01

    We study multiple orthogonal polynomials of type I and type II, which have orthogonality conditions with respect to r measures. These polynomials are connected by their recurrence relation of order r+1. First we show a relation with the eigenvalue problem of a banded lower Hessenberg matrix Ln, containing the recurrence coefficients. As a consequence, we easily find that the multiple orthogonal polynomials of type I and type II satisfy a generalized Christoffel-Darboux identity. Furthermore, we explain the notion of multiple Gaussian quadrature (for proper multi-indices), which is an extension of the theory of Gaussian quadrature for orthogonal polynomials and was introduced by Borges. In particular, we show that the quadrature points and quadrature weights can be expressed in terms of the eigenvalue problem of Ln.
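
    For the classical r = 1 case the construction reduces to the familiar Golub-Welsch procedure: the banded Hessenberg matrix becomes the symmetric Jacobi matrix, whose eigendecomposition yields the Gaussian nodes and weights. A Hermite-weight sketch (not the multiple-orthogonality algorithm itself):

    ```python
    import numpy as np

    # Jacobi matrix for the Hermite weight exp(-x^2): off-diagonals sqrt(k/2).
    n = 5
    k = np.arange(1, n)
    J = np.diag(np.sqrt(k / 2.0), 1) + np.diag(np.sqrt(k / 2.0), -1)
    nodes, V = np.linalg.eigh(J)
    weights = np.sqrt(np.pi) * V[0] ** 2   # mu_0 = integral of exp(-x^2)

    # Quadrature is exact up to degree 2n-1; check against int x^2 exp(-x^2).
    print(np.sum(weights * nodes**2), np.sqrt(np.pi) / 2)
    ```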

  1. Random regression models using Legendre orthogonal polynomials to evaluate the milk production of Alpine goats.

    PubMed

    Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T

    2013-12-11

    The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to evaluate Alpine goats genetically and to estimate the parameters for test-day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct fitting orders for the fixed curve (2-5), the random genetic curve (1-7), and the permanent environmental curve (1-7), with different numbers of classes for residual variance (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best random regression model on Legendre orthogonal polynomials for genetic evaluation of test-day milk yield of Alpine goats considered a fixed curve of order 4, a curve of genetic additive effects of order 2, a curve of permanent environmental effects of order 7, and a minimum of 5 classes of residual variance, because it was the most economical model among those that were equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has a larger genetic component than the production peak and persistence. It is very important that the evaluation utilize the best combination of fixed, genetic additive and permanent environmental regressions, together with the number of classes of heterogeneous residual variance, for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of the estimates of parameters and the prediction of genetic values.

  2. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

    We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords an attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, for which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort, instead, is focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility. The mCov-SI are therefore proposed in order to bound this magnitude to [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contribution of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.

  3. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems for smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their ad hoc applications. Formally, this is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems; the previous publications have mainly been algorithm-based studies and simulation demonstrations. In this sense, this paper can be treated as a landmark for U-model-based research, moving it from the intuitive/heuristic stage to rigorous, formal, and comprehensive studies.

  4. The Necessity-Concerns-Framework: A Multidimensional Theory Benefits from Multidimensional Analysis

    PubMed Central

    Phillips, L. Alison; Diefenbach, Michael; Kronish, Ian M.; Negron, Rennie M.; Horowitz, Carol R.

    2014-01-01

    Background: Patients' medication-related concerns and necessity-beliefs predict adherence. Evaluation of the potentially complex interplay of these two dimensions has been limited because of methods that reduce them to a single dimension (difference scores). Purpose: We use polynomial regression to assess the multidimensional effect of stroke-event survivors' medication-related concerns and necessity-beliefs on their adherence to stroke-prevention medication. Methods: Survivors (n=600) rated their concerns, necessity-beliefs, and adherence to medication. Confirmatory and exploratory polynomial regression determined the best-fitting multidimensional model. Results: As posited by the Necessity-Concerns Framework (NCF), the greatest and lowest adherence was reported by those with strong necessity-beliefs/weak concerns and strong concerns/weak necessity-beliefs, respectively. However, as could not be assessed using a difference-score model, patients with ambivalent beliefs were less adherent than those exhibiting indifference. Conclusions: Polynomial regression allows for assessment of the multidimensional nature of the NCF. Clinicians/researchers should be aware that concerns and necessity dimensions are not polar opposites. PMID:24500078

  5. The necessity-concerns framework: a multidimensional theory benefits from multidimensional analysis.

    PubMed

    Phillips, L Alison; Diefenbach, Michael A; Kronish, Ian M; Negron, Rennie M; Horowitz, Carol R

    2014-08-01

    Patients' medication-related concerns and necessity-beliefs predict adherence. Evaluation of the potentially complex interplay of these two dimensions has been limited because of methods that reduce them to a single dimension (difference scores). We use polynomial regression to assess the multidimensional effect of stroke-event survivors' medication-related concerns and necessity beliefs on their adherence to stroke-prevention medication. Survivors (n = 600) rated their concerns, necessity beliefs, and adherence to medication. Confirmatory and exploratory polynomial regression determined the best-fitting multidimensional model. As posited by the necessity-concerns framework (NCF), the greatest and lowest adherence was reported by those with strong necessity-beliefs/weak concerns and those with strong concerns/weak necessity-beliefs, respectively. However, as could not be assessed using a difference-score model, patients with ambivalent beliefs were less adherent than those exhibiting indifference. Polynomial regression allows for assessment of the multidimensional nature of the NCF. Clinicians/researchers should be aware that concerns and necessity dimensions are not polar opposites.

  6. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

    Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks in image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection, and the extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme is presented, followed by an investigation of the influence of the sliding-window size and the polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
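
    The second step can be sketched line by line: fit each scan line's background with a low-order polynomial, excluding the masked foreground pixels, and subtract the fit. The synthetic image, mask, and degree are illustrative.

    ```python
    import numpy as np

    def flatten_lines(img, mask, degree=2):
        # Row-wise polynomial background subtraction with an exclusion mask.
        out = np.empty_like(img)
        x = np.arange(img.shape[1])
        for r in range(img.shape[0]):
            bg = ~mask[r]                           # background pixels only
            coeffs = np.polyfit(x[bg], img[r, bg], degree)
            out[r] = img[r] - np.polyval(coeffs, x)
        return out

    rng = np.random.default_rng(4)
    img = np.add.outer(np.linspace(0, 5, 128), 0.02 * np.arange(128))
    img += rng.normal(0, 0.05, img.shape)           # tilted, noisy background
    mask = np.zeros_like(img, dtype=bool)
    mask[40:60, 50:80] = True                       # segmented feature
    img[mask] += 3.0                                # convex foreground bump
    flat = flatten_lines(img, mask)
    ```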

  7. Solution of the mean spherical approximation for polydisperse multi-Yukawa hard-sphere fluid mixture using orthogonal polynomial expansions

    NASA Astrophysics Data System (ADS)

    Kalyuzhnyi, Yurij V.; Cummings, Peter T.

    2006-03-01

    The Blum-Høye [J. Stat. Phys. 19 317 (1978)] solution of the mean spherical approximation for a multicomponent multi-Yukawa hard-sphere fluid is extended to a polydisperse multi-Yukawa hard-sphere fluid. Our extension is based on the application of the orthogonal polynomial expansion method of Lado [Phys. Rev. E 54, 4411 (1996)]. Closed form analytical expressions for the structural and thermodynamic properties of the model are presented. They are given in terms of the parameters that follow directly from the solution. By way of illustration the method of solution is applied to describe the thermodynamic properties of the one- and two-Yukawa versions of the model.

  8. On Certain Wronskians of Multiple Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Lun; Filipuk, Galina

    2014-11-01

    We consider determinants of Wronskian type whose entries are multiple orthogonal polynomials associated with a path connecting two multi-indices. By assuming that the weight functions form an algebraic Chebyshev (AT) system, we show that the polynomials represented by the Wronskians keep a constant sign in some cases, while in some other cases oscillatory behavior appears, which generalizes classical results for orthogonal polynomials due to Karlin and Szegő. There are two applications of our results. The first application arises from the observation that the m-th moment of the average characteristic polynomials for multiple orthogonal polynomial ensembles can be expressed as a Wronskian of the type II multiple orthogonal polynomials. Hence, it is straightforward to obtain the distinct behavior of the moments for odd and even m in a special multiple orthogonal ensemble - the AT ensemble. As the second application, we derive some Turán type inequalities for multiple Hermite and multiple Laguerre polynomials (of two kinds). Finally, we study numerically the geometric configuration of zeros for the Wronskians of these multiple orthogonal polynomials. We observe that the zeros have regular configurations in the complex plane, which might be of independent interest.

  9. Frequency domain system identification methods - Matrix fraction description approach

    NASA Technical Reports Server (NTRS)

    Horta, Luca G.; Juang, Jer-Nan

    1993-01-01

    This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step toward obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum-order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.

  10. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field, and Gaussian elimination, play a fundamental role in understanding the triangularization process. In the case of polynomial matrices (matrices with entries from a ring), Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.

  11. Comparison of random regression models with Legendre polynomials and linear splines for production traits and somatic cell score of Canadian Holstein cows.

    PubMed

    Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G

    2008-09-01

    A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.

  12. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band-ratio ocean color (OC) algorithms are fourth-order polynomials whose parameters (i.e., coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions, with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. The PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of the GWR coefficients also shows that the spatial stationarity assumption in the empirical models is not likely valid.

  13. Estimation of Heat Transfer Coefficient in Squeeze Casting of Magnesium Alloy AM60 by Experimental Polynomial Extrapolation Method

    NASA Astrophysics Data System (ADS)

    Sun, Zhizhong; Niu, Xiaoping; Hu, Henry

    In this work, a 5-step casting mold with different wall thicknesses (3, 5, 8, 12, and 20 mm) was designed, and squeeze casting of magnesium alloy AM60 was performed in a hydraulic press. The casting-die interfacial heat transfer coefficients (IHTC) in the 5-step casting were determined from experimental thermal history data throughout the die and inside the casting, which were recorded by fine type-K thermocouples. With the measured temperatures, the heat flux and IHTC were evaluated using the polynomial curve fitting method. The results show that the wall thickness affects the IHTC peak values significantly: the IHTC value for the thick step is higher than that for the thin steps.

  14. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    From direct observations and from facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method named higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal originate in the brain’s motivational circuits. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states.

  15. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    PubMed Central

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    From direct observations and from facial, vocal, gestural, physiological, and central nervous signals, estimating human affective states through computational models such as multivariate linear-regression analysis, support vector regression, and artificial neural networks has been proposed in the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly adopt complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method named higher-order multivariable polynomial regression to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method is able to obtain correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide certain indirect evidence that valence and arousal originate in the brain’s motivational circuits. Thus, the proposed method can serve as a novel approach for efficiently estimating human affective states. PMID:26996254

  16. A two-step, fourth-order method with energy preserving properties

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Iavernaro, Felice; Trigiante, Donato

    2012-09-01

    We introduce a family of fourth-order two-step methods that preserve the energy function of canonical polynomial Hamiltonian systems. As is the case with linear multistep and one-leg methods, a prerogative of the new formulae is that the associated nonlinear systems to be solved at each step of the integration procedure have the very same dimension as the underlying continuous problem. The key tools in the new methods are the line integral associated with a conservative vector field (such as the one defined by a Hamiltonian dynamical system) and its discretization obtained by the aid of a quadrature formula. Energy conservation is equivalent to the requirement that the quadrature is exact, which turns out to be always the case when the Hamiltonian function is a polynomial and the degree of precision of the quadrature formula is high enough. The non-polynomial case is also discussed, and a number of test problems are finally presented in order to compare the behavior of the new methods with the theoretical results.

  17. Tensor spherical harmonics theories on the exact nature of the elastic fields of a spherically anisotropic multi-inhomogeneous inclusion

    NASA Astrophysics Data System (ADS)

    Shodja, H. M.; Khorshidi, A.

    2013-04-01

    Eshelby's theories on the nature of the disturbance strains due to polynomial eigenstrains inside an isotropic ellipsoidal inclusion, and on the form of the homogenizing eigenstrains corresponding to remote polynomial loadings in the equivalent inclusion method (EIM), are not valid for spherically anisotropic inclusions and inhomogeneities. Materials with spherically anisotropic behavior are frequently encountered in nature, for example, some graphite particles or polyethylene spherulites. Moreover, multi-inclusions/inhomogeneities/inhomogeneous inclusions have abundant engineering and scientific applications, and their exact theoretical treatment would be of great value. The present work is devoted to the development of a mathematical framework for the exact treatment of a spherical multi-inhomogeneous inclusion with spherically anisotropic constituents embedded in an unbounded isotropic matrix. The formulations herein are based on tensor spherical harmonics having orthogonality and completeness properties. For a polynomial eigenstrain field and remote applied loading, several theorems on the exact closed-form expressions of the elastic fields associated with the matrix and all the phases of the inhomogeneous inclusion are stated and proved. Several classes of impotent eigenstrain fields associated with a generally anisotropic inclusion, as well as with isotropic and spherically anisotropic multi-inclusions, are also introduced. The presented theories are useful for obtaining highly accurate solutions of desired accuracy when the constituent phases of the multi-inhomogeneous inclusion are made of functionally graded materials (FGMs).

  18. Principal polynomial analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus

    2014-11-01

    This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves instead of straight lines. In contrast to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding the identified features in the input domain, where the data have physical meaning; it also allows evaluating the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows an easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. And fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.
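
    A toy sketch of one PPA deflation step under simplifying assumptions (leading direction taken from PCA, residual coordinates regressed on a univariate polynomial of the scores); this is our reading of the construction, not the authors' code.

    ```python
    # Hedged sketch: one "principal polynomial" deflation step.
    import numpy as np

    def ppa_step(X, degree=2):
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        v = Vt[0]                          # leading principal direction
        t = Xc @ v                         # scores along that direction
        R = Xc - np.outer(t, v)            # orthogonal residual part
        # Univariate polynomial regression of each residual coordinate on t.
        B = np.polynomial.polynomial.polyvander(t, degree)   # (n, degree+1)
        C, *_ = np.linalg.lstsq(B, R, rcond=None)
        return t, R - B @ C                # retained feature, deflated residual

    rng = np.random.default_rng(1)
    t0 = rng.uniform(-2, 2, 200)
    X = np.c_[t0, t0**2 + 0.05*rng.normal(size=200)]   # noisy parabola
    feature, deflated = ppa_step(X)
    print("residual variance after one PPA step:", deflated.var())
    ```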

  19. A new sampling scheme for developing metamodels with the zeros of Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Wu, Jinglai; Luo, Zhen; Zhang, Nong; Zhang, Yunqing

    2015-09-01

    The accuracy of metamodelling is determined by both the sampling and approximation. This article proposes a new sampling method based on the zeros of Chebyshev polynomials to capture the sampling information effectively. First, the zeros of one-dimensional Chebyshev polynomials are applied to construct Chebyshev tensor product (CTP) sampling, and the CTP is then used to construct high-order multi-dimensional metamodels using the 'hypercube' polynomials. Secondly, the CTP sampling is further enhanced to develop Chebyshev collocation method (CCM) sampling, to construct the 'simplex' polynomials. The samples of CCM are randomly and directly chosen from the CTP samples. Two widely studied sampling methods, namely the Smolyak sparse grid and Hammersley, are used to demonstrate the effectiveness of the proposed sampling method. Several numerical examples are utilized to validate the approximation accuracy of the proposed metamodel under different dimensions.
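
    A small sketch of the sampling construction as we read it: Chebyshev zeros in one dimension, their tensor product (CTP), and a random subset standing in for the CCM samples.

    ```python
    # Hedged sketch of CTP/CCM-style sampling on [-1, 1]^2.
    import numpy as np

    def chebyshev_zeros(m):
        """Zeros of the degree-m Chebyshev polynomial T_m on [-1, 1]."""
        k = np.arange(1, m + 1)
        return np.cos((2*k - 1) * np.pi / (2*m))

    z = chebyshev_zeros(5)
    ctp = np.array(np.meshgrid(z, z)).T.reshape(-1, 2)       # 2-D tensor product grid
    rng = np.random.default_rng(2)
    ccm = ctp[rng.choice(len(ctp), size=10, replace=False)]  # random subset (CCM-like)
    print(ctp.shape, ccm.shape)                              # (25, 2) (10, 2)
    ```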

  20. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 and 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.
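
    For concreteness, a minimal sketch of the building block used throughout such test-day models: evaluating a Legendre basis over days in milk rescaled to [-1, 1] (the DIM range and the order are illustrative, not the study's settings).

    ```python
    # Hedged sketch: Legendre basis for a random regression test-day model.
    import numpy as np
    from numpy.polynomial import legendre

    dim = np.arange(5, 306)                               # days in milk (illustrative)
    x = 2*(dim - dim.min())/(dim.max() - dim.min()) - 1   # rescale DIM to [-1, 1]
    Phi = legendre.legvander(x, 4)                        # basis up to order 4
    print(Phi.shape)                                      # (301, 5): L_0 ... L_4
    ```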

  1. Correlation and prediction of dynamic human isolated joint strength from lean body mass

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Hasson, Scott M.; Aldridge, Ann M.; Maida, James C.; Woolford, Barbara J.

    1992-01-01

    A relationship between a person's lean body mass and the amount of maximum torque that can be produced with each isolated joint of the upper extremity was investigated. The maximum dynamic isolated joint torque (upper extremity) on 14 subjects was collected using a dynamometer multi-joint testing unit. These data were reduced to a table of coefficients of second degree polynomials, computed using a least squares regression method. All the coefficients were then organized into look-up tables, a compact and convenient storage/retrieval mechanism for the data set. Data from each joint, direction and velocity, were normalized with respect to that joint's average and merged into files (one for each curve for a particular joint). Regression was performed on each one of these files to derive a table of normalized population curve coefficients for each joint axis, direction, and velocity. In addition, a regression table which included all upper extremity joints was built which related average torque to lean body mass for an individual. These two tables are the basis of the regression model which allows the prediction of dynamic isolated joint torques from an individual's lean body mass.
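
    A minimal sketch of the reduction described above: a least-squares second-degree polynomial of torque against joint angle, stored in a lookup table keyed by joint, direction, and velocity. All names and numbers are illustrative, not the NASA data set.

    ```python
    # Hedged sketch: second-degree polynomial torque curves in a lookup table.
    import numpy as np

    rng = np.random.default_rng(3)
    angles = np.linspace(0, 120, 25)                      # joint angle, deg (hypothetical)
    torque = 40 + 0.5*angles - 0.004*angles**2 + rng.normal(0, 1, 25)  # Nm (hypothetical)

    coeffs = np.polyfit(angles, torque, deg=2)            # least-squares quadratic fit
    lookup = {("elbow", "flexion", 30): coeffs}           # key: joint, direction, deg/s
    print(np.polyval(lookup[("elbow", "flexion", 30)], 45.0))  # predicted torque at 45 deg
    ```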

  2. Periodicity analysis of tourist arrivals to Banda Aceh using smoothing SARIMA approach

    NASA Astrophysics Data System (ADS)

    Miftahuddin, Helida, Desri; Sofyan, Hizir

    2017-11-01

    Forecasting the number of tourist arrivals entering a region is needed for tourism businesses and for economic and industrial policy, so statistical modeling needs to be conducted. Banda Aceh is the capital of Aceh province, where economic activity is driven largely by the services sector, one component of which is tourism. Therefore, a prediction of the number of tourist arrivals is needed to develop further policies. The identification results indicate that the foreign-tourist arrival data for Banda Aceh contain both trend and seasonal components. The number of arrivals appears to be influenced by external factors, such as economics, politics, and the holiday season, which caused structural breaks in the data. Trend patterns are detected using polynomial regression with quadratic and cubic approaches, while seasonality is detected by periodic polynomial regression with quadratic and cubic approaches. To model data with seasonal effects, one statistical method that can be used is SARIMA (Seasonal Autoregressive Integrated Moving Average). The results showed that, after smoothing, the best method for detecting the trend pattern is the cubic polynomial regression approach, with the modified model and a multiplicative periodicity of 12 months (AIC = 70.52), while the best method for detecting the seasonal pattern is the cubic periodic polynomial regression approach, with the modified model and a multiplicative periodicity of 12 months (AIC = 73.37). Furthermore, the best model to predict the number of foreign tourist arrivals to Banda Aceh in 2017 to 2018 is SARIMA(0,1,1)(1,1,0) with a MAPE of 26%.
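
    A hedged sketch of fitting the reported final model, SARIMA(0,1,1)(1,1,0) with period 12, using statsmodels on a synthetic monthly series (the Banda Aceh data are not reproduced here).

    ```python
    # Sketch: SARIMA(0,1,1)(1,1,0)_12 on a synthetic trend+seasonal series.
    import numpy as np
    from statsmodels.tsa.statespace.sarimax import SARIMAX

    rng = np.random.default_rng(4)
    n = 72                                                  # six years of monthly data
    y = (100 + 2*np.arange(n)                               # trend
         + 20*np.sin(2*np.pi*np.arange(n)/12)               # 12-month seasonality
         + rng.normal(0, 5, n))

    model = SARIMAX(y, order=(0, 1, 1), seasonal_order=(1, 1, 0, 12))
    result = model.fit(disp=False)
    print("AIC:", result.aic)
    print("12-month forecast:", result.forecast(steps=12))
    ```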

  3. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    NASA Astrophysics Data System (ADS)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression data-modelling strategy has been applied to study small-scale, short-term coastal morphodynamics, given its capability to treat a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to achieve a balance between the computational load and the reliability of estimations across the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about the increase of uncertainty with the extension of the extrapolation time of the estimation. The overlapping rate between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy.

  4. Multiple-trait random regression models for the estimation of genetic parameters for milk, fat, and protein yield in buffaloes.

    PubMed

    Borquis, Rusbel Raul Aspilcueta; Neto, Francisco Ribeiro de Araujo; Baldi, Fernando; Hurtado-Lugo, Naudin; de Camargo, Gregório M F; Muñoz-Berrocal, Milthon; Tonhati, Humberto

    2013-09-01

    In this study, genetic parameters for test-day milk, fat, and protein yield were estimated for the first lactation. The data analyzed consisted of 1,433 first lactations of Murrah buffaloes, daughters of 113 sires from 12 herds in the state of São Paulo, Brazil, with calvings from 1985 to 2007. Ten-month classes of lactation days were considered for the test-day yields. The (co)variance components for the 3 traits were estimated using regression analyses by Bayesian inference, applying an animal model via Gibbs sampling. The contemporary groups were defined as herd-year-month of the test day. In the model, the random effects were additive genetic, permanent environment, and residual. The fixed effects were contemporary group and number of milkings (1 or 2), the linear and quadratic effects of the covariable age of the buffalo at calving, as well as the mean lactation curve of the population, which was modeled by orthogonal Legendre polynomials of fourth order. The random effects for the traits studied were modeled by Legendre polynomials of third and fourth order for additive genetic and permanent environment, respectively; the residual variances were modeled considering 4 residual classes. The heritability estimates for the traits were moderate (from 0.21-0.38), with higher estimates in the intermediate lactation phase. The genetic correlation estimates within and among the traits varied from 0.05 to 0.99. The results indicate that selection on any test day for these traits will result in indirect genetic gains for milk, fat, and protein yield in all periods of the lactation curve. The accuracy associated with estimated breeding values obtained using multi-trait random regression was slightly higher (around 8%) compared with single-trait random regression. This difference may be due to the greater amount of information available per animal. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  5. A modified interval symmetric single step procedure ISS-5D for simultaneous inclusion of polynomial zeros

    NASA Astrophysics Data System (ADS)

    Sham, Atiyah W. M.; Monsi, Mansor; Hassan, Nasruddin; Suleiman, Mohamed

    2013-04-01

    The aim of this paper is to present a new modified interval symmetric single-step procedure, ISS-5D, which extends the previous procedure ISS1. The ISS-5D method produces successively smaller intervals that are guaranteed to still contain the zeros. The efficiency of the method is measured in terms of CPU time and the number of iterations. The procedure is run on five test polynomials and the results obtained are shown in this paper.

  6. On Everhart Method

    NASA Astrophysics Data System (ADS)

    Pârv, Bazil

    This paper deals with the Everhart numerical integration method, a well-known method in astronomical research. This single-step method is widely used for the numerical integration of the equations of motion of celestial bodies. Within an integration step, the method uses unequally spaced substeps, defined by the roots of the so-called generating polynomial of Everhart's method. For this polynomial, the paper proposes and proves new recurrence formulae. The Maple computer algebra system was used to find and prove these formulae; again, Maple proved well suited and easy to use in mathematical research.

  7. Estimation of genetic parameters related to eggshell strength using random regression models.

    PubMed

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

    This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where the eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRMs), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was included with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (> 0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRM suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.

  8. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric

    2015-04-01

    In numerical dosimetry, recent advances in high-performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions, with least-angle regression used as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross-validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. LARS-Kriging-PC appears to perform better than the two other approaches, with a significant accuracy improvement over the ordinary Kriging or the sparse polynomial chaos, depending on the studied case. This approach seems to be an optimal solution between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
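
    A crude two-stage stand-in for the LARS-Kriging-PC idea (not the paper's estimator, which embeds the selected polynomials directly as the universal-Kriging trend): LARS selects sparse polynomial terms, and a Gaussian process then models the residual around that trend.

    ```python
    # Hedged sketch: sparse polynomial trend (LARS) + GP on the residual.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import Lars
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(5)
    X = rng.uniform(-1, 1, size=(40, 2))
    y = 1 + X[:, 0]**2 - 0.5*X[:, 1] + 0.1*np.sin(6*X[:, 0]) + 0.01*rng.normal(size=40)

    Phi = PolynomialFeatures(degree=3).fit_transform(X)   # candidate polynomial terms
    lars = Lars(n_nonzero_coefs=5).fit(Phi, y)            # sparse polynomial "trend"
    gp = GaussianProcessRegressor(kernel=RBF(0.5), alpha=1e-4)
    gp.fit(X, y - lars.predict(Phi))                      # GP on the trend residual
    y_hat = lars.predict(Phi) + gp.predict(X)
    print("max abs training error:", np.max(np.abs(y_hat - y)))
    ```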

  9. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials, and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  10. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  11. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    PubMed

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

    Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for additive genetic and permanent environmental effects, indicates that, in practice, the same bulls would be selected by both models. The last model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. Genetic analysis of groups of mid-infrared predicted fatty acids in milk.

    PubMed

    Narayana, S G; Schenkel, F S; Fleming, A; Koeck, A; Malchiodi, F; Jamrozik, J; Johnston, J; Sargolzaei, M; Miglior, F

    2017-06-01

    The objective of this study was to investigate genetic variability of mid-infrared predicted fatty acid groups in Canadian Holstein cattle. Genetic parameters were estimated for 5 groups of fatty acids: short-chain (4 to 10 carbons), medium-chain (11 to 16 carbons), long-chain (17 to 22 carbons), saturated, and unsaturated fatty acids. The data set included 49,127 test-day records from 10,029 first-lactation Holstein cows in 810 herds. The random regression animal test-day model included days in milk, herd-test date, and age-season of calving (polynomial regression) as fixed effects, herd-year of calving, animal additive genetic effect, and permanent environment effects as random polynomial regressions, and random residual effect. Legendre polynomials of the third degree were selected for the fixed regression for age-season of calving effect and Legendre polynomials of the fourth degree were selected for the random regression for animal additive genetic, permanent environment, and herd-year effect. The average daily heritability over the lactation for the medium-chain fatty acid group (0.32) was higher than for the short-chain (0.24) and long-chain (0.23) fatty acid groups. The average daily heritability for the saturated fatty acid group (0.33) was greater than for the unsaturated fatty acid group (0.21). Estimated average daily genetic correlations were positive among all fatty acid groups and ranged from moderate to high (0.63-0.96). The genetic correlations illustrated similarities and differences in their origin and the makeup of the groupings based on chain length and saturation. These results provide evidence for the existence of genetic variation in mid-infrared predicted fatty acid groups, and the possibility of improving milk fatty acid profile through genetic selection in Canadian dairy cattle. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  13. Piecewise polynomial representations of genomic tracks.

    PubMed

    Tarabichi, Maxime; Detours, Vincent; Konopka, Tomasz

    2012-01-01

    Genomic data from micro-array and sequencing projects consist of associations of measured values to chromosomal coordinates. These associations can be thought of as functions in one dimension and can thus be stored, analyzed, and interpreted as piecewise-polynomial curves. We present a general framework for building piecewise polynomial representations of genome-scale signals and illustrate some of its applications via examples. We show that piecewise constant segmentation, a typical step in copy-number analyses, can be carried out within this framework for both array and (DNA) sequencing data offering advantages over existing methods in each case. Higher-order polynomial curves can be used, for example, to detect trends and/or discontinuities in transcription levels from RNA-seq data. We give a concrete application of piecewise linear functions to diagnose and quantify alignment quality at exon borders (splice sites). Our software (source and object code) for building piecewise polynomial models is available at http://sourceforge.net/projects/locsmoc/.

  14. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and analysis of variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices when compared to polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. In order to address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD, with the PDD coefficients computed by regression. During this adaptive procedure, the model representation by PDD contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.
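
    A hedged sketch of the stepwise-retention idea on a one-dimensional toy problem, with Legendre terms standing in for the PDD component functions; the selection rule below (squared correlation with the current residual) is an illustrative choice, not the authors' criterion.

    ```python
    # Sketch: greedily retain the polynomial term that best explains the residual.
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(6)
    x = rng.uniform(-1, 1, 200)
    y = (0.8*legendre.legval(x, [0, 1])              # L_1 component
         + 0.3*legendre.legval(x, [0, 0, 0, 1])      # L_3 component
         + 0.02*rng.normal(size=200))

    basis = legendre.legvander(x, 8)                 # columns: L_0 ... L_8
    selected = [0]                                   # always keep the constant term
    coef, *_ = np.linalg.lstsq(basis[:, selected], y, rcond=None)
    resid = y - basis[:, selected] @ coef
    while True:
        cand = [j for j in range(1, basis.shape[1]) if j not in selected]
        gains = [np.corrcoef(basis[:, j], resid)[0, 1]**2 for j in cand]
        if not cand or max(gains) < 1e-3:            # stop when the gain is tiny
            break
        selected.append(cand[int(np.argmax(gains))])
        coef, *_ = np.linalg.lstsq(basis[:, selected], y, rcond=None)
        resid = y - basis[:, selected] @ coef
    print("retained Legendre degrees:", sorted(selected))   # expect [0, 1, 3]
    ```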

  15. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination, for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM-operator-based method when using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.
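
    For illustration, a minimal single-tone frequency estimator, the plain FFT-peak approach, which is among the simplest candidates such a comparison would cover; the sampling rate and tone frequency below are arbitrary.

    ```python
    # Sketch: coarse single-tone frequency estimation via the FFT peak.
    import numpy as np

    fs, n = 1000.0, 1024                             # sampling rate (Hz), samples
    t = np.arange(n) / fs
    signal = np.exp(1j * 2*np.pi * 123.4 * t)        # single complex tone
    k = np.argmax(np.abs(np.fft.fft(signal)))        # index of the spectral peak
    print("estimated frequency:", k * fs / n, "Hz")  # bin-limited, ~123 Hz
    ```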

  16. Wavefront reconstruction for multi-lateral shearing interferometry using difference Zernike polynomials fitting

    NASA Astrophysics Data System (ADS)

    Liu, Ke; Wang, Jiannian; Wang, Hai; Li, Yanqiu

    2018-07-01

    For the multi-lateral shearing interferometers (multi-LSIs), the measurement accuracy can be enhanced by estimating the wavefront under test with the multidirectional phase information encoded in the shearing interferogram. Usually the multi-LSIs reconstruct the test wavefront from the phase derivatives in multiple directions using the discrete Fourier transform (DFT) method, which is only suitable for small shear ratios and is relatively sensitive to noise. To improve the accuracy of multi-LSIs, wavefront reconstruction from the multidirectional phase differences using the difference Zernike polynomials fitting (DZPF) method is proposed in this paper. For the DZPF method applied in the quadriwave LSI, difference Zernike polynomials in only two orthogonal shear directions are required to represent the phase differences in multiple shear directions. In this way, the test wavefront can be reconstructed from the phase differences in multiple shear directions using a noise-variance weighted least-squares method with almost no extra computational burden, compared with the usual recovery from the phase differences in two orthogonal directions. Numerical simulation results show that the DZPF method can maintain high reconstruction accuracy over a wider range of shear ratios and has much better anti-noise performance than the DFT method. A null test experiment of the quadriwave LSI has been conducted, and the experimental results show that the measurement accuracy of the quadriwave LSI can be improved from 0.0054 λ rms to 0.0029 λ rms (λ = 632.8 nm) by substituting the DFT method with the proposed DZPF method in the wavefront reconstruction process.

  17. Inferring genetic parameters of lactation in Tropical Milking Criollo cattle with random regression test-day models.

    PubMed

    Santellano-Estrada, E; Becerril-Pérez, C M; de Alba, J; Chang, Y M; Gianola, D; Torres-Hernández, G; Ramírez-Valverde, R

    2008-11-01

    This study inferred genetic and permanent environmental variation of milk yield in Tropical Milking Criollo cattle and compared 5 random regression test-day models using Wilmink's function and Legendre polynomials. Data consisted of 15,377 test-day records from 467 Tropical Milking Criollo cows that calved between 1974 and 2006 in the tropical lowlands of the Gulf Coast of Mexico and in southern Nicaragua. Estimated heritabilities of test-day milk yields ranged from 0.18 to 0.45, and repeatabilities ranged from 0.35 to 0.68 for the period spanning from 6 to 400 d in milk. Genetic correlation between days in milk 10 and 400 was around 0.50 but greater than 0.90 for most pairs of test days. The model that used first-order Legendre polynomials for additive genetic effects and second-order Legendre polynomials for permanent environmental effects gave the smallest residual variance and was also favored by the Akaike information criterion and likelihood ratio tests.

  18. Introduction to methodology of dose-response meta-analysis for binary outcome: With application on software.

    PubMed

    Zhang, Chao; Jia, Pengli; Yu, Liu; Xu, Chang

    2018-05-01

    Dose-response meta-analysis (DRMA) is widely applied to investigate the dose-specific relationship between independent and dependent variables. Such methods have been in use for over 30 years and are increasingly employed in healthcare and clinical decision-making. In this article, we give an overview of the methodology used in DRMA, summarizing the commonly used regression models and pooling methods, and we use an example to illustrate how to conduct a DRMA with these methods. Five regression models are illustrated for fitting the dose-response relationship: linear regression, piecewise regression, natural polynomial regression, fractional polynomial regression, and restricted cubic spline regression. Two types of pooling approaches, the one-stage approach and the two-stage approach, are illustrated for pooling the dose-response relationship across studies. The example showed similar results among these models. Several dose-response meta-analysis methods can be used to investigate the relationship between exposure level and the risk of an outcome; however, the methodology of DRMA still needs to be improved. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  19. Shock Capturing with PDE-Based Artificial Viscosity for an Adaptive, Higher-Order Discontinuous Galerkin Finite Element Method

    DTIC Science & Technology

    2008-06-01

    Geometry Interpolation: the function space V_p^H consists of discontinuous piecewise polynomials. This work used a polynomial basis for V_p^H such... between a piecewise-constant and smooth variation of viscosity in both a one-dimensional and a multi-dimensional setting. Before continuing with the... inviscid, transonic flow past a NACA 0012 at zero angle of attack and a freestream Mach number of M∞ = 0.95.

  20. Numeric Function Generators Using Decision Diagrams for Discrete Functions

    DTIC Science & Technology

    2009-05-01

    Taylor series and Chebyshev series. Since polynomial functions can be realized with multipliers and adders, any numeric function can be realized in... NFGs from the decision diagrams. Since numeric functions can be expanded into polynomial functions, such as a Taylor series, in this section we use...

  1. Leader-follower value congruence in social responsibility and ethical satisfaction: a polynomial regression analysis.

    PubMed

    Kang, Seung-Wan; Byun, Gukdo; Park, Hun-Joon

    2014-12-01

    This paper presents empirical research into the relationship between leader-follower value congruence in social responsibility and the level of ethical satisfaction for employees in the workplace. 163 dyads were analyzed, each consisting of a team leader and an employee working at a large manufacturing company in South Korea. Following current methodological recommendations for congruence research, polynomial regression and response surface modeling methodologies were used to determine the effects of value congruence. Results indicate that leader-follower value congruence in social responsibility was positively related to the ethical satisfaction of employees. Furthermore, employees' ethical satisfaction was stronger when aligned with a leader with high social responsibility. The theoretical and practical implications are discussed.
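
    A hedged sketch of the response-surface setup typically used in such congruence studies: the outcome is regressed on X, Y, X^2, X*Y, and Y^2 (the data below are synthetic placeholders, not the study's 163 dyads).

    ```python
    # Sketch: quadratic polynomial (response-surface) regression.
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(7)
    X = rng.normal(size=163)          # leader score (synthetic placeholder)
    Y = rng.normal(size=163)          # follower score (synthetic placeholder)
    Z = 0.4*X + 0.4*Y - 0.3*(X - Y)**2 + rng.normal(0, 0.5, 163)  # satisfaction

    design = sm.add_constant(np.column_stack([X, Y, X**2, X*Y, Y**2]))
    fit = sm.OLS(Z, design).fit()
    print(fit.params)                 # b1..b5 feed the response-surface tests
    ```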

  2. Polynomials to model the growth of young bulls in performance tests.

    PubMed

    Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B

    2014-03-01

    The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais + 11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and the additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit to the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. Fitting random regression models with different types of polynomials (Legendre polynomials or B-splines) affected neither the genetic parameter estimates nor the ranking of the Nellore young bulls. However, fitting different types of polynomials affected the genetic parameter estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for the genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
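
    A small sketch contrasting the two basis choices on one synthetic growth trajectory: a cubic Legendre fit versus a quadratic smoothing B-spline (ages, weights, and smoothing settings are illustrative).

    ```python
    # Sketch: Legendre polynomial fit vs. quadratic B-spline fit.
    import numpy as np
    from numpy.polynomial import legendre
    from scipy.interpolate import splrep, splev

    rng = np.random.default_rng(8)
    age = np.linspace(210, 420, 22)                       # days on test (illustrative)
    w = 250 + 0.9*(age - 210) + rng.normal(0, 4, 22)      # body weight, kg (illustrative)

    x = 2*(age - age.min())/(age.max() - age.min()) - 1   # rescale age to [-1, 1]
    leg_coef = legendre.legfit(x, w, deg=3)               # cubic Legendre fit
    tck = splrep(age, w, k=2, s=len(age))                 # quadratic smoothing B-spline
    print(legendre.legval(0.0, leg_coef), splev(315.0, tck))  # both at mid-test age
    ```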

  3. Polynomial equations for science orbits around Europa

    NASA Astrophysics Data System (ADS)

    Cinelli, Marco; Circi, Christian; Ortore, Emiliano

    2015-07-01

    In this paper, the design of science orbits for the observation of a celestial body has been carried out using polynomial equations. The effects related to the main zonal harmonics of the celestial body and the perturbation deriving from the presence of a third celestial body have been taken into account. The third body describes a circular and equatorial orbit with respect to the primary body and, for its disturbing potential, an expansion in Legendre polynomials up to the second order has been considered. These polynomial equations allow the determination of science orbits around Jupiter's satellite Europa, where the third body gravitational attraction represents one of the main forces influencing the motion of an orbiting probe. Thus, the retrieved relationships have been applied to this moon and periodic sun-synchronous and multi-sun-synchronous orbits have been determined. Finally, numerical simulations have been carried out to validate the analytical results.

  4. Finding higher order Darboux polynomials for a family of rational first order ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Avellar, J.; Claudino, A. L. G. C.; Duarte, L. G. S.; da Mota, L. A. C. P.

    2015-10-01

    For the Darbouxian methods we are studying here, in order to solve first-order rational ordinary differential equations (1ODEs), the most computationally costly step is finding the needed Darboux polynomials. This can be so grave that it can render the whole approach impractical. Here we introduce a simple heuristic to speed up this process for a class of 1ODEs.

  5. Improving reliability of aggregation, numerical simulation and analysis of complex systems by empirical data

    NASA Astrophysics Data System (ADS)

    Dobronets, Boris S.; Popova, Olga A.

    2018-05-01

    The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of aggregating empirical data are considered: improving accuracy and estimating errors. We discuss data-aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is its demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution, whose properties are studied through the density-function concept. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data-representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.

  6. Bayesian median regression for temporal gene expression data

    NASA Astrophysics Data System (ADS)

    Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.

    2007-09-01

    Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects as well as interactions between the two are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this a simple Bayes factor test is proposed to test for significance. The estimation of the median rather than the mean, and within a Bayesian framework, increases the robustness of the method compared to a Hotelling T2-test previously suggested. This is shown on simulated data and on muscular dystrophy gene expression data.

  7. On adaptive weighted polynomial preconditioning for Hermitian positive definite matrices

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland W.

    1992-01-01

    The conjugate gradient algorithm for solving Hermitian positive definite linear systems is usually combined with preconditioning in order to speed up convergence. In recent years, there has been a revival of polynomial preconditioning, motivated by the attractive features of the method on modern architectures. Standard techniques for choosing the preconditioning polynomial are based only on bounds for the extreme eigenvalues. Here a different approach is proposed, which aims at adapting the preconditioner to the eigenvalue distribution of the coefficient matrix. The technique is based on the observation that good estimates for the eigenvalue distribution can be derived after only a few steps of the Lanczos process. This information is then used to construct a weight function for a suitable Chebyshev approximation problem. The solution of this problem yields the polynomial preconditioner. In particular, we investigate the use of Bernstein-Szego weights.

  8. A new model for estimating total body water from bioelectrical resistance

    NASA Technical Reports Server (NTRS)

    Siconolfi, S. F.; Kear, K. T.

    1992-01-01

    Estimation of total body water (T) from bioelectrical resistance (R) is commonly done by stepwise regression models with height squared over R (H²/R), age, sex, and weight (W). Polynomials of H²/R have not been included in these models. We examined the validity of a model with third-order polynomials and W. Methods: T was measured with oxygen-18 labeled water in 27 subjects. R at 50 kHz was obtained from electrodes placed on the hand and foot while subjects were in the supine position. A stepwise regression equation was developed with 13 subjects (age 31.5 ± 6.2 years, T 38.2 ± 6.6 L, W 65.2 ± 12.0 kg). Correlations, standard errors of estimate, and mean differences were computed between T and estimated T's from the new (N) model and other models. Evaluations were completed with the remaining 14 subjects (age 32.4 ± 6.3 years, T 40.3 ± 8 L, W 70.2 ± 12.3 kg) and two of its subgroups (high and low). Results: A regression equation was developed from the model. The only significant mean difference was between T and one of the earlier models. Conclusion: Third-order polynomials in regression models may increase the accuracy of estimating total body water. Evaluating the model with a larger population is needed.
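
    A minimal sketch of the proposed model form, T regressed on first- through third-order terms of H²/R plus W, fitted by ordinary least squares to synthetic stand-in data (the coefficients below are not the study's).

    ```python
    # Sketch: cubic-polynomial model T ~ (H^2/R) + (H^2/R)^2 + (H^2/R)^3 + W.
    import numpy as np

    rng = np.random.default_rng(9)
    h2_r = rng.uniform(30, 70, 27)                 # H^2/R (hypothetical units)
    W = rng.uniform(50, 90, 27)                    # body weight, kg
    T = (2 + 0.55*h2_r + 0.002*h2_r**2 - 1e-5*h2_r**3 + 0.1*W
         + rng.normal(0, 0.8, 27))                 # synthetic total body water, L

    design = np.column_stack([np.ones(27), h2_r, h2_r**2, h2_r**3, W])
    beta, *_ = np.linalg.lstsq(design, T, rcond=None)
    print("fitted coefficients:", beta)
    ```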

  9. Multi Objective Controller Design for Linear System via Optimal Interpolation

    NASA Technical Reports Server (NTRS)

    Ozbay, Hitay

    1996-01-01

    We propose a methodology for the design of a controller which satisfies a set of closed-loop objectives simultaneously. The set of objectives consists of: (1) pole placement, (2) decoupled command tracking of step inputs at steady state, and (3) minimization of step-response transients with respect to envelope specifications. We first obtain a characterization of all controllers placing the closed-loop poles in a prescribed region of the complex plane. In this characterization, the free parameter matrix Q(s) is to be determined to attain objectives (2) and (3). Objective (2) is expressed as determining a Pareto optimal solution to a vector-valued optimization problem. The solution of this problem is obtained by transforming it into a scalar convex optimization problem. This solution determines Q(0), and the remaining freedom in choosing Q(s) is used to satisfy objective (3). We write Q(s) = (1/v(s)) Q̄(s) for a prescribed polynomial v(s). Q̄(s) is a polynomial matrix which is arbitrary except that Q(0) and the order of Q̄(s) are fixed. Obeying these constraints, Q̄(s) is now to be 'shaped' to minimize the step-response characteristics of specific input/output pairs according to the maximum envelope violations. This problem is expressed as a vector-valued optimization problem using the concept of Pareto optimality. We then investigate a scalar optimization problem associated with this vector-valued problem and show that it is convex. The organization of the report is as follows. The next section includes some definitions and preliminary lemmas. We then give the problem statement, which is followed by a section including a detailed development of the design procedure. We then consider an aircraft control example. The last section gives some concluding remarks. The Appendix includes the proofs of technical lemmas, printouts of computer programs, and figures.

  10. Data Assimilation and Propagation of Uncertainty in Multiscale Cardiovascular Simulation

    NASA Astrophysics Data System (ADS)

    Schiavazzi, Daniele; Marsden, Alison

    2015-11-01

    Cardiovascular modeling is the application of computational tools to predict hemodynamics. State-of-the-art techniques couple a 3D incompressible Navier-Stokes solver with a boundary circulation model and can predict local and peripheral hemodynamics, analyze the post-operative performance of surgical designs, and complement clinical data collection, minimizing invasive and risky measurement practices. The ability of these tools to make useful predictions is directly related to their accuracy in representing measured physiologies. Tuning of model parameters is therefore a topic of paramount importance and should include clinical data uncertainty, revealing how this uncertainty will affect the predictions. We propose a fully Bayesian, multi-level approach to data assimilation of uncertain clinical data in multiscale circulation models. To reduce the computational cost, we use a stable, condensed approximation of the 3D model built by linear sparse regression of the pressure/flow rate relationship at the outlets. Finally, we consider the problem of non-invasively propagating the uncertainty in model parameters to the resulting hemodynamics and compare Monte Carlo simulation with Stochastic Collocation approaches based on Polynomial or Multi-resolution Chaos expansions.

  11. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models.

    PubMed

    Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald

    2011-06-01

    Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.

  12. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    PubMed Central

    2011-01-01

    Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. Conclusions HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems. PMID:21627852

  13. The validation of a human force model to predict dynamic forces resulting from multi-joint motions

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.

    1992-01-01

    The development and validation of a dynamic strength model for humans is examined. This model is based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque as a function of position and velocity, in all rotational planes. These data were reduced by a least-squares regression technique into a table of single-variable second-degree polynomial equations determining torque as a function of position and velocity. The isolated joint torque equations were then used to compute forces resulting from a composite motion, in this case a ratchet wrench push-and-pull operation. A comparison of the predicted results of the model with the actual measured values for the composite motion indicates that forces arising from a composite motion of joints (ratcheting) can be predicted from isolated joint measures. Calculated t-values comparing model versus measured values for 14 subjects were well within statistically acceptable limits, and regression analysis revealed coefficients of variation between actual and measured values of between 0.72 and 0.80.

  14. Measurement of pediatric regional cerebral blood flow from 6 months to 15 years of age in a clinical population.

    PubMed

    Carsin-Vu, Aline; Corouge, Isabelle; Commowick, Olivier; Bouzillé, Guillaume; Barillot, Christian; Ferré, Jean-Christophe; Proisy, Maia

    2018-04-01

    To investigate changes in cerebral blood flow (CBF) in gray matter (GM) between 6 months and 15 years of age and to provide CBF values for the brain, GM, white matter (WM), hemispheres, and lobes. Between 2013 and 2016, we retrospectively included all clinical MRI examinations with arterial spin labeling (ASL). We excluded subjects with a condition potentially affecting brain perfusion. For each subject, mean values of CBF in the brain, GM, WM, hemispheres, and lobes were calculated. GM CBF was fitted using linear, quadratic, and cubic polynomial regression against age. Regression models were compared with Akaike's information criterion (AIC) and likelihood ratio tests. 84 children were included (44 females/40 males). Mean CBF values were 64.2 ± 13.8 mL/100 g/min in GM and 29.3 ± 10.0 mL/100 g/min in WM. The best-fit model of brain perfusion was the cubic polynomial function (AIC = 672.7, versus AIC = 673.9 for the negative linear function and AIC = 674.1 for the quadratic polynomial function). However, no statistically significant difference between the tested models was found that would demonstrate the superiority of the quadratic (p = 0.18) or cubic polynomial model (p = 0.06) over the negative linear regression model. No effect of general anesthesia (p = 0.34) or of gender (p = 0.16) was found. We provided values for ASL CBF in the brain, GM, WM, hemispheres, and lobes over a wide pediatric age range, approximately showing inverted U-shaped changes in GM perfusion over the course of childhood. Copyright © 2018 Elsevier B.V. All rights reserved.

  15. Meta-Regression Approximations to Reduce Publication Selection Bias

    ERIC Educational Resources Information Center

    Stanley, T. D.; Doucouliagos, Hristos

    2014-01-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…

  16. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity analysis methods in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimation of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated by various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
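
    A hedged sketch of the mixed-kernel construction: a convex combination of a polynomial kernel (global behaviour) and a Gaussian RBF kernel (local behaviour) passed to a standard SVR as a callable Gram-matrix kernel. The mixing weight and kernel parameters are illustrative, not the paper's values:

        import numpy as np
        from sklearn.svm import SVR

        def mixed_kernel(X, Z, lam=0.5, degree=3, gamma=1.0):
            poly = (X @ Z.T + 1.0) ** degree                         # polynomial part
            sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(axis=2)  # pairwise sq. distances
            rbf = np.exp(-gamma * sq)                                # Gaussian RBF part
            return lam * poly + (1 - lam) * rbf

        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (200, 3))
        y = X[:, 0] ** 3 + np.sin(3 * X[:, 1]) + 0.1 * rng.normal(size=200)

        model = SVR(kernel=lambda a, b: mixed_kernel(a, b)).fit(X, y)
        print("training R^2:", model.score(X, y))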

  17. Consensus seeking in a network of discrete-time linear agents with communication noises

    NASA Astrophysics Data System (ADS)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhou, Chao; Wang, Ming

    2015-07-01

    This paper studies the mean square consensus of discrete-time linear time-invariant multi-agent systems with communication noises. A distributed consensus protocol, which is composed of the agent's own state feedback and the relative states between the agent and its neighbours, is proposed. A time-varying consensus gain a[k] is applied to attenuate the effect of the noises that are inherent in the inaccurate measurement of relative states with neighbours. A polynomial, the 'parameter polynomial', is constructed, whose coefficients are the parameters in the feedback gain vector of the proposed protocol. It turns out that the parameter polynomial plays an important role in guaranteeing the consensus of linear multi-agent systems. For the proposed protocol, necessary and sufficient conditions for mean square consensus are presented under different topology conditions: (1) if the communication topology graph has a spanning tree and every node in the graph has at least one parent node, then mean square consensus can be achieved if and only if ∑_{k=0}^{∞} a[k] = ∞, ∑_{k=0}^{∞} a²[k] < ∞ and all roots of the parameter polynomial lie inside the unit circle; (2) if the communication topology graph has a spanning tree and there exists one node without any parent node (the leader-follower case), then mean square consensus can be achieved if and only if ∑_{k=0}^{∞} a[k] = ∞, lim_{k→∞} a[k] = 0 and all roots of the parameter polynomial lie inside the unit circle; (3) if the communication topology graph does not have a spanning tree, then mean square consensus can never be achieved. Finally, a simulation example of a multiple-aircraft system is provided to validate the theoretical analysis.
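
    For the special case of single-integrator agents on a ring, the role of the decaying gain can be illustrated directly; a[k] = 1/(k+1) satisfies both gain conditions of case (1). The topology and noise level below are arbitrary illustrative choices:

        import numpy as np

        n, steps = 6, 5000
        rng = np.random.default_rng(2)
        A = np.roll(np.eye(n), 1, axis=1) + np.roll(np.eye(n), -1, axis=1)  # ring adjacency
        L = np.diag(A.sum(axis=1)) - A                                      # graph Laplacian
        x = rng.uniform(0, 10, n)                                           # initial states

        for k in range(steps):
            a_k = 1.0 / (k + 1)                              # sum a[k] = inf, sum a[k]^2 < inf
            noisy_relative = -L @ x + rng.normal(0, 0.5, n)  # noisy relative-state feedback
            x = x + a_k * noisy_relative

        print("final states:", np.round(x, 3))               # states nearly agree
        print("spread:", float(x.max() - x.min()))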

  18. Pseudo spectral collocation with Maxwell polynomials for kinetic equations with energy diffusion

    NASA Astrophysics Data System (ADS)

    Sánchez-Vizuet, Tonatiuh; Cerfon, Antoine J.

    2018-02-01

    We study the approximation and stability properties of a recently popularized discretization strategy for the speed variable in kinetic equations, based on pseudo-spectral collocation on a grid defined by the zeros of a non-standard family of orthogonal polynomials called Maxwell polynomials. Taking a one-dimensional equation describing energy diffusion due to Fokker-Planck collisions with a Maxwell-Boltzmann background distribution as the test bench for the performance of the scheme, we find that Maxwell based discretizations outperform other commonly used schemes in most situations, often by orders of magnitude. This provides a strong motivation for their use in high-dimensional gyrokinetic simulations. However, we also show that Maxwell based schemes are subject to a non-modal time stepping instability in their most straightforward implementation, so that special care must be given to the discrete representation of the linear operators in order to benefit from the advantages provided by Maxwell polynomials.

  19. Analysis of the expected density of internal equilibria in random evolutionary multi-player multi-strategy games.

    PubMed

    Duong, Manh Hong; Han, The Anh

    2016-12-01

    In this paper, we study the distribution and behaviour of internal equilibria in a d-player n-strategy random evolutionary game where the game payoff matrix is generated from normal distributions. The study reveals and exploits interesting connections between evolutionary game theory and random polynomial theory. The main contributions of the paper are some qualitative and quantitative results on the expected density, [Formula: see text], and the expected number, E(n, d), of (stable) internal equilibria. Firstly, we show that in multi-player two-strategy games, they behave asymptotically as [Formula: see text] when d is sufficiently large. Secondly, we prove that they are monotone functions of d. We also make a conjecture for games with more than two strategies. Thirdly, we provide numerical simulations to illustrate our analytical results and to support the conjecture. As consequences of our analysis, some qualitative and quantitative results on the distribution of zeros of a random Bernstein polynomial are also obtained.
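
    The connection to random Bernstein polynomials can be explored numerically. The sketch below estimates by Monte Carlo the expected number of real zeros in (0, 1) of a Bernstein polynomial with i.i.d. standard normal coefficients; the degree and trial count are arbitrary illustrative choices:

        import numpy as np
        from math import comb

        def bernstein_to_power(b):
            """Convert Bernstein coefficients (degree d) to power-basis coefficients."""
            d = len(b) - 1
            a = np.zeros(d + 1)
            for k in range(d + 1):
                for i in range(k + 1):
                    a[k] += comb(d, k) * comb(k, i) * (-1) ** (k - i) * b[i]
            return a

        rng = np.random.default_rng(3)
        d, trials = 8, 2000
        counts = []
        for _ in range(trials):
            a = bernstein_to_power(rng.normal(size=d + 1))
            roots = np.roots(a[::-1])                  # np.roots wants highest degree first
            real = roots[np.abs(roots.imag) < 1e-9].real
            counts.append(np.sum((real > 0) & (real < 1)))
        print("estimated E[#zeros in (0,1)]:", np.mean(counts))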

  20. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    NASA Astrophysics Data System (ADS)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) imagery. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with an RBF kernel for three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison with a linear kernel function, and by up to 1% in comparison with a 3rd-degree polynomial kernel function.

  1. Exploring the use of random regression models with legendre polynomials to analyze measures of volume of ejaculate in Holstein bulls.

    PubMed

    Carabaño, M J; Díaz, C; Ugarte, C; Serrano, M

    2007-02-01

    Artificial insemination centers routinely collect records of quantity and quality of semen of bulls throughout the animals' productive period. The goal of this paper was to explore the use of random regression models with orthogonal polynomials to analyze repeated measures of semen production of Spanish Holstein bulls. A total of 8,773 records of volume of first ejaculate (VFE) collected between 12 and 30 mo of age from 213 Spanish Holstein bulls was analyzed under alternative random regression models. Legendre polynomial functions of increasing order (0 to 6) were fitted to the average trajectory, additive genetic and permanent environmental effects. Age at collection and days in production were used as time variables. Heterogeneous and homogeneous residual variances were alternatively assumed. Analyses were carried out within a Bayesian framework. The logarithm of the marginal density and the cross-validation predictive ability of the data were used as model comparison criteria. Based on both criteria, age at collection as a time variable and heterogeneous residuals models are recommended to analyze changes of VFE over time. Both criteria indicated that fitting random curves for genetic and permanent environmental components as well as for the average trajectory improved the quality of models. Furthermore, models with a higher order polynomial for the permanent environmental (5 to 6) than for the genetic components (4 to 5) and the average trajectory (2 to 3) tended to perform best. High-order polynomials were needed to accommodate the highly oscillating nature of the phenotypic values. Heritability and repeatability estimates, disregarding the extremes of the studied period, ranged from 0.15 to 0.35 and from 0.20 to 0.50, respectively, indicating that selection for VFE may be effective at any stage. Small differences among models were observed. Apart from the extremes, estimated correlations between ages decreased steadily from 0.9 and 0.4 for measures 1 mo apart to 0.4 and 0.2 for most distant measures for additive genetic and phenotypic components, respectively. Further investigation to account for environmental factors that may be responsible for the oscillating observations of VFE is needed.
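
    The Legendre covariables underlying such models are easy to construct: standardise age to [-1, 1] and evaluate the polynomials up to the chosen order. A small sketch (function name and age range are illustrative):

        import numpy as np
        from numpy.polynomial import legendre

        def legendre_covariates(age, age_min=12.0, age_max=30.0, order=4):
            """Return an (n, order+1) matrix of Legendre covariables P_0..P_order."""
            t = 2.0 * (np.asarray(age) - age_min) / (age_max - age_min) - 1.0
            return np.column_stack([legendre.legval(t, np.eye(order + 1)[j])
                                    for j in range(order + 1)])

        ages_mo = np.array([12, 15, 18, 21, 24, 27, 30])   # months of age at collection
        Phi = legendre_covariates(ages_mo, order=4)
        print(Phi.round(3))   # each row: covariables for one record's regressions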

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices, when compared to the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with respect to the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. In order to address the problem of the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation keeps containing few terms, so that the cost to resolve repeatedly the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than the one of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.
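
    The stepwise-regression level of adaptivity can be sketched generically: greedily add the monomial most correlated with the current residual and refit the active set, retaining only influential terms. This is a simplification of the paper's sparse PDD procedure, with an arbitrary candidate basis and stopping rule:

        import numpy as np
        from itertools import combinations_with_replacement

        def poly_basis(X, degree):
            """All monomials of total degree <= degree (including the constant)."""
            n, p = X.shape
            cols, names = [np.ones(n)], [()]
            for d in range(1, degree + 1):
                for idx in combinations_with_replacement(range(p), d):
                    cols.append(np.prod(X[:, idx], axis=1))
                    names.append(idx)
            return np.column_stack(cols), names

        def forward_stepwise(X, y, degree=3, max_terms=8):
            B, names = poly_basis(X, degree)
            active, resid = [], y.copy()
            for _ in range(max_terms):
                scores = np.abs(B.T @ resid) / np.linalg.norm(B, axis=0)  # correlation screen
                scores[active] = -np.inf
                active.append(int(np.argmax(scores)))
                beta, *_ = np.linalg.lstsq(B[:, active], y, rcond=None)   # refit active set
                resid = y - B[:, active] @ beta
            return [names[j] for j in active], beta

        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, (300, 4))
        y = 2*X[:, 0] - 3*X[:, 1]*X[:, 2] + 0.5*X[:, 3]**2 + 0.01*rng.normal(size=300)
        terms, beta = forward_stepwise(X, y)
        print("retained monomials (variable index tuples):", terms)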

  3. Dynamic graphs, community detection, and Riemannian geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bakker, Craig; Halappanavar, Mahantesh; Visweswara Sathanur, Arun

    A community is a subset of a wider network where the members of that subset are more strongly connected to each other than they are to the rest of the network. In this paper, we consider the problem of identifying and tracking communities in graphs that change over time (dynamic community detection) and present a framework based on Riemannian geometry to aid in this task. Our framework currently supports several important operations such as interpolating between and averaging over graph snapshots. We compare these Riemannian methods with entry-wise linear interpolation and find that the Riemannian methods are generally better suited to dynamic community detection. Next steps with the Riemannian framework include developing higher-order interpolation methods (e.g. the analogues of polynomial and spline interpolation) and a Riemannian least-squares regression method for working with noisy data.

  4. Response surface optimization of medium components for naringinase production from Staphylococcus xylosus MAK2.

    PubMed

    Puri, Munish; Kaur, Aneet; Singh, Ram Sarup; Singh, Anubhav

    2010-09-01

    Response surface methodology was used to optimize the fermentation medium for enhancing naringinase production by Staphylococcus xylosus. The first step of this process involved the individual adjustment and optimization of various medium components at the shake-flask level. Sources of carbon (sucrose) and nitrogen (sodium nitrate), as well as an inducer (naringin) and the pH level, were all found to be important factors significantly affecting naringinase production. In the second step, a 2² full factorial central composite design was applied to determine the optimal levels of each of the significant variables. A second-order polynomial was derived by multiple regression analysis on the experimental data. Using this methodology, the optimum values for the critical components were obtained as follows: sucrose, 10.0%; sodium nitrate, 10.0%; pH 5.6; biomass concentration, 1.58%; and naringin, 0.50% (w/v). Under optimal conditions, the experimental naringinase production was 8.45 U/mL. The determination coefficients (R²) were 0.9908 and 0.9950 for naringinase activity and biomass production, respectively, indicating an adequate degree of reliability in the model.

  5. A statistical approach to optimizing concrete mixture design.

    PubMed

    Ahmad, Shamsad; Alghamdi, Saeid A

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options.

  6. A Statistical Approach to Optimizing Concrete Mixture Design

    PubMed Central

    Alghamdi, Saeid A.

    2014-01-01

    A step-by-step statistical approach is proposed to obtain optimum proportioning of concrete mixtures using the data obtained through a statistically planned experimental program. The utility of the proposed approach for optimizing the design of concrete mixture is illustrated considering a typical case in which trial mixtures were considered according to a full factorial experiment design involving three factors and their three levels (3³). A total of 27 concrete mixtures with three replicates (81 specimens) were considered by varying the levels of key factors affecting compressive strength of concrete, namely, water/cementitious materials ratio (0.38, 0.43, and 0.48), cementitious materials content (350, 375, and 400 kg/m³), and fine/total aggregate ratio (0.35, 0.40, and 0.45). The experimental data were utilized to carry out analysis of variance (ANOVA) and to develop a polynomial regression model for compressive strength in terms of the three design factors considered in this study. The developed statistical model was used to show how optimization of concrete mixtures can be carried out with different possible options. PMID:24688405

  7. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.
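
    The key property exploited by Bernstein-expansion methods is that, on [0, 1], a polynomial is enclosed by the minimum and maximum of its Bernstein coefficients. A univariate sketch of this enclosure (the framework itself handles multivariate polynomials and p-boxes):

        import numpy as np
        from math import comb

        def bernstein_coeffs(a):
            """Bernstein coefficients on [0,1] of p(x) = sum_k a[k] x^k."""
            n = len(a) - 1
            return np.array([sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
                             for i in range(n + 1)])

        a = np.array([0.2, -1.0, 3.0, -1.5])        # p(x) = 0.2 - x + 3x^2 - 1.5x^3
        b = bernstein_coeffs(a)
        x = np.linspace(0, 1, 1001)
        p = np.polyval(a[::-1], x)
        print(f"Bernstein bounds: [{b.min():.3f}, {b.max():.3f}]")
        print(f"true range:       [{p.min():.3f}, {p.max():.3f}]")  # contained in bounds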

  8. Polynomial mixture method of solving ordinary differential equations

    NASA Astrophysics Data System (ADS)

    Shahrir, Mohammad Shazri; Nallasamy, Kumaresan; Ratnavelu, Kuru; Kamali, M. Z. M.

    2017-11-01

    In this paper, a numerical solution of the fuzzy quadratic Riccati differential equation is estimated using a proposed new approach that iteratively generates a suitable mixture of polynomials. This mixture provides a generalized formalism of traditional Neural Networks (NN). Previous works have shown reliable results using the Runge-Kutta 4th-order method (RK4), achieved by solving the 1st-order non-linear ordinary differential equation (ODE) that commonly underlies the Riccati differential equation. The research shows improved results relative to the RK4 method. It can be said that the Polynomial Mixture Method (PMM) shows promising results, with the advantages of continuous estimation and improved accuracy over Mabood et al., RK4, Multi-Agent NN and the Neuro Method (NM).

  9. Non-polynomial extensions of solvable potentials à la Abraham-Moses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odake, Satoru; Sasaki, Ryu; Center for Theoretical Sciences, National Taiwan University, Taipei 10617, Taiwan

    2013-10-15

    Abraham-Moses transformations, besides Darboux transformations, are well-known procedures to generate extensions of solvable potentials in one-dimensional quantum mechanics. Here we present the explicit forms of infinitely many seed solutions for adding eigenstates at arbitrary real energy through the Abraham-Moses transformations for typical solvable potentials, e.g., the radial oscillator, the Darboux-Pöschl-Teller, and some others. These seed solutions are simple generalisations of the virtual state wavefunctions, which are obtained from the eigenfunctions by discrete symmetries of the potentials. The virtual state wavefunctions have been an essential ingredient for constructing multi-indexed Laguerre and Jacobi polynomials through multiple Darboux-Crum transformations. In contrast to the Darboux transformations, the virtual state wavefunctions generate non-polynomial extensions of solvable potentials through the Abraham-Moses transformations.

  10. Settling Efficiency of Urban Particulate Matter Transported by Stormwater Runoff.

    PubMed

    Carbone, Marco; Penna, Nadia; Piro, Patrizia

    2015-09-01

    The main purpose of control measures in urban areas is to retain particulate matter washed out by stormwater over impermeable surfaces. In stormwater control measures, particulate matter removal typically occurs via sedimentation. Settling column tests were performed to examine the settling efficiency of such units using monodisperse and heterodisperse particulate matter (for which the particle size distributions were measured and modelled by the cumulative gamma distribution). To investigate the dependence of settling efficiency on the particulate matter characteristics, a variant of evolutionary polynomial regression (EPR), namely EPR MOGA XL, a Microsoft Excel function based on the multi-objective EPR technique (EPR-MOGA), was used as a data-mining strategy. The results from this study show that settling efficiency is a function of the initial total suspended solids (TSS) concentration and of the median diameter (d50 index) obtained from the particle size distributions (PSDs) of the samples.

  11. Very-short-term wind power prediction by a hybrid model with single- and multi-step approaches

    NASA Astrophysics Data System (ADS)

    Mohammed, E.; Wang, S.; Yu, J.

    2017-05-01

    Very-short-term wind power prediction (VSTWPP) plays an essential role in the operation of electric power systems. This paper aims at improving and applying a hybrid method of VSTWPP based on historical data. The hybrid method combines multiple linear regression and least squares (MLR&LS) and is intended to reduce prediction errors. The predicted values are obtained through two sub-processes: (1) transform the time-series data of actual wind power into the power ratio, and then predict the power ratio; (2) use the predicted power ratio to predict the wind power. The proposed method includes two prediction approaches: single-step prediction (SSP) and multi-step prediction (MSP). The predictions are tested comparatively against an auto-regressive moving average (ARMA) model in terms of predicted values and errors. The validity of the proposed hybrid method is confirmed by error analysis using the probability density function (PDF), mean absolute percent error (MAPE) and mean square error (MSE). Meanwhile, comparison of the correlation coefficients between the actual and predicted values for different prediction times and windows confirms that the MSP approach using the hybrid model is the most accurate, compared with the SSP approach and ARMA. The MLR&LS method is accurate and promising for solving problems in WPP.
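
    The two sub-processes can be sketched with a simple least-squares predictor on power ratios; the synthetic series and lag structure below stand in for the paper's MLR&LS model:

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.arange(400)
        power = 50 + 20*np.sin(2*np.pi*t/96) + rng.normal(0, 2, t.size)  # synthetic wind power

        ratio = power[1:] / power[:-1]            # sub-process 1: power ratio series
        lags = 3
        X = np.column_stack([ratio[i:len(ratio) - lags + i] for i in range(lags)])
        y = ratio[lags:]
        beta, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), X]), y, rcond=None)

        # Multi-step prediction: feed predicted ratios back in, then rebuild power.
        hist = list(ratio[-lags:])
        p_last, preds = power[-1], []
        for _ in range(6):                        # 6-step-ahead forecast
            r_hat = beta[0] + np.dot(beta[1:], hist[-lags:])
            p_last *= r_hat                       # sub-process 2: ratio -> power
            preds.append(p_last)
            hist.append(r_hat)
        print("multi-step forecast:", np.round(preds, 2))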

  12. Peculiarities of stochastic regime of Arctic ice cover time evolution over 1987-2014 from microwave satellite sounding on the basis of NASA team 2 algorithm

    NASA Astrophysics Data System (ADS)

    Raev, M. D.; Sharkov, E. A.; Tikhonov, V. V.; Repina, I. A.; Komarova, N. Yu.

    2015-12-01

    The GLOBAL-RT database (DB) is composed of long-term multichannel microwave radiometric observation data received from the DMSP F08-F17 satellites; it is permanently supplemented with new Earth-observation data by the space department of the Space Research Institute, Russian Academy of Sciences. Arctic ice-cover areas for regions above 60° N latitude were calculated using the polar version of the DB and the NASA Team 2 algorithm, which is widely used in the scientific literature. Based on the analysis of the variability of the Arctic ice cover during 1987-2014, the two months in which the Arctic ice cover was maximal (February) and minimal (September) were selected, and the average ice-cover area was calculated for these months. Confidence intervals of the average values are within the 95-98% limits. Several approximations are derived for the time dependences of the ice-cover maximum and minimum over the period under study. Regression dependences were calculated for polynomials from first degree (linear) to sextic. The root-mean-square error of deviation from the approximating curve decreased sharply up to the biquadratic (fourth-degree) polynomial and varied insignificantly thereafter: from 0.5593 for the third-degree polynomial to 0.4560 for the biquadratic polynomial. Hence, the commonly used strictly linear regression with a negative time gradient for the September Arctic ice-cover minimum over 30 years should be considered incorrect.

  13. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network

    PubMed Central

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.

    2015-01-01

    Wireless Sensor Networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSN) transmit sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, a Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of the Bayes rule. The object detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing energy consumption. Next, a polynomial regression function is applied to combine the readings of the different sensors observing similar events, based on the minimum and maximum values of the object events, which are then transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. Energy-efficient routing paths for each sensor node are created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing the communication overhead. PMID:26426701

  14. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network.

    PubMed

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N

    2015-01-01

    Wireless Sensor Networks monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing methods for Wireless Sensor Networks (WSN) transmit sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, a Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution initially distributes the sensor nodes that detect an object of a similar event (i.e., temperature, pressure, flow) into specific regions with the application of the Bayes rule. The object detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing energy consumption. Next, a polynomial regression function is applied to combine the readings of the different sensors observing similar events, based on the minimum and maximum values of the object events, which are then transferred to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. Energy-efficient routing paths for each sensor node are created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users while reducing the communication overhead.

  15. Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples

    PubMed Central

    Chen, Andrew; Chen, Chiachung

    2013-01-01

    Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between temperature and output voltage of thermocouples. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of different-order polynomial equations. Two quantitative criteria, the average of the absolute error values, |e|_ave, and the standard deviation of the calibration equation, e_std, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of polynomial equations differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher-degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627
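
    The evaluation procedure amounts to fitting calibration polynomials of increasing order and computing the two criteria; the voltage-temperature pairs below are made up for illustration:

        import numpy as np

        volt = np.linspace(0.0, 12.0, 25)                     # mV (hypothetical)
        temp = 24.2*volt - 0.18*volt**2 + np.random.default_rng(6).normal(0, 0.3, 25)

        for order in (1, 2, 3):
            c = np.polyfit(volt, temp, order)
            e = temp - np.polyval(c, volt)                    # calibration residuals
            print(f"order {order}: |e|_ave = {np.abs(e).mean():.4f}, "
                  f"e_std = {e.std(ddof=order + 1):.4f}")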

  16. Control Synthesis of Discrete-Time T-S Fuzzy Systems: Reducing the Conservatism Whilst Alleviating the Computational Burden.

    PubMed

    Xie, Xiangpeng; Yue, Dong; Zhang, Huaguang; Peng, Chen

    2017-09-01

    The augmented multi-indexed matrix approach acts as a powerful tool for reducing the conservatism of control synthesis of discrete-time Takagi-Sugeno fuzzy systems. However, its computational burden is sometimes too heavy as a tradeoff. Nowadays, reducing the conservatism whilst alleviating the computational burden has become an ideal but very challenging problem. This paper seeks an efficient way to achieve a satisfactory answer. Different from the augmented multi-indexed matrix approach in the literature, we aim to design a more efficient slack variable approach under a general framework of homogeneous matrix polynomials. Thanks to the introduction of a new extended representation for homogeneous matrix polynomials, related matrices with the same coefficient are collected together into one sole set, and thus the redundant terms of the augmented multi-indexed matrix approach can be removed, i.e., the computational burden can be alleviated. More importantly, due to the fact that more useful information is involved in the control design, the conservatism of the proposed approach is also less than that of the augmented multi-indexed matrix approach. Finally, numerical experiments are given to show the effectiveness of the proposed approach.

  17. Contributions of dopamine-related genes and environmental factors to highly sensitive personality: a multi-step neuronal system-level approach.

    PubMed

    Chen, Chunhui; Chen, Chuansheng; Moyzis, Robert; Stern, Hal; He, Qinghua; Li, He; Li, Jin; Zhu, Bi; Dong, Qi

    2011-01-01

    Traditional behavioral genetic studies (e.g., twin, adoption studies) have shown that human personality has moderate to high heritability, but recent molecular behavioral genetic studies have failed to identify quantitative trait loci (QTL) with consistent effects. The current study adopted a multi-step approach (ANOVA followed by multiple regression and permutation) to assess the cumulative effects of multiple QTLs. Using a system-level (dopamine system) genetic approach, we investigated a personality trait deeply rooted in the nervous system (the Highly Sensitive Personality, HSP). 480 healthy Chinese college students were given the HSP scale and genotyped for 98 representative polymorphisms in all major dopamine neurotransmitter genes. In addition, two environmental factors (stressful life events and parental warmth) that have been implicated for their contributions to personality development were included to investigate their relative contributions as compared to genetic factors. In Step 1, using ANOVA, we identified 10 polymorphisms that made statistically significant contributions to HSP. In Step 2, these polymorphisms' main effects and interactions were assessed using multiple regression. This model accounted for 15% of the variance of HSP (p<0.001). Recent stressful life events accounted for an additional 2% of the variance. Finally, permutation analyses ascertained the probability of obtaining these findings by chance to be very low, p ranging from 0.001 to 0.006. Dividing these loci by the subsystems of dopamine synthesis, degradation/transport, receptor and modulation, we found that the modulation and receptor subsystems made the most significant contribution to HSP. The results of this study demonstrate the utility of a multi-step neuronal system-level approach in assessing genetic contributions to individual differences in human behavior. It can potentially bridge the gap between the high heritability estimates based on traditional behavioral genetics and the lack of reproducible genetic effects observed currently from molecular genetic studies.

  18. Contributions of Dopamine-Related Genes and Environmental Factors to Highly Sensitive Personality: A Multi-Step Neuronal System-Level Approach

    PubMed Central

    Chen, Chunhui; Chen, Chuansheng; Moyzis, Robert; Stern, Hal; He, Qinghua; Li, He; Li, Jin; Zhu, Bi; Dong, Qi

    2011-01-01

    Traditional behavioral genetic studies (e.g., twin, adoption studies) have shown that human personality has moderate to high heritability, but recent molecular behavioral genetic studies have failed to identify quantitative trait loci (QTL) with consistent effects. The current study adopted a multi-step approach (ANOVA followed by multiple regression and permutation) to assess the cumulative effects of multiple QTLs. Using a system-level (dopamine system) genetic approach, we investigated a personality trait deeply rooted in the nervous system (the Highly Sensitive Personality, HSP). 480 healthy Chinese college students were given the HSP scale and genotyped for 98 representative polymorphisms in all major dopamine neurotransmitter genes. In addition, two environmental factors (stressful life events and parental warmth) that have been implicated for their contributions to personality development were included to investigate their relative contributions as compared to genetic factors. In Step 1, using ANOVA, we identified 10 polymorphisms that made statistically significant contributions to HSP. In Step 2, these polymorphisms' main effects and interactions were assessed using multiple regression. This model accounted for 15% of the variance of HSP (p<0.001). Recent stressful life events accounted for an additional 2% of the variance. Finally, permutation analyses ascertained the probability of obtaining these findings by chance to be very low, p ranging from 0.001 to 0.006. Dividing these loci by the subsystems of dopamine synthesis, degradation/transport, receptor and modulation, we found that the modulation and receptor subsystems made the most significant contribution to HSP. The results of this study demonstrate the utility of a multi-step neuronal system-level approach in assessing genetic contributions to individual differences in human behavior. It can potentially bridge the gap between the high heritability estimates based on traditional behavioral genetics and the lack of reproducible genetic effects observed currently from molecular genetic studies. PMID:21765900
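
    The three analysis steps can be sketched as follows, with simulated genotypes and a simulated trait; the screening threshold is illustrative, and for brevity the permutation test holds the selected loci fixed rather than re-running the screen:

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(7)
        n, n_snps = 480, 98
        geno = rng.integers(0, 3, (n, n_snps))               # 0/1/2 genotype codes
        trait = geno[:, 5]*0.8 - geno[:, 40]*0.6 + rng.normal(0, 2, n)

        # Step 1: one-way ANOVA for each polymorphism
        pvals = np.array([stats.f_oneway(*(trait[geno[:, j] == g] for g in (0, 1, 2)))[1]
                          for j in range(n_snps)])
        selected = np.where(pvals < 0.05)[0]

        # Step 2: multiple regression on the selected loci
        def r_squared(y, X):
            X1 = np.column_stack([np.ones(len(y)), X])
            beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
            return 1 - (y - X1 @ beta).var() / y.var()

        r2 = r_squared(trait, geno[:, selected])

        # Step 3: permutation test of the regression R^2 (selection held fixed)
        perm = [r_squared(rng.permutation(trait), geno[:, selected]) for _ in range(500)]
        print(f"selected loci: {selected}, R^2 = {r2:.3f}, "
              f"permutation p = {(np.sum(np.array(perm) >= r2) + 1) / 501:.3f}")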

  19. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    PubMed

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities with important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
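
    A minimal sketch of local polynomial regression in one predictor: at each prediction point, fit a weighted quadratic to the nearest neighbours using tricube weights. The predictor and TOC values are synthetic, and the bandwidth choice is arbitrary:

        import numpy as np

        def local_poly_predict(x, y, x0, k=30, degree=2):
            d = np.abs(x - x0)
            idx = np.argsort(d)[:k]                       # K nearest neighbours
            w = (1 - (d[idx] / d[idx].max()) ** 3) ** 3   # tricube kernel weights
            X = np.vander(x[idx] - x0, degree + 1, increasing=True)
            W = np.sqrt(w)[:, None]
            beta, *_ = np.linalg.lstsq(W * X, np.sqrt(w) * y[idx], rcond=None)
            return beta[0]                                # local fit evaluated at x0

        rng = np.random.default_rng(8)
        soil_moisture = rng.uniform(0, 1, 200)            # stand-in land-surface predictor
        toc = 3 + 4*np.sin(3*soil_moisture) + rng.normal(0, 0.4, 200)  # synthetic TOC
        grid = np.linspace(0.05, 0.95, 5)
        print([round(local_poly_predict(soil_moisture, toc, g), 2) for g in grid])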

  20. [Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].

    PubMed

    Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min

    2014-01-01

    To study the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis on the national scale, and then to determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to the 2001-2002 national endemic fluorosis survey data of key wards. First, fractional polynomials (FP) were adopted to establish a fixed-effect model and determine the best FP structure; after that, restricted maximum likelihood (REML) was adopted to estimate the between-study variance, and the best random-effect model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random-effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L. Fluoride in drinking water explained only 35.8% of the variability in prevalence; among the other influencing factors, ward type was significant, while temperature conditions and altitude were not. The fractional polynomial-based meta-regression method is simple and practical and provides a good fit; based on it, the safety threshold of fluoride in drinking water in our country is determined as 0.8 mg/L.

  1. Retrieval Accuracy Assessment with Gap Detection for Case 2 Waters Chla Algorithms

    NASA Astrophysics Data System (ADS)

    Salem, S. I.; Higa, H.; Kim, H.; Oki, K.; Oki, T.

    2016-12-01

    Inland lakes and coastal regions, the two types of Case 2 waters, should be continuously and accurately monitored, as the former contain 90% of the global liquid freshwater storage, while the latter provide most of the dissolved organic carbon (DOC), an important link in the global carbon cycle. The optical properties of Case 2 waters are dominated by three optically active components: phytoplankton, non-algal particles (NAP) and colored dissolved organic matter (CDOM). During the last three decades, researchers have proposed several algorithms to retrieve Chla concentration from the remote sensing reflectance. In this study, seven algorithms are assessed with various band combinations from multi- and hyper-spectral data with linear, polynomial and power regression approaches. To evaluate the performance of the 43 algorithm combination sets, 500,000 remote sensing reflectance spectra are simulated with a wide range of concentrations for Chla, NAP and CDOM. The concentrations of Chla and NAP vary from 1-200 mg m⁻³ and 1-200 g m⁻³, respectively, and the absorption of CDOM at 440 nm has the range 0.1-10 m⁻¹. It is found that the three-band algorithm (665, 709 and 754 nm) with the quadratic polynomial (3b_665_QP) shows the best overall performance. 3b_665_QP has the least error, with a root mean square error (RMSE) of 0.2 mg m⁻³ and a mean absolute relative error (MARE) of 0.7%. The least accurate retrieval of Chla was obtained by the synthetic chlorophyll index algorithm, with RMSE and MARE of 35.8 mg m⁻³ and 160.4%, respectively. In general, Chla algorithms which incorporate the 665 nm band or a band-tuning technique perform better than those with 680 nm. In addition, the retrieval accuracy of Chla algorithms with quadratic polynomial and power regression approaches is consistently better than that of the linear ones. By analyzing Chla versus NAP concentrations, the 3b_665_QP outperforms the other algorithms for all Chla concentrations and NAP concentrations above 40 g m⁻³, which accounts for 81.3% of the total combinations of NAP and Chla. In conclusion, these findings provide a reference for algorithm selection based on constituents' concentrations and open the door for developing a classification scheme to retrieve Chla with higher accuracy.
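
    Assuming the usual three-band index form X = (1/R(665) - 1/R(709)) * R(754), the quadratic retrieval can be sketched as below; the reflectances and fitted coefficients are synthetic placeholders, not the study's simulated spectra:

        import numpy as np

        rng = np.random.default_rng(9)
        chla = rng.uniform(1, 200, 500)                              # mg m^-3
        r665 = 0.02 / (1 + 0.015*chla) + rng.normal(0, 5e-5, 500)    # toy reflectances
        r709 = 0.02 / (1 + 0.004*chla) + rng.normal(0, 5e-5, 500)
        r754 = np.full(500, 0.012)

        X = (1/r665 - 1/r709) * r754                 # three-band index
        coef = np.polyfit(X, chla, 2)                # quadratic polynomial fit
        pred = np.polyval(coef, X)
        rmse = np.sqrt(np.mean((pred - chla)**2))
        print(f"quadratic 3-band fit RMSE: {rmse:.2f} mg m^-3")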

  2. Investigating and Modelling Effects of Climatically and Hydrologically Indicators on the Urmia Lake Coastline Changes Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Ahmadijamal, M.; Hasanlou, M.

    2017-09-01

    The study of the hydrological parameters of lakes and of water-level variation is important for water resources management. The purpose of this study is to investigate and model the changes in the Urmia Lake water level due to changes in the climatic and hydrological indicators that affect the level and area of this lake. For this purpose, Landsat satellite images, hydrological data, and the daily precipitation, daily surface evaporation and daily discharge over the whole lake basin during the period 2010-2016 have been used. Time-series analysis was conducted on each dataset independently with the same procedure; to model the variation of the Urmia Lake level, we used a polynomial regression technique and a polynomial combined with periodic behaviour. In the first scenario, we fit multivariate polynomials of increasing degree to our datasets and determined the RMSE, NRMSE and R² values. We found that a fourth-degree polynomial fits our datasets best, with the lowest RMSE value, about 9 cm. In the second scenario, we combined the polynomial with periodic behaviour in the modelling. The second scenario is superior to the first, with an RMSE value of about 3 cm.

  3. Genetic analyses of partial egg production in Japanese quail using multi-trait random regression models.

    PubMed

    Karami, K; Zerehdaran, S; Barzanooni, B; Lotfi, E

    2017-12-01

    1. The aim of the present study was to estimate genetic parameters for average egg weight (EW) and egg number (EN) at different ages in Japanese quail using multi-trait random regression (MTRR) models. 2. A total of 8534 records from 900 quail, hatched between 2014 and 2015, were used in the study. Average weekly egg weights and egg numbers were measured from second until sixth week of egg production. 3. Nine random regression models were compared to identify the best order of the Legendre polynomials (LP). The most optimal model was identified by the Bayesian Information Criterion. A model with second order of LP for fixed effects, second order of LP for additive genetic effects and third order of LP for permanent environmental effects (MTRR23) was found to be the best. 4. According to the MTRR23 model, direct heritability for EW increased from 0.26 in the second week to 0.53 in the sixth week of egg production, whereas the ratio of permanent environment to phenotypic variance decreased from 0.48 to 0.1. Direct heritability for EN was low, whereas the ratio of permanent environment to phenotypic variance decreased from 0.57 to 0.15 during the production period. 5. For each trait, estimated genetic correlations among weeks of egg production were high (from 0.85 to 0.98). Genetic correlations between EW and EN were low and negative for the first two weeks, but they were low and positive for the rest of the egg production period. 6. In conclusion, random regression models can be used effectively for analysing egg production traits in Japanese quail. Response to selection for increased egg weight would be higher at older ages because of its higher heritability and such a breeding program would have no negative genetic impact on egg production.

  4. Multigrid methods for isogeometric discretization

    PubMed Central

    Gahalaut, K.P.S.; Kraus, J.K.; Tomar, S.K.

    2013-01-01

    We present (geometric) multigrid methods for isogeometric discretization of scalar second order elliptic problems. The smoothing property of the relaxation method, and the approximation property of the intergrid transfer operators are analyzed. These properties, when used in the framework of classical multigrid theory, imply uniform convergence of two-grid and multigrid methods. Supporting numerical results are provided for the smoothing property, the approximation property, convergence factors and iteration counts for V-, W- and F-cycles, and the linear dependence of V-cycle convergence on the smoothing steps. For two dimensions, numerical results include the problems with variable coefficients, simple multi-patch geometry, a quarter annulus, and the dependence of convergence behavior on refinement levels ℓ, whereas for three dimensions, only the constant coefficient problem in a unit cube is considered. The numerical results are complete up to polynomial order p = 4, and for C⁰ and C^(p-1) smoothness. PMID:24511168

  5. The effects of multi-disciplinary psycho-social care on socio-economic problems in cancer patients: a cluster-randomized trial.

    PubMed

    Singer, Susanne; Roick, Julia; Meixensberger, Jürgen; Schiefke, Franziska; Briest, Susanne; Dietz, Andreas; Papsdorf, Kirsten; Mössner, Joachim; Berg, Thomas; Stolzenburg, Jens-Uwe; Niederwieser, Dietger; Keller, Annette; Kersting, Anette; Danker, Helge

    2018-06-01

    We examined whether multi-disciplinary stepped psycho-social care decreases financial problems and improves return to work in cancer patients. In a university hospital, wards were randomly allocated to either stepped or standard care. Stepped care comprised screening for financial problems, consultation between doctor and patient, and the provision of social services. Outcomes were financial problems at the time of discharge and return to work in patients < 65 years old half a year after baseline. The analysis employed mixed-effect multivariate regression modeling. Thirteen wards were randomized and 1012 patients participated (n = 570 in stepped care and n = 442 in standard care). Those who reported financial problems at baseline were less likely to have financial problems at discharge when they had received stepped care (odds ratio (OR) 0.2, 95% confidence interval (CI) 0.1, 0.7; p = 0.01). There was no evidence for an effect of stepped care on financial problems in patients without such problems at baseline (OR 1.1, CI 0.5, 2.6; p = 0.82). There were 399 patients < 65 years old who were not retired at baseline. In this group, there was no evidence for an effect of stepped care on being employed half a year after baseline (OR 0.7, CI 0.3, 2.0; p = 0.52). Trial registration: NCT01859429. Conclusions: Financial problems can be avoided more effectively with multi-disciplinary stepped psycho-social care than with standard care in patients who have such problems.

  6. Volumetric calibration of a plenoptic camera.

    PubMed

    Hall, Elise Munz; Fahringer, Timothy W; Guildenbecher, Daniel R; Thurow, Brian S

    2018-02-01

    The volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods are examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  7. Random regression models for the prediction of days to weight, ultrasound rib eye area, and ultrasound back fat depth in beef cattle.

    PubMed

    Speidel, S E; Peel, R K; Crews, D H; Enns, R M

    2016-02-01

    Genetic evaluation research designed to reduce the required days to a specified end point has received very little attention in the pertinent scientific literature, given that its economic importance was first discussed in 1957. There are many production scenarios in today's beef industry, making a prediction of the required number of days to a single end point a suboptimal option. Random regression is an attractive alternative for calculating days to weight (DTW), days to ultrasound back fat (DTUBF), and days to ultrasound rib eye area (DTUREA) genetic predictions that could overcome the weaknesses of a single end point prediction. The objective of this study was to develop random regression approaches for the prediction of DTW, DTUREA, and DTUBF. Data were obtained from the Agriculture and Agri-Food Canada Research Centre, Lethbridge, AB, Canada. Data consisted of records on 1,324 feedlot cattle spanning 1999 to 2007. Individual animals averaged 5.77 observations, with weights, ultrasound rib eye area (UREA), ultrasound back fat depth (UBF), and ages ranging from 293 to 863 kg, 73.39 to 129.54 cm², 1.53 to 30.47 mm, and 276 to 519 d, respectively. Random regression models using Legendre polynomials were used to regress age of the individual on weight, UREA, and UBF. Fixed effects in the model included an overall fixed regression of age on end point (weight, UREA, and UBF) nested within breed to account for the mean relationship between age and weight, as well as a contemporary group effect consisting of breed of the animal (Angus, Charolais, and Charolais-sired), feedlot pen, and year of measure. Likelihood ratio tests were used to determine the appropriate random polynomial order. Use of the quadratic polynomial did not account for any additional genetic variation in days for DTW (P > 0.11), DTUREA (P > 0.18), or DTUBF (P > 0.20) when compared with the linear random polynomial. Heritability estimates from the linear random regression for DTW ranged from 0.54 to 0.74, corresponding to end points of 293 and 863 kg, respectively. Heritability for DTUREA ranged from 0.51 to 0.34 and for DTUBF from 0.55 to 0.37. These estimates correspond to UREA end points of 35 and 125 cm² and UBF end points of 1.53 and 30 mm, respectively. This range of heritability shows DTW, DTUREA, and DTUBF to be highly heritable and indicates that selection pressure aimed at reducing the number of days to reach a finish weight end point can result in genetic change given sufficient data.

  8. Profile shape optimization in multi-jet impingement cooling of dimpled topologies for local heat transfer enhancement

    NASA Astrophysics Data System (ADS)

    Negi, Deepchand Singh; Pattamatta, Arvind

    2015-04-01

    The present study deals with the shape optimization of dimples on the target surface in multi-jet impingement heat transfer. A Bezier polynomial formulation is incorporated to generate the dimple profile shapes, and a multi-objective optimization is performed. The optimized dimple shape exhibits higher local Nusselt number values than the reference hemispherical dimpled plate, and the optimized shape can be used to alleviate local temperature hot spots on the target surface.
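
    Profile generation from a Bezier polynomial can be sketched with de Casteljau's algorithm; the control points below are arbitrary examples of a candidate dimple profile, not the optimized shape:

        import numpy as np

        def de_casteljau(ctrl, t):
            """Evaluate a Bezier curve at parameters t from control points (m x 2)."""
            pts = np.repeat(ctrl[None], len(t), axis=0)    # one copy per t value
            while pts.shape[1] > 1:                        # repeated linear interpolation
                pts = (1 - t)[:, None, None]*pts[:, :-1] + t[:, None, None]*pts[:, 1:]
            return pts[:, 0]

        # Control points (radius, depth) for a candidate dimple, flat at both ends
        ctrl = np.array([[0.0, 0.0], [0.3, -0.8], [0.7, -0.8], [1.0, 0.0]])
        t = np.linspace(0, 1, 50)
        profile = de_casteljau(ctrl, t)
        print(profile[:3], "...")   # an optimizer would perturb ctrl and re-evaluate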

  9. Scattering amplitudes from multivariate polynomial division

    NASA Astrophysics Data System (ADS)

    Mastrolia, Pierpaolo; Mirabella, Edoardo; Ossola, Giovanni; Peraro, Tiziano

    2012-11-01

    We show that the evaluation of scattering amplitudes can be formulated as a problem of multivariate polynomial division, with the components of the integration-momenta as indeterminates. We present a recurrence relation which, independently of the number of loops, leads to the multi-particle pole decomposition of the integrands of the scattering amplitudes. The recursive algorithm is based on the weak Nullstellensatz theorem and on the division modulo the Gröbner basis associated to all possible multi-particle cuts. We apply it to dimensionally regulated one-loop amplitudes, recovering the well-known integrand-decomposition formula. Finally, we focus on the maximum-cut, defined as a system of on-shell conditions constraining the components of all the integration-momenta. By means of the Finiteness Theorem and of the Shape Lemma, we prove that the residue at the maximum-cut is parametrized by a number of coefficients equal to the number of solutions of the cut itself.
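
    The underlying algebraic operation, multivariate division modulo a Groebner basis, can be tried directly in SymPy; the polynomials below are toy stand-ins for cut conditions and integrand numerators, and the exact call signature may vary between SymPy versions:

        from sympy import symbols, groebner, reduced

        x, y = symbols('x y')
        cuts = [x**2 + y**2 - 1, x*y - 1]       # stand-ins for on-shell cut conditions
        G = groebner(cuts, x, y, order='lex')

        f = x**3*y + 2*x*y**2 + 5               # stand-in for an integrand numerator
        quotients, remainder = reduced(f, G.exprs, x, y, order='lex')
        print("quotients:", quotients)
        print("remainder:", remainder)          # what survives division by the cuts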

  10. A Riemann-Hilbert approach to asymptotic questions for orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X.

    2001-08-01

    A few years ago the authors introduced a new approach to study asymptotic questions for orthogonal polynomials. In this paper we give an overview of our method and review the results which have been obtained in Deift et al. (Internat. Math. Res. Notices (1997) 759, Comm. Pure Appl. Math. 52 (1999) 1491, 1335), Deift (Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, Vol. 3, New York University, 1999), Kriecherbauer and McLaughlin (Internat. Math. Res. Notices (1999) 299) and Baik et al. (J. Amer. Math. Soc. 12 (1999) 1119). We mainly consider orthogonal polynomials with respect to weights on the real line which are either (1) Freud-type weights dα(x) = e^(-Q(x)) dx (Q a polynomial, or Q(x) = x^β, β > 0), or (2) varying weights dα_n(x) = e^(-nV(x)) dx (V analytic, lim_(x→∞) V(x)/log x = ∞). We obtain Plancherel-Rotach-type asymptotics in the entire complex plane as well as asymptotic formulae with error estimates for the leading coefficients, for the recurrence coefficients, and for the zeros of the orthogonal polynomials. Our proof starts from an observation of Fokas et al. (Comm. Math. Phys. 142 (1991) 313) that the orthogonal polynomials can be determined as solutions of certain matrix-valued Riemann-Hilbert problems. We analyze the Riemann-Hilbert problems by a steepest-descent-type method introduced by Deift and Zhou (Ann. Math. 137 (1993) 295) and further developed in Deift and Zhou (Comm. Pure Appl. Math. 48 (1995) 277) and Deift et al. (Proc. Nat. Acad. Sci. USA 95 (1998) 450). A crucial step in our analysis is the use of the well-known equilibrium measure which describes the asymptotic distribution of the zeros of the orthogonal polynomials.

  11. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    PubMed

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

    This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by using an echocardiographical method and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). In order to identify the factors causing pulmonary hypertension, the statistical method of comparing the values of arithmetical means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method, described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method, described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the determination coefficient (criterion 1), comparing residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). In the H-group, 47% had pulmonary hypertension that was completely reversible upon reaching euthyroidism. The factors causing pulmonary hypertension were identified: previously known: level of free thyroxine, pulmonary vascular resistance, cardiac output; newly identified in this study: pretreatment period, age, systolic blood pressure. According to the four criteria and to clinical judgment, we consider that the polynomial model (graphically parabola-type) is better than the linear one. The best model of the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is a second-degree polynomial equation, whose graphical representation is a parabola.
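
    A minimal sketch of the model comparison described above (criteria 1 and 3: goodness of fit versus AIC) on synthetic data standing in for the PAPs measurements; the data and noise level are hypothetical.

        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(0, 10, 53)                       # hypothetical predictor
        y = 1.5 + 0.8*x - 0.05*x**2 + rng.normal(0, 0.5, x.size)

        def fit_poly_aic(x, y, degree):
            """Least-squares polynomial fit plus a Gaussian AIC score."""
            coeffs = np.polyfit(x, y, degree)
            resid = y - np.polyval(coeffs, x)
            n, k = y.size, degree + 1
            rss = float(resid @ resid)
            aic = n * np.log(rss / n) + 2 * k            # up to an additive constant
            return coeffs, aic

        _, aic_linear = fit_poly_aic(x, y, 1)
        _, aic_quadratic = fit_poly_aic(x, y, 2)
        print(aic_linear, aic_quadratic)                 # lower AIC wins (criterion 3)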

  12. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, p_c(4, 6, 12) = 0.69377849... and p_c(3^4, 6) = 0.43437077..., compared with Parviainen’s numerical results of p_c = 0.69373383... and p_c = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, whose root in [0, 1] gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.
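
    The root-extraction step is easy to reproduce for a case where the answer is known exactly: for honeycomb-lattice bond percolation the threshold is the root in [0, 1] of the integer-coefficient polynomial 1 - 3p + p^3, which a bracketing root finder recovers directly.

        from scipy.optimize import brentq

        def poly(p):
            # Integer-coefficient polynomial whose root in [0, 1] is the exact
            # bond threshold of the honeycomb lattice.
            return 1.0 - 3.0*p + p**3

        p_c = brentq(poly, 0.0, 1.0)   # sign change: poly(0) = 1, poly(1) = -1
        print(p_c)                     # ~0.6527036...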

  13. Accurate spectral solutions for the parabolic and elliptic partial differential equations by the ultraspherical tau method

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Abd-Elhameed, W. M.

    2005-09-01

    We present double ultraspherical spectral methods that allow the efficient approximate solution of parabolic partial differential equations in a square subject to the most general inhomogeneous mixed boundary conditions. The differential equations with their boundary and initial conditions are reduced to systems of ordinary differential equations for the time-dependent expansion coefficients. These systems are greatly simplified by using tensor matrix algebra, and are solved by using the step-by-step method. Numerical examples of how to use these methods are described. Numerical results obtained compare favorably with those of the analytical solutions. Accurate double ultraspherical spectral approximations for Poisson's and Helmholtz's equations are also noted. Numerical experiments show that spectral approximation based on Chebyshev polynomials of the first kind is not always better than others based on ultraspherical polynomials.

  14. Rational solutions to the KPI equation and multi rogue waves

    NASA Astrophysics Data System (ADS)

    Gaillard, Pierre

    2016-04-01

    We construct here rational solutions to the Kadomtsev-Petviashvili equation (KPI) as a quotient of two polynomials in x, y and t depending on several real parameters. This method provides an infinite hierarchy of rational solutions written in terms of polynomials of degree 2N(N+1) in x, y and t depending on 2N-2 real parameters for each positive integer N. We give explicit expressions of the solutions in the simplest cases N = 1 and N = 2, and we study the patterns of their modulus in the (x, y) plane for different values of time t and parameters.

  15. On the gravitational field of static and stationary axial symmetric bodies with multi-polar structure

    NASA Astrophysics Data System (ADS)

    Letelier, Patricio S.

    1999-04-01

    We give a physical interpretation to the multi-polar Erez-Rozen-Quevedo solution of the Einstein equations in terms of bars. We find that each multi-pole corresponds to the Newtonian potential of a bar with linear density proportional to a Legendre polynomial. We use this fact to find an integral representation of the corresponding metric function. These integral representations are used in the context of the inverse scattering method to find solutions associated with one or more rotating bodies each with their own multi-polar structure.

  16. An approach toward the numerical evaluation of multi-loop Feynman diagrams

    NASA Astrophysics Data System (ADS)

    Passarino, Giampiero

    2001-12-01

    A scheme for systematically achieving accurate numerical evaluation of multi-loop Feynman diagrams is developed. This shows the feasibility of a project aimed at producing a complete calculation for two-loop predictions in the Standard Model. As a first step an algorithm, proposed by F.V. Tkachov and based on the so-called generalized Bernstein functional relation, is applied to one-loop multi-leg diagrams with particular emphasis on the presence of infrared singularities, on the problem of tensorial reduction and on the classification of all singularities of a given diagram. Subsequently, the extension of the algorithm to two-loop diagrams is examined. The proposed solution consists in applying the functional relation to the one-loop sub-diagram which has the largest number of internal lines. In this way the integrand can be made smooth, apart from a factor which is a polynomial in xS, the vector of Feynman parameters needed for the complementary sub-diagram with the smallest number of internal lines. Since the procedure does not introduce new singularities one can distort the xS-integration hyper-contour into the complex hyper-plane, thus achieving numerical stability. The algorithm is then modified to deal with numerical evaluation around normal thresholds. Concise and practical formulas are assembled and presented, numerical results and comparisons with the available literature are shown and discussed for the so-called sunset topology.

  17. Application of mathematical model methods for optimization tasks in construction materials technology

    NASA Astrophysics Data System (ADS)

    Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.

    2018-05-01

    In this paper, the regression equations method for the design of construction materials was studied. Regression and polynomial equations representing the correlation between the studied parameters were proposed. The logic design and software interface of the regression equations method focused on parameter optimization to provide an energy-saving effect at the stage of autoclave aerated concrete design, considering the replacement of traditionally used quartz sand by a coal mining by-product such as argillite. The mathematical model, represented by a quadratic polynomial for the design of experiment, was obtained using calculated and experimental data. This allowed the estimation of the relationship between the composition and final properties of the aerated concrete. The response surface, graphically presented in a nomogram, allowed the estimation of concrete properties in response to variation of composition within the x-space. The optimal range of argillite content was obtained, leading to a reduction of raw materials demand, development of the target plastic strength of aerated concrete as well as a reduction of curing time before autoclave treatment. Generally, this method allows the design of autoclave aerated concrete with the required performance without additional resource and time costs.
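
    A minimal sketch of fitting such a second-order (quadratic) response surface in two factors by ordinary least squares; the factor names and design points are hypothetical, not the paper's data.

        import numpy as np

        # Hypothetical design points: x1 = argillite fraction, x2 = water/solid ratio
        x1 = np.array([0.0, 0.0, 0.5, 0.5, 1.0, 1.0, 0.25, 0.75, 0.5])
        x2 = np.array([0.4, 0.6, 0.4, 0.6, 0.4, 0.6, 0.5, 0.5, 0.5])
        y  = np.array([2.1, 1.9, 2.6, 2.4, 2.2, 2.0, 2.5, 2.4, 2.7])  # strength

        # Full quadratic model: b0 + b1 x1 + b2 x2 + b12 x1 x2 + b11 x1^2 + b22 x2^2
        X = np.column_stack([np.ones_like(x1), x1, x2, x1*x2, x1**2, x2**2])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)

        def surface(x1, x2):
            """Evaluate the fitted response surface at new factor settings."""
            return (beta[0] + beta[1]*x1 + beta[2]*x2
                    + beta[3]*x1*x2 + beta[4]*x1**2 + beta[5]*x2**2)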

  18. Regression Simulation of Turbine Engine Performance - Accuracy Improvement (TASK IV)

    DTIC Science & Technology

    1978-09-30

    Generalized Form of the Regression Equation for the Optimized Polynomial Exponent Method...altitude, Mach number and power setting combinations were generated during the ARES evaluation. The orthogonal Latin Square selection procedure...pattern. In data generation, the low (L), mid (M), and high (H) values of a variable are not always the same. At some of the corner points where

  19. Penalized spline estimation for functional coefficient regression models.

    PubMed

    Cao, Yanrong; Lin, Haiqun; Wu, Tracy Z; Yu, Yan

    2010-04-01

    The functional coefficient regression models assume that the regression coefficients vary with some "threshold" variable, providing appreciable flexibility in capturing the underlying dynamics in data and avoiding the so-called "curse of dimensionality" in multivariate nonparametric estimation. We first investigate the estimation, inference, and forecasting for the functional coefficient regression models with dependent observations via penalized splines. The P-spline approach, as a direct ridge regression shrinkage type global smoothing method, is computationally efficient and stable. With established fixed-knot asymptotics, inference is readily available. Exact inference can be obtained for fixed smoothing parameter λ, which is most appealing for finite samples. Our penalized spline approach gives an explicit model expression, which also enables multi-step-ahead forecasting via simulations. Furthermore, we examine different methods of choosing the important smoothing parameter λ: modified multi-fold cross-validation (MCV), generalized cross-validation (GCV), and an extension of empirical bias bandwidth selection (EBBS) to P-splines. In addition, we implement smoothing parameter selection using mixed model framework through restricted maximum likelihood (REML) for P-spline functional coefficient regression models with independent observations. The P-spline approach also easily allows different smoothness for different functional coefficients, which is enabled by assigning different penalty λ accordingly. We demonstrate the proposed approach by both simulation examples and a real data application.
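
    A minimal P-spline sketch under the usual formulation (a B-spline design matrix combined with a difference penalty on the coefficients, solved as a ridge-type regression); the data, knot count, and penalty weight are illustrative choices, not the paper's settings.

        import numpy as np
        from scipy.interpolate import BSpline

        def bspline_basis(x, knots, degree=3):
            """Evaluate all B-spline basis functions at x (design-matrix columns)."""
            n_basis = len(knots) - degree - 1
            B = np.empty((x.size, n_basis))
            for j in range(n_basis):
                c = np.zeros(n_basis)
                c[j] = 1.0
                B[:, j] = BSpline(knots, c, degree)(x)
            return B

        rng = np.random.default_rng(1)
        x = np.sort(rng.uniform(0, 1, 200))
        y = np.sin(2*np.pi*x) + rng.normal(0, 0.3, x.size)

        interior = np.linspace(0, 1, 20)
        knots = np.r_[[0]*3, interior, [1]*3]          # clamped cubic knot vector
        B = bspline_basis(x, knots, 3)

        # Second-order difference penalty: ridge-type shrinkage on coefficients.
        D = np.diff(np.eye(B.shape[1]), n=2, axis=0)
        lam = 1.0                                      # smoothing parameter lambda
        theta = np.linalg.solve(B.T @ B + lam * D.T @ D, B.T @ y)
        fitted = B @ theta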

  20. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks.

    PubMed

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-03-24

    Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified the issue that the resultant polynomial value is either storage-intensive or infeasible to compute when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever the key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and a secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol attains mutual validation and the session key agreement property among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.
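
    Schemes of this family build on the classic symmetric bivariate polynomial idea: since f(x, y) = f(y, x), two nodes holding the univariate shares f(id1, ·) and f(id2, ·) derive the same pairwise key. The sketch below shows only that core mechanism with toy parameters; EKM's actual protocol adds group heads, hashing of security credentials, and rekeying, which are not reproduced here.

        import random

        P = 2**31 - 1  # public prime modulus (toy size; real schemes use larger)

        def symmetric_poly(degree, seed=42):
            """Random symmetric bivariate polynomial f(x, y) = sum a[i][j] x^i y^j
            with a[i][j] == a[j][i], so that f(u, v) == f(v, u)."""
            rnd = random.Random(seed)
            a = [[0]*(degree+1) for _ in range(degree+1)]
            for i in range(degree+1):
                for j in range(i, degree+1):
                    a[i][j] = a[j][i] = rnd.randrange(P)
            return a

        def share(a, node_id):
            """Univariate share g(y) = f(node_id, y), stored by one node."""
            deg = len(a) - 1
            return [sum(a[i][j] * pow(node_id, i, P) for i in range(deg+1)) % P
                    for j in range(deg+1)]

        def pairwise_key(g, peer_id):
            """Evaluate the stored share at the peer's id to get the shared key."""
            return sum(c * pow(peer_id, j, P) for j, c in enumerate(g)) % P

        a = symmetric_poly(degree=3)
        g_alice, g_bob = share(a, 1001), share(a, 2002)
        assert pairwise_key(g_alice, 2002) == pairwise_key(g_bob, 1001)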

  2. Disconjugacy, regularity of multi-indexed rationally extended potentials, and Laguerre exceptional polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grandati, Y.; Quesne, C.

    2013-07-15

    The power of the disconjugacy properties of second-order differential equations of Schrödinger type to check the regularity of rationally extended quantum potentials connected with exceptional orthogonal polynomials is illustrated by re-examining the extensions of the isotonic oscillator (or radial oscillator) potential derived in kth-order supersymmetric quantum mechanics or multistep Darboux-Bäcklund transformation method. The function arising in the potential denominator is proved to be a polynomial with a nonvanishing constant term, whose value is calculated by induction over k. The sign of this term being the same as that of the already known highest degree term, the potential denominator has the same sign at both extremities of the definition interval, a property that is shared by the seed eigenfunction used in the potential construction. By virtue of disconjugacy, such a property implies the nodeless character of both the eigenfunction and the resulting potential.

  3. Control Synthesis of Discrete-Time T-S Fuzzy Systems via a Multi-Instant Homogenous Polynomial Approach.

    PubMed

    Xie, Xiangpeng; Yue, Dong; Zhang, Huaguang; Xue, Yusheng

    2016-03-01

    This paper deals with the problem of control synthesis of discrete-time Takagi-Sugeno fuzzy systems by employing a novel multi-instant homogeneous polynomial approach. A new multi-instant fuzzy control scheme and a new class of fuzzy Lyapunov functions, which are homogeneous polynomially parameter-dependent on both the current-time normalized fuzzy weighting functions and the past-time normalized fuzzy weighting functions, are proposed for implementing the object of relaxed control synthesis. Then, relaxed stabilization conditions are derived with less conservatism than existing ones. Furthermore, the relaxation quality of the obtained stabilization conditions is further ameliorated by developing an efficient slack variable approach, which presents a multipolynomial dependence on the normalized fuzzy weighting functions at the current and past instants of time. Two simulation examples are given to demonstrate the effectiveness and benefits of the results developed in this paper.

  4. Optimization of Medium Composition for the Production of Neomycin by Streptomyces fradiae NCIM 2418 in Solid State Fermentation

    PubMed Central

    Vastrad, B. M.; Neelagund, S. E.

    2014-01-01

    Neomycin production of Streptomyces fradiae NCIM 2418 was optimized by using response surface methodology (RSM), which is a powerful mathematical approach comprehensively applied in the optimization of solid state fermentation processes. In the first step of optimization, with a Plackett-Burman design, ammonium chloride, sodium nitrate, L-histidine, and ammonium nitrate were established to be the crucial nutritional factors affecting neomycin production significantly. In the second step, a 2^4 full factorial central composite design and RSM were applied to determine the optimal concentrations of the significant variables. A second-order polynomial was determined by the multiple regression analysis of the experimental data. The optimum values of the important nutrients for maximum neomycin production were obtained as follows: ammonium chloride 2.00%, sodium nitrate 1.50%, L-histidine 0.250%, and ammonium nitrate 0.250%, with a predicted value of maximum neomycin production of 20,000 g kg−1 dry coconut oil cake. Under the optimal condition, the practical neomycin production was 19,642 g kg−1 dry coconut oil cake. The determination coefficient (R^2) was 0.9232, which ensures an acceptable admissibility of the model. PMID:25009746

  5. Response Surface Modeling Tolerance and Inference Error Risk Specifications: Proposed Industry Standards

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2012-01-01

    This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.
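
    The specific scaling equation is not reproduced in the abstract, but the term count it builds on is standard: a full polynomial of degree d in k factors has C(k + d, d) coefficients. The sketch below computes that count and applies an illustrative (assumed, not the paper's) safety multiplier to get a floor on the number of data points.

        from math import ceil, comb

        def n_terms(k_factors, degree):
            """Coefficients in a full polynomial model of the given degree in
            k factors: C(k + d, d)."""
            return comb(k_factors + degree, degree)

        def min_points(k_factors, degree, safety=1.5):
            """Heuristic floor on data volume: `safety` times the number of model
            terms. The 1.5 multiplier is an illustrative assumption, not the
            specific criterion derived in the paper."""
            return ceil(safety * n_terms(k_factors, degree))

        # A quadratic model in 3 wind-tunnel factors has C(5, 2) = 10 terms.
        print(n_terms(3, 2), min_points(3, 2))   # -> 10 15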

  6. A stabilized Runge–Kutta–Legendre method for explicit super-time-stepping of parabolic and mixed equations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Meyer, Chad D.; Balsara, Dinshaw S.; Aslam, Tariq D.

    2014-01-15

    Parabolic partial differential equations appear in several physical problems, including problems that have a dominant hyperbolic part coupled to a sub-dominant parabolic component. Explicit methods for their solution are easy to implement but have very restrictive time step constraints. Implicit solution methods can be unconditionally stable but have the disadvantage of being computationally costly or difficult to implement. Super-time-stepping methods for treating parabolic terms in mixed type partial differential equations occupy an intermediate position. In such methods each superstep takes “s” explicit Runge–Kutta-like time-steps to advance the parabolic terms by a time-step that is s^2 times larger than a single explicit time-step. The expanded stability is usually obtained by mapping the short recursion relation of the explicit Runge–Kutta scheme to the recursion relation of some well-known, stable polynomial. Prior work has built temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Chebyshev polynomials. Since their stability is based on the boundedness of the Chebyshev polynomials, these methods have been called RKC1 and RKC2. In this work we build temporally first- and second-order accurate super-time-stepping methods around the recursion relation associated with Legendre polynomials. We call these methods RKL1 and RKL2. The RKL1 method is first-order accurate in time; the RKL2 method is second-order accurate in time. We verify that the newly-designed RKL1 and RKL2 schemes have a very desirable monotonicity preserving property for one-dimensional problems – a solution that is monotone at the beginning of a time step retains that property at the end of that time step. It is shown that RKL1 and RKL2 methods are stable for all values of the diffusion coefficient up to the maximum value. We call this a convex monotonicity preserving property and show by examples that it is very useful in parabolic problems with variable diffusion coefficients. This includes variable coefficient parabolic equations that might give rise to skew symmetric terms. The RKC1 and RKC2 schemes do not share this convex monotonicity preserving property. One-dimensional and two-dimensional von Neumann stability analyses of RKC1, RKC2, RKL1 and RKL2 are also presented, showing that the latter two have some advantages. The paper includes several details to facilitate implementation. A detailed accuracy analysis is presented to show that the methods reach their design accuracies. A stringent set of test problems is also presented. To demonstrate the robustness and versatility of our methods, we show their successful operation on problems involving linear and non-linear heat conduction and viscosity, resistive magnetohydrodynamics, ambipolar diffusion dominated magnetohydrodynamics, level set methods and flux limited radiation diffusion. In a prior paper (Meyer, Balsara and Aslam 2012 [36]) we have also presented an extensive test-suite showing that the RKL2 method works robustly in the presence of shocks in an anisotropically conducting, magnetized plasma.
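
    The recursion relation the stability argument rests on is the classical three-term Legendre recurrence, (k+1) P_{k+1}(x) = (2k+1) x P_k(x) - k P_{k-1}(x); a short sketch of evaluating P_n by it:

        import numpy as np

        def legendre(n, x):
            """Evaluate P_n(x) via the stable three-term recurrence
            (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}, with P_0 = 1, P_1 = x."""
            x = np.asarray(x, dtype=float)
            p_prev, p = np.ones_like(x), x.copy()
            if n == 0:
                return p_prev
            for k in range(1, n):
                p_prev, p = p, ((2*k + 1) * x * p - k * p_prev) / (k + 1)
            return p

        # Boundedness on [-1, 1] (|P_n| <= 1) is what underwrites the expanded
        # stability region of the Legendre-based super-time-step.
        x = np.linspace(-1, 1, 5)
        print(legendre(4, x))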

  8. Geometrization and Generalization of the Kowalevski Top

    NASA Astrophysics Data System (ADS)

    Dragović, Vladimir

    2010-08-01

    A new view on the Kowalevski top and the Kowalevski integration procedure is presented. For more than a century, the Kowalevski 1889 case has attracted the full attention of a wide community as the highlight of the classical theory of integrable systems. Despite hundreds of papers on the subject, the Kowalevski integration is still understood as a magic recipe, an unbelievable sequence of skillful tricks, unexpected identities and smart changes of variables. The novelty of our present approach is based on four observations. The first is that the so-called fundamental Kowalevski equation is an instance of a pencil equation from the theory of conics, which leads us to a new geometric interpretation of the Kowalevski variables w, x1, x2 as the pencil parameter and the Darboux coordinates, respectively. The second is the observation of the key algebraic property of the pencil equation, which is followed by the introduction and study of a new class of discriminantly separable polynomials. All steps of the Kowalevski integration procedure are now derived as easy and transparent logical consequences of our theory of discriminantly separable polynomials. The third observation connects the Kowalevski integration and the pencil equation with the theory of multi-valued groups. The Kowalevski change of variables is now recognized as an example of a two-valued group operation and its action. The final observation is the surprising equivalence of the associativity of the two-valued group operation and its action to the n = 3 case of the Great Poncelet Theorem for pencils of conics.

  9. Forward Behavioral Modeling of a Three-Way Amplitude Modulator-Based Transmitter Using an Augmented Memory Polynomial

    PubMed Central

    Chatrath, Jatin; Aziz, Mohsin; Helaoui, Mohamed

    2018-01-01

    Reconfigurable and multi-standard RF front-ends for wireless communication and sensor networks have gained importance as building blocks for the Internet of Things. Simpler and highly-efficient transmitter architectures, which can transmit better quality signals with reduced impairments, are an important step in this direction. In this regard, mixer-less transmitter architecture, namely, the three-way amplitude modulator-based transmitter, avoids the use of imperfect mixers and frequency up-converters, and their resulting distortions, leading to an improved signal quality. In this work, an augmented memory polynomial-based model for the behavioral modeling of such mixer-less transmitter architecture is proposed. Extensive simulations and measurements have been carried out in order to validate the accuracy of the proposed modeling strategy. The performance of the proposed model is evaluated using normalized mean square error (NMSE) for long-term evolution (LTE) signals. NMSE for a LTE signal of 1.4 MHz bandwidth with 100,000 samples for digital combining and analog combining are recorded as −36.41 dB and −36.9 dB, respectively. Similarly, for a 5 MHz signal the proposed models achieves −31.93 dB and −32.08 dB NMSE using digital and analog combining, respectively. For further validation of the proposed model, amplitude-to-amplitude (AM-AM), amplitude-to-phase (AM-PM), and the spectral response of the modeled and measured data are plotted, reasonably meeting the desired modeling criteria. PMID:29510501

  10. Georeferencing CAMS data: Polynomial rectification and beyond

    NASA Astrophysics Data System (ADS)

    Yang, Xinghe

    The Calibrated Airborne Multispectral Scanner (CAMS) is a sensor used in the commercial remote sensing program at NASA Stennis Space Center. In geographic applications of the CAMS data, accurate geometric rectification is essential for the analysis of the remotely sensed data and for the integration of the data into Geographic Information Systems (GIS). The commonly used rectification techniques such as the polynomial transformation and ortho-rectification have been very successful in the field of remote sensing and GIS for most remote sensing data such as Landsat imagery, SPOT imagery and aerial photos. However, due to the geometric nature of the airborne line scanner, which has high spatial frequency distortions, the polynomial model and the ortho-rectification technique in current commercial software packages such as Erdas Imagine are not adequate for obtaining sufficient geometric accuracy. In this research, the geometric nature, especially the major distortions, of the CAMS data has been described. An analytical step-by-step geometric preprocessing has been utilized to deal with the potential high frequency distortions of the CAMS data. A generic sensor-independent photogrammetric model has been developed for the ortho-rectification of the CAMS data. Three generalized kernel classes and directional elliptical basis have been formulated into a rectification model of summation of multisurface functions, which is a significant extension to the traditional radial basis functions. The preprocessing mechanism has been fully incorporated into the polynomial, the triangle-based finite element analysis as well as the summation of multisurface functions. While the multisurface functions and the finite element analysis have the characteristics of localization, piecewise logic has been applied to the polynomial and photogrammetric methods, which can produce significant accuracy improvement over the global approach. A software module has been implemented with full integration of data preprocessing and rectification techniques under the Erdas Imagine development environment. The final root mean square (RMS) errors for the test CAMS data are about two pixels, which is compatible with the random RMS errors present in the reference map coordinates.
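
    A minimal sketch of the global polynomial rectification that the dissertation argues is insufficient on its own: fit image-to-map polynomials from ground control points by least squares. The control points below are synthetic; the piecewise and multisurface extensions described above are not reproduced.

        import numpy as np

        def poly2d_design(u, v, degree=2):
            """Design matrix of all monomials u^i v^j with i + j <= degree."""
            cols = [u**i * v**j
                    for i in range(degree + 1)
                    for j in range(degree + 1 - i)]
            return np.column_stack(cols)

        def fit_rectification(img_uv, map_xy, degree=2):
            """Fit image-to-map polynomials for x and y from ground control points."""
            A = poly2d_design(img_uv[:, 0], img_uv[:, 1], degree)
            coef_x, *_ = np.linalg.lstsq(A, map_xy[:, 0], rcond=None)
            coef_y, *_ = np.linalg.lstsq(A, map_xy[:, 1], rcond=None)
            return coef_x, coef_y

        # Synthetic control points (pixel coords -> map coords)
        rng = np.random.default_rng(2)
        uv = rng.uniform(0, 1000, (30, 2))
        xy = np.column_stack([3*uv[:, 0] + 0.001*uv[:, 0]*uv[:, 1] + 500,
                              -2*uv[:, 1] + 0.002*uv[:, 0]**2 + 900])
        cx, cy = fit_rectification(uv, xy)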

  11. Volumetric calibration of a plenoptic camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hall, Elise Munz; Fahringer, Timothy W.; Guildenbecher, Daniel Robert

    Here, the volumetric calibration of a plenoptic camera is explored to correct for inaccuracies due to real-world lens distortions and thin-lens assumptions in current processing methods. Two methods of volumetric calibration based on a polynomial mapping function that does not require knowledge of specific lens parameters are presented and compared to a calibration based on thin-lens assumptions. The first method, volumetric dewarping, is executed by creation of a volumetric representation of a scene using the thin-lens assumptions, which is then corrected in post-processing using a polynomial mapping function. The second method, direct light-field calibration, uses the polynomial mapping in creation of the initial volumetric representation to relate locations in object space directly to image sensor locations. The accuracy and feasibility of these methods is examined experimentally by capturing images of a known dot card at a variety of depths. Results suggest that use of a 3D polynomial mapping function provides a significant increase in reconstruction accuracy and that the achievable accuracy is similar using either polynomial-mapping-based method. Additionally, direct light-field calibration provides significant computational benefits by eliminating some intermediate processing steps found in other methods. Finally, the flexibility of this method is shown for a nonplanar calibration.

  13. Multi-linear regression of sea level in the south west Pacific as a first step towards local sea level projections

    NASA Astrophysics Data System (ADS)

    Kumar, Vandhna; Meyssignac, Benoit; Melet, Angélique; Ganachaud, Alexandre

    2017-04-01

    Rising sea levels are a critical concern in small island nations. The problem is especially serious in the western south Pacific, where the total sea level rise over the last 60 years is up to 3 times the global average. In this study, we attempt to reconstruct sea levels at selected sites in the region (Suva and Lautoka in Fiji; Noumea in New Caledonia) as a multiple linear regression of atmospheric and oceanic variables. We focus on interannual-to-decadal variability and lower frequencies (including the global mean sea level rise) over the 1979-2014 period. Sea levels are taken from tide gauge records and the ORAS4 reanalysis dataset, and are expressed as a sum of steric and mass changes as a preliminary step. The key development in our methodology is the use of wind stress curl as a proxy for the thermosteric component. This is based on the knowledge that wind stress curl anomalies can modulate the thermocline depth and resultant sea levels via Rossby wave propagation. The analysis is primarily based on correlation between local sea level and selected predictors, the dominant one being wind stress curl. In the first step, proxy boxes for wind stress curl are determined via regions of highest correlation. The proportion of sea level explained via linear regression is then removed, leaving a residual. This residual is then correlated with other locally acting potential predictors: halosteric sea level, the zonal and meridional wind stress components, and sea surface temperature. The statistically significant predictors are used in a multi-linear regression function to simulate the observed sea level. The method is able to reproduce between 40 and 80% of the variance in observed sea level. Based on the skill of the model, it has high potential in sea level projection and downscaling studies.
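
    A minimal sketch of the two-step workflow described above: regress sea level on the dominant predictor, then regress the residual on a secondary predictor and combine. The data are synthetic stand-ins for the curl proxy and local predictors.

        import numpy as np

        rng = np.random.default_rng(3)
        n = 432  # e.g. monthly samples over 1979-2014

        curl = rng.normal(size=n)                  # wind stress curl proxy
        sst = rng.normal(size=n)                   # secondary local predictor
        sea_level = 0.8*curl + 0.3*sst + rng.normal(0, 0.4, n)

        def ols(X, y):
            """Ordinary least squares: coefficients and residual."""
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return beta, y - X @ beta

        # Step 1: remove the part of sea level explained by the curl proxy.
        X1 = np.column_stack([np.ones(n), curl])
        b1, residual = ols(X1, sea_level)

        # Step 2: regress the residual on the remaining local predictor.
        X2 = np.column_stack([np.ones(n), sst])
        b2, _ = ols(X2, residual)

        reconstructed = X1 @ b1 + X2 @ b2
        explained = 1 - np.var(sea_level - reconstructed) / np.var(sea_level)
        print(f"variance explained: {explained:.2f}")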

  14. Analysis of the inter- and extracellular formation of platinum nanoparticles by Fusarium oxysporum f. sp. lycopersici using response surface methodology

    NASA Astrophysics Data System (ADS)

    Riddin, T. L.; Gericke, M.; Whiteley, C. G.

    2006-07-01

    Fusarium oxysporum fungal strain was screened and found to be successful for the inter- and extracellular production of platinum nanoparticles. Nanoparticle formation was visually observed, over time, by the colour of the extracellular solution and/or the fungal biomass turning from yellow to dark brown, and their concentration was determined from the amount of residual hexachloroplatinic acid measured from a standard curve at 456 nm. The extracellular nanoparticles were characterized by transmission electron microscopy. Nanoparticles of varying size (10-100 nm) and shape (hexagons, pentagons, circles, squares, rectangles) were produced at both extracellular and intercellular levels by the Fusarium oxysporum. The particles precipitate out of solution and bioaccumulate by nucleation either intercellularly, on the cell wall/membrane, or extracellularly in the surrounding medium. The importance of pH, temperature and hexachloroplatinic acid (H2PtCl6) concentration in nanoparticle formation was examined through the use of a statistical response surface methodology. Only the extracellular production of nanoparticles proved to be statistically significant, with a concentration yield of 4.85 mg l-1 estimated by a first-order regression model. From a second-order polynomial regression, the predicted yield of nanoparticles increased to 5.66 mg l-1, and a backward stepwise regression gave a final model with a yield of 6.59 mg l-1.

  15. Architecture for time or transform domain decoding of reed-solomon codes

    NASA Technical Reports Server (NTRS)

    Hsu, In-Shek (Inventor); Truong, Trieu-Kie (Inventor); Deutsch, Leslie J. (Inventor); Shao, Howard M. (Inventor)

    1989-01-01

    Two pipeline (255,223) RS decoders, one a time domain decoder and the other a transform domain decoder, use the same first part to develop an errata locator polynomial τ(x) and an errata evaluator polynomial A(x). Both the time domain decoder and the transform domain decoder have a modified GCD that uses an input multiplexer and an output demultiplexer to reduce the number of GCD cells required. The time domain decoder uses a Chien search and polynomial evaluator on the GCD outputs τ(x) and A(x) for the final decoding steps, while the transform domain decoder uses a transform error pattern algorithm operating on τ(x) and the initial syndrome computation S(x), followed by an inverse transform algorithm in sequence for the final decoding steps prior to adding the received RS coded message to produce a decoded output message.
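
    The Chien search amounts to a brute-force root scan of the errata locator polynomial over the code's finite field. The sketch below uses a small prime field to keep the arithmetic plain Python; production RS decoders work over GF(2^8) with table-driven arithmetic.

        def chien_search(poly, p=251):
            """Root scan of a polynomial over the prime field GF(p), the role the
            Chien search plays for the errata locator tau(x). Coefficients are
            given low-to-high degree."""
            def eval_poly(coeffs, x):
                acc = 0
                for c in reversed(coeffs):      # Horner's rule
                    acc = (acc * x + c) % p
                return acc
            return [x for x in range(p) if eval_poly(poly, x) == 0]

        # tau(x) = (x - 3)(x - 7) = x^2 - 10x + 21, reduced mod p
        tau = [21, (-10) % 251, 1]
        print(chien_search(tau))                # -> [3, 7]: the error locations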

  16. An Approach to Stable Gradient-Descent Adaptation of Higher Order Neural Units.

    PubMed

    Bukovsky, Ivo; Homma, Noriyasu

    2017-09-01

    Stability evaluation of a weight-update system of higher order neural units (HONUs) with polynomial aggregation of neural inputs (also known as classes of polynomial neural networks) for adaptation of both feedforward and recurrent HONUs by a gradient descent method is introduced. An essential core of the approach is based on the spectral radius of the weight-update system, and it allows stability monitoring and maintenance at every adaptation step individually. Assuring the stability of the weight-update system (at every single adaptation step) naturally results in the adaptation stability of the whole neural architecture that adapts to the target data. As an aside, the approach highlights the fact that the weight optimization of a HONU is a linear problem, so the proposed approach can be generally extended to any neural architecture that is linear in its adaptable parameters.
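
    For a linear-in-parameters unit, the one-sample gradient-descent update is w_new = (I - mu x x^T) w + mu y x, so the spectral radius of I - mu x x^T can be monitored at every step. The sketch below applies that idea to a toy quadratic neural unit; it is a simplified single-sample view, not the paper's exact formulation.

        import numpy as np

        rng = np.random.default_rng(4)
        mu = 0.05                                   # learning rate
        w = np.zeros(6)                             # HONU weights (linear in parameters)

        def honu_inputs(u):
            """Quadratic HONU: polynomial aggregation of two raw inputs."""
            x1, x2 = u
            return np.array([1.0, x1, x2, x1*x1, x1*x2, x2*x2])

        for step in range(200):
            u = rng.uniform(-1, 1, 2)
            x = honu_inputs(u)
            target = 0.5*u[0] - 0.2*u[0]*u[1]       # toy teacher signal
            e = target - w @ x

            # One-sample update map: w_new = (I - mu x x^T) w + mu * target * x.
            M = np.eye(x.size) - mu * np.outer(x, x)
            rho = max(abs(np.linalg.eigvalsh(M)))   # spectral radius of the map
            if rho > 1.0:                           # maintain stability at this step
                mu *= 0.5
            w = w + mu * e * x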

  17. Comments on Samal and Henderson: Parallel consistent labeling algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swain, M.J.

    Samal and Henderson claim that any parallel algorithm for enforcing arc consistency in the worst case must have Ω(na) sequential steps, where n is the number of nodes and a is the number of labels per node. The authors argue that Samal and Henderson's argument makes assumptions about how processors are used and give a counterexample that enforces arc consistency in a constant number of steps using O(n^2 a^2 2^(na)) processors. It is possible that the lower bound holds for a polynomial number of processors; if such a lower bound were to be proven it would answer an important open question in theoretical computer science concerning the relation between the complexity classes P and NC. The strongest existing lower bound for the arc consistency problem states that it cannot be solved in polynomial log time unless P = NC.

  18. Linearity versus Nonlinearity of Offspring-Parent Regression: An Experimental Study of Drosophila Melanogaster

    PubMed Central

    Gimelfarb, A.; Willis, J. H.

    1994-01-01

    An experiment was conducted to investigate the offspring-parent regression for three quantitative traits (weight, abdominal bristles and wing length) in Drosophila melanogaster. Linear and polynomial models were fitted for the regressions of a character in offspring on both parents. It is demonstrated that responses by the characters to selection predicted by the nonlinear regressions may differ substantially from those predicted by the linear regressions. This is true even, and especially, if selection is weak. The realized heritability for a character under selection is shown to be determined not only by the offspring-parent regression but also by the distribution of the character and by the form and strength of selection. PMID:7828818

  19. Optimization of Milk-Based Medium for Efficient Cultivation of Bifidobacterium pseudocatenulatum G4 Using Face-Centered Central Composite-Response Surface Methodology

    PubMed Central

    Abdul Khalil, Khalilah; Mustafa, Shuhaimi; Mohammad, Rosfarizan; Bin Ariff, Arbakariya; Shaari, Yamin; Abdul Manap, Yazid; Dahalan, Farrah Aini

    2014-01-01

    This study was undertaken to optimize skim milk and yeast extract concentration as a cultivation medium for optimal Bifidobacterium pseudocatenulatum G4 (G4) biomass and β-galactosidase production as well as lactose and free amino nitrogen (FAN) balance after the cultivation period. The optimization process in this study involved four steps: screening for significant factors using a 2^3 full factorial design, steepest ascent, optimization using FCCD-RSM, and verification. In the screening step, skim milk and yeast extract showed a significant influence on biomass production, and, based on the steepest ascent step, midpoints of skim milk (6% wt/vol) and yeast extract (1.89% wt/vol) were obtained. A polynomial regression model in FCCD-RSM revealed that both factors were significant and that the strongest influence was given by skim milk concentration. Optimum concentrations of skim milk and yeast extract for maximum G4 biomass and β-galactosidase production, together with low residual lactose and FAN after the cultivation period, were 5.89% (wt/vol) and 2.31% (wt/vol), respectively. The validation experiments showed that the predicted and experimental values are not significantly different, indicating that the FCCD-RSM model developed is sufficient to describe the cultivation process of G4 using skim-milk-based medium with the addition of yeast extract. PMID:24527457

  20. H∞ and H2 nonquadratic stabilisation of discrete-time Takagi-Sugeno systems based on multi-instant fuzzy Lyapunov functions

    NASA Astrophysics Data System (ADS)

    Tognetti, Eduardo S.; Oliveira, Ricardo C. L. F.; Peres, Pedro L. D.

    2015-01-01

    The problem of state feedback control design for discrete-time Takagi-Sugeno (T-S) fuzzy systems is investigated in this paper. A Lyapunov function, which is quadratic in the state and presents a multi-polynomial dependence on the fuzzy weighting functions at the current and past instants of time, is proposed. This function contains, as particular cases, other Lyapunov functions already used in the literature, being able to provide less conservative conditions of control design for T-S fuzzy systems. The structure of the proposed Lyapunov function also motivates the design of a new stabilising compensator for Takagi-Sugeno fuzzy systems. The main novelty of the proposed state feedback control law is that the gain is composed of matrices with multi-polynomial dependence on the fuzzy weighting functions at a set of past instants of time, including the current one. The conditions for the existence of a stabilising state feedback control law that minimises an upper bound to the H∞ or H2 norms are given in terms of linear matrix inequalities. Numerical examples show that the approach can be less conservative and more efficient than other methods available in the literature.

  1. Demodulation of moire fringes in digital holographic interferometry using an extended Kalman filter.

    PubMed

    Ramaiah, Jagadesh; Rastogi, Pramod; Rajshekhar, Gannavarpu

    2018-03-10

    This paper presents a method for extracting multiple phases from a single moire fringe pattern in digital holographic interferometry. The method relies on component separation using singular value decomposition and an extended Kalman filter for demodulating the moire fringes. The Kalman filter is applied by modeling the interference field locally as a multi-component polynomial phase signal and extracting the associated multiple polynomial coefficients using the state space approach. In addition to phase, the corresponding multiple phase derivatives can be simultaneously extracted using the proposed method. The applicability of the proposed method is demonstrated using simulation and experimental results.

  2. Model-based multi-fringe interferometry using Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Gu, Wei; Song, Weihong; Wu, Gaofeng; Quan, Haiyang; Wu, Yongqian; Zhao, Wenchuan

    2018-06-01

    In this paper, a general phase retrieval method is proposed, which is based on a single interferogram with a small number of fringes (either tilt or power). Zernike polynomials are used to characterize the phase to be measured; the phase distribution is reconstructed by a non-linear least squares method. Experiments show that the proposed method can obtain satisfactory results compared to the standard phase-shifting interferometry technique. Additionally, the retrace errors of the proposed method can be neglected because of the few fringes; it does not need any auxiliary phase-shifting facilities (low cost) and it is easy to implement without the process of phase unwrapping.
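
    For intuition, the simpler linear half of such a scheme is easy to sketch: once a phase map is available, Zernike coefficients follow from ordinary least squares on the basis evaluated over the pupil (the paper instead fits the fringe intensity model non-linearly, which is not reproduced here).

        import numpy as np

        def zernike_basis(x, y):
            """First few Zernike polynomials on the unit disk (Cartesian form):
            piston, x-tilt, y-tilt, defocus, and the two astigmatism terms."""
            r2 = x**2 + y**2
            return np.column_stack([
                np.ones_like(x),        # Z0: piston
                x,                      # Z1: tilt x
                y,                      # Z2: tilt y
                2*r2 - 1,               # Z3: defocus
                2*x*y,                  # Z4: astigmatism 45 deg
                x**2 - y**2,            # Z5: astigmatism 0 deg
            ])

        # Hypothetical measured phase on a disk-shaped pupil
        rng = np.random.default_rng(5)
        xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
        mask = xx**2 + yy**2 <= 1.0
        x, y = xx[mask], yy[mask]
        phase = 0.3*x - 0.1*y + 0.7*(2*(x**2 + y**2) - 1) + rng.normal(0, 0.01, x.size)

        Z = zernike_basis(x, y)
        coeffs, *_ = np.linalg.lstsq(Z, phase, rcond=None)
        print(np.round(coeffs, 3))      # recovers the tilt and defocus terms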

  3. Advances in simultaneous atmospheric profile and cloud parameter regression based retrieval from high-spectral resolution radiance measurements

    NASA Astrophysics Data System (ADS)

    Weisz, Elisabeth; Smith, William L.; Smith, Nadia

    2013-06-01

    The dual-regression (DR) method retrieves information about the Earth surface and vertical atmospheric conditions from measurements made by any high-spectral resolution infrared sounder in space. The retrieved information includes temperature and atmospheric gases (such as water vapor, ozone, and carbon species) as well as surface and cloud top parameters. The algorithm was designed to produce a high-quality product with low latency and has been demonstrated to yield accurate results in real-time environments. The speed of the retrieval is achieved through linear regression, while accuracy is achieved through a series of classification schemes and decision-making steps. These steps are necessary to account for the nonlinearity of hyperspectral retrievals. In this work, we detail the key steps that have been developed in the DR method to advance accuracy in the retrieval of nonlinear parameters, specifically cloud top pressure. The steps and their impact on retrieval results are discussed in-depth and illustrated through relevant case studies. In addition to discussing and demonstrating advances made in addressing nonlinearity in a linear geophysical retrieval method, advances toward multi-instrument geophysical analysis by applying the DR to three different operational sounders in polar orbit are also noted. For any area on the globe, the DR method achieves consistent accuracy and precision, making it potentially very valuable to both the meteorological and environmental user communities.

  4. Multi-criteria optimization for ultrasonic-assisted extraction of antioxidants from Pericarpium Citri Reticulatae using response surface methodology, an activity-based approach.

    PubMed

    Zeng, Shanshan; Wang, Lu; Zhang, Lei; Qu, Haibin; Gong, Xingchu

    2013-06-01

    An activity-based approach to optimize the ultrasonic-assisted extraction of antioxidants from Pericarpium Citri Reticulatae (Chenpi in Chinese) was developed. Response surface optimization based on a quantitative composition-activity relationship model showed the relationships among product chemical composition, antioxidant activity of the extract, and parameters of the extraction process. Three parameters of ultrasonic-assisted extraction, including the ethanol/water ratio, Chenpi amount, and alkaline amount, were investigated to give optimum extraction conditions for antioxidants of Chenpi: ethanol/water 70:30 v/v, Chenpi amount of 10 g, and alkaline amount of 28 mg. The experimental antioxidant yield under the optimum conditions was found to be 196.5 mg/g Chenpi, and the antioxidant activity was 2023.8 μmol Trolox equivalents/g of the Chenpi powder. The results agreed well with the second-order polynomial regression model. The presented approach promises great application potential in both the food and pharmaceutical industries. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Multi-soliton solutions and Bäcklund transformation for a two-mode KdV equation in a fluid

    NASA Astrophysics Data System (ADS)

    Xiao, Zi-Jian; Tian, Bo; Zhen, Hui-Ling; Chai, Jun; Wu, Xiao-Yu

    2017-01-01

    In this paper, we investigate a two-mode Korteweg-de Vries equation, which describes the one-dimensional propagation of shallow water waves with two modes in a weakly nonlinear and dispersive fluid system. With the binary Bell polynomial and an auxiliary variable, bilinear forms, multi-soliton solutions in the two wave modes and a Bell polynomial-type Bäcklund transformation for such an equation are obtained through symbolic computation. Soliton propagation and collisions between the two solitons are presented. Based on the graphic analysis, it is shown that an increase in s can lead to an increase in the soliton velocities under a condition on the nonlinearity and dispersion parameters, but the soliton amplitudes remain unchanged when s changes, where s denotes the difference between the phase velocities of the two wave modes. Elastic collisions between the two solitons in both modes are analyzed with the help of graphic analysis.

  6. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  7. Step-rate cut-points for physical activity intensity in patients with multiple sclerosis: The effect of disability status.

    PubMed

    Agiovlasitis, Stamatis; Sandroff, Brian M; Motl, Robert W

    2016-02-15

    Evaluating the relationship between step-rate and rate of oxygen uptake (VO2) may allow for practical physical activity assessment in patients with multiple sclerosis (MS) of differing disability levels. To examine whether the VO2 to step-rate relationship during over-ground walking differs across varying disability levels among patients with MS and to develop step-rate thresholds for moderate- and vigorous-intensity physical activity. Adults with MS (N=58; age: 51 ± 9 years; 48 women) completed one over-ground walking trial at comfortable speed, one at 0.22 m · s(-1) slower, and one at 0.22 m · s(-1) faster. Each trial lasted 6 min. VO2 was measured with portable spirometry and steps with hand-tally. Disability status was classified as mild, moderate, or severe based on Expanded Disability Status Scale scores. Multi-level regression indicated that step-rate, disability status, and height significantly predicted VO2 (p<0.05). Based on this model, we developed step-rate thresholds for activity intensity that vary by disability status and height. A separate regression without height allowed for development of step-rate thresholds that vary only by disability status. The VO2 during over-ground walking differs among ambulatory patients with MS based on disability level and height, yielding different step-rate thresholds for physical activity intensity. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Minimizing the effects of multicollinearity in the polynomial regression of age relationships and sex differences in serum levels of pregnenolone sulfate in healthy subjects.

    PubMed

    Meloun, Milan; Hill, Martin; Vceláková-Havlíková, Helena

    2009-01-01

    Pregnenolone sulfate (PregS) is known as a steroid conjugate positively modulating N-methyl-D-aspartate receptors on neuronal membranes. These receptors are responsible for permeability of calcium channels and activation of neuronal function. The neuroactivating effect of PregS is also exerted via non-competitive negative modulation of GABA(A) receptors regulating the chloride influx. Recently, penetrability of the blood-brain barrier for PregS was found in the rat, but some experiments in agreement with this finding were reported even earlier. It is known that circulating levels of PregS in humans are relatively high, depending primarily on age and adrenal activity. Concerning the neuromodulating effect of PregS, we recently evaluated age relationships of PregS in both sexes using polynomial regression models, which are known to bring about the problems of multicollinearity, i.e., strong correlations among independent variables. Several criteria for the selection of suitable bias are demonstrated. Biased estimators based on the generalized principal component regression (GPCR) method, which avoids multicollinearity problems, are described. Significant differences were found between men and women in the course of the age dependence of PregS. In women, a significant maximum was found around the 30th year followed by a rapid decline, while the maximum in men was achieved almost 10 years earlier and changes were minor up to the 60th year. The investigation of gender differences and age dependencies in PregS could be of interest given its well-known neurostimulating effect, relatively high serum concentration, and the probable partial permeability of the blood-brain barrier for the steroid conjugate. GPCR in combination with the MEP (mean quadratic error of prediction) criterion is extremely useful and appealing for constructing biased models. It can also be used for achieving such estimates with regard to keeping the model course corresponding to the data trend, especially in polynomial-type regression models.
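
    As a rough illustration of the idea (not the authors' GPCR implementation), the sketch below regresses a response on the leading principal components of a standardized polynomial design matrix and maps the biased coefficients back to the polynomial terms; the data and the number of retained components are invented:

      import numpy as np

      rng = np.random.default_rng(1)
      age = rng.uniform(20, 70, 200)
      y = 5 + 0.4 * age - 0.005 * age**2 + rng.normal(0, 1.0, 200)  # synthetic age trend

      # Raw powers of age are strongly collinear; regress on principal components instead.
      deg = 4
      Z = np.column_stack([age**k for k in range(1, deg + 1)])
      Z = (Z - Z.mean(0)) / Z.std(0)        # standardize the polynomial terms
      U, s, Vt = np.linalg.svd(Z, full_matrices=False)
      k = 2                                 # retained components (a MEP-type criterion would pick this)
      T = U[:, :k] * s[:k]                  # principal-component scores
      gamma, *_ = np.linalg.lstsq(np.column_stack([np.ones(len(y)), T]), y, rcond=None)
      beta = Vt[:k].T @ gamma[1:]           # biased coefficients of the standardized terms
      print(np.round(beta, 3))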

  9. Multi‐criteria manufacturability indices for ranking high‐concentration monoclonal antibody formulations

    PubMed Central

    Velayudhan, Ajoy; Thornhill, Nina F.

    2017-01-01

    The need for high-concentration formulations for subcutaneous delivery of therapeutic monoclonal antibodies (mAbs) can present manufacturability challenges for the final ultrafiltration/diafiltration (UF/DF) step. Viscosity levels and the propensity to aggregate are key considerations for high-concentration formulations. This work presents novel frameworks for deriving a set of manufacturability indices related to viscosity and thermostability to rank high-concentration mAb formulation conditions in terms of their ease of manufacture. This is illustrated by analyzing published high-throughput biophysical screening data that explores the influence of different formulation conditions (pH, ions, and excipients) on the solution viscosity and product thermostability. A decision tree classification method, CART (Classification and Regression Tree), is used to identify the critical formulation conditions that influence the viscosity and thermostability. In this work, three different multi-criteria data analysis frameworks were investigated to derive manufacturability indices from analysis of the stress maps and the process conditions experienced in the final UF/DF step. Polynomial regression techniques were used to transform the experimental data into a set of stress maps that show viscosity and thermostability as functions of the formulation conditions. A mathematical filtrate flux model was used to capture the time profiles of protein concentration and flux decay behavior during UF/DF. Multi-criteria decision-making analysis was used to identify the optimal formulation conditions that minimize the potential for both viscosity and aggregation issues during UF/DF. Biotechnol. Bioeng. 2017;114: 2043–2056. © 2017 The Authors. Biotechnology and Bioengineering Published by Wiley Periodicals, Inc. PMID:28464235

  10. High quality adaptive optics zoom with adaptive lenses

    NASA Astrophysics Data System (ADS)

    Quintavalla, M.; Santiago, F.; Bonora, S.; Restaino, S.

    2018-02-01

    We present the combined use of a large-aperture adaptive lens with large optical power modulation and a multi-actuator adaptive lens. The Multi-actuator Adaptive Lens (M-AL) can correct up to the 4th radial order of Zernike polynomials, without any obstructions (electrodes and actuators) placed inside its clear aperture. We demonstrate that the use of both lenses together can lead to better image quality and to the correction of aberrations in adaptive optics systems.

  11. Quadratic Polynomial Regression using Serial Observation Processing:Implementation within DART

    NASA Astrophysics Data System (ADS)

    Hodyss, D.; Anderson, J. L.; Collins, N.; Campbell, W. F.; Reinecke, P. A.

    2017-12-01

    Many Ensemble-Based Kalman filtering (EBKF) algorithms process the observations serially. Serial observation processing views the data assimilation process as an iterative sequence of scalar update equations. What is useful about this data assimilation algorithm is that it has very low memory requirements and does not need complex methods to perform the typical high-dimensional inverse calculation of many other algorithms. Recently, the push has been towards the prediction, and therefore the assimilation of observations, for regions and phenomena for which high resolution is required and/or highly nonlinear physical processes are operating. For these situations, a basic hypothesis is that the use of the EBKF is sub-optimal and performance gains could be achieved by accounting for aspects of the non-Gaussianity. To this end, we develop here a new component of the Data Assimilation Research Testbed [DART] to allow a wide variety of users to test this hypothesis. This new version of DART allows one to run several variants of the EBKF as well as several variants of the quadratic polynomial filter using the same forecast model and observations. Differences between the results of the two systems will then highlight the degree of non-Gaussianity in the system being examined. We will illustrate in this work the differences between the performance of linear versus quadratic polynomial regression in a hierarchy of models from Lorenz-63 to a simple general circulation model.
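
    The serial, scalar update at the heart of such filters is easy to sketch. Below is a minimal stochastic-EnKF-style stand-in (DART's default is a deterministic adjustment filter, so treat this as an assumption): the observed variable's ensemble is updated with a scalar gain, and the increments are mapped onto each state variable by linear (order=1) or quadratic (order=2) polynomial regression:

      import numpy as np

      def serial_scalar_update(ens, hx, y_obs, r, order=1, rng=None):
          """ens: (n_members, n_state); hx: (n_members,) predicted scalar observation;
          r: observation-error variance."""
          rng = rng or np.random.default_rng(0)
          s2 = hx.var(ddof=1)
          gain = s2 / (s2 + r)
          y_pert = y_obs + rng.normal(0.0, np.sqrt(r), size=hx.shape)
          hx_new = hx + gain * (y_pert - hx)            # step 1: scalar observation-space update
          new = np.empty_like(ens)
          for j in range(ens.shape[1]):                 # step 2: regress increments onto the state
              p = np.polyfit(hx, ens[:, j], deg=order)
              new[:, j] = ens[:, j] + np.polyval(p, hx_new) - np.polyval(p, hx)
          return new

      ens = np.random.default_rng(1).normal(size=(40, 3))
      hx = ens[:, 0] + 0.1 * ens[:, 0] ** 2             # a mildly nonlinear observation operator
      print(serial_scalar_update(ens, hx, y_obs=0.5, r=0.2, order=2).mean(0))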

  12. Genetic modelling of test day records in dairy sheep using orthogonal Legendre polynomials.

    PubMed

    Kominakis, A; Volanis, M; Rogdakis, E

    2001-03-01

    Test day milk yields of three lactations in Sfakia sheep were analyzed by fitting a random regression (RR) model, regressing on orthogonal polynomials of the stage of the lactation period, i.e. days in milk. Univariate (UV) and multivariate (MV) analyses were also performed for four stages of the lactation period, represented by average days in milk, i.e. 15, 45, 70 and 105 days, to compare estimates obtained from RR models with estimates from UV and MV analyses. The total number of test day records was 790, 1314 and 1041, obtained from 214, 342 and 303 ewes in the first, second and third lactation, respectively. Error variances and covariances between regression coefficients were estimated by restricted maximum likelihood. Models were compared using likelihood ratio tests (LRTs). Log likelihoods were not significantly reduced when the rank of the orthogonal Legendre polynomials (LPs) of lactation stage was reduced from 4 to 2 and homogeneous variances for lactation stages within lactations were considered. Mean weighted heritability estimates with RR models were 0.19, 0.09 and 0.08 for the first, second and third lactation, respectively. The respective estimates obtained from UV analyses were 0.14, 0.12 and 0.08. Mean permanent environmental variance, as a proportion of the total, was high at all stages and lactations, ranging from 0.54 to 0.71. Within lactations, genetic and permanent environmental correlations between lactation stages were in the range from 0.36 to 0.99 and 0.76 to 0.99, respectively. Genetic parameters for additive genetic and permanent environmental effects obtained from RR models were different from those obtained from UV and MV analyses.
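
    For readers unfamiliar with the setup, the Legendre covariables used in such RR models are built by mapping days in milk onto [-1, 1] and evaluating the polynomials there; a short sketch using the four lactation stages above and the rank-2 fit found adequate here:

      import numpy as np

      dim = np.array([15.0, 45.0, 70.0, 105.0])                    # days in milk
      x = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0  # standardize to [-1, 1]
      Phi = np.polynomial.legendre.legvander(x, deg=2)             # columns P0, P1, P2
      print(np.round(Phi, 3))   # one design row per test-day record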

  13. Discrepancies Between Perceptions of the Parent-Adolescent Relationship and Early Adolescent Depressive Symptoms: An Illustration of Polynomial Regression Analysis.

    PubMed

    Nelemans, S A; Branje, S J T; Hale, W W; Goossens, L; Koot, H M; Oldehinkel, A J; Meeus, W H J

    2016-10-01

    Adolescence is a critical period for the development of depressive symptoms. Lower quality of the parent-adolescent relationship has been consistently associated with higher adolescent depressive symptoms, but discrepancies in perceptions of parents and adolescents regarding the quality of their relationship may be particularly important to consider. In the present study, we therefore examined how discrepancies in parents' and adolescents' perceptions of the parent-adolescent relationship were associated with early adolescent depressive symptoms, both concurrently and longitudinally over a 1-year period. Our sample consisted of 497 Dutch adolescents (57 % boys, M age = 13.03 years), residing in the western and central regions of the Netherlands, and their mothers and fathers, who all completed several questionnaires on two occasions with a 1-year interval. Adolescents reported on depressive symptoms and all informants reported on levels of negative interaction in the parent-adolescent relationship. Results from polynomial regression analyses including interaction terms between informants' perceptions, which have recently been proposed as more valid tests of hypotheses involving informant discrepancies than difference scores, suggested the highest adolescent depressive symptoms when both the mother and the adolescent reported high negative interaction, and when the adolescent reported high but the father reported low negative interaction. This pattern of findings underscores the need for a more sophisticated methodology such as polynomial regression analysis including tests of moderation, rather than the use of difference scores, which can adequately address both congruence and discrepancies in perceptions of adolescents and mothers/fathers of the parent-adolescent relationship in detail. Such an analysis can contribute to a more comprehensive understanding of risk factors for early adolescent depressive symptoms.
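
    The analysis described here has the familiar response-surface form, regressing the outcome on both informants' scores, their squares, and their product rather than on a difference score. A minimal sketch with synthetic standardized scores (not the study's data):

      import numpy as np

      rng = np.random.default_rng(2)
      n = 497
      mother = rng.normal(0, 1, n)      # mother-reported negative interaction (standardized)
      adol = rng.normal(0, 1, n)        # adolescent-reported negative interaction
      dep = 0.3 * mother + 0.4 * adol + 0.25 * mother * adol + rng.normal(0, 1, n)

      # dep ~ m + a + m^2 + m*a + a^2  (polynomial regression with an interaction term)
      X = np.column_stack([np.ones(n), mother, adol, mother**2, mother * adol, adol**2])
      b, *_ = np.linalg.lstsq(X, dep, rcond=None)
      print(dict(zip(["b0", "m", "a", "m2", "ma", "a2"], np.round(b, 3))))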

  14. Non-intrusive uncertainty quantification of computational fluid dynamics simulations: notes on the accuracy and efficiency

    NASA Astrophysics Data System (ADS)

    Zimoń, Małgorzata; Sawko, Robert; Emerson, David; Thompson, Christopher

    2017-11-01

    Uncertainty quantification (UQ) is increasingly becoming an indispensable tool for assessing the reliability of computational modelling. Efficient handling of stochastic inputs, such as boundary conditions, physical properties or geometry, increases the utility of model results significantly. We discuss the application of non-intrusive generalised polynomial chaos techniques in the context of fluid engineering simulations. Deterministic and Monte Carlo integration rules are applied to a set of problems, including ordinary differential equations and the computation of aerodynamic parameters subject to random perturbations. In particular, we analyse acoustic wave propagation in a heterogeneous medium to study the effects of mesh resolution, transients, number and variability of stochastic inputs. We consider variants of multi-level Monte Carlo and perform a novel comparison of the methods with respect to numerical and parametric errors, as well as computational cost. The results provide a comprehensive view of the necessary steps in UQ analysis and demonstrate some key features of stochastic fluid flow systems.
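
    A minimal example of the non-intrusive projection step, assuming a single standard-normal input and probabilists' Hermite polynomials; the one-line "model" stands in for an expensive CFD evaluation:

      import math
      import numpy as np
      from numpy.polynomial import hermite_e as He

      def model(xi):
          return np.exp(0.3 * xi)            # placeholder for a CFD quantity of interest

      nodes, weights = He.hermegauss(12)     # Gauss quadrature for the weight exp(-x^2/2)
      coeffs = []
      for n in range(5):
          e_n = np.zeros(n + 1); e_n[-1] = 1.0
          Hn = He.hermeval(nodes, e_n)       # He_n evaluated at the quadrature nodes
          coeffs.append(np.sum(weights * model(nodes) * Hn)
                        / (math.sqrt(2 * math.pi) * math.factorial(n)))
      print(np.round(coeffs, 4))             # mean = coeffs[0]; variance = sum(n! * c_n^2, n >= 1)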

  15. Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference.

    PubMed

    Park, Hyoung-Jun; Song, Minho

    2008-10-29

    The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method.
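
    The linearization step amounts to an ordinary polynomial fit of peak arrival times against the known ITU grid wavelengths; the sketch below uses invented peak times and assumes a 0.8-nm (100 GHz) grid spacing:

      import numpy as np

      peak_times = np.array([0.10, 0.21, 0.33, 0.46, 0.60, 0.75])   # ms, hypothetical scan
      itu_wl = 1550.0 + 0.8 * np.arange(6)                          # nm, equidistant ITU grid
      cal = np.polyfit(peak_times, itu_wl, deg=3)                   # corrects nonlinear tuning
      fbg_peak_time = 0.40                                          # arrival time of an FBG reflection
      print(f"FBG wavelength: {np.polyval(cal, fbg_peak_time):.3f} nm")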

  16. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
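
    A minimal sketch of the MRR idea, assuming a fixed mixing fraction lam and a Gaussian-kernel smooth of the parametric residuals (in the method of Mays, Birch, and Starnes these quantities are chosen data-driven):

      import numpy as np

      def mrr_fit(x, y, deg=2, bandwidth=0.5, lam=0.5):
          p = np.polyfit(x, y, deg)                  # parametric (polynomial) fit
          resid = y - np.polyval(p, x)
          def predict(x0):
              # Nadaraya-Watson smooth of the residuals, added in proportion lam
              w = np.exp(-0.5 * ((x0[:, None] - x[None, :]) / bandwidth) ** 2)
              return np.polyval(p, x0) + lam * (w * resid).sum(1) / w.sum(1)
          return predict

      rng = np.random.default_rng(3)
      x = np.linspace(0, 5, 80)
      y = 1 + 2 * x + 0.3 * np.sin(3 * x) + rng.normal(0, 0.1, 80)  # mildly misspecified line
      f = mrr_fit(x, y, deg=1)
      print(np.round(f(np.array([1.0, 2.5, 4.0])), 3))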

  17. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.

  18. Polynomial-Time Algorithms for Building a Consensus MUL-Tree

    PubMed Central

    Cui, Yun; Jansson, Jesper

    2012-01-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host–parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists. PMID:22963134

  19. Polynomial-time algorithms for building a consensus MUL-tree.

    PubMed

    Cui, Yun; Jansson, Jesper; Sung, Wing-Kin

    2012-09-01

    A multi-labeled phylogenetic tree, or MUL-tree, is a generalization of a phylogenetic tree that allows each leaf label to be used many times. MUL-trees have applications in biogeography, the study of host-parasite cospeciation, gene evolution studies, and computer science. Here, we consider the problem of inferring a consensus MUL-tree that summarizes a given set of conflicting MUL-trees, and present the first polynomial-time algorithms for solving it. In particular, we give a straightforward, fast algorithm for building a strict consensus MUL-tree for any input set of MUL-trees with identical leaf label multisets, as well as a polynomial-time algorithm for building a majority rule consensus MUL-tree for the special case where every leaf label occurs at most twice. We also show that, although it is NP-hard to find a majority rule consensus MUL-tree in general, the variant that we call the singular majority rule consensus MUL-tree can be constructed efficiently whenever it exists.

  20. A contracting-interval program for the Danilewski method. Ph.D. Thesis - Va. Univ.

    NASA Technical Reports Server (NTRS)

    Harris, J. D.

    1971-01-01

    The concept of contracting-interval programs is applied to finding the eigenvalues of a matrix. The development is a three-step process in which (1) a program is developed for the reduction of a matrix to Hessenberg form, (2) a program is developed for the reduction of a Hessenberg matrix to colleague form, and (3) the characteristic polynomial with interval coefficients is readily obtained from the interval of colleague matrices. This interval polynomial is then factored into quadratic factors so that the eigenvalues may be obtained. To develop a contracting-interval program for factoring this polynomial with interval coefficients it is necessary to have an iteration method which converges even in the presence of controlled rounding errors. A theorem is stated giving sufficient conditions for the convergence of Newton's method when both the function and its Jacobian cannot be evaluated exactly but errors can be made proportional to the square of the norm of the difference between the previous two iterates. This theorem is applied to prove the convergence of the generalization of the Newton-Bairstow method that is used to obtain quadratic factors of the characteristic polynomial.
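
    The quadratic-factor extraction referred to is the (Newton-)Bairstow iteration. A plain floating-point sketch for polynomials of degree at least three (the thesis's contracting-interval arithmetic and error control are not reproduced here):

      import numpy as np

      def bairstow_factor(a, r=0.0, s=0.0, tol=1e-12, itmax=200):
          """Find a quadratic factor x^2 - r*x - s of a0 + a1*x + ... + an*x^n (n >= 3)."""
          a = np.asarray(a, float)
          n = len(a) - 1
          for _ in range(itmax):
              b = np.zeros(n + 1); c = np.zeros(n + 1)
              b[n] = a[n]; b[n - 1] = a[n - 1] + r * b[n]
              for i in range(n - 2, -1, -1):            # synthetic division by x^2 - r*x - s
                  b[i] = a[i] + r * b[i + 1] + s * b[i + 2]
              c[n] = b[n]; c[n - 1] = b[n - 1] + r * c[n]
              for i in range(n - 2, 0, -1):             # second division gives the Jacobian
                  c[i] = b[i] + r * c[i + 1] + s * c[i + 2]
              dr, ds = np.linalg.solve([[c[2], c[3]], [c[1], c[2]]], [-b[1], -b[0]])
              r += dr; s += ds
              if abs(dr) + abs(ds) < tol:
                  break
          return r, s, b[2:]                            # factor parameters, deflated coefficients

      # (x^2 - 3x + 2)(x^2 + x + 1) = 2 - x + 0x^2 - 2x^3 + x^4; start near a factor.
      r, s, q = bairstow_factor([2.0, -1.0, 0.0, -2.0, 1.0], r=2.5, s=-1.5)
      print(round(r, 6), round(s, 6))                   # expect r=3, s=-2, i.e. x^2 - 3x + 2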

  1. Sum-of-squares-based fuzzy controller design using quantum-inspired evolutionary algorithm

    NASA Astrophysics Data System (ADS)

    Yu, Gwo-Ruey; Huang, Yu-Chia; Cheng, Chih-Yung

    2016-07-01

    In the field of fuzzy control, control gains are obtained by solving stabilisation conditions in linear-matrix-inequality-based Takagi-Sugeno fuzzy control method and sum-of-squares-based polynomial fuzzy control method. However, the optimal performance requirements are not considered under those stabilisation conditions. In order to handle specific performance problems, this paper proposes a novel design procedure with regard to polynomial fuzzy controllers using quantum-inspired evolutionary algorithms. The first contribution of this paper is a combination of polynomial fuzzy control and quantum-inspired evolutionary algorithms to undertake an optimal performance controller design. The second contribution is the proposed stability condition derived from the polynomial Lyapunov function. The proposed design approach is dissimilar to the traditional approach, in which control gains are obtained by solving the stabilisation conditions. The first step of the controller design uses the quantum-inspired evolutionary algorithms to determine the control gains with the best performance. Then, the stability of the closed-loop system is analysed under the proposed stability conditions. To illustrate effectiveness and validity, the problem of balancing and the up-swing of an inverted pendulum on a cart is used.

  2. Securing Color Fidelity in 3D Architectural Heritage Scenarios.

    PubMed

    Gaiani, Marco; Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio

    2017-10-25

    Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy ('color characterization').

  3. Securing Color Fidelity in 3D Architectural Heritage Scenarios

    PubMed Central

    Apollonio, Fabrizio Ivan; Ballabeni, Andrea; Remondino, Fabio

    2017-01-01

    Ensuring color fidelity in image-based 3D modeling of heritage scenarios is still an open research matter. Image colors are important during data processing as they affect algorithm outcomes; therefore their correct treatment, reduction and enhancement is fundamental. In this contribution, we present an automated solution developed to improve the radiometric quality of image datasets and the performance of two main steps of the photogrammetric pipeline (camera orientation and dense image matching). The suggested solution aims to achieve a robust automatic color balance and exposure equalization, stability of the RGB-to-gray image conversion and faithful color appearance of a digitized artifact. The innovative aspects of the article are: complete automation, better color target detection, a MATLAB implementation of the ACR scripts created by Fraser and the use of a specific weighted polynomial regression. A series of tests are presented to demonstrate the efficiency of the developed methodology and to evaluate color accuracy (‘color characterization’). PMID:29068359

  4. Hybrid Solution of Stochastic Optimal Control Problems Using Gauss Pseudospectral Method and Generalized Polynomial Chaos Algorithms

    DTIC Science & Technology

    2012-03-01

    0-486-41183-4. 15. Brown, Robert G. and Patrick Y. C. Hwang. Introduction to Random Signals and Applied Kalman Filtering. Wiley, New York, 1996. ISBN... stability and performance criteria. In the 1960's, Kalman introduced the Linear Quadratic Regulator (LQR) method using an integral performance index... feedback of the state variables and was able to apply this method to time-varying and Multi-Input Multi-Output (MIMO) systems. Kalman further showed

  5. Adaptive-Mesh-Refinement for hyperbolic systems of conservation laws based on a posteriori stabilized high order polynomial reconstructions

    NASA Astrophysics Data System (ADS)

    Semplice, Matteo; Loubère, Raphaël

    2018-02-01

    In this paper we propose a third order accurate finite volume scheme based on a posteriori limiting of polynomial reconstructions within an Adaptive-Mesh-Refinement (AMR) simulation code for hydrodynamics equations in 2D. The a posteriori limiting is based on the detection of problematic cells on a so-called candidate solution computed at each stage of a third order Runge-Kutta scheme. Such detection may include different properties, derived from physics, such as positivity, from numerics, such as a non-oscillatory behavior, or from computer requirements such as the absence of NaNs. Troubled cell values are discarded and re-computed starting again from the previous time-step using a more dissipative scheme but only locally, close to these cells. By locally decrementing the degree of the polynomial reconstructions from 2 to 0 we switch from a third-order to a first-order accurate but more stable scheme. The entropy indicator sensor is used to refine/coarsen the mesh. This sensor is also employed in an a posteriori manner because if some refinement is needed at the end of a time step, then the current time-step is recomputed with the refined mesh, but only locally, close to the new cells. We show on a large set of numerical tests that this a posteriori limiting procedure coupled with the entropy-based AMR technology can maintain not only optimal accuracy on smooth flows but also stability on discontinuous profiles such as shock waves, contacts, interfaces, etc. Moreover, numerical evidence shows that this approach is at least comparable in terms of accuracy and cost to a more classical CWENO approach within the same AMR context.

  6. Robust learning for optimal treatment decision with NP-dimensionality

    PubMed Central

    Shi, Chengchun; Song, Rui; Lu, Wenbin

    2016-01-01

    In order to identify important variables that are involved in making optimal treatment decisions, Lu, Zhang and Zeng (2013) proposed a penalized least squares regression framework for a fixed number of predictors, which is robust against misspecification of the conditional mean model. Two problems arise: (i) in a world of explosively big data, effective methods are needed to handle ultra-high dimensional data sets, for example, where the dimension of the predictors is of non-polynomial (NP) order in the sample size; (ii) both the propensity score and conditional mean models need to be estimated from data under NP dimensionality. In this paper, we propose a robust procedure for estimating the optimal treatment regime under NP dimensionality. In both steps, penalized regressions are employed with a non-concave penalty function, where the conditional mean model of the response given the predictors may be misspecified. The asymptotic properties, such as weak oracle properties, selection consistency and oracle distributions, of the proposed estimators are investigated. In addition, we study the limiting distribution of the estimated value function for the obtained optimal treatment regime. The empirical performance of the proposed estimation method is evaluated by simulations and an application to a depression dataset from the STAR*D study. PMID:28781717

  7. A comparison of Redlich-Kister polynomial and cubic spline representations of the chemical potential in phase field computations

    DOE PAGES

    Teichert, Gregory H.; Gunda, N. S. Harsha; Rudraraju, Shiva; ...

    2016-12-18

    Free energies play a central role in many descriptions of equilibrium and non-equilibrium properties of solids. Continuum partial differential equations (PDEs) of atomic transport, phase transformations and mechanics often rely on first and second derivatives of a free energy function. The stability, accuracy and robustness of numerical methods to solve these PDEs are sensitive to the particular functional representations of the free energy. In this communication we investigate the influence of different representations of thermodynamic data on phase field computations of diffusion and two-phase reactions in the solid state. First-principles statistical mechanics methods were used to generate realistic free energy data for HCP titanium with interstitially dissolved oxygen. While Redlich-Kister polynomials have formed the mainstay of thermodynamic descriptions of multi-component solids, they require high order terms to fit oscillations in chemical potentials around phase transitions. Here, we demonstrate that high fidelity fits to rapidly fluctuating free energy functions are obtained with spline functions. As a result, spline functions that are many degrees lower than Redlich-Kister polynomials provide equal or superior fits to chemical potential data and, when used in phase field computations, result in solution times approaching an order of magnitude speed up relative to the use of Redlich-Kister polynomials.
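
    A toy comparison of the two representations, with an invented chemical-potential curve standing in for the first-principles data; the Redlich-Kister excess term takes its standard form x(1-x)·sum_k L_k(1-2x)^k:

      import numpy as np
      from scipy.interpolate import CubicSpline

      x = np.linspace(0.02, 0.5, 60)                    # composition, synthetic
      mu = np.log(x / (1 - x)) + 4 * np.sin(8 * x)      # rapidly varying chemical potential

      # Redlich-Kister fit of the deviation from the ideal-solution term.
      K = 8
      A = (x * (1 - x))[:, None] * (1 - 2 * x)[:, None] ** np.arange(K + 1)
      L, *_ = np.linalg.lstsq(A, mu - np.log(x / (1 - x)), rcond=None)
      rk = np.log(x / (1 - x)) + A @ L

      spline = CubicSpline(x[::4], mu[::4])             # spline through a sparse subsample
      print(f"RK(K={K}) max error: {np.abs(rk - mu).max():.3g}")
      print(f"spline max error:    {np.abs(spline(x) - mu).max():.3g}")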

  8. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.

  9. On the stability of projection methods for the incompressible Navier-Stokes equations based on high-order discontinuous Galerkin discretizations

    NASA Astrophysics Data System (ADS)

    Fehn, Niklas; Wall, Wolfgang A.; Kronbichler, Martin

    2017-12-01

    The present paper deals with the numerical solution of the incompressible Navier-Stokes equations using high-order discontinuous Galerkin (DG) methods for discretization in space. For DG methods applied to the dual splitting projection method, instabilities have recently been reported that occur for small time step sizes. Since the critical time step size depends on the viscosity and the spatial resolution, these instabilities limit the robustness of the Navier-Stokes solver in case of complex engineering applications characterized by coarse spatial resolutions and small viscosities. By means of numerical investigation we give evidence that these instabilities are related to the discontinuous Galerkin formulation of the velocity divergence term and the pressure gradient term that couple velocity and pressure. Integration by parts of these terms with a suitable definition of boundary conditions is required in order to obtain a stable and robust method. Since the intermediate velocity field does not fulfill the boundary conditions prescribed for the velocity, a consistent boundary condition is derived from the convective step of the dual splitting scheme to ensure high-order accuracy with respect to the temporal discretization. This new formulation is stable in the limit of small time steps for both equal-order and mixed-order polynomial approximations. Although the dual splitting scheme itself includes inf-sup stabilizing contributions, we demonstrate that spurious pressure oscillations appear for equal-order polynomials and small time steps highlighting the necessity to consider inf-sup stability explicitly.

  10. Polynomial Chaos Based Acoustic Uncertainty Predictions from Ocean Forecast Ensembles

    NASA Astrophysics Data System (ADS)

    Dennis, S.

    2016-02-01

    Most significant ocean acoustic propagation occurs over tens of kilometers, at scales small compared to ocean basins and to most fine-scale ocean modeling. To address the increased emphasis on uncertainty quantification, for example transmission loss (TL) probability density functions (PDF) within some radius, a polynomial chaos (PC) based method is utilized. In order to capture uncertainty in ocean modeling, the Navy Coastal Ocean Model (NCOM) now includes ensembles distributed to reflect the ocean analysis statistics. Since the ensembles are included in the data assimilation for the new forecast ensembles, the acoustic modeling uses the ensemble predictions in a similar fashion for creating sound speed distributions over an acoustically relevant domain. Within an acoustic domain, singular value decomposition over the combined time-space structure of the sound speeds can be used to create Karhunen-Loève expansions of sound speed, subject to multivariate normality testing. These sound speed expansions serve as a basis for Hermite polynomial chaos expansions of derived quantities, in particular TL. The PC expansion coefficients result from so-called non-intrusive methods, involving evaluation of TL at multi-dimensional Gauss-Hermite quadrature collocation points. Traditional TL calculation from standard acoustic propagation modeling could be prohibitively time consuming at all multi-dimensional collocation points. This method employs Smolyak order and gridding methods to allow adaptive sub-sampling of the collocation points to determine only the most significant PC expansion coefficients to within a preset tolerance. Practically, the Smolyak order and grid sizes grow only polynomially in the number of Karhunen-Loève terms, alleviating the curse of dimensionality. The resulting TL PC coefficients allow the determination of TL PDF normality and its mean and standard deviation. In the non-normal case, PC Monte Carlo methods are used to rapidly establish the PDF. This work was sponsored by the Office of Naval Research.
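
    The Karhunen-Loève step can be sketched with a plain SVD of the centered ensemble matrix; the ensemble below is synthetic and low-rank by construction:

      import numpy as np

      rng = np.random.default_rng(5)
      members, npts = 32, 400                               # ensemble size, space-time points
      sound_speed = rng.standard_normal((members, 8)) @ rng.standard_normal((8, npts))

      anom = sound_speed - sound_speed.mean(0)              # center the ensemble
      U, s, Vt = np.linalg.svd(anom, full_matrices=False)
      energy = np.cumsum(s**2) / np.sum(s**2)
      k = int(np.searchsorted(energy, 0.99)) + 1            # modes holding 99% of the variance
      xi = U[:, :k] * np.sqrt(members - 1)                  # approx. N(0,1) KL coordinates
      modes = (s[:k, None] * Vt[:k]) / np.sqrt(members - 1)
      err = np.linalg.norm(anom - xi @ modes) / np.linalg.norm(anom)
      print(k, f"relative truncation error: {err:.2e}")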

  11. Rock information of the Moon revealed by multi-channel microwave radiometer data

    NASA Astrophysics Data System (ADS)

    Hu, Guo-Ping; Zheng, Yong-Chun; Chan, Kwing Lam; Xu, Ao-Ao

    2016-10-01

    Rock abundance on the lunar surface is an important consideration for understanding the physical properties of the Moon. With the deeper penetration power of microwaves, data from the Chang'E (CE) multi-channel (3.0-, 7.8-, 19.35-, and 37-GHz) microwave radiometer (MRM) are used to constrain the rock distribution on the Moon. The contrasting thermo-physical properties of rocks and regolith fines cause multiple brightness temperatures (TB) to be present within the field of view of the CE microwave data. But these variations can easily be masked by the more significant effect of ilmenite on TB, especially in the mare regions, which are rich in ilmenite. To highlight the rock effect in TB, the diurnal TB difference, which has the effect of enlarging the TB difference caused by rock abundance and reducing the absolute error of the CE microwave data, is considered here. The rock information in the TB data is distinguished from the ilmenite effect by comparing the diurnal TB difference with a statistical TB model of the mare regions, which are relatively low in rock abundance. The employed statistical TB model is a polynomial fitting formula between selected CE TB data from mare regions and the corresponding TiO2 content data from Clementine UVVIS data. The correlation coefficients of the polynomial fit between TB and TiO2 content are 0.94 at lunar daytime and 0.84 at lunar nighttime, respectively. This polynomial fit forms an approximate relationship between the TiO2 content and TB when rock abundance is zero, with a standard error determined from the regression procedure. Based on the TiO2 map retrieved from Clementine UVVIS data, the TB map deflated to a lower TiO2 content shows a distribution trend similar to the rock abundance map retrieved by LRO data, except for the mare regions on the nearside of the Moon. The larger diurnal TB difference in the mare regions could be caused either by ilmenite-rich rocks or by smaller rocks that cannot be recognized by the LRO data.
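
    The deflation logic is a polynomial regression followed by a residual test; a schematic version with invented numbers (the paper fits CE TB data against Clementine TiO2 maps):

      import numpy as np

      rng = np.random.default_rng(6)
      tio2 = rng.uniform(0, 10, 500)                    # wt%, synthetic low-rock mare sample
      tb = 250 - 3.0 * tio2 + 0.05 * tio2**2 + rng.normal(0, 0.5, 500)

      fit = np.polyfit(tio2, tb, deg=2)                 # statistical, rock-free TB model
      resid_sd = np.std(tb - np.polyval(fit, tio2))     # standard error of the regression

      tb_obs, tio2_obs = 243.0, 4.0                     # one pixel to be tested
      excess = tb_obs - np.polyval(fit, tio2_obs)       # TB not explained by ilmenite
      print(f"TB excess: {excess:.2f} K ({excess / resid_sd:.1f} sigma)")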

  12. Spillover in the Academy: Marriage Stability and Faculty Evaluations.

    ERIC Educational Resources Information Center

    Ludlow, Larry H.; Alvarez-Salvat, Rose M.

    2001-01-01

    Studied the spillover between family and work by examining the link between marital status and work performance across marriage, divorce, and remarriage. A polynomial regression model was fit to the data from 78 evaluations of an individual professor, and a cubic curve through the 3 periods was statistically significant. (SLD)

  13. Thermal degradation of polybrominated diphenyl ethers over as-prepared Fe3O4 micro/nano-material and hypothesized mechanism.

    PubMed

    Li, Qianqian; Yang, Fan; Su, Guijin; Huang, Linyan; Lu, Huijie; Zhao, Yuyang; Zheng, Minghui

    2016-01-01

    The thermal degradation of decabromodiphenyl ether (BDE-209), featuring fully substituted bromines, was investigated over an as-prepared Fe3O4 micro/nano-material at 300 °C. Degradation followed pseudo-first-order kinetics with k_obs = 0.15 min⁻¹, higher than that for decachlorobiphenyl (CB-209). Twenty-six newly produced polybrominated diphenyl ether (PBDE) congeners were identified using the available PBDE standards, while four PBDE congener products were predicted using a third-order polynomial regression equation. Analysis of the products indicated that BDE-209 underwent stepwise hydrodebromination over the as-prepared Fe3O4. Similar to the case for CB-209, the two initial hydrodebromination steps are favored at the BDE-209 meta-positions, giving the major products BDE-207 and BDE-197. However, variance in the preferred products began to emerge from the heptabromodiphenyl ethers (hepta-BDEs) onwards. The major hepta-BDE product, BDE-183, is unbrominated at one ortho-position. This differs from the reported degradation of CB-209, which always produced products chlorinated at all four ortho-positions until an ortho-chlorine had to be removed to form trichlorobiphenyls, with the dichlorobiphenyls still mainly chlorinated at three or two ortho-positions. The early BDE-209 hydrodebromination steps appear to be strongly influenced by steric effects, whereas subsequent hydrodebromination steps, as more bromine atoms are removed, are gradually governed more by thermodynamics.
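
    Pseudo-first-order kinetics make k_obs the negative slope of ln(C/C0) versus time; a quick synthetic check against the reported value of 0.15 min⁻¹:

      import numpy as np

      t = np.arange(0, 21, 2.0)                               # minutes
      rng = np.random.default_rng(7)
      conc = np.exp(-0.15 * t) * np.exp(rng.normal(0, 0.02, t.size))  # relative concentration
      slope, intercept = np.polyfit(t, np.log(conc), 1)
      print(f"k_obs ~ {-slope:.3f} min^-1")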

  14. Genetic parameters for test-day yield of milk, fat and protein in buffaloes estimated by random regression models.

    PubMed

    Aspilcueta-Borquis, Rúsbel R; Araujo Neto, Francisco R; Baldi, Fernando; Santos, Daniel J A; Albuquerque, Lucia G; Tonhati, Humberto

    2012-08-01

    The test-day yields of milk, fat and protein were analysed from 1433 first lactations of buffaloes of the Murrah breed, daughters of 113 sires from 12 herds in the state of São Paulo, Brazil, born between 1985 and 2007. For the test-day yields, 10 monthly classes of lactation days were considered. The contemporary groups were defined as the herd-year-month of the test day. Random additive genetic, permanent environmental and residual effects were included in the model. The fixed effects considered were the contemporary group, number of milkings (1 or 2 milkings), linear and quadratic effects of the covariable cow age at calving and the mean lactation curve of the population (modelled by third-order Legendre orthogonal polynomials). The random additive genetic and permanent environmental effects were estimated by means of regression on third- to sixth-order Legendre orthogonal polynomials. The residual variances were modelled with a homogeneous structure and various heterogeneous classes. According to the likelihood-ratio test, the best model for milk and fat production was that with four residual variance classes, while a third-order Legendre polynomial was best for the additive genetic effect for milk and fat yield, a fourth-order polynomial was best for the permanent environmental effect for milk production and a fifth-order polynomial was best for fat production. For protein yield, the best model was that with three residual variance classes and third- and fourth-order Legendre polynomials were best for the additive genetic and permanent environmental effects, respectively. The heritability estimates for the characteristics analysed were moderate, varying from 0.16±0.05 to 0.29±0.05 for milk yield, 0.20±0.05 to 0.30±0.08 for fat yield and 0.18±0.06 to 0.27±0.08 for protein yield. The estimates of the genetic correlations between the tests varied from 0.18±0.120 to 0.99±0.002; from 0.44±0.080 to 0.99±0.004; and from 0.41±0.080 to 0.99±0.004, for milk, fat and protein production, respectively, indicating that whatever the selection criterion used, indirect genetic gains can be expected throughout the lactation curve.

  15. Developing the Polynomial Expressions for Fields in the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Sharma, Stephen

    2017-10-01

    The two most important problems to be solved in the development of working nuclear fusion power plants are sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion, in addition to geodesic formulations, generate the particle model, which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellmann formalism and Heaviside step functional forms are introduced into the fusion equations to produce simple expressions for the kinetic energy and loop currents. In conclusion, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.

  16. Bell-polynomial approach and Wronskian determinant solutions for three sets of differential-difference nonlinear evolution equations with symbolic computation

    NASA Astrophysics Data System (ADS)

    Qin, Bo; Tian, Bo; Wang, Yu-Feng; Shen, Yu-Jia; Wang, Ming

    2017-10-01

    Under investigation in this paper are the Belov-Chaltikian (BC), Leznov and Blaszak-Marciniak (BM) lattice equations, which are associated with the conformal field theory, UToda(m_1,m_2) system and r-matrix, respectively. With symbolic computation, the Bell-polynomial approach is developed to directly bilinearize those three sets of differential-difference nonlinear evolution equations (NLEEs). This Bell-polynomial approach does not rely on any dependent variable transformation, which constitutes the key step and main difficulty of the Hirota bilinear method, and thus has the advantage in the bilinearization of the differential-difference NLEEs. Based on the bilinear forms obtained, the N-soliton solutions are constructed in terms of the N × N Wronskian determinant. Graphic illustrations demonstrate that those solutions, more general than the existing results, permit some new properties, such as the solitonic propagation and interactions for the BC lattice equations, and the nonnegative dark solitons for the BM lattice equations.

  17. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch

    PubMed Central

    Karthikeyan, M.; Sree Ranga Raja, T.

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, such as the harmony memory considering rate (HMCR) and pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods. PMID:26491710
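
    The polynomial mutation used in the update step is, in its common Deb-style form (the paper's exact variant may differ), a bounded perturbation whose spread is governed by a distribution index eta:

      import numpy as np

      def polynomial_mutation(x, lo, hi, eta=20.0, rng=None):
          """Basic (non-boundary-aware) polynomial mutation of a scalar in [lo, hi]."""
          rng = rng or np.random.default_rng()
          u = rng.random()
          if u < 0.5:
              delta = (2.0 * u) ** (1.0 / (eta + 1.0)) - 1.0
          else:
              delta = 1.0 - (2.0 * (1.0 - u)) ** (1.0 / (eta + 1.0))
          return float(np.clip(x + delta * (hi - lo), lo, hi))

      # e.g., perturbing one unit's generation level (MW) in a candidate dispatch:
      print(polynomial_mutation(310.0, lo=100.0, hi=500.0, rng=np.random.default_rng(8)))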

  18. Dynamic Harmony Search with Polynomial Mutation Algorithm for Valve-Point Economic Load Dispatch.

    PubMed

    Karthikeyan, M; Raja, T Sree Ranga

    2015-01-01

    The economic load dispatch (ELD) problem is an important issue in the operation and control of modern power systems. The ELD problem is complex and nonlinear, with equality and inequality constraints, which makes it hard to solve efficiently. This paper presents a new modification of the harmony search (HS) algorithm, named the dynamic harmony search with polynomial mutation (DHSPM) algorithm, to solve the ELD problem. In the DHSPM algorithm the key parameters of the HS algorithm, such as the harmony memory considering rate (HMCR) and pitch adjusting rate (PAR), are changed dynamically, so there is no need to predefine these parameters. Additionally, polynomial mutation is inserted into the updating step of the HS algorithm to favor exploration and exploitation of the search space. The DHSPM algorithm is tested with three power system cases consisting of 3, 13, and 40 thermal units. The computational results show that the DHSPM algorithm is more effective in finding better solutions than other computational intelligence based methods.

  19. Linear FBG Temperature Sensor Interrogation with Fabry-Perot ITU Multi-wavelength Reference

    PubMed Central

    Park, Hyoung-Jun; Song, Minho

    2008-01-01

    The equidistantly spaced multi-passbands of a Fabry-Perot ITU filter are used as an efficient multi-wavelength reference for fiber Bragg grating sensor demodulation. To compensate for the nonlinear wavelength tuning effect in the FBG sensor demodulator, a polynomial fitting algorithm was applied to the temporal peaks of the wavelength-scanned ITU filter. The fitted wavelength values are assigned to the peak locations of the FBG sensor reflections, obtaining constant accuracy, regardless of the wavelength scan range and frequency. A linearity error of about 0.18% against a reference thermocouple thermometer was obtained with the suggested method. PMID:27873898

  20. Pragmatic estimation of a spatio-temporal air quality model with irregular monitoring data

    NASA Astrophysics Data System (ADS)

    Sampson, Paul D.; Szpiro, Adam A.; Sheppard, Lianne; Lindström, Johan; Kaufman, Joel D.

    2011-11-01

    Statistical analyses of health effects of air pollution have increasingly used GIS-based covariates for prediction of ambient air quality in "land use" regression models. More recently these spatial regression models have accounted for spatial correlation structure in combining monitoring data with land use covariates. We present a flexible spatio-temporal modeling framework and pragmatic, multi-step estimation procedure that accommodates essentially arbitrary patterns of missing data with respect to an ideally complete space by time matrix of observations on a network of monitoring sites. The methodology incorporates a model for smooth temporal trends with coefficients varying in space according to Partial Least Squares regressions on a large set of geographic covariates and nonstationary modeling of spatio-temporal residuals from these regressions. This work was developed to provide spatial point predictions of PM2.5 concentrations for the Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) using irregular monitoring data derived from the AQS regulatory monitoring network and supplemental short-time scale monitoring campaigns conducted to better predict intra-urban variation in air quality. We demonstrate the interpretation and accuracy of this methodology in modeling data from 2000 through 2006 in six U.S. metropolitan areas and establish a basis for likelihood-based estimation.

  1. Short communication: Genetic variation of saturated fatty acids in Holsteins in the Walloon region of Belgium.

    PubMed

    Arnould, V M-R; Hammami, H; Soyeurt, H; Gengler, N

    2010-09-01

    Random regression test-day models using Legendre polynomials are commonly used for the estimation of genetic parameters and genetic evaluation for test-day milk production traits. However, some researchers have reported that these models present some undesirable properties, such as the overestimation of variances at the edges of lactation. Describing genetic variation of saturated fatty acids expressed in milk fat might require the testing of different models. Therefore, 3 different functions were used and compared to take into account the lactation curve: (1) Legendre polynomials with the same order as currently applied in the genetic model for production traits; (2) linear splines with 10 knots; and (3) linear splines with the same 10 knots reduced to 3 parameters. The criteria used were Akaike's information and Bayesian information criteria, percentage square biases, and the log-likelihood function. These criteria identified the Legendre polynomial model and the linear spline model with 10 knots reduced to 3 parameters as the most useful. Reducing more complex models using eigenvalues seemed appealing because the resulting models are less time demanding and can reduce convergence difficulties, as convergence properties also seemed to be improved. Finally, the results showed that the reduced spline model was very similar to the Legendre polynomial model. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  2. Delineating chalk sand distribution of Ekofisk formation using probabilistic neural network (PNN) and stepwise regression (SWR): Case study Danish North Sea field

    NASA Astrophysics Data System (ADS)

    Haris, A.; Nafian, M.; Riyanto, A.

    2017-07-01

    The Danish North Sea fields consist of several formations (Ekofisk, Tor, and Cromer Knoll) spanning ages from the Paleocene to the Miocene. In this study, the integration of seismic and well log data sets is carried out to determine the chalk sand distribution in the Danish North Sea field. The integration of seismic and well log data is performed by using seismic inversion analysis and seismic multi-attribute analysis. The seismic inversion algorithm, which is used to derive acoustic impedance (AI), is a model-based technique. The derived AI is then used as an external attribute for the input of the multi-attribute analysis. Moreover, the multi-attribute analysis is used to generate linear and non-linear transformations among well log properties. In the linear case, the selected transformation is obtained by weighted step-wise linear regression (SWR), while the non-linear model is built using probabilistic neural networks (PNN). The porosity estimated by the PNN is better suited to the well log data than the SWR results. This result can be understood since the PNN performs non-linear regression, so that the relationship between the attribute data and the predicted log data can be optimized. The distribution of chalk sand has been successfully identified and characterized by porosity values ranging from 23% up to 30%.

  3. Polynomials for crystal frameworks and the rigid unit mode spectrum

    PubMed Central

    Power, S. C.

    2014-01-01

    To each discrete translationally periodic bar-joint framework in d-dimensional Euclidean space, we associate a matrix-valued function defined on the d-torus. The rigid unit mode (RUM) spectrum of the framework is defined in terms of the multi-phases of phase-periodic infinitesimal flexes and is shown to correspond to the singular points of this matrix function, and also to the set of wavevectors of harmonic excitations which have vanishing energy in the long wavelength limit. For a crystal framework in Maxwell counting equilibrium, which corresponds to the matrix function being square, the determinant gives rise to a unique multi-variable polynomial. For ideal zeolites, the algebraic variety of zeros of this polynomial on the d-torus coincides with the RUM spectrum. The matrix function is related to other aspects of idealized framework rigidity and flexibility, and in particular leads to an explicit formula for the number of supercell-periodic floppy modes. In the case of certain zeolite frameworks in dimensions two and three, direct proofs are given to show the maximal floppy mode property (order N). In particular, this is the case for the cubic symmetry sodalite framework and some other idealized zeolites. PMID:24379422

  4. Factors associated with the use of cognitive aids in operating room crises: a cross-sectional study of US hospitals and ambulatory surgical centers.

    PubMed

    Alidina, Shehnaz; Goldhaber-Fiebert, Sara N; Hannenberg, Alexander A; Hepner, David L; Singer, Sara J; Neville, Bridget A; Sachetta, James R; Lipsitz, Stuart R; Berry, William R

    2018-03-26

    Operating room (OR) crises are high-acuity events requiring rapid, coordinated management. Medical judgment and decision-making can be compromised in stressful situations, and clinicians may not experience a crisis for many years. A cognitive aid (e.g., checklist) for the most common types of crises in the OR may improve management during unexpected and rare events. While implementation strategies for innovations such as cognitive aids for routine use are becoming better understood, cognitive aids that are rarely used are not yet well understood. We examined organizational context and implementation process factors influencing the use of cognitive aids for OR crises. We conducted a cross-sectional study using a Web-based survey of individuals who had downloaded OR cognitive aids from the websites of Ariadne Labs or Stanford University between January 2013 and January 2016. In this paper, we report on the experience of 368 respondents from US hospitals and ambulatory surgical centers. We analyzed the relationship of more successful implementation (measured as reported regular cognitive aid use during applicable clinical events) with organizational context and with participation in a multi-step implementation process. We used multivariable logistic regression to identify significant predictors of reported, regular OR cognitive aid use during OR crises. In the multivariable logistic regression, small facility size was associated with a fourfold increase in the odds of a facility reporting more successful implementation (p = 0.0092). Completing more implementation steps was also significantly associated with more successful implementation; each implementation step completed was associated with just over 50% higher odds of more successful implementation (p ≤ 0.0001). More successful implementation was associated with leadership support (p < 0.0001) and dedicated time to train staff (p = 0.0189). Less successful implementation was associated with resistance among clinical providers to using cognitive aids (p < 0.0001), absence of an implementation champion (p = 0.0126), and unsatisfactory content or design of the cognitive aid (p = 0.0112). Successful implementation of cognitive aids in ORs was associated with a supportive organizational context and following a multi-step implementation process. Building strong organizational support and following a well-planned multi-step implementation process will likely increase the use of OR cognitive aids during intraoperative crises, which may improve patient outcomes.

  5. Random regression analyses using B-splines to model growth of Australian Angus cattle

    PubMed Central

    Meyer, Karin

    2005-01-01

    Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
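
    To make the basis concrete, the Python sketch below builds a quadratic B-spline design matrix with the knot sequence quoted in the abstract and fits it by ordinary least squares; the ages and weights are synthetic, and the mixed-model variance components are beyond this fragment.

      import numpy as np
      from scipy.interpolate import BSpline

      rng = np.random.default_rng(0)
      ages = np.linspace(0, 821, 200)                      # days, as in the abstract
      weights = 30 + 0.9 * ages - 0.0004 * ages**2 + rng.normal(0, 15, ages.size)

      k = 2                                                # quadratic B-splines
      t = np.r_[[0.0] * (k + 1), [200, 400, 600], [821.0] * (k + 1)]  # clamped knots
      n_basis = len(t) - k - 1

      # Design matrix: each column is one basis function evaluated at the ages.
      X = np.column_stack([BSpline(t, np.eye(n_basis)[j], k)(ages)
                           for j in range(n_basis)])
      coef, *_ = np.linalg.lstsq(X, weights, rcond=None)   # fixed-curve fit
      fitted = X @ coef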

  6. Continuous monitoring of fetal scalp temperature in labor: a new technology validated in a fetal lamb model.

    PubMed

    Lavesson, Tony; Amer-Wåhlin, Isis; Hansson, Stefan; Ley, David; Marsál, Karel; Olofsson, Per

    2010-06-01

    To evaluate new technical equipment for continuous recording of human fetal scalp temperature in labor. Experimental animal study. Two temperature sensors were placed subcutaneously and intracranially on the forehead of 10 fetal lambs and connected to a temperature monitoring system. The system records temperatures simultaneously on-line and stores data to be analyzed off-line. Throughout the experiment, the fetus was oxygenated via the umbilical cord circulation. Asphyxia was induced by intermittent cord compression, as assessed by pH in jugular vein blood. The intracranial (ICT) and subcutaneous (SCT) temperatures were compared with simple and polynomial regression analyses. Absolute and delta ICT and SCT changes. ICT and SCT were both successfully recorded in all 10 cases. With increasing acidosis, the temperatures decreased. The correlation coefficient between ICT and SCT had a range of 0.76-0.97 (median 0.88) by simple linear regression and 0.80-0.99 (median 0.89) by second-degree polynomial regression. After an initial system stabilization period of 10 minutes, the delta temperature values (ICT minus SCT) were less than 1.5 degrees C throughout the experiment in all but one case. The fetal forehead SCT mirrored the ICT closely, with the ICT being higher.
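
    The comparison reported here (simple linear versus second-degree polynomial regression between two temperature series) can be sketched in a few lines of Python; the numbers below are invented stand-ins for the lamb recordings.

      import numpy as np

      # Hypothetical paired temperatures (degrees C): intracranial vs. subcutaneous.
      ict = np.array([39.4, 39.2, 39.0, 38.7, 38.3, 37.9, 37.6])
      sct = np.array([38.1, 38.0, 37.8, 37.5, 37.2, 36.9, 36.6])

      # Simple linear regression: Pearson r is the linear correlation coefficient.
      r_linear = np.corrcoef(sct, ict)[0, 1]

      # Second-degree polynomial regression: correlate fitted with observed values.
      p2 = np.polynomial.Polynomial.fit(sct, ict, deg=2)
      r_quad = np.corrcoef(p2(sct), ict)[0, 1]
      print(f"linear r = {r_linear:.3f}, quadratic r = {r_quad:.3f}")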

  7. Genetic analysis of body weights of individually fed beef bulls in South Africa using random regression models.

    PubMed

    Selapa, N W; Nephawe, K A; Maiwashe, A; Norris, D

    2012-02-08

    The aim of this study was to estimate genetic parameters for body weights of individually fed beef bulls measured at centralized testing stations in South Africa using random regression models. Weekly body weights of Bonsmara bulls (N = 2919) tested between 1999 and 2003 were available for the analyses. The model included a fixed regression of the body weights on fourth-order orthogonal Legendre polynomials of the actual days on test (7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, and 84) for starting age and contemporary group effects. Random regressions on fourth-order orthogonal Legendre polynomials of the actual days on test were included for additive genetic effects and additional uncorrelated random effects of the weaning-herd-year and the permanent environment of the animal. Residual effects were assumed to be independently distributed with heterogeneous variance for each test day. Variance ratios for additive genetic, permanent environment and weaning-herd-year for weekly body weights at different test days ranged from 0.26 to 0.29, 0.37 to 0.44 and 0.26 to 0.34, respectively. The weaning-herd-year was found to have a significant effect on the variation of body weights of bulls despite a 28-day adjustment period. Genetic correlations amongst body weights at different test days were high, ranging from 0.89 to 1.00. Heritability estimates were comparable to literature estimates obtained with multivariate models. Therefore, random regression models could be applied in the genetic evaluation of body weight of individually fed beef bulls in South Africa.
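
    For readers unfamiliar with the mechanics, the following Python fragment builds the kind of fourth-order Legendre design matrix such models rest on; the test days come from the abstract, while the weights and the plain least-squares fit are illustrative stand-ins for the full mixed-model machinery.

      import numpy as np
      from numpy.polynomial import legendre

      days = np.array([7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, 84], dtype=float)
      x = 2.0 * (days - days.min()) / (days.max() - days.min()) - 1.0  # map to [-1, 1]

      Phi = legendre.legvander(x, 4)             # columns P0..P4 at each test day
      weights = 280.0 + 1.1 * days               # hypothetical mean weights (kg)
      beta, *_ = np.linalg.lstsq(Phi, weights, rcond=None)  # fixed-regression fit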

  8. H0 from cosmic chronometers and Type Ia supernovae, with Gaussian Processes and the novel Weighted Polynomial Regression method

    NASA Astrophysics Data System (ADS)

    Gómez-Valent, Adrià; Amendola, Luca

    2018-04-01

    In this paper we present new constraints on the Hubble parameter H0 using: (i) the available data on H(z) obtained from cosmic chronometers (CCH); (ii) the Hubble rate data points extracted from the Type Ia supernovae (SnIa) of the Pantheon compilation and the Hubble Space Telescope (HST) CANDELS and CLASH Multi-Cycle Treasury (MCT) programs; and (iii) the local HST measurement of H0 provided by Riess et al. (2018), H0HST=(73.45±1.66) km/s/Mpc. Various determinations of H0 using the Gaussian processes (GPs) method and the most updated list of CCH data have recently been provided by Yu, Ratra & Wang (2018). Using the Gaussian kernel they find H0=(67.42±4.75) km/s/Mpc. Here we extend their analysis to also include the most recently released and complete set of SnIa data, which allows us to reduce the uncertainty by a factor of ~3 with respect to the result found by considering only the CCH information. We obtain H0=(67.06±1.68) km/s/Mpc, which again favors the lower range of values for H0 and is in tension with H0HST. The tension reaches the 2.71σ level. We also round off the GPs determination by taking into account the error propagation of the kernel hyperparameters when the CCH data with and without H0HST are used in the analysis. In addition, we present a novel method to reconstruct functions from data, which consists of a weighted sum of polynomial regressions (WPR). We apply it from a cosmographic perspective to reconstruct H(z) and estimate H0 from CCH and SnIa measurements. The result obtained with this method, H0=(68.90±1.96) km/s/Mpc, is fully compatible with the GPs ones. Finally, a more conservative GPs+WPR value is also provided, H0=(68.45±2.00) km/s/Mpc, which is still almost 2σ away from H0HST.
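
    A minimal Python sketch of the weighted-polynomial-regression idea follows. The Akaike-type weighting used here is an assumption for illustration; the paper's actual WPR weighting scheme may differ, and the H(z) points are mock data.

      import numpy as np

      def wpr_reconstruct(z, H, z_grid, max_deg=3):
          """Weighted sum of polynomial regressions (illustrative sketch).

          Each degree d in 1..max_deg gets its own least-squares fit; the fits
          are then combined with Akaike-type weights. The published WPR method
          may weight and window the polynomials differently."""
          n = len(z)
          fits, aics = [], []
          for d in range(1, max_deg + 1):
              coef = np.polyfit(z, H, d)
              resid = H - np.polyval(coef, z)
              rss = float(resid @ resid)
              aics.append(n * np.log(rss / n) + 2 * (d + 1))
              fits.append(np.polyval(coef, z_grid))
          aics = np.array(aics)
          w = np.exp(-0.5 * (aics - aics.min()))
          w /= w.sum()
          return np.sum(w[:, None] * np.array(fits), axis=0)

      # Mock H(z) points standing in for the CCH + SnIa compilation.
      z = np.linspace(0.05, 2.0, 30)
      H = 67.0 * np.sqrt(0.3 * (1 + z) ** 3 + 0.7) \
          + np.random.default_rng(8).normal(0, 2, 30)
      H0_estimate = wpr_reconstruct(z, H, np.array([0.0]))[0]  # extrapolate to z = 0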

  9. A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.

    PubMed

    Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S

    2017-06-01

    The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency on the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate whether biological dose models including non-linear LET dependencies should be considered, by introducing a LET spectrum based dose model. The RBE-LET relationship was investigated by fitting polynomials of 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data than lower degrees. The newly developed models were compared to three published LETd based models for a simulated spread-out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. As differences between the models were observed for the SOBP scenario, both non-linear LET spectrum based and linear LETd based models should be further evaluated in clinically realistic scenarios. © 2017 American Association of Physicists in Medicine.
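
    The model-selection step described (weighted polynomial fits of increasing degree, tested for significant improvement) can be sketched in Python as below. The data points, uncertainties, and the nested F-test are illustrative assumptions about the procedure.

      import numpy as np
      from scipy import stats

      def weighted_rss(x, y, sigma, deg):
          w = 1.0 / sigma                        # np.polyfit weights the raw residuals
          coef = np.polyfit(x, y, deg, w=w)
          resid = (y - np.polyval(coef, x)) / sigma
          return float(resid @ resid)            # weighted residual sum of squares

      # Invented (LET, RBE, sigma) triples standing in for the 85-point database.
      let = np.array([1.0, 2.0, 3.5, 5.0, 7.0, 9.0, 12.0, 15.0])
      rbe = np.array([1.01, 1.03, 1.08, 1.10, 1.18, 1.25, 1.42, 1.60])
      sig = np.full_like(rbe, 0.05)

      # Nested F-tests: does degree d+1 significantly improve on degree d?
      for d in range(1, 5):
          rss_lo = weighted_rss(let, rbe, sig, d)
          rss_hi = weighted_rss(let, rbe, sig, d + 1)
          df_hi = len(let) - (d + 2)
          F = (rss_lo - rss_hi) / (rss_hi / df_hi)
          p = 1.0 - stats.f.cdf(F, 1, df_hi)
          print(f"degree {d} -> {d + 1}: F = {F:.2f}, p = {p:.3f}")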

  10. Flood inundation mapping in the Logone floodplain from multi temporal Landsat ETM+ imagery

    NASA Astrophysics Data System (ADS)

    Jung, H.; Alsdorf, D. E.; Moritz, M.; Lee, H.; Vassolo, S.

    2011-12-01

    Yearly flooding in the Logone floodplain impacts agricultural, pastoral, and fishery systems in the Lake Chad Basin. Since the flooding extent and depth are highly variable, flood inundation mapping helps us make better use of water resources and prevent flood hazards in the Logone floodplain. The flood maps are generated from 33 multi-temporal Landsat Enhanced Thematic Mapper Plus (ETM+) images spanning the three years 2006 to 2008. Flooded area is classified using a short-wave infrared band, whereas open water is classified by Iterative Self-organizing Data Analysis (ISODATA) clustering. The maximum flooding extent in the study area increases up to ~5800 km2 in late October 2008. The study also shows strong correlation of the flooding extents with water height variations in both the floodplain and the river, based on a second-degree polynomial regression model. The water heights are from ENVISAT altimetry in the floodplain and gauge measurements in the river. Coefficients of determination between flooding extents and water height variations are greater than 0.91, with 4 to 36 days of phase lag. Floodwater drains back to the river and to the northeast during the recession period in December and January. The study supports detailed understanding of Logone floodplain dynamics in terms of the spatial pattern and size of the flooding extent, and assists flood monitoring and prediction systems in the catchment.
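
    The reported relationship (flooded area regressed on water height with a second-degree polynomial, R2 > 0.91) is a one-variable quadratic fit of the kind sketched below in Python; the height and area values are invented for illustration.

      import numpy as np

      # Hypothetical altimetric water heights (m) and flooded areas (km^2).
      height = np.array([0.2, 0.8, 1.5, 2.1, 2.9, 3.4])
      area = np.array([400, 900, 1900, 3100, 4800, 5800])

      p = np.polynomial.Polynomial.fit(height, area, deg=2)
      pred = p(height)
      ss_res = np.sum((area - pred) ** 2)
      ss_tot = np.sum((area - area.mean()) ** 2)
      r2 = 1 - ss_res / ss_tot          # the abstract reports R^2 > 0.91
      print(f"R^2 = {r2:.3f}")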

  11. Flood Inundation Mapping in the Logone Floodplain from Multi Temporal Landsat ETM+Imagery

    NASA Technical Reports Server (NTRS)

    Jung, Hahn Chul; Alsdorf, Douglas E.; Moritz, Mark; Lee, Hyongki; Vassolo, Sara

    2011-01-01

    Yearly flooding in the Logone floodplain impacts agricultural, pastoral, and fishery systems in the Lake Chad Basin. Since the flooding extent and depth are highly variable, flood inundation mapping helps us make better use of water resources and prevent flood hazards in the Logone floodplain. The flood maps are generated from 33 multi-temporal Landsat Enhanced Thematic Mapper Plus (ETM+) images spanning the three years 2006 to 2008. Flooded area is classified using a short-wave infrared band, whereas open water is classified by Iterative Self-organizing Data Analysis (ISODATA) clustering. The maximum flooding extent in the study area increases up to approximately 5800 km2 in late October 2008. The study also shows strong correlation of the flooding extents with water height variations in both the floodplain and the river, based on a second-degree polynomial regression model. The water heights are from ENVISAT altimetry in the floodplain and gauge measurements in the river. Coefficients of determination between flooding extents and water height variations are greater than 0.91, with 4 to 36 days of phase lag. Floodwater drains back to the river and to the northeast during the recession period in December and January. The study supports detailed understanding of Logone floodplain dynamics in terms of the spatial pattern and size of the flooding extent, and assists flood monitoring and prediction systems in the catchment.

  12. Creating a non-linear total sediment load formula using polynomial best subset regression model

    NASA Astrophysics Data System (ADS)

    Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali

    2016-08-01

    The aim of this study is to derive a new total sediment load formula that is more accurate and has fewer application constraints than the well-known formulae in the literature. The five best-known stream-power-concept sediment formulae approved by ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called polynomial best subset regression (PBSR) analysis. The aim of the PBSR analysis is to fit and test all possible combinations of the input variables and select the best subset. All input variables, together with their second and third powers, are included in the regression to test possible relations between the explanatory variables and the dependent variable. While selecting the best subset, a multistep approach is used that depends on significance values and also on the degree of multicollinearity of the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab datasets within this holdout data. Different goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After the detailed comparisons, the most accurate equation that is also applicable to both flume and river data was identified. On the field dataset in particular, the prediction performance of the proposed formula outperformed the benchmark formulations.
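
    A bare-bones Python version of a best-subset search over polynomial terms is sketched below; selection here is by adjusted R2 alone, whereas the paper also screens candidate subsets by significance values and multicollinearity, so this is only the skeleton of the idea.

      import itertools
      import numpy as np

      def best_subset_poly(X, y, max_terms=4):
          """Exhaustive best-subset search over candidate columns (sketch).

          X should already contain each dimensionless input plus its squares
          and cubes, mirroring the PBSR setup."""
          n, p = X.shape
          best_cols, best_adj = None, -np.inf
          for k in range(1, max_terms + 1):
              for cols in itertools.combinations(range(p), k):
                  A = np.column_stack([np.ones(n), X[:, cols]])
                  coef, *_ = np.linalg.lstsq(A, y, rcond=None)
                  r2 = 1 - np.sum((y - A @ coef) ** 2) / np.sum((y - y.mean()) ** 2)
                  adj = 1 - (1 - r2) * (n - 1) / (n - k - 1)
                  if adj > best_adj:
                      best_cols, best_adj = cols, adj
          return best_cols, best_adj

      rng = np.random.default_rng(9)
      raw = rng.uniform(0.5, 2.0, (60, 3))                 # dimensionless inputs
      X = np.column_stack([raw, raw**2, raw**3])           # add squares and cubes
      y = 1.5 * raw[:, 0] - 0.8 * raw[:, 1] ** 2 + rng.normal(0, 0.1, 60)
      cols, adj_r2 = best_subset_poly(X, y)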

  13. Evolution method and ``differential hierarchy'' of colored knot polynomials

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.; Morozov, And.

    2013-10-01

    We consider braids with repeating patterns inside arbitrary knots, which provides a multi-parametric family of knots depending on the "evolution" parameter that controls the number of repetitions. The dependence of knot (super)polynomials on such evolution parameters is very easy to find. We apply this evolution method to the study of families of knots and links which include the cases with just two parallel and anti-parallel strands in the braid, like the ordinary twist and 2-strand torus knots/links and counter-oriented 2-strand links. Where the answers were available before, they are immediately reproduced, and an essentially new example is added: the "double braid", which is a combination of parallel and anti-parallel 2-strand braids. This study helps us to reveal with full clarity, and partly investigate, a mysterious hierarchical structure of the colored HOMFLY polynomials, at least in (anti)symmetric representations, which extends the original observation for the figure-eight knot to many (presumably all) knots. We demonstrate that this structure is typically respected by the t-deformation to the superpolynomials.

  14. Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao

    2017-10-01

    Recently, freeform surfaces have been widely used in optical systems because of the design freedom they provide and the resulting improvement in optical performance and image quality. Freeform optical fabrication integrates freeform optical design, precision manufacture, metrology of freeform optics, and a compensation method that corrects the form deviation of the surface arising in the production process, providing more flexibility and better performance. This paper focuses on the fabrication and correction of freeform surfaces. In this study, freeform surfaces are produced by multi-axis ultra-precision machining, which can upgrade the quality of the freeform; the machine is equipped with a positioning C-axis and has a CXZ machining function, also called slow tool servo (STS). The freeform compensation method based on Zernike polynomials is successfully verified; it corrects the form deviation of the freeform surface. Finally, the freeform surfaces are measured experimentally with an Ultrahigh Accurate 3D Profilometer (UA3P), and the form error is compensated using Zernike polynomial fitting to improve the form accuracy.
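
    The correction step (fitting the measured form deviation with Zernike polynomials and feeding the fit back as a compensation surface) can be sketched in Python as below. The basis uses the unnormalized Cartesian forms of a few low-order Zernike terms, and the measured deviation is synthetic.

      import numpy as np

      def zernike_basis(x, y):
          """A few low-order Zernike terms in (unnormalized) Cartesian form."""
          r2 = x**2 + y**2
          return np.column_stack([
              np.ones_like(x),   # piston
              x,                 # tilt x
              y,                 # tilt y
              2 * r2 - 1,        # defocus
              x**2 - y**2,       # astigmatism, 0 degrees
              2 * x * y,         # astigmatism, 45 degrees
          ])

      # Hypothetical measured form deviation on normalized unit-disk coordinates.
      xx, yy = np.meshgrid(np.linspace(-1, 1, 50), np.linspace(-1, 1, 50))
      mask = xx**2 + yy**2 <= 1.0
      x, y = xx[mask], yy[mask]
      dev = 0.3 * (2 * (x**2 + y**2) - 1) + 0.1 * (x**2 - y**2)  # synthetic deviation

      B = zernike_basis(x, y)
      coef, *_ = np.linalg.lstsq(B, dev, rcond=None)
      correction = B @ coef   # surface to feed back into the STS toolpath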

  15. Comparing Inference Approaches for RD Designs: A Reexamination of the Effect of Head Start on Child Mortality

    ERIC Educational Resources Information Center

    Cattaneo, Matias D.; Titiunik, Rocío; Vazquez-Bare, Gonzalo

    2017-01-01

    The regression discontinuity (RD) design is a popular quasi-experimental design for causal inference and policy evaluation. The most common inference approaches in RD designs employ "flexible" parametric and nonparametric local polynomial methods, which rely on extrapolation and large-sample approximations of conditional expectations…

  16. Numerical solution of second order ODE directly by two point block backward differentiation formula

    NASA Astrophysics Data System (ADS)

    Zainuddin, Nooraini; Ibrahim, Zarina Bibi; Othman, Khairil Iskandar; Suleiman, Mohamed; Jamaludin, Noraini

    2015-12-01

    A direct two-point block backward differentiation formula (BBDF2) for solving second-order ordinary differential equations (ODEs) is presented in this paper. The method is derived by differentiating the interpolating polynomial using three back values. In BBDF2, two approximate solutions are produced simultaneously at each step of integration. The derived method is implemented with a fixed step size, and the numerical results that follow demonstrate the advantage of the direct method over the reduction method.

  17. A robust nonparametric framework for reconstruction of stochastic differential equation models

    NASA Astrophysics Data System (ADS)

    Rajabzadeh, Yalda; Rezaie, Amir Hossein; Amindavar, Hamidreza

    2016-05-01

    In this paper, we employ a nonparametric framework to robustly estimate the functional forms of drift and diffusion terms from discrete stationary time series. The proposed method significantly improves the accuracy of the parameter estimation. In this framework, drift and diffusion coefficients are modeled through orthogonal Legendre polynomials. We employ the least squares regression approach along with the Euler-Maruyama approximation method to learn the coefficients of the stochastic model. Next, a numerical discrete construction of the mean squared prediction error (MSPE) is established to calculate the order of the Legendre polynomials in the drift and diffusion terms. We show numerically that the new method is robust against variation in sample size and sampling rate. The performance of our method in comparison with the kernel-based regression (KBR) method is demonstrated through simulation and real data. For the real dataset, we test our method on discriminating healthy electroencephalogram (EEG) signals from epileptic ones. We also demonstrate the efficiency of the method through prediction on financial data. In both simulation and real data, our algorithm outperforms the KBR method.
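
    The core estimation step (regressing Euler-Maruyama increments on a Legendre basis to recover the drift) is easy to sketch in Python; the Ornstein-Uhlenbeck test process and the cubic basis order below are illustrative choices, not the paper's settings.

      import numpy as np
      from numpy.polynomial import legendre

      # Simulate an Ornstein-Uhlenbeck process (true drift f(x) = -x) with
      # Euler-Maruyama, then recover the drift by least squares on a Legendre basis.
      dt = 0.01
      rng = np.random.default_rng(0)
      x = np.zeros(20000)
      for i in range(1, x.size):
          x[i] = x[i - 1] - x[i - 1] * dt + 0.5 * np.sqrt(dt) * rng.standard_normal()

      incr = (x[1:] - x[:-1]) / dt                          # noisy drift samples
      u = 2 * (x[:-1] - x.min()) / (x.max() - x.min()) - 1  # rescale to [-1, 1]
      Phi = legendre.legvander(u, 3)                        # cubic Legendre basis
      coef, *_ = np.linalg.lstsq(Phi, incr, rcond=None)
      drift_hat = Phi @ coef                                # fitted drift values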

  18. An updated PREDICT breast cancer prognostication and treatment benefit prediction model with independent validation.

    PubMed

    Candido Dos Reis, Francisco J; Wishart, Gordon C; Dicks, Ed M; Greenberg, David; Rashbass, Jem; Schmidt, Marjanka K; van den Broek, Alexandra J; Ellis, Ian O; Green, Andrew; Rakha, Emad; Maishman, Tom; Eccles, Diana M; Pharoah, Paul D P

    2017-05-22

    PREDICT is a breast cancer prognostic and treatment benefit model implemented online. The overall fit of the model has been good in multiple independent case series, but PREDICT has been shown to underestimate breast cancer specific mortality in women diagnosed under the age of 40. Another limitation is the use of discrete categories for tumour size and node status, resulting in 'step' changes in risk estimates on moving between categories. We have refitted the PREDICT prognostic model using the original cohort of cases from East Anglia with updated survival time in order to take into account age at diagnosis and to smooth out the survival function for tumour size and node status. Multivariable Cox regression models were used to fit separate models for ER negative and ER positive disease. Continuous variables were fitted using fractional polynomials, and a smoothed baseline hazard was obtained by regressing the baseline cumulative hazard for each patient against time using fractional polynomials. The fit of the prognostic models was then tested in three independent data sets that had also been used to validate the original version of PREDICT. In the model fitting data, after adjusting for other prognostic variables, there is an increase in risk of breast cancer specific mortality in younger and older patients with ER positive disease, with a substantial increase in risk for women diagnosed before the age of 35. In ER negative disease the risk increases slightly with age. The association between breast cancer specific mortality and both tumour size and number of positive nodes was non-linear, with a more marked increase in risk with increasing size and increasing number of nodes in ER positive disease. The overall calibration and discrimination of the new version of PREDICT (v2) was good and comparable to that of the previous version in both model development and validation data sets. However, the calibration of v2 improved over v1 in patients diagnosed under the age of 40. PREDICT v2 is an improved prognostication and treatment benefit model compared with v1. The online version should continue to aid clinical decision making in women with early breast cancer.

  19. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    NASA Astrophysics Data System (ADS)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic terms to fit and extract the trend and cyclic terms of SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance than the quadratic polynomial model, the grey model, and the ARIMA model. In addition, the new method can also overcome the insufficiency of the ARIMA model in model recognition and order determination.
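
    A compact two-stage Python sketch of this idea appears below: a quadratic-plus-periodic least-squares fit, then an ARIMA model on the residuals. The clock-bias series (in nanoseconds), the period, and the ARIMA order are all illustrative assumptions, not values from the paper.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      t = np.arange(480.0)                          # observation epochs
      rng = np.random.default_rng(1)
      scb = (2.0 * t + 0.001 * t**2                             # trend term
             + 0.5 * np.sin(2 * np.pi * t / 160.0)              # cyclic term
             + np.cumsum(0.01 * rng.standard_normal(t.size)))   # random part

      period = 160.0
      def basis(tt):
          return np.column_stack([np.ones_like(tt), tt, tt**2,
                                  np.sin(2 * np.pi * tt / period),
                                  np.cos(2 * np.pi * tt / period)])

      beta, *_ = np.linalg.lstsq(basis(t), scb, rcond=None)
      resid = scb - basis(t) @ beta                 # stochastic remainder

      arima = ARIMA(resid, order=(1, 0, 1)).fit()   # residual model
      t_f = np.arange(480.0, 540.0)                 # prediction epochs
      scb_pred = basis(t_f) @ beta + arima.forecast(steps=t_f.size)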

  20. Correlation between external and internal respiratory motion: a validation study.

    PubMed

    Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-05-01

    In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale/CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ε-Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm, demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29%, and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. This method is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
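
    With scikit-learn, the comparison between an ε-SVR correlation model and a polynomial one reduces to a few lines of Python; the synthetic surrogate/target motion below stands in for the volunteer recordings, and the kernel and hyperparameters are illustrative.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(2)
      surrogate = np.sort(rng.uniform(-1, 1, 120))     # external marker position (a.u.)
      target = 8 * surrogate + 1.5 * surrogate**2 + rng.normal(0, 0.3, 120)  # mm

      svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)
      svr.fit(surrogate.reshape(-1, 1), target)
      rms_svr = np.sqrt(np.mean((svr.predict(surrogate.reshape(-1, 1)) - target) ** 2))

      coef = np.polyfit(surrogate, target, 2)          # quadratic correlation model
      rms_poly = np.sqrt(np.mean((np.polyval(coef, surrogate) - target) ** 2))
      print(f"SVR RMS = {rms_svr:.2f} mm, polynomial RMS = {rms_poly:.2f} mm")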

  1. Numeric model to predict the location of market demand and economic order quantity for retailers of supply chain

    NASA Astrophysics Data System (ADS)

    Fradinata, Edy; Marli Kesuma, Zurnila

    2018-05-01

    Polynomial and spline regression are the numeric models used here to assess method performance, to model distance relationships between cement retailers in Banda Aceh, and to predict retailers' market areas and the economic order quantity (EOQ). These numeric models differ in accuracy as measured by the mean square error (MSE). The distance relationships between retailers identify the density of retailers in the town. The dataset was collected from cement retailers' sales together with global positioning system (GPS) locations. The sales dataset is plotted to assess the goodness of fit of the quadratic, cubic, and fourth-degree polynomial methods; on the real sales data, the polynomials model the behavior of the relationship between the x-abscissa and the y-ordinate. This research yields several useful outputs: four models for predicting a retailer's market area under competition, a comparison of the performance of the methods, the distance relationships between retailers, and an inventory policy based on the economic order quantity. The results show that areas with high-density retailer relationships indicate a growing population accompanied by construction projects. The spline is better than the quadratic, cubic, and fourth-degree polynomials at predicting the points, as indicated by a smaller MSE. The inventory policy uses a periodic review policy.

  2. A moving hum filter to suppress rotor noise in high-resolution airborne magnetic data

    USGS Publications Warehouse

    Xia, J.; Doll, W.E.; Miller, R.D.; Gamey, T.J.; Emond, A.M.

    2005-01-01

    A unique filtering approach is developed to eliminate helicopter rotor noise. It is designed to suppress harmonic noise from a rotor that varies slightly in amplitude, phase, and frequency and that contaminates aeromagnetic data. The filter provides a powerful harmonic noise-suppression tool for data acquired with modern large-dynamic-range recording systems. This three-step approach - polynomial fitting, bandpass filtering, and rotor-noise synthesis - significantly reduces rotor noise without altering the spectra of signals of interest. The two steps before hum filtering - polynomial fitting and bandpass filtering - are critical to accurately model the weak rotor noise. During rotor-noise synthesis, amplitude, phase, and frequency are determined. Data are processed segment by segment so that there is no limit on the length of the data. The segment length changes dynamically along a line based on modeling results. Modeling the rotor noise is stable and efficient. Real-world data examples demonstrate that this method can suppress rotor noise by more than 95% when implemented in an aeromagnetic data-processing flow. © 2005 Society of Exploration Geophysicists. All rights reserved.
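
    The first two steps lend themselves to a compact sketch with NumPy/SciPy; the sampling rate, rotor fundamental, and polynomial degree below are illustrative, and the synthesis step is only approximated by subtracting the bandpassed hum.

      import numpy as np
      from scipy.signal import butter, filtfilt

      fs = 1200.0                                  # Hz, assumed sampling rate
      t = np.arange(0, 4, 1 / fs)
      rotor_f = 26.0                               # Hz, assumed rotor fundamental
      signal = 50 * np.exp(-0.1 * t) + 0.5 * np.sin(2 * np.pi * rotor_f * t)

      # Step 1: polynomial fit removes the slowly varying geomagnetic trend so
      # the weak rotor hum dominates the remainder.
      trend = np.polyval(np.polyfit(t, signal, 3), t)
      detrended = signal - trend

      # Step 2: narrow bandpass around the rotor fundamental isolates the hum,
      # whose amplitude/phase/frequency would then be synthesized and removed.
      b, a = butter(4, [rotor_f - 2, rotor_f + 2], btype="bandpass", fs=fs)
      hum = filtfilt(b, a, detrended)
      cleaned = signal - hum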

  3. Financial time series prediction using spiking neural networks.

    PubMed

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks (a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network) and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data; US/Euro exchange rate data; and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting, and this in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments.

  4. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1991-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied for spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time step, delta t, is restricted by the CFL-like condition delta t < Const * N^(-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully-explicit spectral approximations in the nonperiodic case.

  5. The CFL condition for spectral approximations to hyperbolic initial-boundary value problems

    NASA Technical Reports Server (NTRS)

    Gottlieb, David; Tadmor, Eitan

    1990-01-01

    The stability of spectral approximations to scalar hyperbolic initial-boundary value problems with variable coefficients is studied. Time is discretized by explicit multi-level or Runge-Kutta methods of order less than or equal to 3 (forward Euler time differencing is included), and spatial discretizations are studied for spectral and pseudospectral approximations associated with the general family of Jacobi polynomials. It is proved that these fully explicit spectral approximations are stable provided their time step, delta t, is restricted by the CFL-like condition delta t < Const * N^(-2), where N equals the spatial number of degrees of freedom. We give two independent proofs of this result, depending on two different choices of approximate L^2-weighted norms. In both approaches, the proofs hinge on a certain inverse inequality interesting in its own right. The result confirms the commonly held belief that the above CFL stability restriction, which is extensively used in practical implementations, guarantees the stability (and hence the convergence) of fully-explicit spectral approximations in the nonperiodic case.

  6. Computationally efficient approach for solving time dependent diffusion equation with discrete temporal convolution applied to granular particles of battery electrodes

    NASA Astrophysics Data System (ADS)

    Senegačnik, Jure; Tavčar, Gregor; Katrašnik, Tomaž

    2015-03-01

    The paper presents a computationally efficient method for solving the time-dependent diffusion equation in a granule of the Li-ion battery's granular solid electrode. The method, called the Discrete Temporal Convolution method (DTC), is based on a discrete temporal convolution of the analytical solution of the step-function boundary value problem. This approach enables modelling the concentration distribution in granular particles for arbitrary time-dependent exchange fluxes that do not need to be known a priori. It is demonstrated in the paper that the proposed method features faster computational times than finite volume/difference methods and the Padé approximation at the same accuracy of the results. It is also demonstrated that all three addressed methods feature higher accuracy compared to the quasi-steady polynomial approaches when applied to simulate the current density variations typical of mobile/automotive applications. The proposed approach can thus be considered one of the key innovative methods enabling real-time capability of multi-particle electrochemical battery models featuring spatially and temporally resolved particle concentration profiles.

  7. A practical data processing workflow for multi-OMICS projects.

    PubMed

    Kohl, Michael; Megger, Dominik A; Trippler, Martin; Meckel, Hagen; Ahrens, Maike; Bracht, Thilo; Weber, Frank; Hoffmann, Andreas-Claudius; Baba, Hideo A; Sitek, Barbara; Schlaak, Jörg F; Meyer, Helmut E; Stephan, Christian; Eisenacher, Martin

    2014-01-01

    Multi-OMICS approaches aim at the integration of quantitative data obtained for different biological molecules in order to understand their interrelation and the functioning of larger systems. This paper deals with several data integration and data processing issues that frequently occur within this context. To this end, the data processing workflow within the PROFILE project is presented, a multi-OMICS project that aims at the identification of novel biomarkers and the development of new therapeutic targets for seven important liver diseases. Furthermore, a software tool called CrossPlatformCommander is sketched, which facilitates several steps of the proposed workflow in a semi-automatic manner. Application of the software is presented for the detection of novel biomarkers, their ranking and annotation with existing knowledge using the example of corresponding Transcriptomics and Proteomics data sets obtained from patients suffering from hepatocellular carcinoma. Additionally, a linear regression analysis of Transcriptomics vs. Proteomics data is presented and its performance assessed. It was shown that for capturing profound relations between Transcriptomics and Proteomics data, a simple linear regression analysis is not sufficient, and implementation and evaluation of alternative statistical approaches are needed. Additionally, the integration of multivariate variable selection and classification approaches is intended for further development of the software. Although this paper focuses only on the combination of data obtained from quantitative Proteomics and Transcriptomics experiments, several approaches and data integration steps are also applicable to other OMICS technologies. Keeping specific restrictions in mind, the suggested workflow (or at least parts of it) may be used as a template for similar projects that make use of different high-throughput techniques. This article is part of a Special Issue entitled: Computational Proteomics in the Post-Identification Era. Guest Editors: Martin Eisenacher and Christian Stephan. Copyright © 2013 Elsevier B.V. All rights reserved.

  8. Explaining variation in tropical plant community composition: influence of environmental and spatial data quality.

    PubMed

    Jones, Mirkka M; Tuomisto, Hanna; Borcard, Daniel; Legendre, Pierre; Clark, David B; Olivas, Paulo C

    2008-03-01

    The degree to which variation in plant community composition (beta-diversity) is predictable from environmental variation, relative to other spatial processes, is of considerable current interest. We addressed this question in Costa Rican rain forest pteridophytes (1,045 plots, 127 species). We also tested the effect of data quality on the results, which has largely been overlooked in earlier studies. To do so, we compared two alternative spatial models [polynomial vs. principal coordinates of neighbour matrices (PCNM)] and ten alternative environmental models (all available environmental variables vs. four subsets, and including their polynomials vs. not). Of the environmental data types, soil chemistry contributed most to explaining pteridophyte community variation, followed in decreasing order of contribution by topography, soil type and forest structure. Environmentally explained variation increased moderately when polynomials of the environmental variables were included. Spatially explained variation increased substantially when the multi-scale PCNM spatial model was used instead of the traditional, broad-scale polynomial spatial model. The best model combination (PCNM spatial model and full environmental model including polynomials) explained 32% of pteridophyte community variation, after correcting for the number of sampling sites and explanatory variables. Overall evidence for environmental control of beta-diversity was strong, and the main floristic gradients detected were correlated with environmental variation at all scales encompassed by the study (c. 100-2,000 m). Depending on model choice, however, total explained variation differed more than fourfold, and the apparent relative importance of space and environment could be reversed. Therefore, we advocate a broader recognition of the impacts that data quality has on analysis results. A general understanding of the relative contributions of spatial and environmental processes to species distributions and beta-diversity requires that methodological artefacts are separated from real ecological differences.

  9. 48 CFR 15.202 - Advisory multi-step process.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Advisory multi-step... Information 15.202 Advisory multi-step process. (a) The agency may publish a presolicitation notice (see 5.204... participate in the acquisition. This process should not be used for multi-step acquisitions where it would...

  10. An efficient higher order family of root finders

    NASA Astrophysics Data System (ADS)

    Petkovic, Ljiljana D.; Rancic, Lidija; Petkovic, Miodrag S.

    2008-06-01

    A one-parameter family of iterative methods for the simultaneous approximation of simple complex zeros of a polynomial, based on the cubically convergent Hansen-Patrick family, is studied. We show that the convergence of the basic fourth-order family can be increased to five and six using Newton's and Halley's corrections, respectively. Since these corrections use already calculated values, the computational efficiency of the accelerated methods is significantly increased. Further acceleration is achieved by applying the Gauss-Seidel approach (single-step mode). One of the most important problems in solving nonlinear equations, the construction of initial conditions which provide both guaranteed and fast convergence, is considered for the proposed accelerated family. These conditions are computationally verifiable; they depend only on the polynomial coefficients, its degree, and the initial approximations, which is of practical importance. Some modifications of the considered family, providing the computation of multiple zeros of polynomials and simple zeros of a wide class of analytic functions, are also studied. Numerical examples demonstrate the convergence properties of the presented family of root-finding methods.

  11. Pedestrian detection in crowded scenes with the histogram of gradients principle

    NASA Astrophysics Data System (ADS)

    Sidla, O.; Rosner, M.; Lypetskyy, Y.

    2006-10-01

    This paper describes a close-to-real-time, scale-invariant implementation of a pedestrian detector system based on the Histogram of Oriented Gradients (HOG) principle. Salient HOG features are first selected from a manually created, very large database of samples with an evolutionary optimization procedure that directly trains a polynomial Support Vector Machine (SVM). Real-time operation is achieved by a cascaded 2-step classifier which first uses a very fast linear SVM (with the same features as the polynomial SVM) to reject most of the irrelevant detections and then computes the decision function with a polynomial SVM on the remaining set of candidate detections. Scale invariance is achieved by running the detector of constant size on scaled versions of the original input images and by clustering the results over all resolutions. The pedestrian detection system has been implemented in two versions: i) full-body detection, and ii) upper-body-only detection. The latter is especially suited for very busy and crowded scenarios. On a state-of-the-art PC it is able to run at a frequency of 8-20 frames/sec.

  12. A proposed metric for assessing the measurement quality of individual microarrays

    PubMed Central

    Kim, Kyoungmi; Page, Grier P; Beasley, T Mark; Barnes, Stephen; Scheirer, Katherine E; Allison, David B

    2006-01-01

    Background High-density microarray technology is increasingly applied to study gene expression levels on a large scale. Microarray experiments rely on several critical steps that may introduce error and uncertainty in analyses. These steps include mRNA sample extraction, amplification and labeling, hybridization, and scanning. In some cases this may be manifested as systematic spatial variation on the surface of microarray in which expression measurements within an individual array may vary as a function of geographic position on the array surface. Results We hypothesized that an index of the degree of spatiality of gene expression measurements associated with their physical geographic locations on an array could indicate the summary of the physical reliability of the microarray. We introduced a novel way to formulate this index using a statistical analysis tool. Our approach regressed gene expression intensity measurements on a polynomial response surface of the microarray's Cartesian coordinates. We demonstrated this method using a fixed model and presented results from real and simulated datasets. Conclusion We demonstrated the potential of such a quantitative metric for assessing the reliability of individual arrays. Moreover, we showed that this procedure can be incorporated into laboratory practice as a means to set quality control specifications and as a tool to determine whether an array has sufficient quality to be retained in terms of spatial correlation of gene expression measurements. PMID:16430768
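
    The proposed index boils down to regressing spot intensities on a polynomial response surface of the array's Cartesian coordinates and summarizing the fit. The Python sketch below uses a second-order surface and R2 as the summary; the grid and the synthetic spatial gradient are illustrative assumptions, and the paper's exact metric may be formulated differently.

      import numpy as np

      rng = np.random.default_rng(3)
      gx, gy = np.meshgrid(np.arange(40.0), np.arange(40.0))
      x, y = gx.ravel(), gy.ravel()
      intensity = 10 + 0.05 * x - 0.002 * x * y + rng.normal(0, 1, x.size)

      # Second-order response surface: 1, x, y, x^2, xy, y^2.
      X = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
      beta, *_ = np.linalg.lstsq(X, intensity, rcond=None)
      fitted = X @ beta
      r2 = 1 - np.sum((intensity - fitted) ** 2) \
             / np.sum((intensity - intensity.mean()) ** 2)
      print(f"spatiality index (R^2 of surface fit) = {r2:.3f}")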

  13. Uncertainty Quantification in CO 2 Sequestration Using Surrogate Models from Polynomial Chaos Expansion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Yan; Sahinidis, Nikolaos V.

    2013-03-06

    In this paper, surrogate models are iteratively built using polynomial chaos expansion (PCE) and detailed numerical simulations of a carbon sequestration system. Output variables from a numerical simulator are approximated as polynomial functions of uncertain parameters. Once generated, PCE representations can be used in place of the numerical simulator and often decrease simulation times by several orders of magnitude. However, PCE models are expensive to derive unless the number of terms in the expansion is moderate, which requires a relatively small number of uncertain variables and a low degree of expansion. To cope with this limitation, instead of using a classical full expansion at each step of an iterative PCE construction method, we introduce a mixed-integer programming (MIP) formulation to identify the best subset of basis terms in the expansion. This approach makes it possible to keep the number of terms in the expansion small. Monte Carlo (MC) simulation is then performed by substituting the values of the uncertain parameters into the closed-form polynomial functions. Based on the results of MC simulation, the uncertainties of injecting CO2 underground are quantified for a saline aquifer. Moreover, based on the PCE model, we formulate an optimization problem to determine the optimal CO2 injection rate so as to maximize the gas saturation (residual trapping) during injection, and thereby minimize the chance of leakage.
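
    A minimal regression-based PCE in one uncertain variable is sketched below in Python; the toy simulator, the standard-normal parameter, and the fixed fourth-degree Hermite basis are illustrative assumptions, and the paper's MIP-based subset selection is omitted.

      import numpy as np
      from numpy.polynomial import hermite_e

      def simulator(xi):                       # stand-in for the reservoir simulator
          return np.exp(0.3 * xi) + 0.1 * xi**2

      rng = np.random.default_rng(4)
      xi = rng.standard_normal(200)            # sampled uncertain parameter
      y = simulator(xi)

      deg = 4
      Psi = hermite_e.hermevander(xi, deg)     # probabilists' Hermite basis He_0..He_4
      coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

      # Monte Carlo on the surrogate: a closed-form polynomial, so sampling is cheap.
      xi_mc = rng.standard_normal(100000)
      y_mc = hermite_e.hermevander(xi_mc, deg) @ coef
      print(f"surrogate mean = {y_mc.mean():.4f}, std = {y_mc.std():.4f}")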

  14. Random Regression Models Using Legendre Polynomials to Estimate Genetic Parameters for Test-day Milk Protein Yields in Iranian Holstein Dairy Cattle.

    PubMed

    Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan

    2016-12-01

    The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age of recording, year, and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest at the middle of the lactation and contrarily, heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213±0.007). Genetic, permanent, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for random regression coefficients could be used for national genetic evaluation of dairy cattle in Iran.

  15. Random Regression Models Using Legendre Polynomials to Estimate Genetic Parameters for Test-day Milk Protein Yields in Iranian Holstein Dairy Cattle

    PubMed Central

    Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan

    2016-01-01

    The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age of recording, year, and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest at the middle of the lactation and contrarily, heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213±0.007). Genetic, permanent, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for random regression coefficients could be used for national genetic evaluation of dairy cattle in Iran. PMID:26954192

  16. Separation of the long-term thermal effects from the strain measurements in the Geodynamics Laboratory of Lanzarote

    NASA Astrophysics Data System (ADS)

    Venedikov, A. P.; Arnoso, J.; Cai, W.; Vieira, R.; Tan, S.; Velez, E. J.

    2006-01-01

    A 12-year series (1992-2004) of strain measurements recorded in the Geodynamics Laboratory of Lanzarote is investigated. Through a tidal analysis, the non-tidal component of the data is separated in order to use it for studying signals useful for monitoring the volcanic activity on the island. This component contains various perturbations of meteorological and oceanic origin, which should be eliminated in order to make the useful signals discernible. The paper is devoted to the estimation and elimination of the effect of the air temperature inside the station, which strongly dominates the strainmeter data. For solving this task, a regression model is applied which includes a linear relation with the temperature and time-dependent polynomials. The regression depends nonlinearly on a set of parameters, which are estimated by a properly applied Bayesian approach. The results obtained are: the regression coefficient of the strain data on temperature, equal to (-367.4 ± 0.8) × 10^-9 °C^-1; the curve of the non-tidal component reduced by the effect of the temperature; and a polynomial approximation of the reduced curve. The technique used here can be helpful to investigators in the domain of earthquake and volcano monitoring. However, the fundamental and extremely difficult problem of what kind of signals in the reduced curves might be useful in this field is not considered here.

  17. SEMIPARAMETRIC QUANTILE REGRESSION WITH HIGH-DIMENSIONAL COVARIATES

    PubMed Central

    Zhu, Liping; Huang, Mian; Li, Runze

    2012-01-01

    This paper is concerned with quantile regression for a semiparametric regression model, in which both the conditional mean and conditional variance function of the response given the covariates admit a single-index structure. This semiparametric regression model enables us to reduce the dimension of the covariates and simultaneously retains the flexibility of nonparametric regression. Under mild conditions, we show that the simple linear quantile regression offers a consistent estimate of the index parameter vector. This is a surprising and interesting result because the single-index model is possibly misspecified under the linear quantile regression. With a root-n consistent estimate of the index vector, one may employ a local polynomial regression technique to estimate the conditional quantile function. This procedure is computationally efficient, which is very appealing in high-dimensional data analysis. We show that the resulting estimator of the quantile function performs asymptotically as efficiently as if the true value of the index vector were known. The methodologies are demonstrated through comprehensive simulation studies and an application to a real dataset. PMID:24501536

  18. A /31,15/ Reed-Solomon Code for large memory systems

    NASA Technical Reports Server (NTRS)

    Lim, R. S.

    1979-01-01

    This paper describes the encoding and the decoding of a (31,15) Reed-Solomon Code for multiple-burst error correction for large memory systems. The decoding procedure consists of four steps: (1) syndrome calculation, (2) error-location polynomial calculation, (3) error-location numbers calculation, and (4) error values calculation. The principal features of the design are the use of a hardware shift register for both high-speed encoding and syndrome calculation, and the use of a commercially available (31,15) decoder for decoding Steps 2, 3 and 4.
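
    Step 1 (syndrome calculation) is the easiest to show in code. The Python sketch below builds GF(2^5) tables from the primitive polynomial x^5 + x^2 + 1 (one common choice; the paper's hardware may use a different primitive polynomial) and evaluates the received word at powers of alpha.

      # GF(2^5) arithmetic tables; x^5 + x^2 + 1 is 0b100101.
      exp = [0] * 62
      log = [0] * 32
      v = 1
      for i in range(31):
          exp[i] = v
          log[v] = i
          v <<= 1
          if v & 0b100000:          # degree-5 overflow: reduce by the primitive poly
              v ^= 0b100101
      exp[31:] = exp[:31]           # duplicate so log sums need no modular reduction

      def gf_mul(a, b):
          return 0 if a == 0 or b == 0 else exp[log[a] + log[b]]

      def syndromes(received, num=16):
          """S_i = r(alpha^i) for i = 1..2t; the (31,15) code has t = 8."""
          out = []
          for i in range(1, num + 1):
              s = 0
              for c in received:    # Horner evaluation, highest-degree symbol first
                  s = gf_mul(s, exp[i]) ^ c
              out.append(s)
          return out                # all-zero syndromes mean no detected errors

      print(syndromes([0] * 31))    # the zero codeword: all syndromes are 0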

  19. Behavioral modeling and digital compensation of nonlinearity in DFB lasers for multi-band directly modulated radio-over-fiber systems

    NASA Astrophysics Data System (ADS)

    Li, Jianqiang; Yin, Chunjing; Chen, Hao; Yin, Feifei; Dai, Yitang; Xu, Kun

    2014-11-01

    The envisioned C-RAN concept in the wireless communication sector relies on distributed antenna systems (DAS), which consist of a central unit (CU), multiple remote antenna units (RAUs), and the fronthaul links between them. As legacy and emerging wireless communication standards will coexist for a long time, the fronthaul links should preferably carry multi-band, multi-standard wireless signals. Directly modulated radio-over-fiber (ROF) links can serve as a low-cost option for fronthaul connections conveying multi-band wireless signals. However, directly modulated ROF systems often suffer from the inherent nonlinearities of directly modulated lasers. Unlike ROF systems working in single-band mode, the modulation nonlinearities in multi-band ROF systems can result in both in-band and cross-band nonlinear distortions. In order to address this issue, we have recently investigated the multi-band nonlinear behavior of directly modulated DFB lasers based on a multi-dimensional memory polynomial model. Based on this model, an efficient multi-dimensional baseband digital predistortion technique was developed and experimentally demonstrated for linearization of multi-band directly modulated ROF systems.
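
    For orientation, a single-band memory polynomial (the building block that the cited multi-dimensional model generalizes with cross-band terms) is sketched below in Python; the model orders and the synthetic, mildly nonlinear link are illustrative assumptions.

      import numpy as np

      def memory_polynomial_matrix(x, K=5, M=3):
          """Basis matrix for y[n] = sum over odd k <= K and delays m < M of
          a_{k,m} * x[n-m] * |x[n-m]|^(k-1)."""
          n = len(x)
          cols = []
          for m in range(M):
              xm = np.concatenate([np.zeros(m, dtype=complex), x[: n - m]])
              for k in range(1, K + 1, 2):
                  cols.append(xm * np.abs(xm) ** (k - 1))
          return np.column_stack(cols)

      rng = np.random.default_rng(5)
      x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) / np.sqrt(2)
      y = x + 0.05 * x * np.abs(x) ** 2        # synthetic mildly nonlinear link

      Phi = memory_polynomial_matrix(x)
      a, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # identified model coefficients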

  20. From h to p efficiently: optimal implementation strategies for explicit time-dependent problems using the spectral/hp element method

    PubMed Central

    Bolis, A; Cantwell, C D; Kirby, R M; Sherwin, S J

    2014-01-01

    We investigate the relative performance of a second-order Adams–Bashforth scheme and second-order and fourth-order Runge–Kutta schemes when time stepping a 2D linear advection problem discretised using a spectral/hp element technique, for a range of different mesh sizes and polynomial orders. Numerical experiments explore the effects of short (two wavelengths) and long (32 wavelengths) time integration for sets of uniform and non-uniform meshes. The choice of time-integration scheme and discretisation together fixes a CFL limit that imposes a restriction on the maximum time step that can be taken to ensure numerical stability. The number of steps, together with the order of the scheme, affects not only the runtime but also the accuracy of the solution. Through numerical experiments, we systematically highlight the relative effects of spatial resolution and choice of time integration on performance and provide general guidelines on how best to achieve the minimal execution time in order to obtain a prescribed solution accuracy. The significant role played by higher polynomial orders in reducing CPU time while preserving accuracy becomes more evident, especially for uniform meshes, compared with what has typically been considered when studying this type of problem. © 2014 The Authors. International Journal for Numerical Methods in Fluids published by John Wiley & Sons, Ltd. PMID:25892840

  1. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at the nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  2. Polynomial order selection in random regression models via penalizing adaptively the likelihood.

    PubMed

    Corrales, J D; Munilla, S; Cantet, R J C

    2015-08-01

    Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select the LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select the LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified the true LP order for the additive genetic and permanent environmental effects with probability one, but AIC tended to favour over-parameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates. © 2015 Blackwell Verlag GmbH.
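
    PAL itself is not reproduced here, but the baseline criteria it is compared against are easy to illustrate. The sketch below fits Legendre polynomials of increasing order by ordinary least squares (ignoring the random-effects structure of a true RRM) and scores each order by AIC and BIC under an iid Gaussian residual assumption; the lactation-like toy data are invented.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    def fit_legendre(t, y, order):
        """Least-squares Legendre fit; t is rescaled to [-1, 1] as is
        usual when building random regression covariables."""
        ts = 2 * (t - t.min()) / (t.max() - t.min()) - 1
        X = legendre.legvander(ts, order)
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        return beta, y - X @ beta

    rng = np.random.default_rng(1)
    t = np.linspace(0, 305, 120)                      # e.g. days in milk
    y = 20 + 5 * np.sin(t / 100) + rng.normal(0, 1, t.size)

    n = y.size
    for order in range(1, 7):
        _, resid = fit_legendre(t, y, order)
        k = order + 1                                 # number of coefficients
        sigma2 = resid @ resid / n
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        aic = 2 * k - 2 * loglik
        bic = k * np.log(n) - 2 * loglik
        print(f"order {order}: AIC={aic:7.1f}  BIC={bic:7.1f}")
    ```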

  3. Development of Ensemble Model Based Water Demand Forecasting Model

    NASA Astrophysics Data System (ADS)

    Kwon, Hyun-Han; So, Byung-Jin; Kim, Seong-Hyeon; Kim, Byung-Seop

    2014-05-01

    The Smart Water Grid (SWG) concept has emerged globally over the last decade and has also gained significant recognition in South Korea. In particular, there has been growing interest in water demand forecasting and optimal pump operation, and this has led to various studies on energy saving and improvement of water supply reliability. Existing water demand forecasting models fall into two groups in terms of how they model and predict behavior in time series. One considers embedded patterns such as seasonality, periodicity and trends; the other is an autoregressive model using short-memory Markovian processes (Emmanuel et al., 2012). The main disadvantage of the abovementioned models is limited predictability of water demand at about the sub-daily scale, because the system is nonlinear. In this regard, this study aims to develop a nonlinear ensemble model for hourly water demand forecasting which allows us to estimate uncertainties across different model classes. The proposed model consists of two parts. One is a multi-model scheme based on a combination of independent prediction models. The other is a cross-validation scheme, namely the bagging approach introduced by Breiman (1996), used to derive weighting factors for the individual models. The individual forecasting models used in this study are a linear regression model, polynomial regression, multivariate adaptive regression splines (MARS) and support vector machines (SVM). The concepts are demonstrated through application to data observed from water plants at several locations in South Korea. Keywords: water demand, non-linear model, ensemble forecasting model, uncertainty. Acknowledgements: This subject is supported by the Korea Ministry of Environment as "Projects for Developing Eco-Innovation Technologies (GT-11-G-02-001-6)".
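
    A rough sketch of the multi-model combination idea, assuming inverse cross-validated error as a stand-in for the bagging-derived weights described above; the component models and the toy hourly demand series are illustrative choices, not the authors'.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.uniform(0, 24, (500, 1))                    # hour of day (toy)
    y = 50 + 20 * np.sin(X[:, 0] * np.pi / 12) + rng.normal(0, 3, 500)

    models = {
        "linear": LinearRegression(),
        "poly3": make_pipeline(PolynomialFeatures(3), LinearRegression()),
        "svr": SVR(kernel="rbf", C=10.0),
    }

    # Weight each model by its inverse cross-validated MSE (a crude proxy
    # for the bagging-derived weights), then average the predictions.
    weights = {}
    for name, model in models.items():
        mse = -cross_val_score(model, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        weights[name] = 1.0 / mse
        model.fit(X, y)

    total = sum(weights.values())
    X_new = np.array([[6.0], [18.0]])
    forecast = sum(w / total * models[name].predict(X_new)
                   for name, w in weights.items())
    print("ensemble forecast:", forecast)
    ```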

  4. Prediction of different ovarian responses using anti-Müllerian hormone following a long agonist treatment protocol for IVF.

    PubMed

    Heidar, Z; Bakhtiyari, M; Mirzamoradi, M; Zadehmodarres, S; Sarfjoo, F S; Mansournia, M A

    2015-09-01

    The purpose of this study was to predict poor and excessive ovarian response using anti-Müllerian hormone (AMH) levels following a long agonist protocol in IVF candidates. Through a prospective cohort study, the type of relationship and the appropriate scale for AMH were determined using fractional polynomial regression. To determine the effect of AMH on the outcomes of ovarian stimulation and different ovarian responses, multinomial and negative binomial regression models were fitted using a backward stepwise method. The ovarian response of study subjects who entered a standard long-term treatment cycle with a GnRH agonist was evaluated using the prediction models, separately and in combined models, with receiver operating characteristic (ROC) curves. The use of standard long-term treatments with a GnRH agonist led to positive pregnancy test results in 30% of treated patients. With each unit increase in the log of AMH, the odds of a poor response compared to a normal response decreased by 64% (OR 0.36, 95% CI 0.19-0.68). Also, the results of the negative binomial regression model indicated that for a one-unit increase in the log of AMH blood levels, the odds of releasing an oocyte increased by 24% (OR 1.24, 95% CI 1.14-1.35). The optimal cut-off points of AMH for predicting excessive and poor ovarian responses were 3.4 and 1.2 ng/ml, respectively, with areas under the curve of 0.69 (0.60-0.77) and 0.76 (0.66-0.86), respectively. By considering the age of the patient undergoing infertility treatment as a variable affecting ovulation, the AMH level proved to be a good test to discriminate between different ovarian responses.
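
    The fractional polynomial step can be sketched compactly: try a standard set of first-degree fractional polynomial (FP1) powers for AMH and keep the one minimizing the residual sum of squares. The toy data and the plain least-squares fit (rather than the paper's multinomial and negative binomial models) are assumptions.

    ```python
    import numpy as np

    def frac_poly_transform(x, p):
        """First-degree fractional polynomial transform; p = 0 means log(x)."""
        return np.log(x) if p == 0 else x ** p

    rng = np.random.default_rng(3)
    amh = rng.uniform(0.2, 8.0, 300)                       # ng/ml (toy)
    oocytes = 4 + 3 * np.log(amh) + rng.normal(0, 1, 300)  # toy response

    powers = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]               # standard FP1 set
    best = None
    for p in powers:
        X = np.column_stack([np.ones_like(amh), frac_poly_transform(amh, p)])
        beta, *_ = np.linalg.lstsq(X, oocytes, rcond=None)
        rss = np.sum((oocytes - X @ beta) ** 2)
        if best is None or rss < best[1]:
            best = (p, rss)
    print("best FP1 power:", best[0])   # should recover p = 0, i.e. log(AMH)
    ```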

  5. Accessible solitons of fractional dimension

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhong, Wei-Ping, E-mail: zhongwp6@126.com; Texas A&M University at Qatar, P.O. Box 23874, Doha; Belić, Milivoj

    We demonstrate that accessible solitons described by an extended Schrödinger equation with the Laplacian of fractional dimension can exist in strongly nonlocal nonlinear media. The soliton solutions of the model are constructed by two special functions, the associated Legendre polynomials and the Laguerre polynomials in the fractional-dimensional space. Our results show that these fractional accessible solitons form a soliton family which includes crescent solitons, and asymmetric single-layer and multi-layer necklace solitons. Highlights: •Analytic solutions of a fractional Schrödinger equation are obtained. •The solutions are produced by means of a self-similar method applied to the fractional Schrödinger equation with parabolic potential. •The fractional accessible solitons form crescent, asymmetric single-layer and multi-layer necklace profiles. •The model applies to the propagation of optical pulses in strongly nonlocal nonlinear media.

  6. Soil Particle Size Analysis by Laser Diffractometry: Result Comparison with Pipette Method

    NASA Astrophysics Data System (ADS)

    Šinkovičová, Miroslava; Igaz, Dušan; Kondrlová, Elena; Jarošová, Miriam

    2017-10-01

    Soil texture, as a basic soil physical property, provides basic information on the soil grain size distribution and the representation of grain size fractions. Currently, there are several methods of particle size measurement available that are based on different physical principles. The pipette method, based on the different sedimentation velocities of particles with different diameters, is considered one of the standard methods for determining the distribution of individual grain size fractions. Following technical advancement, optical methods such as laser diffraction can nowadays also be used for determining the grain size distribution in soil. According to a literature review of domestic as well as international sources on this topic, the results obtained by laser diffractometry do not correspond with the results obtained by the pipette method. The main aim of this paper was to analyse 132 samples of medium-fine soil, taken from the Nitra River catchment in Slovakia at depths of 15-20 cm and 40-45 cm, using the laser analysers ANALYSETTE 22 MicroTec plus (Fritsch GmbH) and Mastersizer 2000 (Malvern Instruments Ltd). The results obtained by laser diffractometry were compared with the pipette method, and regression relationships using linear, exponential, power and polynomial trends were derived. The regressions with the three highest coefficients of determination (R2) were further investigated, and the tightest fit was observed for the polynomial regression. In view of the results obtained, we recommend estimating the representation of the clay fraction (<0.01 mm) with the polynomial regression, which achieved the highest confidence values R2: 0.72 (ANALYSETTE 22 MicroTec plus) and 0.95 (Mastersizer 2000) at the depth of 15-20 cm, and 0.90 (ANALYSETTE 22 MicroTec plus) and 0.96 (Mastersizer 2000) at the depth of 40-45 cm. Since the percentage representation of clayey particles (2nd fraction according to the methodology of the Complex Soil Survey done in Slovakia) in soil is the determinant for soil type specification, we recommend using the derived relationships in soil science when soil texture analysis is done by laser diffractometry. The advantages of the laser diffraction method comprise the short analysis time, the small sample amount required, applicability to various grain size fraction and soil type classification systems, and a wide range of determined fractions. Therefore, it is necessary to pursue this issue further to address the needs of soil science research and attempt to replace the standard pipette method with the more progressive laser diffraction method.
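
    The regression comparison reduces to fitting trends of increasing flexibility to paired laser/pipette readings and comparing R2, as in this hedged sketch with invented data:

    ```python
    import numpy as np

    # Toy paired measurements: pipette clay % (reference) vs laser reading.
    rng = np.random.default_rng(4)
    laser = rng.uniform(2, 25, 132)
    pipette = 1.5 + 0.8 * laser + 0.02 * laser**2 + rng.normal(0, 0.8, 132)

    def r2(y, y_hat):
        """Coefficient of determination."""
        ss_res = np.sum((y - y_hat) ** 2)
        ss_tot = np.sum((y - y.mean()) ** 2)
        return 1 - ss_res / ss_tot

    for deg, label in [(1, "linear"), (2, "polynomial (quadratic)")]:
        coef = np.polyfit(laser, pipette, deg)
        print(f"{label}: R2 = {r2(pipette, np.polyval(coef, laser)):.3f}")
    ```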

  7. Real estate value prediction using multivariate regression models

    NASA Astrophysics Data System (ADS)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is one of the prime fields in which to apply the concepts of machine learning to optimize and predict prices with high accuracy. In this paper we therefore present various important features to use while predicting housing prices with good accuracy. We describe regression models using various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers to the features) is used for a better model fit. Because these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.
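
    A minimal sklearn sketch of the combination described above: polynomial feature expansion with ridge shrinkage to control overfitting. The three toy features and the price model are assumptions.

    ```python
    import numpy as np
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import PolynomialFeatures, StandardScaler
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(5)
    X = rng.uniform(size=(400, 3))        # toy features: area, rooms, age
    price = (100 + 300 * X[:, 0] + 40 * X[:, 1] - 20 * X[:, 2] ** 2
             + rng.normal(0, 5, 400))

    model = make_pipeline(
        PolynomialFeatures(degree=2, include_bias=False),  # expanded features
        StandardScaler(),
        Ridge(alpha=1.0),                                  # shrinks coefficients
    )
    mse = -cross_val_score(model, X, price, cv=5,
                           scoring="neg_mean_squared_error")
    print("CV MSE per fold:", np.round(mse, 2))
    ```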

  8. Genetic analyses of stillbirth in relation to litter size using random regression models.

    PubMed

    Chen, C Y; Misztal, I; Tsuruta, S; Herring, W O; Holl, J; Culbertson, M

    2010-12-01

    Estimates of genetic parameters for number of stillborns (NSB) in relation to litter size (LS) were obtained with random regression models (RRM). Data were collected from 4 purebred Duroc nucleus farms between 2004 and 2008. Two data sets with 6,575 litters for the first parity (P1) and 6,259 litters for the second to fifth parity (P2-5) with a total of 8,217 and 5,066 animals in the pedigree were analyzed separately. Number of stillborns was studied as a trait at the sow level. Fixed effects were contemporary groups (farm-year-season) and fixed cubic regression coefficients on LS with Legendre polynomials. Models for P2-5 included the fixed effect of parity. Random effects were additive genetic effects for both data sets, with permanent environmental effects included for P2-5. Random effects were modeled with Legendre polynomials (RRM-L), linear splines (RRM-S), and degree 0 B-splines (RRM-BS) with regressions on LS. For P1, the order of polynomial, the number of knots, and the number of intervals used for the respective models were quadratic, 3, and 3, respectively. For P2-5, the same parameters were linear, 2, and 2, respectively. Heterogeneous residual variances were considered in the models. For P1, estimates of heritability were 12 to 15%, 5 to 6%, and 6 to 7% in LS 5, 9, and 13, respectively. For P2-5, estimates were 15 to 17%, 4 to 5%, and 4 to 6% in LS 6, 9, and 12, respectively. For P1, average estimates of genetic correlations between LS 5 to 9, 5 to 13, and 9 to 13 were 0.53, -0.29, and 0.65, respectively. For P2-5, the same estimates averaged over RRM-L and RRM-S were 0.75, -0.21, and 0.50, respectively. For RRM-BS with 2 intervals, the correlation was 0.66 between LS 5 to 7 and 8 to 13. Parameters obtained by the 3 RRM revealed a nonlinear relationship between the additive genetic effect of NSB and the environmental deviation of LS. The negative correlations between the 2 extreme LS might indicate different genetic bases for the incidence of stillbirth.

  9. Experimental injury study of children seated behind collapsing front seats in rear impacts.

    PubMed

    Saczalski, Kenneth J; Sances, Anthony; Kumaresan, Srirangam; Burton, Joseph L; Lewis, Paul R

    2003-01-01

    In the mid-1990s the U.S. Department of Transportation made recommendations to place children and infants into the rear seating areas of motor vehicles to avoid front seat airbag induced injuries and fatalities. In most rear-impacts, however, the adult occupied front seats will collapse into the rear occupant area and pose another potentially serious injury hazard to the rear-seated children. Since rear-impacts involve a wide range of speeds, impact severity, and various sizes of adults in collapsing front seats, a multi-variable experimental method was employed in conjunction with a multi-level "factorial analysis" technique to study injury potential of rear-seated children. Various sizes of Hybrid III adult surrogates, seated in a "typical" average strength collapsing type of front seat, and a three-year-old Hybrid III child surrogate, seated on a built-in booster seat located directly behind the front adult occupant, were tested at various impact severity levels in a popular "minivan" sled-buck test set up. A total of five test configurations were utilized in this study. Three levels of velocity changes ranging from 22.5 to 42.5 kph were used. The average of peak accelerations on the sled-buck tests ranged from approximately 8.2 G's up to about 11.1 G's, with absolute peak values of just over 14 G's at the higher velocity change. The parameters of the test configuration enabled the experimental data to be combined into a polynomial "injury" function of the two primary independent variables (i.e. front seat adult occupant weight and velocity change) so that the "likelihood" of rear child "injury potential" could be determined over a wide range of the key parameters. The experimentally derived head injury data were used to obtain a preliminary HIC (Head Injury Criterion) polynomial fit at the 900 level for the rear-seated child. Several actual accident cases were compared with the preliminary polynomial fit. This study provides a test-efficient, multi-variable method to compare injury biomechanical data with actual accident cases.
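
    The two-variable polynomial "injury" function can be illustrated with a full quadratic response surface fitted by least squares; the design points and HIC values below are invented, not the study's measurements.

    ```python
    import numpy as np

    # Toy design: front-occupant weight W (kg) and velocity change dV (kph).
    W = np.array([50, 75, 100, 50, 75, 100, 50, 75, 100], float)
    dV = np.array([22.5] * 3 + [32.5] * 3 + [42.5] * 3)
    hic = 100 + 3 * W + 8 * dV + 0.15 * W * dV     # illustrative responses

    # Full quadratic response surface in (W, dV).
    A = np.column_stack([np.ones_like(W), W, dV, W * dV, W**2, dV**2])
    beta, *_ = np.linalg.lstsq(A, hic, rcond=None)

    def hic_hat(w, dv):
        """Evaluate the fitted injury surface at a new (weight, dV) point."""
        return beta @ np.array([1, w, dv, w * dv, w**2, dv**2])

    print("predicted HIC at W=90 kg, dV=40 kph:", round(hic_hat(90, 40)))
    ```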

  10. Multi-state model for studying an intermediate event using time-dependent covariates: application to breast cancer.

    PubMed

    Meier-Hirmer, Carolina; Schumacher, Martin

    2013-06-20

    The aim of this article is to propose several methods that allow investigation of how and whether the shape of the hazard ratio after an intermediate event depends on the waiting time to occurrence of this event and/or the sojourn time in this state. A simple multi-state model, the illness-death model, is used as a framework to investigate the occurrence of this intermediate event. Several approaches are shown and their advantages and disadvantages are discussed. All these approaches are based on Cox regression. As different time-scales are used, these models go beyond Markov models. Different estimation methods for the transition hazards are presented. Additionally, time-varying covariates are included in the model using an approach based on fractional polynomials. The different methods of this article are then applied to a dataset consisting of four studies conducted by the German Breast Cancer Study Group (GBSG). The occurrence of the first isolated locoregional recurrence (ILRR) is studied. The results contribute to the debate on the role of the ILRR with respect to the course of the breast cancer disease and the resulting prognosis. We have investigated different modelling strategies for the transition hazard after ILRR or, in general, after an intermediate event. Including time-dependent structures altered the resulting hazard functions considerably, and it was shown that this time-dependent structure has to be taken into account in the case of our breast cancer dataset. The results indicate that an early recurrence increases the risk of death. A late ILRR increases the hazard function much less, and after the successful removal of the second tumour the risk of death is almost the same as before the recurrence. With respect to distant disease, the appearance of the ILRR only slightly increases the risk of death if the recurrence was treated successfully. It is important to realize that there are several modelling strategies for the intermediate event and that each of these strategies has restrictions and may lead to different results. Especially in the medical literature on breast cancer development, the time-dependency is often neglected in statistical analyses. We show that the time-varying variables cannot be neglected in the case of ILRR and that fractional polynomials are a useful tool for finding the functional form of these time-varying variables.

  11. Multi-variant study of obesity risk genes in African Americans: The Jackson Heart Study.

    PubMed

    Liu, Shijian; Wilson, James G; Jiang, Fan; Griswold, Michael; Correa, Adolfo; Mei, Hao

    2016-11-30

    Genome-wide association studies (GWAS) have been successful in identifying obesity risk genes by single-variant association analysis. For this study, we designed a multi-step analysis strategy aimed at identifying multi-variant effects on obesity risk among candidate genes. Our analyses were focused on 2137 African American participants with body mass index measured in the Jackson Heart Study and 657 common single nucleotide polymorphisms (SNPs) genotyped at 8 GWAS-identified obesity risk genes. Single-variant association tests showed that no SNPs reached significance after multiple-testing adjustment. A follow-up gene-gene interaction analysis, focused on SNPs with unadjusted p-value<0.10, identified 6 significant multi-variant associations. Logistic regression showed that SNPs in these associations did not have significant linear interactions; examination of genetic risk scores showed that 4 multi-variant associations had significant additive effects of risk SNPs; and haplotype association tests showed that all multi-variant associations contained one or several combinations of particular alleles or haplotypes associated with increased obesity risk. Our study showed that obesity risk genes generate multi-variant effects, which can be additive or non-linear interactions, and that multi-variant analysis is an important supplement to existing GWAS for understanding the genetic effects of obesity risk genes. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.
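
    The paper's own selection criterion is not reproduced here, but a generic forward stepwise loop over a tensorized Hermite PC basis conveys the idea: at each step, add the basis function most correlated with the current residual. The two-dimensional toy model and the fixed number of steps are assumptions.

    ```python
    import numpy as np
    from numpy.polynomial import hermite_e
    from itertools import product

    def pc_basis(xi, max_deg):
        """Tensorized probabilists' Hermite basis up to total degree max_deg."""
        dim = xi.shape[1]
        idx = [a for a in product(range(max_deg + 1), repeat=dim)
               if sum(a) <= max_deg]
        cols = []
        for a in idx:
            col = np.ones(len(xi))
            for d, deg in enumerate(a):
                c = np.zeros(deg + 1)
                c[deg] = 1.0
                col *= hermite_e.hermeval(xi[:, d], c)
            cols.append(col)
        return np.column_stack(cols), idx

    rng = np.random.default_rng(6)
    xi = rng.standard_normal((200, 2))                  # 2 stochastic dims
    y = 1 + xi[:, 0] + 0.5 * xi[:, 0] * xi[:, 1]        # sparse true model

    Phi, idx = pc_basis(xi, max_deg=3)
    active, resid = [], y.copy()
    for _ in range(3):                                  # greedy forward steps
        corr = np.abs(Phi.T @ resid)
        corr[active] = -np.inf                          # skip selected terms
        active.append(int(np.argmax(corr)))
        beta, *_ = np.linalg.lstsq(Phi[:, active], y, rcond=None)
        resid = y - Phi[:, active] @ beta
    print("selected multi-indices:", [idx[j] for j in active])
    ```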

  13. Endpoint in plasma etch process using new modified w-multivariate charts and windowed regression

    NASA Astrophysics Data System (ADS)

    Zakour, Sihem Ben; Taleb, Hassen

    2017-09-01

    Endpoint detection is essential for understanding and verifying that a plasma etching process has completed correctly, especially when the etched area is very small (0.1%), and it is crucial for delivering repeatable results on every wafer. The endpoint is reached when the film being etched has been completely cleared. To ensure the desired device performance of the produced integrated circuit, a high-resolution optical emission spectroscopy (OES) sensor is employed. The large number of gathered wavelengths (profiles) is then analyzed and pre-processed using a newly proposed simple algorithm named Spectra Peak Selection (SPS) to select the important wavelengths; wavelet analysis (WA) is then employed to enhance detection performance by suppressing noise and redundant information. The selected and treated OES wavelengths are then used in modified multivariate control charts (MEWMA and Hotelling) for three statistics (mean, SD and CV) and in windowed polynomial regression for the mean. The use of the three aforementioned statistics is motivated by the need to control mean shift, variance shift and their ratio (CV) when both mean and SD are unstable. The control charts demonstrate their performance in detecting the endpoint, with the W-mean Hotelling chart performing best and the CV statistic worst. As the best endpoint detection is given by the W-Hotelling mean statistic, this statistic is used to construct a windowed wavelet Hotelling polynomial regression. The latter can only identify the window containing the endpoint phenomenon.
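
    Windowed polynomial regression for endpoint detection can be sketched as a sliding local linear fit whose slope flags the clearing of the film; the synthetic OES trace, window length, and slope threshold below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.arange(600)
    # Toy OES intensity: flat while the film is present, then a steady drop
    # once the etched film clears (the endpoint region starts at t = 400).
    signal = np.where(t < 400, 1.0, 1.0 - 0.004 * (t - 400))
    signal = signal + rng.normal(0, 0.01, t.size)

    window = 50
    slopes = np.array([np.polyfit(t[s:s + window], signal[s:s + window], 1)[0]
                       for s in range(t.size - window)])

    # Flag the first window whose local linear trend drops below a threshold;
    # as in the paper, this identifies the window containing the endpoint.
    hits = np.where(slopes < -0.002)[0]
    if hits.size:
        print("endpoint window ends near t =", hits[0] + window)
    ```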

  14. Testing Informant Discrepancies as Predictors of Early Adolescent Psychopathology: Why Difference Scores Cannot Tell You What You Want to Know and How Polynomial Regression May

    ERIC Educational Resources Information Center

    Laird, Robert D.; De Los Reyes, Andres

    2013-01-01

    Multiple informants commonly disagree when reporting child and family behavior. In many studies of informant discrepancies, researchers take the difference between two informants' reports and seek to examine the link between this difference score and external constructs (e.g., child maladjustment). In this paper, we review two reasons why…
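
    A hedged sketch of the paper's core contrast: regressing an outcome on a difference score versus a polynomial regression that keeps both informants' reports, their squares, and their product as separate terms. The simulated reports and effect sizes are invented.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    mother = rng.normal(0, 1, 300)                  # informant 1 report
    child = 0.5 * mother + rng.normal(0, 1, 300)    # informant 2 report
    outcome = (0.3 * mother - 0.2 * child + 0.25 * mother * child
               + rng.normal(0, 1, 300))

    # Difference-score approach: collapses the two reports into one term.
    X_diff = sm.add_constant(mother - child)
    print("difference score R2:", sm.OLS(outcome, X_diff).fit().rsquared)

    # Polynomial regression: each component's contribution is testable.
    X_poly = sm.add_constant(np.column_stack(
        [mother, child, mother**2, mother * child, child**2]))
    print("polynomial R2:", sm.OLS(outcome, X_poly).fit().rsquared)
    ```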

  15. Correlation among extinction efficiency and other parameters in an aggregate dust model

    NASA Astrophysics Data System (ADS)

    Dhar, Tanuj Kumar; Sekhar Das, Himadri

    2017-10-01

    We study the extinction properties of highly porous Ballistic Cluster-Cluster Aggregate dust aggregates in a wide range of complex refractive indices (1.4 ≤ n ≤ 2.0, 0.001 ≤ k ≤ 1.0) and wavelengths (0.11 μm ≤ λ ≤ 3.4 μm). An attempt has been made for the first time to investigate the correlation among extinction efficiency (Q_ext), composition of dust aggregates (n, k), wavelength of radiation (λ) and size parameter of the monomers (x). If k is fixed at any value between 0.001 and 1.0, Q_ext increases with increase of n from 1.4 to 2.0. Q_ext and n are correlated via linear regression when the cluster size is small, whereas the correlation is quadratic at moderate and higher sizes of the cluster. This feature is observed at all wavelengths (ultraviolet to optical to infrared). We also find that the variation of Q_ext with n is very small when λ is high. When n is fixed at any value between 1.4 and 2.0, it is observed that Q_ext and k are correlated via a polynomial regression equation (of degree 1, 2, 3 or 4), where the degree of the equation depends on the cluster size, n and λ. The correlation is linear for small sizes and quadratic/cubic/quartic for moderate and higher sizes. We have also found that Q_ext and x are correlated via a polynomial regression (of degree 3, 4 or 5) for all values of n. The degree of regression is found to be n- and k-dependent. The set of relations obtained from our work can be used to model interstellar extinction for dust aggregates in a wide range of wavelengths and complex refractive indices.

  16. Stable multi-domain spectral penalty methods for fractional partial differential equations

    NASA Astrophysics Data System (ADS)

    Xu, Qinwu; Hesthaven, Jan S.

    2014-01-01

    We propose stable multi-domain spectral penalty methods suitable for solving fractional partial differential equations with fractional derivatives of any order. First, a high-order discretization is proposed to approximate fractional derivatives of any order on any given grids based on orthogonal polynomials. The approximation order is analyzed and verified through numerical examples. Based on the discrete fractional derivative, we introduce stable multi-domain spectral penalty methods for solving fractional advection and diffusion equations. The equations are discretized in each sub-domain separately and the global schemes are obtained by weakly imposing boundary and interface conditions through a penalty term. Stability of the schemes is analyzed, and numerical examples based on both uniform and nonuniform grids are considered to highlight the flexibility and high accuracy of the proposed schemes.

  17. Multi-frequency Phase Unwrap from Noisy Data: Adaptive Least Squares Approach

    NASA Astrophysics Data System (ADS)

    Katkovnik, Vladimir; Bioucas-Dias, José

    2010-04-01

    Multiple frequency interferometry is, basically, a phase acquisition strategy aimed at reducing or eliminating the ambiguity of the wrapped phase observations or, equivalently, reducing or eliminating the fringe ambiguity order. In multiple frequency interferometry, the phase measurements are acquired at different frequencies (or wavelengths) and recorded using the corresponding sensors (measurement channels). Assuming that the absolute phase to be reconstructed is piecewise smooth, we use a nonparametric regression technique for the phase reconstruction. The nonparametric estimates are derived from a local least squares criterion, which, when applied to the multifrequency data, yields denoised (filtered) phase estimates with extended ambiguity (periodized), compared with the phase ambiguities inherent to each measurement frequency. The filtering algorithm is based on local polynomial approximation (LPA) for the design of nonlinear filters (estimators) and adaptation of these filters to the unknown smoothness of the spatially varying absolute phase [9]. For phase unwrapping from filtered periodized data, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [1]. Simulations give evidence that the proposed algorithm yields state-of-the-art performance for continuous as well as discontinuous phase surfaces, enabling phase unwrapping in extraordinarily difficult situations where all other algorithms fail.

  18. A State Event Detection Algorithm for Numerically Simulating Hybrid Systems with Model Singularities

    DTIC Science & Technology

    2007-01-01

    the case of non-constant step sizes. Therefore the event dynamics after the predictor and corrector phases are, respectively, $g^p_{k+1} = g(x_k + h_{k+1}\,\ldots)$ ... the Extrapolation Polynomial. Using a Taylor series expansion of the predicted event function, eq. (6), $g^p_{k+1} = g_k + h_{k+1} \left.\frac{dg^p}{dt}\right|_{(x,t)=(x_k,t_k)} + \frac{h_{k+1}^2}{2!} \left.\frac{d^2 g^p}{dt^2}\right|_{(x,t)=(x_k,t_k)} + \ldots$ (8), we can determine the value of $g^p_{k+1}$ as a function of the, as yet undetermined, step size $h_{k+1}$. Recalling

  19. Genetic parameters for growth characteristics of free-range chickens under univariate random regression models.

    PubMed

    Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B

    2016-09-01

    Repeated measures from the same individual have been analyzed using repeatability and finite-dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned to each of six subclasses of age at measurement. Random regression curves were modeled using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike Information Criterion, Bayesian Information Criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for the validation of the random regression models. The Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h(2) = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at all ages can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in the 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection. Also, selection for body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.
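
    Stripped of the genetic and permanent environmental random effects, the fixed-curve part of such a model is just a Legendre fit on age rescaled to [-1, 1]; below is a minimal sketch comparing second- and third-order fits on toy growth data.

    ```python
    import numpy as np
    from numpy.polynomial import legendre

    rng = np.random.default_rng(9)
    age = np.sort(rng.uniform(0, 84, 200))             # days of age
    weight = (40 + 25 * age / 84 + 8 * (age / 84) ** 2
              + rng.normal(0, 2, 200))                 # toy body weights

    # Standardize age to [-1, 1], the domain of Legendre polynomials.
    a = 2 * (age - age.min()) / (age.max() - age.min()) - 1

    for order in (2, 3):                               # second vs third order
        X = legendre.legvander(a, order)
        beta, res, *_ = np.linalg.lstsq(X, weight, rcond=None)
        print(f"order {order}: residual SS = {res[0]:.1f}")
    ```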

  20. Flutter analysis using transversality theory

    NASA Technical Reports Server (NTRS)

    Afolabi, D.

    1993-01-01

    A new method of calculating flutter boundaries of undamped aeronautical structures is presented. The method is an application of the weak transversality theorem used in catastrophe theory. In the first instance, the flutter problem is cast in matrix form using a frequency domain method, leading to an eigenvalue matrix. The characteristic polynomial resulting from this matrix usually has a smooth dependence on the system's parameters. As these parameters change with operating conditions, certain critical values are reached at which flutter sets in. Our approach is to use the transversality theorem in locating such flutter boundaries using this criterion: at a flutter boundary, the characteristic polynomial does not intersect the axis of the abscissa transversally. Formulas for computing the flutter boundaries and flutter frequencies of structures with two degrees of freedom are presented, and extension to multi-degree of freedom systems is indicated. The formulas have obvious applications in, for instance, problems of panel flutter at supersonic Mach numbers.
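
    The tangency criterion can be demonstrated numerically: sweep a parameter and watch the separation between the roots of the characteristic polynomial collapse at the boundary. The quadratic toy polynomial below is an assumption standing in for a real 2-DOF flutter determinant.

    ```python
    import numpy as np

    def min_root_gap(coeffs):
        """Smallest separation between polynomial roots; a gap near zero
        signals a repeated root, i.e. the curve is tangent to the axis
        rather than crossing it transversally."""
        r = np.sort_complex(np.roots(coeffs))
        return min(abs(r[i + 1] - r[i]) for i in range(len(r) - 1))

    # Toy characteristic polynomial p(x) = x**2 - 2x + mu, with mu playing
    # the role of an operating parameter such as dynamic pressure.
    for mu in np.linspace(0.5, 1.5, 11):
        print(f"mu = {mu:4.2f}   root gap = {min_root_gap([1.0, -2.0, mu]):.3f}")
    # The gap collapses at mu = 1, where the roots coalesce: the flutter
    # boundary of this toy model.
    ```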

  1. Financial Time Series Prediction Using Spiking Neural Networks

    PubMed Central

    Reid, David; Hussain, Abir Jaafar; Tawfik, Hissam

    2014-01-01

    In this paper a novel application of a particular type of spiking neural network, a Polychronous Spiking Network, was used for financial time series prediction. It is argued that the inherent temporal capabilities of this type of network are suited to non-stationary data such as this. The performance of the spiking neural network was benchmarked against three systems: two "traditional" rate-encoded neural networks, a Multi-Layer Perceptron neural network and a Dynamic Ridge Polynomial neural network, and a standard Linear Predictor Coefficients model. For this comparison three non-stationary and noisy time series were used: IBM stock data, US/Euro exchange rate data, and the price of Brent crude oil. The experiments demonstrated favourable prediction results for the Spiking Neural Network in terms of Annualised Return and prediction error for 5-step-ahead predictions. These results were also supported by other relevant metrics such as Maximum Drawdown and Signal-To-Noise ratio. This work demonstrated the applicability of the Polychronous Spiking Network to financial data forecasting, and this in turn indicates the potential of using such networks over traditional systems in difficult-to-manage non-stationary environments. PMID:25170618

  2. HOMFLY for twist knots and exclusive Racah matrices in representation [333]

    NASA Astrophysics Data System (ADS)

    Morozov, A.

    2018-03-01

    The next step is reported in the program of Racah matrix extraction from the differential expansion of HOMFLY polynomials for twist knots: from the double-column rectangular representations R = [rr] to a triple-column and triple-hook R = [333]. The main new phenomenon is the deviation of the particular coefficient f[332][21] from the corresponding skew dimension, which opens a way to further generalizations.

  3. Genetic analysis of milk production traits of Tunisian Holsteins using random regression test-day model with Legendre polynomials

    PubMed Central

    2018-01-01

    Objective The objective of this study was to estimate genetic parameters of milk, fat, and protein yields within and across lactations in Tunisian Holsteins using a random regression test-day (TD) model. Methods A random regression multiple trait multiple lactation TD model was used to estimate genetic parameters in the Tunisian dairy cattle population. Data were TD yields of milk, fat, and protein from the first three lactations. Random regressions were modeled with third-order Legendre polynomials for the additive genetic, and permanent environment effects. Heritabilities, and genetic correlations were estimated by Bayesian techniques using the Gibbs sampler. Results All variance components tended to be high in the beginning and the end of lactations. Additive genetic variances for milk, fat, and protein yields were the lowest and were the least variable compared to permanent variances. Heritability values tended to increase with parity. Estimates of heritabilities for 305-d yield-traits were low to moderate, 0.14 to 0.2, 0.12 to 0.17, and 0.13 to 0.18 for milk, fat, and protein yields, respectively. Within-parity, genetic correlations among traits were up to 0.74. Genetic correlations among lactations for the yield traits were relatively high and ranged from 0.78±0.01 to 0.82±0.03, between the first and second parities, from 0.73±0.03 to 0.8±0.04 between the first and third parities, and from 0.82±0.02 to 0.84±0.04 between the second and third parities. Conclusion These results are comparable to previously reported estimates on the same population, indicating that the adoption of a random regression TD model as the official genetic evaluation for production traits in Tunisia, as developed by most Interbull countries, is possible in the Tunisian Holsteins. PMID:28823122

  4. Genetic analysis of milk production traits of Tunisian Holsteins using random regression test-day model with Legendre polynomials.

    PubMed

    Ben Zaabza, Hafedh; Ben Gara, Abderrahmen; Rekik, Boulbaba

    2018-05-01

    The objective of this study was to estimate genetic parameters of milk, fat, and protein yields within and across lactations in Tunisian Holsteins using a random regression test-day (TD) model. A random regression multiple trait multiple lactation TD model was used to estimate genetic parameters in the Tunisian dairy cattle population. Data were TD yields of milk, fat, and protein from the first three lactations. Random regressions were modeled with third-order Legendre polynomials for the additive genetic, and permanent environment effects. Heritabilities, and genetic correlations were estimated by Bayesian techniques using the Gibbs sampler. All variance components tended to be high in the beginning and the end of lactations. Additive genetic variances for milk, fat, and protein yields were the lowest and were the least variable compared to permanent variances. Heritability values tended to increase with parity. Estimates of heritabilities for 305-d yield-traits were low to moderate, 0.14 to 0.2, 0.12 to 0.17, and 0.13 to 0.18 for milk, fat, and protein yields, respectively. Within-parity, genetic correlations among traits were up to 0.74. Genetic correlations among lactations for the yield traits were relatively high and ranged from 0.78±0.01 to 0.82±0.03, between the first and second parities, from 0.73±0.03 to 0.8±0.04 between the first and third parities, and from 0.82±0.02 to 0.84±0.04 between the second and third parities. These results are comparable to previously reported estimates on the same population, indicating that the adoption of a random regression TD model as the official genetic evaluation for production traits in Tunisia, as developed by most Interbull countries, is possible in the Tunisian Holsteins.

  5. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga-toxin producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images measured separately by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e. normalized) to relative reflectance, subsampled and spatially registered to match counterpart pixels in the hyperspectral images, which were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) had previously been developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective than PMLR, especially with higher-order polynomial regressions. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image classification algorithms for rapidly differentiating pathogens in agar plates.
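
    A minimal PLSR sketch in the spirit of the spectral recovery step: expand RGB with second-order polynomial terms and regress the full spectrum on them, letting PLS absorb the collinearity the expansion creates. The simulated pixels and spectra are assumptions; the QR-stabilised solver and calibration steps are omitted.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(10)
    n_px, n_bands = 500, 61                      # e.g. 400-1000 nm at 10 nm
    rgb = rng.uniform(size=(n_px, 3))
    # Toy spectra generated as smooth functions of the RGB values.
    wl = np.linspace(0, 1, n_bands)
    spectra = (rgb[:, [0]] * wl + rgb[:, [1]] * (1 - wl)
               + rgb[:, [2]] * wl**2 + rng.normal(0, 0.01, (n_px, n_bands)))

    # Second-order polynomial expansion of RGB before PLSR.
    R, G, B = rgb.T
    X = np.column_stack([R, G, B, R * G, R * B, G * B, R**2, G**2, B**2])

    pls = PLSRegression(n_components=6).fit(X, spectra)
    recon = pls.predict(X)
    print("mean abs reconstruction error:", np.mean(np.abs(recon - spectra)))
    ```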

  6. Spatial modeling and classification of corneal shape.

    PubMed

    Marsolo, Keith; Twa, Michael; Bullimore, Mark A; Parthasarathy, Srinivasan

    2007-03-01

    One of the most promising applications of data mining is in biomedical data used in patient diagnosis. Any method of data analysis intended to support the clinical decision-making process should meet several criteria: it should capture clinically relevant features, be computationally feasible, and provide easily interpretable results. In an initial study, we examined the feasibility of using Zernike polynomials to represent biomedical instrument data in conjunction with a decision tree classifier to distinguish between the diseased and non-diseased eyes. Here, we provide a comprehensive follow-up to that work, examining a second representation, pseudo-Zernike polynomials, to determine whether they provide any increase in classification accuracy. We compare the fidelity of both methods using residual root-mean-square (rms) error and evaluate accuracy using several classifiers: neural networks, C4.5 decision trees, Voting Feature Intervals, and Naïve Bayes. We also examine the effect of several meta-learning strategies: boosting, bagging, and Random Forests (RFs). We present results comparing accuracy as it relates to dataset and transformation resolution over a larger, more challenging, multi-class dataset. They show that classification accuracy is similar for both data transformations, but differs by classifier. We find that the Zernike polynomials provide better feature representation than the pseudo-Zernikes and that the decision trees yield the best balance of classification accuracy and interpretability.

  7. Explaining Support Vector Machines: A Color Based Nomogram

    PubMed Central

    Van Belle, Vanya; Van Calster, Ben; Van Huffel, Sabine; Suykens, Johan A. K.; Lisboa, Paulo

    2016-01-01

    Problem setting Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods are used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models. Objective In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables. Results Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of polynomial kernel, width of RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable. Conclusions This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package and two apps and a movie are provided to illustrate the possibilities offered by the method. PMID:27723811
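
    The degree-versus-explainability trade-off can be illustrated with a small cross-validation sweep over polynomial kernel degrees; the toy data are an assumption, and the paper's nomogram construction itself is not reproduced here.

    ```python
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=400, n_features=6, random_state=0)

    # When several kernels perform equally well, the paper's heuristic is
    # to prefer the lower polynomial degree, which is easier to explain.
    for degree in (1, 2, 3):
        svc = SVC(kernel="poly", degree=degree, coef0=1.0, C=1.0)
        acc = cross_val_score(svc, X, y, cv=5).mean()
        print(f"degree {degree}: CV accuracy = {acc:.3f}")
    ```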

  8. A Small and Slim Coaxial Probe for Single Rice Grain Moisture Sensing

    PubMed Central

    You, Kok Yeow; Mun, Hou Kit; You, Li Ling; Salleh, Jamaliah; Abbas, Zulkifly

    2013-01-01

    Moisture detection of single rice grains using a slim and small open-ended coaxial probe is presented. The coaxial probe is suitable for nondestructive measurement of moisture values in rice grains ranging from 9.5% to 26%. Empirical polynomial models are developed to predict the gravimetric moisture content of rice based on reflection coefficients measured using a vector network analyzer. The relationship between the reflection coefficient and relative permittivity was also established using a regression method and expressed in a polynomial model, whose model coefficients were obtained by fitting data from a Finite Element-based simulation. In addition, the designed single-rice-grain sample holder and experimental set-up are shown. The measurement of single rice grains in this study is more precise compared to the conventional measurement of bulk rice grains, as the random air gaps present in bulk rice grains are excluded. PMID:23493127

  9. A resilient domain decomposition polynomial chaos solver for uncertain elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Mycek, Paul; Contreras, Andres; Le Maître, Olivier; Sargsyan, Khachik; Rizzi, Francesco; Morris, Karla; Safta, Cosmin; Debusschere, Bert; Knio, Omar

    2017-07-01

    A resilient method is developed for the solution of uncertain elliptic PDEs on extreme scale platforms. The method is based on a hybrid domain decomposition, polynomial chaos (PC) framework that is designed to address soft faults. Specifically, parallel and independent solves of multiple deterministic local problems are used to define PC representations of local Dirichlet boundary-to-boundary maps that are used to reconstruct the global solution. A LAD-lasso type regression is developed for this purpose. The performance of the resulting algorithm is tested on an elliptic equation with an uncertain diffusivity field. Different test cases are considered in order to analyze the impacts of correlation structure of the uncertain diffusivity field, the stochastic resolution, as well as the probability of soft faults. In particular, the computations demonstrate that, provided sufficiently many samples are generated, the method effectively overcomes the occurrence of soft faults.

  10. Horizontal vestibuloocular reflex evoked by high-acceleration rotations in the squirrel monkey. I. Normal responses

    NASA Technical Reports Server (NTRS)

    Minor, L. B.; Lasker, D. M.; Backous, D. D.; Hullar, T. E.; Shelhamer, M. J. (Principal Investigator)

    1999-01-01

    The horizontal angular vestibuloocular reflex (VOR) evoked by high-frequency, high-acceleration rotations was studied in five squirrel monkeys with intact vestibular function. The VOR evoked by steps of acceleration in darkness (3,000 degrees/s² reaching a velocity of 150 degrees/s) began after a latency of 7.3 ± 1.5 ms (mean ± SD). Gain of the reflex during the acceleration was 14.2 ± 5.2% greater than that measured once the plateau head velocity had been reached. A polynomial regression was used to analyze the trajectory of the responses to steps of acceleration. A better representation of the data was obtained from a polynomial that included a cubic term, in contrast to an exclusively linear fit. For sinusoidal rotations of 0.5-15 Hz with a peak velocity of 20 degrees/s, the VOR gain measured 0.83 ± 0.06 and did not vary across frequencies or animals. The phase of these responses was close to compensatory except at 15 Hz, where a lag of 5.0 ± 0.9 degrees was noted. The VOR gain did not vary with head velocity at 0.5 Hz but increased with velocity for rotations at frequencies of ≥4 Hz (0.85 ± 0.04 at 4 Hz, 20 degrees/s; 1.01 ± 0.05 at 100 degrees/s, P < 0.0001). No responses to these rotations were noted in two animals that had undergone bilateral labyrinthectomy, indicating that inertia of the eye had a negligible effect for these stimuli. We developed a mathematical model of VOR dynamics to account for these findings. The inputs to the reflex come from linear and nonlinear pathways. The linear pathway is responsible for the constant gain across frequencies at a peak head velocity of 20 degrees/s and also for the phase lag at higher frequencies being less than that expected based on the reflex delay. The frequency- and velocity-dependent nonlinearity in VOR gain is accounted for by the dynamics of the nonlinear pathway. A transfer function that increases the gain of this pathway with frequency and a term related to the third power of head velocity are used to represent the dynamics of this pathway. This model accounts for the experimental findings and provides a method for interpreting responses to these stimuli after vestibular lesions.

  11. Are We All in the Same Boat? The Role of Perceptual Distance in Organizational Health Interventions.

    PubMed

    Hasson, Henna; von Thiele Schwarz, Ulrica; Nielsen, Karina; Tafvelin, Susanne

    2016-10-01

    The study investigates how agreement between leaders' and their teams' perceptions influences intervention outcomes in a leadership-training intervention aimed at improving organizational learning. Agreement, i.e. perceptual distance, was calculated for the organizational learning dimensions at baseline. Changes in the dimensions from pre-intervention to post-intervention were evaluated using polynomial regression analysis with response surface analysis. The general pattern of the results indicated that organizational learning improved when leaders and their teams agreed on the level of organizational learning prior to the intervention. The improvement was greatest when the leader's and the team's perceptions at baseline were aligned and high rather than aligned and low. The least beneficial scenario was when the leader's perceptions were higher than the team's perceptions. These results give insight into the importance of comparing leaders' and their teams' perceptions in intervention research. Polynomial regression analysis with response surface methodology allows three-dimensional examination of the relationship between two predictor variables and an outcome. This contributes knowledge on how combinations of predictor variables may affect an outcome and allows studies of potential non-linearity relating to the outcome. Future studies could use these methods in process evaluation of interventions. Copyright © 2016 John Wiley & Sons, Ltd.
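
    A minimal statsmodels sketch of polynomial regression with the five response surface terms used in such analyses; the simulated leader/team ratings are assumptions, and the follow-up surface tests along the agreement (leader = team) and disagreement (leader = -team) lines are not shown.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(11)
    df = pd.DataFrame({
        "leader": rng.normal(0, 1, 200),     # leader's baseline rating
        "team": rng.normal(0, 1, 200),       # team's baseline rating
    })
    # Toy outcome: improvement is penalized by leader/team disagreement.
    df["change"] = (0.4 * df.leader + 0.4 * df.team
                    - 0.3 * (df.leader - df.team) ** 2
                    + rng.normal(0, 0.5, 200))

    # Quadratic polynomial regression with both squared terms and the
    # product term, the basis of response surface analysis.
    model = smf.ols("change ~ leader + team + I(leader**2)"
                    " + I(leader*team) + I(team**2)", data=df).fit()
    print(model.params)
    ```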

  12. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. The change of water content leads to a change of the indicator's fluorescence color under ultraviolet (UV) light. Whereas the water content values were estimated in the previous research from the spectrum obtained by a bulky and expensive spectrometer, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L*a*b*, u′v′, HSV, and YCBCR have been investigated to establish the optimal color information features over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model along with HSV in a linear domain achieves the minimum mean square error of 1.06% under a 3-fold cross-validation method. Additionally, the resultant water content estimation model is implemented and evaluated on an off-the-shelf Android-based smartphone.

  13. A method for the selection of a functional form for a thermodynamic equation of state using weighted linear least squares stepwise regression

    NASA Technical Reports Server (NTRS)

    Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.

    1976-01-01

    A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.

  14. Temporal hemodynamic classification of two hands tapping using functional near—infrared spectroscopy

    PubMed Central

    Thanh Hai, Nguyen; Cuong, Ngo Q.; Dang Khoa, Truong Q.; Van Toi, Vo

    2013-01-01

    In recent decades, many achievements have been made in imaging and cognitive neuroscience of the human brain. Brain activity can be revealed by a number of different non-invasive technologies, such as Near-Infrared Spectroscopy (NIRS), Magnetic Resonance Imaging (MRI), and ElectroEncephaloGraphy (EEG; Wolpaw et al., 2002; Weiskopf et al., 2004; Blankertz et al., 2006). NIRS has become a convenient technology for experimental brain studies. The change of oxygenation (oxy-Hb) along the task period, depending on the location of the channel on the cortex, has been studied: sustained activation in the motor cortex, transient activation during the initial segments in the somatosensory cortex, and accumulating activation in the frontal lobe (Gentili et al., 2010). Oxy-Hb concentration at the aforementioned sites in the brain can also be used as a predictive factor that allows prediction of a subject's behavior with a considerable degree of precision (Shimokawa et al., 2009). In this paper, a recognition algorithm is described for recognizing whether one taps the left hand (LH) or the right hand (RH). Data with noise and artifacts collected from a multi-channel system are pre-processed using a Savitzky–Golay filter to obtain smoother data. Characteristics of the filtered signals during LH and RH tapping are extracted using a polynomial regression (PR) algorithm. Coefficients of the polynomial, which correspond to oxygen-hemoglobin (oxy-Hb) concentration, are applied to the recognition models of hand tapping. Support Vector Machines (SVM) are applied to validate the obtained coefficient data for hand-tapping recognition. In addition, for comparison, Artificial Neural Networks (ANNs) were also applied to recognize the tapped hand on the same principle. Experiments were conducted over many trials on three subjects to illustrate the effectiveness of the proposed method. PMID:24032008

  15. Temporal hemodynamic classification of two hands tapping using functional near-infrared spectroscopy.

    PubMed

    Thanh Hai, Nguyen; Cuong, Ngo Q; Dang Khoa, Truong Q; Van Toi, Vo

    2013-01-01

    In recent decades, many achievements have been made in imaging and cognitive neuroscience of the human brain. Brain activity can be revealed by a number of different non-invasive technologies, such as Near-Infrared Spectroscopy (NIRS), Magnetic Resonance Imaging (MRI), and ElectroEncephaloGraphy (EEG; Wolpaw et al., 2002; Weiskopf et al., 2004; Blankertz et al., 2006). NIRS has become a convenient technology for experimental brain studies. The change of oxygenation (oxy-Hb) along the task period, depending on the location of the channel on the cortex, has been studied: sustained activation in the motor cortex, transient activation during the initial segments in the somatosensory cortex, and accumulating activation in the frontal lobe (Gentili et al., 2010). Oxy-Hb concentration at the aforementioned sites in the brain can also be used as a predictive factor that allows prediction of a subject's behavior with a considerable degree of precision (Shimokawa et al., 2009). In this paper, a recognition algorithm is described for recognizing whether one taps the left hand (LH) or the right hand (RH). Data with noise and artifacts collected from a multi-channel system are pre-processed using a Savitzky-Golay filter to obtain smoother data. Characteristics of the filtered signals during LH and RH tapping are extracted using a polynomial regression (PR) algorithm. Coefficients of the polynomial, which correspond to oxygen-hemoglobin (oxy-Hb) concentration, are applied to the recognition models of hand tapping. Support Vector Machines (SVM) are applied to validate the obtained coefficient data for hand-tapping recognition. In addition, for comparison, Artificial Neural Networks (ANNs) were also applied to recognize the tapped hand on the same principle. Experiments were conducted over many trials on three subjects to illustrate the effectiveness of the proposed method.
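
    The preprocessing-plus-classification pipeline above maps naturally onto scipy and sklearn; in this hedged sketch the oxy-Hb traces are simulated, the filter window and polynomial orders are guesses, and only the SVM branch (not the ANN comparison) is shown.

    ```python
    import numpy as np
    from scipy.signal import savgol_filter
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(12)
    n_trials, n_samples = 80, 200
    t = np.linspace(0, 1, n_samples)
    labels = rng.integers(0, 2, n_trials)              # 0 = left, 1 = right

    # Toy oxy-Hb traces whose curvature differs between the two classes.
    signals = np.array([(1 if c else -1) * (t - 0.5) ** 2
                        + 0.05 * rng.standard_normal(n_samples)
                        for c in labels])

    # Denoise with a Savitzky-Golay filter, then use low-order polynomial
    # regression coefficients as the features, as in the paper.
    features = []
    for s in signals:
        smooth = savgol_filter(s, window_length=31, polyorder=3)
        features.append(np.polyfit(t, smooth, deg=2))  # 3 coefficients
    features = np.array(features)

    acc = cross_val_score(SVC(kernel="rbf"), features, labels, cv=5).mean()
    print(f"CV accuracy: {acc:.2f}")
    ```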

  16. Color calibration of an RGB camera mounted in front of a microscope with strong color distortion.

    PubMed

    Charrière, Renée; Hébert, Mathieu; Trémeau, Alain; Destouches, Nathalie

    2013-07-20

    This paper aims to show that color calibration of an RGB camera can be achieved even when the optical system in front of the camera introduces strong color distortion. In the present case, the optical system is a microscope containing a halogen lamp, with nonuniform irradiance on the viewed surface. The calibration method proposed in this work is based on an existing method, but it is preceded by a three-step preprocessing of the RGB images that aims at extracting relevant color information from the strongly distorted images, taking into account especially the nonuniform irradiance map and the perturbing texture due to the surface topology of the standard color calibration charts when observed at micrometric scale. The proposed color calibration process consists first in computing the average color of the color-chart patches viewed under the microscope; then computing white balance, gamma correction, and saturation enhancement; and finally applying a third-order polynomial regression color calibration transform. Despite the unusual conditions for color calibration, fairly good performance is achieved with a 48-patch Lambertian color chart: an average CIE-94 color difference lower than 2.5 units is obtained on the color-chart colors.
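
    The final regression step can be sketched as follows; the 48 patch values here are random stand-ins for real chart measurements, and fitting one third-order polynomial regression across channels is an assumption consistent with the abstract:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
measured = rng.random((48, 3))                      # camera patch averages (RGB)
# stand-in reference values with a mild nonlinear distortion plus noise
reference = np.clip(measured ** 1.3 + 0.05 * rng.standard_normal((48, 3)), 0, 1)

# third-order polynomial regression mapping camera RGB to reference RGB
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(measured, reference)
print("max abs residual:", np.abs(model.predict(measured) - reference).max())
```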

  17. Automatic bone outer contour extraction from B-modes ultrasound images based on local phase symmetry and quadratic polynomial fitting

    NASA Astrophysics Data System (ADS)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

    Analyzing ultrasound (US) images to get the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method to capture the internal structures of a human body. However, bone segmentation of US images is still challenging because the images are strongly affected by speckle noise and have poor quality. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images, as an initial step towards three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning for one boundary pixel in each column of the US images using a first-phase-feature search method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole-filling method is then applied, using the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that the proposed method produces excellent results, with an average MSE of 0.65 before and after hole filling.
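
    A toy illustration of the hole-filling idea, assuming a synthetic parabolic boundary and a simulated detection gap (not the paper's data or code):

```python
import numpy as np

cols = np.arange(100)                         # image columns
true_row = 0.01 * (cols - 50) ** 2 + 20       # idealized bone boundary (rows)
detected = true_row + np.random.default_rng(2).normal(0, 0.5, cols.size)
detected[40:55] = np.nan                      # a gap where detection failed

valid = ~np.isnan(detected)
a, b, c = np.polyfit(cols[valid], detected[valid], 2)   # quadratic fit
filled = detected.copy()
filled[~valid] = a * cols[~valid] ** 2 + b * cols[~valid] + c  # hole filling
print("max fill error vs truth:", np.abs(filled[40:55] - true_row[40:55]).max())
```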

  18. A review of downscaling procedures - a contribution to the research on climate change impacts at city scale

    NASA Astrophysics Data System (ADS)

    Smid, Marek; Costa, Ana; Pebesma, Edzer; Granell, Carlos; Bhattacharya, Devanjan

    2016-04-01

    Humankind is now predominantly urban based, and the majority of continuing population growth will take place in urban agglomerations. Urban systems are not only major drivers of climate change, but also impact hot spots. Furthermore, climate change impacts are commonly managed at city scale. Therefore, assessing climate change impacts on urban systems is a very relevant subject of research. Climate and its impacts on all levels (local, meso and global scale), as well as the inter-scale dependencies of those processes, should be subject to detailed analysis. While global and regional projections of future climate are currently available, local-scale information is lacking. Hence, statistical downscaling methodologies represent a potentially efficient way to help close this gap. In general, methodological reviews of downscaling procedures cover the various methods according to their application (e.g. downscaling for hydrological modelling). Some of the most recent and comprehensive studies, such as the ESSEM COST Action ES1102 (VALUE), use the concepts of Perfect Prog and MOS. Other classification schemes of downscaling techniques consider three main categories: linear methods, weather classifications and weather generators. Downscaling and climate modelling represent a multidisciplinary field, where researchers from various backgrounds intersect their efforts, resulting in specific terminology which may be somewhat confusing. For instance, Polynomial Regression (also called Surface Trend Analysis) is a statistical technique, yet in the context of spatial interpolation procedures it is commonly classified as a deterministic technique, while kriging approaches are classified as stochastic. Furthermore, the terms "statistical" and "stochastic" (frequently used as names of sub-classes in downscaling methodological reviews) are not always considered synonymous, even though both could be seen as identical since they refer to methods handling input modelling factors as variables with certain probability distributions. In addition, recent development is moving towards multi-step methodologies containing deterministic and stochastic components. This evolution leads to the introduction of new terms like hybrid or semi-stochastic approaches, which makes efforts to systematically classify downscaling methods into the previously defined categories even more challenging. This work presents a review of statistical downscaling procedures which classifies the methods in two steps. In the first step, we describe several techniques that produce a single climatic surface based on observations. The methods are classified into two categories using an approximation to the broadest consensual statistical terms: linear and non-linear methods. The second step covers techniques that use simulations to generate alternative surfaces, corresponding to different realizations of the same processes. Those simulations are essential because real observational data are limited, and such procedures are crucial for modelling extremes. This work emphasises the link between statistical downscaling methods and the research of climate change impacts at city scale.

  19. Forecast horizon of multi-item dynamic lot size model with perishable inventory.

    PubMed

    Jing, Fuying; Lan, Zirui

    2017-01-01

    This paper studies a multi-item dynamic lot size problem for perishable products where stock deterioration rates and inventory costs are age-dependent. We explore structural properties of an optimal solution under two cost structures and develop a dynamic programming algorithm that solves the problem in polynomial time when the number of products is fixed. We establish forecast horizon results that can help the operations manager decide the precise forecast horizon in a rolling decision-making process. Finally, based on a detailed test bed of instances, we obtain useful managerial insights on the impact of the deterioration rate and lifetime of products on the length of the forecast horizon.

  20. Forecast horizon of multi-item dynamic lot size model with perishable inventory

    PubMed Central

    Jing, Fuying

    2017-01-01

    This paper studies a multi-item dynamic lot size problem for perishable products where stock deterioration rates and inventory costs are age-dependent. We explore structural properties of an optimal solution under two cost structures and develop a dynamic programming algorithm that solves the problem in polynomial time when the number of products is fixed. We establish forecast horizon results that can help the operations manager decide the precise forecast horizon in a rolling decision-making process. Finally, based on a detailed test bed of instances, we obtain useful managerial insights on the impact of the deterioration rate and lifetime of products on the length of the forecast horizon. PMID:29125856

  1. Positivity-preserving numerical schemes for multidimensional advection

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Macvean, M. K.; Lock, A. P.

    1993-01-01

    This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.

  2. An hp-adaptivity and error estimation for hyperbolic conservation laws

    NASA Technical Reports Server (NTRS)

    Bey, Kim S.

    1995-01-01

    This paper presents an hp-adaptive discontinuous Galerkin method for linear hyperbolic conservation laws. A priori and a posteriori error estimates are derived in mesh-dependent norms which reflect the dependence of the approximate solution on the element size (h) and the degree (p) of the local polynomial approximation. The a posteriori error estimate, based on the element residual method, provides bounds on the actual global error in the approximate solution. The adaptive strategy is designed to deliver an approximate solution with the specified level of error in three steps. The a posteriori estimate is used to assess the accuracy of a given approximate solution and the a priori estimate is used to predict the mesh refinements and polynomial enrichment needed to deliver the desired solution. Numerical examples demonstrate the reliability of the a posteriori error estimates and the effectiveness of the hp-adaptive strategy.

  3. Automatic Sub-Pixel Co-Registration of LandSat-8 OLI and Sentinel-2A MSI Images Using Phase Correlation and Machine Learning Based Mapping

    NASA Technical Reports Server (NTRS)

    Skakun, Sergii; Roger, Jean-Claude; Vermote, Eric F.; Masek, Jeffrey G.; Justice, Christopher O.

    2017-01-01

    This study investigates misregistration issues between Landsat-8/OLI and Sentinel-2A/MSI at 30 m resolution, and between multi-temporal Sentinel-2A images at 10 m resolution, using a phase correlation approach and multiple transformation functions. Co-registration of 45 Landsat-8 to Sentinel-2A pairs and 37 Sentinel-2A to Sentinel-2A pairs was analyzed. Phase correlation proved to be a robust approach that allowed us to identify hundreds to thousands of control points on images acquired more than 100 days apart. Overall, misregistration of up to 1.6 pixels at 30 m resolution between Landsat-8 and Sentinel-2A images, and of 1.2 pixels and 2.8 pixels at 10 m resolution between multi-temporal Sentinel-2A images from the same and different orbits, respectively, was observed. The non-linear Random Forest regression used for constructing the mapping function showed the best results in terms of root mean square error (RMSE), yielding an average RMSE of 0.07+/-0.02 pixels at 30 m resolution, and 0.09+/-0.05 and 0.15+/-0.06 pixels at 10 m resolution for the same and adjacent Sentinel-2A orbits, respectively, for multiple tiles and multiple conditions. A simpler 1st-order polynomial function (affine transformation) yielded an RMSE of 0.08+/-0.02 pixels at 30 m resolution and 0.12+/-0.06 (same Sentinel-2A orbits) and 0.20+/-0.09 (adjacent orbits) pixels at 10 m resolution.
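
    Phase correlation itself is a standard technique; a minimal sketch for a pure integer translation (synthetic image and displacement, not the authors' implementation) looks like this:

```python
import numpy as np

rng = np.random.default_rng(3)
img = rng.random((128, 128))
shifted = np.roll(img, shift=(5, -9), axis=(0, 1))    # known displacement

F1, F2 = np.fft.fft2(img), np.fft.fft2(shifted)
cross = np.conj(F1) * F2                              # cross-power spectrum
corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12)).real
dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
# indices past half the image size correspond to negative shifts
dy = dy - img.shape[0] if dy > img.shape[0] // 2 else dy
dx = dx - img.shape[1] if dx > img.shape[1] // 2 else dx
print("estimated shift:", (dy, dx))                   # expect (5, -9)
```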

  4. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
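
    A compact sketch of weighted least squares for a quadratic model, with invented heteroscedastic data rather than the paddy dataset; weights are set to the inverse error variance:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 200)
sigma = 0.2 + 0.3 * x                        # error spread grows with x
y = 1.0 + 0.5 * x - 0.04 * x**2 + sigma * rng.standard_normal(x.size)

X = sm.add_constant(np.column_stack([x, x**2]))   # quadratic design matrix
ols = sm.OLS(y, X).fit()
wls = sm.WLS(y, X, weights=1.0 / sigma**2).fit()  # weight = inverse variance
print("OLS params:", ols.params)
print("WLS params:", wls.params)                  # typically closer to truth
```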

  5. Antibacterial and antifungal activities of pyroligneous acid from wood of Eucalyptus urograndis and Mimosa tenuiflora.

    PubMed

    de Souza Araújo, E; Pimenta, A S; Feijó, F M C; Castro, R V O; Fasciotti, M; Monteiro, T V C; de Lima, K M G

    2018-01-01

    This work aimed to evaluate the antibacterial and antifungal activities of two types of pyroligneous acid (PA) obtained from slow pyrolysis of wood of Mimosa tenuiflora and of a hybrid of Eucalyptus urophylla × Eucalyptus grandis. Wood wedges were carbonized on a heating rate of 1·25°C min -1 until 450°C. Pyrolysis smoke was trapped and condensed to yield liquid products. Crude pyrolysis liquids were bidistilled under 5 mmHg vacuum yielding purified PA. Multi-antibiotic-resistant strains of Escherichia coli, Pseudomonas aeruginosa (ATCC 27853) and Staphylococcus aureus (ATCC 25923) had their sensitivity to PA evaluated using agar diffusion test. Two yeasts were evaluated as well, Candida albicans (ATCC 10231) and Cryptococcus neoformans. GC-MS analysis of both PAs was carried out to obtain their chemical composition. Regression analysis was performed, and models were adjusted, with diameter of inhibition halos and PA concentration (100, 50 and 20%) as parameters. Identity of regression models and equality of parameters in polynomial orthogonal equations were verified. Inhibition halos were observed in the range 15-25 mm of diameter. All micro-organisms were inhibited by both types of PA even in the lowest concentration of 20%. The feasibility of the usage of PAs produced with wood species planted in large scale in Brazil was evident and the real potential as a basis to produce natural antibacterial and antifungal agents, with real possibility to be used in veterinary and zootechnical applications. © 2017 The Society for Applied Microbiology.

  6. Evaluation of force-velocity and power-velocity relationship of arm muscles.

    PubMed

    Sreckovic, Sreten; Cuk, Ivan; Djuric, Sasa; Nedeljkovic, Aleksandar; Mirkov, Dragan; Jaric, Slobodan

    2015-08-01

    A number of recent studies have revealed an approximately linear force-velocity (F-V) and, consequently, a parabolic power-velocity (P-V) relationship of multi-joint tasks. However, the measurement characteristics of their parameters have been neglected, particularly those regarding arm muscles, which could be a problem for using the linear F-V model in both research and routine testing. Therefore, the aims of the present study were to evaluate the strength, shape, reliability, and concurrent validity of the F-V relationship of arm muscles. Twelve healthy participants performed maximum bench press throws against loads ranging from 20 to 70 % of their maximum strength, and linear regression model was applied on the obtained range of F and V data. One-repetition maximum bench press and medicine ball throw tests were also conducted. The observed individual F-V relationships were exceptionally strong (r = 0.96-0.99; all P < 0.05) and fairly linear, although it remains unresolved whether a polynomial fit could provide even stronger relationships. The reliability of parameters obtained from the linear F-V regressions proved to be mainly high (ICC > 0.80), while their concurrent validity regarding directly measured F, P, and V ranged from high (for maximum F) to medium-to-low (for maximum P and V). The findings add to the evidence that the linear F-V and, consequently, parabolic P-V models could be used to study the mechanical properties of muscular systems, as well as to design a relatively simple, reliable, and ecologically valid routine test of the muscle ability of force, power, and velocity production.
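
    The linear F-V model, F(V) = F0(1 - V/V0), and the parabolic P-V curve it implies can be summarized in a few lines; the velocities and forces below are invented for illustration:

```python
import numpy as np

V = np.array([0.4, 0.7, 1.0, 1.3, 1.6])        # throw velocities (m/s)
F = np.array([620., 540., 460., 360., 290.])   # measured forces (N)

slope, F0 = np.polyfit(V, F, 1)                # linear F-V regression
V0 = -F0 / slope                               # velocity intercept
Pmax = F0 * V0 / 4                             # apex of P(V) = F(V)*V at V0/2
print(f"F0 = {F0:.0f} N, V0 = {V0:.2f} m/s, Pmax = {Pmax:.0f} W")
```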

  7. A high-order time-parallel scheme for solving wave propagation problems via the direct construction of an approximate time-evolution operator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Haut, T. S.; Babb, T.; Martinsson, P. G.

    2015-06-16

    Our manuscript demonstrates a technique for efficiently solving the classical wave equation, the shallow water equations, and, more generally, equations of the form ∂u/∂t = Lu, where L is a skew-Hermitian differential operator. The idea is to explicitly construct an approximation to the time-evolution operator exp(τL) for a relatively large time-step τ. Recently developed techniques for approximating oscillatory scalar functions by rational functions, and accelerated algorithms for computing functions of discretized differential operators, are exploited. Principal advantages of the proposed method include: stability even for large time-steps, the possibility to parallelize in time over many characteristic wavelengths, and large speed-ups over existing methods in situations where simulations over long times are required. Numerical examples involving the 2D rotating shallow water equations and the 2D wave equation in an inhomogeneous medium are presented, and the method is compared to the 4th-order Runge-Kutta (RK4) method and to the use of Chebyshev polynomials. The new method achieved high accuracy over long time intervals, at speeds that are orders of magnitude faster than both RK4 and the use of Chebyshev polynomials.
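
    As a toy, dense-matrix illustration of the central idea (one large step with the evolution operator exp(τL)), far simpler than the paper's rational-function machinery:

```python
import numpy as np
from scipy.linalg import expm

n, c, tau = 128, 1.0, 2.0
dx = 2 * np.pi / n
x = np.arange(n) * dx
# periodic central-difference d/dx is skew-symmetric, hence so is L = -c*D
D = (np.roll(np.eye(n), 1, axis=1) - np.roll(np.eye(n), -1, axis=1)) / (2 * dx)
L = -c * D                                   # semi-discrete u_t = -c u_x
u0 = np.exp(np.sin(x))
u = expm(tau * L) @ u0                       # one large, unconditionally stable step
print("L2 norm preserved:", np.isclose(np.linalg.norm(u), np.linalg.norm(u0)))
```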

  8. Parallel Multi-Step/Multi-Rate Integration of Two-Time Scale Dynamic Systems

    NASA Technical Reports Server (NTRS)

    Chang, Johnny T.; Ploen, Scott R.; Sohl, Garett A.; Martin, Bryan J.

    2004-01-01

    Increasing fidelity demands on real-time and high-fidelity simulations are stressing the capacity of modern processors. New integration techniques are required that provide maximum efficiency for systems that are parallelizable. However, many current techniques make assumptions that are at odds with non-cascadable systems. A new serial multi-step/multi-rate integration algorithm for dual-timescale continuous-state systems is presented which applies to these systems, and it is extended to a parallel multi-step/multi-rate algorithm. The superior performance of both algorithms is demonstrated through a representative example.
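
    A schematic of the multi-rate idea, with an assumed two-time-scale linear system and plain Euler substepping (not the authors' algorithm): the fast state is substepped k times inside each slow step.

```python
def rhs_slow(xs, xf):
    return -0.1 * xs + 0.05 * xf      # slow dynamics, weakly coupled to fast

def rhs_fast(xs, xf):
    return -50.0 * xf + xs            # fast dynamics

h_slow, k = 0.01, 20                  # fast step = h_slow / k
xs, xf = 1.0, 0.0
for _ in range(1000):                 # integrate to t = 10
    xs_new = xs + h_slow * rhs_slow(xs, xf)   # one slow Euler step
    for _ in range(k):                        # substep the fast state
        xf += (h_slow / k) * rhs_fast(xs, xf)
    xs = xs_new
print("states at t = 10:", xs, xf)
```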

  9. Moderating effect of intrinsic religiosity on the relationship between depression and cognitive function among community-dwelling older adults.

    PubMed

    Foong, Hui Foh; Hamid, Tengku Aizan; Ibrahim, Rahimah; Haron, Sharifah Azizah

    2018-04-01

    Research has found that depression in later life is associated with cognitive impairment. Thus, a mechanism to reduce the effect of depression on cognitive function is warranted. In this paper, we examine whether intrinsic religiosity moderates the association between depression and cognitive function. The study included 2322 nationally representative community-dwelling elderly people in Malaysia, randomly selected through multi-stage proportional cluster random sampling from Peninsular Malaysia. The elderly were surveyed on socio-demographic information, cognitive function, depression and intrinsic religiosity. A four-step moderated hierarchical regression analysis was employed to test the moderating effect. Statistical analyses were performed using SPSS (version 15.0). Bivariate analyses showed that both depression and intrinsic religiosity had significant relationships with cognitive function. In addition, the four-step moderated hierarchical regression analysis revealed that intrinsic religiosity moderated the association between depression and cognitive function, after controlling for selected socio-demographic characteristics. Intrinsic religiosity might reduce the negative effect of depression on cognitive function. Professionals who work with depressed older adults should seek ways to improve their intrinsic religiosity as one strategy to prevent cognitive impairment.
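
    A generic sketch of such a moderated (hierarchical) regression on synthetic data, where moderation shows up as a significant interaction term added after the main effects:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
depression = rng.normal(size=n)
religiosity = rng.normal(size=n)
# synthetic outcome: the depression effect weakens when religiosity is high
cognition = -0.5 * depression + 0.2 * religiosity \
            + 0.3 * depression * religiosity + rng.normal(size=n)

X_main = sm.add_constant(np.column_stack([depression, religiosity]))
X_full = sm.add_constant(np.column_stack([depression, religiosity,
                                          depression * religiosity]))
step1 = sm.OLS(cognition, X_main).fit()        # main effects only
step2 = sm.OLS(cognition, X_full).fit()        # add the interaction term
print("R2 gain from interaction:", step2.rsquared - step1.rsquared)
print("interaction p-value:", step2.pvalues[-1])
```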

  10. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four sets of orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. The 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are the 2D Legendre polynomials. The Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. The Numerical polynomials are obtained by numerical calculation. The present study compares these four sets of orthogonal polynomials by theoretical analysis and numerical experiments, from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomials are superior to the other three because of their high accuracy and robustness, even in the case of a wavefront with incomplete data.
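
    One of the four families, the 2D Chebyshev product basis, can be used for modal fitting as in this sketch (toy wavefront, plain least squares):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

nx = 64
x = np.linspace(-1, 1, nx)
X, Y = np.meshgrid(x, x)
wavefront = 0.3 * X**2 + 0.1 * X * Y - 0.2 * Y**3     # toy wavefront

deg = 3
A = C.chebvander2d(X.ravel(), Y.ravel(), [deg, deg])  # T_i(x)*T_j(y) basis
coef, *_ = np.linalg.lstsq(A, wavefront.ravel(), rcond=None)
recon = (A @ coef).reshape(nx, nx)
print("RMS residual:", np.sqrt(np.mean((recon - wavefront) ** 2)))
```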

  11. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model that fuses parametric and nonparametric components. The regression model comprises several equations, each with two components, one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component. The model can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of this regression model to the effect of regional socio-economic factors on the use of information technology. More specifically, the response variables are the percentage of households with internet access and the percentage of households with a personal computer, and the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationships between the responses and the predictors, economic growth is treated as a nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression can be applied well, as indicated by the high coefficient of determination of 90 percent.
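
    A bare-bones sketch of one such response equation, assuming a linear parametric part plus a truncated linear spline with arbitrarily chosen knots:

```python
import numpy as np

rng = np.random.default_rng(12)
n = 300
x = rng.normal(size=(n, 2))                   # parametric predictors
z = rng.uniform(0, 10, n)                     # nonparametric predictor
y = x @ np.array([1.0, -0.5]) \
    + np.piecewise(z, [z < 5, z >= 5],
                   [lambda t: 0.3 * t, lambda t: 1.5 + 0.8 * (t - 5)]) \
    + 0.2 * rng.standard_normal(n)

knots = [2.5, 5.0, 7.5]                       # assumed knot locations
spline = np.column_stack([z] + [np.maximum(z - k, 0.0) for k in knots])
design = np.column_stack([np.ones(n), x, spline])
beta, *_ = np.linalg.lstsq(design, y, rcond=None)
print("coefficients:", np.round(beta, 2))
```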

  12. Multi-electrolyte-step anodic aluminum oxide method for the fabrication of self-organized nanochannel arrays

    PubMed Central

    2012-01-01

    Nanochannel arrays were fabricated by the self-organized multi-electrolyte-step anodic aluminum oxide [AAO] method in this study. The anodization conditions used in the multi-electrolyte-step AAO method initially included a phosphoric acid solution as the electrolyte and an applied high voltage; the phosphoric acid was then replaced by an oxalic acid solution as the electrolyte, and a low voltage was applied. This method was used to produce self-organized nanochannel arrays with good regularity and circularity, with less power loss and processing time than the multi-step AAO method. PMID:22333268

  13. Quantum attack-resistent certificateless multi-receiver signcryption scheme.

    PubMed

    Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong

    2013-01-01

    The existing certificateless signcryption schemes were designed mainly based on traditional public key cryptography, in which security relies on hard problems such as factor decomposition and the discrete logarithm. However, these problems can be solved easily by quantum computing, so the existing certificateless signcryption schemes are vulnerable to quantum attack. Multivariate public key cryptography (MPKC), which can resist quantum attack, is one of the alternative solutions for guaranteeing the security of communications in the post-quantum age. Motivated by these concerns, we propose a new construction of a certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC and can thus withstand quantum attack. Multivariate quadratic polynomial operations, which have lower computational complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We prove its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis also shows that our scheme has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with existing schemes in terms of computational complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computational capacity such as smart cards.

  14. Artificial immune algorithm for multi-depot vehicle scheduling problems

    NASA Astrophysics Data System (ADS)

    Wu, Zhongyi; Wang, Donggen; Xia, Linyuan; Chen, Xiaoling

    2008-10-01

    In the fast-developing logistics and supply chain management fields, one of the key problems in decision support systems is how to arrange, for many customers and suppliers, the supplier-to-customer assignment and produce a detailed supply schedule under a set of constraints. Solutions to the multi-depot vehicle scheduling problem (MDVSP) help in solving this problem in transportation applications. The objective of the MDVSP is to minimize the total distance covered by all vehicles, which can be considered as delivery cost or time consumption. The MDVSP is a nondeterministic polynomial-time hard (NP-hard) problem which cannot be solved to optimality within polynomially bounded computational time. Many different approaches have been developed to tackle the MDVSP, such as the exact algorithm (EA), the one-stage approach (OSA), the two-phase heuristic method (TPHM), the tabu search algorithm (TSA), the genetic algorithm (GA) and the hierarchical multiplex structure (HIMS). Most of these methods are time consuming and have a high risk of resulting in a local optimum. In this paper, a new search algorithm is proposed to solve the MDVSP based on Artificial Immune Systems (AIS), which are inspired by vertebrate immune systems. The proposed AIS algorithm is tested with 30 customers and 6 vehicles located in 3 depots. Experimental results show that the artificial immune system algorithm is an effective and efficient method for solving MDVSP problems.

  15. Multi-Party Privacy-Preserving Set Intersection with Quasi-Linear Complexity

    NASA Astrophysics Data System (ADS)

    Cheon, Jung Hee; Jarecki, Stanislaw; Seo, Jae Hong

    Secure computation of the set intersection functionality allows n parties to find the intersection between their datasets without revealing anything else about them. An efficient protocol for such a task could have multiple potential applications in commerce, health care, and security. However, all currently known secure set intersection protocols for n > 2 parties have computational costs that are quadratic in the (maximum) number of entries in the dataset contributed by each party, making secure computation of the set intersection practical only for small datasets. In this paper, we describe the first multi-party protocol for securely computing the set intersection functionality with both communication and computation costs that are quasi-linear in the size of the datasets. For a fixed security parameter, our protocols require O(n²k) bits of communication and Õ(n²k) group multiplications per player in the malicious adversary setting, where k is the size of each dataset. Our protocol follows the basic idea of the protocol proposed by Kissner and Song, but we gain efficiency by using different representations of the polynomials associated with users' datasets and careful employment of algorithms that interpolate or evaluate polynomials on multiple points more efficiently. Moreover, the proposed protocol is robust: it outputs the desired result even if some corrupted players leave during the execution of the protocol.

  16. Numerical Modelling of Tsunami Generated by Deformable Submarine Slides: Parameterisation of Slide Dynamics for Coupling to Tsunami Propagation Model

    NASA Astrophysics Data System (ADS)

    Smith, R. C.; Collins, G. S.; Hill, J.; Piggott, M. D.; Mouradian, S. L.

    2015-12-01

    Numerical modelling informs risk assessment of tsunami generated by submarine slides; however, for large-scale slides modelling can be complex and computationally challenging. Many previous numerical studies have approximated slides as rigid blocks that moved according to prescribed motion. However, wave characteristics are strongly dependent on the motion of the slide and previous work has recommended that more accurate representation of slide dynamics is needed. We have used the finite-element, adaptive-mesh CFD model Fluidity, to perform multi-material simulations of deformable submarine slide-generated waves at real world scales for a 2D scenario in the Gulf of Mexico. Our high-resolution approach represents slide dynamics with good accuracy, compared to other numerical simulations of this scenario, but precludes tracking of wave propagation over large distances. To enable efficient modelling of further propagation of the waves, we investigate an approach to extract information about the slide evolution from our multi-material simulations in order to drive a single-layer wave propagation model, also using Fluidity, which is much less computationally expensive. The extracted submarine slide geometry and position as a function of time are parameterised using simple polynomial functions. The polynomial functions are used to inform a prescribed velocity boundary condition in a single-layer simulation, mimicking the effect the submarine slide motion has on the water column. The approach is verified by successful comparison of wave generation in the single-layer model with that recorded in the multi-material, multi-layer simulations. We then extend this approach to 3D for further validation of this methodology (using the Gulf of Mexico scenario proposed by Horrillo et al., 2013) and to consider the effect of lateral spreading. This methodology is then used to simulate a series of hypothetical submarine slide events in the Arctic Ocean (based on evidence of historic slides) and examine the hazard posed to the UK coast.
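
    The parameterisation step might look like the following sketch, with a stand-in trajectory in place of the extracted slide positions:

```python
import numpy as np

t = np.linspace(0, 120, 25)                   # s, sampling of the CFD run
position = 0.8 * t**2 / (1 + 0.02 * t)        # stand-in slide trajectory (m)

p = np.polynomial.Polynomial.fit(t, position, deg=3)  # simple polynomial fit
v = p.deriv()                                 # velocity for the prescribed b.c.
print("fitted position at t = 60 s:", p(60.0))
print("fitted velocity at t = 60 s:", v(60.0))
```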

  17. Timing and Mode of Landscape Response to Glacial-Interglacial Climate Forcing From Fluvial Fill Terrace Sediments: Humahuaca Basin, E Cordillera, NW Argentina

    NASA Astrophysics Data System (ADS)

    Schildgen, T. F.; Robinson, R. A. J.; Savi, S.; Bookhagen, B.; Tofelde, S.; Strecker, M. R.

    2014-12-01


  18. Poly-Frobenius-Euler polynomials

    NASA Astrophysics Data System (ADS)

    Kurt, Burak

    2017-07-01

    Hamahata [3] defined the poly-Euler polynomials and the generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this, we define the poly-Frobenius-Euler polynomials. We give some relations for these polynomials. Also, we prove relationships between the poly-Frobenius-Euler polynomials and Stirling numbers of the second kind.

  19. Mechanical and Metallurgical Evolution of Stainless Steel 321 in a Multi-step Forming Process

    NASA Astrophysics Data System (ADS)

    Anderson, M.; Bridier, F.; Gholipour, J.; Jahazi, M.; Wanjara, P.; Bocher, P.; Savoie, J.

    2016-04-01

    This paper examines the metallurgical evolution of AISI Stainless Steel 321 (SS 321) during multi-step forming, a process that involves cycles of deformation with intermediate heat treatment steps. The multi-step forming process was simulated by implementing interrupted uniaxial tensile testing experiments. The evolution of the mechanical properties as well as the microstructural features, such as twins and the textures of the austenite and martensite phases, was studied as a function of the multi-step forming process. The characteristics of the Strain-Induced Martensite (SIM) were also documented for each deformation step and intermediate stress relief heat treatment. The results indicated that the intermediate heat treatments considerably increased the formability of SS 321. Texture analysis showed that the effect of the intermediate heat treatment on the austenite was minor and led to partial recrystallization, while deformation was observed to reinforce the crystallographic texture of the austenite. For the SIM, an Olson-Cohen-type equation was identified to analytically predict its formation during the multi-step forming process. The generated SIM was textured, and this texture weakened with increasing deformation.
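
    Fitting an Olson-Cohen-type law, f(e) = 1 - exp(-b*(1 - exp(-a*e))^n), can be sketched as below; the data and parameter values are illustrative, not the paper's:

```python
import numpy as np
from scipy.optimize import curve_fit

def olson_cohen(strain, a, b, n):
    # martensite volume fraction as a function of plastic strain
    return 1.0 - np.exp(-b * (1.0 - np.exp(-a * strain)) ** n)

strain = np.linspace(0.02, 0.5, 12)
measured = olson_cohen(strain, 6.0, 2.5, 4.5) \
           + 0.01 * np.random.default_rng(6).standard_normal(strain.size)

params, _ = curve_fit(olson_cohen, strain, measured, p0=[5.0, 2.0, 4.0])
print("fitted (alpha, beta, n):", params)
```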

  20. A multi-domain spectral method for time-fractional differential equations

    NASA Astrophysics Data System (ADS)

    Chen, Feng; Xu, Qinwu; Hesthaven, Jan S.

    2015-07-01

    This paper proposes an approach for high-order time integration within a multi-domain setting for time-fractional differential equations. Since the kernel is singular or nearly singular, two main difficulties arise after the domain decomposition: how to properly account for the history/memory part and how to perform the integration accurately. To address these issues, we propose a novel hybrid approach for the numerical integration based on the combination of three-term-recurrence relations of Jacobi polynomials and high-order Gauss quadrature. The different approximations used in the hybrid approach are justified theoretically and through numerical examples. Based on this, we propose a new multi-domain spectral method for high-order accurate time integrations and study its stability properties by identifying the method as a generalized linear method. Numerical experiments confirm hp-convergence for both time-fractional differential equations and time-fractional partial differential equations.
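
    The building blocks (Jacobi three-term recurrences and Gauss quadrature) are available off the shelf; a tiny demonstration, unrelated to the paper's actual solver:

```python
import numpy as np
from scipy.special import eval_jacobi, roots_jacobi

# 8-point Gauss-Jacobi rule for the weight (1-x)^0 * (1+x)^0.5 on [-1, 1]
x, w = roots_jacobi(8, 0.0, 0.5)
# integrate P_3^(0,0.5)(x)^2 against that weight (exact for this degree)
val = np.sum(w * eval_jacobi(3, 0.0, 0.5, x) ** 2)
print("weighted norm^2 of P_3:", val)
```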

  1. Optical alignment procedure utilizing neural networks combined with Shack-Hartmann wavefront sensor

    NASA Astrophysics Data System (ADS)

    Adil, Fatime Zehra; Konukseven, Erhan İlhan; Balkan, Tuna; Adil, Ömer Faruk

    2017-05-01

    In the design of pilot helmets with night vision capability, a transparent visor is used so as not to limit or block the sight of the pilot. The reflected image from the coated part of the visor must coincide with the physical human sight image seen through the nonreflecting regions of the visor. This makes the alignment of the visor halves critical. In essence, this is an alignment problem of two optical parts that are assembled together during the manufacturing process. A Shack-Hartmann wavefront sensor is commonly used for the determination of the misalignments through wavefront measurements, which are quantified in terms of the Zernike polynomials. Although the Zernike polynomials provide very useful feedback about the misalignments, the corrective actions are basically ad hoc. This stems from the fact that there exists no easy inverse relation between the misalignment measurements and the physical causes of the misalignments. This study aims to construct this inverse relation by making use of the expressive power of neural networks in such complex relations. For this purpose, a neural network is designed and trained in MATLAB® regarding which types of misalignments result in which wavefront measurements, quantitatively given by Zernike polynomials. This way, manual and iterative alignment processes relying on trial and error are replaced by the trained guesses of a neural network, so the alignment process is reduced to applying counteractions based on the misalignment causes. Such training requires data containing misalignment and measurement sets in fine detail, which are hard to obtain manually on a physical setup. For that reason, the optical setup is completely modeled in Zemax® software, and Zernike polynomials are generated for misalignments applied in small steps. The performance of the neural network was tested on the actual physical setup and found promising.
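
    A minimal stand-in for the learned inverse mapping, using an assumed toy forward model in place of the Zemax-generated training data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(7)
misalign = rng.uniform(-1, 1, size=(2000, 3))   # e.g. tilt-x, tilt-y, decenter
# assumed toy forward model: Zernike coefficients as smooth functions of pose
zernike = np.column_stack([
    misalign[:, 0] + 0.3 * misalign[:, 2] ** 2,
    misalign[:, 1] - 0.2 * misalign[:, 0] * misalign[:, 2],
    misalign[:, 2] + 0.1 * misalign[:, 1] ** 2,
    0.5 * misalign[:, 0] * misalign[:, 1],
]) + 0.01 * rng.standard_normal((2000, 4))

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(zernike[:1600], misalign[:1600])        # learn the inverse relation
print("held-out R^2:", net.score(zernike[1600:], misalign[1600:]))
```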

  2. Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos

    DTIC Science & Technology

    2001-09-11

    [Abstract garbled in extraction. The recoverable fragments cite Askey's "Some basic hypergeometric polynomials that generalize Jacobi polynomials" (Memoirs Amer. Math. Soc.) and describe Figure 1, "The Askey scheme of orthogonal polynomials", a tree-structured classification of the hypergeometric orthogonal polynomials from which the generalized polynomial chaos bases are drawn.]

  3. Comparison of polynomial approximations and artificial neural nets for response surfaces in engineering optimization

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1991-01-01

    Engineering optimization problems involve minimizing some function subject to constraints. In areas such as aircraft optimization, the constraint equations may come from numerous disciplines, which complicates the transfer of information between these disciplines and the optimization algorithm. Response surfaces are suited to such problems. They are also suited to problems which may require numerous re-optimizations, such as multi-objective function optimization, or to problems where the design space contains numerous local minima, thus requiring repeated optimizations from different initial designs. Their use has been limited, however, by the fact that developing response surfaces requires function evaluations at randomly selected or preselected points in the design space. Thus, they have been thought to be inefficient compared to algorithms that proceed directly to the optimum solution. A development has taken place in the last several years which may affect the desirability of using response surfaces: it may be that artificial neural nets are more efficient in developing response surfaces than the polynomial approximations which have been used in the past. This development is the concern of this work.

  4. Efficient Global Aerodynamic Modeling from Flight Data

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    2012-01-01

    A method for identifying global aerodynamic models from flight data in an efficient manner is explained and demonstrated. A novel experiment design technique was used to obtain dynamic flight data over a range of flight conditions with a single flight maneuver. Multivariate polynomials and polynomial splines were used with orthogonalization techniques and statistical modeling metrics to synthesize global nonlinear aerodynamic models directly and completely from flight data alone. Simulation data and flight data from a subscale twin-engine jet transport aircraft were used to demonstrate the techniques. Results showed that global multivariate nonlinear aerodynamic dependencies could be accurately identified using flight data from a single maneuver. Flight-derived global aerodynamic model structures, model parameter estimates, and associated uncertainties were provided for all six nondimensional force and moment coefficients for the test aircraft. These models were combined with a propulsion model identified from engine ground test data to produce a high-fidelity nonlinear flight simulation very efficiently. Prediction testing using a multi-axis maneuver showed that the identified global model accurately predicted aircraft responses.
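
    In the same spirit (though far simpler than the orthogonalization-based method described), a multivariate polynomial model can be fitted to fabricated flight-condition data as follows:

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(8)
alpha = rng.uniform(-5, 15, 3000)              # angle of attack (deg)
mach = rng.uniform(0.2, 0.8, 3000)
# toy truth: nonlinear dependence of a lift coefficient on both variables
CL = 0.1 + 0.08 * alpha - 0.001 * alpha**2 + 0.05 * mach * alpha \
     + 0.02 * rng.standard_normal(alpha.size)

X = np.column_stack([alpha, mach])
model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
model.fit(X, CL)
print("R^2 over the sampled conditions:", model.score(X, CL))
```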

  5. Simultaneous estimation of multiple phases in digital holographic interferometry using state space analysis

    NASA Astrophysics Data System (ADS)

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-05-01

    A new approach is proposed for multiple phase estimation from a multicomponent exponential phase signal recorded in multi-beam digital holographic interferometry. It is capable of providing multidimensional measurements in a simultaneous manner from a single recording of the exponential phase signal encoding multiple phases. Each phase within a small window around each pixel is approximated with a first-order polynomial function of the spatial coordinates. The problem of accurately estimating the polynomial coefficients, and in turn the unwrapped phases, is formulated as a state space analysis wherein the coefficients and signal amplitudes are set as the elements of a state vector. The state estimation is performed using the extended Kalman filter. An amplitude discrimination criterion is utilized in order to unambiguously estimate the coefficients associated with the individual signal components. The performance of the proposed method is stable over a wide range of the ratio of signal amplitudes. The pixelwise phase estimation approach of the proposed method allows it to handle fringe patterns that may contain invalid regions.

  6. Multi-Vehicle Function Tracking by Moment Matching

    NASA Astrophysics Data System (ADS)

    Avant, Trevor

    The evolution of many natural and man-made environmental events can be represented as scalar functions of time and space. Examples include the boundary and intensity of wildfires, of waste spills in bodies of water, and of natural emissions of methane from the earth. The difficult task of understanding and monitoring these processes can be accomplished through the use of coordinated groups of vehicles. This thesis devises a method to determine positions of the members of a group of vehicles in the domain of a scalar function which lead to effective sensing of the function. This method involves equating the moments of a scalar function to the moments of a group of positions, which results in a system of polynomial equations to be solved. This methodology also allows for other explicit geometric constraints, in the form of polynomial equations, to be imposed on the vehicles. Several example simulations are shown to demonstrate the advantages and challenges associated with the moment matching technique.
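
    A one-dimensional toy version of the idea, my construction rather than the thesis algorithm: two equally weighted vehicle positions are chosen so that their first two moments match those of a scalar density, by solving the resulting polynomial system numerically.

```python
import numpy as np
from scipy.optimize import fsolve

x = np.linspace(0.0, 10.0, 1000)
dx = x[1] - x[0]
f = np.exp(-0.5 * (x - 4.0) ** 2) + 0.5 * np.exp(-0.5 * (x - 7.0) ** 2)
w = f / (f.sum() * dx)                        # normalize to a density
m1 = (w * x).sum() * dx                       # first raw moment
m2 = (w * x**2).sum() * dx                    # second raw moment

def moment_equations(p):
    a, b = p                                  # two equally weighted vehicles
    return [(a + b) / 2.0 - m1, (a**2 + b**2) / 2.0 - m2]

a, b = fsolve(moment_equations, x0=[3.0, 8.0])
print("vehicle positions:", a, b)
```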

  7. User Selection Criteria of Airspace Designs in Flexible Airspace Management

    NASA Technical Reports Server (NTRS)

    Lee, Hwasoo E.; Lee, Paul U.; Jung, Jaewoo; Lai, Chok Fung

    2011-01-01


  8. Self-other rating agreement and leader-member exchange (LMX): a quasi-replication.

    PubMed

    Barbuto, John E; Wilmot, Michael P; Singh, Matthew; Story, Joana S P

    2012-04-01

    Data from a sample of 83 elected community leaders and 391 direct-report staff (resulting in 333 useable leader-member dyads) were reanalyzed to test relations between self-other rating agreement of servant leadership and member-reported leader-member exchange (LMX). Polynomial regression analysis indicated that the self-other rating agreement model was not statistically significant. Instead, all of the variance in member-reported LMX was accounted for by the others' ratings component alone.

  9. Analysis of precision and accuracy in a simple model of machine learning

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2017-12-01

    Machine learning is a procedure whereby a model of the world is constructed from a training set of examples. It is important that the model capture relevant features of the training set and, at the same time, make correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
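
    A small numerical companion to this point: training error falls monotonically with polynomial degree, while held-out error turns up once the model overfits (synthetic data).

```python
import numpy as np

rng = np.random.default_rng(9)
x_tr, x_te = rng.uniform(-1, 1, 30), rng.uniform(-1, 1, 200)
f = lambda x: np.sin(3 * x)
y_tr = f(x_tr) + 0.2 * rng.standard_normal(x_tr.size)
y_te = f(x_te) + 0.2 * rng.standard_normal(x_te.size)

for deg in (1, 3, 5, 9, 12):
    c = np.polyfit(x_tr, y_tr, deg)
    err_tr = np.mean((np.polyval(c, x_tr) - y_tr) ** 2)
    err_te = np.mean((np.polyval(c, x_te) - y_te) ** 2)
    print(f"degree {deg:2d}: train MSE {err_tr:.3f}, test MSE {err_te:.3f}")
```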

  10. Modeling lactation curves and estimation of genetic parameters in Holstein cows using multiple-trait random regression models.

    PubMed

    Kheirabadi, Khabat; Rashidi, Amir; Alijani, Sadegh; Imumorin, Ikhide

    2014-11-01

    We compared the goodness of fit of three mathematical functions (Legendre polynomials, the Lidauer-Mäntysaari function and the Wilmink function) for describing the lactation curve of primiparous Iranian Holstein cows using multiple-trait random regression models (MT-RRM). Lactational submodels provided the largest daily additive genetic (AG) and permanent environmental (PE) variance estimates at the end and at the onset of lactation, respectively, as well as low genetic correlations between peripheral test-day records. For all models, heritability estimates were highest at the end of lactation (245 to 305 days) and ranged from 0.05 to 0.26, 0.03 to 0.12 and 0.04 to 0.24 for milk, fat and protein yields, respectively. Generally, the genetic correlations between traits depended on how far apart the test days were and on whether records for any two traits fell on the same day. On average, genetic correlations between milk and fat were the lowest and those between fat and protein were intermediate, while those between milk and protein were the highest. Results from all criteria (Akaike's and Schwarz's Bayesian information criteria, and -2*logarithm of the likelihood function) suggested that a model with 2 and 5 coefficients of Legendre polynomials for the AG and PE effects, respectively, was the most adequate for fitting the data. © 2014 Japanese Society of Animal Science.
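
    The Legendre part of such a model reduces to a design matrix of polynomial values at standardized ages, as in this sketch (order 3, arbitrary test days):

```python
import numpy as np
from numpy.polynomial import legendre as L

days = np.linspace(5, 305, 10)                 # test days in milk
# map ages to the Legendre domain [-1, 1]
a = 2 * (days - days.min()) / (days.max() - days.min()) - 1
Z = L.legvander(a, 3)                          # columns P_0..P_3 at each day
print(np.round(Z, 3))   # one row per test day; rows multiply the animal's
                        # random regression coefficients in the mixed model
```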

  11. Genetic evaluation of weekly body weight in Japanese quail using random regression models.

    PubMed

    Karami, K; Zerehdaran, S; Tahmoorespur, M; Barzanooni, B; Lotfi, E

    2017-02-01

    1. A total of 11 826 records from 2489 quails, hatched between 2012 and 2013, were used to estimate genetic parameters for BW (body weight) of Japanese quail using random regression models. Weekly BW was measured from hatch until 49 d of age. WOMBAT software (University of New England, Australia) was used for estimating genetic and phenotypic parameters. 2. Nineteen models were evaluated to identify the best orders of Legendre polynomials. A model with Legendre polynomial of order 3 for additive genetic effect, order 3 for permanent environmental effects and order 1 for maternal permanent environmental effects was chosen as the best model. 3. According to the best model, phenotypic and genetic variances were higher at the end of the rearing period. Although direct heritability for BW reduced from 0.18 at hatch to 0.12 at 7 d of age, it gradually increased to 0.42 at 49 d of age. It indicates that BW at older ages is more controlled by genetic components in Japanese quail. 4. Phenotypic and genetic correlations between adjacent periods except hatching weight were more closely correlated than remote periods. The present results suggested that BW at earlier ages, especially at hatch, are different traits compared to BW at older ages. Therefore, BW at earlier ages could not be used as a selection criterion for improving BW at slaughter age.

  12. Genetic Parameters for Milk Yield and Lactation Persistency Using Random Regression Models in Girolando Cattle

    PubMed Central

    Canaza-Cayo, Ali William; Lopes, Paulo Sávio; da Silva, Marcos Vinicius Gualberto Barbosa; de Almeida Torres, Robledo; Martins, Marta Fonseca; Arbex, Wagner Antonio; Cobuci, Jaime Araujo

    2015-01-01

    A total of 32,817 test-day milk yield (TDMY) records of the first lactation of 4,056 Girolando cows daughters of 276 sires, collected from 118 herds between 2000 and 2011 were utilized to estimate the genetic parameters for TDMY via random regression models (RRM) using Legendre’s polynomial functions whose orders varied from 3 to 5. In addition, nine measures of persistency in milk yield (PSi) and the genetic trend of 305-day milk yield (305MY) were evaluated. The fit quality criteria used indicated RRM employing the Legendre’s polynomial of orders 3 and 5 for fitting the genetic additive and permanent environment effects, respectively, as the best model. The heritability and genetic correlation for TDMY throughout the lactation, obtained with the best model, varied from 0.18 to 0.23 and from −0.03 to 1.00, respectively. The heritability and genetic correlation for persistency and 305MY varied from 0.10 to 0.33 and from −0.98 to 1.00, respectively. The use of PS7 would be the most suitable option for the evaluation of Girolando cattle. The estimated breeding values for 305MY of sires and cows showed significant and positive genetic trends. Thus, the use of selection indices would be indicated in the genetic evaluation of Girolando cattle for both traits. PMID:26323397

  13. Person-city personality fit and entrepreneurial success: An explorative study in China.

    PubMed

    Zhou, Mingjie; Zhou, Yixin; Zhang, Jianxin; Obschonka, Martin; Silbereisen, Rainer K

    2017-08-13

    While the study of personality differences is a traditional psychological approach in entrepreneurship research, economic research directs attention towards the entrepreneurial ecosystems in which entrepreneurial activity is embedded. We combine both approaches and quantify the interplay between the individual personality make-up of entrepreneurs and the local personality composition of ecosystems, with a special focus on person-city personality fit. Specifically, we analyse personality data from N = 26,405 Chinese residents across 42 major Chinese cities, including N = 1091 Chinese entrepreneurs. Multi-level polynomial regression and response surface plots revealed that: (a) individual-level conscientiousness had a positive effect and individual-level agreeableness and neuroticism had negative effects on entrepreneurial success, (b) city-level conscientiousness had a positive, and city-level neuroticism a negative, effect on entrepreneurial success, and (c) additional person-city personality fit effects existed for agreeableness, conscientiousness and neuroticism. For example, entrepreneurs who are high in agreeableness and conduct their business in a city with a low agreeableness level show the lowest entrepreneurial success. In contrast, entrepreneurs who are low in agreeableness and conduct their business in a city with a high agreeableness level show relatively high entrepreneurial success. Implications for research and practice are discussed. © 2017 International Union of Psychological Science.
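
    The polynomial (response-surface) regression underlying such fit analyses can be sketched generically; the data below are synthetic and collapse the multi-level structure to a single level:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(10)
n = 1000
P = rng.normal(size=n)                      # person-level trait score
C = rng.normal(size=n)                      # city-level trait score
# synthetic outcome with a misfit penalty on the (P - C) discrepancy
success = 0.3 * P + 0.2 * C - 0.25 * (P - C) ** 2 + rng.normal(size=n)

X = sm.add_constant(np.column_stack([P, C, P**2, P * C, C**2]))
fit = sm.OLS(success, X).fit()
print(fit.params)   # coefficients b1..b5 feeding the response surface plot
```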

  14. Surface Modified Particles By Multi-Step Addition And Process For The Preparation Thereof

    DOEpatents

    Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew

    2006-01-17

    The present invention relates to a new class of surface modified particles and to a multi-step surface modification process for the preparation of the same. The multi-step surface functionalization process involves two or more reactions to produce particles that are compatible with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through organic linking groups.

  15. Multi-Target Regression via Robust Low-Rank Learning.

    PubMed

    Zhen, Xiantong; Yu, Mengyang; He, Xiaofei; Li, Shuo

    2018-02-01

    Multi-target regression has recently regained great popularity due to its capability of simultaneously learning multiple relevant regression tasks and its wide applications in data mining, computer vision and medical image analysis, while great challenges arise from jointly handling inter-target correlations and input-output relationships. In this paper, we propose Multi-layer Multi-target Regression (MMR) which enables simultaneously modeling intrinsic inter-target correlations and nonlinear input-output relationships in a general framework via robust low-rank learning. Specifically, the MMR can explicitly encode inter-target correlations in a structure matrix by matrix elastic nets (MEN); the MMR can work in conjunction with the kernel trick to effectively disentangle highly complex nonlinear input-output relationships; the MMR can be efficiently solved by a new alternating optimization algorithm with guaranteed convergence. The MMR leverages the strength of kernel methods for nonlinear feature learning and the structural advantage of multi-layer learning architectures for inter-target correlation modeling. More importantly, it offers a new multi-layer learning paradigm for multi-target regression which is endowed with high generality, flexibility and expressive ability. Extensive experimental evaluation on 18 diverse real-world datasets demonstrates that our MMR can achieve consistently high performance and outperforms representative state-of-the-art algorithms, which shows its great effectiveness and generality for multivariate prediction.
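
    The MMR itself couples kernels with a learned structure matrix; as a far simpler stand-in for the low-rank idea, this sketch fits ordinary least squares for all targets and truncates the coefficient matrix to a small rank (plain reduced-rank shrinkage), on synthetic data.

        import numpy as np

        # Fit OLS for all targets jointly, then truncate the (p, t) coefficient
        # matrix to rank r via SVD so that structure is shared across targets.
        rng = np.random.default_rng(2)
        n, p, t, r = 200, 10, 6, 2
        B_true = rng.normal(size=(p, r)) @ rng.normal(size=(r, t))  # rank-r truth
        X = rng.normal(size=(n, p))
        Y = X @ B_true + 0.1 * rng.normal(size=(n, t))

        B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)   # full-rank estimate
        U, s, Vt = np.linalg.svd(B_ols, full_matrices=False)
        B_rr = (U[:, :r] * s[:r]) @ Vt[:r]              # rank-r truncation

        print("rank of truncated estimate:", np.linalg.matrix_rank(B_rr))
        print("relative fit error:",
              np.linalg.norm(Y - X @ B_rr) / np.linalg.norm(Y))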

  16. The effect of osteoporotic vertebral fracture on predicted spinal loads in vivo.

    PubMed

    Briggs, Andrew M; Wrigley, Tim V; van Dieën, Jaap H; Phillips, Bev; Lo, Sing Kai; Greig, Alison M; Bennell, Kim L

    2006-12-01

    The aetiology of osteoporotic vertebral fractures is multi-factorial, and cannot be explained solely by low bone mass. After sustaining an initial vertebral fracture, the risk of subsequent fracture increases greatly. Examination of physiologic loads imposed on vertebral bodies may help to explain a mechanism underlying this fracture cascade. This study tested the hypothesis that model-derived segmental vertebral loading is greater in individuals who have sustained an osteoporotic vertebral fracture compared to those with osteoporosis and no history of fracture. Flexion moments, and compression and shear loads were calculated from T2 to L5 in 12 participants with fractures (66.4 +/- 6.4 years, 162.2 +/- 5.1 cm, 69.1 +/- 11.2 kg) and 19 without fractures (62.9 +/- 7.9 years, 158.3 +/- 4.4 cm, 59.3 +/- 8.9 kg) while standing. Static analysis was used to solve for gravitational loads, while muscle-derived forces were calculated using a detailed trunk muscle model driven by optimization with a cost function set to minimise muscle fatigue. Least squares regression was used to derive polynomial functions to describe normalised load profiles. Regression coefficients were compared between groups to examine differences in loading profiles. Loading at the fractured level, and at one level above and below, was also compared between groups. The fracture group had significantly greater normalised compression (p = 0.0008) and shear force (p < 0.0001) profiles and a trend for a greater flexion moment profile. At the level of fracture, a significantly greater flexion moment (p = 0.001) and shear force (p < 0.001) was observed in the fracture group. A greater flexion moment (p = 0.003) and compression force (p = 0.007) one level below the fracture, and a greater flexion moment (p = 0.002) and shear force (p = 0.002) one level above the fracture, were observed in the fracture group. The differences observed in multi-level spinal loading between the groups may explain a mechanism for increased risk of subsequent vertebral fractures. Interventions aimed at restoring vertebral morphology or reducing thoracic curvature may assist in normalising spine load profiles.

  17. Humeral development from neonatal period to skeletal maturity--application in age and sex assessment.

    PubMed

    Rissech, Carme; López-Costas, Olalla; Turbón, Daniel

    2013-01-01

    The goal of the present study is to examine cross-sectional information on the growth of the humerus based on the analysis of four measurements, namely, diaphyseal length, transversal diameter of the proximal (metaphyseal) end of the shaft, epicondylar breadth and vertical diameter of the head. This analysis was performed in 181 individuals (90 ♂ and 91 ♀) ranging from birth to 25 years of age and belonging to three documented Western European skeletal collections (Coimbra, Lisbon and St. Bride). After testing the homogeneity of the sample, the existence of sexual differences (Student's t- and Mann-Whitney U-test) and the growth of the variables (polynomial regression) were evaluated. The results showed the presence of sexual differences in epicondylar breadth above 20 years of age and vertical diameter of the head from 15 years of age, thus indicating that these two variables may be of use in determining sex from that age onward. The growth pattern of the variables showed a continuous increase and followed first- and second-degree polynomials. However, growth of the transversal diameter of the proximal end of the shaft followed a fourth-degree polynomial. Strong correlation coefficients were identified between humeral size and age for each of the four metric variables. These results indicate that any of the humeral measurements studied herein is likely to serve as a useful means of estimating sub-adult age in forensic samples.

  18. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least squares regression (gPCE-R). It was found that in general the gPCE yields conclusions similar to those of the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
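
    A toy version of the regression route to the expansion coefficients (the gPCE-R idea) for a one-parameter model with a uniform input; the model f below is a placeholder, not the pulse wave propagation model.

        import numpy as np

        # Expand a model output in Legendre polynomials of a uniform parameter
        # and estimate the coefficients by least squares regression.
        rng = np.random.default_rng(3)
        f = lambda z: np.exp(0.7 * z) + 0.2 * z**3      # placeholder model on [-1, 1]

        z = rng.uniform(-1, 1, size=200)                # training samples
        deg = 6
        Psi = np.polynomial.legendre.legvander(z, deg)  # regression matrix
        c, *_ = np.linalg.lstsq(Psi, f(z), rcond=None)

        # Under U(-1, 1), E[P_k^2] = 1/(2k+1), so the output variance
        # decomposes over the expansion orders:
        var_k = c[1:] ** 2 / (2 * np.arange(1, deg + 1) + 1)
        print("total variance estimate:", var_k.sum().round(4))
        print("per-order contribution:", (var_k / var_k.sum()).round(3))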

  19. Optimization of Paclitaxel Containing pH-Sensitive Liposomes By 3 Factor, 3 Level Box-Behnken Design.

    PubMed

    Rane, Smita; Prabhakar, Bala

    2013-07-01

    The aim of this study was to investigate the combined influence of 3 independent variables in the preparation of paclitaxel-containing pH-sensitive liposomes. A 3-factor, 3-level Box-Behnken design was used to derive a second-order polynomial equation and construct contour plots to predict responses. The independent variables selected were the molar ratio of phosphatidylcholine:diolylphosphatidylethanolamine (X1), the molar concentration of cholesterylhemisuccinate (X2), and the amount of drug (X3). Fifteen batches were prepared by the thin film hydration method and evaluated for percent drug entrapment, vesicle size, and pH sensitivity. The transformed values of the independent variables and the percent drug entrapment were subjected to multiple regression to establish a full-model second-order polynomial equation. The F statistic was calculated to confirm the omission of insignificant terms from the full-model equation, yielding a reduced-model polynomial equation to predict the dependent variables. Contour plots were constructed to show the effects of X1, X2, and X3 on the percent drug entrapment. The model was validated for accurate prediction of the percent drug entrapment by performing checkpoint analysis. The computer optimization process and contour plots predicted the levels of independent variables X1, X2, and X3 (0.99, -0.06, 0, respectively) for a maximized percent drug entrapment with constraints on vesicle size and pH sensitivity.
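
    The sketch below fits the full second-order polynomial to a 3-factor Box-Behnken design (12 edge runs plus 3 center runs); the response values are invented for illustration, and the coded levels follow the usual -1/0/+1 convention.

        import numpy as np

        # Build the 15-run Box-Behnken design for 3 factors.
        pm = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
        runs = ([(a, b, 0) for a, b in pm] + [(a, 0, b) for a, b in pm]
                + [(0, a, b) for a, b in pm] + [(0, 0, 0)] * 3)
        D = np.array(runs, dtype=float)
        x1, x2, x3 = D.T

        # Full second-order model: 1, x1, x2, x3, x1x2, x1x3, x2x3, x1^2, x2^2, x3^2
        X = np.column_stack([np.ones(len(D)), x1, x2, x3,
                             x1 * x2, x1 * x3, x2 * x3, x1**2, x2**2, x3**2])
        y = np.array([72, 68, 80, 77, 70, 75, 74, 79,
                      65, 69, 71, 73, 82, 81, 82.5])   # made-up % entrapment
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print("coefficients:", beta.round(2))  # candidates for the reduced model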

  20. Step width alters iliotibial band strain during running.

    PubMed

    Meardon, Stacey A; Campbell, Samuel; Derrick, Timothy R

    2012-11-01

    This study assessed the effect of step width during running on factors related to iliotibial band (ITB) syndrome. Three-dimensional (3D) kinematics and kinetics were recorded from 15 healthy recreational runners during overground running under various step width conditions (preferred and at least +/- 5% of their leg length). Strain and strain rate were estimated from a musculoskeletal model of the lower extremity. Greater ITB strain and strain rate were found in the narrower step width condition (p < 0.001, p = 0.040). ITB strain was significantly (p < 0.001) greater in the narrow condition than the preferred and wide conditions and it was greater in the preferred condition than the wide condition. ITB strain rate was significantly greater in the narrow condition than the wide condition (p = 0.020). Polynomial contrasts revealed a linear increase in both ITB strain and strain rate with decreasing step width. We conclude that relatively small decreases in step width can substantially increase ITB strain as well as strain rates. Increasing step width during running, especially in persons whose running style is characterized by a narrow step width, may be beneficial in the treatment and prevention of running-related ITB syndrome.

  1. Solving Multi-variate Polynomial Equations in a Finite Field

    DTIC Science & Technology

    2013-06-01

    Algebraic Background: In this section, some algebraic definitions and basics are discussed as they pertain to this research. For a more detailed treatment, consult a graph theory text such as [10]. A graph G is a k-partite graph if V(G) can be partitioned into k subsets V1, V2, ..., Vk such that uv is an edge of G only if u and v belong to different partite sets. If, in ...

  2. Aided target recognition processing of MUDSS sonar data

    NASA Astrophysics Data System (ADS)

    Lau, Brian; Chao, Tien-Hsin

    1998-09-01

    The Mobile Underwater Debris Survey System (MUDSS) is a collaborative effort by the Navy and the Jet Propulsion Lab to demonstrate multi-sensor, real-time survey of underwater sites for ordnance and explosive waste (OEW). We describe the sonar processing algorithm, a novel target recognition algorithm incorporating wavelets, morphological image processing, expansion by Hermite polynomials, and neural networks. This algorithm has found all planted targets in MUDSS tests and has achieved spectacular success on another Coastal Systems Station (CSS) sonar image database.

  3. Orthogonal Gaussian process models

    DOE PAGES

    Plumlee, Matthew; Joseph, V. Roshan

    2017-01-01

    Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.

  4. Orthogonal Gaussian process models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Plumlee, Matthew; Joseph, V. Roshan

    Gaussian process models are widely adopted for nonparametric/semi-parametric modeling. Identifiability issues occur when the mean model contains polynomials with unknown coefficients. Though the resulting prediction is unaffected, this leads to poor estimation of the coefficients in the mean model, and thus the estimated mean model loses interpretability. This paper introduces a new Gaussian process model whose stochastic part is orthogonal to the mean part to address this issue. The paper also discusses applications to multi-fidelity simulations using data examples.

  5. Finite element mesh refinement criteria for stress analysis

    NASA Technical Reports Server (NTRS)

    Kittur, Madan G.; Huston, Ronald L.

    1990-01-01

    This paper discusses procedures for finite-element mesh selection and refinement. The objective is to improve accuracy. The procedures are based on (1) the minimization of the trace of the stiffness matrix (optimizing node location); (2) the use of h-version refinement (rezoning, element size reduction, and increasing the number of elements); and (3) the use of p-version refinement (increasing the order of polynomial approximation of the elements). A step-by-step procedure of mesh selection, improvement, and refinement is presented. The criteria for 'goodness' of a mesh are based on strain energy, displacement, and stress values at selected critical points of a structure. An analysis of an aircraft lug problem is presented as an example.

  6. Hamiltonian BVMs (HBVMs): Implementation Details and Applications

    NASA Astrophysics Data System (ADS)

    Brugnano, Luigi; Iavernaro, Felice; Susca, Tiziana

    2009-09-01

    Hamiltonian Boundary Value Methods are one-step schemes of high order in which the internal stages are exploited partly to impose the order conditions (fundamental stages) and partly to confer on the formula the property of conserving the Hamiltonian function when this is a polynomial of a given degree v. The term "silent stages" has been coined for this latter set of extra stages to indicate that their presence does not increase the dimension of the associated nonlinear system to be solved at each step. By considering a specific method in this class, we give some details about how the solution of the nonlinear system may be conveniently carried out and how to compensate for the effect of roundoff errors.

  7. Physical realization of topological quantum walks on IBM-Q and beyond

    NASA Astrophysics Data System (ADS)

    Balu, Radhakrishnan; Castillo, Daniel; Siopsis, George

    2018-07-01

    We discuss an efficient physical realization of topological quantum walks on a one-dimensional finite lattice with periodic boundary conditions (circle). The N-point lattice is realized with log2(N) qubits, and the quantum circuit utilizes a number of quantum gates that is polynomial in the number of qubits. In a certain scaling limit, we show that a large number of steps can be implemented with a number of quantum gates that is independent of the number of steps. We ran the quantum algorithm on the IBM-Q five-qubit quantum computer, thus experimentally demonstrating topological features, such as boundary bound states, on a one-dimensional lattice with N = 4 points.

  8. Tracking Virus Particles in Fluorescence Microscopy Images Using Multi-Scale Detection and Multi-Frame Association.

    PubMed

    Jaiswal, Astha; Godinez, William J; Eils, Roland; Lehmann, Maik Jorg; Rohr, Karl

    2015-11-01

    Automatic fluorescent particle tracking is an essential task to study the dynamics of a large number of biological structures at a sub-cellular level. We have developed a probabilistic particle tracking approach based on multi-scale detection and two-step multi-frame association. The multi-scale detection scheme allows coping with particles in close proximity. For finding associations, we have developed a two-step multi-frame algorithm, which is based on a temporally semiglobal formulation as well as spatially local and global optimization. In the first step, reliable associations are determined for each particle individually in local neighborhoods. In the second step, the global spatial information over multiple frames is exploited jointly to determine optimal associations. The multi-scale detection scheme and the multi-frame association finding algorithm have been combined with a probabilistic tracking approach based on the Kalman filter. We have successfully applied our probabilistic tracking approach to synthetic as well as real microscopy image sequences of virus particles and quantified the performance. We found that the proposed approach outperforms previous approaches.
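
    As a minimal illustration of the Kalman-filter backbone of such trackers (not the paper's detection or multi-frame association machinery), the sketch below runs a constant-velocity filter on one synthetic particle in 2D; all noise levels are illustrative assumptions.

        import numpy as np

        dt = 1.0
        F = np.eye(4)
        F[0, 2] = F[1, 3] = dt                           # state: [x, y, vx, vy]
        H = np.zeros((2, 4))
        H[0, 0] = H[1, 1] = 1.0                          # observe position only
        Q = 0.01 * np.eye(4)                             # process noise
        R = 0.25 * np.eye(2)                             # detection noise

        rng = np.random.default_rng(4)
        x_true = np.array([0.0, 0.0, 1.0, 0.5])
        x_est, P = np.zeros(4), np.eye(4)
        for _ in range(20):
            x_true = F @ x_true
            z = H @ x_true + rng.multivariate_normal(np.zeros(2), R)  # detection
            # predict
            x_est = F @ x_est
            P = F @ P @ F.T + Q
            # update
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x_est = x_est + K @ (z - H @ x_est)
            P = (np.eye(4) - K @ H) @ P
        print("true pos:", x_true[:2].round(2), " estimate:", x_est[:2].round(2))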

  9. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately via adaptively penalizing the likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of order three for body weight (BWE) and body length (BL) and of order two for body depth (BD). By using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time-points exceeded 0.5, although correlations between early and late time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.

  10. Modeling and control for closed environment plant production systems

    NASA Technical Reports Server (NTRS)

    Fleisher, David H.; Ting, K. C.; Janes, H. W. (Principal Investigator)

    2002-01-01

    A computer program was developed to study multiple crop production and control in controlled environment plant production systems. The program simulates crop growth and development under nominal and off-nominal environments. Time-series crop models for wheat (Triticum aestivum), soybean (Glycine max), and white potato (Solanum tuberosum) are integrated with a model-based predictive controller. The controller evaluates and compensates for effects of environmental disturbances on crop production scheduling. The crop models consist of a set of nonlinear polynomial equations, six for each crop, developed using multivariate polynomial regression (MPR). Simulated data from DSSAT crop models, previously modified for crop production in controlled environments with hydroponics under elevated atmospheric carbon dioxide concentration, were used for the MPR fitting. The model-based predictive controller adjusts light intensity, air temperature, and carbon dioxide concentration set points in response to environmental perturbations. Control signals are determined from minimization of a cost function, which is based on the weighted control effort and squared-error between the system response and desired reference signal.
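
    A hedged sketch of the MPR fitting step: a degree-2 polynomial in several scaled environment inputs is fit to a synthetic growth response. The input names and the "true" response are stand-ins for the DSSAT-generated training data, and scikit-learn is assumed available.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(5)
        n = 300
        light, temp, co2 = rng.uniform(0, 1, (3, n))   # scaled environment inputs
        growth = (0.8 * light + 0.5 * temp - 0.6 * temp**2
                  + 0.3 * light * co2 + rng.normal(0, 0.02, n))

        X = np.column_stack([light, temp, co2])
        model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                              LinearRegression())
        model.fit(X, growth)
        print("R^2 on training data:", round(model.score(X, growth), 3))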

  11. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first-degree polynomial) approximation of genotypic and environmental effects to a second-degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that GxE interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113

  12. Comparison of vertical E × B drift velocities and ground-based magnetometer observations of DELTA H in the low latitude under geomagnetically disturbed conditions

    NASA Astrophysics Data System (ADS)

    Prabhu, M.; Unnikrishnan, K.

    2018-04-01

    In the present work, we analyzed the daytime vertical E × B drift velocities obtained from the Jicamarca Unattended Long-term Ionosphere Atmosphere (JULIA) radar, together with the ΔH component of the geomagnetic field, for 22 geomagnetically disturbed events in which either an SC occurred or Dstmax < -50 nT during the period 2006-2011. ΔH is measured as the difference in the magnitude of the horizontal (H) component between a magnetometer placed directly on the magnetic equator (Jicamarca, Peru) and one displaced 6-9° away (Piura, Peru). It provides a direct measure of the daytime electrojet current due to the eastward electric field, which in turn gives the magnitude of the vertical E × B drift velocity in the F region. A positive correlation exists between the peak values of the daytime vertical E × B drift velocity and the peak values of ΔH for the three consecutive days of the events. It was observed that 45% of the events have daytime vertical E × B drift velocity peaks in the magnitude ranges 10-20 m/s and 20-30 m/s, and 20% have peak ΔH in the magnitude ranges 50-60 nT and 80-90 nT. The times of occurrence of the peak values of both the vertical E × B drift velocity and ΔH have a maximum (40%) probability in the same time range, 11:00-13:00 LT. We also investigated the correlations of the E × B drift velocity and of ΔH with the Dst index; a strong positive correlation is found in both cases. Three regression techniques - linear, polynomial (order 2), and polynomial (order 3) - were considered, with the regression parameters in all three cases calculated by the Least Square Method (LSM) using the daytime vertical E × B drift velocity and ΔH. A formula was thus developed that relates the daytime vertical E × B drift velocity to ΔH for disturbed periods. The E × B drift velocity was then evaluated using the formulae obtained from the three regression analyses and validated for the disturbed periods of 3 selected events. The E × B drift velocities estimated by the three regression analyses agree fairly well with the JULIA radar observations under different seasons and solar activity conditions. Root Mean Square (RMS) errors calculated for each case suggest that the polynomial (order 3) regression provides the best agreement with the observations among the three.
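
    The order-comparison step is easy to reproduce in miniature: fit linear, order-2, and order-3 polynomials by least squares and score each by RMS error. The values below are synthetic; the study's coefficients are not reproduced.

        import numpy as np

        rng = np.random.default_rng(6)
        dH = rng.uniform(20, 90, 60)                       # Delta-H peaks (nT)
        vz = 0.004 * dH**2 + 0.1 * dH + rng.normal(0, 2, dH.size)  # drift (m/s)

        for order in (1, 2, 3):
            coef = np.polyfit(dH, vz, order)               # LSM fit of this order
            rms = np.sqrt(np.mean((np.polyval(coef, dH) - vz) ** 2))
            print(f"order {order}: RMS error = {rms:.2f} m/s")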

  13. Meta-regression approximations to reduce publication selection bias.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2014-03-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.
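
    A compact sketch of the PEESE estimator on simulated studies: regress observed effects on the squared standard error (no linear term), weighting by precision, and read the bias-corrected effect off the intercept. The selection mechanism below is a crude stand-in for real publication selection.

        import numpy as np

        rng = np.random.default_rng(7)
        k, true_effect = 80, 0.3
        se = rng.uniform(0.05, 0.5, k)
        eff = true_effect + rng.normal(0, se)
        keep = (eff / se > 1.0) | (rng.random(k) < 0.4)   # crude selection
        se, eff = se[keep], eff[keep]

        w = 1.0 / se**2                                   # precision weights
        X = np.column_stack([np.ones(se.size), se**2])    # PEESE: effect ~ 1 + SE^2
        Wsq = np.sqrt(w)
        beta, *_ = np.linalg.lstsq(X * Wsq[:, None], eff * Wsq, rcond=None)
        print("naive mean effect:", eff.mean().round(3))
        print("PEESE-corrected estimate:", beta[0].round(3))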

  14. Genetic analyses of protein yield in dairy cows applying random regression models with time-dependent and temperature x humidity-dependent covariates.

    PubMed

    Brügemann, K; Gernand, E; von Borstel, U U; König, S

    2011-08-01

    Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (days in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  15. Participant Adherence Indicators Predict Changes in Blood Pressure, Anthropometric Measures, and Self-Reported Physical Activity in a Lifestyle Intervention: HUB City Steps

    PubMed Central

    Thomson, Jessica L.; Landry, Alicia S.; Zoellner, Jamie M.; Connell, Carol; Madson, Michael B.; Molaison, Elaine Fontenot; Yadrick, Kathy

    2014-01-01

    The objective of this secondary analysis was to evaluate the utility of several participant adherence indicators for predicting changes in clinical, anthropometric, dietary, fitness, and physical activity (PA) outcomes in a lifestyle intervention, HUB City Steps, conducted in a southern African American cohort in 2010. HUB City Steps was a 6-month, community-engaged, multi-component, non-controlled intervention targeting hypertension risk factors. Descriptive indicators were constructed using 2 participant adherence measures, education session attendance (ESA) and weekly steps/day pedometer diary submission (PDS), separately and in combination. Analyses, based on data from 269 primarily African American adult participants, included bivariate tests of association and multivariable linear regression to determine significant relationships between 7 adherence indicators and health outcome changes, including clinical, anthropometric, dietary, fitness, and PA measures. ESA indicators were significantly correlated with 4 health outcomes, body mass index (BMI), fat mass, low-density lipoprotein (LDL), and PA (-.29 ≤ r ≤ .23; P<.05). PDS indicators were significantly correlated with PA (r=.27; P<.001). Combination ESA/PDS indicators were significantly correlated with 5 health outcomes, BMI, % body fat (%BF), fat mass, LDL, and PA (r = -.26 to .29; P<.05). Results from the multivariate models indicated that the combination ESA/PDS indicators were the most significant predictors of changes for 5 outcomes, %BF, fat mass, LDL, diastolic blood pressure (DBP), and PA, while ESA performed best for BMI only. For DBP, a 1-unit increase in the continuous categorical ESA/PDS indicator resulted in a .3 mm Hg decrease. Implications for assessing participant adherence in community-based, multi-component lifestyle intervention research are discussed. PMID:24986913

  16. Integration of least angle regression with empirical Bayes for multi-locus genome-wide association studies

    USDA-ARS?s Scientific Manuscript database

    Multi-locus genome-wide association studies have become the state-of-the-art procedure to identify quantitative trait loci (QTL) associated with traits simultaneously. However, implementation of multi-locus models is still difficult. In this study, we integrated least angle regression with empirical B...

  17. Both hands at work: the effect of aging on upper-limb kinematics in a multi-step activity of daily living.

    PubMed

    Gulde, Philipp; Hermsdörfer, Joachim

    2017-05-01

    The kinematic performance of basic motor tasks shows a clear decrease with advancing age. This study examined if the rules known from such tasks can be generalized to activities of daily living. We examined the end-effector kinematics of 13 young and 13 elderly participants in the multi-step activity of daily living of tea-making. Furthermore, we analyzed bimanual behavior and hand dominance in the task using different conditions of execution. The elderly sample took substantially longer to complete the activity (almost 50%) with longer trajectories compared with the young sample. Models of multiple linear regression revealed that the longer trajectories prolonged the trial duration in both groups, and while movement speed influenced the trial duration of young participants, phases of inactivity negatively affected how long the activity took the elderly subjects. No differences were found regarding bimanual performance or hand dominance. We assume that in self-paced activities of daily living, the age-dependent differences in the kinematics are more likely to be based on the higher cognitive demands of the task rather than on pure motor capability. Furthermore, it seems that not all of the rules known from basic motor tasks can be generalized to activities of daily living.

  18. A statistical framework for applying RNA profiling to chemical hazard detection.

    PubMed

    Kostich, Mitchell S

    2017-12-01

    Use of 'omics technologies in environmental science is expanding. However, application is mostly restricted to characterizing molecular steps leading from toxicant interaction with molecular receptors to apical endpoints in laboratory species. Use in environmental decision-making is limited, due to difficulty in elucidating mechanisms in sufficient detail to make quantitative outcome predictions in any single species or in extending predictions to aquatic communities. Here we introduce a mechanism-agnostic statistical approach, supplementing mechanistic investigation by allowing probabilistic outcome prediction even when understanding of molecular pathways is limited, and facilitating extrapolation from results in laboratory test species to predictions about aquatic communities. We use concepts familiar to environmental managers, supplemented with techniques employed for clinical interpretation of 'omics-based biomedical tests. We describe the framework in step-wise fashion, beginning with single test replicates of a single RNA variant, then extending to multi-gene RNA profiling, collections of test replicates, and integration of complementary data. In order to simplify the presentation, we focus on using RNA profiling for distinguishing presence versus absence of chemical hazards, but the principles discussed can be extended to other types of 'omics measurements, multi-class problems, and regression. We include a supplemental file demonstrating many of the concepts using the open source R statistical package. Published by Elsevier Ltd.

  19. XMGR5 users manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, K.R.; Fisher, J.E.

    1997-03-01

    ACE/gr is an XY plotting tool for workstations or X-terminals using X. A few of its features are: user-defined scaling, tick marks, labels, symbols, line styles, and colors; batch mode for unattended plotting; reading and writing of parameters used during a session; polynomial regression, splines, running averages, DFT/FFT, and cross/auto-correlation; and hardcopy support for PostScript, HP-GL, and FrameMaker .mif formats. While ACE/gr has a convenient point-and-click interface, most parameter settings and operations are available through a command line interface (found in Files/Commands).

  20. Finding the Best-Fit Polynomial Approximation in Evaluating Drill Data: the Application of a Generalized Inverse Matrix

    NASA Astrophysics Data System (ADS)

    Karakus, Dogan

    2013-12-01

    In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered the polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
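
    A small sketch of the idea with NumPy's Moore-Penrose pseudoinverse: fit polynomials of increasing degree and keep the degree with the best regression on held-out points. The drill coordinates and calorie values are simulated, not the Turkish coal-site data.

        import numpy as np

        rng = np.random.default_rng(8)
        x = rng.uniform(-1, 1, 83)                     # 83 "drilling points"
        y = 3000 + 400 * x - 250 * x**3 + rng.normal(0, 30, x.size)  # "calories"
        train = rng.random(x.size) < 0.7               # hold out ~30% of points

        best = None
        for deg in range(1, 9):
            V = np.vander(x, deg + 1)                  # monomial design matrix
            coef = np.linalg.pinv(V[train]) @ y[train] # generalized-inverse fit
            rms = np.sqrt(np.mean((V[~train] @ coef - y[~train]) ** 2))
            best = min(best or (rms, deg), (rms, deg))
        print("best degree by held-out RMS:", best[1], "RMS:", round(best[0], 1))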

  1. Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos

    DTIC Science & Technology

    2002-07-25

    Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc., AMS... orthogonal polynomial functionals from the Askey scheme, as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection... (1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combinations. It is worth

  2. Expressions for Fields in the ITER Tokamak

    NASA Astrophysics Data System (ADS)

    Sharma, Stephen

    2017-10-01

    The two most important problems to be solved in the development of working nuclear fusion power plants are sustained partial ignition and turbulence. These two phenomena are the subject of research and investigation through the development of analytic functions and computational models. Ansatz development through Gaussian wave-function approximations, dielectric quark models, field solutions using new elliptic functions, and better descriptions of the polynomials of the superconducting current loops are the critical theoretical developments that need to be improved. Euler-Lagrange equations of motion, in addition to geodesic formulations, generate the particle model, which should correspond to the Dirac dispersive scattering coefficient calculations and the fluid plasma model. Feynman-Hellmann formalism and Heaviside step functional forms are introduced into the fusion equations to produce simple expressions for the kinetic energy and loop currents. Ultimately, a polynomial description of the current loops, the Biot-Savart field, and the Lagrangian must be uncovered before there can be an adequate computational and iterative model of the thermonuclear plasma.

  3. Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.

    PubMed

    Mahajan, Virendra N

    2012-06-20

    In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L(l)(x)L(m)(y), where l and m are positive integers (including zero) and L(l)(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L(l)(x)L(m)(y), there is a corresponding orthonormal polynomial L(l)(y)L(m)(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.
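
    The Gram-Schmidt construction can be checked numerically: orthonormalize a few 2D Legendre products over the unit disk using Monte Carlo inner products, and verify that the Gram matrix is the identity. This is an illustration of the procedure, not the paper's closed-form polynomials.

        import numpy as np

        rng = np.random.default_rng(9)
        pts = rng.uniform(-1, 1, (200000, 2))
        pts = pts[(pts**2).sum(axis=1) <= 1.0]       # uniform samples on the disk
        x, y = pts.T
        L = np.polynomial.legendre.legval

        # A few 2D Legendre products, by total degree: 1, L1(x), L1(y), L2(x), ...
        basis = [np.ones_like(x), L(x, [0, 1]), L(y, [0, 1]),
                 L(x, [0, 0, 1]), L(x, [0, 1]) * L(y, [0, 1]), L(y, [0, 0, 1])]
        inner = lambda f, g: (f * g).mean()          # disk average ~ inner product

        ortho = []
        for f in basis:
            for g in ortho:
                f = f - inner(f, g) * g              # remove earlier components
            ortho.append(f / np.sqrt(inner(f, f)))   # normalize

        G = np.array([[round(inner(f, g), 2) for g in ortho] for f in ortho])
        print(G)                                     # approximately the identity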

  4. SPSS macros to compare any two fitted values from a regression model.

    PubMed

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
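
    The matrix-algebra method in miniature (NumPy rather than SPSS): for OLS, the difference between two fitted values x1'b and x2'b has variance d'Cov(b)d with d = x1 - x2, which covers comparisons no single coefficient answers, such as two ages in a quadratic model. Data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(10)
        n = 120
        age = rng.uniform(20, 70, n)
        y = 5 + 0.8 * age - 0.006 * age**2 + rng.normal(0, 2, n)

        X = np.column_stack([np.ones(n), age, age**2])
        b, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        sigma2 = res[0] / (n - X.shape[1])           # residual variance
        cov_b = sigma2 * np.linalg.inv(X.T @ X)      # Cov(b) under OLS

        x1 = np.array([1.0, 60, 60**2])              # fitted value at age 60
        x2 = np.array([1.0, 40, 40**2])              # fitted value at age 40
        d = x1 - x2
        diff = d @ b
        se = np.sqrt(d @ cov_b @ d)
        print(f"difference = {diff:.2f}, SE = {se:.2f}, 95% CI = "
              f"({diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f})")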

  5. Reliability of third molar development for age estimation in Gujarati population: A comparative study.

    PubMed

    Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema

    2015-01-01

    Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method based on eight developmental stages was developed to determine maturity scores as a function of age, and polynomial functions to determine age as a function of score. The aim of this study was to evaluate the reliability of age estimation from the developmental stages of the third molar on orthopantomograms, using Demirjian's eight-teeth method with both the French maturity scores and an Indian-specific formula. Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using Demirjian's formula and the Indian formula. Statistical analysis used the Chi-square and ANOVA tests, and the P values obtained were statistically significant. There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence, it can be applied for age estimation in the present Gujarati population. Also, females were ahead of males in achieving dental maturity; thus, completion of dental development is attained earlier in females. Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis.

  6. Quantum Attack-Resistant Certificateless Multi-Receiver Signcryption Scheme

    PubMed Central

    Li, Huixian; Chen, Xubao; Pang, Liaojun; Shi, Weisong

    2013-01-01

    The existing certificateless signcryption schemes were designed mainly based on traditional public key cryptography, in which the security relies on hard problems such as factor decomposition and discrete logarithm. However, these problems can be easily solved by quantum computing, so the existing certificateless signcryption schemes are vulnerable to quantum attack. Multivariate public key cryptography (MPKC), which can resist quantum attack, is one of the alternative solutions to guarantee the security of communications in the post-quantum age. Motivated by these concerns, we proposed a new construction of a certificateless multi-receiver signcryption scheme (CLMSC) based on MPKC. The new scheme inherits the security of MPKC and can withstand quantum attack. Multivariate quadratic polynomial operations, which have lower computational complexity than bilinear pairing operations, are employed in signcrypting a message for a certain number of receivers in our scheme. Security analysis shows that our scheme is a secure MPKC-based scheme. We proved its security under the hardness of the Multivariate Quadratic (MQ) problem and its unforgeability under the Isomorphism of Polynomials (IP) assumption in the random oracle model. The analysis results show that our scheme also has the security properties of non-repudiation, perfect forward secrecy, perfect backward secrecy and public verifiability. Compared with the existing schemes in terms of computational complexity and ciphertext length, our scheme is more efficient, which makes it suitable for terminals with low computation capacity like smart cards. PMID:23967037

  7. Theoretical and experimental study of a new algorithm for factoring numbers

    NASA Astrophysics Data System (ADS)

    Tamma, Vincenzo

    The security of codes, for example in credit card and government information, relies on the fact that the factorization of a large integer N is a rather costly process on a classical digital computer. Such security is endangered by Shor's algorithm, which employs entangled quantum systems to find, with a polynomial number of resources, the period of a function which is connected with the factors of N. We can surely expect a possible future realization of such a method for large numbers, but so far the period of Shor's function has been computed only for the number 15. Inspired by Shor's idea, our work aims at methods of factorization based on the periodicity measurement of a given continuous periodic "factoring function" which is physically implementable using an analogue computer. In particular, we have focused on both the theoretical and the experimental analysis of Gauss sums with continuous arguments, leading to a new factorization algorithm. The procedure allows, for the first time, the factorization of several numbers by measuring the periodicity of Gauss sums in first-order "factoring" interference processes. We experimentally implemented this idea by exploiting polychromatic optical interference in the visible range with a multi-path interferometer, and achieved the factorization of seven-digit numbers. The physical principle behind this "factoring" interference procedure can potentially be exploited also on entangled systems, such as multi-photon entangled states, in order to achieve a polynomial scaling in the number of resources.
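
    The periodicity test is easy to simulate: for a trial factor l of N, the normalized truncated Gauss sum has magnitude 1 exactly when l divides N and stays small otherwise. N and the truncation M below are arbitrary illustrative choices.

        import numpy as np

        def gauss_sum(N, l, M=20):
            # |(1/(M+1)) * sum_m exp(-2*pi*i * m^2 * N / l)| for m = 0..M
            m = np.arange(M + 1, dtype=float)
            return np.abs(np.exp(-2j * np.pi * m**2 * N / l).sum()) / (M + 1)

        N = 9999991 * 3                               # a number to factor
        for l in range(2, 30):
            if gauss_sum(N, l) > 0.9:                 # factors give |A| ~ 1
                print(f"candidate factor: {l}, |A| = {gauss_sum(N, l):.3f}")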

  8. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
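
    A quick numerical companion to the comparison: a degree-4 Taylor polynomial of e^x at 0 versus a degree-4 interpolant through five equally spaced nodes on [-1, 1]; the interpolant spreads its error across the interval instead of concentrating accuracy at the expansion point.

        import math
        import numpy as np

        x = np.linspace(-1, 1, 1001)
        taylor = sum(x**k / math.factorial(k) for k in range(5))  # degree 4

        nodes = np.linspace(-1, 1, 5)
        coef = np.polyfit(nodes, np.exp(nodes), 4)    # degree-4 interpolant
        interp = np.polyval(coef, x)

        print("max |error|, Taylor:     ", np.abs(taylor - np.exp(x)).max())
        print("max |error|, interpolant:", np.abs(interp - np.exp(x)).max())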

  9. Multi-step routes of capuchin monkeys in a laser pointer traveling salesman task.

    PubMed

    Howard, Allison M; Fragaszy, Dorothy M

    2014-09-01

    Prior studies have claimed that nonhuman primates plan their routes multiple steps in advance. However, a recent reexamination of multi-step route planning in nonhuman primates indicated that there is no evidence for planning more than one step ahead. We tested multi-step route planning in capuchin monkeys using a pointing device to "travel" to distal targets while stationary. This device enabled us to determine whether capuchins distinguish the spatial relationship between goals and themselves and spatial relationships between goals and the laser dot, allocentrically. In Experiment 1, two subjects were presented with identical food items in Near-Far (one item nearer to subject) and Equidistant (both items equidistant from subject) conditions with a laser dot visible between the items. Subjects moved the laser dot to the items using a joystick. In the Near-Far condition, one subject demonstrated a bias for items closest to self but the other subject chose efficiently. In the second experiment, subjects retrieved three food items in similar Near-Far and Equidistant arrangements. Both subjects preferred food items nearest the laser dot and showed no evidence of multi-step route planning. We conclude that these capuchins do not make choices on the basis of multi-step look ahead strategies. © 2014 Wiley Periodicals, Inc.

  10. Absolute phase estimation: adaptive local denoising and global unwrapping.

    PubMed

    Bioucas-Dias, Jose; Katkovnik, Vladimir; Astola, Jaakko; Egiazarian, Karen

    2008-10-10

    The paper attacks absolute phase estimation with a two-step approach: the first step applies an adaptive local denoising scheme to the modulo-2 pi noisy phase; the second step applies a robust phase unwrapping algorithm to the denoised modulo-2 pi phase obtained in the first step. The adaptive local modulo-2 pi phase denoising is a new algorithm based on local polynomial approximations. The zero-order and the first-order approximations of the phase are calculated in sliding windows of varying size. The zero-order approximation is used for pointwise adaptive window size selection, whereas the first-order approximation is used to filter the phase in the obtained windows. For phase unwrapping, we apply the recently introduced robust (in the sense of discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process.16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that the proposed algorithm yields state-of-the-art performance, enabling strong noise attenuation while preserving image details. (c) 2008 Optical Society of America
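
    A one-dimensional toy analogue of the two-step idea (fixed-window smoothing on the complex exponential, which respects the modulo-2 pi wrap, followed by unwrapping); the paper's adaptive window selection and 2D PUMA unwrapping are not reproduced.

        import numpy as np

        rng = np.random.default_rng(11)
        t = np.linspace(0, 1, 400)
        true_phase = 18 * t**2                        # smooth absolute phase
        noisy_wrapped = np.angle(np.exp(1j * (true_phase
                                              + rng.normal(0, 0.4, t.size))))

        w = 9                                         # fixed smoothing window
        kernel = np.ones(w) / w
        denoised = np.angle(np.convolve(np.exp(1j * noisy_wrapped),
                                        kernel, mode="same"))

        estimate = np.unwrap(denoised)                # step 2: unwrapping
        rms = np.sqrt(np.mean((estimate - true_phase) ** 2))
        print("RMS error vs true phase:", round(rms, 3))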

  11. Multi-objective optimization of process parameters of multi-step shaft formed with cross wedge rolling based on orthogonal test

    NASA Astrophysics Data System (ADS)

    Han, S. T.; Shu, X. D.; Shchukin, V.; Kozhevnikova, G.

    2018-06-01

    In order to achieve reasonable process parameters for forming a multi-step shaft by cross wedge rolling, this study simulated the rolling-forming process of a multi-step shaft in the DEFORM-3D finite element software. An interactive orthogonal experiment was used to study the effect of eight parameters - the first section shrinkage rate φ1, the first forming angle α1, the first spreading angle β1, the first spreading length L1, the second section shrinkage rate φ2, the second forming angle α2, the second spreading angle β2 and the second spreading length L2 - on the quality of the shaft end and the microstructure uniformity. Using the fuzzy mathematics comprehensive evaluation method and extreme difference analysis, the order of influence of the process parameters on the quality of the multi-step shaft was obtained: β2 > φ2 > L1 > α1 > β1 > φ1 > α2 > L2. The results of the study can provide guidance for obtaining multi-step shafts with high mechanical properties and achieving near net forming without a stub bar in cross wedge rolling.

  12. A high order cell-centered semi-Lagrangian scheme for multi-dimensional kinetic simulations of neutral gas flows

    NASA Astrophysics Data System (ADS)

    Güçlü, Y.; Hitchon, W. N. G.

    2012-04-01

    The term 'Convected Scheme' (CS) refers to a family of algorithms, most usually applied to the solution of Boltzmann's equation, which uses a method of characteristics in an integral form to project an initial cell forward to a group of final cells. As such the CS is a 'forward-trajectory' semi-Lagrangian scheme. For multi-dimensional simulations of neutral gas flows, the cell-centered version of this semi-Lagrangian (CCSL) scheme has advantages over other options due to its implementation simplicity, low memory requirements, and easier treatment of boundary conditions. The main drawback of the CCSL-CS to date has been its high numerical diffusion in physical space, because of the 2nd order remapping that takes place at the end of each time step. By means of a modified equation analysis, it is shown that a high order estimate of the remapping error can be obtained a priori, and a small correction to the final position of the cells can be applied upon remapping, in order to achieve full compensation of this error. The resulting scheme is 4th order accurate in space while retaining the desirable properties of the CS: it is conservative and positivity-preserving, and the overall algorithm complexity is not appreciably increased. Two monotone (i.e. non-oscillating) versions of the fourth order CCSL-CS are also presented: one uses a common flux-limiter approach; the other uses a non-polynomial reconstruction to evaluate the derivatives of the density function. The method is illustrated in simple one- and two-dimensional examples, and a fully 3D solution of the Boltzmann equation describing expansion of a gas into vacuum through a cylindrical tube.

  13. A general solution strategy of modified power method for higher mode solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Peng; Lee, Hyunsuk; Lee, Deokjung, E-mail: deokjung@unist.ac.kr

    2016-01-15

    A general solution strategy of the modified power iteration method for calculating higher eigenmodes has been developed and applied in continuous energy Monte Carlo simulation. The new approach adopts four features: 1) eigen decomposition of the transfer matrix, 2) weight cancellation for higher modes, 3) population control with higher mode weights, and 4) stabilization of statistical fluctuations using multi-cycle accumulations. Numerical tests of neutron transport eigenvalue problems successfully demonstrate that the new strategy can significantly accelerate fission source convergence with stable convergence behavior while obtaining multiple higher eigenmodes at the same time. The advantages of the new strategy can be summarized as 1) the replacement of the cumbersome solution step of high-order polynomial equations required by Booth's original method with a simple matrix eigen decomposition, 2) faster fission source convergence in inactive cycles, 3) more stable behavior in both inactive and active cycles, and 4) smaller variances in active cycles. Advantages 3 and 4 can be attributed to the lower sensitivity of the new strategy to statistical fluctuations due to the multi-cycle accumulations. The application of the modified power method to continuous energy Monte Carlo simulation, and higher eigenmodes up to 4th order, are reported for the first time in this paper.
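
    A stand-in sketch of the deterministic core of the method: power iteration with deflation recovers the first few eigenmodes of a symmetric "transfer matrix". The Monte Carlo ingredients (weight cancellation, population control, multi-cycle accumulation) are not represented.

        import numpy as np

        rng = np.random.default_rng(12)
        A = rng.random((6, 6))
        A = A + A.T                                   # symmetric "transfer matrix"

        B = A.copy()
        found = []
        for _ in range(3):
            v = rng.random(6)
            for _ in range(500):                      # plain power iteration
                v = B @ v
                v /= np.linalg.norm(v)
            lam = v @ B @ v                           # Rayleigh quotient
            found.append(lam)
            B = B - lam * np.outer(v, v)              # deflate the converged mode

        ref = np.linalg.eigvalsh(A)
        ref = ref[np.argsort(-np.abs(ref))][:3]       # top 3 by magnitude
        print("power iteration:", np.round(found, 3))
        print("numpy reference:", np.round(ref, 3))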

  14. Discovery of multi-ring basins - Gestalt perception in planetary science

    NASA Technical Reports Server (NTRS)

    Hartmann, W. K.

    1981-01-01

    Early selenographers resolved individual structural components of multi-ring basin systems but missed the underlying large-scale multi-ring basin patterns. The recognition of multi-ring basins as a general class of planetary features can be divided into five steps. Gilbert (1893) took a first step in recognizing radial 'sculpture' around the Imbrium basin system. Several writers through the 1940's rediscovered the radial sculpture and extended this concept by describing concentric rings around several circular maria. Some reminiscences are given about the fourth step - discovery of the Orientale basin and other basin systems by rectified lunar photography at the University of Arizona in 1961-62. Multi-ring basins remained a lunar phenomenon until the fifth step - discovery of similar systems of features on other planets, such as Mars (1972), Mercury (1974), and possibly Callisto and Ganymede (1979). This sequence is an example of gestalt recognition whose implications for scientific research are discussed.

  15. Effect of genotyped cows in the reference population on the genomic evaluation of Holstein cattle.

    PubMed

    Uemoto, Y; Osawa, T; Saburi, J

    2017-03-01

    This study evaluated the dependence of reliability and prediction bias on the prediction method, the contribution of the animals included (bulls or cows), and genetic relatedness, when genotyped cows are added to the progeny-tested bull reference population. We performed genomic evaluation using a Japanese Holstein population, and assessed the accuracy of the genomic enhanced breeding value (GEBV) for three production traits and 13 linear conformation traits. A total of 4564 animals for production traits and 4172 animals for conformation traits were genotyped using the Illumina BovineSNP50 array. Single- and multi-step methods were compared for predicting GEBV in genotyped bull-only and genotyped bull-cow reference populations. No large differences in realized reliability and regression coefficient were found between the two reference populations; however, a slight difference was found between the two methods for production traits. The accuracy of GEBV determined by the single-step method increased slightly when genotyped cows were included in the bull reference population, but decreased slightly with the multi-step method. A validation study was used to evaluate the accuracy of GEBV when 800 additional genotyped bulls (POPbull) or cows (POPcow) were included in the base reference population composed of 2000 genotyped bulls. The realized reliabilities of POPbull were higher than those of POPcow for all traits. For the gain in realized reliability over the base reference population, the average ratios of POPbull gain to POPcow gain for production traits and conformation traits were 2.6 and 7.2, respectively, and the ratios depended on the heritabilities of the traits. For the regression coefficient, no large differences were found between the results for POPbull and POPcow. Another validation study was performed to investigate the effect of genetic relatedness between cows and bulls in the reference and test populations. The effect of genetic relationship among bulls in the reference population was also assessed. The results showed that it is important to account for relatedness among bulls in the reference population. Our studies indicate that the prediction method, the contribution of the animals included, and genetic relatedness can affect prediction accuracy in the genomic evaluation of Holstein cattle when genotyped cows are included in the reference population.

  16. Bi-cubic interpolation for shift-free pan-sharpening

    NASA Astrophysics Data System (ADS)

    Aiazzi, Bruno; Baronti, Stefano; Selva, Massimo; Alparone, Luciano

    2013-12-01

    Most pan-sharpening techniques require re-sampling of the multi-spectral (MS) image to match the size of the panchromatic (Pan) image before the geometric details of Pan are injected into the MS image. This operation is usually performed in a separable fashion by means of symmetric digital low-pass filtering kernels of odd length that implement piecewise local polynomials, typically linear or cubic interpolation functions. Conversely, constant (i.e., nearest-neighbour) and quadratic kernels, implementing degree-zero and degree-two polynomials, respectively, introduce shifts in the magnified images; these shifts are sub-pixel in the case of interpolation by an even factor, which is the most common case. However, in standard satellite systems, the point spread functions (PSF) of the MS and Pan instruments are centered in the middle of each pixel. Hence, commercial MS and Pan data products, whose scale ratio is an even number, are relatively shifted by an odd number of half pixels. Filters of even length may be exploited to compensate for the half-pixel shifts between the MS and Pan sampling grids. In this paper, it is shown that separable polynomial interpolations of odd degree are feasible with linear-phase kernels of even length. The major benefit is that bi-cubic interpolation, which is known to represent the best trade-off between performance and computational complexity, can be applied to commercial MS + Pan datasets without the need to perform a further half-pixel registration after interpolation to align the expanded MS with the Pan image.
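
    The even-length-kernel idea can be sketched directly: for 2:1 interpolation with a half-pixel inter-grid shift, each output phase samples a cubic kernel at quarter-pixel offsets, and the two 4-tap polyphase branches together form an 8-tap (even-length), linear-phase filter. The kernel below is Keys' standard bicubic kernel; the ratio and shift values are illustrative assumptions, not the paper's exact filter design.

    ```python
    import numpy as np

    def keys_cubic(x, a=-0.5):
        """Keys' cubic convolution kernel (the usual 'bicubic' kernel)."""
        x = abs(x)
        if x <= 1:
            return (a + 2) * x**3 - (a + 3) * x**2 + 1
        if x < 2:
            return a * x**3 - 5 * a * x**2 + 8 * a * x - 4 * a
        return 0.0

    def polyphase_filters(ratio=2, shift=0.25):
        """Even-length, linear-phase polyphase branches for interpolation by
        `ratio` with a quarter-pixel offset, so the expanded MS grid lands on
        the Pan grid without a separate half-pixel re-registration step."""
        branches = []
        for p in range(ratio):
            frac = p / ratio + shift            # output-sample position
            taps = np.array([keys_cubic(frac - k) for k in range(-1, 3)])
            branches.append(taps / taps.sum())  # enforce unit DC gain
        return branches

    print(polyphase_filters())   # two 4-tap branches (8 taps in total)
    ```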

  17. Growth and adhesion properties of monosodium urate monohydrate (MSU) crystals

    NASA Astrophysics Data System (ADS)

    Perrin, Clare M.

    The presence of monosodium urate monohydrate (MSU) crystals in the synovial fluid has long been associated with the joint disease gout. To elucidate the molecular level growth mechanism and adhesive properties of MSU crystals, atomic force microscopy (AFM), scanning electron microscopy, and dynamic light scattering (DLS) techniques were employed in the characterization of the (010) and (1-10) faces of MSU, as well as physiologically relevant solutions supersaturated with urate. Topographical AFM imaging of both MSU (010) and (1-10) revealed the presence of crystalline layers of urate arranged into v-shaped features of varying height. Growth rates were measured for both monolayers (elementary steps) and multiple layers (macrosteps) on both crystal faces under a wide range of urate supersaturation in physiologically relevant solutions. Step velocities for monolayers and multiple layers displayed a second order polynomial dependence on urate supersaturation on MSU (010) and (1-10), with step velocities on (1-10) generally half of those measured on MSU (010) in corresponding growth conditions. Perpendicular step velocities on MSU (010) were obtained and also showed a second order polynomial dependence of step velocity with respect to urate supersaturation, which implies a 2D-island nucleation growth mechanism for MSU (010). Extensive topographical imaging of MSU (010) showed island adsorption from urate growth solutions under all urate solution concentrations investigated, lending further support for the determined growth mechanism. Island sizes derived from DLS experiments on growth solutions were in agreement with those measured on MSU (010) topographical images. Chemical force microscopy (CFM) was utilized to characterize the adhesive properties of MSU (010) and (1-10). AFM probes functionalized with amino acid derivatives and bio-macromolecules found in the synovial fluid were brought into contact with both crystal faces and adhesion forces were tabulated into histograms for comparison. AFM probes functionalized with -COO-, -CH3, and -OH functionalities displayed similar adhesion force with both crystal surfaces of MSU, while adhesion force on (1-10) was three times greater than (010) for -NH2+ probes. For AFM probes functionalized with bovine serum albumin, adhesion force was three times greater on MSU (1-10) than (010), most likely due to the more ionic nature of (1-10).
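
    The reported kinetics lend themselves to a one-line least-squares fit: a second-order polynomial of step velocity versus supersaturation. The numbers below are invented placeholders, not the dissertation's measurements.

    ```python
    import numpy as np

    # Hypothetical data: relative urate supersaturation sigma and measured
    # step velocity v (nm/s); values are illustrative only.
    sigma = np.array([0.2, 0.4, 0.6, 0.8, 1.0, 1.2])
    v = np.array([0.05, 0.21, 0.48, 0.83, 1.32, 1.95])

    # Second-order polynomial dependence v ≈ a·σ² + b·σ + c, as reported
    # for both monolayer and macrostep velocities.
    a, b, c = np.polyfit(sigma, v, deg=2)
    print(f"v(σ) ≈ {a:.3f}·σ² + {b:.3f}·σ + {c:.3f}")
    ```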

  18. A polyhedral study of production ramping

    DOE PAGES

    Damci-Kurt, Pelin; Kucukyavuz, Simge; Rajan, Deepak; ...

    2015-06-12

    Here, we give strong formulations of ramping constraints, which model the maximum change in production level for a generator or machine from one time period to the next, and of production limits. For the two-period case, we give a complete description of the convex hull of the feasible solutions. The two-period inequalities can be readily used to strengthen ramping formulations without the need for separation. For the general case, we define exponential classes of multi-period variable upper bound and multi-period ramping inequalities, and give conditions under which these inequalities define facets of ramping polyhedra. Finally, we present exact polynomial separation algorithms for the inequalities and report computational experiments on using them in a branch-and-cut algorithm to solve unit commitment problems in power generation.

  19. A multi-populations multi-strategies differential evolution algorithm for structural optimization of metal nanoclusters

    NASA Astrophysics Data System (ADS)

    Fan, Tian-E.; Shao, Gui-Fang; Ji, Qing-Shuang; Zheng, Ji-Wen; Liu, Tun-dong; Wen, Yu-Hua

    2016-11-01

    Theoretically, determining the structure of a cluster amounts to searching for the global minimum on its potential energy surface. This global minimization problem is often NP-hard, and the number of local minima grows exponentially with cluster size. In this article, a multi-population, multi-strategy differential evolution algorithm is proposed to search for the globally stable structures of Fe and Cr nanoclusters. The algorithm combines a multi-population differential evolution with an elite pool scheme to keep the diversity of the solutions and avoid premature trapping in local optima. Moreover, multiple strategies, such as a growing method in the initialization and three differential strategies in the mutation, are introduced to improve the convergence speed and lower the computational cost. The accuracy and effectiveness of our algorithm have been verified by comparing the results for Fe clusters with the Cambridge Cluster Database. Meanwhile, the performance of our algorithm has been analyzed by comparing its convergence rate and number of energy evaluations with those of the classical DE algorithm, and the contributions of the multiple populations, the multi-strategy mutation, and the growing initialization are assessed separately. Furthermore, the structural growth pattern of Cr clusters has been predicted by this algorithm. The results show that the lowest-energy structures of Cr clusters contain many icosahedra, and the number of icosahedral rings rises with increasing size.

  20. PsiQuaSP-A library for efficient computation of symmetric open quantum systems.

    PubMed

    Gegg, Michael; Richter, Marten

    2017-11-24

    In a recent publication we showed that permutation symmetry reduces the numerical complexity of Lindblad quantum master equations for identical multi-level systems from exponential to polynomial scaling. This is important for open-system dynamics including realistic system-bath interactions and dephasing in, for instance, the Dicke model, multi-Λ system setups, etc. Here we present an object-oriented C++ library that allows one to set up and solve arbitrary quantum optical Lindblad master equations, especially those that are permutationally symmetric in the multi-level systems. PsiQuaSP (Permutation symmetry for identical Quantum Systems Package) builds on the PETSc package for sparse linear algebra methods and differential equations. The aim of PsiQuaSP is to provide flexible, storage-efficient and scalable code while being as user friendly as possible. It is easily applied to many quantum optical or quantum information systems with more than one multi-level system. We first review the basics of the permutation symmetry for multi-level systems in quantum master equations. The application of PsiQuaSP to quantum dynamical problems is illustrated with several typical, simple examples of open quantum optical systems.

  1. Non-Gaussian Analysis of Turbulent Boundary Layer Fluctuating Pressure on Aircraft Skin Panels

    NASA Technical Reports Server (NTRS)

    Rizzi, Stephen A.; Steinwolf, Alexander

    2005-01-01

    The purpose of the study is to investigate the probability density function (PDF) of turbulent boundary layer fluctuating pressures measured on the outer sidewall of a supersonic transport aircraft and to approximate these PDFs by analytical models. Experimental flight results show that the fluctuating pressure PDFs differ from the Gaussian distribution even for standard smooth surface conditions. The PDF tails are wider and longer than those of the Gaussian model. For pressure fluctuations in front of forward-facing step discontinuities, deviations from the Gaussian model are more significant and the PDFs become asymmetrical. There is a certain spatial pattern of the skewness and kurtosis behavior depending on the distance upstream from the step. All characteristics related to non-Gaussian behavior are highly dependent upon the distance from the step and the step height, less dependent on aircraft speed, and not dependent on the fuselage location. A Hermite polynomial transform model and a piecewise-Gaussian model fit the flight data well for both the smooth and stepped conditions. The piecewise-Gaussian approximation is additionally convenient to use once the model has been constructed.
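
    A Hermite polynomial transform model of the kind referenced here maps a standard Gaussian variable through a low-order Hermite series so that the output acquires the desired skewness and excess kurtosis. The sketch below uses illustrative coefficients h3 and h4, not values fitted to the flight data.

    ```python
    import numpy as np

    def hermite_transform(g, h3=0.10, h4=0.05):
        """Cubic Hermite-polynomial transform of a standard Gaussian g;
        h3 and h4 (illustrative) control the skewness and kurtosis of
        the resulting non-Gaussian pressure model."""
        return g + h3 * (g**2 - 1) + h4 * (g**3 - 3 * g)

    rng = np.random.default_rng(0)
    x = hermite_transform(rng.standard_normal(200_000))
    x -= x.mean()
    print("skewness:", np.mean(x**3) / np.std(x)**3)   # > 0: asymmetric
    print("kurtosis:", np.mean(x**4) / np.std(x)**4)   # > 3: longer tails
    ```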

  2. The integration of the motion equations of low-orbiting earth satellites using Taylor's method

    NASA Astrophysics Data System (ADS)

    Krivov, A. V.; Chernysheva, N. A.

    1990-04-01

    A method for the numerical integration of the equations of motion for a satellite is proposed, taking the earth's oblateness and atmospheric drag into account. The method is based on Taylor's representation of the solution to the corresponding polynomial system. An algorithm for choosing the integration step and for error estimation is constructed. The method is realized as a subroutine package, applied to a low-orbiting earth satellite, and the results are compared with those obtained using Everhart's method.
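
    For a polynomial right-hand side, the Taylor coefficients of the solution follow from simple convolution recurrences, which is the essence of the method. The sketch below applies the idea to the scalar ODE y' = -y², a stand-in for the satellite's polynomial system; the step size and series order are fixed here for illustration, whereas the real algorithm chooses them adaptively from an error estimate.

    ```python
    import numpy as np

    def taylor_step(y0, h, order=10):
        """One Taylor-series step for the polynomial ODE y' = -y**2.
        The Cauchy-product recurrence gives the series coefficients:
        c[k+1] = -(1/(k+1)) * sum_{j<=k} c[j]*c[k-j]."""
        c = np.zeros(order + 1)
        c[0] = y0
        for k in range(order):
            c[k + 1] = -np.dot(c[:k + 1], c[k::-1]) / (k + 1)
        return np.polyval(c[::-1], h)      # evaluate the series at t = h

    # Check against the exact solution y(t) = y0 / (1 + y0*t).
    print(taylor_step(1.0, 0.1), 1.0 / 1.1)
    ```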

  3. Development of a Semi-Quantitative Food Frequency Questionnaire to Assess the Dietary Intake of a Multi-Ethnic Urban Asian Population.

    PubMed

    Neelakantan, Nithya; Whitton, Clare; Seah, Sharna; Koh, Hiromi; Rebello, Salome A; Lim, Jia Yi; Chen, Shiqi; Chan, Mei Fen; Chew, Ling; van Dam, Rob M

    2016-08-27

    Assessing habitual food consumption is challenging in multi-ethnic cosmopolitan settings. We systematically developed a semi-quantitative food frequency questionnaire (FFQ) in a multi-ethnic population in Singapore, using data from two 24-h dietary recalls from a nationally representative sample of 805 Singapore residents of Chinese, Malay and Indian ethnicity aged 18-79 years. Key steps included combining reported items on 24-h recalls into standardized food groups, developing a food list for the FFQ, pilot testing of different question formats, and cognitive interviews. Percentage contribution analysis and stepwise regression analysis were used to identify foods contributing cumulatively ≥90% to intakes and individually ≥1% to intake variance of key nutrients, for the total study population and for each ethnic group separately. Differences between ethnic groups were observed in proportions of consumers of certain foods (e.g., lentil stews, 1%-47%; and pork dishes, 0%-50%). The number of foods needed to explain variability in nutrient intakes differed substantially by ethnic groups and was substantially larger for the total population than for separate ethnic groups. A 163-item FFQ covered >95% of total population intake for all key nutrients. The methodological insights provided in this paper may be useful in developing similar FFQs in other multi-ethnic settings.

  4. Monte Carlo Sampling in Fractal Landscapes

    NASA Astrophysics Data System (ADS)

    Leitão, Jorge C.; Lopes, J. M. Viana Parente; Altmann, Eduardo G.

    2013-05-01

    We design a random walk to explore fractal landscapes such as those describing chaotic transients in dynamical systems. We show that the random walk moves efficiently only when its step length depends on the height of the landscape via the largest Lyapunov exponent of the chaotic system. We propose a generalization of the Wang-Landau algorithm which constructs not only the density of states (transient time distribution) but also the correct step length. As a result, we obtain a flat-histogram Monte Carlo method which samples fractal landscapes in polynomial time, a dramatic improvement over the exponential scaling of traditional uniform-sampling methods. Our results are not limited by the dimensionality of the landscape and are confirmed numerically in chaotic systems with up to 30 dimensions.

  5. Impact of user influence on information multi-step communication in a micro-blog

    NASA Astrophysics Data System (ADS)

    Wu, Yue; Hu, Yong; He, Xiao-Hai; Deng, Ken

    2014-06-01

    User influence is generally considered one of the most critical factors affecting information cascade spreading. Based on this common assumption, this paper proposes a theoretical model to examine user influence on multi-step information communication in a micro-blog. The steps of information communication are divided into first-step and non-first-step, and user influence is classified into five dimensions. Actual data from the Sina micro-blog are collected to construct the model by means of a structural-equation approach using the Partial Least Squares (PLS) technique. Our experimental results indicate that the number of fans and their authority significantly impact first-step communication. Leader rank has a positive impact on both first-step and non-first-step communication. Moreover, global centrality and weight of friends are positively related to non-first-step communication, but authority is found to have much less relation to it.

  6. Correlation and simple linear regression.

    PubMed

    Eberly, Lynn E

    2007-01-01

    This chapter highlights important steps in using correlation and simple linear regression to address scientific questions about the association of two continuous variables with each other. These steps include estimation and inference, assessing model fit, the connection between regression and ANOVA, and study design. Examples in microbiology are used throughout. This chapter provides a framework that is helpful in understanding more complex statistical techniques, such as multiple linear regression, linear mixed effects models, logistic regression, and proportional hazards regression.
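
    As a concrete companion to the chapter's topic, estimation and inference for a simple linear regression take one call in Python; the microbiology-style numbers below are invented for illustration.

    ```python
    import numpy as np
    from scipy import stats

    # Illustrative data: incubation time (h) versus log10 CFU/mL.
    time = np.array([2.0, 4.0, 6.0, 8.0, 10.0, 12.0])
    log_cfu = np.array([3.1, 3.9, 4.8, 5.5, 6.5, 7.2])

    # linregress returns the slope and intercept estimates, Pearson r,
    # the two-sided p-value for H0: slope = 0, and the slope's SE.
    fit = stats.linregress(time, log_cfu)
    print(f"slope = {fit.slope:.3f} ± {fit.stderr:.3f}, "
          f"r = {fit.rvalue:.3f}, p = {fit.pvalue:.2g}")
    ```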

  7. Co-rotational thermo-mechanically coupled multi-field framework and finite element for the large displacement analysis of multi-layered shape memory alloy beam-like structures

    NASA Astrophysics Data System (ADS)

    Solomou, Alexandros G.; Machairas, Theodoros T.; Karakalas, Anargyros A.; Saravanos, Dimitris A.

    2017-06-01

    A thermo-mechanically coupled finite element (FE) for the simulation of multi-layered shape memory alloy (SMA) beams admitting large displacements and rotations (LDRs) is developed to capture the geometrically nonlinear effects which are present in many SMA applications. A generalized multi-field beam theory implementing a SMA constitutive model based on small strain theory, thermo-mechanically coupled governing equations, and multi-field kinematic hypotheses combining first-order shear deformation assumptions with a sixth-order polynomial temperature field through the thickness of the beam section is extended to admit LDRs. The co-rotational formulation is adopted, where the motion of the beam is decomposed into rigid body motion and relatively small deformation in the local frame. A new generalized multi-layered SMA FE is formulated. The nonlinear transient spatially discretized equations of motion of the SMA structure are synthesized and solved using the Newton-Raphson method combined with an implicit time integration scheme. Correlations of models incorporating the present beam FE with respective results of models incorporating plane-stress SMA FEs demonstrate excellent agreement of the predicted LDR response, temperature and phase transformation fields, as well as significant gains in computational time.

  8. An Evidence-Based Approach to Defining Fetal Macrosomia.

    PubMed

    Froehlich, Rosemary; Simhan, Hyagriv N; Larkin, Jacob C

    2016-04-01

    This study aims to determine the risk of adverse outcomes associated with the current diagnostic criteria for fetal macrosomia. We evaluated three techniques for characterizing birth weight as a predictor of shoulder dystocia or third- or fourth-degree laceration in 79,879 vaginal deliveries. First, we compared deliveries with birth weights above or below 4,500 g. We then performed logistic regression using birth weight as a continuous predictor, both with and without fractional polynomial transformation. Finally, we calculated the number of cesarean sections required to prevent one incident of the interrogated outcomes (number needed to treat [NNT]). Rates of adverse intrapartum outcomes increase incrementally with increasing birth weight and are predicted most accurately with logistic regression following fractional polynomial transformation. The NNT for third- or fourth-degree laceration dropped from 14.3 (95% confidence interval [CI], 13.9-14.7) at a birth weight of 3,500 g to 6.4 (95% CI, 6.1-6.8) at 4,500 g and, for shoulder dystocia, from 54.9 (95% CI, 51.5-58.6) at 3,500 g to 5.6 (95% CI, 5.2-6.0) at 4,500 g. The conventional distinction between "normal" and "macrosomic" does not reflect the incremental effect of increasing birth weight on the risk of obstetric morbidity. Outcomes analysis can inform fetal growth standards to better reflect relevant thresholds of risk.
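
    The modelling step can be sketched as follows: transform birth weight with fractional-polynomial terms, fit a logistic regression, and convert predicted risks into an NNT-type quantity. Everything below is synthetic, and the powers (0.5, 2) are placeholders; the study selects the transformation by model search and uses its own cohort.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def fp_features(bw_grams, powers=(0.5, 2.0)):
        """Fractional-polynomial transform of birth weight (powers are
        illustrative; in practice they are chosen by model search)."""
        x = bw_grams / 1000.0                  # rescale to kg
        return np.column_stack([x**p for p in powers])

    # Synthetic cohort: birth weights and a monotone risk of the outcome.
    rng = np.random.default_rng(1)
    bw = rng.normal(3400, 450, 5000).clip(2000, 5000)
    y = rng.random(5000) < 1 / (1 + np.exp(-0.003 * (bw - 4300)))

    model = LogisticRegression().fit(fp_features(bw), y)

    # Predicted risk at two birth weights; an NNT-style figure is roughly
    # the reciprocal of the predicted risk at a given weight.
    risk = model.predict_proba(fp_features(np.array([3500.0, 4500.0])))[:, 1]
    print("risk:", np.round(risk, 3), " 1/risk:", np.round(1 / risk, 1))
    ```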

  9. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known to have strong generalization capability. SVM can solve classification and regression problems with linear or nonlinear kernels, serving as a learning algorithm for both tasks. However, SVM also has a weakness: it is difficult to determine optimal parameter values. SVM computes the best linear separator in the input feature space according to the training data. To classify data that are not linearly separable, SVM uses the kernel trick to transform the data into a linearly separable representation in a higher-dimensional feature space. The kernel trick admits various kernel functions, such as the linear, polynomial, radial basis function (RBF) and sigmoid kernels. Each function has parameters that affect the accuracy of SVM classification. To solve this problem, a genetic algorithm is proposed as the search method for optimal parameter values, thus increasing the best classification accuracy of the SVM. Data were taken from the UCI machine learning repository: Australian Credit Approval. The results show that the combination of SVM and genetic algorithms is effective in improving classification accuracy, with the genetic algorithm systematically finding optimal kernel parameters instead of relying on randomly selected ones. The best accuracy was improved over the per-kernel baselines of 85.12% (linear), 81.76% (polynomial), 77.22% (RBF) and 78.70% (sigmoid). However, for larger data sets this method is less practical because it is very time-consuming.
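
    A minimal version of the search is easy to write down: a chromosome holds log10(C) and log10(gamma), fitness is cross-validated accuracy, and mutation perturbs the exponents. This mutation-only GA sketch (no crossover) uses synthetic data rather than the Australian Credit Approval set; the population size, generation count and mutation scale are illustrative.

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=10, random_state=0)

    def fitness(ind):
        C, gamma = 10.0 ** ind     # chromosome: [log10(C), log10(gamma)]
        return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

    pop = rng.uniform([-2, -4], [3, 1], size=(20, 2))   # initial population
    for generation in range(15):
        scores = np.array([fitness(p) for p in pop])
        # Tournament selection of parents, then Gaussian mutation.
        idx = [max(rng.choice(20, 3), key=lambda i: scores[i])
               for _ in range(20)]
        pop = pop[idx] + rng.normal(0.0, 0.3, (20, 2))

    best = pop[np.argmax([fitness(p) for p in pop])]
    print("best (C, gamma):", 10.0 ** best)
    ```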

  10. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  11. Multivariate decoding of brain images using ordinal regression.

    PubMed

    Doyle, O M; Ashburner, J; Zelaya, F O; Williams, S C R; Mehta, M A; Marquand, A F

    2013-11-01

    Neuroimaging data are increasingly being used to predict potential outcomes or groupings, such as clinical severity, drug dose response, and transitional illness states. In these examples, the variable (target) we want to predict is ordinal in nature. Conventional classification schemes assume that the targets are nominal and hence ignore their ranked nature, whereas parametric and/or non-parametric regression models enforce a metric notion of distance between classes. Here, we propose a novel, alternative multivariate approach that overcomes these limitations - whole brain probabilistic ordinal regression using a Gaussian process framework. We applied this technique to two data sets of pharmacological neuroimaging data from healthy volunteers. The first study was designed to investigate the effect of ketamine on brain activity and its subsequent modulation with two compounds - lamotrigine and risperidone. The second study investigates the effect of scopolamine on cerebral blood flow and its modulation using donepezil. We compared ordinal regression to multi-class classification schemes and metric regression. Considering the modulation of ketamine with lamotrigine, we found that ordinal regression significantly outperformed multi-class classification and metric regression in terms of accuracy and mean absolute error. However, for risperidone ordinal regression significantly outperformed metric regression but performed similarly to multi-class classification both in terms of accuracy and mean absolute error. For the scopolamine data set, ordinal regression was found to outperform both multi-class and metric regression techniques considering the regional cerebral blood flow in the anterior cingulate cortex. Ordinal regression was thus the only method that performed well in all cases. Our results indicate the potential of an ordinal regression approach for neuroimaging data while providing a fully probabilistic framework with elegant approaches for model selection. Copyright © 2013. Published by Elsevier Inc.

  12. Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Choon-Lin, E-mail: hcl@mail.tku.edu.tw

    2011-04-15

    Research Highlights: Physical examples involving exceptional orthogonal polynomials. Exceptional polynomials as deformations of classical orthogonal polynomials. Exceptional polynomials from the Darboux-Crum transformation. Abstract: An interesting discovery in the last two years in the field of mathematical physics has been the exceptional X_l Laguerre and Jacobi polynomials. Unlike the well-known classical orthogonal polynomials which start with constant terms, these new polynomials have lowest degree l = 1, 2, ..., and yet they form a complete set with respect to some positive-definite measure. While the mathematical properties of these new X_l polynomials deserve further analysis, it is also of interest to see if they play any role in physical systems. In this paper we indicate some physical models in which these new polynomials appear as the main part of the eigenfunctions. The systems we consider include the Dirac equations coupled minimally and non-minimally with some external fields, and the Fokker-Planck equations. The systems presented here have enlarged the number of exactly solvable physical systems known so far.

  13. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations, and systems of fuzzy polynomials. The efficiency of the new ranking method is then assessed numerically using triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  14. MSBIS: A Multi-Step Biomedical Informatics Screening Approach for Identifying Medications that Mitigate the Risks of Metoclopramide-Induced Tardive Dyskinesia.

    PubMed

    Xu, Dong; Ham, Alexandrea G; Tivis, Rickey D; Caylor, Matthew L; Tao, Aoxiang; Flynn, Steve T; Economen, Peter J; Dang, Hung K; Johnson, Royal W; Culbertson, Vaughn L

    2017-12-01

    In 2009 the U.S. Food and Drug Administration (FDA) placed a black box warning on metoclopramide (MCP) due to the increased risks and prevalence of tardive dyskinesia (TD). In this study, we developed a multi-step biomedical informatics screening (MSBIS) approach leveraging publicly available bioactivity and drug safety data to identify concomitant drugs that mitigate the risks of MCP-induced TD. MSBIS includes (1) TargetSearch (http://dxulab.org/software) bioinformatics scoring for drug anticholinergic activity using CHEMBL bioactivity data; (2) unadjusted odds ratio (UOR) scoring for indications of TD-mitigating effects using the FDA Adverse Event Reporting System (FAERS); (3) adjusted odds ratio (AOR) re-scoring by removing the effect of confounding factors (age, gender, reporting year); (4) logistic regression (LR) coefficient scoring for confirming the best TD-mitigating drug candidates. Drugs with increasing TD protective potential and statistical significance were obtained at each screening step. Fentanyl is identified as the most promising drug against MCP-induced TD (coefficient: -2.68; p-value<0.01). The discovery is supported by clinical reports that patients fully recovered from MCP-induced TD after fentanyl-induced general anesthesia. Loperamide is identified as a potent mitigating drug against a broader range of drug-induced movement disorders through pharmacokinetic modifications. Using drug-induced TD as an example, we demonstrated that MSBIS is an efficient in silico tool for unknown drug-drug interaction detection, drug repurposing, and combination therapy design. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
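
    Step (2) of the pipeline reduces to odds-ratio arithmetic on a 2x2 contingency table built from the adverse-event reports. The helper below shows that arithmetic with hypothetical counts; a protective signal corresponds to an OR below 1 with a confidence interval excluding 1.

    ```python
    import numpy as np

    def unadjusted_odds_ratio(td_with, n_with, td_without, n_without):
        """UOR for TD among MCP reports with vs. without a concomitant
        drug, plus a 95% CI from the standard error of log(OR).
        All counts here are hypothetical."""
        a, b = td_with, n_with - td_with
        c, d = td_without, n_without - td_without
        odds_ratio = (a * d) / (b * c)
        se = np.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
        lo, hi = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
        return odds_ratio, (lo, hi)

    print(unadjusted_odds_ratio(3, 250, 480, 9000))  # OR < 1: protective
    ```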

  15. Selection of optimal complexity for ENSO-EMR model by minimum description length principle

    NASA Astrophysics Data System (ADS)

    Loskutov, E. M.; Mukhin, D.; Mukhina, A.; Gavrilov, A.; Kondrashov, D. A.; Feigin, A. M.

    2012-12-01

    One of the main problems arising in modeling data taken from a natural system is finding a phase space suitable for constructing a model of the evolution operator. Since we usually deal with very high-dimensional behavior, we are forced to construct a model working in some projection of the system's phase space corresponding to the time scales of interest. Selecting an optimal projection is a non-trivial problem, since there are many ways to reconstruct phase variables from a given time series, especially in the case of a spatio-temporal data field. Finding an optimal projection is in fact a significant part of model selection because, on the one hand, the transformation of the data into some vector of phase variables can be considered a required component of the model. On the other hand, such an optimization of the phase space makes sense only in relation to the parametrization of the model we use, i.e., the representation of the evolution operator, so we should find an optimal structure of the model together with the vector of phase variables. In this paper we propose to use the minimum description length principle (Molkov et al., 2009) for selecting models of optimal complexity. The proposed method is applied to optimization of the Empirical Model Reduction (EMR) of the ENSO phenomenon (Kravtsov et al., 2005; Kondrashov et al., 2005). This model operates within a subset of leading EOFs constructed from the spatio-temporal field of SST in the Equatorial Pacific, and has the form of multi-level stochastic differential equations (SDEs) with polynomial parameterization of the right-hand side. Optimal values for the number of EOFs, the polynomial order, and the number of levels are estimated from the Equatorial Pacific SST dataset. References: Ya. Molkov, D. Mukhin, E. Loskutov, G. Fidelin and A. Feigin, Using the minimum description length principle for global reconstruction of dynamic systems from noisy time series, Phys. Rev. E 80, 046207, 2009. S. Kravtsov, D. Kondrashov, M. Ghil, 2005: Multilevel regression modeling of nonlinear processes: Derivation and applications to climatic variability. J. Climate 18(21): 4404-4424. D. Kondrashov, S. Kravtsov, A. W. Robertson and M. Ghil, 2005: A hierarchy of data-based ENSO models. J. Climate 18, 4425-4444.
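
    In practice, a description-length criterion balances data misfit against a complexity penalty that grows with the number of model parameters. The toy below selects a polynomial order that way, using the BIC form of the penalty as a stand-in for the full MDL machinery of Molkov et al. (2009); the data are synthetic.

    ```python
    import numpy as np

    def select_order(x, y, k_max=8):
        """Pick a polynomial order by a crude description-length score:
        n*log(residual variance) plus a (k+1)*log(n) complexity penalty."""
        n = y.size
        scores = []
        for k in range(1, k_max + 1):
            resid = y - np.polyval(np.polyfit(x, y, k), x)
            scores.append(n * np.log(np.mean(resid**2)) + (k + 1) * np.log(n))
        return 1 + int(np.argmin(scores))

    rng = np.random.default_rng(2)
    x = np.linspace(-1, 1, 200)
    y = 1.5 * x**3 - x + rng.normal(0, 0.1, x.size)   # cubic truth + noise
    print("selected polynomial order:", select_order(x, y))   # expect 3
    ```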

  16. Computational tools for multi-linked flexible structures

    NASA Technical Reports Server (NTRS)

    Lee, Gordon K. F.; Brubaker, Thomas A.; Shults, James R.

    1990-01-01

    A software module that designs and tests controllers and filters in Kalman estimator form, based on a polynomial state-space model, is discussed. The user-friendly program employs an interactive graphics approach to simplify the design process. A variety of input methods are provided to test the effectiveness of the estimator. Utilities are provided that address important issues in filter design such as graphical analysis, statistical analysis, and calculation time. The program also provides the user with the ability to save filter parameters, inputs, and outputs for future use.

  17. Coherent orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es

    2013-08-15

    We discuss a fundamental characteristic of orthogonal polynomials, namely the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we thus include, in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions, Lie algebra theory. We start here from the square integrable functions on an open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x⟩), for an alternative countable basis (|n⟩). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line, and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second-order Casimir C gives rise to the second-order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl-Heisenberg algebra h(1) with C=0 for Hermite polynomials, and su(1,1) with C=-1/4 for Laguerre and Legendre polynomials, are obtained. Starting from the orthogonal polynomials, the Lie algebra is extended both to the whole space of L^2 functions and to the corresponding universal enveloping algebra and transformation group. Generalized coherent states from each vector in the space L^2 and, in particular, generalized coherent polynomials are thus obtained. Highlights: •Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. •Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. •The 2nd-order Casimir originates a 2nd-order differential equation that defines the corresponding OP family. •Generalized coherent polynomials are obtained from OP.

  18. Path synthesis of four-bar mechanisms using synergy of polynomial neural network and Stackelberg game theory

    NASA Astrophysics Data System (ADS)

    Ahmadi, Bahman; Nariman-zadeh, Nader; Jamali, Ali

    2017-06-01

    In this article, a novel approach based on game theory is presented for multi-objective optimal synthesis of four-bar mechanisms. The multi-objective optimization problem is modelled as a Stackelberg game. The more important objective function, tracking error, is considered as the leader, and the other objective function, deviation of the transmission angle from 90° (TA), is considered as the follower. In a new approach, a group method of data handling (GMDH)-type neural network is also utilized to construct an approximate model for the rational reaction set (RRS) of the follower. Using the proposed game-theoretic approach, the multi-objective optimal synthesis of a four-bar mechanism is then cast into a single-objective optimal synthesis using the leader variables and the obtained RRS of the follower. The superiority of using the synergy game-theoretic method of Stackelberg with a GMDH-type neural network is demonstrated for two case studies on the synthesis of four-bar mechanisms.

  19. Soliton interactions and Bäcklund transformation for a (2+1)-dimensional variable-coefficient modified Kadomtsev-Petviashvili equation in fluid dynamics

    NASA Astrophysics Data System (ADS)

    Xiao, Zi-Jian; Tian, Bo; Sun, Yan

    2018-01-01

    In this paper, we investigate a (2+1)-dimensional variable-coefficient modified Kadomtsev-Petviashvili (mKP) equation in fluid dynamics. With the binary Bell polynomials and an auxiliary function, bilinear forms for the equation are constructed. Based on the bilinear forms, multi-soliton solutions and a Bell-polynomial-type Bäcklund transformation for the equation are obtained through symbolic computation. Soliton interactions are presented. Based on the graphic analysis, parametric conditions for the existence of shock waves, elevation solitons and depression solitons are given, and it is shown that, with the wave vectors held invariant, changes in α(t) and β(t) lead to changes in the solitonic velocities while the shape of each soliton remains unchanged, where α(t) and β(t) are the variable coefficients in the equation. Oblique elastic interactions can exist between (i) two shock waves, (ii) two elevation solitons, and (iii) elevation and depression solitons. However, oblique interactions between (i) shock waves and elevation solitons, and (ii) shock waves and depression solitons are inelastic.

  20. Surface Modified Particles By Multi-Step Michael-Type Addition And Process For The Preparation Thereof

    DOEpatents

    Cook, Ronald Lee; Elliott, Brian John; Luebben, Silvia DeVito; Myers, Andrew William; Smith, Bryan Matthew

    2005-05-03

    A new class of surface modified particles and a multi-step Michael-type addition surface modification process for the preparation of the same is provided. The multi-step Michael-type addition surface modification process involves two or more reactions to compatibilize particles with various host systems and/or to provide the particles with particular chemical reactivities. The initial step comprises the attachment of a small organic compound to the surface of the inorganic particle. The subsequent steps attach additional compounds to the previously attached organic compounds through reactive organic linking groups. Specifically, these reactive groups are activated carbon-carbon pi bonds and carbon and non-carbon nucleophiles that react via Michael or Michael-type additions.

  1. Impact of multi-resolution analysis of artificial intelligence models inputs on multi-step ahead river flow forecasting

    NASA Astrophysics Data System (ADS)

    Badrzadeh, Honey; Sarukkalige, Ranjan; Jayawardena, A. W.

    2013-12-01

    Highlights: Discrete wavelet transform was applied to decompose the ANN and ANFIS inputs. A novel approach of WNF with subtractive clustering was applied for flow forecasting. Forecasting was performed 1-5 steps ahead, using multivariate inputs. Forecasting accuracy for peak values and at longer lead times was significantly improved.

  2. Simple Proof of Jury Test for Complex Polynomials

    NASA Astrophysics Data System (ADS)

    Choo, Younseok; Kim, Dongmin

    Recently, some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided, based on Rouché's theorem and a single-parameter characterization of the Schur stability property for complex polynomials.
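
    The stability property itself is short enough to state in code. The sketch below implements the classical Schur-Cohn reduction for complex polynomials, which is the criterion the Jury test certifies; it is offered as a working illustration, not as the letter's proof.

    ```python
    import numpy as np

    def is_schur_stable(a):
        """True iff all zeros of a[0]*z**n + ... + a[n] lie strictly inside
        the unit circle.  Each Schur-Cohn step replaces p by q(z)/z with
        q = conj(lead)*p - trail*p*, where p*(z) = z**n * conj(p(1/conj(z)))
        is the reversed-conjugate polynomial; |lead| > |trail| is required
        at every step."""
        a = np.asarray(a, dtype=complex)
        while a.size > 1:
            lead, trail = a[0], a[-1]
            if not abs(lead) > abs(trail):
                return False
            a = (np.conj(lead) * a - trail * np.conj(a[::-1]))[:-1]
        return True

    print(is_schur_stable([1, 0, 0.25j]))   # z**2 + 0.25j: roots at |z| = 0.5
    print(is_schur_stable([1, -2]))         # root at z = 2: unstable
    ```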

  3. On the connection coefficients and recurrence relations arising from expansions in series of Laguerre polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2003-05-01

    A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.
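
    A well-known special case of such derivative formulae is d/dx L_n(x) = -(L_0 + ... + L_{n-1})(x), which is easy to verify numerically with NumPy's Laguerre module; the check below is an illustration of this identity, not the paper's general formula.

    ```python
    import numpy as np
    from numpy.polynomial import laguerre as L

    n = 6
    e_n = np.eye(n + 1)[n]        # Laguerre-series coefficients of L_n
    deriv = L.lagder(e_n)         # coefficients of d/dx L_n
    combo = -np.ones(n)           # coefficients of -(L_0 + ... + L_{n-1})
    print(np.allclose(deriv, combo))   # True
    ```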

  4. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  5. Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients.

    PubMed

    Solano-Altamirano, Juan Manuel; Vázquez-Otero, Alejandro; Khikhlukha, Danila; Dormido, Raquel; Duro, Natividad

    2017-11-30

    In this paper, we propose a new algorithm to reconstruct optics surfaces (aka wavefronts) from gradients, defined on a circular domain, by means of the Spherical Harmonics. The experimental results indicate that this algorithm renders the same accuracy, compared to the reconstruction based on classical Zernike polynomials, using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), wherein several polynomial sets are coded. Therefore, this library constitutes a robust software alternative for wavefront reconstruction in a high energy laser field, optical surface reconstruction, and, more generally, in surface reconstruction from gradients. The library is a candidate for being integrated in control systems for optical devices, or similarly to be used in ad hoc simulations. Moreover, it has been developed with flexibility in mind, and, as such, the implementation includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in the field of high-energy lasers production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can be easily extended to include additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features.

  7. Differential evolution-based multi-objective optimization for the definition of a health indicator for fault diagnostics and prognostics

    NASA Astrophysics Data System (ADS)

    Baraldi, P.; Bonfanti, G.; Zio, E.

    2018-03-01

    The identification of the current degradation state of an industrial component and the prediction of its future evolution is a fundamental step for the development of condition-based and predictive maintenance approaches. The objective of the present work is to propose a general method for extracting a health indicator to measure the amount of component degradation from a set of signals measured during operation. The proposed method is based on the combined use of feature extraction techniques, such as Empirical Mode Decomposition and Auto-Associative Kernel Regression, and a multi-objective Binary Differential Evolution (BDE) algorithm for selecting the subset of features optimal for the definition of the health indicator. The objectives of the optimization are desired characteristics of the health indicator, such as monotonicity, trendability and prognosability. A case study is considered, concerning the prediction of the remaining useful life of turbofan engines. The obtained results confirm that the method is capable of extracting health indicators suitable for accurate prognostics.
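
    The optimization objectives named here have simple, widely used numerical definitions. The sketch below computes one common variant of the monotonicity and trendability scores for a candidate health indicator; the series is synthetic, and other definitions of these metrics exist.

    ```python
    import numpy as np

    def monotonicity(hi):
        """Sign-balance monotonicity in [0, 1]: 1 means the indicator
        moves in one direction only."""
        d = np.diff(hi)
        return abs(np.sum(d > 0) - np.sum(d < 0)) / (hi.size - 1)

    def trendability(hi, t):
        """Absolute linear correlation between indicator and time."""
        return abs(np.corrcoef(hi, t)[0, 1])

    t = np.arange(100.0)
    hi = 0.01 * t + 0.05 * np.random.default_rng(3).standard_normal(100)
    print(monotonicity(hi), trendability(hi, t))
    ```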

  8. Nodal Statistics for the Van Vleck Polynomials

    NASA Astrophysics Data System (ADS)

    Bourget, Alain

    The Van Vleck polynomials naturally arise from the generalized Lamé equation as the polynomials of degree N for which Eq. (1) has a polynomial solution of some degree k. In this paper, we compute the limiting distribution, as well as the limiting mean level spacings distribution, of the zeros of any Van Vleck polynomial as N --> ∞.

  9. A Renormalisation Group Method. V. A Single Renormalisation Group Step

    NASA Astrophysics Data System (ADS)

    Brydges, David C.; Slade, Gordon

    2015-05-01

    This paper is the fifth in a series devoted to the development of a rigorous renormalisation group method applicable to lattice field theories containing boson and/or fermion fields, and comprises the core of the method. In the renormalisation group method, increasingly large scales are studied in a progressive manner, with an interaction parametrised by a field polynomial which evolves with the scale under the renormalisation group map. In our context, the progressive analysis is performed via a finite-range covariance decomposition. Perturbative calculations are used to track the flow of the coupling constants of the evolving polynomial, but on their own perturbative calculations are insufficient to control error terms and to obtain mathematically rigorous results. In this paper, we define an additional non-perturbative coordinate, which together with the flow of coupling constants defines the complete evolution of the renormalisation group map. We specify conditions under which the non-perturbative coordinate is contractive under a single renormalisation group step. Our framework is essentially combinatorial, but its implementation relies on analytic results developed earlier in the series of papers. The results of this paper are applied elsewhere to analyse the critical behaviour of the 4-dimensional continuous-time weakly self-avoiding walk and of the 4-dimensional n-component |φ|^4 model. In particular, the existence of a logarithmic correction to mean-field scaling for the susceptibility can be proved for both models, together with other facts about critical exponents and critical behaviour.

  10. Legendre modified moments for Euler's constant

    NASA Astrophysics Data System (ADS)

    Prévost, Marc

    2008-10-01

    Polynomial moments are often used for the computation of Gauss quadrature to stabilize the numerical calculation of the orthogonal polynomials; see [W. Gautschi, Computational aspects of orthogonal polynomials, in: P. Nevai (Ed.), Orthogonal Polynomials - Theory and Practice, NATO ASI Series C: Mathematical and Physical Sciences, vol. 294, Kluwer, Dordrecht, 1990, pp. 181-216 [6]; W. Gautschi, On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48(4) (1986) 369-382 [5]; W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput. 3(3) (1982) 289-317 [4]].

  11. Intimate partner violence and anxiety disorders in pregnancy: the importance of vocational training of the nursing staff in facing them

    PubMed Central

    Fonseca-Machado, Mariana de Oliveira; Monteiro, Juliana Cristina dos Santos; Haas, Vanderlei José; Abrão, Ana Cristina Freitas de Vilhena; Gomes-Sponholz, Flávia

    2015-01-01

    Objective: to identify the relationship between posttraumatic stress disorder, trait and state anxiety, and intimate partner violence during pregnancy. Method: observational, cross-sectional study developed with 358 pregnant women. The Posttraumatic Stress Disorder Checklist - Civilian Version was used, as well as the State-Trait Anxiety Inventory and an adapted version of the instrument used in the World Health Organization Multi-country Study on Women's Health and Domestic Violence. Results: after adjusting to the multiple logistic regression model, intimate partner violence, occurred during pregnancy, was associated with the indication of posttraumatic stress disorder. The adjusted multiple linear regression models showed that the victims of violence, in the current pregnancy, had higher symptom scores of trait and state anxiety than non-victims. Conclusion: recognizing the intimate partner violence as a clinically relevant and identifiable risk factor for the occurrence of anxiety disorders during pregnancy can be a first step in the prevention thereof. PMID:26487135

  12. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an FPGA implementation of an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, addressing the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation results showed that the proposed design realized real-time display of 1280 x 720 @ 60 Hz HD video and, using the X-Rite color checker as the standard, reduced the average color difference by about 30% compared with that before correction.
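
    Off the FPGA, the color-correction half of the algorithm is ordinary polynomial least squares: fit a mapping from measured patch colors to the chart's reference colors, then apply it per pixel. The feature set and the synthetic patch values below are assumptions for illustration.

    ```python
    import numpy as np

    def fit_color_correction(measured, reference):
        """Least-squares polynomial color correction: maps measured RGB
        to reference RGB via quadratic monomials plus cross terms."""
        m = measured.astype(float)
        feats = np.column_stack([np.ones(len(m)), m, m**2,
                                 m[:, [0]] * m[:, [1]],
                                 m[:, [1]] * m[:, [2]],
                                 m[:, [0]] * m[:, [2]]])
        coef, *_ = np.linalg.lstsq(feats, reference.astype(float), rcond=None)
        return coef                           # shape (10, 3)

    # With a 24-patch color checker the two matrices would hold the camera
    # readings and the chart's published reference values; here: synthetic.
    rng = np.random.default_rng(4)
    measured = rng.uniform(0, 255, (24, 3))
    reference = 1.1 * measured - 5.0
    coef = fit_color_correction(measured, reference)
    print(coef.shape)
    ```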

  13. A data-driven dynamics simulation framework for railway vehicles

    NASA Astrophysics Data System (ADS)

    Nie, Yinyu; Tang, Zhao; Liu, Fengjia; Chang, Jian; Zhang, Jianjun

    2018-03-01

    The finite element (FE) method is essential for simulating vehicle dynamics with fine details, especially for train crash simulations. However, factors such as the complexity of meshes and the distortion involved in a large deformation would undermine its calculation efficiency. An alternative method, multi-body (MB) dynamics simulation, provides satisfying time efficiency but limited accuracy when highly nonlinear dynamic processes are involved. To maintain the advantages of both methods, this paper proposes a data-driven simulation framework for dynamics simulation of railway vehicles. This framework uses machine learning techniques to extract nonlinear features from training data generated by FE simulations, so that specific mesh structures can be formulated by a surrogate element (or surrogate elements) to replace the original mechanical elements, and the dynamics simulation can be implemented by co-simulation with the surrogate element(s) embedded into a MB model. This framework consists of a series of techniques including data collection, feature extraction, training data sampling, surrogate element building, and model evaluation and selection. To verify the feasibility of this framework, we present two case studies, a vertical dynamics simulation and a longitudinal dynamics simulation, based on co-simulation with MATLAB/Simulink and Simpack, and a further comparison with a popular data-driven model (the Kriging model) is provided. The simulation results show that using the Legendre polynomial regression model in building surrogate elements can greatly reduce the simulation time without sacrificing accuracy.
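
    A minimal sketch of the surrogate idea, assuming a scalar force-displacement response and synthetic stand-in data in place of the FE runs; numpy.polynomial.legendre handles the Legendre fit, and the degree is an illustrative choice.

      # Legendre polynomial regression surrogate for a sampled FE response.
      import numpy as np
      from numpy.polynomial import legendre as L

      # Training data from FE simulations (here: a synthetic stand-in).
      x = np.linspace(0.0, 0.05, 200)                           # displacement [m]
      f = 1e6 * x - 4e7 * x**2 + 50 * np.random.randn(x.size)   # force [N]

      # Legendre bases live on [-1, 1]; map the physical domain onto it.
      t = 2 * (x - x.min()) / (x.max() - x.min()) - 1
      coef = L.legfit(t, f, deg=7)                              # least-squares fit

      def surrogate(x_new):
          """Cheap replacement for the FE element inside the MB co-simulation."""
          t_new = 2 * (x_new - x.min()) / (x.max() - x.min()) - 1
          return L.legval(t_new, coef)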

  14. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  15. A hybrid approach to parameter identification of linear delay differential equations involving multiple delays

    NASA Astrophysics Data System (ADS)

    Marzban, Hamid Reza

    2018-05-01

    In this paper, we are concerned with the parameter identification of linear time-invariant systems containing multiple delays. The approach is based upon a hybrid of block-pulse functions and Legendre polynomials. The convergence of the proposed procedure is established and an upper error bound with respect to the L2-norm associated with the hybrid functions is derived. The problem under consideration is first transformed into a system of algebraic equations. The least squares technique is then employed for identification of the desired parameters. Several multi-delay systems of varying complexity are investigated to evaluate the performance and capability of the proposed approximation method. It is shown that the proposed approach is also applicable to a class of nonlinear multi-delay systems. It is demonstrated that the suggested procedure provides accurate results for the desired parameters.
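
    The sketch below is not the paper's hybrid block-pulse/Legendre scheme; it illustrates the same end goal, least-squares identification of delay-system parameters, with a plain finite-difference formulation on simulated data. The system, step size, and delay are all illustrative assumptions.

      # Simplified least-squares identification of a, b in x'(t) = a x(t) + b x(t - tau)
      # from a sampled trajectory (finite-difference stand-in for the hybrid basis).
      import numpy as np

      dt, tau, a_true, b_true = 0.01, 0.5, -1.0, 0.4
      n_delay = int(tau / dt)
      x = np.ones(2000)                               # constant history x = 1
      for k in range(n_delay, len(x) - 1):            # simulate the delay system
          x[k + 1] = x[k] + dt * (a_true * x[k] + b_true * x[k - n_delay])

      # Regression: central-difference derivative against x(t) and x(t - tau).
      k = np.arange(n_delay + 1, len(x) - 1)
      dxdt = (x[k + 1] - x[k - 1]) / (2 * dt)
      A = np.column_stack([x[k], x[k - n_delay]])
      (a_hat, b_hat), *_ = np.linalg.lstsq(A, dxdt, rcond=None)
      print(a_hat, b_hat)                             # close to (-1.0, 0.4)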

  16. On mixed derivatives type high dimensional multi-term fractional partial differential equations approximate solutions

    NASA Astrophysics Data System (ADS)

    Talib, Imran; Belgacem, Fethi Bin Muhammad; Asif, Naseer Ahmad; Khalil, Hammad

    2017-01-01

    In this research article, we derive and analyze an efficient spectral method based on the operational matrices of three-dimensional orthogonal Jacobi polynomials to numerically solve a generalized class of high-dimensional, multi-term fractional-order partial differential equations with mixed partial derivatives. With the aid of the operational matrices, we transform the fractional-order problem into an easily solvable system of algebraic equations, whose solution yields the solution of the original problem. Some test problems are considered to confirm the accuracy and validity of the proposed numerical method. The convergence of the method is confirmed by comparing the results of our MATLAB simulations with exact solutions from the literature, yielding negligible errors. Moreover, comparative results discussed in the literature are extended and improved in this study.

  17. Shadowing effects on multi-step Langmuir probe array on HL-2A tokamak

    NASA Astrophysics Data System (ADS)

    Ke, R.; Xu, M.; Nie, L.; Gao, Z.; Wu, Y.; Yuan, B.; Chen, J.; Song, X.; Yan, L.; Duan, X.

    2018-05-01

    Multi-step Langmuir probe arrays have been designed and installed on the HL-2A tokamak [1]-[2] to study turbulent transport in the edge plasma, especially the measurement of the poloidal momentum flux, i.e. the Reynolds stress Rs. However, except for the probe tips on the top step, all tips on the lower steps are shadowed by the graphite skeleton, so it is necessary to estimate the shadowing effects on equilibrium and fluctuation measurements. In this paper, a comparison of shadowed tips with unshadowed ones is presented. The results show that shadowing can strongly reduce the effective ion and electron collection area. However, its effect is negligible for turbulence intensity and coherence measurements, confirming that the multi-step LP array is suitable for turbulent transport measurements.

  18. Fast quantification of bovine milk proteins employing external cavity-quantum cascade laser spectroscopy.

    PubMed

    Schwaighofer, Andreas; Kuligowski, Julia; Quintás, Guillermo; Mayer, Helmut K; Lendl, Bernhard

    2018-06-30

    Analysis of proteins in bovine milk is usually tackled by time-consuming analytical approaches involving wet-chemical, multi-step sample clean-up procedures. The use of external cavity-quantum cascade laser (EC-QCL) based IR spectroscopy was evaluated as an alternative screening tool for direct and simultaneous quantification of individual proteins (i.e. casein and β-lactoglobulin) and total protein content in commercial bovine milk samples. Mid-IR spectra of protein standard mixtures were used for building partial least squares (PLS) regression models. A sample set comprising different milk types (pasteurized; differently processed extended shelf life, ESL; ultra-high temperature, UHT) was analysed and results were compared to reference methods. Concentration values of the QCL-IR spectroscopy approach obtained within several minutes are in good agreement with reference methods involving multiple sample preparation steps. The potential application as a fast screening method for estimating the heat load applied to liquid milk is demonstrated.

  19. Estimation of continuous multi-DOF finger joint kinematics from surface EMG using a multi-output Gaussian Process.

    PubMed

    Ngeo, Jimson; Tamei, Tomoya; Shibata, Tomohiro

    2014-01-01

    Surface electromyographic (EMG) signals have often been used in estimating upper and lower limb dynamics and kinematics for the purpose of controlling robotic devices such as robot prostheses and finger exoskeletons. However, when estimating kinematics with a high number of degrees of freedom (DOFs) from EMG, the output DOFs are usually estimated independently. In this study, we estimate finger joint kinematics from EMG signals using a multi-output convolved Gaussian process (multi-output full GP) that considers dependencies between outputs. We show that estimation of finger joints from muscle activation inputs can be improved by using a regression model that accounts for the inherent coupling or correlation within the hand and finger joints. We also compare estimation performance across different regression methods, such as artificial neural networks (ANNs), which are used in many related studies. We show that a multi-output GP gives improved estimation compared to a multi-output ANN and even to dedicated or independent regression models.
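
    scikit-learn offers no convolved multi-output GP, so the sketch below uses the simpler intrinsic-coregionalization construction K = B ⊗ k(X, X) to show how output correlations enter the posterior. The RBF kernel, the matrix B, and the noise level are illustrative assumptions, not the paper's model.

      # Multi-output GP posterior mean with an intrinsic-coregionalization kernel.
      import numpy as np

      def rbf(A, C, ell=1.0):
          d2 = ((A[:, None, :] - C[None, :, :]) ** 2).sum(-1)
          return np.exp(-0.5 * d2 / ell**2)

      def mogp_posterior_mean(X, Y, X_new, B, ell=1.0, noise=1e-2):
          """X: NxD inputs (EMG features), Y: NxP outputs (joint angles),
          B: PxP output-correlation matrix. Returns MxP predicted outputs."""
          n, p = Y.shape
          K = np.kron(B, rbf(X, X, ell)) + noise * np.eye(n * p)
          k_star = np.kron(B, rbf(X_new, X, ell))     # (M*P) x (N*P)
          alpha = np.linalg.solve(K, Y.T.ravel())     # outputs stacked output-major
          return (k_star @ alpha).reshape(p, -1).T

      # Correlated outputs let data on one finger joint inform the others via B.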

  20. Direct calculation of modal parameters from matrix orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Guillaume, Patrick

    2011-10-01

    The objective of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple input multiple output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, deriving the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly; in particular, the transformation of coefficients from an orthogonal polynomial basis to a power polynomial basis is known to be ill-conditioned. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials, so that high-order models can be used without numerical problems. The proposed method is compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]) using both simulated and experimental data.

  1. A study of machine learning regression methods for major elemental analysis of rocks using laser-induced breakdown spectroscopy

    NASA Astrophysics Data System (ADS)

    Boucher, Thomas F.; Ozanne, Marie V.; Carmosino, Marco L.; Dyar, M. Darby; Mahadevan, Sridhar; Breves, Elly A.; Lepore, Kate H.; Clegg, Samuel M.

    2015-05-01

    The ChemCam instrument on the Mars Curiosity rover is generating thousands of LIBS spectra and bringing interest in this technique to public attention. The key to interpreting Mars or any other type of LIBS data is calibrations that relate laboratory standards to unknowns examined in other settings and enable predictions of chemical composition. Here, LIBS spectral data are analyzed using linear regression methods including partial least squares (PLS-1 and PLS-2), principal component regression (PCR), least absolute shrinkage and selection operator (lasso), elastic net, and linear support vector regression (SVR-Lin). These were compared against results from nonlinear regression methods including kernel principal component regression (K-PCR), polynomial kernel support vector regression (SVR-Py) and k-nearest neighbor (kNN) regression to discern the most effective models for interpreting chemical abundances from LIBS spectra of geological samples. The results were evaluated for 100 samples analyzed with 50 laser pulses at each of five locations averaged together. Wilcoxon signed-rank tests were employed to evaluate the statistical significance of differences among the nine models using their predicted residual sum of squares (PRESS) to make comparisons. For MgO, SiO2, Fe2O3, CaO, and MnO, the sparse models outperform all the others except for linear SVR, while for Na2O, K2O, TiO2, and P2O5, the sparse methods produce inferior results, likely because their emission lines in this energy range have lower transition probabilities. The strong performance of the sparse methods in this study suggests that use of dimensionality-reduction techniques as a preprocessing step may improve the performance of the linear models. Nonlinear methods tend to overfit the data and predict less accurately, while the linear methods proved to be more generalizable with better predictive performance. These results are attributed to the high dimensionality of the data (6144 channels) relative to the small number of samples studied. The best-performing models were SVR-Lin for SiO2, MgO, Fe2O3, and Na2O, lasso for Al2O3, elastic net for MnO, and PLS-1 for CaO, TiO2, and K2O. Although these differences in model performance between methods were identified, most of the models produced comparable results at the p ≤ 0.05 level, and all techniques except kNN produced statistically indistinguishable results. It is likely that a combination of models could be used together to yield a lower total error of prediction, depending on the requirements of the user.
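
    A hedged sketch of this kind of model bake-off with scikit-learn; the synthetic spectra merely mimic the dimensionality (6144 channels, few informative lines) and are not the ChemCam calibration data. Hyperparameters are placeholders.

      # Comparing linear regressors on high-dimensional spectra by cross-validation.
      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.linear_model import Lasso, ElasticNet
      from sklearn.svm import SVR
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      X = rng.normal(size=(100, 6144))        # 100 spectra, 6144 channels
      w = np.zeros(6144); w[rng.choice(6144, 20, replace=False)] = 1.0
      y = X @ w + 0.1 * rng.normal(size=100)  # abundance driven by a few lines

      models = {
          "PLS-1": PLSRegression(n_components=10),
          "lasso": Lasso(alpha=0.05),
          "elastic net": ElasticNet(alpha=0.05, l1_ratio=0.5),
          "SVR-Lin": SVR(kernel="linear", C=1.0),
      }
      for name, model in models.items():
          scores = cross_val_score(model, X, y, cv=5,
                                   scoring="neg_mean_squared_error")
          print(f"{name:12s} MSE = {-scores.mean():.4f}")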

  2. Independence polynomial and matching polynomial of the Koch network

    NASA Astrophysics Data System (ADS)

    Liao, Yunhua; Xie, Xiaoliang

    2015-11-01

    The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, indicating that these problems are computationally “intractable”. We consider these two polynomials of the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.

  3. Tutorial on Reed-Solomon error correction coding

    NASA Technical Reports Server (NTRS)

    Geisel, William A.

    1990-01-01

    This tutorial attempts to provide a frank, step-by-step approach to Reed-Solomon (RS) error correction coding. RS encoding and RS decoding both with and without erasing code symbols are emphasized. There is no need to present rigorous proofs and extreme mathematical detail. Rather, the simple concepts of groups and fields, specifically Galois fields, are presented with a minimum of complexity. Before RS codes are presented, other block codes are presented as a technical introduction into coding. A primitive (15, 9) RS coding example is then completely developed from start to finish, demonstrating the encoding and decoding calculations and a derivation of the famous error-locator polynomial. The objective is to present practical information about Reed-Solomon coding in a manner such that it can be easily understood.
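
    To make the field arithmetic concrete, here is a small Python sketch of GF(2^8) multiplication and the construction of an RS generator polynomial; the primitive polynomial 0x11d is the conventional choice, and the coefficient ordering (lowest degree first) is an implementation assumption, not the tutorial's notation.

      # GF(2^8) multiplication and RS generator polynomial g(x) = prod (x - a^i).
      def gf_mul(a, b, prim=0x11d):
          r = 0
          while b:
              if b & 1:
                  r ^= a                  # addition in GF(2^m) is XOR
              a <<= 1
              if a & 0x100:
                  a ^= prim               # reduce modulo the primitive polynomial
              b >>= 1
          return r

      def poly_mul_linear(g, root):
          # multiply g(x) (lowest degree first) by (x + root); minus = plus here
          out = [0] * (len(g) + 1)
          for i, gi in enumerate(g):
              out[i] ^= gf_mul(gi, root)  # root * g[i] contributes to x^i
              out[i + 1] ^= gi            # x * g[i] contributes to x^(i+1)
          return out

      def rs_generator_poly(t, alpha=2):
          """Generator polynomial with roots alpha^0 .. alpha^(2t-1)."""
          g, root = [1], 1
          for _ in range(2 * t):
              g = poly_mul_linear(g, root)
              root = gf_mul(root, alpha)
          return g                        # 2t+1 coefficients, monic

      print(rs_generator_poly(1))         # [2, 3, 1] i.e. x^2 + 3x + 2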

  4. Reconstructing biochemical pathways from time course data.

    PubMed

    Srividhya, Jeyaraman; Crampin, Edmund J; McSharry, Patrick E; Schnell, Santiago

    2007-03-01

    Time series data on biochemical reactions reveal transient behavior, away from chemical equilibrium, and contain information on the dynamic interactions among reacting components. However, this information can be difficult to extract using conventional analysis techniques. We present a new method to infer biochemical pathway mechanisms from time course data using a global nonlinear modeling technique to identify the elementary reaction steps which constitute the pathway. The method involves the generation of a complete dictionary of polynomial basis functions based on the law of mass action. Using these basis functions, there are two approaches to model construction, namely the general to specific and the specific to general approach. We demonstrate that our new methodology reconstructs the chemical reaction steps and connectivity of the glycolytic pathway of Lactococcus lactis from time course experimental data.

  5. Ultra-fast consensus of discrete-time multi-agent systems with multi-step predictive output feedback

    NASA Astrophysics Data System (ADS)

    Zhang, Wenle; Liu, Jianchang

    2016-04-01

    This article addresses the ultra-fast consensus problem of high-order discrete-time multi-agent systems based on a unified consensus framework. A novel multi-step predictive output mechanism is proposed under a directed communication topology containing a spanning tree. By predicting the outputs of a network several steps ahead and adding this information into the consensus protocol, it is shown that the asymptotic convergence factor is improved by a power of q + 1 compared to the routine consensus. The difficult problem of selecting the optimal control gain is solved well by introducing a variable called convergence step. In addition, the ultra-fast formation achievement is studied on the basis of this new consensus protocol. Finally, the ultra-fast consensus with respect to a reference model and robust consensus is discussed. Some simulations are performed to illustrate the effectiveness of the theoretical results.

  6. Correlates of willingness to engage in residential gardening: implications for health optimization in ibadan, Nigeria.

    PubMed

    Motunrayo Ibrahim, Fausat

    2013-01-01

    Gardening is a worthwhile adventure which engenders health optimization. Yet, there is a dearth of evidence highlighting motivations to engage in gardening. This study examined willingness to engage in gardening and its correlates, including some socio-psychological, health-related and socio-demographic variables. In this cross-sectional survey, 508 copies of a structured questionnaire were randomly self-administered among a group of civil servants of Oyo State, Nigeria. Multi-item measures were used to assess variables. Stepwise multiple regression analysis was used to identify predictors of willingness to engage in gardening. Results: Simple percentile analysis shows that 71.1% of respondents do not own a garden. Results of stepwise multiple regression analysis indicate that the descriptive norm of gardening is a good predictor, social support for gardening is better, while gardening self-efficacy is the best predictor of willingness to engage in gardening (P< 0.001). Health consciousness, gardening response efficacy, education and age are not predictors of this willingness (P> 0.05). Results of t-test and ANOVA respectively show that gender is not associated with this willingness (P> 0.05), but marital status is (P< 0.05). Socio-psychological characteristics and being married are very relevant in motivations to engage in gardening. The nexus between gardening and health optimization appears to be highly obscured in this population.

  7. Correlates of Willingness to Engage in Residential Gardening: Implications for Health Optimization in Ibadan, Nigeria

    PubMed Central

    Motunrayo Ibrahim, Fausat

    2013-01-01

    Background: Gardening is a worthwhile adventure which engenders health optimization. Yet, there is a dearth of evidence highlighting motivations to engage in gardening. This study examined willingness to engage in gardening and its correlates, including some socio-psychological, health-related and socio-demographic variables. Methods: In this cross-sectional survey, 508 copies of a structured questionnaire were randomly self-administered among a group of civil servants of Oyo State, Nigeria. Multi-item measures were used to assess variables. Stepwise multiple regression analysis was used to identify predictors of willingness to engage in gardening. Results: Simple percentile analysis shows that 71.1% of respondents do not own a garden. Results of stepwise multiple regression analysis indicate that the descriptive norm of gardening is a good predictor, social support for gardening is better, while gardening self-efficacy is the best predictor of willingness to engage in gardening (P< 0.001). Health consciousness, gardening response efficacy, education and age are not predictors of this willingness (P> 0.05). Results of t-test and ANOVA respectively show that gender is not associated with this willingness (P> 0.05), but marital status is (P< 0.05). Conclusion: Socio-psychological characteristics and being married are very relevant in motivations to engage in gardening. The nexus between gardening and health optimization appears to be highly obscured in this population. PMID:24688974

  8. Mapping groundwater contamination risk of multiple aquifers using multi-model ensemble of machine learning algorithms.

    PubMed

    Barzegar, Rahim; Moghaddam, Asghar Asghari; Deo, Ravinesh; Fijani, Elham; Tziritis, Evangelos

    2018-04-15

    Constructing accurate and reliable groundwater risk maps provides scientifically prudent and strategic measures for the protection and management of groundwater. The objectives of this paper are to design and validate machine learning based risk maps using ensemble-based modelling with an integrative approach. We employ extreme learning machines (ELM), multivariate adaptive regression splines (MARS), M5 Tree and support vector regression (SVR) applied to multiple aquifer systems (e.g. unconfined, semi-confined and confined) in the Marand plain, North West Iran, to encapsulate the merits of the individual learning algorithms in a final committee-based ANN model. The DRASTIC Vulnerability Index (VI) ranged from 56.7 to 128.1, categorized into no-risk, low and moderate vulnerability thresholds. The correlation coefficient (r) and Willmott's Index (d) between NO3 concentrations and VI were 0.64 and 0.314, respectively. To improve on the original DRASTIC method, the vulnerability indices were adjusted by NO3 concentrations, termed the groundwater contamination risk (GCR). Seven DRASTIC parameters served as the inputs and GCR values as the outputs of the individual machine learning models, which were then fed into the fully optimized committee-based ANN predictive model. The correlation indicators demonstrated that the ELM and SVR models outperformed the MARS and M5 Tree models, by virtue of larger d and r values. The r and d metrics for the committee-based ANN multi-model in the testing phase were 0.8889 and 0.7913, respectively, revealing the superiority of the integrated (or ensemble) machine learning models over the original DRASTIC approach. The newly designed multi-model ensemble-based approach can be considered a pragmatic step for mapping groundwater contamination risks of multiple aquifer systems, yielding the high accuracy of the committee-based ANN model.
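
    A sketch of the committee idea under stated assumptions: scikit-learn has no ELM or MARS, so SVR, a regression tree, and an MLP stand in as base learners whose predictions feed a small ANN combiner. Data and hyperparameters are placeholders; in practice the base predictions fed to the combiner should come from cross-validation to avoid leakage.

      # Committee-based ANN combining the outputs of several base regressors.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.tree import DecisionTreeRegressor
      from sklearn.neural_network import MLPRegressor
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(1)
      X = rng.uniform(size=(300, 7))                 # seven DRASTIC parameters
      y = X @ rng.uniform(size=7) + 0.05 * rng.normal(size=300)  # risk index (GCR)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
      base = [SVR(C=10.0), DecisionTreeRegressor(max_depth=5),
              MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0)]
      P_tr = np.column_stack([m.fit(X_tr, y_tr).predict(X_tr) for m in base])
      P_te = np.column_stack([m.predict(X_te) for m in base])

      committee = MLPRegressor(hidden_layer_sizes=(5,), max_iter=5000, random_state=0)
      committee.fit(P_tr, y_tr)
      print("committee R^2:", committee.score(P_te, y_te))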

  9. Asymptotically extremal polynomials with respect to varying weights and application to Sobolev orthogonality

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2008-10-01

    We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e^(-φ(x)), giving a unified treatment for the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) cases. In addition, we provide a new proof for the bound of the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.

  10. A study of the orthogonal polynomials associated with the quantum harmonic oscillator on constant curvature spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vignat, C.; Lamberti, P. W.

    2009-10-15

    Recently, Cariñena et al. [Ann. Phys. 322, 434 (2007)] introduced a new family of orthogonal polynomials that appear in the wave functions of the quantum harmonic oscillator in two-dimensional constant curvature spaces. They are a generalization of the Hermite polynomials and will be called curved Hermite polynomials in the following. We show that these polynomials are naturally related to the relativistic Hermite polynomials introduced by Aldaya et al. [Phys. Lett. A 156, 381 (1991)], and thus are Jacobi polynomials. Moreover, we exhibit a natural bijection between the solutions of the quantum harmonic oscillator on negative curvature spaces and on positive curvature spaces. At last, we show a maximum entropy property for the ground states of these oscillators.

  11. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach

    NASA Astrophysics Data System (ADS)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most existing control design methods for discrete-time polynomial fuzzy systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The proposed control design guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  12. High-performance implementation of Chebyshev filter diagonalization for interior eigenvalue computations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pieper, Andreas; Kreutzer, Moritz; Alvermann, Andreas, E-mail: alvermann@physik.uni-greifswald.de

    2016-11-15

    We study Chebyshev filter diagonalization as a tool for the computation of many interior eigenvalues of very large sparse symmetric matrices. In this technique the subspace projection onto the target space of wanted eigenvectors is approximated with filter polynomials obtained from Chebyshev expansions of window functions. After the discussion of the conceptual foundations of Chebyshev filter diagonalization we analyze the impact of the choice of the damping kernel, search space size, and filter polynomial degree on the computational accuracy and effort, before we describe the necessary steps towards a parallel high-performance implementation. Because Chebyshev filter diagonalization avoids the need for matrix inversion it can deal with matrices and problem sizes that are presently not accessible with rational function methods based on direct or iterative linear solvers. To demonstrate the potential of Chebyshev filter diagonalization for large-scale problems of this kind we include as an example the computation of the 10^2 innermost eigenpairs of a topological insulator matrix with dimension 10^9 derived from quantum physics applications.

  13. Polynomial-time quantum algorithm for the simulation of chemical dynamics

    PubMed Central

    Kassal, Ivan; Jordan, Stephen P.; Love, Peter J.; Mohseni, Masoud; Aspuru-Guzik, Alán

    2008-01-01

    The computational cost of exact methods for quantum simulation using classical computers grows exponentially with system size. As a consequence, these techniques can be applied only to small systems. By contrast, we demonstrate that quantum computers could exactly simulate chemical reactions in polynomial time. Our algorithm uses the split-operator approach and explicitly simulates all electron-nuclear and interelectronic interactions in quadratic time. Surprisingly, this treatment is not only more accurate than the Born–Oppenheimer approximation but faster and more efficient as well, for all reactions with more than about four atoms. This is the case even though the entire electronic wave function is propagated on a grid with appropriately short time steps. Although the preparation and measurement of arbitrary states on a quantum computer is inefficient, here we demonstrate how to prepare states of chemical interest efficiently. We also show how to efficiently obtain chemically relevant observables, such as state-to-state transition probabilities and thermal reaction rates. Quantum computers using these techniques could outperform current classical computers with 100 qubits. PMID:19033207

  14. An Efficient numerical method to calculate the conductivity tensor for disordered topological matter

    NASA Astrophysics Data System (ADS)

    Garcia, Jose H.; Covaci, Lucian; Rappoport, Tatiana G.

    2015-03-01

    We propose a new efficient numerical approach to calculate the conductivity tensor in solids. We use a real-space implementation of the Kubo formalism where both diagonal and off-diagonal conductivities are treated on the same footing. We adopt a formulation of the Kubo theory known as the Bastin formula and expand the Green's functions involved in terms of Chebyshev polynomials using the kernel polynomial method. Within this method, all the computational effort is in the calculation of the expansion coefficients. It also has the advantage of obtaining both conductivities in a single calculation step and for various values of temperature and chemical potential, capturing the topology of the band structure. Our numerical technique is very general and is suitable for the calculation of transport properties of disordered systems. We analyze how the method's accuracy varies with the number of moments used in the expansion and illustrate our approach by calculating the transverse conductivity of different topological systems. T.G.R., J.H.G. and L.C. acknowledge the Brazilian agencies CNPq, FAPERJ and INCT de Nanoestruturas de Carbono, and the Flemish Science Foundation for financial support.
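
    A minimal kernel-polynomial sketch, assuming the density of states rather than the full conductivity tensor (the Kubo-Bastin expansion uses the same Chebyshev moment machinery with two current operators): moments via the three-term recurrence, a stochastic trace, and Jackson damping. All sizes and the test Hamiltonian are illustrative.

      # KPM: damped Chebyshev moments of a sparse Hamiltonian for the DOS.
      import numpy as np
      import scipy.sparse as sp

      def kpm_dos_moments(H, n_moments=256, n_vec=10, scale=1.0):
          """H: sparse Hermitian matrix with spectrum inside (-scale, scale)."""
          n = H.shape[0]
          Ht = H / scale                           # rescale spectrum into (-1, 1)
          mu = np.zeros(n_moments)
          rng = np.random.default_rng(0)
          for _ in range(n_vec):                   # stochastic trace estimate
              v0 = rng.choice([-1.0, 1.0], size=n)
              t0, t1 = v0, Ht @ v0
              mu[0] += v0 @ t0
              mu[1] += v0 @ t1
              for m in range(2, n_moments):
                  t0, t1 = t1, 2 * (Ht @ t1) - t0  # Chebyshev recurrence
                  mu[m] += v0 @ t1
          mu /= n_vec * n
          m = np.arange(n_moments)                 # Jackson kernel damping
          g = ((n_moments - m + 1) * np.cos(np.pi * m / (n_moments + 1)) +
               np.sin(np.pi * m / (n_moments + 1)) / np.tan(np.pi / (n_moments + 1)))
          return mu * g / (n_moments + 1)          # damped moments for reconstruction

      # Example: 1D tight-binding chain, spectrum in [-2, 2].
      H = sp.diags([np.ones(999), np.ones(999)], [-1, 1], format="csr")
      mu_damped = kpm_dos_moments(H, scale=2.5)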

  15. Multi-dimensional Rankings, Program Termination, and Complexity Bounds of Flowchart Programs

    NASA Astrophysics Data System (ADS)

    Alias, Christophe; Darte, Alain; Feautrier, Paul; Gonnord, Laure

    Proving the termination of a flowchart program can be done by exhibiting a ranking function, i.e., a function from the program states to a well-founded set, which strictly decreases at each program step. A standard method to automatically generate such a function is to compute invariants for each program point and to search for a ranking in a restricted class of functions that can be handled with linear programming techniques. Previous algorithms based on affine rankings either are applicable only to simple loops (i.e., single-node flowcharts) and rely on enumeration, or are not complete in the sense that they are not guaranteed to find a ranking in the class of functions they consider, if one exists. Our first contribution is to propose an efficient algorithm to compute ranking functions: It can handle flowcharts of arbitrary structure, the class of candidate rankings it explores is larger, and our method, although greedy, is provably complete. Our second contribution is to show how to use the ranking functions we generate to get upper bounds for the computational complexity (number of transitions) of the source program. This estimate is a polynomial, which means that we can handle programs with more than linear complexity. We applied the method on a collection of test cases from the literature. We also show the links and differences with previous techniques based on the insertion of counters.

  16. Flame: A Flexible Data Reduction Pipeline for Near-Infrared and Optical Spectroscopy

    NASA Astrophysics Data System (ADS)

    Belli, Sirio; Contursi, Alessandra; Davies, Richard I.

    2018-05-01

    We present flame, a pipeline for reducing spectroscopic observations obtained with multi-slit near-infrared and optical instruments. Because of its flexible design, flame can be easily applied to data obtained with a wide variety of spectrographs. The flexibility is due to a modular architecture, which allows changes and customizations to the pipeline, and relegates the instrument-specific parts to a single module. At the core of the data reduction is the transformation from observed pixel coordinates (x, y) to rectified coordinates (λ, γ). This transformation consists of the polynomial functions λ(x, y) and γ(x, y) that are derived from arc or sky emission lines and slit edge tracing, respectively. The use of 2D transformations allows one to wavelength-calibrate and rectify the data using just one interpolation step. Furthermore, the γ(x, y) transformation also includes the spatial misalignment between frames, which can be measured from a reference star observed simultaneously with the science targets. The misalignment can then be fully corrected during the rectification, without having to further resample the data. Sky subtraction can be performed via nodding and/or modeling of the sky spectrum; the combination of the two methods typically yields the best results. We illustrate the pipeline by showing examples of data reduction for a near-infrared instrument (LUCI at the Large Binocular Telescope) and an optical one (LRIS at the Keck telescope).
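
    A sketch of the 2D transformation step under simplifying assumptions: fit λ(x, y) as a low-order bivariate polynomial to arc-line detections by least squares. The degree and function names are illustrative, not flame's actual interface.

      # Fit a bivariate polynomial wavelength solution lambda(x, y).
      import numpy as np

      def fit_lambda_xy(x, y, lam, deg=3):
          """x, y: pixel coordinates of arc-line detections; lam: known wavelengths."""
          terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
          A = np.column_stack([x**i * y**j for i, j in terms])
          coef, *_ = np.linalg.lstsq(A, lam, rcond=None)
          def lam_of(xq, yq):
              # evaluate the fitted 2D polynomial at arbitrary pixel coordinates
              return sum(c * xq**i * yq**j for c, (i, j) in zip(coef, terms))
          return lam_of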

  17. Multi-off-grid methods in multi-step integration of ordinary differential equations

    NASA Technical Reports Server (NTRS)

    Beaudet, P. R.

    1974-01-01

    Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.

  18. Hadamard Factorization of Stable Polynomials

    NASA Astrophysics Data System (ADS)

    Loredo-Villalobos, Carlos Arturo; Aguirre-Hernández, Baltazar

    2011-11-01

    The stable (Hurwitz) polynomials are important in the study of systems of differential equations and control theory (see [7] and [19]). A property of these polynomials is related to the Hadamard product. Consider two polynomials p, q ∈ R[x]: p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 and q(x) = b_m x^m + b_{m-1} x^{m-1} + ... + b_1 x + b_0. The Hadamard product p × q is defined as (p × q)(x) = a_k b_k x^k + a_{k-1} b_{k-1} x^{k-1} + ... + a_1 b_1 x + a_0 b_0, where k = min(m, n). Some results (see [16]) show that if p, q ∈ R[x] are stable polynomials then p × q is stable as well, i.e. the Hadamard product is closed; however, the converse is not always true: not every stable polynomial of degree n > 4 has a factorization into two stable polynomials of the same degree (see [15]). In this work we give some conditions for the existence of a Hadamard factorization of stable polynomials.
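
    A quick numerical illustration of the closure property (not the paper's factorization conditions), with coefficients stored lowest degree first and stability checked via the roots; the example polynomials are arbitrary.

      # Hadamard product of two Hurwitz polynomials, checked numerically.
      import numpy as np

      def hadamard(p, q):
          # coefficient-wise product, truncated to k = min(deg p, deg q)
          k = min(len(p), len(q))
          return [p[i] * q[i] for i in range(k)]

      def is_hurwitz(p):
          roots = np.roots(p[::-1])       # np.roots expects highest degree first
          return bool(np.all(roots.real < 0))

      p = [2, 3, 1]                       # (s + 1)(s + 2) = s^2 + 3s + 2
      q = [12, 7, 1]                      # (s + 3)(s + 4) = s^2 + 7s + 12
      print(is_hurwitz(p), is_hurwitz(q), is_hurwitz(hadamard(p, q)))  # True True True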

  19. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expanded coefficients is also given. A simple approach in order to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.

  20. Nonlinear fluctuations-induced rate equations for linear birth-death processes

    NASA Astrophysics Data System (ADS)

    Honkonen, J.

    2008-05-01

    The Fock-space approach to the solution of master equations for one-step Markov processes is reconsidered. It is shown that in birth-death processes with an absorbing state at the bottom of the occupation-number spectrum and an occupation-number-independent annihilation probability, occupation-number fluctuations give rise to rate equations drastically different from the polynomial form typical of birth-death processes. The fluctuation-induced rate equations with the characteristic exponential terms are derived for Mikhailov’s ecological model and Lanchester’s model of modern warfare.

  1. Genetic analysis of partial egg production records in Japanese quail using random regression models.

    PubMed

    Abou Khadiga, G; Mahmoud, B Y F; Farahat, G S; Emam, A M; El-Full, E A

    2017-08-01

    The main objectives of this study were to detect the most appropriate random regression model (RRM) to fit the data of monthly egg production in 2 lines (selected and control) of Japanese quail and to test the consistency of different criteria of model choice. Data from 1,200 female Japanese quails for the first 5 months of egg production from 4 consecutive generations of an egg line selected for egg production in the first month (EP1) were analyzed. Eight RRMs with different orders of Legendre polynomials were compared to determine the proper model for analysis. All criteria of model choice suggested that the adequate model included the second-order Legendre polynomials for fixed effects, and the third-order for additive genetic effects and permanent environmental effects. The predictive ability of the best model was the highest among all models (ρ = 0.987). According to the best model fitted to the data, estimates of heritability were relatively low to moderate (0.10 to 0.17) and showed a descending pattern from the first to the fifth month of production. A similar pattern was observed for permanent environmental effects, with greater estimates in the first (0.36) and second (0.23) months of production than the heritability estimates. Genetic correlations between separate production periods were higher (0.18 to 0.93) than their phenotypic counterparts (0.15 to 0.87). The superiority of the selected line over the control was observed through significant (P < 0.05) linear contrast estimates. Significant (P < 0.05) estimates of the covariate effect (age at sexual maturity) showed a decreasing pattern, with greater impact on egg production at earlier ages (first and second months) than later ones. A methodology based on random regression animal models can be recommended for genetic evaluation of egg production in Japanese quail.

  2. Random Regression Models Are Suitable to Substitute the Traditional 305-Day Lactation Model in Genetic Evaluations of Holstein Cattle in Brazil

    PubMed Central

    Padilha, Alessandro Haiduck; Cobuci, Jaime Araujo; Costa, Cláudio Napolis; Neto, José Braccini

    2016-01-01

    The aim of this study was to compare two random regression models (RRM) fitted by fourth (RRM4) and fifth-order Legendre polynomials (RRM5) with a lactation model (LM) for evaluating Holstein cattle in Brazil. Two datasets with the same animals were prepared for this study. To apply test-day RRM and LMs, 262,426 test day records and 30,228 lactation records covering 305 days were prepared, respectively. The lowest values of Akaike’s information criterion, Bayesian information criterion, and estimates of the maximum of the likelihood function (−2LogL) were for RRM4. Heritability for 305-day milk yield (305MY) was 0.23 (RRM4), 0.24 (RRM5), and 0.21 (LM). Heritability, additive genetic and permanent environmental variances of test days on days in milk were from 0.16 to 0.27, from 3.76 to 6.88 and from 11.12 to 20.21, respectively. Additive genetic correlations between test days ranged from 0.20 to 0.99. Permanent environmental correlations between test days were between 0.07 and 0.99. Standard deviations of average estimated breeding values (EBVs) for 305MY from RRM4 and RRM5 were from 11% to 30% higher for bulls and around 28% higher for cows than those in LM. Rank correlations between RRM EBVs and LM EBVs ranged from 0.86 to 0.96 for bulls and from 0.80 to 0.87 for cows. The average percentage gain in reliability of EBVs for 305-day yield increased from 4% to 17% for bulls and from 23% to 24% for cows when reliability of EBVs from the RRMs was compared to that from the LM. The random regression model fitted by fourth-order Legendre polynomials is recommended for genetic evaluations of Brazilian Holstein cattle because of the higher reliability in the estimation of breeding values. PMID:26954176

  3. Random Regression Models Are Suitable to Substitute the Traditional 305-Day Lactation Model in Genetic Evaluations of Holstein Cattle in Brazil.

    PubMed

    Padilha, Alessandro Haiduck; Cobuci, Jaime Araujo; Costa, Cláudio Napolis; Neto, José Braccini

    2016-06-01

    The aim of this study was to compare two random regression models (RRM) fitted by fourth (RRM4) and fifth-order Legendre polynomials (RRM5) with a lactation model (LM) for evaluating Holstein cattle in Brazil. Two datasets with the same animals were prepared for this study. To apply test-day RRM and LMs, 262,426 test day records and 30,228 lactation records covering 305 days were prepared, respectively. The lowest values of Akaike's information criterion, Bayesian information criterion, and estimates of the maximum of the likelihood function (-2LogL) were for RRM4. Heritability for 305-day milk yield (305MY) was 0.23 (RRM4), 0.24 (RRM5), and 0.21 (LM). Heritability, additive genetic and permanent environmental variances of test days on days in milk were from 0.16 to 0.27, from 3.76 to 6.88 and from 11.12 to 20.21, respectively. Additive genetic correlations between test days ranged from 0.20 to 0.99. Permanent environmental correlations between test days were between 0.07 and 0.99. Standard deviations of average estimated breeding values (EBVs) for 305MY from RRM4 and RRM5 were from 11% to 30% higher for bulls and around 28% higher for cows than those in LM. Rank correlations between RRM EBVs and LM EBVs ranged from 0.86 to 0.96 for bulls and from 0.80 to 0.87 for cows. The average percentage gain in reliability of EBVs for 305-day yield increased from 4% to 17% for bulls and from 23% to 24% for cows when reliability of EBVs from the RRMs was compared to that from the LM. The random regression model fitted by fourth-order Legendre polynomials is recommended for genetic evaluations of Brazilian Holstein cattle because of the higher reliability in the estimation of breeding values.

  4. Association of overjet and overbite with esthetic impairments of oral health-related quality of life.

    PubMed

    Sierwald, Ira; John, Mike T; Schierz, Oliver; Jost-Brinkmann, Paul-Georg; Reissmann, Daniel R

    2015-09-01

    Esthetics is an important part of quality of life and a frequent reason for seeking orthodontic treatment. The aim of this study was to investigate whether esthetic impairments related to overjet and overbite can be assessed with an established oral health-related quality of life instrument. Data from 1968 participants (age: 16-90 years; 69.8% female) from three German surveys were analyzed. Esthetic impairments of oral health-related quality of life were measured with four questions of the Oral Health Impact Profile (OHIP) which capture esthetic aspects of oral health-related quality of life. Higher values represent greater esthetic impairment (sum score: 0-16). Overbite and overjet values were categorized (≤ -1 mm, 0-1 mm, 2-3 mm, 4-5 mm, ≥ 6 mm). The specific impact of each category on esthetic impairment, relative to the reference category (2-3 mm), was calculated in linear regression analyses. The type of relationship and the specific impact of overbite and overjet were evaluated in regression analyses with fractional polynomials. Overbite ranged from -5 to 15 mm (mean: 3.2 mm) and overjet from -7 to 19 mm (mean: 3.1 mm). Both an increase and a decrease in overjet relative to the reference category resulted in more esthetic impairments of oral health-related quality of life, although in this model only the effect of increased overjet was statistically significant (4-5 mm: +0.4 OHIP points; ≥ 6 mm: +0.9 OHIP points). In the regression analysis with fractional polynomials, both an increase and a decrease in overjet resulted in more esthetic impairments, characterized by a U-shaped relationship. No association could be verified for overbite. A substantial increase or decrease of overjet from the reference values is associated with esthetic impairments of oral health-related quality of life, whereas the extent of overbite seems to have no impact on esthetics.
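
    A sketch of the first-degree fractional-polynomial machinery, assuming a positive covariate (a covariate such as overjet, which can be negative, would need shifting) and synthetic data: each candidate power is fitted by least squares and the one with the smallest residual sum of squares is kept.

      # First-degree fractional polynomial: pick the best power from the standard set.
      import numpy as np

      POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]        # conventional FP power set

      def fp_transform(x, p):
          return np.log(x) if p == 0 else x ** p      # x must be positive

      def best_fp1(x, y):
          best = None
          for p in POWERS:
              A = np.column_stack([np.ones_like(x), fp_transform(x, p)])
              coef, rss, *_ = np.linalg.lstsq(A, y, rcond=None)
              rss = rss[0] if len(rss) else ((y - A @ coef) ** 2).sum()
              if best is None or rss < best[0]:
                  best = (rss, p, coef)
          return best                                  # (rss, power, coefficients)

      x = np.linspace(0.5, 10, 200)
      y = 2 - 3 * np.log(x) + 0.1 * np.random.randn(200)
      print(best_fp1(x, y)[1])                         # selects power 0 (log)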

  5. Relationship between age and elite marathon race time in world single age records from 5 to 93 years

    PubMed Central

    2014-01-01

    Background The aims of the study were (i) to investigate the relationship between elite marathon race times and age in 1-year intervals by using the world single age records in marathon running from 5 to 93 years and (ii) to evaluate the sex difference in elite marathon running performance with advancing age. Methods World single age records in marathon running in 1-year intervals for women and men were analysed regarding changes across age, using linear and non-linear regression analyses for each age for women and men. Results The relationship between elite marathon race time and age was non-linear (i.e. a 4th-degree polynomial regression) for women and men. The curve was U-shaped, where performance improved from 5 to ~20 years. From 5 to ~15 years, boys and girls performed very similarly. Between ~20 and ~35 years, performance was quite linear, but started to decrease at the age of ~35 years in a curvilinear manner with increasing age in both women and men. The sex difference increased non-linearly (i.e. a 7th-degree polynomial regression) from 5 to ~20 years, remained unchanged at ~20 min from ~20 to ~50 years and increased thereafter. The sex difference was lowest (7.5%, 10.5 min) at the age of 49 years. Conclusion Elite marathon race times improved from 5 to ~20 years, remained linear between ~20 and ~35 years, and started to increase at the age of ~35 years in a curvilinear manner with increasing age in both women and men. The sex difference in elite marathon race time increased non-linearly and was lowest at the age of ~49 years. PMID:25120915
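
    To illustrate the fitting step, the sketch below fits a 4th-degree polynomial to invented age/time pairs and locates the age of fastest predicted time; the data are a stand-in, not the actual world single age records.

      # 4th-degree polynomial regression of record time on age (synthetic data).
      import numpy as np

      age = np.arange(5, 94)
      time_min = 400 - 14 * age + 0.35 * age**2 + 20 * np.random.randn(age.size)

      coef = np.polyfit(age, time_min, deg=4)      # 4th-degree polynomial regression
      fit = np.polyval(coef, age)
      best_age = age[np.argmin(fit)]               # age of fastest predicted time
      print("fastest predicted marathon time near age", best_age)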

  6. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials such as (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind and (4) the Chebyshev polynomials of the fourth kind. Maximum absolute error and root mean square error are calculated for the illustrated examples and presented in tables for comparison. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.

  7. Percolation critical polynomial as a graph invariant

    DOE PAGES

    Scullard, Christian R.

    2012-10-18

    Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 x 10^-7.

  8. Wavefront correction and high-resolution in vivo OCT imaging with an objective integrated multi-actuator adaptive lens.

    PubMed

    Bonora, Stefano; Jian, Yifan; Zhang, Pengfei; Zam, Azhar; Pugh, Edward N; Zawadzki, Robert J; Sarunic, Marinko V

    2015-08-24

    Adaptive optics is rapidly transforming microscopy and high-resolution ophthalmic imaging. The adaptive elements commonly used to control optical wavefronts are liquid crystal spatial light modulators and deformable mirrors. We introduce a novel Multi-actuator Adaptive Lens that can correct aberrations to high order, and which has the potential to increase the spread of adaptive optics to many new applications by simplifying its integration with existing systems. Our method combines an adaptive lens with an imaged-based optimization control that allows the correction of images to the diffraction limit, and provides a reduction of hardware complexity with respect to existing state-of-the-art adaptive optics systems. The Multi-actuator Adaptive Lens design that we present can correct wavefront aberrations up to the 4th order of the Zernike polynomial characterization. The performance of the Multi-actuator Adaptive Lens is demonstrated in a wide field microscope, using a Shack-Hartmann wavefront sensor for closed loop control. The Multi-actuator Adaptive Lens and image-based wavefront-sensorless control were also integrated into the objective of a Fourier Domain Optical Coherence Tomography system for in vivo imaging of mouse retinal structures. The experimental results demonstrate that the insertion of the Multi-actuator Objective Lens can generate arbitrary wavefronts to correct aberrations down to the diffraction limit, and can be easily integrated into optical systems to improve the quality of aberrated images.

  9. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations and recurrence relations are derived.

  10. Unconditional reference values for the amniotic fluid index measurement between 26w0d and 41w6d of gestation in low-risk pregnancies.

    PubMed

    Peixoto, Alberto Borges; Caldas, Taciana Mara Rodrigues da Cunha; Martins, Wellington P; Da Silva Costa, Fabricio; Araujo Júnior, Edward

    2016-10-01

    To establish reference values for the amniotic fluid index (AFI) measurement between 26w0d and 41w6d of gestation in a Brazilian population. We performed a cross-sectional study with 1984 low-risk singleton pregnant women between 26w0d and 41w6d of gestation. AFI was measured according to the technique proposed by Phelan et al. The maternal abdomen was divided into four quadrants using the umbilicus and linea nigra as landmarks. The single vertical pocket in each quadrant was measured, and the AFI was generated as the sum of these four values, excluding umbilical cord and fetal parts. All ultrasound exams were performed by only two experienced examiners. AFI was expressed as median, interquartile range, mean and range in each gestational age (GA) interval. Polynomial regressions were performed to obtain the best fit, adjusted by the determination coefficient (R²). The mean AFI ranged from 14.0 ± 4.1 cm (range, 9.7-14.0) at 26w0d to 8.3 ± 4.7 cm (range, 1.9-16.5) at 41w6d. The best polynomial regression fit was first-degree: AFI = 16.29 - 0.125*GA (R² = 0.01). According to the scatterplot, AFI values practically did not vary with advancing GA. Reference values for the AFI measurement between 26w0d and 41w6d of gestation in a low-risk Brazilian population were established.

  11. New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.

    PubMed

    Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María

    2017-08-01

    In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables and to validate the methodology by calculating the errors associated with the measurements. This methodology is based on polynomial regression equations, and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. In order to perform the validation process, simulated worn modern human molars were employed. The associated errors of the measurements were also estimated applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, in comparison with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error significantly improves on the results of other methodologies, both in the interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. This new methodology can be easily exported to other modern human populations, the human fossil record and forensic sciences.

  12. Three Gorges Dam: polynomial regression modeling of water level and the density of schistosome-transmitting snails Oncomelania hupensis.

    PubMed

    Yang, Ya; Gao, Jianchuan; Cheng, Wanting; Pan, Xiang; Yang, Yu; Chen, Yue; Dai, Qingqing; Zhu, Lan; Zhou, Yibiao; Jiang, Qingwu

    2018-03-14

    Schistosomiasis remains a major public health concern in China. Oncomelania hupensis (O. hupensis) is the sole intermediate host of Schistosoma japonicum, and changes in its distribution and density influence the endemicity of S. japonicum. The Three Gorges Dam (TGD) has substantially changed the downstream water levels of the dam. This study investigated the quantitative relationship between flooding duration and the density of the snail population. Two bottomlands without any snail control measures were selected in Yueyang City, Hunan Province. Data on the density of the snail population and water level in both spring and autumn were collected for the period 2009-2015. Polynomial regression analysis was applied to explore the relationship between flooding duration and the density of the snail population. Data showed a convex relationship between spring snail density and flooding duration of the previous year (adjusted R², aR² = 0.61). The spring snail density remained low when the flooding duration was fewer than 50 days in the previous year, was highest when the flooding duration was 123 days, and decreased thereafter. There was a similar convex relationship between autumn snail density and flooding duration of the current year (aR² = 0.77). The snail density was low when the flooding duration was fewer than 50 days and was highest when the flooding duration was 139 days. There was a convex relationship between flooding duration and the spring or autumn snail density. The snail density was highest when flooding lasted about 4 to 5 months.
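
    A sketch of the convex fit: quadratic polynomial regression of density on flooding duration, with the parabola's vertex giving the peak duration. The numbers are illustrative, not the Yueyang field data.

      # Quadratic polynomial regression with vertex (peak) location.
      import numpy as np

      days = np.array([20, 40, 60, 80, 100, 120, 140, 160, 180])
      density = np.array([0.1, 0.3, 1.1, 2.4, 3.6, 4.2, 4.0, 3.1, 1.8])

      a, b, c = np.polyfit(days, density, deg=2)   # density = a*d^2 + b*d + c
      peak_days = -b / (2 * a)                     # vertex of the parabola
      print(f"density peaks near {peak_days:.0f} days of flooding")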

  13. Multi-step prediction for influenza outbreak by an adjusted long short-term memory.

    PubMed

    Zhang, J; Nawata, K

    2018-05-01

    Influenza results in approximately 3-5 million annual cases of severe illness and 250,000-500,000 deaths. We urgently need an accurate multi-step-ahead time-series forecasting model to help hospitals dynamically assign beds to influenza patients during the annually varying influenza season, and to help pharmaceutical companies formulate flexible plans for manufacturing the yearly updated influenza vaccine. In this study, we utilised four different multi-step prediction algorithms within the long short-term memory (LSTM) framework. The results showed that implementing multiple single-output predictions in a six-layer LSTM structure achieved the best accuracy. The mean absolute percentage errors from two- to 13-step-ahead prediction for the US influenza-like illness rates were all below 15%, with an average of 12.930%. To the best of our knowledge, this is the first time that LSTM has been applied and refined to perform multi-step-ahead prediction for influenza outbreaks. Hopefully, this modelling methodology can be applied in other countries and thereby help prevent and control influenza worldwide.
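
    The "multiple single-output" strategy mentioned above trains a separate one-step model for each forecasting horizon. As an illustration only, the sketch below implements this direct multi-step strategy with plain linear regressors standing in for the paper's six-layer LSTM; the window length and the synthetic ILI series are assumptions.

    ```python
    # Direct multi-step forecasting: one single-output model per horizon.
    # Linear regressors stand in for the paper's LSTM; the data are synthetic.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    ili = 10 + 5 * np.sin(np.arange(300) * 2 * np.pi / 52) + rng.normal(0, 0.5, 300)

    window = 26                                       # assumed look-back window
    models = {}
    for h in range(2, 14):                            # 2- to 13-step-ahead, as in the study
        n = len(ili) - window - h + 1
        X = np.array([ili[t:t + window] for t in range(n)])
        y = np.array([ili[t + window + h - 1] for t in range(n)])
        models[h] = LinearRegression().fit(X, y)

    last_window = ili[-window:].reshape(1, -1)
    forecast = {h: float(m.predict(last_window)[0]) for h, m in models.items()}
    print(forecast)
    ```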

  14. Laguerre-Freud Equations for the Recurrence Coefficients of Some Discrete Semi-Classical Orthogonal Polynomials of Class Two

    NASA Astrophysics Data System (ADS)

    Hounga, C.; Hounkonnou, M. N.; Ronveaux, A.

    2006-10-01

    In this paper, we give Laguerre-Freud equations for the recurrence coefficients of discrete semi-classical orthogonal polynomials of class two, when the polynomials in the Pearson equation are of the same degree. The case of generalized Charlier polynomials is also presented.

  15. The Gibbs Phenomenon for Series of Orthogonal Polynomials

    ERIC Educational Resources Information Center

    Fay, T. H.; Kloppers, P. Hendrik

    2006-01-01

    This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…

  16. Determinants with orthogonal polynomial entries

    NASA Astrophysics Data System (ADS)

    Ismail, Mourad E. H.

    2005-06-01

    We use moment representations of orthogonal polynomials to evaluate the corresponding Hankel determinants formed by the orthogonal polynomials. We also study the Hankel determinants which start with p_n in the top left-hand corner. As examples, we evaluate the Hankel determinants whose entries are q-ultraspherical or Al-Salam-Chihara polynomials.

  17. Improved multi-objective ant colony optimization algorithm and its application in complex reasoning

    NASA Astrophysics Data System (ADS)

    Wang, Xinqing; Zhao, Yang; Wang, Dong; Zhu, Huijie; Zhang, Qing

    2013-09-01

    The problem of fault reasoning has aroused great concern in scientific and engineering fields. However, fault investigation and reasoning for a complex system is not a simple reasoning decision-making problem; it has become a typical multi-constraint and multi-objective reticulate optimization decision-making problem under many influencing factors and constraints. So far, little research has been carried out in this field. This paper transforms the fault reasoning problem of a complex system into a path-searching problem from known symptoms to fault causes. Three optimization objectives are considered simultaneously: maximum average fault probability, maximum average importance, and minimum average complexity of test. Under the constraints of both known symptoms and the causal relationships among different components, a multi-objective optimization mathematical model is set up, taking the minimization of the cost of fault reasoning as the target function. Since the problem is non-deterministic polynomial-hard (NP-hard), a modified multi-objective ant colony algorithm is proposed, in which a reachability matrix is set up to constrain the feasible search nodes of the ants, and a new pseudo-random-proportional rule and a pheromone adjustment mechanism are constructed to balance conflicts between the optimization objectives. Finally, a Pareto optimal set is acquired. Evaluation functions based on the validity and tendency of reasoning paths are defined to optimize the noninferior set, through which the final fault causes can be identified according to decision-making demands, thus realizing fault reasoning for the multi-constraint and multi-objective complex system. Reasoning results demonstrate that the improved multi-objective ant colony optimization (IMACO) can realize reasoning and locate fault positions precisely by solving the multi-objective fault diagnosis model, which provides a new method for multi-constraint and multi-objective fault diagnosis and reasoning of complex systems.
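
    The pseudo-random-proportional rule referred to above is the classic ant-colony-system state-transition rule: with probability q0 an ant greedily picks the best-scoring feasible node, otherwise it samples by roulette wheel. A minimal sketch follows; tau, eta, q0 and beta are generic assumptions, not the paper's tuned values.

    ```python
    # Sketch of a pseudo-random-proportional state-transition rule as used
    # in ant colony system variants. tau = pheromone, eta = heuristic
    # desirability; q0 and beta are assumed tuning parameters.
    import numpy as np

    def choose_next(tau, eta, feasible, q0=0.9, beta=2.0,
                    rng=np.random.default_rng()):
        scores = tau[feasible] * eta[feasible] ** beta
        if rng.random() < q0:                   # exploit: greedy choice
            return feasible[int(np.argmax(scores))]
        p = scores / scores.sum()               # explore: roulette-wheel selection
        return rng.choice(feasible, p=p)

    tau = np.ones(5)
    eta = np.array([0.2, 0.9, 0.4, 0.7, 0.1])
    print(choose_next(tau, eta, feasible=np.array([1, 2, 3])))
    ```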

  18. Physical activity monitoring in COPD: compliance and associations with clinical characteristics in a multicenter study.

    PubMed

    Waschki, Benjamin; Spruit, Martijn A; Watz, Henrik; Albert, Paul S; Shrikrishna, Dinesh; Groenen, Miriam; Smith, Cayley; Man, William D-C; Tal-Singer, Ruth; Edwards, Lisa D; Calverley, Peter M A; Magnussen, Helgo; Polkey, Michael I; Wouters, Emiel F M

    2012-04-01

    Little is known about COPD patients' compliance with physical activity monitoring and how activity relates to disease characteristics in a multi-center setting. In a prospective study at three Northern European sites, physical activity and clinical disease characteristics were measured in 134 COPD patients (GOLD stage II-IV; BODE index 0-9) and 46 controls. Wearing time, steps per day, and the physical activity level (PAL) were measured by a multisensory armband over a period of 6 consecutive days (in total, 144 h). A valid measurement period was defined as ≥22 h wearing time a day on at least 5 days. The median wearing time was 142 h:17 min (99%), 141 h:1 min (98%), and 142 h:24 min (99%), respectively, in the three centres. A valid measurement period was reached in 94%, 97%, and 94% of the patients and did not differ across sites (P = 0.53). The amount of physical activity did not differ across sites (mean steps per day, 4725 ± 3212, P = 0.58; mean PAL, 1.45 ± 0.20, P = 0.48). Multivariate linear regression analyses revealed significant associations of FEV1, 6-min walk distance, quadriceps strength, fibrinogen, health status, and dyspnoea with both steps per day and PAL. Previously unrecognized correlates of activity were grade of fatigue, degree of emphysema, and exacerbation rate. The excellent compliance with wearing a physical activity monitor irrespective of study site, and the consistent associations with relevant disease characteristics, support the use of activity monitoring as a valid outcome in multi-center studies. Copyright © 2011 Elsevier Ltd. All rights reserved.

  19. Reliability of third molar development for age estimation in Gujarati population: A comparative study

    PubMed Central

    Gandhi, Neha; Jain, Sandeep; Kumar, Manish; Rupakar, Pratik; Choyal, Kanaram; Prajapati, Seema

    2015-01-01

    Background: Age assessment may be a crucial step in postmortem profiling leading to confirmative identification. In children, Demirjian's method based on eight developmental stages was developed to determine maturity scores as a function of age and polynomial functions to determine age as a function of score. Aim: To evaluate the reliability of age estimation using Demirjian's eight-teeth method, following the French maturity scores and an Indian-specific formula, from the developmental stages of the third molar on orthopantomograms. Materials and Methods: Dental panoramic tomograms from 30 subjects each of known chronological age and sex were collected and evaluated according to Demirjian's criteria. Age calculations were performed using Demirjian's formula and the Indian formula. Statistical analysis was performed using the Chi-square and ANOVA tests, and the P values obtained were statistically significant. Results: There was an average underestimation of age with both the Indian and Demirjian's formulas. The mean absolute error was lower using the Indian formula; hence, it can be applied for age estimation in the present Gujarati population. Also, females achieved dental maturity earlier than males; thus, completion of dental development is attained earlier in females. Conclusion: Greater accuracy can be obtained if population-specific formulas considering ethnic and environmental variation are derived by performing regression analysis. PMID:26005298

  20. Fredholm and Wronskian representations of solutions to the KPI equation and multi-rogue waves

    NASA Astrophysics Data System (ADS)

    Gaillard, Pierre

    2016-06-01

    We construct solutions to the Kadomtsev-Petviashvili equation (KPI) in terms of Fredholm determinants. We deduce solutions written as a quotient of Wronskians of order 2N. These solutions, called solutions of order N, depend on 2N - 1 parameters. When one of these parameters tends to zero, we obtain rational solutions of order N, expressed as a quotient of two polynomials of degree 2N(N + 1) in x, y, and t depending on 2N - 2 parameters. With this method we thus obtain an infinite hierarchy of solutions to the KPI equation.

  1. Use of Dirichlet distributions and orthogonal projection techniques for the fluctuation analysis of steady-state multivariate birth-death systems

    NASA Astrophysics Data System (ADS)

    Palombi, Filippo; Toti, Simona

    2015-05-01

    Approximate weak solutions of the Fokker-Planck equation are a useful tool for analyzing the equilibrium fluctuations of birth-death systems, as they provide quantitative knowledge lying between numerical simulations and exact analytic arguments. In this paper, we adapt the general mathematical formalism known as the Ritz-Galerkin method for partial differential equations to the Fokker-Planck equation with time-independent polynomial drift and diffusion coefficients on the simplex. We then show how the method works in two examples, namely the binary and multi-state voter models with zealots.

  2. Measurement of intrahepatic pressure during radiofrequency ablation in porcine liver.

    PubMed

    Kawamoto, Chiaki; Yamauchi, Atsushi; Baba, Yoko; Kaneko, Keiko; Yakabi, Koji

    2010-04-01

    To identify the most effective procedures to avoid increased intrahepatic pressure during radiofrequency ablation, we evaluated different ablation methods. Laparotomy was performed in 19 pigs. Intrahepatic pressure was monitored using an invasive blood pressure monitor. Radiofrequency ablation was performed as follows: single-step standard ablation; single-step at 30 W; single-step at 70 W; 4-step at 30 W; 8-step at 30 W; 8-step at 70 W; and cooled-tip. The array was fully deployed in single-step methods. In the multi-step methods, the array was gradually deployed in four or eight steps. With the cooled-tip, ablation was performed by increasing output by 10 W/min, starting at 40 W. Intrahepatic pressure was as follows: single-step standard ablation, 154.5 ± 30.9 mmHg; single-step at 30 W, 34.2 ± 20.0 mmHg; single-step at 70 W, 46.7 ± 24.3 mmHg; 4-step at 30 W, 42.3 ± 17.9 mmHg; 8-step at 30 W, 24.1 ± 18.2 mmHg; 8-step at 70 W, 47.5 ± 31.5 mmHg; and cooled-tip, 114.5 ± 16.6 mmHg. The radiofrequency ablation-induced area was spherical with single-step standard ablation, 4-step at 30 W, and 8-step at 30 W. Conversely, the ablated area was irregular with single-step at 30 W, single-step at 70 W, and 8-step at 70 W. The ablation time was significantly shorter for the multi-step method than for the single-step method. Increased intrahepatic pressure could be controlled using multi-step methods. From the shapes of the ablation area, 30-W 8-step expansions appear to be most suitable for radiofrequency ablation.

  3. From sequences to polynomials and back, via operator orderings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu

    2013-12-15

    Bender and Dunne ["Polynomials and operator orderings," J. Math. Phys. 29, 1727–1731 (1988)] showed that linear combinations of words q^k p^n q^(n-k), where p and q are subject to the relation qp - pq = i, may be expressed as a polynomial in the symbol z = (qp + pq)/2. Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.

  4. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands.

    PubMed

    Salem, Salem Ibrahim; Higa, Hiroto; Kim, Hyungjun; Kobayashi, Hiroshi; Oki, Kazuo; Oki, Taikan

    2017-07-31

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated by using simulated and measured datasets to understand their strengths and limitations. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporate linear regression provide the highest retrieval accuracy for the simulated dataset. Results from the simulated datasets reveal that the 3-band (3b) algorithms that incorporate the 665-nm band, the 680-nm band, and the band-tuning selection approach outperformed the other algorithms, with root mean square errors (RMSE) of 15.87 mg·m⁻³, 16.25 mg·m⁻³, and 19.05 mg·m⁻³, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particle (NAP) concentrations, shows that the 3b_tuning_QP and 3b_680_QP algorithms outperform the others in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multiple algorithms should be combined to reduce the error. Comparing the results of the measured and simulated datasets reveals that the algorithms that incorporate the 665-nm band outperform the other algorithms for the measured dataset (RMSE = 36.84 mg·m⁻³), while algorithms that incorporate the band-tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m⁻³).

  5. Assessment of Chlorophyll-a Algorithms Considering Different Trophic Statuses and Optimal Bands

    PubMed Central

    Higa, Hiroto; Kobayashi, Hiroshi; Oki, Kazuo

    2017-01-01

    Numerous algorithms have been proposed to retrieve chlorophyll-a concentrations in Case 2 waters; however, the retrieval accuracy is far from satisfactory. In this research, seven algorithms are assessed with different band combinations of multispectral and hyperspectral bands using linear (LN), quadratic polynomial (QP) and power (PW) regression approaches, resulting in altogether 43 algorithmic combinations. These algorithms are evaluated by using simulated and measured datasets to understand their strengths and limitations. Two simulated datasets comprising 500,000 reflectance spectra each, both based on wide ranges of inherent optical properties (IOPs), are generated for the calibration and validation stages. Results reveal that the regression approach (i.e., LN, QP, and PW) has more influence on the simulated dataset than on the measured one. The algorithms that incorporate linear regression provide the highest retrieval accuracy for the simulated dataset. Results from the simulated datasets reveal that the 3-band (3b) algorithms that incorporate the 665-nm band, the 680-nm band, and the band-tuning selection approach outperformed the other algorithms, with root mean square errors (RMSE) of 15.87 mg·m⁻³, 16.25 mg·m⁻³, and 19.05 mg·m⁻³, respectively. The spatial distribution of the best performing algorithms, for various combinations of chlorophyll-a (Chla) and non-algal particle (NAP) concentrations, shows that the 3b_tuning_QP and 3b_680_QP algorithms outperform the others in terms of minimum RMSE frequency of 33.19% and 60.52%, respectively. However, the two algorithms failed to accurately retrieve Chla for many combinations of Chla and NAP, particularly for low Chla and NAP concentrations. In addition, the spatial distribution emphasizes that no single algorithm can provide outstanding accuracy for Chla retrieval and that multiple algorithms should be combined to reduce the error. Comparing the results of the measured and simulated datasets reveals that the algorithms that incorporate the 665-nm band outperform the other algorithms for the measured dataset (RMSE = 36.84 mg·m⁻³), while algorithms that incorporate the band-tuning approach provide the highest retrieval accuracy for the simulated dataset (RMSE = 25.05 mg·m⁻³). PMID:28758984
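
    A minimal sketch of the quadratic-polynomial (QP) calibration step used in these comparisons follows; the two-band ratio, coefficients, and data are illustrative assumptions rather than the study's bands or values.

    ```python
    # Quadratic-polynomial (QP) calibration of a chlorophyll-a retrieval,
    # sketched for a hypothetical two-band reflectance ratio R(709)/R(665).
    import numpy as np

    rng = np.random.default_rng(1)
    ratio = rng.uniform(0.5, 2.5, 200)                  # simulated band ratio
    chla = 5 + 20 * ratio + 8 * ratio**2 + rng.normal(0, 3, 200)

    coeffs = np.polyfit(ratio, chla, deg=2)             # QP regression
    pred = np.polyval(coeffs, ratio)
    rmse = np.sqrt(np.mean((pred - chla) ** 2))
    print(f"coefficients: {coeffs}, RMSE = {rmse:.2f} mg m^-3")
    ```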

  6. More irregular eye shape in low myopia than in emmetropia.

    PubMed

    Tabernero, Juan; Schaeffel, Frank

    2009-09-01

    To improve the description of the peripheral eye shape in myopia and emmetropia by using a new method for continuous measurement of the peripheral refractive state. A scanning photorefractor was designed to record refractive errors in the vertical pupil meridian across the horizontal visual field (up to ±45°). The setup consists of a hot mirror that continuously projects the infrared light from a photoretinoscope under different angles of eccentricity into the eye. The movement of the mirror is controlled by two stepping motors. Refraction in a group of 17 emmetropic subjects and 11 myopic subjects (mean, -4.3 D; SD, 1.7) was measured without spectacle correction. For the analysis of eye shape, refractive error versus eccentricity angle was fitted with different polynomials (from second to tenth order). The new setup presents some important advantages over previous techniques: the subject does not have to change gaze during the measurements, and a continuous profile is obtained rather than discrete points. There was a significant difference in the fitting errors between the subjects with myopia and those with emmetropia. Tenth-order polynomials were required in myopic subjects to achieve a quality of fit similar to that in emmetropic subjects fitted with only sixth-order polynomials. Apparently, the peripheral shape of the myopic eye is more "bumpy." A new setup is presented for obtaining continuous peripheral refraction profiles. It was found that the peripheral retinal shape is more irregular even in only moderately myopic eyes, perhaps because the sclera loses some rigidity even at this early stage of myopia.
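
    The order-selection comparison described above (sixth- versus tenth-order fits) can be sketched in a few lines; the synthetic refraction profile and noise level below are assumptions, not the study's measurements.

    ```python
    # Compare fit quality of increasing polynomial orders to a peripheral
    # refraction profile, echoing the 6th- vs 10th-order comparison above.
    import numpy as np

    rng = np.random.default_rng(2)
    angle = np.linspace(-45, 45, 91)            # eccentricity (degrees)
    refraction = -4 + 0.002 * angle**2 + rng.normal(0, 0.15, angle.size)

    for order in (2, 4, 6, 8, 10):
        coeffs = np.polyfit(angle, refraction, order)
        rms = np.sqrt(np.mean((np.polyval(coeffs, angle) - refraction) ** 2))
        print(f"order {order:2d}: RMS fit error = {rms:.3f} D")
    ```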

  7. Extending Romanovski polynomials in quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesne, C.

    2013-12-15

    Some extensions of the (third-class) Romanovski polynomials (also called Romanovski/pseudo-Jacobi polynomials), which appear in bound-state wavefunctions of rationally extended Scarf II and Rosen-Morse I potentials, are considered. For the former potentials, the generalized polynomials satisfy a finite orthogonality relation, while for the latter an infinite set of relations among polynomials with degree-dependent parameters is obtained. Both types of relations are counterparts of those known for conventional polynomials. In the absence of any direct information on the zeros of the Romanovski polynomials present in denominators, the regularity of the constructed potentials is checked by taking advantage of the disconjugacy properties of second-order differential equations of Schrödinger type. It is also shown that on going from Scarf I to Scarf II or from Rosen-Morse II to Rosen-Morse I potentials, the variety of rational extensions is narrowed down from types I, II, and III to type III only.

  8. Polynomial solutions of the Monge-Ampère equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aminov, Yu A

    2014-11-30

    The question of the existence of polynomial solutions to the Monge-Ampère equation z_xx z_yy - z_xy^2 = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x,y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found, and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.

  9. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences, and several methods have been developed to solve such equations. In this study, we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find the real roots of fuzzy polynomial equations; here, it is applied to find the real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  10. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
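
    As a hedged illustration of polynomial smoothing, the sketch below applies a standard Chebyshev semi-iteration (in the classical formulation found in, e.g., Saad's textbook) to a symmetric positive definite system, given assumed eigenvalue bounds [lmin, lmax] for the error components to be damped. It is not the multilevel-specific polynomial studied in the paper.

    ```python
    # Chebyshev polynomial smoother for an SPD system A x = b, assuming
    # eigenvalue bounds [lmin, lmax] for the modes to be damped.
    import numpy as np

    def chebyshev_smooth(A, b, x, lmin, lmax, steps=3):
        theta = 0.5 * (lmax + lmin)      # center of the target interval
        delta = 0.5 * (lmax - lmin)      # half-width of the interval
        sigma = theta / delta
        rho = 1.0 / sigma
        r = b - A @ x
        d = r / theta
        for _ in range(steps):
            x = x + d
            r = r - A @ d
            rho_new = 1.0 / (2.0 * sigma - rho)
            d = rho_new * rho * d + (2.0 * rho_new / delta) * r
            rho = rho_new
        return x

    # 1-D Poisson test matrix; smoothing targets the upper part of the spectrum.
    n = 64
    A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    b = np.ones(n)
    x = chebyshev_smooth(A, b, np.zeros(n), lmin=1.0, lmax=4.0, steps=3)
    print(np.linalg.norm(b - A @ x))
    ```

    Unlike Gauss-Seidel, each step here is built from matrix-vector products only, which is why such smoothers parallelize without the ordering compromises the abstract mentions.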

  11. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well-known iterative methods; they extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros: either they fail to converge to some or all of the zeros, or they converge to very bad approximations of them. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4, and comparisons in time and accuracy are given.
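
    The G.C.D. idea can be sketched in a few lines: dividing a polynomial by gcd(p, p') removes repeated factors, leaving simple zeros that Newton-type iterations handle well. A minimal sketch with sympy follows (illustrative only; the report's FORTRAN implementation differs).

    ```python
    # G.C.D. deflation of multiple zeros: p / gcd(p, p') has the same
    # distinct zeros as p, but each with multiplicity one.
    import sympy as sp

    x = sp.symbols("x")
    p = (x - 1) ** 3 * (x + 2) ** 2 * (x - 4)   # polynomial with multiple zeros

    g = sp.gcd(p, sp.diff(p, x))                # carries the repeated factors
    q = sp.quo(sp.expand(p), g)                 # square-free part of p

    print(sp.expand(q))                         # cubic with simple zeros only
    print(sp.solve(q, x))                       # simple zeros: -2, 1, 4
    ```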

  12. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  13. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…

  14. A note on the zeros of Freud-Sobolev orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Moreno-Balcazar, Juan J.

    2007-10-01

    We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function e^(-x⁴) on ℝ are real, simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function e^(-x⁴). Some numerical examples are shown.

  15. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  16. A transition from using multi-step procedures to a fully integrated system for performing extracorporeal photopheresis: A comparison of costs and efficiencies.

    PubMed

    Azar, Nabih; Leblond, Veronique; Ouzegdouh, Maya; Button, Paul

    2017-12-01

    The Pitié Salpêtrière Hospital Hemobiotherapy Department, Paris, France, has been providing extracorporeal photopheresis (ECP) since November 2011, and started using the Therakos® CELLEX® fully integrated system in 2012. This report summarizes our single-center experience of transitioning from multi-step ECP procedures to the fully integrated ECP system, considering the capacity and cost implications. The total number of ECP procedures performed in 2011-2015 was derived from department records. The time taken to complete a single ECP treatment using a multi-step technique and the fully integrated system at our department was assessed. Resource costs (2014 €) were obtained for materials and calculated for personnel time required. Time-driven activity-based costing methods were applied to provide a cost comparison. The number of ECP treatments per year increased from 225 (2012) to 727 (2015). A single multi-step procedure took 270 min, compared to 120 min for the fully integrated system. The total calculated per-session cost of performing ECP using the multi-step procedure was greater than with the CELLEX® system (€1,429.37 and €1,264.70 per treatment, respectively). For hospitals considering a transition from multi-step procedures to fully integrated methods for ECP where cost may be a barrier, time-driven activity-based costing should be utilized to gain a more comprehensive understanding of the full benefit that such a transition offers. The example from our department confirmed that there were not just cost and time savings, but that the time efficiencies gained with CELLEX® allow for more patient treatments per year. © 2017 The Authors Journal of Clinical Apheresis Published by Wiley Periodicals, Inc.

  17. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    PubMed

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
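
    The central argument above, that the exponent should be fitted rather than fixed, can be illustrated with a least-squares fit of M(U) = Ms + b·U^c, treating the degree c as a free parameter. The data, symbols, and starting values below are illustrative assumptions, not values from the paper.

    ```python
    # Fit a power-law polynomial M(U) = Ms + b * U**c to swimming metabolic
    # data, treating the degree c as a parameter instead of fixing c = 1.
    import numpy as np
    from scipy.optimize import curve_fit

    def power_law(U, Ms, b, c):
        return Ms + b * U ** c

    rng = np.random.default_rng(3)
    U = np.linspace(0.2, 2.0, 40)                       # swimming speed
    M = 50 + 30 * U ** 2.4 + rng.normal(0, 2, U.size)   # synthetic metabolic rate

    params, _ = curve_fit(power_law, U, M, p0=(40.0, 20.0, 2.0))
    Ms, b, c = params
    print(f"standard metabolic rate ~ {Ms:.1f}, drag index ~ {b:.1f}, degree ~ {c:.2f}")
    ```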

  18. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics.

    PubMed

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-07-21

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format.

  19. Dissolvable fluidic time delays for programming multi-step assays in instrument-free paper diagnostics

    PubMed Central

    Lutz, Barry; Liang, Tinny; Fu, Elain; Ramachandran, Sujatha; Kauffman, Peter; Yager, Paul

    2013-01-01

    Lateral flow tests (LFTs) are an ingenious format for rapid and easy-to-use diagnostics, but they are fundamentally limited to assay chemistries that can be reduced to a single chemical step. In contrast, most laboratory diagnostic assays rely on multiple timed steps carried out by a human or a machine. Here, we use dissolvable sugar applied to paper to create programmable flow delays and present a paper network topology that uses these time delays to program automated multi-step fluidic protocols. Solutions of sucrose at different concentrations (10-70% of saturation) were added to paper strips and dried to create fluidic time delays spanning minutes to nearly an hour. A simple folding card format employing sugar delays was shown to automate a four-step fluidic process initiated by a single user activation step (folding the card); this device was used to perform a signal-amplified sandwich immunoassay for a diagnostic biomarker for malaria. The cards are capable of automating multi-step assay protocols normally used in laboratories, but in a rapid, low-cost, and easy-to-use format. PMID:23685876

  20. The design of a multi-harmonic step-tunable gyrotron

    NASA Astrophysics Data System (ADS)

    Qi, Xiang-Bo; Du, Chao-Hai; Zhu, Juan-Feng; Pan, Shi; Liu, Pu-Kun

    2017-03-01

    The theoretical study of a step-tunable gyrotron controlled by successive excitation of multi-harmonic modes is presented in this paper. An axis-encircling electron beam is employed to eliminate harmonic mode competition. Physical pictures are presented to elaborate the multi-harmonic interaction mechanism and to determine the operating parameters at which arbitrary harmonic tuning can be realized by magnetic field sweeping, achieving controlled multiband radiation. An important principle is revealed: a weak coupling coefficient under a high-harmonic interaction can be compensated by a high Q-factor. To some extent, this complementation between a high Q-factor and a weak coupling coefficient gives the high-harmonic mode the potential to achieve high efficiency. Based on a previously optimized magnetic cusp gun, the multi-harmonic step-tunable gyrotron is feasible using harmonic tuning of the first- to fourth-harmonic modes. Multimode simulation shows that the multi-harmonic gyrotron can operate on the 34 GHz first-harmonic TE11 mode, the 54 GHz second-harmonic TE21 mode, the 74 GHz third-harmonic TE31 mode, and the 94 GHz fourth-harmonic TE41 mode, corresponding to peak efficiencies of 28.6%, 35.7%, 17.1%, and 11.4%, respectively. The multi-harmonic step-tunable gyrotron provides new possibilities in millimeter-terahertz source development, especially for advanced terahertz applications.

  1. Improved perovskite phototransistor prepared using multi-step annealing method

    NASA Astrophysics Data System (ADS)

    Cao, Mingxuan; Zhang, Yating; Yu, Yu; Yao, Jianquan

    2018-02-01

    Organic-inorganic hybrid perovskites with good intrinsic physical properties have received substantial interest for solar cell and optoelectronic applications. However, perovskite films often suffer from low carrier mobility due to structural imperfections, including sharp grain boundaries and pinholes, restricting their device performance and application potential. Here we demonstrate a straightforward strategy based on a multi-step annealing process to improve the performance of perovskite photodetectors. Annealing temperature and duration greatly affect the surface morphology and optoelectrical properties of perovskites, which determine the device properties of the phototransistor. Perovskite films treated with the multi-step annealing method tend to form highly uniform, well-crystallized films with high surface coverage, which exhibit stronger ultraviolet-visible absorption and photoluminescence spectra compared to perovskites prepared by the conventional one-step annealing process. The perovskite photodetector treated by the one-step direct annealing method shows field-effect mobilities of 0.121 (0.062) cm²V⁻¹s⁻¹ for holes (electrons), which increase to 1.01 (0.54) cm²V⁻¹s⁻¹ with the multi-step slow annealing method. Moreover, the perovskite phototransistors exhibit a fast photoresponse speed of 78 μs. In general, this work focuses on the influence of annealing methods on perovskite phototransistors rather than on obtaining their best parameters. These findings show that multi-step annealing is a feasible route to preparing high-performance perovskite-based photodetectors.

  2. A framework for longitudinal data analysis via shape regression

    NASA Astrophysics Data System (ADS)

    Fishbaugh, James; Durrleman, Stanley; Piven, Joseph; Gerig, Guido

    2012-02-01

    Traditional longitudinal analysis begins by extracting desired clinical measurements, such as volume or head circumference, from discrete imaging data. Typically, the continuous evolution of a scalar measurement is estimated by choosing a 1D regression model, such as kernel regression or fitting a polynomial of fixed degree. This type of analysis not only leads to separate models for each measurement, but there is no clear anatomical or biological interpretation to aid in the selection of the appropriate paradigm. In this paper, we propose a consistent framework for the analysis of longitudinal data by estimating the continuous evolution of shape over time as twice differentiable flows of deformations. In contrast to 1D regression models, one model is chosen to realistically capture the growth of anatomical structures. From the continuous evolution of shape, we can simply extract any clinical measurements of interest. We demonstrate on real anatomical surfaces that volume extracted from a continuous shape evolution is consistent with a 1D regression performed on the discrete measurements. We further show how the visualization of shape progression can aid in the search for significant measurements. Finally, we present an example on a shape complex of the brain (left hemisphere, right hemisphere, cerebellum) that demonstrates a potential clinical application for our framework.
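
    For contrast with the shape-regression framework, the 1D baselines mentioned above are easy to sketch. Here a Nadaraya-Watson kernel regression and a fixed-degree polynomial fit are applied to hypothetical volume measurements; all names and values are illustrative assumptions.

    ```python
    # Two 1-D regression baselines for a longitudinal scalar measurement:
    # Nadaraya-Watson kernel regression and a fixed-degree polynomial fit.
    import numpy as np

    age = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.5, 6.0])       # years (hypothetical)
    volume = np.array([620, 760, 850, 900, 960, 1010, 1040])  # cm^3 (hypothetical)

    def kernel_regression(t, x, y, bandwidth=0.8):
        # Gaussian weights of each observation x for each query point t.
        w = np.exp(-0.5 * ((t[:, None] - x[None, :]) / bandwidth) ** 2)
        return (w @ y) / w.sum(axis=1)

    grid = np.linspace(0.5, 6.0, 50)
    kr = kernel_regression(grid, age, volume)
    poly = np.polyval(np.polyfit(age, volume, 3), grid)
    print(kr[:3], poly[:3])
    ```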

  3. Multi-optical-axis measurement of freeform progressive addition lenses using a Hartmann-Shack wavefront sensor

    NASA Astrophysics Data System (ADS)

    Xiang, Huazhong; Guo, Hang; Fu, Dongxiang; Zheng, Gang; Zhuang, Songlin; Chen, JiaBi; Wang, Cheng; Wu, Jie

    2018-05-01

    To precisely measure the whole-surface characteristics of freeform progressive addition lenses (PALs), considering multi-optical-axis conditions is becoming particularly important. Spherical power and astigmatism (cylinder) measurements for freeform PALs using a Hartmann-Shack wavefront sensor (HSWFS) are proposed herein. Conversion formulas for the optical performance results were provided as HSWFS Zernike polynomial expansions. For each selected zone, the studied PALs were placed and tilted to simulate multi-optical-axis conditions. The results for two tested PALs were analyzed using MATLAB programs and represented as contour plots of the spherical equivalent and cylinder over the whole surface. The proposed experimental setup can provide high accuracy as well as the possibility of choosing 12 lines and the positions of 193 measurement zones on the entire surface. This approach to PAL analysis is potentially an efficient and useful method to objectively evaluate optical performance, in which the full lens surface is defined and expressed as contour plots of power in the different regions of interest of the lens (i.e., the distance, progressive, and near regions).
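
    The step from second-order Zernike coefficients to sphero-cylindrical power can be sketched with the commonly cited power-vector conversion; the coefficient values and pupil radius below are illustrative assumptions, not the paper's data or its exact formulas.

    ```python
    # Convert second-order Zernike coefficients (OSA convention, microns)
    # and pupil radius r (mm) to sphere, cylinder, and axis, via the
    # commonly used power-vector conversion.
    import numpy as np

    def zernike_to_sphcyl(c2m2, c20, c22, r):
        M = -4 * np.sqrt(3) * c20 / r**2     # spherical equivalent (D)
        J0 = -2 * np.sqrt(6) * c22 / r**2    # with/against-the-rule astigmatism
        J45 = -2 * np.sqrt(6) * c2m2 / r**2  # oblique astigmatism
        cyl = -2 * np.sqrt(J0**2 + J45**2)   # negative-cylinder convention
        sph = M - cyl / 2
        axis = 0.5 * np.degrees(np.arctan2(J45, J0)) % 180
        return sph, cyl, axis

    # Illustrative coefficients for one measurement zone:
    print(zernike_to_sphcyl(c2m2=0.05, c20=-1.2, c22=0.3, r=2.0))
    ```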

  4. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert

    Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate compared to running a high-fidelity numerical simulation: a reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg-Marquardt algorithm. The misfit function is based on the difference between the numerical simulation data and the reduced-order model. ROM-1 is constructed based on polynomials up to fourth order and is able to accurately reproduce the power output of numerical simulations for low values of permeability and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, there is a considerable deviation from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3, on the other hand, provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. As for computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.

  5. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE PAGES

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert; ...

    2017-07-10

    Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters/mechanisms. An attractive aspect of a reduced-order model is that it is computationally inexpensive to evaluate compared to running a high-fidelity numerical simulation: a reduced-order model takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours to run on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based reduced-order models (ROMs) developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations. Finally, predictions from the ROMs are compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a non-linear least-squares misfit function using the Levenberg-Marquardt algorithm. The misfit function is based on the difference between the numerical simulation data and the reduced-order model. ROM-1 is constructed based on polynomials up to fourth order and is able to accurately reproduce the power output of numerical simulations for low values of permeability and certain features of the field-scale data. ROM-2 is a model with more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces numerical results better than ROM-1; however, there is a considerable deviation from numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3, on the other hand, provides a middle ground for model parsimony. Based on R²-values for the training, validation, and prediction data sets, we found that ROM-3 is a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10⁻¹⁵ m²) are desired, ROM-2 and ROM-3 outperform ROM-1. As for computational time, all the ROMs are 10⁴ times faster than running a high-fidelity numerical simulation. In conclusion, this makes the proposed regression-based ROMs attractive for real-time EGS applications because they are fast and provide reasonably good predictions for thermal power output.
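
    The coefficient-fitting step described above (minimizing a nonlinear least-squares misfit with Levenberg-Marquardt) can be sketched as follows. The polynomial ROM form and the synthetic "simulation" data are stand-ins, not the paper's actual ROM-1/2/3.

    ```python
    # Fit reduced-order-model coefficients by minimizing a least-squares
    # misfit with the Levenberg-Marquardt algorithm (scipy's method="lm").
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    t = np.linspace(0.0, 1.0, 60)                                # normalized time
    power = 30 * np.exp(-1.5 * t) + rng.normal(0, 0.3, t.size)   # "simulation" output

    def rom(coef, t):
        # Polynomial ROM of order 4 (five coefficients).
        return np.polynomial.polynomial.polyval(t, coef)

    def misfit(coef):
        return rom(coef, t) - power          # residual vector for LM

    fit = least_squares(misfit, x0=np.zeros(5), method="lm")
    print(fit.x)                             # fitted ROM coefficients
    ```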

  6. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

    AFRL-RW-EG-TR-2015-108, Douglas V. Nance, Air Force Research Laboratory (reporting period 20-04-2015 to 07-08-2015). This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second-order stochastic processes.

  7. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    This report develops methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass; the parameter distributions are assumed to be greater than zero, and the multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999). The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, and extended…

  8. Degenerate r-Stirling Numbers and r-Bell Polynomials

    NASA Astrophysics Data System (ADS)

    Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.

    2018-01-01

    The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. Especially, we will express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.

  9. From Chebyshev to Bernstein: A Tour of Polynomials Small and Large

    ERIC Educational Resources Information Center

    Boelkins, Matthew; Miller, Jennifer; Vugteveen, Benjamin

    2006-01-01

    Consider the family of monic polynomials of degree n having zeros at -1 and +1 and all their other real zeros in between these two values. This article explores the size of these polynomials using the supremum of the absolute value on [-1, 1], showing that scaled Chebyshev and Bernstein polynomials give the extremes.

  10. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated, so that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products; these programs evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent errors are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique; they show the output formats and typical plots comparing computer results to each set of input data.
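
    A hedged sketch of the two design-matrix cases follows: pure powers of each variable only, versus powers plus a linear cross product. The degree, number of variables, and data are illustrative assumptions.

    ```python
    # Pth-degree polynomial regression design matrices for two variables,
    # without cross products (pure powers only) and with the linear cross
    # product x1*x2, solved by ordinary least squares.
    import numpy as np

    rng = np.random.default_rng(5)
    x1, x2 = rng.uniform(-1, 1, (2, 100))
    y = 1 + 2 * x1 - 3 * x2**2 + 0.5 * x1 * x2 + rng.normal(0, 0.1, 100)
    P = 3

    # Case 1: intercept + x1^p and x2^p for p = 1..P (no cross products).
    X_pure = np.column_stack([np.ones_like(x1)] +
                             [x1**p for p in range(1, P + 1)] +
                             [x2**p for p in range(1, P + 1)])

    # Case 2: additionally include the linear cross product x1*x2.
    X_cross = np.column_stack([X_pure, x1 * x2])

    for X in (X_pure, X_cross):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(np.round(beta, 3))
    ```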

  11. Method for obtaining electron energy-density functions from Langmuir-probe data using a card-programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhurst, G.R.

    This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression of orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distribution plasmas. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.

  12. Models for Estimating Genetic Parameters of Milk Production Traits Using Random Regression Models in Korean Holstein Cattle

    PubMed Central

    Cho, C. I.; Alam, M.; Choi, T. J.; Choy, Y. H.; Choi, J. G.; Lee, S. S.; Cho, K. H.

    2016-01-01

    The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of the first parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3–L5), fixed effects of herd-test day, year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials×3 types of residual variance) including L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60 were compared using Akaike information criteria (AIC) and/or Schwarz Bayesian information criteria (BIC) statistics to identify the model(s) of best fit for their respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fit the best. In general, the BIC values of the HET15 models for a particular polynomial order were lower than those of the HET60 model in most cases. This implies that the orders of LP and the types of residual variances affect the goodness of fit of the models. Also, the heterogeneity of residual variances should be considered for the test-day analysis. The heritability estimates from the best-fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of first lactation. Genetic variances for the studied traits tended to decrease during the earlier stages of lactation, followed by increases in the middle and further decreases at the end of lactation. With regard to the fit of the models and the differential genetic parameters across the lactation stages, we could estimate genetic parameters more accurately from RRMs than from lactation models. Therefore, we suggest using RRMs in place of lactation models to make national dairy cattle genetic evaluations for milk production traits in Korea. PMID:26954184

  13. Models for Estimating Genetic Parameters of Milk Production Traits Using Random Regression Models in Korean Holstein Cattle.

    PubMed

    Cho, C I; Alam, M; Choi, T J; Choy, Y H; Choi, J G; Lee, S S; Cho, K H

    2016-05-01

    The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of the first parity Holstein cows between 2007 and 2014 from the Dairy Cattle Improvement Center of National Agricultural Cooperative Federation in South Korea were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3-L5), fixed effects of herd-test day, year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials×3 types of residual variance) including L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60 were compared using Akaike information criteria (AIC) and/or Schwarz Bayesian information criteria (BIC) statistics to identify the model(s) of best fit for their respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which fit the best. In general, the BIC values of the HET15 models for a particular polynomial order were lower than those of the HET60 model in most cases. This implies that the orders of LP and the types of residual variances affect the goodness of fit of the models. Also, the heterogeneity of residual variances should be considered for the test-day analysis. The heritability estimates from the best-fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of first lactation. Genetic variances for the studied traits tended to decrease during the earlier stages of lactation, followed by increases in the middle and further decreases at the end of lactation. With regard to the fit of the models and the differential genetic parameters across the lactation stages, we could estimate genetic parameters more accurately from RRMs than from lactation models. Therefore, we suggest using RRMs in place of lactation models to make national dairy cattle genetic evaluations for milk production traits in Korea.
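
    The Legendre-polynomial covariates used in such random regression models are straightforward to construct: days in milk are standardized to [-1, 1] and the polynomials are evaluated there. A minimal sketch follows; the order and DIM range are illustrative, not the study's exact settings.

    ```python
    # Build Legendre-polynomial covariates for a random regression test-day
    # model: standardize days in milk (DIM) to [-1, 1], then evaluate
    # Legendre polynomials P_0..P_order at the standardized values.
    import numpy as np
    from numpy.polynomial import legendre

    def legendre_covariates(dim, dim_min=5, dim_max=305, order=5):
        t = 2.0 * (dim - dim_min) / (dim_max - dim_min) - 1.0   # map to [-1, 1]
        # Column j holds P_j(t); it multiplies the j-th random regression coefficient.
        return np.column_stack([legendre.legval(t, np.eye(order + 1)[j])
                                for j in range(order + 1)])

    dim = np.array([5, 50, 155, 230, 305])
    print(legendre_covariates(dim).round(3))
    ```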

  14. Umbral orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Sendino, J. E.; del Olmo, M. A.

    2010-12-23

    We present an umbral operator version of the classical orthogonal polynomials. We obtain three families which are the umbral counterparts of the Jacobi, Laguerre and Hermite polynomials in the classical case.

  15. A direct Arbitrary-Lagrangian-Eulerian ADER-WENO finite volume scheme on unstructured tetrahedral meshes for conservative and non-conservative hyperbolic systems in 3D

    NASA Astrophysics Data System (ADS)

    Boscheri, Walter; Dumbser, Michael

    2014-10-01

    In this paper we present a new family of high order accurate Arbitrary-Lagrangian-Eulerian (ALE) one-step ADER-WENO finite volume schemes for the solution of nonlinear systems of conservative and non-conservative hyperbolic partial differential equations with stiff source terms on moving tetrahedral meshes in three space dimensions. A WENO reconstruction technique is used to achieve high order of accuracy in space, while an element-local space-time Discontinuous Galerkin finite element predictor on moving curved meshes is used to obtain a high order accurate one-step time discretization. Within the space-time predictor the physical element is mapped onto a reference element using a high order isoparametric approach, where the space-time basis and test functions are given by the Lagrange interpolation polynomials passing through a predefined set of space-time nodes. Since our algorithm is cell-centered, the final mesh motion is computed by using a suitable node solver algorithm. A rezoning step as well as a flattener strategy are used in some of the test problems to avoid mesh tangling or excessive element deformations that may occur when the computation involves strong shocks or shear waves. The ALE algorithm presented in this article belongs to the so-called direct ALE methods because the final Lagrangian finite volume scheme is based directly on a space-time conservation formulation of the governing PDE system, with the rezoned geometry already taken into account during the computation of the fluxes. We apply our new high order unstructured ALE schemes to the 3D Euler equations of compressible gas dynamics, for which a set of classical numerical test problems has been solved and for which convergence rates up to sixth order of accuracy in space and time have been obtained. We furthermore consider the equations of classical ideal magnetohydrodynamics (MHD) as well as the non-conservative seven-equation Baer-Nunziato model of compressible multi-phase flows with stiff relaxation source terms.
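
    The core ADER idea of a single-step, high-order time discretization is easiest to see in one dimension: for linear advection, the local space-time predictor reduces to a Cauchy-Kovalewski substitution of time derivatives by space derivatives. The toy below (an illustration under that simplification, not the paper's 3-D unstructured ALE scheme) implements the resulting second-order one-step update.

    ```python
    # Toy one-step ADER update for u_t + a*u_x = 0 on a periodic domain:
    # time derivatives in the local Taylor expansion are replaced via the
    # PDE (u_t = -a*u_x, u_tt = a^2*u_xx), so no Runge-Kutta stages are
    # needed. Illustration only, not the paper's 3-D ALE ADER-WENO scheme.
    import numpy as np

    def ader_step(u, a, dx, dt):
        ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)         # central u_x
        uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2   # central u_xx
        return u - dt * a * ux + 0.5 * dt**2 * a**2 * uxx        # one-step update

    n, a = 200, 1.0
    x = np.linspace(0.0, 1.0, n, endpoint=False)
    dx, u = x[1] - x[0], np.sin(2 * np.pi * x)
    dt = 0.5 * dx / abs(a)                                       # CFL number 0.5
    for _ in range(int(round(1.0 / (a * dt)))):                  # one full period
        u = ader_step(u, a, dx, dt)
    print(np.max(np.abs(u - np.sin(2 * np.pi * x))))             # small error
    ```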

  16. Non-model-based damage identification of plates using principal, mean and Gaussian curvature mode shapes

    DOE PAGES

    Xu, Yongfeng F.; Zhu, Weidong D.; Smith, Scott A.

    2017-07-01

    Mode shapes (MSs) have been extensively used to identify structural damage. This paper presents a new non-model-based method that uses principal, mean and Gaussian curvature MSs (CMSs) to identify damage in plates; the method is applicable and robust to MSs associated with low and high elastic modes on dense and coarse measurement grids. A multi-scale discrete differential-geometry scheme is proposed to calculate principal, mean and Gaussian CMSs associated with a MS of a plate, which can alleviate adverse effects of measurement noise on calculating the CMSs. Principal, mean and Gaussian CMSs of a damaged plate and those of an undamaged one are used to yield four curvature damage indices (CDIs), including Maximum-CDIs, Minimum-CDIs, Mean-CDIs and Gaussian-CDIs. Damage can be identified near regions with consistently higher values of the CDIs. It is shown that a MS of an undamaged plate can be well approximated using a polynomial with a properly determined order that fits a MS of a damaged one, provided that the undamaged plate has a smooth geometry and is made of material that has no stiffness and mass discontinuities. New fitting and convergence indices are proposed to quantify the level of approximation of a MS from a polynomial fit to that of a damaged plate and to determine the proper order of the polynomial fit, respectively. A MS of an aluminum plate with damage in the form of a machined thickness reduction area was measured to experimentally investigate the effectiveness of the proposed CDIs in damage identification; the damage on the plate was successfully identified.
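
    The curvature quantities at the heart of this method can be sketched numerically. The toy below (hypothetical grid and damage parameters, not the authors' multi-scale scheme) computes mean and Gaussian curvature fields of a sampled mode shape by finite differences and forms a simple mean-curvature damage index against a polynomial surface fit standing in for the undamaged plate.

    ```python
    # Sketch: mean/Gaussian curvature of a mode shape w(x, y) on a regular
    # grid, plus a crude curvature damage index (CDI) from a polynomial
    # surface fit. Grid, mode, and damage parameters are made up.
    import numpy as np
    from numpy.polynomial.polynomial import polyvander2d, polyval2d

    def curvatures(w, dx, dy):
        """Mean (H) and Gaussian (K) curvature of the graph z = w(x, y)."""
        wy, wx = np.gradient(w, dy, dx)          # axis 0 = y, axis 1 = x
        wyy, _ = np.gradient(wy, dy, dx)
        wxy, wxx = np.gradient(wx, dy, dx)
        g = 1.0 + wx**2 + wy**2
        H = ((1 + wy**2) * wxx - 2 * wx * wy * wxy + (1 + wx**2) * wyy) / (2 * g**1.5)
        K = (wxx * wyy - wxy**2) / g**2
        return H, K

    def poly_surface_fit(X, Y, w, deg=6):
        """Least-squares polynomial fit approximating the undamaged MS."""
        A = polyvander2d(X.ravel(), Y.ravel(), [deg, deg])
        c, *_ = np.linalg.lstsq(A, w.ravel(), rcond=None)
        return polyval2d(X, Y, c.reshape(deg + 1, deg + 1))

    # Synthetic (1,1) plate mode with a small local "damage" dent.
    x, y = np.linspace(0, 1, 81), np.linspace(0, 1, 61)
    X, Y = np.meshgrid(x, y)
    w = np.sin(np.pi * X) * np.sin(np.pi * Y)
    w -= 0.02 * np.exp(-(((X - 0.6) / 0.05) ** 2 + ((Y - 0.4) / 0.05) ** 2))
    H_dam, _ = curvatures(w, x[1] - x[0], y[1] - y[0])
    H_fit, _ = curvatures(poly_surface_fit(X, Y, w), x[1] - x[0], y[1] - y[0])
    cdi = np.abs(H_dam - H_fit)                  # peaks near the damaged region
    print(np.unravel_index(cdi.argmax(), cdi.shape))
    ```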

  17. Non-model-based damage identification of plates using principal, mean and Gaussian curvature mode shapes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xu, Yongfeng F.; Zhu, Weidong D.; Smith, Scott A.

    Mode shapes (MSs) have been extensively used to identify structural damage. This paper presents a new non-model-based method that uses principal, mean and Gaussian curvature MSs (CMSs) to identify damage in plates; the method is applicable and robust to MSs associated with low and high elastic modes on dense and coarse measurement grids. A multi-scale discrete differential-geometry scheme is proposed to calculate principal, mean and Gaussian CMSs associated with a MS of a plate, which can alleviate adverse effects of measurement noise on calculating the CMSs. Principal, mean and Gaussian CMSs of a damaged plate and those of an undamaged one are used to yield four curvature damage indices (CDIs), including Maximum-CDIs, Minimum-CDIs, Mean-CDIs and Gaussian-CDIs. Damage can be identified near regions with consistently higher values of the CDIs. It is shown that a MS of an undamaged plate can be well approximated using a polynomial with a properly determined order that fits a MS of a damaged one, provided that the undamaged plate has a smooth geometry and is made of material that has no stiffness and mass discontinuities. New fitting and convergence indices are proposed to quantify the level of approximation of a MS from a polynomial fit to that of a damaged plate and to determine the proper order of the polynomial fit, respectively. A MS of an aluminum plate with damage in the form of a machined thickness reduction area was measured to experimentally investigate the effectiveness of the proposed CDIs in damage identification; the damage on the plate was successfully identified.

  18. A variational formulation for vibro-acoustic analysis of a panel backed by an irregularly-bounded cavity

    NASA Astrophysics Data System (ADS)

    Xie, Xiang; Zheng, Hui; Qu, Yegao

    2016-07-01

    A weak-form variational method is developed to study the vibro-acoustic response of a coupled structural-acoustic system consisting of an irregular acoustic cavity with general wall impedance and a flexible panel subjected to arbitrary edge-supporting conditions. The structural and acoustical models of the coupled system are formulated on the basis of a modified variational method combined with a multi-segment partitioning strategy. The continuity constraints on the sub-segment interfaces are incorporated into the system stiffness matrix by means of a least-squares weighted residual method. Orthogonal polynomials, such as Chebyshev polynomials of the first kind, are employed as the admissible functions for the unknown displacement and sound pressure field variables of the separate components without meshing; hence the irregular physical domain must be mapped onto a square spectral domain. The effects of the weighting parameter, together with the number of truncated polynomial terms and divided partitions, on the accuracy of the present theoretical solutions are investigated. It is observed that this methodology yields accurate and efficient predictions for various types of coupled panel-cavity problems, and that the inherent requirement of velocity continuity on the panel-cavity contact interface is handled satisfactorily in both weak and strong coupling cases, i.e., for a panel surrounded by a light or a heavy fluid. Key parametric studies concerning the influences of the geometrical properties as well as the impedance boundary are performed. Finally, vibro-acoustic analyses of a 3D car-like coupled miniature model demonstrate that the present method is an effective way to obtain accurate mid-frequency solutions with acceptable CPU time.
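
    The role of Chebyshev polynomials as admissible functions can be illustrated with a 1-D Rayleigh-Ritz toy (not the paper's coupled panel-cavity formulation): trial functions phi_n(x) = (1 - x^2) T_n(x) satisfy the fixed-fixed boundary conditions of a taut string on [-1, 1], and the generalized eigenproblem built from the stiffness and mass matrices yields natural frequencies. The truncation order and quadrature size below are arbitrary choices.

    ```python
    # 1-D Rayleigh-Ritz sketch with Chebyshev-based admissible functions.
    # For a fixed-fixed string of length 2 and unit wave speed, the exact
    # natural frequencies are n*pi/2; the Ritz estimates converge to them.
    import numpy as np
    from numpy.polynomial import chebyshev as C
    from scipy.linalg import eigh

    N = 12                                       # truncated polynomial terms
    nodes, weights = np.polynomial.legendre.leggauss(64)   # Gauss quadrature

    def phi_and_dphi(n, x):
        """Admissible function (1 - x^2)*T_n(x) and its x-derivative."""
        t = C.Chebyshev.basis(n)
        return (1 - x**2) * t(x), -2 * x * t(x) + (1 - x**2) * t.deriv()(x)

    P = np.array([phi_and_dphi(n, nodes) for n in range(N)])   # (N, 2, 64)
    K = np.einsum('iq,jq,q->ij', P[:, 1], P[:, 1], weights)    # stiffness
    M = np.einsum('iq,jq,q->ij', P[:, 0], P[:, 0], weights)    # mass
    omega = np.sqrt(eigh(K, M, eigvals_only=True))
    print(omega[:4])                 # approx. n*pi/2 = 1.571, 3.142, 4.712, ...
    ```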

  19. Hermite WENO limiting for multi-moment finite-volume methods using the ADER-DT time discretization for 1-D systems of conservation laws

    DOE PAGES

    Norman, Matthew R.

    2014-11-24

    New Hermite Weighted Essentially Non-Oscillatory (HWENO) interpolants are developed and investigated within the Multi-Moment Finite-Volume (MMFV) formulation using the ADER-DT time discretization. Whereas traditional WENO methods interpolate pointwise, function-based WENO methods explicitly form a non-oscillatory, high-order polynomial over the cell in question. This study chooses a function-based approach and details how fast convergence to optimal weights for smooth flow is ensured. Methods of sixth-, eighth-, and tenth-order accuracy are developed. We compare these against traditional single-moment WENO methods of fifth-, seventh-, ninth-, and eleventh-order accuracy, which are more familiar from the literature. The new HWENO methods improve upon existing HWENO methods (1) by giving a better resolution of unreinforced contact discontinuities and (2) by only needing a single HWENO polynomial to update both the cell mean value and cell mean derivative. Test cases to validate and assess these methods include 1-D linear transport, the 1-D inviscid Burgers' equation, and the 1-D inviscid Euler equations. Smooth and non-smooth flows are used for evaluation. These HWENO methods performed better than comparable literature-standard WENO methods for all regimes of discontinuity and smoothness in all tests herein. They exhibit improved optimal accuracy due to the use of derivatives, and they collapse to solutions similar to typical WENO methods when limiting is required. The study concludes that the new HWENO methods are robust and effective when used in the ADER-DT MMFV framework. Finally, these results are intended to demonstrate capability rather than exhaust all possible implementations.
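
    For reference, the literature-standard single-moment WENO reconstruction that the new HWENO methods are compared against can be sketched as follows; this is the classical fifth-order Jiang-Shu procedure, not the paper's Hermite variant.

    ```python
    # Classical fifth-order WENO reconstruction (Jiang-Shu): three
    # third-order candidate stencils are blended by smoothness-dependent
    # nonlinear weights to get the left-biased interface value u_{i+1/2}.
    import numpy as np

    def weno5_reconstruct(um2, um1, u0, up1, up2, eps=1e-6):
        """Left-biased WENO5 interface value from five cell averages."""
        p0 = (2 * um2 - 7 * um1 + 11 * u0) / 6.0
        p1 = (-um1 + 5 * u0 + 2 * up1) / 6.0
        p2 = (2 * u0 + 5 * up1 - up2) / 6.0
        # Smoothness indicators for each sub-stencil.
        b0 = 13/12 * (um2 - 2*um1 + u0)**2 + 0.25 * (um2 - 4*um1 + 3*u0)**2
        b1 = 13/12 * (um1 - 2*u0 + up1)**2 + 0.25 * (um1 - up1)**2
        b2 = 13/12 * (u0 - 2*up1 + up2)**2 + 0.25 * (3*u0 - 4*up1 + up2)**2
        # Weights collapse to the optimal (0.1, 0.6, 0.3) for smooth data
        # and suppress any stencil that crosses a discontinuity.
        a = np.array([0.1, 0.6, 0.3]) / (eps + np.array([b0, b1, b2]))**2
        w = a / a.sum()
        return w[0] * p0 + w[1] * p1 + w[2] * p2

    # Smooth test: cell averages of sin(x) on cells of width h around x = 0;
    # the reconstructed interface value approximates sin(h/2) very closely.
    h = 0.1
    centers = np.arange(-2, 3) * h
    ubar = (np.cos(centers - h/2) - np.cos(centers + h/2)) / h
    print(weno5_reconstruct(*ubar), np.sin(h / 2))
    ```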

  20. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.
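
    A minimal sketch of the polynomial response-surface step discussed above (hypothetical data and objective, not the authors' aerospace cases): fit a full quadratic model to scattered, noisy evaluations of an expensive objective by least squares, then search the cheap surrogate globally.

    ```python
    # Quadratic response-surface sketch: a cheap polynomial surrogate is
    # fitted to noisy samples of an "expensive" objective (here a stand-in
    # for a CFD or experimental response) and then searched densely.
    import numpy as np

    rng = np.random.default_rng(0)

    def quadratic_basis(X):
        """Full quadratic basis in 2 design variables."""
        x1, x2 = X[:, 0], X[:, 1]
        return np.column_stack([np.ones_like(x1), x1, x2, x1**2, x1 * x2, x2**2])

    def objective(X):                    # hypothetical expensive response
        return (X[:, 0] - 0.3)**2 + 2 * (X[:, 1] + 0.1)**2

    X_train = rng.uniform(-1, 1, size=(30, 2))              # design of experiments
    y_train = objective(X_train) + rng.normal(0, 0.01, 30)  # noisy observations
    coef, *_ = np.linalg.lstsq(quadratic_basis(X_train), y_train, rcond=None)

    # Global search on the surrogate: dense random sampling of design space.
    X_test = rng.uniform(-1, 1, size=(20000, 2))
    y_hat = quadratic_basis(X_test) @ coef
    print(X_test[y_hat.argmin()])        # should land near the optimum (0.3, -0.1)
    ```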
