Sample records for polynomial regression analysis

  1. Why High-Order Polynomials Should Not Be Used in Regression Discontinuity Designs. NBER Working Paper No. 20405

    ERIC Educational Resources Information Center

    Gelman, Andrew; Imbens, Guido

    2014-01-01

    It is common in regression discontinuity analysis to control for high order (third, fourth, or higher) polynomials of the forcing variable. We argue that estimators for causal effects based on such methods can be misleading, and we recommend researchers do not use them, and instead use estimators based on local linear or quadratic polynomials or…
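
    As an illustration of the contrast drawn here, the following is a minimal Python sketch on simulated data (numpy and statsmodels assumed available): the jump at the cutoff is estimated once with global fourth-order polynomials on each side, and once with local linear fits restricted to a bandwidth.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      x = rng.uniform(-1.0, 1.0, 500)                    # forcing variable, cutoff at 0
      d = (x >= 0).astype(float)                         # treatment indicator
      y = 0.5 * x + 0.4 * d + rng.normal(0.0, 0.3, 500)  # true jump = 0.4

      def value_at_cutoff(deg, mask):
          """OLS of y on a degree-`deg` polynomial in x on one side of the
          cutoff; the intercept is the fitted value at x = 0."""
          X = np.vander(x[mask], deg + 1, increasing=True)  # 1, x, ..., x^deg
          return sm.OLS(y[mask], X).fit().params[0]

      # Global fourth-order polynomial fit (the practice the paper warns against)
      tau_poly4 = value_at_cutoff(4, d == 1) - value_at_cutoff(4, d == 0)

      # Local linear fit: degree 1, restricted to a bandwidth h around the cutoff
      h = 0.25
      tau_local = (value_at_cutoff(1, (d == 1) & (x <= h))
                   - value_at_cutoff(1, (d == 0) & (x >= -h)))
      print(tau_poly4, tau_local)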

  2. A method for fitting regression splines with varying polynomial order in the linear mixed model.

    PubMed

    Edwards, Lloyd J; Stewart, Paul W; MacDougall, James E; Helms, Ronald W

    2006-02-15

    The linear mixed model has become a widely used tool for longitudinal analysis of continuous variables. The use of regression splines in these models offers the analyst additional flexibility in the formulation of descriptive analyses, exploratory analyses and hypothesis-driven confirmatory analyses. We propose a method for fitting piecewise polynomial regression splines with varying polynomial order in the fixed effects and/or random effects of the linear mixed model. The polynomial segments are explicitly constrained by side conditions for continuity and some smoothness at the points where they join. By using a reparameterization of this explicitly constrained linear mixed model, an implicitly constrained linear mixed model is constructed that simplifies implementation of fixed-knot regression splines. The proposed approach is relatively simple, handles splines in one variable or multiple variables, and can be easily programmed using existing commercial software such as SAS or S-plus. The method is illustrated using two examples: an analysis of longitudinal viral load data from a study of subjects with acute HIV-1 infection and an analysis of 24-hour ambulatory blood pressure profiles.
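
    The continuity and smoothness side conditions described here can be illustrated with a truncated power basis, in which value- and slope-continuity at the knots are built into the basis itself. A minimal sketch on hypothetical data follows (the authors' mixed-model implementation in SAS or S-plus is not reproduced; this is the fixed-effects idea only).

      import numpy as np

      rng = np.random.default_rng(1)
      t = np.sort(rng.uniform(0.0, 10.0, 200))                 # e.g., time on study
      y = np.sin(t) + 0.05 * t**2 + rng.normal(0.0, 0.2, 200)  # hypothetical response

      knots = [3.0, 7.0]
      # Quadratic piecewise polynomial: the (t - k)_+^2 terms join the segments
      # with continuous value and first derivative at each knot
      X = np.column_stack([np.ones_like(t), t, t**2] +
                          [np.clip(t - k, 0.0, None) ** 2 for k in knots])
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      fitted = X @ beta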

  3. Further Insight and Additional Inference Methods for Polynomial Regression Applied to the Analysis of Congruence

    ERIC Educational Resources Information Center

    Cohen, Ayala; Nahum-Shani, Inbal; Doveh, Etti

    2010-01-01

    In their seminal paper, Edwards and Parry (1993) presented the polynomial regression as a better alternative to applying difference score in the study of congruence. Although this method is increasingly applied in congruence research, its complexity relative to other methods for assessing congruence (e.g., difference score methods) was one of the…

  4. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditional finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
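
    A minimal sketch (hypothetical ages; numpy assumed) of how the Legendre covariables used in such random regression models are built: ages are standardized to [-1, 1], the domain of the Legendre polynomials, and each polynomial is evaluated at the standardized ages.

      import numpy as np
      from numpy.polynomial import legendre

      age = np.array([1.0, 240.0, 365.0, 550.0, 730.0, 1460.0, 2920.0])  # days
      # Standardize age to [-1, 1]
      a = 2.0 * (age - age.min()) / (age.max() - age.min()) - 1.0

      order = 4  # e.g., fourth order for the direct genetic effect, as above
      Phi = np.column_stack([legendre.legval(a, np.eye(order + 1)[j])
                             for j in range(order + 1)])
      # Phi[i, j] = P_j(standardized age of record i); these columns serve as
      # the covariables of the fixed and random regressions in the mixed model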

  5. Assessing the Multidimensional Relationship Between Medication Beliefs and Adherence in Older Adults With Hypertension Using Polynomial Regression.

    PubMed

    Dillon, Paul; Phillips, L Alison; Gallagher, Paul; Smith, Susan M; Stewart, Derek; Cousins, Gráinne

    2018-02-05

    The Necessity-Concerns Framework (NCF) is a multidimensional theory describing the relationship between patients' positive and negative evaluations of their medication, which interplay to influence adherence. Most studies evaluating the NCF have failed to account for the multidimensional nature of the theory, collapsing the separate dimensions of medication "necessity beliefs" and "concerns" onto a single dimension (e.g., the Beliefs about Medicines Questionnaire difference-score model). The aim of this study was to assess the multidimensional effect of patient medication beliefs (concerns and necessity beliefs) on medication adherence using polynomial regression with response surface analysis. Community-dwelling older adults >65 years (n = 1,211) presenting their own prescription for antihypertensive medication to 106 community pharmacies in the Republic of Ireland rated their concerns and necessity beliefs about antihypertensive medications at baseline, and their adherence to antihypertensive medication at 12 months via structured telephone interview. Confirmatory polynomial regression found the difference-score model to be inaccurate; subsequent exploratory analysis identified a quadratic model as the best-fitting polynomial model. Adherence was lowest among those with strong medication concerns and weak necessity beliefs, and greatest for those with weak concerns and strong necessity beliefs (slope β = -0.77, p < .001; curvature β = -0.26, p = .004). However, novel nonreciprocal effects were also observed; patients with simultaneously high concerns and necessity beliefs had lower adherence than those with simultaneously low concerns and necessity beliefs (slope β = -0.36, p = .004; curvature β = -0.25, p = .003). The difference-score model fails to account for these potential nonreciprocal effects. The results extend the evidence supporting the use of polynomial regression to assess the multidimensional effect of medication beliefs on adherence.
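
    The confirmatory model referred to here is the standard quadratic polynomial used in response surface analysis. A minimal sketch with simulated (not the study's) data: adherence is regressed on centered concerns C, necessity N, and their second-order terms, and the slope and curvature along the congruence (N = C) and incongruence (N = -C) lines are read off the coefficients.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)
      n = 300
      C = rng.normal(0.0, 1.0, n)          # centered concerns score (simulated)
      N = rng.normal(0.0, 1.0, n)          # centered necessity score (simulated)
      adh = 0.5 * N - 0.4 * C + rng.normal(0.0, 1.0, n)

      X = sm.add_constant(np.column_stack([C, N, C**2, C * N, N**2]))
      b = sm.OLS(adh, X).fit().params      # b0, bC, bN, bC2, bCN, bN2

      slope_congruence = b[1] + b[2]       # along the line N = C
      curve_congruence = b[3] + b[4] + b[5]
      slope_incongruence = b[1] - b[2]     # along the line N = -C
      curve_incongruence = b[3] - b[4] + b[5]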

  6. A new surrogate modeling technique combining Kriging and polynomial chaos expansions – Application to uncertainty analysis in computational dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kersaudy, Pierric, E-mail: pierric.kersaudy@orange.com; Whist Lab, 38 avenue du Général Leclerc, 92130 Issy-les-Moulineaux; ESYCOM, Université Paris-Est Marne-la-Vallée, 5 boulevard Descartes, 77700 Marne-la-Vallée

    2015-04-01

    In numerical dosimetry, recent advances in high performance computing have led to a strong reduction of the computational time required to assess the specific absorption rate (SAR) characterizing human exposure to electromagnetic waves. However, this procedure remains time-consuming and a single simulation can require several hours. As a consequence, the influence of uncertain input parameters on the SAR cannot be analyzed using crude Monte Carlo simulation. The solution presented here to perform such an analysis is surrogate modeling. This paper proposes a novel approach to build such a surrogate model from a design of experiments. Considering a sparse representation of the polynomial chaos expansions using least-angle regression as a selection algorithm to retain the most influential polynomials, this paper proposes to use the selected polynomials as regression functions for the universal Kriging model. Leave-one-out cross validation is used to select the optimal number of polynomials in the deterministic part of the Kriging model. The proposed approach, called LARS-Kriging-PC modeling, is applied to three benchmark examples and then to a full-scale metamodeling problem involving the exposure of a numerical fetus model to a femtocell device. The performance of LARS-Kriging-PC is compared to an ordinary Kriging model and to a classical sparse polynomial chaos expansion. LARS-Kriging-PC appears to perform better than the two other approaches, with a significant accuracy improvement over ordinary Kriging or the sparse polynomial chaos depending on the studied case. This approach seems to be an optimal compromise between the two other classical approaches. A global sensitivity analysis is finally performed on the LARS-Kriging-PC model of the fetus exposure problem.
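
    A simplified sketch of the selection stage only, on simulated data, with scikit-learn's LassoLars standing in as the least-angle regression solver: a tensorized Hermite (polynomial chaos) basis is built and the most influential terms are retained. In the full method those retained polynomials would become the trend functions of a universal Kriging model, which is not reproduced here.

      import numpy as np
      from numpy.polynomial import hermite_e
      from sklearn.linear_model import LassoLars

      rng = np.random.default_rng(3)
      xi = rng.normal(0.0, 1.0, (200, 2))              # standard-normal inputs
      y = xi[:, 0] + 0.5 * xi[:, 0] * xi[:, 1] + 0.1 * rng.normal(size=200)

      def he(x, j):
          """Probabilists' Hermite polynomial He_j evaluated at x."""
          return hermite_e.hermeval(x, np.eye(4)[j])

      # All non-constant tensor-product terms up to total degree 3
      basis = [(i, j) for i in range(4) for j in range(4) if 0 < i + j <= 3]
      Psi = np.column_stack([he(xi[:, 0], i) * he(xi[:, 1], j) for i, j in basis])

      sel = LassoLars(alpha=0.01).fit(Psi, y)
      retained = [basis[k] for k in np.flatnonzero(sel.coef_)]
      print(retained)   # multi-indices kept as regression functions for Kriging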

  7. Regression-based adaptive sparse polynomial dimensional decomposition for sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro; Abgrall, Remi

    2014-11-01

    Polynomial dimensional decomposition (PDD) is employed in this work for global sensitivity analysis and uncertainty quantification of stochastic systems subject to a large number of random input variables. Due to the intimate connection between PDD and Analysis of Variance, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than polynomial chaos (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of the standard method unaffordable for real engineering applications. To address this curse of dimensionality, this work proposes a variance-based adaptive strategy aiming to build a cheap meta-model by sparse PDD with the PDD coefficients computed by regression. During this adaptive procedure, the PDD representation contains only a few terms, so that the cost of repeatedly solving the linear system of the least-squares regression problem is negligible. The size of the final sparse-PDD representation is much smaller than the full PDD, since only significant terms are eventually retained. Consequently, far fewer calls to the deterministic model are required to compute the final PDD coefficients.

  8. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Congedo, Pietro M.; Abgrall, Rémi

    2016-06-01

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  9. Introduction to methodology of dose-response meta-analysis for binary outcome: With application on software.

    PubMed

    Zhang, Chao; Jia, Pengli; Yu, Liu; Xu, Chang

    2018-05-01

    Dose-response meta-analysis (DRMA) is widely applied to investigate the dose-specific relationship between independent and dependent variables. Such methods have been in use for over 30 years and are increasingly employed in healthcare and clinical decision-making. In this article, we give an overview of the methodology used in DRMA, summarizing the commonly used regression models and pooling methods. We also use an example to illustrate how to conduct a DRMA with these methods. Five regression models, linear regression, piecewise regression, natural polynomial regression, fractional polynomial regression, and restricted cubic spline regression, are illustrated for fitting the dose-response relationship, and two types of pooling approaches, the one-stage approach and the two-stage approach, are illustrated for pooling the dose-response relationship across studies. The example showed similar results among these models. Several dose-response meta-analysis methods can be used for investigating the relationship between exposure level and the risk of an outcome; however, the methodology of DRMA still needs to be improved. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.
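
    A minimal sketch of the per-study stage of the two-stage approach, using one of the model families named here (a fractional polynomial with powers 1/2 and 1) fitted by inverse-variance weighted least squares; the data are hypothetical and the pooling of curves across studies is not shown.

      import numpy as np
      import statsmodels.api as sm

      # Hypothetical data extracted from one study: dose levels, log relative
      # risks versus the referent dose, and their standard errors
      dose = np.array([5.0, 10.0, 20.0, 40.0])
      logrr = np.array([0.10, 0.22, 0.30, 0.33])
      se = np.array([0.08, 0.09, 0.10, 0.12])

      # Second-order fractional polynomial with powers (1/2, 1); omitting the
      # intercept forces logRR = 0 at the referent dose of zero
      X = np.column_stack([np.sqrt(dose), dose])
      fit = sm.WLS(logrr, X, weights=1.0 / se**2).fit()
      print(fit.params)   # coefficients of the fitted dose-response curve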

  10. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. Firstly, local polynomial fitting is applied to estimate the heteroscedastic function, then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Due to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
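
    A minimal sketch of the improved two-stage idea on simulated data (numpy and statsmodels assumed): ordinary least squares first, a local polynomial smooth of the squared residuals to estimate the unknown variance function, then weighted (generalized) least squares with the estimated variances.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(4)
      x = np.sort(rng.uniform(0.0, 1.0, 300))
      sigma = 0.2 + 0.8 * x                    # unknown heteroscedastic function
      y = 1.0 + 2.0 * x + sigma * rng.normal(size=300)

      X = sm.add_constant(x)
      resid2 = sm.OLS(y, X).fit().resid ** 2   # stage 1: squared OLS residuals

      def local_linear(x0, h=0.1):
          """Local linear smooth of the squared residuals at x0 (Gaussian kernel)."""
          w = np.exp(-0.5 * ((x - x0) / h) ** 2)
          return sm.WLS(resid2, sm.add_constant(x - x0), weights=w).fit().params[0]

      var_hat = np.array([local_linear(x0) for x0 in x])
      gls = sm.WLS(y, X, weights=1.0 / np.clip(var_hat, 1e-6, None)).fit()  # stage 2
      print(gls.params)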

  11. STATLIB: NSWC Library of Statistical Programs and Subroutines

    DTIC Science & Technology

    1989-08-01

    Uncorrelated Weighted Polynomial Regression … WEPORC Correlated Weighted Polynomial Regression … MROP Multiple Regression Using Orthogonal Polynomials … could not and should not be converted to the new general purpose computer (the current CDC 995). Some were designed to compute … personal computers. They are referred to as SPSSPC+, BMDPC, and SASPC and in general are less comprehensive than their mainframe counterparts. The basic…

  12. Covariance functions for body weight from birth to maturity in Nellore cows.

    PubMed

    Boligon, A A; Mercadante, M E Z; Forni, S; Lôbo, R B; Albuquerque, L G

    2010-03-01

    The objective of this study was to estimate (co)variance functions using random regression models on Legendre polynomials for the analysis of repeated measures of BW from birth to adult age. A total of 82,064 records from 8,145 females were analyzed. Different models were compared. The models included additive direct and maternal effects, and animal and maternal permanent environmental effects as random terms. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of animal age (cubic regression) were considered as random covariables. Eight models with polynomials of third to sixth order were used to describe additive direct and maternal effects, and animal and maternal permanent environmental effects. Residual effects were modeled using 1 (i.e., assuming homogeneity of variances across all ages) or 5 age classes. The model with 5 classes was the best to describe the trajectory of residuals along the growth curve. The model including fourth- and sixth-order polynomials for additive direct and animal permanent environmental effects, respectively, and third-order polynomials for maternal genetic and maternal permanent environmental effects was the best. Estimates of (co)variance obtained with the multi-trait and random regression models were similar. Direct heritability estimates obtained with the random regression models followed a trend similar to that obtained with the multi-trait model. The largest estimates of maternal heritability were those of BW taken close to 240 d of age. In general, estimates of correlation between BW from birth to 8 yr of age decreased with increasing distance between ages.

  13. The Necessity-Concerns-Framework: A Multidimensional Theory Benefits from Multidimensional Analysis

    PubMed Central

    Phillips, L. Alison; Diefenbach, Michael; Kronish, Ian M.; Negron, Rennie M.; Horowitz, Carol R.

    2014-01-01

    Background: Patients’ medication-related concerns and necessity-beliefs predict adherence. Evaluation of the potentially complex interplay of these two dimensions has been limited because of methods that reduce them to a single dimension (difference scores). Purpose: We use polynomial regression to assess the multidimensional effect of stroke-event survivors’ medication-related concerns and necessity-beliefs on their adherence to stroke-prevention medication. Methods: Survivors (n=600) rated their concerns, necessity-beliefs, and adherence to medication. Confirmatory and exploratory polynomial regression determined the best-fitting multidimensional model. Results: As posited by the Necessity-Concerns Framework (NCF), the greatest and lowest adherence was reported by those with strong necessity-beliefs/weak concerns and strong concerns/weak necessity-beliefs, respectively. However, as could not be assessed using a difference-score model, patients with ambivalent beliefs were less adherent than those exhibiting indifference. Conclusions: Polynomial regression allows for assessment of the multidimensional nature of the NCF. Clinicians and researchers should be aware that the concerns and necessity dimensions are not polar opposites. PMID:24500078

  14. The necessity-concerns framework: a multidimensional theory benefits from multidimensional analysis.

    PubMed

    Phillips, L Alison; Diefenbach, Michael A; Kronish, Ian M; Negron, Rennie M; Horowitz, Carol R

    2014-08-01

    Patients' medication-related concerns and necessity-beliefs predict adherence. Evaluation of the potentially complex interplay of these two dimensions has been limited because of methods that reduce them to a single dimension (difference scores). We use polynomial regression to assess the multidimensional effect of stroke-event survivors' medication-related concerns and necessity beliefs on their adherence to stroke-prevention medication. Survivors (n = 600) rated their concerns, necessity beliefs, and adherence to medication. Confirmatory and exploratory polynomial regression determined the best-fitting multidimensional model. As posited by the necessity-concerns framework (NCF), the greatest and lowest adherence was reported by those with strong necessity beliefs/weak concerns and strong concerns/weak necessity beliefs, respectively. However, as could not be assessed using a difference-score model, patients with ambivalent beliefs were less adherent than those exhibiting indifference. Polynomial regression allows for assessment of the multidimensional nature of the NCF. Clinicians and researchers should be aware that the concerns and necessity dimensions are not polar opposites.

  15. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity measures in the literature, the Sobol' indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol' indices at low computational cost. Through the proposed derivation, estimates of the Sobol' indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by an orthogonal polynomials kernel function and a Gaussian radial basis kernel function, so the MKF possesses both the global characteristic advantage of the polynomials kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. The performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
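
    A minimal sketch of the mixed-kernel construction on simulated data (scikit-learn assumed): a convex combination of a polynomial Gram matrix and a Gaussian RBF Gram matrix is passed to SVR as a callable kernel. The paper's first component is an orthogonal polynomials kernel; a standard polynomial kernel stands in for it here.

      import numpy as np
      from sklearn.svm import SVR

      rng = np.random.default_rng(5)
      X = rng.uniform(-1.0, 1.0, (200, 3))
      y = X[:, 0] ** 2 + np.sin(np.pi * X[:, 1]) + 0.1 * rng.normal(size=200)

      def mixed_kernel(A, B, w=0.5, degree=3, gamma=2.0):
          """Convex combination of polynomial and Gaussian RBF Gram matrices."""
          poly = (A @ B.T + 1.0) ** degree
          d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
          rbf = np.exp(-gamma * d2)
          return w * poly + (1.0 - w) * rbf

      model = SVR(kernel=mixed_kernel, C=10.0).fit(X, y)
      print(model.predict(X[:5]))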

  16. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    NASA Astrophysics Data System (ADS)

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-03-01

    Estimating human affective states from direct observations and from facial, vocal, gestural, physiological, and central nervous signals, through computational models such as multivariate linear regression, support vector regression, and artificial neural networks, has been proposed over the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method obtains correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal originate in the brain’s motivational circuits. Thus, the proposed method can serve as a novel, efficient way of estimating human affective states.
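
    A minimal sketch of a higher-order multivariable polynomial regression in scikit-learn, on simulated features rather than the study's skin-conductance data: all cross products of the inputs up to third order enter one linear model.

      import numpy as np
      from sklearn.linear_model import LinearRegression
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import PolynomialFeatures

      rng = np.random.default_rng(6)
      scl = rng.normal(0.0, 1.0, (100, 4))   # simulated stand-in features
      valence = (scl @ np.array([0.5, -0.3, 0.2, 0.1])
                 + 0.3 * scl[:, 0] * scl[:, 1] + 0.1 * rng.normal(size=100))

      # Third-order multivariable polynomial regression on all feature products
      model = make_pipeline(PolynomialFeatures(degree=3), LinearRegression())
      model.fit(scl, valence)
      r = np.corrcoef(valence, model.predict(scl))[0, 1]
      print(r)   # in-sample correlation; the 0.98/0.96 figures are the paper's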

  17. Higher-order Multivariable Polynomial Regression to Estimate Human Affective States

    PubMed Central

    Wei, Jie; Chen, Tong; Liu, Guangyuan; Yang, Jiemin

    2016-01-01

    Estimating human affective states from direct observations and from facial, vocal, gestural, physiological, and central nervous signals, through computational models such as multivariate linear regression, support vector regression, and artificial neural networks, has been proposed over the past decade. Among these models, linear models generally lack precision because they ignore the intrinsic nonlinearities of complex psychophysiological processes, while nonlinear models commonly rely on complicated algorithms. To improve accuracy and simplify the model, we introduce a new computational modeling method, higher-order multivariable polynomial regression, to estimate human affective states. The study employs standardized pictures from the International Affective Picture System to induce thirty subjects’ affective states, and obtains pure affective patterns of skin conductance as input variables to the higher-order multivariable polynomial model for predicting affective valence and arousal. Experimental results show that our method obtains correlation coefficients of 0.98 and 0.96 for the estimation of affective valence and arousal, respectively. Moreover, the method may provide indirect evidence that valence and arousal originate in the brain’s motivational circuits. Thus, the proposed method can serve as a novel, efficient way of estimating human affective states. PMID:26996254

  18. A quadratic regression modelling on paddy production in the area of Perlis

    NASA Astrophysics Data System (ADS)

    Goh, Aizat Hanis Annas; Ali, Zalila; Nor, Norlida Mohd; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2017-08-01

    Polynomial regression models are useful in situations in which the relationship between a response variable and predictor variables is curvilinear. Polynomial regression fits the nonlinear relationship into a least squares linear regression model by decomposing the predictor variables into a kth order polynomial. The polynomial order determines the number of inflexions on the curvilinear fitted line. A second order polynomial forms a quadratic expression (parabolic curve) with either a single maximum or minimum, while a third order polynomial forms a cubic expression with both a relative maximum and a minimum. This study used paddy data from the area of Perlis to model paddy production based on paddy cultivation characteristics and environmental characteristics. The results indicated that a quadratic regression model best fits the data, and that paddy production is affected by urea fertilizer application and by the interaction between the amount of average rainfall and the percentage of area affected by pest and disease. Urea fertilizer application has a quadratic effect in the model, indicating that as the number of days of urea fertilizer application increases, paddy production is expected to decrease until it reaches a minimum value, and then to increase at a higher number of days of urea application. The decrease in paddy production with an increase in rainfall is greater, the higher the percentage of area affected by pest and disease.

  19. Local Composite Quantile Regression Smoothing for Harris Recurrent Markov Processes

    PubMed Central

    Li, Degui; Li, Runze

    2016-01-01

    In this paper, we study the local polynomial composite quantile regression (CQR) smoothing method for nonlinear and nonparametric models under the Harris recurrent Markov chain framework. The local polynomial CQR method is a robust alternative to the widely used local polynomial method, and has been well studied for stationary time series. In this paper, we relax the stationarity restriction on the model and allow the regressors to be generated by a general Harris recurrent Markov process, which includes both the stationary (positive recurrent) and nonstationary (null recurrent) cases. Under some mild conditions, we establish the asymptotic theory for the proposed local polynomial CQR estimator of the mean regression function, and show that the convergence rate of the estimator in the nonstationary case is slower than that in the stationary case. Furthermore, a weighted local polynomial CQR estimator is provided to improve estimation efficiency, and a data-driven bandwidth selection is introduced to choose the optimal bandwidth involved in the nonparametric estimators. Finally, we present some numerical studies examining the finite sample performance of the developed methodology and theory. PMID:27667894

  20. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band-ratio ocean color (OC) algorithms take the form of fourth-order polynomials, and the parameters of these polynomials (i.e., coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions, with differing properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches based on the spatial stationarity assumption, so they are independent of location. The multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of the spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of the GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely valid.
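
    A minimal sketch of the PLS step on simulated data (scikit-learn assumed; the coefficients below are made up, not NOMAD values): the four highly correlated polynomial terms of the log band ratio are compressed into a small number of orthogonal components before regression.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(7)
      ratio = rng.uniform(0.3, 1.2, 500)          # blue-green band ratio R
      logR = np.log10(ratio)
      X = np.column_stack([logR ** p for p in range(1, 5)])  # collinear columns
      log_chl = -0.35 - 2.0 * logR + 0.5 * logR ** 2 + 0.05 * rng.normal(size=500)

      pls = PLSRegression(n_components=2).fit(X, log_chl)
      print(pls.score(X, log_chl))                # R^2 of the PLS fit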

  21. Leader-follower value congruence in social responsibility and ethical satisfaction: a polynomial regression analysis.

    PubMed

    Kang, Seung-Wan; Byun, Gukdo; Park, Hun-Joon

    2014-12-01

    This paper presents empirical research into the relationship between leader-follower value congruence in social responsibility and the level of ethical satisfaction for employees in the workplace. 163 dyads were analyzed, each consisting of a team leader and an employee working at a large manufacturing company in South Korea. Following current methodological recommendations for congruence research, polynomial regression and response surface modeling methodologies were used to determine the effects of value congruence. Results indicate that leader-follower value congruence in social responsibility was positively related to the ethical satisfaction of employees. Furthermore, employees' ethical satisfaction was stronger when aligned with a leader with high social responsibility. The theoretical and practical implications are discussed.

  22. Periodicity analysis of tourist arrivals to Banda Aceh using smoothing SARIMA approach

    NASA Astrophysics Data System (ADS)

    Miftahuddin, Helida, Desri; Sofyan, Hizir

    2017-11-01

    Forecasting the number of tourist arrivals entering a region is needed for tourism businesses and for economic and industrial policy, so statistical modeling needs to be conducted. Banda Aceh is the capital of Aceh province, where economic activity is driven largely by the services sector, one of which is tourism. Therefore, a prediction of the number of tourist arrivals is needed to develop further policies. The identification results indicate that the data on foreign tourist arrivals to Banda Aceh contain trend and seasonal components. The number of arrivals is presumably influenced by external factors, such as economics, politics, and the holiday season, which caused structural breaks in the data. Trend patterns are detected using polynomial regression with quadratic and cubic approaches, while seasonality is detected by periodic polynomial regression with quadratic and cubic approaches. To model data with seasonal effects, one of the statistical methods that can be used is SARIMA (Seasonal Autoregressive Integrated Moving Average). The results showed that the best smoothing method for detecting the trend pattern is the cubic polynomial regression approach, with the modified model and a multiplicative periodicity of 12 months; the AIC value obtained was 70.52. The best method for detecting the seasonal pattern is the periodic cubic polynomial regression approach, with the modified model and a multiplicative periodicity of 12 months; the AIC value obtained was 73.37. Furthermore, the best model to predict the number of foreign tourist arrivals to Banda Aceh in 2017 to 2018 is SARIMA (0,1,1)(1,1,0), with a MAPE of 26%.
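
    A minimal sketch of fitting the selected model, SARIMA(0,1,1)(1,1,0) with a 12-month season, on a simulated monthly arrivals series (statsmodels assumed; the paper's actual series is not reproduced).

      import numpy as np
      import pandas as pd
      from statsmodels.tsa.statespace.sarimax import SARIMAX

      idx = pd.date_range("2011-01", periods=72, freq="MS")
      y = pd.Series(1000.0 + 10.0 * np.arange(72)
                    + 150.0 * np.sin(2 * np.pi * np.arange(72) / 12)
                    + np.random.default_rng(8).normal(0.0, 40.0, 72), index=idx)

      model = SARIMAX(y, order=(0, 1, 1), seasonal_order=(1, 1, 0, 12)).fit(disp=False)
      forecast = model.forecast(steps=24)   # e.g., a 2017-2018 horizon
      print(model.aic)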

  23. Linear and evolutionary polynomial regression models to forecast coastal dynamics: Comparison and reliability assessment

    NASA Astrophysics Data System (ADS)

    Bruno, Delia Evelina; Barca, Emanuele; Goncalves, Rodrigo Mikosz; de Araujo Queiroz, Heithor Alexandre; Berardi, Luigi; Passarella, Giuseppe

    2018-01-01

    In this paper, the Evolutionary Polynomial Regression data modelling strategy has been applied to study small scale, short-term coastal morphodynamics, given its capability for treating a wide database of known information non-linearly. Simple linear and multilinear regression models were also applied, to achieve a balance between the computational load and the reliability of estimations across the three models. In fact, even though it is easy to imagine that the more complex the model, the more the prediction improves, sometimes a "slight" worsening of estimations can be accepted in exchange for the time saved in data organization and computational load. The models' outcomes were validated through a detailed statistical error analysis, which revealed a slightly better estimation by the polynomial model with respect to the multilinear model, as expected. On the other hand, even though the data organization was identical for the two models, the multilinear one required a simpler simulation setting and a faster run time. Finally, the most reliable evolutionary polynomial regression model was used to make some conjectures about how uncertainty increases as the extrapolation time of the estimation is extended. The overlap between the confidence band of the mean of the known coast position and the prediction band of the estimated position can be a good index of the weakness in producing reliable estimations when the extrapolation time increases too much. The proposed models and tests have been applied to a coastal sector located near Torre Colimena in the Apulia region, south Italy.

  24. Investigating and Modelling Effects of Climatically and Hydrologically Indicators on the Urmia Lake Coastline Changes Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Ahmadijamal, M.; Hasanlou, M.

    2017-09-01

    Studying the hydrological parameters of lakes and examining variations in water level are important for managing water resources. The purpose of this study is to investigate and model changes in the water level of Urmia Lake due to changes in the climatic and hydrological indicators that affect the level and area of this lake. For this purpose, Landsat satellite images, hydrological data, daily precipitation, daily surface evaporation, and daily discharge over the whole lake basin during the period 2010-2016 have been used. Based on time-series analyses conducted independently on each dataset with the same procedure, we modelled the variation of the Urmia Lake level using a polynomial regression technique and a polynomial combined with periodic behaviour. In the first scenario, we fit a multivariate linear polynomial to our datasets and determined the RMSE, NRMSE, and R² values. We found that a fourth-degree polynomial fits our datasets best, with the lowest RMSE value, about 9 cm. In the second scenario, we combine the polynomial with periodic behaviour for modelling. The second scenario is superior to the first, with an RMSE value of about 3 cm.

  25. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters, with the aim of improving the ability of the polynomial derivative term series to approximate complicated periodic functions, as simple low-order polynomials cannot fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  26. Adaptive surrogate modeling by ANOVA and sparse polynomial dimensional decomposition for global sensitivity analysis in fluid simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tang, Kunkun, E-mail: ktg@illinois.edu; Inria Bordeaux – Sud-Ouest, Team Cardamom, 200 avenue de la Vieille Tour, 33405 Talence; Congedo, Pietro M.

    The Polynomial Dimensional Decomposition (PDD) is employed in this work for the global sensitivity analysis and uncertainty quantification (UQ) of stochastic systems subject to a moderate to large number of input random variables. Due to the intimate connection between the PDD and the Analysis of Variance (ANOVA) approaches, PDD is able to provide a simpler and more direct evaluation of the Sobol' sensitivity indices than the Polynomial Chaos expansion (PC). Unfortunately, the number of PDD terms grows exponentially with the size of the input random vector, which makes the computational cost of standard methods unaffordable for real engineering applications. To address the curse of dimensionality, this work proposes essentially variance-based adaptive strategies aiming to build a cheap meta-model (i.e. surrogate model) by employing the sparse PDD approach with its coefficients computed by regression. Three levels of adaptivity are carried out in this paper: 1) the truncated dimensionality for ANOVA component functions, 2) the active dimension technique, especially for second- and higher-order parameter interactions, and 3) the stepwise regression approach designed to retain only the most influential polynomials in the PDD expansion. During this adaptive procedure featuring stepwise regressions, the surrogate model representation contains only a few terms, so that the cost of repeatedly solving the linear systems of the least-squares regression problem is negligible. The size of the finally obtained sparse PDD representation is much smaller than that of the full expansion, since only significant terms are eventually retained. Consequently, a much smaller number of calls to the deterministic model is required to compute the final PDD coefficients.

  27. Improving reliability of aggregation, numerical simulation and analysis of complex systems by empirical data

    NASA Astrophysics Data System (ADS)

    Dobronets, Boris S.; Popova, Olga A.

    2018-05-01

    The paper considers a new approach to regression modeling that uses aggregated data presented in the form of density functions. Approaches to improving the reliability of the aggregation of empirical data are considered: improving accuracy and estimating errors. We discuss data aggregation procedures as a preprocessing stage for subsequent regression modeling. An important feature of the study is a demonstration of how to represent the aggregated data. It is proposed to use piecewise polynomial models, including spline aggregate functions. We show that the proposed approach to data aggregation can be interpreted as a frequency distribution; to study its properties, the density function concept is used. Various types of mathematical models of data aggregation are discussed. For the construction of regression models, it is proposed to use data representation procedures based on piecewise polynomial models. New approaches to modeling functional dependencies based on spline aggregations are proposed.

  28. Genetic analysis of groups of mid-infrared predicted fatty acids in milk.

    PubMed

    Narayana, S G; Schenkel, F S; Fleming, A; Koeck, A; Malchiodi, F; Jamrozik, J; Johnston, J; Sargolzaei, M; Miglior, F

    2017-06-01

    The objective of this study was to investigate genetic variability of mid-infrared predicted fatty acid groups in Canadian Holstein cattle. Genetic parameters were estimated for 5 groups of fatty acids: short-chain (4 to 10 carbons), medium-chain (11 to 16 carbons), long-chain (17 to 22 carbons), saturated, and unsaturated fatty acids. The data set included 49,127 test-day records from 10,029 first-lactation Holstein cows in 810 herds. The random regression animal test-day model included days in milk, herd-test date, and age-season of calving (polynomial regression) as fixed effects, herd-year of calving, animal additive genetic effect, and permanent environment effects as random polynomial regressions, and random residual effect. Legendre polynomials of the third degree were selected for the fixed regression for age-season of calving effect and Legendre polynomials of the fourth degree were selected for the random regression for animal additive genetic, permanent environment, and herd-year effect. The average daily heritability over the lactation for the medium-chain fatty acid group (0.32) was higher than for the short-chain (0.24) and long-chain (0.23) fatty acid groups. The average daily heritability for the saturated fatty acid group (0.33) was greater than for the unsaturated fatty acid group (0.21). Estimated average daily genetic correlations were positive among all fatty acid groups and ranged from moderate to high (0.63-0.96). The genetic correlations illustrated similarities and differences in their origin and the makeup of the groupings based on chain length and saturation. These results provide evidence for the existence of genetic variation in mid-infrared predicted fatty acid groups, and the possibility of improving milk fatty acid profile through genetic selection in Canadian dairy cattle. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  29. Discrepancies Between Perceptions of the Parent-Adolescent Relationship and Early Adolescent Depressive Symptoms: An Illustration of Polynomial Regression Analysis.

    PubMed

    Nelemans, S A; Branje, S J T; Hale, W W; Goossens, L; Koot, H M; Oldehinkel, A J; Meeus, W H J

    2016-10-01

    Adolescence is a critical period for the development of depressive symptoms. Lower quality of the parent-adolescent relationship has been consistently associated with higher adolescent depressive symptoms, but discrepancies in perceptions of parents and adolescents regarding the quality of their relationship may be particularly important to consider. In the present study, we therefore examined how discrepancies in parents' and adolescents' perceptions of the parent-adolescent relationship were associated with early adolescent depressive symptoms, both concurrently and longitudinally over a 1-year period. Our sample consisted of 497 Dutch adolescents (57 % boys, M age = 13.03 years), residing in the western and central regions of the Netherlands, and their mothers and fathers, who all completed several questionnaires on two occasions with a 1-year interval. Adolescents reported on depressive symptoms and all informants reported on levels of negative interaction in the parent-adolescent relationship. Results from polynomial regression analyses including interaction terms between informants' perceptions, which have recently been proposed as more valid tests of hypotheses involving informant discrepancies than difference scores, suggested the highest adolescent depressive symptoms when both the mother and the adolescent reported high negative interaction, and when the adolescent reported high but the father reported low negative interaction. This pattern of findings underscores the need for a more sophisticated methodology such as polynomial regression analysis including tests of moderation, rather than the use of difference scores, which can adequately address both congruence and discrepancies in perceptions of adolescents and mothers/fathers of the parent-adolescent relationship in detail. Such an analysis can contribute to a more comprehensive understanding of risk factors for early adolescent depressive symptoms.

  30. A phenomenological biological dose model for proton therapy based on linear energy transfer spectra.

    PubMed

    Rørvik, Eivind; Thörnqvist, Sara; Stokkevåg, Camilla H; Dahle, Tordis J; Fjaera, Lars Fredrik; Ytre-Hauge, Kristian S

    2017-06-01

    The relative biological effectiveness (RBE) of protons varies with the radiation quality, quantified by the linear energy transfer (LET). Most phenomenological models employ a linear dependency of the dose-averaged LET (LETd) to calculate the biological dose. However, several experiments have indicated a possible non-linear trend. Our aim was to investigate if biological dose models including non-linear LET dependencies should be considered, by introducing a LET spectrum based dose model. The RBE-LET relationship was investigated by fitting of polynomials from 1st to 5th degree to a database of 85 data points from aerobic in vitro experiments. We included both unweighted and weighted regression, the latter taking into account experimental uncertainties. Statistical testing was performed to decide whether higher degree polynomials provided better fits to the data as compared to lower degrees. The newly developed models were compared to three published LETd based models for a simulated spread out Bragg peak (SOBP) scenario. The statistical analysis of the weighted regression analysis favored a non-linear RBE-LET relationship, with the quartic polynomial found to best represent the experimental data (P = 0.010). The results of the unweighted regression analysis were on the borderline of statistical significance for non-linear functions (P = 0.053), and with the current database a linear dependency could not be rejected. For the SOBP scenario, the weighted non-linear model estimated a similar mean RBE value (1.14) compared to the three established models (1.13-1.17). The unweighted model calculated a considerably higher RBE value (1.22). The analysis indicated that non-linear models could give a better representation of the RBE-LET relationship. However, this is not decisive, as inclusion of the experimental uncertainties in the regression analysis had a significant impact on the determination and ranking of the models. As differences between the models were observed for the SOBP scenario, both non-linear LET spectrum- and linear LETd based models should be further evaluated in clinically realistic scenarios. © 2017 American Association of Physicists in Medicine.
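
    A minimal sketch of the weighted polynomial comparison on simulated RBE-LET points (not the experimental database): degrees 1 through 5 are fit with weights 1/sigma, and the weighted residual sums of squares are what the statistical tests of successive degrees would operate on.

      import numpy as np

      rng = np.random.default_rng(9)
      let = np.linspace(1.0, 15.0, 20)                 # keV/um, simulated
      rbe = 1.0 + 0.04 * let + 0.002 * let ** 2 + rng.normal(0.0, 0.05, 20)
      err = np.full(20, 0.05)                          # experimental uncertainties

      for deg in range(1, 6):
          coef = np.polyfit(let, rbe, deg, w=1.0 / err)    # weighted fit
          wrss = np.sum(((np.polyval(coef, let) - rbe) / err) ** 2)
          print(deg, wrss)   # an F-test on successive drops would rank degrees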

  31. Evaluation of Piecewise Polynomial Equations for Two Types of Thermocouples

    PubMed Central

    Chen, Andrew; Chen, Chiachung

    2013-01-01

    Thermocouples are the most frequently used sensors for temperature measurement because of their wide applicability, long-term stability and high reliability. However, one of the major utilization problems is the linearization of the transfer relation between temperature and output voltage of thermocouples. The linear calibration equation and its modules could be improved by using regression analysis to help solve this problem. In this study, two types of thermocouple and five temperature ranges were selected to evaluate the fitting agreement of different-order polynomial equations. Two quantitative criteria, the average of the absolute error values, |e|ave, and the standard deviation of the calibration equation, estd, were used to evaluate the accuracy and precision of these calibration equations. The optimal order of polynomial equations differed with the temperature range. The accuracy and precision of the calibration equation could be improved significantly with an adequate higher-degree polynomial equation. The technique could be applied with hardware modules to serve as an intelligent sensor for temperature measurement. PMID:24351627

  32. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression.

    PubMed

    Ding, A Adam; Wu, Hulin

    2014-10-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method.

  33. Estimation of Ordinary Differential Equation Parameters Using Constrained Local Polynomial Regression

    PubMed Central

    Ding, A. Adam; Wu, Hulin

    2015-01-01

    We propose a new method to use a constrained local polynomial regression to estimate the unknown parameters in ordinary differential equation models with a goal of improving the smoothing-based two-stage pseudo-least squares estimate. The equation constraints are derived from the differential equation model and are incorporated into the local polynomial regression in order to estimate the unknown parameters in the differential equation model. We also derive the asymptotic bias and variance of the proposed estimator. Our simulation studies show that our new estimator is clearly better than the pseudo-least squares estimator in estimation accuracy with a small price of computational cost. An application example on immune cell kinetics and trafficking for influenza infection further illustrates the benefits of the proposed new method. PMID:26401093

  34. Reliability of the Load-Velocity Relationship Obtained Through Linear and Polynomial Regression Models to Predict the One-Repetition Maximum Load.

    PubMed

    Pestaña-Melero, Francisco Luis; Haff, G Gregory; Rojas, Francisco Javier; Pérez-Castilla, Alejandro; García-Ramos, Amador

    2017-12-18

    This study aimed to compare the between-session reliability of the load-velocity relationship between (1) linear vs. polynomial regression models, (2) concentric-only vs. eccentric-concentric bench press variants, and (3) the within-participants vs. between-participants variability of the velocity attained at each percentage of the one-repetition maximum (%1RM). The load-velocity relationships of 30 men (age: 21.2±3.8 y; height: 1.78±0.07 m, body mass: 72.3±7.3 kg; bench press 1RM: 78.8±13.2 kg) were evaluated by means of linear and polynomial regression models in the concentric-only and eccentric-concentric bench press variants in a Smith machine. Two sessions were performed with each bench press variant. The main findings were: (1) first-order polynomials (CV: 4.39%-4.70%) provided the load-velocity relationship with higher reliability than second-order polynomials (CV: 4.68%-5.04%); (2) the reliability of the load-velocity relationship did not differ between the concentric-only and eccentric-concentric bench press variants; (3) the within-participants variability of the velocity attained at each %1RM was markedly lower than the between-participants variability. Taken together, these results highlight that, regardless of the bench press variant considered, the individual determination of the load-velocity relationship by a linear regression model can be recommended to monitor and prescribe the relative load in the Smith machine bench press exercise.
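
    A minimal sketch of how an individual first-order load-velocity profile is used in practice to predict the 1RM (hypothetical session data; the 0.17 m/s minimal velocity threshold is an illustrative assumption, not a value from this study).

      import numpy as np

      # One athlete's session: bench press loads (kg) and mean velocities (m/s)
      load = np.array([20.0, 30.0, 40.0, 50.0, 60.0, 70.0])
      vel = np.array([1.30, 1.12, 0.95, 0.76, 0.58, 0.40])

      slope, intercept = np.polyfit(load, vel, 1)   # first-order model
      v_min = 0.17                # assumed minimal velocity threshold at 1RM
      pred_1rm = (v_min - intercept) / slope        # load where velocity hits v_min
      print(round(pred_1rm, 1))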

  35. Bayesian B-spline mapping for dynamic quantitative traits.

    PubMed

    Xing, Jun; Li, Jiahan; Yang, Runqing; Zhou, Xiaojing; Xu, Shizhong

    2012-04-01

    Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expressions in the RR framework, B-splines have been proved successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms the interval mapping based on the maximum likelihood; (2) for the simulated dataset with complicated growth curve simulated by B-splines, Legendre polynomial-based Bayesian mapping is not capable of identifying the designed QTLs accurately, even when higher-order Legendre polynomials are considered and (3) for the simulated dataset using Legendre polynomials, the Bayesian B-spline mapping can find the same QTLs as those identified by Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-spline in Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, where QTLs controlling the growth trajectory of stem diameters in Populus are located.

  36. Modelling the breeding of Aedes albopictus species in an urban area in Pulau Pinang using polynomial regression

    NASA Astrophysics Data System (ADS)

    Salleh, Nur Hanim Mohd; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Saad, Ahmad Ramli; Sulaiman, Husna Mahirah; Ahmad, Wan Muhamad Amir W.

    2014-07-01

Polynomial regression is used to model a curvilinear relationship between a response variable and one or more predictor variables. It is a form of least squares linear regression in which the predictor variables are expanded into nth-order polynomial terms. In such a curvilinear relationship, a polynomial of degree n has at most n-1 extreme points: a quadratic model has a single maximum or minimum, whereas a cubic model can have both a relative maximum and a relative minimum. This study used quadratic modeling techniques to analyze the effects of environmental factors (temperature, relative humidity, and rainfall distribution) on the breeding of Aedes albopictus, a type of Aedes mosquito. Data were collected in an urban area in south-west Penang from September 2010 until January 2011. The results indicated that the breeding of Aedes albopictus in the urban area is influenced by all three environmental characteristics. The number of mosquito eggs is estimated to reach a maximum value at a medium temperature, a medium relative humidity and a high rainfall distribution.
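
    A minimal sketch of the quadratic fit described above for a single factor, with invented temperature/egg-count data; the extremum of the fitted parabola gives the estimated optimum.

        import numpy as np

        temp = np.array([24.0, 25.5, 27.0, 28.5, 30.0, 31.5, 33.0])  # deg C
        eggs = np.array([110, 150, 176, 182, 171, 140, 95])          # eggs trapped

        b2, b1, b0 = np.polyfit(temp, eggs, 2)   # eggs ~ b0 + b1*T + b2*T^2
        t_opt = -b1 / (2.0 * b2)                 # single extremum of a quadratic
        print("optimum temperature: %.1f deg C" % t_opt)
        print("predicted maximum: %.0f eggs" % np.polyval([b2, b1, b0], t_opt))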

  17. [Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].

    PubMed

    Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min

    2014-01-01

This study examined the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis on a national scale, in order to determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to the key-ward data of the 2001-2002 national endemic fluorosis survey. First, fractional polynomials (FP) were adopted to establish a fixed-effect model and determine the best FP structure; restricted maximum likelihood (REML) was then adopted to estimate the between-study variance, and the best random-effect model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random-effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L, respectively. Fluoride in drinking water explained only 35.8% of the variability in prevalence; among the other influencing factors, ward type was significant, whereas temperature condition and altitude were not. The fractional polynomial-based meta-regression method is simple and practical and provides a good fit; on this basis, the safety threshold of fluoride in drinking water in China is determined to be 0.8 mg/L.
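
    A toy sketch of the first-order logarithmic fractional polynomial with a benchmark-dose readout; the survey points and the 5% benchmark response are invented, and the study's random-effect meta-regression is replaced here by a plain least-squares fit.

        import numpy as np

        dose = np.array([0.3, 0.6, 0.9, 1.2, 1.8, 2.5])        # fluoride, mg/L
        prev = np.array([0.02, 0.05, 0.09, 0.13, 0.21, 0.30])  # fluorosis prevalence

        X = np.column_stack([np.ones_like(dose), np.log(dose)])  # FP: b0 + b1*ln(x)
        (b0, b1), *_ = np.linalg.lstsq(X, prev, rcond=None)

        bmr = 0.05                          # benchmark response (assumed 5% prevalence)
        bmd = np.exp((bmr - b0) / b1)       # invert b0 + b1*ln(x) = bmr
        print("BMD at a %.0f%% benchmark response: %.2f mg/L" % (100 * bmr, bmd))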

  18. Random regression analyses using B-splines functions to model growth from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Alencar, M M; Albuquerque, L G

    2010-12-01

The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable, and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects, were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and to a multi-trait model. Results from the different models were compared using the REML form of the Akaike information criterion and Schwarz's Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic and animal permanent environmental effects and two knots for the maternal additive genetic and maternal permanent environmental effects, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, although a larger number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.

  19. Random regression models on Legendre polynomials to estimate genetic parameters for weights from birth to adult age in Canchim cattle.

    PubMed

    Baldi, F; Albuquerque, L G; Alencar, M M

    2010-08-01

The objective of this work was to estimate covariance functions for direct and maternal genetic effects and animal and maternal permanent environmental effects, and subsequently to derive relevant genetic parameters for growth traits in Canchim cattle. Data comprised 49,011 weight records on 2435 females from birth to adult age. The model of analysis included fixed effects of contemporary groups (year and month of birth and at weighing) and age of dam as a quadratic covariable. Mean trends were taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were allowed to vary and were modelled by a step function with 1, 4 or 11 classes based on animal age; the model fitting four classes of residual variances was the best. A total of 12 random regression models, from second to seventh order, were used to model direct and maternal genetic effects and animal and maternal permanent environmental effects. The model with direct and maternal genetic effects and animal and maternal permanent environmental effects fitted by quartic, cubic, quintic and linear Legendre polynomials, respectively, was the most adequate to describe the covariance structure of the data. Estimates of direct and maternal heritability obtained by multi-trait (seven traits) and random regression models were very similar. Selection for higher weight at any age, especially after weaning, will produce an increase in mature cow weight. The possibility of modifying the growth curve in Canchim cattle to obtain animals with rapid growth at early ages and moderate to low mature cow weight is limited.
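
    As an illustration of the Legendre machinery used in such random regression models, the sketch below builds a design matrix of Legendre polynomials on ages standardized to [-1, 1]; the ages and the cubic order are assumptions for demonstration only.

        import numpy as np
        from numpy.polynomial import legendre

        ages = np.array([1, 120, 240, 365, 550, 730, 1460, 2920])  # days of age
        a_min, a_max = ages.min(), ages.max()
        x = 2.0 * (ages - a_min) / (a_max - a_min) - 1.0  # map ages to [-1, 1]

        Phi = legendre.legvander(x, 3)  # columns: P0..P3 evaluated at each age
        print(Phi.shape)                # (8, 4) design matrix for a cubic regression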

  20. Peculiarities of stochastic regime of Arctic ice cover time evolution over 1987-2014 from microwave satellite sounding on the basis of NASA team 2 algorithm

    NASA Astrophysics Data System (ADS)

    Raev, M. D.; Sharkov, E. A.; Tikhonov, V. V.; Repina, I. A.; Komarova, N. Yu.

    2015-12-01

The GLOBAL-RT database (DB) is composed of long-term multichannel radiothermal (passive microwave) observation data received from the DMSP F08-F17 satellites; it is permanently supplemented with new Earth remote sensing data at the Space Research Institute, Russian Academy of Sciences. Arctic ice-cover areas for regions above 60° N latitude were calculated using the DB polar version and the NASA Team 2 algorithm, which is widely used in the foreign scientific literature. From the analysis of the variability of the Arctic ice cover during 1987-2014, the two months were selected in which the Arctic ice cover was maximal (February) and minimal (September), and the average ice-cover area was calculated for these months. Confidence intervals of the average values are within the 95-98% limits. Several approximations were derived for the time dependences of the ice-cover maximum and minimum over the period under study, with regression dependences calculated for polynomials from the first degree (linear) to the sixth (sextic). The root-mean-square error of deviation from the approximating curve decreased sharply up to the fourth-degree (biquadratic) polynomial, from 0.5593 for the third-degree polynomial to 0.4560 for the fourth-degree one, and varied insignificantly thereafter. Hence, the commonly used strictly linear regression with a negative time gradient for the September Arctic ice-cover minimum over 30 years should be considered incorrect.
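
    A sketch of the degree-selection logic described above on a synthetic September series: the RMS error of polynomial fits of degree 1 through 6 is compared; all numbers are invented stand-ins for the satellite-derived areas.

        import numpy as np

        years = np.arange(1987, 2015)
        x = (years - years.mean()) / ((years.max() - years.min()) / 2.0)  # to [-1, 1]
        rng = np.random.default_rng(0)
        extent = 7.5 - 1.2 * x - 0.4 * x**2 + rng.normal(0.0, 0.3, years.size)

        for deg in range(1, 7):                  # linear through sextic
            coef = np.polyfit(x, extent, deg)
            rmse = np.sqrt(np.mean((extent - np.polyval(coef, x))**2))
            print("degree %d: RMSE = %.4f" % (deg, rmse))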

  1. Accurate Estimation of Solvation Free Energy Using Polynomial Fitting Techniques

    PubMed Central

    Shyu, Conrad; Ytreberg, F. Marty

    2010-01-01

This report details an approach to improve the accuracy of free energy difference estimates using thermodynamic integration data (slope of the free energy with respect to the switching variable λ) and its application to calculating solvation free energy. The central idea is to utilize polynomial fitting schemes to approximate the thermodynamic integration data and thereby improve the accuracy of the free energy difference estimates. Previously, we introduced the use of a polynomial regression technique to fit thermodynamic integration data (Shyu and Ytreberg, J Comput Chem 30: 2297–2304, 2009). In this report we introduce polynomial and spline interpolation techniques. Two systems with analytically solvable relative free energies are used to test the accuracy of the interpolation approach. We also use both interpolation and regression methods to determine a small molecule solvation free energy. Our simulations show that, using such polynomial techniques and non-equidistant λ values, the solvation free energy can be estimated with high accuracy without using soft-core scaling and separate simulations for Lennard-Jones and partial charges. The results from our study suggest these polynomial techniques, especially with the use of non-equidistant λ values, improve the accuracy of ΔF estimates without demanding additional simulations. We also provide general guidelines for the use of polynomial fitting to estimate free energy. To allow researchers to immediately utilize these methods, free software and documentation are provided via http://www.phys.uidaho.edu/ytreberg/software. PMID:20623657
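
    A minimal sketch of the polynomial-regression route to ΔF, assuming fabricated ⟨dU/dλ⟩ values at non-equidistant λ: the data are fitted with a quartic and the fit is integrated analytically over [0, 1].

        import numpy as np

        lam = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])      # non-equidistant
        dudl = np.array([12.1, 7.8, 3.9, -0.6, -4.2, -6.0, -6.9])  # <dU/dl>, kcal/mol

        coef = np.polyfit(lam, dudl, 4)   # degree-4 polynomial regression
        anti = np.polyint(coef)           # analytic antiderivative of the fit
        dF = np.polyval(anti, 1.0) - np.polyval(anti, 0.0)
        print("Delta F ~= %.2f kcal/mol" % dF)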

  2. Comparison of random regression test-day models for Polish Black and White cattle.

    PubMed

    Strabel, T; Szyda, J; Ptak, E; Jamrozik, J

    2005-10-01

    Test-day milk yields of first-lactation Black and White cows were used to select the model for routine genetic evaluation of dairy cattle in Poland. The population of Polish Black and White cows is characterized by small herd size, low level of production, and relatively early peak of lactation. Several random regression models for first-lactation milk yield were initially compared using the "percentage of squared bias" criterion and the correlations between true and predicted breeding values. Models with random herd-test-date effects, fixed age-season and herd-year curves, and random additive genetic and permanent environmental curves (Legendre polynomials of different orders were used for all regressions) were chosen for further studies. Additional comparisons included analyses of the residuals and shapes of variance curves in days in milk. The low production level and early peak of lactation of the breed required the use of Legendre polynomials of order 5 to describe age-season lactation curves. For the other curves, Legendre polynomials of order 3 satisfactorily described daily milk yield variation. Fitting third-order polynomials for the permanent environmental effect made it possible to adequately account for heterogeneous residual variance at different stages of lactation.

  3. Random regression models using different functions to model milk flow in dairy cows.

    PubMed

    Laureano, M M M; Bignardi, A B; El Faro, L; Cardoso, V L; Tonhati, H; Albuquerque, L G

    2014-09-12

We analyzed 75,555 test-day milk flow records from 2175 primiparous Holstein cows that calved between 1997 and 2005. Milk flow was obtained by dividing the mean milk yield (kg) of the 3 daily milkings by the total milking time (min) and was expressed as kg/min. Milk flow was grouped into 43 weekly classes. The analyses were performed using a single-trait random regression model that included direct additive genetic, permanent environmental, and residual random effects. In addition, the contemporary group and the linear and quadratic effects of cow age at calving were included as fixed effects. A fourth-order orthogonal Legendre polynomial of days in milk was used to model the mean trend in milk flow. The additive genetic and permanent environmental covariance functions were estimated using random regression on Legendre polynomials and B-spline functions of days in milk. The model using a third-order Legendre polynomial for additive genetic effects and a sixth-order polynomial for permanent environmental effects, with 7 residual classes, proved to be the most adequate to describe variations in milk flow, and was also the most parsimonious. The heritability of milk flow estimated by the most parsimonious model was of moderate to high magnitude.

  4. Random regression models using Legendre orthogonal polynomials to evaluate the milk production of Alpine goats.

    PubMed

    Silva, F G; Torres, R A; Brito, L F; Euclydes, R F; Melo, A L P; Souza, N O; Ribeiro, J I; Rodrigues, M T

    2013-12-11

The objective of this study was to identify the best random regression model using Legendre orthogonal polynomials to evaluate Alpine goats genetically and to estimate the parameters for test-day milk yield. We analyzed 20,710 test-day milk yield records of 667 goats from the Goat Sector of the Universidade Federal de Viçosa. The evaluated models combined distinct orders of fit for the fixed curve (2-5), the random genetic (1-7) and permanent environmental (1-7) curves, and different numbers of residual variance classes (2, 4, 5, and 6). WOMBAT software was used for all genetic analyses. The best random regression model on Legendre orthogonal polynomials for genetic evaluation of test-day milk yield of Alpine goats comprised a fixed curve of order 4, a genetic additive curve of order 2, a permanent environmental curve of order 7, and a minimum of 5 residual variance classes, as it was the most economical model among those equivalent to the complete model by the likelihood ratio test. Phenotypic variance and heritability were higher at the end of the lactation period, indicating that the length of lactation has a larger genetic component relative to the production peak and persistence. It is very important that the evaluation utilize the best combination of fixed, genetic additive and permanent environmental regressions, together with the number of heterogeneous residual variance classes, for genetic evaluation using random regression models, thereby enhancing the precision and accuracy of the estimates of parameters and the prediction of genetic values.

  5. Advances in Highly Constrained Multi-Phase Trajectory Generation using the General Pseudospectral Optimization Software (GPOPS)

    DTIC Science & Technology

    2013-08-01

...release; distribution unlimited. PA Number 412-TW-PA-13395. Nomenclature: f, generic function; g, acceleration due to gravity; h, altitude; L, aerodynamic lift force; L, Lagrange cost; m, vehicle mass; M, Mach number; n, number of coefficients in polynomial regression; p, highest order of polynomial regression; Q, dynamic pressure; R, ... ...Method (RPM); the collocation points are defined by the roots of Legendre-Gauss-Radau (LGR) functions. GPOPS also automatically refines the "mesh" by ...

  6. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.

  7. Creating a non-linear total sediment load formula using polynomial best subset regression model

    NASA Astrophysics Data System (ADS)

    Okcu, Davut; Pektas, Ali Osman; Uyumaz, Ali

    2016-08-01

The aim of this study is to derive a new total sediment load formula that is more accurate and has fewer application constraints than the well-known formulae in the literature. The five best-known stream power concept sediment formulae approved by the ASCE are used for benchmarking on a wide range of datasets that includes both field and flume (lab) observations. The dimensionless parameters of these widely used formulae are used as inputs in a new regression approach, called Polynomial Best Subset Regression (PBSR) analysis. The aim of the PBSR analysis is to fit and test all possible combinations of the input variables and select the best subset. All input variables, together with their second and third powers, are included in the regression to test possible relations between the explanatory variables and the dependent variable. In selecting the best subset, a multistep approach is used that depends on significance values and on the degree of multicollinearity of the inputs. The new formula is compared to the others on a holdout dataset, and detailed performance investigations are conducted for the field and lab datasets within this holdout data. Different goodness-of-fit statistics are used, as they represent different perspectives on model accuracy. After the detailed comparisons were carried out, we identified the most accurate equation, which is also applicable to both flume and river data. On the field dataset in particular, the prediction performance of the proposed formula outperformed the benchmark formulations.
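
    A compact sketch of the best-subset idea under stated assumptions (three random stand-in inputs, powers up to 3, adjusted R2 as the score); the real PBSR procedure additionally screens significance values and multicollinearity.

        import itertools
        import numpy as np

        rng = np.random.default_rng(1)
        n = 200
        inputs = rng.uniform(0.5, 2.0, (n, 3))   # three dimensionless inputs
        y = 1.5 * inputs[:, 0]**2 - 0.8 * inputs[:, 1] + rng.normal(0, 0.1, n)

        # Candidate terms: x_j, x_j^2, x_j^3 for each input column j
        terms = {(j, p): inputs[:, j]**p for j in range(3) for p in (1, 2, 3)}

        def adj_r2(cols):
            X = np.column_stack([np.ones(n)] + [terms[c] for c in cols])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            r2 = 1.0 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
            return 1.0 - (1.0 - r2) * (n - 1) / (n - X.shape[1])

        best = max((c for k in range(1, 4)
                    for c in itertools.combinations(sorted(terms), k)), key=adj_r2)
        print("best subset:", best, "adjusted R2 = %.3f" % adj_r2(best))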

  8. Comparison of Linear and Non-linear Regression Analysis to Determine Pulmonary Pressure in Hyperthyroidism.

    PubMed

    Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan

    2017-01-01

This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model showing the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by an echocardiographic method and compared with 35 euthyroid (E-group) and 25 healthy (C-group) subjects. In order to identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our study) can be expressed by a linear or non-linear function. By applying the linear regression method, described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method, described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated the two models by calculating the coefficient of determination (criterion 1), comparing the residuals (criterion 2), applying the AIC (criterion 3) and using the F-test (criterion 4). Of the H-group, 47% had pulmonary hypertension that was completely reversible on reaching euthyroidism. The factors causing pulmonary hypertension were identified: previously known factors were the level of free thyroxine, pulmonary vascular resistance and cardiac output; new factors identified in this study were pretreatment period, age and systolic blood pressure. According to the four criteria and to clinical judgment, we consider the polynomial model (graphically, a parabola) better than the linear one. The better model showing the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is therefore given by a second-degree polynomial equation whose graphical representation is a parabola.
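
    A sketch of the linear-vs-quadratic comparison using two of the four criteria (AIC and the F-test for the added quadratic term); the PAPs-like data are simulated, not the study's measurements.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(2)
        x = rng.uniform(10, 40, 53)                     # e.g. a causal factor level
        paps = 20 + 0.2 * x + 0.05 * (x - 25)**2 + rng.normal(0, 2, x.size)

        def fit(deg):
            c = np.polyfit(x, paps, deg)
            rss = np.sum((paps - np.polyval(c, x))**2)
            k = deg + 1                                 # number of coefficients
            aic = x.size * np.log(rss / x.size) + 2 * k
            return rss, k, aic

        rss1, k1, aic1 = fit(1)                         # linear model
        rss2, k2, aic2 = fit(2)                         # quadratic (polynomial) model
        F = (rss1 - rss2) / (k2 - k1) / (rss2 / (x.size - k2))
        p = stats.f.sf(F, k2 - k1, x.size - k2)
        print("AIC linear %.1f vs quadratic %.1f; F = %.1f, p = %.2g"
              % (aic1, aic2, F, p))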

  9. Comparison of random regression models with Legendre polynomials and linear splines for production traits and somatic cell score of Canadian Holstein cows.

    PubMed

    Bohmanova, J; Miglior, F; Jamrozik, J; Misztal, I; Sullivan, P G

    2008-09-01

    A random regression model with both random and fixed regressions fitted by Legendre polynomials of order 4 was compared with 3 alternative models fitting linear splines with 4, 5, or 6 knots. The effects common for all models were a herd-test-date effect, fixed regressions on days in milk (DIM) nested within region-age-season of calving class, and random regressions for additive genetic and permanent environmental effects. Data were test-day milk, fat and protein yields, and SCS recorded from 5 to 365 DIM during the first 3 lactations of Canadian Holstein cows. A random sample of 50 herds consisting of 96,756 test-day records was generated to estimate variance components within a Bayesian framework via Gibbs sampling. Two sets of genetic evaluations were subsequently carried out to investigate performance of the 4 models. Models were compared by graphical inspection of variance functions, goodness of fit, error of prediction of breeding values, and stability of estimated breeding values. Models with splines gave lower estimates of variances at extremes of lactations than the model with Legendre polynomials. Differences among models in goodness of fit measured by percentages of squared bias, correlations between predicted and observed records, and residual variances were small. The deviance information criterion favored the spline model with 6 knots. Smaller error of prediction and higher stability of estimated breeding values were achieved by using spline models with 5 and 6 knots compared with the model with Legendre polynomials. In general, the spline model with 6 knots had the best overall performance based upon the considered model comparison criteria.

  10. Change with age in regression construction of fat percentage for BMI in school-age children.

    PubMed

    Fujii, Katsunori; Mishima, Takaaki; Watanabe, Eiji; Seki, Kazuyoshi

    2011-01-01

In this study, curvilinear regression was applied to the relationship between BMI and body fat percentage, and an analysis was done to see whether there are characteristic changes in that curvilinear regression from elementary to middle school. Then, by simultaneously investigating the changes with age in BMI and body fat percentage, the essential differences between BMI and body fat percentage were demonstrated. The subjects were 789 boys and girls (469 boys, 320 girls) aged 7.5 to 14.5 years from all parts of Japan who participated in regular sports activities. Body weight, total body water (TBW), soft lean mass (SLM), body fat percentage, and fat mass were measured with a body composition analyzer (Tanita BC-521 Inner Scan), using segmental and multi-frequency bioelectrical impedance analysis. Height was measured with a digital height measurer. Body mass index (BMI) was calculated as body weight (kg) divided by the square of height (m). The results for the validity of regression polynomials of body fat percentage against BMI showed that, for both boys and girls, first-order polynomials were valid in all school years. With regard to changes with age in BMI and body fat percentage, the results showed a temporary drop at 9 years in the aging distance curve in boys, followed by an increasing trend. Peaks were seen in the velocity curve at 9.7 and 11.9 years, but the MPV was presumed to be at 11.9 years. Among girls, a decreasing trend was seen in the aging distance curve, which was opposite to the changes in the aging distance curve for body fat percentage.

  11. Random regression models using different functions to model test-day milk yield of Brazilian Holstein cows.

    PubMed

    Bignardi, A B; El Faro, L; Torres Júnior, R A A; Cardoso, V L; Machado, P F; Albuquerque, L G

    2011-10-31

We analyzed 152,145 test-day records from 7317 first lactations of Holstein cows recorded from 1995 to 2003. Our objective was to model variations in test-day milk yield during the first lactation of Holstein cows by random regression models (RRM), using various functions in order to obtain adequate and parsimonious models for the estimation of genetic parameters. Test-day milk yields were grouped into weekly classes of days in milk, ranging from 1 to 44 weeks. The contemporary groups were defined as herd-test-day. The analyses were performed using a single-trait RRM including direct additive, permanent environmental and residual random effects. In addition, contemporary group and the linear and quadratic effects of age of cow at calving were included as fixed effects. The mean trend of milk yield was modeled with a fourth-order orthogonal Legendre polynomial. The additive genetic and permanent environmental covariance functions were estimated by random regression on two parametric functions, Ali and Schaeffer and Wilmink, and on B-spline functions of days in milk. The covariance components and the genetic parameters were estimated by the restricted maximum likelihood method. Results from the RRM on parametric and B-spline functions were compared with those from an RRM on Legendre polynomials and a multi-trait analysis using the same data set. Heritability estimates presented similar trends during mid-lactation (13 to 31 weeks) and between week 37 and the end of lactation for all RRM. Heritabilities obtained by multi-trait analysis were of lower magnitude than those estimated by RRM. The RRM with a higher number of parameters were more useful for describing the genetic variation of test-day milk yield throughout the lactation. RRM using B-splines and Legendre polynomials as base functions appear to be the most adequate to describe the covariance structure of the data.
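
    For concreteness, a sketch of fitting the Wilmink parametric function y(t) = a + b*t + c*exp(-k*t) to one cow's test-day yields; the yields below are invented and k is fixed at 0.05/day, a common convention rather than this study's setting.

        import numpy as np
        from scipy.optimize import curve_fit

        dim = np.array([7, 37, 67, 97, 127, 157, 187, 217, 247, 277])  # days in milk
        yld = np.array([24.1, 28.3, 27.6, 26.2, 25.0, 23.5, 22.1, 20.8, 19.2, 17.9])

        K = 0.05  # Wilmink exponent, fixed by convention (assumption)

        def wilmink(t, a, b, c):
            return a + b * t + c * np.exp(-K * t)

        (a, b, c), _ = curve_fit(wilmink, dim, yld, p0=(30.0, -0.04, -10.0))
        print("a=%.2f b=%.4f c=%.2f; fitted yield at 60 d: %.1f kg"
              % (a, b, c, wilmink(60.0, a, b, c)))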

  12. Genetic parameters of legendre polynomials for first parity lactation curves.

    PubMed

    Pool, M H; Janss, L L; Meuwissen, T H

    2000-11-01

    Variance components of the covariance function coefficients in a random regression test-day model were estimated by Legendre polynomials up to a fifth order for first-parity records of Dutch dairy cows using Gibbs sampling. Two Legendre polynomials of equal order were used to model the random part of the lactation curve, one for the genetic component and one for permanent environment. Test-day records from cows registered between 1990 to 1996 and collected by regular milk recording were available. For the data set, 23,700 complete lactations were selected from 475 herds sired by 262 sires. Because the application of a random regression model is limited by computing capacity, we investigated the minimum order needed to fit the variance structure in the data sufficiently. Predictions of genetic and permanent environmental variance structures were compared with bivariate estimates on 30-d intervals. A third-order or higher polynomial modeled the shape of variance curves over DIM with sufficient accuracy for the genetic and permanent environment part. Also, the genetic correlation structure was fitted with sufficient accuracy by a third-order polynomial, but, for the permanent environmental component, a fourth order was needed. Because equal orders are suggested in the literature, a fourth-order Legendre polynomial is recommended in this study. However, a rank of three for the genetic covariance matrix and of four for permanent environment allows a simpler covariance function with a reduced number of parameters based on the eigenvalues and eigenvectors.

  13. Parametric correlation functions to model the structure of permanent environmental (co)variances in milk yield random regression models.

    PubMed

    Bignardi, A B; El Faro, L; Cardoso, V L; Machado, P F; Albuquerque, L G

    2009-09-01

    The objective of the present study was to estimate milk yield genetic parameters applying random regression models and parametric correlation functions combined with a variance function to model animal permanent environmental effects. A total of 152,145 test-day milk yields from 7,317 first lactations of Holstein cows belonging to herds located in the southeastern region of Brazil were analyzed. Test-day milk yields were divided into 44 weekly classes of days in milk. Contemporary groups were defined by herd-test-day comprising a total of 2,539 classes. The model included direct additive genetic, permanent environmental, and residual random effects. The following fixed effects were considered: contemporary group, age of cow at calving (linear and quadratic regressions), and the population average lactation curve modeled by fourth-order orthogonal Legendre polynomial. Additive genetic effects were modeled by random regression on orthogonal Legendre polynomials of days in milk, whereas permanent environmental effects were estimated using a stationary or nonstationary parametric correlation function combined with a variance function of different orders. The structure of residual variances was modeled using a step function containing 6 variance classes. The genetic parameter estimates obtained with the model using a stationary correlation function associated with a variance function to model permanent environmental effects were similar to those obtained with models employing orthogonal Legendre polynomials for the same effect. A model using a sixth-order polynomial for additive effects and a stationary parametric correlation function associated with a seventh-order variance function to model permanent environmental effects would be sufficient for data fitting.

  14. Estimation of genetic parameters related to eggshell strength using random regression models.

    PubMed

    Guo, J; Ma, M; Qu, L; Shen, M; Dou, T; Wang, K

    2015-01-01

This study examined the changes in eggshell strength and the genetic parameters related to this trait throughout a hen's laying life using random regression. The data were collected from a crossbred population between 2011 and 2014, where eggshell strength was determined repeatedly for 2260 hens. Using random regression models (RRM), several Legendre polynomials were employed to estimate the fixed, direct genetic and permanent environment effects. The residual effects were treated as independently distributed with heterogeneous variance for each test week. The direct genetic variance was modelled with second-order Legendre polynomials and the permanent environment with third-order Legendre polynomials. The heritability of eggshell strength ranged from 0.26 to 0.43, the repeatability ranged between 0.47 and 0.69, and the estimated genetic correlations between test weeks were high (>0.67). The first eigenvalue of the genetic covariance matrix accounted for about 97% of the sum of all the eigenvalues. The flexibility and statistical power of RRM suggest that this model could be an effective method to improve eggshell quality and to reduce losses due to cracked eggs in a breeding plan.

  15. Soil Particle Size Analysis by Laser Diffractometry: Result Comparison with Pipette Method

    NASA Astrophysics Data System (ADS)

    Šinkovičová, Miroslava; Igaz, Dušan; Kondrlová, Elena; Jarošová, Miriam

    2017-10-01

Soil texture, as a basic soil physical property, provides basic information on the soil grain size distribution and the representation of grain size fractions. Several methods of particle size measurement, based on different physical principles, are currently available. The pipette method, based on the different sedimentation velocities of particles of different diameters, is considered one of the standard methods for determining the distribution of individual grain size fractions. With technical advances, optical methods such as laser diffraction can now also be used to determine the grain size distribution in soil. A review of the domestic and international literature on this topic makes it obvious that results obtained by laser diffractometry do not correspond to results obtained by the pipette method. The main aim of this paper was to analyse 132 samples of medium-fine soil, taken from the Nitra River catchment in Slovakia at depths of 15-20 cm and 40-45 cm, using the laser analysers ANALYSETTE 22 MicroTec plus (Fritsch GmbH) and Mastersizer 2000 (Malvern Instruments Ltd). The results obtained by laser diffractometry were compared with the pipette method, and regression relationships using linear, exponential, power and polynomial trends were derived. The regressions with the three highest coefficients of determination (R2) were further investigated; the tightest fit was observed for the polynomial regression. In view of these results, we recommend estimating the representation of the clay fraction (<0.01 mm) by polynomial regression, which achieved the highest R2 values: 0.72 (ANALYSETTE 22 MicroTec plus) and 0.95 (Mastersizer 2000) at depths of 15-20 cm, and 0.90 (ANALYSETTE 22 MicroTec plus) and 0.96 (Mastersizer 2000) at depths of 40-45 cm. Since the percentage representation of clayey particles (the 2nd fraction according to the methodology of the Complex Soil Survey done in Slovakia) is the determinant for soil type specification, we recommend using the derived relationships in soil science when soil texture analysis is done by laser diffractometry. The advantages of the laser diffraction method include the short analysis time, the small sample amount required, its applicability to various grain size fraction and soil type classification systems, and the wide range of determined fractions. It is therefore worth pursuing this issue further to address the needs of soil science research and to attempt to replace the standard pipette method with the more progressive laser diffraction method.

  16. Are We All in the Same Boat? The Role of Perceptual Distance in Organizational Health Interventions.

    PubMed

    Hasson, Henna; von Thiele Schwarz, Ulrica; Nielsen, Karina; Tafvelin, Susanne

    2016-10-01

The study investigates how agreement between leaders' and their teams' perceptions influences intervention outcomes in a leadership-training intervention aimed at improving organizational learning. Agreement, i.e., perceptual distance, was calculated for the organizational learning dimensions at baseline. Changes in the dimensions from pre-intervention to post-intervention were evaluated using polynomial regression analysis with response surface analysis. The general pattern of the results indicated that organizational learning improved when leaders and their teams agreed on the level of organizational learning prior to the intervention. The improvement was greatest when the leader's and the team's perceptions at baseline were aligned and high rather than aligned and low. The least beneficial scenario was when the leader's perceptions were higher than the team's. These results give insights into the importance of comparing leaders' and their teams' perceptions in intervention research. Polynomial regression analysis with response surface methodology allows a three-dimensional examination of the relationship between two predictor variables and an outcome. This contributes knowledge on how combinations of predictor variables may affect an outcome and allows the study of potential non-linear relations with the outcome. Future studies could use these methods in the process evaluation of interventions. Copyright © 2016 John Wiley & Sons, Ltd.
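
    A minimal sketch of polynomial regression with response surface analysis on simulated leader (L) and team (T) ratings: a quadratic surface is fitted and the slopes and curvatures along the congruence (L = T) and incongruence (L = -T) lines are derived, as is standard in this methodology.

        import numpy as np

        rng = np.random.default_rng(3)
        L = rng.uniform(1, 5, 150)                  # leader self-rating (simulated)
        T = rng.uniform(1, 5, 150)                  # team rating (simulated)
        y = 3 + 0.4 * L + 0.5 * T - 0.3 * (L - T)**2 + rng.normal(0, 0.2, 150)

        # Quadratic surface: y ~ b0 + b1*L + b2*T + b3*L^2 + b4*L*T + b5*T^2
        X = np.column_stack([np.ones_like(L), L, T, L**2, L * T, T**2])
        b, *_ = np.linalg.lstsq(X, y, rcond=None)

        a1, a2 = b[1] + b[2], b[3] + b[4] + b[5]    # along L = T: slope, curvature
        a3, a4 = b[1] - b[2], b[3] - b[4] + b[5]    # along L = -T: slope, curvature
        print("congruence line:   slope %.2f, curvature %.2f" % (a1, a2))
        print("incongruence line: slope %.2f, curvature %.2f" % (a3, a4))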

  17. Random regression models using Legendre polynomials or linear splines for test-day milk yield of dairy Gyr (Bos indicus) cattle.

    PubMed

    Pereira, R J; Bignardi, A B; El Faro, L; Verneque, R S; Vercesi Filho, A E; Albuquerque, L G

    2013-01-01

Studies investigating the use of random regression models for genetic evaluation of milk production in Zebu cattle are scarce. In this study, 59,744 test-day milk yield records from 7,810 first lactations of purebred dairy Gyr (Bos indicus) and crossbred (dairy Gyr × Holstein) cows were used to compare random regression models in which additive genetic and permanent environmental effects were modeled using orthogonal Legendre polynomials or linear spline functions. Residual variances were modeled considering 1, 5, or 10 classes of days in milk. Five classes fitted the changes in residual variances over the lactation adequately and were used for model comparison. The model that fitted linear spline functions with 6 knots provided the lowest sum of residual variances across lactation. On the other hand, according to the deviance information criterion (DIC) and Bayesian information criterion (BIC), a model using third-order and fourth-order Legendre polynomials for additive genetic and permanent environmental effects, respectively, provided the best fit. However, the high rank correlation (0.998) between this model and that applying third-order Legendre polynomials for both additive genetic and permanent environmental effects indicates that, in practice, the same bulls would be selected by both models. The latter model, which is less parameterized, is a parsimonious option for fitting dairy Gyr breed test-day milk yield records. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. SEMIPARAMETRIC QUANTILE REGRESSION WITH HIGH-DIMENSIONAL COVARIATES

    PubMed Central

    Zhu, Liping; Huang, Mian; Li, Runze

    2012-01-01

    This paper is concerned with quantile regression for a semiparametric regression model, in which both the conditional mean and conditional variance function of the response given the covariates admit a single-index structure. This semiparametric regression model enables us to reduce the dimension of the covariates and simultaneously retains the flexibility of nonparametric regression. Under mild conditions, we show that the simple linear quantile regression offers a consistent estimate of the index parameter vector. This is a surprising and interesting result because the single-index model is possibly misspecified under the linear quantile regression. With a root-n consistent estimate of the index vector, one may employ a local polynomial regression technique to estimate the conditional quantile function. This procedure is computationally efficient, which is very appealing in high-dimensional data analysis. We show that the resulting estimator of the quantile function performs asymptotically as efficiently as if the true value of the index vector were known. The methodologies are demonstrated through comprehensive simulation studies and an application to a real dataset. PMID:24501536

  19. Genetic analysis of body weights of individually fed beef bulls in South Africa using random regression models.

    PubMed

    Selapa, N W; Nephawe, K A; Maiwashe, A; Norris, D

    2012-02-08

The aim of this study was to estimate genetic parameters for body weights of individually fed beef bulls measured at centralized testing stations in South Africa using random regression models. Weekly body weights of Bonsmara bulls (N = 2919) tested between 1999 and 2003 were available for the analyses. The model included a fixed regression of the body weights on fourth-order orthogonal Legendre polynomials of the actual days on test (7, 14, 21, 28, 35, 42, 49, 56, 63, 70, 77, and 84), as well as starting age and contemporary group effects. Random regressions on fourth-order orthogonal Legendre polynomials of the actual days on test were included for additive genetic effects and for the additional uncorrelated random effects of the weaning-herd-year and the permanent environment of the animal. Residual effects were assumed to be independently distributed with heterogeneous variance for each test day. Variance ratios for additive genetic, permanent environment and weaning-herd-year effects for weekly body weights at different test days ranged from 0.26 to 0.29, 0.37 to 0.44 and 0.26 to 0.34, respectively. The weaning-herd-year was found to have a significant effect on the variation of body weights of bulls despite a 28-day adjustment period. Genetic correlations among body weights at different test days were high, ranging from 0.89 to 1.00. Heritability estimates were comparable to literature estimates from multivariate models. Therefore, random regression models could be applied in the genetic evaluation of body weight of individually fed beef bulls in South Africa.

  20. Comparison of vertical E × B drift velocities and ground-based magnetometer observations of DELTA H in the low latitude under geomagnetically disturbed conditions

    NASA Astrophysics Data System (ADS)

    Prabhu, M.; Unnikrishnan, K.

    2018-04-01

In the present work, we analyzed the daytime vertical E × B drift velocities obtained from the Jicamarca Unattended Long-term Ionosphere Atmosphere (JULIA) radar and the ΔH component of the geomagnetic field for 22 geomagnetically disturbed events, in which either an SC occurred or Dstmax < -50 nT, during the period 2006-2011. The ΔH component is measured as the difference in the magnitudes of the horizontal (H) components between a magnetometer placed directly on the magnetic equator (Jicamarca, Peru) and one displaced 6-9° away (Piura, Peru). It provides a direct measure of the daytime electrojet current due to the eastward electric field, which in turn gives the magnitude of the vertical E × B drift velocity in the F region. A positive correlation exists between the peak values of the daytime vertical E × B drift velocity and the peak values of ΔH for the three consecutive days of the events. It was observed that 45% of the events had daytime vertical E × B drift velocity peaks in the magnitude ranges 10-20 m/s and 20-30 m/s, and 20% had peak ΔH in the magnitude ranges 50-60 nT and 80-90 nT. The times of occurrence of the peak values of both the vertical E × B drift velocity and ΔH had a maximum (40%) probability in the same time range, 11:00-13:00 LT. We also investigated the correlations of the E × B drift velocity and of ΔH with the Dst index; a strong positive correlation was found in both cases. Three different techniques of data analysis were considered: linear, polynomial (order 2), and polynomial (order 3) regression. The regression parameters in all three cases were calculated by the Least Square Method (LSM), using the daytime vertical E × B drift velocity and ΔH, and a formula was developed that relates the daytime vertical E × B drift velocity to ΔH for the disturbed periods. The E × B drift velocity was then evaluated using the formulae obtained from the three regression analyses and validated for the disturbed periods of 3 selected events. The E × B drift velocities estimated by the three regression analyses are in fairly good agreement with the JULIA radar observations under different seasons and solar activity conditions. Root mean square (RMS) errors calculated for each case suggest that the polynomial (order 3) regression analysis provides the best agreement with the observations among the three.

  1. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    PubMed

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials and B-spline functions, as well as multi-trait models, aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions, considering quadratic and cubic polynomials for each individual segment with 2 to 4 segments, and using Legendre polynomials of age with orders of fit ranging from 2 to 4. Residual variances were grouped in four age classes. The model with quadratic B-spline adjustment, using four segments for the direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials presented an underestimation of the residual variance. Lower heritability estimates were observed for the multi-trait models in comparison with the RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. The genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW at the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and four segments were the best fit for genetic evaluation of growth traits in meat-type quail, and RRM should be considered in the genetic evaluation of breeding programs.

  2. Inferring genetic parameters of lactation in Tropical Milking Criollo cattle with random regression test-day models.

    PubMed

    Santellano-Estrada, E; Becerril-Pérez, C M; de Alba, J; Chang, Y M; Gianola, D; Torres-Hernández, G; Ramírez-Valverde, R

    2008-11-01

    This study inferred genetic and permanent environmental variation of milk yield in Tropical Milking Criollo cattle and compared 5 random regression test-day models using Wilmink's function and Legendre polynomials. Data consisted of 15,377 test-day records from 467 Tropical Milking Criollo cows that calved between 1974 and 2006 in the tropical lowlands of the Gulf Coast of Mexico and in southern Nicaragua. Estimated heritabilities of test-day milk yields ranged from 0.18 to 0.45, and repeatabilities ranged from 0.35 to 0.68 for the period spanning from 6 to 400 d in milk. Genetic correlation between days in milk 10 and 400 was around 0.50 but greater than 0.90 for most pairs of test days. The model that used first-order Legendre polynomials for additive genetic effects and second-order Legendre polynomials for permanent environmental effects gave the smallest residual variance and was also favored by the Akaike information criterion and likelihood ratio tests.

  3. Correlation between external and internal respiratory motion: a validation study.

    PubMed

    Ernst, Floris; Bruder, Ralf; Schlaefer, Alexander; Schweikard, Achim

    2012-05-01

In motion-compensated image-guided radiotherapy, accurate tracking of the target region is required. This tracking process includes building a correlation model between external surrogate motion and the motion of the target region. A novel correlation method is presented and compared with the commonly used polynomial model. The CyberKnife system (Accuray, Inc., Sunnyvale, CA) uses a polynomial correlation model to relate externally measured surrogate data (optical fibres on the patient's chest emitting red light) to infrequently acquired internal measurements (X-ray data). A new correlation algorithm based on ε-Support Vector Regression (SVR) was developed. Validation and comparison testing were done with human volunteers using live 3D ultrasound and externally measured infrared light-emitting diodes (IR LEDs). Seven data sets (5:03-6:27 min long) were recorded from six volunteers. Polynomial correlation algorithms were compared to the SVR-based algorithm, demonstrating an average increase in root mean square (RMS) accuracy of 21.3% (0.4 mm). For three signals, the increase was more than 29%, and for one signal as much as 45.6% (corresponding to more than 1.5 mm RMS). Further analysis showed the improvement to be statistically significant. The new SVR-based correlation method outperforms traditional polynomial correlation methods for motion tracking. It is suitable for clinical implementation and may improve the overall accuracy of targeted radiotherapy.
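
    A sketch contrasting an ε-SVR correlation model with a quadratic polynomial model on synthetic surrogate/target signals; scikit-learn's SVR stands in for the paper's implementation, and the signals and hyperparameters are assumptions.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(4)
        t = np.linspace(0, 60, 1200)
        ext = np.sin(2 * np.pi * t / 4.0)        # external surrogate (LED-like)
        target = 8.0 / (1.0 + np.exp(-3.0 * ext)) \
                 + rng.normal(0, 0.2, t.size)    # internal target motion, mm

        train = t < 45                           # fit on the first 45 s
        svr = SVR(kernel="rbf", C=10.0, epsilon=0.1)
        svr.fit(ext[train, None], target[train])
        poly = np.polyfit(ext[train], target[train], 2)

        for name, pred in [("SVR ", svr.predict(ext[~train, None])),
                           ("poly", np.polyval(poly, ext[~train]))]:
            rms = np.sqrt(np.mean((pred - target[~train])**2))
            print("%s RMS error: %.2f mm" % (name, rms))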

  4. An adaptive least-squares global sensitivity method and application to a plasma-coupled combustion prediction with parametric correlation

    NASA Astrophysics Data System (ADS)

    Tang, Kunkun; Massa, Luca; Wang, Jonathan; Freund, Jonathan B.

    2018-05-01

We introduce an efficient non-intrusive surrogate-based methodology for global sensitivity analysis and uncertainty quantification. Modified covariance-based sensitivity indices (mCov-SI) are defined for outputs that reflect correlated effects. The overall approach is applied to simulations of a complex plasma-coupled combustion system with disparate uncertain parameters in sub-models for chemical kinetics and a laser-induced breakdown ignition seed. The surrogate is based on an Analysis of Variance (ANOVA) expansion, as widely used in statistics, with orthogonal polynomials representing the ANOVA subspaces and a polynomial dimensional decomposition (PDD) representing its multi-dimensional components. The coefficients of the PDD expansion are obtained using a least-squares regression, which both avoids the direct computation of high-dimensional integrals and affords attractive flexibility in choosing sampling points. This facilitates importance sampling using a Bayesian calibrated posterior distribution, which is fast and thus particularly advantageous in common practical cases, such as our large-scale demonstration, in which the asymptotic convergence properties of polynomial expansions cannot be realized due to computational expense. Effort is instead focused on efficient finite-resolution sampling. Standard covariance-based sensitivity indices (Cov-SI) are employed to account for correlation of the uncertain parameters. The magnitude of Cov-SI is unfortunately unbounded, which can produce extremely large indices that limit their utility; the mCov-SI are therefore proposed in order to bound the magnitude within [0, 1]. The polynomial expansion is coupled with an adaptive ANOVA strategy to provide an accurate surrogate as the union of several low-dimensional spaces, avoiding the typical computational cost of a high-dimensional expansion. It is also adaptively simplified according to the relative contributions of the different polynomials to the total variance. The approach is demonstrated for a laser-induced turbulent combustion simulation model, which includes parameters with correlated effects.

  5. Separation of the long-term thermal effects from the strain measurements in the Geodynamics Laboratory of Lanzarote

    NASA Astrophysics Data System (ADS)

    Venedikov, A. P.; Arnoso, J.; Cai, W.; Vieira, R.; Tan, S.; Velez, E. J.

    2006-01-01

A 12-year series (1992-2004) of strain measurements recorded in the Geodynamics Laboratory of Lanzarote is investigated. Through a tidal analysis, the non-tidal component of the data is separated in order to use it for studying signals useful for monitoring the volcanic activity on the island. This component contains various perturbations of meteorological and oceanic origin, which should be eliminated in order to make the useful signals discernible. The paper is devoted to the estimation and elimination of the effect of the air temperature inside the station, which strongly dominates the strainmeter data. For this task, a regression model is applied that includes a linear relation with the temperature and time-dependent polynomials. The regression includes a set of nonlinear parameters, which are estimated by a properly applied Bayesian approach. The results obtained are: the regression coefficient of the strain data on temperature, equal to (-367.4 ± 0.8) × 10^-9 °C^-1; the curve of the non-tidal component reduced by the effect of the temperature; and a polynomial approximation of the reduced curve. The technique used here can be helpful to investigators in the domain of earthquake and volcano monitoring. However, the fundamental and extremely difficult problem of what kind of signals in the reduced curves might be useful in this field is not considered here.
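
    A sketch of the separation idea on synthetic data: the strain series is regressed on temperature plus a low-order time polynomial, and the thermal part is subtracted; ordinary least squares stands in for the paper's Bayesian estimation.

        import numpy as np

        rng = np.random.default_rng(5)
        t = np.linspace(0, 12, 4380)                     # years since start
        temp = 20 + 3 * np.sin(2 * np.pi * t) + rng.normal(0, 0.2, t.size)
        strain = -367.4e-9 * temp + 50e-9 * t + 5e-9 * t**2 \
                 + rng.normal(0, 20e-9, t.size)          # thermal + drift + noise

        # Regress strain on temperature plus a quadratic time polynomial
        X = np.column_stack([temp, np.ones_like(t), t, t**2])
        beta, *_ = np.linalg.lstsq(X, strain, rcond=None)
        print("estimated thermal coefficient: %.1fe-9 per deg C" % (beta[0] * 1e9))

        reduced = strain - beta[0] * temp  # non-tidal signal with thermal part removed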

  6. Polynomials to model the growth of young bulls in performance tests.

    PubMed

    Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B

    2014-03-01

    The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais+11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit with the data. When comparing models with the same number of parameters, the quadratic B-spline provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. The fitting of random regression models with different types of polynomials (Legendre polynomials or B-spline) affected neither the genetic parameters estimates nor the ranking of the Nellore young bulls. However, fitting different type of polynomials affected the genetic parameters estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.

  7. Self-other rating agreement and leader-member exchange (LMX): a quasi-replication.

    PubMed

    Barbuto, John E; Wilmot, Michael P; Singh, Matthew; Story, Joana S P

    2012-04-01

    Data from a sample of 83 elected community leaders and 391 direct-report staff (resulting in 333 useable leader-member dyads) were reanalyzed to test relations between self-other rating agreement of servant leadership and member-reported leader-member exchange (LMX). Polynomial regression analysis indicated that the self-other rating agreement model was not statistically significant. Instead, all of the variance in member-reported LMX was accounted for by the others' ratings component alone.

  8. Analysis of precision and accuracy in a simple model of machine learning

    NASA Astrophysics Data System (ADS)

    Lee, Julian

    2017-12-01

Machine learning is a procedure whereby a model of the world is constructed from a training set of examples. It is important that the model capture the relevant features of the training set while at the same time making correct predictions for examples not included in the training set. I consider polynomial regression, the simplest method of learning, and analyze the accuracy and precision for different levels of model complexity.
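
    A minimal sketch of this analysis: polynomial models of increasing degree are fitted to a small noisy training set and judged on held-out points; the target function and noise level are invented.

        import numpy as np

        rng = np.random.default_rng(6)
        f = lambda x: np.sin(2 * np.pi * x)              # true (unknown) function
        x_tr, x_te = rng.uniform(0, 1, 15), np.linspace(0, 1, 200)
        y_tr = f(x_tr) + rng.normal(0, 0.15, x_tr.size)  # noisy training examples

        for deg in (1, 3, 9):                            # increasing model complexity
            c = np.polyfit(x_tr, y_tr, deg)
            e_tr = np.sqrt(np.mean((y_tr - np.polyval(c, x_tr))**2))
            e_te = np.sqrt(np.mean((f(x_te) - np.polyval(c, x_te))**2))
            print("degree %d: train RMSE %.3f, test RMSE %.3f" % (deg, e_tr, e_te))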

  9. A framework for longitudinal data analysis via shape regression

    NASA Astrophysics Data System (ADS)

    Fishbaugh, James; Durrleman, Stanley; Piven, Joseph; Gerig, Guido

    2012-02-01

    Traditional longitudinal analysis begins by extracting desired clinical measurements, such as volume or head circumference, from discrete imaging data. Typically, the continuous evolution of a scalar measurement is estimated by choosing a 1D regression model, such as kernel regression or fitting a polynomial of fixed degree. This type of analysis not only leads to separate models for each measurement, but there is no clear anatomical or biological interpretation to aid in the selection of the appropriate paradigm. In this paper, we propose a consistent framework for the analysis of longitudinal data by estimating the continuous evolution of shape over time as twice differentiable flows of deformations. In contrast to 1D regression models, one model is chosen to realistically capture the growth of anatomical structures. From the continuous evolution of shape, we can simply extract any clinical measurements of interest. We demonstrate on real anatomical surfaces that volume extracted from a continuous shape evolution is consistent with a 1D regression performed on the discrete measurements. We further show how the visualization of shape progression can aid in the search for significant measurements. Finally, we present an example on a shape complex of the brain (left hemisphere, right hemisphere, cerebellum) that demonstrates a potential clinical application for our framework.

  10. Humeral development from neonatal period to skeletal maturity--application in age and sex assessment.

    PubMed

    Rissech, Carme; López-Costas, Olalla; Turbón, Daniel

    2013-01-01

    The goal of the present study is to examine cross-sectional information on the growth of the humerus based on the analysis of four measurements, namely, diaphyseal length, transversal diameter of the proximal (metaphyseal) end of the shaft, epicondylar breadth and vertical diameter of the head. This analysis was performed in 181 individuals (90 ♂ and 91 ♀) ranging from birth to 25 years of age and belonging to three documented Western European skeletal collections (Coimbra, Lisbon and St. Bride). After testing the homogeneity of the sample, the existence of sexual differences (Student's t- and Mann-Whitney U-test) and the growth of the variables (polynomial regression) were evaluated. The results showed the presence of sexual differences in epicondylar breadth above 20 years of age and vertical diameter of the head from 15 years of age, thus indicating that these two variables may be of use in determining sex from that age onward. The growth pattern of the variables showed a continuous increase and followed first- and second-degree polynomials. However, growth of the transversal diameter of the proximal end of the shaft followed a fourth-degree polynomial. Strong correlation coefficients were identified between humeral size and age for each of the four metric variables. These results indicate that any of the humeral measurements studied herein is likely to serve as a useful means of estimating sub-adult age in forensic samples.

  11. Applicability of the polynomial chaos expansion method for personalization of a cardiovascular pulse wave propagation model.

    PubMed

    Huberts, W; Donders, W P; Delhaas, T; van de Vosse, F N

    2014-12-01

    Patient-specific modeling requires model personalization, which can be achieved in an efficient manner by parameter fixing and parameter prioritization. An efficient variance-based method uses generalized polynomial chaos expansion (gPCE), but it has not been applied in the context of model personalization, nor has it ever been compared with standard variance-based methods for models with many parameters. In this work, we apply the gPCE method to a previously reported pulse wave propagation model and compare the conclusions for model personalization with those of a reference analysis performed with Saltelli's efficient Monte Carlo method. We furthermore differentiate two approaches for obtaining the expansion coefficients: one based on spectral projection (gPCE-P) and one based on least-squares regression (gPCE-R). It was found that in general the gPCE yields similar conclusions as the reference analysis but at much lower cost, as long as the polynomial metamodel does not contain unnecessary high-order terms. Furthermore, the gPCE-R approach generally yielded better results than gPCE-P. The weak performance of gPCE-P can be attributed to the assessment of the expansion coefficients using the Smolyak algorithm, which might be hampered by the high number of model parameters and/or by possible non-smoothness in the output space. Copyright © 2014 John Wiley & Sons, Ltd.
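
    The regression route to the expansion coefficients (gPCE-R) can be sketched in a few lines. The toy two-parameter model and the total-degree-2 index set below are illustrative assumptions, not the pulse wave propagation model; the point is that with an orthonormal basis the output variance decomposes over the squared coefficients, which is what makes the metamodel useful for parameter prioritization.

      import numpy as np
      from numpy.polynomial import legendre

      rng = np.random.default_rng(1)

      def model(z1, z2):
          """Toy model with two uniform inputs on [-1, 1]."""
          return np.exp(0.3 * z1) + 0.5 * z2 + 0.2 * z1 * z2

      def phi(k, z):
          """Legendre polynomials orthonormal w.r.t. the uniform density."""
          return np.sqrt(2 * k + 1) * legendre.legval(z, np.eye(k + 1)[k])

      # Multi-indices (i, j) with total degree i + j <= 2.
      index_set = [(i, j) for i in range(3) for j in range(3) if i + j <= 2]

      # gPCE-R: sample the inputs, build the design matrix, least squares.
      n = 200
      z = rng.uniform(-1, 1, (n, 2))
      A = np.column_stack([phi(i, z[:, 0]) * phi(j, z[:, 1])
                           for i, j in index_set])
      coef, *_ = np.linalg.lstsq(A, model(z[:, 0], z[:, 1]), rcond=None)

      # Variance decomposition: main-effect (Sobol-type) sensitivity indices.
      var_total = sum(c ** 2 for c, ij in zip(coef, index_set) if ij != (0, 0))
      S1 = sum(c ** 2 for c, (i, j) in zip(coef, index_set)
               if i > 0 and j == 0) / var_total
      S2 = sum(c ** 2 for c, (i, j) in zip(coef, index_set)
               if j > 0 and i == 0) / var_total
      print(f"main-effect indices: S1 = {S1:.2f}, S2 = {S2:.2f}")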

  12. Genome-wide association study on Legendre random regression coefficients for the growth and feed intake trajectory on Duroc Boars.

    PubMed

    Howard, Jeremy T; Jiao, Shihui; Tiezzi, Francesco; Huang, Yijian; Gray, Kent A; Maltecca, Christian

    2015-05-30

    Feed intake and growth are economically important traits in swine production. Previous genome-wide association studies (GWAS) have utilized average daily gain or daily feed intake to identify regions that impact growth and feed intake across time. The use of longitudinal models in GWAS, such as random regression, allows SNPs with heterogeneous effects across the trajectory to be characterized. The objective of this study was therefore to conduct a single-step GWAS (ssGWAS) on the animal polynomial coefficients for feed intake and growth. Corrected daily feed intake (DFIAdj) and average daily weight (DBWAvg) measurements on 8981 (n = 525,240 observations) and 5643 (n = 283,607 observations) animals were utilized in a random regression model using Legendre polynomials (order = 2) and a relationship matrix that included genotyped and un-genotyped animals. A ssGWAS was conducted on the animal polynomial coefficients (intercept, linear and quadratic) for animals with genotypes (DFIAdj: n = 855; DBWAvg: n = 590). Regions were characterized based on the variance of 10-SNP sliding-window GEBV (WGEBV). A bootstrap analysis (n = 1000) was conducted to declare significance. Heritability estimates across the trajectory ranged from 0.34 to 0.52 for DBWAvg and from 0.07 to 0.23 for DFIAdj. Genetic correlations across age classes were large and positive for both DBWAvg and DFIAdj, although age classes at the beginning of the trajectory had small to moderate genetic correlations with age classes towards the end for both traits. The WGEBV variance explained by significant regions (P < 0.001) for each polynomial coefficient ranged from 0.2 to 0.9% for DBWAvg and from 0.3 to 1.01% for DFIAdj. The WGEBV variance explained by significant regions for the trajectory was 1.54 and 1.95% for DBWAvg and DFIAdj, respectively. Both traits identified candidate genes with functions related to metabolite and energy homeostasis, glucose and insulin signaling, and behavior. We have identified regions of the genome that impact the intercept, linear and quadratic terms for DBWAvg and DFIAdj. These results provide preliminary evidence that individual growth and feed intake trajectories are impacted by different regions of the genome at different times.

  13. Bayesian median regression for temporal gene expression data

    NASA Astrophysics Data System (ADS)

    Yu, Keming; Vinciotti, Veronica; Liu, Xiaohui; 't Hoen, Peter A. C.

    2007-09-01

    Most of the existing methods for the identification of biologically interesting genes in a temporal expression profiling dataset do not fully exploit the temporal ordering in the dataset and are based on normality assumptions for the gene expression. In this paper, we introduce a Bayesian median regression model to detect genes whose temporal profile is significantly different across a number of biological conditions. The regression model is defined by a polynomial function where both time and condition effects, as well as interactions between the two, are included. MCMC-based inference returns the posterior distribution of the polynomial coefficients. From this, a simple Bayes factor test is proposed to test for significance. The estimation of the median rather than the mean, and within a Bayesian framework, increases the robustness of the method compared to a Hotelling T²-test previously suggested. This is shown on simulated data and on muscular dystrophy gene expression data.
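
    As a concrete (non-Bayesian) starting point, the sketch below fits a median regression with polynomial time, condition, and interaction terms to hypothetical heavy-tailed data using the frequentist QuantReg estimator from statsmodels; the MCMC machinery and Bayes factor test of the paper are not reproduced, and all data and coefficients are invented.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(2)

      # Hypothetical expression profile: quadratic time trend, condition
      # effect, time-by-condition interaction, heavy-tailed noise.
      time = np.tile(np.arange(5.0), 8)
      cond = np.repeat([0.0, 1.0], 20)
      y = (1.0 + 0.5 * time - 0.05 * time ** 2 + 0.8 * cond * time
           + rng.standard_t(df=3, size=time.size))

      X = sm.add_constant(np.column_stack([time, time ** 2, cond,
                                           cond * time]))

      # q=0.5 gives median regression, robust to the heavy tails that
      # motivate the move away from normality assumptions.
      fit = sm.QuantReg(y, X).fit(q=0.5)
      print(fit.params)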

  14. A new model for estimating total body water from bioelectrical resistance

    NASA Technical Reports Server (NTRS)

    Siconolfi, S. F.; Kear, K. T.

    1992-01-01

    Estimation of total body water (T) from bioelectrical resistance (R) is commonly done by stepwise regression models with height squared over R (H²/R), age, sex, and weight (W). Polynomials of H²/R have not been included in these models. We examined the validity of a model with third-order polynomials and W. Methods: T was measured with oxygen-18-labeled water in 27 subjects. R at 50 kHz was obtained from electrodes placed on the hand and foot while subjects were in the supine position. A stepwise regression equation was developed with 13 subjects (age 31.5 ± 6.2 years, T 38.2 ± 6.6 L, W 65.2 ± 12.0 kg). Correlations, standard errors of estimates, and mean differences were computed between T and estimated T's from the new (N) model and other models. Evaluations were completed with the remaining 14 subjects (age 32.4 ± 6.3 years, T 40.3 ± 8 L, W 70.2 ± 12.3 kg) and two of its subgroups (high and low). Results: A regression equation was developed from the model. The only significant mean difference was between T and one of the earlier models. Conclusion: Third-order polynomials in regression models may increase the accuracy of estimating total body water. Evaluating the model with a larger population is needed.
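
    A minimal sketch of the model form described above, on synthetic data (the subject values, coefficients, and noise level are invented, not the study's): a design matrix with first-, second-, and third-order terms of H²/R plus weight, fit by ordinary least squares.

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic calibration data: height (cm), resistance (ohm),
      # weight (kg), and total body water T (L).
      n = 40
      height = rng.normal(170, 10, n)
      resist = rng.normal(500, 60, n)
      weight = rng.normal(68, 12, n)
      h2_over_r = height ** 2 / resist
      T = 0.6 * h2_over_r + 0.1 * weight + rng.normal(0, 1.5, n)

      # Third-order polynomial of H^2/R plus W, as in the model above.
      X = np.column_stack([np.ones(n), h2_over_r, h2_over_r ** 2,
                           h2_over_r ** 3, weight])
      coef, *_ = np.linalg.lstsq(X, T, rcond=None)
      resid = T - X @ coef
      see = np.sqrt(np.sum(resid ** 2) / (n - X.shape[1]))
      print("coefficients:", np.round(coef, 4), " SEE (L):", round(see, 2))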

  15. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models.

    PubMed

    Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald

    2011-06-01

    Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
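
    A compact sketch of the cluster-then-regress idea, under stated substitutions: scikit-learn has no fuzzy C-means, so hard k-means stands in for the clustering step, and a hand-rolled quadratic feature map stands in for the polynomial PLSR within each cluster; the toy response surface is invented for illustration.

      import numpy as np
      from sklearn.cluster import KMeans
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(4)

      # Toy nonlinear, non-monotone input-output map standing in for a
      # dynamic-model emulation problem (inputs: parameters; output: one
      # trajectory feature).
      X = rng.uniform(-2, 2, (600, 3))
      y = (np.sin(X[:, 0]) * X[:, 1] ** 2 + 0.3 * X[:, 2]
           + rng.normal(0, 0.05, 600))

      def poly_features(X):
          """Quadratic expansion: linear, squared, and cross terms."""
          cross = np.column_stack([X[:, i] * X[:, j]
                                   for i in range(3)
                                   for j in range(i + 1, 3)])
          return np.column_stack([X, X ** 2, cross])

      # Step 1: partition the input space (k-means in place of fuzzy
      # C-means). Step 2: one polynomial PLS model per cluster.
      km = KMeans(n_clusters=4, n_init=10, random_state=0).fit(X)
      models = {c: PLSRegression(n_components=3)
                .fit(poly_features(X[km.labels_ == c]), y[km.labels_ == c])
                for c in range(4)}

      # Step 3: route new points to the model of their nearest cluster.
      X_new = rng.uniform(-2, 2, (5, 3))
      pred = [float(models[c].predict(poly_features(x[None, :])).ravel()[0])
              for x, c in zip(X_new, km.predict(X_new))]
      print(np.round(pred, 3))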

  16. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    PubMed Central

    2011-01-01

    Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. Conclusions HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems. PMID:21627852

  17. Measurement of pediatric regional cerebral blood flow from 6 months to 15 years of age in a clinical population.

    PubMed

    Carsin-Vu, Aline; Corouge, Isabelle; Commowick, Olivier; Bouzillé, Guillaume; Barillot, Christian; Ferré, Jean-Christophe; Proisy, Maia

    2018-04-01

    To investigate changes in cerebral blood flow (CBF) in gray matter (GM) between 6 months and 15 years of age and to provide CBF values for the brain, GM, white matter (WM), hemispheres and lobes. Between 2013 and 2016, we retrospectively included all clinical MRI examinations with arterial spin labeling (ASL). We excluded subjects with a condition potentially affecting brain perfusion. For each subject, mean values of CBF in the brain, GM, WM, hemispheres and lobes were calculated. GM CBF was fitted using linear, quadratic and cubic polynomial regression against age. Regression models were compared with Akaike's information criterion (AIC) and likelihood ratio tests. Eighty-four children were included (44 females/40 males). Mean CBF values were 64.2 ± 13.8 mL/100 g/min in GM, and 29.3 ± 10.0 mL/100 g/min in WM. The best-fit model of brain perfusion was the cubic polynomial function (AIC = 672.7, versus AIC = 673.9 and AIC = 674.1 for the negative linear function and the quadratic polynomial function, respectively). No statistically significant difference between the tested models was found that would demonstrate the superiority of the quadratic (p = 0.18) or cubic polynomial model (p = 0.06) over the negative linear regression model. No effect of general anesthesia (p = 0.34) or of gender (p = 0.16) was found. We provided values for ASL CBF in the brain, GM, WM, hemispheres, and lobes over a wide pediatric age range, approximately showing inverted U-shaped changes in GM perfusion over the course of childhood. Copyright © 2018 Elsevier B.V. All rights reserved.
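
    The linear/quadratic/cubic comparison by AIC is easy to reproduce on synthetic data. The sketch below (hypothetical ages and CBF values with an inverted-U shape; not the study's measurements) uses the least-squares Gaussian form AIC = n log(RSS/n) + 2k.

      import numpy as np

      rng = np.random.default_rng(5)

      # Hypothetical GM CBF (mL/100 g/min) versus age (years), with an
      # inverted-U shape like the one reported above.
      age = rng.uniform(0.5, 15, 84)
      cbf = 55 + 12 * age - 1.0 * age ** 2 + rng.normal(0, 8, age.size)

      def aic(x, y, degree):
          """Gaussian least-squares AIC; k counts the polynomial
          coefficients plus the residual variance."""
          coef = np.polyfit(x, y, degree)
          rss = np.sum((y - np.polyval(coef, x)) ** 2)
          k = degree + 2
          return len(y) * np.log(rss / len(y)) + 2 * k

      for d, name in [(1, "linear"), (2, "quadratic"), (3, "cubic")]:
          print(f"{name:10s} AIC = {aic(age, cbf, d):.1f}")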

  18. USING LINEAR AND POLYNOMIAL MODELS TO EXAMINE THE ENVIRONMENTAL STABILITY OF VIRUSES

    EPA Science Inventory

    The article presents the development of model equations for describing the fate of viral infectivity in environmental samples. Most of the models were based upon the use of a two-step linear regression approach. The first step employs regression of log base 10 transformed viral t...

  19. Meta-Regression Approximations to Reduce Publication Selection Bias

    ERIC Educational Resources Information Center

    Stanley, T. D.; Doucouliagos, Hristos

    2014-01-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with…

  20. Multivariate random regression analysis for body weight and main morphological traits in genetically improved farmed tilapia (Oreochromis niloticus).

    PubMed

    He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing

    2017-11-02

    Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-points in genetically improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately via adaptively penalizing the likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of three orders for body weight (BWE) and body length (BL) and of two orders for body depth (BD). By using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time-points exceeded 0.5 for either single or pairwise time-points. Moreover, correlations between early and late growth time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of comprehensive selection for BWE and the main morphological traits.

  1. DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. T. P. HUTAURUK, D. G. IRELAND, G. ROSNER

    2009-04-01

    Angular distributions of differential cross sections from the latest CLAS data sets [6] for the reaction γ + p → K⁺ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1, where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion of associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
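
    The model-selection question above, namely how many associated Legendre terms the angular distribution needs, can be illustrated with a least-squares fit on synthetic data (the coefficients and noise level are invented; for the differential cross section the relevant Legendre class has m = 0, where P_l^0 reduces to the ordinary Legendre polynomial). The residual stops improving once the true order is reached, a crude surrogate for the posterior-probability comparison used in the analysis.

      import numpy as np
      from scipy.special import lpmv

      rng = np.random.default_rng(6)

      # Synthetic angular distribution (not CLAS data): a low-order
      # associated Legendre series in cos(theta) plus noise.
      m = 0
      true_coef = np.array([1.0, 0.4, -0.25, 0.1])  # l = 0..3
      cos_th = np.linspace(-0.95, 0.95, 40)
      signal = sum(c * lpmv(m, l, cos_th) for l, c in enumerate(true_coef))
      data = signal + rng.normal(0, 0.02, cos_th.size)

      # Fit series of increasing maximum l; the residual flattens once
      # the true order (here 3) is reached.
      for lmax in range(1, 7):
          A = np.column_stack([lpmv(m, l, cos_th) for l in range(lmax + 1)])
          coef, *_ = np.linalg.lstsq(A, data, rcond=None)
          rss = np.sum((data - A @ coef) ** 2)
          print(f"l_max = {lmax}: RSS = {rss:.4f}")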

  2. Endpoint in plasma etch process using new modified w-multivariate charts and windowed regression

    NASA Astrophysics Data System (ADS)

    Zakour, Sihem Ben; Taleb, Hassen

    2017-09-01

    Endpoint detection is an important undertaking for understanding and verifying that a plasma etching process has run correctly, especially when the etched area is very small (0.1%). It is a crucial part of delivering repeatable results on every single wafer. The endpoint is reached when the film being etched has been completely cleared. To ensure the desired device performance on the produced integrated circuit, an optical emission spectroscopy (OES) sensor is employed. The huge number of gathered wavelengths (profiles) is first analyzed and pre-processed using a newly proposed simple algorithm named Spectra Peak Selection (SPS) to select the important wavelengths; wavelet analysis (WA) is then employed to enhance detection performance by suppressing noise and redundant information. The selected and treated OES wavelengths are then used in modified multivariate control charts (MEWMA and Hotelling) for three statistics (mean, SD and CV) and in windowed polynomial regression for the mean. The use of the three aforementioned statistics is motivated by the need to control mean shift, variance shift, and their ratio (CV) when both mean and SD are unstable. The control charts show their performance in detecting the endpoint, especially the W-mean Hotelling chart, while the worst result is given by the CV statistic. As the best endpoint detection is given by the W-Hotelling mean statistic, this statistic is used to construct a windowed wavelet Hotelling polynomial regression; the latter can only identify the window containing the endpoint phenomenon.

  3. Exploring the use of random regression models with legendre polynomials to analyze measures of volume of ejaculate in Holstein bulls.

    PubMed

    Carabaño, M J; Díaz, C; Ugarte, C; Serrano, M

    2007-02-01

    Artificial insemination centers routinely collect records of the quantity and quality of semen of bulls throughout the animals' productive period. The goal of this paper was to explore the use of random regression models with orthogonal polynomials to analyze repeated measures of semen production of Spanish Holstein bulls. A total of 8,773 records of volume of first ejaculate (VFE) collected between 12 and 30 mo of age from 213 Spanish Holstein bulls was analyzed under alternative random regression models. Legendre polynomial functions of increasing order (0 to 6) were fitted to the average trajectory, additive genetic and permanent environmental effects. Age at collection and days in production were used as time variables. Heterogeneous and homogeneous residual variances were alternatively assumed. Analyses were carried out within a Bayesian framework. The logarithm of the marginal density and the cross-validation predictive ability of the data were used as model comparison criteria. Based on both criteria, age at collection as the time variable and heterogeneous residual models are recommended to analyze changes of VFE over time. Both criteria indicated that fitting random curves for the genetic and permanent environmental components, as well as for the average trajectory, improved the quality of the models. Furthermore, models with a higher-order polynomial for the permanent environmental (5 to 6) than for the genetic components (4 to 5) and the average trajectory (2 to 3) tended to perform best. High-order polynomials were needed to accommodate the highly oscillating nature of the phenotypic values. Heritability and repeatability estimates, disregarding the extremes of the studied period, ranged from 0.15 to 0.35 and from 0.20 to 0.50, respectively, indicating that selection for VFE may be effective at any stage. Small differences among models were observed. Apart from the extremes, estimated correlations between ages decreased steadily from 0.9 and 0.4 for measures 1 mo apart to 0.4 and 0.2 for the most distant measures, for additive genetic and phenotypic components, respectively. Further investigation to account for environmental factors that may be responsible for the oscillating observations of VFE is needed.

  4. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in the x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The present study compares these four orthogonal polynomials by theoretical analysis and numerical experiments in terms of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness, even in the case of a wavefront with incomplete data.

  5. Principal polynomial analysis.

    PubMed

    Laparra, Valero; Jiménez, Sandra; Tuia, Devis; Camps-Valls, Gustau; Malo, Jesus

    2014-11-01

    This paper presents a new framework for manifold learning based on a sequence of principal polynomials that capture the possibly nonlinear nature of the data. The proposed Principal Polynomial Analysis (PPA) generalizes PCA by modeling the directions of maximal variance by means of curves, instead of straight lines. Contrary to previous approaches, PPA reduces to performing simple univariate regressions, which makes it computationally feasible and robust. Moreover, PPA shows a number of interesting analytical properties. First, PPA is a volume-preserving map, which in turn guarantees the existence of the inverse. Second, such an inverse can be obtained in closed form. Invertibility is an important advantage over other learning methods, because it permits understanding of the identified features in the input domain, where the data have physical meaning. Moreover, it allows evaluation of the performance of dimensionality reduction in sensible (input-domain) units. Volume preservation also allows easy computation of information-theoretic quantities, such as the reduction in multi-information after the transform. Third, the analytical nature of PPA leads to a clear geometrical interpretation of the manifold: it allows the computation of Frenet-Serret frames (local features) and of generalized curvatures at any point of the space. Fourth, the analytical Jacobian allows the computation of the metric induced by the data, thus generalizing the Mahalanobis distance. These properties are demonstrated theoretically and illustrated experimentally. The performance of PPA is evaluated in dimensionality and redundancy reduction, on both synthetic and real datasets from the UCI repository.

  6. Polynomial elimination theory and non-linear stability analysis for the Euler equations

    NASA Technical Reports Server (NTRS)

    Kennon, S. R.; Dulikravich, G. S.; Jespersen, D. C.

    1986-01-01

    Numerical methods are presented that exploit the polynomial properties of discretizations of the Euler equations. It is noted that most finite difference or finite volume discretizations of the steady-state Euler equations produce a polynomial system of equations to be solved. These equations are solved using classical polynomial elimination theory, with some innovative modifications. This paper also presents some preliminary results of a new non-linear stability analysis technique. This technique is applicable to determining the stability of polynomial iterative schemes. Results are presented for applying the elimination technique to a one-dimensional test case. For this test case, the exact solution is computed in three iterations. The non-linear stability analysis is applied to determine the optimal time step for solving Burgers' equation using the MacCormack scheme. The estimated optimal time step is very close to the time step that arises from a linear stability analysis.

  7. Polynomial Conjoint Analysis of Similarities: A Model for Constructing Polynomial Conjoint Measurement Algorithms.

    ERIC Educational Resources Information Center

    Young, Forrest W.

    A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…

  8. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network

    PubMed Central

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N.

    2015-01-01

    Wireless Sensor Networks (WSNs) monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing WSN methods communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution first groups the sensor nodes that detect objects of similar events (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. The detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing energy consumption. Next, a polynomial regression function is applied to combine the target objects of similar events observed by different sensors, based on the minimum and maximum values of the object events, before transfer to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead. PMID:26426701

  9. Bayes Node Energy Polynomial Distribution to Improve Routing in Wireless Sensor Network.

    PubMed

    Palanisamy, Thirumoorthy; Krishnasamy, Karthikeyan N

    2015-01-01

    Wireless Sensor Networks (WSNs) monitor and control the physical world via large numbers of small, low-priced sensor nodes. Existing WSN methods communicate sensed data through continuous data collection, resulting in higher delay and energy consumption. To overcome the routing issue and reduce the energy drain rate, the Bayes Node Energy and Polynomial Distribution (BNEPD) technique is introduced with energy-aware routing in the wireless sensor network. The Bayes Node Energy Distribution first groups the sensor nodes that detect objects of similar events (i.e., temperature, pressure, flow) into specific regions by applying Bayes' rule. The detection of similar events is accomplished based on the Bayes probabilities and is sent to the sink node, minimizing energy consumption. Next, a polynomial regression function is applied to combine the target objects of similar events observed by different sensors, based on the minimum and maximum values of the object events, before transfer to the sink node. Finally, the Poly Distribute algorithm effectively distributes the sensor nodes. An energy-efficient routing path for each sensor node is created by data aggregation at the sink based on the polynomial regression function, which reduces the energy drain rate with minimum communication overhead. Experimental performance is evaluated using the Dodgers Loop Sensor Data Set from the UCI repository. Simulation results show that the proposed distribution algorithm significantly reduces the node energy drain rate and ensures fairness among different users, reducing the communication overhead.

  10. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    PubMed

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate (e.g., temperature) and land surface (e.g., soil moisture) variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data at three case study locations with surface source waters, including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influences on their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
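
    Local polynomial regression itself is short to implement: at each prediction point, fit a low-order polynomial by weighted least squares with a kernel centered there. The sketch below uses a tricube kernel, an invented bandwidth, and synthetic predictor/TOC data; it is not the authors' code.

      import numpy as np

      rng = np.random.default_rng(7)

      # Hypothetical predictor (e.g., a soil-moisture index) and TOC
      # response (mg/L) with a nonlinear relationship.
      x = np.sort(rng.uniform(0, 1, 120))
      toc = 3 + 2 * np.sin(3 * x) + rng.normal(0, 0.3, x.size)

      def local_poly_predict(x0, x, y, bandwidth=0.15, degree=2):
          """Weighted polynomial fit around x0 with a tricube kernel;
          the intercept is the fitted value at x0."""
          u = np.abs(x - x0) / bandwidth
          w = np.where(u < 1, (1 - u ** 3) ** 3, 0.0)
          X = np.vander(x - x0, degree + 1, increasing=True)
          WX = X * w[:, None]
          beta = np.linalg.lstsq(WX.T @ X, WX.T @ y, rcond=None)[0]
          return beta[0]

      grid = np.linspace(0.05, 0.95, 10)
      print(np.round([local_poly_predict(g, x, toc) for g in grid], 2))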

  11. Optimization of binary thermodynamic and phase diagram data

    NASA Astrophysics Data System (ADS)

    Bale, Christopher W.; Pelton, A. D.

    1983-03-01

    An optimization technique based upon least squares regression is presented to permit the simultaneous analysis of diverse experimental binary thermodynamic and phase diagram data. Coefficients of polynomial expansions for the enthalpy and excess entropy of binary solutions are obtained which can subsequently be used to calculate the thermodynamic properties or the phase diagram. In an interactive computer-assisted analysis employing this technique, one can critically analyze a large number of diverse data in a binary system rapidly, in a manner which is fully self-consistent thermodynamically. Examples of applications to the Bi-Zn, Cd-Pb, PbCl2-KCl, LiCl-FeCl2, and Au-Ni binary systems are given.

  12. Random regression models for the prediction of days to weight, ultrasound rib eye area, and ultrasound back fat depth in beef cattle.

    PubMed

    Speidel, S E; Peel, R K; Crews, D H; Enns, R M

    2016-02-01

    Genetic evaluation research designed to reduce the required days to a specified end point has received very little attention in the pertinent scientific literature, given that its economic importance was first discussed in 1957. There are many production scenarios in today's beef industry, making a prediction of the required number of days to a single end point a suboptimal option. Random regression is an attractive alternative for calculating days to weight (DTW), days to ultrasound back fat (DTUBF), and days to ultrasound rib eye area (DTUREA) genetic predictions that could overcome the weaknesses of a single end point prediction. The objective of this study was to develop random regression approaches for the prediction of DTW, DTUREA, and DTUBF. Data were obtained from the Agriculture and Agri-Food Canada Research Centre, Lethbridge, AB, Canada. Data consisted of records on 1,324 feedlot cattle spanning 1999 to 2007. Individual animals averaged 5.77 observations, with weights, ultrasound rib eye area (UREA), ultrasound back fat depth (UBF), and ages ranging from 293 to 863 kg, 73.39 to 129.54 cm², 1.53 to 30.47 mm, and 276 to 519 d, respectively. Random regression models using Legendre polynomials were used to regress age of the individual on weight, UREA, and UBF. Fixed effects in the model included an overall fixed regression of age on end point (weight, UREA, and UBF) nested within breed to account for the mean relationship between age and weight, as well as a contemporary group effect consisting of breed of the animal (Angus, Charolais, and Charolais-sired), feedlot pen, and year of measure. Likelihood ratio tests were used to determine the appropriate random polynomial order. Use of the quadratic polynomial did not account for any additional genetic variation in days for DTW (P > 0.11), DTUREA (P > 0.18), or DTUBF (P > 0.20) when compared with the linear random polynomial. Heritability estimates from the linear random regression for DTW ranged from 0.54 to 0.74, corresponding to end points of 293 and 863 kg, respectively. Heritability for DTUREA ranged from 0.51 to 0.34 and for DTUBF from 0.55 to 0.37. These estimates correspond to UREA end points of 35 and 125 cm² and UBF end points of 1.53 and 30 mm, respectively. This range of heritability shows DTW, DTUREA, and DTUBF to be highly heritable and indicates that selection pressure aimed at reducing the number of days to reach a finish weight end point can result in genetic change given sufficient data.

  13. Genetic analysis of milk production traits of Tunisian Holsteins using random regression test-day model with Legendre polynomials

    PubMed Central

    2018-01-01

    Objective The objective of this study was to estimate genetic parameters of milk, fat, and protein yields within and across lactations in Tunisian Holsteins using a random regression test-day (TD) model. Methods A random regression multiple trait multiple lactation TD model was used to estimate genetic parameters in the Tunisian dairy cattle population. Data were TD yields of milk, fat, and protein from the first three lactations. Random regressions were modeled with third-order Legendre polynomials for the additive genetic, and permanent environment effects. Heritabilities, and genetic correlations were estimated by Bayesian techniques using the Gibbs sampler. Results All variance components tended to be high in the beginning and the end of lactations. Additive genetic variances for milk, fat, and protein yields were the lowest and were the least variable compared to permanent variances. Heritability values tended to increase with parity. Estimates of heritabilities for 305-d yield-traits were low to moderate, 0.14 to 0.2, 0.12 to 0.17, and 0.13 to 0.18 for milk, fat, and protein yields, respectively. Within-parity, genetic correlations among traits were up to 0.74. Genetic correlations among lactations for the yield traits were relatively high and ranged from 0.78±0.01 to 0.82±0.03, between the first and second parities, from 0.73±0.03 to 0.8±0.04 between the first and third parities, and from 0.82±0.02 to 0.84±0.04 between the second and third parities. Conclusion These results are comparable to previously reported estimates on the same population, indicating that the adoption of a random regression TD model as the official genetic evaluation for production traits in Tunisia, as developed by most Interbull countries, is possible in the Tunisian Holsteins. PMID:28823122

  14. Genetic analysis of milk production traits of Tunisian Holsteins using random regression test-day model with Legendre polynomials.

    PubMed

    Ben Zaabza, Hafedh; Ben Gara, Abderrahmen; Rekik, Boulbaba

    2018-05-01

    The objective of this study was to estimate genetic parameters of milk, fat, and protein yields within and across lactations in Tunisian Holsteins using a random regression test-day (TD) model. A random regression multiple trait multiple lactation TD model was used to estimate genetic parameters in the Tunisian dairy cattle population. Data were TD yields of milk, fat, and protein from the first three lactations. Random regressions were modeled with third-order Legendre polynomials for the additive genetic, and permanent environment effects. Heritabilities, and genetic correlations were estimated by Bayesian techniques using the Gibbs sampler. All variance components tended to be high in the beginning and the end of lactations. Additive genetic variances for milk, fat, and protein yields were the lowest and were the least variable compared to permanent variances. Heritability values tended to increase with parity. Estimates of heritabilities for 305-d yield-traits were low to moderate, 0.14 to 0.2, 0.12 to 0.17, and 0.13 to 0.18 for milk, fat, and protein yields, respectively. Within-parity, genetic correlations among traits were up to 0.74. Genetic correlations among lactations for the yield traits were relatively high and ranged from 0.78±0.01 to 0.82±0.03, between the first and second parities, from 0.73±0.03 to 0.8±0.04 between the first and third parities, and from 0.82±0.02 to 0.84±0.04 between the second and third parities. These results are comparable to previously reported estimates on the same population, indicating that the adoption of a random regression TD model as the official genetic evaluation for production traits in Tunisia, as developed by most Interbull countries, is possible in the Tunisian Holsteins.

  15. Hyperspectral imaging using a color camera and its application for pathogen detection

    NASA Astrophysics Data System (ADS)

    Yoon, Seung-Chul; Shin, Tae-Sung; Heitschmidt, Gerald W.; Lawrence, Kurt C.; Park, Bosoon; Gamble, Gary

    2015-02-01

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using an RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six representative non-O157 Shiga toxin-producing Escherichia coli (STEC) serogroups (O26, O45, O103, O111, O121, and O145) grown in Petri dishes of Rainbow agar. The purpose of the feasibility study was to evaluate whether a DSLR camera (Nikon D700) could be used to predict hyperspectral images in the wavelength range from 400 to 1,000 nm and even to predict the types of pathogens using a hyperspectral STEC classification algorithm that was previously developed. Unlike many other studies using color charts with known and noise-free spectra for training reconstruction models, this work used hyperspectral and color images measured separately by a hyperspectral imaging spectrometer and the DSLR color camera. The color images were calibrated (i.e., normalized) to relative reflectance, subsampled, and spatially registered to match counterpart pixels in the hyperspectral images, which were also calibrated to relative reflectance. Polynomial multivariate least-squares regression (PMLR) had previously been developed with simulated color images. In this study, partial least squares regression (PLSR) was also evaluated as a spectral recovery technique to minimize multicollinearity and overfitting. The two spectral recovery models (PMLR and PLSR) and their parameters were evaluated by cross-validation. The QR decomposition was used to find a numerically more stable solution of the regression equation. The preliminary results showed that PLSR was more effective than PMLR, especially with higher-order polynomial regressions. The best classification accuracy measured with an independent test set was about 90%. The results suggest the potential of cost-effective color imaging using hyperspectral image classification algorithms for rapidly differentiating pathogens in agar plates.
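
    The PLSR recovery step can be sketched with a synthetic stand-in for the registered RGB/spectra pairs (the band count, the crude linear camera model, and the component number below are illustrative assumptions): expand RGB with second-order polynomial terms, then regress the spectra on the expansion.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(8)

      # Synthetic data: 500 pixels with 61-band spectra (400-1000 nm at
      # 10 nm) and RGB values produced by a crude linear camera model.
      n_px, n_bands = 500, 61
      spectra = np.abs(rng.normal(0.4, 0.1, (n_px, n_bands))).cumsum(axis=1)
      spectra /= spectra.max()
      rgb = spectra @ rng.uniform(0, 1, (n_bands, 3)) / n_bands

      def expand(rgb):
          """Second-order polynomial expansion of the RGB channels."""
          r, g, b = rgb.T
          return np.column_stack([r, g, b, r * g, r * b, g * b,
                                  r ** 2, g ** 2, b ** 2])

      # Train on 400 pixels, recover the spectra of the held-out 100.
      pls = PLSRegression(n_components=8).fit(expand(rgb[:400]),
                                              spectra[:400])
      recovered = pls.predict(expand(rgb[400:]))
      rmse = np.sqrt(np.mean((recovered - spectra[400:]) ** 2))
      print(f"held-out spectral RMSE: {rmse:.4f}")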

  16. Genetic parameters for growth characteristics of free-range chickens under univariate random regression models.

    PubMed

    Rovadoscki, Gregori A; Petrini, Juliana; Ramirez-Diaz, Johanna; Pertile, Simone F N; Pertille, Fábio; Salvian, Mayara; Iung, Laiza H S; Rodriguez, Mary Ana P; Zampar, Aline; Gaya, Leila G; Carvalho, Rachel S B; Coelho, Antonio A D; Savino, Vicente J M; Coutinho, Luiz L; Mourão, Gerson B

    2016-09-01

    Repeated measures from the same individual have been analyzed by using repeatability and finite-dimension models under univariate or multivariate analyses. However, in the last decade, the use of random regression models for genetic studies with longitudinal data has become more common. Thus, the aim of this research was to estimate genetic parameters for body weight of four experimental chicken lines by using univariate random regression models. Body weight data from hatching to 84 days of age (n = 34,730) from four experimental free-range chicken lines (7P, Caipirão da ESALQ, Caipirinha da ESALQ and Carijó Barbado) were used. The analysis model included the fixed effects of contemporary group (gender and rearing system), fixed regression coefficients for age at measurement, and random regression coefficients for permanent environmental effects and additive genetic effects. Heterogeneous variances for residual effects were considered, and one residual variance was assigned for each of six subclasses of age at measurement. Random regression curves were modeled by using Legendre polynomials of the second and third orders, with the best model chosen based on the Akaike information criterion, Bayesian information criterion, and restricted maximum likelihood. Multivariate analyses under the same animal mixed model were also performed for the validation of the random regression models. The Legendre polynomials of second order were better for describing the growth curves of the lines studied. Moderate to high heritabilities (h² = 0.15 to 0.98) were estimated for body weight between one and 84 days of age, suggesting that body weight at any age can be used as a selection criterion. Genetic correlations among body weight records obtained through multivariate analyses ranged from 0.18 to 0.96, 0.12 to 0.89, 0.06 to 0.96, and 0.28 to 0.96 in the 7P, Caipirão da ESALQ, Caipirinha da ESALQ, and Carijó Barbado chicken lines, respectively. Results indicate that genetic gain for body weight can be achieved by selection. Also, selection for body weight at 42 days of age can be maintained as a selection criterion. © 2016 Poultry Science Association Inc.

  17. Evaluation of SLAR and thematic mapper MSS data for forest cover mapping using computer-aided analysis techniques

    NASA Technical Reports Server (NTRS)

    Hoffer, R. M. (Principal Investigator)

    1979-01-01

    The spatial characteristics of the data were evaluated. A program was developed to reduce the spatial distortions resulting from variable viewing distance, and geometrically adjusted data sets were generated. The potential need for some level of radiometric adjustment was evidenced by an along-track band of high reflectance across different cover types in the Varian imagery. A multiple regression analysis was employed to explore the viewing-angle effect on measured reflectance. Areas in the data set which appeared to have no across-track stratification of cover type were identified. A program was developed which computed the average reflectance by column for each channel, over all of the scan lines in the designated areas. A regression analysis was then run using first-, second-, and third-degree polynomials for each channel. An atmospheric effect as a component of the viewing-angle source of variance is discussed. Cover type maps were completed and training and test field selection was initiated.

  18. Meta-regression approximations to reduce publication selection bias.

    PubMed

    Stanley, T D; Doucouliagos, Hristos

    2014-03-01

    Publication selection bias is a serious challenge to the integrity of all empirical sciences. We derive meta-regression approximations to reduce this bias. Our approach employs Taylor polynomial approximations to the conditional mean of a truncated distribution. A quadratic approximation without a linear term, precision-effect estimate with standard error (PEESE), is shown to have the smallest bias and mean squared error in most cases and to outperform conventional meta-analysis estimators, often by a great deal. Monte Carlo simulations also demonstrate how a new hybrid estimator that conditionally combines PEESE and the Egger regression intercept can provide a practical solution to publication selection bias. PEESE is easily expanded to accommodate systematic heterogeneity along with complex and differential publication selection bias that is related to moderator variables. By providing an intuitive reason for these approximations, we can also explain why the Egger regression works so well and when it does not. These meta-regression methods are applied to several policy-relevant areas of research including antidepressant effectiveness, the value of a statistical life, the minimum wage, and nicotine replacement therapy. Copyright © 2013 John Wiley & Sons, Ltd.

  19. Optimization of Paclitaxel Containing pH-Sensitive Liposomes By 3 Factor, 3 Level Box-Behnken Design.

    PubMed

    Rane, Smita; Prabhakar, Bala

    2013-07-01

    The aim of this study was to investigate the combined influence of 3 independent variables in the preparation of paclitaxel-containing pH-sensitive liposomes. A 3-factor, 3-level Box-Behnken design was used to derive a second-order polynomial equation and construct contour plots to predict responses. The independent variables selected were the molar ratio of phosphatidylcholine:diolylphosphatidylethanolamine (X1), the molar concentration of cholesterylhemisuccinate (X2), and the amount of drug (X3). Fifteen batches were prepared by the thin film hydration method and evaluated for percent drug entrapment, vesicle size, and pH sensitivity. The transformed values of the independent variables and the percent drug entrapment were subjected to multiple regression to establish a full-model second-order polynomial equation. The F statistic was calculated to confirm the omission of insignificant terms from the full-model equation and to derive a reduced-model polynomial equation to predict the dependent variables. Contour plots were constructed to show the effects of X1, X2, and X3 on the percent drug entrapment. The model was validated for accurate prediction of the percent drug entrapment by performing checkpoint analysis. The computer optimization process and contour plots predicted the levels of the independent variables X1, X2, and X3 (0.99, -0.06, and 0, respectively) for a maximized response of percent drug entrapment with constraints on vesicle size and pH sensitivity.
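
    A sketch of the design and the full-model fit, with invented responses (the coded Box-Behnken runs are standard; the response values and coefficients are not the study's): 12 edge midpoints plus 3 center points give 15 runs, enough to estimate the 10 coefficients of a full second-order polynomial in 3 factors.

      import numpy as np
      from itertools import combinations

      # Coded Box-Behnken design for 3 factors: +/-1 on each factor pair
      # with the third factor at 0, plus 3 center points (15 runs).
      edges = []
      for i, j in combinations(range(3), 2):
          for a in (-1, 1):
              for b in (-1, 1):
                  run = [0.0, 0.0, 0.0]
                  run[i], run[j] = a, b
                  edges.append(run)
      X = np.array(edges + [[0.0, 0.0, 0.0]] * 3)

      # Hypothetical percent-drug-entrapment responses for the 15 runs.
      rng = np.random.default_rng(9)
      y = (70 + 5 * X[:, 0] - 3 * X[:, 1] ** 2 + 2 * X[:, 0] * X[:, 1]
           + rng.normal(0, 0.5, 15))

      def design(X):
          """Full second-order model: intercept, linear, interaction,
          and quadratic terms."""
          x1, x2, x3 = X.T
          return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                  x1 * x2, x1 * x3, x2 * x3,
                                  x1 ** 2, x2 ** 2, x3 ** 2])

      beta, *_ = np.linalg.lstsq(design(X), y, rcond=None)
      print("fitted coefficients:", np.round(beta, 2))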

  20. Application of mathematical model methods for optimization tasks in construction materials technology

    NASA Astrophysics Data System (ADS)

    Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.

    2018-05-01

    In this paper, the regression equations method for the design of construction materials was studied. Regression and polynomial equations representing the correlation between the studied parameters were proposed. The logic design and software interface of the regression equations method focus on parameter optimization to provide an energy-saving effect at the design stage of autoclave aerated concrete, considering the replacement of traditionally used quartz sand by a coal mining by-product such as argillite. The mathematical model, represented by a quadratic polynomial for the design of experiments, was obtained using calculated and experimental data. This allowed the estimation of the relationship between the composition and final properties of the aerated concrete. The response surface, graphically presented in a nomogram, allowed the estimation of concrete properties in response to variation of the composition within the x-space. The optimal range of argillite content was obtained, leading to a reduction of raw material demand, development of the target plastic strength of aerated concrete, and a reduction of curing time before autoclave treatment. Generally, this method allows the design of autoclave aerated concrete with the required performance without additional resource and time costs.

  1. Regression Simulation of Turbine Engine Performance - Accuracy Improvement (TASK IV)

    DTIC Science & Technology

    1978-09-30

    Generalized Form of the Regression Equation for the Optimized Polynomial Exponent Method ... altitude, Mach number and power setting combinations were generated during the ARES evaluation. The orthogonal Latin Square selection procedure ... pattern. In data generation, the low (L), mid (M), and high (H) values of a variable are not always the same. At some of the corner points where

  2. Response Surface Modeling Tolerance and Inference Error Risk Specifications: Proposed Industry Standards

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2012-01-01

    This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.

  3. Genetic analysis of partial egg production records in Japanese quail using random regression models.

    PubMed

    Abou Khadiga, G; Mahmoud, B Y F; Farahat, G S; Emam, A M; El-Full, E A

    2017-08-01

    The main objectives of this study were to detect the most appropriate random regression model (RRM) to fit the data of monthly egg production in 2 lines (selected and control) of Japanese quail and to test the consistency of different criteria of model choice. Data from 1,200 female Japanese quails for the first 5 months of egg production from 4 consecutive generations of an egg line selected for egg production in the first month (EP1) were analyzed. Eight RRMs with different orders of Legendre polynomials were compared to determine the proper model for analysis. All criteria of model choice suggested that the adequate model included the second-order Legendre polynomials for fixed effects, and the third-order for additive genetic effects and permanent environmental effects. The predictive ability of the best model was the highest among all models (ρ = 0.987). According to the best model fitted to the data, estimates of heritability were relatively low to moderate (0.10 to 0.17) and showed a descending pattern from the first to the fifth month of production. A similar pattern was observed for permanent environmental effects, with greater estimates in the first (0.36) and second (0.23) months of production than the heritability estimates. Genetic correlations between separate production periods were higher (0.18 to 0.93) than their phenotypic counterparts (0.15 to 0.87). The superiority of the selected line over the control was observed through significant (P < 0.05) linear contrast estimates. Significant (P < 0.05) estimates of the covariate effect (age at sexual maturity) showed a decreasing pattern, with greater impact on egg production at earlier ages (first and second months) than at later ones. A methodology based on random regression animal models can be recommended for genetic evaluation of egg production in Japanese quail. © 2017 Poultry Science Association Inc.

  4. Wavefront analysis from its slope data

    NASA Astrophysics Data System (ADS)

    Mahajan, Virendra N.; Acosta, Eva

    2017-08-01

    In the aberration analysis of a wavefront over a certain domain, the polynomials that are orthogonal over and represent balanced wave aberrations for this domain are used. For example, Zernike circle polynomials are used for the analysis of a circular wavefront. Similarly, the annular polynomials are used to analyze the annular wavefronts for systems with annular pupils, as in a rotationally symmetric two-mirror system, such as the Hubble Space Telescope. However, when the data available for analysis are the slopes of a wavefront, as, for example, in a Shack-Hartmann sensor, we can integrate the slope data to obtain the wavefront data, and then use the orthogonal polynomials to obtain the aberration coefficients. An alternative is to find vector functions that are orthogonal to the gradients of the wavefront polynomials, and obtain the aberration coefficients directly as the inner products of these functions with the slope data. In this paper, we show that an infinite number of vector functions can be obtained in this manner. We show further that the vector functions that are irrotational are unique and propagate minimum uncorrelated additive random noise from the slope data to the aberration coefficients.
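
    The least-squares alternative to integrating the slopes can be shown in a deliberately simplified one-dimensional analogue (a Legendre-expanded wavefront on [-1, 1] with invented coefficients, rather than Zernike polynomials on a pupil): build a design matrix from basis-function derivatives and solve for the aberration coefficients directly. The piston term has zero slope and is unrecoverable from slope data, as expected.

      import numpy as np
      from numpy.polynomial import legendre

      rng = np.random.default_rng(10)

      # 1D stand-in for the slope-to-wavefront problem: a wavefront
      # expanded in Legendre polynomials, observed through noisy slopes.
      true_coef = np.array([0.0, 0.1, -0.3, 0.05, 0.2])  # a_0..a_4
      x = np.linspace(-1, 1, 200)
      slopes = (legendre.legval(x, legendre.legder(true_coef))
                + rng.normal(0, 0.01, x.size))

      # Design matrix of basis derivatives dP_k/dx, k = 1..4; the piston
      # term (k = 0) has zero derivative and is omitted.
      A = np.column_stack([legendre.legval(x, legendre.legder(np.eye(5)[k]))
                           for k in range(1, 5)])
      est, *_ = np.linalg.lstsq(A, slopes, rcond=None)
      print("recovered a_1..a_4:", np.round(est, 3))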

  5. Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Choon-Lin, E-mail: hcl@mail.tku.edu.tw

    2011-04-15

    Research Highlights: > Physical examples involving exceptional orthogonal polynomials. > Exceptional polynomials as deformations of classical orthogonal polynomials. > Exceptional polynomials from Darboux-Crum transformation. - Abstract: An interesting discovery in the last two years in the field of mathematical physics has been the exceptional X_l Laguerre and Jacobi polynomials. Unlike the well-known classical orthogonal polynomials which start with constant terms, these new polynomials have lowest degree l = 1, 2, ..., and yet they form a complete set with respect to some positive-definite measure. While the mathematical properties of these new X_l polynomials deserve further analysis, it is also of interest to see if they play any role in physical systems. In this paper we indicate some physical models in which these new polynomials appear as the main part of the eigenfunctions. The systems we consider include the Dirac equations coupled minimally and non-minimally with some external fields, and the Fokker-Planck equations. The systems presented here have enlarged the number of exactly solvable physical systems known so far.

  6. Linearity versus Nonlinearity of Offspring-Parent Regression: An Experimental Study of Drosophila melanogaster

    PubMed Central

    Gimelfarb, A.; Willis, J. H.

    1994-01-01

    An experiment was conducted to investigate the offspring-parent regression for three quantitative traits (weight, abdominal bristles and wing length) in Drosophila melanogaster. Linear and polynomial models were fitted for the regressions of a character in offspring on both parents. It is demonstrated that responses by the characters to selection predicted by the nonlinear regressions may differ substantially from those predicted by the linear regressions. This is true even, and especially, if selection is weak. The realized heritability for a character under selection is shown to be determined not only by the offspring-parent regression but also by the distribution of the character and by the form and strength of selection. PMID:7828818

  7. Stability analysis of fuzzy parametric uncertain systems.

    PubMed

    Bhiwani, R J; Patre, B M

    2011-10-01

    In this paper, the determination of the stability margin and the gain and phase margins of fuzzy parametric uncertain systems (FPUS) is addressed. The stability analysis of uncertain linear systems with coefficients described by fuzzy functions is studied. A complexity-reduced technique for determining the stability margin of FPUS is proposed. The method suggested depends on the order of the characteristic polynomial. In order to find the stability margin of interval polynomials of order less than 5, it is not always necessary to determine and check all four Kharitonov polynomials. It is shown that, for determining the stability margin of FPUS of order five, four, and three, only 3, 2, and 1 Kharitonov polynomials, respectively, are required. Only for sixth- and higher-order polynomials is the complete set of Kharitonov polynomials needed to determine the stability margin. Thus, for lower-order systems, the calculations are reduced to a large extent. This idea has been extended to determine the stability margin of fuzzy interval polynomials. It is also shown that the gain and phase margins of FPUS can be determined analytically without using graphical techniques. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
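
    A minimal numerical sketch of the underlying construction: the four Kharitonov polynomials are formed from the interval coefficient bounds, and each is tested for Hurwitz stability (here via roots; the paper's margin computation is analytical, and for orders below six it shows that fewer than four polynomials suffice). The bounds below are illustrative:

        import numpy as np

        def kharitonov(lo, hi):
            """Four Kharitonov polynomials for an interval polynomial with
            ascending coefficient bounds lo[i] <= a_i <= hi[i]."""
            patterns = ["llhh", "hhll", "lhhl", "hllh"]  # repeat with period 4
            return [[lo[i] if p[i % 4] == "l" else hi[i] for i in range(len(lo))]
                    for p in patterns]

        def hurwitz_stable(coeffs_ascending):
            """All roots strictly in the open left half-plane?"""
            roots = np.roots(coeffs_ascending[::-1])  # np.roots wants descending
            return bool(np.all(roots.real < 0))

        lo = [1.0, 2.0, 3.0, 1.0]   # a0..a3 lower bounds (illustrative)
        hi = [2.0, 3.0, 4.0, 2.0]   # a0..a3 upper bounds
        print([hurwitz_stable(k) for k in kharitonov(lo, hi)])  # all True here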

  8. Critical Analysis of Dual-Probe Heat-Pulse Technique Applied to Measuring Thermal Diffusivity

    NASA Astrophysics Data System (ADS)

    Bovesecchi, G.; Coppa, P.; Corasaniti, S.; Potenza, M.

    2018-07-01

    The paper presents an analysis of the experimental parameters involved in applying the dual-probe heat-pulse technique, followed by a critical review of methods for processing thermal response data (e.g., maximum detection and nonlinear least-squares regression) and the consequent obtainable uncertainty. Glycerol was selected as the testing liquid, and its thermal diffusivity was evaluated over the temperature range from -20 °C to 60 °C. In addition, Monte Carlo simulation was used to assess the uncertainty propagation for maximum detection. It was concluded that the maximum-detection approach to processing thermal response data gives the results closest to the reference data, whereas nonlinear regression results are affected by larger uncertainties due to partial correlation between the evaluated parameters. Moreover, the interpolation of temperature data with a polynomial to find the maximum leads to a systematic difference between measured and reference data, as evidenced by the Monte Carlo simulations; through its correction, this systematic error can be reduced to a negligible value, about 0.8 %.
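
    The maximum-detection step can be sketched in a few lines: fit a low-order polynomial around the sampled peak to locate t_max, then apply the idealized instantaneous line-source relation alpha = r^2 / (4 t_max). This is the textbook idealization only; the paper's processing accounts for pulse duration and the polynomial-interpolation bias discussed above:

        import numpy as np

        def diffusivity_from_peak(t, dT, r, window=7):
            """Quadratic fit around the sampled peak gives t_max; then
            alpha = r**2 / (4 * t_max) for an ideal instantaneous line source."""
            i = int(np.argmax(dT))
            sl = slice(max(i - window // 2, 0), i + window // 2 + 1)
            a, b, c = np.polyfit(t[sl], dT[sl], 2)   # dT ~ a*t^2 + b*t + c
            t_max = -b / (2 * a)                     # vertex of the parabola
            return r**2 / (4.0 * t_max)

        # synthetic line-source response: alpha = 1e-7 m^2/s, r = 6 mm
        alpha, r = 1e-7, 6e-3
        t = np.linspace(10, 400, 200)
        dT = (1.0 / t) * np.exp(-r**2 / (4 * alpha * t))
        print(diffusivity_from_peak(t, dT, r))  # close to 1e-7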

  9. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
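
    One convenient modern stand-in for this construction is a QR factorization of the Vandermonde matrix on the uniform grid: the columns of Q are discrete orthonormal polynomials (the Gram/Tchebycheff family, up to sign), so the least-squares coefficients reduce to inner products. A minimal sketch, not the original computer programs:

        import numpy as np

        def discrete_orthonormal_fit(y, degree):
            """Least-squares polynomial fit of uniformly spaced data using
            discrete orthonormal polynomials (columns of Q from a QR of the
            Vandermonde matrix); coefficients are simple inner products."""
            n = len(y)
            x = np.linspace(-1.0, 1.0, n)
            V = np.vander(x, degree + 1, increasing=True)
            Q, _ = np.linalg.qr(V)   # Q columns: orthonormal on the grid
            c = Q.T @ y              # projection = inner products
            return Q @ c             # fitted values

        rng = np.random.default_rng(1)
        y = np.sin(np.linspace(0, 3, 50)) + 0.05 * rng.standard_normal(50)
        print(np.round(discrete_orthonormal_fit(y, 5)[:5], 3))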

  10. The use of WaveLight® Contoura to create a uniform cornea: the LYRA Protocol. Part 3: the results of 50 treated eyes.

    PubMed

    Motwani, Manoj

    2017-01-01

    To demonstrate how using the WaveLight® Contoura measured astigmatism and axis eliminates corneal astigmatism and creates uniformly shaped corneas. A retrospective analysis was conducted of the first 50 eyes to have bilateral full WaveLight® Contoura LASIK correction of measured astigmatism and axis (vs conventional manifest refraction), using the Layer Yolked Reduction of Astigmatism Protocol in all cases. All patients had astigmatism corrected, and had at least 1 week of follow-up. Accuracy to the desired refractive goal was assessed by postoperative refraction and aberration reduction via calculation of polynomials, and postoperative vision was analyzed as a secondary goal. The average difference of astigmatic power from manifest to measured was 0.5462 D (with a range of 0-1.69 D), and the average difference of axis was 14.94° (with a range of 0°-89°). Forty-seven of 50 eyes had a goal of plano, 3 had a monovision goal. Astigmatism was fully eliminated from all but 2 eyes, and 1 eye had regression with astigmatism. Of the eyes with plano as the goal, 80.85% were 20/15 or better, and 100% were 20/20 or better. Polynomial analysis postoperatively showed that at 6.5 mm, the average C3 was reduced by 86.5% and the average C5 by 85.14%. Using WaveLight® Contoura measured astigmatism and axis removes higher-order aberrations and allows for the creation of a more uniform cornea with accurate removal of astigmatism and reduction of aberration polynomials. WaveLight® Contoura successfully links the refractive correction layer and aberration repair layer using the Layer Yolked Reduction of Astigmatism Protocol to demonstrate how aberration removal can affect refractive correction.

  11. Minimizing the effects of multicollinearity in the polynomial regression of age relationships and sex differences in serum levels of pregnenolone sulfate in healthy subjects.

    PubMed

    Meloun, Milan; Hill, Martin; Vceláková-Havlíková, Helena

    2009-01-01

    Pregnenolone sulfate (PregS) is known as a steroid conjugate positively modulating N-methyl-D-aspartate receptors on neuronal membranes. These receptors are responsible for the permeability of calcium channels and activation of neuronal function. The neuroactivating effect of PregS is also exerted via non-competitive negative modulation of GABA(A) receptors regulating the chloride influx. Recently, a penetrability of the blood-brain barrier for PregS was found in the rat, but some experiments in agreement with this finding were reported even earlier. It is known that circulating levels of PregS in humans are relatively high, depending primarily on age and adrenal activity. Concerning the neuromodulating effect of PregS, we recently evaluated age relationships of PregS in both sexes using polynomial regression models, which are known to bring about the problems of multicollinearity, i.e., strong correlations among independent variables. Several criteria for the selection of a suitable bias are demonstrated. Biased estimators based on the generalized principal component regression (GPCR) method, avoiding multicollinearity problems, are described. Significant differences were found between men and women in the course of the age dependence of PregS. In women, a significant maximum was found around the 30th year, followed by a rapid decline, while the maximum in men was achieved almost 10 years earlier and changes were minor up to the 60th year. The investigation of gender differences and age dependencies in PregS could be of interest given its well-known neurostimulating effect, relatively high serum concentration, and the probable partial permeability of the blood-brain barrier for the steroid conjugate. GPCR in combination with the MEP (mean quadratic error of prediction) criterion is extremely useful and appealing for constructing biased models. It can also be used for achieving such estimates with regard to keeping the model course corresponding to the data trend, especially in polynomial-type regression models.
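
    The core of any principal-component remedy for polynomial multicollinearity fits in a few lines: center the polynomial design matrix, keep the leading components, and regress on their scores. This is a generic PCR sketch, not the paper's GPCR/MEP procedure, and the data below are synthetic:

        import numpy as np

        def pcr_fit(x, y, degree, n_components):
            """Principal component regression for a polynomial in x: centered
            design matrix, leading principal components, regression on scores."""
            X = np.vander(x, degree + 1, increasing=True)[:, 1:]  # no intercept
            Xc = X - X.mean(axis=0)
            _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
            T = Xc @ Vt[:n_components].T                   # component scores
            beta = np.linalg.lstsq(T, y - y.mean(), rcond=None)[0]
            return y.mean() + T @ beta                     # fitted values

        rng = np.random.default_rng(2)
        age = rng.uniform(20, 70, 120)
        pregs = 2 + 0.15 * age - 0.0018 * age**2 + 0.3 * rng.standard_normal(120)
        print(np.round(pcr_fit(age, pregs, degree=4, n_components=2)[:5], 2))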

  12. Third molar development by measurements of open apices in an Italian sample of living subjects.

    PubMed

    De Luca, Stefano; Pacifici, Andrea; Pacifici, Luciano; Polimeni, Antonella; Fischetto, Sara Giulia; Velandia Palacio, Luz Andrea; Vanin, Stefano; Cameriere, Roberto

    2016-02-01

    The aim of this study is to analyse the age-predicting performance of the third molar index (I3M) in dental age estimation. A multiple regression analysis was developed with chronological age as the independent variable. In order to investigate the relationship between I3M and chronological age, the standard deviation and relative error were examined. Digitalized orthopantomographs (OPTs) of 975 Italian healthy subjects (531 female and 444 male), aged between 9 and 22 years, were studied. Third molar development was determined according to Cameriere et al. (2008). Analysis of covariance (ANCOVA) was applied to study the interaction between I3M and gender. The relationship between age and the third molar index (I3M) was tested with Pearson's correlation coefficient. The I3M, the age and the gender of the subjects were used as predictive variables for age estimation. The small F-value for gender (F = 0.042, p = 0.837) reveals that this factor does not affect the growth of the third molar. Adjusted R² (AdjR²) was used as the parameter to define the best-fitting function. All the regression models (linear, exponential, and polynomial) showed a similar AdjR². The polynomial (2nd-order) fit explains about 78% of the total variance and does not add any relevant clinical information to the age estimation process from the third molar. The standard deviation and relative error increase with age. The I3M has its minimum in the youngest group of studied individuals and its maximum in the oldest ones, indicating that its precision and reliability decrease with age. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  13. Comparative Analysis of Various Single-tone Frequency Estimation Techniques in High-order Instantaneous Moments Based Phase Estimation Method

    NASA Astrophysics Data System (ADS)

    Rajshekhar, G.; Gorthi, Sai Siva; Rastogi, Pramod

    2010-04-01

    For phase estimation in digital holographic interferometry, a high-order instantaneous moments (HIM) based method was recently developed which relies on piecewise polynomial approximation of phase and subsequent evaluation of the polynomial coefficients using the HIM operator. A crucial step in the method is mapping the polynomial coefficient estimation to single-tone frequency determination for which various techniques exist. The paper presents a comparative analysis of the performance of the HIM operator based method in using different single-tone frequency estimation techniques for phase estimation. The analysis is supplemented by simulation results.

  14. Quadratic Polynomial Regression using Serial Observation Processing: Implementation within DART

    NASA Astrophysics Data System (ADS)

    Hodyss, D.; Anderson, J. L.; Collins, N.; Campbell, W. F.; Reinecke, P. A.

    2017-12-01

    Many Ensemble-Based Kalman Filtering (EBKF) algorithms process the observations serially. Serial observation processing views the data assimilation process as an iterative sequence of scalar update equations. What is useful about this data assimilation algorithm is that it has very low memory requirements and does not need complex methods to perform the typical high-dimensional inverse calculation of many other algorithms. Recently, the push has been towards the prediction, and therefore the assimilation of observations, for regions and phenomena for which high resolution is required and/or highly nonlinear physical processes are operating. For these situations, a basic hypothesis is that the use of the EBKF is sub-optimal and performance gains could be achieved by accounting for aspects of the non-Gaussianity. To this end, we develop here a new component of the Data Assimilation Research Testbed (DART) to allow a wide variety of users to test this hypothesis. This new version of DART allows one to run several variants of the EBKF as well as several variants of the quadratic polynomial filter using the same forecast model and observations. Differences between the results of the two systems will then highlight the degree of non-Gaussianity in the system being examined. We will illustrate in this work the differences between the performance of linear versus quadratic polynomial regression in a hierarchy of models from Lorenz-63 to a simple general circulation model.
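
    The scalar update at the heart of serial processing is compact enough to sketch. Below is a perturbed-observation (stochastic) serial EnKF, shown only to illustrate the structure; DART's default EAKF variant performs a deterministic observation-space update instead, and all names here are illustrative:

        import numpy as np

        def serial_enkf_update(X, obs, obs_ops, obs_var, rng):
            """Assimilate scalar observations one at a time.
            X: (n_state, n_ens) ensemble; obs_ops: callables mapping a state
            vector to the scalar predicted observation."""
            for y, h, r in zip(obs, obs_ops, obs_var):
                hx = np.apply_along_axis(h, 0, X)           # per-member priors
                dX = X - X.mean(axis=1, keepdims=True)
                cov_xh = dX @ (hx - hx.mean()) / (X.shape[1] - 1)
                gain = cov_xh / (hx.var(ddof=1) + r)        # regress onto state
                y_pert = y + rng.normal(0.0, np.sqrt(r), X.shape[1])
                X = X + np.outer(gain, y_pert - hx)         # scalar update
            return X

        rng = np.random.default_rng(3)
        X = rng.standard_normal((3, 40)) + 1.0              # toy 3-variable state
        X = serial_enkf_update(X, [0.2], [lambda x: x[0]], [0.1], rng)
        print(X.mean(axis=1))  # first component pulled toward the observation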

  15. Genetic modelling of test day records in dairy sheep using orthogonal Legendre polynomials.

    PubMed

    Kominakis, A; Volanis, M; Rogdakis, E

    2001-03-01

    Test day milk yields of three lactations in Sfakia sheep were analyzed by fitting a random regression (RR) model, regressing on orthogonal polynomials of the stage of the lactation period, i.e. days in milk. Univariate (UV) and multivariate (MV) analyses were also performed for four stages of the lactation period, represented by average days in milk, i.e. 15, 45, 70 and 105 days, to compare estimates obtained from RR models with estimates from UV and MV analyses. The total numbers of test day records were 790, 1314 and 1041, obtained from 214, 342 and 303 ewes in the first, second and third lactation, respectively. Error variances and covariances between regression coefficients were estimated by restricted maximum likelihood. Models were compared using likelihood ratio tests (LRTs). Log likelihoods were not significantly reduced when the rank of the orthogonal Legendre polynomials (LPs) of lactation stage was reduced from 4 to 2 and homogeneous variances for lactation stages within lactations were considered. Mean weighted heritability estimates with RR models were 0.19, 0.09 and 0.08 for the first, second and third lactation, respectively. The respective estimates obtained from UV analyses were 0.14, 0.12 and 0.08. Mean permanent environmental variance, as a proportion of the total, was high at all stages and lactations, ranging from 0.54 to 0.71. Within lactations, genetic and permanent environmental correlations between lactation stages ranged from 0.36 to 0.99 and 0.76 to 0.99, respectively. Genetic parameters for additive genetic and permanent environmental effects obtained from RR models were different from those obtained from UV and MV analyses.

  16. An Evidence-Based Approach to Defining Fetal Macrosomia.

    PubMed

    Froehlich, Rosemary; Simhan, Hyagriv N; Larkin, Jacob C

    2016-04-01

    This study aims to determine the risk of adverse outcomes associated with the current diagnostic criteria for fetal macrosomia. We evaluated three techniques for characterizing birth weight as a predictor of shoulder dystocia or third- or fourth-degree laceration in 79,879 vaginal deliveries. First, we compared deliveries with birth weights above or below 4,500 g. We then performed logistic regression using birth weight as a continuous predictor, both with and without fractional polynomial transformation. Finally, we calculated the number of cesarean sections required to prevent one incident of the interrogated outcomes (number needed to treat [NNT]). Rates of adverse intrapartum outcomes increase incrementally with increasing birth weight and are predicted most accurately with logistic regression following fractional polynomial transformation. The NNT for third- or fourth-degree laceration dropped from 14.3 (95% confidence interval [CI], 13.9-14.7) at a birth weight of 3,500 g to 6.4 (95% CI, 6.1-6.8) at 4,500 g and, for shoulder dystocia, from 54.9 (95% CI, 51.5-58.6) at 3,500 g to 5.6 (95% CI, 5.2-6.0) at 4,500 g. The conventional distinction between "normal" and "macrosomic" does not reflect the incremental effect of increasing birth weight on the risk of obstetric morbidity. Outcomes analysis can inform fetal growth standards to better reflect relevant thresholds of risk.
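
    The NNT arithmetic is worth making explicit: if a prelabor cesarean is assumed to prevent the outcome entirely, the absolute risk reduction equals the model-predicted risk itself, so the NNT is simply its reciprocal. A sketch with an illustrative risk value (not the study's fitted model):

        def number_needed_to_treat(predicted_risk: float) -> float:
            """NNT assuming the intervention fully prevents the outcome, so the
            absolute risk reduction equals the predicted risk itself."""
            return 1.0 / predicted_risk

        # a predicted shoulder-dystocia risk of ~18% at 4,500 g gives an NNT
        # of ~5.6, the order of magnitude reported above
        print(round(number_needed_to_treat(0.18), 1))  # 5.6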

  17. Generating the patterns of variation with GeoGebra: the case of polynomial approximations

    NASA Astrophysics Data System (ADS)

    Attorps, Iiris; Björk, Kjell; Radic, Mirko

    2016-01-01

    In this paper, we report a teaching experiment regarding the theory of polynomial approximations in university mathematics teaching in Sweden. The experiment was designed by applying Variation theory and by using the free dynamic mathematics software GeoGebra. The aim of this study was to investigate whether technology-assisted teaching of Taylor polynomials, compared with the traditional way of working at the university level, can support the teaching and learning of mathematical concepts and ideas. An engineering student group (n = 19) was taught Taylor polynomials with the assistance of GeoGebra while a control group (n = 18) was taught in a traditional way. The data were gathered by video recording of the lectures, by administering a post-test concerning Taylor polynomials in both groups and by posing one question regarding Taylor polynomials in the final exam for the course in Real Analysis in one variable. In the analysis of the lectures, we found Variation theory combined with GeoGebra to be a potentially powerful tool for revealing some critical aspects of Taylor polynomials. Furthermore, the research results indicated that applying Variation theory when planning the technology-assisted teaching supported and enriched students' learning opportunities in the study group compared with the control group.

  18. Analysis of longitudinal "time series" data in toxicology.

    PubMed

    Cox, C; Cory-Slechta, D A

    1987-02-01

    Studies focusing on chronic toxicity or on the time course of toxicant effect often involve repeated measurements or longitudinal observations of endpoints of interest. Experimental design considerations frequently necessitate between-group comparisons of the resulting trends. Typically, procedures such as the repeated-measures analysis of variance have been used for statistical analysis, even though the required assumptions may not be satisfied in some circumstances. This paper describes an alternative analytical approach which summarizes curvilinear trends by fitting cubic orthogonal polynomials to individual profiles of effect. The resulting regression coefficients serve as quantitative descriptors which can be subjected to group significance testing. Randomization tests based on medians are proposed to provide a comparison of treatment and control groups. Examples from the behavioral toxicology literature are considered, and the results are compared to more traditional approaches, such as repeated-measures analysis of variance.

  19. Parametric analysis of ATM solar array.

    NASA Technical Reports Server (NTRS)

    Singh, B. K.; Adkisson, W. B.

    1973-01-01

    The paper discusses the methods used for the calculation of ATM solar array performance characteristics and provides the parametric analysis of solar panels used in SKYLAB. To predict the solar array performance under conditions other than test conditions, a mathematical model has been developed. Four computer programs have been used to convert the solar simulator test data to the parametric curves. The first performs module summations, the second determines average solar cell characteristics which will cause a mathematical model to generate a curve matching the test data, the third is a polynomial fit program which determines the polynomial equations for the solar cell characteristics versus temperature, and the fourth program uses the polynomial coefficients generated by the polynomial curve fit program to generate the parametric data.

  20. Model Robust Calibration: Method and Application to Electronically-Scanned Pressure Transducers

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.; Starnes, B. Alden; Birch, Jeffery B.; Mays, James E.

    2010-01-01

    This article presents the application of a recently developed statistical regression method to the controlled instrument calibration problem. The statistical method of Model Robust Regression (MRR), developed by Mays, Birch, and Starnes, is shown to improve instrument calibration by reducing the reliance of the calibration on a predetermined parametric (e.g. polynomial, exponential, logarithmic) model. This is accomplished by allowing fits from the predetermined parametric model to be augmented by a certain portion of a fit to the residuals from the initial regression using a nonparametric (locally parametric) regression technique. The method is demonstrated for the absolute scale calibration of silicon-based pressure transducers.
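
    The MRR idea can be sketched as a parametric fit augmented by a fraction lam of a nonparametric fit to its residuals. The Gaussian-kernel smoother and the fixed mixing parameter below are stand-ins (in MRR the mixing proportion is data-driven), so treat this as a structural sketch rather than the Mays-Birch-Starnes procedure:

        import numpy as np

        def nadaraya_watson(x, y, bandwidth):
            """Gaussian-kernel smoother evaluated at the data points."""
            d = (x[:, None] - x[None, :]) / bandwidth
            w = np.exp(-0.5 * d**2)
            return (w @ y) / w.sum(axis=1)

        def model_robust_fit(x, y, degree=1, lam=0.5, bandwidth=0.2):
            """Parametric polynomial fit plus a portion lam of a nonparametric
            fit to its residuals."""
            beta = np.polyfit(x, y, degree)
            parametric = np.polyval(beta, x)
            residual_fit = nadaraya_watson(x, y - parametric, bandwidth)
            return parametric + lam * residual_fit

        x = np.linspace(0, 1, 80)
        y = 1.0 + 2.0 * x + 0.3 * np.sin(6 * x)   # mild model misspecification
        print(np.round(model_robust_fit(x, y)[:5], 3))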

  1. A DDDAS Framework for Volcanic Ash Propagation and Hazard Analysis

    DTIC Science & Technology

    2012-01-01

    [Fragmentary abstract recovered from the report form:] The polynomial chaos basis is chosen to match the probability distribution of the input variables (for example, Hermite polynomials for normally distributed parameters, or Legendre for uniformly distributed ones). Uncertain parameters and windfields will drive the simulations. Uncertainty quantification methodology (polynomial chaos quadrature), in combination with data integration, completes the DDDAS loop.

  2. Modeling State-Space Aeroelastic Systems Using a Simple Matrix Polynomial Approach for the Unsteady Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2008-01-01

    A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane and ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial of the flutter equations of motion (EOM) is formed. A technique of recasting the matrix-polynomial form of the flutter EOM into a first-order form is also presented that can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM has been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are also presented, which are the following: first, the "workhorse" methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin, along the real axis, or accurately find roots farther away from the imaginary axis of the complex plane; and, second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits of some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC-Roger's s-plane method and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.

  3. Hypothesis testing in functional linear regression models with Neyman's truncation and wavelet thresholding for longitudinal data.

    PubMed

    Yang, Xiaowei; Nie, Kun

    2008-03-15

    Longitudinal data sets in biomedical research often consist of large numbers of repeated measures. In many cases, the trajectories do not look globally linear or polynomial, making it difficult to summarize the data or test hypotheses using standard longitudinal data analysis based on various linear models. An alternative approach is to apply the approaches of functional data analysis, which directly target the continuous nonlinear curves underlying discretely sampled repeated measures. For the purposes of data exploration, many functional data analysis strategies have been developed based on various schemes of smoothing, but fewer options are available for making causal inferences regarding predictor-outcome relationships, a common task seen in hypothesis-driven medical studies. To compare groups of curves, two testing strategies with good power have been proposed for high-dimensional analysis of variance: the Fourier-based adaptive Neyman test and the wavelet-based thresholding test. Using a smoking cessation clinical trial data set, this paper demonstrates how to extend the strategies for hypothesis testing into the framework of functional linear regression models (FLRMs) with continuous functional responses and categorical or continuous scalar predictors. The analysis procedure consists of three steps: first, apply the Fourier or wavelet transform to the original repeated measures; then fit a multivariate linear model in the transformed domain; and finally, test the regression coefficients using either adaptive Neyman or thresholding statistics. Since a FLRM can be viewed as a natural extension of the traditional multiple linear regression model, the development of this model and computational tools should enhance the capacity of medical statistics for longitudinal data.

  4. Polynomial algebra of discrete models in systems biology.

    PubMed

    Veliz-Cuba, Alan; Jarrah, Abdul Salam; Laubenbacher, Reinhard

    2010-07-01

    An increasing number of discrete mathematical models are being published in Systems Biology, ranging from Boolean network models to logical models and Petri nets. They are used to model a variety of biochemical networks, such as metabolic networks, gene regulatory networks and signal transduction networks. There is increasing evidence that such models can capture key dynamic features of biological networks and can be used successfully for hypothesis generation. This article provides a unified framework that can aid the mathematical analysis of Boolean network models, logical models and Petri nets. They can be represented as polynomial dynamical systems, which allows the use of a variety of mathematical tools from computer algebra for their analysis. Algorithms are presented for the translation into polynomial dynamical systems. Examples are given of how polynomial algebra can be used for the model analysis. Contact: alanavc@vt.edu. Supplementary data are available at Bioinformatics online.
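
    The translation is direct over the two-element field F2: NOT x becomes 1 + x, x AND y becomes xy, and x OR y becomes x + y + xy (all mod 2), so every Boolean update rule is a polynomial. A toy two-gene network (not from the paper) makes the point:

        # Boolean rules as polynomials over F2 (arithmetic mod 2):
        #   NOT x   -> 1 + x
        #   x AND y -> x * y
        #   x OR y  -> x + y + x*y
        def f1(x1, x2):          # x1' = NOT x2
            return (1 + x2) % 2

        def f2(x1, x2):          # x2' = x1 OR x2
            return (x1 + x2 + x1 * x2) % 2

        def step(state):
            x1, x2 = state
            return (f1(x1, x2), f2(x1, x2))

        # exhaustive transition table of the polynomial dynamical system
        for s in [(0, 0), (0, 1), (1, 0), (1, 1)]:
            print(s, "->", step(s))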

  5. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for the test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials: (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind, and (4) the Chebyshev polynomials of the fourth kind. The maximum absolute error and root mean square error are calculated for the illustrated examples and presented in tables for comparison purposes. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.
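
    These reductions are easy to verify numerically with SciPy: for alpha = beta = 0 the Jacobi polynomials are exactly the Legendre polynomials, and for alpha = beta = 1/2 they are proportional to the Chebyshev polynomials of the second kind (matching after rescaling at x = 1, where U_n(1) = n + 1). A quick check:

        import numpy as np
        from scipy.special import eval_jacobi, eval_legendre, eval_chebyu

        x = np.linspace(-1, 1, 5)
        n = 4

        # alpha = beta = 0: Jacobi reduces exactly to Legendre
        print(np.allclose(eval_jacobi(n, 0, 0, x), eval_legendre(n, x)))  # True

        # alpha = beta = 1/2: proportional to Chebyshev of the second kind
        scale = (n + 1) / eval_jacobi(n, 0.5, 0.5, 1.0)
        print(np.allclose(scale * eval_jacobi(n, 0.5, 0.5, x),
                          eval_chebyu(n, x)))  # True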

  6. Development of surrogate models for the prediction of the flow around an aircraft propeller

    NASA Astrophysics Data System (ADS)

    Salpigidou, Christina; Misirlis, Dimitris; Vlahostergios, Zinon; Yakinthos, Kyros

    2018-05-01

    In the present work, the derivation of two surrogate models (SMs) for modelling the flow around a propeller for small aircraft is presented. Both methodologies use derived functions based on computations with the detailed propeller geometry. The computations were performed using the k-ω shear stress transport model for turbulence. In the SMs, the propeller was modelled in a computational domain of disk-like geometry, where source terms were introduced in the momentum equations. In the first SM, the source terms were polynomial functions of swirl and thrust, mainly related to the propeller radius. In the second SM, regression analysis was used to correlate the source terms with the velocity distribution through the propeller. The proposed SMs achieved faster convergence relative to the detailed model, while also providing results closer to the available operational data. The regression-based model was the most accurate and required less computational time for convergence.

  7. Robust stability of fractional order polynomials with complicated uncertainty structure

    PubMed Central

    Şenol, Bilal; Pekař, Libor

    2017-01-01

    The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173

  8. Credible Set Estimation, Analysis, and Applications in Synthetic Aperture Radar Canonical Feature Extraction

    DTIC Science & Technology

    2015-03-26

    [Abstract unavailable; the retrievable fragments are from the report's list of figures: a figure depicting the CSE implementation for use with CV Domes data, and validation results for N = 1 observation at sampling intervals of 1.0 and 0.01, using Legendre polynomials of order Nl = 5.]

  9. Direct solution for thermal stresses in a nose cap under an arbitrary axisymmetric temperature distribution

    NASA Technical Reports Server (NTRS)

    Davis, Randall C.

    1988-01-01

    The design of a nose cap for a hypersonic vehicle is an iterative process requiring a rapid, easy to use and accurate stress analysis. The objective of this paper is to develop such a stress analysis technique from a direct solution of the thermal stress equations for a spherical shell. The nose cap structure is treated as a thin spherical shell with an axisymmetric temperature distribution. The governing differential equations are solved by expressing the stress solution to the thermoelastic equations in terms of a series of derivatives of the Legendre polynomials. The process of finding the coefficients for the series solution in terms of the temperature distribution is generalized by expressing the temperature along the shell and through the thickness as a polynomial in the spherical angle coordinate. Under this generalization the orthogonality property of the Legendre polynomials leads to a sequence of integrals involving powers of the spherical shell coordinate times the derivative of the Legendre polynomials. The coefficients of the temperature polynomial appear outside of these integrals. Thus, the integrals are evaluated only once and their values tabulated for use with any arbitrary polynomial temperature distribution.

  10. Spillover in the Academy: Marriage Stability and Faculty Evaluations.

    ERIC Educational Resources Information Center

    Ludlow, Larry H.; Alvarez-Salvat, Rose M.

    2001-01-01

    Studied the spillover between family and work by examining the link between marital status and work performance across marriage, divorce, and remarriage. A polynomial regression model was fit to the data from 78 evaluations of an individual professor, and a cubic curve through the 3 periods was statistically significant. (SLD)

  11. Genetic parameters for test-day yield of milk, fat and protein in buffaloes estimated by random regression models.

    PubMed

    Aspilcueta-Borquis, Rúsbel R; Araujo Neto, Francisco R; Baldi, Fernando; Santos, Daniel J A; Albuquerque, Lucia G; Tonhati, Humberto

    2012-08-01

    The test-day yields of milk, fat and protein were analysed from 1433 first lactations of buffaloes of the Murrah breed, daughters of 113 sires from 12 herds in the state of São Paulo, Brazil, born between 1985 and 2007. For the test-day yields, 10 monthly classes of lactation days were considered. The contemporary groups were defined as the herd-year-month of the test day. Random additive genetic, permanent environmental and residual effects were included in the model. The fixed effects considered were the contemporary group, number of milkings (1 or 2 milkings), linear and quadratic effects of the covariable cow age at calving and the mean lactation curve of the population (modelled by third-order Legendre orthogonal polynomials). The random additive genetic and permanent environmental effects were estimated by means of regression on third- to sixth-order Legendre orthogonal polynomials. The residual variances were modelled with a homogeneous structure and various heterogeneous classes. According to the likelihood-ratio test, the best model for milk and fat production was that with four residual variance classes, while a third-order Legendre polynomial was best for the additive genetic effect for milk and fat yield, a fourth-order polynomial was best for the permanent environmental effect for milk production and a fifth-order polynomial was best for fat production. For protein yield, the best model was that with three residual variance classes and third- and fourth-order Legendre polynomials for the additive genetic and permanent environmental effects, respectively. The heritability estimates for the characteristics analysed were moderate, varying from 0.16±0.05 to 0.29±0.05 for milk yield, 0.20±0.05 to 0.30±0.08 for fat yield and 0.18±0.06 to 0.27±0.08 for protein yield. The estimates of the genetic correlations between the tests varied from 0.18±0.120 to 0.99±0.002; from 0.44±0.080 to 0.99±0.004; and from 0.41±0.080 to 0.99±0.004, for milk, fat and protein production, respectively, indicating that whatever the selection criterion used, indirect genetic gains can be expected throughout the lactation curve.

  12. Multiple regression technique for Pth degree polynomials with and without linear cross products

    NASA Technical Reports Server (NTRS)

    Davis, J. W.

    1973-01-01

    A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, which show the output formats and typical plots comparing computer results to each set of input data.
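
    The two design matrices the technique distinguishes are easy to exhibit: pure powers of each variable up to degree P, optionally augmented with the pairwise linear cross products. A minimal sketch on synthetic data (the original programs also report the fit statistics listed above):

        import numpy as np

        def design_matrix(X, degree, cross_products=False):
            """Columns: intercept; x_j, x_j^2, ..., x_j^degree per variable;
            optionally the pairwise linear cross products x_i * x_j."""
            n, k = X.shape
            cols = [np.ones(n)]
            for j in range(k):
                for p in range(1, degree + 1):
                    cols.append(X[:, j] ** p)
            if cross_products:
                for i in range(k):
                    for j in range(i + 1, k):
                        cols.append(X[:, i] * X[:, j])
            return np.column_stack(cols)

        rng = np.random.default_rng(4)
        X = rng.uniform(-1, 1, (100, 2))
        y = 1 + X[:, 0] - 2 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 1]
        A = design_matrix(X, degree=2, cross_products=True)
        beta = np.linalg.lstsq(A, y, rcond=None)[0]
        print(np.round(beta, 3))   # recovers 1, 1, 0, 0, -2, 0.5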

  13. Short communication: Genetic variation of saturated fatty acids in Holsteins in the Walloon region of Belgium.

    PubMed

    Arnould, V M-R; Hammami, H; Soyeurt, H; Gengler, N

    2010-09-01

    Random regression test-day models using Legendre polynomials are commonly used for the estimation of genetic parameters and genetic evaluation for test-day milk production traits. However, some researchers have reported that these models present some undesirable properties, such as the overestimation of variances at the edges of lactation. Describing the genetic variation of saturated fatty acids expressed in milk fat might require the testing of different models. Therefore, 3 different functions were used and compared to take into account the lactation curve: (1) Legendre polynomials with the same order as currently applied in the genetic model for production traits; (2) linear splines with 10 knots; and (3) linear splines with the same 10 knots reduced to 3 parameters. The criteria used were Akaike's information and Bayesian information criteria, percentage square biases, and the log-likelihood function. These criteria identified the Legendre polynomials and the linear splines with 10 knots reduced to 3 parameters as the most useful models. Reducing more complex models using eigenvalues seemed appealing because the resulting models are less time demanding and can reduce convergence difficulties, as convergence properties also seemed to be improved. Finally, the results showed that the reduced spline model was very similar to the Legendre polynomials model. Copyright (c) 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
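
    A linear regression spline of the kind compared here is just ordinary regression on truncated-line basis functions (x - knot)_+, which with 10 knots yields 12 coefficients; hence the appeal of the 3-parameter reduction. A sketch with illustrative knots:

        import numpy as np

        def linear_spline_basis(x, knots):
            """Design matrix for a linear regression spline: intercept, x, and
            one truncated line (x - knot)_+ per knot."""
            x = np.asarray(x, float)
            cols = [np.ones_like(x), x]
            cols += [np.clip(x - k, 0.0, None) for k in knots]
            return np.column_stack(cols)

        dim = np.linspace(5, 305, 60)        # days in milk (illustrative)
        knots = np.linspace(30, 280, 10)     # 10 illustrative knots
        B = linear_spline_basis(dim, knots)
        print(B.shape)                       # (60, 12)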

  14. Regression discontinuity was a valid design for dichotomous outcomes in three randomized trials.

    PubMed

    van Leeuwen, Nikki; Lingsma, Hester F; Mooijaart, Simon P; Nieboer, Daan; Trompet, Stella; Steyerberg, Ewout W

    2018-06-01

    Regression discontinuity (RD) is a quasi-experimental design that may provide valid estimates of treatment effects in case of continuous outcomes. We aimed to evaluate validity and precision in the RD design for dichotomous outcomes. We performed validation studies in three large randomized controlled trials (RCTs) (Corticosteroid Randomization After Significant Head injury [CRASH], the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries [GUSTO], and PROspective Study of Pravastatin in elderly individuals at risk of vascular disease [PROSPER]). To mimic the RD design, we selected patients above and below a cutoff (e.g., age 75 years) randomized to treatment and control, respectively. Adjusted logistic regression models using restricted cubic splines (RCS) and polynomials and local logistic regression models estimated the odds ratio (OR) for treatment, with 95% confidence intervals (CIs) to indicate precision. In CRASH, treatment increased mortality with OR 1.22 [95% CI 1.06-1.40] in the RCT. The RD estimates were 1.42 (0.94-2.16) and 1.13 (0.90-1.40) with RCS adjustment and local regression, respectively. In GUSTO, treatment reduced mortality (OR 0.83 [0.72-0.95]), with more extreme estimates in the RD analysis (OR 0.57 [0.35; 0.92] and 0.67 [0.51; 0.86]). In PROSPER, similar RCT and RD estimates were found, again with less precision in RD designs. We conclude that the RD design provides similar but substantially less precise treatment effect estimates compared with an RCT, with local regression being the preferred method of analysis. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Random regression analyses using B-splines to model growth of Australian Angus cattle

    PubMed Central

    Meyer, Karin

    2005-01-01

    Regression on the basis function of B-splines has been advocated as an alternative to orthogonal polynomials in random regression analyses. Basic theory of splines in mixed model analyses is reviewed, and estimates from analyses of weights of Australian Angus cattle from birth to 820 days of age are presented. Data comprised 84 533 records on 20 731 animals in 43 herds, with a high proportion of animals with 4 or more weights recorded. Changes in weights with age were modelled through B-splines of age at recording. A total of thirteen analyses, considering different combinations of linear, quadratic and cubic B-splines and up to six knots, were carried out. Results showed good agreement for all ages with many records, but fluctuated where data were sparse. On the whole, analyses using B-splines appeared more robust against "end-of-range" problems and yielded more consistent and accurate estimates of the first eigenfunctions than previous, polynomial analyses. A model fitting quadratic B-splines, with knots at 0, 200, 400, 600 and 821 days and a total of 91 covariance components, appeared to be a good compromise between detailedness of the model, number of parameters to be estimated, plausibility of results, and fit, measured as residual mean square error. PMID:16093011
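
    With SciPy (>= 1.8) the B-spline design matrix for the quadratic model singled out above can be built directly from the reported knots; fitting the fixed part of the model is then ordinary least squares on these columns (the full mixed-model variance estimation, of course, requires specialized software):

        import numpy as np
        from scipy.interpolate import BSpline

        k = 2                                # quadratic B-splines
        interior = [200.0, 400.0, 600.0]
        # boundary knots repeated k + 1 times to form the full knot vector
        t = np.r_[[0.0] * (k + 1), interior, [821.0] * (k + 1)]

        ages = np.linspace(0.0, 820.0, 50)
        B = BSpline.design_matrix(ages, t, k).toarray()
        print(B.shape)   # (50, 6): one column per B-spline basis function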

  16. Continuous monitoring of fetal scalp temperature in labor: a new technology validated in a fetal lamb model.

    PubMed

    Lavesson, Tony; Amer-Wåhlin, Isis; Hansson, Stefan; Ley, David; Marsál, Karel; Olofsson, Per

    2010-06-01

    To evaluate a new technical equipment for continuous recording of human fetal scalp temperature in labor. Experimental animal study. Two temperature sensors were placed subcutaneously and intracranially on the forehead of 10 fetal lambs and connected to a temperature monitoring system. The system records temperatures simultaneously on-line and stores data to be analyzed off-line. Throughout the experiment, the fetus was oxygenated via the umbilical cord circulation. Asphyxia was induced by intermittent cord compression, as assessed by pH in jugular vein blood. The intracranial (ICT) and subcutaneous (SCT) temperatures were compared with simple and polynomial regression analyses. Absolute and delta ICT and SCT changes. ICT and SCT were both successfully recorded in all 10 cases. With increasing acidosis, the temperatures decreased. The correlation coefficient between ICT and SCT had a range of 0.76-0.97 (median 0.88) by simple linear regression and 0.80-0.99 (median 0.89) by second grade polynomial regression. After an initial system stabilization period of 10 minutes, the delta temperature values (ICT minus SCT) were less than 1.5 degrees C throughout the experiment in all but one case. The fetal forehead SCT mirrored the ICT closely, with the ICT being higher.

  17. Nested polynomial trends for the improvement of Gaussian process-based predictors

    NASA Astrophysics Data System (ADS)

    Perrin, G.; Soize, C.; Marque-Pucheu, S.; Garnier, J.

    2017-10-01

    The role of simulation keeps increasing for the sensitivity analysis and the uncertainty quantification of complex systems. Such numerical procedures are generally based on the processing of a huge number of code evaluations. When the computational cost associated with one particular evaluation of the code is high, such direct approaches, based on the computer code only, are not affordable. Surrogate models therefore have to be introduced to interpolate the information given by a fixed set of code evaluations to the whole input space. When confronted with deterministic mappings, Gaussian process regression (GPR), or kriging, presents a good compromise between complexity, efficiency and error control. Such a method considers the quantity of interest of the system as a particular realization of a Gaussian stochastic process, whose mean and covariance functions have to be identified from the available code evaluations. In this context, this work proposes an innovative parametrization of this mean function, which is based on the composition of two polynomials. This approach is particularly relevant for the approximation of strongly nonlinear quantities of interest from very little information. After presenting the theoretical basis of this method, this work compares its efficiency to alternative approaches on a series of examples.

  18. Monitoring of bone regeneration process by means of texture analysis

    NASA Astrophysics Data System (ADS)

    Kokkinou, E.; Boniatis, I.; Costaridou, L.; Saridis, A.; Panagiotopoulos, E.; Panayiotakis, G.

    2009-09-01

    An image analysis method is proposed for the monitoring of the regeneration of the tibial bone. For this purpose, 130 digitized radiographs of 13 patients, who had undergone tibial lengthening by the Ilizarov method, were studied. For each patient, 10 radiographs, taken at an equal number of successive postoperative time points, were available. Employing available software, 3 Regions Of Interest (ROIs), corresponding to (a) the upper, (b) the central, and (c) the lower aspect of the gap, where bone regeneration was expected to occur, were determined on each radiograph. Employing custom-developed algorithms: (i) a number of textural features were generated from each of the ROIs, and (ii) a texture-feature-based regression model was designed for the quantitative monitoring of the bone regeneration process. Statistically significant differences (p < 0.05) were derived for the initial and the final textural feature values, generated from the first and the last postoperatively obtained radiographs, respectively. A quadratic polynomial regression equation fitted the data adequately (r² = 0.9, p < 0.001). The suggested method may contribute to the monitoring of the tibial bone regeneration process.

  19. Prediction of Spirometric Forced Expiratory Volume (FEV1) Data Using Support Vector Regression

    NASA Astrophysics Data System (ADS)

    Kavitha, A.; Sujatha, C. M.; Ramakrishnan, S.

    2010-01-01

    In this work, prediction of the forced expiratory volume in 1 second (FEV1) in the pulmonary function test is carried out using a spirometer and support vector regression analysis. Pulmonary function data were measured with a flow volume spirometer from volunteers (N = 175) using a standard data acquisition protocol. The acquired data were then used to predict FEV1. Support vector machines with polynomial kernel functions of four different orders were employed to predict the values of FEV1. The performance is evaluated by computing the average prediction accuracy for normal and abnormal cases. Results show that support vector machines are capable of predicting FEV1 in both normal and abnormal cases, and the average prediction accuracy for normal subjects was higher than that for abnormal subjects. Accuracy in prediction was found to be high for a regularization constant of C = 10. Since FEV1 is the most significant parameter in the analysis of spirometric data, it appears that this method of assessment is useful in diagnosing pulmonary abnormalities even with incomplete data and data with poor recording.
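
    A minimal scikit-learn sketch of the model class described (polynomial-kernel support vector regression with the reported C = 10); the features and responses below are synthetic placeholders, not the spirometric data:

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)
        X = rng.uniform(0, 1, (175, 6))     # stand-in spirometric features
        y = 2.5 - 1.2 * X[:, 0] + 0.8 * X[:, 1] ** 2 \
            + 0.1 * rng.standard_normal(175)

        model = SVR(kernel="poly", degree=2, C=10.0)
        model.fit(X, y)
        print(round(model.score(X, y), 3))  # in-sample R^2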

  20. Three Gorges Dam: polynomial regression modeling of water level and the density of schistosome-transmitting snails Oncomelania hupensis.

    PubMed

    Yang, Ya; Gao, Jianchuan; Cheng, Wanting; Pan, Xiang; Yang, Yu; Chen, Yue; Dai, Qingqing; Zhu, Lan; Zhou, Yibiao; Jiang, Qingwu

    2018-03-14

    Schistosomiasis remains a major public health concern in China. Oncomelania hupensis (O. hupensis) is the sole intermediate host of Schistosoma japonicum, and changes in its distribution and density influence the endemicity of S. japonicum. The Three Gorges Dam (TGD) has substantially changed the downstream water levels of the dam. This study investigated the quantitative relationship between flooding duration and the density of the snail population. Two bottomlands without any snail control measures were selected in Yueyang City, Hunan Province. Data for the density of the snail population and the water level in both spring and autumn were collected for the period 2009-2015. Polynomial regression analysis was applied to explore the relationship between flooding duration and the density of the snail population. Data showed a convex relationship between spring snail density and the flooding duration of the previous year (adjusted R², aR² = 0.61). The spring snail density remained low when the flooding duration was fewer than 50 days in the previous year, was highest when the flooding duration was 123 days, and decreased thereafter. There was a similar convex relationship between autumn snail density and the flooding duration of the current year (aR² = 0.77). The snail density was low when the flooding duration was fewer than 50 days and was highest when the flooding duration was 139 days. There was a convex relationship between flooding duration and the spring or autumn snail density. The snail density was highest when flooding lasted about four to five months.
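
    The "peak duration" figures quoted above fall out of the fitted quadratic directly: with density ~ a*days^2 + b*days + c and a < 0, the maximizing duration is -b / (2a). A sketch on synthetic data constructed to peak near 125 days:

        import numpy as np

        def peak_flooding_duration(days, density):
            """Vertex of the fitted quadratic: duration of maximum density."""
            a, b, c = np.polyfit(days, density, 2)
            return -b / (2 * a)

        rng = np.random.default_rng(6)
        days = rng.uniform(20, 220, 40)
        dens = 3.0 - 0.0004 * (days - 125.0) ** 2 \
               + 0.05 * rng.standard_normal(40)
        print(round(peak_flooding_duration(days, dens), 1))   # ~125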

  1. Random regression analyses using B-spline functions to model growth of Nellore cattle.

    PubMed

    Boligon, A A; Mercadante, M E Z; Lôbo, R B; Baldi, F; Albuquerque, L G

    2012-02-01

    The objective of this study was to estimate (co)variance components using random regression on B-spline functions for weight records obtained from birth to adulthood. A total of 82 064 weight records of 8145 females, obtained from the data bank of the Nellore Breeding Program (PMGRN/Nellore Brazil), which started in 1987, were used. The models included direct additive and maternal genetic effects and animal and maternal permanent environmental effects as random. Contemporary group and dam age at calving (linear and quadratic effect) were included as fixed effects, and orthogonal Legendre polynomials of age (cubic regression) were considered as a random covariate. The random effects were modeled using B-spline functions, considering linear, quadratic and cubic polynomials for each individual segment. Residual variances were grouped in five age classes. Direct additive genetic and animal permanent environmental effects were modeled using up to seven knots (six segments). A single segment with two knots at the end points of the curve was used for the estimation of maternal genetic and maternal permanent environmental effects. A total of 15 models were studied, with the number of parameters ranging from 17 to 81. The models that used B-splines were compared with multi-trait analyses with nine weight traits and with a random regression model that used orthogonal Legendre polynomials. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and animal permanent environmental effect and two knots for the maternal additive genetic effect and maternal permanent environmental effect, was the most appropriate and parsimonious model to describe the covariance structure of the data. Selection for higher weight at young ages should be performed taking into account the correlated increase in mature cow weight. This is particularly important in most Nellore beef cattle production systems, where the cow herd is maintained on range conditions. There is limited scope for modifying the growth curve of Nellore cattle so as to select for rapid growth at young ages while keeping adult weight constant.

  2. Comparing Inference Approaches for RD Designs: A Reexamination of the Effect of Head Start on Child Mortality

    ERIC Educational Resources Information Center

    Cattaneo, Matias D.; Titiunik, Rocío; Vazquez-Bare, Gonzalo

    2017-01-01

    The regression discontinuity (RD) design is a popular quasi-experimental design for causal inference and policy evaluation. The most common inference approaches in RD designs employ "flexible" parametric and nonparametric local polynomial methods, which rely on extrapolation and large-sample approximations of conditional expectations…

  3. An Exploratory Investigation of the Role of Openness in Relationship Quality among Emerging Adult Chinese Couples

    PubMed Central

    Zhou, Yixin; Wang, Kexin; Chen, Shuang; Zhang, Jianxin; Zhou, Mingjie

    2017-01-01

    This study tested emerging adult couples' openness and its fit effect on their romantic relationship quality using quadratic polynomial regression and response surface analysis. Participants were 260 emerging adult dyads. Both dyads' openness and relationship quality were measured. The results showed that (1) female and male openness contribute differently to relationship quality; (2) couples with similarly high openness experience better relationship quality than those with similarly low openness; and (3) when dyadic openness is dissimilar, it is better for it to be either relatively high or relatively low than moderate. These findings highlight the role of openness in emerging adults' romantic relationships from a dyadic angle. PMID:28360875
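
    The standard congruence machinery behind such analyses regresses the outcome on X, Y, X^2, XY and Y^2 and then summarizes the fitted surface along the congruence (Y = X) and incongruence (Y = -X) lines via the conventional a1-a4 quantities. A sketch on synthetic dyads (the coefficient-to-surface formulas are the usual ones from this literature, not taken from the paper):

        import numpy as np

        def response_surface(X, Y, Z):
            """Quadratic polynomial regression Z ~ X + Y + X^2 + XY + Y^2 with
            slopes/curvatures along the (in)congruence lines."""
            A = np.column_stack([np.ones_like(X), X, Y, X**2, X * Y, Y**2])
            b0, b1, b2, b3, b4, b5 = np.linalg.lstsq(A, Z, rcond=None)[0]
            return {"a1 slope Y=X": b1 + b2, "a2 curv Y=X": b3 + b4 + b5,
                    "a3 slope Y=-X": b1 - b2, "a4 curv Y=-X": b3 - b4 + b5}

        rng = np.random.default_rng(7)
        X, Y = rng.standard_normal(260), rng.standard_normal(260)  # openness
        Z = 4 - 0.5 * (X - Y) ** 2 + 0.2 * rng.standard_normal(260)
        print(response_surface(X, Y, Z))  # a4 ~ -2: quality falls with mismatch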

  4. A robust nonparametric framework for reconstruction of stochastic differential equation models

    NASA Astrophysics Data System (ADS)

    Rajabzadeh, Yalda; Rezaie, Amir Hossein; Amindavar, Hamidreza

    2016-05-01

    In this paper, we employ a nonparametric framework to robustly estimate the functional forms of drift and diffusion terms from discrete stationary time series. The proposed method significantly improves the accuracy of the parameter estimation. In this framework, drift and diffusion coefficients are modeled through orthogonal Legendre polynomials. We employ the least squares regression approach along with the Euler-Maruyama approximation method to learn the coefficients of the stochastic model. Next, a numerical discrete construction of the mean squared prediction error (MSPE) is established to select the order of the Legendre polynomials in the drift and diffusion terms. We show numerically that the new method is robust against variation in sample size and sampling rate. The performance of our method in comparison with the kernel-based regression (KBR) method is demonstrated through simulation and real data. In the case of real data, we test our method on discriminating healthy electroencephalogram (EEG) signals from epileptic ones. We also demonstrate the efficiency of the method through prediction on financial data. In both simulation and real data, our algorithm outperforms the KBR method.
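
    The Euler-Maruyama regression step itself is brief: conditional moments of the increments identify the drift and squared diffusion, each expanded in Legendre polynomials of the state. A self-contained sketch on a simulated Ornstein-Uhlenbeck path (the paper adds MSPE-based order selection and robustness machinery on top of this):

        import numpy as np

        def fit_drift_diffusion(x, dt, order=3):
            """Least-squares Legendre expansions of drift f and diffusion g**2
            via E[dx | x] = f(x) dt and E[dx**2 | x] ~ g(x)**2 dt."""
            z = 2 * (x[:-1] - x.min()) / (x.max() - x.min()) - 1  # to [-1, 1]
            B = np.polynomial.legendre.legvander(z, order)
            dx = np.diff(x)
            drift_c = np.linalg.lstsq(B, dx / dt, rcond=None)[0]
            diff2_c = np.linalg.lstsq(B, dx**2 / dt, rcond=None)[0]
            return drift_c, diff2_c

        # Ornstein-Uhlenbeck test path: dx = -x dt + 0.5 dW
        rng, dt, n = np.random.default_rng(8), 1e-2, 100_000
        x = np.zeros(n)
        for i in range(n - 1):
            x[i + 1] = x[i] - x[i] * dt \
                       + 0.5 * np.sqrt(dt) * rng.standard_normal()
        drift_c, diff2_c = fit_drift_diffusion(x, dt)
        print(np.round(drift_c, 2), np.round(diff2_c, 2))  # f ~ -x; g^2 ~ 0.25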

  5. Explorations of the Gauss-Lucas Theorem

    ERIC Educational Resources Information Center

    Brilleslyper, Michael A.; Schaubroeck, Beth

    2017-01-01

    The Gauss-Lucas Theorem is a classical complex analysis result that states the critical points of a single-variable complex polynomial lie inside the closed convex hull of the zeros of the polynomial. Although the result is well-known, it is not typically presented in a first course in complex analysis. The ease with which modern technology allows…

  6. Baecklund transformation, Lax pair, and solutions for the Caudrey-Dodd-Gibbon equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qu Qixing; Sun Kun; Jiang Yan

    2011-01-15

    By using Bell polynomials and symbolic computation, we investigate the Caudrey-Dodd-Gibbon equation analytically. Through a generalization of Bell polynomials, its bilinear form is derived, based on which the periodic wave solution and soliton solutions are presented, together with a graphical analysis of the solitons. Furthermore, the Baecklund transformation and Lax pair are derived via the Bell exponential polynomials. Finally, the Ablowitz-Kaup-Newell-Segur system is constructed.

  7. Association of overjet and overbite with esthetic impairments of oral health-related quality of life.

    PubMed

    Sierwald, Ira; John, Mike T; Schierz, Oliver; Jost-Brinkmann, Paul-Georg; Reissmann, Daniel R

    2015-09-01

    Esthetics is an important part of quality of life and a frequent reason for orthodontic treatment demand. The aim of this study was to investigate whether esthetic impairments related to overjet and overbite can be assessed with an established oral health-related quality of life instrument. Data from 1968 participants (age: 16-90 years; 69.8% female) from three German surveys were analyzed. Esthetic impairments of oral health-related quality of life were measured with four questions of the Oral Health Impact Profile (OHIP), which comprise esthetic aspects of oral health-related quality of life. Higher values represent greater esthetic impairment (sum score: 0-16). Overbite and overjet values were categorized (≤ −1 mm, 0-1 mm, 2-3 mm, 4-5 mm, ≥ 6 mm). The specific impact of each category on esthetic impairment, relative to the reference category (2-3 mm), was calculated in linear regression analyses. The type of relationship and the specific impact of overbite and overjet were evaluated in regression analyses with fractional polynomials. Overbite ranged from −5 to 15 mm (mean: 3.2 mm) and overjet from −7 to 19 mm (mean: 3.1 mm). Both an increase and a decrease in overjet, relative to the reference category, resulted in greater esthetic impairment of oral health-related quality of life. However, in this model, only the effect for increased overjet was statistically significant (4-5 mm: +0.4 OHIP points; ≥ 6 mm: +0.9 OHIP points). In the regression analysis with fractional polynomials, both an increase and a decrease in overjet resulted in more esthetic impairment, characterized by a U-shaped relationship. No association could be verified for overbite. A substantial increase or decrease of overjet from the reference values is associated with esthetic impairments of oral health-related quality of life, whereas the extent of overbite seems to have no impact on esthetics.

  8. The use of WaveLight® Contoura to create a uniform cornea: the LYRA Protocol. Part 3: the results of 50 treated eyes

    PubMed Central

    Motwani, Manoj

    2017-01-01

    Purpose To demonstrate how using the WaveLight® Contoura measured astigmatism and axis eliminates corneal astigmatism and creates uniformly shaped corneas. Patients and methods A retrospective analysis was conducted of the first 50 eyes to have bilateral full WaveLight® Contoura LASIK correction of measured astigmatism and axis (vs conventional manifest refraction), using the Layer Yolked Reduction of Astigmatism Protocol in all cases. All patients had astigmatism corrected, and had at least 1 week of follow-up. Accuracy relative to the desired refractive goal was assessed by postoperative refraction and aberration reduction via calculation of polynomials, and postoperative vision was analyzed as a secondary goal. Results The average difference in astigmatic power from manifest to measured was 0.5462 D (range 0-1.69 D), and the average difference in axis was 14.94° (range 0°-89°). Forty-seven of 50 eyes had a goal of plano, and 3 had a monovision goal. Astigmatism was fully eliminated from all but 2 eyes, and 1 eye had regression with astigmatism. Of the eyes with plano as the goal, 80.85% were 20/15 or better, and 100% were 20/20 or better. Polynomial analysis postoperatively showed that at 6.5 mm, the average C3 was reduced by 86.5% and the average C5 by 85.14%. Conclusions Using WaveLight® Contoura measured astigmatism and axis removes higher order aberrations and allows for the creation of a more uniform cornea with accurate removal of astigmatism and reduction of aberration polynomials. WaveLight® Contoura successfully links the refractive correction layer and aberration repair layer using the Layer Yolked Reduction of Astigmatism Protocol to demonstrate how aberration removal can affect refractive correction. PMID:28553071

  9. A New Navigation Satellite Clock Bias Prediction Method Based on Modified Clock-bias Quadratic Polynomial Model

    NASA Astrophysics Data System (ADS)

    Wang, Y. P.; Lu, Z. P.; Sun, D. S.; Wang, N.

    2016-01-01

    In order to better express the characteristics of satellite clock bias (SCB) and improve SCB prediction precision, this paper proposes a new SCB prediction model which takes the physical characteristics of the space-borne atomic clock, the cyclic variation, and the random part of SCB into consideration. First, the new model employs a quadratic polynomial model with periodic items to fit and extract the trend term and cyclic term of the SCB; then, based on the characteristics of the fitting residuals, a time series ARIMA (Auto-Regressive Integrated Moving Average) model is used to model the residuals; eventually, the results from the two models are combined to obtain the final SCB prediction values. Finally, this paper uses precise SCB data from the IGS (International GNSS Service) to conduct prediction tests, and the results show that the proposed model is effective and has better prediction performance than the quadratic polynomial model, grey model, and ARIMA model. In addition, the new method also overcomes the insufficiency of the ARIMA model in model recognition and order determination.
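
    A schematic of the two-stage construction, assuming NumPy and statsmodels; the periodic term, ARIMA order, and time grid below are placeholders rather than the paper's settings:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def predict_scb(t, y, t_new, period, order=(1, 0, 1)):
    """Quadratic-plus-periodic trend fit, ARIMA on the residuals,
    then recombine the two forecasts. Assumes t_new continues t
    on the same uniform grid."""
    X = lambda s: np.column_stack([np.ones_like(s), s, s**2,
                                   np.sin(2*np.pi*s/period),
                                   np.cos(2*np.pi*s/period)])
    beta, *_ = np.linalg.lstsq(X(t), y, rcond=None)
    resid = y - X(t) @ beta
    fc = ARIMA(resid, order=order).fit().forecast(steps=len(t_new))
    return X(t_new) @ beta + fc
```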

  10. Cosmographic analysis with Chebyshev polynomials

    NASA Astrophysics Data System (ADS)

    Capozziello, Salvatore; D'Agostino, Rocco; Luongo, Orlando

    2018-05-01

    The limits of standard cosmography are here revised addressing the problem of error propagation during statistical analyses. To do so, we propose the use of Chebyshev polynomials to parametrize cosmic distances. In particular, we demonstrate that building up rational Chebyshev polynomials significantly reduces error propagation with respect to standard Taylor series. This technique provides unbiased estimations of the cosmographic parameters and performs significantly better than previous numerical approximations. To figure this out, we compare rational Chebyshev polynomials with Padé series. In addition, we theoretically evaluate the convergence radius of the (1,1) Chebyshev rational polynomial and compare it with the convergence radii of Taylor and Padé approximations. We thus focus on regions in which the convergence of Chebyshev rational functions is better than that of standard approaches. With this recipe, as high-redshift data are employed, rational Chebyshev polynomials remain highly stable and enable one to derive highly accurate analytical approximations of Hubble's rate in terms of the cosmographic series. Finally, we check our theoretical predictions by setting bounds on cosmographic parameters through Monte Carlo integration techniques based on the Metropolis-Hastings algorithm. We apply our technique to high-redshift cosmic data, using the Joint Light-curve Analysis supernovae sample and the most recent versions of Hubble parameter and baryon acoustic oscillation measurements. We find that cosmography with Taylor series fails to be predictive with the aforementioned data sets, while it turns out to be much more stable using the Chebyshev approach.
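
    A toy version of the parametrization step (assuming NumPy; the data and degree are placeholders). Note that this is a plain Chebyshev series rather than the paper's rational (Padé-like) Chebyshev construction; even so, the fit is carried out on the polynomials' natural [-1, 1] domain, which is part of what stabilizes it relative to a raw Taylor expansion:

```python
import numpy as np

z = np.array([0.05, 0.2, 0.5, 0.8, 1.1, 1.4])        # hypothetical redshifts
mu = np.array([36.6, 39.9, 42.2, 43.4, 44.2, 44.8])  # hypothetical distance moduli
cheb = np.polynomial.Chebyshev.fit(z, mu, deg=3)     # maps z into [-1, 1] internally
print(cheb(0.3))                                     # distance modulus at z = 0.3
```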

  11. Numeric model to predict the location of market demand and economic order quantity for retailers of supply chain

    NASA Astrophysics Data System (ADS)

    Fradinata, Edy; Marli Kesuma, Zurnila

    2018-05-01

    Polynomial and spline regression are the numerical models used here to compare method performance, build distance-relationship models for cement retailers in Banda Aceh, predict the market area for retailers, and compute the economic order quantity (EOQ). These numerical models differ in accuracy, as measured by the mean squared error (MSE). The distance relationships between retailers are used to identify the density of retailers in the town. The dataset was collected from cement retailers' sales records together with global positioning system (GPS) coordinates. The characteristics of the sales dataset were plotted to assess the goodness of fit of the quadratic, cubic, and fourth-order polynomial methods. On the real sales dataset, the polynomial models are fitted to the relationship between the x-abscissa and y-ordinate. This research yields several results: the four fitted models are useful for predicting the market area of a retailer under competition, the performance of the methods can be compared, the distance relationships between retailers are quantified, and an inventory policy based on the economic order quantity is derived. The results show that areas with a high density of retailers coincide with a growing population and construction projects. The spline performs better than the quadratic, cubic, and fourth-order polynomials in predicting the points, as indicated by its smaller MSE. The inventory policy uses a periodic review.
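
    The inventory-policy piece rests on the classic economic order quantity formula, EOQ = sqrt(2DS/H); a minimal sketch with hypothetical retailer numbers:

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: sqrt(2 * D * S / H)."""
    return math.sqrt(2.0 * annual_demand * order_cost / holding_cost)

# Hypothetical cement retailer: 12,000 bags/year, $50 per order, $2.50 holding/bag/year.
print(round(eoq(12_000, 50.0, 2.50)))   # ~693 bags per order
```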

  12. Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror

    NASA Astrophysics Data System (ADS)

    Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu

    2017-02-01

    Aiming at a 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and each actuator's influence function is obtained by the finite element method. The correction of optical aberrations by this system is simulated numerically and, taking the Strehl ratio of the corrected diffraction spot as the figure of merit, the system's ability to correct wave aberrations composed of Zernike polynomials 3-20 is analyzed in the presence of different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the system's correction ability for Zernike aberrations 3-9 is higher than that for aberrations 10-20. The correction ability for aberrations 3-20 does not change with the misalignment error in general; however, as the rotation error between the Hartmann-Shack sensor and the deformable mirror increases, the correction ability for aberrations 3-20 gradually decreases, and as the translation error increases, the correction ability for aberrations 3-9 gradually decreases, while the correction ability for aberrations 10-20 fluctuates.

  13. Explicit bounds for the positive root of classes of polynomials with applications

    NASA Astrophysics Data System (ADS)

    Herzberger, Jürgen

    2003-03-01

    We consider a certain type of polynomial equation for which there exists, according to Descartes' rule of signs, only one simple positive root. These equations occur in numerical analysis when calculating or estimating the R-order or Q-order of convergence of certain iterative processes with an error recursion of special form. On the other hand, such polynomial equations are very common as defining equations for the effective rate of return of certain cashflows, like bonds or annuities, in finance. The effective rate of interest i* for those cashflows is i* = q* − 1, where q* is the unique positive root of such a polynomial. We construct bounds for i* for a special problem concerning an ordinary simple annuity, obtained by changing the conditions of such an annuity with given data by applying the German rule (Preisangabeverordnung, or PAngV for short). Moreover, we consider a number of results for such polynomial roots in numerical analysis, showing that by a simple variable transformation several formulas can be derived from earlier results. The same is possible in finance in order to generalize results to more complicated cashflows.
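
    A sketch of the defining equation with hypothetical contract data: an ordinary simple annuity of n level payments R against a price P gives a polynomial whose coefficients have a single sign change, hence, by Descartes' rule, exactly one simple positive root q*, and i* = q* − 1:

```python
import numpy as np

P, R, n = 10_000.0, 1_075.0, 12     # hypothetical price and 12 level payments
# Balance condition P*q^n = R*(q^(n-1) + ... + q + 1); one sign change in the coefficients.
coeffs = np.r_[P, np.full(n, -R)]
roots = np.roots(coeffs)
q_star = roots[np.isclose(roots.imag, 0.0) & (roots.real > 0)].real[0]
print(f"effective rate per period: {q_star - 1.0:.4%}")
```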

  14. A Fast, Locally Adaptive, Interactive Retrieval Algorithm for the Analysis of DIAL Measurements

    NASA Astrophysics Data System (ADS)

    Samarov, D. V.; Rogers, R.; Hair, J. W.; Douglass, K. O.; Plusquellic, D.

    2010-12-01

    Differential absorption light detection and ranging (DIAL) is a laser-based tool used for remote, range-resolved measurement of particular gases in the atmosphere, such as carbon dioxide and methane. In many instances it is of interest to study how these gases are distributed over a region such as a landfill, factory, or farm. While a single DIAL measurement only tells us about the distribution of a gas along a single path, a sequence of consecutive measurements provides information on how that gas is distributed over a region, making DIAL a natural choice for such studies. DIAL measurements present a number of interesting challenges: first, in order to convert the raw data to concentration it is necessary to estimate the derivative along the path of the measurement. Second, as the distribution of gases across a region can be highly heterogeneous, it is important that the spatial nature of the measurements be taken into account. Finally, since the set of collected measurements is commonly quite large, the method must be computationally efficient. Existing work based on Local Polynomial Regression (LPR) addresses the first two issues, but computational speed remains an open problem; a further desirable property is to allow user input into the algorithm. In this talk we present a novel method based on LPR which utilizes a variant of the RODEO algorithm to provide a fast, locally adaptive and interactive approach to the analysis of DIAL measurements. This methodology is motivated by and applied to several simulated examples and a study out of NASA Langley Research Center (LaRC) on the estimation of aerosol extinction in the atmosphere. A comparison of our method against several other algorithms is also presented.
    References:
    Chaudhuri, P., Marron, J.S., Scale-space view of curve estimation, Annals of Statistics 28 (2000) 408-428.
    Duong, T., Cowling, A., Koch, I., Wand, M.P., Feature significance for multivariate kernel density estimation, Computational Statistics and Data Analysis 52 (2008) 4225-4242.
    Godtliebsen, F., Marron, J.S., Chaudhuri, P., Statistical significance of features in digital images, Image and Vision Computing 22 (2004) 1093-1104.
    Lafferty, J., Wasserman, L., RODEO: Sparse, greedy nonparametric regression, Annals of Statistics 36 (2008) 28-63.
    Lindstrom, T., Holst, U., Weibring, P., Analysis of lidar fields using local polynomial regression, Environmetrics 16 (2005) 619-634.
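
    For intuition about the building block the above method adapts, here is a bare-bones local linear smoother (a fixed-bandwidth sketch assuming NumPy; the RODEO-based method instead adapts the bandwidth locally). The local slope is exactly the path derivative needed to convert DIAL returns to concentration:

```python
import numpy as np

def local_linear(x, y, x0, h):
    """Local linear fit at x0 with a Gaussian kernel of bandwidth h.
    Returns the fitted value and its first derivative at x0."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    X = np.column_stack([np.ones_like(x), x - x0])
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
    return beta[0], beta[1]
```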

  15. Optimization by response surface methodology of lutein recovery from paprika leaves using accelerated solvent extraction.

    PubMed

    Kang, Jae-Hyun; Kim, Suna; Moon, BoKyung

    2016-08-15

    In this study, we used response surface methodology (RSM) to optimize the extraction conditions for recovering lutein from paprika leaves using accelerated solvent extraction (ASE). The lutein content was quantitatively analyzed using a UPLC equipped with a BEH C18 column. A central composite design (CCD) was employed as the experimental design to obtain the optimized combination of extraction temperature (°C), static time (min), and solvent (EtOH, %). The experimental data obtained from a twenty-sample set were fitted to a second-order polynomial equation using multiple regression analysis. The adjusted coefficient of determination (R²) for the lutein extraction model was 0.9518, and the probability value (p=0.0000) demonstrated a high significance for the regression model. The optimum extraction conditions for lutein were temperature: 93.26°C, static time: 5 min, and solvent: 79.63% EtOH. Under these conditions, the predicted extraction yield of lutein was 232.60 μg/g. Copyright © 2016 Elsevier Ltd. All rights reserved.
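
    A compact sketch of fitting such a second-order (quadratic plus interactions) response surface, assuming statsmodels and toy CCD-style data in place of the paper's measurements:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame(rng.uniform(-1, 1, (20, 3)), columns=["temp", "time", "etoh"])
df["lutein"] = 200 + 15*df.temp + 10*df.etoh - 8*df.temp**2 + rng.normal(0, 5, 20)

model = smf.ols(
    "lutein ~ temp + time + etoh + I(temp**2) + I(time**2) + I(etoh**2)"
    " + temp:time + temp:etoh + time:etoh",
    data=df,
).fit()
print(model.rsquared_adj)   # the paper reports adjusted R^2 = 0.9518 for its model
```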

  16. Where are the roots of the Bethe Ansatz equations?

    NASA Astrophysics Data System (ADS)

    Vieira, R. S.; Lima-Santos, A.

    2015-10-01

    By changing the variables in the Bethe Ansatz Equations (BAE) for the XXZ six-vertex model we obtained a coupled system of polynomial equations. This provides a direct link between the BAE deduced from the Algebraic Bethe Ansatz (ABA) and the BAE arising from the Coordinate Bethe Ansatz (CBA). For two-magnon states this polynomial system can be decoupled and the solutions given in terms of the roots of certain self-inversive polynomials. From theorems concerning the distribution of the roots of self-inversive polynomials we made a thorough analysis of the two-magnon states, which allowed us to find the location and multiplicity of the Bethe roots in the complex plane, to discuss the completeness and singularities of Bethe's equations and the ill-founded string hypothesis concerning the location of their roots, and to find an interesting connection between the BAE and Salem's polynomials.

  17. A recursive algorithm for Zernike polynomials

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    The analysis of a function defined on a rotationally symmetric system, with either a circular or annular pupil, is discussed. In order to numerically analyze such systems it is typical to expand the given function in terms of a class of orthogonal polynomials. Because of their particular properties, the Zernike polynomials are especially suited for numerical calculations. A recursive algorithm is developed that can be used to generate the Zernike polynomials up to a given order. The algorithm is recursively defined over J, where R(J,N) is the Zernike polynomial of degree N obtained by orthogonalizing the sequence R(J), R(J+2), ..., R(J+2N) over (epsilon, 1). The terms in the preceding row, the (J-1) row, up to the (N+1)th term are needed for generating the (J,N)th term. Thus, the algorithm generates an upper left-triangular table. This algorithm was implemented on a computer, with the necessary support program also included.
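
    For checking the entries such a table produces, the radial Zernike polynomials also have a closed-form expression; a direct, non-recursive implementation is sketched below, assuming the standard circular-pupil convention with 0 <= m <= n and n - m even:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """Radial Zernike polynomial R_n^m(rho) via the explicit sum
    (a cross-check for recursively generated tables)."""
    assert 0 <= m <= n and (n - m) % 2 == 0
    return sum(
        (-1)**s * factorial(n - s)
        / (factorial(s) * factorial((n + m)//2 - s) * factorial((n - m)//2 - s))
        * rho**(n - 2*s)
        for s in range((n - m)//2 + 1)
    )

print(zernike_radial(4, 2, 0.5))   # R_4^2 = 4*rho^4 - 3*rho^2 -> -0.5
```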

  18. Using Tutte polynomials to analyze the structure of the benzodiazepines

    NASA Astrophysics Data System (ADS)

    Cadavid Muñoz, Juan José

    2014-05-01

    Graph theory in general, and Tutte polynomials in particular, are used to analyze the chemical structure of the benzodiazepines. Similarity analyses based on the Tutte polynomials are used to find other molecules that are similar to the benzodiazepines and that might therefore show similar psycho-active actions for medical purposes, in order to evade the drawbacks associated with benzodiazepine-based medicines. For each type of benzodiazepine, Tutte polynomials are computed and some numerical characteristics are obtained, such as the number of spanning trees and the number of spanning forests. Computations are done using Maple's GraphTheory computer algebra package. The analytical results obtained are of great importance in pharmaceutical engineering. As a future research line, the computational chemistry program Spartan will be used to extend the results and compare them with those obtained from the Tutte polynomials of the benzodiazepines.

  19. Random Regression Models Using Legendre Polynomials to Estimate Genetic Parameters for Test-day Milk Protein Yields in Iranian Holstein Dairy Cattle.

    PubMed

    Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan

    2016-12-01

    The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first-lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age at recording, year, and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest in the middle of lactation and, conversely, heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213±0.007). Genetic, permanent environmental, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for the random regression coefficients could be used for the national genetic evaluation of dairy cattle in Iran.
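
    The Legendre covariables used in such test-day models are straightforward to generate; a sketch assuming NumPy and the conventional standardization of days in milk onto [-1, 1] (the 5-305 d range is an assumed convention, not taken from the paper):

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_covariates(dim, dim_min=5.0, dim_max=305.0, order=4):
    """Fourth-order Legendre covariates P0..P4 for days in milk (DIM)."""
    u = 2.0 * (np.asarray(dim, dtype=float) - dim_min) / (dim_max - dim_min) - 1.0
    return L.legvander(u, order)

print(legendre_covariates([5, 60, 155, 305]).shape)   # (4, 5): one row per test day
```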

  20. Random Regression Models Using Legendre Polynomials to Estimate Genetic Parameters for Test-day Milk Protein Yields in Iranian Holstein Dairy Cattle

    PubMed Central

    Naserkheil, Masoumeh; Miraie-Ashtiani, Seyed Reza; Nejati-Javaremi, Ardeshir; Son, Jihyun; Lee, Deukhwan

    2016-01-01

    The objective of this study was to estimate the genetic parameters of milk protein yields in Iranian Holstein dairy cattle. A total of 1,112,082 test-day milk protein yield records of 167,269 first-lactation Holstein cows, calved from 1990 to 2010, were analyzed. Estimates of the variance components, heritability, and genetic correlations for milk protein yields were obtained using a random regression test-day model. Milking times, herd, age at recording, year, and month of recording were included as fixed effects in the model. Additive genetic and permanent environmental random effects for the lactation curve were taken into account by applying orthogonal Legendre polynomials of the fourth order in the model. The lowest and highest additive genetic variances were estimated at the beginning and end of lactation, respectively. Permanent environmental variance was higher at both extremes. Residual variance was lowest in the middle of lactation and, conversely, heritability increased during this period. Maximum heritability was found during the 12th lactation stage (0.213±0.007). Genetic, permanent environmental, and phenotypic correlations among test-days decreased as the interval between consecutive test-days increased. A relatively large data set was used in this study; therefore, the estimated (co)variance components for the random regression coefficients could be used for the national genetic evaluation of dairy cattle in Iran. PMID:26954192

  1. Novel Image Encryption Scheme Based on Chebyshev Polynomial and Duffing Map

    PubMed Central

    2014-01-01

    We present a novel image encryption algorithm using a Chebyshev polynomial based on permutation and substitution and a Duffing map based on substitution. Comprehensive security analysis has been performed on the designed scheme using key space analysis, visual testing, histogram analysis, information entropy calculation, correlation coefficient analysis, differential analysis, key sensitivity testing, and speed testing. The study demonstrates that the proposed image encryption algorithm offers a key space of more than 10^113 and a desirable level of security based on the good statistical results and theoretical arguments. PMID:25143970

  2. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor-product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients, and error estimation. In the sequel, we compare canonical LRA with sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.

  3. Retention behavior of lipids in reversed-phase ultrahigh-performance liquid chromatography-electrospray ionization mass spectrometry.

    PubMed

    Ovčačíková, Magdaléna; Lísa, Miroslav; Cífková, Eva; Holčapek, Michal

    2016-06-10

    A reversed-phase ultrahigh-performance liquid chromatography (RP-UHPLC) method using two 15 cm octadecylsilica gel columns packed with sub-2 μm particles is developed with the goal of separating and unambiguously identifying a large number of lipid species in biological samples. The identification is performed by coupling with high-resolution tandem mass spectrometry (MS/MS) using a quadrupole time-of-flight (QTOF) instrument. Electrospray ionization (ESI) full-scan and tandem mass spectra are measured in both polarity modes with a mass accuracy better than 5 ppm, which provides a high confidence of lipid identification. Over 400 lipid species covering 14 polar and nonpolar lipid classes from 5 lipid categories are identified in total lipid extracts of human plasma, human urine and porcine brain. The general dependences of relative retention times on relative carbon number or relative double bond number are constructed and fitted with second-degree polynomial regression. The regular retention patterns in homologous lipid series provide an additional identification point for UHPLC/MS lipidomic analysis, which increases the confidence of lipid identification. The reprocessing of previously published data, measured by our and other groups in the RP mode and by ultrahigh-performance supercritical fluid chromatography on a silica column, shows the more general applicability of the polynomial regression for describing retention behavior and predicting retention times. The novelty of this work is the characterization of general trends in the retention behavior of lipids within logical series with constant fatty acyl length or double bond number, which may be used as an additional criterion to increase the confidence of lipid identification. Copyright © 2016 Elsevier B.V. All rights reserved.

  4. Height reduction among prenatally exposed atomic-bomb survivors: A longitudinal study of growth

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nakashima, Eiji; Funamoto, Sachiyo; Carter, R.L.

    Using a random coefficient regression model, sex-specific longitudinal analyses of height were made on 801 (392 male and 409 female) atomic-bomb survivors exposed in utero to detect dose effects on standing height. The data set resulted from repeated measurements of the standing height of adolescents (age 10-18 y). The dose effect, if any, was assumed to be linear. Gestational ages at the time of radiation exposure were divided into trimesters. Since an earlier longitudinal data analysis had demonstrated radiation effects on height, the emphasis in this paper is on the interaction between dose and gestational age at exposure and on radiation effects on the age of occurrence of the adolescent growth spurt. For males, a cubic polynomial growth-curve model applied to the data was affected significantly by radiation. The dose by trimester interaction effect was not significant. The onset of the adolescent growth spurt was estimated at about 13 y at 0 Gy. There was no effect of radiation on the adolescent growth spurt. For females, a quadratic polynomial growth-curve model was fitted to the data. The dose effect was significant, while the dose by trimester interaction was again not significant. 27 refs., 3 figs., 4 tabs.

  5. Magnetic Resonance Imaging-derived Flow Parameters for the Analysis of Cardiovascular Diseases and Drug Development.

    PubMed

    Michael, Dada O; Bamidele, Awojoyogbe O; Adewale, Adesola O; Karem, Boubaker

    2013-01-01

    Nuclear magnetic resonance (NMR) allows for fast, accurate and noninvasive measurement of fluid flow in restricted and non-restricted media. The results of such measurements may be possible for a very small B0 field and can be enhanced through detailed examination of generating functions that may arise from polynomial solutions of NMR flow equations in terms of Legendre polynomials and Boubaker polynomials. The generating functions of these polynomials can present an array of interesting possibilities that may be useful for understanding the basic physics of extracting relevant NMR flow information from which various hemodynamic problems can be carefully studied. Specifically, these results may be used to develop effective drugs for cardiovascular-related diseases.

  6. Magnetic Resonance Imaging-derived Flow Parameters for the Analysis of Cardiovascular Diseases and Drug Development

    PubMed Central

    Michael, Dada O.; Bamidele, Awojoyogbe O.; Adewale, Adesola O.; Karem, Boubaker

    2013-01-01

    Nuclear magnetic resonance (NMR) allows for fast, accurate and noninvasive measurement of fluid flow in restricted and non-restricted media. The results of such measurements may be possible for a very small B0 field and can be enhanced through detailed examination of generating functions that may arise from polynomial solutions of NMR flow equations in terms of Legendre polynomials and Boubaker polynomials. The generating functions of these polynomials can present an array of interesting possibilities that may be useful for understanding the basic physics of extracting relevant NMR flow information from which various hemodynamic problems can be carefully studied. Specifically, these results may be used to develop effective drugs for cardiovascular-related diseases. PMID:25114546

  7. Polynomial order selection in random regression models via penalizing adaptively the likelihood.

    PubMed

    Corrales, J D; Munilla, S; Cantet, R J C

    2015-08-01

    Orthogonal Legendre polynomials (LP) are used to model the shape of additive genetic and permanent environmental effects in random regression models (RRM). Frequently, the Akaike (AIC) and the Bayesian (BIC) information criteria are employed to select the LP order. However, it has been theoretically shown that neither AIC nor BIC is simultaneously optimal in terms of consistency and efficiency. Thus, the goal was to introduce a method, 'penalizing adaptively the likelihood' (PAL), as a criterion to select the LP order in RRM. Four simulated data sets and real data (60,513 records, 6675 Colombian Holstein cows) were employed. Nested models were fitted to the data, and AIC, BIC and PAL were calculated for all of them. Results showed that PAL and BIC identified the true LP order for the additive genetic and permanent environmental effects with probability one, but AIC tended to favour overparameterized models. Conversely, when the true model was unknown, PAL selected the best model with higher probability than AIC. In the latter case, BIC never favoured the best model. To summarize, PAL selected a correct model order regardless of whether the 'true' model was within the set of candidates. © 2015 Blackwell Verlag GmbH.
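
    For reference, the two classical criteria being compared are simple penalized log-likelihoods; PAL's data-adaptive penalty is not reproduced here. A minimal sketch:

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion; k = number of estimated parameters."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion; n = number of records."""
    return -2.0 * loglik + k * np.log(n)
```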

  8. Analysis of Adaptive Mesh Refinement for IMEX Discontinuous Galerkin Solutions of the Compressible Euler Equations with Application to Atmospheric Simulations

    DTIC Science & Technology

    2013-01-01

    ξ_i be the Legendre-Gauss-Lobatto (LGL) points defined as the roots of $(1 - \xi^2)P_N'(\xi) = 0$, where $P_N(\xi)$ is the $N$th-order Legendre polynomial. The ... mesh refinement. By expanding the solution in a basis of high-order polynomials in each element, one can dynamically adjust the order of these basis ... on refining the mesh while keeping the polynomial order constant across the elements. If we choose to allow non-conforming elements, the challenge in

  9. Polynomial asymptotes of the second kind

    NASA Astrophysics Data System (ADS)

    Dobbs, David E.

    2011-03-01

    This note uses the analytic notion of asymptotic functions to study when a function is asymptotic to a polynomial function. Along with associated existence and uniqueness results, this kind of asymptotic behaviour is related to the type of asymptote that was recently defined in a more geometric way. Applications are given to rational functions and conics. Prerequisites include the division algorithm for polynomials with coefficients in the field of real numbers and elementary facts about limits from calculus. This note could be used as enrichment material in courses ranging from Calculus to Real Analysis to Abstract Algebra.

  10. Real estate value prediction using multivariate regression models

    NASA Astrophysics Data System (ADS)

    Manjula, R.; Jain, Shubham; Srivastava, Sharad; Rajiv Kher, Pranav

    2017-11-01

    The real estate market is one of the most competitive in terms of pricing, and prices tend to vary significantly based on many factors; hence it is a prime field in which to apply the concepts of machine learning to optimize and predict prices with high accuracy. Therefore, in this paper we present various important features to use while predicting housing prices with good accuracy. We describe regression models that use various features to achieve a lower residual sum of squares error. When using features in a regression model, some feature engineering is required for better prediction. Often a set of features (multiple regression) or polynomial regression (applying various powers to the features) is used to achieve a better model fit. Because these models are expected to be susceptible to overfitting, ridge regression is used to reduce it. This paper thus points to the best application of regression models, in addition to other techniques, to optimize the result.
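
    A sketch of the modeling pipeline the abstract alludes to, assuming scikit-learn: a polynomial feature expansion followed by ridge regularization to curb the overfitting the expansion invites; the degree and penalty strength are placeholders.

```python
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures, StandardScaler

model = make_pipeline(
    PolynomialFeatures(degree=2, include_bias=False),  # powers and interactions
    StandardScaler(),
    Ridge(alpha=1.0),   # shrinks coefficients to reduce overfitting
)
# model.fit(X_train, y_train); model.predict(X_test)   # X: area, location, age, ...
```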

  11. A Linear Algebraic Approach to Teaching Interpolation

    ERIC Educational Resources Information Center

    Tassa, Tamir

    2007-01-01

    A novel approach for teaching interpolation in the introductory course in numerical analysis is presented. The interpolation problem is viewed as a problem in linear algebra, whence the various forms of interpolating polynomial are seen as different choices of a basis to the subspace of polynomials of the corresponding degree. This approach…

  12. Genetic analyses of stillbirth in relation to litter size using random regression models.

    PubMed

    Chen, C Y; Misztal, I; Tsuruta, S; Herring, W O; Holl, J; Culbertson, M

    2010-12-01

    Estimates of genetic parameters for the number of stillborns (NSB) in relation to litter size (LS) were obtained with random regression models (RRM). Data were collected from 4 purebred Duroc nucleus farms between 2004 and 2008. Two data sets, with 6,575 litters for the first parity (P1) and 6,259 litters for the second to fifth parity (P2-5) and a total of 8,217 and 5,066 animals in the pedigree, were analyzed separately. Number of stillborns was studied as a trait at the sow level. Fixed effects were contemporary groups (farm-year-season) and fixed cubic regression coefficients on LS with Legendre polynomials. Models for P2-5 included the fixed effect of parity. Random effects were additive genetic effects for both data sets, with permanent environmental effects included for P2-5. Random effects were modeled with Legendre polynomials (RRM-L), linear splines (RRM-S), and degree-0 B-splines (RRM-BS) with regressions on LS; see the sketch after this abstract. For P1, the order of polynomial, the number of knots, and the number of intervals used for the respective models were quadratic, 3, and 3, respectively. For P2-5, the same parameters were linear, 2, and 2, respectively. Heterogeneous residual variances were considered in the models. For P1, estimates of heritability were 12 to 15%, 5 to 6%, and 6 to 7% in LS 5, 9, and 13, respectively. For P2-5, estimates were 15 to 17%, 4 to 5%, and 4 to 6% in LS 6, 9, and 12, respectively. For P1, average estimates of genetic correlations between LS 5 to 9, 5 to 13, and 9 to 13 were 0.53, -0.29, and 0.65, respectively. For P2-5, the same estimates, averaged for RRM-L and RRM-S, were 0.75, -0.21, and 0.50, respectively. For RRM-BS with 2 intervals, the correlation was 0.66 between LS 5 to 7 and 8 to 13. The parameters obtained by the 3 RRM revealed a nonlinear relationship between the additive genetic effect of NSB and the environmental deviation of LS. The negative correlations between the 2 extreme LS might indicate different genetic bases for the incidence of stillbirth.
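
    To make the spline variants concrete, one common parameterization of a linear-spline regression basis on litter size is the truncated-line form sketched below (assuming NumPy; the knot placement is illustrative, and the interpolating-spline parameterization used in such studies spans the same function space up to a linear transform):

```python
import numpy as np

def linear_spline_basis(ls, knots=(5, 9, 13)):
    """Intercept, slope, and one hinge term per interior knot."""
    ls = np.asarray(ls, dtype=float)
    hinges = [np.clip(ls - k, 0.0, None) for k in knots[1:-1]]
    return np.column_stack([np.ones_like(ls), ls] + hinges)

print(linear_spline_basis([4, 9, 14]))   # 3 columns for knots at 5, 9, 13
```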

  13. Accurate polynomial expressions for the density and specific volume of seawater using the TEOS-10 standard

    NASA Astrophysics Data System (ADS)

    Roquet, F.; Madec, G.; McDougall, Trevor J.; Barker, Paul M.

    2015-06-01

    A new set of approximations to the standard TEOS-10 equation of state are presented. These follow a polynomial form, making it computationally efficient for use in numerical ocean models. Two versions are provided, the first being a fit of density for Boussinesq ocean models, and the second fitting specific volume which is more suitable for compressible models. Both versions are given as the sum of a vertical reference profile (6th-order polynomial) and an anomaly (52-term polynomial, cubic in pressure), with relative errors of ∼0.1% on the thermal expansion coefficients. A 75-term polynomial expression is also presented for computing specific volume, with a better accuracy than the existing TEOS-10 48-term rational approximation, especially regarding the sound speed, and it is suggested that this expression represents a valuable approximation of the TEOS-10 equation of state for hydrographic data analysis. In the last section, practical aspects about the implementation of TEOS-10 in ocean models are discussed.

  14. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  15. A Generalized Sampling and Preconditioning Scheme for Sparse Approximation of Polynomial Chaos Expansions

    DOE PAGES

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    2017-06-22

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain and subsequently solves a preconditioned $\ell^1$-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest. In conclusion, numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  16. Model-based estimates of long-term persistence of induced HPV antibodies: a flexible subject-specific approach.

    PubMed

    Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián

    2013-01-01

    In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points at which individual titers fall below the threshold value for protection. This article focuses on HPV-16/18 and, to this end, uses a so-called fractional polynomial model derived in a data-driven fashion. Initially, model selection was performed among the second- and first-order fractional polynomials on the one hand and the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that they fit best over the long run, and hence caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.
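
    A sketch of first-order fractional polynomial (FP1) selection of the kind described, under the usual convention that power 0 denotes log(t); the candidate power set below is the standard one and the data are assumed positive:

```python
import numpy as np

POWERS = (-2, -1, -0.5, 0, 0.5, 1, 2, 3)   # conventional FP1 candidate powers

def fp1_best(t, y):
    """Return (rss, power, coefficients) of the best-fitting FP1 model."""
    best = None
    for p in POWERS:
        g = np.log(t) if p == 0 else t**p
        X = np.column_stack([np.ones_like(t), g])
        beta, rss, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = float(rss[0]) if rss.size else float(np.sum((y - X @ beta)**2))
        if best is None or r < best[0]:
            best = (r, p, beta)
    return best
```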

  17. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity, which is the result of dark current and amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear or piecewise-linear models to approximate the non-uniform gain and offset characteristics as well as the nonlinear response. Piecewise-linear models perform better than the one- and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second-order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher-order polynomial NUC algorithms feasible. This study comprehensively tests higher-order polynomial NUC algorithms targeted at short-wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods, including the standard one- and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating-point results show 30% less non-uniformity in post-corrected data when using a third-order polynomial correction algorithm rather than a second-order algorithm. To maximize overall performance, a trade-off analysis of polynomial order and coefficient precision is performed. Comprehensive testing across multiple data sets provides next-generation model validation and performance benchmarks for higher-order polynomial NUC methods.
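
    A schematic of per-pixel higher-order correction, here third order and fit with NumPy; the array shapes and calibration scheme are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

def fit_nuc_coeffs(raw_stack, ref_levels, order=3):
    """raw_stack: (L, H, W) frames at L calibration flux levels (L >= order + 1);
    ref_levels: (L,) reference radiances. Returns per-pixel polynomial coefficients."""
    L, H, W = raw_stack.shape
    flat = raw_stack.reshape(L, -1).astype(float)
    coeffs = np.empty((order + 1, H * W))
    for p in range(H * W):                  # independent fit per pixel
        coeffs[:, p] = np.polyfit(flat[:, p], ref_levels, order)
    return coeffs.reshape(order + 1, H, W)

def apply_nuc(frame, coeffs):
    """Horner evaluation of each pixel's correction polynomial."""
    out = np.zeros_like(frame, dtype=float)
    for c in coeffs:
        out = out * frame + c
    return out
```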

  18. Models for Estimating Genetic Parameters of Milk Production Traits Using Random Regression Models in Korean Holstein Cattle

    PubMed Central

    Cho, C. I.; Alam, M.; Choi, T. J.; Choy, Y. H.; Choi, J. G.; Lee, S. S.; Cho, K. H.

    2016-01-01

    The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of first-parity Holstein cows, recorded between 2007 and 2014 at the Dairy Cattle Improvement Center of the National Agricultural Cooperative Federation in South Korea, were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3–L5), fixed effects of herd-test day and year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials×3 types of residual variance), namely L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60, were compared using Akaike information criterion (AIC) and/or Schwarz Bayesian information criterion (BIC) statistics to identify the model(s) of best fit for the respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which therefore fit best. In general, for a given polynomial order, the BIC values of the HET15 models were lower than those of the HET60 models. This implies that the orders of LP and the types of residual variances affect the goodness of fit of the models. Also, the heterogeneity of residual variances should be considered in test-day analysis. The heritability estimates from the best-fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of the first lactation. Genetic variances for the studied traits tended to decrease during the earlier stages of lactation, increase in the middle, and decrease again at the end of lactation. With regard to the fit of the models and the differential genetic parameters across the lactation stages, we could estimate genetic parameters more accurately from RRMs than from lactation models. Therefore, we suggest using RRMs in place of lactation models for national dairy cattle genetic evaluations of milk production traits in Korea. PMID:26954184

  19. Models for Estimating Genetic Parameters of Milk Production Traits Using Random Regression Models in Korean Holstein Cattle.

    PubMed

    Cho, C I; Alam, M; Choi, T J; Choy, Y H; Choi, J G; Lee, S S; Cho, K H

    2016-05-01

    The objectives of the study were to estimate genetic parameters for milk production traits of Holstein cattle using random regression models (RRMs), and to compare the goodness of fit of various RRMs with homogeneous and heterogeneous residual variances. A total of 126,980 test-day milk production records of first-parity Holstein cows, recorded between 2007 and 2014 at the Dairy Cattle Improvement Center of the National Agricultural Cooperative Federation in South Korea, were used. These records included milk yield (MILK), fat yield (FAT), protein yield (PROT), and solids-not-fat yield (SNF). The statistical models included random effects of genetic and permanent environments using Legendre polynomials (LP) of the third to fifth order (L3-L5), fixed effects of herd-test day and year-season at calving, and a fixed regression for the test-day record (third to fifth order). The residual variances in the models were either homogeneous (HOM) or heterogeneous (15 classes, HET15; 60 classes, HET60). A total of nine models (3 orders of polynomials×3 types of residual variance), namely L3-HOM, L3-HET15, L3-HET60, L4-HOM, L4-HET15, L4-HET60, L5-HOM, L5-HET15, and L5-HET60, were compared using Akaike information criterion (AIC) and/or Schwarz Bayesian information criterion (BIC) statistics to identify the model(s) of best fit for the respective traits. The lowest BIC value was observed for the models L5-HET15 (MILK; PROT; SNF) and L4-HET15 (FAT), which therefore fit best. In general, for a given polynomial order, the BIC values of the HET15 models were lower than those of the HET60 models. This implies that the orders of LP and the types of residual variances affect the goodness of fit of the models. Also, the heterogeneity of residual variances should be considered in test-day analysis. The heritability estimates from the best-fitted models ranged from 0.08 to 0.15 for MILK, 0.06 to 0.14 for FAT, 0.08 to 0.12 for PROT, and 0.07 to 0.13 for SNF according to days in milk of the first lactation. Genetic variances for the studied traits tended to decrease during the earlier stages of lactation, increase in the middle, and decrease again at the end of lactation. With regard to the fit of the models and the differential genetic parameters across the lactation stages, we could estimate genetic parameters more accurately from RRMs than from lactation models. Therefore, we suggest using RRMs in place of lactation models for national dairy cattle genetic evaluations of milk production traits in Korea.

  20. A robust and efficient stepwise regression method for building sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Abraham, Simon, E-mail: Simon.Abraham@ulb.ac.be; Raisee, Mehrdad; Ghorbaniasl, Ghader

    2017-03-01

    Polynomial Chaos (PC) expansions are widely used in various engineering fields for quantifying uncertainties arising from uncertain parameters. The computational cost of classical PC solution schemes is unaffordable, as the number of deterministic simulations to be calculated grows dramatically with the number of stochastic dimensions. This considerably restricts the practical use of PC at the industrial level. A common approach to address such problems is to make use of sparse PC expansions. This paper presents a non-intrusive regression-based method for building sparse PC expansions. The most important PC contributions are detected sequentially through an automatic search procedure. The variable selection criterion is based on efficient tools relevant to probabilistic methods. Two benchmark analytical functions are used to validate the proposed algorithm. The computational efficiency of the method is then illustrated by a more realistic CFD application, consisting of the non-deterministic flow around a transonic airfoil subject to geometrical uncertainties. To assess the performance of the developed methodology, a detailed comparison is made with the well-established LAR-based selection technique. The results show that the developed sparse regression technique is able to identify the most significant PC contributions describing the problem. Moreover, the most important stochastic features are captured at a reduced computational cost compared to the LAR method. The results also demonstrate the superior robustness of the method by repeating the analyses using random experimental designs.

  1. Testing Informant Discrepancies as Predictors of Early Adolescent Psychopathology: Why Difference Scores Cannot Tell You What You Want to Know and How Polynomial Regression May

    ERIC Educational Resources Information Center

    Laird, Robert D.; De Los Reyes, Andres

    2013-01-01

    Multiple informants commonly disagree when reporting child and family behavior. In many studies of informant discrepancies, researchers take the difference between two informants' reports and seek to examine the link between this difference score and external constructs (e.g., child maladjustment). In this paper, we review two reasons why…

  2. Correlation among extinction efficiency and other parameters in an aggregate dust model

    NASA Astrophysics Data System (ADS)

    Dhar, Tanuj Kumar; Sekhar Das, Himadri

    2017-10-01

    We study the extinction properties of highly porous Ballistic Cluster-Cluster Aggregate dust aggregates in a wide range of complex refractive indices (1.4 ≤ n ≤ 2.0, 0.001 ≤ k ≤ 1.0) and wavelengths (0.11 μm ≤ λ ≤ 3.4 μm). An attempt has been made for the first time to investigate the correlation among the extinction efficiency (Q_ext), the composition of the dust aggregates (n, k), the wavelength of radiation (λ) and the size parameter of the monomers (x). If k is fixed at any value between 0.001 and 1.0, Q_ext increases with increase of n from 1.4 to 2.0. Q_ext and n are correlated via linear regression when the cluster size is small, whereas the correlation is quadratic at moderate and higher sizes of the cluster. This feature is observed at all wavelengths (ultraviolet to optical to infrared). We also find that the variation of Q_ext with n is very small when λ is high. When n is fixed at any value between 1.4 and 2.0, it is observed that Q_ext and k are correlated via a polynomial regression equation (of degree 1, 2, 3 or 4), where the degree of the equation depends on the cluster size, n and λ. The correlation is linear for small sizes and quadratic/cubic/quartic for moderate and higher sizes. We have also found that Q_ext and x are correlated via a polynomial regression (of degree 3, 4 or 5) for all values of n. The degree of regression is found to be n- and k-dependent. The set of relations obtained from our work can be used to model interstellar extinction for dust aggregates in a wide range of wavelengths and complex refractive indices.
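
    For instance, the reported quadratic dependence of Q_ext on n at fixed k and cluster size is an ordinary polynomial regression; the sampled values below are toy numbers, not the paper's:

```python
import numpy as np

n = np.array([1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2.0])
q_ext = np.array([1.10, 1.24, 1.41, 1.60, 1.82, 2.07, 2.35])   # toy Q_ext values
fit = np.polynomial.Polynomial.fit(n, q_ext, deg=2)
print(fit.convert())   # quadratic regression of Q_ext on n
```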

  3. Two-dimensional orthonormal trend surfaces for prospecting

    NASA Astrophysics Data System (ADS)

    Sarma, D. D.; Selvaraj, J. B.

    Orthonormal polynomials have distinct advantages over conventional polynomials: the equations for evaluating trend coefficients are not ill-conditioned, and the convergence power of this method is greater than that of the least-squares approximation; the approach via orthonormal functions therefore provides a powerful alternative to the least-squares method. In this paper, orthonormal polynomials in two dimensions are obtained using the Gram-Schmidt method for a polynomial series of the type Z = 1 + x + y + x² + xy + y² + … + yⁿ, where x and y are the locational coordinates and Z is the value of the variable under consideration. Trend-surface analysis, which has wide applications in prospecting, has been carried out using the orthonormal polynomial approach for two sample data sets from India, concerning gold accumulation from the Kolar Gold Field and gravity data. A comparison of the orthonormal polynomial trend surfaces with those obtained by the classical least-squares method has been made for the two data sets. In both situations, the orthonormal polynomial surfaces gave an improved fit to the data. A flowchart and a FORTRAN-IV computer program for deriving orthonormal polynomials of any order and for using them to fit trend surfaces are included. The program has provision for logarithmic transformation of the Z variable. If log-transformation is performed, the predicted Z values are reconverted to the original units and the trend-surface map generated for use. The illustration with gold assay data from the Champion lode system of the Kolar Gold Fields, for which a 9th-degree orthonormal trend surface was fitted, could be used for further prospecting of the area.
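
    A compact sketch of the construction (assuming NumPy): the 2-D monomial columns are orthonormalized by Gram-Schmidt against the data points, so trend coefficients come from simple dot products rather than an ill-conditioned normal-equation solve:

```python
import numpy as np

def orthonormal_trend_basis(x, y, degree=2):
    """Gram-Schmidt orthonormalization of 1, x, y, x^2, xy, y^2, ...
    evaluated at the sample locations (x, y)."""
    cols = [x**(d - j) * y**j for d in range(degree + 1) for j in range(d + 1)]
    Q = []
    for c in cols:
        v = np.asarray(c, dtype=float)
        for q in Q:
            v = v - (v @ q) * q        # remove components along earlier columns
        Q.append(v / np.linalg.norm(v))
    return np.column_stack(Q)

# Trend coefficients for observations Z are then simply B.T @ Z, where B is the basis.
```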

  4. Meta-analytic approaches to determine gender differences in the age-incidence characteristics of schizophrenia and related psychoses.

    PubMed

    Jackson, Dan; Kirkbride, James; Croudace, Tim; Morgan, Craig; Boydell, Jane; Errazuriz, Antonia; Murray, Robin M; Jones, Peter B

    2013-03-01

    A recent systematic review and meta-analysis of the incidence and prevalence of schizophrenia and other psychoses in England investigated the variation in the rates of psychotic disorders. However, some of the questions of interest, and the data collected to answer these, could not be adequately addressed using established meta-analysis techniques. We developed a novel statistical method, which makes combined use of fractional polynomials and meta-regression. This was used to quantify the evidence of gender differences and a secondary peak of onset in women, where the outcome of interest is the incidence of schizophrenia. Statistically significant and epidemiologically important effects were obtained using our methods. Our analysis is based on data from four studies that provide 50 incidence rates, stratified by age and gender. We describe several variations of our method, in particular those that might be used where more data are available, and provide guidance for assessing the model fit. Copyright © 2013 John Wiley & Sons, Ltd.

  5. Digital Correlation Microwave Polarimetry: Analysis and Demonstration

    NASA Technical Reports Server (NTRS)

    Piepmeier, J. R.; Gasiewski, A. J.; Krebs, Carolyn A. (Technical Monitor)

    2000-01-01

    The design, analysis, and demonstration of a digital-correlation microwave polarimeter for use in Earth remote sensing are presented. We begin with an analysis of three-level digital correlation and develop the correlator transfer function and radiometric sensitivity. A fifth-order polynomial regression is derived for inverting the digital correlation coefficient into the analog statistic. In addition, the effects of quantizer threshold asymmetry and hysteresis are discussed. A two-look unpolarized calibration scheme is developed for identifying correlation offsets. The developed theory and calibration method are verified using a 10.7 GHz and a 37.0 GHz polarimeter. The polarimeters are based upon 1-GS/s three-level digital correlators and measure the first three Stokes parameters. Through experiment, the radiometric sensitivity is shown to approach the theoretical value derived earlier in the paper, and the two-look unpolarized calibration method compares successfully with results from a polarimetric scheme. Finally, sample data from an aircraft experiment demonstrate that the polarimeter is highly useful for ocean wind-vector measurement.
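
    To make the inversion step concrete, the sketch below simulates a three-level correlator and fits a fifth-order polynomial mapping the digital correlation coefficient back to the analog statistic. The threshold value and sample counts are illustrative assumptions; the paper derives its regression analytically rather than by simulation.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def three_level(x, v=0.6):
        """Quantize to -1, 0, +1 with an (assumed) threshold v."""
        return np.where(x > v, 1, np.where(x < -v, -1, 0))

    rho_true = np.linspace(-0.9, 0.9, 37)
    rho_dig = []
    for r in rho_true:
        a, b = rng.multivariate_normal([0, 0], [[1, r], [r, 1]], 100_000).T
        qa, qb = three_level(a), three_level(b)
        rho_dig.append(np.mean(qa * qb) /
                       np.sqrt(np.mean(qa**2) * np.mean(qb**2)))

    # Fifth-order regression inverting the digital statistic; the published
    # coefficients will differ from this Monte Carlo fit.
    coeffs = np.polyfit(rho_dig, rho_true, 5)
    print(np.polyval(coeffs, 0.3))
    ```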

  6. Response Surface Methodology Using a Fullest Balanced Model: A Re-Analysis of a Dataset in the Korean Journal for Food Science of Animal Resources.

    PubMed

    Rheem, Sungsue; Rheem, Insoo; Oh, Sejong

    2017-01-01

    Response surface methodology (RSM) is a useful set of statistical techniques for modeling and optimizing responses in food science research. In the analysis of response surface data, a second-order polynomial regression model is usually used. However, we sometimes encounter situations where the fit of the second-order model is poor. If the model fitted to the data has a poor fit, including a lack of fit, the modeling and optimization results may not be accurate. In such a case, using a fullest balanced model, which has no lack of fit, can fix the problem and enhance the accuracy of the response surface modeling and optimization. This article presents how to develop and use such a model for better modeling and optimization of the response, through an illustrative re-analysis of a dataset in Park et al. (2014), published in the Korean Journal for Food Science of Animal Resources.
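
    For readers unfamiliar with the baseline the article starts from, the usual second-order response surface in two coded factors can be fitted with ordinary least squares in a few lines. The design points and responses below are made up for illustration; the fullest balanced model of the article adds further balanced terms beyond this.

    ```python
    import numpy as np

    # Hypothetical two-factor response-surface data at coded levels.
    x1 = np.array([-1, -1, 1, 1, 0, 0, 0, -1, 1])
    x2 = np.array([-1, 1, -1, 1, 0, 0, 0, 0, 0])
    y = np.array([54.1, 57.3, 56.2, 61.0, 60.2, 59.8, 60.5, 58.0, 59.1])

    # Second-order model:
    # y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(beta)
    ```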

  7. Statistical analysis and isotherm study of uranium biosorption by Padina sp. algae biomass.

    PubMed

    Khani, Mohammad Hassan

    2011-06-01

    The application of response surface methodology is presented for optimizing the removal of U ions from aqueous solutions using Padina sp., a brown marine algal biomass. A Box-Wilson central composite design was employed to assess the individual and interactive effects of the four main parameters (pH, initial uranium concentration in solution, contact time, and temperature) on uranium uptake. Response surface analysis showed that the data were adequately fitted by a second-order polynomial model. Analysis of variance showed a high coefficient of determination (R² = 0.9746), and a satisfactory second-order regression model was derived. The optimum pH, initial uranium concentration in solution, contact time, and temperature were found to be 4.07, 778.48 mg/l, 74.31 min, and 37.47°C, respectively. The maximized uranium uptake was predicted and experimentally validated. The equilibrium data for biosorption of U onto Padina sp. were well represented by the Langmuir isotherm, giving a maximum monolayer adsorption capacity as high as 376.73 mg/g.

  8. Limitations of polynomial chaos expansions in the Bayesian solution of inverse problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lu, Fei; Department of Mathematics, University of California, Berkeley; Morzfeld, Matthias, E-mail: mmo@math.lbl.gov

    2015-02-01

    Polynomial chaos expansions are used to reduce the computational cost in the Bayesian solutions of inverse problems by creating a surrogate posterior that can be evaluated inexpensively. We show, by analysis and example, that when the data contain significant information beyond what is assumed in the prior, the surrogate posterior can be very different from the posterior, and the resulting estimates become inaccurate. One can improve the accuracy by adaptively increasing the order of the polynomial chaos, but the cost may increase too fast for this to be cost effective compared to Monte Carlo sampling without a surrogate posterior.

  9. Fitness Probability Distribution of Bit-Flip Mutation.

    PubMed

    Chicano, Francisco; Sutton, Andrew M; Whitley, L Darrell; Alba, Enrique

    2015-01-01

    Bit-flip mutation is a common mutation operator for evolutionary algorithms applied to optimize functions over binary strings. In this paper, we develop results from the theory of landscapes and Krawtchouk polynomials to exactly compute the probability distribution of fitness values of a binary string undergoing uniform bit-flip mutation. We prove that this probability distribution can be expressed as a polynomial in p, the probability of flipping each bit. We analyze these polynomials and provide closed-form expressions for an easy linear problem (Onemax) and an NP-hard problem (MAX-SAT). We also discuss a connection of the results with runtime analysis.
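
    For the Onemax case the distribution can be written down directly: with f ones among n bits, the offspring fitness is f minus the number of ones flipped plus the number of zeros flipped, each binomially distributed. The sketch below computes this exactly; every entry of the resulting vector is a polynomial in p, consistent with the paper's general result (the Krawtchouk-polynomial machinery itself is not reproduced here).

    ```python
    import numpy as np
    from math import comb

    def onemax_mutation_pmf(n, f, p):
        """Exact fitness distribution of Onemax after bit-flip mutation.

        The parent has f ones among n bits; each bit flips independently
        with probability p.
        """
        pmf = np.zeros(n + 1)
        for i in range(f + 1):            # ones flipped to zeros
            pi = comb(f, i) * p**i * (1 - p)**(f - i)
            for j in range(n - f + 1):    # zeros flipped to ones
                pj = comb(n - f, j) * p**j * (1 - p)**(n - f - j)
                pmf[f - i + j] += pi * pj
        return pmf

    pmf = onemax_mutation_pmf(n=20, f=15, p=0.05)
    print(pmf.sum(), pmf.argmax())   # sums to 1; mode stays near the parent
    ```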

  10. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

    Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
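
    The two stabilizing ingredients named in the abstract, a suitable reordering of the nodes and an interval of size 4, can be tried out directly. The sketch below builds a divided-difference table on reordered Chebyshev nodes over [-2, 2]; the greedy product-maximizing reordering used here is a Leja-style choice made for illustration, not necessarily the ordering of the paper.

    ```python
    import numpy as np

    def divided_differences(x, y):
        """Newton divided-difference coefficients for nodes x, values y."""
        c = y.astype(float)
        for k in range(1, len(x)):
            c[k:] = (c[k:] - c[k - 1:-1]) / (x[k:] - x[:-k])
        return c

    def newton_eval(c, x, t):
        """Evaluate the Newton form by nested multiplication."""
        val = np.full_like(t, c[-1], dtype=float)
        for k in range(len(c) - 2, -1, -1):
            val = val * (t - x[k]) + c[k]
        return val

    # Chebyshev nodes on [-2, 2] (interval of size 4), greedily reordered so
    # that each new node maximizes its distance product to those before it.
    m = 30
    nodes = 2 * np.cos((2 * np.arange(m) + 1) * np.pi / (2 * m))
    order = [0]
    for _ in range(m - 1):
        rest = [i for i in range(m) if i not in order]
        order.append(max(rest, key=lambda i:
                         np.prod(np.abs(nodes[i] - nodes[order]))))
    x = nodes[order]
    c = divided_differences(x, np.exp(x))
    print(newton_eval(c, x, np.array([0.5])))   # compare with exp(0.5)
    ```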

  11. Umbral Calculus and Holonomic Modules in Positive Characteristic

    NASA Astrophysics Data System (ADS)

    Kochubei, Anatoly N.

    2006-03-01

    In the framework of analysis over local fields of positive characteristic, we develop algebraic tools for introducing and investigating various polynomial systems. In this survey paper we describe a function field version of umbral calculus developed on the basis of a relation of binomial type satisfied by the Carlitz polynomials. We consider modules over the Weyl-Carlitz ring, a function field counterpart of the Weyl algebra. It is shown that some basic objects of function field arithmetic, like the Carlitz module, Thakur's hypergeometric polynomials, and analogs of binomial coefficients arising in the positive characteristic version of umbral calculus, generate holonomic modules.

  12. The Use of Generalized Laguerre Polynomials in Spectral Methods for Solving Fractional Delay Differential Equations.

    PubMed

    Khader, M M

    2013-10-01

    In this paper, an efficient numerical method for solving fractional delay differential equations (FDDEs) is considered. The fractional derivative is described in the Caputo sense. The proposed method is based on a derived approximate formula for the Laguerre polynomials. The properties of the Laguerre polynomials are utilized to reduce the FDDEs to a linear or nonlinear system of algebraic equations. Special attention is given to studying the error and convergence analysis of the proposed method. Several numerical examples are provided to confirm that the proposed method is in excellent agreement with the exact solution.

  13. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by the Shifted Hammersley method (SHM) and calculated by the finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated by changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using a multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.

  14. (Dis)similarity in Impulsivity and Marital Satisfaction: A Comparison of Volatility, Compatibility, and Incompatibility Hypotheses

    PubMed Central

    Derrick, Jaye L.; Houston, Rebecca J.; Quigley, Brian M.; Testa, Maria; Kubiak, Audrey; Levitt, Ash; Homish, Gregory G.; Leonard, Kenneth E.

    2016-01-01

    Impulsivity is negatively associated with relationship satisfaction, but whether relationship functioning is harmed or helped when both partners are high in impulsivity is unclear. The influence of impulsivity might be exacerbated (the Volatility Hypothesis) or reversed (the Compatibility Hypothesis). Alternatively, discrepancies in impulsivity might be particularly problematic (the Incompatibility Hypothesis). Behavioral and self-report measures of impulsivity were collected from a community sample of couples. Mixed effect polynomial regressions with response surface analysis provide evidence in favor of both the Compatibility Hypothesis and the Incompatibility Hypothesis, but not the Volatility Hypothesis. Mediation analyses suggest results for satisfaction are driven by perceptions of the partner's negative behavior and responsiveness. Implications for the study of both impulsivity and relationship functioning are discussed. PMID:26949275

  15. Spatial interpolation schemes of daily precipitation for hydrologic modeling

    USGS Publications Warehouse

    Hwang, Y.; Clark, M.R.; Rajagopalan, B.; Leavesley, G.

    2012-01-01

    Distributed hydrologic models typically require spatial estimates of precipitation interpolated from sparsely located observational points to specific grid points. We compare and contrast the performance of regression-based statistical methods for the spatial estimation of precipitation in two hydrologically different basins and confirm that widely used regression-based estimation schemes fail to describe the realistic spatial variability of the daily precipitation field. The methods assessed are: (1) inverse distance weighted average; (2) multiple linear regression (MLR); (3) climatological MLR; and (4) locally weighted polynomial regression (LWP). In order to improve the performance of the interpolations, the authors propose a two-step regression technique for effective daily precipitation estimation. In this simple two-step estimation process, precipitation occurrence is first generated via a logistic regression model, before the amount of precipitation is estimated separately on wet days. This process reproduces precipitation occurrence, amount, and spatial correlation effectively. A distributed hydrologic model (PRMS) was used for the impact analysis in daily time-step simulation. Multiple simulations suggested noticeable differences between the input alternatives generated by the three interpolation schemes. Differences are shown in overall simulation error against the observations, degree of explained variability, and seasonal volumes. Simulated streamflows also showed different characteristics in mean, maximum, minimum, and peak flows. Given the same parameter optimization technique, LWP input showed the least streamflow error in the Alapaha basin, and CMLR input showed the least error (still very close to LWP) in the Animas basin. All of the two-step interpolation inputs resulted in lower streamflow error compared to the directly interpolated inputs. © 2011 Springer-Verlag.
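
    The two-step logic, a logistic model for whether it rains followed by a regression for how much, is easy to sketch. The predictors, coefficients, and data below are synthetic stand-ins; the paper's actual implementation uses locally weighted polynomial and climatological regressions rather than the plain scikit-learn models shown here.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression, LinearRegression

    rng = np.random.default_rng(7)

    # Synthetic predictors (e.g., elevation, neighbour precipitation) and
    # daily precipitation at a target point.
    X = rng.normal(size=(500, 2))
    wet = (X[:, 1] + rng.normal(0, 0.5, 500)) > 0
    amount = np.where(wet, np.exp(0.8 * X[:, 0] + rng.normal(0, 0.3, 500)), 0.0)

    # Step 1: occurrence model.  Step 2: amount model on wet days only.
    occ = LogisticRegression().fit(X, wet)
    amt = LinearRegression().fit(X[wet], np.log(amount[wet]))

    # Expected precipitation = P(wet) * conditional amount.
    estimate = occ.predict_proba(X)[:, 1] * np.exp(amt.predict(X))
    print(estimate[:5])
    ```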

  16. A Compact Formula for Rotations as Spin Matrix Polynomials

    DOE PAGES

    Curtright, Thomas L.; Fairlie, David B.; Zachos, Cosmas K.

    2014-08-12

    Group elements of SU(2) are expressed in closed form as finite polynomials of the Lie algebra generators, for all definite spin representations of the rotation group. Here, the simple explicit result exhibits connections between group theory, combinatorics, and Fourier analysis, especially in the large spin limit. Salient intuitive features of the formula are illustrated and discussed.

  17. Control design and robustness analysis of a ball and plate system by using polynomial chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Colón, Diego; Balthazar, José M.; Reis, Célia A. dos

    2014-12-10

    In this paper, we present a mathematical model of a ball and plate system and a control law, and analyze its robustness properties by using the polynomial chaos method. The ball rolls without slipping. An auxiliary robot vision system determines the bodies' positions and velocities and is used for control purposes. The actuators are two orthogonal DC motors that change the plate's angles with respect to the ground. The model is an extension of the ball and beam system and is highly nonlinear. The system is decoupled into two independent equations for the coordinates x and y. Finally, the resulting nonlinear closed-loop systems are analyzed by the polynomial chaos methodology, which considers that some system parameters are random variables, and generates statistical data that can be used in the robustness analysis.

  18. Control design and robustness analysis of a ball and plate system by using polynomial chaos

    NASA Astrophysics Data System (ADS)

    Colón, Diego; Balthazar, José M.; dos Reis, Célia A.; Bueno, Átila M.; Diniz, Ivando S.; de S. R. F. Rosa, Suelia

    2014-12-01

    In this paper, we present a mathematical model of a ball and plate system and a control law, and analyze its robustness properties by using the polynomial chaos method. The ball rolls without slipping. An auxiliary robot vision system determines the bodies' positions and velocities and is used for control purposes. The actuators are two orthogonal DC motors that change the plate's angles with respect to the ground. The model is an extension of the ball and beam system and is highly nonlinear. The system is decoupled into two independent equations for the coordinates x and y. Finally, the resulting nonlinear closed-loop systems are analyzed by the polynomial chaos methodology, which considers that some system parameters are random variables, and generates statistical data that can be used in the robustness analysis.

  19. Compressive sampling of polynomial chaos expansions: Convergence analysis and sampling strategies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hampton, Jerrad; Doostan, Alireza, E-mail: alireza.doostan@colorado.edu

    2015-01-01

    Sampling orthogonal polynomial bases via Monte Carlo is of interest for uncertainty quantification of models with random inputs, using Polynomial Chaos (PC) expansions. It is known that bounding a probabilistic parameter, referred to as coherence, yields a bound on the number of samples necessary to identify coefficients in a sparse PC expansion via solution of an ℓ1-minimization problem. Utilizing results for orthogonal polynomials, we bound the coherence parameter for polynomials of Hermite and Legendre type under their respective natural sampling distributions. In both polynomial bases we identify an importance sampling distribution which yields a bound with weaker dependence on the order of the approximation. For more general orthonormal bases, we propose coherence-optimal sampling: a Markov chain Monte Carlo sampling scheme which directly uses the basis functions under consideration to achieve statistical optimality among all sampling schemes with identical support. We demonstrate these different sampling strategies numerically in both high-order and high-dimensional manufactured PC expansions. In addition, the quality of each sampling method is compared in the identification of solutions to two differential equations, one with a high-dimensional random input and the other with a high-order PC expansion. In both cases, the coherence-optimal sampling scheme leads to similar or considerably improved accuracy.
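
    A toy version of the sparse-recovery setting conveys the idea: expand a function with a few active Hermite coefficients, sample it at random Gaussian points, and recover the coefficients with an ℓ1-penalized solver. The Lasso used here is a convenient surrogate for the paper's ℓ1-minimization, the sampling is plain Monte Carlo rather than coherence-optimal, and the sample size is deliberately generous.

    ```python
    import numpy as np
    from numpy.polynomial.hermite_e import hermeval
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(3)

    # Sparse probabilists'-Hermite expansion up to order 10: three active terms.
    true_c = np.zeros(11)
    true_c[[1, 4, 7]] = [1.0, 0.5, -0.3]

    x = rng.standard_normal(40)               # natural sampling for Hermite
    Phi = np.column_stack([hermeval(x, np.eye(11)[k]) for k in range(11)])
    y = hermeval(x, true_c)

    # l1-penalized fit; with fewer samples than basis functions this becomes
    # the compressive regime the paper analyzes via the coherence parameter.
    c_hat = Lasso(alpha=1e-3, max_iter=50_000).fit(Phi, y).coef_
    print(np.round(c_hat, 2))
    ```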

  20. Statistical optimization of medium components and growth conditions by response surface methodology to enhance phenol degradation by Pseudomonas putida.

    PubMed

    Annadurai, Gurusamy; Ling, Lai Yi; Lee, Jiunn-Fwu

    2008-02-28

    In this work, a four-level Box-Behnken factorial design was employed in combination with response surface methodology (RSM) to optimize the medium composition for the degradation of phenol by Pseudomonas putida (ATCC 31800). A mathematical model was then developed to show the effect of each medium component and their interactions on the biodegradation of phenol. The response surface method used four factors (glucose, yeast extract, ammonium sulfate, and sodium chloride), which also enabled identification of significant interaction effects in the batch studies. The biodegradation of phenol by Pseudomonas putida (ATCC 31800) was determined to be pH-dependent, with the maximum degradation capacity of the microorganism at 30 degrees C, a phenol concentration of 0.2 g/L, and a solution pH of 7.0. A second-order polynomial regression model was used for analysis of the experiment. Cubic and quadratic terms were incorporated into the regression model through variable selection procedures. The experimental values are in good agreement with the predicted values, and the correlation coefficient was found to be 0.9980.

  1. The validation of a human force model to predict dynamic forces resulting from multi-joint motions

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Maida, James C.; Aldridge, Ann M.; Hasson, Scott M.; Woolford, Barbara J.

    1992-01-01

    The development and validation of a dynamic strength model for humans is examined. The model is based on empirical data. The shoulder, elbow, and wrist joints were characterized in terms of maximum isolated torque as a function of position and velocity in all rotational planes. These data were reduced by a least-squares regression technique into a table of single-variable second-degree polynomial equations determining torque as a function of position and velocity. The isolated-joint torque equations were then used to compute forces resulting from a composite motion, in this case a ratchet wrench push-and-pull operation. A comparison of the predicted results of the model with the actual measured values for the composite motion indicates that forces derived from a composite motion of joints (ratcheting) can be predicted from isolated-joint measures. Calculated T values comparing model versus measured values for 14 subjects were well within statistically acceptable limits, and regression analysis revealed coefficients of variation between actual and measured values of between 0.72 and 0.80.
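
    The core reduction step, representing each joint's maximum torque by a second-degree polynomial in position and velocity fitted by least squares, can be sketched as follows. All numbers are invented for illustration; the study's table was built from measured human strength data.

    ```python
    import numpy as np

    rng = np.random.default_rng(5)

    # Hypothetical isolated-joint data: angle (deg), angular velocity (deg/s),
    # and measured maximum torque (Nm).
    theta = rng.uniform(0, 120, 80)
    omega = rng.uniform(-240, 240, 80)
    torque = 45 + 0.3*theta - 0.002*theta**2 - 0.05*omega + rng.normal(0, 2, 80)

    # Least-squares fit of a second-degree polynomial in position and velocity.
    X = np.column_stack([np.ones_like(theta), theta, theta**2, omega, omega**2])
    beta, *_ = np.linalg.lstsq(X, torque, rcond=None)
    print(beta)
    ```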

  2. Explaining Support Vector Machines: A Color Based Nomogram

    PubMed Central

    Van Belle, Vanya; Van Calster, Ben; Van Huffel, Sabine; Suykens, Johan A. K.; Lisboa, Paulo

    2016-01-01

    Problem setting: Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods tend to be used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for decisions made by models. Objective: In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables. Results: Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of polynomial kernel, width of RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable. Conclusions: This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package, and two apps and a movie are provided to illustrate the possibilities offered by the method. PMID:27723811

  3. The Impact of Aortic Occlusion Balloon on Mortality After Endovascular Repair of Ruptured Abdominal Aortic Aneurysms: A Meta-analysis and Meta-regression Analysis.

    PubMed

    Karkos, Christos D; Papadimitriou, Christina T; Chatzivasileiadis, Theodoros N; Kapsali, Nikoletta S; Kalogirou, Thomas E; Giagtzidis, Ioakeim T; Papazoglou, Konstantinos O

    2015-12-01

    We aimed to investigate whether the use of aortic occlusion balloon (AOB) has an impact on mortality of patients undergoing endovascular repair of ruptured abdominal aortic aneurysms (RAAAs). A meta-analysis of the English-language literature was undertaken through February 2013. Articles reporting data on outcome after endovascular repair of RAAAs were identified and information regarding the use of AOB was sought. Included in this meta-analysis were 39 eligible studies reporting 1277 patients. The pooled perioperative mortality was 21.6% (95% CI 18.1-25.1%). There was significant within-study heterogeneity (I(2) 50.2%, P < 0.001). A total of 200 patients required AOB with an estimated pooled proportion of 14.1% (8.9-19.3%). Individual random-effects meta-regression investigating the effect of AOB and other risk factors on mortality revealed a significant linear association of hemodynamic instability, bifurcated endograft approach, and primary conversion to open repair with mortality and a nonlinear (second degree polynomial) association of AOB with mortality. On multivariable meta-regression models, both hemodynamic instability and AOB were found to be statistically significant, independent predictors of mortality. In particular, there was a statistically significant negative correlation between AOB and mortality and a positive effect of hemodynamic instability on mortality. In practical terms, mortality was significantly higher in studies with a higher proportion of hemodynamically unstable patients and lower in studies with a higher rate of AOB use. This study provides meta-analytical evidence that the use of an AOB in unstable RAAA patients undergoing endovascular repair may improve the results.

  4. Influence of Japanese consumer gender and age on sensory attributes and preference (a case study on deep-fried peanuts).

    PubMed

    Miyagi, Atsushi

    2017-09-01

    Detailed exploration of sensory perception as well as preference across gender and age for a certain food is very useful for developing a vendible food commodity, given the physiological and psychological motivations behind food preference. Sensory tests covering color, sweetness, bitterness, fried-peanut aroma, textural preference, and overall liking of deep-fried peanuts with varying frying times (2, 4, 6, 9, 12, and 15 min) at 150 °C were carried out with 417 healthy Japanese consumers. To determine the influence of gender and age on the sensory evaluation, systematic statistical analysis including one-way analysis of variance, polynomial regression analysis, and multiple regression analysis was conducted on the collected data. The results indicated that females were more sensitive to bitterness than males. This may affect sensory preference; female subjects favored peanuts prepared with a shorter frying time more than male subjects did. With advancing age, textural preference played a more important role in overall preference: older subjects liked deeper-fried peanuts, which are more brittle, more than younger subjects did. The tendencies of sensory perception and preference across gender and age were thus clarified through systematic statistical analysis of the collected sensory evaluation data. These results may be useful for engineering optimal strategies to target specific consumer segments and gain greater acceptance in the market. © 2017 Society of Chemical Industry.

  5. The NonConforming Virtual Element Method for the Stokes Equations

    DOE PAGES

    Cangiani, Andrea; Gyrya, Vitaliy; Manzini, Gianmarco

    2016-01-01

    In this paper, we present the nonconforming virtual element method (VEM) for the numerical approximation of velocity and pressure in the steady Stokes problem. The pressure is approximated using discontinuous piecewise polynomials, while each component of the velocity is approximated using the nonconforming virtual element space. On each mesh element the local virtual space contains the space of polynomials of up to a given degree, plus suitable nonpolynomial functions. The virtual element functions are implicitly defined as the solution of local Poisson problems with polynomial Neumann boundary conditions. As is typical in VEM approaches, the explicit evaluation of the non-polynomial functions is not required. This approach makes it possible to construct nonconforming (virtual) spaces for any polynomial degree regardless of the parity, for two- and three-dimensional problems, and for meshes with very general polygonal and polyhedral elements. We show that the nonconforming VEM is inf-sup stable and establish optimal a priori error estimates for the velocity and pressure approximations. Finally, numerical examples confirm the convergence analysis and the effectiveness of the method in providing high-order accurate approximations.

  7. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

    This paper investigates the influence of surface error distribution on the electromagnetic performance of antennas. Normalized Zernike polynomials are used to describe a smooth, continuous deformation surface. Based on geometrical optics and a piecewise-linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between surface error distribution and electromagnetic performance. A relation database between surface figure and electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of Zernike polynomials on the electrical properties of an axisymmetric reflector, fed by an axial-mode helical antenna, is further conducted to verify the correctness of the proposed method. Finally, the influence rules of surface error distribution on electromagnetic performance are summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, while others may reduce the pointing accuracy. This work provides a new concept for reflector shape adjustment in the manufacturing process.
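
    As a pointer to what "terms of Zernike polynomials" means in practice, the sketch below evaluates a few low-order terms on a sampled unit disk and composes a hypothetical deformation surface. The term selection and amplitudes are arbitrary; mapping such a surface to far-field patterns, as the paper does, is not attempted here.

    ```python
    import numpy as np

    def zernike_terms(rho, theta):
        """A few low-order (unnormalized) Zernike terms."""
        return {
            "piston":      np.ones_like(rho),
            "tilt_x":      rho * np.cos(theta),
            "focus":       2 * rho**2 - 1,
            "astigmatism": rho**2 * np.cos(2 * theta),
            "coma_x":      (3 * rho**3 - 2 * rho) * np.cos(theta),
        }

    rng = np.random.default_rng(2)
    rho = np.sqrt(rng.uniform(0, 1, 2000))      # area-uniform disk sampling
    theta = rng.uniform(0, 2 * np.pi, 2000)
    terms = zernike_terms(rho, theta)

    # Hypothetical deformation: mostly defocus with some coma; report its RMS.
    surface_error = 0.2 * terms["focus"] + 0.1 * terms["coma_x"]
    print(np.sqrt(np.mean(surface_error**2)))
    ```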

  8. Phase unwrapping algorithm using polynomial phase approximation and linear Kalman filter.

    PubMed

    Kulkarni, Rishikesh; Rastogi, Pramod

    2018-02-01

    A noise-robust phase unwrapping algorithm is proposed based on state space analysis and polynomial phase approximation using wrapped phase measurement. The true phase is approximated as a two-dimensional first order polynomial function within a small sized window around each pixel. The estimates of polynomial coefficients provide the measurement of phase and local fringe frequencies. A state space representation of spatial phase evolution and the wrapped phase measurement is considered with the state vector consisting of polynomial coefficients as its elements. Instead of using the traditional nonlinear Kalman filter for the purpose of state estimation, we propose to use the linear Kalman filter operating directly with the wrapped phase measurement. The adaptive window width is selected at each pixel based on the local fringe density to strike a balance between the computation time and the noise robustness. In order to retrieve the unwrapped phase, either a line-scanning approach or a quality guided strategy of pixel selection is used depending on the underlying continuous or discontinuous phase distribution, respectively. Simulation and experimental results are provided to demonstrate the applicability of the proposed method.

  9. A Small and Slim Coaxial Probe for Single Rice Grain Moisture Sensing

    PubMed Central

    You, Kok Yeow; Mun, Hou Kit; You, Li Ling; Salleh, Jamaliah; Abbas, Zulkifly

    2013-01-01

    Moisture detection of single rice grains using a slim and small open-ended coaxial probe is presented. The coaxial probe is suitable for the nondestructive measurement of moisture values in rice grains ranging from 9.5% to 26%. Empirical polynomial models are developed to predict the gravimetric moisture content of rice based on reflection coefficients measured with a vector network analyzer. The relationship between the reflection coefficient and relative permittivity was also established using a regression method and expressed as a polynomial model, whose coefficients were obtained by fitting data from finite-element-based simulation. The designed single-rice-grain sample holder and experimental set-up are also described. The measurement of single rice grains in this study is more precise than conventional measurements on bulk rice grains, as the random air gaps present in bulk grains are excluded. PMID:23493127

  10. A resilient domain decomposition polynomial chaos solver for uncertain elliptic PDEs

    NASA Astrophysics Data System (ADS)

    Mycek, Paul; Contreras, Andres; Le Maître, Olivier; Sargsyan, Khachik; Rizzi, Francesco; Morris, Karla; Safta, Cosmin; Debusschere, Bert; Knio, Omar

    2017-07-01

    A resilient method is developed for the solution of uncertain elliptic PDEs on extreme scale platforms. The method is based on a hybrid domain decomposition, polynomial chaos (PC) framework that is designed to address soft faults. Specifically, parallel and independent solves of multiple deterministic local problems are used to define PC representations of local Dirichlet boundary-to-boundary maps that are used to reconstruct the global solution. A LAD-lasso type regression is developed for this purpose. The performance of the resulting algorithm is tested on an elliptic equation with an uncertain diffusivity field. Different test cases are considered in order to analyze the impacts of correlation structure of the uncertain diffusivity field, the stochastic resolution, as well as the probability of soft faults. In particular, the computations demonstrate that, provided sufficiently many samples are generated, the method effectively overcomes the occurrence of soft faults.

  11. Prediction of random-regression coefficient for daily milk yield after 305 days in milk by using the regression-coefficient estimates from the first 305 days.

    PubMed

    Yamazaki, Takeshi; Takeda, Hisato; Hagiya, Koichi; Yamaguchi, Satoshi; Sasaki, Osamu

    2018-03-13

    Because lactation periods in dairy cows lengthen with increasing total milk production, it is important to predict individual productivities after 305 days in milk (DIM) to determine the optimal lactation period. We therefore examined whether the random regression (RR) coefficients from 306 to 450 DIM (M2) can be predicted from those for the first 305 DIM (M1) using a random regression model. We analyzed test-day milk records from 85,690 Holstein cows in their first lactations and 131,727 cows in their later (second to fifth) lactations. Data in M1 and M2 were analyzed separately using different single-trait RR animal models. We then performed a multiple regression analysis of the RR coefficients of M2 on those of M1 during the first and later lactations. First-order Legendre polynomials were practical covariates of random regression for the milk yields of M2. All RR coefficients for the additive genetic (AG) effect and the intercept for the permanent environmental (PE) effect of M2 had moderate to strong correlations with the intercept for the AG effect of M1. The coefficients of determination for the multiple regression of the combined intercepts for the AG and PE effects of M2 on the coefficients for the AG effect of M1 were moderate to high. The daily milk yields of M2 predicted using the RR coefficients for the AG effect of M1 were highly correlated with those obtained using the coefficients of M2. Milk production after 305 DIM can therefore be predicted from the RR coefficient estimates of the AG effect during the first 305 DIM.
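
    The Legendre parameterization at the heart of such random regression models is simple to reproduce: standardize DIM to [-1, 1] and evaluate the polynomials there. The sketch below builds first-order covariates and predicts a daily yield curve from an assumed pair of RR solutions; the coefficient values are invented, not estimates from this study.

    ```python
    import numpy as np
    from numpy.polynomial.legendre import legval

    def legendre_covariates(dim, d_min=6, d_max=305, order=1):
        """Legendre covariates on DIM standardized to [-1, 1]."""
        t = 2 * (dim - d_min) / (d_max - d_min) - 1
        return np.column_stack([legval(t, np.eye(order + 1)[k])
                                for k in range(order + 1)])

    # Hypothetical RR solution for one cow: intercept and slope on the
    # Legendre scale, giving the predicted daily milk yield.
    coef = np.array([28.0, -3.5])
    dim = np.arange(6, 306)
    yield_hat = legendre_covariates(dim) @ coef
    print(yield_hat[[0, 150, 299]])
    ```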

  12. A Riemann-Hilbert approach to asymptotic questions for orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Deift, P.; Kriecherbauer, T.; McLaughlin, K. T.-R.; Venakides, S.; Zhou, X.

    2001-08-01

    A few years ago the authors introduced a new approach to study asymptotic questions for orthogonal polynomials. In this paper we give an overview of our method and review the results which have been obtained in Deift et al. (Internat. Math. Res. Notices (1997) 759, Comm. Pure Appl. Math. 52 (1999) 1491, 1335), Deift (Orthogonal Polynomials and Random Matrices: A Riemann-Hilbert Approach, Courant Lecture Notes, Vol. 3, New York University, 1999), Kriecherbauer and McLaughlin (Internat. Math. Res. Notices (1999) 299) and Baik et al. (J. Amer. Math. Soc. 12 (1999) 1119). We mainly consider orthogonal polynomials with respect to weights on the real line which are either (1) Freud-type weights dα(x) = e^{-Q(x)} dx (Q a polynomial, or Q(x) = |x|^β, β > 0), or (2) varying weights dα_n(x) = e^{-nV(x)} dx (V analytic, lim_{x→∞} V(x)/log x = ∞). We obtain Plancherel-Rotach-type asymptotics in the entire complex plane as well as asymptotic formulae with error estimates for the leading coefficients, for the recurrence coefficients, and for the zeros of the orthogonal polynomials. Our proof starts from an observation of Fokas et al. (Comm. Math. Phys. 142 (1991) 313) that the orthogonal polynomials can be determined as solutions of certain matrix-valued Riemann-Hilbert problems. We analyze the Riemann-Hilbert problems by a steepest-descent-type method introduced by Deift and Zhou (Ann. Math. 137 (1993) 295) and further developed in Deift and Zhou (Comm. Pure Appl. Math. 48 (1995) 277) and Deift et al. (Proc. Nat. Acad. Sci. USA 95 (1998) 450). A crucial step in our analysis is the use of the well-known equilibrium measure which describes the asymptotic distribution of the zeros of the orthogonal polynomials.

  13. Simple, fast, and low-cost camera-based water content measurement with colorimetric fluorescent indicator

    NASA Astrophysics Data System (ADS)

    Song, Seok-Jeong; Kim, Tae-Il; Kim, Youngmi; Nam, Hyoungsik

    2018-05-01

    Recently, a simple, sensitive, and low-cost fluorescent indicator has been proposed to determine water contents in organic solvents, drugs, and foodstuffs. A change of water content leads to a change of the indicator's fluorescence color under ultraviolet (UV) light. Whereas the water content values could be estimated from the spectrum obtained by a bulky and expensive spectrometer in the previous research, this paper demonstrates a simple and low-cost camera-based water content measurement scheme with the same fluorescent water indicator. Water content is calculated over the range of 0-30% by quadratic polynomial regression models with color information extracted from the captured images of samples. In particular, several color spaces such as RGB, xyY, L*a*b*, u′v′, HSV, and YCbCr have been investigated to establish the optimal color information features, over both linear and nonlinear RGB data given by a camera before and after gamma correction. In the end, a 2nd-order polynomial regression model along with HSV in the linear domain achieves the minimum mean square error of 1.06% under 3-fold cross-validation. Additionally, the resulting water content estimation model is implemented and evaluated in an off-the-shelf Android-based smartphone.
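
    The calibration itself reduces to a one-feature quadratic fit. The sketch below extracts hue from mean sample colours with the standard-library colorsys module and fits a 2nd-order polynomial; the colour/water-content pairs are invented placeholders for the paper's measured calibration set.

    ```python
    import numpy as np
    import colorsys

    # Hypothetical calibration set: mean linear-RGB colour of each sample
    # image and its reference water content (%).
    rgb = np.array([[0.61, 0.30, 0.12], [0.55, 0.36, 0.15],
                    [0.48, 0.41, 0.20], [0.40, 0.45, 0.27],
                    [0.33, 0.48, 0.35], [0.27, 0.50, 0.44]])
    water = np.array([0.0, 6.0, 12.0, 18.0, 24.0, 30.0])

    # Hue as the colour feature, then the quadratic regression model.
    hue = np.array([colorsys.rgb_to_hsv(*p)[0] for p in rgb])
    coeffs = np.polyfit(hue, water, 2)
    print(np.polyval(coeffs, hue) - water)   # calibration residuals
    ```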

  14. A method for the selection of a functional form for a thermodynamic equation of state using weighted linear least squares stepwise regression

    NASA Technical Reports Server (NTRS)

    Jacobsen, R. T.; Stewart, R. B.; Crain, R. W., Jr.; Rose, G. L.; Myers, A. F.

    1976-01-01

    A method was developed for establishing a rational choice of the terms to be included in an equation of state with a large number of adjustable coefficients. The methods presented were developed for use in the determination of an equation of state for oxygen and nitrogen. However, a general application of the methods is possible in studies involving the determination of an optimum polynomial equation for fitting a large number of data points. The data considered in the least squares problem are experimental thermodynamic pressure-density-temperature data. Attention is given to a description of stepwise multiple regression and the use of stepwise regression in the determination of an equation of state for oxygen and nitrogen.

  15. Forecasting carbon dioxide emissions based on a hybrid of mixed data sampling regression model and back propagation neural network in the USA.

    PubMed

    Zhao, Xin; Han, Meng; Ding, Lili; Calin, Adrian Cantemir

    2018-01-01

    The accurate forecast of carbon dioxide emissions is critical for policy makers to take proper measures to establish a low carbon society. This paper discusses a hybrid of the mixed data sampling (MIDAS) regression model and BP (back propagation) neural network (MIDAS-BP model) to forecast carbon dioxide emissions. Such analysis uses mixed frequency data to study the effects of quarterly economic growth on annual carbon dioxide emissions. The forecasting ability of MIDAS-BP is remarkably better than MIDAS, ordinary least square (OLS), polynomial distributed lags (PDL), autoregressive distributed lags (ADL), and auto-regressive moving average (ARMA) models. The MIDAS-BP model is suitable for forecasting carbon dioxide emissions for both the short and longer term. This research is expected to influence the methodology for forecasting carbon dioxide emissions by improving the forecast accuracy. Empirical results show that economic growth has both negative and positive effects on carbon dioxide emissions that last 15 quarters. Carbon dioxide emissions are also affected by their own change within 3 years. Therefore, there is a need for policy makers to explore an alternative way to develop the economy, especially applying new energy policies to establish a low carbon society.

  16. Gabor-based kernel PCA with fractional power polynomial models for face recognition.

    PubMed

    Liu, Chengjun

    2004-05-01

    This paper presents a novel Gabor-based kernel Principal Component Analysis (PCA) method by integrating the Gabor wavelet representation of face images and the kernel PCA method for face recognition. Gabor wavelets first derive desirable facial features characterized by spatial frequency, spatial locality, and orientation selectivity to cope with the variations due to illumination and facial expression changes. The kernel PCA method is then extended to include fractional power polynomial models for enhanced face recognition performance. A fractional power polynomial, however, does not necessarily define a kernel function, as it might not define a positive semidefinite Gram matrix. Note that the sigmoid kernels, one of the three classes of widely used kernel functions (polynomial kernels, Gaussian kernels, and sigmoid kernels), do not actually define a positive semidefinite Gram matrix either. Nevertheless, the sigmoid kernels have been successfully used in practice, such as in building support vector machines. In order to derive real kernel PCA features, we apply only those kernel PCA eigenvectors that are associated with positive eigenvalues. The feasibility of the Gabor-based kernel PCA method with fractional power polynomial models has been successfully tested on both frontal and pose-angled face recognition, using two data sets from the FERET database and the CMU PIE database, respectively. The FERET data set contains 600 frontal face images of 200 subjects, while the PIE data set consists of 680 images across five poses (left and right profiles, left and right half profiles, and frontal view) with two different facial expressions (neutral and smiling) of 68 subjects. The effectiveness of the Gabor-based kernel PCA method with fractional power polynomial models is shown in terms of both absolute performance indices and comparative performance against the PCA method, the kernel PCA method with polynomial kernels, the kernel PCA method with fractional power polynomial models, the Gabor wavelet-based PCA method, and the Gabor wavelet-based kernel PCA method with polynomial kernels.

  17. A polynomial based model for cell fate prediction in human diseases.

    PubMed

    Ma, Lichun; Zheng, Jie

    2017-12-21

    Cell fate regulation directly affects tissue homeostasis and human health. Research on cell fate decisions sheds light on key regulators, facilitates understanding of the mechanisms, and suggests novel strategies to treat human diseases that are related to abnormal cell development. In this study, we propose a polynomial-based model to predict cell fate, derived from the Taylor series. As a case study, gene expression data of pancreatic cells were adopted to test and verify the model. As numerous features (genes) are available, we employed two kinds of feature selection methods, i.e., correlation-based and apoptosis-pathway-based. Polynomials of different degrees were then used to refine the cell fate prediction function. 10-fold cross-validation was carried out to evaluate the performance of our model. In addition, we analyzed the stability of the resulting cell fate prediction model by evaluating the ranges of the parameters and assessing the variances of the predicted values at randomly selected points. Results show that, for both gene selection methods considered, the prediction accuracies of polynomials of different degrees show little difference. Interestingly, the linear polynomial (degree-1 polynomial) is more stable than the others. Comparing the linear polynomials based on the two gene selection methods shows that although the accuracy of the linear polynomial that uses correlation analysis outcomes is slightly higher (86.62%), the one based on genes of the apoptosis pathway is much more stable. Considering both prediction accuracy and the stability of polynomial models of different degrees, the linear model is the preferred choice for cell fate prediction with gene expression data of pancreatic cells. The presented cell fate prediction model can be extended to other cells, which may be important for basic research as well as clinical studies of cell-development-related diseases.

  18. Weierstrass method for quaternionic polynomial root-finding

    NASA Astrophysics Data System (ADS)

    Falcão, M. Irene; Miranda, Fernando; Severino, Ricardo; Soares, M. Joana

    2018-01-01

    Quaternions, introduced by Hamilton in 1843 as a generalization of complex numbers, have found, in more recent years, a wealth of applications in a number of different areas which motivated the design of efficient methods for numerically approximating the zeros of quaternionic polynomials. In fact, one can find in the literature recent contributions to this subject based on the use of complex techniques, but numerical methods relying on quaternion arithmetic remain scarce. In this paper we propose a Weierstrass-like method for finding simultaneously all the zeros of unilateral quaternionic polynomials. The convergence analysis and several numerical examples illustrating the performance of the method are also presented.
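
    For orientation, the classical complex-coefficient version of the Weierstrass (Durand-Kerner) iteration is shown below; the paper's contribution is the extension of this scheme to quaternionic arithmetic, which plain numpy cannot represent, so this sketch stays over the complex numbers.

    ```python
    import numpy as np

    def durand_kerner(coeffs, iters=60):
        """Weierstrass/Durand-Kerner simultaneous roots of a monic polynomial."""
        n = len(coeffs) - 1
        roots = (0.4 + 0.9j) ** np.arange(1, n + 1)   # standard starting guesses
        for _ in range(iters):
            for k in range(n):
                others = np.delete(roots, k)
                roots[k] -= (np.polyval(coeffs, roots[k])
                             / np.prod(roots[k] - others))
        return roots

    # x^3 - 6x^2 + 11x - 6 has roots 1, 2, 3.
    print(np.sort_complex(durand_kerner(np.array([1, -6, 11, -6], dtype=complex))))
    ```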

  19. Closed-form estimates of the domain of attraction for nonlinear systems via fuzzy-polynomial models.

    PubMed

    Pitarch, José Luis; Sala, Antonio; Ariño, Carlos Vicente

    2014-04-01

    In this paper, the domain of attraction of the origin of a nonlinear system is estimated in closed form via level sets with polynomial boundaries, iteratively computed. In particular, the domain of attraction is expanded from a previous estimate, such as a classical Lyapunov level set. With the use of fuzzy-polynomial models, the domain of attraction analysis can be carried out via sum of squares optimization and an iterative algorithm. The result is a function that bounds the domain of attraction, free from the usual restriction of being positive and decrescent in all the interior of its level sets.

  20. On the Numerical Formulation of Parametric Linear Fractional Transformation (LFT) Uncertainty Models for Multivariate Matrix Polynomial Problems

    NASA Technical Reports Server (NTRS)

    Belcastro, Christine M.

    1998-01-01

    Robust control system analysis and design is based on an uncertainty description, called a linear fractional transformation (LFT), which separates the uncertain (or varying) part of the system from the nominal system. These models are also useful in the design of gain-scheduled control systems based on Linear Parameter Varying (LPV) methods. Low-order LFT models are difficult to form for problems involving nonlinear parameter variations. This paper presents a numerical computational method for constructing an LFT model for a given LPV model. The method is developed for multivariate polynomial problems and uses simple matrix computations to obtain an exact low-order LFT representation of the given LPV system without the use of model reduction. Although the method is developed for multivariate polynomial problems, multivariate rational problems can also be solved by reformulating the rational problem into polynomial form.

  1. LMI-based stability analysis of fuzzy-model-based control systems using approximated polynomial membership functions.

    PubMed

    Narimani, Mohammand; Lam, H K; Dilmaghani, R; Wolfe, Charles

    2011-06-01

    Relaxed linear-matrix-inequality-based stability conditions for fuzzy-model-based control systems with imperfect premise matching are proposed. First, the derivative of the Lyapunov function, containing the product terms of the fuzzy model and fuzzy controller membership functions, is derived. Then, in the partitioned operating domain of the membership functions, the relations between the state variables and the mentioned product terms are represented by approximated polynomials in each subregion. Next, the stability conditions containing the information of all subsystems and the approximated polynomials are derived. In addition, the concept of the S-procedure is utilized to release the conservativeness caused by considering the whole operating region for approximated polynomials. It is shown that the well-known stability conditions can be special cases of the proposed stability conditions. Simulation examples are given to illustrate the validity of the proposed approach.

  2. Higher-order Fourier analysis over finite fields and applications

    NASA Astrophysics Data System (ADS)

    Hatami, Pooya

    Higher-order Fourier analysis is a powerful tool in the study of problems in additive and extremal combinatorics, for instance the study of arithmetic progressions in primes, where traditional Fourier analysis falls short. In recent years, higher-order Fourier analysis has found multiple applications in computer science in fields such as property testing and coding theory. In this thesis, we develop new tools within this theory, with several new applications such as a characterization theorem in algebraic property testing. One of our main contributions is a strong near-equidistribution result for regular collections of polynomials. The densities of small linear structures in subsets of Abelian groups can be expressed as certain analytic averages involving linear forms. Higher-order Fourier analysis examines such averages by approximating the indicator function of a subset by a function of a bounded number of polynomials. Then, to approximate the average, it suffices to know the joint distribution of the polynomials applied to the linear forms. We prove a near-equidistribution theorem that describes these distributions for the group F_p^n when p is a fixed prime. This fundamental fact was previously known only under various extra assumptions about the linear forms or the field size. We use this near-equidistribution theorem to settle a conjecture of Gowers and Wolf on the true complexity of systems of linear forms. Our next application is towards a characterization of testable algebraic properties. We prove that every locally characterized affine-invariant property of functions f : F_p^n → R with n ∈ N is testable. In fact, we prove that any such property P is proximity-obliviously testable. More generally, we show that any affine-invariant property that is closed under subspace restrictions and has "bounded complexity" is testable. We also prove that any property that can be described as the property of decomposing into a known structure of low-degree polynomials is locally characterized and is, hence, testable. We discuss several notions of regularity which allow us to deduce algorithmic versions of various regularity lemmas for polynomials by Green and Tao and by Kaufman and Lovett. We show that our algorithmic regularity lemmas for polynomials imply algorithmic versions of several results relying on regularity, such as decoding Reed-Muller codes beyond the list-decoding radius (for certain structured errors), and prescribed polynomial decompositions. Finally, motivated by the definition of the Gowers norms, we investigate norms defined by different systems of linear forms. We give necessary conditions on the structure of systems of linear forms that define norms. We prove that such norms can be one of only two types and, assuming that |F_p| is sufficiently large, they are essentially equivalent to either a Gowers norm or an L_p norm.

  3. H0 from cosmic chronometers and Type Ia supernovae, with Gaussian Processes and the novel Weighted Polynomial Regression method

    NASA Astrophysics Data System (ADS)

    Gómez-Valent, Adrià; Amendola, Luca

    2018-04-01

    In this paper we present new constraints on the Hubble parameter H0 using: (i) the available data on H(z) obtained from cosmic chronometers (CCH); (ii) the Hubble rate data points extracted from the Type Ia supernovae (SnIa) of the Pantheon compilation and the Hubble Space Telescope (HST) CANDELS and CLASH Multi-Cycle Treasury (MCT) programs; and (iii) the local HST measurement of H0 provided by Riess et al. (2018), H0(HST) = (73.45 ± 1.66) km/s/Mpc. Various determinations of H0 using the Gaussian processes (GPs) method and the most updated list of CCH data have recently been provided by Yu, Ratra & Wang (2018). Using the Gaussian kernel they find H0 = (67.42 ± 4.75) km/s/Mpc. Here we extend their analysis to also include the most recent and complete set of SnIa data, which allows us to reduce the uncertainty by a factor of ~3 with respect to the result found by considering only the CCH information. We obtain H0 = (67.06 ± 1.68) km/s/Mpc, which again favors the lower range of values for H0 and is in tension with H0(HST). The tension reaches the 2.71σ level. We also round off the GPs determination by taking into account the error propagation of the kernel hyperparameters when the CCH data, with and without H0(HST), are used in the analysis. In addition, we present a novel method to reconstruct functions from data, which consists of a weighted sum of polynomial regressions (WPR). We apply it from a cosmographic perspective to reconstruct H(z) and estimate H0 from CCH and SnIa measurements. The result obtained with this method, H0 = (68.90 ± 1.96) km/s/Mpc, is fully compatible with the GPs ones. Finally, a more conservative GPs+WPR value is also provided, H0 = (68.45 ± 2.00) km/s/Mpc, which is still almost 2σ away from H0(HST).

  4. Comparison of the Performance of Modal Control Schemes for an Adaptive Optics System and Analysis of the Effect of Actuator Limitations

    DTIC Science & Technology

    2012-06-01

    …the open-loop path is established, the feedback system can be treated as a set of SISO feedback loops and a single SISO control law can be applied… Zernike polynomials are commonly referred to by names such as focus, coma, and astigmatism. Zernike polynomials can be transformed into…

  5. Polynomial modal analysis of lamellar diffraction gratings in conical mounting.

    PubMed

    Randriamihaja, Manjakavola Honore; Granet, Gérard; Edee, Kofi; Raniriharinosy, Karyl

    2016-09-01

    An efficient numerical modal method for modeling a lamellar grating in conical mounting is presented. Within each region of the grating, the electromagnetic field is expanded onto Legendre polynomials, which allows us to enforce in an exact manner the boundary conditions that determine the eigensolutions. Our code is successfully validated by comparison with results obtained with the analytical modal method.
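
    As a small illustration of expanding a field profile on Legendre polynomials (a generic least-squares projection, not the paper's eigenproblem machinery), the sketch below uses numpy's Legendre module; the profile f is an invented stand-in for the field within one grating region.

      import numpy as np
      from numpy.polynomial import legendre as L

      x = np.linspace(-1.0, 1.0, 400)
      f = np.exp(-3.0 * x**2) * np.cos(4.0 * x)   # invented stand-in field profile

      coef = L.legfit(x, f, 12)                   # least-squares Legendre coefficients
      recon = L.legval(x, coef)                   # reconstruct from the expansion
      print("max reconstruction error:", float(np.abs(recon - f).max()))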

  6. The Application of Various Nonlinear Models to Describe Academic Growth Trajectories: An Empirical Analysis Using Four-Wave Longitudinal Achievement Data from a Large Urban School District

    ERIC Educational Resources Information Center

    Shin, Tacksoo

    2012-01-01

    This study introduced various nonlinear growth models, including the quadratic conventional polynomial model, the fractional polynomial model, the Sigmoid model, the growth model with negative exponential functions, the multidimensional scaling technique, and the unstructured growth curve model. It investigated which growth models effectively…

  7. Molecular Isotopic Distribution Analysis (MIDAs) with Adjustable Mass Accuracy

    NASA Astrophysics Data System (ADS)

    Alves, Gelio; Ogurtsov, Aleksey Y.; Yu, Yi-Kuo

    2014-01-01

    In this paper, we present Molecular Isotopic Distribution Analysis (MIDAs), a new software tool designed to compute molecular isotopic distributions with adjustable accuracies. MIDAs offers two algorithms, one polynomial-based and one Fourier-transform-based, both of which compute molecular isotopic distributions accurately and efficiently. The polynomial-based algorithm contains a few novel aspects, whereas the Fourier-transform-based algorithm consists mainly of improvements to other existing Fourier-transform-based algorithms. We have benchmarked the performance of the two algorithms implemented in MIDAs with that of eight software packages (BRAIN, Emass, Mercury, Mercury5, NeutronCluster, Qmass, JFC, IC) using a consensus set of benchmark molecules. Under the proposed evaluation criteria, MIDAs's algorithms, JFC, and Emass compute with comparable accuracy the coarse-grained (low-resolution) isotopic distributions and are more accurate than the other software packages. For fine-grained isotopic distributions, we compared IC, MIDAs's polynomial algorithm, and MIDAs's Fourier transform algorithm. Among the three, IC and MIDAs's polynomial algorithm compute isotopic distributions that better resemble their corresponding exact fine-grained (high-resolution) isotopic distributions. MIDAs can be accessed freely through a user-friendly web-interface at http://www.ncbi.nlm.nih.gov/CBBresearch/Yu/midas/index.html.
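
    The polynomial approach can be pictured as multiplying per-element generating functions: each element contributes an abundance polynomial in the nominal-mass offset, and raising it to the atom count is an n-fold convolution. The sketch below is a simplified coarse-grained version of that idea (not MIDAs itself), using standard isotope abundances rounded for illustration, applied to glucose, C6H12O6.

      import numpy as np

      # each element's isotope pattern as a polynomial in the mass-offset variable:
      # index = extra nominal mass units, value = abundance
      C = np.array([0.9893, 0.0107])                 # 12C, 13C
      H = np.array([0.999885, 0.000115])             # 1H, 2H
      O = np.array([0.99757, 0.00038, 0.00205])      # 16O, 17O, 18O

      def power(p, n):
          # n-fold convolution = raising the abundance polynomial to the n-th power
          out = np.array([1.0])
          for _ in range(n):
              out = np.convolve(out, p)
          return out

      dist = np.convolve(np.convolve(power(C, 6), power(H, 12)), power(O, 6))
      for k, a in enumerate(dist[:4]):
          print(f"M+{k}: {a:.4f}")                   # coarse-grained isotopic pattern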

  8. Design of hybrid radial basis function neural networks (HRBFNNs) realized with the aid of hybridization of fuzzy clustering method (FCM) and polynomial neural networks (PNNs).

    PubMed

    Huang, Wei; Oh, Sung-Kwun; Pedrycz, Witold

    2014-12-01

    In this study, we propose Hybrid Radial Basis Function Neural Networks (HRBFNNs) realized with the aid of a fuzzy clustering method (Fuzzy C-Means, FCM) and polynomial neural networks. Fuzzy clustering, used to form information granules, is employed to overcome a possible curse of dimensionality, while the polynomial neural network is utilized to build local models. Furthermore, a genetic algorithm (GA) is exploited here to optimize the essential design parameters of the network, including the fuzzification coefficient, the number of input polynomial fuzzy neurons (PFNs), and the specific subset of input PFNs. To reduce the dimensionality of the input space, principal component analysis (PCA) is considered as a sound preprocessing vehicle. The performance of the HRBFNNs is quantified through a series of experiments in which we use several modeling benchmarks of different levels of complexity (different numbers of input variables and amounts of available data). A comparative analysis reveals that the proposed HRBFNNs exhibit higher accuracy in comparison to the accuracy produced by some models reported previously in the literature. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Statistics of Data Fitting: Flaws and Fixes of Polynomial Analysis of Channeled Spectra

    NASA Astrophysics Data System (ADS)

    Karstens, William; Smith, David

    2013-03-01

    Starting from general statistical principles, we have critically examined Baumeister's procedure* for determining the refractive index of thin films from channeled spectra. Briefly, the method assumes that the index and interference fringe order may be approximated by polynomials quadratic and cubic in photon energy, respectively. The coefficients of the polynomials are related by differentiation, which is equivalent to comparing energy differences between fringes. However, we find that when the fringe order is calculated from the published IR index for silicon* and then analyzed with Baumeister's procedure, the results do not reproduce the original index. This problem has been traced to (1) the use of unphysical powers in the polynomials (e.g., time-reversal invariance requires that the index is an even function of photon energy), and (2) the use of insufficient terms of the correct parity. Exclusion of unphysical terms and addition of quartic and quintic terms to the index and order polynomials yields significantly better fits with fewer parameters. This represents a specific example of using statistics to determine if the assumed fitting model adequately captures the physics contained in experimental data. The use of analysis of variance (ANOVA) and the Durbin-Watson statistic to test criteria for the validity of least-squares fitting will be discussed. *D.F. Edwards and E. Ochoa, Appl. Opt. 19, 4130 (1980). Supported in part by the US Department of Energy, Office of Nuclear Physics under contract DE-AC02-06CH11357.
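
    The parity constraint amounts to fitting the index with an even-power design matrix only; a minimal sketch with synthetic data (the coefficients and noise level are invented for illustration):

      import numpy as np

      E = np.linspace(0.1, 1.0, 40)                   # photon energy, arbitrary units
      n_true = 3.42 + 0.05 * E**2 + 0.01 * E**4       # even function of energy
      n_obs = n_true + np.random.default_rng(0).normal(0, 1e-4, E.size)

      A_even = np.vstack([E**0, E**2, E**4]).T        # even-parity design matrix
      c, *_ = np.linalg.lstsq(A_even, n_obs, rcond=None)
      print("fitted even coefficients:", c)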

  11. Effects of homogenization process parameters on physicochemical properties of astaxanthin nanodispersions prepared using a solvent-diffusion technique

    PubMed Central

    Anarjan, Navideh; Jafarizadeh-Malmiri, Hoda; Nehdi, Imededdine Arbi; Sbihi, Hassen Mohamed; Al-Resayes, Saud Ibrahim; Tan, Chin Ping

    2015-01-01

    Nanodispersion systems allow incorporation of lipophilic bioactives, such as astaxanthin (a fat soluble carotenoid) into aqueous systems, which can improve their solubility, bioavailability, and stability, and widen their uses in water-based pharmaceutical and food products. In this study, response surface methodology was used to investigate the influences of homogenization time (0.5–20 minutes) and speed (1,000–9,000 rpm) in the formation of astaxanthin nanodispersions via the solvent-diffusion process. The product was characterized for particle size and astaxanthin concentration using laser diffraction particle size analysis and high performance liquid chromatography, respectively. Relatively high determination coefficients (ranging from 0.896 to 0.969) were obtained for all suggested polynomial regression models. The overall optimal homogenization conditions were determined by multiple response optimization analysis to be 6,000 rpm for 7 minutes. In vitro cellular uptake of astaxanthin from the suggested individual and multiple optimized astaxanthin nanodispersions was also evaluated. The cellular uptake of astaxanthin was found to be considerably increased (by more than five times) as it became incorporated into optimum nanodispersion systems. The lack of a significant difference between predicted and experimental values confirms the suitability of the regression equations connecting the response variables studied to the independent parameters. PMID:25709435

  12. An empirical analysis of the quantitative effect of data when fitting quadratic and cubic polynomials

    NASA Technical Reports Server (NTRS)

    Canavos, G. C.

    1974-01-01

    A study is made of the extent to which the size of the sample affects the accuracy of a quadratic or a cubic polynomial approximation of an experimentally observed quantity, and the trend with regard to improvement in the accuracy of the approximation as a function of sample size is established. The task is made possible through a simulated analysis carried out by the Monte Carlo method in which data are simulated by using several transcendental or algebraic functions as models. Contaminated data of varying amounts are fitted to either quadratic or cubic polynomials, and the behavior of the mean-squared error of the residual variance is determined as a function of sample size. Results indicate that the effect of the size of the sample is significant only for relatively small sizes and diminishes drastically for moderate and large amounts of experimental data.

  13. A weighted least squares estimation of the polynomial regression model on paddy production in the area of Kedah and Perlis

    NASA Astrophysics Data System (ADS)

    Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd

    2017-08-01

    The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear and unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and would give highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
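
    A minimal weighted-least-squares polynomial fit, assuming weights proportional to the inverse error variance of each observation; the data below are invented for illustration:

      import numpy as np

      x = np.linspace(0, 10, 30)
      rng = np.random.default_rng(1)
      sigma = 0.5 + 0.3 * x                            # heteroscedastic noise levels
      y = 2.0 + 1.5 * x - 0.1 * x**2 + rng.normal(0, sigma)

      w = 1.0 / sigma**2                               # weight = 1 / variance
      X = np.vstack([x**0, x, x**2]).T                 # quadratic polynomial design
      W = np.diag(w)
      beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y) # (X'WX)^{-1} X'Wy
      print("WLS coefficients:", beta)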

  14. Multiresponse semiparametric regression for modelling the effect of regional socio-economic variables on the use of information technology

    NASA Astrophysics Data System (ADS)

    Wibowo, Wahyu; Wene, Chatrien; Budiantara, I. Nyoman; Permatasari, Erma Oktania

    2017-03-01

    Multiresponse semiparametric regression is a simultaneous-equation regression model and a fusion of parametric and nonparametric models. The regression model comprises several equations, and each has two components, one parametric and one nonparametric. The model used here has a linear function as the parametric component and a truncated polynomial spline as the nonparametric component. The model can handle both linear and nonlinear relationships between the responses and the sets of predictor variables. The aim of this paper is to demonstrate the application of the regression model to modelling the effect of regional socio-economic variables on the use of information technology. More specifically, the response variables are the percentage of households with access to the internet and the percentage of households with a personal computer, and the predictor variables are the percentage of literate people, the percentage of electrification and the percentage of economic growth. Based on identification of the relationships between the response and predictor variables, economic growth is treated as a nonparametric predictor and the others as parametric predictors. The results show that multiresponse semiparametric regression applies well here, as indicated by the high coefficient of determination, 90 percent.
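
    A sketch of one such semiparametric design for a single response: linear terms for the parametric predictors plus a truncated polynomial spline basis in the nonparametric one. The variable values, knot locations and coefficients are invented stand-ins for the survey data.

      import numpy as np

      rng = np.random.default_rng(2)
      literacy, electrification = rng.uniform(60, 100, 50), rng.uniform(50, 100, 50)
      growth = rng.uniform(2, 9, 50)                   # the nonparametric predictor
      y = (0.3 * literacy + 0.2 * electrification
           + 5 * np.maximum(growth - 5, 0) + rng.normal(0, 1, 50))

      knots = [4.0, 5.0, 6.0]
      spline = [np.maximum(growth - k, 0) for k in knots]   # degree-1 truncated basis
      X = np.vstack([np.ones(50), literacy, electrification, growth] + spline).T
      beta, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("coefficients:", beta.round(3))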

  15. Probing baryogenesis through the Higgs boson self-coupling

    NASA Astrophysics Data System (ADS)

    Reichert, M.; Eichhorn, A.; Gies, H.; Pawlowski, J. M.; Plehn, T.; Scherer, M. M.

    2018-04-01

    The link between a modified Higgs self-coupling and the strong first-order phase transition necessary for baryogenesis is well explored for polynomial extensions of the Higgs potential. We broaden this argument beyond leading polynomial expansions of the Higgs potential to higher polynomial terms and to nonpolynomial Higgs potentials. For our quantitative analysis we resort to the functional renormalization group, which allows us to evolve the full Higgs potential to higher scales and finite temperature. In all cases we find that a strong first-order phase transition manifests itself in an enhancement of the Higgs self-coupling by at least 50%, implying that such modified Higgs potentials should be accessible at the LHC.

  16. Asymptotic analysis of the density of states in random matrix models associated with a slowly decaying weight

    NASA Astrophysics Data System (ADS)

    Kuijlaars, A. B. J.

    2001-08-01

    The asymptotic behavior of polynomials that are orthogonal with respect to a slowly decaying weight is very different from the asymptotic behavior of polynomials that are orthogonal with respect to a Freud-type weight. While the latter has been extensively studied, much less is known about the former. Following an earlier investigation into the zero behavior, we study here the asymptotics of the density of states in a unitary ensemble of random matrices with a slowly decaying weight. This measure is also naturally connected with the orthogonal polynomials. It is shown that, after suitable rescaling, the weak limit is the same as the weak limit of the rescaled zeros.

  17. Genetic analysis of longevity in Dutch dairy cattle using random regression.

    PubMed

    van Pelt, M L; Meuwissen, T H E; de Jong, G; Veerkamp, R F

    2015-06-01

    Longevity, productive life, or lifespan of dairy cattle is an important trait for dairy farmers, and it is defined as the time from first calving to the last test date for milk production. Methods for genetic evaluations need to account for censored data; that is, records from cows that are still alive. The aim of this study was to investigate whether these methods also need to take account of survival being genetically a different trait across the entire lifespan of a cow. The data set comprised 112,000 cows with a total of 3,964,449 observations for survival per month from first calving until 72 mo in productive life. A random regression model with second-order Legendre polynomials was fitted for the additive genetic effect. Alternative parameterizations were (1) different trait definitions for the length of time interval for survival after first calving (1, 3, 6, and 12 mo); (2) linear or threshold model; and (3) differing the order of the Legendre polynomial. The partial derivatives of a profit function were used to transform variance components on the survival scale to those for lifespan. Survival rates were higher in early life than later in life (99 vs. 95%). When survival was defined over 12-mo intervals survival curves were smooth compared with curves when 1-, 3-, or 6-mo intervals were used. Heritabilities in each interval were very low and ranged from 0.002 to 0.031, but the heritability for lifespan over the entire period of 72 mo after first calving ranged from 0.115 to 0.149. Genetic correlations between time intervals ranged from 0.25 to 1.00. Genetic parameters and breeding values for the genetic effect were more sensitive to the trait definition than to whether a linear or threshold model was used or to the order of Legendre polynomial used. Cumulative survival up to the first 6 mo predicted lifespan with an accuracy of only 0.79 to 0.85; that is, reliability of breeding value with many daughters in the first 6 mo can be, at most, 0.62 to 0.72, and changes of breeding values are still expected when daughters are getting older. Therefore, an improved model for genetic evaluation should treat survival as different traits during the lifespan by splitting lifespan in time intervals of 6 mo or less to avoid overestimated reliabilities and changes in breeding values when daughters are getting older. Copyright © 2015 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  18. Poly-Frobenius-Euler polynomials

    NASA Astrophysics Data System (ADS)

    Kurt, Burak

    2017-07-01

    Hamahata [3] defined the poly-Euler polynomials and the generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this, we define the poly-Frobenius-Euler polynomials. We give some relations for these polynomials. We also prove relationships between the poly-Frobenius-Euler polynomials and the Stirling numbers of the second kind.

  19. Sequential experimental design based generalised ANOVA

    NASA Astrophysics Data System (ADS)

    Chakraborty, Souvik; Chowdhury, Rajib

    2016-07-01

    Over the last decade, surrogate modelling techniques have gained wide popularity in the fields of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and on regression/interpolation to generate the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component functions using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting the probability of failure of three structural mechanics problems. It is observed that the proposed approach yields an accurate and computationally efficient estimate of the failure probability.

  1. Modeling and optimization of red currants vacuum drying process by response surface methodology (RSM).

    PubMed

    Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir

    2016-07-15

    Fresh red currants were dried by a vacuum drying process under different drying conditions. A Box-Behnken experimental design with response surface methodology was used for optimization of the drying process in terms of the physical (moisture content, water activity, total color change, firmness and rehydration power) and chemical (total phenols, total flavonoids, monomeric anthocyanins, ascorbic acid content and antioxidant activity) properties of the dried samples. Temperature (48-78 °C), pressure (30-330 mbar) and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model, where regression analysis and analysis of variance were used to determine model fitness and optimal drying conditions. The optimal conditions of the simultaneously optimized responses were a temperature of 70.2 °C, a pressure of 39 mbar and a drying time of 8 h. It could be concluded that vacuum drying provides samples with good physico-chemical properties, similar to the lyophilized sample and better than the conventionally dried sample. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Spectra of normal and nutrient-deficient maize leaves

    NASA Technical Reports Server (NTRS)

    Al-Abbas, A. H.; Barr, R.; Hall, J. D.; Crane, F. L.; Baumgardner, M. F.

    1973-01-01

    Reflectance, transmittance and absorptance spectra of normal and six types of nutrient-deficient (N, P, K, S, Mg, and Ca) maize (Zea mays L.) leaves were analyzed at 30 selected wavelengths from 500 to 2600 nm. The analysis of variance showed significant differences in reflectance, transmittance and absorptance in the visible wavelengths among leaf numbers 3, 4, and 5, among the seven treatments, and among the interactions of leaf number and treatment. In the infrared wavelengths only treatments produced significant differences. The chlorophyll content of leaves was reduced in all nutrient-deficient treatments. Percent moisture was increased in the S-, Mg-, and N-deficiencies. Polynomial regression analysis of leaf thickness and leaf moisture content showed that these two variables were significantly and directly related. Leaves from the P- and Ca-deficient plants absorbed less energy in the near infrared than the normal plants; S-, Mg-, K-, and N-deficient leaves absorbed more than the normal. Both S- and N-deficient leaves had higher temperatures than normal maize leaves.

  3. Modeling the North American vertical datum of 1988 errors in the conterminous United States

    NASA Astrophysics Data System (ADS)

    Li, X.

    2018-02-01

    A large systematic difference (ranging from -20 cm to +130 cm) was found between NAVD 88 (North American Vertical Datum of 1988) and pure gravimetric geoid models. This difference not only makes it very difficult to augment the local geoid model by directly using the vast NAVD 88 network with state-of-the-art technologies recently developed in geodesy, but also limits the ability of researchers to effectively demonstrate geoid model improvements on the NAVD 88 network. Here, both conventional regression analyses based on various predefined basis functions, such as polynomials, B-splines, and Legendre functions, and Latent Variable Analysis (LVA), such as Factor Analysis (FA), are used to analyze the systematic difference. Besides giving a mathematical model, the regression results do not reveal a great deal about the physical reasons that caused the large differences in NAVD 88, which may be of interest to various researchers. Furthermore, there is still a significant amount of non-Gaussian signal left in the residuals of the conventional regression models. On the other hand, the FA method not only provides a better fit to the data, but also offers possible explanations of the error sources. Without requiring extra hypothesis tests on the model coefficients, the results from FA are more efficient in terms of capturing the systematic difference. Furthermore, without using a covariance model, a novel interpolating method based on the relationship between the loading matrix and the factor scores is developed for predictive purposes. The prediction error analysis shows that about 3-7 cm precision is expected in NAVD 88 after removing the systematic difference.

  4. Melamine detection by mid- and near-infrared (MIR/NIR) spectroscopy: a quick and sensitive method for dairy products analysis including liquid milk, infant formula, and milk powder.

    PubMed

    Balabin, Roman M; Smirnov, Sergey V

    2011-07-15

    Melamine (2,4,6-triamino-1,3,5-triazine) is a nitrogen-rich chemical implicated in the pet and human food recalls and in the global food safety scares involving milk products. Due to the serious health concerns associated with melamine consumption and the extensive scope of affected products, rapid and sensitive methods to detect melamine's presence are essential. We propose the use of spectroscopy data, produced in particular by near-infrared (near-IR/NIR) and mid-infrared (mid-IR/MIR) spectroscopies, for melamine detection in complex dairy matrixes. None of the IR-based methods for melamine detection reported to date has unambiguously shown wide applicability to different dairy products together with a limit of detection (LOD) below 1 ppm on an independent sample set. It was found that infrared spectroscopy is an effective tool for detecting melamine in dairy products, such as infant formula, milk powder, or liquid milk. A LOD below 1 ppm (0.76±0.11 ppm) can be reached if a correct spectrum preprocessing (pretreatment) technique and a correct multivariate data analysis (MDA) algorithm (partial least squares regression (PLS), polynomial PLS (Poly-PLS), artificial neural network (ANN), support vector regression (SVR), or least squares support vector machine (LS-SVM)) are used for spectrum analysis. The relationship between the MIR/NIR spectrum of milk products and melamine content is nonlinear. Thus, nonlinear regression methods are needed to correctly predict the triazine-derivative content of milk products. It can be concluded that mid- and near-infrared spectroscopy can be regarded as a quick, sensitive, robust, and low-cost method for liquid milk, infant formula, and milk powder analysis. Copyright © 2011 Elsevier B.V. All rights reserved.
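
    A toy version of the linear baseline in such calibrations (plain PLS rather than the nonlinear Poly-PLS or LS-SVM variants), assuming scikit-learn is available; the "spectra" are synthetic, with one Gaussian band scaling with concentration.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      rng = np.random.default_rng(6)
      wn = np.linspace(1000, 1800, 200)                # wavenumber grid, cm-1
      conc = rng.uniform(0, 10, 60)                    # melamine level, ppm (invented)
      band = np.exp(-0.5 * ((wn - 1550) / 20) ** 2)    # stand-in absorption band
      X = np.outer(conc, band) + rng.normal(0, 0.05, (60, 200))

      pls = PLSRegression(n_components=5).fit(X, conc)
      print("calibration R^2:", round(pls.score(X, conc), 3))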

  5. Viewing the Roots of Polynomial Functions in Complex Variable: The Use of Geogebra and the CAS Maple

    ERIC Educational Resources Information Center

    Alves, Francisco Regis Vieira

    2013-01-01

    Admittedly, the Fundamental Theorem of Algebra (TFA) holds an important role in Complex Analysis (CA), as well as in other mathematical branches. In this article, we present a discussion of the TFA, Rouché's theorem and the winding number, with the intention of analyzing the roots of a polynomial equation. We also propose a description for a…

  6. Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos

    DTIC Science & Technology

    2001-09-11

    Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc...The Askey scheme, which is represented as a tree structure in Figure 1 (following [24]), classifies the hypergeometric orthogonal polynomials and indicates the limit relations between them. [Figure 1: The Askey scheme of orthogonal polynomials] The orthogonal polynomials associated with the generalized polynomial chaos,

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jakeman, John D.; Narayan, Akil; Zhou, Tao

    We propose an algorithm for recovering sparse orthogonal polynomial expansions via collocation. A standard sampling approach for recovering sparse polynomials uses Monte Carlo sampling, from the density of orthogonality, which results in poor function recovery when the polynomial degree is high. Our proposed approach aims to mitigate this limitation by sampling with respect to the weighted equilibrium measure of the parametric domain, and subsequently solves a preconditioned ℓ1-minimization problem, where the weights of the diagonal preconditioning matrix are given by evaluations of the Christoffel function. Our algorithm can be applied to a wide class of orthogonal polynomial families on bounded and unbounded domains, including all classical families. We present theoretical analysis to motivate the algorithm and numerical results showing that our method is superior to standard Monte Carlo methods in many situations of interest. Numerical examples are also provided to demonstrate that our proposed algorithm leads to comparable or improved accuracy even when compared with Legendre- and Hermite-specific algorithms.

  8. Non-Abelian integrable hierarchies: matrix biorthogonal polynomials and perturbations

    NASA Astrophysics Data System (ADS)

    Ariznabarreta, Gerardo; García-Ardila, Juan C.; Mañas, Manuel; Marcellán, Francisco

    2018-05-01

    In this paper, Geronimus–Uvarov perturbations for matrix orthogonal polynomials on the real line are studied and then applied to the analysis of non-Abelian integrable hierarchies. The orthogonality is understood in full generality, i.e. in terms of a nondegenerate continuous sesquilinear form, determined by a quasidefinite matrix of bivariate generalized functions with a well-defined support. We derive Christoffel-type formulas that give the perturbed matrix biorthogonal polynomials and their norms in terms of the original ones. The keystone for this finding is the Gauss–Borel factorization of the Gram matrix. Geronimus–Uvarov transformations are considered in the context of the 2D non-Abelian Toda lattice and noncommutative KP hierarchies. The interplay between transformations and integrable flows is discussed. Miwa shifts, τ-ratio matrix functions and Sato formulas are given. Bilinear identities involving Geronimus–Uvarov transformations are found: first for the Baker functions, then for the biorthogonal polynomials and their second-kind functions, and finally for the τ-ratio matrix functions.

  9. Eye aberration analysis with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

    New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurements of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, requiring these data to be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The criterion of optimization is the closest proximity of the resulting continuous surface to the values calculated for the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of transverse aberrations; in particular, the values of the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients recalculated, and the value of RMSD computed, with optimization finishing at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. Results are illustrated by experimental data that could be of interest for other applications where detailed evaluation of eye parameters is needed.
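
    A minimal sketch of fitting low-order aberrations on the unit pupil with a Zernike-like Cartesian basis (piston, tilts, defocus, astigmatism); the wavefront data and true coefficients are synthetic, and normalization constants are omitted.

      import numpy as np

      rng = np.random.default_rng(3)
      x, y = rng.uniform(-1, 1, (2, 500))
      m = x**2 + y**2 <= 1.0                          # keep points inside the pupil
      x, y = x[m], y[m]

      basis = np.vstack([np.ones_like(x), x, y,       # piston, tilt x, tilt y
                         2 * (x**2 + y**2) - 1,       # defocus
                         x**2 - y**2, 2 * x * y]).T   # astigmatism terms
      w_true = np.array([0.0, 0.1, -0.05, 0.3, 0.12, -0.07])
      wavefront = basis @ w_true + rng.normal(0, 0.01, x.size)

      coef, *_ = np.linalg.lstsq(basis, wavefront, rcond=None)
      rmsd = np.sqrt(np.mean((basis @ coef - wavefront) ** 2))
      print("coefficients:", coef.round(3), " RMSD:", round(rmsd, 4))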

  10. Modeling lactation curves and estimation of genetic parameters in Holstein cows using multiple-trait random regression models.

    PubMed

    Kheirabadi, Khabat; Rashidi, Amir; Alijani, Sadegh; Imumorin, Ikhide

    2014-11-01

    We compared the goodness of fit of three mathematical functions (including: Legendre polynomials, Lidauer-Mäntysaari function and Wilmink function) for describing the lactation curve of primiparous Iranian Holstein cows by using multiple-trait random regression models (MT-RRM). Lactational submodels provided the largest daily additive genetic (AG) and permanent environmental (PE) variance estimates at the end and at the onset of lactation, respectively, as well as low genetic correlations between peripheral test-day records. For all models, heritability estimates were highest at the end of lactation (245 to 305 days) and ranged from 0.05 to 0.26, 0.03 to 0.12 and 0.04 to 0.24 for milk, fat and protein yields, respectively. Generally, the genetic correlations between traits depend on how far apart they are or whether they are on the same day in any two traits. On average, genetic correlations between milk and fat were the lowest and those between fat and protein were intermediate, while those between milk and protein were the highest. Results from all criteria (Akaike's and Schwarz's Bayesian information criterion, and -2*logarithm of the likelihood function) suggested that a model with 2 and 5 coefficients of Legendre polynomials for AG and PE effects, respectively, was the most adequate for fitting the data. © 2014 Japanese Society of Animal Science.
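
    A concrete ingredient shared by these random regression models is the Legendre covariable matrix: ages are standardized to [-1, 1] and evaluated in a Legendre basis, one row per test day. The sketch below uses illustrative test days and order, and unscaled rather than normalized Legendre polynomials.

      import numpy as np
      from numpy.polynomial import legendre as L

      dim = np.array([5, 35, 65, 95, 155, 215, 275, 305])   # days in milk (illustrative)
      t = 2.0 * (dim - dim.min()) / (dim.max() - dim.min()) - 1.0

      order = 2                                             # quadratic Legendre basis
      Phi = L.legvander(t, order)                           # random-regression covariables
      print(Phi.round(3))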

  11. Genetic evaluation of weekly body weight in Japanese quail using random regression models.

    PubMed

    Karami, K; Zerehdaran, S; Tahmoorespur, M; Barzanooni, B; Lotfi, E

    2017-02-01

    1. A total of 11 826 records from 2489 quails, hatched between 2012 and 2013, were used to estimate genetic parameters for BW (body weight) of Japanese quail using random regression models. Weekly BW was measured from hatch until 49 d of age. WOMBAT software (University of New England, Australia) was used for estimating genetic and phenotypic parameters. 2. Nineteen models were evaluated to identify the best orders of Legendre polynomials. A model with a Legendre polynomial of order 3 for the additive genetic effect, order 3 for permanent environmental effects and order 1 for maternal permanent environmental effects was chosen as the best model. 3. According to the best model, phenotypic and genetic variances were higher at the end of the rearing period. Although direct heritability for BW decreased from 0.18 at hatch to 0.12 at 7 d of age, it gradually increased to 0.42 at 49 d of age. This indicates that BW at older ages is more strongly controlled by genetic components in Japanese quail. 4. Phenotypic and genetic correlations between adjacent periods, except for hatching weight, were higher than those between remote periods. The present results suggest that BW at earlier ages, especially at hatch, is a different trait compared with BW at older ages. Therefore, BW at earlier ages could not be used as a selection criterion for improving BW at slaughter age.

  12. Genetic Parameters for Milk Yield and Lactation Persistency Using Random Regression Models in Girolando Cattle

    PubMed Central

    Canaza-Cayo, Ali William; Lopes, Paulo Sávio; da Silva, Marcos Vinicius Gualberto Barbosa; de Almeida Torres, Robledo; Martins, Marta Fonseca; Arbex, Wagner Antonio; Cobuci, Jaime Araujo

    2015-01-01

    A total of 32,817 test-day milk yield (TDMY) records of the first lactation of 4,056 Girolando cows, daughters of 276 sires, collected from 118 herds between 2000 and 2011, were utilized to estimate the genetic parameters for TDMY via random regression models (RRM) using Legendre's polynomial functions whose orders varied from 3 to 5. In addition, nine measures of persistency in milk yield (PSi) and the genetic trend of 305-day milk yield (305MY) were evaluated. The fit quality criteria used indicated RRM employing the Legendre's polynomial of orders 3 and 5 for fitting the genetic additive and permanent environment effects, respectively, as the best model. The heritability and genetic correlation for TDMY throughout the lactation, obtained with the best model, varied from 0.18 to 0.23 and from −0.03 to 1.00, respectively. The heritability and genetic correlation for persistency and 305MY varied from 0.10 to 0.33 and from −0.98 to 1.00, respectively. The use of PS7 would be the most suitable option for the evaluation of Girolando cattle. The estimated breeding values for 305MY of sires and cows showed significant and positive genetic trends. Thus, the use of selection indices would be indicated in the genetic evaluation of Girolando cattle for both traits. PMID:26323397

  13. Reachability Analysis in Probabilistic Biological Networks.

    PubMed

    Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2015-01-01

    Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.

  14. Efficient computer algebra algorithms for polynomial matrices in control design

    NASA Technical Reports Server (NTRS)

    Baras, J. S.; Macenany, D. C.; Munach, R.

    1989-01-01

    The theory of polynomial matrices plays a key role in the design and analysis of multi-input multi-output control and communications systems using frequency domain methods. Examples include coprime factorizations of transfer functions, canonical realizations from matrix fraction descriptions, and the transfer function design of feedback compensators. Typically, such problems abstract in a natural way to the need to solve systems of Diophantine equations or systems of linear equations over polynomials. These and other problems involving polynomial matrices can in turn be reduced to polynomial matrix triangularization procedures, a result which is not surprising given the importance of matrix triangularization techniques in numerical linear algebra. Matrices with entries from a field, together with Gaussian elimination, play a fundamental role in understanding the triangularization process. In the case of polynomial matrices, the entries come from a ring, Gaussian elimination is not defined, and triangularization is accomplished by what is quite properly called Euclidean elimination. Unfortunately, the numerical stability and sensitivity issues which accompany floating point approaches to Euclidean elimination are not very well understood. New algorithms are presented which circumvent such numerical issues entirely through the use of exact, symbolic methods in computer algebra. The use of such error-free algorithms guarantees that the results are accurate to within the precision of the model data, the best that can be hoped for. Care must be taken in the design of such algorithms due to the phenomenon of intermediate expression swell.

  15. Multiple-trait random regression models for the estimation of genetic parameters for milk, fat, and protein yield in buffaloes.

    PubMed

    Borquis, Rusbel Raul Aspilcueta; Neto, Francisco Ribeiro de Araujo; Baldi, Fernando; Hurtado-Lugo, Naudin; de Camargo, Gregório M F; Muñoz-Berrocal, Milthon; Tonhati, Humberto

    2013-09-01

    In this study, genetic parameters for test-day milk, fat, and protein yield were estimated for the first lactation. The data analyzed consisted of 1,433 first lactations of Murrah buffaloes, daughters of 113 sires from 12 herds in the state of São Paulo, Brazil, with calvings from 1985 to 2007. Ten monthly classes of lactation days were considered for the test-day yields. The (co)variance components for the 3 traits were estimated by random regression analyses using Bayesian inference, applying an animal model via Gibbs sampling. The contemporary groups were defined as herd-year-month of the test day. In the model, the random effects were additive genetic, permanent environment, and residual. The fixed effects were contemporary group and number of milkings (1 or 2), the linear and quadratic effects of the covariable age of the buffalo at calving, as well as the mean lactation curve of the population, which was modeled by orthogonal Legendre polynomials of fourth order. The random effects for the traits studied were modeled by Legendre polynomials of third and fourth order for additive genetic and permanent environment, respectively; the residual variances were modeled considering 4 residual classes. The heritability estimates for the traits were moderate (from 0.21-0.38), with higher estimates in the intermediate lactation phase. The genetic correlation estimates within and among the traits varied from 0.05 to 0.99. The results indicate that selection for any test-day trait will result in an indirect genetic gain in milk, fat, and protein yield in all periods of the lactation curve. The accuracy associated with estimated breeding values obtained using multi-trait random regression was slightly higher (around 8%) compared with single-trait random regression. This difference may be due to the greater amount of information available per animal. Copyright © 2013 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. Tensile stress-strain behavior of graphite/epoxy laminates

    NASA Technical Reports Server (NTRS)

    Garber, D. P.

    1982-01-01

    The tensile stress-strain behavior of a variety of graphite/epoxy laminates was examined. Longitudinal and transverse specimens from eleven different layups were monotonically loaded in tension to failure. Ultimate strength, ultimate strain, and stress-strain curves were obtained from four replicate tests in each case. Polynomial equations were fitted by the method of least squares to the stress-strain data to determine average curves. Values of Young's modulus and Poisson's ratio, derived from the polynomial coefficients, were compared with laminate analysis results. While the polynomials appeared to fit the stress-strain data accurately in most cases, the use of polynomial coefficients to calculate elastic moduli appeared to be of questionable value in cases involving sharp changes in the slope of the stress-strain data or extensive scatter.

  17. Event-Triggered Fault Detection of Nonlinear Networked Systems.

    PubMed

    Li, Hongyi; Chen, Ziran; Wu, Ligang; Lam, Hak-Keung; Du, Haiping

    2017-04-01

    This paper investigates the problem of fault detection for nonlinear discrete-time networked systems under an event-triggered scheme. A polynomial fuzzy fault detection filter is designed to generate a residual signal and detect faults in the system. A novel polynomial event-triggered scheme is proposed to determine the transmission of the signal. A fault detection filter is designed to guarantee that the residual system is asymptotically stable and satisfies the desired performance. Polynomial approximated membership functions obtained by Taylor series are employed for filtering analysis. Furthermore, sufficient conditions are represented in terms of sum of squares (SOSs) and can be solved by SOS tools in MATLAB environment. A numerical example is provided to demonstrate the effectiveness of the proposed results.

  18. Homogenous polynomially parameter-dependent H∞ filter designs of discrete-time fuzzy systems.

    PubMed

    Zhang, Huaguang; Xie, Xiangpeng; Tong, Shaocheng

    2011-10-01

    This paper proposes a novel H(∞) filtering technique for a class of discrete-time fuzzy systems. First, a novel kind of fuzzy H(∞) filter, which is homogenous polynomially parameter dependent on membership functions with an arbitrary degree, is developed to guarantee the asymptotic stability and a prescribed H(∞) performance of the filtering error system. Second, relaxed conditions for H(∞) performance analysis are proposed by using a new fuzzy Lyapunov function and the Finsler lemma with homogenous polynomial matrix Lagrange multipliers. Then, based on a new kind of slack variable technique, relaxed linear matrix inequality-based H(∞) filtering conditions are proposed. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed approach.

  19. Empirical performance of interpolation techniques in risk-neutral density (RND) estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, H.; Abdullah, M. H.

    2017-03-01

    The objective of this study is to evaluate the empirical performance of interpolation techniques in risk-neutral density (RND) estimation. First, the empirical performance is evaluated using statistical analysis based on the implied mean and the implied variance of the RND. Second, the interpolation performance is measured based on pricing error. We propose using the leave-one-out cross-validation (LOOCV) pricing error for interpolation selection purposes. The statistical analyses indicate that there are statistical differences between the interpolation techniques: second-order polynomial, fourth-order polynomial and smoothing spline. The LOOCV pricing-error results show that fourth-order polynomial interpolation provides the best fit to option prices, with the lowest error.
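
    A minimal LOOCV sketch for choosing between polynomial orders, applied here to an implied-volatility smile rather than option prices directly; the strikes and volatilities below are invented.

      import numpy as np

      K = np.array([80, 85, 90, 95, 100, 105, 110, 115, 120.0])       # strikes
      iv = np.array([0.32, 0.29, 0.26, 0.24, 0.23, 0.235, 0.25, 0.27, 0.30])

      def loocv_error(deg):
          errs = []
          for i in range(len(K)):
              keep = np.arange(len(K)) != i          # leave observation i out
              coef = np.polyfit(K[keep], iv[keep], deg)
              errs.append((np.polyval(coef, K[i]) - iv[i]) ** 2)
          return np.mean(errs)

      for deg in (2, 4):
          print(f"degree {deg}: LOOCV MSE = {loocv_error(deg):.2e}")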

  20. Comparison between splines and fractional polynomials for multivariable model building with continuous covariates: a simulation study with continuous response.

    PubMed

    Binder, Harald; Sauerbrei, Willi; Royston, Patrick

    2013-06-15

    In observational studies, many continuous or categorical covariates may be related to an outcome. Various spline-based procedures or the multivariable fractional polynomial (MFP) procedure can be used to identify important variables and functional forms for continuous covariates. This is the main aim of an explanatory model, as opposed to a model only for prediction. The type of analysis often guides the complexity of the final model. Spline-based procedures and MFP have tuning parameters for choosing the required complexity. To compare model selection approaches, we perform a simulation study in the linear regression context based on a data structure intended to reflect realistic biomedical data. We vary the sample size, variance explained and complexity parameters for model selection. We consider 15 variables. A sample size of 200 (1000) and R^2 = 0.2 (0.8) is the scenario with the smallest (largest) amount of information. For assessing performance, we consider prediction error, correct and incorrect inclusion of covariates, qualitative measures for judging selected functional forms and further novel criteria. From limited information, a suitable explanatory model cannot be obtained. Prediction performance from all types of models is similar. With a medium amount of information, MFP performs better than splines on several criteria. MFP better recovers simpler functions, whereas splines better recover more complex functions. For a large amount of information and no local structure, MFP and the spline procedures often select similar explanatory models. Copyright © 2012 John Wiley & Sons, Ltd.

  1. Model-assisted probability of detection of flaws in aluminum blocks using polynomial chaos expansions

    NASA Astrophysics Data System (ADS)

    Du, Xiaosong; Leifsson, Leifur; Grandin, Robert; Meeker, William; Roberts, Ronald; Song, Jiming

    2018-04-01

    Probability of detection (POD) is widely used for measuring reliability of nondestructive testing (NDT) systems. Typically, POD is determined experimentally, while it can be enhanced by utilizing physics-based computational models in combination with model-assisted POD (MAPOD) methods. With the development of advanced physics-based methods, such as ultrasonic NDT testing, the empirical information, needed for POD methods, can be reduced. However, performing accurate numerical simulations can be prohibitively time-consuming, especially as part of stochastic analysis. In this work, stochastic surrogate models for computational physics-based measurement simulations are developed for cost savings of MAPOD methods while simultaneously ensuring sufficient accuracy. The stochastic surrogate is used to propagate the random input variables through the physics-based simulation model to obtain the joint probability distribution of the output. The POD curves are then generated based on those results. Here, the stochastic surrogates are constructed using non-intrusive polynomial chaos (NIPC) expansions. In particular, the NIPC methods used are the quadrature, ordinary least-squares (OLS), and least-angle regression sparse (LARS) techniques. The proposed approach is demonstrated on the ultrasonic testing simulation of a flat bottom hole flaw in an aluminum block. The results show that the stochastic surrogates have at least two orders of magnitude faster convergence on the statistics than direct Monte Carlo sampling (MCS). Moreover, the evaluation of the stochastic surrogate models is over three orders of magnitude faster than the underlying simulation model for this case, which is the UTSim2 model.
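
    As a toy version of the NIPC ordinary-least-squares route (the real POD pipeline and the UTSim2 model are far richer), the sketch below fits a probabilists' Hermite chaos to a stand-in function of one standard-normal input and reads the output mean and variance off the coefficients, using E[He_k^2] = k!.

      import numpy as np
      from math import factorial
      from numpy.polynomial import hermite_e as He

      def g(xi):                                  # stand-in for the physics-based model
          return np.exp(0.3 * xi) + 0.1 * xi**2

      rng = np.random.default_rng(4)
      xi = rng.standard_normal(200)               # samples of the random input
      Psi = He.hermevander(xi, 4)                 # columns He_0(xi)..He_4(xi)
      c, *_ = np.linalg.lstsq(Psi, g(xi), rcond=None)

      norms = np.array([factorial(k) for k in range(5)], dtype=float)
      print("surrogate mean:", c[0], " variance:", np.sum(c[1:] ** 2 * norms[1:]))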

  2. Georeferencing CAMS data: Polynomial rectification and beyond

    NASA Astrophysics Data System (ADS)

    Yang, Xinghe

    The Calibrated Airborne Multispectral Scanner (CAMS) is a sensor used in the commercial remote sensing program at NASA Stennis Space Center. In geographic applications of the CAMS data, accurate geometric rectification is essential for the analysis of the remotely sensed data and for the integration of the data into Geographic Information Systems (GIS). The commonly used rectification techniques, such as the polynomial transformation and ortho rectification, have been very successful in the field of remote sensing and GIS for most remote sensing data such as Landsat imagery, SPOT imagery and aerial photos. However, due to the geometric nature of the airborne line scanner, which has high spatial frequency distortions, the polynomial model and the ortho rectification technique in current commercial software packages such as Erdas Imagine are not adequate for obtaining sufficient geometric accuracy. In this research, the geometric nature, especially the major distortions, of the CAMS data is described. An analytical step-by-step geometric preprocessing has been utilized to deal with the potential high frequency distortions of the CAMS data. A generic sensor-independent photogrammetric model has been developed for the ortho-rectification of the CAMS data. Three generalized kernel classes and a directional elliptical basis have been formulated into a rectification model of a summation of multisurface functions, which is a significant extension of the traditional radial basis functions. The preprocessing mechanism has been fully incorporated into the polynomial, the triangle-based finite element analysis as well as the summation of multisurface functions. While the multisurface functions and the finite element analysis have the characteristics of localization, piecewise logic has been applied to the polynomial and photogrammetric methods, which can produce significant accuracy improvement over the global approach. A software module has been implemented with full integration of data preprocessing and rectification techniques under the Erdas Imagine development environment. The final root mean square (RMS) errors for the test CAMS data are about two pixels, which is compatible with the random RMS errors existing in the reference map coordinates.

  3. Polynomial modal analysis of slanted lamellar gratings.

    PubMed

    Granet, Gérard; Randriamihaja, Manjakavola Honore; Raniriharinosy, Karyl

    2017-06-01

    The problem of diffraction by slanted lamellar dielectric and metallic gratings in classical mounting is formulated as an eigenvalue eigenvector problem. The numerical solution is obtained by using the moment method with Legendre polynomials as expansion and test functions, which allows us to enforce in an exact manner the boundary conditions which determine the eigensolutions. Our method is successfully validated by comparison with other methods including in the case of highly slanted gratings.

  4. Isogeometric Analysis of Boundary Integral Equations

    DTIC Science & Technology

    2015-04-21

    methods, IgA relies on Non-Uniform Rational B-splines (NURBS) [43, 46], T-splines [55, 53] or subdivision surfaces [21, 48, 51] rather than piecewise...structural dynamics [25, 26], plates and shells [15, 16, 27, 28, 37, 22, 23], phase-field models [17, 32, 33], and shape optimization [40, 41, 45, 59...polynomials for approximating the geometry and field variables. Thus, by replacing piecewise polynomials with NURBS or T-splines, one can develop

  5. Approximation for limit cycles and their isochrons.

    PubMed

    Demongeot, Jacques; Françoise, Jean-Pierre

    2006-12-01

    Local analysis of trajectories of dynamical systems near an attractive periodic orbit displays the notion of asymptotic phase and isochrons. These notions are quite useful in applications to biosciences. In this note, we give an expression for the first approximation of equations of isochrons in the setting of perturbations of polynomial Hamiltonian systems. This method can be generalized to perturbations of systems that have a polynomial integral factor (like the Lotka-Volterra equation).

  6. Analysis of the inter- and extracellular formation of platinum nanoparticles by Fusarium oxysporum f. sp. lycopersici using response surface methodology

    NASA Astrophysics Data System (ADS)

    Riddin, T. L.; Gericke, M.; Whiteley, C. G.

    2006-07-01

    A Fusarium oxysporum fungal strain was screened and found to be successful for the inter- and extracellular production of platinum nanoparticles. Nanoparticle formation was visually observed, over time, by the colour of the extracellular solution and/or the fungal biomass turning from yellow to dark brown, and their concentration was determined from the amount of residual hexachloroplatinic acid measured from a standard curve at 456 nm. The extracellular nanoparticles were characterized by transmission electron microscopy. Nanoparticles of varying size (10-100 nm) and shape (hexagons, pentagons, circles, squares, rectangles) were produced at both the extracellular and intercellular levels by the Fusarium oxysporum. The particles precipitate out of solution and bioaccumulate by nucleation either intercellularly, on the cell wall/membrane, or extracellularly in the surrounding medium. The importance of pH, temperature and hexachloroplatinic acid (H2PtCl6) concentration in nanoparticle formation was examined through the use of a statistical response surface methodology. Only the extracellular production of nanoparticles proved to be statistically significant, with a concentration yield of 4.85 mg l(-1) estimated by a first-order regression model. With a second-order polynomial regression, the predicted yield of nanoparticles increased to 5.66 mg l(-1), and a backward stepwise regression gave a final model with a yield of 6.59 mg l(-1).

  7. Modeling and control for closed environment plant production systems

    NASA Technical Reports Server (NTRS)

    Fleisher, David H.; Ting, K. C.; Janes, H. W. (Principal Investigator)

    2002-01-01

    A computer program was developed to study multiple crop production and control in controlled environment plant production systems. The program simulates crop growth and development under nominal and off-nominal environments. Time-series crop models for wheat (Triticum aestivum), soybean (Glycine max), and white potato (Solanum tuberosum) are integrated with a model-based predictive controller. The controller evaluates and compensates for effects of environmental disturbances on crop production scheduling. The crop models consist of a set of nonlinear polynomial equations, six for each crop, developed using multivariate polynomial regression (MPR). Simulated data from DSSAT crop models, previously modified for crop production in controlled environments with hydroponics under elevated atmospheric carbon dioxide concentration, were used for the MPR fitting. The model-based predictive controller adjusts light intensity, air temperature, and carbon dioxide concentration set points in response to environmental perturbations. Control signals are determined from minimization of a cost function, which is based on the weighted control effort and squared-error between the system response and desired reference signal.
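
    A minimal sketch of multivariate polynomial regression (MPR) of the kind described, using scikit-learn. The inputs (light, temperature, CO2) and the synthetic response are illustrative assumptions, not the paper's fitted crop models.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression
        from sklearn.pipeline import make_pipeline

        rng = np.random.default_rng(1)
        # Columns: light (umol/m2/s), air temperature (deg C), CO2 (ppm) -- invented ranges.
        X = rng.uniform([200, 15, 350], [1000, 30, 1200], size=(200, 3))
        y = (0.002 * X[:, 0] + 0.1 * X[:, 1] + 0.001 * X[:, 2]
             - 1e-6 * X[:, 0] * X[:, 1] + rng.normal(0, 0.05, 200))

        # Degree-2 MPR: expands inputs to all monomials up to degree 2, then fits OLS.
        mpr = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                            LinearRegression())
        mpr.fit(X, y)
        print(mpr.predict([[600, 22, 800]]))   # predicted growth response at a setpoint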

  8. Additive-Multiplicative Approximation of Genotype-Environment Interaction

    PubMed Central

    Gimelfarb, A.

    1994-01-01

    A model of genotype-environment interaction in quantitative traits is considered. The model represents an expansion of the traditional additive (first degree polynomial) approximation of genotypic and environmental effects to a second degree polynomial incorporating a multiplicative term besides the additive terms. An experimental evaluation of the model is suggested and applied to a trait in Drosophila melanogaster. The environmental variance of a genotype in the model is shown to be a function of the genotypic value: it is a convex parabola. The broad sense heritability in a population depends not only on the genotypic and environmental variances, but also on the position of the genotypic mean in the population relative to the minimum of the parabola. It is demonstrated, using the model, that G×E interaction may cause a substantial non-linearity in offspring-parent regression and a reversed response to directional selection. It is also shown that directional selection may be accompanied by an increase in the heritability. PMID:7896113

  9. Optimization of process variables for decolorization of Disperse Yellow 211 by Bacillus subtilis using Box-Behnken design.

    PubMed

    Sharma, Praveen; Singh, Lakhvinder; Dilbaghi, Neeraj

    2009-05-30

    Decolorization of the textile azo dye Disperse Yellow 211 (DY 211) was carried out from simulated aqueous solution by the bacterial strain Bacillus subtilis. Response surface methodology (RSM), involving a Box-Behnken design matrix in the three most important operating variables (temperature, pH, and initial dye concentration), was successfully employed for the study and optimization of the decolorization process. A total of 17 experiments were conducted toward the construction of a quadratic model. According to analysis of variance (ANOVA) results, the proposed model can be used to navigate the design space. Under optimized conditions the bacterial strain was able to decolorize DY 211 up to 80%. The model indicated that an initial dye concentration of 100 mgl(-1), pH 7, and a temperature of 32.5 degrees C were optimal for maximum percent decolorization. A very high regression coefficient between the variables and the response (R(2)=0.9930) indicated excellent fit of the experimental data by the polynomial regression model. The combination of the three variables predicted through RSM was confirmed through confirmatory experiments; hence, the bacterial strain holds great potential for the treatment of colored textile effluents.
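
    For reference, the 17-run layout mentioned above is the standard three-factor Box-Behnken design: 12 edge midpoints plus, here, 5 centre points. A sketch that constructs it in coded units and decodes to hypothetical factor ranges:

        import itertools
        import numpy as np

        def box_behnken_3(center_runs=5):
            # All pairs of factors at +/-1 with the third factor held at 0,
            # followed by replicated centre points.
            runs = []
            for i, j in itertools.combinations(range(3), 2):
                for a, b in itertools.product((-1, 1), repeat=2):
                    row = [0, 0, 0]
                    row[i], row[j] = a, b
                    runs.append(row)
            runs += [[0, 0, 0]] * center_runs
            return np.array(runs)

        design = box_behnken_3()
        print(design.shape)   # (17, 3)

        # Decode to real units; these ranges are invented for illustration:
        # temperature 25-40 deg C, pH 5-9, dye 50-150 mg/l.
        lo = np.array([25.0, 5.0, 50.0])
        hi = np.array([40.0, 9.0, 150.0])
        real = (lo + hi) / 2 + design * (hi - lo) / 2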

  10. Genetic analysis of longitudinal measurements of performance traits in selection lines for residual feed intake in Yorkshire swine.

    PubMed

    Cai, W; Kaiser, M S; Dekkers, J C M

    2011-05-01

    A 5-generation selection experiment in Yorkshire pigs for feed efficiency consists of a line selected for low residual feed intake (LRFI) and a random control line (CTRL). The objectives of this study were to use random regression models to estimate genetic parameters for daily feed intake (DFI), BW, backfat (BF), and loin muscle area (LMA) along the growth trajectory and to evaluate the effect of LRFI selection on genetic curves for DFI and BW. An additional objective was to compare random regression models using polynomials (RRP) and spline functions (RRS). Data from approximately 3 to 8 mo of age on 586 boars and 495 gilts across 5 generations were used. The average number of measurements was 85, 14, 5, and 5 for DFI, BW, BF, and LMA. The RRP models for these 4 traits were fitted with pen × on-test group as a fixed effect, second-order Legendre polynomials of age as fixed curves for each generation, and random curves for additive genetic and permanent environmental effects. Different residual variances were used for the first and second halves of the test period. The RRS models were fitted with the same fixed effects and residual variance structure as the RRP models and included genetic and permanent environmental random effects for both splines and linear Legendre polynomials of age. The RRP model was used for further analysis because the RRS model had erratic estimates of phenotypic variance and heritability, despite having a smaller Bayesian information criterion than the RRP model. From 91 to 210 d of age, estimates of heritability from the RRP model ranged from 0.10 to 0.37 for boars and 0.14 to 0.26 for gilts for DFI, from 0.39 to 0.58 for boars and 0.55 to 0.61 for gilts for BW, from 0.48 to 0.61 for boars and 0.61 to 0.79 for gilts for BF, and from 0.46 to 0.55 for boars and 0.63 to 0.81 for gilts for LMA. In generation 5, LRFI pigs had lower average genetic curves than CTRL pigs for DFI and BW, especially toward the end of the test period; estimated line differences (CTRL-LRFI) for DFI were 0.04 kg/d for boars and 0.12 kg/d for gilts at 105 d and 0.20 kg/d for boars and 0.24 kg/d for gilts at 195 d. Line differences for BW were 0.17 kg for boars and 0.69 kg for gilts at 105 d and 3.49 kg for boars and 8.96 kg for gilts at 195 d. In conclusion, selection for LRFI has resulted in a lower feed intake curve and a lower BW curve toward maturity.
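
    The sketch below shows the building block common to the RRP models above: evaluating a Legendre basis on ages rescaled to [-1, 1]. The ages and the normalization convention are illustrative assumptions; in practice the scaling uses the fixed endpoints of the trajectory.

        import numpy as np
        from numpy.polynomial.legendre import legvander

        age = np.array([91, 120, 150, 180, 210], dtype=float)    # days on test
        t = 2 * (age - age.min()) / (age.max() - age.min()) - 1  # map to [-1, 1]

        Phi = legvander(t, 2)   # columns: P0, P1, P2 evaluated at each t
        # Normalized Legendre polynomials, sqrt((2k+1)/2) * P_k, are commonly
        # used in the animal-breeding literature so the basis is orthonormal.
        Phi_norm = Phi * np.sqrt((2 * np.arange(3) + 1) / 2)
        print(Phi_norm.round(3))   # fixed/random regression covariables per age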

  11. Genetic analyses of protein yield in dairy cows applying random regression models with time-dependent and temperature x humidity-dependent covariates.

    PubMed

    Brügemann, K; Gernand, E; von Borstel, U U; König, S

    2011-08-01

    Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (d in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates.

  12. Socioeconomic dynamics of water quality in the Egyptian Nile

    NASA Astrophysics Data System (ADS)

    Malik, Maheen; Nisar, Zainab; Karakatsanis, Georgios

    2016-04-01

    The Nile River remains the most important source of freshwater for Egypt as it accounts for nearly all of the country's drinking and irrigation water. About 95% of the total population is estimated to live along the banks of the Nile(1). Therefore, water quality deterioration, in addition to the general natural scarcity of water in the region(2), is the main driver for carrying out this study. What further aggravates this issue is the water conflict in the Blue Nile region. The study evaluates different water quality parameters and their concentrations in the Egyptian Nile, further assessing the temporal dynamics of water quality in the area with (a) the Environmental Kuznets Curve (EKC)(3) and (b) the Jevons Paradox (JP)(4) in order to identify water quality improvements or degradations using selected socioeconomic variables(5). For this purpose various environmental indicators including BOD, COD, DO, Phosphorus and TDS were plotted against different economic variables including Population, Gross Domestic Product (GDP), Annual Fresh Water Withdrawal and Improved Water Source. Mathematically, this was expressed by 2nd and 3rd degree polynomial regressions generating the EKC and JP respectively. The basic goal of the regression analysis is to model and highlight the dynamic trend of water quality indicators in relation to their established permissible limits, which will allow the identification of optimal future water quality policies. The results clearly indicate that the dependency of water quality indicators on socioeconomic variables differs for every indicator; while COD was above the permissible limits in all the cases despite its decreasing trend in each case, BOD and phosphate showed increasing future concentrations if they continue to follow the present trend. This could be an indication of a rebound effect explained by the Jevons Paradox, i.e. water quality deterioration after its improvement, either due to an increase of population or intensification of economic activities related to these indicators. Keywords: Water quality dynamics, Environmental Kuznets Curve (EKC), Jevons Paradox (JP), economic variables, polynomial regressions, environmental indicators, permissible limit References: (1)Evans, A. (2007). River of Life River Nile. (2)Egypt's Water Crisis - Recipe for Disaster. (2016). [Blog] EcoMENA- Echoing Sustainability. (3)Alstine, J. and Neumayer, E. (2010). The Environmental Kuznets Curve. (4)Garrett, T. (2014). Rebound, Backfire, and the Jevons Paradox. [Blog] (5)Data.worldbank.org
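
    A compact sketch of the 2nd- and 3rd-degree polynomial regressions used for the EKC and JP diagnostics; the GDP and BOD numbers below are synthetic placeholders, not the study's data.

        import numpy as np

        gdp = np.linspace(1, 10, 40)   # GDP per capita, arbitrary units
        bod = (5 + 3 * gdp - 0.35 * gdp**2
               + np.random.default_rng(2).normal(0, 0.3, 40))  # invented indicator

        c2 = np.polyfit(gdp, bod, 2)   # quadratic: inverted U (EKC) if c2[0] < 0
        c3 = np.polyfit(gdp, bod, 3)   # cubic: N shape (Jevons-type rebound) if c3[0] > 0
        turning_point = -c2[1] / (2 * c2[0])   # income level of peak pollution
        print(c2[0] < 0, round(turning_point, 2))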

  13. Robustness Analysis of Integrated LPV-FDI Filters and LTI-FTC System for a Transport Aircraft

    NASA Technical Reports Server (NTRS)

    Khong, Thuan H.; Shin, Jong-Yeob

    2007-01-01

    This paper proposes an analysis framework for robustness analysis of a nonlinear dynamic system that can be represented by a polynomial linear parameter varying (PLPV) system with constant bounded uncertainty. The proposed analysis framework contains three key tools: 1) a function substitution method, which can convert a nonlinear system in polynomial form into a PLPV system; 2) a matrix-based linear fractional transformation (LFT) modeling approach, which can convert a PLPV system into an LFT system whose delta block includes the key uncertainty and scheduling parameters; 3) mu-analysis, a well-known robustness analysis tool for linear systems. The proposed analysis framework is applied to evaluate the performance of the LPV fault detection and isolation (FDI) filters of the closed-loop system of a transport aircraft in the presence of unmodeled actuator dynamics and sensor gain uncertainty. The robustness analysis results are compared with nonlinear time simulations.

  14. [Optimization of one-step pelletization technology of Biqiu granules by Plackett-Burman design and Box-Behnken response surface methodology].

    PubMed

    Zhang, Yan-jun; Liu, Li-li; Hu, Jun-hua; Wu, Yun; Chao, En-xiang; Xiao, Wei

    2015-11-01

    With the qualified rate of granules as the evaluation index, significant influencing factors were first screened by Plackett-Burman design. Then, with the qualified rate and moisture content as the evaluation indexes, the significant factors affecting one-step pelletization technology were further optimized by Box-Behnken design; experimental data were fitted by multiple regression with a second-order polynomial equation, and response surface methodology was used for predictive analysis of the optimal technology. The best conditions were as follows: inlet air temperature of 85 degrees C, sample introduction speed of 33 r x min(-1), and concentrate density of 1.10. One-step pelletization technology of Biqiu granules optimized by Plackett-Burman design and Box-Behnken response surface methodology was stable and feasible with good predictability, which provides a reliable basis for the industrialized production of Biqiu granules.

  15. Mathematical and statistical models for determining the crop load in grapevine

    NASA Astrophysics Data System (ADS)

    Alina, Dobrei; Alin, Dobrei; Eleonora, Nistor; Teodor, Cristea; Marius, Boldea; Florin, Sala

    2016-06-01

    Ensuring a balance between vine crop load and vegetative growth is a dynamic process, so it is necessary to develop models describing this relationship. This study analyzed the interrelationship between the crop load and growth-specific parameters (viable buds - VB, dead (frost-injured) buds - DB, total shoot growth - TSG, one-year-old wood - MSG) in two grapevine varieties: the Muscat Ottonel cultivar for wine and the Victoria cultivar for fresh grapes. In both varieties the interrelationships between bud number and vegetative growth parameters were described by statistically significant polynomial functions. Using regression analysis it was possible to develop predictive models for one-year-old wood (MSG), an important parameter for the yield and quality of wine grape production, with statistically significant results (R2 = 0.884, p < 0.001, F = 45.957 in the Muscat Ottonel cultivar and R2 = 0.893, p = 0.001, F = 49.886 in the Victoria cultivar).

  16. Age-related normative values for handgrip strength and grip strength’s usefulness as a predictor of mortality and both cognitive and physical decline in older adults in northwest Russia

    PubMed Central

    Turusheva, A.; Frolova, E.; Degryse, J-M.

    2017-01-01

    Objectives: This paper sought to provide normative values for grip strength among older adults across different age groups in northwest Russia and to investigate their predictive value for adverse events. Methods: A population-based prospective cohort study of 611 community-dwelling individuals aged 65+. Grip strength was measured using the standard protocol applied in the Groningen Elderly Tests. The cut-off thresholds for grip strength were defined separately for men and women of different ages using a weighted polynomial regression. A Cox regression analysis, the c-statistic, a risk reclassification analysis, and bootstrapping techniques were used to analyze the data. The outcomes were the 5-year mortality rate, the loss of autonomy and mental decline. Results: We determined the age-related reference intervals of grip strength for older adults. The 5th and 10th percentiles of grip strength were associated with a higher risk of malnutrition, low autonomy, poor physical and mental functioning, and 5-year mortality. The 5th percentile of grip strength was associated with a decline in autonomy. Conclusions: This study presents age- and sex-specific reference values for grip strength in the 65+ Russian population derived from a prospective cohort study. The norms can be used in clinical practice to identify patients at increased risk for adverse outcomes. PMID:28250246
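
    A minimal sketch of a weighted polynomial regression of grip strength on age, in the spirit of the reference-curve fitting described above; the data and the weighting rule are invented for illustration.

        import numpy as np

        age = np.arange(65, 91, dtype=float)
        grip = (35 - 0.4 * (age - 65)
                + np.random.default_rng(3).normal(0, 1.5, age.size))  # kg, synthetic
        w = 1.0 / (1.0 + 0.05 * (age - 65))   # e.g. down-weight sparse oldest ages

        # np.polyfit applies its weights to the unsquared residuals, so pass
        # sqrt of the WLS weights to minimize sum(w * residual**2).
        coef = np.polyfit(age, grip, deg=2, w=np.sqrt(w))
        curve = np.polyval(coef, age)   # smooth age-specific reference curve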

  17. XMGR5 users manual

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, K.R.; Fisher, J.E.

    1997-03-01

    ACE/gr is an XY plotting tool for workstations or X-terminals using X. A few of its features are: user-defined scaling, tick marks, labels, symbols, line styles, and colors; batch mode for unattended plotting; reading and writing of parameters used during a session; polynomial regression, splines, running averages, DFT/FFT, and cross/auto-correlation; hardcopy support for PostScript, HP-GL, and FrameMaker .mif formats. While ACE/gr has a convenient point-and-click interface, most parameter settings and operations are available through a command line interface (found in Files/Commands).

  18. Finding the Best-Fit Polynomial Approximation in Evaluating Drill Data: the Application of a Generalized Inverse Matrix

    NASA Astrophysics Data System (ADS)

    Karakus, Dogan

    2013-12-01

    In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and the statistical evaluation of random variables after the 1950s rendered polynomial approximations less important, theoretically the best surface passing through the random variables can still be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorific values for data from 83 drilling points at a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
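
    The core computation described, solving for polynomial coefficients with a generalized (Moore-Penrose) inverse and screening degrees for the best fit, can be sketched as follows. The drill coordinates and "calorific" values are simulated stand-ins.

        import numpy as np

        rng = np.random.default_rng(4)
        xy = rng.uniform(0, 1, size=(83, 2))   # drill-hole coordinates (normalized)
        z = np.sin(3 * xy[:, 0]) + xy[:, 1] ** 2 + rng.normal(0, 0.05, 83)  # proxy values

        def vander2d(xy, deg):
            # All monomials x**i * y**j with i + j <= deg.
            return np.column_stack([xy[:, 0] ** i * xy[:, 1] ** j
                                    for i in range(deg + 1)
                                    for j in range(deg + 1 - i)])

        train, test = np.arange(60), np.arange(60, 83)
        for deg in range(1, 8):
            V = vander2d(xy, deg)
            c = np.linalg.pinv(V[train]) @ z[train]   # generalized-inverse solve
            rmse = np.sqrt(np.mean((V[test] @ c - z[test]) ** 2))
            print(deg, round(rmse, 4))   # pick the degree with the best held-out fit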

  19. Support vector machine regression (SVR/LS-SVM)--an alternative to neural networks (ANN) for analytical chemistry? Comparison of nonlinear methods on near infrared (NIR) spectroscopy data.

    PubMed

    Balabin, Roman M; Lomakina, Ekaterina I

    2011-04-21

    In this study, we make a general comparison of the accuracy and robustness of five multivariate calibration models: partial least squares (PLS) regression or projection to latent structures, polynomial partial least squares (Poly-PLS) regression, artificial neural networks (ANNs), and two novel techniques based on support vector machines (SVMs) for multivariate data analysis: support vector regression (SVR) and least-squares support vector machines (LS-SVMs). The comparison is based on fourteen (14) different datasets: seven sets of gasoline data (density, benzene content, and fractional composition/boiling points), two sets of ethanol gasoline fuel data (density and ethanol content), one set of diesel fuel data (total sulfur content), three sets of petroleum (crude oil) macromolecules data (weight percentages of asphaltenes, resins, and paraffins), and one set of petroleum resins data (resins content). Vibrational (near-infrared, NIR) spectroscopic data are used to predict the properties and quality coefficients of gasoline, biofuel/biodiesel, diesel fuel, and other samples of interest. The four systems presented here range greatly in composition, properties, strength of intermolecular interactions (e.g., van der Waals forces, H-bonds), colloid structure, and phase behavior. Due to the high diversity of chemical systems studied, general conclusions about SVM regression methods can be made. We try to answer the following question: to what extent can SVM-based techniques replace ANN-based approaches in real-world (industrial/scientific) applications? The results show that both SVR and LS-SVM methods are comparable to ANNs in accuracy. Due to the much higher robustness of the former, the SVM-based approaches are recommended for practical (industrial) application. This has been shown to be especially true for complicated, highly nonlinear objects.
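
    A toy comparison of SVR kernels in scikit-learn, illustrating the kind of calibration benchmark described; the synthetic features stand in for (preprocessed) NIR spectra, and none of the paper's tuning is reproduced.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(5)
        X = rng.normal(size=(150, 10))                 # stand-in spectral features
        y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + rng.normal(0, 0.1, 150)

        for kernel in ("linear", "poly", "rbf"):
            score = cross_val_score(SVR(kernel=kernel, C=10.0), X, y,
                                    cv=5, scoring="r2").mean()
            print(kernel, round(score, 3))   # cross-validated R^2 per kernel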

  20. Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos

    DTIC Science & Technology

    2002-07-25

    Some basic hypergeometric polynomials that generalize Jacobi polynomials. Memoirs Amer. Math. Soc., AMS ... orthogonal polynomial functionals from the Askey scheme, as a generalization of the original polynomial chaos idea of Wiener (1938). A Galerkin projection ... (1) by generalized polynomial chaos expansion, where the uncertainties can be introduced through κ, f, or g, or some combinations. It is worth
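
    Despite the fragmentary snippet, the underlying construction is standard: expand a random quantity in orthogonal polynomials of the input randomness. A one-dimensional Wiener-Hermite sketch (probabilists' Hermite polynomials, Gauss quadrature) for u = exp(xi), xi ~ N(0,1); the target function and truncation order are illustrative.

        import numpy as np
        from numpy.polynomial.hermite_e import hermegauss, hermeval
        from math import factorial, sqrt, pi, exp

        nodes, weights = hermegauss(40)   # quadrature for weight exp(-x^2/2)
        u = np.exp(nodes)                 # the random quantity at the nodes

        P = 8
        coef = np.zeros(P + 1)
        for n in range(P + 1):
            e_n = np.zeros(n + 1)
            e_n[n] = 1.0
            He_n = hermeval(nodes, e_n)   # He_n at the quadrature nodes
            # <He_n, He_n> = n! * sqrt(2*pi) under this weight, so
            # c_n = E[u * He_n] / n!.
            coef[n] = (weights @ (u * He_n)) / (factorial(n) * sqrt(2 * pi))

        # Check against the exact coefficients sqrt(e)/n! for u = exp(xi).
        print(coef[:4], [exp(0.5) / factorial(n) for n in range(4)])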

  1. Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.

    PubMed

    Mahajan, Virendra N

    2012-06-20

    In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L(l)(x)L(m)(y), where l and m are positive integers (including zero) and L(l)(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L(l)(x)L(m)(y), there is a corresponding orthonormal polynomial L(l)(y)L(m)(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.

  2. Experimental Modal Analysis and Dynamic Component Synthesis. Volume 6. Software User’s Guide.

    DTIC Science & Technology

    1987-12-01

    generate a Complex Mode Indication Function ( CMIF ) from the measurement directory, including modifications from the measurement selection option. This...reference measurements are - included in the data set to be analyzed. The peaks in the CMIF chart indicate existing modes. Thus, the order of the the...polynomials is determined by the number of peaks found in the CMIF chart. Then, the order of the polynomials can be determined before the estimation process

  3. Prediction of zeolite-cement-sand unconfined compressive strength using polynomial neural network

    NASA Astrophysics Data System (ADS)

    MolaAbasi, H.; Shooshpasha, I.

    2016-04-01

    The improvement of local soils with cement and zeolite can provide great benefits, including strengthening slopes in slope stability problems, stabilizing problematic soils and preventing soil liquefaction. Recently, dosage methodologies have been developed for improved soils based on a rational criterion, as exists in concrete technology. Numerous earlier studies show the possibility of relating Unconfined Compressive Strength (UCS) and cemented sand (CS) parameters (voids/cement ratio) through power-function fits. Because the existing equations are incapable of estimating UCS well for zeolite-cemented sand mixtures (ZCS), artificial intelligence methods are used for forecasting it. A polynomial-type neural network is applied to estimate the UCS from more simply determined index properties such as zeolite and cement content, porosity, and curing time. In order to assess the merits of the proposed approach, a total of 216 unconfined compression tests were performed. A comparison is carried out between the experimentally measured UCS and the predictions in order to evaluate the performance of the method. The results demonstrate that the generalized polynomial-type neural network has a great ability to predict the UCS. Finally, a sensitivity analysis of the polynomial model is applied to study the influence of the input parameters on the model output. The sensitivity analysis reveals that cement and zeolite content have a significant influence on the predicted UCS.

  4. SPSS macros to compare any two fitted values from a regression model.

    PubMed

    Weaver, Bruce; Dubois, Sacha

    2012-12-01

    In regression models with first-order terms only, the coefficient for a given variable is typically interpreted as the change in the fitted value of Y for a one-unit increase in that variable, with all other variables held constant. Therefore, each regression coefficient represents the difference between two fitted values of Y. But the coefficients represent only a fraction of the possible fitted value comparisons that might be of interest to researchers. For many fitted value comparisons that are not captured by any of the regression coefficients, common statistical software packages do not provide the standard errors needed to compute confidence intervals or carry out statistical tests-particularly in more complex models that include interactions, polynomial terms, or regression splines. We describe two SPSS macros that implement a matrix algebra method for comparing any two fitted values from a regression model. The !OLScomp and !MLEcomp macros are for use with models fitted via ordinary least squares and maximum likelihood estimation, respectively. The output from the macros includes the standard error of the difference between the two fitted values, a 95% confidence interval for the difference, and a corresponding statistical test with its p-value.
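
    The matrix-algebra method is easy to restate outside SPSS: the difference between two fitted values is a linear contrast of the coefficients, d = (x1 - x2)'b, with variance (x1 - x2)'Cov(b)(x1 - x2). A Python sketch with an invented quadratic model:

        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(6)
        x = rng.uniform(0, 10, 100)
        X = sm.add_constant(np.column_stack([x, x ** 2]))   # const, x, x^2
        y = 1 + 0.5 * x - 0.03 * x ** 2 + rng.normal(0, 0.4, 100)
        fit = sm.OLS(y, X).fit()

        x1 = np.array([1.0, 8.0, 64.0])   # design row for the fitted value at x = 8
        x2 = np.array([1.0, 2.0, 4.0])    # design row for the fitted value at x = 2
        c = x1 - x2                       # contrast vector
        diff = c @ fit.params
        se = np.sqrt(c @ fit.cov_params() @ c)
        print(diff, diff - 1.96 * se, diff + 1.96 * se)   # estimate and 95% CI

    Note that this difference is not captured by any single coefficient, which is exactly the gap the macros fill.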

  5. Optimization of the trienzyme extraction for the microbiological assay of folate in vegetables.

    PubMed

    Chen, Liwen; Eitenmiller, Ronald R

    2007-05-16

    Response surface methodology was applied to optimize the trienzyme digestion for the extraction of folate from vegetables. Trienzyme extraction is a combined enzymatic digestion by protease, alpha-amylase, and conjugase (gamma-glutamyl hydrolase) to liberate the carbohydrate and protein-bound folates from food matrices for total folate analysis. It is the extraction method used in AOAC Official Method 2004.05 for assay of total folate in cereal grain products. Certified reference material (CRM) 485 mixed vegetables was used to represent the matrix of vegetables. Regression and ridge analysis were performed by statistical analysis software. The predicted second-order polynomial model was adequate (R2 = 0.947), without significant lack of fit (p > 0.1). Both protease and alpha-amylase have significant effects on the extraction. Ridge analysis gave an optimum trienzyme digestion time: Pronase, 1.5 h; alpha-amylase, 1.5 h; and conjugase, 3 h. The experimental value for CRM 485 under this optimum digestion was close to the predicted value from the model, confirming the validity and adequacy of the model. The optimized trienzyme digestion condition was applied to five vegetables and yielded higher folate levels than the trienzyme digestion parameters employed in AOAC Official Method 2004.05.

  6. Bounding the Failure Probability Range of Polynomial Systems Subject to P-box Uncertainties

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper proposes a reliability analysis framework for systems subject to multiple design requirements that depend polynomially on the uncertainty. Uncertainty is prescribed by probability boxes, also known as p-boxes, whose distribution functions have free or fixed functional forms. An approach based on the Bernstein expansion of polynomials and optimization is proposed. In particular, we search for the elements of a multi-dimensional p-box that minimize (i.e., the best-case) and maximize (i.e., the worst-case) the probability of inner and outer bounding sets of the failure domain. This technique yields intervals that bound the range of failure probabilities. The offset between this bounding interval and the actual failure probability range can be made arbitrarily tight with additional computational effort.

  7. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.
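
    A simplified one-dimensional sketch of the idea on [-1, 1] with Legendre polynomials: sample from the arcsine (equilibrium) measure, weight each sample by the inverse Christoffel function, and solve a weighted least-squares problem. Degrees, sample sizes, and the target function are illustrative.

        import numpy as np
        from numpy.polynomial.legendre import legvander

        rng = np.random.default_rng(7)
        N, M = 10, 200                              # max degree, sample count
        x = np.cos(np.pi * rng.uniform(size=M))     # arcsine (Chebyshev) samples

        # Orthonormal Legendre basis on [-1, 1]: p_k = sqrt((2k+1)/2) * P_k.
        V = legvander(x, N) * np.sqrt((2 * np.arange(N + 1) + 1) / 2)
        k = (V ** 2).sum(axis=1)                    # 1 / Christoffel function
        w = (N + 1) / k                             # Christoffel weights

        f = np.exp(x)                               # target function to approximate
        c = np.linalg.lstsq(V * np.sqrt(w)[:, None], np.sqrt(w) * f, rcond=None)[0]
        print(np.abs(V @ c - f).max())              # approximation error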

  8. A Subspace Semi-Definite programming-based Underestimation (SSDU) method for stochastic global optimization in protein docking*

    PubMed Central

    Nan, Feng; Moghadasi, Mohammad; Vakili, Pirooz; Vajda, Sandor; Kozakov, Dima; Ch. Paschalidis, Ioannis

    2015-01-01

    We propose a new stochastic global optimization method targeting protein docking problems. The method is based on finding a general convex polynomial underestimator to the binding energy function in a permissive subspace that possesses a funnel-like structure. We use Principal Component Analysis (PCA) to determine such permissive subspaces. The problem of finding the general convex polynomial underestimator is reduced into the problem of ensuring that a certain polynomial is a Sum-of-Squares (SOS), which can be done via semi-definite programming. The underestimator is then used to bias sampling of the energy function in order to recover a deep minimum. We show that the proposed method significantly improves the quality of docked conformations compared to existing methods. PMID:25914440

  9. A Christoffel function weighted least squares algorithm for collocation approximations

    DOE PAGES

    Narayan, Akil; Jakeman, John D.; Zhou, Tao

    2016-11-28

    Here, we propose, theoretically investigate, and numerically validate an algorithm for the Monte Carlo solution of least-squares polynomial approximation problems in a collocation framework. Our investigation is motivated by applications in the collocation approximation of parametric functions, which frequently entails construction of surrogates via orthogonal polynomials. A standard Monte Carlo approach would draw samples according to the density defining the orthogonal polynomial family. Our proposed algorithm instead samples with respect to the (weighted) pluripotential equilibrium measure of the domain, and subsequently solves a weighted least-squares problem, with weights given by evaluations of the Christoffel function. We present theoretical analysis to motivate the algorithm, and numerical results that show our method is superior to standard Monte Carlo methods in many situations of interest.

  10. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
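
    A short sketch of the comparison: a degree-4 interpolant of exp on [0, 1] with evenly spaced nodes versus the degree-4 Taylor polynomial at 0. Even node placement is an assumption; the article also considers other choices.

        import numpy as np

        xs = np.linspace(0, 1, 5)                  # interpolation nodes
        interp = np.polyfit(xs, np.exp(xs), 4)     # degree-4 interpolant (exact at nodes)
        taylor = np.array([1/24, 1/6, 1/2, 1, 1])  # x^4/24 + x^3/6 + x^2/2 + x + 1

        t = np.linspace(0, 1, 1001)
        err_i = np.abs(np.polyval(interp, t) - np.exp(t)).max()
        err_t = np.abs(np.polyval(taylor, t) - np.exp(t)).max()
        print(err_i, err_t)   # the interpolant spreads, and here reduces, the max error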

  11. Human evaluation in association to the mathematical analysis of arch forms: Two-dimensional study.

    PubMed

    Zabidin, Nurwahidah; Mohamed, Alizae Marny; Zaharim, Azami; Marizan Nor, Murshida; Rosli, Tanti Irawati

    2018-03-01

    To evaluate the relationship between human evaluation of the dental-arch form and mathematical analysis via two different methods of quantifying the arch form, and to establish agreement with the fourth-order polynomial equation. This study included 64 sets of digitised maxilla and mandible dental casts obtained from a sample of dental arches with normal occlusion. For the human evaluation, a convenience sample of orthodontic practitioners ranked photo images of the dental casts from the most tapered to the least tapered (square). In the mathematical analysis, dental arches were interpolated using the fourth-order polynomial equation with millimetric acetate paper and AutoCAD software. Finally, the relations between the human evaluation and the objective mathematical analyses were evaluated. Human evaluations were found to be generally in agreement, but only at the extremes of tapered and square arch forms; this indicated general human error and observer bias. The two methods used to plot the arch form were comparable. The use of the fourth-order polynomial equation may facilitate obtaining a smooth curve, which can produce a template for an individual arch that represents all potential tooth positions for the dental arch.

  12. Wavefront reconstruction algorithm based on Legendre polynomials for radial shearing interferometry over a square area and error analysis.

    PubMed

    Kewei, E; Zhang, Chen; Li, Mengyang; Xiong, Zhao; Li, Dahai

    2015-08-10

    Based on Legendre polynomial expressions and their properties, this article proposes a new approach to reconstruct the distorted wavefront under test of a laser beam over a square area from the phase-difference data obtained by a radial shearing interferometry (RSI) system. Simulation and experimental results verify the reliability of the proposed method. The formula for the error propagation coefficients is deduced for the case in which the phase-difference data of the overlapping area contain random noise. A matrix T is proposed that can be used to evaluate the impact of high-order Legendre polynomial terms on the outcomes of the low-order terms due to mode aliasing, and the magnitude of this impact can be estimated by calculating the Frobenius norm of T. In addition, the relationships between shear ratio, sampling points, number of polynomial terms and the noise propagation coefficients, and between shear ratio, sampling points and norms of the T matrix, are both analyzed. These results can provide theoretical reference and guidance for the optimal design of radial shearing interferometry systems.

  13. A discrete method for modal analysis of overhead line conductor bundles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Migdalovici, M.A.; Sireteanu, T.D.; Albrecht, A.A.

    The paper presents a mathematical model and a semi-analytical procedure to calculate the vibration modes and eigenfrequencies of single or bundled conductors with spacers, which are needed for evaluation of the wind-induced vibration of conductors and for optimization of spacer-damper placement. The method consists in the decomposition of the conductors into modules and the expansion of the unknown displacements on each module in polynomial series. A complete system of polynomials is deduced for this purpose from Legendre polynomials. Each module contributes either boundary conditions at its extremities or continuity conditions between modules, together with a number of projections of the module equilibrium equation on the polynomials from the expansion series of the unknown displacement. The global system for the eigenmodes and eigenfrequencies is of the matrix form AX + ω²MX = 0. The theoretical considerations are exemplified on one conductor and on a bundle of two conductors with spacers. From this, a method for the forced vibration calculation of single or bundled conductors is also presented.

  14. Mapping Landslides in Lunar Impact Craters Using Chebyshev Polynomials and DEMs

    NASA Astrophysics Data System (ADS)

    Yordanov, V.; Scaioni, M.; Brunetti, M. T.; Melis, M. T.; Zinzi, A.; Giommi, P.

    2016-06-01

    Geological slope failure processes have been observed on the Moon's surface for decades; nevertheless, a detailed and exhaustive lunar landslide inventory has not yet been produced. For a preliminary survey, WAC images and DEM maps from LROC at 100 m/pixel have been exploited in combination with the criteria applied by Brunetti et al. (2015) to detect landslides. These criteria are based on the visual analysis of optical images to recognize mass-wasting features. In the literature, Chebyshev polynomials have been applied to interpolate crater cross-sections in order to obtain a parametric characterization useful for classification into different morphological shapes. Here a new implementation of Chebyshev polynomial approximation is proposed, taking into account statistical testing of the results obtained during least-squares estimation. The presence of landslides in lunar craters is then investigated by analyzing the absolute values of the odd coefficients of the estimated Chebyshev polynomials. A case study on the Cassini A crater demonstrates the key points of the proposed methodology and outlines the development required to carry it out in the future.
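
    The asymmetry criterion can be sketched as follows: fit Chebyshev polynomials to a (synthetic) crater cross-section and sum the magnitudes of the odd coefficients, which vanish for a perfectly symmetric bowl. The profile below is invented for illustration.

        import numpy as np
        from numpy.polynomial.chebyshev import chebfit

        x = np.linspace(-1, 1, 200)                    # normalized cross-section
        bowl = -(1 - x ** 2)                           # symmetric crater shape
        slump = -0.15 * np.exp(-((x + 0.5) / 0.2) ** 2)  # one-sided deposit
        profile = bowl + slump

        c = chebfit(x, profile, deg=12)
        odd_power = np.abs(c[1::2]).sum()   # large when the profile is asymmetric
        print(odd_power)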

  15. On Convergence Aspects of Spheroidal Monogenics

    NASA Astrophysics Data System (ADS)

    Georgiev, S.; Morais, J.

    2011-09-01

    Orthogonal polynomials have found wide applications in mathematical physics, numerical analysis, and other fields. Accordingly, there is an enormous variety of such polynomials and of relations that describe their properties. The paper's main results are the discussion of approximation properties for monogenic functions over prolate spheroids in R3 in terms of orthogonal monogenic polynomials and their interdependences. Certain results are stated without proof for now. The motivation for the present study stems from the fact that these polynomials play an important role in the calculation of the Bergman kernel and Green's monogenic functions in a spheroid. Once these functions are known, it is possible to solve both basic boundary value and conformal mapping problems. Interestingly, most of the methods used have an n-dimensional counterpart and can be extended to arbitrary ellipsoids. But such a procedure would make the further study of the underlying ellipsoidal monogenics somewhat laborious, and for this reason we shall not discuss these general cases here. To the best of our knowledge, this does not appear to have been done in the literature before.

  16. Genetic correlations among body condition score, yield, and fertility in first-parity cows estimated by random regression models.

    PubMed

    Veerkamp, R F; Koenen, E P; De Jong, G

    2001-10-01

    Twenty type classifiers scored body condition (BCS) of 91,738 first-parity cows from 601 sires and 5518 maternal grandsires. Fertility data during first lactation were extracted for 177,220 cows, of which 67,278 also had a BCS observation, and first-lactation 305-d milk, fat, and protein yields were added for 180,631 cows. Heritabilities and genetic correlations were estimated using a sire-maternal grandsire model. Heritability of BCS was 0.38. Heritabilities for fertility traits were low (0.01 to 0.07), but genetic standard deviations were substantial: 9 d for days to first service and calving interval, 0.25 for number of services, and 5% for first-service conception. Phenotypic correlations between fertility and yield or BCS were small (-0.15 to 0.20). Genetic correlations between yield and all fertility traits were unfavorable (0.37 to 0.74). Genetic correlations with BCS were between -0.4 and -0.6 for calving interval and days to first service. Random regression analysis (RR) showed that correlations changed with days in milk for BCS. Little agreement was found between variances and correlations from RR and from analyses including only a single month (mo 1 to 10) of data for BCS, especially during early and late lactation. However, this was due to excluding data from the conventional analysis, rather than due to the polynomials used. RR and a conventional five-trait model, where BCS in mo 1, 4, 7, and 10 were treated as separate traits (plus yield or fertility), gave similar results. Thus a parsimonious random regression model gave more realistic estimates for the (co)variances than a series of bivariate analyses on subsets of the data for BCS. A higher genetic merit for yield has unfavorable effects on fertility, but the genetic correlation suggests that BCS (at some stages of lactation) might help to alleviate the unfavorable effect of selection for higher yield on fertility.

  17. Establishing a direct connection between detrended fluctuation analysis and Fourier analysis

    NASA Astrophysics Data System (ADS)

    Kiyono, Ken

    2015-10-01

    To understand methodological features of the detrended fluctuation analysis (DFA) using a higher-order polynomial fitting, we establish the direct connection between DFA and Fourier analysis. Based on an exact calculation of the single-frequency response of the DFA, the following facts are shown analytically: (1) in the analysis of stochastic processes exhibiting a power-law scaling of the power spectral density (PSD), S(f) ~ f^(-β), a higher-order detrending in the DFA has no adverse effect on the estimation of the DFA scaling exponent α, which satisfies the scaling relation α = (β + 1)/2; (2) the upper limit of the scaling exponents detectable by the DFA depends on the order of the polynomial fit used in the DFA, and is bounded by m + 1, where m is the order of the polynomial fit; (3) the relation between the time scale in the DFA and the corresponding frequency in the PSD is distorted depending on both the order of the DFA and the frequency dependence of the PSD. We can improve the scale distortion by introducing a corrected time scale in the DFA corresponding to the inverse of the frequency scale in the PSD. In addition, our analytical approach makes it possible to characterize variants of the DFA using different types of detrending. As an application, properties of the detrending moving average algorithm are discussed.
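
    A minimal DFA implementation following the standard definition (profile, windowed polynomial detrending of order m, log-log regression of the fluctuation function). For white noise (β = 0), the estimated α should be near (β + 1)/2 = 0.5 for any detrending order; scales and lengths below are illustrative.

        import numpy as np

        def dfa(x, scales, m=1):
            y = np.cumsum(x - np.mean(x))      # profile (integrated series)
            F = []
            for n in scales:
                nseg = len(y) // n
                msq = []
                for i in range(nseg):
                    seg = y[i * n:(i + 1) * n]
                    t = np.arange(n)
                    trend = np.polyval(np.polyfit(t, seg, m), t)  # order-m detrending
                    msq.append(np.mean((seg - trend) ** 2))
                F.append(np.sqrt(np.mean(msq)))  # fluctuation function F(n)
            # Scaling exponent alpha from the slope of log F(n) vs log n.
            return np.polyfit(np.log(scales), np.log(F), 1)[0]

        x = np.random.default_rng(8).normal(size=2 ** 14)  # white noise, beta = 0
        print(dfa(x, [16, 32, 64, 128, 256], m=2))         # expect alpha near 0.5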

  18. Energy efficient data representation and aggregation with event region detection in wireless sensor networks

    NASA Astrophysics Data System (ADS)

    Banerjee, Torsha

    Unlike conventional networks, wireless sensor networks (WSNs) are limited in power, have much smaller memory buffers, and possess relatively slower processing speeds. These characteristics necessitate minimum transfer and storage of information in order to prolong the network lifetime. In this dissertation, we exploit the spatio-temporal nature of sensor data to approximate the current values of the sensors based on readings obtained from neighboring sensors and from the sensor itself. We propose a Tree-based polynomial REGression algorithm (TREG) that addresses the problem of data compression in wireless sensor networks. Instead of aggregated data, a polynomial function (P) is computed by the regression function, TREG. The coefficients of P are then passed to achieve the following goals: (i) the sink can get attribute values in regions devoid of sensor nodes, and (ii) readings over any portion of the region can be obtained at one time by querying the root of the tree. As the size of the data packet from each tree node to its parent remains constant, the proposed scheme scales very well with growing network density or increased coverage area. Since physical attributes exhibit gradual change over time, we propose an iterative scheme, UPDATE_COEFF, which obviates the need to perform the regression function repeatedly and uses approximations based on previous readings. Extensive simulations are performed on real-world data to demonstrate the effectiveness of our proposed aggregation algorithm, TREG. Results reveal that for a network density of 0.0025 nodes/m2, a complete binary tree of depth 4 keeps the absolute error below 6%. A data compression ratio of about 0.02 is achieved using our proposed algorithm, which is almost independent of the tree depth. In addition, our proposed updating scheme makes the aggregation process faster while maintaining the desired error bounds. We also propose a Polynomial-based scheme that addresses the problem of Event Region Detection (PERD) for WSNs. When a single event occurs, a child of the tree sends a Flagged Polynomial (FP) to its parent if the readings approximated by it fall outside the data range defining the existing phenomenon. After the aggregation process is over, the root, which holds the two polynomials P and FP, can be queried for FP (approximating the new event region) instead of flooding the whole network. For multiple such events, instead of computing a polynomial corresponding to each new event, areas with the same data range are combined by the corresponding tree nodes and the aggregated coefficients are passed on. Results reveal that a new event can be detected by PERD while the detection error remains constant and below a threshold of 10%. As the node density increases, accuracy and delay for event detection remain almost constant, making PERD highly scalable. Whenever an event occurs in a WSN, data are generated by nearby sensors, and relaying the data to the base station (BS) makes sensors closer to the BS run out of energy at a much faster rate than sensors in other parts of the network. This gives rise to an unequal distribution of residual energy in the network and makes those sensors with lower remaining energy die at a much faster rate than others. We propose a scheme for enhancing network lifetime using mobile cluster heads (CHs) in a WSN. To maintain the remaining energy more evenly, some energy-rich nodes are designated as CHs, which move in a controlled manner towards sensors rich in energy and data.
This eliminates the multihop transmission required by the static sensors and thus increases the overall lifetime of the WSN. We combine the ideas of clustering and mobile CHs to first form clusters of static sensor nodes. A collaborative strategy among the CHs further increases the lifetime of the network. The time taken to transmit data to the BS is reduced further by making the CHs follow a connectivity strategy that always maintains a connected path to the BS. The spatial correlation of sensor data can be further exploited for dynamic channel selection in cellular communication. In such a scenario, within a licensed band, wireless sensors can be deployed (each sensor tuned to a frequency of the channel at a particular time) to sense the interference power of the frequency band. In an ideal channel, the interference temperature (IT), which is directly proportional to the interference power, can be assumed to vary spatially with the frequency of the subchannel. We propose a scheme for fitting the subchannel frequencies and corresponding ITs to a regression model for calculating the IT of a random subchannel for further analysis of the channel interference at the base station. Our scheme, based on the readings reported by sensors, helps in Dynamic Channel Selection (S-DCS) in the extended C-band for assignment to unlicensed secondary users. S-DCS proves to be economical from an energy-consumption point of view, and it achieves accuracy with an error bound within 6.8%. Moreover, users are assigned empty subchannels without actually probing them, incurring minimum delay in the process. The overall channel throughput is maximized along with fairness to individual users.
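
    The data-reduction idea behind TREG can be illustrated in a few lines: fit a fixed-size bivariate polynomial to the readings in a region and forward only its coefficients, so the sink can evaluate the attribute anywhere, including gaps in coverage. Coordinates, readings, and the degree below are hypothetical.

        import numpy as np

        rng = np.random.default_rng(9)
        pos = rng.uniform(0, 100, size=(50, 2))   # sensor (x, y) positions in metres
        temp = (20 + 0.05 * pos[:, 0] - 0.02 * pos[:, 1]
                + rng.normal(0, 0.1, 50))         # synthetic attribute field

        def design(p):
            # Degree-2 terms: 1, x, y, x^2, xy, y^2 -> always 6 coefficients,
            # so the packet size per tree node stays constant.
            x, y = p[:, 0], p[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x ** 2, x * y, y ** 2])

        coef, *_ = np.linalg.lstsq(design(pos), temp, rcond=None)
        query = np.array([[40.0, 70.0]])          # a point with no sensor nearby
        print(design(query) @ coef)               # value reconstructed from coefficients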

  19. Detection and correction of laser induced breakdown spectroscopy spectral background based on spline interpolation method

    NASA Astrophysics Data System (ADS)

    Tan, Bing; Huang, Min; Zhu, Qibing; Guo, Ya; Qin, Jianwei

    2017-12-01

    Laser-induced breakdown spectroscopy (LIBS) is an analytical technique that has gained increasing attention because of its many applications. The production of a continuous background in LIBS is inevitable because of factors associated with laser energy, gate width, time delay, and experimental environment. The continuous background significantly influences the analysis of the spectrum. Researchers have proposed several background correction methods, such as polynomial fitting, Lorentz fitting and model-free methods, but few have applied them to LIBS, particularly in qualitative and quantitative analyses. This study proposes a method based on spline interpolation for detecting and estimating the continuous background spectrum according to its smoothness. A background-correction simulation indicated that the spline interpolation method acquired the largest signal-to-background ratio (SBR) compared with polynomial fitting, Lorentz fitting and the model-free method. All of these background correction methods acquire larger SBR values than before background correction (the SBR value before background correction is 10.0992, whereas the SBR values after background correction by spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 26.9576, 24.6828, 18.9770, and 25.6273, respectively). After adding random noise with different signal-to-noise ratios to the spectrum, the spline interpolation method acquires large SBR values, whereas polynomial fitting and the model-free method obtain low SBR values. All of the background correction methods improve the quantitative results for Cu relative to those acquired before background correction (the linear correlation coefficient before background correction is 0.9776, whereas the linear correlation coefficients after background correction using spline interpolation, polynomial fitting, Lorentz fitting, and model-free methods are 0.9998, 0.9915, 0.9895, and 0.9940, respectively). The proposed spline interpolation method exhibits better linear correlation and smaller error in the quantitative analysis of Cu compared with polynomial fitting, Lorentz fitting and model-free methods. The simulation and quantitative experimental results show that the spline interpolation method can effectively detect and correct the continuous background.
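
    A simplified sketch of spline-based background estimation: interpolate a cubic spline through local minima of a synthetic spectrum and subtract it. The anchor-point rule here (plain local minima over a fixed window) is an assumption; the paper's detection step is more elaborate.

        import numpy as np
        from scipy.interpolate import CubicSpline
        from scipy.signal import argrelmin

        wl = np.linspace(200, 800, 2000)                       # wavelength grid, nm
        background = 50 * np.exp(-(wl - 450) ** 2 / 2e5)       # slow continuous background
        peaks = sum(a * np.exp(-(wl - c) ** 2 / 2)             # narrow emission lines
                    for a, c in [(30, 324.7), (40, 327.4), (25, 510.6)])
        spectrum = background + peaks + np.random.default_rng(10).normal(0, 0.3, wl.size)

        idx = argrelmin(spectrum, order=50)[0]                 # candidate background anchors
        baseline = CubicSpline(wl[idx], spectrum[idx])(wl)     # smooth background estimate
        corrected = spectrum - baseline                        # background-corrected spectrum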

  20. Equivalences of the multi-indexed orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odake, Satoru

    2014-01-15

    Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.

  1. Complex Analysis and Related Topics. Proceedings of the Conference held in Amsterdam on 27 - 29 January 1993

    DTIC Science & Technology

    1993-01-29

    Bessel functions and Jacobi functions (cf. [2]). References [1] R. Askey & J. Wilson, Some basic hypergeometric orthogonal polynomials that generalize ... [-1; 1] can be treated as a part of the general theory of T-systems (see [8] for that theory and [7] for some aspects of the Chebyshev polynomials theory) ... waves in elastic media. It has been known for some time that these multiplicities sometimes occur for topological reasons and are present generically, see

  2. The use of rational functions in numerical quadrature

    NASA Astrophysics Data System (ADS)

    Gautschi, Walter

    2001-08-01

    Quadrature problems involving functions that have poles outside the interval of integration can profitably be solved by methods that are exact not only for polynomials of appropriate degree, but also for rational functions having the same (or the most important) poles as the function to be integrated. Constructive and computational tools for accomplishing this are described and illustrated in a number of quadrature contexts. The superiority of such rational/polynomial methods is shown by an analysis of the remainder term and documented by numerical examples.
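
    A toy comparison (not Gautschi's construction) makes the motivation concrete: ordinary Gauss-Legendre quadrature converges slowly for an integrand with a pole just outside [-1, 1], which is precisely the deficiency rational/polynomial rules are designed to remove.

    ```python
    import numpy as np

    # f has a pole at x = -1.01, just outside the integration interval [-1, 1].
    f = lambda x: 1.0 / (x + 1.01)
    exact = np.log(2.01 / 0.01)   # antiderivative is log(x + 1.01)

    # Plain Gauss-Legendre is exact only for polynomials, which approximate
    # f poorly near the pole, so the error decays slowly with n.
    for n in (5, 10, 20, 40):
        nodes, weights = np.polynomial.legendre.leggauss(n)
        approx = np.sum(weights * f(nodes))
        print(f"n={n:3d}  error={abs(approx - exact):.2e}")
    ```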

  3. Optimizing Support Vector Machine Parameters with Genetic Algorithm for Credit Risk Assessment

    NASA Astrophysics Data System (ADS)

    Manurung, Jonson; Mawengkang, Herman; Zamzami, Elviawaty

    2017-12-01

    Support vector machine (SVM) is a popular classification method known for strong generalization. SVMs handle both classification and regression, with linear or nonlinear kernels. A practical weakness, however, is that the optimal parameter values are difficult to determine. An SVM computes the best linear separator in the input feature space from the training data; to classify data that are not linearly separable, it uses the kernel trick to map the data into a higher-dimensional feature space where they become linearly separable. The kernel trick admits various kernel functions, such as the linear, polynomial, radial basis function (RBF), and sigmoid kernels, each with parameters that affect classification accuracy. To address the parameter-selection problem, a genetic algorithm is proposed as the search procedure for optimal parameter values, thereby improving the best classification accuracy of the SVM. Data were taken from the UCI machine learning repository (Australian Credit Approval). The results show that combining SVM with a genetic algorithm is effective in improving classification accuracy: the genetic algorithm systematically finds optimal kernel parameters rather than relying on randomly selected ones. The baseline accuracies that the search improved upon were: linear kernel 85.12%, polynomial 81.76%, RBF 77.22%, and sigmoid 78.70%. For larger data sets, however, the method is impractical because it is time consuming.
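
    A minimal sketch of the idea, assuming scikit-learn, a synthetic stand-in for the credit dataset, and a deliberately simple selection-plus-mutation loop (the paper's actual GA encoding and operators are not specified here):

    ```python
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X, y = make_classification(n_samples=300, n_features=14, random_state=0)

    def fitness(log_C, log_gamma):
        # Cross-validated accuracy of an RBF-kernel SVM for one parameter pair.
        clf = SVC(kernel="rbf", C=10.0 ** log_C, gamma=10.0 ** log_gamma)
        return cross_val_score(clf, X, y, cv=5).mean()

    # Tiny (mu + lambda)-style evolutionary search over log10(C), log10(gamma).
    pop = rng.uniform([-2, -4], [2, 0], size=(10, 2))
    for generation in range(15):
        scores = np.array([fitness(c, g) for c, g in pop])
        parents = pop[np.argsort(scores)[-5:]]                   # keep best half
        children = parents + rng.normal(0, 0.3, parents.shape)   # mutate
        pop = np.vstack([parents, children])

    best = pop[np.argmax([fitness(c, g) for c, g in pop])]
    print("best log10(C), log10(gamma):", best)
    ```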

  4. Selection of relevant input variables in storm water quality modeling by multiobjective evolutionary polynomial regression paradigm

    NASA Astrophysics Data System (ADS)

    Creaco, E.; Berardi, L.; Sun, Siao; Giustolisi, O.; Savic, D.

    2016-04-01

    The growing availability of field data, from information and communication technologies (ICTs) in "smart" urban infrastructures, allows data modeling to understand complex phenomena and to support management decisions. Among the analyzed phenomena, those related to storm water quality modeling have recently been gaining interest in the scientific literature. Nonetheless, the large amount of available data poses the problem of selecting relevant variables to describe a phenomenon and enable robust data modeling. This paper presents a procedure for the selection of relevant input variables using the multiobjective evolutionary polynomial regression (EPR-MOGA) paradigm. The procedure is based on scrutinizing the explanatory variables that appear inside the set of EPR-MOGA symbolic model expressions of increasing complexity and goodness of fit to target output. The strategy also enables the selection to be validated by engineering judgement. In such context, the multiple case study extension of EPR-MOGA, called MCS-EPR-MOGA, is adopted. The application of the proposed procedure to modeling storm water quality parameters in two French catchments shows that it was able to significantly reduce the number of explanatory variables for successive analyses. Finally, the EPR-MOGA models obtained after the input selection are compared with those obtained by using the same technique without benefitting from input selection and with those obtained in previous works where other data-modeling techniques were used on the same data. The comparison highlights the effectiveness of both EPR-MOGA and the input selection procedure.

  5. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The idea of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity, and Fuzziness. However, solutions based on these three parameters proved inefficient at producing answers. In this study, therefore, a new ranking method with four parameters has been developed to overcome this weakness. The new method is applied to interval type-2 fuzzy polynomials, covering interval type-2 fuzzy polynomial equations, dual fuzzy polynomial equations, and systems of fuzzy polynomials. Its efficiency is then examined numerically for triangular and trapezoidal fuzzy numbers. The approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  6. Use Of Zernike Polynomials And Interferometry In The Optical Design And Assembly Of Large Carbon-Dioxide Laser Systems

    NASA Astrophysics Data System (ADS)

    Viswanathan, V. K.

    1982-02-01

    This paper describes the need for non-raytracing schemes in the optical design and analysis of large carbon-dioxide lasers like the Gigawatt [1], Gemini [2], and Helios [3] lasers currently operational at Los Alamos, and the Antares [4] laser fusion system under construction. The scheme currently used at Los Alamos involves characterizing the various optical components with Zernike polynomial sets obtained by digitization [6] of experimentally produced interferograms of the components. A fast Fourier transform code then propagates the complex amplitude and phase of the beam through the whole system and computes the optical parameters of interest. The analysis scheme is illustrated through examples of the Gigawatt, Gemini, and Helios systems. A possible way of using Zernike polynomials in optical design problems of this type is discussed. Comparisons between computed values and experimentally obtained results are made, and it is concluded that this appears to be a valid approach. As this is a review article, some previously published results are also used where relevant.
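
    For readers unfamiliar with the representation, a wavefront is characterized as a sum of Zernike terms. The sketch below is a generic textbook evaluation of the radial polynomial R_n^m and a full Zernike term on the unit disk, not the Los Alamos code:

    ```python
    import numpy as np
    from math import factorial

    def zernike_radial(n, m, rho):
        """Radial Zernike polynomial R_n^m(rho), for n >= |m| with n - |m| even."""
        m = abs(m)
        R = np.zeros_like(rho, dtype=float)
        for k in range((n - m) // 2 + 1):
            c = ((-1) ** k * factorial(n - k)
                 / (factorial(k)
                    * factorial((n + m) // 2 - k)
                    * factorial((n - m) // 2 - k)))
            R += c * rho ** (n - 2 * k)
        return R

    def zernike(n, m, rho, theta):
        """Full Zernike term: cosine branch for m >= 0, sine branch for m < 0."""
        ang = np.cos(m * theta) if m >= 0 else np.sin(-m * theta)
        return zernike_radial(n, m, rho) * ang

    # Example: the defocus term Z(n=2, m=0) equals 2*rho**2 - 1.
    rho = np.linspace(0, 1, 5)
    print(zernike(2, 0, rho, np.zeros_like(rho)))
    ```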

  7. Stabilization of nonlinear systems using sampled-data output-feedback fuzzy controller based on polynomial-fuzzy-model-based control approach.

    PubMed

    Lam, H K

    2012-02-01

    This paper investigates the stability of sampled-data output-feedback (SDOF) polynomial-fuzzy-model-based control systems. Representing the nonlinear plant using a polynomial fuzzy model, an SDOF fuzzy controller is proposed to perform the control process using the system output information. As only the system output is available for feedback compensation, the controller design and system analysis are more challenging than in the full-state-feedback case. Furthermore, because of the sampling activity, the control signal is kept constant by the zero-order hold during the sampling period, which complicates the system dynamics and makes the stability analysis more difficult. In this paper, two cases of SDOF fuzzy controllers, which may or may not share the same number of fuzzy rules, are considered. The system stability is investigated based on Lyapunov stability theory using the sum-of-squares (SOS) approach. SOS-based stability conditions are obtained to guarantee the system stability and synthesize the SDOF fuzzy controller. Simulation examples are given to demonstrate the merits of the proposed SDOF fuzzy control approach.

  8. Coherent orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es

    2013-08-15

    We discuss a fundamental characteristic of orthogonal polynomials, namely the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we thus include, in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions, Lie algebra theory. We start here from the square integrable functions on an open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x⟩), for an alternative countable basis (|n⟩). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second-order Casimir C gives rise to the second-order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl-Heisenberg algebra h(1) with C=0 for Hermite polynomials and su(1,1) with C=-1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials, the Lie algebra is extended both to the whole space of L² functions and to the corresponding universal enveloping algebra and transformation group. Generalized coherent states from each vector in the space L² and, in particular, generalized coherent polynomials are thus obtained. Highlights: • Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. • Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. • The second-order Casimir originates a second-order differential equation that defines the corresponding OP family. • Generalized coherent polynomials are obtained from OP.

  9. A note on powers in finite fields

    NASA Astrophysics Data System (ADS)

    Aabrandt, Andreas; Lundsgaard Hansen, Vagn

    2016-08-01

    The study of solutions to polynomial equations over finite fields has a long history in mathematics and is an interesting area of contemporary research. In recent years, the subject has found important applications in the modelling of problems from applied mathematical fields such as signal analysis, system theory, coding theory and cryptology. In this connection, it is of interest to know criteria for the existence of squares and other powers in arbitrary finite fields. Making good use of polynomial division in polynomial rings over finite fields, we have examined a classical criterion of Euler for squares in odd prime fields, giving it a formulation that is apt for generalization to arbitrary finite fields and powers. Our proof uses algebra rather than classical number theory, which makes it convenient when presenting basic methods of applied algebra in the classroom.
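
    Euler's criterion, the classical result referred to above, states that for an odd prime p and a not divisible by p, a is a square mod p exactly when a^((p-1)/2) ≡ 1 (mod p). A small illustrative check (not from the paper):

    ```python
    def is_square_mod_p(a, p):
        """Euler's criterion: for an odd prime p and a not divisible by p,
        a is a quadratic residue mod p iff a**((p-1)//2) == 1 (mod p)."""
        return pow(a, (p - 1) // 2, p) == 1

    p = 23
    residues = sorted({x * x % p for x in range(1, p)})
    assert all(is_square_mod_p(a, p) for a in residues)
    assert not any(is_square_mod_p(a, p) for a in range(1, p) if a not in residues)
    print(residues)  # the 11 quadratic residues mod 23
    ```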

  10. Correction factors for on-line microprobe analysis of multielement alloy systems

    NASA Technical Reports Server (NTRS)

    Unnam, J.; Tenney, D. R.; Brewer, W. D.

    1977-01-01

    An on-line correction technique was developed for the conversion of electron probe X-ray intensities into concentrations of emitting elements. This technique consisted of off-line calculation and representation of binary interaction data which were read into an on-line minicomputer to calculate variable correction coefficients. These coefficients were used to correct the X-ray data without significantly increasing computer core requirements. The binary interaction data were obtained by running Colby's MAGIC 4 program in the reverse mode. The data for each binary interaction were represented by polynomial coefficients obtained by least-squares fitting a third-order polynomial. Polynomial coefficients were generated for most of the common binary interactions at different accelerating potentials and are included. Results are presented for the analyses of several alloy standards to demonstrate the applicability of this correction procedure.
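
    As a generic illustration of the curve-fitting step (the MAGIC 4 binary-interaction data are not reproduced here, so the numbers below are hypothetical), a third-order polynomial can be least-squares fitted and stored as just four coefficients:

    ```python
    import numpy as np

    # Hypothetical calibration data for one binary interaction:
    # measured relative X-ray intensity vs. known concentration.
    concentration = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
    intensity     = np.array([0.00, 0.17, 0.36, 0.58, 0.78, 1.00])

    # Least-squares fit of a third-order polynomial; the four coefficients
    # (highest power first) are all that need to be stored on-line.
    coeffs = np.polyfit(intensity, concentration, deg=3)

    def intensity_to_concentration(i):
        return np.polyval(coeffs, i)

    print(intensity_to_concentration(0.5))
    ```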

  11. Simple Proof of Jury Test for Complex Polynomials

    NASA Astrophysics Data System (ADS)

    Choo, Younseok; Kim, Dongmin

    Recently some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided based on Rouché's theorem and a single-parameter characterization of the Schur stability property for complex polynomials.
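
    For orientation, Schur stability means that all roots lie strictly inside the unit circle. The Jury test decides this from a tabular recursion on the coefficients; the snippet below is instead a direct numeric check of the root moduli, shown only to make the property concrete:

    ```python
    import numpy as np

    def is_schur_stable(coeffs):
        """True if all roots of the polynomial (highest degree first,
        complex coefficients allowed) lie strictly inside the unit circle."""
        return bool(np.all(np.abs(np.roots(coeffs)) < 1.0))

    # z**2 - 0.5j*z + 0.25: both roots well inside the unit circle.
    print(is_schur_stable([1, -0.5j, 0.25]))   # True
    # z**2 - 2z + 1 has a double root at z = 1: not Schur stable.
    print(is_schur_stable([1, -2, 1]))         # False
    ```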

  12. Temporal trends in physical violence, gender differences and spatial vulnerability of the location of victim's residences.

    PubMed

    Cavalcante, Gigliana Maria Sobral; de Macedo Bernardino, Ítalo; da Nóbrega, Lorena Marques; Ferreira, Raquel Conceição; Ferreira E Ferreira, Efigênia; d'Avila, Sérgio

    2018-06-01

    The aim of this study was to describe trends in physical violence among Brazilian victims and to investigate the spatial vulnerability of the locations of victims' residences. An ecological-level longitudinal analysis was performed, examining violence rates over 4 years. Cases of 4795 victims of physical aggression attended at a Center of Legal Medicine were investigated. Trend analysis was used to evaluate the data, with the creation of polynomial regression models (p < 0.05). Violence rates showed significant temporal variations according to the sociodemographic characteristics of victims (p < 0.05) and the circumstances of the aggressions (p < 0.05). Moreover, there was a significant increase in the violence rate in the North (R² = 16.1%; p = 0.019) and South (R² = 18.4%; p = 0.010), whereas the rural zone (R² = 10.1%; p = 0.028) presented a decrease. The findings highlight the need for protection policies that address spatial-temporal aspects. Copyright © 2018 Elsevier Ltd. All rights reserved.

  13. Enhancement of docosahexaenoic acid production by Schizochytrium SW1 using response surface methodology

    NASA Astrophysics Data System (ADS)

    Nazir, Mohd Yusuf Mohd; Al-Shorgani, Najeeb Kaid Nasser; Kalil, Mohd Sahaid; Hamid, Aidil Abdul

    2015-09-01

    In this study, three factors (fructose concentration, agitation speed, and monosodium glutamate (MSG) concentration) were optimized to enhance DHA production by Schizochytrium SW1 using response surface methodology (RSM). A central composite design was applied as the experimental design, and analysis of variance (ANOVA) was used to analyze the data. The experiments were conducted in 500-mL flasks with a 100-mL working volume at 30°C for 96 hours. ANOVA revealed that the process was adequately represented by the quadratic model (p<0.0001) and that two of the factors, agitation speed and MSG concentration, significantly affected DHA production (p<0.005). The level of influence of each variable and a quadratic polynomial equation for DHA production were obtained by multiple regression analysis. The estimated optimum conditions for maximizing DHA production by SW1 were 70 g/L fructose, 250 rpm agitation speed, and 12 g/L MSG. The quadratic model was then validated by applying the estimated optimum conditions, which confirmed the model's validity, and a DHA yield of 52.86% was obtained.

  14. Spectral characteristics of normal and nutrient-deficient maize leaves

    NASA Technical Reports Server (NTRS)

    Al-Abbas, A. H.; Barr, R.; Hall, J. D.; Crane, F. L.; Baumgardner, M. F.

    1972-01-01

    Reflectance, transmittance, and absorbance spectra of normal and six types of mineral-deficient (N, P, K, S, Mg, and Ca) maize (Zea mays L.) leaves were analyzed at 30 selected wavelengths along the electromagnetic spectrum from 500 to 2600 nm. Chlorophyll content and percent leaf moisture were also determined. Leaf thermograms were obtained for normal, N-, and S-deficient leaves. The results of the analysis of variance showed significant differences in reflectance, transmittance, and absorbance in the visible wavelengths among leaf numbers 3, 4, and 5, among the seven nutrient treatments, and among the interactions of leaves and treatments. In the reflective infrared wavelengths only treatments produced significant differences. The chlorophyll content of leaves was reduced in all deficiencies in comparison to controls. Percent moisture was increased in S-, Mg-, and N-deficiencies. Positive correlations (r = 0.707) between moisture content and percent absorption at both 1450 and 1930 nm were obtained. Polynomial regression analysis of leaf thickness and leaf moisture content showed that these two variables were significantly and directly related (r = 0.894).

  15. Box-Behnken design for investigation of microwave-assisted extraction of patchouli oil

    NASA Astrophysics Data System (ADS)

    Kusuma, Heri Septya; Mahfud, Mahfud

    2015-12-01

    A microwave-assisted extraction (MAE) technique was employed to extract the essential oil from patchouli (Pogostemon cablin), and the optimal conditions were determined by response surface methodology. A Box-Behnken design (BBD) was applied to evaluate the effects of three independent variables (microwave power (A: 400-800 W), plant-material-to-solvent ratio (B: 0.10-0.20 g mL-1), and extraction time (C: 20-60 min)) on the extraction yield of patchouli oil. Correlation analysis of the mathematical regression model indicated that a quadratic polynomial model could be employed to optimize the microwave extraction of patchouli oil. The optimal extraction conditions were a microwave power of 634.024 W, a plant-material-to-solvent ratio of 0.147648 g mL-1, and an extraction time of 51.6174 min, under which the maximum patchouli oil yield was 2.80516%. Under these conditions, the experimental values agreed with those predicted by analysis of variance, indicating a good model fit and the success of response surface methodology in optimizing the extraction conditions.

  16. Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into expressions for stress components. Then elimination of the dependent coefficients leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. Such derived components of the stress tensor identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved by using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method. In most cases, the performance of the Integrated Force Method is better overall.

  17. On the connection coefficients and recurrence relations arising from expansions in series of Laguerre polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2003-05-01

    A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.

  18. Myocardial strains from 3D displacement encoded magnetic resonance imaging

    PubMed Central

    2012-01-01

    Background The ability to measure and quantify myocardial motion and deformation provides a useful tool to assist in the diagnosis, prognosis and management of heart disease. The recent development of magnetic resonance imaging methods, such as harmonic phase analysis of tagging and displacement encoding with stimulated echoes (DENSE), make detailed non-invasive 3D kinematic analyses of human myocardium possible in the clinic and for research purposes. A robust analysis method is required, however. Methods We propose to estimate strain using a polynomial function which produces local models of the displacement field obtained with DENSE. Given a specific polynomial order, the model is obtained as the least squares fit of the acquired displacement field. These local models are subsequently used to produce estimates of the full strain tensor. Results The proposed method is evaluated on a numerical phantom as well as in vivo on a healthy human heart. The evaluation showed that the proposed method produced accurate results and showed low sensitivity to noise in the numerical phantom. The method was also demonstrated in vivo by assessment of the full strain tensor and to resolve transmural strain variations. Conclusions Strain estimation within a 3D myocardial volume based on polynomial functions yields accurate and robust results when validated on an analytical model. The polynomial field is capable of resolving the measured material positions from the in vivo data, and the obtained in vivo strains values agree with previously reported myocardial strains in normal human hearts. PMID:22533791
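
    A simplified, hypothetical version of the estimation step (a first-order polynomial model and the small-strain tensor, rather than the full polynomial order and strain measure used in the paper) fits an affine displacement field by least squares and symmetrizes its gradient:

    ```python
    import numpy as np

    def local_strain(points, displacements):
        """Fit u(x) ~= A @ x + b to sampled displacements by least squares
        and return the infinitesimal strain tensor eps = (A + A.T) / 2."""
        n = points.shape[0]
        design = np.hstack([points, np.ones((n, 1))])      # rows [x y z 1]
        sol, *_ = np.linalg.lstsq(design, displacements, rcond=None)
        A = sol[:3].T                                      # displacement gradient
        return 0.5 * (A + A.T)

    # Synthetic check: a uniform 2% stretch along x gives eps_xx = 0.02.
    rng = np.random.default_rng(1)
    pts = rng.uniform(-1, 1, size=(50, 3))
    disp = np.zeros_like(pts)
    disp[:, 0] = 0.02 * pts[:, 0]
    print(local_strain(pts, disp).round(4))
    ```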

  19. Molecular Dynamics Analysis of Lysozyme Protein in Ethanol-Water Mixed Solvent Environment

    NASA Astrophysics Data System (ADS)

    Ochije, Henry Ikechukwu

    The effect of protein-solvent interactions on protein structure is widely studied using both experimental and computational techniques. Despite such extensive studies, molecular-level understanding of proteins in even simple solvents is still incomplete. This work focuses on detailed molecular dynamics simulations of the solvent effect on lysozyme, using water, alcohol, and different concentrations of water-alcohol mixtures as solvents. The lysozyme structure in water, alcohol, and alcohol-water mixtures (0-12% alcohol) was studied using the GROMACS molecular dynamics simulation code. Compared to the water environment, the lysozyme structure showed remarkable changes with increasing alcohol concentration. In particular, significant changes were observed in the protein secondary structure involving alpha helices. The influence of alcohol on lysozyme was investigated by studying thermodynamic and structural properties. With increasing ethanol concentration we observed a systematic increase in total energy, enthalpy, root mean square deviation (RMSD), and radius of gyration, which were then fitted as functions of alcohol concentration using a polynomial interpolation approach. Using the resulting polynomial equation, we could determine the above quantities for any intermediate alcohol percentage. To validate this approach, we selected an intermediate ethanol percentage and carried out a full MD simulation. The results from the MD simulation were in reasonably good agreement with those obtained using the polynomial approach. Hence, the polynomial-based method proposed here eliminates the need for computationally intensive full MD analysis for concentrations within the range (0-12%) studied in this work.

  20. A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty

    NASA Astrophysics Data System (ADS)

    Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl

    2012-05-01

    The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results compared with the simulation results performed on ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
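
    To convey the flavour of the approach with a one-dimensional, Hermite-based toy (not the vehicle model itself): for an output depending on one standard-normal parameter, the polynomial chaos coefficients follow from Gauss-Hermite quadrature, and the mean and variance follow directly from the coefficients.

    ```python
    import numpy as np
    from math import factorial
    from numpy.polynomial.hermite_e import hermegauss, hermeval

    # Toy model: output of one uncertain parameter xi ~ N(0, 1).
    f = lambda xi: np.sin(xi) + 0.1 * xi ** 2

    # Project f onto probabilists' Hermite polynomials He_k:
    # c_k = E[f(xi) * He_k(xi)] / k!, since E[He_k(xi)**2] = k!.
    order, nquad = 8, 40
    nodes, weights = hermegauss(nquad)
    weights = weights / np.sqrt(2.0 * np.pi)   # normalize to the N(0,1) measure
    coeffs = [np.sum(weights * f(nodes) * hermeval(nodes, [0.0] * k + [1.0]))
              / factorial(k) for k in range(order + 1)]

    # Mean and variance come straight from the chaos coefficients.
    mean = coeffs[0]
    variance = sum(factorial(k) * coeffs[k] ** 2 for k in range(1, order + 1))
    print(mean, variance)

    # Monte Carlo cross-check.
    xi = np.random.default_rng(0).normal(size=200_000)
    print(f(xi).mean(), f(xi).var())
    ```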

  1. Image distortion analysis using polynomial series expansion.

    PubMed

    Baggenstoss, Paul M

    2004-11-01

    In this paper, we derive a technique for analysis of local distortions which affect data in real-world applications. In the paper, we focus on image data, specifically handwritten characters. Given a reference image and a distorted copy of it, the method is able to efficiently determine the rotations, translations, scaling, and any other distortions that have been applied. Because the method is robust, it is also able to estimate distortions for two unrelated images, thus determining the distortions that would be required to cause the two images to resemble each other. The approach is based on a polynomial series expansion using matrix powers of linear transformation matrices. The technique has applications in pattern recognition in the presence of distortions.

  2. Seasonality in twin birth rates, Denmark, 1936-84.

    PubMed

    Bonnelykke, B; Søgaard, J; Nielsen, J

    1987-12-01

    A study was made of seasonality in twin birth rate in Denmark between 1977 and 1984. We studied all twin births (N = 45,550) in all deliveries (N = 3,679,932) during that period. Statistical analysis using a simple harmonic sinusoidal model provided no evidence for seasonality. However, sequential polynomial analysis disclosed a significant fit to a fifth order polynomial curve with peaks in twin birth rates in May-June and December, along with troughs in February and September. A falling trend in twinning rate broke off in Denmark around 1970, and from 1970 to 1984 an increasing trend was found. The results are discussed in terms of possible environmental influences on twinning.

  3. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P = NP. For the second problem, we first give a polynomial-time λ-approximation algorithm for ΠΣΠ polynomials with terms of degree no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  4. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  5. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations confused with additive white Gaussian noises. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show effectiveness of the proposed filter compared to the extended Kalman filter.

  6. Time series modeling by a regression approach based on a latent process.

    PubMed

    Chamroukhi, Faicel; Samé, Allou; Govaert, Gérard; Aknin, Patrice

    2009-01-01

    Time series are used in many domains including finance, engineering, economics and bioinformatics generally to represent the change of a measurement over time. Modeling techniques may then be used to give a synthetic representation of such data. A new approach for time series modeling is proposed in this paper. It consists of a regression model incorporating a discrete hidden logistic process allowing for activating smoothly or abruptly different polynomial regression models. The model parameters are estimated by the maximum likelihood method performed by a dedicated Expectation Maximization (EM) algorithm. The M step of the EM algorithm uses a multi-class Iterative Reweighted Least-Squares (IRLS) algorithm to estimate the hidden process parameters. To evaluate the proposed approach, an experimental study on simulated data and real world data was performed using two alternative approaches: a heteroskedastic piecewise regression model using a global optimization algorithm based on dynamic programming, and a Hidden Markov Regression Model whose parameters are estimated by the Baum-Welch algorithm. Finally, in the context of the remote monitoring of components of the French railway infrastructure, and more particularly the switch mechanism, the proposed approach has been applied to modeling and classifying time series representing the condition measurements acquired during switch operations.

  7. Secure Logistic Regression Based on Homomorphic Encryption: Design and Evaluation

    PubMed Central

    Song, Yongsoo; Wang, Shuang; Xia, Yuhou; Jiang, Xiaoqian

    2018-01-01

    Background Learning a model without accessing raw data has been an intriguing idea to security and machine learning researchers for years. In an ideal setting, we want to encrypt sensitive data to store them on a commercial cloud and run certain analyses without ever decrypting the data to preserve privacy. Homomorphic encryption technique is a promising candidate for secure data outsourcing, but it is a very challenging task to support real-world machine learning tasks. Existing frameworks can only handle simplified cases with low-degree polynomials such as linear means classifier and linear discriminative analysis. Objective The goal of this study is to provide a practical support to the mainstream learning models (eg, logistic regression). Methods We adapted a novel homomorphic encryption scheme optimized for real numbers computation. We devised (1) the least squares approximation of the logistic function for accuracy and efficiency (ie, reduce computation cost) and (2) new packing and parallelization techniques. Results Using real-world datasets, we evaluated the performance of our model and demonstrated its feasibility in speed and memory consumption. For example, it took approximately 116 minutes to obtain the training model from the homomorphically encrypted Edinburgh dataset. In addition, it gives fairly accurate predictions on the testing dataset. Conclusions We present the first homomorphically encrypted logistic regression outsourcing model based on the critical observation that the precision loss of classification models is sufficiently small so that the decision plan stays still. PMID:29666041
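
    The least squares approximation mentioned under Methods can be illustrated generically (the degree and interval below are illustrative choices, not the paper's exact settings): fit a low-degree polynomial to the logistic function over a bounded range, since homomorphic schemes evaluate polynomials but not exponentials.

    ```python
    import numpy as np

    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

    # Least-squares fit of a degree-3 polynomial to the logistic function
    # on [-8, 8]; an HE scheme can evaluate the polynomial but not exp().
    x = np.linspace(-8, 8, 2001)
    coeffs = np.polyfit(x, sigmoid(x), deg=3)

    approx = np.polyval(coeffs, x)
    print("max abs error on [-8, 8]:", np.abs(approx - sigmoid(x)).max())
    ```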

  8. Fluoroscopic tumor tracking for image-guided lung cancer radiotherapy

    NASA Astrophysics Data System (ADS)

    Lin, Tong; Cerviño, Laura I.; Tang, Xiaoli; Vasconcelos, Nuno; Jiang, Steve B.

    2009-02-01

    Accurate lung tumor tracking in real time is a keystone to image-guided radiotherapy of lung cancers. Existing lung tumor tracking approaches can be roughly grouped into three categories: (1) deriving tumor position from external surrogates; (2) tracking implanted fiducial markers fluoroscopically or electromagnetically; (3) fluoroscopically tracking lung tumor without implanted fiducial markers. The first approach suffers from insufficient accuracy, while the second may not be widely accepted due to the risk of pneumothorax. Previous studies in fluoroscopic markerless tracking are mainly based on template matching methods, which may fail when the tumor boundary is unclear in fluoroscopic images. In this paper we propose a novel markerless tumor tracking algorithm, which employs the correlation between the tumor position and surrogate anatomic features in the image. The positions of the surrogate features are not directly tracked; instead, we use principal component analysis of regions of interest containing them to obtain parametric representations of their motion patterns. Then, the tumor position can be predicted from the parametric representations of surrogates through regression. Four regression methods were tested in this study: linear and two-degree polynomial regression, artificial neural network (ANN) and support vector machine (SVM). The experimental results based on fluoroscopic sequences of ten lung cancer patients demonstrate a mean tracking error of 2.1 pixels and a maximum error at a 95% confidence level of 4.6 pixels (pixel size is about 0.5 mm) for the proposed tracking algorithm.

  9. Nodal Statistics for the Van Vleck Polynomials

    NASA Astrophysics Data System (ADS)

    Bourget, Alain

    The Van Vleck polynomials naturally arise from the generalized Lamé equation as the polynomials of a given degree for which Eq. (1) has a polynomial solution of some degree k. In this paper, we compute the limiting distribution, as well as the limiting mean level spacings distribution, of the zeros of any Van Vleck polynomial as N --> ∞.

  10. Clostridium Difficile Infection Due to Pneumonia Treatment: Mortality Risk Models.

    PubMed

    Chmielewska, M; Zycinska, K; Lenartowicz, B; Hadzik-Błaszczyk, M; Cieplak, M; Kur, Z; Wardyn, K A

    2017-01-01

    One of the most common gastrointestinal infections after antibiotic treatment of community-acquired or nosocomial pneumonia is caused by the anaerobic, spore-forming bacterium Clostridium difficile (C. difficile). The aim of this study was to retrospectively assess mortality due to C. difficile infection (CDI) in patients treated for pneumonia. We identified 94 cases of post-pneumonia CDI among 217 patients with CDI. The mortality issue was addressed by creating mortality risk models using logistic regression and multivariate fractional polynomial analysis. The patients' demographics, clinical features, and laboratory results were taken into consideration. To estimate the influence of the preceding respiratory infection, a pneumonia severity scale was included in the analysis. The analysis yielded two statistically significant and clinically relevant mortality models. The model with the highest prognostic strength entailed age, leukocyte count, serum creatinine and urea concentration, hematocrit, and coexisting neoplasia or chronic obstructive pulmonary disease. In conclusion, we report two prognostic models, based on clinically relevant factors, which can help in predicting mortality risk in C. difficile infection secondary to antibiotic treatment of pneumonia. These models could be useful in preventive tailoring of individual therapy.

  11. Validation and Parameter Sensitivity Tests for Reconstructing Swell Field Based on an Ensemble Kalman Filter

    PubMed Central

    Wang, Xuan; Tandeo, Pierre; Fablet, Ronan; Husson, Romain; Guan, Lei; Chen, Ge

    2016-01-01

    The swell propagation model built on geometric optics is known to work well when simulating radiated swells from a far located storm. Based on this simple approximation, satellites have acquired plenty of large samples on basin-traversing swells induced by fierce storms situated in mid-latitudes. How to routinely reconstruct swell fields with these irregularly sampled observations from space via known swell propagation principle requires more examination. In this study, we apply 3-h interval pseudo SAR observations in the ensemble Kalman filter (EnKF) to reconstruct a swell field in ocean basin, and compare it with buoy swell partitions and polynomial regression results. As validated against in situ measurements, EnKF works well in terms of spatial–temporal consistency in far-field swell propagation scenarios. Using this framework, we further address the influence of EnKF parameters, and perform a sensitivity analysis to evaluate estimations made under different sets of parameters. Such analysis is of key interest with respect to future multiple-source routinely recorded swell field data. Satellite-derived swell data can serve as a valuable complementary dataset to in situ or wave re-analysis datasets. PMID:27898005

  12. Variation in reaction norms: Statistical considerations and biological interpretation.

    PubMed

    Morrissey, Michael B; Liefting, Maartje

    2016-09-01

    Analysis of reaction norms, the functions by which the phenotype produced by a given genotype depends on the environment, is critical to studying many aspects of phenotypic evolution. Different techniques are available for quantifying different aspects of reaction norm variation. We examine what biological inferences can be drawn from some of the more readily applicable analyses for studying reaction norms. We adopt a strongly biologically motivated view, but draw on statistical theory to highlight strengths and drawbacks of different techniques. In particular, consideration of some formal statistical theory leads to revision of some recently, and forcefully, advocated opinions on reaction norm analysis. We clarify what simple analysis of the slope between mean phenotype in two environments can tell us about reaction norms, explore the conditions under which polynomial regression can provide robust inferences about reaction norm shape, and explore how different existing approaches may be used to draw inferences about variation in reaction norm shape. We show how mixed model-based approaches can provide more robust inferences than more commonly used multistep statistical approaches, and derive new metrics of the relative importance of variation in reaction norm intercepts, slopes, and curvatures. © 2016 The Author(s). Evolution © 2016 The Society for the Study of Evolution.

  13. Ultrasonic-assisted extraction and in-vitro antioxidant activity of polysaccharide from Hibiscus leaf.

    PubMed

    Afshari, Kasra; Samavati, Vahid; Shahidi, Seyed-Ahmad

    2015-03-01

    The effects of ultrasonic power, extraction time, extraction temperature, and the water-to-raw-material ratio on the extraction yield of crude polysaccharide from the leaf of Hibiscus rosa-sinensis (HRLP) were optimized by statistical analysis using response surface methodology (RSM), implemented with a Box-Behnken design (BBD). The experimental data obtained were fitted to a second-order polynomial equation using multiple regression analysis and were also analyzed by appropriate statistical methods (ANOVA). Analysis of the results showed that the linear and quadratic terms of these four variables had significant effects. The optimal conditions for the highest extraction yield of HRLP were: ultrasonic power, 93.59 W; extraction time, 25.71 min; extraction temperature, 93.18°C; and water-to-raw-material ratio, 24.3 mL/g. Under these conditions, the experimental yield was 9.66±0.18%, which is in close agreement with the value of 9.526% predicted by the model. The results demonstrated that HRLP had strong in vitro scavenging activities on DPPH and hydroxyl radicals. Copyright © 2014 Elsevier B.V. All rights reserved.

  14. Multi-criteria manufacturability indices for ranking high-concentration monoclonal antibody formulations.

    PubMed

    Yang, Yang; Velayudhan, Ajoy; Thornhill, Nina F; Farid, Suzanne S

    2017-09-01

    The need for high-concentration formulations for subcutaneous delivery of therapeutic monoclonal antibodies (mAbs) can present manufacturability challenges for the final ultrafiltration/diafiltration (UF/DF) step. Viscosity levels and the propensity to aggregate are key considerations for high-concentration formulations. This work presents novel frameworks for deriving a set of manufacturability indices related to viscosity and thermostability to rank high-concentration mAb formulation conditions in terms of their ease of manufacture. This is illustrated by analyzing published high-throughput biophysical screening data that explore the influence of different formulation conditions (pH, ions, and excipients) on solution viscosity and product thermostability. A decision tree classification method, CART (Classification and Regression Tree), is used to identify the critical formulation conditions that influence viscosity and thermostability. Three different multi-criteria data analysis frameworks were investigated to derive manufacturability indices from analysis of the stress maps and the process conditions experienced in the final UF/DF step. Polynomial regression techniques were used to transform the experimental data into a set of stress maps that show viscosity and thermostability as functions of the formulation conditions. A mathematical filtrate flux model was used to capture the time profiles of protein concentration and flux decay behavior during UF/DF. Multi-criteria decision-making analysis was used to identify the optimal formulation conditions that minimize the potential for both viscosity and aggregation issues during UF/DF. Biotechnol. Bioeng. 2017;114:2043-2056. © 2017 The Authors. Biotechnology and Bioengineering published by Wiley Periodicals, Inc.

  15. Legendre modified moments for Euler's constant

    NASA Astrophysics Data System (ADS)

    Prévost, Marc

    2008-10-01

    Polynomial moments are often used for the computation of Gauss quadrature to stabilize the numerical calculation of the orthogonal polynomials; see [W. Gautschi, Computational aspects of orthogonal polynomials, in: P. Nevai (Ed.), Orthogonal Polynomials: Theory and Practice, NATO ASI Series C, vol. 294, Kluwer, Dordrecht, 1990, pp. 181-216; W. Gautschi, On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48(4) (1986) 369-382; W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput. 3(3) (1982) 289-317].

  16. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA, to address the limited number of imaging pixels and the color distortion of ultra-thin electronic endoscopes. Simulation results showed that the proposed design achieves real-time display of 1280 x 720@60Hz HD video and, using the X-Rite ColorChecker as the color standard, reduces the average color difference by about 30% compared with that before color correction.
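
    Bilinear interpolation itself is standard; the paper's pipeline is hardware, so the NumPy sketch below is only a software reference model of the same arithmetic:

    ```python
    import numpy as np

    def bilinear_resize(img, new_h, new_w):
        """Reference (software) model of bilinear interpolation for a 2D
        image: each output pixel blends the four nearest input pixels."""
        h, w = img.shape[:2]
        ys = np.linspace(0, h - 1, new_h)
        xs = np.linspace(0, w - 1, new_w)
        y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
        x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
        wy = (ys - y0)[:, None]; wx = (xs - x0)[None, :]
        top = (1 - wx) * img[y0][:, x0] + wx * img[y0][:, x1]
        bot = (1 - wx) * img[y1][:, x0] + wx * img[y1][:, x1]
        return (1 - wy) * top + wy * bot

    small = np.arange(16, dtype=float).reshape(4, 4)
    print(bilinear_resize(small, 8, 8).round(2))
    ```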

  17. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, meaning thereby the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  18. Three-dimensional trend mapping from wire-line logs

    USGS Publications Warehouse

    Doveton, J.H.; Ke-an, Z.

    1985-01-01

    Mapping of lithofacies and porosities of stratigraphic units is complicated because these properties vary in three dimensions. The method of moments was proposed by Krumbein and Libby (1957) as a technique to aid in resolving this problem. Moments are easily computed from wireline logs and are simple statistics which summarize vertical variation in a log trace. Combinations of moment maps have proved useful in understanding vertical and lateral changes in the lithology of sedimentary rock units. Although moments have meaning both as statistical descriptors and as mechanical properties, they also define polynomial curves which approximate lithologic changes as a function of depth. These polynomials can be fitted by least-squares methods, partitioning major trends in rock properties from fine-scale fluctuations. Analysis of variance yields the degree of fit of any polynomial and measures the proportion of vertical variability expressed by any moment or combination of moments. In addition, polynomial curves can be differentiated to determine depths at which pronounced expressions of facies occur and to determine the locations of boundaries between major lithologic subdivisions. Moments can be estimated at any location in an area by interpolating from log moments at control wells. A matrix algebra operation then converts moment estimates to coefficients of a polynomial function which describes a continuous curve of lithologic variation with depth. If this procedure is applied to a grid of geographic locations, the result is a model of variability in three dimensions. Resolution of the model is determined largely by the number of moments used in its generation. The method is illustrated with an analysis of lithofacies in the Simpson Group of south-central Kansas; the three-dimensional model is shown as cross sections and slice maps. In this study, the gamma-ray log is used as a measure of the shaliness of the unit. However, the method is general and can be applied, for example, to suites of neutron, density, or sonic logs to produce three-dimensional models of porosity in reservoir rocks. © 1985 Plenum Publishing Corporation.

  19. Discrimination of orange beverage emulsions with different formulations using multivariate analysis.

    PubMed

    Mirhosseini, Hamed; Tan, Chin Ping

    2010-06-01

    The constituents in a food emulsion interact with each other, either physically or chemically, determining the overall physicochemical and organoleptic properties of the final product. Thus, the main objective of the present study was to investigate the effect of emulsion components on beverage emulsion properties. In most cases, second-order polynomial regression models with no significant (P > 0.05) lack of fit and high adjusted coefficients of determination (adjusted R(2), 0.851-0.996) were significantly fitted to explain the beverage emulsion properties as functions of the main emulsion components. The main effect of gum arabic was found to be significant (P < 0.05) in all response regression models. An orange beverage emulsion containing 222.0 g kg(-1) gum arabic, 2.4 g kg(-1) xanthan gum, and 152.7 g kg(-1) orange oil was predicted to provide the desirable emulsion properties. The present study suggests that the concentration of gum arabic should be considered a primary critical factor in the formulation of orange beverage emulsions. This study also indicated that the interaction between xanthan gum and orange oil had the most significant (P < 0.05) effect among all interactions influencing the physicochemical properties, except for density. Copyright (c) 2010 Society of Chemical Industry.

  1. RBSURFpred: Modeling protein accessible surface area in real and binary space using regularized and optimized regression.

    PubMed

    Tarafder, Sumit; Toukir Ahmed, Md; Iqbal, Sumaiya; Tamjidul Hoque, Md; Sohel Rahman, M

    2018-03-14

    Accessible surface area (ASA) of a protein residue is an effective feature for protein structure prediction, binding region identification, fold recognition, and related problems. Improving the prediction of ASA through effective feature variables is a challenging but explorable task, especially in the field of machine learning. Among existing predictors of ASA, REGAd3p is a highly accurate predictor based on regularized exact regression with a polynomial kernel of degree 3. In this work, we present a new predictor, RBSURFpred, which extends REGAd3p along several dimensions by incorporating 58 physicochemical, evolutionary, and structural properties into 9-tuple peptides via Chou's general PseAAC, which allowed us to obtain higher accuracies in predicting both real-valued and binary ASA. We have compared RBSURFpred for both real and binary space predictions with state-of-the-art predictors such as REGAd3p and SPIDER2. We have also carried out a rigorous analysis of the performance of RBSURFpred in terms of different amino acids and their properties, and with biologically relevant case studies. Its performance establishes RBSURFpred as a useful tool for the community. Copyright © 2018 Elsevier Ltd. All rights reserved.

  2. Direct calculation of modal parameters from matrix orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Guillaume, Patrick

    2011-10-01

    The object of this paper is to introduce a new technique to derive the global modal parameters (i.e., system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple-input multiple-output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly; in particular, the transformation of the coefficients from an orthogonal polynomial basis to a power polynomial basis is known to be ill-conditioned. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials, so that high-order models can be used without numerical problems. The proposed method is compared with existing methods (Van der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]) using simulated as well as experimental data.

  3. The algorithmic details of polynomials application in the problems of heat and mass transfer control on the hypersonic aircraft permeable surfaces

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2018-03-01

    Mathematical modeling problems for the effective control of heat and mass transfer on the permeable surfaces of hypersonic aircraft are considered. The constructive and gasdynamical restrictions on the control (the blowing) are analyzed for porous and perforated surfaces. Classes of functions that allow the controls to be realized while respecting the arising types of restrictions are suggested. Estimates of the computational complexity of applying W. G. Horner's scheme to the C. Hermite interpolation polynomial are given.
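
    For reference, Horner's scheme evaluates a degree-n polynomial with n multiplications and n additions, which is what makes it the natural choice for the complexity estimates mentioned above:

```python
# Horner's scheme: evaluate a_n x^n + ... + a_1 x + a_0 in n mul/add pairs.
def horner(coeffs, x):
    """coeffs are given from highest to lowest degree."""
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

print(horner([2, -6, 2, -1], 3.0))   # p(x) = 2x^3 - 6x^2 + 2x - 1; p(3) = 5
```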

  4. Whittaker-Hill equation, Ince polynomials, and molecular torsional modes

    NASA Astrophysics Data System (ADS)

    Roncaratti, Luiz F.; Aquilanti, Vincenzo

    We present an analysis of the Whittaker-Hill equation in view of its usefulness in quantum mechanics when periodic potentials are involved. The transformation due to Ince leads to polynomial solutions which have not attracted much attention so far in applications. With respect to the Mathieu equation, here we have an additional parameter, which permits the description of a variety of phenomena, including the treatment of the torsional motion of flexible molecules. Examples are discussed, with particular attention paid to the case of H2O2 and similar molecules.

  5. Identification of stochastic interactions in nonlinear models of structural mechanics

    NASA Astrophysics Data System (ADS)

    Kala, Zdeněk

    2017-07-01

    In this paper, a polynomial approximation is presented by which the Sobol sensitivity analysis can be evaluated with all sensitivity indices. The nonlinear FEM model is approximated, and the input domain is mapped using simulation runs of the Latin Hypercube Sampling method. The domain of the approximation polynomial is chosen so that a large number of Latin Hypercube Sampling simulation runs can be applied. The presented method also makes it possible to evaluate higher-order sensitivity indices, which could not be identified on the nonlinear FEM model directly.
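
    A compact sketch of the idea (not the paper's implementation): fit a polynomial surrogate in a tensor Legendre basis, then read all Sobol indices, including the higher-order one, off the squared coefficients. Plain random sampling stands in for Latin Hypercube Sampling, and a simple analytic function stands in for the FEM model.

```python
# Sobol indices from a Legendre polynomial surrogate on [-1, 1]^2.
import numpy as np
from numpy.polynomial import legendre as L

rng = np.random.default_rng(2)
x1, x2 = rng.uniform(-1, 1, (2, 500))   # stand-in for an LHS design
y = x1 + 0.5 * x2**2 + x1 * x2          # hypothetical model response

V = L.legvander2d(x1, x2, [2, 2])       # tensor Legendre basis up to degree 2
c, *_ = np.linalg.lstsq(V, y, rcond=None)
c = c.reshape(3, 3)                     # c[i, j] multiplies P_i(x1) * P_j(x2)

# Variance carried by each basis term under uniform inputs:
i, j = np.meshgrid(np.arange(3), np.arange(3), indexing="ij")
var_terms = c**2 / ((2 * i + 1) * (2 * j + 1))
var_terms[0, 0] = 0.0                   # the mean term carries no variance
total = var_terms.sum()

S1 = var_terms[1:, 0].sum() / total     # first-order index of x1
S2 = var_terms[0, 1:].sum() / total     # first-order index of x2
S12 = var_terms[1:, 1:].sum() / total   # second-order (interaction) index
print(f"S1={S1:.3f}  S2={S2:.3f}  S12={S12:.3f}")
```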

  6. Normality of raw data in general linear models: The most widespread myth in statistics

    USGS Publications Warehouse

    Kery, Marc; Hatfield, Jeff S.

    2003-01-01

    In years of statistical consulting for ecologists and wildlife biologists, by far the most common misconception we have come across has been the one about normality in general linear models. These comprise a very large part of the statistical models used in ecology and include t tests, simple and multiple linear regression, polynomial regression, and analysis of variance (ANOVA) and covariance (ANCOVA). There is a widely held belief that the normality assumption pertains to the raw data rather than to the model residuals. We suspect that this error may also occur in countless published studies, whenever the normality assumption is tested prior to analysis. This may lead to the use of nonparametric alternatives (if there are any), when parametric tests would indeed be appropriate, or to use of transformations of raw data, which may introduce hidden assumptions such as multiplicative effects on the natural scale in the case of log-transformed data. Our aim here is to dispel this myth. We very briefly describe relevant theory for two cases of general linear models to show that the residuals need to be normally distributed if tests requiring normality are to be used, such as t and F tests. We then give two examples demonstrating that the distribution of the response variable may be nonnormal, and yet the residuals are well behaved. We do not go into the issue of how to test normality; instead we display the distributions of response variables and residuals graphically.
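
    The point is easy to demonstrate with a simulation (illustrative, not from the paper): a two-group design produces a strongly bimodal, hence non-normal, response, yet the residuals around the group means pass a normality test comfortably.

```python
# Raw data non-normal, residuals normal: the distinction discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
group = rng.integers(0, 2, 1000)          # two-group ANOVA-type design
y = 10 * group + rng.normal(0, 1, 1000)   # bimodal raw response

means = np.where(group == 1, y[group == 1].mean(), y[group == 0].mean())
resid = y - means

print("raw data  p =", stats.shapiro(y[:500]).pvalue)      # tiny: "non-normal"
print("residuals p =", stats.shapiro(resid[:500]).pvalue)  # large: normal
```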

  7. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  8. An Analysis of Polynomial Chaos Approximations for Modeling Single-Fluid-Phase Flow in Porous Medium Systems

    PubMed Central

    Rupert, C.P.; Miller, C.T.

    2008-01-01

    We examine a variety of polynomial-chaos-motivated approximations to a stochastic form of a steady state groundwater flow model. We consider approaches for truncating the infinite dimensional problem and producing decoupled systems. We discuss conditions under which such decoupling is possible and show that to generalize the known decoupling by numerical cubature, it would be necessary to find new multivariate cubature rules. Finally, we use the acceleration of Monte Carlo to compare the quality of polynomial models obtained for all approaches and find that in general the methods considered are more efficient than Monte Carlo for the relatively small domains considered in this work. A curse of dimensionality in the series expansion of the log-normal stochastic random field used to represent hydraulic conductivity provides a significant impediment to efficient approximations for large domains for all methods considered in this work, other than the Monte Carlo method. PMID:18836519

  9. Uncertainty Quantification in Simulations of Epidemics Using Polynomial Chaos

    PubMed Central

    Santonja, F.; Chen-Charpentier, B.

    2012-01-01

    Mathematical models based on ordinary differential equations are a useful tool to study the processes involved in epidemiology. Many models consider the parameters to be deterministic variables. In practice, however, the transmission parameters present large variability and cannot be determined exactly, so it is necessary to introduce randomness. In this paper, we present an application of the polynomial chaos approach to epidemiological mathematical models based on ordinary differential equations with random coefficients. Taking into account the variability of the transmission parameters of the model, this approach allows us to obtain an auxiliary system of differential equations, which is then integrated numerically to obtain the first- and second-order moments of the output stochastic processes. A sensitivity analysis based on the polynomial chaos approach is also performed to determine which parameters have the greatest influence on the results. As an example, we apply the approach to an obesity epidemic model. PMID:22927889
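
    A minimal non-intrusive sketch of the moment computation on a scalar toy problem: a Gaussian random decay rate k propagated through x(t) = exp(-kt), with the first two output moments recovered by Gauss-Hermite quadrature. The paper's Galerkin-type auxiliary system is not reproduced here; this is just the cheapest route to the same moments.

```python
# First two moments of x(t) = exp(-k t) with k ~ N(mu, sigma^2).
import numpy as np

mu_k, sigma_k, t = 0.3, 0.05, 5.0
nodes, weights = np.polynomial.hermite_e.hermegauss(10)  # probabilists' rule
w = weights / np.sqrt(2 * np.pi)                         # normalise to N(0, 1)

k = mu_k + sigma_k * nodes
x = np.exp(-k * t)                    # model output at each quadrature node
mean = np.sum(w * x)
var = np.sum(w * x**2) - mean**2

exact_mean = np.exp(-mu_k * t + 0.5 * (sigma_k * t) ** 2)  # lognormal moment
print(f"PC mean {mean:.6f} vs exact {exact_mean:.6f}, variance {var:.2e}")
```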

  10. Geometric analysis and restitution of digital multispectral scanner data arrays

    NASA Technical Reports Server (NTRS)

    Baker, J. R.; Mikhail, E. M.

    1975-01-01

    An investigation was conducted to define causes of geometric defects within digital multispectral scanner (MSS) data arrays, to analyze the resulting geometric errors, and to investigate restitution methods to correct or reduce these errors. Geometric transformation relationships for scanned data, from which collinearity equations may be derived, served as the basis of parametric methods of analysis and restitution of MSS digital data arrays. The linearization of these collinearity equations is presented. Algorithms considered for use in analysis and restitution included the MSS collinearity equations, piecewise polynomials based on linearized collinearity equations, and nonparametric algorithms. A proposed system for geometric analysis and restitution of MSS digital data arrays was used to evaluate these algorithms, utilizing actual MSS data arrays. It was shown that collinearity equations and nonparametric algorithms both yield acceptable results, but nonparametric algorithms possess definite advantages in computational efficiency. Piecewise polynomials were found to yield inferior results.

  11. Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.

    1998-01-01

    The use of response surface models and kriging models is compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial models for approximation, kriging is presented as an alternative statistical-based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, the kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second-order polynomial response surface models.
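
    A hedged side-by-side of the two surrogate types on a toy deterministic function, with sklearn standing in for the paper's tooling: a second-order polynomial response surface versus a Gaussian-process (kriging) model with a constant trend and Gaussian (RBF) correlation.

```python
# Polynomial response surface vs. kriging surrogate on a toy 2-D function.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(4)
X = rng.uniform(-2, 2, (40, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2        # deterministic "analysis"

rsm = make_pipeline(PolynomialFeatures(2), LinearRegression()).fit(X, y)
krig = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)

X_test = rng.uniform(-2, 2, (200, 2))
y_test = np.sin(X_test[:, 0]) + 0.5 * X_test[:, 1] ** 2
for name, m in [("RSM", rsm), ("kriging", krig)]:
    rmse = np.sqrt(np.mean((m.predict(X_test) - y_test) ** 2))
    print(f"{name:8s} RMSE = {rmse:.4f}")
```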

  12. Independence polynomial and matching polynomial of the Koch network

    NASA Astrophysics Data System (ADS)

    Liao, Yunhua; Xie, Xiaoliang

    2015-11-01

    The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that these problems are computationally “intractable”. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.

  13. Asymptotically extremal polynomials with respect to varying weights and application to Sobolev orthogonality

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2008-10-01

    We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e^(-φ(x)), giving a unified treatment for the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdös (when φ grows faster than any polynomial at infinity) cases. In addition, we provide a new proof for the bound of the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.

  14. A study of the orthogonal polynomials associated with the quantum harmonic oscillator on constant curvature spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vignat, C.; Lamberti, P. W.

    2009-10-15

    Recently, Carinena et al. [Ann. Phys. 322, 434 (2007)] introduced a new family of orthogonal polynomials that appear in the wave functions of the quantum harmonic oscillator in two-dimensional constant curvature spaces. They are a generalization of the Hermite polynomials and will be called curved Hermite polynomials in the following. We show that these polynomials are naturally related to the relativistic Hermite polynomials introduced by Aldaya et al. [Phys. Lett. A 156, 381 (1991)], and thus are Jacobi polynomials. Moreover, we exhibit a natural bijection between the solutions of the quantum harmonic oscillator on negative curvature spaces and on positive curvature spaces. Finally, we show a maximum entropy property for the ground states of these oscillators.

  15. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach

    NASA Astrophysics Data System (ADS)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most of the existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee that the Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The control design proposed in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered, and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  16. Single-wavelength based Thai jasmine rice identification with polynomial fitting function and neural network analysis

    NASA Astrophysics Data System (ADS)

    Suwansukho, Kajpanya; Sumriddetchkajorn, Sarun; Buranasiri, Prathan

    2013-06-01

    We previously showed that a combination of image thresholding, chain coding, elliptic Fourier descriptors, and artificial neural network analysis provided a low false acceptance rate (FAR) of 11.0% and a false rejection rate (FRR) of 19.0% in identifying Thai jasmine rice among three unwanted rice varieties. In this work, we highlight that a polynomial function fitted to the determined chain code, together with the neural network analysis, is sufficient to obtain a very low FAR of < 3.0% and a very low FRR of 0.3% for the separation of Thai jasmine rice from the Chainat 1 (CNT1), Prathumtani 1 (PTT1), and Hom-Pitsanulok (HPSL) rice varieties. With this proposed approach, the analysis time is reduced tremendously, from 4,250 seconds down to 2 seconds, implying extremely high potential for practical deployment.

  17. Hadamard Factorization of Stable Polynomials

    NASA Astrophysics Data System (ADS)

    Loredo-Villalobos, Carlos Arturo; Aguirre-Hernández, Baltazar

    2011-11-01

    The stable (Hurwitz) polynomials are important in the study of systems of differential equations and in control theory (see [7] and [19]). A property of these polynomials is related to the Hadamard product. Consider two polynomials p, q ∈ R[x]:

    p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0
    q(x) = b_m x^m + b_{m-1} x^{m-1} + ... + b_1 x + b_0

    The Hadamard product p × q is defined as

    (p × q)(x) = a_k b_k x^k + a_{k-1} b_{k-1} x^{k-1} + ... + a_1 b_1 x + a_0 b_0,

    where k = min(m, n). Some results (see [16]) show that if p, q ∈ R[x] are stable polynomials then p × q is stable as well, i.e. the class of stable polynomials is closed under the Hadamard product; however, the converse is not always true: not every stable polynomial of degree n > 4 has a factorization into two stable polynomials of the same degree (see [15]). In this work we give some conditions for the existence of a Hadamard factorization of stable polynomials.
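
    The coefficient-wise product in the definition above takes two lines of code; a numerical root check then illustrates the closure property on a small example (illustrative only):

```python
# Hadamard product of two stable (Hurwitz) polynomials, plus a stability check.
import numpy as np

def hadamard(p, q):
    """p, q: coefficient lists from lowest to highest degree."""
    k = min(len(p), len(q))
    return [p[i] * q[i] for i in range(k)]

def is_hurwitz(coeffs_low_to_high):
    roots = np.roots(coeffs_low_to_high[::-1])  # np.roots wants high-to-low
    return bool(np.all(roots.real < 0))

p = [6, 11, 6, 1]           # (x+1)(x+2)(x+3), stable
q = [2, 3, 1]               # (x+1)(x+2), stable
pq = hadamard(p, q)         # [12, 33, 6], i.e. 6x^2 + 33x + 12
print(pq, is_hurwitz(pq))   # closure: the product is again stable
```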

  18. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves are stated. A formula for the Jacobi coefficients of the moments of one single Jacobi polynomial of certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expanded coefficients is also given. A simple approach in order to construct and solve recursively for the connection coefficients between Jacobi-Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi are developed.
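
    Connection coefficients of this kind are easy to check numerically with numpy's basis-conversion routines, e.g. expanding a Chebyshev polynomial in the Legendre basis by routing through the power basis (a spot check of one case, not Doha's closed-form recurrences):

```python
# Connection coefficients of T_4 onto the Legendre basis P_0..P_4.
from numpy.polynomial import chebyshev as C, legendre as L

t4 = [0, 0, 0, 0, 1]             # T_4 in the Chebyshev basis
power = C.cheb2poly(t4)          # T_4 in the monomial basis: 8x^4 - 8x^2 + 1
in_legendre = L.poly2leg(power)  # coefficients of T_4 in the Legendre basis
print(in_legendre)
```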

  19. Investigation to realize a computationally efficient implementation of the high-order instantaneous-moments-based fringe analysis method

    NASA Astrophysics Data System (ADS)

    Gorthi, Sai Siva; Rajshekhar, Gannavarpu; Rastogi, Pramod

    2010-06-01

    Recently, a high-order instantaneous moments (HIM)-operator-based method was proposed for accurate phase estimation in digital holographic interferometry. The method relies on piecewise polynomial approximation of the phase and subsequent evaluation of the polynomial coefficients from the HIM operator using single-tone frequency estimation. This work presents a comparative analysis of the performance of different single-tone frequency estimation techniques within HIM-operator-based phase estimation: Fourier transform followed by optimization, estimation of signal parameters by rotational invariance technique (ESPRIT), multiple signal classification (MUSIC), and iterative frequency estimation by interpolation on Fourier coefficients (IFEIF). Simulation and experimental results demonstrate the potential of the IFEIF technique with respect to computational efficiency and estimation accuracy.

  20. Nonlinear temperature dependent failure analysis of finite width composite laminates

    NASA Technical Reports Server (NTRS)

    Nagarkar, A. P.; Herakovich, C. T.

    1979-01-01

    A quasi-three-dimensional, nonlinear elastic finite element stress analysis of finite width composite laminates, including curing stresses, is presented. Cross-ply, angle-ply, and two quasi-isotropic graphite/epoxy laminates are studied. Curing stresses are calculated using temperature-dependent elastic properties that are input as percent retention curves, and stresses due to mechanical loading in the form of an axial strain are calculated using tangent moduli obtained from Ramberg-Osgood parameters. It is shown that curing stresses and stresses due to tensile loading are significant as edge effects in all types of laminates studied. The tensor polynomial failure criterion is used to predict the initiation of failure. The mode of failure is predicted by examining individual stress contributions to the tensor polynomial.

  1. Multivariable polynomial fitting of controlled single-phase nonlinear load of input current total harmonic distortion

    NASA Astrophysics Data System (ADS)

    Sikora, Roman; Markiewicz, Przemysław; Pabjańczyk, Wiesława

    2018-04-01

    Power systems usually include a number of nonlinear receivers. Nonlinear receivers are a source of disturbances injected into the power system in the form of higher harmonics. The level of these disturbances is described by the total harmonic distortion (THD) coefficient, whose value depends on many factors, among them deformation of and changes in the RMS value of the supply voltage. A modern LED luminaire is a nonlinear receiver as well. The paper presents the results of an analysis of the influence of changes in the RMS value of the supply voltage and of the dimming level of the tested luminaire on the value of the current THD. The analysis was made using a mathematical model based on multivariable polynomial fitting.

  2. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

    Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, this paper focuses on several different fitting or interpolation methods to improve the data precision, so as to meet, as far as possible, the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public points, the elevation difference at any other point can be fitted or interpolated; the precise elevation can then be obtained by subtracting the elevation difference from the Google Earth data. The quadratic polynomial surface fitting method, the cubic polynomial surface fitting method, the V4 interpolation method in MATLAB and a neural network method are used to process the Google Earth elevation data, with internal conformity, external conformity and the cross-correlation coefficient as evaluation indexes of the processing effect. Results: The V4 interpolation method shows no fitting difference at the fitting points, its external conformity is the largest, and its accuracy improvement is the worst, so it is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to that of the cubic polynomial surface fitting method, but fits better where the elevation difference is larger. Because the neural network method is a less manageable fitting model, the cubic polynomial surface fitting method should be the main method, with the neural network method as an auxiliary in cases of larger elevation difference. Conclusions: The cubic polynomial surface fitting method can obviously improve the data precision of Google Earth. After precision improvement, the error of data in hilly terrain areas meets the requirements of the specifications, and the data can be used in the feasibility study stage of road survey and design.

  3. Random Regression Models Are Suitable to Substitute the Traditional 305-Day Lactation Model in Genetic Evaluations of Holstein Cattle in Brazil

    PubMed Central

    Padilha, Alessandro Haiduck; Cobuci, Jaime Araujo; Costa, Cláudio Napolis; Neto, José Braccini

    2016-01-01

    The aim of this study was to compare two random regression models (RRM), fitted by fourth-order (RRM4) and fifth-order (RRM5) Legendre polynomials, with a lactation model (LM) for evaluating Holstein cattle in Brazil. Two datasets with the same animals were prepared: 262,426 test-day records to apply the test-day RRMs and 30,228 lactation records covering 305 days for the LM. The lowest values of Akaike’s information criterion, the Bayesian information criterion, and the maximum of the likelihood function (−2LogL) were for RRM4. Heritability for 305-day milk yield (305MY) was 0.23 (RRM4), 0.24 (RRM5), and 0.21 (LM). Heritability, additive genetic variance and permanent environmental variance of test days across days in milk ranged from 0.16 to 0.27, from 3.76 to 6.88 and from 11.12 to 20.21, respectively. Additive genetic correlations between test days ranged from 0.20 to 0.99. Permanent environmental correlations between test days were between 0.07 and 0.99. Standard deviations of average estimated breeding values (EBVs) for 305MY from RRM4 and RRM5 were from 11% to 30% higher for bulls and around 28% higher for cows than in the LM. Rank correlations between RRM EBVs and LM EBVs were between 0.86 and 0.96 for bulls and between 0.80 and 0.87 for cows. The average percentage gain in reliability of EBVs for 305-day yield increased from 4% to 17% for bulls and from 23% to 24% for cows when the reliability of EBVs from the RRM models was compared to that from the LM. The random regression model fitted by fourth-order Legendre polynomials is recommended for genetic evaluations of Brazilian Holstein cattle because of the higher reliability in the estimation of breeding values. PMID:26954176
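
    The modelling device at the core of these RRMs is a Legendre polynomial basis evaluated at standardized days in milk; a sketch of building such a design matrix (the test-day values below are made up):

```python
# Fourth-order Legendre covariates for test-day records, as in RRM4.
import numpy as np
from numpy.polynomial import legendre as L

dim = np.array([5, 35, 95, 185, 275, 305])                # days in milk
t = 2 * (dim - dim.min()) / (dim.max() - dim.min()) - 1   # rescale to [-1, 1]

Phi = L.legvander(t, 4)          # columns P_0(t) ... P_4(t)
print(Phi.round(3))

# In the mixed model, an animal's genetic curve is Phi @ a, where `a` is its
# vector of random regression coefficients; summing over days 1..305 gives
# the 305-day EBV.
```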

  5. Relationship between age and elite marathon race time in world single age records from 5 to 93 years

    PubMed Central

    2014-01-01

    Background: The aims of the study were (i) to investigate the relationship between elite marathon race times and age in 1-year intervals, using the world single age records in marathon running from 5 to 93 years, and (ii) to evaluate the sex difference in elite marathon running performance with advancing age. Methods: World single age records in marathon running in 1-year intervals for women and men were analysed regarding changes across age, using linear and non-linear regression analyses for each age. Results: The relationship between elite marathon race time and age was non-linear (i.e., a 4th-degree polynomial regression) for women and men. The curve was U-shaped, where performance improved from 5 to ~20 years. From 5 to ~15 years, boys and girls performed very similarly. Between ~20 and ~35 years, performance was quite linear, but started to decrease at the age of ~35 years in a curvilinear manner with increasing age in both women and men. The sex difference increased non-linearly (i.e., a 7th-degree polynomial regression) from 5 to ~20 years, remained unchanged at ~20 min from ~20 to ~50 years and increased thereafter. The sex difference was lowest (7.5%, 10.5 min) at the age of 49 years. Conclusion: Elite marathon race times improved from 5 to ~20 years, remained linear between ~20 and ~35 years, and started to increase at the age of ~35 years in a curvilinear manner with increasing age in both women and men. The sex difference in elite marathon race time increased non-linearly and was lowest at the age of ~49 years. PMID:25120915

  6. Percolation critical polynomial as a graph invariant

    DOE PAGES

    Scullard, Christian R.

    2012-10-18

    Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10^-7.

  7. On Certain Wronskians of Multiple Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Lun; Filipuk, Galina

    2014-11-01

    We consider determinants of Wronskian type whose entries are multiple orthogonal polynomials associated with a path connecting two multi-indices. By assuming that the weight functions form an algebraic Chebyshev (AT) system, we show that the polynomials represented by the Wronskians keep a constant sign in some cases, while in some other cases oscillatory behavior appears, which generalizes classical results for orthogonal polynomials due to Karlin and Szegő. There are two applications of our results. The first application arises from the observation that the m-th moment of the average characteristic polynomials for multiple orthogonal polynomial ensembles can be expressed as a Wronskian of the type II multiple orthogonal polynomials. Hence, it is straightforward to obtain the distinct behavior of the moments for odd and even m in a special multiple orthogonal ensemble - the AT ensemble. As the second application, we derive some Turán type inequalities for multiple Hermite and multiple Laguerre polynomials (of two kinds). Finally, we study numerically the geometric configuration of zeros for the Wronskians of these multiple orthogonal polynomials. We observe that the zeros have regular configurations in the complex plane, which might be of independent interest.

  8. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider a certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of the F distribution and are finite in number up to orthogonality. We generalize these polynomials to fractional order by applying the Riemann-Liouville type operator to them. Various properties, such as explicit representations in terms of hypergeometric functions, differential equations and recurrence relations, are derived.

  9. Unconditional reference values for the amniotic fluid index measurement between 26w0d and 41w6d of gestation in low-risk pregnancies.

    PubMed

    Peixoto, Alberto Borges; Caldas, Taciana Mara Rodrigues da Cunha; Martins, Wellington P; Da Silva Costa, Fabricio; Araujo Júnior, Edward

    2016-10-01

    To establish reference values for the amniotic fluid index (AFI) measurement between 26w0d and 41w6d of gestation in a Brazilian population, we performed a cross-sectional study with 1984 low-risk singleton pregnant women between 26w0d and 41w6d of gestation. AFI was measured according to the technique proposed by Phelan et al.: the maternal abdomen was divided into four quadrants using the umbilicus and linea nigra as landmarks, the single vertical pocket in each quadrant was measured, and the AFI was generated as the sum of these four values, excluding umbilical cord and fetal parts. All ultrasound exams were performed by only two experienced examiners. AFI was expressed as median, interquartile range, mean and range in each gestational age (GA) interval. Polynomial regressions were performed to obtain the best fit, with adjustment assessed by the coefficient of determination (R²). The mean AFI ranged from 14.0 ± 4.1 cm (range, 9.7-14.0) at 26w0d to 8.3 ± 4.7 cm (range, 1.9-16.5) at 41w6d. The best polynomial regression fit was a first-degree curve: AFI = 16.29 − 0.125·GA (R² = 0.01). According to the scatterplot, AFI values practically did not vary with advancing GA. Reference values for the AFI measurement between 26w0d and 41w6d of gestation in a low-risk Brazilian population were established.

  10. New methodology to reconstruct in 2-D the cuspal enamel of modern human lower molars.

    PubMed

    Modesto-Mata, Mario; García-Campos, Cecilia; Martín-Francés, Laura; Martínez de Pinillos, Marina; García-González, Rebeca; Quintino, Yuliet; Canals, Antoni; Lozano, Marina; Dean, M Christopher; Martinón-Torres, María; Bermúdez de Castro, José María

    2017-08-01

    In recent years, different methodologies have been developed to reconstruct worn teeth. In this article, we propose a new 2-D methodology to reconstruct the worn enamel of lower molars. Our main goals are to reconstruct molars with a high level of accuracy when measuring relevant histological variables, and to validate the methodology by calculating the errors associated with the measurements. The methodology is based on polynomial regression equations and has been validated using two different dental variables: cuspal enamel thickness and crown height of the protoconid. For the validation process, simulated worn modern human molars were employed. The errors associated with the measurements were also estimated by applying methodologies previously proposed by other authors. The mean percentage error estimated in reconstructed molars for these two variables, in comparison with their real values, is -2.17% for the cuspal enamel thickness of the protoconid and -3.18% for the crown height of the protoconid. This error significantly improves on the results of other methodologies, both in the interobserver error and in the accuracy of the measurements. The new methodology based on polynomial regressions can be confidently applied to the reconstruction of cuspal enamel of lower molars, as it improves the accuracy of the measurements and reduces the interobserver error. The present study shows that it is important to validate all methodologies in order to know the associated errors. The new methodology can be easily exported to other modern human populations, the human fossil record and the forensic sciences. © 2017 Wiley Periodicals, Inc.

  11. Laguerre-Freud Equations for the Recurrence Coefficients of Some Discrete Semi-Classical Orthogonal Polynomials of Class Two

    NASA Astrophysics Data System (ADS)

    Hounga, C.; Hounkonnou, M. N.; Ronveaux, A.

    2006-10-01

    In this paper, we give Laguerre-Freud equations for the recurrence coefficients of discrete semi-classical orthogonal polynomials of class two, when the polynomials in the Pearson equation are of the same degree. The case of generalized Charlier polynomials is also presented.

  12. The Gibbs Phenomenon for Series of Orthogonal Polynomials

    ERIC Educational Resources Information Center

    Fay, T. H.; Kloppers, P. Hendrik

    2006-01-01

    This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…

  13. Determinants with orthogonal polynomial entries

    NASA Astrophysics Data System (ADS)

    Ismail, Mourad E. H.

    2005-06-01

    We use moment representations of orthogonal polynomials to evaluate the corresponding Hankel determinants formed by the orthogonal polynomials. We also study the Hankel determinants which start with p_n in the top left-hand corner. As examples we evaluate the Hankel determinants whose entries are q-ultraspherical or Al-Salam-Chihara polynomials.

  14. Box-Behnken design based statistical modeling for ultrasound-assisted extraction of corn silk polysaccharide.

    PubMed

    Prakash Maran, J; Manikandan, S; Thirugnanasambandham, K; Vigna Nivetha, C; Dinesh, R

    2013-01-30

    In this study, the effects of ultrasound-assisted extraction (UAE) conditions on the yield of polysaccharide from corn silk were studied using a three-factor, three-level Box-Behnken response surface design. Process parameters that affect the efficiency of UAE, namely extraction temperature (40-60 °C), time (10-30 min) and solid-liquid ratio (1:10-1:30 g/ml), were investigated. The results showed that the extraction conditions have significant effects on the extraction yield of polysaccharide. The experimental data were fitted to a second-order polynomial equation using multiple regression analysis, with a high coefficient of determination (R²) of 0.994. An optimization study using Derringer's desirability function methodology was performed, and the optimal conditions based on both individual variables and combinations of all independent variables (extraction temperature of 56 °C, time of 17 min and solid-liquid ratio of 1:20 g/ml) were determined, with a maximum polysaccharide yield of 6.06%, which was confirmed through validation experiments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  15. Optimization of isolation of cellulose from orange peel using sodium hydroxide and chelating agents.

    PubMed

    Bicu, Ioan; Mustata, Fanica

    2013-10-15

    Response surface methodology was used to optimize cellulose recovery from orange peel using sodium hydroxide (NaOH) as the isolation reagent, and to minimize its ash content using ethylenediaminetetraacetic acid (EDTA) as a chelating agent. The independent variables were NaOH charge, EDTA charge and cooking time; two further parameters were held constant, cooking temperature (98 °C) and liquid-to-solid ratio (7.5). The dependent variables were cellulose yield and ash content. A second-order polynomial model was used for plotting response surfaces and for determining optimum cooking conditions. Analysis of the coefficient values for the independent variables in the regression equation showed that NaOH and EDTA charges were the major factors influencing the cellulose yield and ash content, respectively. Optimum conditions were defined by: NaOH charge 38.2%, EDTA charge 9.56%, and cooking time 317 min. The predicted cellulose yield was 24.06% and ash content 0.69%. Good agreement between the experimental and predicted values was observed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  16. Usage of multivariate geostatistics in interpolation processes for meteorological precipitation maps

    NASA Astrophysics Data System (ADS)

    Gundogdu, Ismail Bulent

    2017-01-01

    Long-term meteorological data are very important both for the evaluation of meteorological events and for the analysis of their effects on the environment. Prediction maps constructed by different interpolation techniques often provide explanatory information. Conventional techniques, such as surface spline fitting, global and local polynomial models, and inverse distance weighting, may not be adequate. Multivariate geostatistical methods can be more effective, especially when secondary variables are studied, because secondary variables might directly affect the precision of prediction. In this study, the mean annual and mean monthly precipitations from 1984 to 2014 for 268 meteorological stations in Turkey have been used to construct country-wide maps. Besides linear regression, inverse square distance and ordinary co-Kriging (OCK) have been used and compared to each other. Elevation, slope, and aspect data for each station have also been taken into account as secondary variables, whose use has reduced errors by up to a factor of three. OCK gave the smallest errors (1.002 cm) when aspect was included.

  17. Optimization of a novel improver gel formulation for Barbari flat bread using response surface methodology.

    PubMed

    Pourfarzad, Amir; Haddad Khodaparast, Mohammad Hossein; Karimi, Mehdi; Mortazavi, Seyed Ali

    2014-10-01

    Nowadays, the use of bread improvers has become an essential part of improving the production methods and quality of bakery products. In the present study, Response Surface Methodology (RSM) was used to determine the optimum improver gel formulation giving the best quality, shelf life, sensory and image properties for Barbari flat bread. Sodium stearoyl-2-lactylate (SSL), diacetyl tartaric acid esters of monoglyceride (DATEM) and propylene glycol (PG) were the constituents of the gel considered in this study. A second-order polynomial model was fitted to each response and the regression coefficients were determined using the least squares method. The optimum gel formulation was found to be 0.49% SSL, 0.36% DATEM and 0.5% PG when the desirability function method was applied. There was good agreement between the experimental data and their predicted counterparts. The results showed that RSM, image processing and texture analysis are useful tools to investigate, approximate and predict a large number of bread properties.

  18. Optimisation of medium composition for probiotic biomass production using response surface methodology.

    PubMed

    Anvari, Masumeh; Khayati, Gholam; Rostami, Shora

    2014-02-01

    This study aimed to optimise the lactose, inulin and yeast extract concentrations and the culture pH for maximising the growth of a probiotic bacterium, Bifidobacterium animalis subsp. lactis, in apple juice, and to assess the effects of these factors using response surface methodology. A second-order central composite design was applied to evaluate the effects of these independent variables on the growth of the microorganism. A polynomial regression model with cubic and quadratic terms was used for the analysis of the experimental data. It was found that the effects of inulin, yeast extract and pH on the growth of the bacterium were significant, and the strongest effect was that of the yeast extract concentration. The estimated optimum conditions for bacterial growth are as follows: lactose concentration = 9·5 g/l; inulin concentration = 38·5 mg/l; yeast extract concentration = 9·6 g/l and initial pH = 6·2.

  19. Elevated-temperature application of the IITRI compression test fixture for graphite/polyimide filamentary composites

    NASA Technical Reports Server (NTRS)

    Raju, B. B.; Camarda, C. J.; Cooper, P. A.

    1979-01-01

    Seventy-nine graphite/polyimide compression specimens were tested to investigate experimentally the IITRI test method for determining compressive properties of composite materials at room and elevated temperatures (589 K (600 F)). Minor modifications were made to the standard IITRI fixture and a high degree of precision was maintained in specimen fabrication and load alignment. Specimens included four symmetric laminate orientations. Various widths were tested to evaluate the effect of width on measured modulus and strength. In most cases three specimens of each width were tested at room and elevated temperature and a polynomial regression analysis was used to reduce the data. Scatter of replicate tests and back-to-back strain variations were low, and no specimens failed by instability. Variation of specimen width had a negligible effect on the measured ultimate strengths and initial moduli of the specimens. Measured compressive strength and stiffness values were sufficiently high for the material to be considered a usable structural material at temperatures as high as 589 K (600 F).

  20. The application of SVR model in the improvement of QbD: a case study of the extraction of podophyllotoxin.

    PubMed

    Zhai, Chun-Hui; Xuan, Jian-Bang; Fan, Hai-Liu; Zhao, Teng-Fei; Jiang, Jian-Lan

    2018-05-03

    To further optimize process design by increasing the stability of the design space, we introduced the Support Vector Regression (SVR) model. In this work, the extraction of podophyllotoxin was investigated as a case study based on Quality by Design (QbD). We compared the fitting performance of SVR and of the quadratic polynomial model (QPM) most commonly used in QbD, and analysed the two design spaces obtained by SVR and QPM. SVR was ahead of the QPM in prediction accuracy, model stability and generalization ability. The introduction of SVR into QbD made the extraction process of podophyllotoxin well designed and easier to control. The better fit of SVR improved the application of QbD, and the universal applicability of SVR, especially for non-linear, complicated and weakly regular problems, widens the field of application of QbD.
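
    An illustrative comparison in the spirit of the case study, with placeholder data and sklearn models standing in for the paper's setup: SVR versus a quadratic polynomial model (QPM) under cross-validation.

```python
# SVR vs. quadratic polynomial model on a toy extraction-yield surface.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import PolynomialFeatures, StandardScaler
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
X = rng.uniform(0, 1, (60, 3))       # e.g. time, temperature, solvent ratio
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=60)

qpm = make_pipeline(PolynomialFeatures(2), LinearRegression())
svr = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))

for name, m in [("QPM", qpm), ("SVR", svr)]:
    score = cross_val_score(m, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```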

  1. Statistical optimization of recycled-paper enzymatic hydrolysis for simultaneous saccharification and fermentation via central composite design.

    PubMed

    Liu, Qing; Cheng, Ke-ke; Zhang, Jian-an; Li, Jin-ping; Wang, Ge-hua

    2010-01-01

    A central composite design of the response surface methodology (RSM) was employed to study the effects of temperature, enzyme concentration, and stirring rate on recycled-paper enzymatic hydrolysis. Among the three variables, temperature and enzyme concentration significantly affected the conversion efficiency of the substrate, whereas stirring rate did not. A quadratic polynomial equation was obtained for enzymatic hydrolysis by multiple regression analysis using RSM, and the results of the validation experiments agreed with the model predictions. The optimum conditions for enzymatic hydrolysis were a temperature, enzyme concentration, and stirring rate of 43.1 degrees C, 20 FPU g(-1) substrate, and 145 rpm, respectively. In the subsequent simultaneous saccharification and fermentation (SSF) experiment under the optimum conditions, a maximum of 28.7 g ethanol l(-1) was reached in fed-batch SSF when a 5% (w/v) substrate concentration was used initially and another 5% was added after 12 h of fermentation. This ethanol output corresponded to 77.7% of the theoretical yield based on the glucose content of the raw material.

  2. Application of Box-Behnken experimental design to optimize the extraction of insecticidal Cry1Ac from soil.

    PubMed

    Li, Yan-Liang; Fang, Zhi-Xiang; You, Jing

    2013-02-20

    A validated method for analyzing Cry proteins is a prerequisite for studying the fate and ecological effects of contaminants associated with genetically engineered Bacillus thuringiensis crops. The current study optimized the extraction method for analyzing Cry1Ac protein in soil using a response surface methodology with a three-level, three-factor Box-Behnken experimental design (BBD). The optimum extraction conditions were 21 °C and 630 rpm for 2 h. Regression analysis showed a good fit of the experimental data to the second-order polynomial model, with a coefficient of determination of 0.96. The method was sensitive and precise, with a method detection limit of 0.8 ng/g dry weight and a relative standard deviation of 7.3%. Finally, the established method was applied to analyze Cry1Ac protein residues in field-collected soil samples. Trace amounts of Cry1Ac protein were detected in soils where transgenic crops had been planted for 8 and 12 years.

  3. Bio hydrogen production from cassava starch by anaerobic mixed cultures: Multivariate statistical modeling

    NASA Astrophysics Data System (ADS)

    Tien, Hai Minh; Le, Kien Anh; Le, Phung Thi Kim

    2017-09-01

    Bio hydrogen is a sustainable energy resource due to its potentially high efficiency of conversion to usable power and its non-polluting nature. In this work, experiments were carried out to demonstrate the possibility of generating bio hydrogen from cassava starch and to identify the effective factors and optimum conditions. An experimental design was used to investigate the effects of operating temperature (37-43 °C), pH (6-7) and inoculum ratio (6-10%) on the hydrogen production yield, the COD reduction and the ratio of the volume of hydrogen produced to the COD reduction. Statistical analysis of the experiments indicated that the significant effects on the fermentation yield were the main effects of temperature, pH and inoculum ratio; the interaction effects among them appeared not to be significant. The central composite design showed that the polynomial regression models were in good agreement with the experimental results. These results will be applied to enhance the treatment of cassava starch processing wastewater.

  4. From sequences to polynomials and back, via operator orderings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu

    2013-12-15

    Bender and Dunne [“Polynomials and operator orderings,” J. Math. Phys. 29, 1727–1731 (1988)] showed that linear combinations of words q^k p^n q^(n−k), where p and q are subject to the relation qp − pq = i, may be expressed as a polynomial in the symbol z = (qp + pq)/2. Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.

  5. Extending Romanovski polynomials in quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesne, C.

    2013-12-15

    Some extensions of the (third-class) Romanovski polynomials (also called Romanovski/pseudo-Jacobi polynomials), which appear in bound-state wavefunctions of rationally extended Scarf II and Rosen-Morse I potentials, are considered. For the former potentials, the generalized polynomials satisfy a finite orthogonality relation, while for the latter an infinite set of relations among polynomials with degree-dependent parameters is obtained. Both types of relations are counterparts of those known for conventional polynomials. In the absence of any direct information on the zeros of the Romanovski polynomials present in denominators, the regularity of the constructed potentials is checked by taking advantage of the disconjugacy properties of second-order differential equations of Schrödinger type. It is also shown that on going from Scarf I to Scarf II or from Rosen-Morse II to Rosen-Morse I potentials, the variety of rational extensions is narrowed down from types I, II, and III to type III only.

  6. Polynomial solutions of the Monge-Ampère equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aminov, Yu A

    2014-11-30

    The question of the existence of polynomial solutions to the Monge-Ampère equation z_xx z_yy − z_xy² = f(x,y) is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.
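
    Candidate polynomial solutions are easy to experiment with symbolically; pushing a quartic z through the Monge-Ampère operator shows which right-hand side f it solves (a check of this kind, not the paper's construction):

```python
# Apply the Monge-Ampère operator z_xx z_yy - z_xy^2 to a quartic candidate.
import sympy as sp

x, y = sp.symbols("x y")
z = x**4 / 12 + x**2 * y**2 / 2 + y**4 / 12

MA = sp.diff(z, x, 2) * sp.diff(z, y, 2) - sp.diff(z, x, y) ** 2
print(sp.expand(MA))   # x**4 - 2*x**2*y**2 + y**4, i.e. f = (x^2 - y^2)^2
```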

  7. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences, and some methods have been developed to solve them. In this study we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find real roots of fuzzy polynomial equations; here, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations, using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  8. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
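
    A minimal sketch of a Chebyshev polynomial smoother for a symmetric positive definite system, targeting the upper part of the spectrum as multigrid smoothing requires. Every operation is a matrix-vector product, which is why it parallelises where Gauss-Seidel does not; the parameter choices below are illustrative.

```python
# Chebyshev smoother for A x = b, damping eigenmodes in [lmax/alpha, lmax].
import numpy as np

def chebyshev_smooth(A, b, x, steps=3, alpha=4.0):
    lmax = np.linalg.eigvalsh(A).max()   # in practice: a few power iterations
    lmin = lmax / alpha                  # smooth only the upper spectrum
    theta, delta = 0.5 * (lmax + lmin), 0.5 * (lmax - lmin)
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(steps):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

n = 50                                   # 1-D Poisson test matrix
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = chebyshev_smooth(A, b, np.zeros(n))
print("residual norm after smoothing:", np.linalg.norm(b - A @ x))
```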

  9. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
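
    The G.C.D. idea is straightforward to reproduce with a modern computer-algebra system (sympy here, rather than the paper's FORTRAN 4): dividing p by gcd(p, p') strips the multiplicities, leaving a square-free polynomial whose simple zeros Newton- or Muller-type iterations handle well.

```python
# Remove multiple zeros by dividing p by gcd(p, p').
import sympy as sp

x = sp.symbols("x")
p = sp.expand((x - 1) ** 3 * (x + 2) ** 2)   # zeros: 1 (triple), -2 (double)

g = sp.gcd(p, sp.diff(p, x))                 # carries the repeated factors
squarefree = sp.quo(p, g)                    # p / gcd(p, p')
print(sp.factor(squarefree))                 # (x - 1)*(x + 2)
print(sp.solve(squarefree, x))               # simple zeros: [-2, 1]
```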

  10. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  11. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
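
    A short illustration of the point made in the two entries above, with numpy standing in for hand computation: n + 1 samples pin down a unique degree-n interpolating polynomial, and the error against the sampled function can then be measured directly (the function and interval are arbitrary choices for this sketch).

        import numpy as np

        # Five samples of exp(x) determine a unique degree-4 interpolant;
        # polyfit with deg = npoints - 1 solves the Vandermonde system.
        xs = np.array([0.0, 0.5, 1.0, 1.5, 2.0])
        ys = np.exp(xs)
        coeffs = np.polyfit(xs, ys, deg=len(xs) - 1)

        t = np.linspace(0.0, 2.0, 201)
        err = np.max(np.abs(np.polyval(coeffs, t) - np.exp(t)))
        print(f"max interpolation error on [0, 2]: {err:.2e}")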

  12. A note on the zeros of Freud-Sobolev orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Moreno-Balcazar, Juan J.

    2007-10-01

    We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function e^(-x^4) on the real line are real and simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function e^(-x^4). Some numerical examples are shown.

  13. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  14. SAMBA: Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahlfeld, R., E-mail: r.ahlfeld14@imperial.ac.uk; Belkouchi, B.; Montomoli, F.

    2016-09-01

    A new arbitrary Polynomial Chaos (aPC) method is presented for moderately high-dimensional problems characterised by limited input data availability. The proposed methodology improves the aPC algorithm and extends the method, previously introduced only as a tensor product expansion, to moderately high-dimensional stochastic problems. The fundamental idea of aPC is to use the statistical moments of the input random variables to develop the polynomial chaos expansion. This approach makes it possible to propagate continuous or discrete probability density functions, and also histograms (data sets), as long as their moments exist, are finite, and the determinant of the moment matrix is strictly positive. For cases with limited data availability, this approach avoids bias and fitting errors caused by wrong assumptions. In this work, an alternative way to calculate the aPC is suggested, which provides the optimal polynomials, Gaussian quadrature collocation points and weights from the moments using only a handful of matrix operations on the Hankel matrix of moments. It can therefore be implemented without requiring prior knowledge about statistical data analysis or a detailed understanding of the mathematics of polynomial chaos expansions. The extension to more input variables suggested in this work is an anisotropic and adaptive version of Smolyak's algorithm that is based solely on the moments of the input probability distributions. It is referred to as SAMBA (PC), short for Sparse Approximation of Moment-Based Arbitrary Polynomial Chaos. It is illustrated that for moderately high-dimensional problems (up to 20 different input variables or histograms) SAMBA can significantly simplify the calculation of sparse Gaussian quadrature rules. SAMBA's efficiency for multivariate functions with regard to data availability is further demonstrated by analysing higher order convergence and accuracy for a set of nonlinear test functions with 2, 5 and 10 different input distributions or histograms.
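
    A compact sketch of the moments-to-quadrature step that SAMBA builds on, assuming only raw moments are given: Cholesky-factor the Hankel moment matrix and read off the Jacobi matrix (the Golub-Welsch construction). This is illustrative of the "handful of matrix operations" the abstract mentions, not the authors' code.

        import numpy as np

        def gauss_from_moments(mu):
            # n-point Gaussian rule from raw moments mu_0..mu_{2n}. Cholesky
            # of the Hankel moment matrix requires all leading minors to be
            # strictly positive, the condition noted in the abstract.
            n = (len(mu) - 1) // 2
            H = np.array([[mu[i + j] for j in range(n + 1)]
                          for i in range(n + 1)], dtype=float)
            R = np.linalg.cholesky(H).T
            d = np.diag(R)
            alpha = np.empty(n)
            beta = np.empty(max(n - 1, 0))
            alpha[0] = R[0, 1] / d[0]
            for k in range(1, n):
                alpha[k] = R[k, k + 1] / d[k] - R[k - 1, k] / d[k - 1]
                beta[k - 1] = d[k] / d[k - 1]
            J = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
            nodes, vecs = np.linalg.eigh(J)           # Jacobi matrix eigenproblem
            return nodes, mu[0] * vecs[0, :] ** 2

        # Moments of the standard normal up to order 6 recover the 3-point
        # probabilists' Gauss-Hermite rule: nodes 0, +/-sqrt(3), weights 2/3, 1/6.
        print(gauss_from_moments([1, 0, 1, 0, 3, 0, 15]))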

  15. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.

  16. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, easily and efficiently computes the polynomial that least-squares fits uniformly spaced data. The program allows the user either to specify the tolerable least squares error in the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least-squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step, the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a fit of up to a 100th-degree polynomial. All computations in the program are carried out in double-precision format for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's QuickBASIC compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
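
    The error-driven mode described above is easy to mimic with modern tools; a hedged sketch with numpy in place of the original BASIC, an RMS error standing in for whatever error measure AKLSQF reports, and illustrative test data:

        import numpy as np

        def fit_to_tolerance(x, y, tol, max_degree=100):
            # Raise the degree from 1 until the least-squares fit error
            # meets the user's tolerance, as in AKLSQF's error-driven mode.
            for degree in range(1, max_degree + 1):
                coeffs = np.polyfit(x, y, degree)
                err = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
                if err <= tol:
                    return degree, coeffs, err
            raise ValueError("tolerance not met by max_degree")

        x = np.linspace(-1.0, 1.0, 50)        # uniformly spaced, as AKLSQF expects
        y = np.sin(np.pi * x)
        degree, coeffs, err = fit_to_tolerance(x, y, tol=1e-6)
        print(degree, err)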

  17. Graph characterization via Ihara coefficients.

    PubMed

    Ren, Peng; Wilson, Richard C; Hancock, Edwin R

    2011-02-01

    The novel contributions of this paper are twofold. First, we demonstrate how to characterize unweighted graphs in a permutation-invariant manner using the polynomial coefficients from the Ihara zeta function, i.e., the Ihara coefficients. Second, we generalize the definition of the Ihara coefficients to edge-weighted graphs. For an unweighted graph, the Ihara zeta function is the reciprocal of a quasi characteristic polynomial of the adjacency matrix of the associated oriented line graph. Since the Ihara zeta function has poles that give rise to infinities, the most convenient numerically stable representation is to work with the coefficients of the quasi characteristic polynomial. Moreover, the polynomial coefficients are invariant to vertex order permutations and also convey information concerning the cycle structure of the graph. To generalize the representation to edge-weighted graphs, we make use of the reduced Bartholdi zeta function. We prove that the computation of the Ihara coefficients for unweighted graphs is a special case of our proposed method for unit edge weights. We also present a spectral analysis of the Ihara coefficients and indicate their advantages over other graph spectral methods. We apply the proposed graph characterization method to capturing graph-class structure and clustering graphs. Experimental results reveal that the Ihara coefficients are more effective than methods based on Laplacian spectra.

  18. Compressive Sensing with Cross-Validation and Stop-Sampling for Sparse Polynomial Chaos Expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik

    Compressive sensing is a powerful technique for recovering sparse solutions of underdetermined linear systems, which are often encountered in uncertainty quantification analysis of expensive and high-dimensional physical models. We perform numerical investigations employing several compressive sensing solvers that target the unconstrained LASSO formulation, with a focus on linear systems that arise in the construction of polynomial chaos expansions. With core solvers of l1_ls, SpaRSA, CGIST, FPC_AS, and ADMM, we develop techniques to mitigate overfitting through an automated selection of the regularization constant based on cross-validation, and a heuristic strategy to guide the stop-sampling decision. Practical recommendations on parameter settings for these techniques are provided and discussed. The overall method is applied to a series of numerical examples of increasing complexity, including large eddy simulations of a supersonic turbulent jet-in-crossflow involving a 24-dimensional input. Through empirical phase-transition diagrams and convergence plots, we illustrate sparse recovery performance under structures induced by polynomial chaos, accuracy and computational tradeoffs between polynomial bases of different degrees, and the practicability of conducting compressive sensing for a realistic, high-dimensional physical application. Across the test cases studied in this paper, we find ADMM to have demonstrated empirical advantages through consistently lower errors and faster computational times.
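
    A small stand-in for the workflow: recover a sparse coefficient vector of an underdetermined system with the regularization constant chosen by cross-validation. scikit-learn's coordinate-descent LASSO replaces the report's solvers (l1_ls, ADMM, and the rest), so this mirrors only the selection strategy, not the solver comparison; sizes and data are invented.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n_samples, n_features, n_nonzero = 60, 200, 8
        A = rng.standard_normal((n_samples, n_features))
        x_true = np.zeros(n_features)
        idx = rng.choice(n_features, n_nonzero, replace=False)
        x_true[idx] = rng.standard_normal(n_nonzero)
        b = A @ x_true + 1e-3 * rng.standard_normal(n_samples)

        # Cross-validation picks the regularization constant automatically:
        model = LassoCV(cv=5, max_iter=5000).fit(A, b)
        print("selected alpha:", model.alpha_)
        print("recovery error:", np.linalg.norm(model.coef_ - x_true))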

  19. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Perko, Z.; Gilli, L.; Lathouwers, D.

    2013-07-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years, however, polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration, the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems.

  1. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    PubMed

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
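
    Treating the degree of the power-law polynomial as a fitted parameter, as the abstract argues, is straightforward with nonlinear least squares; the functional form, parameter names, and data below are illustrative assumptions, not the paper's.

        import numpy as np
        from scipy.optimize import curve_fit

        def metabolic_rate(u, smr, k, d):
            # standard metabolic rate plus a drag-power term with free degree d
            return smr + k * u ** d

        u = np.linspace(0.2, 2.0, 25)                  # swimming speeds
        rng = np.random.default_rng(1)
        rate = metabolic_rate(u, 10.0, 3.0, 2.4) + rng.normal(0.0, 0.2, u.size)

        params, _ = curve_fit(metabolic_rate, u, rate, p0=(5.0, 1.0, 2.0))
        print("fitted (SMR, k, degree):", params)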

  2. Asthma exacerbation and proximity of residence to major roads: a population-based matched case-control study among the pediatric Medicaid population in Detroit, Michigan

    PubMed Central

    2011-01-01

    Background: The relationship between asthma and traffic-related pollutants has received considerable attention. The use of individual-level exposure measures, such as residence location or proximity to emission sources, may avoid ecological biases. Methods: This study focused on the pediatric Medicaid population in Detroit, MI, a high-risk population for asthma-related events. A population-based matched case-control analysis was used to investigate associations between acute asthma outcomes and proximity of residence to major roads, including freeways. Asthma cases were identified as all children who made at least one asthma claim, including inpatient and emergency department visits, during the three-year study period, 2004-06. Individually matched controls were randomly selected from the rest of the Medicaid population on the basis of non-respiratory related illness. We used conditional logistic regression with distance as both a categorical and a continuous variable, and examined non-linear relationships with distance using polynomial splines. The conditional logistic regression models were then extended by considering multiple asthma states (based on the frequency of acute asthma outcomes) using polychotomous conditional logistic regression. Results: Asthma events were associated with proximity to primary roads, with an odds ratio of 0.97 (95% CI: 0.94, 0.99) for a 1 km increase in distance using conditional logistic regression, implying that asthma events become less likely as the distance between the residence and a primary road increases. Similar relationships and effect sizes were found using polychotomous conditional logistic regression. Another plausible exposure metric, a reduced-form response surface model representing atmospheric dispersion of pollutants from roads, showed no association under that exposure model. Conclusions: There is moderately strong evidence of elevated risk of asthma close to major roads based on the results obtained in this population-based matched case-control study. PMID:21513554

  3. A graphical method to evaluate spectral preprocessing in multivariate regression calibrations: example with Savitzky-Golay filters and partial least squares regression.

    PubMed

    Delwiche, Stephen R; Reeves, James B

    2010-01-01

    In multivariate regression analysis of spectroscopy data, spectral preprocessing is often performed to reduce unwanted background information (offsets, sloped baselines) or to accentuate absorption features in intrinsically overlapping bands. These procedures, also known as pretreatments, are commonly smoothing operations or derivatives. While such operations are often useful in reducing the number of latent variables of the actual decomposition and lowering residual error, they also run the risk of misleading the practitioner into accepting calibration equations that are poorly adapted to samples outside the calibration. The current study developed a graphical method to examine this effect on partial least squares (PLS) regression calibrations of near-infrared (NIR) reflection spectra of ground wheat meal with two analytes: protein content and sodium dodecyl sulfate sedimentation (SDS) volume (an indicator of the quantity of the gluten proteins that contribute to strong doughs). These two properties were chosen because of their differing abilities to be modeled by NIR spectroscopy: excellent for protein content, fair for SDS sedimentation volume. To further demonstrate the potential pitfalls of preprocessing, an artificial component, a randomly generated value, was included in the PLS regression trials. Savitzky-Golay (digital filter) smoothing, first-derivative, and second-derivative preprocessing functions (5 to 25 centrally symmetric convolution points, derived from quadratic polynomials) were applied to PLS calibrations of 1 to 15 factors. The results demonstrated the danger of over-reliance on preprocessing when (1) the number of samples used in a multivariate calibration is low (<50), (2) the spectral response of the analyte is weak, and (3) the goodness of the calibration is based on the coefficient of determination (R²) rather than a term based on residual error. The graphical method has application to the evaluation of other preprocessing functions and various types of spectroscopy data.
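
    The study's central caution translates directly into code: judge a preprocessing choice by cross-validated residual error, not by calibration R² alone. A sketch with scipy's Savitzky-Golay filter and scikit-learn's PLS; the "spectra", analyte values, and all settings below are synthetic stand-ins.

        import numpy as np
        from scipy.signal import savgol_filter
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 300)).cumsum(axis=1)   # smooth-ish fake spectra
        y = 0.5 * X[:, 120] + rng.normal(0.0, 0.1, 60)      # fake analyte values

        for deriv in (0, 1, 2):                 # smoothing, 1st and 2nd derivative
            Xp = savgol_filter(X, window_length=11, polyorder=2,
                               deriv=deriv, axis=1)
            score = cross_val_score(PLSRegression(n_components=5), Xp, y,
                                    scoring="neg_root_mean_squared_error",
                                    cv=5).mean()
            print(f"deriv={deriv}: CV RMSE = {-score:.3f}")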

  4. Inhibitory effect of chlorine and ultraviolet radiation on growth of Listeria monocytogenes in chicken breast and development of predictive growth models.

    PubMed

    Oh, S R; Kang, I; Oh, M H; Ha, S D

    2014-01-01

    The inhibitory effect of chlorine (50, 100, and 200 mg/kg) was investigated with and without UV radiation (300 mW·s/cm²) on the growth of Listeria monocytogenes in chicken breast meat. Using a polynomial model, predictive growth models were also developed as a function of chlorine concentration, UV exposure, and storage temperature (4, 10, and 15°C). A maximum L. monocytogenes reduction (0.8 log cfu/g) was obtained when combining chlorine at 200 mg/kg and UV at 300 mW·s/cm², and a maximum synergistic effect (0.4 log cfu/g) was observed when using chlorine at 100 mg/kg and UV at 300 mW·s/cm². Primary models developed for specific growth rate and lag time showed a good fit (R² > 0.91), as determined by the reparameterized Gompertz equation. Secondary polynomial models were obtained using nonlinear regression analysis. The developed models were validated with mean square error, bias factor, and accuracy factor, which were 0.0003, 0.96, and 1.11, respectively, for specific growth rate and 7.69, 0.99, and 1.04, respectively, for lag time. The treatment of chlorine and UV did not change the color and texture of chicken breast after 7 d of storage at 4°C. As a result, the combination of chlorine at 100 mg/kg and UV at 300 mW·s/cm² appears to be an effective method to inhibit L. monocytogenes growth in broiler carcasses with no negative effects on color and textural quality. Based on the validation results, the predictive models can be used to accurately predict L. monocytogenes growth in chicken breast.
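
    For the primary model, the reparameterized Gompertz equation (in Zwietering's form, which exposes the specific growth rate and lag time directly as parameters) can be fitted with nonlinear least squares; the data below are synthetic placeholders, not the study's measurements.

        import numpy as np
        from scipy.optimize import curve_fit

        def gompertz(t, A, mu_max, lag):
            # Zwietering reparameterization: asymptote A, specific growth
            # rate mu_max, and lag time appear as direct parameters.
            return A * np.exp(-np.exp(mu_max * np.e / A * (lag - t) + 1.0))

        t = np.linspace(0.0, 72.0, 30)                 # storage time, h
        rng = np.random.default_rng(5)
        log_count = gompertz(t, 6.0, 0.25, 10.0) + rng.normal(0.0, 0.05, t.size)

        (A, mu_max, lag), _ = curve_fit(gompertz, t, log_count, p0=(5.0, 0.1, 5.0))
        print(f"mu_max = {mu_max:.3f} log cfu/h, lag = {lag:.1f} h")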

  5. Neural Network and Response Surface Methodology for Rocket Engine Component Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar; Papita, Nilay; Shyy, Wei; Tucker, P. Kevin; Griffin, Lisa W.; Haftka, Raphael; Fitz-Coy, Norman; McConnaughey, Helen (Technical Monitor)

    2000-01-01

    The goal of this work is to compare the performance of response surface methodology (RSM) and two types of neural networks (NN) in aiding the preliminary design of two rocket engine components. A data set of 45 training points and 20 test points, obtained from a semi-empirical model based on three design variables, is used for a shear coaxial injector element. Data for the supersonic turbine design are based on six design variables, with 76 training points and 18 test points obtained from simplified aerodynamic analysis. Several response surfaces and neural networks are first constructed using the training data; the test data are then employed to select the best response surface or network. Quadratic and cubic response surfaces, radial basis neural networks (RBNN), and back-propagation neural networks (BPNN) are compared. Two-layered RBNNs are generated using two different training algorithms, namely solverbe and solverb. A two-layered BPNN is generated with a tan-sigmoid transfer function. Various issues related to the training of the neural networks are addressed, including the number of neurons, error goals, spread constants, and the accuracy of different models in representing the design space. A search for the optimum design is carried out using a standard gradient-based optimization algorithm over the response surfaces represented by the polynomials and trained neural networks. Usually a cubic polynomial performs better than the quadratic polynomial, but exceptions have been noticed. Among the NN choices, the RBNN designed using solverb yields more consistent performance for both engine components considered, and its training is easier as it requires only linear regression. This, coupled with its consistent performance, suggests the possibility of using it as an optimization strategy for engineering design problems.

  6. Physically elastic analysis of a cylindrical ring as a unit cell of a complete composite under applied stress in the complex plane using cubic polynomials

    NASA Astrophysics Data System (ADS)

    Monfared, Vahid

    2018-03-01

    An elastic analysis is presented analytically to predict the behavior of the stress and displacement components in a cylindrical ring, taken as a unit cell of a complete composite, under applied stress in the complex plane using cubic polynomials. The analysis is based on complex computation of the stress functions in the complex plane and polar coordinates. Suitable boundary conditions are assumed and analyzed along with the equilibrium equations and the biharmonic equation. The method has important applications in many fields of engineering, such as mechanical, civil and materials engineering; one application of this work is in composite design and the design of cylindrical devices under various loadings. Finally, comparison of the results shows that their convergence and accuracy are acceptable.

  7. Tolerance analysis of optical telescopes using coherent addition of wavefront errors

    NASA Technical Reports Server (NTRS)

    Davenport, J. W.

    1982-01-01

    A near diffraction-limited telescope requires that tolerance analysis be done on the basis of system wavefront error. One method of analyzing the wavefront error is to represent the wavefront error function in terms of its Zernike polynomial expansion. A Ramsey-Korsch ray trace package, a computer program that simulates the tracing of rays through an optical telescope system, was expanded to include the Zernike polynomial expansion up through the fifth-order spherical term. An option to produce a three-dimensional plot of the wavefront error function was also included in the Ramsey-Korsch package. Several simulation runs were analyzed to determine the particular set of coefficients in the Zernike expansion that are affected by various errors such as tilt, decenter and despace. A three-dimensional plot of each error up through the fifth-order spherical term was also included in the study. Tolerance analysis data are presented.

  8. Generalized Appended Product Indicator Procedure for Nonlinear Structural Equation Analysis.

    ERIC Educational Resources Information Center

    Wall, Melanie M.; Amemiya, Yasuo

    2001-01-01

    Considers the estimation of polynomial structural models and shows a limitation of an existing method. Introduces a new procedure, the generalized appended product indicator procedure, for nonlinear structural equation analysis. Addresses statistical issues associated with the procedure through simulation. (SLD)

  9. Regression-based reduced-order models to predict transient thermal output for enhanced geothermal systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudunuru, Maruti Kumar; Karra, Satish; Harp, Dylan Robert

    Reduced-order modeling is a promising approach, as many phenomena can be described by a few parameters or mechanisms. An attractive aspect of a reduced-order model (ROM) is that it is computationally inexpensive to evaluate compared with running a high-fidelity numerical simulation: a ROM takes a couple of seconds to run on a laptop, while a high-fidelity simulation may take a couple of hours on a high-performance computing cluster. The goal of this paper is to assess the utility of regression-based ROMs developed from high-fidelity numerical simulations for predicting transient thermal power output for an enhanced geothermal reservoir, while explicitly accounting for uncertainties in the subsurface system and site-specific details. Numerical simulations are performed based on equally spaced values in the specified range of model parameters. Key sensitive parameters are then identified from these simulations: fracture zone permeability, well/skin factor, bottom-hole pressure, and injection flow rate. We found the fracture zone permeability to be the most sensitive parameter. The fracture zone permeability, along with time, is used to build regression-based ROMs for the thermal power output. The ROMs are trained and validated using detailed physics-based numerical simulations, and their predictions are then compared with field data. We propose three different ROMs with different levels of model parsimony, each describing key and essential features of the power production curves. The coefficients in the proposed regression-based ROMs are developed by minimizing a nonlinear least-squares misfit function, based on the difference between the numerical simulation data and the reduced-order model, using the Levenberg-Marquardt algorithm. ROM-1 is constructed from polynomials up to fourth order; it accurately reproduces the power output of the numerical simulations for low permeabilities as well as certain features of the field-scale data. ROM-2 uses more analytical functions, consisting of polynomials up to order eight, exponential functions, and smooth approximations of Heaviside functions, and accurately describes the field data. At higher permeabilities, ROM-2 reproduces the numerical results better than ROM-1; however, it deviates considerably from the numerical results at low fracture zone permeabilities. ROM-3 consists of polynomials up to order ten and is developed by taking the best aspects of ROM-1 and ROM-2. ROM-1 is more parsimonious than ROM-2 and ROM-3, while ROM-2 overfits the data; ROM-3 provides a middle ground for model parsimony. Based on R^2-values for the training, validation, and prediction data sets, we found ROM-3 to be a better model than ROM-2 and ROM-1. For predicting thermal drawdown in EGS applications, where high fracture zone permeabilities (typically greater than 10^-15 m^2) are desired, ROM-2 and ROM-3 outperform ROM-1. In terms of computational time, all the ROMs are 10^4 times faster than a high-fidelity numerical simulation. This makes the proposed regression-based ROMs attractive for real-time EGS applications, because they are fast and provide reasonably good predictions of thermal power output.
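
    The ROM-1 construction reduces to ordinary curve fitting: a low-order polynomial with coefficients obtained by minimizing a least-squares misfit via Levenberg-Marquardt. A sketch with synthetic stand-in "simulation" output and scipy's LM solver (the real ROMs also depend on fracture zone permeability, omitted here for brevity):

        import numpy as np
        from scipy.optimize import least_squares

        t = np.linspace(0.0, 30.0, 60)                 # production time, years
        rng = np.random.default_rng(2)
        power = 25.0 - 0.02 * t**2 + rng.normal(0.0, 0.3, t.size)  # fake output

        def misfit(c):
            # difference between the polynomial ROM and the "simulation" data
            return np.polyval(c, t) - power

        fit = least_squares(misfit, x0=np.zeros(5), method="lm")  # 5 coeffs = order 4
        print("ROM-1-style coefficients:", fit.x)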

  10. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

    AFRL-RW-EG-TR-2015-108, by Douglas V. Nance, Air Force Research Laboratory, 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic…

  11. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    …distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999). …developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended…

  12. Degenerate r-Stirling Numbers and r-Bell Polynomials

    NASA Astrophysics Data System (ADS)

    Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.

    2018-01-01

    The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. In particular, we express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.

  13. From Chebyshev to Bernstein: A Tour of Polynomials Small and Large

    ERIC Educational Resources Information Center

    Boelkins, Matthew; Miller, Jennifer; Vugteveen, Benjamin

    2006-01-01

    Consider the family of monic polynomials of degree n having zeros at -1 and +1 and all their other real zeros in between these two values. This article explores the size of these polynomials using the supremum of the absolute value on [-1, 1], showing that scaled Chebyshev and Bernstein polynomials give the extremes.

  14. Correlation and prediction of dynamic human isolated joint strength from lean body mass

    NASA Technical Reports Server (NTRS)

    Pandya, Abhilash K.; Hasson, Scott M.; Aldridge, Ann M.; Maida, James C.; Woolford, Barbara J.

    1992-01-01

    A relationship between a person's lean body mass and the maximum torque that can be produced with each isolated joint of the upper extremity was investigated. The maximum dynamic isolated-joint torque (upper extremity) of 14 subjects was collected using a dynamometer multi-joint testing unit. These data were reduced to a table of coefficients of second-degree polynomials, computed using a least squares regression method. All the coefficients were then organized into look-up tables, a compact and convenient storage/retrieval mechanism for the data set. Data for each joint, direction, and velocity were normalized with respect to that joint's average and merged into files (one for each curve for a particular joint). Regression was performed on each of these files to derive a table of normalized population curve coefficients for each joint axis, direction, and velocity. In addition, a regression table including all upper extremity joints was built, relating average torque to an individual's lean body mass. These two tables are the basis of the regression model, which allows the prediction of dynamic isolated-joint torques from an individual's lean body mass.

  15. Formal methods for modeling and analysis of hybrid systems

    NASA Technical Reports Server (NTRS)

    Tiwari, Ashish (Inventor); Lincoln, Patrick D. (Inventor)

    2009-01-01

    A technique based on the use of a quantifier elimination decision procedure for real closed fields and simple theorem proving to construct a series of successively finer qualitative abstractions of hybrid automata is taught. The resulting abstractions are always discrete transition systems which can then be used by any traditional analysis tool. The constructed abstractions are conservative and can be used to establish safety properties of the original system. The technique works on linear and non-linear polynomial hybrid systems: the guards on discrete transitions and the continuous flows in all modes can be specified using arbitrary polynomial expressions over the continuous variables. An exemplar tool in the SAL environment built over the theorem prover PVS is detailed. The technique scales well to large and complex hybrid systems.

  16. The construction of high-accuracy schemes for acoustic equations

    NASA Technical Reports Server (NTRS)

    Tang, Lei; Baeder, James D.

    1995-01-01

    An accuracy analysis of various high order schemes is performed from an interpolation point of view. The analysis indicates that classical high order finite difference schemes, which use polynomial interpolation, hold high accuracy only at nodes and are therefore not suitable for time-dependent problems. Some schemes improve their numerical accuracy within grid cells by the near-minimax approximation method, but their practical significance is degraded by maintaining the same stencil as classical schemes. One-step methods in space discretization, which use piecewise polynomial interpolation and involve data at only two points, can generate uniform accuracy over the whole grid cell and avoid spurious roots. As a result, they are more accurate and efficient than multistep methods. In particular, the Cubic-Interpolated Pseudoparticle (CIP) scheme is recommended for computational acoustics.

  17. Method for obtaining electron energy-density functions from Langmuir-probe data using a card-programmable calculator

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Longhurst, G.R.

    This paper presents a method for obtaining electron energy density functions from Langmuir probe data taken in cool, dense plasmas where thin-sheath criteria apply and where magnetic effects are not severe. Noise is filtered out by using regression of orthogonal polynomials. The method requires only a programmable calculator (TI-59 or equivalent) to implement and can be used for the most general, nonequilibrium electron energy distribution plasmas. Data from a mercury ion source analyzed using this method are presented and compared with results for the same data using standard numerical techniques.
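
    The noise-filtering step, regression on orthogonal polynomials, is a one-liner today; a sketch with a synthetic probe characteristic, with numpy's Legendre fit standing in for the calculator routines described above:

        import numpy as np
        from numpy.polynomial import Legendre

        # Synthetic probe characteristic with noise; the report's data came
        # from a mercury ion source and were processed on a TI-59.
        v = np.linspace(-1.0, 1.0, 200)          # scaled probe voltage sweep
        rng = np.random.default_rng(3)
        current = np.tanh(2.0 * v) + rng.normal(0.0, 0.05, v.size)

        # Regression on an orthogonal (Legendre) basis filters out the noise:
        series = Legendre.fit(v, current, deg=6)
        smoothed = series(v)
        print("residual RMS:", np.sqrt(np.mean((smoothed - current) ** 2)))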

  18. Optimization study for Pb(II) and COD sequestration by consortium of sulphate-reducing bacteria

    NASA Astrophysics Data System (ADS)

    Verma, Anamika; Bishnoi, Narsi R.; Gupta, Asha

    2017-09-01

    In this study, the minimum inhibitory concentration (MIC) of Pb(II) ions was first determined to establish the optimum concentration of Pb(II) ions at which the growth of the sulphate-reducing consortium (SRC) is maximal; 80 ppm of Pb(II) ions was found to be the minimum inhibitory concentration for the SRC. The influence of electron donors such as lactose, sucrose, glucose and sodium lactate was examined to identify the best carbon source for the growth and activity of sulphate-reducing bacteria; sodium lactate was found to be the prime carbon source for the SRC. Optimization of various parameters was then carried out using the Box-Behnken design model of response surface methodology to explore the effectiveness of three independent operating variables, namely pH (5.0-9.0), temperature (32-42 °C) and time (5.0-9.0 days), on the dependent variables, i.e. protein content, precipitation of Pb(II) ions, and removal of COD by SRC biomass. Maximum removal of COD and Pb(II) was observed to be 91 and 98%, respectively, at pH 7.0, temperature 37 °C and an incubation time of 7 days. According to the response surface analysis and analysis of variance, the experimental data fitted the quadratic model well, and the interactive influence of pH, temperature and time on Pb(II) and COD removal was highly significant. A high regression coefficient between the variables and the response (r² = 0.9974) corroborates the close fit of the second-order polynomial regression model to the experimental data. SEM and Fourier transform infrared analyses were performed to investigate the morphology of the PbS precipitates, the sorption mechanism, and the functional groups involved in Pb(II) binding in metal-free and metal-loaded SRC biomass.
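
    A second-order response surface of the Box-Behnken type can be fitted with a quadratic-plus-interaction feature expansion and ordinary least squares; the design points and responses below are invented stand-ins for the study's runs, so only the modeling pattern is shown.

        import numpy as np
        from sklearn.preprocessing import PolynomialFeatures
        from sklearn.linear_model import LinearRegression

        # Columns: pH, temperature (deg C), time (days); response: removal (%)
        X = np.array([[7, 37, 7], [5, 32, 5], [9, 42, 9], [5, 42, 7],
                      [9, 32, 7], [7, 32, 9], [7, 42, 5], [5, 37, 9],
                      [9, 37, 5], [7, 37, 7]], dtype=float)
        y = np.array([98, 60, 72, 65, 70, 80, 75, 68, 71, 97], dtype=float)

        # Quadratic and interaction terms give the second-order polynomial model:
        quad = PolynomialFeatures(degree=2, include_bias=False)
        model = LinearRegression().fit(quad.fit_transform(X), y)
        print("R^2:", model.score(quad.transform(X), y))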

  19. Secure Logistic Regression Based on Homomorphic Encryption: Design and Evaluation.

    PubMed

    Kim, Miran; Song, Yongsoo; Wang, Shuang; Xia, Yuhou; Jiang, Xiaoqian

    2018-04-17

    Learning a model without accessing raw data has been an intriguing idea to security and machine learning researchers for years. In an ideal setting, we want to encrypt sensitive data to store them on a commercial cloud and run certain analyses without ever decrypting the data, to preserve privacy. Homomorphic encryption is a promising candidate for secure data outsourcing, but it is a very challenging task to support real-world machine learning tasks. Existing frameworks can only handle simplified cases with low-degree polynomials such as the linear means classifier and linear discriminant analysis. The goal of this study is to provide practical support for mainstream learning models (e.g., logistic regression). We adapted a novel homomorphic encryption scheme optimized for real-number computation. We devised (1) the least squares approximation of the logistic function for accuracy and efficiency (i.e., to reduce computation cost) and (2) new packing and parallelization techniques. Using real-world datasets, we evaluated the performance of our model and demonstrated its feasibility in speed and memory consumption. For example, it took approximately 116 minutes to obtain the training model from the homomorphically encrypted Edinburgh dataset. In addition, it gives fairly accurate predictions on the testing dataset. We present the first homomorphically encrypted logistic regression outsourcing model, based on the critical observation that the precision loss of classification models is sufficiently small that the decision plane stays the same. ©Miran Kim, Yongsoo Song, Shuang Wang, Yuhou Xia, Xiaoqian Jiang. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 17.04.2018.
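
    The least squares approximation of the logistic function, the ingredient that makes logistic regression friendly to homomorphic evaluation, can be previewed in plain numpy; the interval and degree below are illustrative choices, not necessarily the paper's exact settings.

        import numpy as np

        # Degree-7 least-squares polynomial surrogate for the sigmoid on
        # [-8, 8]: low-degree polynomials are what encrypted arithmetic
        # can evaluate cheaply.
        x = np.linspace(-8.0, 8.0, 2001)
        sigmoid = 1.0 / (1.0 + np.exp(-x))
        coeffs = np.polyfit(x, sigmoid, deg=7)

        err = np.max(np.abs(np.polyval(coeffs, x) - sigmoid))
        print("max abs error of the surrogate:", err)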

  1. Umbral orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Sendino, J. E.; del Olmo, M. A.

    2010-12-23

    We present an umbral operator version of the classical orthogonal polynomials. We obtain three families which are the umbral counterpart of the Jacobi, Laguerre and Hermite polynomials in the classical case.

  2. The combined effect of age and basal follicle-stimulating hormone on the cost of a live birth at assisted reproductive technology.

    PubMed

    Henne, Melinda B; Stegmann, Barbara J; Neithardt, Adrienne B; Catherino, William H; Armstrong, Alicia Y; Kao, Tzu-Cheg; Segars, James H

    2008-01-01

    To predict the cost of a delivery following assisted reproductive technologies (ART). Cost analysis based on retrospective chart analysis. University-based ART program. Women aged ≥26 and…

  3. Tachyon inflation in the large-N formalism

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barbosa-Cendejas, Nandinii; De-Santiago, Josue; German, Gabriel

    2015-11-01

    We study tachyon inflation within the large-N formalism, which takes a prescription for the small Hubble flow slow-roll parameter ε_1 as a function of the large number of e-folds N. This leads to a classification of models through their behaviour at large N. In addition to the perturbative N class, we introduce the polynomial and exponential classes for the ε_1 parameter. With this formalism we reconstruct a large number of potentials used previously in the literature for tachyon inflation. We also obtain new families of potentials from the polynomial class. We characterize the realizations of tachyon inflation by computing the usual cosmological observables up to second order in the Hubble flow slow-roll parameters. This allows us to look at observable differences between tachyon and canonical single field inflation. The analysis of observables in light of the Planck 2015 data shows the viability of some of these models, mostly for certain realizations of the polynomial and exponential classes.

  4. Exploiting structure: Introduction and motivation

    NASA Technical Reports Server (NTRS)

    Xu, Zhong Ling

    1993-01-01

    Research activities performed during the period of 29 June 1993 through 31 August 1993 are summarized. Work on the robust stability of systems whose transfer function or characteristic polynomial is a multilinear affine function of the parameters of interest was developed in two directions, algorithmic and theoretical. In the algorithmic direction, a new approach was found that reduces the computational burden of checking the robust stability of systems with multilinear uncertainty; this technique, called 'stability by linear process', in fact yields an algorithm. On the theoretical side, we obtained a robustness criterion for the family of polynomials whose coefficients are multilinear affine functions in the coefficient space, as well as a result on the robust stability of diamond families of polynomials with complex coefficients. We obtained limited results for SPR design and provide a framework for solving ACS. Finally, an outline of our results is provided in the appendix, along with an administrative issue.

  5. Design and Use of a Learning Object for Finding Complex Polynomial Roots

    ERIC Educational Resources Information Center

    Benitez, Julio; Gimenez, Marcos H.; Hueso, Jose L.; Martinez, Eulalia; Riera, Jaime

    2013-01-01

    Complex numbers are essential in many fields of engineering, but students often fail to develop a natural insight into them. We present a learning object for the study of complex polynomials that graphically shows that any complex polynomial has a root and, furthermore, is useful for finding the approximate roots of a complex polynomial. Moreover, we…

  6. Extending a Property of Cubic Polynomials to Higher-Degree Polynomials

    ERIC Educational Resources Information Center

    Miller, David A.; Moseley, James

    2012-01-01

    In this paper, the authors examine a property that holds for all cubic polynomials given two zeros. This property is discovered after reviewing a variety of ways to determine the equation of a cubic polynomial given specific conditions through algebra and calculus. At the end of the article, they will connect the property to a very famous method…

  7. Analysis of the impacts of horizontal translation and scaling on wavefront approximation coefficients with rectangular pupils for Chebyshev and Legendre polynomials.

    PubMed

    Sun, Wenqing; Chen, Lei; Tuya, Wulan; He, Yong; Zhu, Rihong

    2013-12-01

    Chebyshev and Legendre polynomials are frequently used in rectangular pupils for wavefront approximation. Ideally, the dataset fits the polynomial basis completely, which provides the full-pupil approximation coefficients and the corresponding geometric aberrations. However, under horizontal translation and scaling, the terms in the original polynomials become linear combinations of the coefficients of the other terms. This paper introduces analytical expressions for two typical situations after translation and scaling. With a small translation, a first-order Taylor expansion can be used to simplify the computation. Several representative terms can be selected as inputs to compute the coefficient changes before and after translation and scaling. Results show that the analytical solutions and the approximated values under discrete sampling are consistent. Using a group of randomly generated coefficients, we contrasted the changes under different translation and scaling conditions: larger ratios correspond to larger deviations of the approximated values from the original ones. Finally, we analyzed the peak-to-valley (PV) and root mean square (RMS) deviations arising from the first-order approximation and the direct expansion under different translation values. The results show that when the translation is less than 4%, the most affected term, the 5th term in the first-order 1D-Legendre expansion, has a PV deviation of less than 7% and an RMS deviation of less than 2%. The analytical expressions and the computed results under discrete sampling given in this paper, for multiple typical function bases under translation and scaling in rectangular areas, can be applied in wavefront approximation and analysis.
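
    The leakage the paper quantifies is easy to reproduce numerically: expand a translated Legendre term back in the untranslated basis and inspect how its coefficient spreads into neighboring terms. The sketch uses a 4% translation of the 5th 1D-Legendre term, matching the abstract's example; discrete sampling here is an illustrative stand-in for the paper's analytical expressions.

        import numpy as np
        from numpy.polynomial import legendre as L

        x = np.linspace(-1.0, 1.0, 400)
        # Expand the translated 5th Legendre term back in the untranslated basis:
        p5_shifted = L.legval(x + 0.04, [0, 0, 0, 0, 0, 1])
        coeffs = L.legfit(x, p5_shifted, deg=5)
        print(np.round(coeffs, 4))   # coefficient mass leaking into terms 0..4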

  8. Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields

    NASA Astrophysics Data System (ADS)

    Milstead, Jonathan

    The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach combines information from different disciplines. First, we make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial; this allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups, and algorithms by Greve and Pauli very efficiently return the Galois group of polynomials whose ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.

  9. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method combined with polynomial preconditioning is studied for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioner leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative, eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two-parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems, and a Remez-type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioner speeds up the convergence of the minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.

  10. The algebra of dual -1 Hahn polynomials and the Clebsch-Gordan problem of sl_{-1}(2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genest, Vincent X.; Vinet, Luc; Zhedanov, Alexei

    The algebra H of the dual -1 Hahn polynomials is derived and shown to arise in the Clebsch-Gordan problem of sl_{-1}(2). The dual -1 Hahn polynomials are the bispectral polynomials of a discrete argument obtained from the q → -1 limit of the dual q-Hahn polynomials. The Hopf algebra sl_{-1}(2) has four generators including an involution; it is also a q → -1 limit of the quantum algebra sl_q(2) and, furthermore, the dynamical algebra of the parabose oscillator. The algebra H, a two-parameter generalization of u(2) with an involution as additional generator, is first derived from the recurrence relation of the -1 Hahn polynomials. It is then shown that H can be realized in terms of the generators of two added sl_{-1}(2) algebras, so that the Clebsch-Gordan coefficients of sl_{-1}(2) are dual -1 Hahn polynomials. An irreducible representation of H involving five-diagonal matrices and connected to the difference equation of the dual -1 Hahn polynomials is constructed.

  11. Stability analysis of nonlinear Roesser-type two-dimensional systems via a homogenous polynomial technique

    NASA Astrophysics Data System (ADS)

    Zhang, Tie-Yan; Zhao, Yan; Xie, Xiang-Peng

    2012-12-01

    This paper is concerned with the problem of stability analysis of nonlinear Roesser-type two-dimensional (2D) systems. First, the fuzzy modeling method for the usual one-dimensional (1D) systems is extended to the 2D case, so that the underlying nonlinear 2D system can be represented by a 2D Takagi-Sugeno (TS) fuzzy model, which is convenient for carrying out the stability analysis. Second, a new kind of fuzzy Lyapunov function, which is homogeneously polynomially parameter-dependent on the fuzzy membership functions, is developed to derive less conservative stability conditions for the TS Roesser-type 2D system. In the process of stability analysis, the obtained stability conditions approach exactness in the sense of convergence by applying some novel relaxation techniques. Moreover, the obtained result is formulated as linear matrix inequalities, which can be easily solved via standard numerical software. Finally, a numerical example is given to demonstrate the effectiveness of the proposed approach.

  12. Linear and nonlinear ARMA model parameter estimation using an artificial neural network

    NASA Technical Reports Server (NTRS)

    Chon, K. H.; Cohen, R. J.

    1997-01-01

    This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural network model and estimation of the system by use of linear and nonlinear autoregressive moving-average (ARMA) models. By utilizing a neural network model incorporating a polynomial activation function, we show the equivalence of the artificial neural network to the linear and nonlinear ARMA models. We compare the parameterization of the estimated system using the neural network and ARMA approaches by utilizing data generated by means of computer simulations. Specifically, we show that the parameters of a simulated ARMA system can be obtained from the neural network analysis of the simulated data or by conventional least squares ARMA analysis. The feasibility of applying neural networks with polynomial activation functions to the analysis of experimental data is explored by application to measurements of heart rate (HR) and instantaneous lung volume (ILV) fluctuations.

  13. Interbasis expansions in the Zernike system

    NASA Astrophysics Data System (ADS)

    Atakishiyev, Natig M.; Pogosyan, George S.; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation with free boundary conditions on the unit disk that was proposed by Frits Zernike in 1934 to find Jacobi polynomial solutions (indicated as I) serves to define a classical system and a quantum system which have been found to be superintegrable. We have determined two new orthogonal polynomial solutions (indicated as II and III) that are separable and involve Legendre and Gegenbauer polynomials. Here we report on their three interbasis expansion coefficients: between the I-II and I-III bases, they are given by ₃F₂(⋯|1) polynomials that are also special su(2) Clebsch-Gordan coefficients and Hahn polynomials. Between the II-III bases, we find an expansion expressed by ₄F₃(⋯|1)'s and Racah polynomials that are related to the Wigner 6j coefficients.

  14. Regional climate change predictions from the Goddard Institute for Space Studies high resolution GCM

    NASA Technical Reports Server (NTRS)

    Crane, Robert G.; Hewitson, B. C.

    1991-01-01

    A new diagnostic tool is developed for examining relationships between the synoptic scale circulation and regional temperature distributions in GCMs. The 4 × 5 deg GISS GCM is shown to produce accurate simulations of the variance in the synoptic scale sea level pressure distribution over the U.S. An analysis of the observational data set from the National Meteorological Center (NMC) also shows a strong relationship between the synoptic circulation and grid point temperatures. This relationship is demonstrated by deriving transfer functions between a time-series of circulation parameters and temperatures at individual grid points. The circulation parameters are derived using rotated principal components analysis, and the temperature transfer functions are based on multivariate polynomial regression models. The application of these transfer functions to the GCM circulation indicates that there is considerable spatial bias present in the GCM temperature distributions. The transfer functions are also used to indicate the possible changes in U.S. regional temperatures that could result from differences in synoptic scale circulation between a 1×CO2 and a 2×CO2 climate, using a doubled CO2 version of the same GISS GCM.
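
    As a rough illustration of the transfer-function idea described above, the sketch below reduces a pressure field to a few principal components and regresses a grid-point temperature on polynomial terms of those components. It is a minimal sketch on synthetic data, with an unrotated PCA standing in for the rotated principal components analysis of the record; scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
slp = rng.normal(size=(500, 40))        # 500 time steps x 40 pressure grid points
temp = 2.0 * slp[:, 0] + slp[:, 1] ** 2 + rng.normal(scale=0.1, size=500)

pcs = PCA(n_components=4).fit_transform(slp)     # "circulation parameters"
X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(pcs)
model = LinearRegression().fit(X, temp)          # multivariate polynomial regression
print("R^2 of the fitted transfer function:", model.score(X, temp))
```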

  15. Retrieval Accuracy Assessment with Gap Detection for Case 2 Waters Chla Algorithms

    NASA Astrophysics Data System (ADS)

    Salem, S. I.; Higa, H.; Kim, H.; Oki, K.; Oki, T.

    2016-12-01

    Inland lakes and coastal regions, two types of Case 2 waters, should be continuously and accurately monitored, as the former contain 90% of the global liquid freshwater storage, while the latter provide most of the dissolved organic carbon (DOC), an important link in the global carbon cycle. The optical properties of Case 2 waters are dominated by three optically active components: phytoplankton, non-algal particles (NAP) and colored dissolved organic matter (CDOM). During the last three decades, researchers have proposed several algorithms to retrieve Chla concentration from the remote sensing reflectance. In this study, seven algorithms are assessed with various band combinations from multi- and hyper-spectral data, with linear, polynomial and power regression approaches. To evaluate the performance of the 43 algorithm combination sets, 500,000 remote sensing reflectance spectra are simulated with a wide range of concentrations for Chla, NAP and CDOM. The concentrations of Chla and NAP vary from 1-200 (mg m-3) and 1-200 (g m-3), respectively, and the absorption of CDOM at 440 nm has the range 0.1-10 (m-1). It is found that the three-band algorithm (665, 709 and 754 nm) with the quadratic polynomial (3b_665_QP) shows the best overall performance. 3b_665_QP has the least error, with a root mean square error (RMSE) of 0.2 (mg m-3) and a mean absolute relative error (MARE) of 0.7%. The least accurate retrieval of Chla was obtained by the synthetic chlorophyll index algorithm, with RMSE and MARE of 35.8 mg m-3 and 160.4%, respectively. In general, Chla algorithms that incorporate the 665 nm band or a band tuning technique perform better than those with 680 nm. In addition, the retrieval accuracy of Chla algorithms with quadratic polynomial and power regression approaches is consistently better than that of the linear ones. By analyzing Chla versus NAP concentrations, the 3b_665_QP outperforms the other algorithms for all Chla concentrations and NAP concentrations above 40 g m-3, which accounts for 81.3% of the total combinations of NAP and Chla. In conclusion, these findings provide a reference for algorithm selection based on constituents' concentrations and open the door for developing a classification scheme to retrieve Chla with higher accuracy.
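
    A minimal sketch of the "3b_665_QP" combination: the standard three-band index [1/R(665) - 1/R(709)] × R(754) calibrated against Chla with a quadratic polynomial. The reflectance values and calibration pairs below are invented placeholders, not the simulated spectra of the study.

```python
import numpy as np

def three_band_index(r665, r709, r754):
    # standard three-band form for red/NIR Chla retrieval
    return (1.0 / r665 - 1.0 / r709) * r754

# hypothetical calibration data: index values and matching Chla (mg m-3)
idx = np.array([0.05, 0.20, 0.55, 1.10, 1.80])
chla = np.array([2.0, 10.0, 40.0, 95.0, 180.0])

coeffs = np.polyfit(idx, chla, deg=2)      # quadratic polynomial regression
estimate = np.polyval(coeffs, three_band_index(0.012, 0.015, 0.010))
print(f"estimated Chla: {estimate:.1f} mg m-3")
```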

  16. Least square regularized regression in sum space.

    PubMed

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.

  17. Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2009-12-01

    We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights on the real semiaxis, with γ > 0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.

  18. Combinatorial theory of Macdonald polynomials I: proof of Haglund's formula.

    PubMed

    Haglund, J; Haiman, M; Loehr, N

    2005-02-22

    Haglund recently proposed a combinatorial interpretation of the modified Macdonald polynomials H(mu). We give a combinatorial proof of this conjecture, which establishes the existence and integrality of H(mu). As corollaries, we obtain the cocharge formula of Lascoux and Schutzenberger for Hall-Littlewood polynomials, a formula of Sahi and Knop for Jack's symmetric functions, a generalization of this result to the integral Macdonald polynomials J(mu), a formula for H(mu) in terms of Lascoux-Leclerc-Thibon polynomials, and combinatorial expressions for the Kostka-Macdonald coefficients K(lambda,mu) when mu is a two-column shape.

  19. Multi-indexed (q-)Racah polynomials

    NASA Astrophysics Data System (ADS)

    Odake, Satoru; Sasaki, Ryu

    2012-09-01

    As the second stage of the project multi-indexed orthogonal polynomials, we present, in the framework of ‘discrete quantum mechanics’ with real shifts in one dimension, the multi-indexed (q-)Racah polynomials. They are obtained from the (q-)Racah polynomials by the multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of ‘virtual state’ vectors, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier. The virtual state vectors are the ‘solutions’ of the matrix Schrödinger equation with negative ‘eigenvalues’, except for one of the two boundary points.

  20. [Spanish doctoral theses in emergency medicine (1978-2013)].

    PubMed

    Fernández-Guerrero, Inés María

    2015-01-01

    To quantitatively analyze the production of Spanish doctoral theses in emergency medicine. Quantitative synthesis of productivity indicators for 214 doctoral theses in emergency medicine found in the database of Spanish universities (TESEO) from 1978 to 2013. We processed the data in 3 ways, as follows: compilation of descriptive statistics, regression analysis (coefficients of determination), and modeling of the linear trend (time-series analysis). Most of the thesis supervisors (84.1%) oversaw only a single project, and no supervisor oversaw 10 or more theses. Analysis of cosupervision indicated there were 1.6 supervisors per thesis. The theses were defended in 67 departments (both general and specialist departments) because no emergency medicine departments had been established. The most productive universities were 2 large ones (Universitat de Barcelona and Universidad Complutense de Madrid) and 3 medium-sized ones (Universidad de Granada, Universitat Autònoma de Barcelona, and Universidad de La Laguna). Productivity over time, analyzed as the trend across 2-year periods in the time series, was expressed as a polynomial function with a coefficient of determination of R² = 0.80. Spanish doctoral research in emergency medicine has grown markedly. Work has been done in various university departments in different disciplines and specialties. The findings confirm that emergency medicine is a distinct disciplinary field.
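
    The trend modeling described above amounts to fitting a polynomial to counts per period and reporting R². A minimal sketch follows; the biennial counts are invented for illustration, not the TESEO data.

```python
import numpy as np

period = np.arange(18)                        # eighteen 2-year periods, 1978-2013
counts = np.array([1, 1, 2, 2, 3, 3, 4, 5, 6, 8, 9, 11, 14, 16, 20, 25, 30, 38])

coeffs = np.polyfit(period, counts, deg=2)    # polynomial trend model
fitted = np.polyval(coeffs, period)
r2 = 1 - np.sum((counts - fitted) ** 2) / np.sum((counts - counts.mean()) ** 2)
print(f"R^2 = {r2:.2f}")
```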

  1. Antibacterial and antifungal activities of pyroligneous acid from wood of Eucalyptus urograndis and Mimosa tenuiflora.

    PubMed

    de Souza Araújo, E; Pimenta, A S; Feijó, F M C; Castro, R V O; Fasciotti, M; Monteiro, T V C; de Lima, K M G

    2018-01-01

    This work aimed to evaluate the antibacterial and antifungal activities of two types of pyroligneous acid (PA) obtained from slow pyrolysis of wood of Mimosa tenuiflora and of a hybrid of Eucalyptus urophylla × Eucalyptus grandis. Wood wedges were carbonized at a heating rate of 1.25°C min-1 up to 450°C. Pyrolysis smoke was trapped and condensed to yield liquid products. Crude pyrolysis liquids were bidistilled under 5 mmHg vacuum, yielding purified PA. Multi-antibiotic-resistant strains of Escherichia coli, Pseudomonas aeruginosa (ATCC 27853) and Staphylococcus aureus (ATCC 25923) had their sensitivity to PA evaluated using the agar diffusion test. Two yeasts, Candida albicans (ATCC 10231) and Cryptococcus neoformans, were evaluated as well. GC-MS analysis of both PAs was carried out to determine their chemical composition. Regression analysis was performed and models were adjusted, with the diameter of inhibition halos and PA concentration (100, 50 and 20%) as parameters. The identity of regression models and the equality of parameters in orthogonal polynomial equations were verified. Inhibition halos were observed in the range of 15-25 mm in diameter. All micro-organisms were inhibited by both types of PA, even at the lowest concentration of 20%. The results demonstrate the feasibility of using PAs produced from wood species planted on a large scale in Brazil, and their real potential as a basis for producing natural antibacterial and antifungal agents, with real possibility of use in veterinary and zootechnical applications. © 2017 The Society for Applied Microbiology.

  2. Statistical and evolutionary optimization for enhanced production of an antileukemic enzyme, L-asparaginase, in a protease-deficient Bacillus aryabhattai ITBHU02 isolated from the soil contaminated with hospital waste.

    PubMed

    Singh, Yogendra; Srivastav, S K

    2013-04-01

    Over the past few decades, L-asparaginase has emerged as an excellent anti-neoplastic agent. In the present study, a new strain, ITBHU02, isolated from soil near degrading hospital waste, was investigated for the production of extracellular L-asparaginase. It was named Bacillus aryabhattai ITBHU02 based on its phenotypical features, biochemical characteristics, fatty acid methyl ester (FAME) profile and the phylogenetic similarity of its 16S rDNA sequences. The strain was found to be protease-deficient, and its optimal growth occurred at 37°C and pH 7.5. The strain was capable of producing L-asparaginase with a maximum specific activity of 3.02 ± 0.3 U mg-1 protein when grown with unoptimized medium composition and physical parameters. In order to improve the production of L-asparaginase by the isolate, response surface methodology (RSM) and genetic algorithm (GA) based techniques were implemented. The data obtained through the statistical design matrix were used for regression analysis and analysis of variance studies. Furthermore, the GA was implemented using the polynomial regression equation as a fitness function. A maximum average L-asparaginase productivity of 6.35 U mg-1 was found at GA-optimized concentrations of 4.07, 0.82, 4.91, and 5.2 g L-1 for KH2PO4, MgSO4·7H2O, L-asparagine, and glucose, respectively. The GA-optimized yield of the enzyme was 7.8% higher than the yield obtained through RSM-based optimization.

  3. The association between temperature and mortality in tropical middle income Thailand from 1999 to 2008

    NASA Astrophysics Data System (ADS)

    Tawatsupa, Benjawan; Dear, Keith; Kjellstrom, Tord; Sleigh, Adrian

    2014-03-01

    We have investigated the association between tropical weather conditions and age-sex adjusted death rates (ADR) in Thailand over a 10-year period from 1999 to 2008. Population, mortality, weather and air pollution data were obtained from four national databases. Alternating multivariable fractional polynomial (MFP) regression and stepwise multivariable linear regression analysis were used to sequentially build models of the associations between temperature variables and deaths, adjusted for the effects and interactions of age, sex, weather (6 variables), and air pollution (10 variables). The associations are explored and compared among three seasons (cold, hot and wet months) and four weather zones of Thailand (the North, Northeast, Central, and South regions). We found statistically significant associations between temperature and mortality in Thailand. The maximum temperature is the most important variable in predicting mortality. Overall, the association is a nonlinear U-shape, and 31 °C is the minimum-mortality temperature in Thailand. Death rates increase as the maximum temperature increases, with the highest rates in the North and Central regions during hot months. The final equation used in this study allowed estimation of the impact of a 4 °C increase in temperature, as projected for Thailand by 2100; this analysis revealed that heat-related deaths will increase more than the cold-related deaths avoided in the hot and wet months, and overall the net increase in expected mortality by region ranges from 5 to 13% unless preventive measures are adopted. Overall, these results are useful for health impact assessment of the present situation and the future public health implications of global climate change for tropical Thailand.
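
    Fractional polynomial regression, as used above, selects transformations of a covariate from a fixed power set. The sketch below shows only the first-degree (FP1) selection step on synthetic data with a logarithmic relation; the actual study fitted multivariable MFP models with many covariates and interactions.

```python
import numpy as np

POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]     # conventional FP1 power set

def fp_transform(x, p):
    # power 0 is defined as the logarithm in the fractional polynomial convention
    return np.log(x) if p == 0 else x ** p

rng = np.random.default_rng(1)
tmax = rng.uniform(25.0, 40.0, size=300)              # daily max temperature (C)
adr = 2.0 + 0.5 * np.log(tmax) + rng.normal(0.0, 0.05, size=300)  # synthetic ADR

def rss(p):
    # residual sum of squares of a straight-line fit on the transformed scale
    z = fp_transform(tmax, p)
    fit = np.polyval(np.polyfit(z, adr, 1), z)
    return np.sum((adr - fit) ** 2)

best = min(POWERS, key=rss)                  # keep the best-fitting transformation
print("selected FP1 power for max temperature:", best)
```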

  4. Conformal Galilei algebras, symmetric polynomials and singular vectors

    NASA Astrophysics Data System (ADS)

    Křižka, Libor; Somberg, Petr

    2018-01-01

    We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ(d, C) with d = 1, for any integer value ℓ ∈ N. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.

  5. Identities associated with Milne-Thomson type polynomials and special numbers.

    PubMed

    Simsek, Yilmaz; Cakic, Nenad

    2018-01-01

    The purpose of this paper is to give identities and relations including the Milne-Thomson polynomials, the Hermite polynomials, the Bernoulli numbers, the Euler numbers, the Stirling numbers, the central factorial numbers, and the Cauchy numbers. By using fermionic and bosonic p-adic integrals, we derive some new relations and formulas related to these numbers and polynomials, and also the combinatorial sums.

  6. Simulation of aspheric tolerance with polynomial fitting

    NASA Astrophysics Data System (ADS)

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong

    2018-01-01

    Changes in the shape of an aspheric lens caused by machining errors alter the optical transfer function and therefore degrade image quality. At present, there is no universally recognized tolerance criterion standard for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, tolerances obtained from polynomial fitting are allocated to the aspheric surface, and imaging is simulated with optical design software. The analysis is based on a set of aspheric imaging systems. An error is generated within the range of a given PV value and expressed in the form of a Zernike polynomial, which is added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system is obtained and used as the main evaluation index, to judge whether the effect of the added error on the system MTF meets the requirements at the current PV value. The PV value is then changed and the operation repeated until the maximum acceptable PV value is obtained. Following actual machining practice, errors of various shapes are considered, such as M-type, W-type, and random errors. The method provides a useful reference for actual freeform surface processing technology.

  7. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such quantities as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the relative size of the derivatives to the quantity itself.

  8. Estimation of genetic parameters for milk yield in Murrah buffaloes by Bayesian inference.

    PubMed

    Breda, F C; Albuquerque, L G; Euclydes, R F; Bignardi, A B; Baldi, F; Torres, R A; Barbosa, L; Tonhati, H

    2010-02-01

    Random regression models were used to estimate genetic parameters for test-day milk yield in Murrah buffaloes using Bayesian inference. Data comprised 17,935 test-day milk records from 1,433 buffaloes. Twelve models were tested using different combinations of third-, fourth-, fifth-, sixth-, and seventh-order orthogonal polynomials of weeks of lactation for additive genetic and permanent environmental effects. All models included the fixed effects of contemporary group and number of daily milkings, with age of cow at calving as a covariate (linear and quadratic effects). In addition, residual variances were considered to be heterogeneous, with 6 classes of variance. Models were selected based on the residual mean square error, the weighted average of residual variance estimates, and estimates of variance components, heritabilities, correlations, eigenvalues, and eigenfunctions. Results indicated that changes in the order of fit for additive genetic and permanent environmental random effects influenced the estimation of genetic parameters. Heritability estimates ranged from 0.19 to 0.31. Genetic correlation estimates were close to unity between adjacent test-day records, but decreased gradually as the interval between test-days increased. Results from mean squared error and weighted averages of residual variance estimates suggested that a model considering sixth- and seventh-order Legendre polynomials for additive and permanent environmental effects, respectively, and 6 classes for residual variances, provided the best fit. Nevertheless, this model presented the largest degree of complexity. A more parsimonious model, with fourth- and sixth-order polynomials, respectively, for these same effects, yielded very similar genetic parameter estimates. Therefore, this last model is recommended for routine applications. Copyright 2010 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
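
    A small sketch of the Legendre polynomial covariates that random regression models of this kind use: the time axis is standardized to [-1, 1] and the first few Legendre polynomials are evaluated at each test day. The days-in-milk values are placeholders; only the design-matrix construction is illustrated.

```python
import numpy as np
from numpy.polynomial import legendre

def legendre_covariates(t, t_min, t_max, order):
    """Rows: one test day; columns: Legendre polynomials 0..order."""
    x = 2.0 * (t - t_min) / (t_max - t_min) - 1.0       # map to [-1, 1]
    return np.column_stack([
        legendre.legval(x, [0] * k + [1]) for k in range(order + 1)
    ])

dim = np.array([5, 35, 65, 95, 155, 215, 275])          # days in milk (invented)
Z = legendre_covariates(dim, 5, 305, order=4)           # fourth-order fit
print(Z.shape)   # (7, 5): covariate matrix for one animal's random effect
```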

  9. Selection of Polynomial Chaos Bases via Bayesian Model Uncertainty Methods with Applications to Sparse Approximation of PDEs with Stochastic Inputs

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karagiannis, Georgios; Lin, Guang

    2014-02-15

    Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both the spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) Bayesian model averaging or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for model uncertainty and provides Bayes-optimal predictions; the latter additionally provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method, and make comparisons with others, on elliptic stochastic partial differential equations with 1-, 14- and 40-dimensional random spaces.

  10. Explaining variation in tropical plant community composition: influence of environmental and spatial data quality.

    PubMed

    Jones, Mirkka M; Tuomisto, Hanna; Borcard, Daniel; Legendre, Pierre; Clark, David B; Olivas, Paulo C

    2008-03-01

    The degree to which variation in plant community composition (beta-diversity) is predictable from environmental variation, relative to other spatial processes, is of considerable current interest. We addressed this question in Costa Rican rain forest pteridophytes (1,045 plots, 127 species). We also tested the effect of data quality on the results, which has largely been overlooked in earlier studies. To do so, we compared two alternative spatial models [polynomial vs. principal coordinates of neighbour matrices (PCNM)] and ten alternative environmental models (all available environmental variables vs. four subsets, and including their polynomials vs. not). Of the environmental data types, soil chemistry contributed most to explaining pteridophyte community variation, followed in decreasing order of contribution by topography, soil type and forest structure. Environmentally explained variation increased moderately when polynomials of the environmental variables were included. Spatially explained variation increased substantially when the multi-scale PCNM spatial model was used instead of the traditional, broad-scale polynomial spatial model. The best model combination (PCNM spatial model and full environmental model including polynomials) explained 32% of pteridophyte community variation, after correcting for the number of sampling sites and explanatory variables. Overall evidence for environmental control of beta-diversity was strong, and the main floristic gradients detected were correlated with environmental variation at all scales encompassed by the study (c. 100-2,000 m). Depending on model choice, however, total explained variation differed more than fourfold, and the apparent relative importance of space and environment could be reversed. Therefore, we advocate a broader recognition of the impacts that data quality has on analysis results. A general understanding of the relative contributions of spatial and environmental processes to species distributions and beta-diversity requires that methodological artefacts are separated from real ecological differences.

  11. Investigation of advanced UQ for CRUD prediction with VIPRE.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Eldred, Michael Scott

    2011-09-01

    This document summarizes the results from a level 3 milestone study within the CASL VUQ effort. It demonstrates the application of 'advanced UQ', in particular dimension-adaptive p-refinement for polynomial chaos and stochastic collocation. The study calculates statistics for several quantities of interest that are indicators for the formation of CRUD (Chalk River unidentified deposit), which can lead to CIPS (CRUD induced power shift). Stochastic expansion methods are attractive methods for uncertainty quantification due to their fast convergence properties. For smooth functions (i.e., analytic, infinitely-differentiable) in L² (i.e., possessing finite variance), exponential convergence rates can be obtained under order refinement for integrated statistical quantities of interest such as mean, variance, and probability. Two stochastic expansion methods are of interest: nonintrusive polynomial chaos expansion (PCE), which computes coefficients for a known basis of multivariate orthogonal polynomials, and stochastic collocation (SC), which forms multivariate interpolation polynomials for known coefficients. Within the DAKOTA project, recent research in stochastic expansion methods has focused on automated polynomial order refinement ('p-refinement') of expansions to support scalability to higher dimensional random input spaces [4, 3]. By preferentially refining only in the most important dimensions of the input space, the applicability of these methods can be extended from O(10⁰)-O(10¹) random variables to O(10²) and beyond, depending on the degree of anisotropy (i.e., the extent to which random input variables have differing degrees of influence on the statistical quantities of interest (QOIs)). Thus, the purpose of this study is to investigate the application of these adaptive stochastic expansion methods to the analysis of CRUD using the VIPRE simulation tools for two different plant models of differing random dimension, anisotropy, and smoothness.
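
    A toy sketch of nonintrusive PCE in one dimension: for a scalar model with a standard-normal input, coefficients of probabilists' Hermite polynomials are estimated by regression on random samples, and the zeroth coefficient approximates the mean. This is a minimal stand-in for the dimension-adaptive multivariate expansions that DAKOTA applies to VIPRE; only numpy is assumed.

```python
import numpy as np
from numpy.polynomial import hermite_e

def model(xi):                        # stand-in for an expensive simulator
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

rng = np.random.default_rng(2)
xi = rng.standard_normal(200)
y = model(xi)

order = 5
# design matrix of probabilists' Hermite polynomials He_0..He_order
Psi = np.column_stack([hermite_e.hermeval(xi, [0] * k + [1])
                       for k in range(order + 1)])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)
print("PCE mean estimate:", coef[0])  # He_0 coefficient approximates E[y]
```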

  12. Consensus seeking in a network of discrete-time linear agents with communication noises

    NASA Astrophysics Data System (ADS)

    Wang, Yunpeng; Cheng, Long; Hou, Zeng-Guang; Tan, Min; Zhou, Chao; Wang, Ming

    2015-07-01

    This paper studies the mean square consensus of discrete-time linear time-invariant multi-agent systems with communication noises. A distributed consensus protocol, which is composed of the agent's own state feedback and the relative states between the agent and its neighbours, is proposed. A time-varying consensus gain a[k] is applied to attenuate the effect of the noise inherent in the inaccurate measurement of relative states with neighbours. A polynomial, namely the 'parameter polynomial', is constructed, whose coefficients are the parameters in the feedback gain vector of the proposed protocol. It turns out that the parameter polynomial plays an important role in guaranteeing the consensus of linear multi-agent systems. For the proposed protocol, necessary and sufficient conditions for mean square consensus are presented under different topology conditions: (1) if the communication topology graph has a spanning tree and every node in the graph has at least one parent node, then mean square consensus can be achieved if and only if ∑_{k=0}^∞ a[k] = ∞, ∑_{k=0}^∞ a²[k] < ∞, and all roots of the parameter polynomial are in the unit circle; (2) if the communication topology graph has a spanning tree and there exists one node without any parent node (the leader-follower case), then mean square consensus can be achieved if and only if ∑_{k=0}^∞ a[k] = ∞, lim_{k→∞} a[k] = 0, and all roots of the parameter polynomial are in the unit circle; (3) if the communication topology graph does not have a spanning tree, then mean square consensus can never be achieved. Finally, a simulation example on a multiple aircraft system is provided to validate the theoretical analysis.
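
    The two numerical ingredients in these conditions are easy to check directly: a step-size sequence such as a[k] = 1/(k+1) satisfies the sum conditions, and the roots of a candidate parameter polynomial can be tested against the unit circle. The quadratic below is a hypothetical parameter polynomial chosen for illustration.

```python
import numpy as np

# a[k] = 1/(k+1): sum a[k] diverges (harmonic series) while sum a[k]^2 < inf
K = 10_000
a = 1.0 / (np.arange(K) + 1)
print("partial sums:", a.sum(), (a ** 2).sum())

# hypothetical parameter polynomial z^2 - 0.5 z + 0.06 from some gain vector
roots = np.roots([1.0, -0.5, 0.06])
print("all roots inside unit circle:", bool(np.all(np.abs(roots) < 1)))
```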

  13. A generalized multivariate regression model for modelling ocean wave heights

    NASA Astrophysics Data System (ADS)

    Wang, X. L.; Feng, Y.; Swail, V. R.

    2012-04-01

    In this study, a generalized multivariate linear regression model is developed to represent the relationship between 6-hourly ocean significant wave heights (Hs) and the corresponding 6-hourly mean sea level pressure (MSLP) fields. The model is calibrated using the ERA-Interim reanalysis of Hs and MSLP fields for 1981-2000, and is validated using the ERA-Interim reanalysis for 2001-2010 and the ERA40 reanalysis of Hs and MSLP for 1958-2001. The performance of the fitted model is evaluated in terms of the Pierce skill score, frequency bias index, and correlation skill score. Because wave heights are not normally distributed, they are subjected to a data-adaptive Box-Cox transformation before being used in the model fitting. Also, since 6-hourly data are being modelled, lag-1 autocorrelation must be, and is, accounted for. The models with and without the Box-Cox transformation, and with and without accounting for autocorrelation, are inter-compared in terms of their prediction skills. The fitted MSLP-Hs relationship is then used to reconstruct the historical wave height climate from the 6-hourly MSLP fields taken from the Twentieth Century Reanalysis (20CR, Compo et al. 2011), and to project possible future wave height climates using CMIP5 model simulations of MSLP fields. The reconstructed and projected wave heights, both seasonal means and maxima, are subjected to a trend analysis that allows for non-linear (polynomial) trends.

  14. Box–Cox Transformation and Random Regression Models for Fecal egg Count Data

    PubMed Central

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P.; Sonstegard, Tad S.; Cobuci, Jaime Araujo; Gasbarre, Louis C.

    2012-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. The original data were transformed using an extension of the Box–Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) fitted the FEC data best. The results indicated that transforming the FEC data with the Box–Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated. PMID:22303406
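
    The Box-Cox step described above can be illustrated in a few lines: scipy estimates the transformation parameter by maximum likelihood, and skewness can be compared before and after. FEC values must be positive, so a small constant is added to zero counts first; the counts below are invented, not the Beltsville data.

```python
import numpy as np
from scipy import stats

fec = np.array([0, 50, 100, 150, 400, 800, 1200, 2500, 5000], dtype=float)
transformed, lam = stats.boxcox(fec + 1.0)   # shift so all values are > 0
print(f"estimated lambda: {lam:.3f}")
print("skewness before/after:", stats.skew(fec), stats.skew(transformed))
```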

  15. Box-Cox Transformation and Random Regression Models for Fecal egg Count Data.

    PubMed

    da Silva, Marcos Vinícius Gualberto Barbosa; Van Tassell, Curtis P; Sonstegard, Tad S; Cobuci, Jaime Araujo; Gasbarre, Louis C

    2011-01-01

    Accurate genetic evaluation of livestock is based on appropriate modeling of phenotypic measurements. In ruminants, fecal egg count (FEC) is commonly used to measure resistance to nematodes. FEC values are not normally distributed, and logarithmic transformations have been used in an effort to achieve normality before analysis. However, the transformed data are often still not normally distributed, especially when data are extremely skewed. A series of repeated FEC measurements may provide information about the population dynamics of a group or individual. A total of 6375 FEC measures were obtained for 410 animals between 1992 and 2003 from the Beltsville Agricultural Research Center Angus herd. The original data were transformed using an extension of the Box-Cox transformation to approach normality and to estimate (co)variance components. We also proposed using random regression models (RRM) for genetic and non-genetic studies of FEC. Phenotypes were analyzed using RRM and restricted maximum likelihood. Within the different orders of Legendre polynomials used, those with more parameters (order 4) fitted the FEC data best. The results indicated that transforming the FEC data with the Box-Cox transformation family was effective in reducing skewness and kurtosis and dramatically increased estimates of heritability, and that measurements of FEC obtained between 12 and 26 weeks of a 26-week experimental challenge period are genetically correlated.

  16. Adjusting powerlifting performances for differences in body mass.

    PubMed

    Cleather, Daniel John

    2006-05-01

    It has been established that, in the sports of Olympic weightlifting (OL) and powerlifting (PL), the relationship between lifting performance and body mass is not linear. This relationship has been frequently studied in OL, but the literature on PL is less extensive. In this study, PL performance and body mass, for both men and women, were examined using data from the International Powerlifting Federation World Championships during 1995-2004. Nonlinear regression was used to apply 7 models (including allometric, polynomial, and power models) to the data. The results of this study indicate that the relationship between PL performance and body mass can be best modeled by the equation y = a - b·x^(-c), where y is the weight lifted (in kg) in the squat, bench press, or deadlift, x is the body mass of the lifter (in kg), and a, b, and c are constants. The constants a, b, and c are determined by the type of lift (squat, bench press, or deadlift) and the gender of the lifter, and were obtained from the regression analysis. Inspection of the plots of raw residuals (actual performance minus predicted performance) vs. body mass revealed no body mass bias in this formula, in contrast to research into other handicapping formulas. This study supports previous research that found a bias toward lifters in the intermediate weight categories in allometric fits to PL data.
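
    Fitting the model y = a - b·x^(-c) is a standard nonlinear least squares problem. A minimal sketch follows; the body-mass/squat pairs are invented placeholders, not the IPF championship data, and the resulting constants are for illustration only.

```python
import numpy as np
from scipy.optimize import curve_fit

def pl_model(x, a, b, c):
    return a - b * x ** (-c)

mass = np.array([52, 60, 67.5, 75, 82.5, 90, 100, 110, 125])      # kg
squat = np.array([190, 220, 245, 265, 280, 295, 310, 320, 330])   # kg, made up

params, _ = curve_fit(pl_model, mass, squat, p0=(400, 5000, 1.0))
print("a, b, c =", params)
residuals = squat - pl_model(mass, *params)   # inspect for body-mass bias
```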

  17. Are there differences in the first stage of labor between Black and White women?

    PubMed

    Tuuli, Methodius G; Odibo, Anthony O; Caughey, Aaron B; Roehl, Kimberly; Macones, George A; Cahill, Alison G

    2015-02-01

    The objective of this study was to determine whether the duration and progress of the first stage of labor are different in black compared with white women. Retrospective cohort study of labor progress among consecutive black (n = 3,924) and white (n = 921) women with singleton term pregnancies (≥ 37 weeks) who completed the first stage of labor. Duration of labor and progression from 1 cm of dilation to the next were estimated using interval-censored regression. Labor duration and progress among black and white women in the entire cohort, and stratified by parity, were compared in multivariable interval-censored regression models. Repeated-measures analysis with 9th-degree polynomial modeling was used to construct average labor curves. There were no significant differences in duration of the first stage of labor in black compared with white women (median, 4-10 cm: 5.1 vs. 4.9 hours [p = 0.43] for nulliparous and 3.5 vs. 3.9 hours [p = 0.84] for multiparous women). Similarly, there were no significant differences in progression in increments of 1 cm. Average labor curves were also not significantly different. Duration and progress of the first stage of labor are identical in black and white women. This suggests similar standards may be applied in the first stage of labor.

  18. Approximating smooth functions using algebraic-trigonometric polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharapudinov, Idris I

    2011-01-14

    The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form p_n(t) + τ_m(t), where p_n(t) is an algebraic polynomial of degree n and τ_m(t) = a_0 + Σ_{k=1}^m (a_k cos kπt + b_k sin kπt) is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes W^r_∞(M) and an upper bound for similar approximations in the class W^r_p(M) with 4/3

  19. Parameter reduction in nonlinear state-space identification of hysteresis

    NASA Astrophysics Data System (ADS)

    Fakhrizadeh Esfahani, Alireza; Dreesen, Philippe; Tiels, Koen; Noël, Jean-Philippe; Schoukens, Johan

    2018-05-01

    Recent work on black-box polynomial nonlinear state-space modeling for hysteresis identification has provided promising results, but struggles with a large number of parameters due to the use of multivariate polynomials. This drawback is tackled in the current paper by applying a decoupling approach that results in a more parsimonious representation involving univariate polynomials. This work is carried out numerically on input-output data generated by a Bouc-Wen hysteretic model and follows up on earlier work of the authors. The current article discusses the polynomial decoupling approach and explores the selection of the number of univariate polynomials together with the polynomial degree. We have found that the presented decoupling approach is able to reduce the number of parameters of the full nonlinear model by up to about 50%, while maintaining a comparable output error level.

  20. Predicting Subnational Ebola Virus Disease Epidemic Dynamics from Sociodemographic Indicators

    PubMed Central

    Valeri, Linda; Patterson-Lomba, Oscar; Gurmu, Yared; Ablorh, Akweley; Bobb, Jennifer; Townes, F. William; Harling, Guy

    2016-01-01

    Background The recent Ebola virus disease (EVD) outbreak in West Africa has spread wider than any previous human EVD epidemic. While individual-level risk factors that contribute to the spread of EVD have been studied, the population-level attributes of subnational regions associated with outbreak severity have not yet been considered. Methods To investigate the area-level predictors of EVD dynamics, we integrated time series data on cumulative reported cases of EVD from the World Health Organization and covariate data from the Demographic and Health Surveys. We first estimated the early growth rates of epidemics in each second-level administrative district (ADM2) in Guinea, Sierra Leone and Liberia using exponential, logistic and polynomial growth models. We then evaluated how these growth rates, as well as epidemic size within ADM2s, were ecologically associated with several demographic and socio-economic characteristics of the ADM2, using bivariate correlations and multivariable regression models. Results The polynomial growth model appeared to best fit the ADM2 epidemic curves, displaying the lowest residual standard error. Each outcome was associated with various regional characteristics in bivariate models, however in stepwise multivariable models only mean education levels were consistently associated with a worse local epidemic. Discussion By combining two common methods—estimation of epidemic parameters using mathematical models, and estimation of associations using ecological regression models—we identified some factors predicting rapid and severe EVD epidemics in West African subnational regions. While care should be taken interpreting such results as anything more than correlational, we suggest that our approach of using data sources that were publicly available in advance of the epidemic or in real-time provides an analytic framework that may assist countries in understanding the dynamics of future outbreaks as they occur. PMID:27732614

  1. Mother-Child Discrepancy in Perceived Family Functioning and Adolescent Developmental Outcomes in Families Experiencing Economic Disadvantage in Hong Kong.

    PubMed

    Leung, Janet T Y; Shek, Daniel T L; Li, Lin

    2016-10-01

    Though growing attention has been devoted to examining informant discrepancies of family attributes in social science research, studies that examine how interactions between mother-reported and adolescent-reported family functioning predict adolescent developmental outcomes in underprivileged families are severely lacking. The current study investigated the difference between mothers and adolescents in their reports of family functioning, as well as the relationships between mother-reported and adolescent-reported family functioning and adolescent developmental outcomes in a sample of 432 Chinese single-mother families (mean age of adolescents = 13.7 years, 51.2 % girls, mean age of mothers = 43.5 years, 69.9 % divorced) experiencing economic disadvantage in Hong Kong. Polynomial regression analyses were conducted to assess whether discrepancy in family functioning between mother reports and adolescent reports predicted resilience, beliefs in the future, cognitive competence, self-efficacy and self-determination of adolescents. The results indicated that adolescents reported family functioning more negatively than did their mothers. Polynomial regression analyses showed that the interaction term between mothers' reports and adolescents' reports of family functioning predicted adolescent developmental outcomes in Chinese single-mother families living in poverty. Basically, under poor adolescent-reported family functioning, adolescent development would be relatively better if their mothers reported more positive family functioning. In contrast, under good adolescent-reported family functioning, adolescents expressed better developmental outcomes when mothers reported lower levels of family functioning than those mothers who reported higher levels of family functioning. The findings provide insights on how congruency and discrepancy between informant reports of family functioning would influence adolescent development. Theoretical and practical implications of the findings are discussed.

  2. Kurtosis Approach Nonlinear Blind Source Separation

    NASA Technical Reports Server (NTRS)

    Duong, Vu A.; Stubberud, Allen R.

    2005-01-01

    In this paper, we introduce a new algorithm for blind source signal separation for post-nonlinear mixtures. The mixtures are assumed to be linearly mixed from unknown sources first and then distorted by memoryless nonlinear functions. The nonlinear functions are assumed to be smooth and can be approximated by polynomials. Both the coefficients of the unknown mixing matrix and the coefficients of the approximating polynomials are estimated by the gradient descent method, conditional on the higher order statistical requirements. The results of simulation experiments presented in this paper demonstrate the validity and usefulness of our approach for nonlinear blind source signal separation. Keywords: independent component analysis, kurtosis, higher order statistics.

  3. Constraint analysis of two-dimensional quadratic gravity from BF theory

    NASA Astrophysics Data System (ADS)

    Valcárcel, C. E.

    2017-01-01

    Quadratic gravity in two dimensions can be formulated as a background field (BF) theory plus an interaction term which is polynomial in both the gauge and background fields. This formulation is similar to the one given by Freidel and Starodubtsev to obtain MacDowell-Mansouri gravity in four dimensions. In this article we use Dirac's Hamiltonian formalism to analyze the constraint structure of the two-dimensional polynomial BF action. After obtaining the constraints of the theory, we proceed with the Batalin-Fradkin-Vilkovisky procedure to obtain the transition amplitude. We also compare our results with the ones obtained from generalized dilaton gravity.

  4. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjustment of the best discovered network weights by a specially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on processing benchmark time series.

  5. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

    Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.

  6. On universal knot polynomials

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Mkrtchyan, R.; Morozov, A.

    2016-02-01

    We present universal knot polynomials for 2- and 3-strand torus knots in the adjoint representation, by universalization of the appropriate Rosso-Jones formula. According to universality, these polynomials coincide with the adjoint-colored HOMFLY and Kauffman polynomials at the SL and SO/Sp lines on Vogel's plane, respectively, and give their exceptional-group counterparts on the exceptional line. We demonstrate that the [m,n] = [n,m] topological invariance, when applicable, takes place on the entire Vogel's plane. We also suggest the universal form of the invariant of the figure-eight knot in the adjoint representation, and suggest the existence of such a universalization for any knot in the adjoint and its descendant representations. Properties of universal polynomials and applications of these results are discussed.

  7. Zernike Basis to Cartesian Transformations

    NASA Astrophysics Data System (ADS)

    Mathar, R. J.

    2009-12-01

    The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.
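
    The tabulation of radial polynomials as powers of the radial distance can be reproduced from the standard closed-form sum for the 2D circular case, valid when n - m is even and |m| ≤ n. A minimal sketch:

```python
from math import factorial

def zernike_radial(n, m, rho):
    """2D Zernike radial polynomial R_n^m(rho) as an explicit power sum."""
    m = abs(m)
    if (n - m) % 2:
        return 0.0
    return sum(
        (-1) ** s * factorial(n - s)
        / (factorial(s) * factorial((n + m) // 2 - s) * factorial((n - m) // 2 - s))
        * rho ** (n - 2 * s)
        for s in range((n - m) // 2 + 1)
    )

print(zernike_radial(4, 0, 1.0))   # R_4^0(1) = 1, as for every radial order
```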

  8. Chaos, Fractals, and Polynomials.

    ERIC Educational Resources Information Center

    Tylee, J. Louis; Tylee, Thomas B.

    1996-01-01

    Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
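
    The Newton-Raphson root-finding mentioned in the note is a few lines of code; iterating z ← z - p(z)/p'(z) from complex seeds is also the basis of the color fractals it discusses. A small sketch for the polynomial z³ - 1:

```python
import numpy as np

p = np.array([1.0, 0.0, 0.0, -1.0])      # coefficients of z^3 - 1
dp = np.polyder(p)

def newton(z, steps=50, tol=1e-12):
    # Newton-Raphson iteration from a complex starting point
    for _ in range(steps):
        step = np.polyval(p, z) / np.polyval(dp, z)
        z -= step
        if abs(step) < tol:
            break
    return z

print(newton(0.5 + 0.9j))   # converges to one of the three cube roots of 1
```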

  9. Universal Racah matrices and adjoint knot polynomials: Arborescent knots

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.

    2016-04-01

    By now it is well established that the quantum dimensions of descendants of the adjoint representation can be described in a universal form, independent of a particular family of simple Lie algebras. The Rosso-Jones formula then implies a universal description of the adjoint knot polynomials for torus knots, which in particular unifies the HOMFLY (SU(N)) and Kauffman (SO(N)) polynomials. For E8 the adjoint representation is also fundamental. We suggest extending the universality from the dimensions to the Racah matrices, and this immediately produces a unified description of the adjoint knot polynomials for all arborescent (double-fat) knots, including twist, 2-bridge and pretzel knots. Technically, we develop together the universality and the "eigenvalue conjecture", which expresses the Racah and mixing matrices through the eigenvalues of the quantum R-matrix; for dealing with the adjoint polynomials one has to extend it to the previously unknown 6 × 6 case. The adjoint polynomials do not distinguish between mutants and therefore are not very efficient in knot theory; however, universal polynomials in higher representations can probably be better in this respect.

  10. Imaging characteristics of Zernike and annular polynomial aberrations.

    PubMed

    Mahajan, Virendra N; Díaz, José Antonio

    2013-04-01

    The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.

  11. Applications of polynomial optimization in financial risk investment

    NASA Astrophysics Data System (ADS)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has found many important applications in optimization, financial economics, tensor eigenvalue problems, and other areas. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve specific cases. The results show that polynomial optimization is effective for some financial optimization problems.

  12. A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media

    DTIC Science & Technology

    2010-08-01

    ... applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC) ... represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic ... of orthogonal polynomials [34,38] or sparse grid approximations [39-41]. It is well known that the global polynomial interpolation cannot resolve lo...

  13. A Set of Orthogonal Polynomials That Generalize the Racah Coefficients or 6 - j Symbols.

    DTIC Science & Technology

    1978-03-01

    Generalized Hypergeometric Functions, Cambridge Univ. Press, Cambridge, 1966. [11] D. Stanton, Some basic hypergeometric polynomials arising from ... Some basic hypergeometric analogues of the classical orthogonal polynomials and applications, to appear. [3] C. de Boor and G. H. Golub, The ... Report #1833: A SET OF ORTHOGONAL POLYNOMIALS THAT GENERALIZE THE RACAH COEFFICIENTS OR 6-j SYMBOLS. Richard Askey and James Wilson.

  14. Tutte polynomial in functional magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    García-Castillón, Marlly V.

    2015-09-01

    Methods of graph theory are applied to the processing of functional magnetic resonance images; specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs, to which the Tutte polynomial is applied. The problem of computing the Tutte polynomial for a given graph is #P-hard, even for planar graphs. For practical application, the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial for the resulting neural networks is computed and some numerical invariants for the networks are obtained. Our results show that the Tutte polynomial is a powerful tool for analyzing and characterizing networks obtained from functional magnetic resonance imaging.
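
    Outside of Maple, the Tutte polynomial can be computed directly from its deletion-contraction recurrence; the naive recursion is exponential, which is acceptable for the small connectivity graphs considered here. A self-contained sketch using sympy for the symbolic variables, with a triangle standing in for an fMRI network:

```python
import sympy as sp

x, y = sp.symbols("x y")

def contract(edges, u, v):
    """Merge vertex v into u; edges between them become loops."""
    return [(u if a == v else a, u if b == v else b) for a, b in edges]

def connected(edges, s, t):
    """Is t reachable from s using the given edge list?"""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    seen, stack = {s}, [s]
    while stack:
        n = stack.pop()
        if n == t:
            return True
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return False

def tutte(edges):
    """Tutte polynomial by deletion-contraction on a multigraph edge list."""
    if not edges:
        return sp.Integer(1)
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                   # loop contributes a factor y
        return y * tutte(rest)
    if not connected(rest, u, v):                # bridge contributes a factor x
        return x * tutte(contract(rest, u, v))
    return tutte(rest) + tutte(contract(rest, u, v))   # delete + contract

print(sp.expand(tutte([(0, 1), (1, 2), (0, 2)])))   # triangle: x**2 + x + y
```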

  15. On the coefficients of integrated expansions and integrals of ultraspherical polynomials and their applications for solving differential equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2002-02-01

    An analytical formula expressing the ultraspherical coefficients of an expansion for an infinitely differentiable function that has been integrated an arbitrary number of times in terms of the coefficients of the original expansion of the function is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula is given that expresses explicitly, in terms of ultraspherical polynomials, the integrals of ultraspherical polynomials of any degree integrated an arbitrary number of times. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansion are stated and proved. Some applications of how to use ultraspherical polynomials for solving ordinary and partial differential equations are described.
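
    The Legendre case (ultraspherical polynomials with parameter 1/2) of such a coefficient mapping is available directly in numpy, which can serve as a numerical cross-check of formulae of this kind:

        import numpy as np
        from numpy.polynomial import legendre as L

        c = np.array([1.0, 0.5, 0.25, 0.125])  # f expanded in P_0..P_3
        c_int = L.legint(c, m=2)               # coefficients of f integrated twice
        print(c_int)
        print(L.legder(c_int, m=2))            # differentiating back recovers c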

  16. Rural counties chlamydia and gonorrhea rates in Pennsylvania among adolescents and young adults.

    PubMed

    Pinto, Casey N; Dorn, Lorah D; Chinchilli, Vernon M; Du, Ping; Chi, Guangqing

    2017-09-01

    American adolescents and young adults between the ages of 15 and 24 account for 50% of all sexually transmitted diseases (STDs) annually. Rural populations in this age group are often understudied, despite having factors that place them at higher risk for STDs. The purpose of this study was to evaluate the utility of time series analysis in the assessment of rural Pennsylvania county-level chlamydia and gonorrhea rates over time (2004-2014) for the 15- to 19- and 20- to 24-year-old age groups by gender. An exploratory analysis was completed using Pennsylvania STD surveillance case reports and census data to develop a linear mixed-effects model of the STD rate for each Pennsylvania county for the years 2004 through 2014 using 3-month increments. A cubic polynomial spline regression model was assumed over the 44 time points for each county to account for possible oscillations in the STD rate during the 11-year period. Eight of the 12 rural counties had a significant increase in chlamydia or gonorrhea rates, and five rural counties had significant decreases in chlamydia or gonorrhea rates from 2004 to 2014. Results from this study provide the first analysis of change in rates of STDs in rural settings and demonstrate the utility of time series analysis for populations with small sample sizes. Copyright © 2017 Elsevier Inc. All rights reserved.
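
    A minimal sketch of this model class, assuming synthetic data in place of the Pennsylvania surveillance series (column names and counts below are placeholders): a cubic B-spline time trend in the fixed effects and a random intercept per county, via statsmodels and patsy.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        counties, quarters = 12, 44
        df = pd.DataFrame({
            "county": np.repeat(np.arange(counties), quarters),
            "t": np.tile(np.arange(quarters), counties),
        })
        df["rate"] = 200 + 2 * df["t"] + rng.normal(0, 20, len(df))

        # cubic spline regression over the 44 time points, grouped by county
        model = smf.mixedlm("rate ~ bs(t, df=6, degree=3)", df,
                            groups=df["county"])
        print(model.fit().summary())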

  17. Use of error grid analysis to evaluate acceptability of a point of care prothrombin time meter.

    PubMed

    Petersen, John R; Vonmarensdorf, Hans M; Weiss, Heidi L; Elghetany, M Tarek

    2010-02-01

    Statistical methods (linear regression, correlation analysis, etc.) are frequently employed in comparing methods in the central laboratory (CL). Assessing the acceptability of point of care testing (POCT) equipment, however, is more difficult because statistically significant biases may not have an impact on clinical care. We showed how error grid (EG) analysis can be used to evaluate POCT PT INR against the CL. We compared results from 103 patients seen in an anti-coagulation clinic who were on Coumadin maintenance therapy, using fingerstick samples for POCT (Roche CoaguChek XS and S) and citrated venous blood samples for the CL (Stago STAR). To compare the clinical acceptability of results, we developed an EG with zones A, B, C and D. Using 2nd-order polynomial equation analysis, POCT results correlate highly with the CL for the CoaguChek XS (R^2 = 0.955) and CoaguChek S (R^2 = 0.93), respectively, but this does not indicate whether POCT results are clinically interchangeable with the CL. Using the EG, it is readily apparent which levels can be considered clinically identical to the CL despite analytical bias. We have demonstrated the usefulness of the EG in determining the acceptability of POCT PT INR testing and how it can be used to determine cut-offs where differences in POCT results may impact clinical care. Copyright 2009 Elsevier B.V. All rights reserved.

  18. Applicational possibilities of linear and non-linear (polynomial) regression and analysis of variance. III. Stability determination of pharmaceutical preparations: stability of diclofenac-sodium in Diclofen injections.

    PubMed

    Arambasić, M B; Jatić-Slavković, D

    2004-05-01

    This paper presents the application of a regression analysis program and a program for comparing linear regressions (a modified method for one-way analysis of variance), written in the BASIC programming language, to the determination of the content of Diclofenac-Sodium (the active ingredient in DIKLOFEN injections, ampules of 75 mg/3 ml). Stability testing of Diclofenac-Sodium was done by the isothermal method of accelerated aging at 4 different temperatures (30, 40, 50 and 60 degrees C) as a function of time (4 different durations of treatment: 0-155, 0-145, 0-74 and 0-44 days). The decrease in stability (the decrease in the mean value of Diclofenac-Sodium content, in %) at different temperatures as a function of time can be described by a linear dependence. From the regression equation values, the times were estimated in which the content of Diclofenac-Sodium (in %) will decrease by 10% of the initial value: 761.02 days at 30 degrees C, 397.26 days at 40 degrees C, 201.96 days at 50 degrees C, and 58.85 days at 60 degrees C. These estimated times (in days), as a function of temperature, are most suitably described by a 3rd-order parabola. Based on the parameter values that describe the 3rd-order parabola, the times were estimated in which the mean Diclofenac-Sodium content (in %) will fall by 10% of the initial value at average ambient temperatures of 20 and 25 degrees C: 1409.47 days (20 degrees C) and 1042.39 days (25 degrees C). Based on the value of Fisher's coefficient (F), the comparison of the trends of Diclofenac-Sodium content (in %) under the influence of different temperatures as a function of time shows a statistically very significant difference (P < 0.05) at 50 degrees C versus 60 degrees C, a statistically probably significant difference (P > 0.01) at 40 degrees C versus 50 degrees C, and no statistically significant difference (P > 0.05) at 30 degrees C versus 40 degrees C.

  19. Quadratically Convergent Method for Simultaneously Approaching the Roots of Polynomial Solutions of a Class of Differential Equations

    NASA Astrophysics Data System (ADS)

    Recchioni, Maria Cristina

    2001-12-01

    This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approaching the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients to a particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations which is associated with a given class of differential equations. The roots of the nonlinear equations are related to the roots of the polynomial solutions of the differential equations considered. Newton's method is applied to find the roots of these nonlinear equations. The nonsingularity of the roots of these nonlinear equations was studied in (Pasquini, 1994); in this paper, following the same lines, more favourable results are proven for the particular case of polynomial solutions with symmetrical zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out. We show that the algorithm derived by the proposed method is sometimes preferable to the classical QR type algorithms for computing the eigenvalues of the Jacobi matrices, even if these matrices are real and symmetric.

  20. On a Family of Multivariate Modified Humbert Polynomials

    PubMed Central

    Aktaş, Rabia; Erkuş-Duman, Esra

    2013-01-01

    This paper attempts to present a multivariable extension of generalized Humbert polynomials. The results obtained here include various families of multilinear and multilateral generating functions, miscellaneous properties, and also some special cases for these multivariable polynomials. PMID:23935411

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lue Xing; Sun Kun; Wang Pan

    In the framework of Bell-polynomial manipulations, under investigation hereby are three single-field bilinearizable equations: the (1+1)-dimensional shallow water wave model, the Boiti-Leon-Manna-Pempinelli model, and the (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to achieve the Baecklund transformations and Lax pairs associated with those three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for those three soliton equations can be, respectively, cast into the bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.

  2. An O(log² N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1989-01-01

    An O(log² N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite precision arithmetic.
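
    The scalar core of this approach is the three-term recurrence for the characteristic polynomials of the leading principal submatrices, whose Sturm-sequence sign changes count the eigenvalues below a shift. A minimal serial sketch follows (the paper's contribution is the O(log² N) parallel organization, not shown here); note the raw recurrence can over- or underflow for large N, which production codes avoid by scaling.

        import numpy as np

        def char_poly_seq(d, e, lam):
            # p_0 = 1, p_1 = d_0 - lam,
            # p_k = (d_{k-1} - lam) p_{k-1} - e_{k-2}^2 p_{k-2}
            p = np.empty(len(d) + 1)
            p[0], p[1] = 1.0, d[0] - lam
            for k in range(2, len(d) + 1):
                p[k] = (d[k - 1] - lam) * p[k - 1] - e[k - 2] ** 2 * p[k - 2]
            return p

        def count_below(d, e, lam):
            # Sturm property: sign changes in p_0..p_n = #eigenvalues < lam
            s = np.sign(char_poly_seq(d, e, lam))
            for k in range(1, len(s)):
                if s[k] == 0:
                    s[k] = -s[k - 1]        # convention for exact zeros
            return int(np.sum(s[:-1] * s[1:] < 0))

        def eig_by_bisection(d, e, j, lo, hi, tol=1e-12):
            # bisect for the j-th smallest eigenvalue inside [lo, hi]
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (lo, mid) if count_below(d, e, mid) > j else (mid, hi)
            return 0.5 * (lo + hi)

        # T = tridiag(1, 2, 1), n = 4: eigenvalues are 2 - 2cos(k*pi/5)
        d, e = np.full(4, 2.0), np.full(3, 1.0)
        print([round(eig_by_bisection(d, e, j, 0.0, 4.0), 6) for j in range(4)])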

  3. A Unified and Comprehensible View of Parametric and Kernel Methods for Genomic Prediction with Application to Rice.

    PubMed

    Jacquin, Laval; Cao, Tuong-Vi; Ahmadi, Nourollah

    2016-01-01

    One objective of this study was to provide readers with a clear and unified understanding of parametric statistical and kernel methods, used for genomic prediction, and to compare some of these in the context of rice breeding for quantitative traits. Furthermore, another objective was to provide a simple and user-friendly R package, named KRMM, which allows users to perform RKHS regression with several kernels. After introducing the concept of regularized empirical risk minimization, the connections between well-known parametric and kernel methods such as Ridge regression [i.e., genomic best linear unbiased predictor (GBLUP)] and reproducing kernel Hilbert space (RKHS) regression were reviewed. Ridge regression was then reformulated so as to show and emphasize the advantage of the kernel "trick" concept, exploited by kernel methods in the context of epistatic genetic architectures, over parametric frameworks used by conventional methods. Some parametric and kernel methods, namely the least absolute shrinkage and selection operator (LASSO), GBLUP, support vector machine regression (SVR), and RKHS regression, were thereupon compared for their genomic predictive ability in the context of rice breeding using three real data sets. Among the compared methods, RKHS regression and SVR were often the most accurate methods for prediction, followed by GBLUP and LASSO. An R function which allows users to perform RR-BLUP of marker effects, GBLUP and RKHS regression, with a Gaussian, Laplacian, polynomial or ANOVA kernel, in a reasonable computation time has been developed. Moreover, a modified version of this function, which allows users to tune kernels for RKHS regression, has also been developed and parallelized for HPC Linux clusters. The corresponding KRMM package and all scripts have been made publicly available.
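
    For readers who prefer Python, the same kernel-trick idea can be sketched with scikit-learn's KernelRidge in place of the KRMM R package; the marker matrix and phenotype below are simulated placeholders, not the rice data.

        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(0)
        X = rng.integers(0, 3, size=(200, 500)).astype(float)      # 0/1/2 genotypes
        y = X @ rng.normal(0, 0.1, 500) + rng.normal(0, 1.0, 200)  # toy phenotype

        # RKHS regression with a Gaussian (RBF) kernel; alpha is the ridge penalty
        model = KernelRidge(alpha=1.0, kernel="rbf", gamma=1e-3)
        model.fit(X[:150], y[:150])
        print(model.score(X[150:], y[150:]))   # predictive R^2 on held-out lines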

  4. Dynamical genetic programming in XCSF.

    PubMed

    Preen, Richard J; Bull, Larry

    2013-01-01

    A number of representation schemes have been presented for use within learning classifier systems, ranging from binary encodings to artificial neural networks. This paper presents results from an investigation into using a temporally dynamic symbolic representation within the XCSF learning classifier system. In particular, dynamical arithmetic networks are used to represent the traditional condition-action production system rules to solve continuous-valued reinforcement learning problems and to perform symbolic regression, finding competitive performance with traditional genetic programming on a number of composite polynomial tasks. In addition, the network outputs are later repeatedly sampled at varying temporal intervals to perform multistep-ahead predictions of a financial time series.

  5. Investigation on phase transitions of 1-decylammonium hydrochloride as the potential thermal energy storage material

    NASA Astrophysics Data System (ADS)

    Dan, Wen-Yan; Di, You-Ying; He, Dong-Hua; Liu, Yu-Pu

    2011-02-01

    1-Decylammonium hydrochloride was synthesized by the method of liquid phase synthesis. Chemical analysis, elemental analysis, and X-ray single crystal diffraction techniques were applied to characterize its composition and structure. Low-temperature heat capacities of the compound were measured with a precision automated adiabatic calorimeter over the temperature range from 78 to 380 K. Three solid-solid phase transitions were observed at the peak temperatures of 307.52 ± 0.13, 325.02 ± 0.19, and 327.26 ± 0.07 K. The molar enthalpies and entropies of the three phase transitions were determined based on the analysis of the heat capacity curves. Experimental molar heat capacities were fitted to two polynomial equations of heat capacity as a function of temperature by the least-squares method. Smoothed heat capacities and thermodynamic functions of the compound relative to the standard reference temperature 298.15 K were calculated and tabulated at intervals of 5 K based on the fitted polynomials.
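
    The least-squares step described above can be sketched as follows; the synthetic Cp(T) values are placeholders for the measured heat capacities of one phase region:

        import numpy as np

        rng = np.random.default_rng(1)
        T = np.linspace(78, 300, 45)                       # temperature / K
        Cp = 150 + 0.8 * T - 5e-4 * T**2 + rng.normal(0, 1, T.size)

        # Polynomial.fit maps T onto [-1, 1] internally (a reduced temperature)
        fit = np.polynomial.Polynomial.fit(T, Cp, deg=3)
        print(fit.convert().coef)     # coefficients in the ordinary power basis
        print(fit(298.15))            # smoothed Cp at the reference temperature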

  6. Interpolation problem for the solutions of linear elasticity equations based on monogenic functions

    NASA Astrophysics Data System (ADS)

    Grigor'ev, Yuri; Gürlebeck, Klaus; Legatiuk, Dmitrii

    2017-11-01

    Interpolation is an important tool for many practical applications, and very often it is beneficial to interpolate not only with a simple basis system, but rather with solutions of a certain differential equation, e.g. the elasticity equation. Typical examples of this type of interpolation are the collocation methods widely used in practice. It is known that interpolation theory is fully developed in the framework of classical complex analysis. However, in quaternionic analysis, which shows many analogies to complex analysis, the situation is more complicated due to the non-commutative multiplication. Thus, a fundamental theorem of algebra is not available, and standard tools from linear algebra cannot be applied in the usual way. To overcome these problems, a special system of monogenic polynomials, the so-called Pseudo Complex Polynomials, which share some properties of complex powers, is used. In this paper, we present an approach to deal with the interpolation problem, where solutions of elasticity equations in three dimensions are used as an interpolation basis.

  7. Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Saxena, Nitin

    Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to a serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n)·d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank-based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities. We design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x_1, x_2, ..., x_n) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.
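
    For contrast with the deterministic algorithm above, the classical randomized blackbox test (Schwartz-Zippel) takes only a few lines: a nonzero polynomial of total degree d vanishes at a uniformly random point of S^n with probability at most d/|S|.

        import random

        def probably_zero(blackbox, n_vars, degree, field_size, trials=20):
            # each trial errs with probability <= degree / field_size
            for _ in range(trials):
                point = [random.randrange(field_size) for _ in range(n_vars)]
                if blackbox(*point) != 0:
                    return False   # witnessed: certainly not identically zero
            return True            # identically zero with high probability

        # example over GF(101): p(x, y) = (x + y)^2 - x^2 - 2xy - y^2 == 0
        p = 101
        bb = lambda x, y: ((x + y) ** 2 - x * x - 2 * x * y - y * y) % p
        print(probably_zero(bb, n_vars=2, degree=2, field_size=p))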

  8. Recurrences and explicit formulae for the expansion and connection coefficients in series of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2004-08-01

    A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.

  9. Optical Performance Modeling of FUSE Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

    We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument, designed and developed by the Johns Hopkins University and launched in June 1999, is an astrophysics satellite which provides high resolution spectra (λ/Δλ = 20,000-25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument comprises four co-aligned, normal incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low spatial frequency deformations described by discrete polynomial terms, mid- and high-spatial frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal incidence (traditionally infrared, visible, and near ultraviolet mirror systems) and grazing incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low frequency surface errors are described in OSAC by using Zernike polynomials for normal incidence mirrors and Legendre-Fourier polynomials for grazing incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence systems or Legendre-Fourier polynomials for grazing incidence systems. It removes low order terms from the metrology data, calculates statistical ACV or PSD functions, and fits these data to OSAC models for the scatter analysis. In this paper we briefly describe the laboratory image testing of a FUSE spare mirror performed in the near and vacuum ultraviolet at Johns Hopkins University and the OSAC modeling of the test setup performed at NASA/GSFC. The test setup is a double-pass configuration consisting of a Hg discharge source, the FUSE off-axis parabolic mirror under test, an autocollimating flat mirror, and a tomographic imaging detector. Two additional, small fold flats are used in the optical train to accommodate the light source and the detector. The modeling is based on Zernike fitting and PSD analysis of surface metrology data measured by both the mirror vendor (Tinsley) and JHU. The results of our models agree well with the laboratory imaging data, thus validating our theoretical model. Finally, we predict the imaging performance of FUSE mirrors in their flight configuration at far-ultraviolet wavelengths.
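
    The low-order fitting step that METDAT performs can be sketched as an ordinary least-squares fit of a few Cartesian-form Zernike terms on the unit disk; the synthetic surface below stands in for the vendor and JHU metrology maps.

        import numpy as np

        n = 128
        yy, xx = np.mgrid[-1:1:n * 1j, -1:1:n * 1j]
        r2 = xx**2 + yy**2
        pupil = r2 <= 1.0                          # unit-disk aperture

        # piston, x/y tilt, defocus, and the two astigmatism terms
        basis = np.stack([np.ones_like(xx), xx, yy,
                          2 * r2 - 1, xx**2 - yy**2, 2 * xx * yy], axis=-1)

        rng = np.random.default_rng(0)
        surface = 0.3 * (2 * r2 - 1) + 0.1 * xx + 0.02 * rng.normal(size=xx.shape)

        A = basis[pupil]                           # (n_pixels, 6) design matrix
        coeffs, *_ = np.linalg.lstsq(A, surface[pupil], rcond=None)
        print(np.round(coeffs, 3))                 # ~[0, 0.1, 0, 0.3, 0, 0]
        residual = surface[pupil] - A @ coeffs     # input to the ACV/PSD step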

  10. On the coefficients of differentiated expansions of ultraspherical polynomials

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1989-01-01

    A formula expressing the coefficients of an expansion of ultraspherical polynomials which has been differentiated an arbitrary number of times in terms of the coefficients of the original expansion is proved. The particular examples of Chebyshev and Legendre polynomials are considered.

  11. On Polynomial Solutions of Linear Differential Equations with Polynomial Coefficients

    ERIC Educational Resources Information Center

    Si, Do Tan

    1977-01-01

    Demonstrates a method for solving linear differential equations with polynomial coefficients based on the fact that the operators z and D = d/dz are known to be Hermitian conjugates with respect to the Bargmann and Louck-Galbraith scalar products. (MLH)

  12. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of the existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how the techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the obtained algorithms.
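
    A brute-force baseline (not the paper's lambda-matrix algorithm) makes the object concrete: over GF(p) one can simply enumerate candidate matrices X and test X² + A1·X + A0 ≡ 0. The example coefficients below are constructed so that at least one solvent exists.

        import itertools
        import numpy as np

        p = 3
        X_true = np.array([[1, 1], [0, 1]])          # a planted solvent
        A1 = np.array([[2, 0], [1, 1]])
        A0 = (-(X_true @ X_true + A1 @ X_true)) % p  # forces X_true to solve

        solvents = []
        for m in itertools.product(range(p), repeat=4):
            X = np.array(m).reshape(2, 2)
            if ((X @ X + A1 @ X + A0) % p == 0).all():
                solvents.append(X)
        print(len(solvents), "solvent(s); first:\n", solvents[0])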

  13. On the Analytical and Numerical Properties of the Truncated Laplace Transform I

    DTIC Science & Technology

    2014-09-05

    contains generalizations and conclusions. 2 Preliminaries 2.1 The Legendre Polynomials In this subsection we summarize some of the properties of the standard Legendre Polynomials, and restate these properties for shifted and normalized forms of the Legendre Polynomials. We define the Shifted Legendre Polynomial of degree k = 0, 1, ..., which we will be denoting by P*_k, by the formula P*_k(x) = P_k(2x - 1), where P_k is the Legendre…

  14. Development of Fast Deterministic Physically Accurate Solvers for Kinetic Collision Integral for Applications of Near Space Flight and Control Devices

    DTIC Science & Technology

    2015-08-31

    following functions were used: … where … are the Legendre polynomials of degree … It is assumed that the coefficient standing with … has the form… enforce relaxation rates of high order moments, higher order polynomial basis functions are used. The use of high order polynomials results in strong… enforced while only polynomials up to second degree were used in the representation of the collision frequency. It can be seen that the new model

  15. Effects of Air Drag and Lunar Third-Body Perturbations on Motion Near a Reference KAM Torus

    DTIC Science & Technology

    2011-03-01

    …body; m: 1) mass of satellite, 2) order of associated Legendre polynomial; n: 1) mean motion, 2) degree of associated Legendre polynomial; n3: mean motion… physical momentum; pi: ith physical momentum; Pmn: associated Legendre polynomial of order m and degree n; q̇: physical coordinate derivatives vector, [q̇1… …are constants specifying the shape of the gravitational field; and Pmn are associated Legendre polynomials. When m = n = 0, the geopotential function

  16. Luigi Gatteschi's work on asymptotics of special functions and their zeros

    NASA Astrophysics Data System (ADS)

    Gautschi, Walter; Giordano, Carla

    2008-12-01

    A good portion of Gatteschi's research publications (about 65%) is devoted to asymptotics of special functions and their zeros. The special functions studied most prominently are classical orthogonal polynomials, notably Jacobi polynomials and their special cases, Laguerre polynomials, and, by implication, Hermite polynomials. Other important classes of special functions dealt with are Bessel functions of the first and second kind, Airy functions, and confluent hypergeometric functions, both in Tricomi's and Whittaker's form. This work is reviewed here and organized along methodological lines.

  17. Polynomial compensation, inversion, and approximation of discrete time linear systems

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1987-01-01

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
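
    The scalar (single-input, single-output) version of this idea is easy to sketch: choose FIR coefficients g minimizing ||h * g - d||, where * is convolution and d is a delayed unit impulse, by solving the corresponding least-squares (normal-equation) problem. The impulse response, filter length, and delay below are illustrative.

        import numpy as np
        from scipy.linalg import toeplitz

        h = np.array([1.0, 0.5, 0.25])      # example system impulse response
        L, delay = 16, 4                    # inverse length and target delay
        N = L + len(h) - 1                  # length of the full convolution

        # convolution as a Toeplitz matrix: H @ g == h * g
        H = toeplitz(np.r_[h, np.zeros(N - len(h))], np.zeros(L))
        d = np.zeros(N)
        d[delay] = 1.0                      # desired output: delayed impulse
        g, *_ = np.linalg.lstsq(H, d, rcond=None)
        print(np.round(np.convolve(h, g), 3))   # ~ delayed unit impulse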

  18. Empirical Modeling of Plant Gas Fluxes in Controlled Environments

    NASA Technical Reports Server (NTRS)

    Cornett, Jessie David

    1994-01-01

    As humans extend their reach beyond the earth, bioregenerative life support systems must replace the resupply and physical/chemical systems now used. The Controlled Ecological Life Support System (CELSS) will utilize plants to recycle the carbon dioxide (CO2) and excrement produced by humans and return oxygen (O2), purified water and food. CELSS design requires knowledge of gas flux levels for net photosynthesis (PS_n), dark respiration (R_d) and evapotranspiration (ET). Full season gas flux data regarding these processes for wheat (Triticum aestivum), soybean (Glycine max) and rice (Oryza sativa) from published sources were used to develop empirical models. Univariate models relating crop age (days after planting) and gas flux were fit by simple regression. Models are either high order (5th to 8th) or more complex polynomials whose curves describe crop development characteristics. The models provide good estimates of gas flux maxima, but are of limited utility. To broaden the applicability, data were transformed to dimensionless or correlation formats and, again, fit by regression. Polynomials, similar to those in the initial effort, were selected as the most appropriate models. These models indicate that, within a cultivar, gas flux patterns appear remarkably similar prior to maximum flux, but exhibit considerable variation beyond this point. This suggests that more broadly applicable models of plant gas flux are feasible, but univariate models defining gas flux as a function of crop age are too simplistic. Multivariate models using CO2 and crop age were fit for PS_n and R_d by multiple regression. In each case, the selected model is a subset of a full third order model with all possible interactions. These models are improvements over the univariate models because they incorporate more than the single factor, crop age, as the primary variable governing gas flux. They are still limited, however, by their reliance on the other environmental conditions under which the original data were collected. Three-dimensional plots representing the response surface of each model are included. Suitability of using empirical models to generate engineering design estimates is discussed. Recommendations for the use of more complex multivariate models to increase versatility are included.
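
    A sketch of the multivariate case described last, assuming synthetic data in place of the published gas-flux measurements: a full third-order polynomial design in CO2 level and crop age (from which a subset would be selected), fit by multiple regression.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.preprocessing import PolynomialFeatures

        rng = np.random.default_rng(0)
        co2 = rng.uniform(350, 1200, 300)        # ppm
        age = rng.uniform(0, 80, 300)            # days after planting
        psn = (0.02 * co2 + 1.5 * age - 0.015 * age**2
               + 1e-4 * co2 * age + rng.normal(0, 2, 300))

        X = np.column_stack([co2, age])
        design = PolynomialFeatures(degree=3,
                                    include_bias=False).fit_transform(X)
        model = LinearRegression().fit(design, psn)
        print(round(model.score(design, psn), 3))   # in-sample R^2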

  19. Comparison of volatility function technique for risk-neutral densities estimation

    NASA Astrophysics Data System (ADS)

    Bahaludin, Hafizah; Abdullah, Mimi Hafizah

    2017-08-01

    The volatility function technique, based on interpolation, plays an important role in extracting the risk-neutral density (RND) of options. The aim of this study is to compare the performances of two interpolation approaches, namely a smoothing spline and a fourth-order polynomial, in extracting the RND. The implied volatilities of options with respect to strike prices/delta are interpolated to obtain a well-behaved density. The statistical analysis and forecast accuracy are tested using moments of the distribution. The difference between the first moment of the distribution and the price of the underlying asset at maturity is used as an input to analyze forecast accuracy. RNDs are extracted from Dow Jones Industrial Average (DJIA) index options with a one-month constant maturity for the period from January 2011 until December 2015. The empirical results suggest that estimating the RND using a fourth-order polynomial is more appropriate than using a smoothing spline, in that the fourth-order polynomial gives the lowest mean square error (MSE). The results can be used to help market participants capture market expectations of future developments of the underlying asset.
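
    A minimal sketch of the fourth-order-polynomial variant, with illustrative smile data in place of the DJIA options: fit the polynomial to implied volatilities, map back to call prices with Black-Scholes, and take the second strike-derivative (the Breeden-Litzenberger relation) to get the density.

        import numpy as np
        from scipy.stats import norm

        def bs_call(S0, K, T, r, sigma):
            d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
            d2 = d1 - sigma * np.sqrt(T)
            return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

        S0, r, T = 100.0, 0.01, 1 / 12                      # one-month maturity
        K_obs = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
        iv_obs = np.array([0.32, 0.26, 0.22, 0.21, 0.23])   # a typical smile

        coeffs = np.polyfit(K_obs, iv_obs, 4)               # 4th-order polynomial
        K = np.linspace(80, 120, 401)
        C = bs_call(S0, K, T, r, np.polyval(coeffs, K))
        dK = K[1] - K[0]
        rnd = np.exp(r * T) * np.gradient(np.gradient(C, dK), dK)
        print("probability mass on the grid ~", (rnd * dK).sum())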

  20. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

    Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally manually excluded using rectangular masks in image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection. The extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted as polynomial curves/surfaces, which were then subtracted from raw images to get the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme was presented, followed by an investigation of the influence of sliding-window size and polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
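
    A minimal sketch of the second step under a given foreground mask (the automated segmentation of the first step is assumed here as a known boolean array): fit a low-order polynomial to the background pixels of each scan line and subtract it.

        import numpy as np

        def flatten_lines(image, mask, order=2):
            # mask is True on foreground features, which are excluded from the fit
            out = np.empty_like(image, dtype=float)
            x = np.arange(image.shape[1])
            for i, row in enumerate(image):
                bg = ~mask[i]
                coeffs = np.polyfit(x[bg], row[bg], order)
                out[i] = row - np.polyval(coeffs, x)
            return out

        # toy image: curved background plus one raised square feature
        img = np.fromfunction(lambda i, j: 0.01 * j + 1e-4 * j**2, (64, 64))
        img[20:30, 20:30] += 5.0
        mask = np.zeros(img.shape, dtype=bool)
        mask[20:30, 20:30] = True
        flat = flatten_lines(img, mask)
        print(np.abs(flat[~mask]).max())   # background residual ~ 0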
