Sample records for polynomial response surface

  1. Construction of Response Surface with Higher Order Continuity and Its Application to Reliability Engineering

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, T.; Romero, V. J.

    2002-01-01

The usefulness of piecewise polynomials with C1 and C2 derivative continuity for response surface construction is examined. A Moving Least Squares (MLS) method is developed and compared with four other interpolation methods, including kriging. First, the selected methods are applied and compared with one another in a two-design-variable problem with a known theoretical response function. Next, the methods are tested in a four-design-variable problem from a reliability-based design application. In general, the piecewise polynomial methods with higher-order derivative continuity produce less error in the response prediction. Among the methods evaluated, the MLS method was found to be superior for response surface construction.
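    A minimal sketch of a moving least squares response surface in two design variables is given below; the Gaussian weight function, bandwidth, and quadratic basis are illustrative assumptions, since the record does not specify them.

    ```python
    import numpy as np

    def mls_predict(x_query, X, y, h=0.4):
        """Moving least squares prediction at x_query.

        X : (n, 2) array of sample sites, y : (n,) responses.
        A quadratic basis is re-fitted at every query point with
        Gaussian weights centred on that point (bandwidth h).
        """
        def basis(p):
            x1, x2 = p
            return np.array([1.0, x1, x2, x1 * x1, x1 * x2, x2 * x2])

        d2 = np.sum((X - x_query) ** 2, axis=1)
        w = np.exp(-d2 / h ** 2)                  # Gaussian weights
        B = np.array([basis(p) for p in X])       # (n, 6) local design matrix
        BtW = B.T * w                             # equals B^T diag(w)
        a = np.linalg.solve(BtW @ B, BtW @ y)     # weighted least squares
        return basis(x_query) @ a

    # Recover a known smooth response from scattered samples
    rng = np.random.default_rng(0)
    X = rng.uniform(-1.0, 1.0, size=(100, 2))
    y = np.sin(np.pi * X[:, 0]) * np.cos(np.pi * X[:, 1])
    print(mls_predict(np.array([0.2, -0.1]), X, y))
    ```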

  2. Comparison of Response Surface and Kriging Models in the Multidisciplinary Design of an Aerospike Nozzle

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.

    1998-01-01

Response surface models and kriging models are compared for approximating non-random, deterministic computer analyses. After discussing the traditional response surface approach for constructing polynomial approximation models, kriging is presented as an alternative statistically based approximation method for the design and analysis of computer experiments. Both approximation methods are applied to the multidisciplinary design and analysis of an aerospike nozzle, which consists of a computational fluid dynamics model and a finite element analysis model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations. Four optimization problems are formulated and solved using both approximation models. While neither approximation technique consistently outperforms the other in this example, kriging models using only a constant for the underlying global model and a Gaussian correlation function perform as well as the second-order polynomial response surface models.

  3. A comparison of polynomial approximations and artificial neural nets as response surfaces

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.; Barthelemy, Jean-Francois M.

    1992-01-01

    Artificial neural nets and polynomial approximations were used to develop response surfaces for several test problems. Based on the number of functional evaluations required to build the approximations and the number of undetermined parameters associated with the approximations, the performance of the two types of approximations was found to be comparable. A rule of thumb is developed for determining the number of nodes to be used on a hidden layer of an artificial neural net, and the number of designs needed to train an approximation is discussed.

  4. Method for Constructing Composite Response Surfaces by Combining Neural Networks with other Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2003-01-01

    A method and system for design optimization that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The present invention employs a unique strategy called parameter-based partitioning of the given design space. In the design procedure, a sequence of composite response surfaces based on both neural networks and polynomial fits is used to traverse the design space to identify an optimal solution. The composite response surface has both the power of neural networks and the economy of low-order polynomials (in terms of the number of simulations needed and the network training requirements). The present invention handles design problems with many more parameters than would be possible using neural networks alone and permits a designer to rapidly perform a variety of trade-off studies before arriving at the final design.

  5. Response Surface Modeling Tolerance and Inference Error Risk Specifications: Proposed Industry Standards

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2012-01-01

    This paper reviews the derivation of an equation for scaling response surface modeling experiments. The equation represents the smallest number of data points required to fit a linear regression polynomial so as to achieve certain specified model adequacy criteria. Specific criteria are proposed which simplify an otherwise rather complex equation, generating a practical rule of thumb for the minimum volume of data required to adequately fit a polynomial with a specified number of terms in the model. This equation and the simplified rule of thumb it produces can be applied to minimize the cost of wind tunnel testing.
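    The record does not reproduce the scaling equation itself, so the sketch below only illustrates the bookkeeping involved: counting the terms of a full d-factor polynomial of order k and inflating that count by an assumed margin so that residual degrees of freedom remain; the margin value is an assumption, not the criterion derived in the paper.

    ```python
    from math import ceil, comb

    def n_terms(d, k):
        """Number of coefficients in a full polynomial of order k
        in d factors, including the intercept: C(d + k, k)."""
        return comb(d + k, k)

    def min_points(d, k, margin=1.5):
        """Illustrative minimum data volume: the term count inflated by a
        safety margin so residual degrees of freedom remain for lack-of-fit
        and pure-error estimates (margin is a placeholder assumption)."""
        return ceil(margin * n_terms(d, k))

    for d in (2, 4, 6):
        print(d, n_terms(d, 2), min_points(d, 2))
    ```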

  6. Comparison of polynomial approximations and artificial neural nets for response surfaces in engineering optimization

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1991-01-01

Engineering optimization problems involve minimizing some function subject to constraints. In areas such as aircraft optimization, the constraint equations may come from numerous disciplines, and response surfaces can simplify the transfer of information between these disciplines and the optimization algorithm. Response surfaces are also suited to problems that require numerous re-optimizations, such as multi-objective optimization, or to problems where the design space contains numerous local minima, thus requiring repeated optimizations from different initial designs. Their use has been limited, however, by the fact that developing response surfaces requires function evaluations at randomly selected or preselected points in the design space. Thus, they have been thought to be inefficient compared to algorithms that proceed directly to the optimum solution. A development has taken place in the last several years which may affect the desirability of using response surfaces: artificial neural nets may be more efficient in developing response surfaces than the polynomial approximations that have been used in the past. This development is the concern of the present work.

  7. UTILIZATION OF A RESPONSE-SURFACE TECHNIQUE IN THE STUDY OF PLANT RESPONSES TO OZONE AND SULFUR DIOXIDE MIXTURES

    EPA Science Inventory

    A second order rotatable design was used to obtain polynomial equations describing the effects of combinations of sulfur dioxide (SO2) and ozone (O3) on foliar injury and plant growth. The response surfaces derived from these equations were displayed as contour or isometric (3-di...

8. Method for Constructing Composite Response Surfaces by Combining Neural Networks with Polynomial Interpolation or Estimation Techniques

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan (Inventor); Madavan, Nateri K. (Inventor)

    2007-01-01

A method and system for data modeling that incorporates the advantages of both traditional response surface methodology (RSM) and neural networks is disclosed. The invention partitions the parameters into a first set of s simple parameters, where the observable data are expressible as low-order polynomials, and c complex parameters that reflect more complicated variation of the observed data. Variation of the data with the simple parameters is modeled using polynomials, and variation of the data with the complex parameters at each vertex is analyzed using a neural network. Variations with the simple parameters and with the complex parameters are expressed using a first sequence of shape functions and a second sequence of neural network functions, respectively. The first and second sequences are multiplicatively combined to form a composite response surface, dependent upon the parameter values, that can be used to identify an accurate model.

  9. Leader-follower value congruence in social responsibility and ethical satisfaction: a polynomial regression analysis.

    PubMed

    Kang, Seung-Wan; Byun, Gukdo; Park, Hun-Joon

    2014-12-01

    This paper presents empirical research into the relationship between leader-follower value congruence in social responsibility and the level of ethical satisfaction for employees in the workplace. 163 dyads were analyzed, each consisting of a team leader and an employee working at a large manufacturing company in South Korea. Following current methodological recommendations for congruence research, polynomial regression and response surface modeling methodologies were used to determine the effects of value congruence. Results indicate that leader-follower value congruence in social responsibility was positively related to the ethical satisfaction of employees. Furthermore, employees' ethical satisfaction was stronger when aligned with a leader with high social responsibility. The theoretical and practical implications are discussed.

  10. Application of response surface techniques to helicopter rotor blade optimization procedure

    NASA Technical Reports Server (NTRS)

    Henderson, Joseph Lynn; Walsh, Joanne L.; Young, Katherine C.

    1995-01-01

    In multidisciplinary optimization problems, response surface techniques can be used to replace the complex analyses that define the objective function and/or constraints with simple functions, typically polynomials. In this work a response surface is applied to the design optimization of a helicopter rotor blade. In previous work, this problem has been formulated with a multilevel approach. Here, the response surface takes advantage of this decomposition and is used to replace the lower level, a structural optimization of the blade. Problems that were encountered and important considerations in applying the response surface are discussed. Preliminary results are also presented that illustrate the benefits of using the response surface.

  11. Monte Carlo Solution to Find Input Parameters in Systems Design Problems

    NASA Astrophysics Data System (ADS)

    Arsham, Hossein

    2013-06-01

    Most engineering system designs, such as product, process, and service design, involve a framework for arriving at a target value for a set of experiments. This paper considers a stochastic approximation algorithm for estimating the controllable input parameter within a desired accuracy, given a target value for the performance function. Two different problems, what-if and goal-seeking problems, are explained and defined in an auxiliary simulation model, which represents a local response surface model in terms of a polynomial. A method of constructing this polynomial by a single run simulation is explained. An algorithm is given to select the design parameter for the local response surface model. Finally, the mean time to failure (MTTF) of a reliability subsystem is computed and compared with its known analytical MTTF value for validation purposes.

  12. Investigation on imperfection sensitivity of composite cylindrical shells using the nonlinearity reduction technique and the polynomial chaos method

    NASA Astrophysics Data System (ADS)

    Liang, Ke; Sun, Qin; Liu, Xiaoran

    2018-05-01

The theoretical buckling load of a perfect cylinder must be reduced by a knock-down factor to account for structural imperfections. The EU project DESICOS proposed a new robust design for imperfection-sensitive composite cylindrical shells using a combination of deterministic and stochastic simulations; however, the high computational complexity seriously affects its wider application in aerospace structure design. In this paper, the nonlinearity reduction technique and the polynomial chaos method are implemented in the robust design process to significantly lower computational costs. The modified Newton-type Koiter-Newton approach, which largely reduces the number of degrees of freedom in the nonlinear finite element model, serves as the nonlinear buckling solver to trace the equilibrium paths of geometrically nonlinear structures efficiently. The non-intrusive polynomial chaos method provides the buckling load as an approximate chaos response surface with respect to imperfections and uses the buckling solver codes as black boxes. A fast large-sample study can then be applied to the approximate chaos response surface to obtain the probability characteristics of the buckling loads. The performance of the method in terms of reliability, accuracy and computational effort is demonstrated with an unstiffened CFRP cylinder.

  13. Two-dimensional orthonormal trend surfaces for prospecting

    NASA Astrophysics Data System (ADS)

    Sarma, D. D.; Selvaraj, J. B.

Orthonormal polynomials have distinct advantages over conventional polynomials: the equations for evaluating trend coefficients are not ill-conditioned, and the convergence of the method is better than that of the conventional least-squares approximation, so the orthonormal-function approach provides a powerful alternative to the least-squares method. In this paper, orthonormal polynomials in two dimensions are obtained using the Gram-Schmidt method for a polynomial series of the type Z = 1 + x + y + x² + xy + y² + … + yⁿ, where x and y are the locational coordinates and Z is the value of the variable under consideration. Trend-surface analysis, which has wide applications in prospecting, has been carried out using the orthonormal polynomial approach for two sample data sets from India: gold accumulation data from the Kolar Gold Field, and gravity data. A comparison of the orthonormal polynomial trend surfaces with those obtained by the classical least-squares method has been made for the two data sets. In both cases, the orthonormal polynomial surfaces gave an improved fit to the data. A flowchart and a FORTRAN-IV computer program for deriving orthonormal polynomials of any order and for using them to fit trend surfaces are included. The program has provision for logarithmic transformation of the Z variable; if log-transformation is performed, the predicted Z values are reconverted to the original units and the trend-surface map is generated for use. The gold assay data related to the Champion lode system of the Kolar Gold Fields, for which a 9th-degree orthonormal trend surface was fitted, could be used for further prospecting in the area.
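    A brief sketch of the idea, assuming QR factorization as the numerical equivalent of Gram-Schmidt orthonormalization of the monomial columns over the sample sites; the data and polynomial degree are hypothetical.

    ```python
    import numpy as np

    def monomial_matrix(x, y, degree):
        """Columns 1, x, y, x^2, xy, y^2, ... up to the given degree."""
        cols = []
        for total in range(degree + 1):
            for i in range(total + 1):
                cols.append(x ** (total - i) * y ** i)
        return np.column_stack(cols)

    def orthonormal_trend(x, y, z, degree):
        """Trend surface via polynomials orthonormal over the sample sites.

        QR factorisation of the monomial matrix is numerically equivalent to
        Gram-Schmidt orthonormalisation of its columns, so the coefficient
        equations are not ill-conditioned."""
        M = monomial_matrix(x, y, degree)
        Q, _ = np.linalg.qr(M)       # columns of Q: discrete orthonormal basis
        c = Q.T @ z                  # trend coefficients
        return Q @ c, c              # fitted trend values and coefficients

    # Hypothetical assay values at scattered locations
    rng = np.random.default_rng(1)
    x, y = rng.uniform(0.0, 10.0, (2, 200))
    z = 2.0 + 0.3 * x - 0.1 * y + 0.02 * x * y + rng.normal(0, 0.2, 200)
    trend, c = orthonormal_trend(x, y, z, degree=2)
    print(np.round(c, 3))
    ```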

  14. Cylinder surface test with Chebyshev polynomial fitting method

    NASA Astrophysics Data System (ADS)

    Yu, Kui-bang; Guo, Pei-ji; Chen, Xi

    2017-10-01

The Zernike polynomial fitting method is often applied in the testing of optical components and systems to represent the wavefront and surface error over a circular domain. Zernike polynomials are not orthogonal over a rectangular region, which makes them unsuitable for testing optical elements with rectangular apertures, such as cylinder surfaces. Applying Chebyshev polynomials, which are orthogonal over a rectangular area, as a substitute in the fitting method solves this problem. For a cylinder surface with a diameter of 50 mm and an F-number of 1/7, a measuring system based on Fizeau interferometry has been designed in Zemax. The expressions of the two-dimensional Chebyshev polynomials are given and their relationship with the aberrations is presented. Furthermore, Chebyshev polynomials are used as basis terms to analyze the rectangular-aperture test data. The coefficients of the different terms are obtained from the test data through the method of least squares. Comparing the Chebyshev spectra under different misalignments shows that each misalignment is independent and has a definite relationship with certain Chebyshev terms. The simulation results show that, through the Legendre polynomial fitting method, a great improvement in the efficiency of the detection and adjustment of the cylinder surface test can be achieved.
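    A small sketch of the fitting step described here, using NumPy's two-dimensional Chebyshev Vandermonde matrix and ordinary least squares; the synthetic surface map and degrees are assumptions, not the paper's data.

    ```python
    import numpy as np
    from numpy.polynomial import chebyshev as C

    def fit_cheby2d(x, y, w, deg=(4, 4)):
        """Least-squares fit of a rectangular-aperture surface map w(x, y)
        with 2-D Chebyshev polynomials; x and y should be scaled to [-1, 1].
        Returns coefficients c[i, j] of T_i(x) * T_j(y)."""
        V = C.chebvander2d(x, y, deg)                 # (n, (dx+1)*(dy+1))
        coef, *_ = np.linalg.lstsq(V, w, rcond=None)
        return coef.reshape(deg[0] + 1, deg[1] + 1)

    # Synthetic surface error over a rectangular aperture
    rng = np.random.default_rng(1)
    x = rng.uniform(-1, 1, 500)
    y = rng.uniform(-1, 1, 500)
    w = 0.2 * x * y + 0.05 * (2 * y**2 - 1) + 1e-3 * rng.standard_normal(500)
    coef = fit_cheby2d(x, y, w)
    print(np.round(coef[:3, :3], 3))   # T1(x)T1(y) and T0(x)T2(y) dominate
    ```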

  15. Influence of surface error on electromagnetic performance of reflectors based on Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Li, Tuanjie; Shi, Jiachen; Tang, Yaqiong

    2018-04-01

This paper investigates the influence of the surface error distribution on the electromagnetic performance of antennas. The normalized Zernike polynomials are used to describe a smooth and continuous deformation surface. Based on geometrical optics and a piecewise linear fitting method, the electrical performance of a reflector described by the Zernike polynomials is derived to reveal the relationship between the surface error distribution and the electromagnetic performance. A database relating surface figure to electrical performance is then built for ideal and deformed surfaces to enable rapid calculation of far-field electrical performance. A simulation analysis of the influence of the Zernike polynomials on the electrical properties of an axis-symmetrical reflector, with an axial-mode helical antenna as the feed, is further conducted to verify the correctness of the proposed method. Finally, the rules governing the influence of the surface error distribution on electromagnetic performance are summarized. The simulation results show that some terms of the Zernike polynomials may decrease the amplitude of the main lobe of the antenna pattern, and some may reduce the pointing accuracy. This work provides a new concept for reflector shape adjustment in the manufacturing process.

  16. COMPUTATIONAL METHODS FOR SENSITIVITY AND UNCERTAINTY ANALYSIS FOR ENVIRONMENTAL AND BIOLOGICAL MODELS

    EPA Science Inventory

    This work introduces a computationally efficient alternative method for uncertainty propagation, the Stochastic Response Surface Method (SRSM). The SRSM approximates uncertainties in model outputs through a series expansion in normal random variables (polynomial chaos expansion)...
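    The SRSM itself is not detailed in this summary; the sketch below shows a generic non-intrusive polynomial chaos regression for a single standard-normal input, with the toy model function and degree as assumptions.

    ```python
    import numpy as np
    from math import factorial
    from numpy.polynomial import hermite_e as He

    def srsm_1d(model, degree=3, n_samples=200, seed=0):
        """Stochastic response surface for one standard-normal input:
        regress model outputs onto probabilists' Hermite polynomials and
        recover the output mean and variance from the coefficients."""
        rng = np.random.default_rng(seed)
        xi = rng.standard_normal(n_samples)          # standard normal germ
        y = model(xi)
        V = He.hermevander(xi, degree)               # He_0 ... He_degree
        c, *_ = np.linalg.lstsq(V, y, rcond=None)
        mean = c[0]
        # E[He_k^2] = k!, so the variance is a weighted sum of squares
        var = sum(c[k] ** 2 * factorial(k) for k in range(1, degree + 1))
        return c, mean, var

    # Toy model: output is an exponential function of the uncertain input
    c, mean, var = srsm_1d(lambda x: np.exp(0.3 * x))
    print(mean, var)   # mean should be close to exp(0.3**2 / 2)
    ```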

  17. Comparison of Response Surface and Kriging Models for Multidisciplinary Design Optimization

    NASA Technical Reports Server (NTRS)

    Simpson, Timothy W.; Korte, John J.; Mauery, Timothy M.; Mistree, Farrokh

    1998-01-01

In this paper, we compare and contrast the use of second-order response surface models and kriging models for approximating non-random, deterministic computer analyses. After reviewing the response surface method for constructing polynomial approximations, kriging is presented as an alternative approximation method for the design and analysis of computer experiments. Both methods are applied to the multidisciplinary design of an aerospike nozzle which consists of a computational fluid dynamics model and a finite-element model. Error analysis of the response surface and kriging models is performed along with a graphical comparison of the approximations, and four optimization problems are formulated and solved using both sets of approximation models. The second-order response surface models and the kriging models, which use a constant underlying global model and a Gaussian correlation function, yield comparable results.

  18. Effect of design selection on response surface performance

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1993-01-01

    Artificial neural nets and polynomial approximations were used to develop response surfaces for several test problems. Based on the number of functional evaluations required to build the approximations and the number of undetermined parameters associated with the approximations, the performance of the two types of approximations was found to be comparable. A rule of thumb is developed for determining the number of nodes to be used on a hidden layer of an artificial neural net and the number of designs needed to train an approximation is discussed.

  19. Combining freeform optics and curved detectors for wide field imaging: a polynomial approach over squared aperture.

    PubMed

    Muslimov, Eduard; Hugot, Emmanuel; Jahn, Wilfried; Vives, Sebastien; Ferrari, Marc; Chambion, Bertrand; Henry, David; Gaschet, Christophe

    2017-06-26

In recent years significant progress has been achieved in the design and fabrication of optical systems based on freeform optical surfaces. They make it possible to build fast, wide-angle and high-resolution systems which are very compact and free of obscuration. However, the field of freeform surface design techniques still remains underexplored. In the present paper we use the mathematical apparatus of orthogonal polynomials defined over a square aperture, developed earlier for wavefront reconstruction tasks, to describe the shape of a mirror surface. Two cases, namely Legendre polynomials and a generalization of the Zernike polynomials on a square, are considered. The potential advantages of these polynomial sets are demonstrated on the example of a three-mirror unobscured telescope with F/# = 2.5 and FoV = 7.2x7.2°. In addition, we discuss the possibility of using curved detectors in such a design.

  20. Nano-transfersomes as a novel carrier for transdermal delivery.

    PubMed

    Chaudhary, Hema; Kohli, Kanchan; Kumar, Vikash

    2013-09-15

The aim of this study was to design and optimize nano-transfersomes of Diclofenac diethylamine (DDEA) and Curcumin (CRM). A three-factor, three-level Box-Behnken design was used to derive a second-order polynomial equation and to construct 2-D (contour) and 3-D (response surface) plots for prediction of responses. The independent variables studied were the ratio of lipid to surfactant (X1), the weight of lipid to surfactant (X2) and the sonication time (X3); the dependent variables were the entrapment efficiency of DDEA (Y1), the entrapment efficiency of CRM (Y2), the particle size (Y3), the flux of DDEA (Y4), and the flux of CRM (Y5). The 2-D and 3-D plots were drawn and the statistical validity of the polynomials was established to find the composition of the optimized formulation. The design established the role of the derived polynomial equation and the 2-D and 3-D plots in predicting the values of the dependent variables for the preparation and optimization of nano-transfersomes for transdermal drug release. Copyright © 2013 Elsevier B.V. All rights reserved.

  1. Applications of Response Surface-Based Methods to Noise Analysis in the Conceptual Design of Revolutionary Aircraft

    NASA Technical Reports Server (NTRS)

    Hill, Geoffrey A.; Olson, Erik D.

    2004-01-01

    Due to the growing problem of noise in today's air transportation system, there have arisen needs to incorporate noise considerations in the conceptual design of revolutionary aircraft. Through the use of response surfaces, complex noise models may be converted into polynomial equations for rapid and simplified evaluation. This conversion allows many of the commonly used response surface-based trade space exploration methods to be applied to noise analysis. This methodology is demonstrated using a noise model of a notional 300 passenger Blended-Wing-Body (BWB) transport. Response surfaces are created relating source noise levels of the BWB vehicle to its corresponding FAR-36 certification noise levels and the resulting trade space is explored. Methods demonstrated include: single point analysis, parametric study, an optimization technique for inverse analysis, sensitivity studies, and probabilistic analysis. Extended applications of response surface-based methods in noise analysis are also discussed.

  2. Warpage analysis on thin shell part using response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Zulhasif, Z.; Shayfull, Z.; Nasir, S. M.; Fathullah, M.; Hazwan, M. H. M.

    2017-09-01

The optimisation of moulding parameters to reduce warpage defects was performed using Autodesk Moldflow Insight (AMI) 2012 software. The product is injected using Acrylonitrile-Butadiene-Styrene (ABS) material. The analysis varies the processing parameters of melt temperature, mould temperature, packing pressure and packing time. Design of Experiments (DOE) has been integrated to obtain a polynomial model using Response Surface Methodology (RSM). The Glowworm Swarm Optimisation (GSO) method is then used to predict the best combination of parameters to minimise warpage defects in order to produce high-quality parts.

  3. Optimization of Turbine Blade Design for Reusable Launch Vehicles

    NASA Technical Reports Server (NTRS)

    Shyy, Wei

    1998-01-01

To facilitate design optimization of turbine blade shape for reusable launch vehicles, appropriate techniques need to be developed to process and estimate the characteristics of the design variables and the response of the output with respect to variations of the design variables. The purpose of this report is to offer insight into developing appropriate techniques for supporting such design and optimization needs. Neural network and polynomial-based techniques are applied to process aerodynamic data obtained from computational simulations for flows around a two-dimensional airfoil and a generic three-dimensional wing/blade. For the two-dimensional airfoil, a two-layered radial-basis network is designed and trained, and the performances of two different design functions for radial-basis networks are compared: one based on an accuracy requirement, the other on a limit on network size. While the number of neurons needed to satisfactorily reproduce the information depends on the size of the data, the neural network technique is shown to be more accurate than the polynomial-based response surface method for large data sets (up to 765 simulations were used). For the three-dimensional wing/blade case, smaller aerodynamic data sets (between 9 and 25 simulations) are considered, and both the neural network and the polynomial-based response surface techniques improve their performance as the data size increases. It is found that, while the relative performance of the two network types, a radial-basis network and a back-propagation network, depends on the number of input data, the radial-basis network requires fewer iterations than the back-propagation network.

  4. Dynamic response analysis of structure under time-variant interval process model

    NASA Astrophysics Data System (ADS)

    Xia, Baizhan; Qin, Yuan; Yu, Dejie; Jiang, Chao

    2016-10-01

Due to the aggressiveness of environmental factors, the variation of dynamic loads, the degradation of material properties and the wear of machine surfaces, parameters related to a structure are distinctly time-variant. The typical model for time-variant uncertainties is the random process model, which is constructed on the basis of a large number of samples. In this work, we propose a time-variant interval process model which can be effectively used to deal with time-variant uncertainties under limited information. Two methods are then presented for the dynamic response analysis of structures under the time-variant interval process model. The first is the direct Monte Carlo method (DMCM), whose computational burden is relatively high. The second is the Monte Carlo method based on the Chebyshev polynomial expansion (MCM-CPE), whose computational efficiency is high. In MCM-CPE, the dynamic response of the structure is approximated by Chebyshev polynomials, which can be efficiently calculated, and the variational range of the dynamic response is then estimated from the samples yielded by the Monte Carlo method. To handle the dependency phenomenon of interval operations, affine arithmetic is integrated into the Chebyshev polynomial expansion. The computational effectiveness and efficiency of MCM-CPE are verified by two numerical examples, including a spring-mass-damper system and a shell structure.

  5. Elevation data fitting and precision analysis of Google Earth in road survey

    NASA Astrophysics Data System (ADS)

    Wei, Haibin; Luan, Xiaohan; Li, Hanchao; Jia, Jiangkun; Chen, Zhao; Han, Leilei

    2018-05-01

Objective: In order to improve the efficiency of road surveys and save manpower and material resources, this paper applies Google Earth to the feasibility study stage of road survey and design. Because Google Earth elevation data lack precision, this paper focuses on finding several different fitting or interpolation methods to improve the data precision, in an effort to meet the accuracy requirements of road survey and design specifications. Method: On the basis of the elevation differences at a limited number of public points, the elevation difference at any other point can be fitted or interpolated. Thus, the precise elevation can be obtained by subtracting the elevation difference from the Google Earth data. The quadratic polynomial surface fitting method, the cubic polynomial surface fitting method, the V4 interpolation method in MATLAB and a neural network method are used in this paper to process the Google Earth elevation data, and internal conformity, external conformity and the cross-correlation coefficient are used as evaluation indexes of the data processing effect. Results: There is no fitting difference at the fitting points when the V4 interpolation method is used; its external conformity is the largest and its accuracy improvement is the worst, so the V4 interpolation method is ruled out. The internal and external conformity of the cubic polynomial surface fitting method are both better than those of the quadratic polynomial surface fitting method. The neural network method has a fitting effect similar to that of the cubic polynomial surface fitting method, but its fitting effect is better in the case of higher elevation differences. Because the neural network method is an unmanageable fitting model, the cubic polynomial surface fitting method should be used as the main method, with the neural network method as an auxiliary method in the case of higher elevation differences. Conclusions: The cubic polynomial surface fitting method can obviously improve the data precision of Google Earth. After precision improvement, the error of the data in hilly terrain areas meets the requirements of the specifications, and the data can be used in the feasibility study stage of road survey and design.

  6. Orthogonal basis with a conicoid first mode for shape specification of optical surfaces.

    PubMed

    Ferreira, Chelo; López, José L; Navarro, Rafael; Sinusía, Ester Pérez

    2016-03-07

A rigorous and powerful theoretical framework is proposed to obtain systems of orthogonal functions (or shape modes) to represent optical surfaces. The method is general, so it can be applied to different initial shapes and different polynomials. Here we present results for surfaces with circular apertures when the first basis function (mode) is a conicoid. The system for aspheres with rotational symmetry is obtained by applying an appropriate change of variables to Legendre polynomials, whereas the system for the general freeform case is obtained by applying a similar procedure to spherical harmonics. Numerical comparisons with standard systems, such as Forbes and Zernike polynomials, are performed and discussed.

  7. Effect of design selection on response surface performance

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1993-01-01

The mathematical formulation of the engineering optimization problem is given. Evaluation of the objective function and constraint equations can be very expensive in a computational sense. Thus, it is desirable to use as few evaluations as possible in obtaining a solution. One approach is to develop approximations to the objective function and/or constraint equations and then to solve the problem using the approximations in place of the original functions. These approximations are referred to as response surfaces. The desirability of using response surfaces depends upon the number of functional evaluations required to build the response surfaces compared to the number required in the direct solution of the problem without approximations. The present study is concerned with evaluating the performance of response surfaces so that a decision can be made as to their effectiveness in optimization applications. In particular, this study focuses on how the quality of the approximations is affected by design selection. Polynomial approximations and neural net approximations are considered.

  8. Response Surface Methodology Using a Fullest Balanced Model: A Re-Analysis of a Dataset in the Korean Journal for Food Science of Animal Resources.

    PubMed

    Rheem, Sungsue; Rheem, Insoo; Oh, Sejong

    2017-01-01

Response surface methodology (RSM) is a useful set of statistical techniques for modeling and optimizing responses in food science research. In the analysis of response surface data, a second-order polynomial regression model is usually used. However, we sometimes encounter situations where the fit of the second-order model is poor. If the model fitted to the data has a poor fit, including a lack of fit, the modeling and optimization results might not be accurate. In such a case, using a fullest balanced model, which has no lack of fit, can fix this problem, enhancing the accuracy of the response surface modeling and optimization. This article presents how to develop and use such a model for better modeling and optimization of the response, through an illustrative re-analysis of a dataset in Park et al. (2014) published in the Korean Journal for Food Science of Animal Resources.

  9. Determining animal drug combinations based on efficacy and safety.

    PubMed

    Kratzer, D D; Geng, S

    1986-08-01

    A procedure for deriving drug combinations for animal health is used to derive an optimal combination of 200 mg of novobiocin and 650,000 IU of penicillin for nonlactating cow mastitis treatment. The procedure starts with an estimated second order polynomial response surface equation. That surface is translated into a probability surface with contours called isoprobs. The isoprobs show drug amounts that have equal probability to produce maximal efficacy. Safety factors are incorporated into the probability surface via a noncentrality parameter that causes the isoprobs to expand as safety decreases, resulting in lower amounts of drug being used.

  10. Neural Network and Response Surface Methodology for Rocket Engine Component Optimization

    NASA Technical Reports Server (NTRS)

    Vaidyanathan, Rajkumar; Papita, Nilay; Shyy, Wei; Tucker, P. Kevin; Griffin, Lisa W.; Haftka, Raphael; Fitz-Coy, Norman; McConnaughey, Helen (Technical Monitor)

    2000-01-01

The goal of this work is to compare the performance of response surface methodology (RSM) and two types of neural networks (NN) in aiding the preliminary design of two rocket engine components. A data set of 45 training points and 20 test points, obtained from a semi-empirical model based on three design variables, is used for a shear coaxial injector element. Data for the supersonic turbine design are based on six design variables, with 76 training data and 18 test data obtained from simplified aerodynamic analysis. Several RS and NN are first constructed using the training data. The test data are then employed to select the best RS or NN. Quadratic and cubic response surfaces, a radial-basis neural network (RBNN) and a back-propagation neural network (BPNN) are compared. Two-layered RBNN are generated using two different training algorithms, namely solverbe and solverb. A two-layered BPNN is generated with a tan-sigmoid transfer function. Various issues related to the training of the neural networks are addressed, including the number of neurons, error goals, spread constants and the accuracy of different models in representing the design space. A search for the optimum design is carried out using a standard gradient-based optimization algorithm over the response surfaces represented by the polynomials and trained neural networks. Usually a cubic polynomial performs better than the quadratic polynomial, but exceptions have been noticed. Among the NN choices, the RBNN designed using solverb yields more consistent performance for both engine components considered. The training of RBNN is easier as it requires only linear regression. This, coupled with the consistency in performance, promises the possibility of its use as an optimization strategy for engineering design problems.

  11. Smoothing optimization of supporting quadratic surfaces with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Hang; Lu, Jiandong; Liu, Rui; Ma, Peifu

    2018-03-01

A new optimization method for obtaining a smooth freeform optical surface from an initial surface generated by the supporting quadratic method (SQM) is proposed. To smooth the initial surface, a 9-vertex system from the neighbouring quadratic surfaces and the Zernike polynomials are employed to establish a linear equation system. A locally optimized surface for the 9-vertex system can be built by solving the equations. Finally, a continuous smooth optimized surface is constructed by stitching the locally optimized surfaces produced by the above algorithm over the whole initial surface. The spot corresponding to the optimized surface is no longer a set of discrete pixels but a continuous distribution.

  12. Improving the Unsteady Aerodynamic Performance of Transonic Turbines using Neural Networks

    NASA Technical Reports Server (NTRS)

    Rai, Man Mohan; Madavan, Nateri K.; Huber, Frank W.

    1999-01-01

A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits is constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The procedure yielded a modified design that improves the aerodynamic performance through small changes to the reference design geometry. These results demonstrate the capabilities of the neural net-based design procedure, and also show the advantages of including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  13. Neural Net-Based Redesign of Transonic Turbines for Improved Unsteady Aerodynamic Performance

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Rai, Man Mohan; Huber, Frank W.

    1998-01-01

A recently developed neural net-based aerodynamic design procedure is used in the redesign of a transonic turbine stage to improve its unsteady aerodynamic performance. The redesign procedure used incorporates the advantages of both traditional response surface methodology (RSM) and neural networks by employing a strategy called parameter-based partitioning of the design space. Starting from the reference design, a sequence of response surfaces based on both neural networks and polynomial fits is constructed to traverse the design space in search of an optimal solution that exhibits improved unsteady performance. The procedure combines the power of neural networks and the economy of low-order polynomials (in terms of number of simulations required and network training requirements). A time-accurate, two-dimensional, Navier-Stokes solver is used to evaluate the various intermediate designs and provide inputs to the optimization procedure. The optimization procedure yields a modified design that improves the aerodynamic performance through small changes to the reference design geometry. The computed results demonstrate the capabilities of the neural net-based design procedure, and also show the tremendous advantages that can be gained by including high-fidelity unsteady simulations that capture the relevant flow physics in the design optimization process.

  14. Chitosan based grey wastewater treatment--a statistical design approach.

    PubMed

    Thirugnanasambandham, K; Sivakumar, V; Prakash Maran, J; Kandasamy, S

    2014-01-01

In the present study, grey wastewater was treated under different operating conditions, such as agitation time (1-3 min), pH (2.5-5.5), chitosan dose (0.3-0.6 g/l) and settling time (10-20 min), using response surface methodology (RSM). A four-factor, three-level Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of the process variables on the responses, namely turbidity, BOD and COD removal. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to predict the responses. Under the optimum conditions, the experimental values of turbidity (96%), BOD (91%) and COD (73%) removal agreed closely with the predicted values. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Fabrication and correction of freeform surface based on Zernike polynomials by slow tool servo

    NASA Astrophysics Data System (ADS)

    Cheng, Yuan-Chieh; Hsu, Ming-Ying; Peng, Wei-Jei; Hsu, Wei-Yao

    2017-10-01

Freeform surfaces have recently been widely applied in optical systems because of the added degrees of freedom they provide to improve optical imaging performance. Freeform optical fabrication integrates freeform optical design, precision freeform manufacturing, freeform metrology and a freeform compensation method to correct the form deviation of the surface that arises in the production process of a freeform lens, providing more flexibility and better performance. This paper focuses on the fabrication and correction of the freeform surface. In this study, multi-axis ultra-precision manufacturing is used to improve the quality of the freeform surface. The machine is equipped with a positioning C-axis and has a CXZ machining function, also called the slow tool servo (STS) function. The freeform compensation method based on Zernike polynomials is successfully verified; it corrects the form deviation of the freeform surface. Finally, the freeform surface is measured experimentally with an Ultrahigh Accurate 3D Profilometer (UA3P), and the form error is compensated with Zernike polynomial fitting to improve the form accuracy of the freeform surface.

  16. Optimization of ultrasonic-assisted extraction of bioactive alkaloid compounds from rhizoma coptidis (Coptis chinensis Franch.) using response surface methodology.

    PubMed

    Teng, Hui; Choi, Yong Hee

    2014-01-01

The optimum extraction conditions for the maximum recovery of total alkaloid content (TAC), berberine content (BC), palmatine content (PC), and the highest antioxidant capacity (AC) from rhizoma coptidis subjected to ultrasonic-assisted extraction (UAE) were determined using response surface methodology (RSM). A central composite design (CCD) with three variables and five levels was employed, and response surface plots were constructed in accordance with a second-order polynomial model. Analysis of variance (ANOVA) showed that the quadratic model was well fitted and significant for the responses TAC, BC, PC, and AC. The optimum conditions obtained through the overlapped contour plot were as follows: ethanol concentration of 59%, extraction time of 46.57 min, and temperature of 66.22°C. A verification experiment was carried out, and no significant difference was found between the observed and estimated values for each response, suggesting that the estimated models were reliable and valid for UAE of alkaloids. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.
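    A compact sketch of the experimental-design and modeling machinery mentioned here: a three-factor central composite design and a full second-order polynomial fit. The axial distance, center-point count, and response values are illustrative assumptions, not the study's data.

    ```python
    import numpy as np
    from itertools import product

    def ccd_points(k=3, alpha=1.682, n_center=3):
        """Central composite design in k coded factors: 2^k factorial corners,
        2k axial (star) points at +/- alpha, and replicated centre points."""
        corners = np.array(list(product([-1.0, 1.0], repeat=k)))
        axial = np.zeros((2 * k, k))
        for i in range(k):
            axial[2 * i, i] = -alpha
            axial[2 * i + 1, i] = alpha
        center = np.zeros((n_center, k))
        return np.vstack([corners, axial, center])

    def quad_model_matrix(X):
        """Design matrix of a full second-order polynomial: intercept,
        linear, squared, and two-factor interaction terms."""
        n, k = X.shape
        cols = [np.ones(n)] + [X[:, i] for i in range(k)]
        cols += [X[:, i] ** 2 for i in range(k)]
        cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
        return np.column_stack(cols)

    X = ccd_points()
    # Hypothetical response with a maximum inside the region, plus noise
    rng = np.random.default_rng(2)
    y = (80 - 3 * (X[:, 0] - 0.4) ** 2 - 2 * (X[:, 1] + 0.3) ** 2
         - X[:, 2] ** 2 + rng.normal(0, 0.5, len(X)))
    beta, *_ = np.linalg.lstsq(quad_model_matrix(X), y, rcond=None)
    print(np.round(beta, 2))
    ```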

  17. On the best mean-square approximations to a planet's gravitational potential

    NASA Astrophysics Data System (ADS)

    Lobkova, N. I.

    1985-02-01

    The continuous problem of approximating the gravitational potential of a planet in the form of polynomials of solid spherical functions is considered. The best mean-square polynomials, referred to different parts of space, are compared with each other. The harmonic coefficients corresponding to the surface of a planet are shown to be unstable with respect to the degree of the polynomial and to differ from the Stokes constants.

  18. [Optimization of one-step pelletization technology of Biqiu granules by Plackett-Burman design and Box-Behnken response surface methodology].

    PubMed

    Zhang, Yan-jun; Liu, Li-li; Hu, Jun-hua; Wu, Yun; Chao, En-xiang; Xiao, Wei

    2015-11-01

First, with the qualified rate of granules as the evaluation index, significant influencing factors were screened by Plackett-Burman design. Then, with the qualified rate and moisture content as the evaluation indexes, the significant factors that affect the one-step pelletization technology were further optimized by Box-Behnken design; the experimental data were fitted by multiple regression to a second-order polynomial equation, and the response surface method was used for predictive analysis of the optimal technology. The best conditions were as follows: inlet air temperature of 85 degrees C, sample introduction speed of 33 r x min(-1), and density of the concrete of 1.10. The one-step pelletization technology of Biqiu granules optimized by Plackett-Burman design and Box-Behnken response surface methodology was stable and feasible with good predictability, which provides a reliable basis for the industrialized production of Biqiu granules.

  19. Application of response surface methodology to optimise supercritical carbon dioxide extraction of essential oil from Cyperus rotundus Linn.

    PubMed

    Wang, Hongwu; Liu, Yanqing; Wei, Shoulian; Yan, Zijun

    2012-05-01

Supercritical fluid extraction with carbon dioxide (SC-CO2 extraction) was performed to isolate essential oils from the rhizomes of Cyperus rotundus Linn. Effects of temperature, pressure, extraction time, and CO2 flow rate on the yield of essential oils were investigated by response surface methodology (RSM). The oil yield was represented by a second-order polynomial model using central composite rotatable design (CCRD). The oil yield increased significantly with pressure (p<0.0001) and CO2 flow rate (p<0.01). The maximum oil yield from the response surface equation was predicted to be 1.82% using an extraction temperature of 37.6°C, pressure of 294.4 bar, extraction time of 119.8 min, and CO2 flow rate of 20.9 L/h. Copyright © 2011 Elsevier Ltd. All rights reserved.

  20. Field curvature correction method for ultrashort throw ratio projection optics design using an odd polynomial mirror surface.

    PubMed

    Zhuang, Zhenfeng; Chen, Yanting; Yu, Feihong; Sun, Xiaowei

    2014-08-01

This paper presents a field curvature correction method for designing an ultrashort throw ratio (TR) projection lens for an imaging system. The projection lens is composed of several refractive optical elements and an odd polynomial mirror surface. A curved image is formed in a direction away from the odd polynomial mirror surface by the refractive optical elements from the image formed on the digital micromirror device (DMD) panel, and the curved image formed is its virtual image. The odd polynomial mirror surface then enlarges the curved image and a plane image is formed on the screen. Based on the relationship between the chief ray from the exit pupil of each field of view (FOV) and the corresponding prescribed position on the screen, the initial profile of the freeform mirror surface is calculated using hyperbolic segments according to the laws of reflection. For further optimization, a high-order odd polynomial surface is used to express the freeform mirror surface through a least-squares fitting method. As an example, an ultrashort TR projection lens that realizes projection onto a large 50 in. screen at a distance of only 510 mm is presented. The optical performance of the designed projection lens is analyzed by the ray-tracing method. Results show that a modulation transfer function of over 60% at 0.5 cycles/mm for all optimization fields is achievable with an f-number of 2.0, a 126° full FOV, less than 1% distortion, and a 0.46 TR. Moreover, comparing the proposed projection lens' optical specifications to those of traditional projection lenses, aspheric mirror projection lenses, and conventional short TR projection lenses indicates that this projection lens has the advantages of ultrashort TR, low f-number, wide full FOV, and small distortion.

  1. FIT: Computer Program that Interactively Determines Polynomial Equations for Data which are a Function of Two Independent Variables

    NASA Technical Reports Server (NTRS)

    Arbuckle, P. D.; Sliwa, S. M.; Roy, M. L.; Tiffany, S. H.

    1985-01-01

    A computer program for interactively developing least-squares polynomial equations to fit user-supplied data is described. The program is characterized by the ability to compute the polynomial equations of a surface fit through data that are a function of two independent variables. The program utilizes the Langley Research Center graphics packages to display polynomial equation curves and data points, facilitating a qualitative evaluation of the effectiveness of the fit. An explanation of the fundamental principles and features of the program, as well as sample input and corresponding output, are included.
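    A minimal Python analogue of what such a program does, using NumPy's bivariate polynomial Vandermonde matrix and least squares; the data and degrees are hypothetical and the original FORTRAN interface is not reproduced.

    ```python
    import numpy as np
    from numpy.polynomial import polynomial as P

    def fit_surface(x, y, z, deg=(2, 2)):
        """Least-squares polynomial surface z ~ sum c[i, j] * x^i * y^j."""
        V = P.polyvander2d(x, y, deg)
        c, *_ = np.linalg.lstsq(V, z, rcond=None)
        return c.reshape(deg[0] + 1, deg[1] + 1)

    # Scattered data sampled from a known quadratic surface with noise
    rng = np.random.default_rng(3)
    x, y = rng.uniform(-2, 2, (2, 100))
    z = (1.0 + 0.5 * x - 0.25 * y + 0.1 * x * y + 0.05 * y**2
         + 0.01 * rng.standard_normal(100))
    c = fit_surface(x, y, z)
    print(np.round(c, 3))
    # Evaluate the fitted equation at a new point
    print(P.polyval2d(0.5, -1.0, c))
    ```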

  2. Modeling corneal surfaces with rational functions for high-speed videokeratoscopy data compression.

    PubMed

    Schneider, Martin; Iskander, D Robert; Collins, Michael J

    2009-02-01

    High-speed videokeratoscopy is an emerging technique that enables study of the corneal surface and tear-film dynamics. Unlike its static predecessor, this new technique results in a very large amount of digital data for which storage needs become significant. We aimed to design a compression technique that would use mathematical functions to parsimoniously fit corneal surface data with a minimum number of coefficients. Since the Zernike polynomial functions that have been traditionally used for modeling corneal surfaces may not necessarily correctly represent given corneal surface data in terms of its optical performance, we introduced the concept of Zernike polynomial-based rational functions. Modeling optimality criteria were employed in terms of both the rms surface error as well as the point spread function cross-correlation. The parameters of approximations were estimated using a nonlinear least-squares procedure based on the Levenberg-Marquardt algorithm. A large number of retrospective videokeratoscopic measurements were used to evaluate the performance of the proposed rational-function-based modeling approach. The results indicate that the rational functions almost always outperform the traditional Zernike polynomial approximations with the same number of coefficients.

  3. Are We All in the Same Boat? The Role of Perceptual Distance in Organizational Health Interventions.

    PubMed

    Hasson, Henna; von Thiele Schwarz, Ulrica; Nielsen, Karina; Tafvelin, Susanne

    2016-10-01

The study investigates how agreement between leaders' and their team's perceptions influences intervention outcomes in a leadership-training intervention aimed at improving organizational learning. Agreement, i.e., perceptual distance, was calculated for the organizational learning dimensions at baseline. Changes in the dimensions from pre-intervention to post-intervention were evaluated using polynomial regression analysis with response surface analysis. The general pattern of the results indicated that organizational learning improved when leaders and their teams agreed on the level of organizational learning prior to the intervention. The improvement was greatest when the leader's and the team's perceptions at baseline were aligned and high rather than aligned and low. The least beneficial scenario was when the leader's perceptions were higher than the team's perceptions. These results give insights into the importance of comparing leaders' and their team's perceptions in intervention research. Polynomial regression analysis with response surface methodology allows a three-dimensional examination of the relationship between two predictor variables and an outcome. This contributes knowledge on how combinations of predictor variables may affect an outcome and allows the study of potential non-linearity in relation to the outcome. Future studies could use these methods in the process evaluation of interventions. Copyright © 2016 John Wiley & Sons, Ltd.
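    A short sketch of the quadratic polynomial regression with response surface analysis commonly used in congruence research of this kind; the simulated leader/follower data are illustrative, not the study's.

    ```python
    import numpy as np

    def congruence_surface(leader, follower, outcome):
        """Quadratic polynomial regression for congruence analysis:
        outcome = b0 + b1*L + b2*F + b3*L^2 + b4*L*F + b5*F^2,
        followed by the usual response-surface test values along the
        congruence (L = F) and incongruence (L = -F) lines."""
        L, F, Z = map(np.asarray, (leader, follower, outcome))
        X = np.column_stack([np.ones_like(L), L, F, L**2, L * F, F**2])
        b, *_ = np.linalg.lstsq(X, Z, rcond=None)
        a1, a2 = b[1] + b[2], b[3] + b[4] + b[5]   # along L = F
        a3, a4 = b[1] - b[2], b[3] - b[4] + b[5]   # along L = -F
        return b, (a1, a2, a3, a4)

    # Illustrative data: satisfaction is highest when values align
    rng = np.random.default_rng(4)
    L = rng.normal(0, 1, 163)
    F = rng.normal(0, 1, 163)
    Z = 5 - 0.6 * (L - F) ** 2 + 0.3 * (L + F) + rng.normal(0, 0.5, 163)
    b, surface = congruence_surface(L, F, Z)
    print(np.round(surface, 2))   # negative a4 indicates a congruence effect
    ```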

  4. Optimization and formulation design of gels of Diclofenac and Curcumin for transdermal drug delivery by Box-Behnken statistical design.

    PubMed

    Chaudhary, Hema; Kohli, Kanchan; Amin, Saima; Rathee, Permender; Kumar, Vikash

    2011-02-01

    The aim of this study was to develop and optimize a transdermal gel formulation for Diclofenac diethylamine (DDEA) and Curcumin (CRM). A 3-factor, 3-level Box-Behnken design was used to derive a second-order polynomial equation to construct contour plots for prediction of responses. Independent variables studied were the polymer concentration (X(1)), ethanol (X(2)) and propylene glycol (X(3)) and the levels of each factor were low, medium, and high. The dependent variables studied were the skin permeation rate of DDEA (Y(1)), skin permeation rate of CRM (Y(2)), and viscosity of the gels (Y(3)). Response surface plots were drawn, statistical validity of the polynomials was established to find the compositions of optimized formulation which was evaluated using the Franz-type diffusion cell. The permeation rate of DDEA increased proportionally with ethanol concentration but decreased with polymer concentration, whereas the permeation rate of CRM increased proportionally with polymer concentration. Gels showed a non-Fickian super case II (typical zero order) and non-Fickian diffusion release mechanism for DDEA and CRM, respectively. The design demonstrated the role of the derived polynomial equation and contour plots in predicting the values of dependent variables for the preparation and optimization of gel formulation for transdermal drug release. Copyright © 2010 Wiley-Liss, Inc.

  5. Polynomial chaos expansion with random and fuzzy variables

    NASA Astrophysics Data System (ADS)

    Jacquelin, E.; Friswell, M. I.; Adhikari, S.; Dessombz, O.; Sinou, J.-J.

    2016-06-01

    A dynamical uncertain system is studied in this paper. Two kinds of uncertainties are addressed, where the uncertain parameters are described through random variables and/or fuzzy variables. A general framework is proposed to deal with both kinds of uncertainty using a polynomial chaos expansion (PCE). It is shown that fuzzy variables may be expanded in terms of polynomial chaos when Legendre polynomials are used. The components of the PCE are a solution of an equation that does not depend on the nature of uncertainty. Once this equation is solved, the post-processing of the data gives the moments of the random response when the uncertainties are random or gives the response interval when the variables are fuzzy. With the PCE approach, it is also possible to deal with mixed uncertainty, when some parameters are random and others are fuzzy. The results provide a fuzzy description of the response statistical moments.
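    A minimal sketch, assuming a one-dimensional Legendre expansion built by regression and a symmetric triangular membership function, of how a fuzzy parameter's response interval can be post-processed from a polynomial chaos expansion; the response function and the alpha-cut scan are illustrative choices, not the paper's formulation.

    ```python
    import numpy as np
    from numpy.polynomial import legendre as Leg

    # Response of interest as a function of one parameter mapped to [-1, 1]
    def response(xi):
        return 1.0 / (1.2 + xi) + 0.1 * xi**2

    # Build the Legendre PCE by regression on points spanning [-1, 1]
    xi_train = np.linspace(-1, 1, 50)
    coef = Leg.legfit(xi_train, response(xi_train), deg=6)

    def response_interval(alpha, coef):
        """For a fuzzy parameter with a symmetric triangular membership,
        the alpha-cut restricts it to [lo, hi]; the response interval is
        read off by scanning the PCE over that cut."""
        lo, hi = -1 + alpha, 1 - alpha
        xi = np.linspace(lo, hi, 400)
        vals = Leg.legval(xi, coef)
        return vals.min(), vals.max()

    for a in (0.0, 0.5, 1.0):
        print(a, np.round(response_interval(a, coef), 4))
    ```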

  6. Bayesian Revision of Residual Detection Power

    NASA Technical Reports Server (NTRS)

    DeLoach, Richard

    2013-01-01

This paper addresses some issues with quality assessment and quality assurance in response surface modeling experiments executed in wind tunnels. The role of data volume in quality assurance for response surface models is reviewed. Specific wind tunnel response surface modeling experiments are considered for which apparent discrepancies exist between fit quality expectations, based on implemented quality assurance tactics, and the actual fit quality achieved in those experiments. These discrepancies are resolved by using Bayesian inference to account for certain imperfections in the assessment methodology. Estimates of the fraction of out-of-tolerance model predictions based on traditional frequentist methods are revised to account for uncertainty in the residual assessment process. The number of sites in the design space for which residuals are out of tolerance is seen to exceed the number of sites where the model actually fails to fit the data. A method is presented to estimate how much of the design space is inadequately modeled by low-order polynomial approximations to the true but unknown underlying response function.

  7. SIR-B ocean-wave enhancement with fast Fourier transform techniques

    NASA Technical Reports Server (NTRS)

    Tilley, David G.

    1987-01-01

    Shuttle Imaging Radar (SIR-B) imagery is Fourier filtered to remove the estimated system-transfer function, reduce speckle noise, and produce ocean scenes with a gray scale that is proportional to wave height. The SIR-B system response to speckled scenes of uniform surfaces yields an estimate of the stationary wavenumber response of the imaging radar, modeled by the 15 even terms of an eighth-order two-dimensional polynomial. Speckle can also be used to estimate the dynamic wavenumber response of the system due to surface motion during the aperture synthesis period, modeled with a single adaptive parameter describing an exponential correlation along track. A Fourier filter can then be devised to correct for the wavenumber response of the remote sensor and scene correlation, with subsequent subtraction of an estimate of the speckle noise component. A linearized velocity bunching model, combined with a surface tilt and hydrodynamic model, is incorporated in the Fourier filter to derive estimates of wave height from the radar intensities corresponding to individual picture elements.

  8. The algorithmic details of polynomials application in the problems of heat and mass transfer control on the hypersonic aircraft permeable surfaces

    NASA Astrophysics Data System (ADS)

    Bilchenko, G. G.; Bilchenko, N. G.

    2018-03-01

    Mathematical modeling problems for the effective control of heat and mass transfer on the permeable surfaces of hypersonic aircraft are considered. The constructive and gas-dynamic restrictions on the control (the blowing) are analyzed for porous and perforated surfaces. Classes of functions that allow the controls to be realized while respecting the arising types of restrictions are suggested. Estimates of the computational complexity of applying the W. G. Horner scheme when the C. Hermite interpolation polynomial is used are given.
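
    For reference, Horner's scheme evaluates a degree-n polynomial with only n multiplications and n additions, which is the computational-complexity point the abstract refers to; a minimal sketch with generic coefficients (not the Hermite interpolants of the paper):

```python
# Minimal sketch: Horner's scheme evaluates a degree-n polynomial with n multiplications
# and n additions, versus O(n^2) multiplications for the naive term-by-term form.
def horner(coeffs, x):
    """Evaluate a_n*x^n + ... + a_1*x + a_0; coeffs given from highest to lowest degree."""
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c
    return acc

# Example: p(x) = 2x^3 - 6x^2 + 2x - 1 at x = 3
print(horner([2, -6, 2, -1], 3.0))   # -> 5.0
```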

  9. Optimum Design of a Helicopter Rotor for Low Vibration Using Aeroelastic Analysis and Response Surface Methods

    NASA Astrophysics Data System (ADS)

    Ganguli, R.

    2002-11-01

    An aeroelastic analysis based on finite elements in space and time is used to model the helicopter rotor in forward flight. The rotor blade is represented as an elastic cantilever beam undergoing flap and lag bending, elastic torsion and axial deformations. The objective of the improved design is to reduce vibratory loads at the rotor hub that are the main source of helicopter vibration. Constraints are imposed on aeroelastic stability, and move limits are imposed on the blade elastic stiffness design variables. Using the aeroelastic analysis, response surface approximations are constructed for the objective function (vibratory hub loads). It is found that second-order polynomial response surfaces constructed using the central composite design from the theory of design of experiments adequately represent the aeroelastic model in the vicinity of the baseline design. Optimization results show a reduction in the objective function of about 30 per cent. A key accomplishment of this paper is the decoupling of the analysis and optimization problems using response surface methods, which should encourage the use of optimization methods by the helicopter industry.

  10. On computing the geoelastic response to a disk load

    NASA Astrophysics Data System (ADS)

    Bevis, M.; Melini, D.; Spada, G.

    2016-06-01

    We review the theory of the Earth's elastic and gravitational response to a surface disk load. The solutions for displacement of the surface and the geoid are developed using expansions of Legendre polynomials, their derivatives and the load Love numbers. We provide a MATLAB function called diskload that computes the solutions for both uncompensated and compensated disk loads. In order to numerically implement the Legendre expansions, it is necessary to choose a harmonic degree, nmax, at which to truncate the series used to construct the solutions. We present a rule of thumb (ROT) for choosing an appropriate value of nmax, describe the consequences of truncating the expansions prematurely and provide a means to judiciously violate the ROT when that becomes a practical necessity.
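
    The truncation issue can be illustrated with a short sketch that evaluates a Legendre series up to a chosen nmax; the coefficient spectrum below is a placeholder and not the load Love numbers used by the diskload function:

```python
# Illustrative sketch only: evaluate a Legendre series truncated at a chosen n_max.
# The coefficients are placeholders, not the load Love number spectra of the paper.
import numpy as np
from numpy.polynomial import legendre as L

def truncated_series(theta_deg, coeffs, n_max):
    """Sum_{n=0}^{n_max} c_n P_n(cos theta), using numpy's Legendre module."""
    x = np.cos(np.radians(theta_deg))
    return L.legval(x, coeffs[:n_max + 1])

coeffs = 1.0 / np.arange(1, 202) ** 2          # placeholder spectrum decaying with degree
theta = np.linspace(0.0, 10.0, 5)              # angular distance from the load centre, degrees
for n_max in (50, 100, 200):                   # differences between rows shrink as n_max grows
    print(n_max, np.round(truncated_series(theta, coeffs, n_max), 6))
```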

  11. Eye aberration analysis with Zernike polynomials

    NASA Astrophysics Data System (ADS)

    Molebny, Vasyl V.; Chyzh, Igor H.; Sokurenko, Vyacheslav M.; Pallikaris, Ioannis G.; Naoumidis, Leonidas P.

    1998-06-01

    New horizons for accurate photorefractive sight correction, afforded by novel flying-spot technologies, require adequate measurement of the photorefractive properties of an eye. Proposed techniques of eye refraction mapping present measurement results for a finite number of points of the eye aperture, requiring these data to be approximated by a 3D surface. A technique of wavefront approximation with Zernike polynomials is described, using optimization of the number of polynomial coefficients. The optimization criterion is the closest proximity of the resulting continuous surface to the values calculated at the given discrete points. The methodology includes statistical evaluation of the minimal root mean square deviation (RMSD) of the transverse aberrations; in particular, the values of the maximal coefficient indices of the Zernike polynomials are varied consecutively, the coefficients are recalculated, and the value of the RMSD is computed. The optimization is finished at the minimal value of RMSD. Formulas are given for computing ametropia and the size of the spot of light on the retina caused by spherical aberration, coma, and astigmatism. The results are illustrated by experimental data that could be of interest for other applications where a detailed evaluation of eye parameters is needed.
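
    A hedged sketch of the fitting step, assuming simulated wavefront samples and one common unnormalized Zernike ordering rather than the paper's exact convention, is a linear least-squares solve for the coefficients followed by the RMS residual used to decide how many terms to retain:

```python
# Hedged sketch: least-squares fit of a few Zernike terms (Cartesian form, one common
# ordering -- conventions differ) to sampled wavefront data, plus the RMS residual.
import numpy as np

def zernike_basis(x, y):
    r2 = x**2 + y**2
    return np.column_stack([
        np.ones_like(x),        # piston
        x,                      # tilt (x)
        y,                      # tilt (y)
        2*r2 - 1,               # defocus
        x**2 - y**2,            # astigmatism 0/90
        2*x*y,                  # astigmatism 45
        6*r2**2 - 6*r2 + 1,     # primary spherical
    ])

rng = np.random.default_rng(1)
x, y = rng.uniform(-1, 1, (2, 400))
mask = x**2 + y**2 <= 1.0                       # keep samples inside the unit pupil
x, y = x[mask], y[mask]
w = 0.5*(2*(x**2 + y**2) - 1) + 0.2*(x**2 - y**2) + rng.normal(0, 0.02, x.size)

B = zernike_basis(x, y)
c, *_ = np.linalg.lstsq(B, w, rcond=None)       # Zernike coefficients
rms = np.sqrt(np.mean((w - B @ c) ** 2))        # residual used to stop adding terms
print(np.round(c, 3), rms)
```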

  12. Calculating the sensitivity of wind turbine loads to wind inputs using response surfaces

    NASA Astrophysics Data System (ADS)

    Rinker, Jennifer M.

    2016-09-01

    This paper presents a methodology to calculate wind turbine load sensitivities to turbulence parameters through the use of response surfaces. A response surface is a high-dimensional polynomial surface that can be calibrated to any set of input/output data and then used to generate synthetic data at a low computational cost. Sobol sensitivity indices (SIs) can then be calculated with relative ease using the calibrated response surface. The proposed methodology is demonstrated by calculating the total sensitivity of the maximum blade root bending moment of the WindPACT 5 MW reference model to four turbulence input parameters: a reference mean wind speed, a reference turbulence intensity, the Kaimal length scale, and a novel parameter reflecting the nonstationarity present in the inflow turbulence. The input/output data used to calibrate the response surface were generated for a previous project. The fit of the calibrated response surface is evaluated in terms of error between the model and the training data and in terms of the convergence. The Sobol SIs are calculated using the calibrated response surface, and their convergence is examined. The Sobol SIs reveal that, of the four turbulence parameters examined in this paper, the variance caused by the Kaimal length scale and the nonstationarity parameter is negligible. Thus, the findings in this paper represent the first systematic evidence that stochastic wind turbine load response statistics can be modeled purely by mean wind speed and turbulence intensity.
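
    A minimal sketch of the Sobol step, using a toy polynomial surrogate and standardized inputs instead of the WindPACT model and its calibrated response surface, is the pick-freeze (Jansen) estimator of the first-order indices:

```python
# Hedged sketch (toy surrogate, made-up parameter names): first-order Sobol indices
# estimated from a cheap polynomial response surface with the pick-freeze (Jansen) estimator.
import numpy as np

def surrogate(X):
    """Toy polynomial response surface in 4 standardized inputs on [0, 1]."""
    u, ti, L_k, ns = X.T
    return 3.0*u + 2.0*ti + 0.8*u*ti + 0.05*L_k + 0.02*ns**2

rng = np.random.default_rng(2)
n, d = 20000, 4
A, B = rng.uniform(0, 1, (2, n, d))
fA, fB = surrogate(A), surrogate(B)
var = np.var(np.concatenate([fA, fB]))

for i, name in enumerate(["mean wind speed", "turbulence intensity",
                          "Kaimal length scale", "nonstationarity"]):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                       # freeze factor i at B's values
    fABi = surrogate(ABi)
    Vi = var - 0.5*np.mean((fB - fABi)**2)    # Jansen estimator of V(E[Y|X_i])
    print(f"S_{name} ~ {Vi/var: .3f}")
```

    With this toy surrogate the indices for the last two inputs come out near zero, mirroring the abstract's finding that the length scale and the nonstationarity parameter contribute little.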

  13. Density, Viscosity and Surface Tension of Binary Mixtures of 1-Butyl-1-Methylpyrrolidinium Tricyanomethanide with Benzothiophene.

    PubMed

    Domańska, Urszula; Królikowska, Marta; Walczak, Klaudia

    2014-01-01

    The effects of temperature and composition on the density and viscosity of pure benzothiophene and ionic liquid (IL), and those of the binary mixtures containing the IL 1-butyl-1-methylpyrrolidinium tricyanomethanide ([BMPYR][TCM] + benzothiophene), are reported at six temperatures (308.15, 318.15, 328.15, 338.15, 348.15 and 358.15) K and ambient pressure. The temperature dependences of the density and viscosity were represented by an empirical second-order polynomial and by the Vogel-Fulcher-Tammann equation, respectively. The density and viscosity variations with composition were described by polynomials. Excess molar volumes and viscosity deviations were calculated and correlated by Redlich-Kister polynomial expansions. The surface tensions of benzothiophene, pure IL and binary mixtures of ([BMPYR][TCM] + benzothiophene) were measured at atmospheric pressure at four temperatures (308.15, 318.15, 328.15 and 338.15) K. The surface tension deviations were calculated and correlated by a Redlich-Kister polynomial expansion. The temperature dependence of the interfacial tension was used to evaluate the surface entropy, the surface enthalpy, the critical temperature, the surface energy and the parachor for the pure IL. These measurements were carried out to complete the information on the influence of temperature and composition on the physicochemical properties of the selected IL, which was chosen as a possible new entrainer in the separation of sulfur compounds from fuels. A qualitative analysis of these quantities in terms of molecular interactions is reported. The obtained results indicate that the interactions of the IL with benzothiophene are strongly dependent on packing effects and hydrogen bonding of this IL with the polar solvent.
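
    The Redlich-Kister step can be sketched as a linear least-squares fit; the numbers below are synthetic and only illustrate the form V^E = x1*x2*sum_k A_k*(x1 - x2)^k:

```python
# Illustrative sketch (synthetic numbers, not the measured data): fit a Redlich-Kister
# expansion V^E = x1*x2 * sum_k A_k (x1 - x2)^k to excess molar volumes by least squares.
import numpy as np

def redlich_kister_matrix(x1, order=3):
    x2 = 1.0 - x1
    return np.column_stack([x1 * x2 * (x1 - x2) ** k for k in range(order + 1)])

x1 = np.linspace(0.05, 0.95, 15)                     # mole fraction of component 1
rng = np.random.default_rng(3)
vE = -0.6*x1*(1 - x1) + 0.1*x1*(1 - x1)*(2*x1 - 1) + rng.normal(0, 0.005, x1.size)

M = redlich_kister_matrix(x1)
A, *_ = np.linalg.lstsq(M, vE, rcond=None)           # Redlich-Kister coefficients A_k
vE_fit = M @ A                                       # correlated excess molar volumes
print(np.round(A, 3))
```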

  14. Assessing the Multidimensional Relationship Between Medication Beliefs and Adherence in Older Adults With Hypertension Using Polynomial Regression.

    PubMed

    Dillon, Paul; Phillips, L Alison; Gallagher, Paul; Smith, Susan M; Stewart, Derek; Cousins, Gráinne

    2018-02-05

    The Necessity-Concerns Framework (NCF) is a multidimensional theory describing the relationship between patients' positive and negative evaluations of their medication which interplay to influence adherence. Most studies evaluating the NCF have failed to account for the multidimensional nature of the theory, placing the separate dimensions of medication "necessity beliefs" and "concerns" onto a single dimension (e.g., the Beliefs about Medicines Questionnaire-difference score model). To assess the multidimensional effect of patient medication beliefs (concerns and necessity beliefs) on medication adherence using polynomial regression with response surface analysis. Community-dwelling older adults >65 years (n = 1,211) presenting their own prescription for antihypertensive medication to 106 community pharmacies in the Republic of Ireland rated their concerns and necessity beliefs to antihypertensive medications at baseline and their adherence to antihypertensive medication at 12 months via structured telephone interview. Confirmatory polynomial regression found the difference-score model to be inaccurate; subsequent exploratory analysis identified a quadratic model to be the best-fitting polynomial model. Adherence was lowest among those with strong medication concerns and weak necessity beliefs, and adherence was greatest for those with weak concerns and strong necessity beliefs (slope β = -0.77, p<.001; curvature β = -0.26, p = .004). However, novel nonreciprocal effects were also observed; patients with simultaneously high concerns and necessity beliefs had lower adherence than those with simultaneously low concerns and necessity beliefs (slope β = -0.36, p = .004; curvature β = -0.25, p = .003). The difference-score model fails to account for the potential nonreciprocal effects. Results extend evidence supporting the use of polynomial regression to assess the multidimensional effect of medication beliefs on adherence.

  15. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits

    NASA Technical Reports Server (NTRS)

    Chang, T. S.

    1974-01-01

    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

  16. Application of mathematical model methods for optimization tasks in construction materials technology

    NASA Astrophysics Data System (ADS)

    Fomina, E. V.; Kozhukhova, N. I.; Sverguzova, S. V.; Fomin, A. E.

    2018-05-01

    In this paper, the regression equations method for the design of construction materials was studied. Regression and polynomial equations representing the correlation between the studied parameters were proposed. The logic design and software interface of the regression equations method were focused on parameter optimization to provide an energy-saving effect at the design stage of autoclave aerated concrete, considering the replacement of traditionally used quartz sand by a coal-mining by-product such as argillite. The mathematical model, represented by a quadratic polynomial for the design of experiment, was obtained using calculated and experimental data. This allowed the estimation of the relationship between the composition and the final properties of the aerated concrete. The response surface, graphically presented as a nomogram, allowed the estimation of concrete properties in response to variation of the composition within the x-space. The optimal range of argillite content was obtained, leading to a reduction of raw material demand, development of the target plastic strength of the aerated concrete as well as a reduction of curing time before autoclave treatment. Generally, this method allows the design of autoclave aerated concrete with the required performance without additional resource and time costs.

  17. Real-Time Curvature Defect Detection on Outer Surfaces Using Best-Fit Polynomial Interpolation

    PubMed Central

    Golkar, Ehsan; Prabuwono, Anton Satria; Patel, Ahmed

    2012-01-01

    This paper presents a novel, real-time defect detection system, based on a best-fit polynomial interpolation, that inspects the conditions of outer surfaces. The defect detection system is an enhanced feature extraction method that employs this technique to inspect the flatness, waviness, blob, and curvature faults of these surfaces. The proposed method has been performed, tested, and validated on numerous pipes and ceramic tiles. The results illustrate that physical defects such as abnormal, popped-up blobs are recognized completely, and that flatness, waviness, and curvature faults are detected simultaneously. PMID:23202186
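
    The core idea, stripped of the vision pipeline, is residual thresholding against a best-fit polynomial; a hedged one-dimensional sketch with a simulated profile:

```python
# Minimal sketch of the underlying idea (not the authors' vision pipeline): fit a
# low-order best-fit polynomial to a surface profile and flag samples whose residual
# exceeds a tolerance as curvature/blob defects.
import numpy as np

def detect_defects(z, order=3, tol=0.05):
    """Return indices of profile samples deviating from the best-fit polynomial."""
    x = np.linspace(-1.0, 1.0, z.size)
    coeffs = np.polynomial.polynomial.polyfit(x, z, order)
    residual = z - np.polynomial.polynomial.polyval(x, coeffs)
    return np.flatnonzero(np.abs(residual) > tol)

profile = 0.2 * np.linspace(-1, 1, 200)**2           # nominal curved surface
profile[120:125] += 0.3                              # simulated popped-up blob
print(detect_defects(profile))                       # -> indices around 120..124
```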

  18. Error-Based Design Space Windowing

    NASA Technical Reports Server (NTRS)

    Papila, Melih; Papila, Nilay U.; Shyy, Wei; Haftka, Raphael T.; Fitz-Coy, Norman

    2002-01-01

    Windowing of the design space is considered in order to reduce the bias errors due to low-order polynomial response surfaces (RS). Standard design space windowing (DSW) selects a region of interest by setting a requirement on the response level and checks it using global RS predictions over the design space. This approach, however, is vulnerable because RS modeling errors may lead to zooming in on the wrong region. The approach is modified by introducing an eigenvalue error measure based on a point-to-point mean squared error criterion. Two examples are presented to demonstrate the benefit of the error-based DSW.

  19. Integrating uniform design and response surface methodology to optimize thiacloprid suspension

    PubMed Central

    Li, Bei-xing; Wang, Wei-chang; Zhang, Xian-peng; Zhang, Da-xia; Mu, Wei; Liu, Feng

    2017-01-01

    A model 25% suspension concentrate (SC) of thiacloprid was adopted to evaluate an integrative approach of uniform design and response surface methodology. Tersperse2700, PE1601, xanthan gum and veegum were the four experimental factors, and the aqueous separation ratio and viscosity were the two dependent variables. Linear and quadratic polynomial models of stepwise regression and partial least squares were adopted to test the fit of the experimental data. Verification tests revealed satisfactory agreement between the experimental and predicted data. The measured values for the aqueous separation ratio and viscosity were 3.45% and 278.8 mPa·s, respectively, and the relative errors of the predicted values were 9.57% and 2.65%, respectively (prepared under the proposed conditions). Comprehensive benefits could also be obtained by appropriately adjusting the amount of certain adjuvants based on practical requirements. Integrating uniform design and response surface methodology is an effective strategy for optimizing SC formulas. PMID:28383036

  20. Combined mixture-process variable approach: a suitable statistical tool for nanovesicular systems optimization.

    PubMed

    Habib, Basant A; AbouGhaly, Mohamed H H

    2016-06-01

    This study aims to illustrate the applicability of combined mixture-process variable (MPV) design and modeling for optimization of nanovesicular systems. The D-optimal experimental plan studied the influence of three mixture components (MCs) and two process variables (PVs) on lercanidipine transfersomes. The MCs were phosphatidylcholine (A), sodium glycocholate (B) and lercanidipine hydrochloride (C), while the PVs were glycerol amount in the hydration mixture (D) and sonication time (E). The studied responses were Y1: particle size, Y2: zeta potential and Y3: entrapment efficiency percent (EE%). Polynomial equations were used to study the influence of MCs and PVs on each response. Response surface methodology and multiple response optimization were applied to optimize the formulation with the goals of minimizing Y1 and maximizing Y2 and Y3. The obtained polynomial models had prediction R(2) values of 0.645, 0.947 and 0.795 for Y1, Y2 and Y3, respectively. Contour, Piepel's response trace, perturbation, and interaction plots were drawn for responses representation. The optimized formulation, A: 265 mg, B: 10 mg, C: 40 mg, D: zero g and E: 120 s, had desirability of 0.9526. The actual response values for the optimized formulation were within the two-sided 95% prediction intervals and were close to the predicted values with maximum percent deviation of 6.2%. This indicates the validity of combined MPV design and modeling for optimization of transfersomal formulations as an example of nanovesicular systems.

  1. A dynamic multi-level optimal design method with embedded finite-element modeling for power transformers

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Ho, Siu-lau; Fu, Weinong

    2018-05-01

    This paper proposes a dynamic multi-level optimal design method for power transformer design optimization (TDO) problems. A response surface generated by second-order polynomial regression analysis is updated dynamically by adding more design points, which are selected by Shifted Hammersley Method (SHM) and calculated by finite-element method (FEM). The updating stops when the accuracy requirement is satisfied, and optimized solutions of the preliminary design are derived simultaneously. The optimal design level is modulated through changing the level of error tolerance. Based on the response surface of the preliminary design, a refined optimal design is added using multi-objective genetic algorithm (MOGA). The effectiveness of the proposed optimal design method is validated through a classic three-phase power TDO problem.

  2. Synthesis and Process Optimization of Electrospun PEEK-Sulfonated Nanofibers by Response Surface Methodology

    PubMed Central

    Boaretti, Carlo; Roso, Martina; Lorenzetti, Alessandra; Modesti, Michele

    2015-01-01

    In this study electrospun nanofibers of partially sulfonated polyether ether ketone have been produced as a preliminary step for a possible development of composite proton exchange membranes for fuel cells. Response surface methodology has been employed for the modelling and optimization of the electrospinning process, using a Box-Behnken design. The investigation, based on a second order polynomial model, has been focused on the analysis of the effect of both process (voltage, tip-to-collector distance, flow rate) and material (sulfonation degree) variables on the mean fiber diameter. The final model has been verified by a series of statistical tests on the residuals and validated by a comparison procedure of samples at different sulfonation degrees, realized according to optimized conditions, for the production of homogeneous thin nanofibers. PMID:28793427

  3. Synthesis and Process Optimization of Electrospun PEEK-Sulfonated Nanofibers by Response Surface Methodology.

    PubMed

    Boaretti, Carlo; Roso, Martina; Lorenzetti, Alessandra; Modesti, Michele

    2015-07-07

    In this study electrospun nanofibers of partially sulfonated polyether ether ketone have been produced as a preliminary step for a possible development of composite proton exchange membranes for fuel cells. Response surface methodology has been employed for the modelling and optimization of the electrospinning process, using a Box-Behnken design. The investigation, based on a second order polynomial model, has been focused on the analysis of the effect of both process (voltage, tip-to-collector distance, flow rate) and material (sulfonation degree) variables on the mean fiber diameter. The final model has been verified by a series of statistical tests on the residuals and validated by a comparison procedure of samples at different sulfonation degrees, realized according to optimized conditions, for the production of homogeneous thin nanofibers.

  4. Quadratic polynomial interpolation on triangular domain

    NASA Astrophysics Data System (ADS)

    Li, Ying; Zhang, Congcong; Yu, Qian

    2018-04-01

    In the simulation of natural terrain, the continuity of the sample points is not always consistent, and traditional interpolation methods often cannot faithfully reflect the shape information contained in the data points. A new method for constructing a polynomial interpolation surface on a triangular domain is therefore proposed. First, the scattered spatial data points are projected onto a plane and triangulated. Second, a C1-continuous piecewise quadratic polynomial patch is constructed at each vertex, with all patches required to be as close as possible to the linear interpolant. Finally, the unknown quantities are obtained by minimizing the objective functions, with the boundary points treated specially. The resulting surfaces preserve as many properties of the data points as possible while satisfying the given accuracy and continuity requirements, without becoming overly convex. The new method is simple to compute, has good local properties, and is applicable to shape fitting of mines, exploratory wells and the like. Experimental results for the new surface are given.

  5. Cylinder stitching interferometry: with and without overlap regions

    NASA Astrophysics Data System (ADS)

    Peng, Junzheng; Chen, Dingfu; Yu, Yingjie

    2017-06-01

    Since the cylinder surface is closed and periodic in the azimuthal direction, existing stitching methods cannot be used to yield the 360° form map. To address this problem, this paper presents two methods for stitching interferometry of a cylinder: one requires overlap regions, and the other does not. For the former, the first-order approximation of the cylindrical coordinate transformation is used to build the stitching model, from which the relative parameters between adjacent sub-apertures can be calculated. For the latter, a set of orthogonal polynomials, termed Legendre Fourier (LF) polynomials, is developed. With these polynomials, individual sub-aperture data can be expanded as a composition of the inherent form of the partial cylinder surface and additional misalignment parameters. The 360° form map can then be acquired by simultaneously fitting all sub-aperture data with the LF polynomials. Finally, the two proposed methods are compared under various conditions. The merits and drawbacks of each stitching method are revealed to provide guidance for acquiring the 360° form map of a precision cylinder.

  6. Frequency domain system identification methods - Matrix fraction description approach

    NASA Technical Reports Server (NTRS)

    Horta, Luca G.; Juang, Jer-Nan

    1993-01-01

    This paper presents the use of matrix fraction descriptions for least-squares curve fitting of frequency spectra to compute two matrix polynomials. The matrix polynomials are an intermediate step in obtaining a linearized representation of the experimental transfer function. Two approaches are presented: first, the matrix polynomials are identified using an estimated transfer function; second, the matrix polynomials are identified directly from the cross/auto spectra of the input and output signals. A set of Markov parameters is computed from the polynomials, and subsequently realization theory is used to recover a minimum-order state space model. Unevenly spaced frequency response functions may be used. Results from a simple numerical example and an experiment are discussed to highlight some of the important aspects of the algorithm.

  7. Permutation invariant polynomial neural network approach to fitting potential energy surfaces. II. Four-atom systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Jun; Jiang, Bin; Guo, Hua, E-mail: hguo@unm.edu

    2013-11-28

    A rigorous, general, and simple method to fit global and permutation invariant potential energy surfaces (PESs) using neural networks (NNs) is discussed. This so-called permutation invariant polynomial neural network (PIP-NN) method imposes permutation symmetry by using in its input a set of symmetry functions based on PIPs. For systems with more than three atoms, it is shown that the number of symmetry functions in the input vector needs to be larger than the number of internal coordinates in order to include both the primary and secondary invariant polynomials. This PIP-NN method is successfully demonstrated in three atom-triatomic reactive systems, resulting in full-dimensional global PESs with average errors on the order of meV. These PESs are used in full-dimensional quantum dynamical calculations.

  8. Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Murcia, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay

    Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating independent surrogates for the mean and standard deviation of each output with respect to the inflow realizations. A global sensitivity analysis shows that the turbulent inflow realization has a bigger impact on the total distribution of equivalent fatigue loads than the shear coefficient or yaw misalignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as: estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces. In conclusion, the surrogates are a way to obtain power and load estimation under site specific characteristics without sharing the proprietary aeroelastic design.

  9. Uncertainty propagation through an aeroelastic wind turbine model using polynomial surrogates

    DOE PAGES

    Murcia, Juan Pablo; Réthoré, Pierre-Elouan; Dimitrov, Nikolay; ...

    2017-07-17

    Polynomial surrogates are used to characterize the energy production and lifetime equivalent fatigue loads for different components of the DTU 10 MW reference wind turbine under realistic atmospheric conditions. The variability caused by different turbulent inflow fields is captured by creating independent surrogates for the mean and standard deviation of each output with respect to the inflow realizations. A global sensitivity analysis shows that the turbulent inflow realization has a bigger impact on the total distribution of equivalent fatigue loads than the shear coefficient or yaw misalignment. The methodology presented extends the deterministic power and thrust coefficient curves to uncertainty models and adds new variables like damage equivalent fatigue loads in different components of the turbine. These surrogate models can then be implemented inside other work-flows such as: estimation of the uncertainty in annual energy production due to wind resource variability and/or robust wind power plant layout optimization. It can be concluded that it is possible to capture the global behavior of a modern wind turbine and its uncertainty under realistic inflow conditions using polynomial response surfaces. In conclusion, the surrogates are a way to obtain power and load estimation under site specific characteristics without sharing the proprietary aeroelastic design.

  10. Preparation of Curcumin Loaded Egg Albumin Nanoparticles Using Acetone and Optimization of Desolvation Process.

    PubMed

    Aniesrani Delfiya, D S; Thangavel, K; Amirtham, D

    2016-04-01

    In this study, acetone was used as a desolvating agent to prepare curcumin-loaded egg albumin nanoparticles. Response surface methodology was employed to analyze the influence of the process parameters, namely the concentration (5-15%w/v) and pH (5-7) of the egg albumin solution, on solubility, curcumin loading and entrapment efficiency, nanoparticle yield and particle size. The optimum processing conditions obtained from the response surface analysis were an egg albumin solution concentration of 8.85%w/v and a pH of 5. At this optimum condition, a solubility of 33.57%, curcumin loading of 4.125%, curcumin entrapment efficiency of 55.23%, yield of 72.85% and particle size of 232.6 nm were obtained, and these values agreed with those predicted by the polynomial model equations. Thus, the model equations generated for each response were validated and can be used to predict the response values at any concentration and pH.

  11. Polynomial approximation of Poincare maps for Hamiltonian system

    NASA Technical Reports Server (NTRS)

    Froeschle, Claude; Petit, Jean-Marc

    1992-01-01

    Different methods are proposed and tested for transforming a non-linear differential system, and more particularly a Hamiltonian one, into a map without integrating the whole orbit as in the well-known Poincare return map technique. We construct piecewise polynomial maps by coarse-graining the phase-space surface of section into parallelograms and using either only values of the Poincare maps at the vertices or also the gradient information at the nearest neighbors to define a polynomial approximation within each cell. The numerical experiments are in good agreement with both the real symplectic and Poincare maps.

  12. Analytical Solutions for the Resonance Response of Goupillaud-type Elastic Media Using Z-transform Methods

    DTIC Science & Technology

    2012-02-01

    ... using z-transform methods. The determinant of the resulting global system matrix in the z-space, |Am|, is a palindromic polynomial with real coefficients. The zeros of the palindromic polynomial are distinct ... Goupillaud-type multilayered media. In addition, the present treatment uses a global matrix method that is attributed to Knopoff [16], rather than the ...

  13. Estimation of Supersonic Stage Separation Aerodynamics of Winged-Body Launch Vehicles Using Response Surface Methods

    NASA Technical Reports Server (NTRS)

    Erickson, Gary E.

    2010-01-01

    Response surface methodology was used to estimate the longitudinal stage separation aerodynamic characteristics of a generic, bimese, winged multi-stage launch vehicle configuration at supersonic speeds in the NASA LaRC Unitary Plan Wind Tunnel. The Mach 3 staging was dominated by shock wave interactions between the orbiter and booster vehicles throughout the relative spatial locations of interest. The inference space was partitioned into several contiguous regions within which the separation aerodynamics were presumed to be well-behaved and estimable using central composite designs capable of fitting full second-order response functions. The underlying aerodynamic response surfaces of the booster vehicle in belly-to-belly proximity to the orbiter vehicle were estimated using piecewise-continuous lower-order polynomial functions. The quality of fit and prediction capabilities of the empirical models were assessed in detail, and the issue of subspace boundary discontinuities was addressed. Augmenting the central composite designs to full third-order using computer-generated D-optimality criteria was evaluated. The usefulness of central composite designs, the subspace sizing, and the practicality of fitting lower-order response functions over a partitioned inference space dominated by highly nonlinear and possibly discontinuous shock-induced aerodynamics are discussed.

  14. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    NASA Astrophysics Data System (ADS)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere common place, run times for large complex basin models can still be on the order of days to weeks, thus, limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos for uncertainty quantification, which in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at 241 different times each. Numerical experiments show that polynomial chaos is an effective and robust method for quantifying uncertainty in fully-integrated hydrologic simulations, which provides a rich set of features and is computationally efficient. Our approach has the potential for significant speedup over existing sampling based methods when the number of uncertain model parameters is modest ( ≤ 20). To our knowledge, this is the first implementation of the algorithm in a comprehensive, fully-integrated, physically-based three-dimensional hydrosystem model.
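
    A minimal sketch of non-intrusive polynomial chaos by least-squares regression, assuming a toy two-parameter model with Gaussian inputs rather than a HydroGeoSphere run, shows how the mean and variance follow directly from the expansion coefficients:

```python
# Hedged sketch (toy two-parameter model, not a HydroGeoSphere run): non-intrusive
# polynomial chaos by least-squares regression with probabilists' Hermite polynomials
# for standard-normal parameters; mean and variance follow from the coefficients.
from itertools import product
from math import factorial
import numpy as np
from numpy.polynomial import hermite_e as H

def model(xi):                                   # stand-in for the expensive simulator
    k, r = xi.T
    return np.exp(0.3 * k) + 0.5 * r + 0.1 * k * r

order = 3
multi_index = [(i, j) for i, j in product(range(order + 1), repeat=2) if i + j <= order]

rng = np.random.default_rng(4)
xi = rng.standard_normal((200, 2))               # training runs in standard-normal space
Psi = np.column_stack([
    H.hermeval(xi[:, 0], np.eye(order + 1)[i]) * H.hermeval(xi[:, 1], np.eye(order + 1)[j])
    for i, j in multi_index
])                                               # multivariate basis He_i(xi1) * He_j(xi2)
c, *_ = np.linalg.lstsq(Psi, model(xi), rcond=None)

norms = np.array([factorial(i) * factorial(j) for i, j in multi_index])  # E[(He_i He_j)^2]
mean = c[0]                                      # coefficient of the constant basis term
var = np.sum(c[1:] ** 2 * norms[1:])
print(mean, var)
```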

  15. Application of the polynomial chaos expansion to approximate the homogenised response of the intervertebral disc.

    PubMed

    Karajan, N; Otto, D; Oladyshkin, S; Ehlers, W

    2014-10-01

    A possibility to simulate the mechanical behaviour of the human spine is given by modelling the stiffer structures, i.e. the vertebrae, as a discrete multi-body system (MBS), whereas the softer connecting tissue, i.e. the softer intervertebral discs (IVD), is represented in a continuum-mechanical sense using the finite-element method (FEM). From a modelling point of view, the mechanical behaviour of the IVD can be included into the MBS in two different ways. They can either be computed online in a so-called co-simulation of a MBS and a FEM or offline in a pre-computation step, where a representation of the discrete mechanical response of the IVD needs to be defined in terms of the applied degrees of freedom (DOF) of the MBS. For both methods, an appropriate homogenisation step needs to be applied to obtain the discrete mechanical response of the IVD, i.e. the resulting forces and moments. The goal of this paper was to present an efficient method to approximate the mechanical response of an IVD in an offline computation. In a previous paper (Karajan et al. in Biomech Model Mechanobiol 12(3):453-466, 2012), it was proven that a cubic polynomial for the homogenised forces and moments of the FE model is a suitable choice to approximate the purely elastic response as a coupled function of the DOF of the MBS. In this contribution, the polynomial chaos expansion (PCE) is applied to generate these high-dimensional polynomials. Following this, the main challenge is to determine suitable deformation states of the IVD for pre-computation, such that the polynomials can be constructed with high accuracy and low numerical cost. For the sake of a simple verification, the coupling method and the PCE are applied to the same simplified motion segment of the spine as was used in the previous paper, i.e. two cylindrical vertebrae and a cylindrical IVD in between. In a next step, the loading rates are included as variables in the polynomial response functions to account for a more realistic response of the overall viscoelastic intervertebral disc. Herein, an additive split into elastic and inelastic contributions to the homogenised forces and moments is applied.

  16. Statistical optimization of medium components and growth conditions by response surface methodology to enhance phenol degradation by Pseudomonas putida.

    PubMed

    Annadurai, Gurusamy; Ling, Lai Yi; Lee, Jiunn-Fwu

    2008-02-28

    In this work, a four-factor Box-Behnken design was employed in combination with response surface methodology (RSM) to optimize the medium composition for the degradation of phenol by Pseudomonas putida (ATCC 31800). A mathematical model was then developed to show the effect of each medium component and their interactions on the biodegradation of phenol. The response surface method used four factors, namely glucose, yeast extract, ammonium sulfate and sodium chloride, which also enabled the identification of significant interaction effects in the batch studies. The biodegradation of phenol by Pseudomonas putida (ATCC 31800) was found to be pH-dependent, with the maximum degradation capacity of the microorganism obtained at 30 degrees C when the phenol concentration was 0.2 g/L and the pH of the solution was 7.0. A second-order polynomial regression model was used for analysis of the experiment. Cubic and quadratic terms were incorporated into the regression model through variable selection procedures. The experimental values are in good agreement with the predicted values, and the correlation coefficient was found to be 0.9980.

  17. Stitching interferometry of a full cylinder without using overlap areas

    NASA Astrophysics Data System (ADS)

    Peng, Junzheng; Chen, Dingfu; Yu, Yingjie

    2017-08-01

    Traditional stitching interferometry requires finding out the overlap correspondence and computing the discrepancies in the overlap regions, which makes it complex and time-consuming to obtain the 360° form map of a cylinder. In this paper, we develop a cylinder stitching model based on a new set of orthogonal polynomials, termed Legendre Fourier (LF) polynomials. With these polynomials, individual subaperture data can be expanded as a composition of the inherent form of a partial cylinder surface and additional misalignment parameters. Then the 360° form map can be acquired by simultaneously fitting all subaperture data with the LF polynomials. A metal shaft was measured to experimentally verify the proposed method. In contrast to traditional stitching interferometry, our technique does not require overlapping of adjacent subapertures, thus significantly reducing the measurement time and making the stitching algorithm simple.
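
    A hedged sketch of the basis construction, with an illustrative normalization and synthetic cylinder data rather than the measured shaft, combines Legendre polynomials along the axis with Fourier harmonics around the closed azimuthal direction and fits the coefficients by linear least squares:

```python
# Hedged sketch of the Legendre Fourier fitting idea (normalization and term ordering
# are illustrative, not necessarily the paper's): Legendre polynomials describe the
# axial direction, Fourier harmonics the periodic azimuthal direction.
import numpy as np
from numpy.polynomial import legendre as L

def lf_basis(z, theta, n_leg=3, n_four=3):
    """Columns: P_k(z) * {1, cos(m*theta), sin(m*theta)}."""
    cols = []
    for k in range(n_leg + 1):
        Pk = L.legval(z, np.eye(n_leg + 1)[k])
        cols.append(Pk)
        for m in range(1, n_four + 1):
            cols.append(Pk * np.cos(m * theta))
            cols.append(Pk * np.sin(m * theta))
    return np.column_stack(cols)

rng = np.random.default_rng(5)
z = rng.uniform(-1, 1, 2000)                       # normalized axial coordinate
theta = rng.uniform(0, 2*np.pi, 2000)              # azimuthal angle (periodic)
form = 0.1*z + 0.05*np.cos(2*theta) + 0.02*z*np.sin(theta) + rng.normal(0, 0.002, z.size)

B = lf_basis(z, theta)
c, *_ = np.linalg.lstsq(B, form, rcond=None)       # LF coefficients of the 360-degree form map
print(np.round(c[:8], 3))
```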

  18. Error reduction in three-dimensional metrology combining optical and touch probe data

    NASA Astrophysics Data System (ADS)

    Gerde, Janice R.; Christens-Barry, William A.

    2010-08-01

    Analysis of footwear under the Harmonized Tariff Schedule of the United States (HTSUS) is partly based on identifying the boundary ("parting line") between the "external surface area upper" (ESAU) and the sample's sole. Often, that boundary is obscured. We establish the parting line as the curved intersection between the sample outer surface and its insole surface. The outer surface is determined by discrete point cloud coordinates obtained using a laser scanner. The insole surface is defined by point cloud data obtained using a touch probe device, a coordinate measuring machine (CMM). Because these point cloud data sets do not overlap spatially, a polynomial surface is fitted to the insole data and extended to intersect a mesh fitted to the outer surface point cloud. This line of intersection defines the ESAU boundary, permitting further fractional area calculations to proceed. The defined parting line location is sensitive to the polynomial used to fit the experimental data. Extrapolation to the intersection with the ESAU can heighten this sensitivity. We discuss a methodology for transforming these data into a common reference frame. Three error sources are considered: measurement error in the point cloud coordinates, error from fitting a polynomial surface to a point cloud and then extrapolating beyond the data set, and error from the reference frame transformation. These error sources can influence calculated surface areas. We describe experiments to assess error magnitude, the sensitivity of calculated results to these errors, and ways of minimizing error impact on calculated quantities. Ultimately, we must ensure that the statistical error from these procedures is minimized and within acceptance criteria.

  19. Improvements to a Response Surface Thermal Model for Orion Mated to the International Space Station

    NASA Technical Reports Server (NTRS)

    Miller, StephenW.; Walker, William Q.

    2011-01-01

    This study is an extension of previous work to evaluate the applicability of Design of Experiments (DOE)/Response Surface Methodology to on-orbit thermal analysis. The goal was to determine if the methodology could produce a Response Surface Equation (RSE) that predicted the thermal model temperature results within ±10 F. An RSE is a polynomial expression that can then be used to predict temperatures for a defined range of factor combinations. Based on suggestions received from the previous work, this study used a model with simpler geometry, considered polynomials up to fifth order, and evaluated orbital temperature variations to establish a minimum and maximum temperature for each component. A simplified Outer Mold Line (OML) thermal model of the Orion spacecraft was used in this study. The factors chosen were the vehicle's Yaw, Pitch, and Roll (defining the on-orbit attitude), the Beta angle (restricted to positive beta angles from 0 to 75), and the environmental constants (varying from cold to hot). All factors were normalized from their native ranges to a non-dimensional range from -1.0 to 1.0. Twenty-three components from the OML were chosen and the minimum and maximum orbital temperatures were calculated for each to produce forty-six responses for the DOE model. A customized DOE case matrix of 145 analysis cases was developed which used analysis points at the factor corners, mid-points, and center. From this data set, RSEs were developed which consisted of cubic, quartic, and fifth order polynomials. The results presented are for the fifth order RSE. The RSE results were then evaluated for agreement with the analytical model predictions to produce a ±3σ error band. Forty of the 46 responses had a ±3σ value of 10 F or less. Encouraged by this initial success, two additional sets of verification cases were selected. One contained 20 cases, the other 50 cases. These cases were evaluated both with the fifth order RSE and with the analytical model. For the maximum temperature predictions, 12 of the 23 components had all predictions within ±10 F and 17 were within ±20 F. For the minimum temperature predictions, only 4 of the 23 components (the four radiator temperatures) were within the 10 F goal. The maximum temperature RSEs were then run through 59,049 screening cases. The RSE predictions were then filtered to find the 55 cases that produced the hottest temperatures. These 55 cases were then analyzed using the thermal model and the results compared against the RSE predictions. As noted earlier, 12 of the 23 responses were within ±10 F and 17 within ±20 F. These results demonstrate that, if properly formulated, an RSE can provide a reliable, fast temperature prediction. Despite this progress, additional work is needed to determine why the minimum temperature responses and 6 of the hot temperature responses did not produce reliable RSEs. Recommended focus areas are the model itself (arithmetic vs. diffusion nodes) and seeking consultations with statistical application experts.

  20. Optimization of Geothermal Well Placement under Geological Uncertainty

    NASA Astrophysics Data System (ADS)

    Schulte, Daniel O.; Arnold, Dan; Demyanov, Vasily; Sass, Ingo; Geiger, Sebastian

    2017-04-01

    Well placement optimization is critical to the commercial success of geothermal projects. However, uncertainties of geological parameters prohibit optimization based on a single scenario of the subsurface, particularly when few expensive wells are to be drilled. The optimization of borehole locations is usually based on numerical reservoir models to predict reservoir performance and entails the choice of objectives to optimize (total enthalpy, minimum enthalpy rate, production temperature) and the development options to adjust (well location, pump rate, difference in production and injection temperature). Optimization traditionally requires trying different development options on a single geological realization, yet many different interpretations of the subsurface are possible. Therefore, we aim to optimize across a range of representative geological models to account for geological uncertainty in geothermal optimization. We present an approach that uses a response surface methodology based on a large number of geological realizations selected by experimental design to optimize the placement of geothermal wells in a realistic field example. A large number of geological scenarios and design options were simulated and the response surfaces were constructed using polynomial proxy models, which consider both geological uncertainties and design parameters. The polynomial proxies were validated against additional simulation runs and shown to provide an adequate representation of the model response for the cases tested. The resulting proxy models allow for the identification of the optimal borehole locations given the mean response of the geological scenarios from the proxy (i.e. maximizing or minimizing the mean response). The approach is demonstrated on the realistic Watt field example by optimizing the borehole locations to maximize the mean heat extraction from the reservoir under geological uncertainty. The training simulations are based on a comprehensive semi-synthetic data set of a hierarchical benchmark case study for a hydrocarbon reservoir, which specifically considers the interpretational uncertainty in the modeling work flow. The optimal choice of boreholes prolongs the time to cold water breakthrough and allows for higher pump rates and increased water production temperatures.

  1. Fitting by Orthonormal Polynomials of Silver Nanoparticles Spectroscopic Data

    NASA Astrophysics Data System (ADS)

    Bogdanova, Nina; Koleva, Mihaela

    2018-02-01

    Our original Orthonormal Polynomial Expansion Method (OPEM), in its one-dimensional version, is applied for the first time to describe silver nanoparticle (NP) spectroscopic data. The weights for the approximation include the experimental errors in the variables. In this way we construct an orthonormal polynomial expansion for approximating the curve on a non-equidistant point grid. The corridors of the given data and the chosen criteria define the optimal behavior of the searched curve. The most important subinterval of the spectral data is investigated, in which the minimum (the surface plasmon resonance absorption) is sought. This study describes Ag nanoparticles produced by a laser approach in a ZnO medium, forming a AgNPs/ZnO nanocomposite heterostructure.
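
    As a hedged stand-in for the OPEM recursion itself, an ordinary weighted polynomial least-squares fit on a non-equidistant grid, with weights taken from the experimental errors, already illustrates the post-processing step of locating the spectral minimum (the data below are synthetic):

```python
# Hedged sketch (ordinary weighted least squares, not the authors' OPEM algorithm):
# fit spectroscopic data on a non-equidistant wavelength grid with weights from the
# experimental errors, then locate the fitted minimum.
import numpy as np
from numpy.polynomial import polynomial as P

rng = np.random.default_rng(6)
wavelength = np.sort(rng.uniform(350.0, 500.0, 60))          # nm, non-equidistant grid
sigma = rng.uniform(0.002, 0.01, wavelength.size)            # per-point experimental error
absorb = 0.5 + ((wavelength - 420.0) / 80.0) ** 2 + rng.normal(0, sigma)

x = (wavelength - wavelength.mean()) / wavelength.std()      # scaled abscissa for conditioning
coeffs = P.polyfit(x, absorb, deg=4, w=1.0 / sigma)          # weights ~ 1 / error
fit = P.polyval(x, coeffs)
lam_min = wavelength[np.argmin(fit)]                         # estimate of the spectral minimum
print(round(lam_min, 1))
```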

  2. Reachability Analysis in Probabilistic Biological Networks.

    PubMed

    Gabr, Haitham; Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2015-01-01

    Extra-cellular molecules trigger a response inside the cell by initiating a signal at special membrane receptors (i.e., sources), which is then transmitted to reporters (i.e., targets) through various chains of interactions among proteins. Understanding whether such a signal can reach from membrane receptors to reporters is essential in studying the cell response to extra-cellular events. This problem is drastically complicated due to the unreliability of the interaction data. In this paper, we develop a novel method, called PReach (Probabilistic Reachability), that precisely computes the probability that a signal can reach from a given collection of receptors to a given collection of reporters when the underlying signaling network is uncertain. This is a very difficult computational problem with no known polynomial-time solution. PReach represents each uncertain interaction as a bi-variate polynomial. It transforms the reachability problem to a polynomial multiplication problem. We introduce novel polynomial collapsing operators that associate polynomial terms with possible paths between sources and targets as well as the cuts that separate sources from targets. These operators significantly shrink the number of polynomial terms and thus the running time. PReach has much better time complexity than the recent solutions for this problem. Our experimental results on real data sets demonstrate that this improvement leads to orders of magnitude of reduction in the running time over the most recent methods. Availability: All the data sets used, the software implemented and the alignments found in this paper are available at http://bioinformatics.cise.ufl.edu/PReach/.
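
    For orientation only, the quantity PReach computes can be defined by brute-force enumeration over the edge states of a tiny uncertain network; the receptors, reporters, edges and probabilities below are hypothetical, and PReach avoids exactly this exponential enumeration through its polynomial collapsing operators:

```python
# Hedged sketch: brute-force reference computation of the source-to-target reachability
# probability in a small uncertain interaction network (hypothetical edges/probabilities).
from itertools import product

edges = {("R", "A"): 0.9, ("A", "T"): 0.8, ("R", "B"): 0.6, ("B", "T"): 0.7}

def reachable(present):
    """Breadth-first search over the edges that are present in this network state."""
    frontier, seen = {"R"}, {"R"}
    while frontier:
        nxt = {v for (u, v), on in zip(edges, present) if on and u in frontier and v not in seen}
        seen |= nxt
        frontier = nxt
    return "T" in seen

prob = 0.0
for state in product([True, False], repeat=len(edges)):      # 2^|E| network states
    p = 1.0
    for ((u, v), pe), on in zip(edges.items(), state):
        p *= pe if on else (1.0 - pe)
    if reachable(state):
        prob += p
print(round(prob, 4))   # probability that a signal from receptor R reaches reporter T
```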

  3. Response Surface Analysis of Experiments with Random Blocks

    DTIC Science & Technology

    1988-09-01

    ... partitioned into a lack-of-fit sum of squares, SSLOF, and a pure error sum of squares, SSPE. The latter is obtained by pooling the pure error sums of squares ... from the blocks. Tests concerning the polynomial effects can then proceed using SSPE as the error term in the denominators of the F test statistics. ... the center point in each of the three blocks is equal to SSPE = 2.0127 with 5 degrees of freedom. Hence, the lack-of-fit sum of squares is SSLOF ...

  4. Polynomial meta-models with canonical low-rank approximations: Numerical insights and comparison to sparse polynomial chaos expansions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Konakli, Katerina, E-mail: konakli@ibk.baug.ethz.ch; Sudret, Bruno

    2016-09-15

    The growing need for uncertainty analysis of complex computational models has led to an expanding use of meta-models across engineering and sciences. The efficiency of meta-modeling techniques relies on their ability to provide statistically-equivalent analytical representations based on relatively few evaluations of the original model. Polynomial chaos expansions (PCE) have proven a powerful tool for developing meta-models in a wide range of applications; the key idea thereof is to expand the model response onto a basis made of multivariate polynomials obtained as tensor products of appropriate univariate polynomials. The classical PCE approach nevertheless faces the “curse of dimensionality”, namely the exponential increase of the basis size with increasing input dimension. To address this limitation, the sparse PCE technique has been proposed, in which the expansion is carried out on only a few relevant basis terms that are automatically selected by a suitable algorithm. An alternative for developing meta-models with polynomial functions in high-dimensional problems is offered by the newly emerged low-rank approximations (LRA) approach. By exploiting the tensor–product structure of the multivariate basis, LRA can provide polynomial representations in highly compressed formats. Through extensive numerical investigations, we herein first shed light on issues relating to the construction of canonical LRA with a particular greedy algorithm involving a sequential updating of the polynomial coefficients along separate dimensions. Specifically, we examine the selection of optimal rank, stopping criteria in the updating of the polynomial coefficients and error estimation. In the sequel, we confront canonical LRA to sparse PCE in structural-mechanics and heat-conduction applications based on finite-element solutions. Canonical LRA exhibit smaller errors than sparse PCE in cases when the number of available model evaluations is small with respect to the input dimension, a situation that is often encountered in real-life problems. By introducing the conditional generalization error, we further demonstrate that canonical LRA tend to outperform sparse PCE in the prediction of extreme model responses, which is critical in reliability analysis.

  5. Simulation of aspheric tolerance with polynomial fitting

    NASA Astrophysics Data System (ADS)

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong

    2018-01-01

    The shape of an aspheric lens changes as a result of machining errors, which changes the optical transfer function and thus affects the image quality. At present, there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, tolerances obtained by polynomial fitting are allocated on the aspheric surface, and the imaging simulation is carried out with optical imaging software. The analysis is based on a set of aspheric imaging systems. An error is generated within the range of a certain PV value and expressed in the form of a Zernike polynomial, which is added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system is obtained and used as the main evaluation index. It is then evaluated whether the effect of the added error on the MTF of the system meets the requirements for the current PV value; the PV value is changed and the operation repeated until the maximum acceptable PV value is obtained. In line with the actual machining process, errors of various shapes, such as M-type, W-type and random errors, are considered. The new method provides a reference for actual freeform surface machining technology.

  6. Modelling local GPS/levelling geoid undulations using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Kavzoglu, T.; Saka, M. H.

    2005-04-01

    The use of GPS for establishing height control in an area where levelling data are available can involve the so-called GPS/levelling technique. Modelling of the GPS/levelling geoid undulations has usually been carried out using polynomial surface fitting, least-squares collocation (LSC) and finite-element methods. Artificial neural networks (ANNs) have recently been used for many investigations, and proven to be effective in solving complex problems represented by noisy and missing data. In this study, a feed-forward ANN structure, learning the characteristics of the training data through the back-propagation algorithm, is employed to model the local GPS/levelling geoid surface. The GPS/levelling geoid undulations for Istanbul, Turkey, were estimated from GPS and precise levelling measurements obtained during a field study in the period 1998-99. The results are compared to those produced by two well-known conventional methods, namely polynomial fitting and LSC, in terms of root mean square error (RMSE) that ranged from 3.97 to 5.73 cm. The results show that ANNs can produce results that are comparable to polynomial fitting and LSC. The main advantage of the ANN-based surfaces seems to be the low deviations from the GPS/levelling data surface, which is particularly important for distorted levelling networks.
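
    For context, a minimal sketch of the conventional polynomial-surface-fitting alternative mentioned above; the degree, the coordinates and the synthetic undulation values are assumptions, not the study's data.

```python
# Least-squares fit of a low-order bivariate polynomial N(phi, lam) to
# GPS/levelling geoid undulations, with the RMSE of the fit reported.
import numpy as np

def fit_poly_surface(phi, lam, N, degree=2):
    terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
    A = np.column_stack([phi**i * lam**j for i, j in terms])
    coeffs, *_ = np.linalg.lstsq(A, N, rcond=None)
    rmse = np.sqrt(np.mean((A @ coeffs - N) ** 2))
    return terms, coeffs, rmse

# synthetic example with normalized coordinates
rng = np.random.default_rng(1)
phi, lam = rng.uniform(-1, 1, 50), rng.uniform(-1, 1, 50)
N = 0.30 + 0.10 * phi - 0.05 * lam + 0.02 * phi * lam + rng.normal(0, 0.01, 50)
print(fit_poly_surface(phi, lam, N)[2])   # RMSE in the same units as N
```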

  7. Adaptive optics with a magnetic deformable mirror: applications in the human eye

    NASA Astrophysics Data System (ADS)

    Fernandez, Enrique J.; Vabre, Laurent; Hermann, Boris; Unterhuber, Angelika; Povazay, Boris; Drexler, Wolfgang

    2006-10-01

    A novel deformable mirror using 52 independent magnetic actuators (MIRAO 52, Imagine Eyes) is presented and characterized for ophthalmic applications. The capabilities of the device to reproduce different surfaces, in particular Zernike polynomials up to the fifth order, are investigated in detail. The study of the influence functions of the deformable mirror reveals a significant linear response with the applied voltage. The correcting device also presents a high fidelity in the generation of surfaces. The ranges of production of Zernike polynomials fully cover those typically found in the human eye, even for the cases of highly aberrated eyes. Data from keratoconic eyes are confronted with the obtained ranges, showing that the deformable mirror is able to compensate for these strong aberrations. Ocular aberration correction with polychromatic light, using a near Gaussian spectrum of 130 nm full width at half maximum centered at 800 nm, in five subjects is accomplished by simultaneously using the deformable mirror and an achromatizing lens, in order to compensate for the monochromatic and chromatic aberrations, respectively. Results from living eyes, including one exhibiting 4.66 D of myopia and a near pathologic cornea with notable high order aberrations, show a practically perfect aberration correction. Benefits and applications of simultaneous monochromatic and chromatic aberration correction are finally discussed in the context of retinal imaging and vision.

  8. Assessment of coagulation pretreatment of leachate by response surface methodology.

    PubMed

    Lessoued, Ridha; Souahi, Fatiha; Castrillon Pelaez, Leonor

    2017-11-01

    Coagulation-flocculation is a relatively simple technique that can be used successfully for the treatment of old leachate with poly-aluminum chloride (PAC). The main objectives of this study are to design the experiments, build models and optimize the operating parameters, dosage m and pH, using a central composite design and the response surface method. Quadratic polynomial models developed for the chemical oxygen demand (COD) and turbidity responses are suitable for prediction within the range of the studied variables; they indicated optimum conditions of m = 5.55 g/L at pH 7.05, with determination coefficients R² of 99.33% and 99.92% and adjusted R² of 98.85% and 99.86% for COD and turbidity, respectively. We confirm that the initial pH and PAC dosage have significant effects on COD and turbidity removal. The experimental data and model predictions agreed well, and the removal efficiencies of COD, turbidity, Fe, Pb and Cu reached 61%, 96.4%, 97.1%, 99% and 100%, respectively.
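
    A minimal sketch of how a two-factor second-order response surface of this kind is typically fitted and interrogated for its stationary point; the variable names (dosage m, pH) follow the abstract, but the data and ranges below are synthetic placeholders, not the study's measurements.

```python
# Fit y = b0 + b1*m + b2*pH + b3*m^2 + b4*pH^2 + b5*m*pH by least squares,
# then solve grad(y) = 0 for the stationary (candidate optimum) point.
import numpy as np

def fit_quadratic_rs(X, y):
    m, p = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(m), m, p, m**2, p**2, m*p])
    b, *_ = np.linalg.lstsq(A, y, rcond=None)
    return b

def stationary_point(b):
    H = np.array([[2*b[3], b[5]], [b[5], 2*b[4]]])   # Hessian of the fitted quadratic
    g = -np.array([b[1], b[2]])
    return np.linalg.solve(H, g)

# toy usage with a synthetic response peaking near m = 5.5 g/L, pH = 7.0
rng = np.random.default_rng(2)
X = rng.uniform([2.0, 5.0], [9.0, 9.0], size=(20, 2))
y = 90 - (X[:, 0] - 5.5)**2 - 2 * (X[:, 1] - 7.0)**2 + rng.normal(0, 0.5, 20)
print(stationary_point(fit_quadratic_rs(X, y)))      # approximately [5.5, 7.0]
```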

  9. Optimization of a novel improver gel formulation for Barbari flat bread using response surface methodology.

    PubMed

    Pourfarzad, Amir; Haddad Khodaparast, Mohammad Hossein; Karimi, Mehdi; Mortazavi, Seyed Ali

    2014-10-01

    Nowadays, the use of bread improvers has become an essential part of improving the production methods and quality of bakery products. In the present study, the Response Surface Methodology (RSM) was used to determine the optimum improver gel formulation which gave the best quality, shelf life, sensory and image properties for Barbari flat bread. Sodium stearoyl-2-lactylate (SSL), diacetyl tartaric acid esters of monoglyceride (DATEM) and propylene glycol (PG) were constituents of the gel and considered in this study. A second-order polynomial model was fitted to each response and the regression coefficients were determined using least square method. The optimum gel formulation was found to be 0.49 % of SSL, 0.36 % of DATEM and 0.5 % of PG when desirability function method was applied. There was a good agreement between the experimental data and their predicted counterparts. Results showed that the RSM, image processing and texture analysis are useful tools to investigate, approximate and predict a large number of bread properties.

  10. Modeling and optimization of red currants vacuum drying process by response surface methodology (RSM).

    PubMed

    Šumić, Zdravko; Vakula, Anita; Tepić, Aleksandra; Čakarević, Jelena; Vitas, Jasmina; Pavlić, Branimir

    2016-07-15

    Fresh red currants were dried by vacuum drying process under different drying conditions. Box-Behnken experimental design with response surface methodology was used for optimization of drying process in terms of physical (moisture content, water activity, total color change, firmness and rehydratation power) and chemical (total phenols, total flavonoids, monomeric anthocyanins and ascorbic acid content and antioxidant activity) properties of dried samples. Temperature (48-78 °C), pressure (30-330 mbar) and drying time (8-16 h) were investigated as independent variables. Experimental results were fitted to a second-order polynomial model where regression analysis and analysis of variance were used to determine model fitness and optimal drying conditions. The optimal conditions of simultaneously optimized responses were temperature of 70.2 °C, pressure of 39 mbar and drying time of 8 h. It could be concluded that vacuum drying provides samples with good physico-chemical properties, similar to lyophilized sample and better than conventionally dried sample. Copyright © 2016 Elsevier Ltd. All rights reserved.

  11. Image defects from surface and alignment errors in grazing incidence telescopes

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.

    1989-01-01

    The rigid body motions and low frequency surface errors of grazing incidence Wolter telescopes are studied. The analysis is based on surface error descriptors proposed by Paul Glenn. In his analysis, the alignment and surface errors are expressed in terms of Legendre-Fourier polynomials. Individual terms in the expression correspond to rigid body motions (decenter and tilt) and low spatial frequency surface errors of mirrors. With the help of the Legendre-Fourier polynomials and the geometry of grazing incidence telescopes, exact and approximated first order equations are derived in this paper for the components of the ray intercepts at the image plane. These equations are then used to calculate the sensitivities of Wolter type I and II telescopes for the rigid body motions and surface deformations. The rms spot diameters calculated from this theory and OSAC ray tracing code agree very well. This theory also provides a tool to predict how rigid body motions and surface errors of the mirrors compensate each other.

  12. Ligand Shaping in Induced Fit Docking of MraY Inhibitors. Polynomial Discriminant and Laplacian Operator as Biological Activity Descriptors.

    PubMed

    Lungu, Claudiu N; Diudea, Mircea V; Putz, Mihai V

    2017-06-27

    Docking-i.e., interaction of a small molecule (ligand) with a proteic structure (receptor)-represents the ground of drug action mechanism of the vast majority of bioactive chemicals. Ligand and receptor accommodate their geometry and energy, within this interaction, in the benefit of receptor-ligand complex. In an induced fit docking, the structure of ligand is most susceptible to changes in topology and energy, comparative to the receptor. These changes can be described by manifold hypersurfaces, in terms of polynomial discriminant and Laplacian operator. Such topological surfaces were represented for each MraY (phospho-MurNAc-pentapeptide translocase) inhibitor, studied before and after docking with MraY. Binding affinities of all ligands were calculated by this procedure. For each ligand, Laplacian and polynomial discriminant were correlated with the ligand minimum inhibitory concentration (MIC) retrieved from literature. It was observed that MIC is correlated with Laplacian and polynomial discriminant.

  13. Explicit formulae for Chern-Simons invariants of the twist-knot orbifolds and edge polynomials of twist knots

    NASA Astrophysics Data System (ADS)

    Ham, J.-Y.; Lee, J.

    2016-09-01

    We calculate the Chern-Simons invariants of twist-knot orbifolds using the Schläfli formula for the generalized Chern-Simons function on the family of twist knot cone-manifold structures. Following the general instruction of Hilden, Lozano, and Montesinos-Amilibia, we here present concrete formulae and calculations. We use the Pythagorean Theorem, which was used by Ham, Mednykh and Petrov, to relate the complex length of the longitude and the complex distance between the two axes fixed by two generators. As an application, we calculate the Chern-Simons invariants of cyclic coverings of the hyperbolic twist-knot orbifolds. We also derive some interesting results. The explicit formulae of the A-polynomials of twist knots are obtained from the complex distance polynomials. Hence the edge polynomials corresponding to the edges of the Newton polygons of the A-polynomials of twist knots can be obtained. In particular, the number of boundary components of every incompressible surface corresponding to slope -4n+2 turns out to be 2. Bibliography: 39 titles.

  14. Orthonormal vector polynomials in a unit circle, Part I: Basis set derived from gradients of Zernike polynomials.

    PubMed

    Zhao, Chunyu; Burge, James H

    2007-12-24

    Zernike polynomials provide a well known, orthogonal set of scalar functions over a circular domain, and are commonly used to represent wavefront phase or surface irregularity. A related set of orthogonal functions is given here which represent vector quantities, such as mapping distortion or wavefront gradient. These functions are generated from gradients of Zernike polynomials, made orthonormal using the Gram-Schmidt technique. This set provides a complete basis for representing vector fields that can be defined as a gradient of some scalar function. It is then efficient to transform from the coefficients of the vector functions to the scalar Zernike polynomials that represent the function whose gradient was fit. These new vector functions have immediate application for fitting data from a Shack-Hartmann wavefront sensor or for fitting mapping distortion for optical testing. A subsequent paper gives an additional set of vector functions consisting only of rotational terms with zero divergence. The two sets together provide a complete basis that can represent all vector distributions in a circular domain.
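
    A rough numerical sketch of the Gram-Schmidt step described above, applied to sampled vector fields on the unit disk; the discrete quadrature and the example gradient fields are assumptions made for illustration, not the paper's analytical construction.

```python
# Orthonormalize sampled vector fields under a discrete inner product on the disk.
import numpy as np

def gram_schmidt_vector_fields(fields, weights):
    """fields: list of (N, 2) sampled vector fields; weights: (N,) quadrature weights."""
    def inner(u, v):
        return np.sum(weights * np.einsum('ij,ij->i', u, v))
    basis = []
    for f in fields:
        g = f.astype(float).copy()
        for b in basis:
            g -= inner(g, b) * b
        norm = np.sqrt(inner(g, g))
        if norm > 1e-12:
            basis.append(g / norm)
    return basis

# toy usage: gradients of x^2 - y^2 and x*y sampled on the unit disk
rng = np.random.default_rng(3)
pts = rng.uniform(-1, 1, (4000, 2))
pts = pts[np.hypot(pts[:, 0], pts[:, 1]) <= 1.0]
w = np.full(len(pts), np.pi / len(pts))              # crude equal-weight quadrature
g1 = np.column_stack([2 * pts[:, 0], -2 * pts[:, 1]])
g2 = np.column_stack([pts[:, 1], pts[:, 0]])
basis = gram_schmidt_vector_fields([g1, g2], w)
```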

  15. A Comparison of Approximation Modeling Techniques: Polynomial Versus Interpolating Models

    NASA Technical Reports Server (NTRS)

    Giunta, Anthony A.; Watson, Layne T.

    1998-01-01

    Two methods of creating approximation models are compared through the calculation of the modeling accuracy on test problems involving one, five, and ten independent variables. Here, the test problems are representative of the modeling challenges typically encountered in realistic engineering optimization problems. The first approximation model is a quadratic polynomial created using the method of least squares. This type of polynomial model has seen considerable use in recent engineering optimization studies due to its computational simplicity and ease of use. However, quadratic polynomial models may be of limited accuracy when the response data to be modeled have multiple local extrema. The second approximation model employs an interpolation scheme known as kriging developed in the fields of spatial statistics and geostatistics. This class of interpolating model has the flexibility to model response data with multiple local extrema. However, this flexibility is obtained at an increase in computational expense and a decrease in ease of use. The intent of this study is to provide an initial exploration of the accuracy and modeling capabilities of these two approximation methods.
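
    To make the trade-off concrete, a hedged sketch comparing a least-squares quadratic polynomial with a Gaussian-process interpolator (used here as a kriging-style stand-in via scikit-learn) on a one-dimensional test function with multiple local extrema; the test function, sample sizes and kernel length scale are illustrative assumptions, not the report's test problems.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(x):                                    # multimodal 1-D test response
    return np.sin(6 * np.pi * x) + x

x = np.linspace(0, 1, 9).reshape(-1, 1)      # sparse training samples
y = f(x).ravel()

coef = np.polyfit(x.ravel(), y, 2)           # quadratic polynomial by least squares
gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.1)).fit(x, y)

xt = np.linspace(0, 1, 201).reshape(-1, 1)   # dense test grid
truth = f(xt).ravel()
print("quadratic RMSE:", np.sqrt(np.mean((np.polyval(coef, xt.ravel()) - truth)**2)))
print("GP        RMSE:", np.sqrt(np.mean((gp.predict(xt) - truth)**2)))
```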

  16. Modelling the breeding of Aedes Albopictus species in an urban area in Pulau Pinang using polynomial regression

    NASA Astrophysics Data System (ADS)

    Salleh, Nur Hanim Mohd; Ali, Zalila; Noor, Norlida Mohd.; Baharum, Adam; Saad, Ahmad Ramli; Sulaiman, Husna Mahirah; Ahmad, Wan Muhamad Amir W.

    2014-07-01

    Polynomial regression is used to model a curvilinear relationship between a response variable and one or more predictor variables. It is a form of a least squares linear regression model that predicts a single response variable by decomposing the predictor variables into an nth order polynomial. In a curvilinear relationship, each curve has a number of extreme points equal to the highest order term in the polynomial. A quadratic model will have either a single maximum or minimum, whereas a cubic model has both a relative maximum and a minimum. This study used quadratic modeling techniques to analyze the effects of environmental factors: temperature, relative humidity, and rainfall distribution on the breeding of Aedes albopictus, a type of Aedes mosquito. Data were collected at an urban area in south-west Penang from September 2010 until January 2011. The results indicated that the breeding of Aedes albopictus in the urban area is influenced by all three environmental characteristics. The number of mosquito eggs is estimated to reach a maximum value at a medium temperature, a medium relative humidity and a high rainfall distribution.

  17. A gradient-based model parametrization using Bernstein polynomials in Bayesian inversion of surface wave dispersion

    NASA Astrophysics Data System (ADS)

    Gosselin, Jeremy M.; Dosso, Stan E.; Cassidy, John F.; Quijano, Jorge E.; Molnar, Sheri; Dettmer, Jan

    2017-10-01

    This paper develops and applies a Bernstein-polynomial parametrization to efficiently represent general, gradient-based profiles in nonlinear geophysical inversion, with application to ambient-noise Rayleigh-wave dispersion data. Bernstein polynomials provide a stable parametrization in that small perturbations to the model parameters (basis-function coefficients) result in only small perturbations to the geophysical parameter profile. A fully nonlinear Bayesian inversion methodology is applied to estimate shear wave velocity (VS) profiles and uncertainties from surface wave dispersion data extracted from ambient seismic noise. The Bayesian information criterion is used to determine the appropriate polynomial order consistent with the resolving power of the data. Data error correlations are accounted for in the inversion using a parametric autoregressive model. The inversion solution is defined in terms of marginal posterior probability profiles for VS as a function of depth, estimated using Metropolis-Hastings sampling with parallel tempering. This methodology is applied to synthetic dispersion data as well as data processed from passive array recordings collected on the Fraser River Delta in British Columbia, Canada. Results from this work are in good agreement with previous studies, as well as with co-located invasive measurements. The approach considered here is better suited than `layered' modelling approaches in applications where smooth gradients in geophysical parameters are expected, such as soil/sediment profiles. Further, the Bernstein polynomial representation is more general than smooth models based on a fixed choice of gradient type (e.g. power-law gradient) because the form of the gradient is determined objectively by the data, rather than by a subjective parametrization choice.
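
    A minimal sketch of a Bernstein-polynomial parametrization of a smooth velocity profile, in the spirit of the approach above; the polynomial order, depth range and coefficients are hypothetical.

```python
# Vs(z) = sum_k c_k * B_{k,n}(z / zmax), with B the Bernstein basis of order n.
import numpy as np
from math import comb

def bernstein_profile(z, zmax, coeffs):
    t = np.asarray(z, dtype=float) / zmax
    n = len(coeffs) - 1
    B = np.array([comb(n, k) * t**k * (1 - t)**(n - k) for k in range(n + 1)])
    return coeffs @ B

# a gently increasing profile from four hypothetical basis coefficients
print(bernstein_profile([0, 10, 25, 50], 50.0, np.array([150.0, 220.0, 300.0, 420.0])))
```

    Because the Bernstein basis functions are non-negative and sum to one, small changes in the coefficients produce correspondingly small changes in the profile, which is the stability property noted in the abstract.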

  18. Testing of next-generation nonlinear calibration based non-uniformity correction techniques using SWIR devices

    NASA Astrophysics Data System (ADS)

    Lovejoy, McKenna R.; Wickert, Mark A.

    2017-05-01

    A known problem with infrared imaging devices is their non-uniformity. This non-uniformity is the result of dark current, amplifier mismatch as well as the individual photo response of the detectors. To improve performance, non-uniformity correction (NUC) techniques are applied. Standard calibration techniques use linear, or piecewise linear models to approximate the non-uniform gain and off set characteristics as well as the nonlinear response. Piecewise linear models perform better than the one and two-point models, but in many cases require storing an unmanageable number of correction coefficients. Most nonlinear NUC algorithms use a second order polynomial to improve performance and allow for a minimal number of stored coefficients. However, advances in technology now make higher order polynomial NUC algorithms feasible. This study comprehensively tests higher order polynomial NUC algorithms targeted at short wave infrared (SWIR) imagers. Using data collected from actual SWIR cameras, the nonlinear techniques and corresponding performance metrics are compared with current linear methods including the standard one and two-point algorithms. Machine learning, including principal component analysis, is explored for identifying and replacing bad pixels. The data sets are analyzed and the impact of hardware implementation is discussed. Average floating point results show 30% less non-uniformity, in post-corrected data, when using a third order polynomial correction algorithm rather than a second order algorithm. To maximize overall performance, a trade off analysis on polynomial order and coefficient precision is performed. Comprehensive testing, across multiple data sets, provides next generation model validation and performance benchmarks for higher order polynomial NUC methods.
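
    A hedged per-pixel sketch of the kind of higher-order polynomial non-uniformity correction tested above; mapping each pixel's raw response at several flat-field levels onto the array-mean response is a common formulation and an assumption here, not necessarily the study's exact calibration.

```python
import numpy as np

def fit_nuc_coeffs(calib_stack, order=3):
    """calib_stack: (L, H, W) flat-field frames at L irradiance levels (L > order).
    Returns per-pixel polynomial coefficients of shape (H, W, order + 1)."""
    L, H, W = calib_stack.shape
    target = calib_stack.reshape(L, -1).mean(axis=1)     # desired output per level
    coeffs = np.empty((H, W, order + 1))
    for r in range(H):
        for c in range(W):
            coeffs[r, c] = np.polyfit(calib_stack[:, r, c], target, order)
    return coeffs

def apply_nuc(frame, coeffs):
    """Horner evaluation of each pixel's polynomial at that pixel's raw value."""
    out = np.zeros_like(frame, dtype=float)
    for k in range(coeffs.shape[-1]):
        out = out * frame + coeffs[..., k]
    return out

# usage sketch: coeffs = fit_nuc_coeffs(flat_field_stack); corrected = apply_nuc(raw, coeffs)
```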

  19. Improved Potential Energy Surface of Ozone Constructed Using the Fitting by Permutationally Invariant Polynomial Function

    DOE PAGES

    Ayouz, Mehdi; Babikov, Dmitri

    2012-01-01

    A new global potential energy surface for the ground electronic state of ozone is constructed at the complete basis set level of multireference configuration interaction theory. A method of fitting the data points by an analytical permutationally invariant polynomial function is adopted. A small set of 500 points is preoptimized using the old surface of ozone; in this procedure the positions of the points in configuration space are chosen such that the RMS deviation of the fit is minimized. New ab initio calculations are carried out at these points and are used to build the new surface. Additional points are added in the vicinity of the minimum energy path in order to improve the accuracy of the fit, particularly in the region where the surface of ozone exhibits a shallow van der Waals well. The new surface can be used to study the formation of ozone at thermal energies and its spectroscopy near the dissociation threshold.

  20. Optimization of reaction parameters of radiation induced grafting of 1-vinylimidazole onto poly(ethylene-co-tetraflouroethene) using response surface method

    NASA Astrophysics Data System (ADS)

    Nasef, Mohamed Mahmoud; Aly, Amgad Ahmed; Saidi, Hamdani; Ahmad, Arshad

    2011-11-01

    Radiation induced grafting of 1-vinylimidazole (1-VIm) onto poly(ethylene-co-tetraflouroethene) (ETFE) was investigated. The grafting parameters, such as absorbed dose, monomer concentration, grafting time and temperature, were optimized using the response surface method (RSM). The Box-Behnken module available in the Design-Expert software was used to investigate the effect of the reaction conditions (independent parameters), varied at four levels, on the degree of grafting (G%) (the response parameter). The model yielded a polynomial equation that relates the linear, quadratic and interaction effects of the independent parameters to the response parameter. Analysis of variance (ANOVA) was used to evaluate the results of the model and detect the significant values for the independent parameters. The optimum parameters to achieve a maximum G% were found to be a monomer concentration of 55 vol%, an absorbed dose of 100 kGy, a time in the range of 14-20 h and a temperature of 61 °C. Fourier transform infrared (FTIR) spectroscopy, thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC) were used to investigate the properties of the obtained films and provide evidence for grafting.

  1. Deriving Two-Dimensional Ocean Wave Spectra and Surface Height Maps from the Shuttle Imaging Radar (SIR-B)

    NASA Technical Reports Server (NTRS)

    Tilley, D. G.

    1986-01-01

    Directional ocean wave spectra were derived from Shuttle Imaging Radar (SIR-B) imagery in regions where nearly simultaneous aircraft-based measurements of the wave spectra were also available as part of the NASA Shuttle Mission 41G experiments. The SIR-B response to a coherently speckled scene is used to estimate the stationary system transfer function in the 15 even terms of an eighth-order two-dimensional polynomial. Surface elevation contours are assigned to SIR-B ocean scenes Fourier filtered using an empirical model of the modulation transfer function calibrated with independent measurements of wave height. The empirical measurements of the wave height distribution are illustrated for a variety of sea states.

  2. Modeling Frequency Fluctuations in Surface Contaminated Crystal Resonators

    DTIC Science & Technology

    1990-07-23

    resonator cannot be described by cubic polynomial equations. (Cubic polynomial equations are used in the quartz resonator industry to describe frequency... frequency fluctuations are studied; M = 28 (nitrogen). Our study is motivated by the rate of arrival r0 of nitrogen molecules at a contami... the pressure... If the product Qf0 is constant, as is usually the case, then... gradually. Extra care must be taken to keep constant all... their spectral...

  3. Isogeometric Analysis of Boundary Integral Equations

    DTIC Science & Technology

    2015-04-21

    methods, IgA relies on Non-Uniform Rational B-splines (NURBS) [43, 46], T-splines [55, 53] or subdivision surfaces [21, 48, 51] rather than piecewise... structural dynamics [25, 26], plates and shells [15, 16, 27, 28, 37, 22, 23], phase-field models [17, 32, 33], and shape optimization [40, 41, 45, 59... polynomials for approximating the geometry and field variables. Thus, by replacing piecewise polynomials with NURBS or T-splines, one can develop

  4. Computational algebraic geometry for statistical modeling FY09Q2 progress.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thompson, David C.; Rojas, Joseph Maurice; Pebay, Philippe Pierre

    2009-03-01

    This is a progress report on polynomial system solving for statistical modeling. This quarter we have developed our first model of shock response data and an algorithm for identifying the chamber cone containing a polynomial system in n variables with n+k terms within polynomial time - a significant improvement over previous algorithms, all having exponential worst-case complexity. We have implemented and verified the chamber cone algorithm for n+3 and are working to extend the implementation to handle arbitrary k. Later sections of this report explain chamber cones in more detail; the next section provides an overview of the project and how the current progress fits into it.

  5. An application of a Hill-based response surface model for a drug combination experiment on lung cancer.

    PubMed

    Ning, Shaoyang; Xu, Hongquan; Al-Shyoukh, Ibrahim; Feng, Jiaying; Sun, Ren

    2014-10-30

    Combination chemotherapy with multiple drugs has been widely applied to cancer treatment owing to enhanced efficacy and reduced drug resistance. For drug combination experiment analysis, response surface modeling has been commonly adopted. In this paper, we introduce a Hill-based global response surface model and provide an application of the model to a 512-run drug combination experiment with three chemicals, namely AG490, U0126, and indirubin-3'-monoxime (I-3-M), on lung cancer cells. The results demonstrate generally improved goodness of fit of our model from the traditional polynomial model, as well as the original Hill model on the basis of fixed-ratio drug combinations. We identify different dose-effect patterns between normal and cancer cells on the basis of our model, which indicates the potential effectiveness of the drug combination in cancer treatment. Meanwhile, drug interactions are analyzed both qualitatively and quantitatively. The distinct interaction patterns between U0126 and I-3-M on two types of cells uncovered by the model could be a further indicator of the efficacy of the drug combination. Copyright © 2014 John Wiley & Sons, Ltd.

  6. Model-based estimates of long-term persistence of induced HPV antibodies: a flexible subject-specific approach.

    PubMed

    Aregay, Mehreteab; Shkedy, Ziv; Molenberghs, Geert; David, Marie-Pierre; Tibaldi, Fabián

    2013-01-01

    In infectious diseases, it is important to predict the long-term persistence of vaccine-induced antibodies and to estimate the time points where the individual titers are below the threshold value for protection. This article focuses on HPV-16/18, and uses a so-called fractional-polynomial model to this effect, derived in a data-driven fashion. Initially, model selection was done from among the second- and first-order fractional polynomials on the one hand and from the linear mixed model on the other. According to a functional selection procedure, the first-order fractional polynomial was selected. Apart from the fractional polynomial model, we also fitted a power-law model, which is a special case of the fractional polynomial model. Both models were compared using Akaike's information criterion. Over the observation period, the fractional polynomials fitted the data better than the power-law model; this, of course, does not imply that it fits best over the long run, and hence, caution ought to be used when prediction is of interest. Therefore, we point out that the persistence of the anti-HPV responses induced by these vaccines can only be ascertained empirically by long-term follow-up analysis.

  7. Development of Response Surface Models for Rapid Analysis & Multidisciplinary Optimization of Launch Vehicle Design Concepts

    NASA Technical Reports Server (NTRS)

    Unal, Resit

    1999-01-01

    Multidisciplinary design optimization (MDO) is an important step in the design and evaluation of launch vehicles, since it has a significant impact on performance and lifecycle cost. The objective in MDO is to search the design space to determine the values of design parameters that optimize the performance characteristics subject to system constraints. Vehicle Analysis Branch (VAB) at NASA Langley Research Center has computerized analysis tools in many of the disciplines required for the design and analysis of launch vehicles. Vehicle performance characteristics can be determined by the use of these computerized analysis tools. The next step is to optimize the system performance characteristics subject to multidisciplinary constraints. However, most of the complex sizing and performance evaluation codes used for launch vehicle design are stand-alone tools, operated by disciplinary experts. They are, in general, difficult to integrate and use directly for MDO. An alternative has been to utilize response surface methodology (RSM) to obtain polynomial models that approximate the functional relationships between performance characteristics and design variables. These approximation models, called response surface models, are then used to integrate the disciplines using mathematical programming methods for efficient system level design analysis, MDO and fast sensitivity simulations. A second-order response surface model of the form given below has been commonly used in RSM since in many cases it can provide an adequate approximation especially if the region of interest is sufficiently limited.
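
    For reference, the standard second-order response surface model usually meant in this context is, in textbook notation (shown here because the equation referenced in the record is not reproduced):

\[
\hat{y} = \beta_0 + \sum_{i=1}^{k}\beta_i x_i + \sum_{i=1}^{k}\beta_{ii} x_i^2 + \sum_{i<j}\beta_{ij} x_i x_j ,
\]

    where the x_i are the design variables and the beta coefficients are estimated by least squares from the design-of-experiments runs.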

  8. Investigating and Modelling Effects of Climatically and Hydrologically Indicators on the Urmia Lake Coastline Changes Using Time Series Analysis

    NASA Astrophysics Data System (ADS)

    Ahmadijamal, M.; Hasanlou, M.

    2017-09-01

    Studying the hydrological parameters of lakes and examining water level variations are important for water resources management. The purpose of this study is to investigate and model Urmia Lake water level changes due to changes in the climatic and hydrological indicators that drive the variation in the level and area of this lake. For this purpose, Landsat satellite images, hydrological data, daily precipitation, daily surface evaporation and the total daily discharge of the lake basin during the period 2010-2016 have been used. Based on a time-series analysis conducted independently on each dataset with the same procedure, the variation of the Urmia Lake level is modelled with a polynomial regression technique and with a polynomial combined with periodic behaviour. In the first scenario, we fit polynomials to the datasets and compute the RMSE, NRMSE and R² values; a fourth-degree polynomial fits the datasets best, with a lowest RMSE of about 9 cm. In the second scenario, we combine the polynomial with periodic behaviour. The second scenario outperforms the first, with an RMSE of about 3 cm.
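
    An illustrative sketch of the two scenarios, a polynomial trend alone versus a polynomial combined with an annual sinusoid, both fitted by ordinary least squares; the time axis, period and synthetic level series are assumptions, not the study's data.

```python
import numpy as np

def design(t, degree, periodic=False, period=365.25):
    ts = t / t.max()                              # scale trend terms for conditioning
    cols = [ts**k for k in range(degree + 1)]
    if periodic:
        cols += [np.sin(2 * np.pi * t / period), np.cos(2 * np.pi * t / period)]
    return np.column_stack(cols)

def fit_rmse(t, level, degree, periodic):
    A = design(t, degree, periodic)
    beta, *_ = np.linalg.lstsq(A, level, rcond=None)
    return np.sqrt(np.mean((A @ beta - level) ** 2))

# synthetic lake-level series: slow decline plus an annual cycle plus noise
t = np.arange(0.0, 6 * 365, 10.0)
rng = np.random.default_rng(4)
level = 1274.0 - 4e-4 * t + 0.15 * np.sin(2 * np.pi * t / 365.25) + rng.normal(0, 0.03, t.size)
print(fit_rmse(t, level, 4, False), fit_rmse(t, level, 4, True))
```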

  9. Algebraic invariant curves of plane polynomial differential systems

    NASA Astrophysics Data System (ADS)

    Tsygvintsev, Alexei

    2001-01-01

    We consider a plane polynomial vector field P(x,y) dx + Q(x,y) dy of degree m>1. With each algebraic invariant curve of such a field we associate a compact Riemann surface with the meromorphic differential ω = dx/P = dy/Q. The asymptotic estimate of the degree of an arbitrary algebraic invariant curve is found. In the smooth case this estimate has already been found by Cerveau and Lins Neto in a different way.

  10. Genetic evaluation and selection response for growth in meat-type quail through random regression models using B-spline functions and Legendre polynomials.

    PubMed

    Mota, L F M; Martins, P G M A; Littiere, T O; Abreu, L R A; Silva, M A; Bonafé, C M

    2018-04-01

    The objective was to estimate (co)variance functions using random regression models (RRM) with Legendre polynomials, B-spline function and multi-trait models aimed at evaluating genetic parameters of growth traits in meat-type quail. A database containing the complete pedigree information of 7000 meat-type quail was utilized. The models included the fixed effects of contemporary group and generation. Direct additive genetic and permanent environmental effects, considered as random, were modeled using B-spline functions considering quadratic and cubic polynomials for each individual segment, and Legendre polynomials for age. Residual variances were grouped in four age classes. Direct additive genetic and permanent environmental effects were modeled using 2 to 4 segments and were modeled by Legendre polynomial with orders of fit ranging from 2 to 4. The model with quadratic B-spline adjustment, using four segments for direct additive genetic and permanent environmental effects, was the most appropriate and parsimonious to describe the covariance structure of the data. The RRM using Legendre polynomials presented an underestimation of the residual variance. Lesser heritability estimates were observed for multi-trait models in comparison with RRM for the evaluated ages. In general, the genetic correlations between measures of BW from hatching to 35 days of age decreased as the range between the evaluated ages increased. Genetic trend for BW was positive and significant along the selection generations. The genetic response to selection for BW in the evaluated ages presented greater values for RRM compared with multi-trait models. In summary, RRM using B-spline functions with four residual variance classes and segments were the best fit for genetic evaluation of growth traits in meat-type quail. In conclusion, RRM should be considered in genetic evaluation of breeding programs.

  11. Ligand Shaping in Induced Fit Docking of MraY Inhibitors. Polynomial Discriminant and Laplacian Operator as Biological Activity Descriptors

    PubMed Central

    Diudea, Mircea V.; Putz, Mihai V.

    2017-01-01

    Docking—i.e., interaction of a small molecule (ligand) with a proteic structure (receptor)—represents the ground of drug action mechanism of the vast majority of bioactive chemicals. Ligand and receptor accommodate their geometry and energy, within this interaction, in the benefit of receptor–ligand complex. In an induced fit docking, the structure of ligand is most susceptible to changes in topology and energy, comparative to the receptor. These changes can be described by manifold hypersurfaces, in terms of polynomial discriminant and Laplacian operator. Such topological surfaces were represented for each MraY (phospho-MurNAc-pentapeptide translocase) inhibitor, studied before and after docking with MraY. Binding affinities of all ligands were calculated by this procedure. For each ligand, Laplacian and polynomial discriminant were correlated with the ligand minimum inhibitory concentration (MIC) retrieved from literature. It was observed that MIC is correlated with Laplacian and polynomial discriminant. PMID:28653980

  12. Measurement of EUV lithography pupil amplitude and phase variation via image-based methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Levinson, Zachary; Verduijn, Erik; Wood, Obert R.

    2016-04-01

    Here, an approach to image-based EUV aberration metrology using binary mask targets and iterative model-based solutions to extract both the amplitude and phase components of the aberrated pupil function is presented. The approach is enabled through previously developed modeling, fitting, and extraction algorithms. We seek to examine the behavior of pupil amplitude variation in real optical systems. Optimized target images were captured under several conditions to fit the resulting pupil responses. Both the amplitude and phase components of the pupil function were extracted from a zone-plate-based EUV mask microscope. The pupil amplitude variation was expanded in three different bases: Zernike polynomials, Legendre polynomials, and Hermite polynomials. It was found that the Zernike polynomials describe pupil amplitude variation most effectively of the three.

  13. Communication: Fitting potential energy surfaces with fundamental invariant neural network

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shao, Kejie; Chen, Jun; Zhao, Zhiqiang

    A more flexible neural network (NN) method using the fundamental invariants (FIs) as the input vector is proposed in the construction of potential energy surfaces for molecular systems involving identical atoms. Mathematically, FIs finitely generate the permutation invariant polynomial (PIP) ring. In combination with NN, a fundamental invariant neural network (FI-NN) can approximate any function to arbitrary accuracy. Because FI-NN minimizes the size of input permutation invariant polynomials, it can efficiently reduce the evaluation time of the potential energy, in particular for polyatomic systems. In this work, we provide the FIs for all possible molecular systems up to five atoms. Potential energy surfaces for OH3 and CH4 were constructed with FI-NN, with the accuracy confirmed by full-dimensional quantum dynamic scattering and bound state calculations.

  14. Maximum Marginal Likelihood Estimation of a Monotonic Polynomial Generalized Partial Credit Model with Applications to Multiple Group Analysis.

    PubMed

    Falk, Carl F; Cai, Li

    2016-06-01

    We present a semi-parametric approach to estimating item response functions (IRF) useful when the true IRF does not strictly follow commonly used functions. Our approach replaces the linear predictor of the generalized partial credit model with a monotonic polynomial. The model includes the regular generalized partial credit model at the lowest order polynomial. Our approach extends Liang's (A semi-parametric approach to estimate IRFs, Unpublished doctoral dissertation, 2007) method for dichotomous item responses to the case of polytomous data. Furthermore, item parameter estimation is implemented with maximum marginal likelihood using the Bock-Aitkin EM algorithm, thereby facilitating multiple group analyses useful in operational settings. Our approach is demonstrated on both educational and psychological data. We present simulation results comparing our approach to more standard IRF estimation approaches and other non-parametric and semi-parametric alternatives.

  15. SAPO-34/AlMCM-41, as a novel hierarchical nanocomposite: preparation, characterization and investigation of synthesis factors using response surface methodology

    NASA Astrophysics Data System (ADS)

    Roohollahi, Hossein; Halladj, Rouein; Askari, Sima; Yaripour, Fereydoon

    2018-06-01

    SAPO-34/AlMCM-41, a new hierarchical nanocomposite, was successfully synthesized via hydrothermal and dry-gel conversion methods. In an experimental and statistical study, the effects of five input parameters, namely synthesis period, drying temperature, NaOH/Si ratio, water/dried-gel ratio and SAPO%, on the ordering degree of the mesochannels and the relative crystallinity were investigated. X-ray diffraction (XRD) patterns were recorded to characterize the ordered AlMCM-41 and crystalline SAPO-34 structures. Nitrogen adsorption-desorption, scanning electron microscopy (SEM), field-emission SEM (FESEM) equipped with energy-dispersive X-ray spectroscopy (EDS mapping) and transmission electron microscopy (TEM) were used to study the textural properties, morphology and surface elemental composition. Two reduced polynomial models were fitted to the responses with good precision. Based on analysis of variance, SAPO% and the duration of dry-gel conversion were found to be the parameters most affecting the composite structure. Hierarchical porosity, a narrow pore size distribution, a high external surface area and a large specific pore volume are attractive characteristics of this novel nanocomposite.

  16. Optimization of Car Body under Constraints of Noise, Vibration, and Harshness (NVH), and Crash

    NASA Technical Reports Server (NTRS)

    Kodiyalam, Srinivas; Yang, Ren-Jye; Sobieszczanski-Sobieski, Jaroslaw (Editor)

    2000-01-01

    To be competitive in today's market, cars have to be as light as possible while meeting the Noise, Vibration, and Harshness (NVH) requirements and conforming to Government-mandated crash survival regulations. The latter are difficult to meet because they involve very compute-intensive, nonlinear analysis: the code RADIOSS, for example, which simulates the dynamics and the geometrical and material nonlinearities of a thin-walled car structure in crash, would require over 12 days of elapsed time for a single design of a 390K elastic degrees-of-freedom model if executed on a single processor of the state-of-the-art SGI Origin 2000 computer. Of course, in optimization that crash analysis would have to be invoked many times; needless to say, that has rendered such optimization intractable until now. The car finite element model is shown. The advent of computers that comprise large numbers of concurrently operating processors has created a new environment wherein the above optimization, and other engineering problems heretofore regarded as intractable, may be solved. The procedure, shown, is a piecewise-approximation-based method that uses a sensitivity-based Taylor series approximation model for NVH and a polynomial response surface model for crash. In that method the NVH constraints are evaluated using a finite element code (MSC/NASTRAN) that yields the constraint values and their derivatives with respect to design variables. The crash constraints are evaluated using the explicit code RADIOSS on the Origin 2000, operating on 256 processors simultaneously, to generate data for a polynomial response surface in the design variable domain. The NVH constraints and their derivatives, combined with the response surface for the crash constraints, form an approximation to the system analysis (surrogate analysis) that enables a cycle of multidisciplinary optimization within move limits. In the inner loop, the NVH sensitivities are recomputed to update the NVH approximation model while keeping the crash response surface constant. In every outer loop, the crash response surface approximation is updated, including a gradual increase in the order of the response surface and an extension of the response surface in the direction of the search. In this optimization task, the NVH discipline has 30 design variables while the crash discipline has 20 design variables; a subset of these design variables (10) is common to both disciplines. In order to construct a linear response surface for the crash discipline constraints, a minimum of 21 design points would have to be analyzed using the RADIOSS code. On a single processor of the Origin 2000 that amount of computing would require over 9 months! In this work, these runs were carried out concurrently on the Origin 2000 using multiple processors, ranging from 8 to 16, for each crash (RADIOSS) analysis. Another figure shows the wall time required for a single RADIOSS analysis using a varying number of processors, and compares two different data placement procedures within the allotted memories for each analysis. The initial design is an infeasible design with NVH-discipline Static Torsion constraint violations of over 10%. The final optimized design is feasible, with a weight reduction of 15 kg compared to the initial design.
This work demonstrates how advanced methodology for optimization combined with the technology of concurrent processing enables applications that until now were out of reach because of very long time-to-solution.

  17. Optimization of high pressure bioactive compounds extraction from pansies (Viola × wittrockiana) by response surface methodology

    NASA Astrophysics Data System (ADS)

    Fernandes, Luana; Casal, Susana I. P.; Pereira, José A.; Ramalhosa, Elsa; Saraiva, Jorge A.

    2017-07-01

    Response surface methodology (RSM) was employed for the first time to optimize high pressure extraction (HPE) conditions of bioactive compounds from pansies, namely: pressure (X1: 0-500 MPa), time (X2: 5-15 min) and ethanol concentration (X3: 0-100%). Consistent fittings using second-order polynomial models were obtained for flavonoids, tannins, anthocyanins, total reducing capacity (TRC) and DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging activity. The optimum extraction conditions based on combination responses for TRC, tannins and anthocyanins were: X1 = 384 MPa, X2 = 15 min and X3 = 35% (v/v) ethanol, shortening the extraction time when compared to the classic method of stirring (approx. 24 h). When the optimum extraction conditions were applied, 65.1 mg of TRC, 42.8 mg of tannins and 56.15 mg of anthocyanins/g dried flower were obtained. Thus, HPE has shown to be a promising technique to extract bioactive compounds from pansies, by reducing the extraction time and by using green solvents (ethanol and water), for application in diverse industrial fields.

  18. Response surface methodological approach for the decolorization of simulated dye effluent using Aspergillus fumigatus fresenius.

    PubMed

    Sharma, Praveen; Singh, Lakhvinder; Dilbaghi, Neeraj

    2009-01-30

    The aim of this research was to study the effects of temperature, pH and initial dye concentration on the decolorization of the diazo dye Acid Red 151 (AR 151) from a simulated dye solution using a fungal isolate, Aspergillus fumigatus fresenius. A central composite design matrix and response surface methodology (RSM) were applied to design the experiments and to evaluate the interactive effects of the three most important operating variables, temperature (25-35 °C), pH (4.0-7.0) and initial dye concentration (100-200 mg/L), on the biodegradation of AR 151. A total of 20 experiments were conducted towards the construction of a quadratic model. A very high regression coefficient between the variables and the response (R²=0.9934) indicated an excellent fit of the experimental data by the second-order polynomial regression model. The RSM indicated that an initial dye concentration of 150 mg/L, pH 5.5 and a temperature of 30 °C were optimal for maximum percentage decolorization of AR 151 in the simulated dye solution; 84.8% decolorization of AR 151 was observed at the optimum growth conditions.

  19. Global Design Optimization for Aerodynamics and Rocket Propulsion Components

    NASA Technical Reports Server (NTRS)

    Shyy, Wei; Papila, Nilay; Vaidyanathan, Rajkumar; Tucker, Kevin; Turner, James E. (Technical Monitor)

    2000-01-01

    Modern computational and experimental tools for aerodynamics and propulsion applications have matured to a stage where they can provide substantial insight into engineering processes involving fluid flows, and can be fruitfully utilized to help improve the design of practical devices. In particular, rapid and continuous development in aerospace engineering demands that new design concepts be regularly proposed to meet goals for increased performance, robustness and safety while concurrently decreasing cost. To date, the majority of the effort in design optimization of fluid dynamics has relied on gradient-based search algorithms. Global optimization methods can utilize the information collected from various sources and by different tools. These methods offer multi-criterion optimization, handle the existence of multiple design points and trade-offs via insight into the entire design space, can easily perform tasks in parallel, and are often effective in filtering the noise intrinsic to numerical and experimental data. However, a successful application of the global optimization method needs to address issues related to data requirements with an increase in the number of design variables, and methods for predicting the model performance. In this article, we review recent progress made in establishing suitable global optimization techniques employing neural network and polynomial-based response surface methodologies. Issues addressed include techniques for construction of the response surface, design of experiment techniques for supplying information in an economical manner, optimization procedures and multi-level techniques, and assessment of relative performance between polynomials and neural networks. Examples drawn from wing aerodynamics, turbulent diffuser flows, gas-gas injectors, and supersonic turbines are employed to help demonstrate the issues involved in an engineering design context. Both the usefulness of the existing knowledge to aid current design practices and the need for future research are identified.

  20. A response surface methodology based damage identification technique

    NASA Astrophysics Data System (ADS)

    Fang, S. E.; Perera, R.

    2009-06-01

    Response surface methodology (RSM) is a combination of statistical and mathematical techniques to represent the relationship between the inputs and outputs of a physical system by explicit functions. This methodology has been widely employed in many applications such as design optimization, response prediction and model validation. But so far the literature related to its application in structural damage identification (SDI) is scarce. Therefore this study attempts to present a systematic SDI procedure comprising four sequential steps of feature selection, parameter screening, primary response surface (RS) modeling and updating, and reference-state RS modeling with SDI realization using the factorial design (FD) and the central composite design (CCD). The last two steps imply the implementation of inverse problems by model updating in which the RS models substitute the FE models. The proposed method was verified against a numerical beam, a tested reinforced concrete (RC) frame and an experimental full-scale bridge with the modal frequency being the output responses. It was found that the proposed RSM-based method performs well in predicting the damage of both numerical and experimental structures having single and multiple damage scenarios. The screening capacity of the FD can provide quantitative estimation of the significance levels of updating parameters. Meanwhile, the second-order polynomial model established by the CCD provides adequate accuracy in expressing the dynamic behavior of a physical system.

  1. Edge detection and mathematic fitting for corneal surface with Matlab software.

    PubMed

    Di, Yue; Li, Mei-Yan; Qiao, Tong; Lu, Na

    2017-01-01

    To select the optimal edge detection methods to identify the corneal surface and to compare three curve-fitting equations with Matlab software. Fifteen subjects were recruited. The corneal images from optical coherence tomography (OCT) were imported into Matlab software. Five edge detection methods (Canny, Log, Prewitt, Roberts, Sobel) were used to identify the corneal surface. Then two manual identification methods (ginput and getpts) were applied to identify the edge coordinates, and the differences among these methods were compared. A binomial curve (y=Ax²+Bx+C), a polynomial curve [p(x)=p₁xⁿ+p₂xⁿ⁻¹+...+pₙx+pₙ₊₁] and a conic section (Ax²+Bxy+Cy²+Dx+Ey+F=0) were used to fit the corneal surface, and the relative merits of the three fitting curves were analyzed. Finally, the eccentricity (e) obtained by corneal topography and by the conic section were compared with a paired t-test. All five edge detection algorithms produced continuous coordinates indicating the edge of the corneal surface. The ordinates from manual identification were close to the inside of the actual edges. The binomial curve was greatly affected by tilt angle. The polynomial curve lacked geometrical properties and was unstable. The conic section could provide the tilted symmetry axis, eccentricity, circle center, etc. There were no significant differences between the 'e' values obtained by corneal topography and by the conic section (t=0.9143, P=0.3760>0.05). It is feasible to simulate the corneal surface with a mathematical curve in Matlab software. Edge detection has better repeatability and higher efficiency, and the manual identification approach is an indispensable complement to detection. Polynomial and conic section fits are both alternative methods for corneal curve fitting; the conic curve was the optimal choice based on its specific geometrical properties.
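
    A minimal sketch of least-squares fitting of a general conic to detected edge coordinates, of the kind compared above; the A + C = 1 normalization and the synthetic ellipse data are assumptions for illustration, not the paper's Matlab implementation.

```python
# Fit A*x^2 + B*x*y + C*y^2 + D*x + E*y + F = 0 with the normalization A + C = 1.
import numpy as np

def fit_conic(x, y):
    # With A = 1 - C, solve for [B, C, D, E, F] in least squares:
    #   B*x*y + C*(y^2 - x^2) + D*x + E*y + F = -x^2
    M = np.column_stack([x * y, y**2 - x**2, x, y, np.ones_like(x)])
    B, C, D, E, F = np.linalg.lstsq(M, -x**2, rcond=None)[0]
    return 1.0 - C, B, C, D, E, F

# toy usage: noisy points on the ellipse x^2/4 + y^2 = 1
rng = np.random.default_rng(5)
th = np.linspace(0, 2 * np.pi, 60)
x = 2 * np.cos(th) + rng.normal(0, 0.01, th.size)
y = np.sin(th) + rng.normal(0, 0.01, th.size)
print(np.round(fit_conic(x, y), 3))   # approx (0.2, 0, 0.8, 0, 0, -0.8)
```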

  2. Matrix of moments of the Legendre polynomials and its application to problems of electrostatics

    NASA Astrophysics Data System (ADS)

    Savchenko, A. O.

    2017-01-01

    In this work, properties of the matrix of moments of the Legendre polynomials are presented and proven. In particular, the explicit form of the elements of the matrix inverse to the matrix of moments is found and theorems of the linear combination and orthogonality are proven. On the basis of these properties, the total charge and the dipole moment of a conducting ball in a nonuniform electric field, the charge distribution over the surface of the conducting ball, its multipole moments, and the force acting on a conducting ball situated on the axis of a nonuniform axisymmetric electric field are determined. All assertions are formulated in theorems, the proofs of which are based on the properties of the matrix of moments of the Legendre polynomials.
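
    One natural numerical rendering of such a moment matrix, M[i, k] = ∫_{-1}^{1} x^i P_k(x) dx, evaluated by Gauss-Legendre quadrature; the definition and the small example are illustrative assumptions, since the paper works with exact closed-form properties of this matrix.

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_moment_matrix(n):
    """M[i, k] = integral over [-1, 1] of x^i * P_k(x), for i, k = 0..n-1."""
    nodes, weights = L.leggauss(2 * n + 2)        # exact for the degrees involved
    M = np.empty((n, n))
    for i in range(n):
        for k in range(n):
            Pk = L.legval(nodes, np.eye(n)[k])    # k-th Legendre polynomial
            M[i, k] = np.sum(weights * nodes**i * Pk)
    return M

print(np.round(legendre_moment_matrix(4), 6))     # e.g. M[0,0]=2, M[1,1]=2/3
```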

  3. Optimization of HNO3 leaching of copper from old AMD Athlon processors using response surface methodology.

    PubMed

    Javed, Umair; Farooq, Robina; Shehzad, Farrukh; Khan, Zakir

    2018-04-01

    The present study investigates the optimization of HNO3 leaching of Cu from old AMD Athlon processors under the effect of nitric acid concentration (%), temperature (°C) and ultrasonic power (W). The optimization study is carried out using response surface methodology with a central composite rotatable design (CCRD). The ANOVA study concludes that the second-degree polynomial model fits the fifteen experimental runs well, based on the p-value (0.003), R² (0.97) and adjusted R² (0.914). The study shows that temperature is the most significant process variable for the leaching concentration of Cu, followed by nitric acid concentration, whereas ultrasound power shows no significant impact on the leaching concentration. The optimum conditions were found to be 20% nitric acid concentration, 48.89 °C temperature and 5.52 W ultrasound power for attaining a maximum concentration of 97.916 mg/l for Cu leaching in solution. Copyright © 2018 Elsevier Ltd. All rights reserved.

  4. [Optimization of dissolution process for superfine grinding technology on total saponins of Panax ginseng fibrous root by response surface methodology].

    PubMed

    Zhao, Ya; Lai, Xiao-Pin; Yao, Hai-Yan; Zhao, Ran; Wu, Yi-Na; Li, Geng

    2014-03-01

    To investigate the effects of superfine comminution extraction technology on ginseng total saponins from Panax ginseng fibrous root and to determine the optimal extraction conditions. The optimal conditions for extracting ginseng total saponins from Panax ginseng fibrous root were based on a single-factor experiment studying the effects of crushing degree, extraction time, alcohol concentration and extraction temperature on the extraction rate. Response surface methodology was then used to investigate three main factors: superfine comminution time, extraction time and alcohol concentration. The relationship between the content of ginseng total saponins in Panax ginseng fibrous root and the three factors was fitted with second-degree polynomial models. The optimal extraction conditions were a superfine comminution time of 9 min, 70% alcohol, an extraction temperature of 50 °C and an extraction time of 70 min. Under the optimal conditions, the extraction of ginseng total saponins from Panax ginseng fibrous root averaged 94.81%, which was consistent with the predicted value. The optimized technology is rapid, efficient, simple and stable.

  5. Optimization of isolation of cellulose from orange peel using sodium hydroxide and chelating agents.

    PubMed

    Bicu, Ioan; Mustata, Fanica

    2013-10-15

    Response surface methodology was used to optimize cellulose recovery from orange peel using sodium hydroxide (NaOH) as the isolation reagent, and to minimize its ash content using ethylenediaminetetraacetic acid (EDTA) as a chelating agent. The independent variables were NaOH charge, EDTA charge and cooking time. Two other parameters were kept constant: cooking temperature (98 °C) and liquid-to-solid ratio (7.5). The dependent variables were cellulose yield and ash content. A second-order polynomial model was used for plotting response surfaces and for determining optimum cooking conditions. The analysis of coefficient values for the independent variables in the regression equation showed that NaOH and EDTA charges were the major factors influencing the cellulose yield and ash content, respectively. Optimum conditions were defined by: NaOH charge 38.2%, EDTA charge 9.56%, and cooking time 317 min. The predicted cellulose yield was 24.06% and ash content 0.69%. A good agreement between the experimental values and the predicted ones was observed. Copyright © 2013 Elsevier Ltd. All rights reserved.

  6. Robust design of multiple trailing edge flaps for helicopter vibration reduction: A multi-objective bat algorithm approach

    NASA Astrophysics Data System (ADS)

    Mallick, Rajnish; Ganguli, Ranjan; Seetharama Bhat, M.

    2015-09-01

    The objective of this study is to determine an optimal trailing edge flap configuration and flap location to achieve minimum hub vibration levels and flap actuation power simultaneously. An aeroelastic analysis of a soft in-plane four-bladed rotor is performed in conjunction with optimal control. A second-order polynomial response surface based on an orthogonal array (OA) with a 3-level design describes both objectives adequately. Two new orthogonal arrays called MGB2P-OA and MGB4P-OA are proposed to generate nonlinear response surfaces with all interaction terms for two and four parameters, respectively. A multi-objective bat algorithm (MOBA) approach is used to obtain the optimal design point for the mutually conflicting objectives. MOBA is a recently developed nature-inspired metaheuristic optimization algorithm that is based on the echolocation behaviour of bats. It is found that the MOBA-derived Pareto-optimal trailing edge flap design reduces vibration levels by 73% and flap actuation power by 27% in comparison with the baseline design.
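
    Multi-objective searches such as MOBA ultimately report a Pareto front of non-dominated designs. The sketch below shows only that non-dominated filtering step for two minimisation objectives (standing in for hub vibration and flap actuation power); the candidate values are invented and the bat-algorithm search itself is not reproduced.

```python
import numpy as np

def pareto_front(objectives):
    """Boolean mask of non-dominated rows, assuming all objectives are minimised.

    A design is dominated if another design is no worse in every objective
    and strictly better in at least one.
    """
    n = objectives.shape[0]
    nondominated = np.ones(n, dtype=bool)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            if np.all(objectives[j] <= objectives[i]) and np.any(objectives[j] < objectives[i]):
                nondominated[i] = False
                break
    return nondominated

# Hypothetical (vibration index, actuation power) pairs for candidate flap designs.
candidates = np.array([
    [0.90, 4.1], [0.60, 5.0], [0.70, 3.8], [0.50, 6.2], [0.80, 3.9], [0.55, 5.6],
])
mask = pareto_front(candidates)
print("Pareto-optimal designs:\n", candidates[mask])
```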

  7. Optimisation of medium composition for probiotic biomass production using response surface methodology.

    PubMed

    Anvari, Masumeh; Khayati, Gholam; Rostami, Shora

    2014-02-01

    This study aimed to optimise lactose, inulin and yeast extract concentrations and the culture pH for maximising the growth of a probiotic bacterium, Bifidobacterium animalis subsp. lactis, in apple juice, and to assess the effects of these factors by using response surface methodology. A second-order central composite design was applied to evaluate the effects of these independent variables on growth of the microorganism. A polynomial regression model with cubic and quadratic terms was used for analysis of the experimental data. It was found that the effects involving inulin, yeast extract and pH on growth of the bacterium were significant, and the strongest effect was given by the yeast extract concentration. Estimated optimum conditions of the factors on the bacterial growth are as follows: lactose concentration=9·5 g/l; inulin concentration=38·5 mg/l; yeast extract concentration=9·6 g/l and initial pH=6·2.

  8. Shape Optimization of Supersonic Turbines Using Response Surface and Neural Network Methods

    NASA Technical Reports Server (NTRS)

    Papila, Nilay; Shyy, Wei; Griffin, Lisa W.; Dorney, Daniel J.

    2001-01-01

    Turbine performance directly affects engine specific impulse, thrust-to-weight ratio, and cost in a rocket propulsion system. A global optimization framework combining the radial basis neural network (RBNN) and the polynomial-based response surface method (RSM) is constructed for shape optimization of a supersonic turbine. Based on the optimized preliminary design, shape optimization is performed for the first vane and blade of a 2-stage supersonic turbine, involving O(10) design variables. The design of experiment approach is adopted to reduce the data size needed by the optimization task. It is demonstrated that a major merit of the global optimization approach is that it enables one to adaptively revise the design space to perform multiple optimization cycles. This benefit is realized when an optimal design approaches the boundary of a pre-defined design space. Furthermore, by inspecting the influence of each design variable, one can also gain insight into the existence of multiple design choices and select the optimum design based on other factors such as stress and materials considerations.
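
    A radial basis surrogate of the kind combined with polynomial response surfaces here can be illustrated with a plain Gaussian RBF interpolant. The kernel width, sample function and training points below are assumptions chosen for demonstration, not the turbine analysis.

```python
import numpy as np

def gaussian_rbf_fit(X, y, width=1.0):
    """Solve for RBF weights so the interpolant passes through the samples."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    return np.linalg.solve(Phi, y)

def gaussian_rbf_predict(X_train, weights, X_new, width=1.0):
    """Evaluate the interpolant at new design points."""
    d2 = np.sum((X_new[:, None, :] - X_train[None, :, :]) ** 2, axis=-1)
    Phi = np.exp(-d2 / (2.0 * width ** 2))
    return Phi @ weights

# Toy two-variable "design space" samples of an assumed response function.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(25, 2))
y_train = np.sin(np.pi * X_train[:, 0]) + X_train[:, 1] ** 2

w = gaussian_rbf_fit(X_train, y_train, width=0.5)
X_test = np.array([[0.2, -0.3], [0.7, 0.4]])
print(gaussian_rbf_predict(X_train, w, X_test, width=0.5))
```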

  9. A new approach to synthesis of benzyl cinnamate: Optimization by response surface methodology.

    PubMed

    Zhang, Dong-Hao; Zhang, Jiang-Yan; Che, Wen-Cai; Wang, Yun

    2016-09-01

    In this work, a new approach to the synthesis of benzyl cinnamate by enzymatic esterification of cinnamic acid with benzyl alcohol is optimized by response surface methodology. The effects of various reaction conditions, including temperature, enzyme loading, substrate molar ratio of benzyl alcohol to cinnamic acid, and reaction time, are investigated. A 5-level-4-factor central composite design is employed to search for the optimal yield of benzyl cinnamate. A quadratic polynomial regression model is used to analyze the experimental data at a 95% confidence level (P<0.05). The coefficient of determination of this model is found to be 0.9851. Three sets of optimum reaction conditions are established, and verification experiments are performed to validate the optimum points. Under the optimum conditions (40 °C, 31 mg/mL enzyme loading, 2.6:1 molar ratio, 27 h), the yield reaches 97.7%, which provides an efficient process for the industrial production of benzyl cinnamate. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Response Surface Methodology for the Optimization of Preparation of Biocomposites Based on Poly(lactic acid) and Durian Peel Cellulose

    PubMed Central

    Penjumras, Patpen; Abdul Rahman, Russly; Talib, Rosnita A.; Abdan, Khalina

    2015-01-01

    Response surface methodology was used to optimize preparation of biocomposites based on poly(lactic acid) and durian peel cellulose. The effects of cellulose loading, mixing temperature, and mixing time on tensile strength and impact strength were investigated. A central composite design was employed to determine the optimum preparation condition of the biocomposites to obtain the highest tensile strength and impact strength. A second-order polynomial model was developed for predicting the tensile strength and impact strength based on the composite design. It was found that composites were best fit by a quadratic regression model with high coefficient of determination (R 2) value. The selected optimum condition was 35 wt.% cellulose loading at 165°C and 15 min of mixing, leading to a desirability of 94.6%. Under the optimum condition, the tensile strength and impact strength of the biocomposites were 46.207 MPa and 2.931 kJ/m2, respectively. PMID:26167523

  11. Statistical analysis and isotherm study of uranium biosorption by Padina sp. algae biomass.

    PubMed

    Khani, Mohammad Hassan

    2011-06-01

    The application of response surface methodology is presented for optimizing the removal of U ions from aqueous solutions using Padina sp., a brown marine algal biomass. A Box-Wilson central composite design was employed to assess the individual and interactive effects of the four main parameters (pH, initial uranium concentration in solution, contact time and temperature) on uranium uptake. Response surface analysis showed that the data were adequately fitted to a second-order polynomial model. Analysis of variance showed a high coefficient of determination (R2 = 0.9746), and a satisfactory second-order regression model was derived. The optimum pH, initial uranium concentration in solution, contact time and temperature were found to be 4.07, 778.48 mg/l, 74.31 min, and 37.47 °C, respectively. Maximized uranium uptake was predicted and experimentally validated. The equilibrium data for biosorption of U onto Padina sp. were well represented by the Langmuir isotherm, giving a maximum monolayer adsorption capacity as high as 376.73 mg/g.

  12. Multi-criteria optimization for ultrasonic-assisted extraction of antioxidants from Pericarpium Citri Reticulatae using response surface methodology, an activity-based approach.

    PubMed

    Zeng, Shanshan; Wang, Lu; Zhang, Lei; Qu, Haibin; Gong, Xingchu

    2013-06-01

    An activity-based approach to optimize the ultrasonic-assisted extraction of antioxidants from Pericarpium Citri Reticulatae (Chenpi in Chinese) was developed. Response surface optimization based on a quantitative composition-activity relationship model showed the relationships among product chemical composition, antioxidant activity of the extract, and parameters of the extraction process. Three parameters of ultrasonic-assisted extraction, namely the ethanol/water ratio, Chenpi amount, and alkaline amount, were investigated to give optimum extraction conditions for antioxidants of Chenpi: ethanol/water 70:30 v/v, Chenpi amount of 10 g, and alkaline amount of 28 mg. The experimental antioxidant yield under the optimum conditions was found to be 196.5 mg/g Chenpi, and the antioxidant activity was 2023.8 μmol Trolox equivalents/g of the Chenpi powder. The results agreed well with the second-order polynomial regression model. The presented approach shows great application potential in both the food and pharmaceutical industries. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  13. Optimization by response surface methodology of lutein recovery from paprika leaves using accelerated solvent extraction.

    PubMed

    Kang, Jae-Hyun; Kim, Suna; Moon, BoKyung

    2016-08-15

    In this study, we used response surface methodology (RSM) to optimize the extraction conditions for recovering lutein from paprika leaves using accelerated solvent extraction (ASE). The lutein content was quantitatively analyzed using a UPLC equipped with a BEH C18 column. A central composite design (CCD) was employed for the experimental design to obtain the optimized combination of extraction temperature (°C), static time (min), and solvent (EtOH, %). The experimental data obtained from a twenty-sample set were fitted to a second-order polynomial equation using multiple regression analysis. The adjusted coefficient of determination (R2) for the lutein extraction model was 0.9518, and the probability value (p=0.0000) demonstrated a high significance for the regression model. The optimum extraction conditions for lutein were temperature: 93.26 °C, static time: 5 min, and solvent: 79.63% EtOH. Under these conditions, the predicted extraction yield of lutein was 232.60 μg/g. Copyright © 2016 Elsevier Ltd. All rights reserved.

  14. Antioxidant Compound Extraction from Maqui (Aristotelia chilensis [Mol] Stuntz) Berries: Optimization by Response Surface Methodology

    PubMed Central

    Quispe-Fuentes, Issis; Vega-Gálvez, Antonio; Campos-Requena, Víctor H.

    2017-01-01

    The optimum conditions for antioxidant extraction from maqui berry were determined using a response surface methodology. A three-level D-optimal design was used to investigate the effects of three independent variables, namely solvent type (methanol, acetone and ethanol), solvent concentration and extraction time, on total antioxidant capacity measured by the oxygen radical absorbance capacity (ORAC) method. The D-optimal design considered 42 experiments including 10 central point replicates. A second-order polynomial model showed that more than 89% of the variation is explained, with a satisfactory prediction (78%). ORAC values were higher when acetone was used as the solvent at lower concentrations, and the extraction time range studied showed no significant influence on ORAC values. The optimal conditions for antioxidant extraction were 29% acetone for 159 min under agitation. From the results obtained it can be concluded that the given predictive model describes the antioxidant extraction process from maqui berry.

  15. Probabilistic risk assessment for CO2 storage in geological formations: robust design and support for decision making under uncertainty

    NASA Astrophysics Data System (ADS)

    Oladyshkin, Sergey; Class, Holger; Helmig, Rainer; Nowak, Wolfgang

    2010-05-01

    CO2 storage in geological formations is currently being discussed intensively as a technology for mitigating CO2 emissions. However, any large-scale application requires a thorough analysis of the potential risks. Current numerical simulation models are too expensive for probabilistic risk analysis and for stochastic approaches based on brute-force repeated simulation. Even single deterministic simulations may require parallel high-performance computing. The multiphase flow processes involved are too non-linear for quasi-linear error propagation and other simplified stochastic tools. As an alternative approach, we propose a massive stochastic model reduction based on the probabilistic collocation method. The model response is projected onto an orthogonal basis of higher-order polynomials to approximate the dependence on uncertain parameters (porosity, permeability etc.) and design parameters (injection rate, depth etc.). This allows for a non-linear propagation of model uncertainty affecting the predicted risk, ensures fast computation and provides a powerful tool for combining design variables and uncertain variables into one approach based on an integrative response surface. Thus, the design task of finding optimal injection regimes explicitly includes uncertainty, which leads to robust designs of the non-linear system that minimize failure probability and provide valuable support for risk-informed management decisions. We validate our proposed stochastic approach by Monte Carlo simulation using a common 3D benchmark problem (Class et al., Computational Geosciences 13, 2009). A reasonable compromise between computational effort and precision was reached already with second-order polynomials. In our case study, the proposed approach yields a significant computational speedup by a factor of 100 compared to Monte Carlo simulation. We demonstrate that, due to the non-linearity of the flow and transport processes during CO2 injection, including uncertainty in the analysis leads to a systematic and significant shift of predicted leakage rates towards higher values compared with deterministic simulations, affecting both risk estimates and the design of injection scenarios. This implies that neglecting uncertainty can be a strong simplification for modeling CO2 injection, and the consequences can be stronger than those of neglecting several physical phenomena (e.g. phase transition, convective mixing, capillary forces etc.). The authors would like to thank the German Research Foundation (DFG) for financial support of the project within the Cluster of Excellence in Simulation Technology (EXC 310/1) at the University of Stuttgart. Keywords: polynomial chaos; CO2 storage; multiphase flow; porous media; risk assessment; uncertainty; integrative response surfaces
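
    The probabilistic collocation idea, projecting the model response onto an orthogonal polynomial basis of the uncertain inputs, can be sketched for a single Gaussian parameter using probabilists' Hermite polynomials. The 'model' below is a stand-in function; nothing about the CO2 benchmark itself is reproduced.

```python
import numpy as np
from numpy.polynomial import hermite_e as He  # probabilists' Hermite polynomials
from math import factorial

def model(k):
    """Stand-in for an expensive simulator response to one uncertain parameter k."""
    return np.exp(0.3 * k) + 0.1 * k ** 2

order = 2
# Collocation/regression points drawn from the input density (standard normal here).
rng = np.random.default_rng(1)
xi = rng.standard_normal(50)
y = model(xi)

# Least-squares projection onto He_0..He_order, which are orthogonal
# with respect to the standard normal density.
Psi = np.column_stack([He.hermeval(xi, np.eye(order + 1)[k]) for k in range(order + 1)])
coef, *_ = np.linalg.lstsq(Psi, y, rcond=None)

# Moments follow directly from the expansion coefficients.
mean = coef[0]
variance = sum(coef[k] ** 2 * factorial(k) for k in range(1, order + 1))
print("PCE mean ~", round(mean, 4), " variance ~", round(variance, 4))
```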

  16. Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients.

    PubMed

    Solano-Altamirano, Juan Manuel; Vázquez-Otero, Alejandro; Khikhlukha, Danila; Dormido, Raquel; Duro, Natividad

    2017-11-30

    In this paper, we propose a new algorithm to reconstruct optics surfaces (aka wavefronts) from gradients, defined on a circular domain, by means of the Spherical Harmonics. The experimental results indicate that this algorithm renders the same accuracy, compared to the reconstruction based on classical Zernike polynomials, using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), wherein several polynomial sets are coded. Therefore, this library constitutes a robust software alternative for wavefront reconstruction in a high energy laser field, optical surface reconstruction, and, more generally, in surface reconstruction from gradients. The library is a candidate for being integrated in control systems for optical devices, or similarly to be used in ad hoc simulations. Moreover, it has been developed with flexibility in mind, and, as such, the implementation includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in the field of high-energy lasers production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can be easily extended to include additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features.

  17. Using Spherical-Harmonics Expansions for Optics Surface Reconstruction from Gradients

    PubMed Central

    Solano-Altamirano, Juan Manuel; Khikhlukha, Danila

    2017-01-01

    In this paper, we propose a new algorithm to reconstruct optics surfaces (aka wavefronts) from gradients, defined on a circular domain, by means of the Spherical Harmonics. The experimental results indicate that this algorithm renders the same accuracy, compared to the reconstruction based on classical Zernike polynomials, using a smaller number of polynomial terms, which potentially speeds up the wavefront reconstruction. Additionally, we provide an open-source C++ library, released under the terms of the GNU General Public License version 2 (GPLv2), wherein several polynomial sets are coded. Therefore, this library constitutes a robust software alternative for wavefront reconstruction in a high energy laser field, optical surface reconstruction, and, more generally, in surface reconstruction from gradients. The library is a candidate for being integrated in control systems for optical devices, or similarly to be used in ad hoc simulations. Moreover, it has been developed with flexibility in mind, and, as such, the implementation includes the following features: (i) a mock-up generator of various incident wavefronts, intended to simulate the wavefronts commonly encountered in the field of high-energy lasers production; (ii) runtime selection of the library in charge of performing the algebraic computations; (iii) a profiling mechanism to measure and compare the performance of different steps of the algorithms and/or third-party linear algebra libraries. Finally, the library can be easily extended to include additional dependencies, such as porting the algebraic operations to specific architectures, in order to exploit hardware acceleration features. PMID:29189722

  18. Comparison of parametric methods for modeling corneal surfaces

    NASA Astrophysics Data System (ADS)

    Bouazizi, Hala; Brunette, Isabelle; Meunier, Jean

    2017-02-01

    Corneal topography is a medical imaging technique used to obtain the 3D shape of the cornea as a set of 3D points on its anterior and posterior surfaces. From these data, topographic maps can be derived to assist the ophthalmologist in the diagnosis of disorders. In this paper, we compare three different mathematical parametric representations of the corneal surfaces least-squares fitted to the data provided by corneal topography. The parameters obtained from these models reduce the dimensionality of the data from several thousand 3D points to only a few parameters and could eventually be useful for diagnosis, biometry, implant design, etc. The first representation is based on Zernike polynomials, which are commonly used in optics. A variant of these polynomials, named Bhatia-Wolf, will also be investigated. These two sets of polynomials are defined over a circular domain, which is convenient for modeling the elevation (height) of the corneal surface. The third representation uses Spherical Harmonics, which are particularly well suited for modeling nearly-spherical objects, as is the case for the cornea. We compared the three methods using the following three criteria: the root-mean-square error (RMSE), the number of parameters and the visual accuracy of the reconstructed topographic maps. A large dataset of more than 2000 corneal topographies was used. Our results showed that Spherical Harmonics were superior, with a mean RMSE lower than 2.5 microns with 36 coefficients (order 5) for normal corneas and lower than 5 microns for two diseases affecting the corneal shape: keratoconus and Fuchs' dystrophy.
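
    The least-squares fitting step these representations share can be sketched with a handful of unnormalised low-order Zernike terms. The sample surface below is synthetic; it illustrates the fitting machinery only, not the corneal dataset.

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few unnormalised low-order Zernike terms on the unit disc."""
    return np.column_stack([
        np.ones_like(rho),            # piston
        rho * np.cos(theta),          # tilt x
        rho * np.sin(theta),          # tilt y
        2 * rho ** 2 - 1,             # defocus
        rho ** 2 * np.cos(2 * theta), # astigmatism 0/90
        rho ** 2 * np.sin(2 * theta), # astigmatism 45
    ])

# Synthetic "elevation" samples spread uniformly over the unit disc.
rng = np.random.default_rng(2)
rho = np.sqrt(rng.uniform(0, 1, 2000))
theta = rng.uniform(0, 2 * np.pi, 2000)
z = (0.8 * (2 * rho ** 2 - 1) + 0.05 * rho * np.cos(theta)
     + 0.002 * rng.standard_normal(2000))

A = zernike_basis(rho, theta)
coef, *_ = np.linalg.lstsq(A, z, rcond=None)
rmse = np.sqrt(np.mean((A @ coef - z) ** 2))
print("coefficients:", np.round(coef, 4), " RMSE:", round(rmse, 4))
```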

  19. Invariant algebraic surfaces for a virus dynamics

    NASA Astrophysics Data System (ADS)

    Valls, Claudia

    2015-08-01

    In this paper, we provide a complete classification of the invariant algebraic surfaces and of the rational first integrals for a well-known virus system. In the proofs, we use the weight-homogeneous polynomials and the method of characteristic curves for solving linear partial differential equations.

  20. Parametric Study of an Ablative TPS and Hot Structure Heatshield for a Mars Entry Capsule Vehicle

    NASA Technical Reports Server (NTRS)

    Langston, Sarah L.; Lang, Christapher G.; Samareh, Jamshid A.

    2017-01-01

    The National Aeronautics and Space Administration is planning to send humans to Mars. As part of the Evolvable Mars Campaign, different entry vehicle configurations are being designed and considered for delivering larger payloads than have previously been sent to the surface of Mars. Mass and packing volume are driving factors in the vehicle design, and the thermal protection for planetary entry is an area in which advances in technology can offer potential mass and volume savings. The feasibility and potential benefits of a carbon-carbon hot structure concept for a Mars entry vehicle are explored in this paper. The windward heat shield of a capsule design is assessed for the hot structure concept as well as for an ablative thermal protection system (TPS) attached to a honeycomb sandwich structure. Independent thermal and structural analyses are performed to determine the minimum mass design. The analyses are repeated for a range of design parameters, which include the trajectory, vehicle size, and payload. Polynomial response functions are created from the analysis results to study the capsule mass with respect to the design parameters. Results from the polynomial response functions created from the thermal and structural analyses indicate that the mass of the capsule was higher for the hot structure concept than for the ablative TPS over the parameter space considered in this study.

  1. (Dis)similarity in Impulsivity and Marital Satisfaction: A Comparison of Volatility, Compatibility, and Incompatibility Hypotheses

    PubMed Central

    Derrick, Jaye L.; Houston, Rebecca J.; Quigley, Brian M.; Testa, Maria; Kubiak, Audrey; Levitt, Ash; Homish, Gregory G.; Leonard, Kenneth E.

    2016-01-01

    Impulsivity is negatively associated with relationship satisfaction, but whether relationship functioning is harmed or helped when both partners are high in impulsivity is unclear. The influence of impulsivity might be exacerbated (the Volatility Hypothesis) or reversed (the Compatibility Hypothesis). Alternatively, discrepancies in impulsivity might be particularly problematic (the Incompatibility Hypothesis). Behavioral and self-report measures of impulsivity were collected from a community sample of couples. Mixed effect polynomial regressions with response surface analysis provide evidence in favor of both the Compatibility Hypothesis and the Incompatibility Hypothesis, but not the Volatility Hypothesis. Mediation analyses suggest results for satisfaction are driven by perceptions of the partner's negative behavior and responsiveness. Implications for the study of both impulsivity and relationship functioning are discussed. PMID:26949275

  2. Optical Performance Modeling of FUSE Telescope Mirror

    NASA Technical Reports Server (NTRS)

    Saha, Timo T.; Ohl, Raymond G.; Friedman, Scott D.; Moos, H. Warren

    2000-01-01

    We describe the Metrology Data Processor (METDAT), the Optical Surface Analysis Code (OSAC), and their application to the image evaluation of the Far Ultraviolet Spectroscopic Explorer (FUSE) mirrors. The FUSE instrument, designed and developed by the Johns Hopkins University and launched in June 1999, is an astrophysics satellite which provides high-resolution spectra (lambda/Delta(lambda) = 20,000 - 25,000) in the wavelength region from 90.5 to 118.7 nm. The FUSE instrument comprises four co-aligned, normal-incidence, off-axis parabolic mirrors, four Rowland circle spectrograph channels with holographic gratings, and delay line microchannel plate detectors. The OSAC code provides a comprehensive analysis of optical system performance, including the effects of optical surface misalignments, low spatial frequency deformations described by discrete polynomial terms, mid- and high-spatial frequency deformations (surface roughness), and diffraction due to the finite size of the aperture. Both normal incidence (traditionally infrared, visible, and near ultraviolet mirror systems) and grazing incidence (x-ray mirror systems) systems can be analyzed. The code also properly accounts for reflectance losses on the mirror surfaces. Low frequency surface errors are described in OSAC by using Zernike polynomials for normal incidence mirrors and Legendre-Fourier polynomials for grazing incidence mirrors. The scatter analysis of the mirror is based on scalar scatter theory. The program accepts simple autocovariance (ACV) function models or power spectral density (PSD) models derived from mirror surface metrology data as input to the scatter calculation. The end product of the program is a user-defined pixel array containing the system Point Spread Function (PSF). The METDAT routine is used in conjunction with the OSAC program. This code reads in laboratory metrology data in a normalized format. The code then fits the data using Zernike polynomials for normal incidence systems or Legendre-Fourier polynomials for grazing incidence systems. It removes low order terms from the metrology data, calculates statistical ACV or PSD functions, and fits these data to OSAC models for the scatter analysis. In this paper we briefly describe the laboratory image testing of a FUSE spare mirror performed in the near and vacuum ultraviolet at Johns Hopkins University and the OSAC modeling of the test setup performed at NASA/GSFC. The test setup is a double-pass configuration consisting of a Hg discharge source, the FUSE off-axis parabolic mirror under test, an autocollimating flat mirror, and a tomographic imaging detector. Two additional, small fold flats are used in the optical train to accommodate the light source and the detector. The modeling is based on Zernike fitting and PSD analysis of surface metrology data measured by both the mirror vendor (Tinsley) and JHU. The results of our models agree well with the laboratory imaging data, thus validating our theoretical model. Finally, we predict the imaging performance of the FUSE mirrors in their flight configuration at far-ultraviolet wavelengths.

  3. Radial Basis Function Based Quadrature over Smooth Surfaces

    DTIC Science & Technology

    2016-03-24

    Radial basis functions φ(r), piecewise smooth (conditionally positive definite) types: MN (monomial) |r|^(2m+1); TPS (thin plate spline) |r|^(2m) ln|r|; infinitely smooth ... smooth surfaces using polynomial interpolants, while [27] couples thin plate spline interpolation (see Table 1) with Green's integral formula [29]

  4. Mapping Landslides in Lunar Impact Craters Using Chebyshev Polynomials and Dem's

    NASA Astrophysics Data System (ADS)

    Yordanov, V.; Scaioni, M.; Brunetti, M. T.; Melis, M. T.; Zinzi, A.; Giommi, P.

    2016-06-01

    Geological slope failure processes have been observed on the lunar surface for decades; nevertheless, a detailed and exhaustive lunar landslide inventory has not yet been produced. For a preliminary survey, WAC images and DEM maps from LROC at 100 m/pixel have been exploited in combination with the criteria applied by Brunetti et al. (2015) to detect the landslides. These criteria are based on the visual analysis of optical images to recognize mass-wasting features. In the literature, Chebyshev polynomials have been applied to interpolate crater cross-sections in order to obtain a parametric characterization useful for classification into different morphological shapes. Here a new implementation of Chebyshev polynomial approximation is proposed, taking into account statistical testing of the results obtained during least-squares estimation. The presence of landslides in lunar craters is then investigated by analyzing the absolute values of the odd coefficients of the estimated Chebyshev polynomials. A case study on the Cassini A crater has demonstrated the key points of the proposed methodology and outlined the future development required to carry it out.
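
    The role of the odd Chebyshev coefficients as asymmetry indicators can be illustrated with numpy's Chebyshev fitting routines. The crater cross-section below is synthetic, and the 0.01 threshold is an arbitrary placeholder, not a value from the study.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Synthetic crater cross-section on [-1, 1]: a symmetric bowl plus a slump on one wall.
x = np.linspace(-1, 1, 201)
profile = 0.5 * x ** 2 - 0.5                            # symmetric bowl
profile += np.where(x > 0.4, 0.15 * (x - 0.4), 0.0)     # asymmetric mass-wasting feature

coef = C.chebfit(x, profile, deg=10)

# Odd-degree Chebyshev coefficients capture left/right asymmetry of the cross-section.
odd = np.abs(coef[1::2])
print("max |odd coefficient|:", round(odd.max(), 4))
print("possible landslide signature:", bool(odd.max() > 0.01))
```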

  5. Polynomial sequences for bond percolation critical thresholds

    DOE PAGES

    Scullard, Christian R.

    2011-09-22

    In this paper, I compute the inhomogeneous (multi-probability) bond critical surfaces for the (4, 6, 12) and (3^4, 6) lattices using the linearity approximation described in (Scullard and Ziff, J. Stat. Mech. 03021), implemented as a branching process of lattices. I find the estimates for the bond percolation thresholds, pc(4, 6, 12) = 0.69377849... and pc(3^4, 6) = 0.43437077..., compared with Parviainen's numerical results of pc = 0.69373383... and pc = 0.43430621... . These deviations are of the order 10^-5, as is standard for this method. Deriving thresholds in this way for a given lattice leads to a polynomial with integer coefficients, the root in [0, 1] of which gives the estimate for the bond threshold, and I show how the method can be refined, leading to a series of higher-order polynomials making predictions that likely converge to the exact answer. Finally, I discuss how this fact hints that for certain graphs, such as the kagome lattice, the exact bond threshold may not be the root of any polynomial with integer coefficients.

  6. [Optimization of vacuum belt drying process of Gardeniae Fructus in Reduning injection by Box-Behnken design-response surface methodology].

    PubMed

    Huang, Dao-sheng; Shi, Wei; Han, Lei; Sun, Ke; Chen, Guang-bo; Wu, Jian-xiong; Xu, Gui-hong; Bi, Yu-an; Wang, Zhen-zhong; Xiao, Wei

    2015-06-01

    To optimize the vacuum belt drying process conditions for Gardeniae Fructus extract from Reduning injection by Box-Behnken design-response surface methodology, a three-factor, three-level Box-Behnken experimental design was employed on the basis of single-factor experiments. With drying temperature, drying time and feeding speed as independent variables and the content of geniposide as the dependent variable, the experimental data were fitted to a second order polynomial equation, establishing the mathematical relationship between the content of geniposide and the respective variables. With the experimental data analyzed by Design-Expert 8.0.6, the optimal drying parameters were as follows: drying temperature 98.5 °C, drying time 89 min, and feeding speed 99.8 r·min(-1). Three verification experiments were performed under these conditions and the measured average content of geniposide was 564.108 mg·g(-1), which was close to the model prediction of 563.307 mg·g(-1). According to the verification tests, the Gardeniae Fructus belt drying process is stable and feasible, so single-factor experiments combined with response surface methodology (RSM) can be used to optimize the drying technology of the Gardeniae Fructus extract in Reduning injection.

  7. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    PubMed

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we proposed a modeling approach based on local polynomial regression that uses climate, e.g. temperature, and land surface, e.g., soil moisture, variables as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skills at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
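
    Local polynomial regression of this kind can be sketched as a locally weighted least-squares fit with a tricube kernel. The predictor, bandwidth fraction and data below are placeholders, not the study's hydroclimate series.

```python
import numpy as np

def local_poly_predict(x_train, y_train, x0, frac=0.4, degree=2):
    """Predict at x0 by fitting a weighted polynomial to the nearest fraction of points."""
    n = len(x_train)
    k = max(degree + 1, int(frac * n))
    dist = np.abs(x_train - x0)
    idx = np.argsort(dist)[:k]
    h = dist[idx].max() or 1.0
    w = (1 - (dist[idx] / h) ** 3) ** 3              # tricube weights
    # Weighted polynomial least squares centred at x0.
    A = np.vander(x_train[idx] - x0, degree + 1, increasing=True)
    W = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * W[:, None], y_train[idx] * W, rcond=None)
    return coef[0]                                   # value of the local fit at x0

# Toy relationship between a climate predictor (e.g. soil moisture) and TOC.
rng = np.random.default_rng(3)
x = np.sort(rng.uniform(0, 10, 120))
y = 2 + 1.5 * np.sin(x / 2) + 0.2 * rng.standard_normal(120)

print(round(local_poly_predict(x, y, x0=4.0), 3))
```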

  8. An extended UTD analysis for the scattering and diffraction from cubic polynomial strips

    NASA Technical Reports Server (NTRS)

    Constantinides, E. D.; Marhefka, R. J.

    1993-01-01

    Spline and polynomial type surfaces are commonly used in high frequency modeling of complex structures such as aircraft, ships, reflectors, etc. It is therefore of interest to develop an efficient and accurate solution to describe the scattered fields from such surfaces. An extended Uniform Geometrical Theory of Diffraction (UTD) solution for the scattering and diffraction from perfectly conducting cubic polynomial strips is derived and involves the incomplete Airy integrals as canonical functions. This new solution is universal in nature and can be used to effectively describe the scattered fields from flat, strictly concave or convex, and concave convex boundaries containing edges. The classic UTD solution fails to describe the more complicated field behavior associated with higher order phase catastrophes and therefore a new set of uniform reflection and first-order edge diffraction coefficients is derived. Also, an additional diffraction coefficient associated with a zero-curvature (inflection) point is presented. Higher order effects such as double edge diffraction, creeping waves, and whispering gallery modes are not examined. The extended UTD solution is independent of the scatterer size and also provides useful physical insight into the various scattering and diffraction processes. Its accuracy is confirmed via comparison with some reference moment method results.

  9. Polynomial algebra reveals diverging roles of the unfolded protein response in endothelial cells during ischemia-reperfusion injury.

    PubMed

    Le Pape, Sylvain; Dimitrova, Elena; Hannaert, Patrick; Konovalov, Alexander; Volmer, Romain; Ron, David; Thuillier, Raphaël; Hauet, Thierry

    2014-08-25

    The unfolded protein response (UPR)--the endoplasmic reticulum stress response--is found in various pathologies including ischemia-reperfusion injury (IRI). However, its role during IRI is still unclear. Here, by combining two different bioinformatical methods--a method based on ordinary differential equations (Time Series Network Inference) and an algebraic method (probabilistic polynomial dynamical systems)--we identified the IRE1α-XBP1 and the ATF6 pathways as the main UPR effectors involved in cell's adaptation to IRI. We validated these findings experimentally by assessing the impact of their knock-out and knock-down on cell survival during IRI. Copyright © 2014 Federation of European Biochemical Societies. Published by Elsevier B.V. All rights reserved.

  10. The complexity of identifying Ryu-Takayanagi surfaces in AdS3/CFT2

    DOE PAGES

    Bao, Ning; Chatwin-Davies, A.

    2016-11-07

    Here, we present a constructive algorithm for the determination of Ryu-Takayanagi surfaces in AdS3/CFT2 which exploits previously noted connections between holographic entanglement entropy and max-flow/min-cut. We then characterize its complexity as a polynomial time algorithm.

  11. Optimization of Extraction Conditions for Phenolic Acids from the Leaves of Melissa officinalis L. Using Response Surface Methodology

    PubMed Central

    Yoo, Guijae; Lee, Il Kyun; Park, Seonju; Kim, Nanyoung; Park, Jun Hyung; Kim, Seung Hyun

    2018-01-01

    Background: Melissa officinalis L. is a well-known medicinal plant from the family Lamiaceae, which is distributed throughout Eastern Mediterranean region and Western Asia. Objective: In this study, response surface methodology (RSM) was utilized to optimize the extraction conditions for bioactive compounds from the leaves of M. officinalis L. Materials and Methods: A Box–Behnken design (BBD) was utilized to evaluate the effects of three independent variables, namely extraction temperature (°C), methanol concentration (%), and solvent-to-material ratio (mL/g) on the responses of the contents of caffeic acid and rosmarinic acid. Results: Regression analysis showed a good fit of the experimental data. The optimal condition was obtained at extraction temperature 80.53°C, methanol concentration 29.89%, and solvent-to-material ratio 30 mL/g. Conclusion: These results indicate the suitability of the model employed and the successful application of RSM in optimizing the extraction conditions. This study may be useful for standardizing production quality, including improving the efficiency of large-scale extraction systems. SUMMARY: The optimum conditions for the extraction of major phenolic acids from the leaves of Melissa officinalis L. were determined using response surface methodology. A Box–Behnken design was utilized to evaluate the effects of three independent variables. A quadratic polynomial model provided a satisfactory description of the experimental data. The optimized condition for simultaneous maximum contents of caffeic acid and rosmarinic acid was determined. Abbreviations used: RSM: Response surface methodology, BBD: Box–Behnken design, CA: Caffeic acid, RA: Rosmarinic acid, HPLC: High-performance liquid chromatography. PMID:29720824

  12. A quadratic regression modelling on paddy production in the area of Perlis

    NASA Astrophysics Data System (ADS)

    Goh, Aizat Hanis Annas; Ali, Zalila; Nor, Norlida Mohd; Baharum, Adam; Ahmad, Wan Muhamad Amir W.

    2017-08-01

    Polynomial regression models are useful in situations in which the relationship between a response variable and the predictor variables is curvilinear. Polynomial regression fits the nonlinear relationship into a least squares linear regression model by decomposing the predictor variables into a kth order polynomial. The polynomial order determines the number of inflexions on the curvilinear fitted line. A second order polynomial forms a quadratic expression (parabolic curve) with either a single maximum or minimum; a third order polynomial forms a cubic expression with both a relative maximum and a minimum. This study used paddy data from the area of Perlis to model paddy production based on paddy cultivation characteristics and environmental characteristics. The results indicated that a quadratic regression model best fits the data and that paddy production is affected by urea fertilizer application and by the interaction between the amount of average rainfall and the percentage of area affected by pest and disease. Urea fertilizer application has a quadratic effect in the model, indicating that as the number of days of urea fertilizer application increases, paddy production is expected to decrease until it reaches a minimum value and then to increase at a higher number of days of urea application. The decrease in paddy production with an increase in rainfall is greater the higher the percentage of area affected by pest and disease.
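
    The single maximum or minimum of a fitted second-order term falls at the stationary point of the parabola, x* = -b1/(2*b2). A minimal one-predictor sketch, with invented data standing in for the urea-application variable:

```python
import numpy as np

# Invented response data with a curved (quadratic) trend.
x = np.array([2, 4, 6, 8, 10, 12, 14], dtype=float)
y = np.array([5.1, 4.2, 3.8, 3.7, 4.0, 4.6, 5.5])

# Fit y = b0 + b1*x + b2*x^2 by least squares (polyfit returns highest degree first).
b2, b1, b0 = np.polyfit(x, y, 2)

# Stationary point of the parabola: dy/dx = 0  =>  x* = -b1 / (2*b2).
x_star = -b1 / (2 * b2)
kind = "minimum" if b2 > 0 else "maximum"
print(f"turning point at x = {x_star:.2f} ({kind})")
```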

  13. Utilization of a Response-Surface Technique in the Study of Plant Responses to Ozone and Sulfur Dioxide Mixtures 1

    PubMed Central

    Ormrod, Douglas P.; Tingey, David T.; Gumpertz, Marcia L.; Olszyk, David M.

    1984-01-01

    A second order rotatable design was used to obtain polynomial equations describing the effects of combinations of sulfur dioxide (SO2) and ozone (O3) on foliar injury and plant growth. The response surfaces derived from these equations were displayed as contour or isometric (3-dimensional) plots. The contour plots aided in the interpretation of the pollutant interactions and were judged easier to use than the isometric plots. Plants of `Grand Rapids' lettuce (Lactuca sativa L.), `Cherry Belle' radish (Raphanus sativus L.), and `Alsweet' pea (Pisum sativum L.) were grown in a controlled environment chamber and exposed to seven combinations of SO2 and O3. Injury was evaluated based on visible chlorosis and necrosis, and growth was evaluated as leaf area and dry weight. Covariate measurements were used to increase precision. Radish and pea had greater injury, in general, than did lettuce; all three species were sensitive to O3, and pea was most sensitive and radish least sensitive to SO2. Leaf injury responses were relatively more affected by the pollutants than were plant growth responses in radish and pea but not in lettuce. In radish, hypocotyl growth was more sensitive to the pollutants than was leaf growth. PMID:16663598

  14. Utilization of a response-surface technique in the study of plant responses to ozone and sulfur dioxide mixtures.

    PubMed

    Ormrod, D P; Tingey, D T; Gumpertz, M L; Olszyk, D M

    1984-05-01

    A second order rotatable design was used to obtain polynomial equations describing the effects of combinations of sulfur dioxide (SO2) and ozone (O3) on foliar injury and plant growth. The response surfaces derived from these equations were displayed as contour or isometric (3-dimensional) plots. The contour plots aided in the interpretation of the pollutant interactions and were judged easier to use than the isometric plots. Plants of 'Grand Rapids' lettuce (Lactuca sativa L.), 'Cherry Belle' radish (Raphanus sativus L.), and 'Alsweet' pea (Pisum sativum L.) were grown in a controlled environment chamber and exposed to seven combinations of SO2 and O3. Injury was evaluated based on visible chlorosis and necrosis, and growth was evaluated as leaf area and dry weight. Covariate measurements were used to increase precision. Radish and pea had greater injury, in general, than did lettuce; all three species were sensitive to O3, and pea was most sensitive and radish least sensitive to SO2. Leaf injury responses were relatively more affected by the pollutants than were plant growth responses in radish and pea but not in lettuce. In radish, hypocotyl growth was more sensitive to the pollutants than was leaf growth.

  15. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
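
    The low-degree polynomial idea behind the FPN correction can be sketched per pixel: calibrate a short polynomial that maps each pixel's raw monotonic response onto a reference response, then apply it with plain arithmetic. The sensor model and degree-1 correction below are simplifications for illustration, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(4)
stimuli = np.logspace(0, 4, 12)                  # calibration light levels
n_pixels = 5

# Synthetic monotonic (logarithmic) pixel responses with per-pixel offset/gain mismatch.
offset = rng.normal(0.0, 0.05, n_pixels)
gain = rng.normal(1.0, 0.03, n_pixels)
raw = gain[:, None] * np.log10(stimuli)[None, :] + offset[:, None]
reference = np.log10(stimuli)                    # target FPN-free response

# Per-pixel degree-1 correction polynomial: reference ~ a * raw + b.
correction = np.array([np.polyfit(raw[p], reference, 1) for p in range(n_pixels)])

corrected = np.array([np.polyval(correction[p], raw[p]) for p in range(n_pixels)])
rms = float(np.sqrt(np.mean((corrected - reference) ** 2)))
print("residual FPN (RMS):", round(rms, 6))
```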

  16. Uncertainty Quantification for Polynomial Systems via Bernstein Expansions

    NASA Technical Reports Server (NTRS)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2012-01-01

    This paper presents a unifying framework for uncertainty quantification of systems having polynomial response metrics that depend on both aleatory and epistemic uncertainties. The approach proposed, which is based on the Bernstein expansions of polynomials, enables bounding the range of moments and failure probabilities of response metrics as well as finding supersets of the extreme epistemic realizations where the limits of such ranges occur. These bounds and supersets, whose analytical structure renders them free of approximation error, can be made arbitrarily tight with additional computational effort. Furthermore, this framework enables determining the importance of particular uncertain parameters according to the extent to which they affect the first two moments of response metrics and failure probabilities. This analysis enables determining the parameters that should be considered uncertain as well as those that can be assumed to be constants without incurring significant error. The analytical nature of the approach eliminates the numerical error that characterizes the sampling-based techniques commonly used to propagate aleatory uncertainties, as well as the possibility of underpredicting the range of the statistic of interest that may result from searching for the best- and worst-case epistemic values via nonlinear optimization or sampling.
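
    The range-bounding property used here follows from rewriting a polynomial in the Bernstein basis on [0, 1]: the Bernstein coefficients enclose the polynomial's range. A one-variable sketch (the example polynomial is arbitrary):

```python
import numpy as np
from math import comb

def bernstein_coefficients(a):
    """Bernstein coefficients on [0, 1] of p(x) = sum_j a[j] * x**j."""
    n = len(a) - 1
    return np.array([
        sum(comb(i, j) / comb(n, j) * a[j] for j in range(i + 1))
        for i in range(n + 1)
    ])

# Example polynomial p(x) = 1 - 3x + 2x^3 on [0, 1].
a = [1.0, -3.0, 0.0, 2.0]
b = bernstein_coefficients(a)

# Enclosure property: min(b) <= p(x) <= max(b) for all x in [0, 1].
xs = np.linspace(0, 1, 1001)
p = np.polyval(a[::-1], xs)       # polyval expects highest degree first
print("Bernstein bounds:", round(b.min(), 4), round(b.max(), 4))
print("sampled range:   ", round(p.min(), 4), round(p.max(), 4))
```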

  17. Are Khovanov-Rozansky polynomials consistent with evolution in the space of knots?

    NASA Astrophysics Data System (ADS)

    Anokhina, A.; Morozov, A.

    2018-04-01

    R-coloured knot polynomials for m-strand torus knots Torus[m, n] are described by the Rosso-Jones formula, which is an example of evolution in n with Lyapunov exponents labelled by Young diagrams from R^{⊗m}. This means that they satisfy a finite-difference equation (recursion) of finite degree. For the gauge group SL(N) only diagrams with no more than N lines can contribute and the recursion degree is reduced. We claim that these properties (evolution/recursion and reduction) persist for Khovanov-Rozansky (KR) polynomials, obtained by additional factorization modulo 1 + t, which is not yet adequately described in quantum field theory. Also preserved is some weakened version of the differential expansion, which is responsible at least for a simple relation between reduced and unreduced Khovanov polynomials. However, in the KR case evolution is incompatible with the mirror symmetry under the change n → -n, which may signal an ambiguity in the KR factorization even for torus knots.

  18. Estimation of Supersonic Stage Separation Aerodynamics of Winged-Body Launch Vehicles Using Response Surface Methods

    NASA Technical Reports Server (NTRS)

    Erickson, Gary E.; Deloach, Richard

    2008-01-01

    A collection of statistical and mathematical techniques referred to as response surface methodology was used to estimate the longitudinal stage separation aerodynamic characteristics of a generic, bimese, winged multi-stage launch vehicle configuration using data obtained on small-scale models at supersonic speeds in the NASA Langley Research Center Unitary Plan Wind Tunnel. The simulated Mach 3 staging was dominated by multiple shock wave interactions between the orbiter and booster vehicles throughout the relative spatial locations of interest. This motivated a partitioning of the overall inference space into several contiguous regions within which the separation aerodynamics were presumed to be well-behaved and estimable using cuboidal and spherical central composite designs capable of fitting full second-order response functions. The primary goal was to approximate the underlying overall aerodynamic response surfaces of the booster vehicle in belly-to-belly proximity to the orbiter vehicle using relatively simple, lower-order polynomial functions that were piecewise-continuous across the full independent variable ranges of interest. The quality of fit and prediction capabilities of the empirical models were assessed in detail, and the issue of subspace boundary discontinuities was addressed. The potential benefits of augmenting the central composite designs to full third order using computer-generated D-optimality criteria were also evaluated. The usefulness of central composite designs, the subspace sizing, and the practicality of fitting low-order response functions over a partitioned inference space dominated by highly nonlinear and possibly discontinuous shock-induced aerodynamics are discussed.

  19. Poisson traces, D-modules, and symplectic resolutions

    NASA Astrophysics Data System (ADS)

    Etingof, Pavel; Schedler, Travis

    2018-03-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  20. Poisson traces, D-modules, and symplectic resolutions.

    PubMed

    Etingof, Pavel; Schedler, Travis

    2018-01-01

    We survey the theory of Poisson traces (or zeroth Poisson homology) developed by the authors in a series of recent papers. The goal is to understand this subtle invariant of (singular) Poisson varieties, conditions for it to be finite-dimensional, its relationship to the geometry and topology of symplectic resolutions, and its applications to quantizations. The main technique is the study of a canonical D-module on the variety. In the case the variety has finitely many symplectic leaves (such as for symplectic singularities and Hamiltonian reductions of symplectic vector spaces by reductive groups), the D-module is holonomic, and hence, the space of Poisson traces is finite-dimensional. As an application, there are finitely many irreducible finite-dimensional representations of every quantization of the variety. Conjecturally, the D-module is the pushforward of the canonical D-module under every symplectic resolution of singularities, which implies that the space of Poisson traces is dual to the top cohomology of the resolution. We explain many examples where the conjecture is proved, such as symmetric powers of du Val singularities and symplectic surfaces and Slodowy slices in the nilpotent cone of a semisimple Lie algebra. We compute the D-module in the case of surfaces with isolated singularities and show it is not always semisimple. We also explain generalizations to arbitrary Lie algebras of vector fields, connections to the Bernstein-Sato polynomial, relations to two-variable special polynomials such as Kostka polynomials and Tutte polynomials, and a conjectural relationship with deformations of symplectic resolutions. In the appendix we give a brief recollection of the theory of D-modules on singular varieties that we require.

  1. Polynomials with Restricted Coefficients and Their Applications

    DTIC Science & Technology

    1987-01-01

    sums of exponentials of quadratics, he reduced such sums to exponentials of linears (geometric sums!) by simply multiplying by their conjugates ... n, the same algebraic manipulations as before lead to sums with limits a+(2r+1)t and A = a+(2r+2m+1)t. To estimate the right ... coefficients. These random polynomials represent the deviation in frequency response of a linear, equispaced antenna array caused by coefficient

  2. Optimisation of the supercritical extraction of toxic elements in fish oil.

    PubMed

    Hajeb, P; Jinap, S; Shakibazadeh, Sh; Afsah-Hejri, L; Mohebbi, G H; Zaidul, I S M

    2014-01-01

    This study aims to optimise the operating conditions for the supercritical fluid extraction (SFE) of toxic elements from fish oil. The SFE operating parameters of pressure, temperature, CO2 flow rate and extraction time were optimised using a central composite design (CCD) of response surface methodology (RSM). High coefficients of determination (R²) (0.897-0.988) for the predicted response surface models confirmed a satisfactory adjustment of the polynomial regression models with the operation conditions. The results showed that the linear and quadratic terms of pressure and temperature were the most significant (p < 0.05) variables affecting the overall responses. The optimum conditions for the simultaneous elimination of toxic elements comprised a pressure of 61 MPa, a temperature of 39.8ºC, a CO₂ flow rate of 3.7 ml min⁻¹ and an extraction time of 4 h. These optimised SFE conditions were able to produce fish oil with the contents of lead, cadmium, arsenic and mercury reduced by up to 98.3%, 96.1%, 94.9% and 93.7%, respectively. The fish oil extracted under the optimised SFE operating conditions was of good quality in terms of its fatty acid constituents.

  3. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks.

    PubMed

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-03-24

    Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified an issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol attains mutual validation and session key agreement property strongly among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure.

  4. A Polynomial Subset-Based Efficient Multi-Party Key Management System for Lightweight Device Networks

    PubMed Central

    Mahmood, Zahid; Ning, Huansheng; Ghafoor, AtaUllah

    2017-01-01

    Wireless Sensor Networks (WSNs) consist of lightweight devices to measure sensitive data that are highly vulnerable to security attacks due to their constrained resources. In a similar manner, the internet-based lightweight devices used in the Internet of Things (IoT) are facing severe security and privacy issues because of the direct accessibility of devices due to their connection to the internet. Complex and resource-intensive security schemes are infeasible and reduce the network lifetime. In this regard, we have explored the polynomial distribution-based key establishment schemes and identified an issue that the resultant polynomial value is either storage intensive or infeasible when large values are multiplied. It becomes more costly when these polynomials are regenerated dynamically after each node join or leave operation and whenever key is refreshed. To reduce the computation, we have proposed an Efficient Key Management (EKM) scheme for multiparty communication-based scenarios. The proposed session key management protocol is established by applying a symmetric polynomial for group members, and the group head acts as a responsible node. The polynomial generation method uses security credentials and secure hash function. Symmetric cryptographic parameters are efficient in computation, communication, and the storage required. The security justification of the proposed scheme has been completed by using Rubin logic, which guarantees that the protocol attains mutual validation and session key agreement property strongly among the participating entities. Simulation scenarios are performed using NS 2.35 to validate the results for storage, communication, latency, energy, and polynomial calculation costs during authentication, session key generation, node migration, secure joining, and leaving phases. EKM is efficient regarding storage, computation, and communication overhead and can protect WSN-based IoT infrastructure. PMID:28338632

  5. THEORETICAL p-MODE OSCILLATION FREQUENCIES FOR THE RAPIDLY ROTATING δ SCUTI STAR α OPHIUCHI

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Deupree, Robert G., E-mail: bdeupree@ap.smu.ca

    2011-11-20

    A rotating, two-dimensional stellar model is evolved to match the approximate conditions of α Oph. Both axisymmetric and nonaxisymmetric oscillation frequencies are computed for two-dimensional rotating models which approximate the properties of α Oph. These computed frequencies are compared to the observed frequencies. Oscillation calculations are made assuming the eigenfunction can be fitted with six Legendre polynomials, but comparison calculations with eight Legendre polynomials show the frequencies agree to within about 0.26% on average. The surface horizontal shape of the eigenfunctions for the two sets of assumed number of Legendre polynomials agrees less well, but all calculations show significant departures from that of a single Legendre polynomial. It is still possible to determine the large separation, although the small separation is more complicated to estimate. With the addition of the nonaxisymmetric modes with |m| ≤ 4, the frequency space becomes sufficiently dense that it is difficult to comment on the adequacy of the fit of the computed to the observed frequencies. While the nonaxisymmetric frequency mode splitting is no longer uniform, the frequency difference between the frequencies for positive and negative values of the same m remains 2m times the rotation rate.

  6. On the Design of Wide-Field X-ray Telescopes

    NASA Technical Reports Server (NTRS)

    Elsner, Ronald F.; O'Dell, Stephen L.; Ramsey, Brian D.; Weiskopf, Martin C.

    2009-01-01

    X-ray telescopes having a relatively wide field-of-view and spatial resolution vs. polar off-axis angle curves much flatter than the parabolic dependence characteristic of Wolter I designs are of great interest for surveys of the X-ray sky and potentially for study of the Sun's X-ray emission. We discuss the various considerations affecting the design of such telescopes, including the possible use of polynomial mirror surface prescriptions, a method of optimizing the polynomial coefficients, scaling laws for mirror segment length vs. intersection radius, the loss of on-axis spatial resolution, and the positioning of focal plane detectors.

  7. An asymptotic formula for polynomials orthonormal with respect to a varying weight. II

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Komlov, A V; Suetin, S P

    2014-09-30

    This paper gives a proof of the theorem announced by the authors in the preceding paper with the same title. The theorem states that asymptotically the behaviour of the polynomials which are orthonormal with respect to the varying weight e^{−2nQ(x)}·p_g(x)/√(∏_{j=1}^{2p}(x−e_j)) coincides with the asymptotic behaviour of the Nuttall psi-function, which solves a special boundary-value problem on the relevant hyperelliptic Riemann surface of genus g = p−1. Here e_1

  8. Optimal approximation of harmonic growth clusters by orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teodorescu, Razvan

    2008-01-01

    Interface dynamics in two-dimensional systems with a maximal number of conservation laws gives an accurate theoretical model for many physical processes, from the hydrodynamics of immiscible, viscous flows (zero surface-tension limit of Hele-Shaw flows), to the granular dynamics of hard spheres, and even diffusion-limited aggregation. Although a complete solution for the continuum case exists, efficient approximations of the boundary evolution are very useful due to their practical applications. In this article, the approximation scheme based on orthogonal polynomials with a deformed Gaussian kernel is discussed, as well as relations to potential theory.

  9. Are we on the same page? The performance effects of congruence between supervisor and group trust.

    PubMed

    Carter, Min Z; Mossholder, Kevin W

    2015-09-01

    Taking a multiple-stakeholder perspective, we examined the effects of supervisor-work group trust congruence on groups' task and contextual performance using a polynomial regression and response surface analytical framework. We expected motivation experienced by work groups to mediate the positive influence of trust congruence on performance. Although hypothesized congruence effects on performance were more strongly supported for affective rather than for cognitive trust, we found significant indirect effects on performance (via work group motivation) for both types of trust. We discuss the performance effects of trust congruence and incongruence between supervisors and work groups, as well as implications for practice and future research. (c) 2015 APA, all rights reserved.

  10. An Exploratory Investigation of the Role of Openness in Relationship Quality among Emerging Adult Chinese Couples

    PubMed Central

    Zhou, Yixin; Wang, Kexin; Chen, Shuang; Zhang, Jianxin; Zhou, Mingjie

    2017-01-01

    This study tested emerging adult couples’ openness and its fit effect on their romantic relationship quality using quadratic polynomial regression and response surface analysis. Participants were 260 emerging adult dyads. Both dyads’ openness and relationship quality were measured. The result showed that (1) female and male openness contribute differently to relationship quality; (2) couples with similar high openness could experience better relationship quality than those with similar low openness traits; and (3) when dyadic openness is dissimilar, it is better to be either relatively high or relatively low than to be moderate. These findings highlight the role of openness in emerging adults’ romantic relationships from a dyadic angle. PMID:28360875

  11. A frequency domain global parameter estimation method for multiple reference frequency response measurements

    NASA Astrophysics Data System (ADS)

    Shih, C. Y.; Tsuei, Y. G.; Allemang, R. J.; Brown, D. L.

    1988-10-01

    A method of using the matrix Auto-Regressive Moving Average (ARMA) model in the Laplace domain for multiple-reference global parameter identification is presented. This method is particularly applicable to the area of modal analysis where high modal density exists. The method is also applicable when multiple reference frequency response functions are used to characterise linear systems. In order to facilitate the mathematical solution, the Forsythe orthogonal polynomial is used to reduce the ill-conditioning of the formulated equations and to decouple the normal matrix into two reduced matrix blocks. A Complex Mode Indicator Function (CMIF) is introduced, which can be used to determine the proper order of the rational polynomials.
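
    The conditioning benefit that motivates the Forsythe orthogonal polynomial step can be illustrated numerically: building a high-order model in the plain power basis yields a badly conditioned system, while an orthogonal basis keeps it well conditioned. The snippet below uses a Chebyshev basis as a simple stand-in (true Forsythe polynomials are generated to be orthogonal over the actual frequency samples, which numpy does not provide directly); the frequency grid and polynomial order are arbitrary choices for illustration.

        import numpy as np
        from numpy.polynomial import chebyshev, polynomial

        w = np.linspace(-1.0, 1.0, 400)   # normalised frequency axis
        order = 20                        # high order, as arises with high modal density

        V_power = polynomial.polyvander(w, order)   # plain power basis
        V_orth = chebyshev.chebvander(w, order)     # orthogonal basis (stand-in for Forsythe)

        print(f"condition number, power basis     : {np.linalg.cond(V_power):.2e}")
        print(f"condition number, orthogonal basis: {np.linalg.cond(V_orth):.2e}")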

  12. Microwave alkaline roasting-water dissolving process for germanium extraction from zinc oxide dust and its analysis by response surface methodology (RSM)

    NASA Astrophysics Data System (ADS)

    Wang, Wankun; Wang, Fuchun; Lu, Fanghai

    2017-12-01

    A microwave alkaline roasting-water dissolving process was proposed to improve germanium (Ge) extraction from zinc oxide (ZnO) dust. The effects of the important parameters were investigated and the process conditions were optimized using response surface methodology (RSM). The Ge extraction is consistent with the linear polynomial model type. Alkali-material ratio, microwave heating temperature and leaching temperature are the significant factors for this process. The optimized conditions are as follows: alkali-material ratio of 0.9 kg/kg, aging time of 1.12 day, microwave heating at 658 K for 10 min, liquid-solid ratio of 4.31 L/kg, leaching temperature at 330 K, and leaching time of 47 min, with a Ge extraction of about 99.38%. This is consistent with the predicted value of 99.31%. Compared with the alkaline roasting process heated by an electric furnace reported in the literature, the required alkaline roasting temperature and holding time are reduced. The microwave alkaline roasting-water dissolving process therefore shows good prospects for leaching Ge from ZnO dust.

  13. Optimized extraction of polysaccharides from corn silk by pulsed electric field and response surface quadratic design.

    PubMed

    Zhao, Wenzhu; Yu, Zhipeng; Liu, Jingbo; Yu, Yiding; Yin, Yongguang; Lin, Songyi; Chen, Feng

    2011-09-01

    Corn silk is a traditional Chinese herbal medicine, which has been widely used for treatment of some diseases. In this study the effects of pulsed electric field on the extraction of polysaccharides from corn silk were investigated. Polysaccharides in corn silk were extracted by pulsed electric field and optimized by response surface methodology (RSM), based on a Box-Behnken design (BBD). Three independent variables, including electric field intensity (kV cm⁻¹), ratio of liquid to raw material and pulse duration (µs), were investigated. The experimental data were fitted to a second-order polynomial equation and also profiled into the corresponding 3-D contour plots. Optimal extraction conditions were as follows: electric field intensity 30 kV cm⁻¹, ratio of liquid to raw material 50, and pulse duration 6 µs. Under these conditions, the experimental yield of extracted polysaccharides was 7.31% ± 0.15%, matching well with the predicted value. The results showed that a pulsed electric field could be applied to extract value-added products from foods and/or agricultural matrices. Copyright © 2011 Society of Chemical Industry.
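
    Once such a second-order polynomial has been fitted, the reported optimum is usually the stationary point of the response surface, obtained by setting its gradient to zero. The coefficients below are hypothetical (three coded factors, not the corn silk model); the sketch only shows this canonical-analysis step that turns a fitted quadratic model into candidate optimal factor settings.

        import numpy as np

        # Hypothetical fitted model in coded variables x: y = b0 + b.x + x.B.x,
        # with quadratic terms on the diagonal of B and half-interactions off-diagonal
        b0 = 7.0
        b = np.array([0.45, -0.30, 0.20])
        B = np.array([[-0.50, 0.05, 0.00],
                      [ 0.05, -0.40, 0.02],
                      [ 0.00, 0.02, -0.35]])

        x_star = np.linalg.solve(-2.0 * B, b)   # gradient condition: b + 2 B x = 0
        y_star = b0 + b @ x_star + x_star @ B @ x_star
        print("stationary point (coded units):", np.round(x_star, 3))
        print("predicted response at optimum :", round(float(y_star), 3))
        print("eigenvalues of B (all negative => maximum):", np.round(np.linalg.eigvalsh(B), 3))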

  14. Investigation of stability, consistency, and oil oxidation of emulsion filled gel prepared by inulin and rice bran oil using ultrasonic radiation.

    PubMed

    Nourbehesht, Newsha; Shekarchizadeh, Hajar; Soltanizadeh, Nafiseh

    2018-04-01

    Inulin, rice bran oil and rosemary essential oil were used to produce high quality emulsion filled gel (EFG) using ultrasonic radiation. Response surface methodology was used to investigate the effects of oil content, inulin content and power of ultrasound on the stability and consistency of prepared EFG. The process conditions were optimized by conducting experiments at five different levels. Second order polynomial response surface equations were developed indicating the effect of variables on EFG stability and consistency. The oil content of 18%; inulin content of 44.6%; and power of ultrasound of 256 W were found to be the optimum conditions to achieve the best EFG stability and consistency. Microstructure and rheological properties of prepared EFG were investigated. Oil oxidation as a result of using ultrasonic radiation was also investigated. The increase of oxidation products and the decrease of total phenolic compounds as well as radical scavenging activity of antioxidant compounds showed the damaging effect of ultrasound on the oil quality of EFG. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Investigation on the Flexural Creep Stiffness Behavior of PC-ABS Material Processed by Fused Deposition Modeling Using Response Surface Definitive Screening Design

    NASA Astrophysics Data System (ADS)

    Mohamed, Omar Ahmed; Masood, Syed Hasan; Bhowmik, Jahar Lal

    2017-03-01

    The resistance of polymeric materials to time-dependent plastic deformation is an important requirement of the fused deposition modeling (FDM) design process, its processed products, and their application for long-term loading, durability, and reliability. The creep performance of the material and part processed by FDM is the fundamental criterion for many applications with strict dimensional stability requirements, including medical implants, electrical and electronic products, and various automotive applications. Herein, the effect of FDM fabrication conditions on the flexural creep stiffness behavior of polycarbonate-acrylonitrile-butadiene-styrene processed parts was investigated. A relatively new class of experimental design called "definitive screening design" was adopted for this investigation. The effects of process variables on flexural creep stiffness behavior were monitored, and the best suited quadratic polynomial model with high coefficient of determination (R²) value was developed. This study highlights the value of response surface definitive screening design in optimizing properties for the products and materials, and it demonstrates its role and potential application in material processing and additive manufacturing.

  16. Response surface optimization of medium components for naringinase production from Staphylococcus xylosus MAK2.

    PubMed

    Puri, Munish; Kaur, Aneet; Singh, Ram Sarup; Singh, Anubhav

    2010-09-01

    Response surface methodology was used to optimize the fermentation medium for enhancing naringinase production by Staphylococcus xylosus. The first step of this process involved the individual adjustment and optimization of various medium components at shake flask level. Sources of carbon (sucrose) and nitrogen (sodium nitrate), as well as an inducer (naringin) and pH levels were all found to be the important factors significantly affecting naringinase production. In the second step, a 2² full factorial central composite design was applied to determine the optimal levels of each of the significant variables. A second-order polynomial was derived by multiple regression analysis on the experimental data. Using this methodology, the optimum values for the critical components were obtained as follows: sucrose, 10.0%; sodium nitrate, 10.0%; pH 5.6; biomass concentration, 1.58%; and naringin, 0.50% (w/v), respectively. Under optimal conditions, the experimental naringinase production was 8.45 U/mL. The determination coefficients (R²) were 0.9908 and 0.9950 for naringinase activity and biomass production, respectively, indicating an adequate degree of reliability in the model.

  17. Photocatalytic degradation using design of experiments: a review and example of the Congo red degradation.

    PubMed

    Sakkas, Vasilios A; Islam, Md Azharul; Stalikas, Constantine; Albanis, Triantafyllos A

    2010-03-15

    The use of chemometric methods such as response surface methodology (RSM) based on statistical design of experiments (DOEs) is becoming increasingly widespread in several sciences such as analytical chemistry, engineering and environmental chemistry. Applied catalysis is certainly no exception. It is clear that photocatalytic processes mated with chemometric experimental design play a crucial role in reaching the optimum of the catalytic reactions. The present article reviews the major applications of RSM in modern experimental design combined with photocatalytic degradation processes. Moreover, the theoretical principles and designs that make it possible to obtain a polynomial regression equation expressing the influence of process parameters on the response are thoroughly discussed. An original experimental work, the photocatalytic degradation of the dye Congo red (CR) using TiO2 suspensions and H2O2 in natural surface water (river water), is comprehensively described as a case study, in order to provide sufficient guidelines to deal with this subject in a rational and integrated way. (c) 2009 Elsevier B.V. All rights reserved.

  18. Enhancement of docosahexaenoic acid production by Schizochytrium SW1 using response surface methodology

    NASA Astrophysics Data System (ADS)

    Nazir, Mohd Yusuf Mohd; Al-Shorgani, Najeeb Kaid Nasser; Kalil, Mohd Sahaid; Hamid, Aidil Abdul

    2015-09-01

    In this study, three factors (fructose concentration, agitation speed and monosodium glutamate (MSG) concentration) were optimized to enhance DHA production by Schizochytrium SW1 using response surface methodology (RSM). Central composite design was applied as the experimental design and analysis of variance (ANOVA) was used to analyze the data. The experiments were conducted using 500 mL flasks with 100 mL working volume at 30°C for 96 hours. ANOVA revealed that the process was adequately represented by the significant quadratic model (p<0.0001) and that two of the factors, namely agitation speed and MSG concentration, significantly affect DHA production (p<0.005). The level of influence of each variable and a quadratic polynomial equation were obtained for DHA production by multiple regression analyses. The estimated optimum conditions for maximizing DHA production by SW1 were 70 g/L fructose, 250 rpm agitation speed and 12 g/L MSG. Consequently, the quadratic model was validated by applying the estimated optimum conditions, which confirmed the model validity, and 52.86% DHA was produced.

  19. Optimization of ultrasound assisted extraction of bioactive components from brown seaweed Ascophyllum nodosum using response surface methodology.

    PubMed

    Kadam, Shekhar U; Tiwari, Brijesh K; Smyth, Thomas J; O'Donnell, Colm P

    2015-03-01

    The objective of this study was to investigate the effect of the key extraction parameters of extraction time (5-25 min), acid concentration (0-0.06 M HCl) and ultrasound amplitude (22.8-114 μm) on yields of bioactive compounds (total phenolics, fucose and uronic acid) from Ascophyllum nodosum. Response surface methodology was employed to optimize the extraction variables for bioactive compounds' yield. A second order polynomial model fitted the extraction experimental data well (R² > 0.79). Extraction yields of 143.12 mgGAE/gdb, 87.06 mg/gdb and 128.54 mg/gdb were obtained for total phenolics, fucose and uronic acid respectively at the optimized extraction conditions of extraction time (25 min), acid concentration (0.03 M HCl) and ultrasonic amplitude (114 μm). Mass spectroscopy analysis of extracts showed that ultrasound enhances the extraction of high molecular weight phenolic compounds from A. nodosum. This study demonstrates that ultrasound assisted extraction (UAE) can be employed to enhance extraction of bioactive compounds from seaweed. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Box-Behnken design for investigation of microwave-assisted extraction of patchouli oil

    NASA Astrophysics Data System (ADS)

    Kusuma, Heri Septya; Mahfud, Mahfud

    2015-12-01

    The microwave-assisted extraction (MAE) technique was employed to extract the essential oil from patchouli (Pogostemon cablin). The optimal conditions for microwave-assisted extraction of patchouli oil were determined by response surface methodology. A Box-Behnken design (BBD) was applied to evaluate the effects of three independent variables (microwave power (A: 400-800 W), plant material to solvent ratio (B: 0.10-0.20 g mL⁻¹) and extraction time (C: 20-60 min)) on the extraction yield of patchouli oil. The correlation analysis of the mathematical regression model indicated that a quadratic polynomial model could be employed to optimize the microwave extraction of patchouli oil. The optimal extraction conditions for patchouli oil were a microwave power of 634.024 W, a plant material to solvent ratio of 0.147648 g mL⁻¹ and an extraction time of 51.6174 min. The maximum patchouli oil yield was 2.80516% under these optimal conditions. Under these conditions, the experimental values agreed with the results predicted by analysis of variance, indicating the high fitness of the model used and the success of response surface methodology in optimizing the extraction conditions.

  1. Geometry Dynamics of α-Helices in Different Class I Major Histocompatibility Complexes

    PubMed Central

    Karch, Rudolf; Schreiner, Wolfgang

    2015-01-01

    MHC α-helices form the antigen-binding cleft and are of particular interest for immunological reactions. To monitor these helices in molecular dynamics simulations, we applied a parsimonious fragment-fitting method to trace the axes of the α-helices. Each resulting axis was fitted by polynomials in a least-squares sense and the curvature integral was computed. To find the appropriate polynomial degree, the method was tested on two artificially modelled helices, one performing a bending movement and another a hinge movement. We found that second-order polynomials retrieve predefined parameters of helical motion with minimal relative error. From MD simulations we selected those parts of α-helices that were stable and also close to the TCR/MHC interface. We monitored the curvature integral, generated a ruled surface between the two MHC α-helices, and computed interhelical area and surface torsion, as they changed over time. We found that MHC α-helices undergo rapid but small changes in conformation. The curvature integral of helices proved to be a sensitive measure, which was closely related to changes in shape over time as confirmed by RMSD analysis. We speculate that small changes in the conformation of individual MHC α-helices are part of the intrinsic dynamics induced by engagement with the TCR. PMID:26649324
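
    A least-squares polynomial fit of a helix axis and the resulting curvature integral can be sketched as follows. The axis points and their parameterisation below are synthetic placeholders, not MD output; only the general recipe (fit each coordinate with a second-order polynomial of the parameter, then integrate κ = |r'×r''|/|r'|³ along arc length) mirrors what the abstract describes.

        import numpy as np

        # Synthetic axis points of a gently bent helix axis (placeholder data)
        t = np.linspace(0.0, 1.0, 50)
        pts = np.column_stack([t, 0.1 * t ** 2, 0.02 * np.sin(3 * t)])

        # Fit each Cartesian coordinate with a second-order polynomial of the parameter t
        coeffs = [np.polyfit(t, pts[:, k], deg=2) for k in range(3)]
        d1 = np.column_stack([np.polyval(np.polyder(c, 1), t) for c in coeffs])   # r'(t)
        d2 = np.column_stack([np.polyval(np.polyder(c, 2), t) for c in coeffs])   # r''(t)

        # Curvature kappa(t) = |r' x r''| / |r'|^3, integrated along arc length
        kappa = np.linalg.norm(np.cross(d1, d2), axis=1) / np.linalg.norm(d1, axis=1) ** 3
        ds = np.linalg.norm(d1, axis=1)               # arc-length element |r'(t)|
        g = kappa * ds
        curvature_integral = float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t)))   # trapezoid rule
        print(f"curvature integral of the fitted axis: {curvature_integral:.4f}")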

  2. Theoretical effect of modifications to the upper surface of two NACA airfoils using smooth polynomial additional thickness distributions which emphasize leading edge profile and which vary quadratically at the trailing edge. [using flow equations and a CDC 7600 computer]

    NASA Technical Reports Server (NTRS)

    Merz, A. W.; Hague, D. S.

    1975-01-01

    An investigation was conducted on a CDC 7600 digital computer to determine the effects of additional thickness distributions applied to the upper surface of the NACA 64-206 and 64₁-212 airfoils. The additional thickness distribution had the form of a continuous mathematical function which disappears at both the leading edge and the trailing edge. The function behaves as a polynomial of order ε₁ at the leading edge, and as a polynomial of order ε₂ at the trailing edge. ε₂ is a constant and ε₁ is varied over a range of practical interest. The magnitude of the additional thickness, y, is a second input parameter, and the effect of varying ε₁ and y on the aerodynamic performance of the airfoil was investigated. Results were obtained at a Mach number of 0.2 with an angle-of-attack of 6 degrees on the basic airfoils, and all calculations employ the full potential flow equations for two dimensional flow. The relaxation method of Jameson was employed for solution of the potential flow equations.

  3. Sensitivity Analysis of the Static Aeroelastic Response of a Wing

    NASA Technical Reports Server (NTRS)

    Eldred, Lloyd B.

    1993-01-01

    A technique to obtain the sensitivity of the static aeroelastic response of a three dimensional wing model is designed and implemented. The formulation is quite general and accepts any aerodynamic and structural analysis capability. A program to combine the discipline level, or local, sensitivities into global sensitivity derivatives is developed. A variety of representations of the wing pressure field are developed and tested to determine the most accurate and efficient scheme for representing the field outside of the aerodynamic code. Chebyshev polynomials are used to globally fit the pressure field. This approach had some difficulties in representing local variations in the field, so a variety of local interpolation polynomial pressure representations are also implemented. These panel-based representations use a constant pressure value, a bilinearly interpolated value, or a biquadratically interpolated value. The interpolation polynomial approaches do an excellent job of reducing the numerical problems of the global approach for comparable computational effort. Regardless of the pressure representation used, sensitivity and response results with excellent accuracy have been produced for large integrated quantities such as wing tip deflection and trim angle of attack. The sensitivities of such things as individual generalized displacements have been found with fair accuracy. In general, accuracy is found to be proportional to the size of the derivatives relative to the quantity itself.

  4. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
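
    The idea of an approximately-linear, per-pixel fixed pattern noise correction can be illustrated with a toy calibration. Everything below is synthetic (a logarithmic-like response with per-pixel gain and offset mismatch, a handful of uniform illuminations); the point is only that a degree-1 polynomial map from each pixel's response to a common reference removes most of the mismatch using nothing but arithmetic.

        import numpy as np

        rng = np.random.default_rng(1)
        n_pixels, n_exposures = 64, 8
        lum = np.logspace(0, 3, n_exposures)                 # calibration illuminations

        # Synthetic monotonic (logarithmic) pixel responses with per-pixel mismatch
        gain = 1.0 + 0.05 * rng.normal(size=(n_pixels, 1))
        offset = 0.2 * rng.normal(size=(n_pixels, 1))
        y = offset + gain * np.log(lum)[None, :]             # shape: pixel x exposure

        ref = y.mean(axis=0)                                 # reference response per exposure

        # Per-pixel degree-1 polynomial mapping pixel response -> reference response
        corrected = np.empty_like(y)
        for p in range(n_pixels):
            a, b = np.polyfit(y[p], ref, deg=1)
            corrected[p] = a * y[p] + b                      # correction is pure arithmetic

        print("residual FPN before:", float(y.std(axis=0).mean()))
        print("residual FPN after :", float(corrected.std(axis=0).mean()))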

  5. A family of Nikishin systems with periodic recurrence coefficients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Delvaux, Steven; Lopez, Abey; Lopez, Guillermo L

    2013-01-31

    Suppose we have a Nikishin system of p measures with the kth generating measure of the Nikishin system supported on an interval Δ_k ⊂ ℝ with Δ_k ∩ Δ_{k+1} = ∅ for all k. It is well known that the corresponding staircase sequence of multiple orthogonal polynomials satisfies a (p+2)-term recurrence relation whose recurrence coefficients, under appropriate assumptions on the generating measures, have periodic limits of period p. (The limit values depend only on the positions of the intervals Δ_k.) Taking these periodic limit values as the coefficients of a new (p+2)-term recurrence relation, we construct a canonical sequence of monic polynomials {P_n}_{n=0}^∞, the so-called Chebyshev-Nikishin polynomials. We show that the polynomials P_n themselves form a sequence of multiple orthogonal polynomials with respect to some Nikishin system of measures, with the kth generating measure being absolutely continuous on Δ_k. In this way we generalize a result of the third author and Rocha [22] for the case p=2. The proof uses the connection with block Toeplitz matrices, and with a certain Riemann surface of genus zero. We also obtain strong asymptotics and an exact Widom-type formula for functions of the second kind of the Nikishin system for {P_n}_{n=0}^∞. Bibliography: 27 titles.

  6. The role of under-determined approximations in engineering and science application

    NASA Technical Reports Server (NTRS)

    Carpenter, William C.

    1992-01-01

    There is currently a great deal of interest in using response surfaces in the optimization of aircraft performance. The objective function and/or constraint equations involved in these optimization problems may come from numerous disciplines such as structures, aerodynamics, environmental engineering, etc. In each of these disciplines, the mathematical complexity of the governing equations usually dictates that numerical results be obtained from large computer programs such as a finite element method program. Thus, when performing optimization studies, response surfaces are a convenient way of transferring information from the various disciplines to the optimization algorithm as opposed to bringing all the sundry computer programs together in a massive computer code. Response surfaces offer another advantage in the optimization of aircraft structures. A characteristic of these types of optimization problems is that evaluation of the objective function and response equations (referred to as a functional evaluation) can be very expensive in a computational sense. Because of the computational expense in obtaining functional evaluations, the present study was undertaken to investigate under-determined approximations. An under-determined approximation is one in which there are fewer training pairs (pieces of information about a function) than there are undetermined parameters (coefficients or weights) associated with the approximation. Both polynomial approximations and neural net approximations were examined. Three main example problems were investigated: (1) a function of one design variable was considered; (2) a function of two design variables was considered; and (3) a 35 bar truss with 4 design variables was considered.

  7. Development of Phaleria macrocarpa (Scheff.) Boerl Fruits Using Response Surface Methodology Focused on Phenolics, Flavonoids and Antioxidant Properties.

    PubMed

    Mohamed Mahzir, Khurul Ain; Abd Gani, Siti Salwa; Hasanah Zaidan, Uswatun; Halmi, Mohd Izuan Effendi

    2018-03-22

    In this study, the optimal conditions for the extraction of antioxidants from the Buah Mahkota Dewa fruit (Phaleria macrocarpa) were determined by using Response Surface Methodology (RSM). The optimisation was applied using a Central Composite Design (CCD) to investigate the effect of three independent variables, namely extraction temperature (°C), extraction time (minutes) and extraction solvent-to-feed ratio (% v/v), on four responses: free radical scavenging activity (DPPH), ferric ion reducing power assay (FRAP), total phenolic content (TPC) and total flavonoid content (TFC). The optimal conditions for the antioxidant extraction were found to be 64 °C extraction temperature, 66 min extraction time and 75% v/v solvent-to-feed ratio, giving the highest percentage yields of DPPH, FRAP, TPC and TFC of 86.85%, 7.47%, 292.86 mg/g and 3.22 mg/g, respectively. Moreover, the data were subjected to Response Surface Methodology (RSM) and the results showed that the polynomial equations for all models were significant, did not show lack of fit, and presented adjusted determination coefficients (R²) above 99%, proving that the yields of phenolics, flavonoids and antioxidant activities obtained experimentally were close to the predicted values and confirming the suitability of the model employed in RSM to optimise the extraction conditions. Hence, in this study, the fruit from P. macrocarpa could be considered to have strong antioxidant ability and can be used in various cosmeceutical or medicinal applications.

  8. Reduced-order modeling with sparse polynomial chaos expansion and dimension reduction for evaluating the impact of CO2 and brine leakage on groundwater

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Zheng, L.; Pau, G. S. H.

    2016-12-01

    A careful assessment of the risk associated with geologic CO2 storage is critical to the deployment of large-scale storage projects. While numerical modeling is an indispensable tool for risk assessment, there has been an increasing need to consider and address uncertainties in the numerical models. However, uncertainty analyses have been significantly hindered by the computational complexity of the model. As a remedy, reduced-order models (ROM), which serve as computationally efficient surrogates for high-fidelity models (HFM), have been employed. The ROM is constructed at the expense of an initial set of HFM simulations, and afterwards can be relied upon to predict the model output values at minimal cost. The ROM presented here is part of the National Risk Assessment Program (NRAP) and intends to predict the water quality change in groundwater in response to hypothetical CO2 and brine leakage. The HFM based on which the ROM is derived is a multiphase flow and reactive transport model, with a 3-D heterogeneous flow field and complex chemical reactions including aqueous complexation, mineral dissolution/precipitation, adsorption/desorption via surface complexation and cation exchange. Reduced-order modeling techniques based on polynomial basis expansion, such as polynomial chaos expansion (PCE), are widely used in the literature. However, the accuracy of such ROMs can be affected by the sparse structure of the coefficients of the expansion. Failing to identify vanishing polynomial coefficients introduces unnecessary sampling errors, the accumulation of which deteriorates the accuracy of the ROMs. To address this issue, we treat the PCE as a sparse Bayesian learning (SBL) problem, and the sparsity is obtained by detecting and including only the non-zero PCE coefficients one at a time by iteratively selecting the most contributing coefficients. The computational complexity due to predicting the entire 3-D concentration fields is further mitigated by a dimension reduction procedure, proper orthogonal decomposition (POD). Our numerical results show that utilizing the sparse structure and POD significantly enhances the accuracy and efficiency of the ROMs, laying the basis for further analyses that necessitate a large number of model simulations.
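
    The sparse polynomial chaos idea can be sketched in one dimension: expand the model output in Hermite polynomials of a standard-normal input and keep only the coefficients that actually contribute. The greedy thresholding below is a crude stand-in for the sparse Bayesian learning described in the abstract, and the "model" is a synthetic function, not the reactive-transport simulator.

        import numpy as np
        from numpy.polynomial.hermite_e import hermevander

        rng = np.random.default_rng(2)
        xi = rng.standard_normal(200)                 # samples of the uncertain input
        y = 1.0 + 0.8 * xi + 0.3 * (xi ** 2 - 1.0) + 0.01 * rng.normal(size=xi.size)

        degree = 8
        Psi = hermevander(xi, degree)                 # probabilists' Hermite basis, degrees 0..8

        c_full, *_ = np.linalg.lstsq(Psi, y, rcond=None)   # dense least-squares PCE
        keep = np.abs(c_full) > 0.05                       # crude sparsity selection
        c_sparse = np.zeros_like(c_full)
        sol, *_ = np.linalg.lstsq(Psi[:, keep], y, rcond=None)
        c_sparse[keep] = sol                               # refit on the retained basis only

        print("retained polynomial degrees:", np.flatnonzero(keep))
        print("sparse PCE coefficients    :", np.round(c_sparse[keep], 3))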

  9. Application of Statistic Experimental Design to Assess the Effect of Gammairradiation Pre-Treatment on the Drying Characteristics and Qualities of Wheat

    NASA Astrophysics Data System (ADS)

    Yu, Yong; Wang, Jun

    Wheat, pretreated by 60Co gamma irradiation, was dried by hot air with irradiation dosage 0-3 kGy, drying temperature 40-60 °C, and initial moisture contents 19-25% (drying basis). The drying characteristics and dried qualities of wheat were evaluated based on drying time, average dehydration rate, wet gluten content (WGC), moisture content of wet gluten (MCWG) and titratable acidity (TA). A quadratic rotation-orthogonal composite experimental design, with three variables (at five levels) and five response functions, and an analysis method were employed to study the effect of the three variables on the individual response functions. The five response functions (drying time, average dehydration rate, WGC, MCWG, TA) were correlated with these variables by second order polynomials consisting of linear, quadratic and interaction terms. A high correlation coefficient indicated the suitability of the second order polynomial to predict these response functions. The linear, interaction and quadratic effects of the three variables on the five response functions were all studied.

  10. Microencapsulation of citronella oil for mosquito-repellent application: formulation and in vitro permeation studies.

    PubMed

    Solomon, B; Sahle, F F; Gebre-Mariam, T; Asres, K; Neubert, R H H

    2012-01-01

    Citronella oil (CO) has been reported to possess a mosquito-repellent action. However, its application in topical preparations is limited due to its rapid volatility. The objective of this study was therefore to reduce the rate of evaporation of the oil via microencapsulation. Microcapsules (MCs) were prepared using gelatin simple coacervation method and sodium sulfate (20%) as a coacervating agent. The MCs were hardened with a cross-linking agent, formaldehyde (37%). The effects of three variables, stirring rate, oil loading and the amount of cross-linking agent, on encapsulation efficiency (EE, %) were studied. Response surface methodology was employed to optimize the EE (%), and a polynomial regression model equation was generated. The effect of the amount of cross-linker was insignificant on EE (%). The response surface plot constructed for the polynomial equation provided an optimum area. The MCs under the optimized conditions provided EE of 60%. The optimized MCs were observed to have a sustained in vitro release profile (70% of the content was released at the 10th hour of the study) with minimum initial burst effect. Topical formulations of the microencapsulated oil and non-microencapsulated oil were prepared with different bases, white petrolatum, wool wax alcohol, hydrophilic ointment (USP) and PEG ointment (USP). In vitro membrane permeation of CO from the ointments was evaluated in Franz diffusion cells using cellulose acetate membrane at 32 °C, with the receptor compartment containing a water-ethanol solution (50:50). The receptor phase samples were analyzed with GC/MS, using citronellal as a reference standard. The results showed that microencapsulation decreased membrane permeation of the CO by at least 50%. The amount of CO permeated was dependent on the type of ointment base used; PEG base exhibited the highest degree of release. Therefore, microencapsulation reduces membrane permeation of CO while maintaining a constant supply of the oil. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Estimation of design space for an extrusion-spheronization process using response surface methodology and artificial neural network modelling.

    PubMed

    Sovány, Tamás; Tislér, Zsófia; Kristó, Katalin; Kelemen, András; Regdon, Géza

    2016-09-01

    The application of the Quality by Design principles is one of the key issues of recent pharmaceutical developments. In the past decade a lot of knowledge has been collected about the practical realization of the concept, but there are still a lot of unanswered questions. The key requirement of the concept is the mathematical description of the effect of the critical factors and their interactions on the critical quality attributes (CQAs) of the product. The process design space (PDS) is usually determined by the use of design of experiment (DoE) based response surface methodologies (RSM), but inaccuracies in the applied polynomial models often result in over- or underestimation of the real trends and changes, making the calculations uncertain, especially in the edge regions of the PDS. The completion of RSM with artificial neural network (ANN) based models is therefore a commonly used method to reduce the uncertainties. Nevertheless, since most studies focus on the use of a given DoE, there is a lack of comparative studies on different experimental layouts. Therefore, the aim of the present study was to investigate the effect of different DoE layouts (2 level full factorial, Central Composite, Box-Behnken, 3 level fractional and 3 level full factorial design) on the model predictability and to compare model sensitivities according to the organization of the experimental data set. It was revealed that the size of the design space could differ by more than 40% when calculated with different polynomial models, which was associated with a considerable shift in its position when higher level layouts were applied. The shift was more considerable when the calculation was based on RSM. The model predictability was also better with ANN based models. Nevertheless, both modelling methods exhibit considerable sensitivity to the organization of the experimental data set, and the use of design layouts in which the extreme values of the factors are more strongly represented is recommended. Copyright © 2016 Elsevier B.V. All rights reserved.

  12. Theory of aberration fields for general optical systems with freeform surfaces.

    PubMed

    Fuerschbach, Kyle; Rolland, Jannick P; Thompson, Kevin P

    2014-11-03

    This paper utilizes the framework of nodal aberration theory to describe the aberration field behavior that emerges in optical systems with freeform optical surfaces, particularly φ-polynomial surfaces, including Zernike polynomial surfaces, that lie anywhere in the optical system. If the freeform surface is located at the stop or pupil, the net aberration contribution of the freeform surface is field constant. As the freeform optical surface is displaced longitudinally away from the stop or pupil of the optical system, the net aberration contribution becomes field dependent. It is demonstrated that there are no new aberration types when describing the aberration fields that arise with the introduction of freeform optical surfaces. Significantly it is shown that the aberration fields that emerge with the inclusion of freeform surfaces in an optical system are exactly those that have been described by nodal aberration theory for tilted and decentered optical systems. The key contribution here lies in establishing the field dependence and nodal behavior of each freeform term that is essential knowledge for effective application to optical system design. With this development, the nodes that are distributed throughout the field of view for each aberration type can be anticipated and targeted during optimization for the correction or control of the aberrations in an optical system with freeform surfaces. This work does not place any symmetry constraints on the optical system, which could be packaged in a fully three dimensional geometry, without fold mirrors.

  13. Extraction of natural anthocyanin and colors from pulp of jamun fruit.

    PubMed

    Maran, J Prakash; Sivakumar, V; Thirugnanasambandham, K; Sridhar, R

    2015-06-01

    In the present study, natural pigment and colors from the pulp of jamun fruit were extracted under different extraction conditions such as extraction temperature (40-60 °C), time (20-100 min) and solid-liquid ratio (1:10-1:15 g/ml) by an aqueous extraction method. A three-factor, three-level Box-Behnken response surface design was employed to optimize and investigate the effect of process variables on the responses (total anthocyanin and color). The results were analyzed by Pareto analysis of variance (ANOVA) and second order polynomial models were developed to predict the responses. Optimum extraction conditions for maximizing the extraction yield of total anthocyanin (10.58 mg/100 g) and colors (10618.3 mg/l) were found to be: extraction temperature of 44 °C, extraction time of 93 min and solid-liquid ratio of 1:15 g/ml. Under these conditions, the experimental values closely agreed with the predicted values.

  14. Modeling and analysis of film composition on mechanical properties of maize starch based edible films.

    PubMed

    Prakash Maran, J; Sivakumar, V; Thirugnanasambandham, K; Kandasamy, S

    2013-11-01

    The present study investigates the influence of composition (content of maize starch (1-3 g), sorbitol (0.5-1.0 ml), agar (0.5-1.0 g) and tween-80 (0.1-0.5 ml)) on the mechanical properties (tensile strength, elongation, Young's modulus, puncture force and puncture deformation) of maize starch based edible films using a four-factor, three-level Box-Behnken design. The edible films were obtained by a casting method. The results showed that tween-80 increases the permeation of sorbitol into the polymer matrix. Increasing the concentration of sorbitol (owing to the hydrophilic nature and plasticizing effect of sorbitol) decreases the tensile strength, Young's modulus and puncture force of the films. The results were analyzed by Pareto analysis of variance (ANOVA) and second order polynomial models were obtained for all responses with high R² values (R² > 0.95). 3D response surface plots were constructed to study the relationship between process variables and the responses. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. Explicit 2-D Hydrodynamic FEM Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Jerry

    1996-08-07

    DYNA2D* is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D* contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.
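
    For orientation, a linear polynomial equation of state of the kind listed here is commonly written in hydrocodes as P = C0 + C1·μ + C2·μ² + C3·μ³ + (C4 + C5·μ + C6·μ²)·E, with μ = ρ/ρ0 − 1 and E the internal energy per unit initial volume. The sketch below evaluates this generic form with invented coefficients; it is an assumption-laden illustration, not DYNA2D's exact implementation or a calibrated material model.

        import numpy as np

        def linear_polynomial_eos(rho, e, rho0, c):
            # Generic linear polynomial EOS (illustrative form, not DYNA2D source):
            #   P = C0 + C1*mu + C2*mu^2 + C3*mu^3 + (C4 + C5*mu + C6*mu^2) * E
            mu = rho / rho0 - 1.0
            return (c[0] + c[1] * mu + c[2] * mu ** 2 + c[3] * mu ** 3
                    + (c[4] + c[5] * mu + c[6] * mu ** 2) * e)

        # Hypothetical coefficients C0..C6 and states, for illustration only
        coeffs = [0.0, 2.2e9, 9.5e9, 1.5e10, 0.4, 1.3, 0.0]
        for rho in (1000.0, 1050.0, 1100.0):
            p = linear_polynomial_eos(rho, e=1.0e5, rho0=1000.0, c=coeffs)
            print(f"rho = {rho:7.1f} kg/m^3 -> p = {p:.3e} Pa")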

  16. Waypoints Following Guidance for Surface-to-Surface Missiles

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Khalil, Elsayed M.; Rahman, Tawfiqur; Chen, Wanchun

    2018-04-01

    The paper proposes a waypoint-following guidance law. In this method an optimal trajectory is first generated, which is then represented through a set of waypooints distributed from the starting point up to the final target point using a polynomial. The guidance system then works by issuing the guidance commands needed to move from one waypoint to the next. Here the method is applied to a surface-to-surface missile. The results show that the method is feasible for on-board application.
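
    A minimal sketch of representing a reference trajectory by a polynomial and distributing waypoints along it is given below. The trajectory profile, polynomial order and number of waypoints are invented for illustration and are not taken from the paper.

        import numpy as np

        # Hypothetical reference trajectory: altitude (km) versus downrange (km)
        downrange = np.linspace(0.0, 100.0, 200)
        altitude = 0.004 * downrange * (100.0 - downrange)     # simple lofted profile

        # Represent the profile with a low-order polynomial ...
        coeff = np.polyfit(downrange, altitude, deg=4)

        # ... and distribute waypoints from the starting point up to the target along it
        wp_x = np.linspace(0.0, 100.0, 11)
        waypoints = np.column_stack([wp_x, np.polyval(coeff, wp_x)])

        for i, (x, h) in enumerate(waypoints):
            print(f"waypoint {i:2d}: downrange = {x:6.1f} km, altitude = {h:5.2f} km")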

  17. Segmented Mirror Telescope Model and Simulation

    DTIC Science & Technology

    2011-06-01

    The mirror surface is treated as a grid of masses and springs. The actuators have surface-normal forces applied to individual masses. The equation to... are not widely treated in the literature. The required modifications for the wavefront reconstruction algorithm of a circular aperture to correctly... Zernike polynomials, which are particularly suitable for describing the common optical characterizations of astigmatism, coma, defocus and others [9]

  18. Investigation of equilibrium and kinetics of Cr(VI) adsorption by dried Bacillus cereus using response surface methodology.

    PubMed

    Yang, Kai; Zhang, Jing; Yang, Tao; Wang, Hongyu

    2016-01-01

    In this study, response surface methodology (RSM) based on three-variable-five-level central composite rotatable design was used to analyze the effects of combined and individual operating parameters (biomass dose, initial concentration of Cr(VI) and pH) on the Cr(VI) adsorption capacity of dried Bacillus cereus. A quadratic polynomial equation was obtained to predict the adsorbed Cr(VI) amount. Analysis of variance showed that the effect of biomass dose was the key factor in the removal of Cr(VI). The maximum adsorbed Cr(VI) amount (30.93 mg g(-1)) was found at 165.30 mg L(-1), 2.96, and 3.01 g L(-1) for initial Cr(VI) concentration, pH, and biosorbent dosage, respectively. The surface chemical functional groups and microstructure of unloaded and Cr(VI)-loaded dried Bacillus cereus were identified by Fourier transform infrared spectroscopy (FTIR) and scanning electron microscopy (SEM), respectively. Besides, the results gained from these studies indicated that Langmuir isotherm and the second-order rate expression were suitable for the removal of Cr(VI) from wastewater. The results revealed RSM was an effective method for optimizing biosorption process, and dried Bacillus cereus had a remarkable performance on the removal of Cr(VI) from wastewater.
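
    The Langmuir isotherm check mentioned at the end can be sketched with a simple nonlinear fit. The equilibrium data below are synthetic placeholders; only the functional form q_e = q_max·K·C_e/(1 + K·C_e) corresponds to the model named in the abstract.

        import numpy as np
        from scipy.optimize import curve_fit

        def langmuir(c_e, q_max, k):
            # Langmuir isotherm: adsorbed amount versus equilibrium concentration
            return q_max * k * c_e / (1.0 + k * c_e)

        # Synthetic equilibrium data (mg/L and mg/g), for illustration only
        c_e = np.array([10.0, 25.0, 50.0, 100.0, 165.0, 250.0])
        q_e = np.array([8.5, 15.2, 21.0, 26.5, 29.8, 31.5])

        (q_max, k), _ = curve_fit(langmuir, c_e, q_e, p0=[30.0, 0.01])
        print(f"fitted q_max = {q_max:.2f} mg/g, K = {k:.4f} L/mg")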

  19. Comparative assessment of orthogonal polynomials for wavefront reconstruction over the square aperture.

    PubMed

    Ye, Jingfei; Gao, Zhishan; Wang, Shuai; Cheng, Jinlong; Wang, Wei; Sun, Wenqing

    2014-10-01

    Four orthogonal polynomials for reconstructing a wavefront over a square aperture based on the modal method are currently available, namely, the 2D Chebyshev polynomials, 2D Legendre polynomials, Zernike square polynomials and Numerical polynomials. They are all orthogonal over the full unit square domain. 2D Chebyshev polynomials are defined by the product of Chebyshev polynomials in x and y variables, as are 2D Legendre polynomials. Zernike square polynomials are derived by the Gram-Schmidt orthogonalization process, where the integration region across the full unit square is circumscribed outside the unit circle. Numerical polynomials are obtained by numerical calculation. The presented study is to compare these four orthogonal polynomials by theoretical analysis and numerical experiments from the aspects of reconstruction accuracy, remaining errors, and robustness. Results show that the Numerical orthogonal polynomial is superior to the other three polynomials because of its high accuracy and robustness even in the case of a wavefront with incomplete data.
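
    A 2D Chebyshev basis over the square is the tensor product of 1D Chebyshev polynomials in x and y, and modal wavefront reconstruction then reduces to a linear least-squares problem. The wavefront below is synthetic and the degree is arbitrary; the sketch only illustrates the fitting step that the four candidate bases have in common.

        import numpy as np
        from numpy.polynomial.chebyshev import chebvander2d

        # Sample grid over the square aperture [-1, 1] x [-1, 1]
        x = np.linspace(-1, 1, 41)
        xx, yy = np.meshgrid(x, x)

        # Synthetic wavefront with a little measurement noise (placeholder data)
        w = 0.5 * xx ** 2 + 0.3 * xx * yy - 0.2 * yy ** 3
        w += 1e-3 * np.random.default_rng(3).normal(size=w.shape)

        # 2D Chebyshev design matrix up to degree 4 in each variable, least-squares fit
        A = chebvander2d(xx.ravel(), yy.ravel(), [4, 4])
        coeffs, *_ = np.linalg.lstsq(A, w.ravel(), rcond=None)

        residual = A @ coeffs - w.ravel()
        print(f"RMS fit residual: {residual.std():.2e}")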

  20. An accurate surface topography restoration algorithm for white light interferometry

    NASA Astrophysics Data System (ADS)

    Yuan, He; Zhang, Xiangchao; Xu, Min

    2017-10-01

    As an important measuring technique, white light interferometry can realize fast and non-contact measurement, thus it is now widely used in the field of ultra-precision engineering. However, the traditional recovery algorithms of surface topographies have flaws and limits. In this paper, we propose a new algorithm to solve these problems. It is a combination of Fourier transform and improved polynomial fitting method. Because the white light interference signal is usually expressed as a cosine signal whose amplitude is modulated by a Gaussian function, its fringe visibility is not constant and varies with different scanning positions. The interference signal is processed first by Fourier transform, then the positive frequency part is selected and moved back to the center of the amplitude-frequency curve. In order to restore the surface morphology, a polynomial fitting method is used to fit the amplitude curve after inverse Fourier transform and obtain the corresponding topography information. The new method is then compared to the traditional algorithms. It is proved that the aforementioned drawbacks can be effectively overcome. The relative error is less than 0.8%.
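
    The envelope-recovery step can be sketched for a single pixel: Fourier transform the correlogram, keep the positive-frequency lobe, shift it to baseband, inverse transform to obtain the fringe envelope, and refine its peak with a local polynomial fit. The signal parameters below are invented, and the sketch is a simplified reading of the procedure rather than the authors' exact algorithm.

        import numpy as np

        # Synthetic white-light correlogram for one pixel (placeholder parameters)
        z = np.linspace(-3.0, 3.0, 601)          # scan position, micrometres
        z0, lam, sigma = 0.42, 0.6, 1.0          # true height, mean wavelength, coherence length
        signal = np.exp(-((z - z0) / sigma) ** 2) * np.cos(4 * np.pi * (z - z0) / lam)

        # Keep the positive-frequency part and shift it back to the centre (baseband)
        spec = np.fft.fft(signal)
        freqs = np.fft.fftfreq(z.size, d=z[1] - z[0])
        spec[freqs <= 0] = 0.0
        peak_bin = int(np.argmax(np.abs(spec)))
        envelope = np.abs(np.fft.ifft(np.roll(spec, -peak_bin)))   # demodulated amplitude

        # Local polynomial fit around the envelope maximum refines the height estimate
        i = int(np.argmax(envelope))
        sel = slice(i - 5, i + 6)
        p = np.polyfit(z[sel], envelope[sel], deg=2)
        z_est = -p[1] / (2 * p[0])               # vertex of the fitted parabola
        print(f"estimated surface height: {z_est:.3f} um (true {z0} um)")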

  1. Design and simulation of the surface shape control system for membrane mirror

    NASA Astrophysics Data System (ADS)

    Zhang, Gengsheng; Tang, Minxue

    2009-11-01

    The surface shape control is one of the key technologies for the manufacture of membrane mirrors. This paper presents a design of a membrane mirror surface shape control system on the basis of fuzzy logic control. The system contains such function modules as surface shape design, surface shape control, surface shape analysis, etc. The system functions are realized by using hybrid programming technology of Visual C# and MATLAB. The finite element method is adopted to simulate the surface shape control of the membrane mirror. The finite element analysis model is established through the ANSYS Parametric Design Language (APDL). The ANSYS software kernel is called by the system in background running mode when performing the simulation. The controller is designed by means of controlling the sag of the mirror's central cross-section. The surface shape of the membrane mirror and its optical aberration are obtained by applying Zernike polynomial fitting. The analysis of surface shape control and the simulation of disturbance response are performed for a membrane mirror with a 300 mm aperture and F/2.7. The result of the simulation shows that, by using the designed control system, the RMS wavefront error of the mirror can reach 142λ (λ = 632.8 nm), which is consistent with the surface accuracy of the membrane mirror obtained by the large deformation theory of membranes under the same conditions.
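
    Fitting a mirror surface error with a few Zernike terms and reporting the residual RMS wavefront error can be sketched as follows. The sampled surface is synthetic and only a handful of unnormalized low-order Zernike terms are included, purely to illustrate the fitting step the abstract refers to.

        import numpy as np

        # Unit-disc sample points
        x = np.linspace(-1, 1, 101)
        xx, yy = np.meshgrid(x, x)
        r, th = np.hypot(xx, yy), np.arctan2(yy, xx)
        mask = r <= 1.0

        # A few unnormalized low-order Zernike terms (piston, tilts, defocus, astigmatism)
        basis = np.column_stack([
            np.ones(mask.sum()),
            (r * np.cos(th))[mask],                    # x tilt
            (r * np.sin(th))[mask],                    # y tilt
            (2 * r ** 2 - 1)[mask],                    # defocus
            (r ** 2 * np.cos(2 * th))[mask],           # astigmatism 0/90
            (r ** 2 * np.sin(2 * th))[mask],           # astigmatism 45
        ])

        # Synthetic membrane surface error, in waves (placeholder data)
        w = 0.08 * (2 * r ** 2 - 1) + 0.02 * r ** 2 * np.cos(2 * th)
        w = w[mask] + 1e-3 * np.random.default_rng(4).normal(size=mask.sum())

        coeffs, *_ = np.linalg.lstsq(basis, w, rcond=None)
        rms_residual = (w - basis @ coeffs).std()
        print("Zernike coefficients:", np.round(coeffs, 4))
        print(f"RMS residual wavefront error: {rms_residual:.4f} waves")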

  2. Bicubic uniform B-spline wavefront fitting technology applied in computer-generated holograms

    NASA Astrophysics Data System (ADS)

    Cao, Hui; Sun, Jun-qiang; Chen, Guo-jie

    2006-02-01

    This paper presented a bicubic uniform B-spline wavefront fitting technique to derive the analytical expression for the object wavefront used in Computer-Generated Holograms (CGHs). In many cases, to decrease the difficulty of optical processing, off-axis CGHs rather than complex aspherical surface elements are used in modern advanced military optical systems. In order to design and fabricate an off-axis CGH, the analytical expression for the object wavefront must first be fitted. Zernike polynomials are suitable for fitting the wavefronts of centrosymmetric optical systems, but not those of axisymmetrical optical systems. Although a high-degree polynomial fitting method can achieve high fitting precision at all fitting nodes, its greatest shortcoming is that any departure from the fitting nodes can result in large fitting errors, the so-called pulsation phenomenon. Furthermore, high-degree polynomial fitting increases the calculation time for coding the computer-generated hologram and solving the basic equation. Based on the basis functions of the cubic uniform B-spline and the character mesh of the bicubic uniform B-spline wavefront, bicubic uniform B-spline wavefronts are described as the product of a series of matrices. Employing standard MATLAB routines, four different analytical expressions for the object wavefront are fitted with bicubic uniform B-splines as well as high-degree polynomials. Calculation results indicate that, compared with high-degree polynomials, the bicubic uniform B-spline is a more competitive method for fitting the analytical expression of the object wavefront used in off-axis CGHs, owing to its higher fitting precision and C2 continuity.
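
    The C2-continuous surface representation at the heart of this approach can be sketched with an off-the-shelf bicubic spline. The snippet below uses scipy's RectBivariateSpline on a synthetic wavefront as a stand-in for the matrix formulation in the paper; the grid, wavefront and degrees are illustrative assumptions.

        import numpy as np
        from scipy.interpolate import RectBivariateSpline

        # Synthetic object wavefront sampled on a grid (placeholder, in waves)
        x = np.linspace(-1.0, 1.0, 31)
        y = np.linspace(-1.0, 1.0, 31)
        xx, yy = np.meshgrid(x, y, indexing="ij")
        w = 0.6 * xx ** 2 + 0.4 * xx * yy ** 2 - 0.3 * yy ** 3

        # Bicubic B-spline representation of the wavefront (kx = ky = 3)
        spline = RectBivariateSpline(x, y, w, kx=3, ky=3, s=0)

        # Smooth evaluation anywhere in the aperture, e.g. on a much finer grid
        xf = np.linspace(-1.0, 1.0, 301)
        wf = spline(xf, xf)
        print("fine-grid samples:", wf.shape)
        print("max |fit - data| on the coarse grid:", float(np.max(np.abs(spline(x, y) - w))))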

  3. Application of response surface methodology for optimization of natural organic matter degradation by UV/H2O2 advanced oxidation process

    PubMed Central

    2014-01-01

    Background: In this research, the removal of natural organic matter from aqueous solutions using advanced oxidation processes (UV/H2O2) was evaluated. Therefore, the response surface methodology and Box-Behnken design matrix were employed to design the experiments and to determine the optimal conditions. The effects of various parameters such as initial concentration of H2O2 (100–180 mg/L), pH (3–11), time (10–30 min) and initial total organic carbon (TOC) concentration (4–10 mg/L) were studied. Results: Analysis of variance (ANOVA) revealed good agreement between experimental data and the proposed quadratic polynomial model (R2 = 0.98). Experimental results showed that TOC removal efficiency increased with increasing H2O2 concentration and time and with decreasing initial TOC concentration. Neutral and nearly acidic pH values also improved the TOC removal. Accordingly, a TOC removal efficiency of 78.02% was obtained at the optimized values of the independent variables: H2O2 concentration (100 mg/L), pH (6.12), time (22.42 min) and initial TOC concentration (4 mg/L). Further confirmation tests under optimal conditions showed 76.50% TOC removal and confirmed that the model is in accordance with the experiments. In addition, TOC removal for natural water under the response surface methodology optimum conditions was 62.15%. Conclusions: This study showed that response surface methodology based on the Box-Behnken method is a useful tool for optimizing the operating parameters for TOC removal using the UV/H2O2 process. PMID:24735555

  4. Optimization of electrocoagulation process to treat grey wastewater in batch mode using response surface methodology.

    PubMed

    Karichappan, Thirugnanasambandham; Venkatachalam, Sivakumar; Jeganathan, Prakash Maran

    2014-01-10

    Discharge of grey wastewater into the ecological system has a negative impact on receiving water bodies. In the present study, an electrocoagulation (EC) process was investigated for treating grey wastewater under different operating conditions such as initial pH (4-8), current density (10-30 mA/cm2), electrode distance (4-6 cm) and electrolysis time (5-25 min), using a stainless steel (SS) anode in batch mode. A four-factor, five-level Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of the process variables on the responses, namely total solids (TS), chemical oxygen demand (COD) and fecal coliform (FC) removal. The process variables showed a significant effect on the electrocoagulation treatment process. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to describe the electrocoagulation process statistically. The optimal operating conditions were found to be an initial pH of 7, a current density of 20 mA/cm2, an electrode distance of 5 cm and an electrolysis time of 20 min. These results indicate that the EC process can be scaled up to treat grey wastewater with high removal efficiencies of TS, COD and FC.

  5. Investigation of the Process Conditions for Hydrogen Production by Steam Reforming of Glycerol over Ni/Al₂O₃ Catalyst Using Response Surface Methodology (RSM).

    PubMed

    Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed

    2014-03-19

    In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. The effects of five independent variables, namely the temperature (X₁), the flow rate (X₂), the catalyst weight (X₃), the catalyst loading (X₄) and the glycerol-water molar ratio (X₅), on the H₂ yield (Y₁) and the conversion of glycerol to gaseous products (Y₂) were explored. Using multiple regression analysis, the experimental results for the H₂ yield and the glycerol conversion to gases were fitted to quadratic polynomial models. The proposed mathematical models correlated the dependent factors well within the limits examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20% and a glycerol-water molar ratio of approximately 12, at which the H₂ yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables studied.

  6. Damage identification of beam structures using free response shapes obtained by use of a continuously scanning laser Doppler vibrometer system

    NASA Astrophysics Data System (ADS)

    Xu, Y. F.; Chen, Da-Ming; Zhu, W. D.

    2017-08-01

    Spatially dense operating deflection shapes and mode shapes can be rapidly obtained by use of a continuously scanning laser Doppler vibrometer (CSLDV) system, which sweeps its laser spot over a vibrating structure surface. This paper introduces a new type of vibration shape called a free response shape (FRS) that can be obtained by use of a CSLDV system, and a new damage identification methodology using FRSs is developed for beam structures. An analytical expression for FRSs of a damped beam structure is derived, and FRSs from the analytical expression compare well with those from a finite element model. In the damage identification methodology, a free-response damage index (FRDI) is proposed, and damage regions can be identified near neighborhoods with consistently high FRDI values associated with different modes; an auxiliary FRDI is defined to assist identification of these neighborhoods. An FRDI associated with a mode consists of differences between curvatures of FRSs associated with the mode in a number of half-scan periods of a CSLDV system and those from polynomials that fit the FRSs with properly determined orders. A convergence index is proposed to determine the proper order of a polynomial fit. One advantage of the methodology is that the FRDI does not require any baseline information of an undamaged beam structure, provided the structure is geometrically smooth and made of materials with no stiffness or mass discontinuities. Another advantage is that FRDIs associated with multiple modes can be obtained from the free response of a beam structure measured by a CSLDV system in one scan. The number of half-scan periods used to calculate the FRDI associated with a mode can be determined by use of the short-time Fourier transform. The proposed methodology was numerically and experimentally applied to identify damage in beam structures, and effects of the scan frequency of a CSLDV system on the quality of the obtained FRSs were experimentally investigated.
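
    The sketch below is a generic curvature-based damage index in the spirit of the FRDI, not the authors' implementation: the measured shape's curvature is compared with the curvature of a smooth polynomial fit, and large differences flag candidate damage locations. Evenly spaced measurement points, the fit order, and the synthetic test shape are assumptions.

        import numpy as np

        def curvature_damage_index(x, shape, order):
            """Squared difference between the curvature of a measured shape and the
            curvature of a polynomial fit of the given order (hypothetical index)."""
            fit = np.polynomial.Polynomial.fit(x, shape, order)
            dx = x[1] - x[0]
            curv_meas = np.gradient(np.gradient(shape, dx), dx)   # second derivative of measurement
            curv_fit = fit.deriv(2)(x)                            # second derivative of the smooth fit
            return (curv_meas - curv_fit)**2

        x = np.linspace(0.0, 1.0, 200)                            # positions along the beam (assumed)
        shape = np.sin(np.pi*x) + 0.002*np.exp(-((x - 0.6)/0.02)**2)   # synthetic shape with a local anomaly
        index = curvature_damage_index(x, shape, order=6)
        print(x[np.argmax(index)])                                # peaks near the simulated anomaly at x = 0.6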

  7. Detecting changes in the spatial distribution of nitrate contamination in ground water

    USGS Publications Warehouse

    Liu, Z.-J.; Hallberg, G.R.; Zimmerman, D.L.; Libra, R.D.

    1997-01-01

    Many studies of ground water pollution in general, and nitrate contamination in particular, have relied on one-time investigations, tracking of individual wells, or aggregate summaries. Studies of changes in the spatial distribution of contaminants over time are lacking. This paper presents a method to compare spatial distributions for possible changes over time. The large-scale spatial distribution at a given time can be considered as a surface over the area (a trend surface). Changes in the spatial distribution from period to period are revealed by differences in the shape and/or height of these surfaces. If such a surface is described by a polynomial function, changes in surfaces can be detected by testing statistically for differences in their corresponding polynomial functions. This method was applied to nitrate concentrations in a population of wells in an agricultural drainage basin in Iowa, sampled in three different years. For the period 1981-1992, the large-scale spatial distribution of nitrate concentration did not show a significant change in the shape of the spatial surfaces, while the magnitude of nitrate concentration in the basin, that is, the height of the computed surfaces, showed significant fluctuations. The change in magnitude of nitrate concentration is closely related to climatic variations, especially in precipitation. The lack of change in the shape of the spatial surfaces means either that the influence of land use/nitrogen management was overshadowed by climatic influence, or that changes in land use/management occurred in a random fashion.
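
    A minimal sketch of the idea, not the authors' statistical procedure: fit a second-order polynomial trend surface to well locations and concentrations for each sampling year, then compare the fitted coefficients. The data below are synthetic, constructed with the same shape but a higher level in the second year.

        import numpy as np

        def trend_surface_design(xy):
            """Second-order trend surface terms: 1, x, y, x^2, xy, y^2."""
            x, y = xy[:, 0], xy[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x*x, x*y, y*y])

        def fit_trend_surface(xy, z):
            A = trend_surface_design(xy)
            coef, *_ = np.linalg.lstsq(A, z, rcond=None)
            return coef

        rng = np.random.default_rng(1)
        xy = rng.uniform(0.0, 10.0, size=(40, 2))                        # hypothetical well coordinates (km)
        z_a = 5 + 0.8*xy[:, 0] - 0.3*xy[:, 1] + rng.normal(0, 0.5, 40)   # year A nitrate (mg/L), synthetic
        z_b = 8 + 0.8*xy[:, 0] - 0.3*xy[:, 1] + rng.normal(0, 0.5, 40)   # year B: same shape, higher level

        ca, cb = fit_trend_surface(xy, z_a), fit_trend_surface(xy, z_b)
        # The intercept (surface height) shifts while the shape coefficients stay similar.
        print(np.round(ca, 2), np.round(cb, 2), sep="\n")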

  8. Poly-Frobenius-Euler polynomials

    NASA Astrophysics Data System (ADS)

    Kurt, Burak

    2017-07-01

    Hamahata [3] defined poly-Euler polynomials and the generalized poly-Euler polynomials, and proved some relations and closed formulas for the poly-Euler polynomials. Motivated by this, we define poly-Frobenius-Euler polynomials and give some relations for these polynomials. We also prove relationships between poly-Frobenius-Euler polynomials and Stirling numbers of the second kind.

  9. Development, optimization, and in vitro characterization of dasatinib-loaded PEG functionalized chitosan capped gold nanoparticles using Box-Behnken experimental design.

    PubMed

    Adena, Sandeep Kumar Reddy; Upadhyay, Mansi; Vardhan, Harsh; Mishra, Brahmeshwar

    2018-03-01

    The purpose of this research study was to develop, optimize, and characterize dasatinib-loaded polyethylene glycol (PEG) stabilized chitosan-capped gold nanoparticles (DSB-PEG-Ch-GNPs). Gold (III) chloride hydrate was reduced with chitosan, and the resulting nanoparticles were coated with thiol-terminated PEG and loaded with dasatinib (DSB). A Plackett-Burman design (PBD) followed by a Box-Behnken experimental design (BBD) was employed to optimize the process parameters. Polynomial equations, contour plots, and 3D response surface plots were generated to relate the factors and responses. The optimized DSB-PEG-Ch-GNPs were characterized by FTIR, XRD, HR-SEM, EDX, TEM, SAED, AFM, DLS, and ZP. The optimized DSB-PEG-Ch-GNPs showed a particle size (PS) of 24.39 ± 1.82 nm, an apparent drug content (ADC) of 72.06 ± 0.86%, and a zeta potential (ZP) of -13.91 ± 1.21 mV. The observed responses and the predicted values of the optimized process were found to be close. Shape and surface morphology studies showed that the resulting DSB-PEG-Ch-GNPs were spherical and smooth. Stability and in vitro drug release studies confirmed that the optimized formulation was stable under different storage conditions, exhibited sustained release of the drug of up to 76% in 48 h, and followed the Korsmeyer-Peppas release kinetic model. A process for preparing gold nanoparticles using chitosan, anchoring PEG to the particle surface, and entrapping dasatinib in the chitosan-PEG surface corona was optimized.

  10. Optimizing culture conditions for production of intra and extracellular inulinase and invertase from Aspergillus niger ATCC 20611 by response surface methodology (RSM).

    PubMed

    Dinarvand, Mojdeh; Rezaee, Malahat; Foroughi, Majid

    The aim of this study was to obtain a model that maximizes growth and production of inulinase and invertase by Aspergillus niger ATCC 20611, employing response surface methodology (RSM). The RSM with a five-variable, three-level central composite design (CCD) was employed to optimize the medium composition. Results showed that the experimental data could be appropriately fitted to a second-order polynomial model with a coefficient of determination (R2) greater than 0.90 for all responses. This model adequately explained the data variation and represented the actual relationships between the parameters and responses. The pH and temperature of the cultivation medium were the most significant variables, and the effects of inoculum size and agitation speed were slightly lower. The intra- and extracellular inulinase and invertase production and the biomass content increased 10-32 fold in the optimized medium (pH 6.5, temperature 30°C, 6% (v/v) inoculum size and 150 rpm agitation speed) obtained by RSM compared with the medium optimized through the one-factor-at-a-time method. The process development and intensification for simultaneous production of intra- and extracellular inulinase (exo- and endo-inulinase) and invertase from A. niger could be used for industrial applications. Copyright © 2017 Sociedade Brasileira de Microbiologia. Published by Elsevier Editora Ltda. All rights reserved.

  11. Stochastic Modeling of Flow-Structure Interactions using Generalized Polynomial Chaos

    DTIC Science & Technology

    2001-09-11

    Only fragmentary snippets of this report are indexed. The recoverable content describes the Askey scheme, a tree-structured classification of the hypergeometric orthogonal polynomials (terminating in the 2F0 families), from which the orthogonal polynomials associated with the generalized polynomial chaos are drawn, and cites "Some basic hypergeometric polynomials that generalize Jacobi polynomials" (Memoirs Amer. Math. Soc.).
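
    As a generic illustration of the polynomial chaos idea, not taken from this report, the sketch below expands a function of a standard normal variable in probabilists' Hermite polynomials, the Gaussian branch of the Askey scheme, with coefficients computed by Gauss-Hermite quadrature; the test function exp(xi) and the truncation order are arbitrary choices.

        import numpy as np
        from math import factorial
        from numpy.polynomial.hermite_e import hermegauss, hermeval

        # Expand f(xi) = exp(xi), xi ~ N(0, 1), in probabilists' Hermite polynomials He_n.
        # With E[He_n^2] = n!, the coefficients are c_n = E[f(xi) He_n(xi)] / n!.
        nodes, weights = hermegauss(20)               # quadrature for the weight exp(-x^2/2)
        weights = weights / np.sqrt(2.0*np.pi)        # normalize to the standard normal density

        order = 6
        coeffs = []
        for n in range(order + 1):
            e = np.zeros(n + 1); e[n] = 1.0           # coefficient vector selecting He_n
            c_n = np.sum(weights*np.exp(nodes)*hermeval(nodes, e)) / factorial(n)
            coeffs.append(c_n)

        # For exp(xi) the exact chaos coefficients are sqrt(e)/n!, a quick consistency check.
        print(np.allclose(coeffs, [np.sqrt(np.e)/factorial(n) for n in range(order + 1)]))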

  12. On a self-consistent representation of earth models, with an application to the computing of internal flattening

    NASA Astrophysics Data System (ADS)

    Denis, C.; Ibrahim, A.

    Self-consistent parametric earth models are discussed in terms of a flexible numerical code. The density profile of each layer is represented as a polynomial, and figures of gravity, mass, mean density, hydrostatic pressure, and moment of inertia are derived. The polynomial representation also allows computation of the first-order flattening of the internal strata of some models, using a Gauss-Legendre quadrature with a rapidly converging iteration technique. Agreement with measured geophysical data is obtained, and an algorithm for estimating the geometric flattening of any equidense surface, identified by its fractional radius, is developed. The program can also be applied in studies of planetary and stellar models.
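
    As a simple illustration of the ingredients named above (not the code described in the abstract), the sketch below integrates a polynomial density profile with Gauss-Legendre quadrature to obtain the mass of a spherically layered body; the density coefficients are hypothetical.

        import numpy as np

        R = 6.371e6                                     # outer radius (m), Earth-like value
        rho = lambda r: 13000.0 - 8000.0*(r/R)**2       # hypothetical polynomial density profile (kg/m^3)

        # Mass = 4*pi * integral_0^R rho(r) r^2 dr, via Gauss-Legendre quadrature mapped to [0, R].
        x, w = np.polynomial.legendre.leggauss(16)      # nodes and weights on [-1, 1]
        r = 0.5*R*(x + 1.0)
        mass = 4.0*np.pi*np.sum(0.5*R*w * rho(r) * r**2)
        print(f"{mass:.3e} kg")                         # same order of magnitude as Earth's mass (~6e24 kg)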

  13. Partial oxidation of landfill leachate in supercritical water: Optimization by response surface methodology

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Yanmeng; Wang, Shuzhong; Xu, Haidong

    Highlights: • Partial oxidation of landfill leachate in supercritical water was investigated. • The process was optimized by Box–Behnken design and response surface methodology. • GY_H2, TRE and CR reached up to 14.32 mmol·gTOC⁻¹, 82.54% and 94.56%. • Small amounts of oxidant can decrease the generation of tar and char. - Abstract: To achieve the maximum H₂ yield (GY_H2), TOC removal rate (TRE) and carbon recovery rate (CR), response surface methodology was applied to optimize the process parameters for supercritical water partial oxidation (SWPO) of landfill leachate in a batch reactor. Quadratic polynomial models for GY_H2, CR and TRE were established with a Box–Behnken design. GY_H2, CR and TRE reached up to 14.32 mmol·gTOC⁻¹, 82.54% and 94.56%, respectively, under optimum conditions. TRE was invariably above 91.87%. In contrast, the TC removal rate (TR) only changed from 8.76% to 32.98%. Furthermore, carbonate and bicarbonate were the most abundant carbonaceous substances in the product, whereas CO₂ and H₂ were the most abundant gaseous products. As a product of nitrogen-containing organics, NH₃ has an important effect on the gas composition. The carbon balance cannot be closed due to the formation of tar and char. CR increased with increasing temperature and oxidation coefficient.

  14. Optimization of enzymatic hydrolysis of guar gum using response surface methodology.

    PubMed

    Mudgil, Deepak; Barak, Sheweta; Khatkar, B S

    2014-08-01

    Guar gum is a polysaccharide obtained from the endosperm of guar seed. Enzymatically hydrolyzed guar gum is low in viscosity and has several health benefits as a dietary fiber. In this study, response surface methodology was used to determine the optimum hydrolysis conditions that give the minimum viscosity of guar gum. A central composite design was employed to investigate the effects of pH (3-7), temperature (20-60 °C), reaction time (1-5 h) and cellulase concentration (0.25-1.25 mg/g) on viscosity during enzymatic hydrolysis of guar (Cyamopsis tetragonolobus) gum. A second-order polynomial model for viscosity was developed using regression analysis. Results revealed the statistical significance of the model, as evidenced by the high coefficient of determination (R2 = 0.9472) and P < 0.05. Viscosity was primarily affected by cellulase concentration, pH and hydrolysis time. Maximum viscosity reduction was obtained when pH, temperature, hydrolysis time and cellulase concentration were 6, 50 °C, 4 h and 1.00 mg/g, respectively. The study is important for optimizing the enzymatic hydrolysis of guar gum as a potential source of soluble dietary fiber for human health benefits.

  15. Optimization of phenolics and flavonoids extraction conditions of Curcuma Zedoaria leaves using response surface methodology.

    PubMed

    Azahar, Nur Fauwizah; Gani, Siti Salwa Abd; Mohd Mokhtar, Nor Fadzillah

    2017-10-02

    This study focused on maximizing the extraction yield of total phenolics and flavonoids from Curcuma zedoaria leaves as a function of time (80-120 min), temperature (60-80 °C) and ethanol concentration (70-90 v/v%). The data were subjected to response surface methodology (RSM), and the results showed that the polynomial equations for all models were significant, did not show lack of fit, and had adjusted determination coefficients (R2) above 99%, proving their suitability for prediction purposes. Using the desirability function, the optimum operating conditions for higher extraction of phenolics and flavonoids were found to be 75 °C, 92 min of extraction time and a 90:10 ethanol concentration ratio. Under these optimal conditions, the experimental values for total phenolics and flavonoids of Curcuma zedoaria leaves were 125.75 ± 0.17 mg of gallic acid equivalents and 6.12 ± 0.23 mg quercetin/g of extract, which closely agreed with the predicted values. In addition, the leaves of Curcuma zedoaria can be considered to have strong antioxidative ability and may be used in various cosmeceutical or medicinal applications.
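
    The sketch below illustrates desirability-function optimization of two fitted responses in the spirit of this study; the two quadratic models, the desirability limits, and the coded variables (time, temperature, ethanol fraction) are placeholders, not the fitted models from the paper.

        import numpy as np
        from scipy.optimize import minimize

        def phenolics(x):                      # placeholder quadratic response model
            t, T, e = x
            return 110 + 6*t + 4*T + 8*e - 3*t*t - 2*e*e

        def flavonoids(x):                     # placeholder quadratic response model
            t, T, e = x
            return 5.0 + 0.4*t + 0.3*T + 0.5*e - 0.2*t*t

        def desirability(y, lo, hi):
            """Larger-is-better desirability, linearly scaled between lo and hi."""
            return np.clip((y - lo)/(hi - lo), 0.0, 1.0)

        def neg_overall(x):
            d1 = desirability(phenolics(x), 90.0, 130.0)
            d2 = desirability(flavonoids(x), 4.0, 6.5)
            return -np.sqrt(d1*d2)             # negative geometric mean of desirabilities

        res = minimize(neg_overall, x0=[0.0, 0.0, 0.0], bounds=[(-1, 1)]*3)
        print(res.x, -res.fun)                 # coded settings maximizing overall desirability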

  16. Production of antioxidant compounds of grape seed skin by fermentation and its optimization using response surface method

    NASA Astrophysics Data System (ADS)

    Andayani, D. G. S.; Risdian, C.; Saraswati, V.; Primadona, I.; Mawarda, P. C.

    2017-03-01

    Grape skins and seeds are wastes generated by the food industry. These wastes contain nutrients that can be utilized as an important source for antioxidant metabolite production, and a natural antioxidant material was produced here through an environmentally friendly process. This study aimed to generate antioxidant compounds by liquid fermentation, using Schizosaccharomyces cerevisiae in a Katu leaf substrate. The variables optimized through response surface methodology (RSM) were sucrose concentration, grape skin and seed concentration, and pH. Results showed that the optimum conditions for antioxidant production were 5 g/L sucrose and 5 g/L skins and seeds at pH 5. The resulting antioxidant activity was 1.62 mg/mL. From the analysis of variance, the second-order polynomial model corresponding to the antioxidant data was 20.70124 - 3.86997A - 0.65996B - 1.88367C + 0.19634A² - 0.016638B² + 0.28848C² + 0.26980AB - 0.068333AC - 0.12367BC. According to this equation, the effects of all variables on the optimum yield were significant. The antioxidant activity was assayed using 2,2-diphenyl-1-picrylhydrazyl (DPPH).
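
    For illustration, the quoted second-order model can be evaluated directly. Whether A (sucrose), B (skin and seed concentration) and C (pH) enter in coded or actual units is not stated in the abstract, so the call below with the reported optimum settings is only a hypothetical usage.

        def antioxidant_model(A, B, C):
            """Second-order polynomial model quoted in the abstract."""
            return (20.70124 - 3.86997*A - 0.65996*B - 1.88367*C
                    + 0.19634*A**2 - 0.016638*B**2 + 0.28848*C**2
                    + 0.26980*A*B - 0.068333*A*C - 0.12367*B*C)

        # Reported optimum settings (5 g/L sucrose, 5 g/L skins and seeds, pH 5), assuming actual units.
        print(antioxidant_model(5.0, 5.0, 5.0))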

  17. Ultrasonic-assisted extraction and in-vitro antioxidant activity of polysaccharide from Hibiscus leaf.

    PubMed

    Afshari, Kasra; Samavati, Vahid; Shahidi, Seyed-Ahmad

    2015-03-01

    The effects of ultrasonic power, extraction time, extraction temperature, and the water-to-raw-material ratio on the extraction yield of crude polysaccharide from the leaf of Hibiscus rosa-sinensis (HRLP) were optimized by statistical analysis using response surface methodology (RSM), implemented with a Box-Behnken design (BBD). The experimental data obtained were fitted to a second-order polynomial equation using multiple regression analysis and were also analyzed by appropriate statistical methods (ANOVA). Analysis of the results showed that the linear and quadratic terms of these four variables had significant effects. The optimal conditions for the highest extraction yield of HRLP were: ultrasonic power, 93.59 W; extraction time, 25.71 min; extraction temperature, 93.18°C; and water-to-raw-material ratio, 24.3 mL/g. Under these conditions, the experimental yield was 9.66 ± 0.18%, which is in close agreement with the value predicted by the model (9.526%). The results demonstrated that HRLP had strong scavenging activities in vitro on DPPH and hydroxyl radicals. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Sharing Teaching Ideas.

    ERIC Educational Resources Information Center

    Crouse, Richard J.; And Others

    1991-01-01

    The first idea concerns a board game similar to tic-tac-toe in which the strategy involves the knowledge of the factorization of quadratic polynomials. The second game uses the calculation of the surface areas of solid figures applying the specific examples of cigar boxes and cylindrical tin cans. (JJK)

  19. Injection molding lens metrology using software configurable optical test system

    NASA Astrophysics Data System (ADS)

    Zhan, Cheng; Cheng, Dewen; Wang, Shanshan; Wang, Yongtian

    2016-10-01

    Plastic lenses produced by injection molding offer numerous advantages, including light weight, impact resistance, and low cost. The measuring methods in the optical shop are mainly interferometry and profilometry; however, these instruments are not only expensive but also difficult to align. The software configurable optical test system (SCOTS) is based on the geometry of fringe reflection and the phase measuring deflectometry (PMD) method, and can be used to measure large-diameter mirrors, aspheric surfaces, and freeform surfaces rapidly, robustly, and accurately. In addition to the conventional phase-shifting method, we propose another data collection method, called dot-matrix projection. We also use Zernike polynomials to correct the camera distortion. This polynomial-fitting approach to mapping the distortion is simple to operate and offers high conversion precision. We simulate this test system measuring a concave surface using CODE V and MATLAB. The simulation results show that the dot-matrix projection method has high accuracy and that SCOTS is well suited for on-line inspection in the optical shop.

  20. Model parameter extraction of lateral propagating surface acoustic waves with coupling on SiO2/grating/LiNbO3 structure

    NASA Astrophysics Data System (ADS)

    Zhang, Benfeng; Han, Tao; Li, Xinyi; Huang, Yulin; Omori, Tatsuya; Hashimoto, Ken-ya

    2018-07-01

    This paper investigates how the lateral propagation of Rayleigh and shear horizontal (SH) surface acoustic waves (SAWs) changes with the rotation angle θ and the SiO2 and electrode thicknesses, h_SiO2 and h_Cu, respectively. The extended thin plate model is used for this purpose. First, the extraction method for determining the parameters appearing in the extended thin plate model is presented. Then, the model parameters are expressed as polynomials in h_SiO2, h_Cu, and θ. Finally, a piston mode structure without phase shifters is designed using the extracted parameters. Possible piston mode structures can be searched automatically by use of the polynomial expressions. The resonance characteristics are analyzed by both the extended thin plate model and the three-dimensional (3D) finite element method (FEM). Agreement between the results of the two methods confirms the validity and effectiveness of the parameter extraction process and the design technique.

  1. Propagation and attenuation of Rayleigh waves in generalized thermoelastic media

    NASA Astrophysics Data System (ADS)

    Sharma, M. D.

    2014-01-01

    This study considers the propagation of Rayleigh waves in a generalized thermoelastic half-space with a stress-free plane boundary. The boundary may be either isothermal or thermally insulated. In either case, the dispersion equation is obtained in the form of a complex irrational expression due to the presence of radicals. This dispersion equation is rationalized into a polynomial equation, which can be solved numerically for exact complex roots. The roots of the dispersion equation are obtained after removing the extraneous zeros of this polynomial equation. These roots are then filtered for the inhomogeneous propagation of waves decaying with depth. Numerical examples are solved to analyze the effects of the thermal properties of elastic materials on the dispersion of the existing surface waves. For these thermoelastic Rayleigh waves, the behavior of the elliptical particle motion is studied inside and at the surface of the medium. Insulation of the boundary plays a significant role in changing the speed, amplitude, and polarization of Rayleigh waves in thermoelastic media.

  2. Geometric properties of commutative subalgebras of partial differential operators

    NASA Astrophysics Data System (ADS)

    Zheglov, A. B.; Kurke, H.

    2015-05-01

    We investigate further algebro-geometric properties of commutative rings of partial differential operators, continuing our research started in previous articles. In particular, we start to explore the simplest and also certain known examples of quantum algebraically completely integrable systems from the point of view of a recent generalization of Sato's theory, developed by the first author. We give a complete characterization of the spectral data for a class of 'trivial' commutative algebras and strengthen geometric properties known earlier for a class of known examples. We also define a kind of restriction map from the moduli space of coherent sheaves with fixed Hilbert polynomial on a surface to an analogous moduli space on a divisor (both the surface and the divisor are part of the spectral data). We give several explicit examples of spectral data and corresponding algebras of commuting (completed) operators, producing as a by-product interesting examples of surfaces that are not isomorphic to spectral surfaces of any (maximal) commutative ring of partial differential operators of rank one. Finally, we prove that any commutative ring of partial differential operators whose normalization is isomorphic to the ring of polynomials k \\lbrack u,t \\rbrack is a Darboux transformation of a ring of operators with constant coefficients. Bibliography: 39 titles.

  3. CORFIG- CORRECTOR SURFACE DESIGN SOFTWARE

    NASA Technical Reports Server (NTRS)

    Dantzler, A.

    1994-01-01

    Corrector Surface Design Software, CORFIG, calculates the optimum figure of a corrector surface for an optical system based on real ray traces. CORFIG generates the corrector figure in the form of a spline data point table and/or a list of polynomial coefficients. The number of spline data points as well as the number of coefficients is user specified. First, the optical system's parameters (thickness, radii of curvature, etc.) are entered. CORFIG will trace the outermost axial real ray through the uncorrected system to determine approximate radial limits for all rays. Then, several real rays are traced backwards through the system from the image to the surface that originally followed the object, within these radial limits. At this first surface, the local curvature is adjusted on a small scale to direct the rays toward the object, thus removing any accumulated aberrations. For each ray traced, this adjustment will be different, so that at the end of this process the resultant surface is made up of many local curvatures. The equations that describe these local surfaces, expressed as high order polynomials, are then solved simultaneously to yield the final surface figure, from which data points are extracted. Finally, a spline table or list of polynomial coefficients is extracted from these data points. CORFIG is intended to be used in the late stages of optical design. The system's design must have at least a good paraxial foundation. Preferably, the design should be at a stage where traditional methods of Seidel aberration correction will not bring about the required image spot size specification. CORFIG will read the system parameters of such a design and calculate the optimum figure for the first surface such that all of the original parameters remain unchanged. Depending upon the system, CORFIG can reduce the RMS image spot radius by a factor of 5 to 25. The original parameters (magnification, back focal length, etc.) are maintained because all rays upon which the corrector figure is based are traced within the bounds of the original system's outermost ray. For this reason the original system must have a certain degree of integrity. CORFIG optimizes the corrector surface figure for on-axis images at a single wavelength only. However, it has been demonstrated many times that CORFIG's method also significantly improves the quality of field images and images formed from wavelengths other than the center wavelength. CORFIG is written completely in VAX FORTRAN. It has been implemented on a DEC VAX series computer under VMS with a central memory requirement of 55 K bytes. This program was developed in 1986.

  4. Automated image segmentation-assisted flattening of atomic force microscopy images.

    PubMed

    Wang, Yuliang; Lu, Tongda; Li, Xiaolai; Wang, Huimin

    2018-01-01

    Atomic force microscopy (AFM) images normally exhibit various artifacts. As a result, image flattening is required prior to image analysis. To obtain optimized flattening results, foreground features are generally excluded manually using rectangular masks during image flattening, which is time consuming and inaccurate. In this study, a two-step scheme was proposed to achieve optimized image flattening in an automated manner. In the first step, the convex and concave features in the foreground were automatically segmented with accurate boundary detection, and the extracted foreground features were taken as exclusion masks. In the second step, data points in the background were fitted with polynomial curves/surfaces, which were then subtracted from the raw images to obtain the flattened images. Moreover, sliding-window-based polynomial fitting was proposed to process images with complex background trends. The working principle of the two-step image flattening scheme is presented, followed by an investigation of the influence of the sliding-window size and the polynomial fitting direction on the flattened images. Additionally, the role of image flattening in the morphological characterization and segmentation of AFM images was verified with the proposed method.
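
    A minimal sketch of masked, line-by-line polynomial flattening, a simplified stand-in for the two-step scheme rather than the authors' code; the synthetic image, mask, and fit order are assumptions.

        import numpy as np

        def flatten_lines(image, mask, order=2):
            """Fit a polynomial of the given order to the background pixels of each scan
            line (mask == True marks excluded foreground features) and subtract it."""
            flattened = np.empty_like(image, dtype=float)
            x = np.arange(image.shape[1])
            for i, line in enumerate(image):
                bg = ~mask[i]
                coeffs = np.polyfit(x[bg], line[bg], order)    # background-only fit
                flattened[i] = line - np.polyval(coeffs, x)    # remove the background trend
            return flattened

        # Synthetic example: a tilted/bowed background plus one raised foreground feature.
        x = np.arange(256)
        image = 0.02*x + 1e-4*(x - 128)**2 + np.zeros((64, 256))
        image[20:30, 100:140] += 5.0                           # "particle" on the surface
        mask = np.zeros_like(image, dtype=bool)
        mask[20:30, 100:140] = True
        flat = flatten_lines(image, mask)
        print(float(np.abs(flat[0]).max()))                    # background rows flatten to ~0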

  5. Enhanced cell disruption strategy in the release of recombinant hepatitis B surface antigen from Pichia pastoris using response surface methodology

    PubMed Central

    2012-01-01

    Background Cell disruption strategies using a high pressure homogenizer for the release of recombinant hepatitis B surface antigen (HBsAg) from Pichia pastoris expression cells were optimized using response surface methodology (RSM) based on a central composite design (CCD). The factors studied were the number of passes, the biomass concentration and the pulse pressure. Polynomial models were used to relate these factors to the cell disruption capability and the specific protein release of HBsAg from P. pastoris cells. Results The proposed cell disruption strategy consisted of 20 passes, a biomass concentration of 7.70 g/L dry cell weight (DCW) and a pulse pressure of 1,029 bar. Compared with a glass bead method, the optimized strategy increased cell disruption efficiency 2-fold and the specific protein release of HBsAg 4-fold, yielding a 75.68% cell disruption rate (CDR) and an HBsAg concentration of 29.20 mg/L, respectively. Conclusions The model equation generated from RSM for cell disruption of P. pastoris was adequate to determine the significant factors, their interactions among the process variables and the optimum conditions for releasing HBsAg when validated against a glass bead cell disruption method. The findings of the study open up a promising strategy for better recovery of recombinant HBsAg during downstream processing. PMID:23039947

  6. Radiometer Calibrations: Saving Time by Automating the Gathering and Analysis Procedures

    NASA Technical Reports Server (NTRS)

    Sadino, Jeffrey L.

    2005-01-01

    Mr. Abtahi custom-designs radiometers for Mr. Hook's research group. When the radiometers report the temperature of arbitrary surfaces, the results are inherently affected by accuracy errors. This problem can be reduced if the errors can be accounted for by a polynomial. This is achieved by pointing the radiometer at a constant-temperature surface; we have been using a Hartford Scientific WaterBath. The measurements from the radiometer are collected at many different temperatures and compared to the measurements made by a Hartford Chubb thermometer with four-decimal-point resolution. The data are analyzed and fitted to a fifth-order polynomial. This formula is then uploaded into the radiometer software, enabling accurate data gathering. Traditionally, Mr. Abtahi has done this by hand, spending several hours setting the temperature, waiting for stabilization, taking measurements, and then repeating for other temperatures. My program, written in Python, has enabled the data gathering and analysis process to be handed off to a less-senior member of the team. Simply by entering several initial settings, the program will simultaneously control all three instruments and organize the data for computer analysis, thus giving the desired fifth-order polynomial. This will save time, allow for a more complete calibration data set, and allow base calibrations to be developed. The program is expandable to simultaneously take any type of measurement from up to nine distinct instruments.
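
    The calibration fit described above amounts to a fifth-order least-squares polynomial relating radiometer readings to reference temperatures. The sketch below shows that step with hypothetical readings; the original program also automates instrument control, which is not reproduced here.

        import numpy as np

        # Hypothetical paired readings at several bath set points (deg C).
        radiometer = np.array([10.3, 15.1, 20.2, 25.4, 30.1, 35.2, 40.0, 45.3, 50.1, 55.2, 60.0])
        reference  = np.array([10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0, 45.0, 50.0, 55.0, 60.0])

        coeffs = np.polyfit(radiometer, reference, 5)     # fifth-order correction polynomial
        corrected = np.polyval(coeffs, radiometer)        # corrected radiometer temperatures
        print(np.max(np.abs(corrected - reference)))      # residual calibration error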

  7. Partial null astigmatism-compensated interferometry for a concave freeform Zernike mirror

    NASA Astrophysics Data System (ADS)

    Dou, Yimeng; Yuan, Qun; Gao, Zhishan; Yin, Huimin; Chen, Lu; Yao, Yanxia; Cheng, Jinlong

    2018-06-01

    Partial null interferometry without using any null optics is proposed to measure a concave freeform Zernike mirror. Oblique incidence on the freeform mirror is used to compensate for astigmatism as the main component in its figure, and to constrain the divergence of the test beam as well. The phase demodulated from the partial nulled interferograms is divided into low-frequency phase and high-frequency phase by Zernike polynomial fitting. The low-frequency surface figure error of the freeform mirror represented by the coefficients of Zernike polynomials is reconstructed from the low-frequency phase, applying the reverse optimization reconstruction technology in the accurate model of the interferometric system. The high-frequency surface figure error of the freeform mirror is retrieved from the high-frequency phase adopting back propagating technology, according to the updated model in which the low-frequency surface figure error has been superimposed on the sag of the freeform mirror. Simulations verified that this method is capable of testing a wide variety of astigmatism-dominated freeform mirrors due to the high dynamic range. The experimental result using our proposed method for a concave freeform Zernike mirror is consistent with the null test result employing the computer-generated hologram.

  8. Adaptive nonlinear polynomial neural networks for control of boundary layer/structural interaction

    NASA Technical Reports Server (NTRS)

    Parker, B. Eugene, Jr.; Cellucci, Richard L.; Abbott, Dean W.; Barron, Roger L.; Jordan, Paul R., III; Poor, H. Vincent

    1993-01-01

    The acoustic pressures developed in a boundary layer can interact with an aircraft panel to induce significant vibration in the panel. Such vibration is undesirable due to the aerodynamic drag and structure-borne cabin noise that result. The overall objective of this work is to develop effective and practical feedback control strategies for actively reducing this flow-induced structural vibration. This report describes the results of initial evaluations using polynomial, neural network-based feedback control to reduce flow-induced vibration in aircraft panels due to turbulent boundary layer/structural interaction. Computer simulations are used to develop and analyze feedback control strategies to reduce vibration in a beam as a first step. The key differences between this work and that going on elsewhere are as follows: first, turbulent and transitional boundary layers represent broadband excitation and thus present a more complex stochastic control scenario than narrow-band (e.g., laminar boundary layer) excitation; and second, the proposed controller structures are adaptive nonlinear infinite impulse response (IIR) polynomial neural networks, as opposed to the traditional adaptive linear finite impulse response (FIR) filters used in most studies to date. The controllers implemented in this study achieved vibration attenuation of 27 to 60 dB depending on the type of boundary layer established by laminar, turbulent, and intermittent laminar-to-turbulent transitional flows. Application of multi-input, multi-output, adaptive, nonlinear feedback control of vibration in aircraft panels based on polynomial neural networks appears to be feasible today. Plans are outlined for Phase 2 of this study, which will include extending the theoretical investigation conducted in Phase 1 and verifying the results in a series of laboratory experiments involving both beam and plate models.

  9. Element Library for Three-Dimensional Stress Analysis by the Integrated Force Method

    NASA Technical Reports Server (NTRS)

    Kaljevic, Igor; Patnaik, Surya N.; Hopkins, Dale A.

    1996-01-01

    The Integrated Force Method, a recently developed method for analyzing structures, is extended in this paper to three-dimensional structural analysis. First, a general formulation is developed to generate the stress interpolation matrix in terms of complete polynomials of the required order. The formulation is based on definitions of the stress tensor components in terms of stress functions. The stress functions are written as complete polynomials and substituted into expressions for stress components. Then elimination of the dependent coefficients leaves the stress components expressed as complete polynomials whose coefficients are defined as generalized independent forces. The stress tensor components derived in this way identically satisfy the homogeneous Navier equations of equilibrium. The resulting element matrices are invariant with respect to coordinate transformation and are free of spurious zero-energy modes. The formulation provides a rational way to calculate the exact number of independent forces necessary to arrive at an approximation of the required order for complete polynomials. The influence of reducing the number of independent forces on the accuracy of the response is also analyzed. The stress fields derived are used to develop a comprehensive finite element library for three-dimensional structural analysis by the Integrated Force Method. Both tetrahedral- and hexahedral-shaped elements capable of modeling arbitrary geometric configurations are developed. A number of examples with known analytical solutions are solved by using the developments presented herein. The results are in good agreement with the analytical solutions. The responses obtained with the Integrated Force Method are also compared with those generated by the standard displacement method. In most cases, the performance of the Integrated Force Method is better overall.

  10. Site specific seismic hazard analysis and determination of response spectra of Kolkata for maximum considered earthquake

    NASA Astrophysics Data System (ADS)

    Shiuly, Amit; Sahu, R. B.; Mandal, Saroj

    2017-06-01

    This paper presents a site-specific seismic hazard analysis of Kolkata city, the former capital of India and present capital of the state of West Bengal, situated in the Bengal basin on the world's largest delta. For this purpose, the peak ground acceleration (PGA) for a maximum considered earthquake (MCE) at bedrock level has been estimated using an artificial neural network (ANN) based attenuation relationship developed from synthetic ground motion data for the region. Using the PGA corresponding to the MCE, a spectrum-compatible acceleration time history at bedrock level has been generated with a wavelet-based computer program, WAVEGEN. This spectrum-compatible time history at bedrock level has been converted to the surface level using SHAKE2000 for 144 borehole locations in the study region. Using the predicted values of PGA and PGV at the surface, corresponding contours for the region have been drawn. For the MCE, the PGA at bedrock level of Kolkata city has been obtained as 0.184 g, while that at the surface level varies from 0.22 g to 0.37 g. Finally, Kolkata has been subdivided into eight seismic subzones, and for each subzone a response spectrum equation has been derived using polynomial regression analysis. This will be very helpful for structural and geotechnical engineers in designing safe and economical earthquake-resistant structures.

  11. Optimal placement of trailing-edge flaps for helicopter vibration reduction using response surface methods

    NASA Astrophysics Data System (ADS)

    Viswamurthy, S. R.; Ganguli, Ranjan

    2007-03-01

    This study aims to determine optimal locations of dual trailing-edge flaps to achieve minimum hub vibration levels in a helicopter, while incurring low penalty in terms of required trailing-edge flap control power. An aeroelastic analysis based on finite elements in space and time is used in conjunction with an optimal control algorithm to determine the flap time history for vibration minimization. The reduced hub vibration levels and required flap control power (due to flap motion) are the two objectives considered in this study and the flap locations along the blade are the design variables. It is found that second order polynomial response surfaces based on the central composite design of the theory of design of experiments describe both objectives adequately. Numerical studies for a four-bladed hingeless rotor show that both objectives are more sensitive to outboard flap location compared to the inboard flap location by an order of magnitude. Optimization results show a disjoint Pareto surface between the two objectives. Two interesting design points are obtained. The first design gives 77 percent vibration reduction from baseline conditions (no flap motion) with a 7 percent increase in flap power compared to the initial design. The second design yields 70 percent reduction in hub vibration with a 27 percent reduction in flap power from the initial design.

  12. Modeling Uncertainty in Steady State Diffusion Problems via Generalized Polynomial Chaos

    DTIC Science & Technology

    2002-07-25

    Only fragmentary snippets of this report are indexed. The recoverable content describes representing uncertainty in steady-state diffusion problems by a generalized polynomial chaos expansion built from orthogonal polynomial functionals of the Askey scheme, as a generalization of Wiener's (1938) original polynomial chaos, with a Galerkin projection; the uncertainties can be introduced through κ, f, or g, or some combination of them. The snippets also cite "Some basic hypergeometric polynomials that generalize Jacobi polynomials" (Memoirs Amer. Math. Soc.).

  13. Orthonormal aberration polynomials for anamorphic optical imaging systems with circular pupils.

    PubMed

    Mahajan, Virendra N

    2012-06-20

    In a recent paper, we considered the classical aberrations of an anamorphic optical imaging system with a rectangular pupil, representing the terms of a power series expansion of its aberration function. These aberrations are inherently separable in the Cartesian coordinates (x,y) of a point on the pupil. Accordingly, there is x-defocus and x-coma, y-defocus and y-coma, and so on. We showed that the aberration polynomials orthonormal over the pupil and representing balanced aberrations for such a system are represented by the products of two Legendre polynomials, one for each of the two Cartesian coordinates of the pupil point; for example, L(l)(x)L(m)(y), where l and m are positive integers (including zero) and L(l)(x), for example, represents an orthonormal Legendre polynomial of degree l in x. The compound two-dimensional (2D) Legendre polynomials, like the classical aberrations, are thus also inherently separable in the Cartesian coordinates of the pupil point. Moreover, for every orthonormal polynomial L(l)(x)L(m)(y), there is a corresponding orthonormal polynomial L(l)(y)L(m)(x) obtained by interchanging x and y. These polynomials are different from the corresponding orthogonal polynomials for a system with rotational symmetry but a rectangular pupil. In this paper, we show that the orthonormal aberration polynomials for an anamorphic system with a circular pupil, obtained by the Gram-Schmidt orthogonalization of the 2D Legendre polynomials, are not separable in the two coordinates. Moreover, for a given polynomial in x and y, there is no corresponding polynomial obtained by interchanging x and y. For example, there are polynomials representing x-defocus, balanced x-coma, and balanced x-spherical aberration, but no corresponding y-aberration polynomials. The missing y-aberration terms are contained in other polynomials. We emphasize that the Zernike circle polynomials, although orthogonal over a circular pupil, are not suitable for an anamorphic system as they do not represent balanced aberrations for such a system.
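
    As a small numerical illustration of the separable rectangular-pupil case described above (not of the circular-pupil polynomials derived in the paper), the sketch below checks the orthonormality of products of Legendre polynomials over a square pupil; the normalization convention, a mean-value inner product over [-1, 1] x [-1, 1], is an assumption.

        import numpy as np
        from numpy.polynomial.legendre import Legendre

        def L_hat(l, x):
            """Legendre polynomial of degree l, scaled so that (1/2) * integral_{-1}^{1} L_hat^2 dx = 1."""
            c = np.zeros(l + 1); c[l] = 1.0
            return np.sqrt(2*l + 1) * Legendre(c)(x)

        x = np.linspace(-1.0, 1.0, 401)
        X, Y = np.meshgrid(x, x)
        dA = (x[1] - x[0])**2 / 4.0                     # mean-value measure over the square pupil

        f = L_hat(2, X) * L_hat(1, Y)                   # one 2-D product polynomial
        g = L_hat(1, X) * L_hat(2, Y)                   # the polynomial with x and y interchanged
        print(np.sum(f*f) * dA)                         # ~1: unit norm
        print(np.sum(f*g) * dA)                         # ~0: distinct products are orthogonal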

  14. Semiparametric Item Response Functions in the Context of Guessing

    ERIC Educational Resources Information Center

    Falk, Carl F.; Cai, Li

    2016-01-01

    We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood-based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…

  15. Transfer function analysis of thermospheric perturbations

    NASA Technical Reports Server (NTRS)

    Mayr, H. G.; Harris, I.; Varosi, F.; Herrero, F. A.; Spencer, N. W.

    1986-01-01

    Applying perturbation theory, a spectral model in terms of vector spherical harmonics (Legendre polynomials) is used to describe the short-term thermospheric perturbations originating in the auroral regions. The source may be Joule heating, particle precipitation or ExB ion drift momentum coupling. A multiconstituent atmosphere is considered, allowing for the collisional momentum exchange between species including Ar, O2, N2, O, He and H. The coupled equations of energy, mass and momentum conservation are solved simultaneously for the major species N2 and O. Applying homogeneous boundary conditions, the integration is carried out from the Earth's surface up to 700 km. In the analysis, the spherical harmonics are treated as eigenfunctions, assuming that the Earth's rotation (and prevailing circulation) does not significantly affect perturbations with periods that are typically much less than one day. Under these simplifying assumptions, and given a particular source distribution in the vertical, a two-dimensional transfer function is constructed to describe the three-dimensional response of the atmosphere. In order of increasing horizontal wave number (order of the polynomials), this transfer function reveals five components. Compiling the transfer function is computationally very time consuming (about 100 hours on a VAX for one particular vertical source distribution). However, given the transfer function, the atmospheric response in space and time (using a Fourier integral representation) can be constructed in a few seconds of central processing unit time. This model is applied in a case study of wind and temperature measurements on Dynamics Explorer B, which show features characteristic of a ringlike excitation source in the auroral oval. The data can be interpreted as gravity waves which are focused (and amplified) in the polar region and then are reflected to propagate toward lower latitudes.

  17. Approximating exponential and logarithmic functions using polynomial interpolation

    NASA Astrophysics Data System (ADS)

    Gordon, Sheldon P.; Yang, Yajun

    2017-04-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is analysed. The results of interpolating polynomials are compared with those of Taylor polynomials.
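
    A minimal sketch of the comparison discussed above (not the article's worked examples): a degree-3 interpolating polynomial for exp(x) on [0, 1] through equally spaced nodes versus the degree-3 Taylor polynomial centered at 0.

        import numpy as np

        nodes = np.linspace(0.0, 1.0, 4)
        interp = np.polyfit(nodes, np.exp(nodes), 3)       # degree-3 interpolant through 4 nodes
        taylor = [1/6, 1/2, 1.0, 1.0]                      # 1 + x + x^2/2 + x^3/6, highest power first

        x = np.linspace(0.0, 1.0, 1001)
        err_interp = np.max(np.abs(np.exp(x) - np.polyval(interp, x)))
        err_taylor = np.max(np.abs(np.exp(x) - np.polyval(taylor, x)))
        print(err_interp, err_taylor)                      # the interpolant's maximum error is much smaller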

  18. Isolation and characterization of hydrophobic compounds from carbohydrate matrix of Pistacia atlantica.

    PubMed

    Samavati, Vahid; Adeli, Mostafa

    2014-01-30

    The present work focuses on the laboratory-scale optimization of the process for extracting hydrophobic compounds from the carbohydrate matrix of Iranian Pistacia atlantica seed using ultrasonic-assisted extraction. Response surface methodology (RSM) was used to optimize the seed oil extraction yield. The independent variables were extraction temperature (30, 45, 60, 75 and 90°C), extraction time (10, 15, 20, 25, 30 and 35 min) and ultrasonic power (20, 40, 60, 80 and 100 W). A second-order polynomial equation was used to express the oil extraction yield as a function of the independent variables, and the responses were fitted well by multiple regression. The optimum extraction conditions were an extraction temperature of 75°C, an extraction time of 25 min, and an ultrasonic power of 80 W. The composition of seed oil extracted by ultrasound under the optimum operating conditions determined by RSM for oil yield is compared with that of oil extracted by organic solvent. Copyright © 2013 Elsevier Ltd. All rights reserved.

  19. Investigation of the Process Conditions for Hydrogen Production by Steam Reforming of Glycerol over Ni/Al2O3 Catalyst Using Response Surface Methodology (RSM)

    PubMed Central

    Ebshish, Ali; Yaakob, Zahira; Taufiq-Yap, Yun Hin; Bshish, Ahmed

    2014-01-01

    In this work, a response surface methodology (RSM) was implemented to investigate the process variables in a hydrogen production system. The effects of five independent variables, namely the temperature (X1), the flow rate (X2), the catalyst weight (X3), the catalyst loading (X4) and the glycerol-water molar ratio (X5), on the H2 yield (Y1) and the conversion of glycerol to gaseous products (Y2) were explored. Using multiple regression analysis, the experimental results for the H2 yield and the glycerol conversion to gases were fitted to quadratic polynomial models. The proposed mathematical models correlated the dependent factors well within the limits examined. The best values of the process variables were a temperature of approximately 600 °C, a feed flow rate of 0.05 mL/min, a catalyst weight of 0.2 g, a catalyst loading of 20% and a glycerol-water molar ratio of approximately 12, at which the H2 yield was predicted to be 57.6% and the conversion of glycerol was predicted to be 75%. To validate the proposed models, statistical analysis using a two-sample t-test was performed, and the results showed that the models could predict the responses satisfactorily within the limits of the variables studied. PMID:28788567

  20. Optimization of electrocoagulation process to treat grey wastewater in batch mode using response surface methodology

    PubMed Central

    2014-01-01

    Background Discharge of grey wastewater into the ecological system has a negative impact on receiving water bodies. Methods In the present study, an electrocoagulation (EC) process was investigated for treating grey wastewater under different operating conditions such as initial pH (4–8), current density (10–30 mA/cm2), electrode distance (4–6 cm) and electrolysis time (5–25 min), using a stainless steel (SS) anode in batch mode. A four-factor, five-level Box-Behnken response surface design (BBD) was employed to optimize and investigate the effect of the process variables on the responses, namely total solids (TS), chemical oxygen demand (COD) and fecal coliform (FC) removal. Results The process variables showed a significant effect on the electrocoagulation treatment process. The results were analyzed by Pareto analysis of variance (ANOVA), and second-order polynomial models were developed to describe the electrocoagulation process statistically. The optimal operating conditions were found to be an initial pH of 7, a current density of 20 mA/cm2, an electrode distance of 5 cm and an electrolysis time of 20 min. Conclusion These results indicate that the EC process can be scaled up to treat grey wastewater with high removal efficiencies of TS, COD and FC. PMID:24410752

  1. Modal Analysis Using the Singular Value Decomposition and Rational Fraction Polynomials

    DTIC Science & Technology

    2017-04-06

    Only fragmentary snippets of this report are indexed, including boilerplate from the report documentation page. The recoverable content states that the programs are designed for experimental datasets with multiple drive and response points and have proven effective even for systems with numerous closely-spaced ...

  2. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Butman, S.; Lipes, R.; Rubin, A.; Truong, T. K.

    1981-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network.

  3. Introduction to methodology of dose-response meta-analysis for binary outcome: With application on software.

    PubMed

    Zhang, Chao; Jia, Pengli; Yu, Liu; Xu, Chang

    2018-05-01

    Dose-response meta-analysis (DRMA) is widely applied to investigate the dose-specific relationship between independent and dependent variables. Such methods have been in use for over 30 years and are increasingly employed in healthcare and clinical decision-making. In this article, we give an overview of the methodology used in DRMA. We summarize the commonly used regression models and pooling methods in DRMA, and we use an example to illustrate how to perform a DRMA with these methods. Five regression models, namely linear regression, piecewise regression, natural polynomial regression, fractional polynomial regression, and restricted cubic spline regression, are illustrated for fitting the dose-response relationship, and two types of pooling approaches, the one-stage approach and the two-stage approach, are illustrated for pooling the dose-response relationship across studies. The example showed similar results among these models. Several dose-response meta-analysis methods can be used to investigate the relationship between exposure level and the risk of an outcome; however, the methodology of DRMA still needs to be improved. © 2018 Chinese Cochrane Center, West China Hospital of Sichuan University and John Wiley & Sons Australia, Ltd.

  4. Equivalences of the multi-indexed orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Odake, Satoru

    2014-01-15

    Multi-indexed orthogonal polynomials describe eigenfunctions of exactly solvable shape-invariant quantum mechanical systems in one dimension obtained by the method of virtual states deletion. Multi-indexed orthogonal polynomials are labeled by a set of degrees of polynomial parts of virtual state wavefunctions. For multi-indexed orthogonal polynomials of Laguerre, Jacobi, Wilson, and Askey-Wilson types, two different index sets may give equivalent multi-indexed orthogonal polynomials. We clarify these equivalences. Multi-indexed orthogonal polynomials with both type I and II indices are proportional to those of type I indices only (or type II indices only) with shifted parameters.

  5. Inelastic scattering with Chebyshev polynomials and preconditioned conjugate gradient minimization.

    PubMed

    Temel, Burcin; Mills, Greg; Metiu, Horia

    2008-03-27

    We describe and test an implementation, using a basis set of Chebyshev polynomials, of a variational method for solving scattering problems in quantum mechanics. This minimum error method (MEM) determines the wave function Psi by minimizing the least-squares error in the function (H Psi - E Psi), where E is the desired scattering energy. We compare the MEM to an alternative, the Kohn variational principle (KVP), by solving the Secrest-Johnson model of two-dimensional inelastic scattering, which has been studied previously using the KVP and for which other numerical solutions are available. We use a conjugate gradient (CG) method to minimize the error, and by preconditioning the CG search, we are able to greatly reduce the number of iterations necessary; the method is thus faster and more stable than a matrix inversion, as is required in the KVP. Also, we avoid errors due to scattering off of the boundaries, which presents substantial problems for other methods, by matching the wave function in the interaction region to the correct asymptotic states at the specified energy; the use of Chebyshev polynomials allows this boundary condition to be implemented accurately. The use of Chebyshev polynomials allows for a rapid and accurate evaluation of the kinetic energy. This basis set is as efficient as plane waves but does not impose an artificial periodicity on the system. There are problems in surface science and molecular electronics which cannot be solved if periodicity is imposed, and the Chebyshev basis set is a good alternative in such situations.

  6. Harmonize input selection for sediment transport prediction

    NASA Astrophysics Data System (ADS)

    Afan, Haitham Abdulmohsin; Keshtegar, Behrooz; Mohtar, Wan Hanna Melini Wan; El-Shafie, Ahmed

    2017-09-01

    In this paper, three modeling approaches, a Neural Network (NN), the Response Surface Method (RSM) and a response surface method based on Global Harmony Search (GHS), are applied to predict the daily time series of suspended sediment load. Generally, the input variables for forecasting the suspended sediment load are selected manually, based on the maximum correlations of the input variables, in modeling approaches based on NN and RSM. Here the RSM is improved by selecting the input variables using the error terms of the training data based on the GHS; the resulting method is referred to as the response surface method with global harmony search (RSM-GHS). A second-order polynomial function with cross terms is applied to calibrate the time series of suspended sediment load with three, four and five input variables in the proposed RSM-GHS. The linear, square and cross terms of twenty input variables, comprising antecedent values of suspended sediment load and water discharge, are investigated to achieve the best predictions of the RSM based on the GHS method. The performances of the NN, RSM and proposed RSM-GHS, in terms of both accuracy and simplicity, are compared through several predictive and error statistics. The results illustrate that the proposed RSM-GHS is as uncomplicated as the RSM but performs better, with smaller errors and better correlation (R = 0.95, MAE = 18.09 ton/day, RMSE = 25.16 ton/day) compared to the ANN (R = 0.91, MAE = 20.17 ton/day, RMSE = 33.09 ton/day) and the RSM (R = 0.91, MAE = 20.06 ton/day, RMSE = 31.92 ton/day) for all types of input variables.
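
    As an illustration of the kind of second-order polynomial with cross terms that such an RSM calibrates, the sketch below fits one to synthetic data by ordinary least squares; the three predictors are purely hypothetical stand-ins for antecedent sediment-load and discharge values.

```python
import numpy as np
from itertools import combinations

def quadratic_design_matrix(X):
    """Columns: intercept, linear, squared, and pairwise cross terms."""
    n, p = X.shape
    cols = [np.ones(n)]
    cols += [X[:, j] for j in range(p)]                     # linear terms
    cols += [X[:, j] ** 2 for j in range(p)]                # square terms
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(p), 2)]  # cross terms
    return np.column_stack(cols)

# Synthetic example with three hypothetical antecedent predictors.
rng = np.random.default_rng(1)
X = rng.uniform(-1.0, 1.0, size=(200, 3))
y = (2.0 + X @ np.array([1.5, -0.8, 0.3])
     + 0.5 * X[:, 0] * X[:, 1] + 0.2 * X[:, 2] ** 2
     + 0.05 * rng.standard_normal(200))

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
rmse = np.sqrt(np.mean((y - A @ coef) ** 2))
print(f"RMSE = {rmse:.3f}")
```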

  7. Semi-Parametric Item Response Functions in the Context of Guessing. CRESST Report 844

    ERIC Educational Resources Information Center

    Falk, Carl F.; Cai, Li

    2015-01-01

    We present a logistic function of a monotonic polynomial with a lower asymptote, allowing additional flexibility beyond the three-parameter logistic model. We develop a maximum marginal likelihood based approach to estimate the item parameters. The new item response model is demonstrated on math assessment data from a state, and a computationally…

  8. Dirac(-Pauli), Fokker-Planck equations and exceptional Laguerre polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ho, Choon-Lin, E-mail: hcl@mail.tku.edu.tw

    2011-04-15

    Research Highlights: > Physical examples involving exceptional orthogonal polynomials. > Exceptional polynomials as deformations of classical orthogonal polynomials. > Exceptional polynomials from Darboux-Crum transformation. - Abstract: An interesting discovery in the last two years in the field of mathematical physics has been the exceptional X_l Laguerre and Jacobi polynomials. Unlike the well-known classical orthogonal polynomials which start with constant terms, these new polynomials have lowest degree l = 1, 2, and ..., and yet they form complete sets with respect to some positive-definite measure. While the mathematical properties of these new X_l polynomials deserve further analysis, it is also of interest to see if they play any role in physical systems. In this paper we indicate some physical models in which these new polynomials appear as the main part of the eigenfunctions. The systems we consider include the Dirac equations coupled minimally and non-minimally with some external fields, and the Fokker-Planck equations. The systems presented here have enlarged the number of exactly solvable physical systems known so far.

  9. Solutions of interval type-2 fuzzy polynomials using a new ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani

    2015-10-01

    A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into systems of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, it was found that solutions based on these three parameters are quite inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering interval type-2 fuzzy polynomial equations, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then considered numerically for triangular fuzzy numbers and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.

  10. Prediction of visual evoked potentials at any surface location from a set of three recording electrodes.

    PubMed

    Mazinani, Babac A E; Waberski, Till D; van Ooyen, Andre; Walter, Peter

    2008-05-01

    The purpose of this study was to introduce a mathematical model that allows the calculation of a source dipole, as the origin of the evoked activity, based on the data of three simultaneously recorded VEPs from different locations on the scalp surface, in order to predict field potentials at any neighboring location, and to validate this model by comparison with actual recordings. In 10 healthy subjects (25-38, mean 29 years) continuous VEPs were recorded via 96 channels. On the basis of the recordings at the positions POz', O1' and O2', a source dipole vector was calculated for each time point of the recordings and VEP responses were back-projected for each of the 96 electrode positions. Differences between the calculated and the actually recorded responses were quantified by coefficients of variation (CV). The prediction precision and response size depended on the distance between the electrode of the predicted response and the recording electrodes. After compensating for this relationship using a polynomial function, the CV of the mean difference between calculated and recorded responses of the 10 subjects was 2.8 +/- 1.2%. In conclusion, the "Mini-Brainmapping" model can provide precise topographical information with minimal additional recording effort and good reliability. The implementation of this method in a routine diagnostic setting as an "easy-to-do" procedure would allow a large number of patients and normal subjects to be examined in a short time, and thus a solid database could be created to correlate well-defined pathologies with topographical VEP changes.

  11. Coherent orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Celeghini, E., E-mail: celeghini@fi.infn.it; Olmo, M.A. del, E-mail: olmo@fta.uva.es

    2013-08-15

    We discuss a fundamental characteristic of orthogonal polynomials, like the existence of a Lie algebra behind them, which can be added to their other relevant aspects. At the basis of the complete framework for orthogonal polynomials we thus include, in addition to differential equations, recurrence relations, Hilbert spaces and square integrable functions, Lie algebra theory. We start here from the square integrable functions on the open connected subset of the real line whose bases are related to orthogonal polynomials. All these one-dimensional continuous spaces allow, besides the standard uncountable basis (|x〉), for an alternative countable basis (|n〉). The matrix elements that relate these two bases are essentially the orthogonal polynomials: Hermite polynomials for the line and Laguerre and Legendre polynomials for the half-line and the line interval, respectively. Differential recurrence relations of orthogonal polynomials allow us to realize that they determine an infinite-dimensional irreducible representation of a non-compact Lie algebra, whose second order Casimir C gives rise to the second order differential equation that defines the corresponding family of orthogonal polynomials. Thus, the Weyl-Heisenberg algebra h(1) with C=0 for Hermite polynomials and su(1,1) with C=-1/4 for Laguerre and Legendre polynomials are obtained. Starting from the orthogonal polynomials the Lie algebra is extended both to the whole space of the L^2 functions and to the corresponding Universal Enveloping Algebra and transformation group. Generalized coherent states from each vector in the space L^2 and, in particular, generalized coherent polynomials are thus obtained. Highlights: • Fundamental characteristic of orthogonal polynomials (OP): existence of a Lie algebra. • Differential recurrence relations of OP determine a unitary representation of a non-compact Lie group. • The 2nd order Casimir originates a 2nd order differential equation that defines the corresponding OP family. • Generalized coherent polynomials are obtained from OP.

  12. Characteristic of entire corneal topography and tomography for the detection of sub-clinical keratoconus with Zernike polynomials using Pentacam.

    PubMed

    Xu, Zhe; Li, Weibo; Jiang, Jun; Zhuang, Xiran; Chen, Wei; Peng, Mei; Wang, Jianhua; Lu, Fan; Shen, Meixiao; Wang, Yuanyuan

    2017-11-28

    The study aimed to characterize the entire corneal topography and tomography for the detection of sub-clinical keratoconus (KC) with a Zernike application method. Normal subjects (n = 147; 147 eyes), sub-clinical KC patients (n = 77; 77 eyes), and KC patients (n = 139; 139 eyes) were imaged with the Pentacam HR system. The entire corneal data of pachymetry and elevation of both the anterior and posterior surfaces were exported from the Pentacam HR software. Zernike polynomial fitting was used to quantify the 3D distribution of the corneal thickness and surface elevation. The root mean square (RMS) values for each order and the total high-order irregularity were calculated. Multimeric discriminant functions combined with individual indices were built using linear step discriminant analysis. Receiver operating characteristic curves determined the diagnostic accuracy (area under the curve, AUC). The 3rd-order RMS of the posterior surface (AUC: 0.928) showed the highest discriminating capability in sub-clinical KC eyes. The multimeric function, which consisted of the Zernike fitting indices of corneal posterior elevation, showed the highest discriminant ability (AUC: 0.951). Indices generated from the elevation of the posterior surface and thickness measurements over the entire cornea using the Zernike method based on the Pentacam HR system were able to identify very early KC.
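
    The Pentacam software's own fitting procedure is not reproduced here; as a generic sketch, the code below fits a handful of low-order Zernike terms to hypothetical elevation samples over the unit disk by least squares, which is the basic operation behind such surface-irregularity indices.

```python
import numpy as np

def zernike_basis(rho, theta):
    """A few low-order Zernike terms on the unit disk (unnormalized)."""
    return np.column_stack([
        np.ones_like(rho),             # piston
        rho * np.cos(theta),           # tilt x
        rho * np.sin(theta),           # tilt y
        2.0 * rho ** 2 - 1.0,          # defocus
        rho ** 2 * np.cos(2 * theta),  # astigmatism 0/90
        rho ** 2 * np.sin(2 * theta),  # astigmatism 45
    ])

# Hypothetical elevation samples over the unit disk (not corneal data).
rng = np.random.default_rng(2)
rho = np.sqrt(rng.uniform(0.0, 1.0, 500))   # uniform sampling over the disk
theta = rng.uniform(0.0, 2.0 * np.pi, 500)
elevation = (0.8 * (2.0 * rho ** 2 - 1.0)
             + 0.1 * rho ** 2 * np.cos(2 * theta)
             + 0.01 * rng.standard_normal(500))

B = zernike_basis(rho, theta)
coef, *_ = np.linalg.lstsq(B, elevation, rcond=None)
print(np.round(coef, 3))  # defocus and astigmatism coefficients dominate
```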

  13. Shielding Analysis of a Small Compact Space Nuclear Reactor

    DTIC Science & Technology

    1987-08-01

    [Report front-matter fragment: detector response options include a Maxwellian fission spectrum, the 1982 Los Alamos fission spectrum, and the Vitamin C neutron spectrum (integral responses). Appendices: A, Calculations of Effective Radii; B, Atom Density Calculations for FEMP1D and FEMP2D; C, FEMP1D and FEMP2D Data; D, Energy Group Definition; E, Transport Equation, Legendre Polynomial.]

  14. Simple Proof of Jury Test for Complex Polynomials

    NASA Astrophysics Data System (ADS)

    Choo, Younseok; Kim, Dongmin

    Recently some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided, based on Rouché's Theorem and a single-parameter characterization of the Schur stability property for complex polynomials.

  15. Colour calibration of a laboratory computer vision system for quality evaluation of pre-sliced hams.

    PubMed

    Valous, Nektarios A; Mendoza, Fernando; Sun, Da-Wen; Allen, Paul

    2009-01-01

    Due to the high variability and complex colour distribution in meats and meat products, colour signal calibration of any computer vision system used for colour quality evaluation is an essential condition for objective and consistent analyses. This paper compares two methods for CIE colour characterization using a computer vision system (CVS) based on digital photography, namely the polynomial transform procedure and the transform proposed by the sRGB standard. It also presents a procedure for evaluating the colour appearance and the presence of pores and fat-connective tissue on pre-sliced hams made from pork, turkey and chicken. Our results showed high precision in colour matching for device characterization when the polynomial transform was used to match the CIE tristimulus values, in comparison with the sRGB standard approach, as indicated by their ΔE*ab values. The [3×20] polynomial transfer matrix yielded a modelling accuracy averaging below 2.2 ΔE*ab units. Using the sRGB transform, high variability was observed among the computed ΔE*ab values (8.8±4.2). The calibrated laboratory CVS, implemented with a low-cost digital camera, exhibited reproducible colour signals over a wide range of colours, was capable of pinpointing regions of interest, and allowed the extraction of quantitative information from the overall ham slice surface with high accuracy. The extracted colour and morphological features showed potential for characterizing the appearance of ham slice surfaces. CVS is a tool that can objectively specify the colour and appearance properties of non-uniformly coloured commercial ham slices.

  16. Central composite rotatable design for investigation of microwave-assisted extraction of okra pod hydrocolloid.

    PubMed

    Samavati, Vahid

    2013-10-01

    The microwave-assisted extraction (MAE) technique was employed to extract the hydrocolloid from okra pods (OPH). The optimal conditions for microwave-assisted extraction of OPH were determined by response surface methodology. A central composite rotatable design (CCRD) was applied to evaluate the effects of three independent variables (microwave power (X1: 100-500 W), extraction time (X2: 30-90 min), and extraction temperature (X3: 40-90 °C)) on the extraction yield of OPH. The correlation analysis of the mathematical regression model indicated that a quadratic polynomial model could be employed to optimize the microwave extraction of OPH. The optimal conditions to obtain the highest recovery of OPH (14.911±0.27%) were as follows: microwave power, 395.56 W; extraction time, 67.11 min; and extraction temperature, 73.33 °C. Under these optimal conditions, the experimental values agreed with the ones predicted by analysis of variance, indicating the high fitness of the model used and the success of response surface methodology for optimizing OPH extraction. After method development, the DPPH radical scavenging activity of the OPH was evaluated. MAE showed clear advantages in terms of high extraction efficiency and radical scavenging activity of the extract within a shorter extraction time. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. System Synthesis in Preliminary Aircraft Design using Statistical Methods

    NASA Technical Reports Server (NTRS)

    DeLaurentis, Daniel; Mavris, Dimitri N.; Schrage, Daniel P.

    1996-01-01

    This paper documents an approach to conceptual and preliminary aircraft design in which system synthesis is achieved using statistical methods, specifically design of experiments (DOE) and response surface methodology (RSM). These methods are employed in order to search the design space for optimum configurations more efficiently. In particular, a methodology incorporating three uses of these techniques is presented. First, response surface equations are formed which represent aerodynamic analyses, in the form of regression polynomials, that are more sophisticated than those generally available in early design stages. Next, a regression equation for an overall evaluation criterion is constructed for the purpose of constrained optimization at the system level. This optimization, though achieved in an innovative way, is still traditional in that it is a point design solution. The methodology put forward here remedies this by introducing uncertainty into the problem, resulting in solutions which are probabilistic in nature. DOE/RSM is used for the third time in this setting. The process is demonstrated through a detailed aero-propulsion optimization of a high speed civil transport. The fundamental goals of the methodology, then, are to introduce higher fidelity disciplinary analyses into conceptual aircraft synthesis and to provide a roadmap for transitioning from point solutions to probabilistic designs (and eventually robust ones).

  18. Biosorption of Lead(II) by Arthrobacter sp. 25: Process Optimization and Mechanism.

    PubMed

    Jin, Yu; Wang, Xin; Zang, Tingting; Hu, Yang; Hu, Xiaojing; Ren, Guangming; Xu, Xiuhong; Qu, Juanjuan

    2016-08-28

    In the present work, Arthrobacter sp. 25, a lead-tolerant bacterium, was assayed to remove lead(II) from aqueous solution. The biosorption process was optimized by response surface methodology (RSM) based on the Box-Behnken design. The relationships between dependent and independent variables were quantitatively determined by a second-order polynomial equation and 3D response surface plots. The biosorption mechanism was explored by characterization of the biosorbent before and after biosorption using atomic force microscopy (AFM), scanning electron microscopy, energy dispersive X-ray spectroscopy, X-ray diffraction, and Fourier transform infrared spectroscopy. The results showed that the maximum adsorption capacity of 9.6 mg/g was obtained at an initial lead ion concentration of 108.79 mg/l, pH value of 5.75, and biosorbent dosage of 9.9 g/l (fresh weight), which was close to the theoretically expected value of 9.88 mg/g. Arthrobacter sp. 25 is an ellipsoidal-shaped bacterium covered with extracellular polymeric substances. The biosorption mechanism involved physical adsorption and microprecipitation as well as ion exchange, and functional groups such as phosphoryl, hydroxyl, amino, amide, carbonyl, and phosphate groups played vital roles in adsorption. The results indicate that Arthrobacter sp. 25 may be potentially used as a biosorbent for low-concentration lead(II) removal from wastewater.

  19. Enhanced styrene recovery from waste polystyrene pyrolysis using response surface methodology coupled with Box-Behnken design.

    PubMed

    Mo, Yu; Zhao, Lei; Wang, Zhonghui; Chen, Chia-Lung; Tan, Giin-Yu Amy; Wang, Jing-Yuan

    2014-04-01

    A response surface methodology coupled with Box-Behnken design (RSM-BBD) approach was developed to enhance styrene recovery from waste polystyrene (WPS) through pyrolysis. The relationship between styrene yield and three selected operating parameters (i.e., temperature, heating rate, and carrier gas flow rate) was investigated. A second-order polynomial equation was successfully built to describe the process and predict styrene yield under the study conditions. The factors identified as statistically significant for styrene production were: temperature, with a quadratic effect; heating rate, with a linear effect; carrier gas flow rate, with a quadratic effect; the interaction between temperature and carrier gas flow rate; and the interaction between heating rate and carrier gas flow rate. The optimum conditions for the current system were determined to be a temperature range of 470-505 °C, a heating rate of 40 °C/min, and a carrier gas flow rate range of 115-140 mL/min. Under such conditions, 64.52% of the WPS was recovered as styrene, which was 12% more than the highest reported yield for reactors of similar size. It is concluded that RSM-BBD is an effective approach for yield optimization of styrene recovery from WPS pyrolysis. Copyright © 2014 Elsevier Ltd. All rights reserved.
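
    For readers unfamiliar with the structure of a Box-Behnken design, the sketch below constructs the coded three-factor design (twelve edge-midpoint runs plus centre points) and maps it to physical units; the factor ranges shown are illustrative placeholders rather than the exact ranges of this study.

```python
import numpy as np
from itertools import combinations, product

def box_behnken(n_factors, n_center=3):
    """Coded Box-Behnken design: +/-1 on each pair of factors, 0 elsewhere."""
    runs = []
    for i, j in combinations(range(n_factors), 2):
        for a, b in product((-1, 1), repeat=2):
            row = [0] * n_factors
            row[i], row[j] = a, b
            runs.append(row)
    runs += [[0] * n_factors for _ in range(n_center)]  # centre points
    return np.array(runs, dtype=float)

coded = box_behnken(3)
# Map coded levels to illustrative physical units for
# (temperature degC, heating rate degC/min, carrier gas flow mL/min).
low = np.array([440.0, 10.0, 100.0])
high = np.array([540.0, 40.0, 150.0])
actual = (low + high) / 2.0 + coded * (high - low) / 2.0
print(coded.shape)  # (15, 3): 12 edge-midpoint runs + 3 centre points
```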

  20. Using Response Surface Analysis to Interpret the Impact of Parent–Offspring Personality Similarity on Adolescent Externalizing Problems

    PubMed Central

    Laceulle, Odillia M.; Van Aken, Marcel A.G.; Ormel, Johan

    2017-01-01

    Abstract Personality similarity between parent and offspring has been suggested to play an important role in offspring's development of externalizing problems. Nonetheless, much remains unknown regarding the nature of this association. This study aimed to investigate the effects of parent–offspring similarity at different levels of personality traits, comparing expectations based on evolutionary and goodness‐of‐fit perspectives. Two waves of data from the TRAILS study (N = 1587, 53% girls) were used to study parent–offspring similarity at different levels of personality traits at age 16 predicting externalizing problems at age 19. Polynomial regression analyses and Response Surface Analyses were used to disentangle effects of different levels and combinations of parents and offspring personality similarity. Although several facets of the offspring's personality had an impact on offspring's externalizing problems, few similarity effects were found. Therefore, there is little support for assumptions based on either an evolutionary or a goodness‐of‐fit perspective. Instead, our findings point in the direction that offspring personality, and at similar levels also parent personality might impact the development of externalizing problems during late adolescence. © 2017 The Authors. European Journal of Personality published by John Wiley & Sons Ltd on behalf of European Association of Personality Psychology PMID:28303077

  1. Use of response surface methodology in a fed-batch process for optimization of tricarboxylic acid cycle intermediates to achieve high levels of canthaxanthin from Dietzia natronolimnaea HS-1.

    PubMed

    Nasri Nasrabadi, Mohammad Reza; Razavi, Seyed Hadi

    2010-04-01

    In this work, we applied statistical experimental design to a fed-batch process for optimization of tricarboxylic acid (TCA) cycle intermediates in order to achieve high-level production of canthaxanthin from Dietzia natronolimnaea HS-1 cultured in beet molasses. A fractional factorial design (screening test) was first conducted on five TCA cycle intermediates. Of the five TCA cycle intermediates investigated via screening tests, alpha-ketoglutarate, oxaloacetate and succinate were selected based on their statistically significant (P<0.05) and positive effects on canthaxanthin production. These significant factors were optimized by means of response surface methodology (RSM) in order to achieve high-level production of canthaxanthin. The experimental results of the RSM were fitted with a second-order polynomial equation by means of a multiple regression technique to identify the relationship between canthaxanthin production and the three TCA cycle intermediates. By means of this statistical design under a fed-batch process, the optimum conditions required to achieve the highest level of canthaxanthin (13,172 ± 25 µg l(-1)) were determined as follows: alpha-ketoglutarate, 9.69 mM; oxaloacetate, 8.68 mM; succinate, 8.51 mM. Copyright 2009 The Society for Biotechnology, Japan. Published by Elsevier B.V. All rights reserved.

  2. Modeling of phase velocity and frequency spectrum of guided Lamb waves in piezoelectric-semiconductor multilayered structures made of AlAs and GaAs

    NASA Astrophysics Data System (ADS)

    Othmani, Cherif; Takali, Farid; Njeh, Anouar

    2017-11-01

    The propagation of guided Lamb waves in piezoelectric-semiconductor multilayered structures made of AlAs and GaAs is modeled in this paper. The Legendre polynomial method is used to calculate dispersion curves, frequency spectra and field distributions of the guided Lamb wave propagation modes in AlAs, GaAs, AlAs/GaAs and AlAs/GaAs/AlAs-1/2/1 structures. Formulations are given for an open-circuit surface. The polynomial method is numerically stable with respect to the total number of layers and the frequency range. This analysis is meaningful for applications of piezoelectric-semiconductor multilayered structures made of AlAs and GaAs, such as novel acoustic devices.

  3. Charge-based MOSFET model based on the Hermite interpolation polynomial

    NASA Astrophysics Data System (ADS)

    Colalongo, Luigi; Richelli, Anna; Kovacs, Zsolt

    2017-04-01

    An accurate charge-based compact MOSFET model is developed using the third order Hermite interpolation polynomial to approximate the relation between surface potential and inversion charge in the channel. This new formulation of the drain current retains the same simplicity of the most advanced charge-based compact MOSFET models such as BSIM, ACM and EKV, but it is developed without requiring the crude linearization of the inversion charge. Hence, the asymmetry and the non-linearity in the channel are accurately accounted for. Nevertheless, the expression of the drain current can be worked out to be analytically equivalent to BSIM, ACM and EKV. Furthermore, thanks to this new mathematical approach the slope factor is rigorously defined in all regions of operation and no empirical assumption is required.
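
    The compact-model algebra is not reproduced here; the sketch below only illustrates generic third-order Hermite interpolation, which matches both the function value and its derivative at two endpoints, using scipy and placeholder charge/potential numbers.

```python
import numpy as np
from scipy.interpolate import CubicHermiteSpline

# Placeholder endpoint data for a hypothetical charge-vs-potential relation.
psi = np.array([0.2, 0.9])            # surface potential at the two ends (V)
q = np.array([1.0e-7, 8.0e-7])        # inversion charge (arbitrary units)
dq_dpsi = np.array([2.0e-7, 1.5e-6])  # charge derivative at the two ends

# Third-order Hermite interpolant: matches q and dq/dpsi at both endpoints.
h = CubicHermiteSpline(psi, q, dq_dpsi)

psi_mid = 0.55
print(h(psi_mid), h(psi_mid, 1))  # interpolated charge and its derivative
```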

  4. Conditioning of sewage sludge with electrolysis: effectiveness and optimizing study to improve dewaterability.

    PubMed

    Yuan, Haiping; Zhu, Nanwen; Song, Lijie

    2010-06-01

    The potential benefits of electrolysis conditioning for sludge dewatering were investigated in this paper. Focus was placed on the effectiveness of, and the factors affecting, this novel application of the electrolysis process. Experiments demonstrated that a significant improvement in sludge dewaterability, evaluated by capillary suction time (CST), could be obtained at a relatively low electrolysis voltage. A Box-Behnken experimental design based on response surface methodology (RSM) was applied to find the optimum of the influencing variables. The optimal values for electrolysis voltage, electrode distance and electrolysis time are 21 V, 5 cm and 12 min, respectively, at which a CST reduction efficiency of 18.8±3.1% could be achieved; this agreed with the value predicted by the polynomial model established in this study. (c) 2010 Elsevier Ltd. All rights reserved.

  5. Thermospheric gravity waves near the source - Comparison of variations in neutral temperature and vertical velocity at Sondre Stromfjord

    NASA Technical Reports Server (NTRS)

    Herrero, F. A.; Mayr, H. G.; Harris, I.; Varosi, F.; Meriwether, J. W., Jr.

    1984-01-01

    Theoretical predictions of thermospheric gravity wave oscillations are compared with observed neutral temperatures and velocities. The data were taken in February 1983 using a Fabry-Perot interferometer located on Greenland, close to impulse heat sources in the auroral oval. The phenomenon was modeled in terms of linearized equations of motion of the atmosphere on a slowly rotating sphere. Legendre polynomials were used as eigenfunctions and the transfer function amplitude surface was characterized by maxima in the wavenumber frequency plane. Good agreement for predicted and observed velocities and temperatures was attained in the 250-300 km altitude. The amplitude of the vertical velocity, however, was not accurately predicted, nor was the temperature variability. The vertical velocity did exhibit maxima and minima in response to corresponding temperature changes.

  6. Thermospheric gravity waves near the source - Comparison of variations in neutral temperature and vertical velocity at Sondre Stromfjord

    NASA Astrophysics Data System (ADS)

    Herrero, F. A.; Mayr, H. G.; Harris, I.; Varosi, F.; Meriwether, J. W., Jr.

    1984-09-01

    Theoretical predictions of thermospheric gravity wave oscillations are compared with observed neutral temperatures and velocities. The data were taken in February 1983 using a Fabry-Perot interferometer located on Greenland, close to impulse heat sources in the auroral oval. The phenomenon was modeled in terms of linearized equations of motion of the atmosphere on a slowly rotating sphere. Legendre polynomials were used as eigenfunctions and the transfer function amplitude surface was characterized by maxima in the wavenumber frequency plane. Good agreement for predicted and observed velocities and temperatures was attained in the 250-300 km altitude. The amplitude of the vertical velocity, however, was not accurately predicted, nor was the temperature variability. The vertical velocity did exhibit maxima and minima in response to corresponding temperature changes.

  7. Medium Optimization for the Production of Fibrinolytic Enzyme by Paenibacillus sp. IND8 Using Response Surface Methodology

    PubMed Central

    Prakash Vincent, Samuel Gnana

    2014-01-01

    Production of fibrinolytic enzyme by a newly isolated Paenibacillus sp. IND8 was optimized using wheat bran in solid state fermentation. A 2^5 full factorial design (first-order model) was applied to screen the key factors: moisture, pH, sucrose, yeast extract, and sodium dihydrogen phosphate. Statistical analysis of the results showed that moisture, sucrose, and sodium dihydrogen phosphate have the most significant effects on fibrinolytic enzyme production (P < 0.05). A central composite design (CCD) was then used to determine the optimal concentrations of these three components, and the experimental results were fitted with a second-order polynomial model at the 95% confidence level (P < 0.05). Overall, a 4.5-fold increase in fibrinolytic enzyme production was achieved in the optimized medium as compared with the unoptimized medium. PMID:24523635
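
    A 2^5 full factorial screening design simply enumerates every combination of the two coded levels of the five factors; a minimal sketch (factor names taken from the abstract, levels coded as ±1) follows.

```python
from itertools import product

factors = ["moisture", "pH", "sucrose", "yeast extract", "NaH2PO4"]
runs = list(product((-1, 1), repeat=len(factors)))  # 2**5 = 32 coded runs
print(len(runs))                     # 32
print(dict(zip(factors, runs[0])))   # first run, coded levels
```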

  8. On the connection coefficients and recurrence relations arising from expansions in series of Laguerre polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2003-05-01

    A formula expressing the Laguerre coefficients of a general-order derivative of an infinitely differentiable function in terms of its original coefficients is proved, and a formula expressing explicitly the derivatives of Laguerre polynomials of any degree and for any order as a linear combination of suitable Laguerre polynomials is deduced. A formula for the Laguerre coefficients of the moments of one single Laguerre polynomial of certain degree is given. Formulae for the Laguerre coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Laguerre coefficients are also obtained. A simple approach in order to build and solve recursively for the connection coefficients between Jacobi-Laguerre and Hermite-Laguerre polynomials is described. An explicit formula for these coefficients between Jacobi and Laguerre polynomials is given, of which the ultra-spherical polynomials of the first and second kinds and Legendre polynomials are important special cases. An analytical formula for the connection coefficients between Hermite and Laguerre polynomials is also obtained.

  9. Compact representation of continuous energy surfaces for more efficient protein design

    PubMed Central

    Hallen, Mark A.; Gainza, Pablo; Donald, Bruce R.

    2015-01-01

    In macromolecular design, conformational energies are sensitive to small changes in atom coordinates, so modeling the small, continuous motions of atoms around low-energy wells confers a substantial advantage in structural accuracy; however, modeling these motions comes at the cost of a very large number of energy function calls, which form the bottleneck in the design calculation. In this work, we remove this bottleneck by consolidating all conformational energy evaluations into the precomputation of a local polynomial expansion of the energy about the “ideal” conformation for each low-energy, “rotameric” state of each residue pair. This expansion is called Energy as Polynomials in Internal Coordinates (EPIC), where the internal coordinates can be sidechain dihedrals, backrub angles, and/or any other continuous degrees of freedom of a macromolecule, and any energy function can be used without adding any asymptotic complexity to the design. We demonstrate that EPIC efficiently represents the energy surface for both molecular-mechanics and quantum-mechanical energy functions, and apply it specifically to protein design to model both sidechain and backbone degrees of freedom. PMID:26089744

  10. Air motion determination by tracking humidity patterns in isentropic layers

    NASA Technical Reports Server (NTRS)

    Mancuso, R. L.; Hall, D. J.

    1975-01-01

    The determination of air motions by tracking humidity patterns in isentropic layers was investigated. Upper-air rawinsonde data from the NSSL network and from the AVE-II pilot experiment were used to simulate the temperature and humidity profile data that will eventually be available from geosynchronous satellites. Polynomial surfaces that move with time were fitted to the mixing-ratio values of the different isentropic layers. The velocity components of the polynomial surfaces are among the coefficients determined to give an optimum fit to the data. In the mid-troposphere, the derived humidity motions were in good agreement with the winds measured by rawinsondes so long as there were few or no clouds and the lapse rate was relatively stable. In the lower troposphere, the humidity motions were unreliable, primarily because of nonadiabatic processes and unstable lapse rates. In the upper troposphere, the humidity amounts were too low to be measured with sufficient accuracy to give reliable results. However, it appears that humidity motions could be used to provide mid-tropospheric wind data over large regions of the globe.

  11. Using Chebyshev polynomial interpolation to improve the computational efficiency of gravity models near an irregularly-shaped asteroid

    NASA Astrophysics Data System (ADS)

    Hu, Shou-Cun; Ji, Jiang-Hui

    2017-12-01

    In asteroid rendezvous missions, the dynamical environment near an asteroid’s surface should be made clear prior to launch of the mission. However, most asteroids have irregular shapes, which lower the efficiency of calculating their gravitational field by adopting the traditional polyhedral method. In this work, we propose a method to partition the space near an asteroid adaptively along three spherical coordinates and use Chebyshev polynomial interpolation to represent the gravitational acceleration in each cell. Moreover, we compare four different interpolation schemes to obtain the best precision with identical initial parameters. An error-adaptive octree division is combined to improve the interpolation precision near the surface. As an example, we take the typical irregularly-shaped near-Earth asteroid 4179 Toutatis to demonstrate the advantage of this method; as a result, we show that the efficiency can be increased by hundreds to thousands of times with our method. Our results indicate that this method can be applicable to other irregularly-shaped asteroids and can greatly improve the evaluation efficiency.
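
    The paper's adaptive octree partitioning and 3D interpolation are not reproduced here; the one-dimensional sketch below shows the basic ingredient, a Chebyshev series fitted at Chebyshev nodes to a hypothetical inverse-square acceleration, using numpy's chebyshev module.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

# Hypothetical 1D stand-in for gravitational acceleration along a ray,
# r in [1, 3] body radii, point-mass field with mu = 1: a(r) = 1 / r**2.
def accel(r):
    return 1.0 / r ** 2

deg = 12
k = np.arange(deg + 1)
nodes = np.cos(np.pi * (2 * k + 1) / (2 * (deg + 1)))  # Chebyshev nodes on [-1, 1]
r_nodes = 2.0 + nodes                                  # map [-1, 1] -> [1, 3]

coef = C.chebfit(nodes, accel(r_nodes), deg)           # interpolating Chebyshev series

r_test = np.linspace(1.0, 3.0, 200)
approx = C.chebval(r_test - 2.0, coef)                 # evaluate on mapped abscissae
print(np.max(np.abs(approx - accel(r_test))))          # small interpolation error
```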

  12. The Baker-Akhiezer Function and Factorization of the Chebotarev-Khrapkov Matrix

    NASA Astrophysics Data System (ADS)

    Antipov, Yuri A.

    2014-10-01

    A new technique is proposed for the solution of the Riemann-Hilbert problem with the Chebotarev-Khrapkov matrix coefficient G(t) = α1(t)I + α2(t)Q(t), where α1(t), α2(t) ∈ H(L), I = diag{1, 1}, and Q(t) is a 2×2 zero-trace polynomial matrix. This problem has numerous applications in elasticity and diffraction theory. The main feature of the method is the removal of essential singularities of the solution to the associated homogeneous scalar Riemann-Hilbert problem on the hyperelliptic surface of an algebraic function by means of the Baker-Akhiezer function. The consequent application of this function for the derivation of the general solution to the vector Riemann-Hilbert problem requires finding the ρ zeros of the Baker-Akhiezer function (ρ is the genus of the surface). These zeros are recovered through the solution of the associated Jacobi problem of inversion of abelian integrals or, equivalently, the determination of the zeros of the associated degree-ρ polynomial and the solution of a certain linear algebraic system of ρ equations.

  13. Optimization of Paclitaxel Containing pH-Sensitive Liposomes By 3 Factor, 3 Level Box-Behnken Design.

    PubMed

    Rane, Smita; Prabhakar, Bala

    2013-07-01

    The aim of this study was to investigate the combined influence of three independent variables in the preparation of paclitaxel-containing pH-sensitive liposomes. A 3-factor, 3-level Box-Behnken design was used to derive a second-order polynomial equation and construct contour plots to predict responses. The independent variables selected were the molar ratio of phosphatidylcholine:diolylphosphatidylethanolamine (X1), the molar concentration of cholesterylhemisuccinate (X2), and the amount of drug (X3). Fifteen batches were prepared by the thin film hydration method and evaluated for percent drug entrapment, vesicle size, and pH sensitivity. The transformed values of the independent variables and the percent drug entrapment were subjected to multiple regression to establish a full-model second-order polynomial equation. The F statistic was calculated to confirm the omission of insignificant terms from the full-model equation and derive a reduced-model polynomial equation to predict the dependent variables. Contour plots were constructed to show the effects of X1, X2, and X3 on the percent drug entrapment. The model was validated for accurate prediction of the percent drug entrapment by performing checkpoint analysis. The computer optimization process and contour plots predicted the levels of the independent variables X1, X2, and X3 (0.99, -0.06, 0, respectively) for a maximized response of percent drug entrapment with constraints on vesicle size and pH sensitivity.

  14. Macromolecular Rate Theory (MMRT) Provides a Thermodynamics Rationale to Underpin the Convergent Temperature Response in Plant Leaf Respiration

    NASA Astrophysics Data System (ADS)

    Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.

    2017-12-01

    Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R), the flux of plant-respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature, as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. (2016) using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT compared to the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from the MMRT analysis: the temperature at which the rate of respiration is maximum (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf), and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R to be 67.0±1.2 °C and 41.4±0.7 °C across global sites. The average curvature (which is negative) is -1.2±0.1 kJ mol-1 K-1. MMRT extends classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes including micro-organism growth rates and ecosystem processes.
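
    As a sketch of the empirical exponential/polynomial form referred to above, where ln R is modelled as a quadratic in temperature, the code below fits invented respiration data by least squares and recovers the temperature of maximum rate from the fitted coefficients.

```python
import numpy as np

# Invented leaf respiration rates (umol CO2 m-2 s-1) at leaf temperatures (degC).
T = np.array([5.0, 10.0, 15.0, 20.0, 25.0, 30.0, 35.0, 40.0])
R = np.array([0.35, 0.50, 0.70, 0.95, 1.25, 1.55, 1.85, 2.05])

# Exponential/polynomial model: ln(R) = a + b*T + c*T**2, fitted by least squares.
A = np.column_stack([np.ones_like(T), T, T ** 2])
(a, b, c), *_ = np.linalg.lstsq(A, np.log(R), rcond=None)

# Temperature of maximum rate (meaningful only when the curvature c is negative).
T_opt = -b / (2.0 * c)
print(f"a = {a:.3f}, b = {b:.4f}, c = {c:.6f}, Topt = {T_opt:.1f} degC")
```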

  15. Approximating Multilinear Monomial Coefficients and Maximum Multilinear Monomials in Multivariate Polynomials

    NASA Astrophysics Data System (ADS)

    Chen, Zhixiang; Fu, Bin

    This paper is our third step towards developing a theory of testing monomials in multivariate polynomials and concentrates on two problems: (1) how to compute the coefficients of multilinear monomials; and (2) how to find a maximum multilinear monomial when the input is a ΠΣΠ polynomial. We first prove that the first problem is #P-hard and then devise an O*(3^n s(n)) upper bound for this problem for any polynomial represented by an arithmetic circuit of size s(n). Later, this upper bound is improved to O*(2^n) for ΠΣΠ polynomials. We then design fully polynomial-time randomized approximation schemes for this problem for ΠΣ polynomials. On the negative side, we prove that, even for ΠΣΠ polynomials with terms of degree ≤ 2, the first problem cannot be approximated at all for any approximation factor ≥ 1, nor "weakly approximated" in a much relaxed setting, unless P=NP. For the second problem, we first give a polynomial time λ-approximation algorithm for ΠΣΠ polynomials with terms of degrees no more than a constant λ ≥ 2. On the inapproximability side, we give an n^((1-ε)/2) lower bound, for any ε > 0, on the approximation factor for ΠΣΠ polynomials. When the degrees of the terms in these polynomials are constrained to ≤ 2, we prove a 1.0476 lower bound, assuming P ≠ NP, and a higher 1.0604 lower bound, assuming the Unique Games Conjecture.

  16. Orthonormal vector general polynomials derived from the Cartesian gradient of the orthonormal Zernike-based polynomials.

    PubMed

    Mafusire, Cosmas; Krüger, Tjaart P J

    2018-06-01

    The concept of orthonormal vector circle polynomials is revisited by deriving a set from the Cartesian gradient of Zernike polynomials in a unit circle using a matrix-based approach. The heart of this model is a closed-form matrix equation of the gradient of Zernike circle polynomials expressed as a linear combination of lower-order Zernike circle polynomials related through a gradient matrix. This is a sparse matrix whose elements are two-dimensional standard basis transverse Euclidean vectors. Using the outer product form of the Cholesky decomposition, the gradient matrix is used to calculate a new matrix, which we used to express the Cartesian gradient of the Zernike circle polynomials as a linear combination of orthonormal vector circle polynomials. Since this new matrix is singular, the orthonormal vector polynomials are recovered by reducing the matrix to its row echelon form using the Gauss-Jordan elimination method. We extend the model to derive orthonormal vector general polynomials, which are orthonormal in a general pupil by performing a similarity transformation on the gradient matrix to give its equivalent in the general pupil. The outer form of the Gram-Schmidt procedure and the Gauss-Jordan elimination method are then applied to the general pupil to generate the orthonormal vector general polynomials from the gradient of the orthonormal Zernike-based polynomials. The performance of the model is demonstrated with a simulated wavefront in a square pupil inscribed in a unit circle.

  17. Discrete-time state estimation for stochastic polynomial systems over polynomial observations

    NASA Astrophysics Data System (ADS)

    Hernandez-Gonzalez, M.; Basin, M.; Stepanov, O.

    2018-07-01

    This paper presents a solution to the mean-square state estimation problem for stochastic nonlinear polynomial systems over polynomial observations corrupted by additive white Gaussian noise. The solution is given in two steps: (a) computing the time-update equations and (b) computing the measurement-update equations for the state estimate and error covariance matrix. A closed form of this filter is obtained by expressing conditional expectations of polynomial terms as functions of the state estimate and error covariance. As a particular case, the mean-square filtering equations are derived for a third-degree polynomial system with second-degree polynomial measurements. Numerical simulations show the effectiveness of the proposed filter compared to the extended Kalman filter.

  18. Nodal Statistics for the Van Vleck Polynomials

    NASA Astrophysics Data System (ADS)

    Bourget, Alain

    The Van Vleck polynomials naturally arise from the generalized Lamé equation as the polynomials for which that equation has a polynomial solution of some degree k. In this paper, we compute the limiting distribution, as well as the limiting mean level spacings distribution, of the zeros of any Van Vleck polynomial as N --> ∞.

  19. Legendre modified moments for Euler's constant

    NASA Astrophysics Data System (ADS)

    Prévost, Marc

    2008-10-01

    Polynomial moments are often used for the computation of Gauss quadrature to stabilize the numerical calculation of the orthogonal polynomials, see [W. Gautschi, Computational aspects of orthogonal polynomials, in: P. Nevai (Ed.), Orthogonal Polynomials-Theory and Practice, NATO ASI Series, Series C: Mathematical and Physical Sciences, vol. 294. Kluwer, Dordrecht, 1990, pp. 181-216 [6]; W. Gautschi, On the sensitivity of orthogonal polynomials to perturbations in the moments, Numer. Math. 48(4) (1986) 369-382 [5]; W. Gautschi, On generating orthogonal polynomials, SIAM J. Sci. Statist. Comput. 3(3) (1982) 289-317 [4

  20. Radius of curvature measurement of spherical smooth surfaces by multiple-beam interferometry in reflection

    NASA Astrophysics Data System (ADS)

    Abdelsalam, D. G.; Shaalan, M. S.; Eloker, M. M.; Kim, Daesuk

    2010-06-01

    In this paper a method is presented to accurately measure the radius of curvature of different types of curved surfaces, with radii of curvature of 38 000, 18 000 and 8000 mm, using multiple-beam interference fringes in reflection. The images captured by the digital detector were corrected by the flat-fielding method. The corrected images were analyzed and the form of the surfaces was obtained. A 3D profile for the three types of surfaces was obtained using Zernike polynomial fitting. Some sources of uncertainty in the measurement were calculated by means of ray-tracing simulations, and the uncertainty budget was estimated to be within λ/40.

  1. A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty

    NASA Astrophysics Data System (ADS)

    Kewlani, Gaurav; Crawford, Justin; Iagnemma, Karl

    2012-05-01

    The ability of ground vehicles to quickly and accurately analyse their dynamic response to a given input is critical to their safety and efficient autonomous operation. In field conditions, significant uncertainty is associated with terrain and/or vehicle parameter estimates, and this uncertainty must be considered in the analysis of vehicle motion dynamics. Here, polynomial chaos approaches that explicitly consider parametric uncertainty during modelling of vehicle dynamics are presented. They are shown to be computationally more efficient than the standard Monte Carlo scheme, and experimental results compared with the simulation results performed on ANVEL (a vehicle simulator) indicate that the method can be utilised for efficient and accurate prediction of vehicle motion in realistic scenarios.
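
    The authors' vehicle model is not reproduced here; the sketch below illustrates a non-intrusive Hermite polynomial chaos expansion in one Gaussian-uncertain parameter, recovering the mean and variance of a hypothetical response from the expansion coefficients and checking them against Monte Carlo.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

# Hypothetical scalar response of a vehicle model to one uncertain parameter
# xi ~ N(0, 1), e.g. peak acceleration as a smooth function of a terrain parameter.
def response(xi):
    return np.exp(0.3 * xi) + 0.1 * xi ** 2

order = 6
nodes, weights = He.hermegauss(order + 4)  # Gauss quadrature for weight exp(-x^2/2)

# Non-intrusive projection: c_n = E[response(xi) * He_n(xi)] / n!
coeffs = np.empty(order + 1)
for n in range(order + 1):
    He_n = He.hermeval(nodes, [0.0] * n + [1.0])
    coeffs[n] = np.sum(weights * response(nodes) * He_n) / (sqrt(2.0 * pi) * factorial(n))

mean_pc = coeffs[0]
var_pc = sum(coeffs[n] ** 2 * factorial(n) for n in range(1, order + 1))

# Monte Carlo reference for comparison.
rng = np.random.default_rng(3)
samples = response(rng.standard_normal(200_000))
print(mean_pc, samples.mean())
print(var_pc, samples.var())
```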

  2. Digital SAR processing using a fast polynomial transform

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Lipes, R. G.; Butman, S. A.; Reed, I. S.; Rubin, A. L.

    1984-01-01

    A new digital processing algorithm based on the fast polynomial transform is developed for producing images from Synthetic Aperture Radar data. This algorithm enables the computation of the two dimensional cyclic correlation of the raw echo data with the impulse response of a point target, thereby reducing distortions inherent in one dimensional transforms. This SAR processing technique was evaluated on a general-purpose computer and an actual Seasat SAR image was produced. However, regular production runs will require a dedicated facility. It is expected that such a new SAR processing algorithm could provide the basis for a real-time SAR correlator implementation in the Deep Space Network. Previously announced in STAR as N82-11295

  3. Optimizing the physical-chemical properties of carbon nanotubes (CNT) and graphene nanoplatelets (GNP) on Cu(II) adsorption.

    PubMed

    Rosenzweig, Shirley; Sorial, George A; Sahle-Demessie, Endalkachew; McAvoy, Drew C

    2014-08-30

    Systematic experiments on copper adsorption by 10 different commercially available nanomaterials were conducted to study the influence of physical-chemical properties and their interactions. Design of experiments and response surface methodology were used to develop a polynomial model to predict the maximum copper adsorption (initial concentration, Co = 10 mg/L) per mass of nanomaterial, qe, using multivariable regression and a maximum R-square criterion. The best subset of properties for predicting qe, in order of significant contribution to the model, was: bulk density, ID, mesopore volume, tube length, pore size, zeta-charge, specific surface area and OD. The highest experimental qe observed was for an alcohol-functionalized MWCNT (16.7 mg/g) with relatively high bulk density (0.48 g/cm3), ID of 2-5 nm, length of 10-30 μm and OD < 8 nm. Graphene nanoplatelets (GNP) showed poor adsorptive capacity, associated with stacked nanoplatelets, but good colloidal stability due to a highly functionalized surface. Good adsorption results for pristine SWCNT indicated that tubes with small diameter were more strongly associated with good adsorption than a functionalized surface. XPS and ICP analyses explored surface chemistry and purity, but pHpzc and zeta-charge were ultimately applied to indicate the degree of functionalization. Optimum CNT were identified in the scatter plot, but actual manufacturing processes introduced size and shape variations which interfered with the final property results. Copyright © 2014 Elsevier B.V. All rights reserved.

  4. On multiple orthogonal polynomials for discrete Meixner measures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sorokin, Vladimir N

    2010-12-07

    The paper examines two examples of multiple orthogonal polynomials generalizing orthogonal polynomials of a discrete variable, namely the Meixner polynomials. One example is bound up with a discrete Nikishin system, and the other leads to essentially new effects. The limit distribution of the zeros of the polynomials is obtained in terms of logarithmic equilibrium potentials and in terms of algebraic curves. Bibliography: 9 titles.

  5. Profile shape optimization in multi-jet impingement cooling of dimpled topologies for local heat transfer enhancement

    NASA Astrophysics Data System (ADS)

    Negi, Deepchand Singh; Pattamatta, Arvind

    2015-04-01

    The present study deals with shape optimization of dimples on the target surface in multi-jet impingement heat transfer. A Bezier polynomial formulation is incorporated to generate the dimple profile shapes, and a multi-objective optimization is performed. The optimized dimple shape exhibits higher local Nusselt number values than the reference hemispherical dimpled plate, and the optimized shape can be used to alleviate local temperature hot spots on the target surface.
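
    As a sketch of the Bezier polynomial formulation used to generate profile shapes, the code below evaluates a Bezier curve from control points in Bernstein form; the control points are hypothetical stand-ins for a dimple cross-section, not the optimized profile.

```python
import numpy as np
from math import comb

def bezier_curve(control_points, n_samples=100):
    """Evaluate a Bezier curve of arbitrary degree in Bernstein form."""
    P = np.asarray(control_points, dtype=float)
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    return sum(comb(n, i) * (1 - t) ** (n - i) * t ** i * P[i] for i in range(n + 1))

# Hypothetical control points (x, depth) for one dimple cross-section:
# flat at the rims, deepest near the middle.
ctrl = [(0.0, 0.0), (0.3, -0.25), (0.7, -0.25), (1.0, 0.0)]
profile = bezier_curve(ctrl)
print(profile[:3])
```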

  6. Direct calculation of modal parameters from matrix orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    El-Kafafy, Mahmoud; Guillaume, Patrick

    2011-10-01

    The object of this paper is to introduce a new technique to derive the global modal parameters (i.e. system poles) directly from estimated matrix orthogonal polynomials. This contribution generalizes the results given in Rolain et al. (1994) [5] and Rolain et al. (1995) [6] for scalar orthogonal polynomials to multivariable (matrix) orthogonal polynomials for multiple input multiple output (MIMO) systems. Using orthogonal polynomials improves the numerical properties of the estimation process. However, the derivation of the modal parameters from the orthogonal polynomials is in general ill-conditioned if not handled properly; in particular, the transformation of the coefficients from an orthogonal polynomial basis to a power polynomial basis is known to be ill-conditioned. In this paper a new approach is proposed to compute the system poles directly from the multivariable orthogonal polynomials, so that high-order models can be used without numerical problems. The proposed method is compared with existing methods (Van Der Auweraer and Leuridan (1987) [4]; Chen and Xu (2003) [7]). For this comparative study, simulated as well as experimental data are used.

  7. Photocatalytic water splitting over titania supported copper and nickel oxide in photoelectrochemical cell; optimization of photoconversion efficiency

    NASA Astrophysics Data System (ADS)

    Muti Mohamed, Norani; Bashiri, Robabeh; Kait, Chong Fai; Sufian, Suriati

    2018-04-01

    We investigated the influence of varying the preparation variables of TiO2 on the efficiency of photocatalytic water splitting in a photoelectrochemical (PEC) cell. A hydrothermal-assisted sol-gel technique was applied to synthesize TiO2 modified with nickel and copper oxide. The variation of water (mL), acid (mL) and total metal loading (%) was mathematically modelled using a central composite design (CCD) from the response surface method (RSM) to explore the single and combined effects of the parameters on the system performance. The experimental data were fitted using a quadratic polynomial regression model from analysis of variance (ANOVA). The coefficient of determination value of 98% confirms the linear relationship between the experimental and predicted values. The amount of water had the maximum effect on the photoconversion efficiency due to its direct effect on the crystallinity and the number of defects on the surface of the photocatalyst. The optimal parameter values for maximum photoconversion efficiency were 16 mL, 3 mL and 5% for water, acid and total metal loading, respectively.

  8. Statistical optimization of arsenic biosorption by microbial enzyme via Ca-alginate beads.

    PubMed

    Banerjee, Suchetana; Banerjee, Anindita; Sarkar, Priyabrata

    2018-04-16

    Bioremediation of arsenic using green technology via microbial enzymes has attracted scientists due to its simplicity and cost effectiveness. Statistical optimization of arsenate bioremediation was conducted with the enzyme arsenate reductase extracted from the arsenic-tolerant bacterium Pseudomonas alcaligenes. Response surface methodology based on a Box-Behnken design matrix was performed to determine the optimal operational conditions of the multivariable system and their interactive effects on the bioremediation process. The highest biosorptive activity of 96.2 µg g-1 of beads was achieved under optimized conditions (pH = 7.0; As(V) concentration = 1000 ppb; time = 2 h). SEM analysis showed the morphological changes on the surface of the enzyme-immobilized, glutaraldehyde-crosslinked Ca-alginate beads. The immobilized enzyme retained its activity for 8 cycles. ANOVA with a high correlation coefficient (R2 > 0.99) and a low "Prob > F" value (<0.0001) corroborated the second-order polynomial model for the biosorption process. This study on the adsorptive removal of As(V) by the enzyme-loaded biosorbent revealed a possible way of applying it in large-scale treatment of As(V)-contaminated water bodies.
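
    A minimal sketch of the kind of second-order (quadratic) polynomial fit used in Box-Behnken response surface studies is given below; the coded factor levels and response values are hypothetical placeholders, not the paper's data.

```python
# Sketch of fitting a second-order response-surface model to Box-Behnken data.
import numpy as np

def quadratic_design_matrix(X):
    """Columns: 1, x_i, x_i^2, x_i*x_j for a full second-order polynomial."""
    n, k = X.shape
    cols = [np.ones(n)]
    cols += [X[:, i] for i in range(k)]
    cols += [X[:, i] ** 2 for i in range(k)]
    cols += [X[:, i] * X[:, j] for i in range(k) for j in range(i + 1, k)]
    return np.column_stack(cols)

# Coded levels of three factors (e.g. pH, As(V) concentration, time) and a response
X = np.array([[-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
              [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
              [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
              [0, 0, 0], [0, 0, 0], [0, 0, 0]], dtype=float)
y = np.array([70, 75, 80, 85, 72, 82, 78, 90, 68, 74, 76, 88, 95, 96, 96.2])

A = quadratic_design_matrix(X)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares model coefficients
y_hat = A @ coef
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(f"R^2 = {r2:.3f}")
```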

  9. Sustained release biodegradable solid lipid microparticles: Formulation, evaluation and statistical optimization by response surface methodology.

    PubMed

    Hanif, Muhammad; Khan, Hafeez Ullah; Afzal, Samina; Mahmood, Asif; Maheen, Safirah; Afzal, Khurram; Iqbal, Nabila; Andleeb, Mehwish; Abbas, Nazar

    2017-12-20

    For preparing nebivolol-loaded solid lipid microparticles (SLMs) by the solvent evaporation microencapsulation process from carnauba wax and glyceryl monostearate, a central composite design was used to study the impact of independent variables on yield (Y1), entrapment efficiency (Y2) and drug release (Y3). SLMs in the 10-40 μm size range, with good rheological behavior and spherical smooth surfaces, were produced. Fourier transform infrared spectroscopy, differential scanning calorimetry and X-ray diffractometry pointed to compatibility between formulation components, and the zeta-potential study confirmed better stability due to the presence of negative charge (-20 to -40 mV). The obtained outcomes for Y1 (29-86%), Y2 (45-83%) and Y3 (49-86%) were analyzed by polynomial equations and the suggested quadratic model was validated. Nebivolol release from SLMs at pH 1.2 and 6.8 was significantly (p < 0.05) affected by lipid concentration. The release mechanism followed Higuchi and zero-order models, while the n > 0.85 value (Korsmeyer-Peppas) suggested slow erosion along with diffusion. The optimized SLMs have the potential to improve nebivolol oral bioavailability.

  10. Surface Catalysis and Characterization of Proposed Candidate TPS for Access-to-Space Vehicles

    NASA Technical Reports Server (NTRS)

    Stewart, David A.

    1997-01-01

    Surface properties have been obtained on several classes of thermal protection systems (TPS) using data from both side-arm-reactor and arc-jet facilities. Thermochemical stability, optical properties, and coefficients for atom recombination were determined for candidate TPS proposed for single-stage-to-orbit vehicles. The systems included rigid fibrous insulations, blankets, reinforced carbon-carbon, and metals. Test techniques, theories used to define arc-jet and side-arm-reactor flow, and material surface properties are described. Total hemispherical emittance and atom recombination coefficients for each candidate TPS are summarized in the form of polynomial and Arrhenius expressions.

  11. Independence polynomial and matching polynomial of the Koch network

    NASA Astrophysics Data System (ADS)

    Liao, Yunhua; Xie, Xiaoliang

    2015-11-01

    The lattice gas model and the monomer-dimer model are two classical models in statistical mechanics. It is well known that the partition functions of these two models are associated with the independence polynomial and the matching polynomial in graph theory, respectively. Both polynomials have been shown to belong to the “#P-complete” class, which indicates that the problems are computationally “intractable”. We consider these two polynomials for the Koch networks, which are scale-free with small-world effects. Explicit recurrences are derived, and explicit formulae are presented for the number of independent sets of a certain type.

  12. Automatic bone outer contour extraction from B-modes ultrasound images based on local phase symmetry and quadratic polynomial fitting

    NASA Astrophysics Data System (ADS)

    Karlita, Tita; Yuniarno, Eko Mulyanto; Purnama, I. Ketut Eddy; Purnomo, Mauridhi Hery

    2017-06-01

    Analyzing ultrasound (US) images to obtain the shapes and structures of particular anatomical regions is an interesting field of study, since US imaging is a non-invasive method to capture internal structures of the human body. However, bone segmentation in US images is still challenging because it is strongly influenced by speckle noise and the image quality is poor. This paper proposes a combination of local phase symmetry and quadratic polynomial fitting methods to extract the bone outer contour (BOC) from two-dimensional (2D) B-mode US images as an initial step of three-dimensional (3D) bone surface reconstruction. Using local phase symmetry, the bone is initially extracted from the US images. The BOC is then extracted by scanning one pixel on the bone boundary in each column of the US image using a first-phase-features searching method. Quadratic polynomial fitting is utilized to refine and estimate the pixel locations that fail to be detected during the extraction process. A hole filling method is then applied, utilizing the polynomial coefficients to fill the gaps with new pixels. The proposed method is able to estimate the new pixel positions and ensures smoothness and continuity of the contour path. Evaluations are done using cow and goat bones by comparing the resulting BOCs with contours produced by manual segmentation and by Canny edge detection. The evaluation shows that the proposed method produces an excellent result, with average MSE before and after hole filling of 0.65.
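
    The quadratic-fit hole-filling step can be sketched as follows, assuming the detected contour is stored as one row coordinate per image column with NaN marking failed detections; the sample values are illustrative only.

```python
# Sketch of quadratic-polynomial hole filling: columns where contour detection
# failed (NaN) are filled from a quadratic fit to the valid points.
import numpy as np

def fill_contour_gaps(rows):
    """rows[i] = detected contour row for image column i, NaN where detection failed."""
    rows = np.asarray(rows, dtype=float)
    cols = np.arange(rows.size)
    valid = ~np.isnan(rows)
    coeffs = np.polyfit(cols[valid], rows[valid], deg=2)   # quadratic fit
    filled = rows.copy()
    filled[~valid] = np.polyval(coeffs, cols[~valid])      # estimate missing pixels
    return filled

# Hypothetical bone-contour rows with two missed columns
print(fill_contour_gaps([50, 48, np.nan, 45, 44, np.nan, 44, 45]))
```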

  13. Mixed kernel function support vector regression for global sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Kai; Lu, Zhenzhou; Wei, Yuhao; Shi, Yan; Zhou, Yicheng

    2017-11-01

    Global sensitivity analysis (GSA) plays an important role in exploring the respective effects of input variables on an assigned output response. Among the many sensitivity measures in the literature, the Sobol indices have attracted much attention since they can provide accurate information for most models. In this paper, a mixed kernel function (MKF) based support vector regression (SVR) model is employed to evaluate the Sobol indices at low computational cost. By the proposed derivation, the estimates of the Sobol indices can be obtained by post-processing the coefficients of the SVR meta-model. The MKF is constituted by the orthogonal polynomial kernel function and the Gaussian radial basis kernel function; thus the MKF possesses both the global characteristic advantage of the polynomial kernel function and the local characteristic advantage of the Gaussian radial basis kernel function. The proposed approach is suitable for high-dimensional and non-linear problems. Performance of the proposed approach is validated on various analytical functions and compared with the popular polynomial chaos expansion (PCE). Results demonstrate that the proposed approach is an efficient method for global sensitivity analysis.
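
    For readers unfamiliar with Sobol indices, the sketch below estimates first-order indices for the Ishigami test function with a plain Monte Carlo (pick-freeze) estimator; this is the brute-force reference definition, not the SVR meta-model post-processing proposed in the paper.

```python
# Monte Carlo estimate of first-order Sobol indices for the Ishigami function.
import numpy as np

def ishigami(x, a=7.0, b=0.1):
    return np.sin(x[:, 0]) + a * np.sin(x[:, 1])**2 + b * x[:, 2]**4 * np.sin(x[:, 0])

rng = np.random.default_rng(0)
n, d = 100_000, 3
A = rng.uniform(-np.pi, np.pi, (n, d))
B = rng.uniform(-np.pi, np.pi, (n, d))
fA, fB = ishigami(A), ishigami(B)
var = np.var(np.concatenate([fA, fB]))
for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                               # A with column i taken from B
    S_i = np.mean(fB * (ishigami(ABi) - fA)) / var    # Saltelli (2010) estimator
    print(f"first-order S_{i + 1} ~ {S_i:.3f}")
```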

  14. Asymptotically extremal polynomials with respect to varying weights and application to Sobolev orthogonality

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2008-10-01

    We study the asymptotic behavior of the zeros of a sequence of polynomials whose weighted norms, with respect to a sequence of weight functions, have the same nth root asymptotic behavior as the weighted norms of certain extremal polynomials. This result is applied to obtain the (contracted) weak zero distribution for orthogonal polynomials with respect to a Sobolev inner product with exponential weights of the form e^(-φ(x)), giving a unified treatment for the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdös (when φ grows faster than any polynomial at infinity) cases. In addition, we provide a new proof for the bound of the distance of the zeros to the convex hull of the support for these Sobolev orthogonal polynomials.

  15. A study of the orthogonal polynomials associated with the quantum harmonic oscillator on constant curvature spaces

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vignat, C.; Lamberti, P. W.

    2009-10-15

    Recently, Carinena et al. [Ann. Phys. 322, 434 (2007)] introduced a new family of orthogonal polynomials that appear in the wave functions of the quantum harmonic oscillator in two-dimensional constant curvature spaces. They are a generalization of the Hermite polynomials and will be called curved Hermite polynomials in the following. We show that these polynomials are naturally related to the relativistic Hermite polynomials introduced by Aldaya et al. [Phys. Lett. A 156, 381 (1991)], and thus are Jacobi polynomials. Moreover, we exhibit a natural bijection between the solutions of the quantum harmonic oscillator on negative curvature spaces and on positive curvature spaces. Finally, we show a maximum entropy property for the ground states of these oscillators.

  16. Stabilisation of discrete-time polynomial fuzzy systems via a polynomial Lyapunov approach

    NASA Astrophysics Data System (ADS)

    Nasiri, Alireza; Nguang, Sing Kiong; Swain, Akshya; Almakhles, Dhafer

    2018-02-01

    This paper deals with the problem of designing a controller for a class of discrete-time nonlinear systems represented by a discrete-time polynomial fuzzy model. Most of the existing control design methods for discrete-time fuzzy polynomial systems cannot guarantee that their Lyapunov function is a radially unbounded polynomial function, hence global stability cannot be assured. The proposed control design in this paper guarantees a radially unbounded polynomial Lyapunov function, which ensures global stability. In the proposed design, a state feedback structure is considered and the non-convexity problem is solved by incorporating an integrator into the controller. Sufficient conditions for stability are derived in terms of polynomial matrix inequalities, which are solved via SOSTOOLS in MATLAB. A numerical example is presented to illustrate the effectiveness of the proposed controller.

  17. Preliminary results of neural networks and zernike polynomials for classification of videokeratography maps.

    PubMed

    Carvalho, Luis Alberto

    2005-02-01

    Our main goal in this work was to develop an artificial neural network (NN) that could classify specific types of corneal shapes using Zernike coefficients as input. Other authors have implemented successful NN systems in the past and have demonstrated their efficiency using different parameters. Our claim is that, given the increasing popularity of Zernike polynomials among the eye care community, this may be an interesting choice to add complementing value and precision to existing methods. By using a simple and well-documented corneal surface representation scheme, which relies on corneal elevation information, one can generate simple NN input parameters that are independent of curvature definition and that are also efficient. We have used the Matlab Neural Network Toolbox (MathWorks, Natick, MA) to implement a three-layer feed-forward NN with 15 inputs and 5 outputs. A database from an EyeSys System 2000 (EyeSys Vision, Houston, TX) videokeratograph installed at the Escola Paulista de Medicina-Sao Paulo was used. This database contained an unknown number of corneal types. From this database, two specialists selected 80 corneas that could be clearly classified into five distinct categories: (1) normal, (2) with-the-rule astigmatism, (3) against-the-rule astigmatism, (4) keratoconus, and (5) post-laser-assisted in situ keratomileusis. The corneal height (SAG) information of the 80 data files was fitted with the first 15 Vision Science and its Applications (VSIA) standard Zernike coefficients, which were individually used to feed the 15 neurons of the input layer. The five output neurons were associated with the five typical corneal shapes. A group of 40 cases was randomly selected from the larger group of 80 corneas and used as the training set. The NN responses were statistically analyzed in terms of sensitivity [true positive/(true positive + false negative)], specificity [true negative/(true negative + false positive)], and precision [(true positive + true negative)/total number of cases]. The mean values for these parameters were, respectively, 78.75%, 97.81%, and 94%. Although we have used a relatively small training and testing set, the results presented here should be considered promising. They are certainly an indication of the potential of Zernike polynomials as reliable parameters, at least in the cases presented here, as input data for artificial intelligence automation of the diagnosis process of videokeratography examinations. This technique should facilitate the implementation of, and add value to, the classification methods already available. We also briefly discuss certain special properties of Zernike polynomials that we think make them suitable as NN inputs for this type of application.
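
    A minimal stand-in for the described classifier, a small feed-forward network taking 15 Zernike coefficients and predicting one of five corneal categories, is sketched below; scikit-learn replaces the MATLAB toolbox used in the paper, and the data are random placeholders rather than real videokeratography fits.

```python
# Sketch of a feed-forward classifier with 15 Zernike-coefficient inputs
# and 5 corneal-shape outputs (placeholder data).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(80, 15))            # 80 corneas x 15 Zernike coefficients
y = rng.integers(0, 5, size=80)          # 5 shape categories

X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=40, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```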

  18. Optimization of microwave-assisted extraction (MAE) of coriander phenolic antioxidants - response surface methodology approach.

    PubMed

    Zeković, Zoran; Vladić, Jelena; Vidović, Senka; Adamović, Dušan; Pavlić, Branimir

    2016-10-01

    Microwave-assisted extraction (MAE) of polyphenols from coriander seeds was optimized by simultaneous maximization of total phenolic (TP) and total flavonoid (TF) yields, as well as maximized antioxidant activity determined by 1,1-diphenyl-2-picrylhydrazyl and reducing power assays. A Box-Behnken experimental design with response surface methodology (RSM) was used for optimization of MAE. Extraction time (X1, 15-35 min), ethanol concentration (X2, 50-90% w/w) and irradiation power (X3, 400-800 W) were investigated as independent variables. Experimentally obtained values of the investigated responses were fitted to a second-order polynomial model, and multiple regression analysis and analysis of variance were used to determine the fitness of the model and the optimal conditions. The optimal MAE conditions for simultaneous maximization of polyphenol yield and increased antioxidant activity were an extraction time of 19 min, an ethanol concentration of 63% and an irradiation power of 570 W, while the predicted values of TP, TF, IC50 and EC50 at optimal MAE conditions were 311.23 mg gallic acid equivalent per 100 g dry weight (DW), 213.66 mg catechin equivalent per 100 g DW, 0.0315 mg mL-1 and 0.1311 mg mL-1, respectively. RSM was successfully used for multi-response optimization of coriander seed polyphenols. Comparison of optimized MAE with conventional extraction techniques confirmed that MAE provides significantly higher polyphenol yields and extracts with increased antioxidant activity. © 2016 Society of Chemical Industry.

  19. Hadamard Factorization of Stable Polynomials

    NASA Astrophysics Data System (ADS)

    Loredo-Villalobos, Carlos Arturo; Aguirre-Hernández, Baltazar

    2011-11-01

    The stable (Hurwitz) polynomials are important in the study of systems of differential equations and control theory (see [7] and [19]). A property of these polynomials is related to the Hadamard product. Consider two polynomials p, q ∈ R[x]: p(x) = a_n x^n + a_{n-1} x^{n-1} + ... + a_1 x + a_0 and q(x) = b_m x^m + b_{m-1} x^{m-1} + ... + b_1 x + b_0. The Hadamard product p × q is defined as (p × q)(x) = a_k b_k x^k + a_{k-1} b_{k-1} x^{k-1} + ... + a_1 b_1 x + a_0 b_0, where k = min(m, n). Some results (see [16]) show that if p, q ∈ R[x] are stable polynomials then p × q is also stable, i.e. the class of stable polynomials is closed under the Hadamard product; however, the converse is not always true, that is, not every stable polynomial of degree n > 4 has a factorization into two stable polynomials of the same degree (see [15]). In this work we give some conditions for the existence of a Hadamard factorization for stable polynomials.
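
    The coefficient-wise Hadamard product and a numerical Hurwitz check can be sketched as follows; the two example polynomials are arbitrary stable factors chosen for illustration.

```python
# Sketch of the Hadamard product of two polynomials and a numerical
# Hurwitz (stability) check via the roots' real parts.
import numpy as np

def hadamard(p, q):
    """p, q: coefficient lists in increasing degree (a0, a1, ...)."""
    k = min(len(p), len(q))
    return [p[i] * q[i] for i in range(k)]

def is_hurwitz(coeffs):
    """True if all roots of the polynomial (increasing-degree coeffs) lie in Re(s) < 0."""
    roots = np.roots(coeffs[::-1])        # np.roots expects decreasing degree
    return bool(np.all(roots.real < 0))

p = [6, 11, 6, 1]     # (s+1)(s+2)(s+3) = s^3 + 6s^2 + 11s + 6, stable
q = [2, 3, 1]         # (s+1)(s+2)     = s^2 + 3s + 2, stable
print(hadamard(p, q), is_hurwitz(hadamard(p, q)))
```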

  20. On the construction of recurrence relations for the expansion and connection coefficients in series of Jacobi polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2004-01-01

    Formulae expressing explicitly the Jacobi coefficients of a general-order derivative (integral) of an infinitely differentiable function in terms of its original expansion coefficients, and formulae for the derivatives (integrals) of Jacobi polynomials in terms of Jacobi polynomials themselves, are stated. A formula for the Jacobi coefficients of the moments of a single Jacobi polynomial of a certain degree is proved. Another formula for the Jacobi coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its original expansion coefficients is also given. A simple approach for constructing and solving recursively for the connection coefficients between two families of Jacobi polynomials is described. Explicit formulae for these coefficients between ultraspherical and Jacobi polynomials are deduced, of which the Chebyshev polynomials of the first and second kinds and the Legendre polynomials are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Jacobi and Hermite-Jacobi polynomials are developed.

  1. Parameterization of the shape of intracranial saccular aneurysms using Legendre polynomials.

    PubMed

    Banatwala, M; Farley, C; Feinberg, D; Humphrey, J D

    2005-04-01

    Our recent studies of the nonlinear mechanics of saccular aneurysms suggest that it is unlikely that these lesions enlarge or rupture via material (limit point) or dynamic (resonance) instabilities. Rather, there is a growing body of evidence from both vascular biology and biomechanical analyses that implicate mechanosensitive growth and remodeling processes. There is, therefore, a pressing need to quantify regional multiaxial wall stresses which, because of the membrane-like behavior of many aneurysms, necessitates better information on the applied loads and regional surface curvatures. Herein, we present and illustrate a method whereby regional curvatures can be estimated easily for sub-classes of human aneurysms based on clinically available data from magnetic resonance angiography (MRA). Whereas Legendre polynomials are used to illustrate this approach, different functions may prove useful for different sub-classes of lesions.

  2. Optimisation of Ethanol-Reflux Extraction of Saponins from Steamed Panax notoginseng by Response Surface Methodology and Evaluation of Hematopoiesis Effect.

    PubMed

    Hu, Yupiao; Cui, Xiuming; Zhang, Zejun; Chen, Lijuan; Zhang, Yiming; Wang, Chengxiao; Yang, Xiaoyan; Qu, Yuan; Xiong, Yin

    2018-05-17

    The present study aims to optimize the ethanol-reflux extraction conditions for extracting saponins from steamed Panax notoginseng (SPN). Four variables, including the extraction time (0.5-2.5 h), ethanol concentration (50-90%), water to solid ratio (W/S, 8-16), and number of extractions (1-5), were investigated by using the Box-Behnken design response surface methodology (BBD-RSM). For each response, a second-order polynomial model with high R² values (>0.9690) was developed using multiple linear regression analysis, and the optimum conditions to maximize the yield (31.96%), content (70.49 mg/g), and antioxidant activity (EC50 value of 0.0421 mg/mL) of the saponins extracted from SPN were obtained with an extraction time of 1.51 h, an ethanol concentration of 60%, three extractions, and a W/S of 10. The experimental values were in good agreement with the predicted ones. In addition, the extracted SPN saponins could significantly increase the levels of routine blood parameters compared with the model group (p < 0.01), and there was no significant difference in the hematopoiesis effect between the SPN group and the SPN saponins group, even though the dose of the latter was 15 times lower than that of the former. It is suggested that the SPN saponins extracted by the optimized method had a similar "blood tonifying" function at a much lower dose.

  3. Optimization of the canola oil based vitamin E nanoemulsions stabilized by food grade mixed surfactants using response surface methodology.

    PubMed

    Mehmood, Tahir

    2015-09-15

    The objective of the present study was to prepare canola oil based vitamin E nanoemulsions using food grade mixed surfactants (Tween 80 and lecithin; 3:1), replacing part of the nonionic surfactant (Tween 80) with a natural surfactant (soya lecithin), and to optimize their preparation conditions. RBD (refined, bleached and deodorized) canola oil and vitamin E acetate were used in the water/vitamin E/oil/surfactant system due to their nutritional benefits and oxidative stability, respectively. Response surface methodology (RSM) was used to optimize the preparation conditions. The effects of homogenization pressure (75-155 MPa), oil concentration (4-12% w/w), surfactant concentration (3-11% w/w) and vitamin E acetate content (0.4-1.2% w/w) on the particle size and emulsion stability were studied. RSM analysis showed that the experimental data could be fitted well by a second-order polynomial model, with coefficients of determination of 0.9464 and 0.9278 for particle size and emulsion stability, respectively. The optimum values of the independent variables were a homogenization pressure of 135 MPa, oil content of 6.18%, surfactant concentration of 6.39% and vitamin E acetate concentration of 1%. The optimized response values for particle size and emulsion stability were 150.10 nm and 0.338, respectively, whereas the experimental values were 156.13±2.3 nm and 0.328±0.015. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Optimization of photocatalytic degradation of methyl blue using silver ion doped titanium dioxide by combination of experimental design and response surface approach.

    PubMed

    Sahoo, C; Gupta, A K

    2012-05-15

    Photocatalytic degradation of methyl blue (MYB) was studied using Ag+ doped TiO2 under UV irradiation in a batch reactor. Catalyst dose, initial dye concentration and pH of the reaction mixture were found to influence the degradation process most. The degradation was found to be effective in the ranges of catalyst dose (0.5-1.5 g/L), initial dye concentration (25-100 ppm) and pH of the reaction mixture (5-9). Using the three-factor, three-level Box-Behnken design of experiments technique, 15 sets of experiments were designed considering the effective ranges of the influential parameters. The results of the experiments were fitted to two quadratic polynomial models developed using response surface methodology (RSM), representing the functional relationship between the decolorization and mineralization of MYB and the experimental parameters. Design Expert software version 8.0.6.1 was used to optimize the effects of the experimental parameters on the responses. The optimum values of the parameters were a dose of Ag+ doped TiO2 of 0.99 g/L, an initial concentration of MYB of 57.68 ppm and a pH of the reaction mixture of 7.76. Under the optimal conditions the predicted decolorization and mineralization rates of MYB were 95.97% and 80.33%, respectively. Regression analysis with R2 values >0.99 showed goodness of fit of the experimental results with the predicted values. Copyright © 2012 Elsevier B.V. All rights reserved.

  5. Multi-objective optimization of oxidative desulfurization in a sono-photochemical airlift reactor.

    PubMed

    Behin, Jamshid; Farhadian, Negin

    2017-09-01

    Response surface methodology (RSM) was employed to optimize ultrasound/ultraviolet-assisted oxidative desulfurization in an airlift reactor. Ultrasonic waves were incorporated in a novel-geometry reactor to investigate the synergistic effects of sono-chemistry and enhanced gas-liquid mass transfer. Non-hydrotreated kerosene containing sulfur and aromatic compounds was chosen as a case study. Experimental runs were conducted based on a face-centered central composite design and analyzed using RSM. The effects of two categorical factors, i.e., ultrasound and ultraviolet irradiation, and two numerical factors, i.e., superficial gas velocity and oxidation time, were investigated on two responses, i.e., desulfurization and de-aromatization yields. A two-factor interaction (2FI) polynomial model was developed for the responses, and the desirability function associated with overlay graphs was applied to find the optimum conditions. The results showed that enhanced desulfurization corresponds to a greater reduction in the aromatic content of the kerosene in each combination. Based on the desirability approach and certain criteria considered for desulfurization/de-aromatization, optimal desulfurization and de-aromatization yields of 91.7% and 48%, respectively, were obtained with the US/UV/O3/H2O2 combination. Copyright © 2017 Elsevier B.V. All rights reserved.
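
    The desirability-function trade-off between the two responses can be sketched as below; the 2FI response models, ranges and coded factors are hypothetical stand-ins rather than the paper's fitted coefficients.

```python
# Sketch of a Derringer desirability trade-off over two fitted 2FI models.
import numpy as np

def desirability(y, lo, hi):
    """Larger-is-better desirability, linear between lo and hi."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

# Hypothetical 2FI models: response = b0 + b1*u + b2*t + b12*u*t
def desulfurization(u, t):    # u: superficial gas velocity (coded), t: time (coded)
    return 70 + 8 * u + 10 * t + 3 * u * t

def dearomatization(u, t):
    return 32 - 6 * u + 9 * t + 2 * u * t

u, t = np.meshgrid(np.linspace(-1, 1, 201), np.linspace(-1, 1, 201))
D = np.sqrt(desirability(desulfurization(u, t), 60, 95) *
            desirability(dearomatization(u, t), 20, 50))   # overall desirability
i = np.unravel_index(np.argmax(D), D.shape)
print("best coded settings:", u[i], t[i], "overall desirability:", D[i])
```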

  6. Stable Numerical Approach for Fractional Delay Differential Equations

    NASA Astrophysics Data System (ADS)

    Singh, Harendra; Pandey, Rajesh K.; Baleanu, D.

    2017-12-01

    In this paper, we present a new stable numerical approach based on the operational matrix of integration of Jacobi polynomials for solving fractional delay differential equations (FDDEs). The operational matrix approach converts the FDDE into a system of linear equations, and hence the numerical solution is obtained by solving the linear system. The error analysis of the proposed method is also established. Further, a comparative study of the approximate solutions is provided for test examples of the FDDE by varying the values of the parameters in the Jacobi polynomials. As special cases, the Jacobi polynomials reduce to well-known polynomials, namely (1) the Legendre polynomials, (2) the Chebyshev polynomials of the second kind, (3) the Chebyshev polynomials of the third kind and (4) the Chebyshev polynomials of the fourth kind. The maximum absolute error and root mean square error are calculated for the illustrated examples and presented in the form of tables for comparison purposes. The numerical stability of the presented method with respect to all four kinds of polynomials is discussed. Further, the obtained numerical results are compared with some known methods from the literature, and it is observed that the results from the proposed method are better than those of these methods.
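
    The reduction of Jacobi polynomials to their classical special cases can be checked numerically, as sketched below with SciPy; only the Legendre case (alpha = beta = 0) is verified exactly here, since the Chebyshev cases hold only up to normalization.

```python
# Sketch: Jacobi polynomials P_n^(alpha, beta) reduce to Legendre polynomials
# for alpha = beta = 0 (the Chebyshev cases correspond to alpha = beta = ±1/2
# up to normalization and are not checked here).
import numpy as np
from scipy.special import eval_jacobi, eval_legendre

x = np.linspace(-1, 1, 5)
for n in range(4):
    jac = eval_jacobi(n, 0.0, 0.0, x)     # P_n^(0,0)(x)
    leg = eval_legendre(n, x)             # Legendre P_n(x)
    print(n, np.allclose(jac, leg))
```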

  7. Percolation critical polynomial as a graph invariant

    DOE PAGES

    Scullard, Christian R.

    2012-10-18

    Every lattice for which the bond percolation critical probability can be found exactly possesses a critical polynomial, with the root in [0, 1] providing the threshold. Recent work has demonstrated that this polynomial may be generalized through a definition that can be applied on any periodic lattice. The polynomial depends on the lattice and on its decomposition into identical finite subgraphs, but once these are specified, the polynomial is essentially unique. On lattices for which the exact percolation threshold is unknown, the polynomials provide approximations for the critical probability, with the estimates appearing to converge to the exact answer with increasing subgraph size. In this paper, I show how the critical polynomial can be viewed as a graph invariant like the Tutte polynomial. In particular, the critical polynomial is computed on a finite graph and may be found using the deletion-contraction algorithm. This allows calculation on a computer, and I present such results for the kagome lattice using subgraphs of up to 36 bonds. For one of these, I find the prediction p_c = 0.52440572..., which differs from the numerical value, p_c = 0.52440503(5), by only 6.9 × 10^-7.

  8. On Certain Wronskians of Multiple Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Zhang, Lun; Filipuk, Galina

    2014-11-01

    We consider determinants of Wronskian type whose entries are multiple orthogonal polynomials associated with a path connecting two multi-indices. By assuming that the weight functions form an algebraic Chebyshev (AT) system, we show that the polynomials represented by the Wronskians keep a constant sign in some cases, while in some other cases oscillatory behavior appears, which generalizes classical results for orthogonal polynomials due to Karlin and Szegő. There are two applications of our results. The first application arises from the observation that the m-th moment of the average characteristic polynomials for multiple orthogonal polynomial ensembles can be expressed as a Wronskian of the type II multiple orthogonal polynomials. Hence, it is straightforward to obtain the distinct behavior of the moments for odd and even m in a special multiple orthogonal ensemble - the AT ensemble. As the second application, we derive some Turán type inequalities for multiple Hermite and multiple Laguerre polynomials (of two kinds). Finally, we study numerically the geometric configuration of zeros for the Wronskians of these multiple orthogonal polynomials. We observe that the zeros have regular configurations in the complex plane, which might be of independent interest.

  9. Defluoridation of water using activated alumina in presence of natural organic matter via response surface methodology.

    PubMed

    Samarghandi, Mohammad Reza; Khiadani, Mehdi; Foroughi, Maryam; Zolghadr Nasab, Hasan

    2016-01-01

    Adsorption by activated alumina is considered to be one of the most practiced methods for defluoridation of freshwater. This study was conducted, therefore, to investigate the effect of natural organic matter (NOM) on the removal of fluoride by activated alumina using response surface methodology. To the authors' knowledge, this has not been previously investigated. Physico-chemical characterization of the alumina was performed by scanning electron microscopy (SEM), Brunauer-Emmett-Teller (BET) analysis, Fourier transform infrared spectroscopy (FTIR), X-ray fluorescence (XRF), and X-ray diffractometry (XRD). Response surface methodology (RSM) was applied to evaluate the single and combined effects of the independent variables, namely the initial concentration of fluoride, NOM concentration, and pH, on the process. The results revealed that while the presence of NOM and an increase of pH enhance fluoride adsorption on the activated alumina, the initial concentration of fluoride has an adverse effect on the efficiency. The experimental data were analyzed and found to be accurately and reliably fitted to a second-order polynomial model. Under the optimum removal condition (fluoride concentration 20 mg/L, NOM concentration 20 mg/L, and pH 7), with a desirability value of 0.93 and fluoride removal efficiency of 80.6%, no significant difference was noticed with the previously reported sequence of co-existing ion affinity to activated alumina for fluoride removal. Moreover, the aluminum residual was found to be below the value recommended by the guideline for drinking water. The increase of fluoride adsorption on the activated alumina as the NOM concentration increases could be due to complexation between fluoride and adsorbed NOM.

  10. Riemann-Liouville Fractional Calculus of Certain Finite Class of Classical Orthogonal Polynomials

    NASA Astrophysics Data System (ADS)

    Malik, Pradeep; Swaminathan, A.

    2010-11-01

    In this work we consider certain class of classical orthogonal polynomials defined on the positive real line. These polynomials have their weight function related to the probability density function of F distribution and are finite in number up to orthogonality. We generalize these polynomials for fractional order by considering the Riemann-Liouville type operator on these polynomials. Various properties like explicit representation in terms of hypergeometric functions, differential equations, recurrence relations are derived.

  11. Automated Decision Tree Classification of Corneal Shape

    PubMed Central

    Twa, Michael D.; Parthasarathy, Srinivasan; Roberts, Cynthia; Mahmoud, Ashraf M.; Raasch, Thomas W.; Bullimore, Mark A.

    2011-01-01

    Purpose: The volume and complexity of data produced during videokeratography examinations present a challenge of interpretation. As a consequence, results are often analyzed qualitatively by subjective pattern recognition or reduced to comparisons of summary indices. We describe the application of decision tree induction, an automated machine learning classification method, to discriminate between normal and keratoconic corneal shapes in an objective and quantitative way. We then compared this method with other known classification methods. Methods: The corneal surface was modeled with a seventh-order Zernike polynomial for 132 normal eyes of 92 subjects and 112 eyes of 71 subjects diagnosed with keratoconus. A decision tree classifier was induced using the C4.5 algorithm, and its classification performance was compared with the modified Rabinowitz–McDonnell index, Schwiegerling’s Z3 index (Z3), Keratoconus Prediction Index (KPI), KISA%, and Cone Location and Magnitude Index using recommended classification thresholds for each method. We also evaluated the area under the receiver operator characteristic (ROC) curve for each classification method. Results: Our decision tree classifier performed equal to or better than the other classifiers tested: accuracy was 92% and the area under the ROC curve was 0.97. Our decision tree classifier reduced the information needed to distinguish between normal and keratoconus eyes using four of 36 Zernike polynomial coefficients. The four surface features selected as classification attributes by the decision tree method were inferior elevation, greater sagittal depth, oblique toricity, and trefoil. Conclusions: Automated decision tree classification of corneal shape through Zernike polynomials is an accurate quantitative method of classification that is interpretable and can be generated from any instrument platform capable of raw elevation data output. This method of pattern classification is extendable to other classification problems. PMID:16357645
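
    A minimal sketch of inducing such a tree on Zernike coefficients is given below; scikit-learn's CART implementation stands in for the C4.5 algorithm used in the study, and the feature matrix is a random placeholder rather than real corneal data.

```python
# Sketch of decision-tree classification on Zernike coefficients
# (CART via scikit-learn as a stand-in for C4.5; placeholder data).
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(244, 36))                      # 244 eyes x 36 Zernike coefficients
y = np.r_[np.zeros(132, int), np.ones(112, int)]    # 0 = normal, 1 = keratoconus

tree = DecisionTreeClassifier(max_depth=4, random_state=0)
print("CV accuracy:", cross_val_score(tree, X, y, cv=5).mean())
```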

  12. Laguerre-Freud Equations for the Recurrence Coefficients of Some Discrete Semi-Classical Orthogonal Polynomials of Class Two

    NASA Astrophysics Data System (ADS)

    Hounga, C.; Hounkonnou, M. N.; Ronveaux, A.

    2006-10-01

    In this paper, we give Laguerre-Freud equations for the recurrence coefficients of discrete semi-classical orthogonal polynomials of class two, when the polynomials in the Pearson equation are of the same degree. The case of generalized Charlier polynomials is also presented.

  13. The Gibbs Phenomenon for Series of Orthogonal Polynomials

    ERIC Educational Resources Information Center

    Fay, T. H.; Kloppers, P. Hendrik

    2006-01-01

    This note considers the four classes of orthogonal polynomials--Chebyshev, Hermite, Laguerre, Legendre--and investigates the Gibbs phenomenon at a jump discontinuity for the corresponding orthogonal polynomial series expansions. The perhaps unexpected thing is that the Gibbs constant that arises for each class of polynomials appears to be the same…

  14. Determinants with orthogonal polynomial entries

    NASA Astrophysics Data System (ADS)

    Ismail, Mourad E. H.

    2005-06-01

    We use moment representations of orthogonal polynomials to evaluate the corresponding Hankel determinants formed by the orthogonal polynomials. We also study the Hankel determinants which have p_n in the top left-hand corner. As examples we evaluate the Hankel determinants whose entries are q-ultraspherical or Al-Salam-Chihara polynomials.
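
    A small numerical illustration of a Hankel determinant with orthogonal polynomial entries is sketched below, using Chebyshev polynomials of the first kind evaluated at a fixed point; this is a toy example, not the q-ultraspherical or Al-Salam-Chihara evaluations of the paper.

```python
# Sketch: Hankel determinant whose (i, j) entry is the orthogonal polynomial
# p_{i+j}(x), here p_n = Chebyshev T_n, evaluated numerically.
import numpy as np
from scipy.special import eval_chebyt

def hankel_det(n, x):
    H = np.array([[eval_chebyt(i + j, x) for j in range(n)] for i in range(n)])
    return np.linalg.det(H)

for n in range(1, 5):
    print(n, hankel_det(n, 0.3))
```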

  15. Modeling and control for closed environment plant production systems

    NASA Technical Reports Server (NTRS)

    Fleisher, David H.; Ting, K. C.; Janes, H. W. (Principal Investigator)

    2002-01-01

    A computer program was developed to study multiple crop production and control in controlled environment plant production systems. The program simulates crop growth and development under nominal and off-nominal environments. Time-series crop models for wheat (Triticum aestivum), soybean (Glycine max), and white potato (Solanum tuberosum) are integrated with a model-based predictive controller. The controller evaluates and compensates for effects of environmental disturbances on crop production scheduling. The crop models consist of a set of nonlinear polynomial equations, six for each crop, developed using multivariate polynomial regression (MPR). Simulated data from DSSAT crop models, previously modified for crop production in controlled environments with hydroponics under elevated atmospheric carbon dioxide concentration, were used for the MPR fitting. The model-based predictive controller adjusts light intensity, air temperature, and carbon dioxide concentration set points in response to environmental perturbations. Control signals are determined from minimization of a cost function, which is based on the weighted control effort and squared-error between the system response and desired reference signal.

  16. Small-amplitude oscillations of electrostatically levitated drops

    NASA Astrophysics Data System (ADS)

    Feng, J. Q.; Beard, K. V.

    1990-07-01

    The nature of axisymmetric oscillations of electrostatically levitated drops is examined using an analytical method of multiple-parameter perturbations. The solution for the quiescent equilibrium shape exhibits both stretching of the drop surface along the direction of the externally applied electric field and asymmetry about the drop's equatorial plane. In the presence of electric and gravitational fields, small-amplitude oscillations of charged drops differ from the linear modes first analyzed by Rayleigh. The oscillatory response at each frequency consists of several Legendre polynomials rather than just one, and the characteristic frequency for each axisymmetric mode decreases from that calculated by Rayleigh as the electric field strength increases. This lowering of the characteristic frequencies is enhanced by the net electric charge required for levitation against gravity. Since the contributions of the various forces appear explicitly in the analytic solutions, physical insight is readily gained into their causative role in drop behavior.

  17. A Global Optimization Methodology for Rocket Propulsion Applications

    NASA Technical Reports Server (NTRS)

    2001-01-01

    While the response surface method is an effective method in engineering optimization, its accuracy is often affected by the use of limited amount of data points for model construction. In this chapter, the issues related to the accuracy of the RS approximations and possible ways of improving the RS model using appropriate treatments, including the iteratively re-weighted least square (IRLS) technique and the radial-basis neural networks, are investigated. A main interest is to identify ways to offer added capabilities for the RS method to be able to at least selectively improve the accuracy in regions of importance. An example is to target the high efficiency region of a fluid machinery design space so that the predictive power of the RS can be maximized when it matters most. Analytical models based on polynomials, with controlled level of noise, are used to assess the performance of these techniques.

  18. A Monte Carlo simulation based inverse propagation method for stochastic model updating

    NASA Astrophysics Data System (ADS)

    Bao, Nuo; Wang, Chunjie

    2015-08-01

    This paper presents an efficient stochastic model updating method based on statistical theory. Significant parameters are selected by implementing the F-test evaluation and design of experiments, and then an incomplete fourth-order polynomial response surface model (RSM) is developed. Exploiting the RSM combined with Monte Carlo simulation (MCS) reduces the computational effort and makes rapid random sampling possible. The inverse uncertainty propagation is given by the equally weighted sum of mean and covariance matrix objective functions. The mean and covariance of the parameters are estimated synchronously by minimizing the weighted objective function through a hybrid particle-swarm and Nelder-Mead simplex optimization method, thus achieving better correlation between simulation and test. Numerical examples of a three degree-of-freedom mass-spring system under different conditions and the GARTEUR assembly structure validated the feasibility and effectiveness of the proposed method.
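
    The forward step of such a scheme, propagating assumed parameter statistics through a polynomial response surface by Monte Carlo sampling, can be sketched as follows; the surrogate coefficients and parameter statistics below are hypothetical.

```python
# Sketch of Monte Carlo propagation through a polynomial response surface.
import numpy as np

def rsm(k1, k2):
    """Hypothetical response surface for a natural frequency vs. two stiffness parameters."""
    return 12.0 + 0.8 * k1 + 0.5 * k2 - 0.02 * k1 * k2 + 0.01 * k1**2

rng = np.random.default_rng(0)
mean = np.array([10.0, 5.0])                 # current parameter mean estimate
cov = np.diag([0.5**2, 0.3**2])              # current parameter covariance estimate
samples = rng.multivariate_normal(mean, cov, size=50_000)
f = rsm(samples[:, 0], samples[:, 1])        # cheap surrogate replaces FE runs
print("response mean:", f.mean(), "response std:", f.std())
```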

  19. Optimization of Parameter Ranges for Composite Tape Winding Process Based on Sensitivity Analysis

    NASA Astrophysics Data System (ADS)

    Yu, Tao; Shi, Yaoyao; He, Xiaodong; Kang, Chao; Deng, Bo; Song, Shibo

    2017-08-01

    This study focuses on the parameter sensitivity of the winding process for composite prepreg tape. Methods of multi-parameter relative sensitivity analysis and single-parameter sensitivity analysis are proposed. A polynomial empirical model of interlaminar shear strength is established by the response surface experimental method. Using this model, the relative sensitivity of the key process parameters, including temperature, tension, pressure and velocity, is calculated, and the single-parameter sensitivity curves are obtained. According to the analysis of the sensitivity curves, the stability and instability ranges of each parameter are identified. Finally, an optimization method for the winding process parameters is developed. The analysis results show that the optimized ranges of the process parameters for interlaminar shear strength are: temperature within [100 °C, 150 °C], tension within [275 N, 387 N], pressure within [800 N, 1500 N], and velocity within [0.2 m/s, 0.4 m/s], respectively.

  20. Theoretical study of the dynamic magnetic response of ferrofluid to static and alternating magnetic fields

    NASA Astrophysics Data System (ADS)

    Batrudinov, Timur M.; Ambarov, Alexander V.; Elfimova, Ekaterina A.; Zverev, Vladimir S.; Ivanov, Alexey O.

    2017-06-01

    The dynamic magnetic response of a ferrofluid in a static uniform external magnetic field to a weak, linearly polarized, alternating magnetic field is investigated theoretically. The ferrofluid is modeled as a system of dipolar hard spheres, suspended in a long cylindrical tube whose long axis is parallel to the direction of the static and alternating magnetic fields. The theory is based on the Fokker-Planck-Brown equation formulated for the case when both the static and alternating magnetic fields are applied. The solution of the Fokker-Planck-Brown equation describing the orientational probability density of a randomly chosen dipolar particle is expressed as a series in terms of the spherical Legendre polynomials. The obtained analytical expression connecting three neighboring coefficients of the series makes it possible to determine the probability density to any order of accuracy in terms of Legendre polynomials. The analytical formula for the probability density truncated at the first Legendre polynomial is evaluated and used for the calculation of the magnetization and dynamic susceptibility spectra. In the absence of the static magnetic field the presented theory gives the correct single-particle Debye-theory result, which is the exact solution of the Fokker-Planck-Brown equation for the case of an applied weak alternating magnetic field. The influence of the static magnetic field on the dynamic susceptibility is analyzed in terms of the low-frequency behavior of the real part and the position of the peak in the imaginary part.

  1. Beampattern control of a microphone array to minimize secondary source contamination.

    PubMed

    Jordan, Peter; Fitzpatrick, John A; Meskell, Craig

    2003-10-01

    A null-steering technique is adapted and applied to a linear delay-and-sum beamformer in order to measure the noise generated by one of the propellers of a 1/8-scale twin-propeller aircraft model. The technique involves shading the linear array using a set of weights, which are calculated according to the locations onto which the nulls need to be steered (in this case onto the second propeller). The technique is based on established microwave antenna theory, and uses a plane-wave, or far-field, formulation in order to represent the response of the array by an nth-order polynomial, where n is the number of array elements. The roots of this polynomial correspond to the minima of the array response, and so by an appropriate choice of roots, a polynomial can be generated whose coefficients are the weights needed to achieve the prespecified set of null positions. It is shown that, for the technique to work with actual data, the cross-spectral matrix must be conditioned before array shading is implemented. This ensures that the shading function is not distorted by the intrinsic element weighting which can occur as a result of the directional nature of aeroacoustic systems. A difference of 6 dB between measurements before and after null steering shows the technique to have been effective in eliminating the contribution from one of the propellers, thus providing a quantitative measure of the acoustic energy from the other.
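
    The root-placement idea behind the shading weights can be sketched as follows for a uniform line array with half-wavelength spacing; the null directions are arbitrary examples and the plane-wave (far-field) polynomial model is assumed, as in the abstract.

```python
# Sketch of null steering by root placement: the array polynomial's roots are
# placed on the unit circle at the directions to be nulled, and its coefficients
# are then used as the element shading weights (3-element array here).
import numpy as np

d_over_lambda = 0.5                              # assumed half-wavelength spacing
null_angles_deg = [20.0, -35.0]                  # hypothetical directions to suppress
psi = 2 * np.pi * d_over_lambda * np.sin(np.radians(null_angles_deg))
roots = np.exp(1j * psi)                         # roots placed on the unit circle
coeffs = np.poly(roots)                          # polynomial whose roots are the nulls

def array_factor(theta_deg, w):
    z = np.exp(1j * 2 * np.pi * d_over_lambda * np.sin(np.radians(theta_deg)))
    return abs(np.polyval(w, z))                 # array response as a polynomial in z

for a in (0.0, 20.0, -35.0):
    print(f"|AF({a:6.1f} deg)| = {array_factor(a, coeffs):.2e}")
```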

  2. A Maximum Likelihood Approach to Determine Sensor Radiometric Response Coefficients for NPP VIIRS Reflective Solar Bands

    NASA Technical Reports Server (NTRS)

    Lei, Ning; Chiang, Kwo-Fu; Oudrari, Hassan; Xiong, Xiaoxiong

    2011-01-01

    Optical sensors aboard Earth-orbiting satellites, such as the next-generation Visible/Infrared Imager/Radiometer Suite (VIIRS), assume that the sensor's radiometric response in the Reflective Solar Bands (RSB) is described by a quadratic polynomial relating the aperture spectral radiance to the sensor Digital Number (DN) readout. For VIIRS Flight Unit 1, the coefficients are to be determined before launch by an attenuation method, although the linear coefficient will be further determined on-orbit through observing the Solar Diffuser. In determining the quadratic polynomial coefficients by the attenuation method, a Maximum Likelihood approach is applied in carrying out the least-squares procedure. Crucial to the Maximum Likelihood least-squares procedure is the computation of the weights. The weights not only have a contribution from the noise of the sensor's digital count, with an important contribution from digitization error, but are also affected heavily by the mathematical expression used to predict the value of the dependent variable, because both the independent and the dependent variables contain random noise. In addition, model errors have a major impact on the uncertainties of the coefficients. The Maximum Likelihood approach demonstrates the inadequacy of the attenuation method model with a quadratic polynomial for the retrieved spectral radiance. We show that using the inadequate model dramatically increases the uncertainties of the coefficients. We compute the coefficient values and their uncertainties, considering both measurement and model errors.
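
    A minimal sketch of a weighted (maximum-likelihood-style) least-squares fit of a quadratic radiometric response is shown below, with weights taken as inverse noise variances; the DN values, coefficients and noise model are synthetic and are not VIIRS data.

```python
# Sketch: weighted least-squares fit of L = c0 + c1*DN + c2*DN^2 with
# per-point weights 1/sigma^2 (synthetic data and noise model).
import numpy as np

rng = np.random.default_rng(0)
dn = np.linspace(100, 3000, 40)
true = np.array([0.5, 2.0e-2, 1.0e-6])                # c0, c1, c2 (assumed)
sigma = 0.02 * (0.5 + dn / 3000)                      # assumed noise model
L = true[0] + true[1] * dn + true[2] * dn**2 + rng.normal(0, sigma)

A = np.column_stack([np.ones_like(dn), dn, dn**2])
W = np.diag(1.0 / sigma**2)                           # ML weights for Gaussian noise
coef = np.linalg.solve(A.T @ W @ A, A.T @ W @ L)      # weighted normal equations
print("estimated coefficients:", coef)
```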

  3. From sequences to polynomials and back, via operator orderings

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amdeberhan, Tewodros, E-mail: tamdeber@tulane.edu; Dixit, Atul, E-mail: adixit@tulane.edu; Moll, Victor H., E-mail: vhm@tulane.edu

    2013-12-15

    Bender and Dunne [“Polynomials and operator orderings,” J. Math. Phys. 29, 1727–1731 (1988)] showed that linear combinations of words q^k p^n q^(n-k), where p and q are subject to the relation qp − pq = i, may be expressed as a polynomial in the symbol z = (1/2)(qp + pq). Relations between such polynomials and linear combinations of the transformed coefficients are explored. In particular, examples yielding orthogonal polynomials are provided.

  4. Relaxation distribution function of intracellular dielectric zones as an indicator of tumorous transition of living cells.

    PubMed

    Thornton, B S; Hung, W T; Irving, J

    1991-01-01

    The response decay data of living cells subject to electric polarization is associated with their relaxation distribution function (RDF) and can be determined using the inverse Laplace transform method. A new polynomial, involving a series of associated Laguerre polynomials, has been used as the approximating function for evaluating the RDF, with the advantage of avoiding the usual arbitrary trial values of a particular parameter in the numerical computations. Some numerical examples are given, followed by an application to cervical tissue. It is found that the average relaxation time and the peak amplitude of the RDF exhibit higher values for tumorous cells than normal cells and might be used as parameters to differentiate them and their associated tissues.

  5. A dimension-wise analysis method for the structural-acoustic system with interval parameters

    NASA Astrophysics Data System (ADS)

    Xu, Menghui; Du, Jianke; Wang, Chong; Li, Yunlong

    2017-04-01

    The interval structural-acoustic analysis is mainly accomplished by interval and subinterval perturbation methods. Potential limitations for these intrusive methods include overestimation or interval translation effect for the former and prohibitive computational cost for the latter. In this paper, a dimension-wise analysis method is thus proposed to overcome these potential limitations. In this method, a sectional curve of the system response surface along each input dimensionality is firstly extracted, the minimal and maximal points of which are identified based on its Legendre polynomial approximation. And two input vectors, i.e. the minimal and maximal input vectors, are dimension-wisely assembled by the minimal and maximal points of all sectional curves. Finally, the lower and upper bounds of system response are computed by deterministic finite element analysis at the two input vectors. Two numerical examples are studied to demonstrate the effectiveness of the proposed method and show that, compared to the interval and subinterval perturbation method, a better accuracy is achieved without much compromise on efficiency by the proposed method, especially for nonlinear problems with large interval parameters.

  6. Optimization of pulsed laser welding process parameters in order to attain minimum underfill and undercut defects in thin 316L stainless steel foils

    NASA Astrophysics Data System (ADS)

    Pakmanesh, M. R.; Shamanian, M.

    2018-02-01

    In this study, the pulsed Nd:YAG laser welding parameters for the lap joint of a 316L stainless steel foil were optimized through response surface methodology with the aim of reducing weld defects. For this purpose, the effects of peak power, pulse duration, and frequency were investigated. The most important weld defects seen in this process are underfill and undercut. By fitting a second-order polynomial, the above-mentioned statistical method was used to balance the welding parameters. The results showed that underfill increased with increasing power and decreasing frequency, and first increased and then decreased with increasing pulse duration; the most important parameter affecting it was the power, whose contribution was 65%. The undercut increased with increasing power, pulse duration, and frequency; the most important parameter affecting it was again the power, whose contribution was 64%. Finally, by superimposing the different responses, improved conditions were identified to obtain a weld with no defects.

  7. Optimization of tannase production by a novel Klebsiella pneumoniae KP715242 using central composite design.

    PubMed

    Kumar, Mukesh; Rana, Shiny; Beniwal, Vikas; Salar, Raj Kumar

    2015-09-01

    A novel tannase-producing bacterial strain was isolated from rhizospheric soil of an Acacia species and identified as Klebsiella pneumoniae KP715242. A 3.25-fold increase in tannase production was achieved upon optimization with a central composite design using response surface methodology. Four variables, namely pH, temperature, incubation period, and agitation speed, were used, and the significant individual and interactive effects of these variables on tannase production were evaluated. A second-order polynomial was fitted to the data and validated by ANOVA. The results showed a complex relationship between the variables and the response, given that all factors were significant and could explain 99.6% of the total variation. The maximum production was obtained at pH 5.2, a temperature of 34.97 °C, an agitation speed of 103.34 rpm and an incubation time of 91.34 h. The experimental values were in good agreement with the predicted ones, and the model was highly significant with a correlation coefficient (R2) of 0.99 and a highly significant F-value of 319.37.

  8. Advanced oxidation of commercial herbicides mixture: experimental design and phytotoxicity evaluation.

    PubMed

    López, Alejandro; Coll, Andrea; Lescano, Maia; Zalazar, Cristina

    2017-05-05

    In this work, the suitability of the UV/H2O2 process for the degradation of a commercial herbicide mixture was studied. Glyphosate, the most widely used herbicide in the world, was mixed with other herbicides that have residual activity, such as 2,4-D and atrazine. Modeling of the process response in relation to specific operating conditions, such as the initial pH and the initial H2O2 to total organic carbon molar ratio, was carried out by response surface methodology (RSM). Results showed that a second-order polynomial regression model could describe and predict the system behavior well within the tested experimental region, and it correctly explained the variability in the experimental data. Experimental values were in good agreement with the modeled ones, confirming the significance of the model and highlighting the success of RSM for UV/H2O2 process modeling. Phytotoxicity evolution throughout the photolytic degradation process was checked through germination tests, indicating that the phytotoxicity of the herbicide mixture was significantly reduced after the treatment. The end point for the treatment at the operating conditions for maximum TOC conversion was also identified.

  9. Constrained Surface-Level Gateway Placement for Underwater Acoustic Wireless Sensor Networks

    NASA Astrophysics Data System (ADS)

    Li, Deying; Li, Zheng; Ma, Wenkai; Chen, Hong

    One approach to guarantee the performance of underwater acoustic sensor networks is to deploy multiple Surface-level Gateways (SGs) at the surface. This paper addresses the connected (or survivable) Constrained Surface-level Gateway Placement (C-SGP) problem for 3-D underwater acoustic sensor networks. Given a set of candidate locations where SGs can be placed, our objective is to place minimum number of SGs at a subset of candidate locations such that it is connected (or 2-connected) from any USN to the base station. We propose a polynomial time approximation algorithm for the connected C-SGP problem and survivable C-SGP problem, respectively. Simulations are conducted to verify our algorithms' efficiency.

  10. Extending Romanovski polynomials in quantum mechanics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Quesne, C.

    2013-12-15

    Some extensions of the (third-class) Romanovski polynomials (also called Romanovski/pseudo-Jacobi polynomials), which appear in bound-state wavefunctions of rationally extended Scarf II and Rosen-Morse I potentials, are considered. For the former potentials, the generalized polynomials satisfy a finite orthogonality relation, while for the latter an infinite set of relations among polynomials with degree-dependent parameters is obtained. Both types of relations are counterparts of those known for conventional polynomials. In the absence of any direct information on the zeros of the Romanovski polynomials present in denominators, the regularity of the constructed potentials is checked by taking advantage of the disconjugacy properties of second-order differential equations of Schrödinger type. It is also shown that on going from Scarf I to Scarf II or from Rosen-Morse II to Rosen-Morse I potentials, the variety of rational extensions is narrowed down from types I, II, and III to type III only.

  11. Polynomial solutions of the Monge-Ampère equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Aminov, Yu A

    2014-11-30

    The question of the existence of polynomial solutions to the Monge-Ampère equation $z_{xx}z_{yy} - z_{xy}^2 = f(x,y)$ is considered in the case when f(x,y) is a polynomial. It is proved that if f is a polynomial of the second degree, which is positive for all values of its arguments and has a positive squared part, then no polynomial solution exists. On the other hand, a solution which is not polynomial but is analytic in the whole of the x, y-plane is produced. Necessary and sufficient conditions for the existence of polynomial solutions of degree up to 4 are found and methods for the construction of such solutions are indicated. An approximation theorem is proved. Bibliography: 10 titles.

  12. Solving the interval type-2 fuzzy polynomial equation using the ranking method

    NASA Astrophysics Data System (ADS)

    Rahman, Nurhakimah Ab.; Abdullah, Lazim

    2014-07-01

    Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and the social sciences. Several methods have been developed to solve these equations. In this study we are interested in introducing the interval type-2 fuzzy polynomial equation and solving it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find real roots of fuzzy polynomial equations; here, the ranking method is applied to find real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.

  13. Parallel multigrid smoothing: polynomial versus Gauss-Seidel

    NASA Astrophysics Data System (ADS)

    Adams, Mark; Brezina, Marian; Hu, Jonathan; Tuminaro, Ray

    2003-07-01

    Gauss-Seidel is often the smoother of choice within multigrid applications. In the context of unstructured meshes, however, maintaining good parallel efficiency is difficult with multiplicative iterative methods such as Gauss-Seidel. This leads us to consider alternative smoothers. We discuss the computational advantages of polynomial smoothers within parallel multigrid algorithms for positive definite symmetric systems. Two particular polynomials are considered: Chebyshev and a multilevel specific polynomial. The advantages of polynomial smoothing over traditional smoothers such as Gauss-Seidel are illustrated on several applications: Poisson's equation, thin-body elasticity, and eddy current approximations to Maxwell's equations. While parallelizing the Gauss-Seidel method typically involves a compromise between a scalable convergence rate and maintaining high flop rates, polynomial smoothers achieve parallel scalable multigrid convergence rates without sacrificing flop rates. We show that, although parallel computers are the main motivation, polynomial smoothers are often surprisingly competitive with Gauss-Seidel smoothers on serial machines.
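
    A small sketch of a Chebyshev polynomial smoother of the kind discussed above: unlike Gauss-Seidel it needs only matrix-vector products, which is what makes it attractive in parallel. The recurrence is the standard Chebyshev iteration for SPD systems; the eigenvalue bounds, the heuristic choice of targeting [λmax/4, λmax], and the toy Poisson matrix are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def chebyshev_smoother(A, b, x, lam_min, lam_max, nsweeps=3):
    """Chebyshev polynomial smoother for a symmetric positive definite A.

    Damps error components with eigenvalues in [lam_min, lam_max]; for
    multigrid smoothing one typically targets the upper part of the
    spectrum, e.g. [lam_max/4, lam_max] (the factor 4 is a common heuristic).
    """
    theta = 0.5 * (lam_max + lam_min)
    delta = 0.5 * (lam_max - lam_min)
    sigma = theta / delta
    rho = 1.0 / sigma
    r = b - A @ x
    d = r / theta
    for _ in range(nsweeps):
        x = x + d
        r = r - A @ d
        rho_new = 1.0 / (2.0 * sigma - rho)
        d = rho_new * rho * d + (2.0 * rho_new / delta) * r
        rho = rho_new
    return x

# Toy usage on a 1D Poisson matrix (eigenvalue bound computed directly here;
# in practice a few power iterations would estimate lam_max).
n = 64
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.random.default_rng(0).standard_normal(n)
lam_max = np.linalg.eigvalsh(A)[-1]
x = chebyshev_smoother(A, b, np.zeros(n), lam_max / 4, lam_max)
```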

  14. Multiple zeros of polynomials

    NASA Technical Reports Server (NTRS)

    Wood, C. A.

    1974-01-01

    For polynomials of higher degree, iterative numerical methods must be used. Four iterative methods are presented for approximating the zeros of a polynomial using a digital computer. Newton's method and Muller's method are two well known iterative methods which are presented. They extract the zeros of a polynomial by generating a sequence of approximations converging to each zero. However, both of these methods are very unstable when used on a polynomial which has multiple zeros. That is, either they fail to converge to some or all of the zeros, or they converge to very bad approximations of the polynomial's zeros. This material introduces two new methods, the greatest common divisor (G.C.D.) method and the repeated greatest common divisor (repeated G.C.D.) method, which are superior methods for numerically approximating the zeros of a polynomial having multiple zeros. These methods were programmed in FORTRAN 4 and comparisons in time and accuracy are given.
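
    The core idea behind the G.C.D. approach can be shown in a few lines: dividing a polynomial by gcd(p, p') strips out repeated factors, so an ordinary root finder then only sees simple zeros. This sketch uses SymPy symbolically; it is an illustration of the principle, not a reconstruction of the FORTRAN program described above.

```python
import sympy as sp

x = sp.symbols("x")
p = (x - 1)**3 * (x + 2)**2 * (x - 5)        # multiple zeros at 1 and -2

g = sp.gcd(p, sp.diff(p, x))                 # gcd(p, p') carries the repeated factors
p_squarefree = sp.quo(sp.expand(p), g, x)    # same zeros as p, but all simple

print(sp.factor(p_squarefree))               # product of simple linear factors
print(sp.nroots(p_squarefree))               # well-conditioned simple roots
```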

  15. Approximating Exponential and Logarithmic Functions Using Polynomial Interpolation

    ERIC Educational Resources Information Center

    Gordon, Sheldon P.; Yang, Yajun

    2017-01-01

    This article takes a closer look at the problem of approximating the exponential and logarithmic functions using polynomials. Either as an alternative to or a precursor to Taylor polynomial approximations at the precalculus level, interpolating polynomials are considered. A measure of error is given and the behaviour of the error function is…

  16. Interpolation and Polynomial Curve Fitting

    ERIC Educational Resources Information Center

    Yang, Yajun; Gordon, Sheldon P.

    2014-01-01

    Two points determine a line. Three noncollinear points determine a quadratic function. Four points that do not lie on a lower-degree polynomial curve determine a cubic function. In general, n + 1 points uniquely determine a polynomial of degree n, presuming that they do not fall onto a polynomial of lower degree. The process of finding such a…
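
    A minimal numerical companion to the statement above: n + 1 points determine a degree-n interpolating polynomial. The points are arbitrary examples; NumPy's polyfit with degree equal to (number of points − 1) solves the square Vandermonde system exactly.

```python
import numpy as np

pts_x = np.array([0.0, 1.0, 2.0, 4.0])       # four points -> a unique cubic
pts_y = np.array([1.0, 3.0, 2.0, 5.0])

coeffs = np.polyfit(pts_x, pts_y, deg=len(pts_x) - 1)
p = np.poly1d(coeffs)

assert np.allclose(p(pts_x), pts_y)           # the cubic passes through every point
print(p)
```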

  17. A note on the zeros of Freud-Sobolev orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Moreno-Balcazar, Juan J.

    2007-10-01

    We prove that the zeros of a certain family of Sobolev orthogonal polynomials involving the Freud weight function $e^{-x^4}$ on the real line are real, simple, and interlace with the zeros of the Freud polynomials, i.e., those polynomials orthogonal with respect to the weight function $e^{-x^4}$. Some numerical examples are shown.

  18. Optimal Chebyshev polynomials on ellipses in the complex plane

    NASA Technical Reports Server (NTRS)

    Fischer, Bernd; Freund, Roland

    1989-01-01

    The design of iterative schemes for sparse matrix computations often leads to constrained polynomial approximation problems on sets in the complex plane. For the case of ellipses, we introduce a new class of complex polynomials which are in general very good approximations to the best polynomials and even optimal in most cases.

  19. Geometrical Theory of Spherical Harmonics for Geosciences

    NASA Astrophysics Data System (ADS)

    Svehla, Drazen

    2010-05-01

    Spherical harmonics play a central role in the modelling of spatial and temporal processes in the Earth system. The gravity field of the Earth and its temporal variations, sea surface topography, the geomagnetic field, the ionosphere, etc., are just a few examples where spherical harmonics are used to represent processes in the Earth system. We introduce a novel method for the computation and rotation of spherical harmonics, Legendre polynomials and associated Legendre functions without making use of recursive relations. This geometrical approach allows calculation of spherical harmonics without any numerical instability up to an arbitrary degree and order, e.g. up to degree and order 10⁶ and beyond. The algorithm is based on the trigonometric reduction of Legendre polynomials and geometric rotation in hyperspace. It is shown that Legendre polynomials can be computed using trigonometric series by pre-computing amplitudes and translation terms for all angular arguments, and that they can be treated as vectors in Hilbert hyperspace, leading to unitary Hermitian rotation matrices with geometric properties. Thus, rotation of spherical harmonics about, e.g., a polar or an equatorial axis can be represented in a similar way. This method allows stable calculation of spherical harmonics up to an arbitrary degree and order, i.e. up to degree and order 10⁶ and beyond.

  20. A FAST POLYNOMIAL TRANSFORM PROGRAM WITH A MODULARIZED STRUCTURE

    NASA Technical Reports Server (NTRS)

    Truong, T. K.

    1994-01-01

    This program utilizes a fast polynomial transformation (FPT) algorithm applicable to two-dimensional mathematical convolutions. Two-dimensional convolution has many applications, particularly in image processing. Two-dimensional cyclic convolutions can be converted to a one-dimensional convolution in a polynomial ring. Traditional FPT methods decompose the one-dimensional cyclic polynomial into polynomial convolutions of different lengths. This program will decompose a cyclic polynomial into polynomial convolutions of the same length. Thus, only FPTs and Fast Fourier Transforms of the same length are required. This modular approach can save computational resources. To further enhance its appeal, the program is written in the transportable 'C' language. The steps in the algorithm are: 1) formulate the modulus reduction equations, 2) calculate the polynomial transforms, 3) multiply the transforms using a generalized fast Fourier transformation, 4) compute the inverse polynomial transforms, and 5) reconstruct the final matrices using the Chinese remainder theorem. Input to this program is comprised of the row and column dimensions and the initial two matrices. The matrices are printed out at all steps, ending with the final reconstruction. This program is written in 'C' for batch execution and has been implemented on the IBM PC series of computers under DOS with a central memory requirement of approximately 18K of 8 bit bytes. This program was developed in 1986.

  1. AKLSQF - LEAST SQUARES CURVE FITTING

    NASA Technical Reports Server (NTRS)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial that will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least-squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit up to a 100-degree polynomial. All computations in the program are carried out under double precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
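
    A sketch of the outer loop AKLSQF describes, i.e. raising the polynomial degree until a user tolerance on the least squares error is met. It uses NumPy's ordinary polynomial fit rather than the orthogonal-factorial-polynomial route of the original program, and the test data and tolerance are placeholders.

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=100):
    """Increase the fitting degree until the sum of squared residuals <= tol."""
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        err = np.sum((np.polyval(coeffs, x) - y) ** 2)
        print(f"degree {deg}: least squares error {err:.3e}")
        if err <= tol:
            break
    return coeffs, err

x = np.linspace(0.0, 1.0, 50)            # uniformly spaced data, as AKLSQF expects
y = np.exp(x)                            # placeholder data to fit
coeffs, err = fit_to_tolerance(x, y, tol=1e-6)
```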

  2. Hierarchical cluster-based partial least squares regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models.

    PubMed

    Tøndel, Kristin; Indahl, Ulf G; Gjuvsland, Arne B; Vik, Jon Olav; Hunter, Peter; Omholt, Stig W; Martens, Harald

    2011-06-01

    Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems.
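
    A structural outline of the hierarchical idea in HC-PLSR: split the input space into clusters, then fit a separate PLS regression in each cluster and route predictions through the matching local model. The paper uses fuzzy C-means and polynomial input expansions; this sketch substitutes hard k-means and raw inputs for brevity, so it is only an approximation of the method, and all data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 4))                     # model parameters (inputs)
y = np.sin(3 * X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(500)

# Cluster the observations, then fit one local PLS model per cluster.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = km.labels_
local_models = {c: PLSRegression(n_components=3).fit(X[labels == c], y[labels == c])
                for c in np.unique(labels)}

def predict(X_new):
    """Route each new point through the PLS model of its cluster."""
    out = np.empty(len(X_new))
    for i, (x_row, c) in enumerate(zip(X_new, km.predict(X_new))):
        out[i] = local_models[c].predict(x_row.reshape(1, -1)).ravel()[0]
    return out

print(predict(X[:5]))
print(y[:5])
```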

  3. Hierarchical Cluster-based Partial Least Squares Regression (HC-PLSR) is an efficient tool for metamodelling of nonlinear dynamic models

    PubMed Central

    2011-01-01

    Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback loops. Conclusions HC-PLSR is a promising approach for metamodelling in systems biology, especially for highly nonlinear or non-monotone parameter to phenotype maps. The algorithm can be flexibly adjusted to suit the complexity of the dynamic model behaviour, inviting automation in the metamodelling of complex systems. PMID:21627852

  4. Hydrodynamics-based functional forms of activity metabolism: a case for the power-law polynomial function in animal swimming energetics.

    PubMed

    Papadopoulos, Anthony

    2009-01-01

    The first-degree power-law polynomial function is frequently used to describe activity metabolism for steady swimming animals. This function has been used in hydrodynamics-based metabolic studies to evaluate important parameters of energetic costs, such as the standard metabolic rate and the drag power indices. In theory, however, the power-law polynomial function of any degree greater than one can be used to describe activity metabolism for steady swimming animals. In fact, activity metabolism has been described by the conventional exponential function and the cubic polynomial function, although only the power-law polynomial function models drag power since it conforms to hydrodynamic laws. Consequently, the first-degree power-law polynomial function yields incorrect parameter values of energetic costs if activity metabolism is governed by the power-law polynomial function of any degree greater than one. This issue is important in bioenergetics because correct comparisons of energetic costs among different steady swimming animals cannot be made unless the degree of the power-law polynomial function derives from activity metabolism. In other words, a hydrodynamics-based functional form of activity metabolism is a power-law polynomial function of any degree greater than or equal to one. Therefore, the degree of the power-law polynomial function should be treated as a parameter, not as a constant. This new treatment not only conforms to hydrodynamic laws, but also ensures correct comparisons of energetic costs among different steady swimming animals. Furthermore, the exponential power-law function, which is a new hydrodynamics-based functional form of activity metabolism, is a special case of the power-law polynomial function. Hence, the link between the hydrodynamics of steady swimming and the exponential-based metabolic model is defined.
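
    A short sketch of the abstract's central point, treating the power-law degree as a fitted parameter rather than fixing it at one. The functional form MR(U) = SMR + k·U^d is an illustrative reading of the abstract, and the speed and metabolic-rate numbers are synthetic placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_law_poly(U, smr, k, d):
    """Activity metabolism vs. swimming speed with the degree d left free."""
    return smr + k * U ** d

U = np.linspace(0.2, 2.0, 25)                     # swimming speed (e.g. body lengths/s)
rng = np.random.default_rng(1)
MR = power_law_poly(U, smr=40.0, k=15.0, d=2.4) + rng.normal(0.0, 1.0, U.size)

popt, _ = curve_fit(power_law_poly, U, MR, p0=[30.0, 10.0, 2.0])
smr_hat, k_hat, d_hat = popt
print(f"SMR = {smr_hat:.1f}, k = {k_hat:.1f}, fitted degree d = {d_hat:.2f}")
```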

  5. Stochastic Estimation via Polynomial Chaos

    DTIC Science & Technology

    2015-10-01

    AFRL-RW-EG-TR-2015-108. Stochastic Estimation via Polynomial Chaos. Douglas V. Nance, Air Force Research Laboratory. Reporting period: 20-04-2015 to 07-08-2015. This expository report discusses fundamental aspects of the polynomial chaos method for representing the properties of second order stochastic …

  6. Vehicle Sprung Mass Estimation for Rough Terrain

    DTIC Science & Technology

    2011-03-01

    … distributions are greater than zero. The multivariate polynomials are functions of the Legendre polynomials (Poularikas, 1999). … developed methods based on polynomial chaos theory and on the maximum likelihood approach to estimate the most likely value of the vehicle sprung mass. The polynomial chaos estimator is compared to benchmark algorithms including recursive least squares, recursive total least squares, extended …

  7. Degenerate r-Stirling Numbers and r-Bell Polynomials

    NASA Astrophysics Data System (ADS)

    Kim, T.; Yao, Y.; Kim, D. S.; Jang, G.-W.

    2018-01-01

    The purpose of this paper is to exploit umbral calculus in order to derive some properties, recurrence relations, and identities related to the degenerate r-Stirling numbers of the second kind and the degenerate r-Bell polynomials. Especially, we will express the degenerate r-Bell polynomials as linear combinations of many well-known families of special polynomials.

  8. From Chebyshev to Bernstein: A Tour of Polynomials Small and Large

    ERIC Educational Resources Information Center

    Boelkins, Matthew; Miller, Jennifer; Vugteveen, Benjamin

    2006-01-01

    Consider the family of monic polynomials of degree n having zeros at -1 and +1 and all their other real zeros in between these two values. This article explores the size of these polynomials using the supremum of the absolute value on [-1, 1], showing that scaled Chebyshev and Bernstein polynomials give the extremes.

  9. Improving Global Models of Remotely Sensed Ocean Chlorophyll Content Using Partial Least Squares and Geographically Weighted Regression

    NASA Astrophysics Data System (ADS)

    Gholizadeh, H.; Robeson, S. M.

    2015-12-01

    Empirical models have been widely used to estimate global chlorophyll content from remotely sensed data. Here, we focus on the standard NASA empirical models that use blue-green band ratios. These band ratio ocean color (OC) algorithms are in the form of fourth-order polynomials and the parameters of these polynomials (i.e. coefficients) are estimated from the NASA bio-Optical Marine Algorithm Data set (NOMAD). Most of the points in this data set have been sampled from tropical and temperate regions. However, polynomial coefficients obtained from this data set are used to estimate chlorophyll content in all ocean regions with different properties such as sea-surface temperature, salinity, and downwelling/upwelling patterns. Further, the polynomial terms in these models are highly correlated. In sum, the limitations of these empirical models are as follows: 1) the independent variables within the empirical models, in their current form, are correlated (multicollinear), and 2) current algorithms are global approaches and are based on the spatial stationarity assumption, so they are independent of location. Multicollinearity problem is resolved by using partial least squares (PLS). PLS, which transforms the data into a set of independent components, can be considered as a combined form of principal component regression (PCR) and multiple regression. Geographically weighted regression (GWR) is also used to investigate the validity of spatial stationarity assumption. GWR solves a regression model over each sample point by using the observations within its neighbourhood. PLS results show that the empirical method underestimates chlorophyll content in high latitudes, including the Southern Ocean region, when compared to PLS (see Figure 1). Cluster analysis of GWR coefficients also shows that the spatial stationarity assumption in empirical models is not likely a valid assumption.
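
    For reference, the band-ratio ocean color algorithms mentioned above take the form of a fourth-order polynomial in the log of a blue-green reflectance ratio. The sketch below shows that functional form only; the coefficients are placeholders for illustration, not the operational NASA values, and the reflectance inputs are invented.

```python
import numpy as np

a = np.array([0.37, -3.07, 1.93, 0.65, -1.53])     # placeholder a0..a4, not NASA's

def chlorophyll_band_ratio(Rrs_blue_max, Rrs_green):
    """Chlorophyll estimate from a maximum-band-ratio quartic polynomial:
    log10(chl) = a0 + a1*R + a2*R^2 + a3*R^3 + a4*R^4, R = log10(blue/green)."""
    R = np.log10(Rrs_blue_max / Rrs_green)
    log_chl = sum(a_i * R ** i for i, a_i in enumerate(a))
    return 10.0 ** log_chl

print(chlorophyll_band_ratio(Rrs_blue_max=0.008, Rrs_green=0.004))   # mg m^-3
```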

  10. Finding the Best-Fit Polynomial Approximation in Evaluating Drill Data: the Application of a Generalized Inverse Matrix / Poszukiwanie Najlepszej ZGODNOŚCI W PRZYBLIŻENIU Wielomianowym Wykorzystanej do Oceny Danych Z ODWIERTÓW - Zastosowanie UOGÓLNIONEJ Macierzy Odwrotnej

    NASA Astrophysics Data System (ADS)

    Karakus, Dogan

    2013-12-01

    In mining, various estimation models are used to accurately assess the size and the grade distribution of an ore body. The estimation of the positional properties of unknown regions using random samples with known positional properties was first performed using polynomial approximations. Although the emergence of computer technologies and statistical evaluation of random variables after the 1950s rendered the polynomial approximations less important, theoretically the best surface passing through the random variables can be expressed as a polynomial approximation. In geoscience studies, in which the number of random variables is high, reliable solutions can be obtained only with high-order polynomials. Finding the coefficients of these types of high-order polynomials can be computationally intensive. In this study, the solution coefficients of high-order polynomials were calculated using a generalized inverse matrix method. A computer algorithm was developed to calculate the polynomial degree giving the best regression between the values obtained for solutions of different polynomial degrees and random observational data with known values, and this solution was tested with data derived from a practical application. In this application, the calorie values for data from 83 drilling points in a coal site located in southwestern Turkey were used, and the results are discussed in the context of this study.
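
    A small sketch of the generalized-inverse route described above: build the Vandermonde-type design matrix for each candidate degree, solve the least squares problem with the Moore-Penrose pseudoinverse, and compare residuals across degrees. The drill data are replaced here by a synthetic one-dimensional stand-in for brevity (the actual study is spatial).

```python
import numpy as np

rng = np.random.default_rng(2)
x = np.sort(rng.uniform(0.0, 10.0, 83))               # stand-in for drill-hole positions
y = 5 + 0.8 * x - 0.05 * x**2 + rng.normal(0, 0.3, x.size)   # stand-in calorie values

def pinv_polyfit(x, y, degree):
    """Least squares polynomial fit via the generalized (pseudo-)inverse."""
    A = np.vander(x, degree + 1, increasing=True)      # columns 1, x, x^2, ...
    coeffs = np.linalg.pinv(A) @ y                     # Moore-Penrose solution
    residual = np.linalg.norm(A @ coeffs - y)
    return coeffs, residual

for deg in range(1, 6):
    _, res = pinv_polyfit(x, y, deg)
    print(f"degree {deg}: residual norm {res:.3f}")
```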

  11. Optimization of mucilage extraction from chia seeds (Salvia hispanica L.) using response surface methodology.

    PubMed

    Orifici, Stefania C; Capitani, Marianela I; Tomás, Mabel C; Nolasco, Susana M

    2018-02-25

    Chia mucilage has potential application as a functional ingredient; advances in maximizing its extraction yield could represent a significant technological and economic impact for the food industry. Thus, first, the effect of mechanical agitation time (1-3 h) on the exudation of chia mucilage was analyzed. Then, response surface methodology was used to determine the optimal combination of the independent variables temperature (15-85 °C) and seed:water ratio (1:12-1:40.8 w/v) for a 2 h exudation giving maximum chia mucilage yield. Experiments were designed according to a central composite rotatable design. A second-order polynomial model predicted the variation in mucilage extraction yield with temperature and seed:water ratio. The optimal operating conditions were found to be a temperature of 85 °C and a seed:water ratio of 1:31 (w/v), reaching an experimental extraction yield of 116 ± 0.21 g kg⁻¹ (dry basis). The mucilage obtained exhibited good functional properties, mainly in terms of water-holding capacity, emulsifying activity, and emulsion stability. The results obtained show that temperature, seed:water ratio, and exudation time are important process variables that affect the extraction yield and the quality of the chia mucilage, determined according to its physicochemical and functional properties. © 2018 Society of Chemical Industry.

  12. Study of the filtration performance of a plain weave fabric filter using response surface methodology.

    PubMed

    Qian, Fuping; Wang, Haigang

    2010-04-15

    The gas-solid two-phase flows in a plain weave fabric filter were simulated by computational fluid dynamics (CFD), with the warps and wefts of the fabric made of filaments of different dimensions. The numerical solutions were carried out using the commercial CFD code Fluent 6.1. The filtration performance of the plain weave fabric filter was calculated for different geometry parameters and operating conditions, including the horizontal distance, the vertical distance and the face velocity. The effects of the geometry parameters and operating condition on filtration efficiency and pressure drop were studied using response surface methodology (RSM) by means of the statistical software Minitab V14, and two second-order polynomial models were obtained with regard to the effect of the three factors stated above. Moreover, the models were refined by dismissing the insignificant terms. The results show that the horizontal distance, vertical distance and face velocity all play an important role in influencing the filtration efficiency and pressure drop of plain weave fabric filters. A horizontal distance of 3.8 times the fiber diameter, a vertical distance of 4.0 times the fiber diameter and a Reynolds number of 0.98 are found to be the optimal conditions to achieve the highest filtration efficiency at the same face velocity, while maintaining an acceptable pressure drop. 2009 Elsevier B.V. All rights reserved.

  13. Analysis of the inter- and extracellular formation of platinum nanoparticles by Fusarium oxysporum f. sp. lycopersici using response surface methodology

    NASA Astrophysics Data System (ADS)

    Riddin, T. L.; Gericke, M.; Whiteley, C. G.

    2006-07-01

    Fusarium oxysporum fungal strain was screened and found to be successful for the inter- and extracellular production of platinum nanoparticles. Nanoparticle formation was visually observed, over time, by the colour of the extracellular solution and/or the fungal biomass turning from yellow to dark brown, and their concentration was determined from the amount of residual hexachloroplatinic acid measured from a standard curve at 456 nm. The extracellular nanoparticles were characterized by transmission electron microscopy. Nanoparticles of varying size (10-100 nm) and shape (hexagons, pentagons, circles, squares, rectangles) were produced at both the extracellular and intercellular levels by the Fusarium oxysporum. The particles precipitate out of solution and bioaccumulate by nucleation either intercellularly, on the cell wall/membrane, or extracellularly in the surrounding medium. The importance of pH, temperature and hexachloroplatinic acid (H2PtCl6) concentration in nanoparticle formation was examined through the use of a statistical response surface methodology. Only the extracellular production of nanoparticles proved to be statistically significant, with a concentration yield of 4.85 mg l⁻¹ estimated by a first-order regression model. From a second-order polynomial regression, the predicted yield of nanoparticles increased to 5.66 mg l⁻¹ and, after a backward step, regression gave a final model with a yield of 6.59 mg l⁻¹.

  14. Umbral orthogonal polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lopez-Sendino, J. E.; del Olmo, M. A.

    2010-12-23

    We present an umbral operator version of the classical orthogonal polynomials. We obtain three families which are the umbral counterpart of the Jacobi, Laguerre and Hermite polynomials in the classical case.

  15. Explaining Support Vector Machines: A Color Based Nomogram

    PubMed Central

    Van Belle, Vanya; Van Calster, Ben; Van Huffel, Sabine; Suykens, Johan A. K.; Lisboa, Paulo

    2016-01-01

    Problem setting Support vector machines (SVMs) are very popular tools for classification, regression and other problems. Due to the large choice of kernels they can be applied with, a large variety of data can be analysed using these tools. Machine learning owes its popularity to the good performance of the resulting models. However, interpreting the models is far from obvious, especially when non-linear kernels are used. Hence, the methods are used as black boxes. As a consequence, the use of SVMs is less supported in areas where interpretability is important and where people are held responsible for the decisions made by models. Objective In this work, we investigate whether SVMs using linear, polynomial and RBF kernels can be explained such that interpretations for model-based decisions can be provided. We further indicate when SVMs can be explained and in which situations interpretation of SVMs is (hitherto) not possible. Here, explainability is defined as the ability to produce the final decision based on a sum of contributions which depend on one single or at most two input variables. Results Our experiments on simulated and real-life data show that explainability of an SVM depends on the chosen parameter values (degree of polynomial kernel, width of RBF kernel and regularization constant). When several combinations of parameter values yield the same cross-validation performance, combinations with a lower polynomial degree or a larger kernel width have a higher chance of being explainable. Conclusions This work summarizes SVM classifiers obtained with linear, polynomial and RBF kernels in a single plot. Linear and polynomial kernels up to the second degree are represented exactly. For other kernels an indication of the reliability of the approximation is presented. The complete methodology is available as an R package and two apps and a movie are provided to illustrate the possibilities offered by the method. PMID:27723811
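
    A minimal sketch of the setting the paper studies: the same data fitted with polynomial kernels of several degrees, so that parameter choices with comparable accuracy can be compared for interpretability. The dataset, degrees, and hyperparameters are illustrative placeholders; the paper's own tooling is an R package, whereas this uses scikit-learn.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=6, random_state=0)

for degree in (1, 2, 3, 5):
    clf = SVC(kernel="poly", degree=degree, C=1.0, gamma="scale")
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"polynomial kernel, degree {degree}: CV accuracy {acc:.3f}")

# Per the paper, among settings with similar accuracy, the lower polynomial
# degree (or a wider RBF kernel) tends to be the easier model to explain.
```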

  16. Design and Use of a Learning Object for Finding Complex Polynomial Roots

    ERIC Educational Resources Information Center

    Benitez, Julio; Gimenez, Marcos H.; Hueso, Jose L.; Martinez, Eulalia; Riera, Jaime

    2013-01-01

    Complex numbers are essential in many fields of engineering, but students often fail to have a natural insight of them. We present a learning object for the study of complex polynomials that graphically shows that any complex polynomials has a root and, furthermore, is useful to find the approximate roots of a complex polynomial. Moreover, we…
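
    A quick numerical companion to the idea above: every complex polynomial has a root (fundamental theorem of algebra), and approximate roots are easy to obtain, here with NumPy's companion-matrix root finder. The example polynomial is arbitrary.

```python
import numpy as np

# p(z) = z^4 + (1 - 2j) z^2 + 3j, coefficients from highest to lowest degree
coeffs = [1, 0, 1 - 2j, 0, 3j]

roots = np.roots(coeffs)
print(roots)                           # four complex roots
print(np.polyval(coeffs, roots))       # residuals, all close to zero
```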

  17. Extending a Property of Cubic Polynomials to Higher-Degree Polynomials

    ERIC Educational Resources Information Center

    Miller, David A.; Moseley, James

    2012-01-01

    In this paper, the authors examine a property that holds for all cubic polynomials given two zeros. This property is discovered after reviewing a variety of ways to determine the equation of a cubic polynomial given specific conditions through algebra and calculus. At the end of the article, they will connect the property to a very famous method…

  18. Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields

    NASA Astrophysics Data System (ADS)

    Milstead, Jonathan

    The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting-field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. We primarily make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials where the ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.

  19. On polynomial preconditioning for indefinite Hermitian matrices

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1989-01-01

    The minimal residual method is studied combined with polynomial preconditioning for solving large linear systems (Ax = b) with indefinite Hermitian coefficient matrices (A). The standard approach for choosing the polynomial preconditioners leads to preconditioned systems which are positive definite. Here, a different strategy is studied which leaves the preconditioned coefficient matrix indefinite. More precisely, the polynomial preconditioner is designed to cluster the positive, resp. negative eigenvalues of A around 1, resp. around some negative constant. In particular, it is shown that such indefinite polynomial preconditioners can be obtained as the optimal solutions of a certain two parameter family of Chebyshev approximation problems. Some basic results are established for these approximation problems and a Remez type algorithm is sketched for their numerical solution. The problem of selecting the parameters such that the resulting indefinite polynomial preconditioners speeds up the convergence of minimal residual method optimally is also addressed. An approach is proposed based on the concept of asymptotic convergence factors. Finally, some numerical examples of indefinite polynomial preconditioners are given.

  20. The algebra of dual -1 Hahn polynomials and the Clebsch-Gordan problem of sl_{-1}(2)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Genest, Vincent X.; Vinet, Luc; Zhedanov, Alexei

    The algebra H of the dual -1 Hahn polynomials is derived and shown to arise in the Clebsch-Gordan problem of sl_{-1}(2). The dual -1 Hahn polynomials are the bispectral polynomials of a discrete argument obtained from the q→-1 limit of the dual q-Hahn polynomials. The Hopf algebra sl_{-1}(2) has four generators including an involution; it is also a q→-1 limit of the quantum algebra sl_q(2) and, furthermore, the dynamical algebra of the parabose oscillator. The algebra H, a two-parameter generalization of u(2) with an involution as an additional generator, is first derived from the recurrence relation of the -1 Hahn polynomials. It is then shown that H can be realized in terms of the generators of two added sl_{-1}(2) algebras, so that the Clebsch-Gordan coefficients of sl_{-1}(2) are dual -1 Hahn polynomials. An irreducible representation of H involving five-diagonal matrices and connected to the difference equation of the dual -1 Hahn polynomials is constructed.

  1. [Using fractional polynomials to estimate the safety threshold of fluoride in drinking water].

    PubMed

    Pan, Shenling; An, Wei; Li, Hongyan; Yang, Min

    2014-01-01

    The aim was to study the dose-response relationship between fluoride content in drinking water and the prevalence of dental fluorosis at the national scale, and then to determine the safety threshold of fluoride in drinking water. Meta-regression analysis was applied to the 2001-2002 national endemic fluorosis survey data of key wards. First, a fractional polynomial (FP) was adopted to establish a fixed-effect model and determine the best FP structure; then restricted maximum likelihood (REML) was adopted to estimate the between-study variance, and the best random-effect model was established. The best FP structure was a first-order logarithmic transformation. Based on the best random-effect model, the benchmark dose (BMD) of fluoride in drinking water and its lower limit (BMDL) were calculated as 0.98 mg/L and 0.78 mg/L. Fluoride in drinking water could explain only 35.8% of the variability in the prevalence; among other influencing factors, ward type was significant, while temperature conditions and altitude were not. The fractional polynomial-based meta-regression method is simple and practical and provides a good fit; based on it, the safety threshold of fluoride in drinking water in China is determined as 0.8 mg/L.
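
    A rough sketch of the first-order logarithmic fractional-polynomial form described above, in its fixed-effect simplification, with a benchmark dose read off as the fluoride level at which predicted prevalence exceeds background by a chosen benchmark response. The survey numbers, the background definition, and the 5% benchmark are illustrative assumptions, not the study's data.

```python
import numpy as np

fluoride = np.array([0.3, 0.5, 0.8, 1.0, 1.5, 2.0, 3.0])        # mg/L (synthetic)
prevalence = np.array([0.02, 0.04, 0.09, 0.14, 0.25, 0.33, 0.45])

# prevalence ≈ b0 + b1 * ln(dose): first-order logarithmic FP, fixed-effect version
X = np.column_stack([np.ones_like(fluoride), np.log(fluoride)])
b0, b1 = np.linalg.lstsq(X, prevalence, rcond=None)[0]

bmr = 0.05                                    # benchmark response above background
background = b0 + b1 * np.log(fluoride.min())
bmd = np.exp((background + bmr - b0) / b1)    # dose where prevalence = background + BMR
print(f"illustrative BMD ≈ {bmd:.2f} mg/L")
```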

  2. A method for the design of unsymmetrical optical systems using freeform surfaces

    NASA Astrophysics Data System (ADS)

    Reshidko, Dmitry; Sasian, Jose

    2017-11-01

    Optical systems that do not have axial symmetry can provide useful and unique solutions to certain imaging problems. However, the complexity of the optical design task grows as the degrees of symmetry are reduced and lost: there are more aberration terms to control, and achieving a sharp image over a wide field of view at fast optical speeds becomes challenging. Plane-symmetric optical systems represent a large family of practical non-axially symmetric systems that are simple enough to be easily described and thus are well understood. Design methodologies and aberration theory of plane-symmetric optical systems have been discussed in the literature, and various interesting solutions have been reported [1-4]. The technique of confocal systems, little discussed in the literature, is effective for the design of unsymmetrical optics. A confocal unsymmetrical system is constructed in such a way that there is a sharp image, surface after surface, along a given ray called the optical axis ray (OAR). It is possible to show that such a system can have a reduced number of field aberrations and that the system will behave closer to an axially symmetric system [5-6]. In this paper, we review a methodology for the design of unsymmetrical optical systems. We utilize an aspherical/freeform surface constructed by superposition of a conic, expressed in a coordinate system centered on the off-axis surface segment rather than on the axis of symmetry, and an XY polynomial. The conic part of the aspherical/freeform surface describes the base shape that is required to achieve stigmatic imaging, surface after surface, along the OAR. The XY polynomial adds a more refined shape description to the surface sag and provides effective degrees of freedom for higher-order aberration correction. This aspheric/freeform surface profile is able to closely model the ideal reflective surface and allows one to approach the optical design intelligently. Examples of two- and three-mirror unobscured wide field-of-view reflective systems are provided to show how the methods and the corresponding aspheric/freeform surface are applied. We also demonstrate how the method can be extended to design a monolithic freeform objective.
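
    For orientation, a sag function of the "conic plus XY polynomial" type described above can be written in a few lines. This sketch uses an on-axis conic for brevity (the paper expresses the conic in a coordinate system centered on the off-axis segment), and every coefficient value is an arbitrary placeholder.

```python
import numpy as np

def freeform_sag(x, y, c=1.0 / 200.0, k=-1.2, xy_coeffs=None):
    """Surface sag z(x, y): conic base term plus a low-order XY polynomial.

    c is the vertex curvature, k the conic constant; xy_coeffs maps (i, j)
    exponent pairs to coefficients a_ij of terms a_ij * x**i * y**j.
    """
    if xy_coeffs is None:
        xy_coeffs = {(2, 0): 1e-5, (0, 2): -2e-5, (2, 1): 3e-8, (0, 3): 1e-8}
    r2 = x**2 + y**2
    conic = c * r2 / (1.0 + np.sqrt(1.0 - (1.0 + k) * c**2 * r2))
    poly = sum(a * x**i * y**j for (i, j), a in xy_coeffs.items())
    return conic + poly

print(freeform_sag(x=5.0, y=-3.0))     # sag in the same units as x and y (e.g. mm)
```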

  3. Interbasis expansions in the Zernike system

    NASA Astrophysics Data System (ADS)

    Atakishiyev, Natig M.; Pogosyan, George S.; Wolf, Kurt Bernardo; Yakhno, Alexander

    2017-10-01

    The differential equation with free boundary conditions on the unit disk that was proposed by Frits Zernike in 1934 to find Jacobi polynomial solutions (indicated as I) serves to define a classical system and a quantum system which have been found to be superintegrable. We have determined two new orthogonal polynomial solutions (indicated as II and III) that are separable and involve Legendre and Gegenbauer polynomials. Here we report on their three interbasis expansion coefficients: between the I-II and I-III bases, they are given by ${}_3F_2(\cdots|1)$ polynomials that are also special su(2) Clebsch-Gordan coefficients and Hahn polynomials. Between the II-III bases, we find an expansion expressed by ${}_4F_3(\cdots|1)$'s and Racah polynomials that are related to the Wigner 6j coefficients.

  4. Gravity Gradient Tensor of Arbitrary 3D Polyhedral Bodies with up to Third-Order Polynomial Horizontal and Vertical Mass Contrasts

    NASA Astrophysics Data System (ADS)

    Ren, Zhengyong; Zhong, Yiyuan; Chen, Chaojian; Tang, Jingtian; Kalscheuer, Thomas; Maurer, Hansruedi; Li, Yang

    2018-03-01

    During the last 20 years, geophysicists have developed great interest in using gravity gradient tensor signals to study bodies of anomalous density in the Earth. Deriving exact solutions of the gravity gradient tensor signals has become a dominating task in exploration geophysics or geodetic fields. In this study, we developed a compact and simple framework to derive exact solutions of gravity gradient tensor measurements for polyhedral bodies, in which the density contrast is represented by a general polynomial function. The polynomial mass contrast can continuously vary in both horizontal and vertical directions. In our framework, the original three-dimensional volume integral of gravity gradient tensor signals is transformed into a set of one-dimensional line integrals along edges of the polyhedral body by sequentially invoking the volume and surface gradient (divergence) theorems. In terms of an orthogonal local coordinate system defined on these edges, exact solutions are derived for these line integrals. We successfully derived a set of unified exact solutions of gravity gradient tensors for constant, linear, quadratic and cubic polynomial orders. The exact solutions for constant and linear cases cover all previously published vertex-type exact solutions of the gravity gradient tensor for a polygonal body, though the associated algorithms may differ in numerical stability. In addition, to our best knowledge, it is the first time that exact solutions of gravity gradient tensor signals are derived for a polyhedral body with a polynomial mass contrast of order higher than one (that is quadratic and cubic orders). Three synthetic models (a prismatic body with depth-dependent density contrasts, an irregular polyhedron with linear density contrast and a tetrahedral body with horizontally and vertically varying density contrasts) are used to verify the correctness and the efficiency of our newly developed closed-form solutions. Excellent agreements are obtained between our solutions and other published exact solutions. In addition, stability tests are performed to demonstrate that our exact solutions can safely be used to detect shallow subsurface targets.

  5. Zeros and logarithmic asymptotics of Sobolev orthogonal polynomials for exponential weights

    NASA Astrophysics Data System (ADS)

    Díaz Mendoza, C.; Orive, R.; Pijeira Cabrera, H.

    2009-12-01

    We obtain the (contracted) weak zero asymptotics for orthogonal polynomials with respect to Sobolev inner products with exponential weights on the real semiaxis, of the form …, with γ>0, which include as particular cases the counterparts of the so-called Freud (i.e., when φ has polynomial growth at infinity) and Erdős (when φ grows faster than any polynomial at infinity) weights. In addition, the boundedness of the distance of the zeros of these Sobolev orthogonal polynomials to the convex hull of the support and, as a consequence, a result on logarithmic asymptotics are derived.

  6. Combinatorial theory of Macdonald polynomials I: proof of Haglund's formula.

    PubMed

    Haglund, J; Haiman, M; Loehr, N

    2005-02-22

    Haglund recently proposed a combinatorial interpretation of the modified Macdonald polynomials H(mu). We give a combinatorial proof of this conjecture, which establishes the existence and integrality of H(mu). As corollaries, we obtain the cocharge formula of Lascoux and Schutzenberger for Hall-Littlewood polynomials, a formula of Sahi and Knop for Jack's symmetric functions, a generalization of this result to the integral Macdonald polynomials J(mu), a formula for H(mu) in terms of Lascoux-Leclerc-Thibon polynomials, and combinatorial expressions for the Kostka-Macdonald coefficients K(lambda,mu) when mu is a two-column shape.

  7. Multi-indexed (q-)Racah polynomials

    NASA Astrophysics Data System (ADS)

    Odake, Satoru; Sasaki, Ryu

    2012-09-01

    As the second stage of the project multi-indexed orthogonal polynomials, we present, in the framework of ‘discrete quantum mechanics’ with real shifts in one dimension, the multi-indexed (q-)Racah polynomials. They are obtained from the (q-)Racah polynomials by the multiple application of the discrete analogue of the Darboux transformations or the Crum-Krein-Adler deletion of ‘virtual state’ vectors, in a similar way to the multi-indexed Laguerre and Jacobi polynomials reported earlier. The virtual state vectors are the ‘solutions’ of the matrix Schrödinger equation with negative ‘eigenvalues’, except for one of the two boundary points.

  8. Polynomial Chaos decomposition applied to stochastic dosimetry: study of the influence of the magnetic field orientation on the pregnant woman exposure at 50 Hz.

    PubMed

    Liorni, I; Parazzini, M; Fiocchi, S; Guadagnin, V; Ravazzani, P

    2014-01-01

    Polynomial Chaos (PC) is a decomposition method used to build a meta-model which approximates the unknown response of a model. In this paper the PC method is applied to stochastic dosimetry to assess the variability of human exposure due to changes in the orientation of the B-field vector with respect to the human body. In detail, the analysis of the exposure of a pregnant woman at 7 months of gestational age is carried out in order to build a statistical meta-model of the induced electric field for each fetal tissue and for the fetal whole body by means of the PC expansion, as a function of the B-field orientation, considering uniform exposure at 50 Hz.
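
    A minimal sketch of a regression-based polynomial chaos surrogate with a single uniformly distributed input (e.g. a field-orientation angle mapped to [-1, 1]), using Legendre polynomials as the orthogonal basis. The "model" being emulated is a stand-in function, not the dosimetric solver, and the expansion order and training design are arbitrary choices.

```python
import numpy as np
from numpy.polynomial import legendre

def full_model(xi):
    # Placeholder for the expensive simulation (e.g. induced E-field vs. orientation)
    return np.exp(0.5 * xi) * np.cos(2.0 * xi)

order = 8
xi_train = np.linspace(-1.0, 1.0, 40)                 # training designs in [-1, 1]
y_train = full_model(xi_train)

# Least squares fit of the PC coefficients: y ≈ sum_k c_k P_k(xi)
V = legendre.legvander(xi_train, order)
coeffs, *_ = np.linalg.lstsq(V, y_train, rcond=None)

xi_test = np.linspace(-1.0, 1.0, 7)
print(legendre.legval(xi_test, coeffs) - full_model(xi_test))   # small residuals
```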

  9. Conformal Galilei algebras, symmetric polynomials and singular vectors

    NASA Astrophysics Data System (ADS)

    Křižka, Libor; Somberg, Petr

    2018-01-01

    We classify and explicitly describe homomorphisms of Verma modules for conformal Galilei algebras cga_ℓ (d,C) with d=1 for any integer value ℓ \\in N. The homomorphisms are uniquely determined by singular vectors as solutions of certain differential operators of flag type and identified with specific polynomials arising as coefficients in the expansion of a parametric family of symmetric polynomials into power sum symmetric polynomials.

  10. Identities associated with Milne-Thomson type polynomials and special numbers.

    PubMed

    Simsek, Yilmaz; Cakic, Nenad

    2018-01-01

    The purpose of this paper is to give identities and relations including the Milne-Thomson polynomials, the Hermite polynomials, the Bernoulli numbers, the Euler numbers, the Stirling numbers, the central factorial numbers, and the Cauchy numbers. By using fermionic and bosonic p -adic integrals, we derive some new relations and formulas related to these numbers and polynomials, and also the combinatorial sums.

  11. Reactive Collisions and Final State Analysis in Hypersonic Flight Regime

    DTIC Science & Technology

    2016-09-13

    Kelvin.[7] The gas-phase, surface reactions and energy transfer at these temperatures are essentially uncharacterized and the experimental methodologies … high temperatures (1000 to 20000 K) and compared with results from experimentally derived thermodynamics quantities from the NASA CEA (NASA Chemical … with a reproducing kernel Hilbert space (RKHS) method[13] combined with Legendre polynomials; (2) quasi-classical trajectory (QCT) calculations to study

  12. Principal Curves and Surfaces

    DTIC Science & Technology

    1984-11-01

    well. The subspace is found by using the usual linear eigenvector solution in the new enlarged space. This technique was first suggested by Gnanadesikan and Wilk (1966, 1968), and a good description can be found in Gnanadesikan (1977). They suggested using polynomial functions of the original p co… Heidelberg, Springer Verlag. Gnanadesikan, R. (1977), Methods for Statistical Data Analysis of Multivariate Observations, Wiley, New York

  13. Optical Estimation of the 3D Shape of a Solar Illuminated, Reflecting Satellite Surface

    NASA Astrophysics Data System (ADS)

    Antolin, J.; Yu, Z.; Prasad, S.

    2016-09-01

    The spatial distribution of the polarized component of the power reflected by a macroscopically smooth but microscopically roughened curved surface under highly directional illumination, as characterized by an appropriate bi-directional reflectance distribution function (BRDF), carries information about the three-dimensional (3D) shape of the surface. This information can be exploited to recover the surface shape locally under rather general conditions whenever power reflectance data for at least two different illumination or observation directions can be obtained. We present here two different parametric approaches for surface reconstruction, amounting to the recovery of the surface parameters that are either the global parameters of the family to which the surface is known a priori to belong or the coefficients of a low-order polynomial that can be employed to characterize a smoothly varying surface locally over the observed patch.

  14. Response surface optimization of substrates for thermophilic anaerobic codigestion of sewage sludge and food waste.

    PubMed

    Kim, Hyun-Woo; Shin, Hang-Sik; Han, Sun-Kee; Oh, Sae-Eun

    2007-03-01

    This study investigated the effects of food waste constituents on the thermophilic (55 degrees C) anaerobic codigestion of sewage sludge and food waste by using statistical techniques based on biochemical methane potential tests. Various combinations of grain, vegetable, and meat as cosubstrate were tested, and the data on methane potential (MP), methane production rate (MPR), and the first-order kinetic constant of hydrolysis (kH) were collected for further analyses. Response surface methodology with a Box-Behnken design can efficiently verify the effects of the three variables, and their interactions, on the responses. MP was mainly affected by grain, whereas MPR and kH were affected by both vegetable and meat. The estimated polynomial regression models can properly explain the variability of the experimental data, with high adjusted R² values of 0.727, 0.836, and 0.915, respectively. By applying a series of optimization techniques, it was possible to find the proper criteria for the cosubstrate. The optimal cosubstrate region was suggested based on overlay contours of overall mean responses. With the desirability contour plots, it was found that the optimal conditions of cosubstrate for the maximum MPR (56.6 mL of CH4/g of chemical oxygen demand [COD]/day) were 0.71 g of COD/L of grain, 0.18 g of COD/L of vegetable, and 0.38 g of COD/L of meat by the simultaneous consideration of MP, MPR, and kH. Within the range of each factor examined, the corresponding optimal ratio of sewage sludge to cosubstrate was 71:29 on a COD basis. Elaborate discussions could yield practical operational strategies for the enhanced thermophilic anaerobic codigestion of sewage sludge and food waste.

  15. Application of response surface methodology in optimization of lactic acid fermentation of radish: effect of addition of salt, additives and growth stimulators.

    PubMed

    Joshi, V K; Chauhan, Arjun; Devi, Sarita; Kumar, Vikas

    2015-08-01

    Lactic acid fermentation of radish was conducted using various additives and growth stimulators such as salt (2 %-3 %), lactose, MgSO4 + MnSO4 and mustard (1 %, 1.5 % and 2 %) to optimize the process. Response surface methodology (Design Expert, trial version 8.0.5.2) was applied to the experimental data for the optimization of process variables in lactic acid fermentation of radish. Of the various treatments studied, only those containing ground mustard had an appreciable effect on lactic acid fermentation. Both linear and quadratic terms of the variables studied had a significant effect on the responses studied. The interactions between the variables were also found to contribute to the response at a significant level. The best results were obtained in the treatment with 2.5 % salt, 1.5 % lactose, 1.5 % (MgSO4 + MnSO4) and 1.5 % mustard. These optimized concentrations increased titratable acidity and LAB count, but lowered pH. The second-order polynomial regression model determined that the highest titratable acidity (1.69), lowest pH (2.49) and maximum LAB count (10 × 10⁸ cfu/ml) would be obtained at these concentrations of additives. Among the 30 runs conducted, run 2 had the optimum concentrations of salt (2.5 %), lactose (1.5 %), MgSO4 + MnSO4 (1.5 %) and mustard (1.5 %) for lactic acid fermentation of radish. The values of the different additives and growth stimulators optimized in this study could successfully be employed for the lactic acid fermentation of radish as a postharvest reduction tool and for product development.

  16. Approximating smooth functions using algebraic-trigonometric polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharapudinov, Idris I

    2011-01-14

    The problem under consideration is that of approximating classes of smooth functions by algebraic-trigonometric polynomials of the form $p_n(t) + \tau_m(t)$, where $p_n(t)$ is an algebraic polynomial of degree n and $\tau_m(t) = a_0 + \sum_{k=1}^{m} (a_k \cos k\pi t + b_k \sin k\pi t)$ is a trigonometric polynomial of order m. The precise order of approximation by such polynomials in the classes $W^r_\infty(M)$ and an upper bound for similar approximations in the class $W^r_p(M)$ with 4/3 …

  17. Parameter reduction in nonlinear state-space identification of hysteresis

    NASA Astrophysics Data System (ADS)

    Fakhrizadeh Esfahani, Alireza; Dreesen, Philippe; Tiels, Koen; Noël, Jean-Philippe; Schoukens, Johan

    2018-05-01

    Recent work on black-box polynomial nonlinear state-space modeling for hysteresis identification has provided promising results, but struggles with a large number of parameters due to the use of multivariate polynomials. This drawback is tackled in the current paper by applying a decoupling approach that results in a more parsimonious representation involving univariate polynomials. This work is carried out numerically on input-output data generated by a Bouc-Wen hysteretic model and follows up on earlier work of the authors. The current article discusses the polynomial decoupling approach and explores how to select the number of univariate polynomials together with the polynomial degree. We have found that the presented decoupling approach is able to reduce the number of parameters of the full nonlinear model by up to about 50%, while maintaining a comparable output error level.

  18. Breeding value accuracy estimates for growth traits using random regression and multi-trait models in Nelore cattle.

    PubMed

    Boligon, A A; Baldi, F; Mercadante, M E Z; Lobo, R B; Pereira, R J; Albuquerque, L G

    2011-06-28

    We quantified the potential increase in accuracy of expected breeding value for weights of Nelore cattle, from birth to mature age, using multi-trait and random regression models on Legendre polynomials and B-spline functions. A total of 87,712 weight records from 8144 females were used, recorded every three months from birth to mature age from the Nelore Brazil Program. For random regression analyses, all female weight records from birth to eight years of age (data set I) were considered. From this general data set, a subset was created (data set II), which included only nine weight records: at birth, weaning, 365 and 550 days of age, and 2, 3, 4, 5, and 6 years of age. Data set II was analyzed using random regression and multi-trait models. The model of analysis included the contemporary group as fixed effects and age of dam as a linear and quadratic covariable. In the random regression analyses, average growth trends were modeled using a cubic regression on orthogonal polynomials of age. Residual variances were modeled by a step function with five classes. Legendre polynomials of fourth and sixth order were utilized to model the direct genetic and animal permanent environmental effects, respectively, while third-order Legendre polynomials were considered for maternal genetic and maternal permanent environmental effects. Quadratic polynomials were applied to model all random effects in random regression models on B-spline functions. Direct genetic and animal permanent environmental effects were modeled using three segments or five coefficients, and genetic maternal and maternal permanent environmental effects were modeled with one segment or three coefficients in the random regression models on B-spline functions. For both data sets (I and II), animals ranked differently according to expected breeding value obtained by random regression or multi-trait models. With random regression models, the highest gains in accuracy were obtained at ages with a low number of weight records. The results indicate that random regression models provide more accurate expected breeding values than the traditionally finite multi-trait models. Thus, higher genetic responses are expected for beef cattle growth traits by replacing a multi-trait model with random regression models for genetic evaluation. B-spline functions could be applied as an alternative to Legendre polynomials to model covariance functions for weights from birth to mature age.
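    As a hedged sketch of the covariate construction behind such random regression models (not the paper's software or data), the snippet below builds a Legendre-polynomial basis matrix for ages standardised to [-1, 1]; the ages and the polynomial order are illustrative, and the standard unnormalised Legendre polynomials from numpy are used, whereas genetic evaluations often use a normalised variant.

      import numpy as np
      from numpy.polynomial import legendre

      def legendre_covariates(ages, order):
          """Columns are P_0..P_order evaluated at ages rescaled to [-1, 1]."""
          t = 2.0 * (ages - ages.min()) / (ages.max() - ages.min()) - 1.0
          return legendre.legvander(t, order)

      # Illustrative record ages in days, birth to mature age.
      ages = np.array([1, 240, 365, 550, 730, 1095, 1460, 1825, 2190], dtype=float)
      Phi = legendre_covariates(ages, order=4)    # e.g. fourth order for direct genetic effects
      print(Phi.shape)                            # (9, 5): one column per regression coefficient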

  19. Constructing general partial differential equations using polynomial and neural networks.

    PubMed

    Zjavka, Ladislav; Pedrycz, Witold

    2016-01-01

    Sum fraction terms can approximate multi-variable functions on the basis of discrete observations, replacing a partial differential equation definition with polynomial elementary data relation descriptions. Artificial neural networks commonly transform the weighted sum of inputs to describe overall similarity relationships of trained and new testing input patterns. Differential polynomial neural networks form a new class of neural networks, which construct and solve an unknown general partial differential equation of a function of interest with selected substitution relative terms using non-linear multi-variable composite polynomials. The layers of the network generate simple and composite relative substitution terms whose convergent series combinations can describe partial dependent derivative changes of the input variables. This regression is based on trained generalized partial derivative data relations, decomposed into a multi-layer polynomial network structure. The sigmoidal function, commonly used as a nonlinear activation of artificial neurons, may transform some polynomial items together with the parameters with the aim to improve the polynomial derivative term series ability to approximate complicated periodic functions, as simple low order polynomials are not able to fully make up for the complete cycles. The similarity analysis facilitates substitutions for differential equations or can form dimensional units from data samples to describe real-world problems. Copyright © 2015 Elsevier Ltd. All rights reserved.

  20. Learning polynomial feedforward neural networks by genetic programming and backpropagation.

    PubMed

    Nikolaev, N Y; Iba, H

    2003-01-01

    This paper presents an approach to learning polynomial feedforward neural networks (PFNNs). The approach suggests, first, finding the polynomial network structure by means of a population-based search technique relying on the genetic programming paradigm, and second, further adjusting the weights of the best discovered network by a specially derived backpropagation algorithm for higher order networks with polynomial activation functions. These two stages of the PFNN learning process enable us to identify networks with good training as well as generalization performance. Empirical results show that this approach finds PFNNs that considerably outperform some previous constructive polynomial network algorithms on benchmark time-series processing.

  1. Quasi-kernel polynomials and convergence results for quasi-minimal residual iterations

    NASA Technical Reports Server (NTRS)

    Freund, Roland W.

    1992-01-01

    Recently, Freund and Nachtigal have proposed a novel polynomial-based iteration, the quasi-minimal residual algorithm (QMR), for solving general nonsingular non-Hermitian linear systems. Motivated by the QMR method, we have introduced the general concept of quasi-kernel polynomials, and we have shown that the QMR algorithm is based on a particular instance of quasi-kernel polynomials. In this paper, we continue our study of quasi-kernel polynomials. In particular, we derive bounds for the norms of quasi-kernel polynomials. These results are then applied to obtain convergence theorems both for the QMR method and for a transpose-free variant of QMR, the TFQMR algorithm.

  2. On universal knot polynomials

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Mkrtchyan, R.; Morozov, A.

    2016-02-01

    We present universal knot polynomials for 2- and 3-strand torus knots in the adjoint representation, obtained by universalization of the appropriate Rosso-Jones formula. According to universality, these polynomials coincide with the adjoint-colored HOMFLY and Kauffman polynomials on the SL and SO/Sp lines of Vogel's plane, respectively, and give their exceptional-group counterparts on the exceptional line. We demonstrate that the [m,n]=[n,m] topological invariance, when applicable, takes place on the entire Vogel's plane. We also suggest the universal form of the invariant of the figure-eight knot in the adjoint representation, and suggest the existence of such a universalization for any knot in the adjoint and its descendant representations. Properties of universal polynomials and applications of these results are discussed.

  3. Zernike Basis to Cartesian Transformations

    NASA Astrophysics Data System (ADS)

    Mathar, R. J.

    2009-12-01

    The radial polynomials of the 2D (circular) and 3D (spherical) Zernike functions are tabulated as powers of the radial distance. The reciprocal tabulation of powers of the radial distance in series of radial polynomials is also given, based on projections that take advantage of the orthogonality of the polynomials over the unit interval. They play a role in the expansion of products of the polynomials into sums, which is demonstrated by some examples. Multiplication of the polynomials by the angular bases (azimuth, polar angle) defines the Zernike functions, for which we derive transformations to and from the Cartesian coordinate system centered at the middle of the circle or sphere.

  4. Chaos, Fractals, and Polynomials.

    ERIC Educational Resources Information Center

    Tylee, J. Louis; Tylee, Thomas B.

    1996-01-01

    Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)
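    The Newton-Raphson step referred to above is easy to sketch; the example below is an illustration under stated assumptions (not the article's program) that iterates z <- z - p(z)/p'(z) for p(z) = z^3 - 1, whose three complex roots give the familiar coloured basins when each starting point is coloured by the root it converges to.

      import numpy as np

      def newton_root(z0, coeffs, tol=1e-10, max_iter=50):
          """Newton-Raphson for a polynomial given by coefficients in descending powers."""
          p = np.poly1d(coeffs)
          dp = p.deriv()
          z = z0
          for _ in range(max_iter):
              step = p(z) / dp(z)
              z -= step
              if abs(step) < tol:
                  break
          return z

      # z^3 - 1 = 0: different starting points converge to different cube roots of unity.
      for z0 in (1.0 + 0.5j, -1.0 + 0.3j, -1.0 - 0.3j):
          print(z0, "->", np.round(newton_root(z0, [1, 0, 0, -1]), 6))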

  5. Universal Racah matrices and adjoint knot polynomials: Arborescent knots

    NASA Astrophysics Data System (ADS)

    Mironov, A.; Morozov, A.

    2016-04-01

    By now it is well established that the quantum dimensions of descendants of the adjoint representation can be described in a universal form, independent of a particular family of simple Lie algebras. The Rosso-Jones formula then implies a universal description of the adjoint knot polynomials for torus knots, which in particular unifies the HOMFLY (SUN) and Kauffman (SON) polynomials. For E8 the adjoint representation is also fundamental. We suggest to extend the universality from the dimensions to the Racah matrices and this immediately produces a unified description of the adjoint knot polynomials for all arborescent (double-fat) knots, including twist, 2-bridge and pretzel. Technically we develop together the universality and the "eigenvalue conjecture", which expresses the Racah and mixing matrices through the eigenvalues of the quantum R-matrix, and for dealing with the adjoint polynomials one has to extend it to the previously unknown 6 × 6 case. The adjoint polynomials do not distinguish between mutants and therefore are not very efficient in knot theory, however, universal polynomials in higher representations can probably be better in this respect.

  6. Imaging characteristics of Zernike and annular polynomial aberrations.

    PubMed

    Mahajan, Virendra N; Díaz, José Antonio

    2013-04-01

    The general equations for the point-spread function (PSF) and optical transfer function (OTF) are given for any pupil shape, and they are applied to optical imaging systems with circular and annular pupils. The symmetry properties of the PSF, the real and imaginary parts of the OTF, and the modulation transfer function (MTF) of a system with a circular pupil aberrated by a Zernike circle polynomial aberration are derived. The interferograms and PSFs are illustrated for some typical polynomial aberrations with a sigma value of one wave, and 3D PSFs and MTFs are shown for 0.1 wave. The Strehl ratio is also calculated for polynomial aberrations with a sigma value of 0.1 wave, and shown to be well estimated from the sigma value. The numerical results are compared with the corresponding results in the literature. Because of the same angular dependence of the corresponding annular and circle polynomial aberrations, the symmetry properties of systems with annular pupils aberrated by an annular polynomial aberration are the same as those for a circular pupil aberrated by a corresponding circle polynomial aberration. They are also illustrated with numerical examples.
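    The statement that the Strehl ratio is well estimated from the sigma (RMS wavefront error) value refers to the classical Maréchal-type approximation; a short hedged sketch for the 0.1-wave case mentioned above:

      import numpy as np

      def strehl_from_sigma(sigma_waves):
          """Maréchal-type estimate S ~ exp(-(2*pi*sigma)^2), with sigma in waves."""
          return np.exp(-(2.0 * np.pi * sigma_waves) ** 2)

      print(round(strehl_from_sigma(0.1), 3))    # about 0.674 for a tenth-wave aberration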

  7. Applications of polynomial optimization in financial risk investment

    NASA Astrophysics Data System (ADS)

    Zeng, Meilan; Fu, Hongwei

    2017-09-01

    Recently, polynomial optimization has many important applications in optimization, financial economics and eigenvalues of tensor, etc. This paper studies the applications of polynomial optimization in financial risk investment. We consider the standard mean-variance risk measurement model and the mean-variance risk measurement model with transaction costs. We use Lasserre's hierarchy of semidefinite programming (SDP) relaxations to solve the specific cases. The results show that polynomial optimization is effective for some financial optimization problems.

  8. A Stochastic Mixed Finite Element Heterogeneous Multiscale Method for Flow in Porous Media

    DTIC Science & Technology

    2010-08-01

    applicable for flow in porous media has drawn significant interest in the last few years. Several techniques like generalized polynomial chaos expansions (gPC...represents the stochastic solution as a polynomial approximation. This interpolant is constructed via independent function calls to the deterministic...of orthogonal polynomials [34,38] or sparse grid approximations [39–41]. It is well known that the global polynomial interpolation cannot resolve lo

  9. A Set of Orthogonal Polynomials That Generalize the Racah Coefficients or 6 - j Symbols.

    DTIC Science & Technology

    1978-03-01

    Generalized Hypergeometric Functions, Cambridge Univ. Press, Cambridge, 1966. [11] D. Stanton, Some basic hypergeometric polynomials arising from... Some basic hypergeometric analogues of the classical orthogonal polynomials and applications, to appear. [3] C. de Boor and G. H. Golub, The... Report #1833 A SET OF ORTHOGONAL POLYNOMIALS THAT GENERALIZE THE RACAH COEFFICIENTS OR 6-j SYMBOLS Richard Askey and James Wilson

  10. DIFFERENTIAL CROSS SECTION ANALYSIS IN KAON PHOTOPRODUCTION USING ASSOCIATED LEGENDRE POLYNOMIALS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    P. T. P. HUTAURUK, D. G. IRELAND, G. ROSNER

    2009-04-01

    Angular distributions of differential cross sections from the latest CLAS data sets [6] for the reaction γ + p → K+ + Λ have been analyzed using associated Legendre polynomials. This analysis is based upon theoretical calculations in Ref. 1 where all sixteen observables in kaon photoproduction can be classified into four Legendre classes. Each observable can be described by an expansion of associated Legendre polynomial functions. One of the questions to be addressed is how many associated Legendre polynomials are required to describe the data. In this preliminary analysis, we used data models with different numbers of associated Legendre polynomials. We then compared these models by calculating posterior probabilities of the models. We found that the CLAS data set needs no more than four associated Legendre polynomials to describe the differential cross section data. In addition, we also show the extracted coefficients of the best model.
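    A hedged sketch of the kind of expansion fit described above: synthetic angular-distribution values are fitted with Legendre polynomials of increasing maximum degree (ordinary Legendre polynomials, i.e. associated Legendre of order zero, are used here for simplicity), and the residuals indicate how many terms the data support. The data, degrees, and noise level are all invented for illustration.

      import numpy as np
      from numpy.polynomial import legendre

      # Synthetic differential cross section versus cos(theta), arbitrary units.
      cos_theta = np.linspace(-0.9, 0.9, 19)
      dsigma = 0.30 + 0.12 * cos_theta + 0.05 * (3 * cos_theta ** 2 - 1) / 2
      dsigma += np.random.default_rng(1).normal(0, 0.01, cos_theta.size)

      # Least-squares Legendre expansions of increasing degree; model comparison
      # (the abstract uses posterior probabilities) picks the shortest adequate one.
      for max_degree in (1, 2, 3, 4):
          coef, diagnostics = legendre.legfit(cos_theta, dsigma, max_degree, full=True)
          print(max_degree, np.round(coef, 4), "residual:", np.round(diagnostics[0], 6))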

  11. Tutte polynomial in functional magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    García-Castillón, Marlly V.

    2015-09-01

    Methods of graph theory are applied to the processing of functional magnetic resonance images; specifically, the Tutte polynomial is used to analyze such images. Functional magnetic resonance imaging provides connectivity networks in the brain, which are represented by graphs to which the Tutte polynomial is applied. The problem of computing the Tutte polynomial of a given graph is #P-hard even for planar graphs. For a practical application, the Maple packages "GraphTheory" and "SpecialGraphs" are used. We consider a diagram depicting functional connectivity, specifically between frontal and posterior areas, in autism during an inferential text comprehension task. The Tutte polynomial for the resulting neural networks is computed and some numerical invariants for such networks are obtained. Our results show that the Tutte polynomial is a powerful tool to analyze and characterize the networks obtained from functional magnetic resonance imaging.
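    The abstract computes Tutte polynomials with Maple packages; as a hedged, library-agnostic illustration of what is being computed, the sketch below implements the deletion-contraction recursion directly (exponential time, so only toy graphs), with sympy assumed for the symbolic variables.

      from sympy import expand, symbols

      x, y = symbols("x y")

      def connected(edges, a, b):
          """True if vertices a and b are joined by a path in the edge list."""
          adj = {}
          for u, v in edges:
              adj.setdefault(u, set()).add(v)
              adj.setdefault(v, set()).add(u)
          stack, seen = [a], {a}
          while stack:
              w = stack.pop()
              if w == b:
                  return True
              for nxt in adj.get(w, ()):
                  if nxt not in seen:
                      seen.add(nxt)
                      stack.append(nxt)
          return False

      def contract(edges, u, v):
          """Contract edge (u, v): relabel v as u in every remaining edge."""
          return [(u if a == v else a, u if b == v else b) for a, b in edges]

      def tutte(edges):
          """Tutte polynomial T(x, y) via deletion-contraction."""
          if not edges:
              return 1
          (u, v), rest = edges[0], edges[1:]
          if u == v:                      # a loop contributes a factor y
              return expand(y * tutte(rest))
          if not connected(rest, u, v):   # a bridge contributes a factor x
              return expand(x * tutte(contract(rest, u, v)))
          return expand(tutte(rest) + tutte(contract(rest, u, v)))

      print(tutte([(0, 1), (1, 2), (2, 0)]))   # triangle K3: x**2 + x + y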

  12. On the coefficients of integrated expansions and integrals of ultraspherical polynomials and their applications for solving differential equations

    NASA Astrophysics Data System (ADS)

    Doha, E. H.

    2002-02-01

    An analytical formula expressing the ultraspherical coefficients of an expansion for an infinitely differentiable function that has been integrated an arbitrary number of times in terms of the coefficients of the original expansion of the function is stated in a more compact form and proved in a simpler way than the formula suggested by Phillips and Karageorghis (27 (1990) 823). A new formula is given that expresses explicitly the integrals, taken an arbitrary number of times, of ultraspherical polynomials of any degree in terms of ultraspherical polynomials. The tensor product of ultraspherical polynomials is used to approximate a function of more than one variable. Formulae expressing the coefficients of differentiated expansions of double and triple ultraspherical polynomials in terms of the original expansion are stated and proved. Some applications of how to use ultraspherical polynomials for solving ordinary and partial differential equations are described.

  13. Effects of homogenization process parameters on physicochemical properties of astaxanthin nanodispersions prepared using a solvent-diffusion technique

    PubMed Central

    Anarjan, Navideh; Jafarizadeh-Malmiri, Hoda; Nehdi, Imededdine Arbi; Sbihi, Hassen Mohamed; Al-Resayes, Saud Ibrahim; Tan, Chin Ping

    2015-01-01

    Nanodispersion systems allow incorporation of lipophilic bioactives, such as astaxanthin (a fat soluble carotenoid) into aqueous systems, which can improve their solubility, bioavailability, and stability, and widen their uses in water-based pharmaceutical and food products. In this study, response surface methodology was used to investigate the influences of homogenization time (0.5–20 minutes) and speed (1,000–9,000 rpm) in the formation of astaxanthin nanodispersions via the solvent-diffusion process. The product was characterized for particle size and astaxanthin concentration using laser diffraction particle size analysis and high performance liquid chromatography, respectively. Relatively high determination coefficients (ranging from 0.896 to 0.969) were obtained for all suggested polynomial regression models. The overall optimal homogenization conditions were determined by multiple response optimization analysis to be 6,000 rpm for 7 minutes. In vitro cellular uptake of astaxanthin from the suggested individual and multiple optimized astaxanthin nanodispersions was also evaluated. The cellular uptake of astaxanthin was found to be considerably increased (by more than five times) as it became incorporated into optimum nanodispersion systems. The lack of a significant difference between predicted and experimental values confirms the suitability of the regression equations connecting the response variables studied to the independent parameters. PMID:25709435

  14. Optimization of Culture Conditions for Enrichment of Saccharomyces cerevisiae with Dl-α-Tocopherol by Response Surface Methodology.

    PubMed

    Mohajeri Amiri, Morteza; Fazeli, Mohammad Reza; Amini, Mohsen; Hayati Roodbari, Nasim; Samadi, Nasrin

    2017-01-01

    Designing enriched probiotic supplements may have some advantages, including protection of the probiotic microorganisms from oxidative destruction, improvement of enzyme activity in the gastrointestinal tract, and probably an increased half-life of the micronutrient. In this study, Saccharomyces cerevisiae enriched with dl-α-tocopherol was produced for the first time as an accumulator and transporter of a lipid-soluble vitamin. Using one-variable-at-a-time screening studies, three independent variables were selected. Optimization of the level of dl-α-tocopherol entrapment in S. cerevisiae cells was performed using a Box-Behnken design via Design Expert software. A modified quadratic polynomial model appropriately fit the data. The convex shape of the three-dimensional plots reveals that the optimal point of the response lies within the range of the parameters. The optimum values of the independent parameters to maximize the response were a dl-α-tocopherol initial concentration of 7625.82 µg/mL, a sucrose concentration of 6.86 % w/v, and a shaking speed of 137.70 rpm. Under these conditions, the maximum level of dl-α-tocopherol in dry cell weight of S. cerevisiae was 5.74 µg/g. The closeness of the R-squared and adjusted R-squared values and the acceptable coefficient of variation (C.V.%) indicated the acceptability and accuracy of the model.

  15. Parabolic northern-hemisphere river flow teleconnections to El Niño-Southern Oscillation and the Arctic Oscillation

    NASA Astrophysics Data System (ADS)

    Fleming, S. W.; Dahlke, H. E.

    2014-10-01

    It is almost universally assumed in statistical hydroclimatology that relationships between large-scale climate indices and local-scale hydrometeorological responses, though possibly nonlinear, are monotonic. However, recent work suggests that northern-hemisphere atmospheric teleconnections to El Niño-Southern Oscillation (ENSO) and the Arctic Oscillation can be parabolic. The effect has recently been explicitly confirmed in hydrologic responses, though associations are complicated by land surface characteristics and processes, and investigation of water resource implications has been limited to date. Here, we apply an Akaike Information Criterion-based polynomial selection approach to investigate annual flow volume teleconnections for 42 of the northern hemisphere’s largest ocean-reaching rivers. Though we find a rich diversity of responses, parabolic relationships are formally consistent with the data for almost half the rivers, and the optimal model for eight. These highly nonlinear water supply teleconnections could radically alter the standard conceptual model of how water resources respond to climatic variability. For example, the Sacramento river in drought-ridden California exhibits no significant monotonic ENSO teleconnection but a 0.92 probability of a quadratic relationship, reducing mean predictive error by up to 65% and suggesting greater opportunity for climate index-based water supply forecasts than previously appreciated.
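    A hedged sketch of Akaike Information Criterion-based polynomial selection of the kind described above (synthetic data, a plain Gaussian AIC rather than whatever corrected criterion the study applies, numpy assumed):

      import numpy as np

      def polyfit_aic(x, y, degree):
          """AIC of a least-squares polynomial fit, Gaussian likelihood, ML variance."""
          coeffs = np.polyfit(x, y, degree)
          resid = y - np.polyval(coeffs, x)
          n, k = len(y), degree + 2               # polynomial coefficients plus noise variance
          sigma2 = np.mean(resid ** 2)
          log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
          return 2 * k - 2 * log_lik

      # Synthetic annual flow volume responding parabolically to a climate index.
      rng = np.random.default_rng(42)
      index = rng.normal(0, 1, 60)
      flow = 100 - 8 * index ** 2 + rng.normal(0, 5, 60)

      for degree in (1, 2, 3):
          print(degree, round(polyfit_aic(index, flow, degree), 1))
      # The quadratic (degree 2) fit should score the lowest AIC, i.e. a parabolic teleconnection.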

  16. Quadratically Convergent Method for Simultaneously Approaching the Roots of Polynomial Solutions of a Class of Differential Equations

    NASA Astrophysics Data System (ADS)

    Recchioni, Maria Cristina

    2001-12-01

    This paper investigates the application of the method introduced by L. Pasquini (1989) for simultaneously approaching the zeros of polynomial solutions to a class of second-order linear homogeneous ordinary differential equations with polynomial coefficients to a particular case in which these polynomial solutions have zeros symmetrically arranged with respect to the origin. The method is based on a family of nonlinear equations which is associated with a given class of differential equations. The roots of the nonlinear equations are related to the roots of the polynomial solutions of differential equations considered. Newton's method is applied to find the roots of these nonlinear equations. In (Pasquini, 1994) the nonsingularity of the roots of these nonlinear equations is studied. In this paper, following the lines in (Pasquini, 1994), the nonsingularity of the roots of these nonlinear equations is studied. More favourable results than the ones in (Pasquini, 1994) are proven in the particular case of polynomial solutions with symmetrical zeros. The method is applied to approximate the roots of Hermite-Sobolev type polynomials and Freud polynomials. A lower bound for the smallest positive root of Hermite-Sobolev type polynomials is given via the nonlinear equation. The quadratic convergence of the method is proven. A comparison with a classical method that uses the Jacobi matrices is carried out. We show that the algorithm derived by the proposed method is sometimes preferable to the classical QR type algorithms for computing the eigenvalues of the Jacobi matrices even if these matrices are real and symmetric.

  17. On a Family of Multivariate Modified Humbert Polynomials

    PubMed Central

    Aktaş, Rabia; Erkuş-Duman, Esra

    2013-01-01

    This paper attempts to present a multivariable extension of generalized Humbert polynomials. The results obtained here include various families of multilinear and multilateral generating functions, miscellaneous properties, and also some special cases for these multivariable polynomials. PMID:23935411

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lue Xing; Sun Kun; Wang Pan

    In the framework of Bell-polynomial manipulations, under investigation hereby are three single-field bilinearizable equations: the (1+1)-dimensional shallow water wave model, Boiti-Leon-Manna-Pempinelli model, and (2+1)-dimensional Sawada-Kotera model. Based on the concept of scale invariance, a direct and unifying Bell-polynomial scheme is employed to achieve the Baecklund transformations and Lax pairs associated with those three soliton equations. Note that the Bell-polynomial expressions and Bell-polynomial-typed Baecklund transformations for those three soliton equations can be, respectively, cast into the bilinear equations and bilinear Baecklund transformations with symbolic computation. Consequently, it is also shown that the Bell-polynomial-typed Baecklund transformations can be linearized into the corresponding Lax pairs.

  19. An O(log^2 N) parallel algorithm for computing the eigenvalues of a symmetric tridiagonal matrix

    NASA Technical Reports Server (NTRS)

    Swarztrauber, Paul N.

    1989-01-01

    An O(log^2 N) parallel algorithm is presented for computing the eigenvalues of a symmetric tridiagonal matrix using a parallel algorithm for computing the zeros of the characteristic polynomial. The method is based on a quadratic recurrence in which the characteristic polynomial is constructed on a binary tree from polynomials whose degree doubles at each level. Intervals that contain exactly one zero are determined by the zeros of polynomials at the previous level which ensures that different processors compute different zeros. The exact behavior of the polynomials at the interval endpoints is used to eliminate the usual problems induced by finite precision arithmetic.
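    A hedged, sequential sketch of the characteristic-polynomial recurrence on which such methods rest (the paper's actual contribution, the O(log^2 N) parallel tree construction, is not reproduced here): the leading-principal-minor recurrence doubles as a Sturm sequence, so its sign changes count the eigenvalues below a shift, which is how disjoint root intervals can be identified.

      import numpy as np

      def sturm_count(d, e, lam):
          """Number of eigenvalues below lam for a symmetric tridiagonal matrix with
          diagonal d and off-diagonal e, using the recurrence
              p_0 = 1,  p_1 = d_0 - lam,  p_{k+1} = (d_k - lam) p_k - e_{k-1}^2 p_{k-1}."""
          p = [1.0, d[0] - lam]
          for k in range(1, len(d)):
              p.append((d[k] - lam) * p[-1] - e[k - 1] ** 2 * p[-2])
          changes, positive = 0, True             # p_0 = 1 is positive
          for value in p[1:]:
              now_positive = value >= 0.0         # treat an exact zero as non-negative
              changes += now_positive != positive
              positive = now_positive
          return changes

      d = np.array([2.0, 3.0, 1.0, 4.0])
      e = np.array([1.0, 0.5, 2.0])
      T = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
      print(sturm_count(d, e, 2.5), int(np.sum(np.linalg.eigvalsh(T) < 2.5)))   # should agree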

  20. Discrete Tchebycheff orthonormal polynomials and applications

    NASA Technical Reports Server (NTRS)

    Lear, W. M.

    1980-01-01

    Discrete Tchebycheff orthonormal polynomials offer a convenient way to make least squares polynomial fits of uniformly spaced discrete data. Computer programs to do so are simple and fast, and appear to be less affected by computer roundoff error, for the higher order fits, than conventional least squares programs. They are useful for any application of polynomial least squares fits: approximation of mathematical functions, noise analysis of radar data, and real time smoothing of noisy data, to name a few.
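    A hedged numerical stand-in for the report's approach (QR orthogonalisation is used here in place of the classical three-term recurrence that defines the discrete Tchebycheff polynomials, and the data are synthetic): fitting against an orthonormal basis reduces the least-squares problem to simple inner products, which is part of why such fits are fast and well behaved.

      import numpy as np

      def discrete_orthonormal_basis(x, degree):
          """Orthonormalise 1, x, ..., x^degree over the discrete sample points x."""
          V = np.vander(x, degree + 1, increasing=True)
          Q, _ = np.linalg.qr(V)                  # columns are orthonormal over the points
          return Q

      x = np.linspace(0.0, 1.0, 21)               # uniformly spaced samples
      rng = np.random.default_rng(3)
      y = np.sin(2 * np.pi * x) + rng.normal(0, 0.05, x.size)

      Q = discrete_orthonormal_basis(x, degree=5)
      coeffs = Q.T @ y                            # coefficients are plain inner products
      fit = Q @ coeffs
      print("rms residual:", round(float(np.sqrt(np.mean((y - fit) ** 2))), 4))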

  1. An approximation technique for predicting the transient response of a second order nonlinear equation

    NASA Technical Reports Server (NTRS)

    Laurenson, R. M.; Baumgarten, J. R.

    1975-01-01

    An approximation technique has been developed for determining the transient response of a nonlinear dynamic system. The nonlinearities in the system which has been considered appear in the system's dissipation function. This function was expressed as a second order polynomial in the system's velocity. The developed approximation is an extension of the classic Kryloff-Bogoliuboff technique. Two examples of the developed approximation are presented for comparative purposes with other approximation methods.

  2. Nonlinear Thermoelastic Effects in Surface Mechanics.

    DTIC Science & Technology

    1980-01-01

    remaining quartic polynomial generated by det(A) = 0 is presumed not to yield real roots (real characteristics) associated with elastic waves because... General Electric Company, Schenectady, New York... f - generalized analytic functions; E_i - Lagrangian strain components; l_k - generalized Cauchy kernels, Eq. (11); E - Young's modulus, Pa

  3. Surface topography estimated by inversion of satellite gravity gradiometry observations

    NASA Astrophysics Data System (ADS)

    Ramillien, Guillaume

    2015-04-01

    An integration of mass elements is presented for evaluating the six components of the second-order gravity tensor (i.e., second derivatives of the Newtonian mass integral for the gravitational potential) created by an uneven sphere topography consisting of juxtaposed vertical prisms. The method is based on Legendre polynomial series with the originality of taking elastic compensation of the topography by the Earth's surface into account. The speed of computation of the polynomial series increases logically with the observing altitude from the source of anomaly. Such a forward modelling can be easily used for reduction of observed gravity gradient anomalies by the effects of any spherical interface of density. Moreover, an iterative least-squares inversion of the observed gravity tensor values Γαβ is proposed to estimate a regional set of topographic heights. Several tests of recovery have been made by considering simulated gradiometry anomaly data, and for varying satellite altitudes and a priori levels of accuracy. In the case of GOCE-type gradiometry anomalies measured at an altitude of ~300 km, the search converges down to a stable and smooth topography after 20-30 iterations while the final r.m.s. error is ~100 m. The possibility of cumulating satellite information from different orbit geometries is also examined for improving the prediction.

  4. On the optimal selection of interpolation methods for groundwater contouring: An example of propagation of uncertainty regarding inter-aquifer exchange

    NASA Astrophysics Data System (ADS)

    Ohmer, Marc; Liesch, Tanja; Goeppert, Nadine; Goldscheider, Nico

    2017-11-01

    The selection of the best possible method to interpolate a continuous groundwater surface from point data of groundwater levels is a controversial issue. In the present study four deterministic and five geostatistical interpolation methods (global polynomial interpolation, local polynomial interpolation, inverse distance weighting, radial basis function, simple-, ordinary-, universal-, empirical Bayesian and co-Kriging) and six error statistics (ME, MAE, MAPE, RMSE, RMSSE, Pearson R) were examined for a Jurassic karst aquifer and a Quaternary alluvial aquifer. We investigated the possible propagation of uncertainty of the chosen interpolation method on the calculation of the estimated vertical groundwater exchange between the aquifers. Furthermore, we validated the results with eco-hydrogeological data including the comparison between calculated groundwater depths and geographic locations of karst springs, wetlands and surface waters. These results show that calculated inter-aquifer exchange rates based on different interpolations of groundwater potentials may vary greatly depending on the chosen interpolation method (by a factor of more than 10). Therefore, the choice of an interpolation method should be made with care, taking different error measures as well as additional data for plausibility control into account. The most accurate results have been obtained with co-Kriging incorporating secondary data (e.g. topography, river levels).
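    Inverse distance weighting, one of the deterministic interpolators compared in the study, is simple enough to sketch; the observation points, groundwater heads, and power parameter below are hypothetical, and this is not the study's GIS implementation.

      import numpy as np

      def idw(xy_obs, z_obs, xy_query, power=2.0):
          """Inverse distance weighting: weighted average with weights 1/d**power."""
          dist = np.linalg.norm(xy_query[:, None, :] - xy_obs[None, :, :], axis=2)
          dist = np.where(dist == 0.0, 1e-12, dist)   # a query on an observation keeps its value
          w = 1.0 / dist ** power
          return (w @ z_obs) / w.sum(axis=1)

      obs = np.array([[0.0, 0.0], [1.0, 0.2], [0.4, 1.1], [1.3, 1.4]])   # x, y in km
      head = np.array([402.0, 398.5, 405.2, 396.8])                      # groundwater level in m
      query = np.array([[0.7, 0.7], [0.2, 1.0]])
      print(np.round(idw(obs, head, query), 2))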

  5. High aperture efficiency symmetric reflector antennas with up to 60 deg field of view

    NASA Astrophysics Data System (ADS)

    Rappaport, Carey M.; Craig, William P.

    1991-03-01

    A microwave single-reflector scanning antenna derived from an ellipse (rather than the usual parabola) which gives a much greater field of view is presented. This reflector combines reasonable scanning in one plane with good focusing in the other, and its scanning ability is superior to the torus and other single reflectors because it has much greater aperture efficiency and is thus smaller while having the same performance. The reflector surface is derived in two steps: a fourth-order even polynomial profile curve in the scan plane is found using least squares to minimize the scanned ray errors; then even polynomial terms in x and y that minimize astigmatism for both the unscanned and maximally scanned beams are added to form the three-dimensional surface. Numerical simulations of radiation patterns for a variety of antenna diameter and field-of-view cases give excellent results. The 60 deg scan case with 30-lambda-diameter aperture has only 0.2-dB peak gain deviation from ideal and first sidelobe levels below 14 dB down from peak gain. The 17 deg, 500-lambda case has only 0.8-dB gain variation and -14 to -11 dB sidelobe levels for approximately +/-68 beamwidths of scan, with focal length to aperture diameter ratio equal to about one.

  6. Optimal Sharpening of Compensated Comb Decimation Filters: Analysis and Design

    PubMed Central

    Troncoso Romero, David Ernesto

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature. PMID:24578674

  7. Optimal sharpening of compensated comb decimation filters: analysis and design.

    PubMed

    Troncoso Romero, David Ernesto; Laddomada, Massimiliano; Jovanovic Dolecek, Gordana

    2014-01-01

    Comb filters are a class of low-complexity filters especially useful for multistage decimation processes. However, the magnitude response of comb filters presents a droop in the passband region and low stopband attenuation, which is undesirable in many applications. In this work, it is shown that, for stringent magnitude specifications, sharpening compensated comb filters requires a lower-degree sharpening polynomial compared to sharpening comb filters without compensation, resulting in a solution with lower computational complexity. Using a simple three-addition compensator and an optimization-based derivation of sharpening polynomials, we introduce an effective low-complexity filtering scheme. Design examples are presented in order to show the performance improvement in terms of passband distortion and selectivity compared to other methods based on the traditional Kaiser-Hamming sharpening and the Chebyshev sharpening techniques recently introduced in the literature.

  8. Polynomial time blackbox identity testers for depth-3 circuits : the field doesn't matter.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Seshadhri, Comandur; Saxena, Nitin

    Let C be a depth-3 circuit with n variables, degree d and top fanin k (called ΣΠΣ(k, d, n) circuits) over base field F. It is a major open problem to design a deterministic polynomial time blackbox algorithm that tests if C is identically zero. Klivans & Spielman (STOC 2001) observed that the problem is open even when k is a constant. This case has been subjected to a serious study over the past few years, starting from the work of Dvir & Shpilka (STOC 2005). We give the first polynomial time blackbox algorithm for this problem. Our algorithm runs in time poly(n)·d^k, regardless of the base field. The only field for which polynomial time algorithms were previously known is F = Q (Kayal & Saraf, FOCS 2009, and Saxena & Seshadhri, FOCS 2010). This is the first blackbox algorithm for depth-3 circuits that does not use the rank based approaches of Karnin & Shpilka (CCC 2008). We prove an important tool for the study of depth-3 identities. We design a blackbox polynomial time transformation that reduces the number of variables in a ΣΠΣ(k, d, n) circuit to k variables, but preserves the identity structure. Polynomial identity testing (PIT) is a major open problem in theoretical computer science. The input is an arithmetic circuit that computes a polynomial p(x_1, x_2, ..., x_n) over a base field F. We wish to check if p is the zero polynomial, or in other words, is identically zero. We may be provided with an explicit circuit, or may only have blackbox access. In the latter case, we can only evaluate the polynomial p at various domain points. The main goal is to devise a deterministic blackbox polynomial time algorithm for PIT.

  9. Analysis on the misalignment errors between Hartmann-Shack sensor and 45-element deformable mirror

    NASA Astrophysics Data System (ADS)

    Liu, Lihui; Zhang, Yi; Tao, Jianjun; Cao, Fen; Long, Yin; Tian, Pingchuan; Chen, Shangwu

    2017-02-01

    For a 45-element adaptive optics system, a model of the 45-element deformable mirror is built in COMSOL Multiphysics, and every actuator's influence function is acquired by the finite element method. The correction of optical aberrations by this system is simulated numerically, and, taking the Strehl ratio of the corrected diffraction spot as the figure of merit, the system's correction ability for Zernike polynomial wave aberrations 3-20 is analyzed in the presence of different translation and rotation errors between the Hartmann-Shack sensor and the deformable mirror. The computed results show that the system's correction ability for Zernike wave aberrations 3-9 is higher than that for aberrations 10-20. The correction ability for aberrations 3-20 does not change as the misalignment error changes. With increasing rotation error between the Hartmann-Shack sensor and the deformable mirror, the correction ability for Zernike aberrations 3-20 gradually decreases, and with increasing translation error, the correction ability for aberrations 3-9 gradually decreases, while the correction ability for aberrations 10-20 fluctuates.

  10. Stability analysis of fuzzy parametric uncertain systems.

    PubMed

    Bhiwani, R J; Patre, B M

    2011-10-01

    In this paper, the determination of the stability margin and the gain and phase margins of fuzzy parametric uncertain systems (FPUS) is dealt with. The stability analysis of uncertain linear systems with coefficients described by fuzzy functions is studied. A complexity-reduced technique for determining the stability margin of FPUS is proposed. The suggested method depends on the order of the characteristic polynomial. In order to find the stability margin of interval polynomials of order five or less, it is not always necessary to determine and check all four Kharitonov polynomials. It is shown that, for determining the stability margin of FPUS of order five, four, and three, only 3, 2, and 1 Kharitonov polynomials, respectively, are required. Only for sixth- and higher-order polynomials is the complete set of Kharitonov polynomials needed to determine the stability margin. Thus, for lower-order systems the calculations are reduced to a large extent. This idea has been extended to determine the stability margin of fuzzy interval polynomials. It is also shown that the gain and phase margins of FPUS can be determined analytically without using graphical techniques. Copyright © 2011 ISA. Published by Elsevier Ltd. All rights reserved.
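    For reference, Kharitonov's four polynomials for a crisp interval polynomial (the fuzzy level-set extension discussed above builds on this) can be formed and tested as follows; the interval coefficients are illustrative, and numpy's root finder stands in for a Routh-Hurwitz test.

      import numpy as np

      def kharitonov_polynomials(lower, upper):
          """Four Kharitonov polynomials of an interval polynomial; coefficients are
          given in ascending powers of s, the i-th lying in [lower[i], upper[i]].
          Bound-selection patterns, repeated every four coefficients:
            K1: low, low, up, up   K2: up, up, low, low
            K3: low, up, up, low   K4: up, low, low, up"""
          bounds = (lower, upper)
          patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (0, 1, 1, 0), (1, 0, 0, 1)]
          return [np.array([bounds[pat[i % 4]][i] for i in range(len(lower))])
                  for pat in patterns]

      def hurwitz_stable(coeffs_ascending):
          """True if every root lies strictly in the open left half-plane."""
          return bool(np.all(np.roots(coeffs_ascending[::-1]).real < 0))

      # Illustrative interval polynomial  s^3 + [4, 6] s^2 + [5, 8] s + [2, 3]
      lower = np.array([2.0, 5.0, 4.0, 1.0])
      upper = np.array([3.0, 8.0, 6.0, 1.0])
      for i, k in enumerate(kharitonov_polynomials(lower, upper), start=1):
          print("K%d" % i, k, "stable:", hurwitz_stable(k))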

  11. Recurrences and explicit formulae for the expansion and connection coefficients in series of Bessel polynomials

    NASA Astrophysics Data System (ADS)

    Doha, E. H.; Ahmed, H. M.

    2004-08-01

    A formula expressing explicitly the derivatives of Bessel polynomials of any degree and for any order in terms of the Bessel polynomials themselves is proved. Another explicit formula, which expresses the Bessel expansion coefficients of a general-order derivative of an infinitely differentiable function in terms of its original Bessel coefficients, is also given. A formula for the Bessel coefficients of the moments of one single Bessel polynomial of certain degree is proved. A formula for the Bessel coefficients of the moments of a general-order derivative of an infinitely differentiable function in terms of its Bessel coefficients is also obtained. Application of these formulae for solving ordinary differential equations with varying coefficients, by reducing them to recurrence relations in the expansion coefficients of the solution, is explained. An algebraic symbolic approach (using Mathematica) in order to build and solve recursively for the connection coefficients between Bessel-Bessel polynomials is described. An explicit formula for these coefficients between Jacobi and Bessel polynomials is given, of which the ultraspherical polynomial and its consequences are important special cases. Two analytical formulae for the connection coefficients between Laguerre-Bessel and Hermite-Bessel are also developed.

  12. Nanoparticle surface characterization and clustering through concentration-dependent surface adsorption modeling.

    PubMed

    Chen, Ran; Zhang, Yuntao; Sahneh, Faryad Darabi; Scoglio, Caterina M; Wohlleben, Wendel; Haase, Andrea; Monteiro-Riviere, Nancy A; Riviere, Jim E

    2014-09-23

    Quantitative characterization of nanoparticle interactions with their surrounding environment is vital for safe nanotechnological development and standardization. A recent quantitative measure, the biological surface adsorption index (BSAI), has demonstrated promising applications in nanomaterial surface characterization and biological/environmental prediction. This paper further advances the approach beyond the application of five descriptors in the original BSAI to address the concentration dependence of the descriptors, enabling better prediction of the adsorption profile and more accurate categorization of nanomaterials based on their surface properties. Statistical analysis on the obtained adsorption data was performed based on three different models: the original BSAI, a concentration-dependent polynomial model, and an infinite dilution model. These advancements in BSAI modeling showed a promising development in the application of quantitative predictive modeling in biological applications, nanomedicine, and environmental safety assessment of nanomaterials.

  13. Spherical harmonics analysis of surface density fluctuations of spherical ionic SDS and nonionic C12E8 micelles: A molecular dynamics study

    NASA Astrophysics Data System (ADS)

    Yoshii, Noriyuki; Nimura, Yuki; Fujimoto, Kazushi; Okazaki, Susumu

    2017-07-01

    The surface structure and its fluctuation of spherical micelles were investigated using a series of density correlation functions newly defined by spherical harmonics and Legendre polynomials based on the molecular dynamics calculations. To investigate the influence of head-group charges on the micelle surface structure, ionic sodium dodecyl sulfate and nonionic octaethyleneglycol monododecylether (C12E8) micelles were investigated as model systems. Large-scale density fluctuations were observed for both micelles in the calculated surface static structure factor. The area compressibility of the micelle surface evaluated by the surface static structure factor was tens-of-times larger than a typical value of a lipid membrane surface. The structural relaxation time, which was evaluated from the surface intermediate scattering function, indicates that the relaxation mechanism of the long-range surface structure can be well described by the hydrostatic approximation. The density fluctuation on the two-dimensional micelle surface has similar characteristics to that of three-dimensional fluids near the critical point.

  14. Spherical harmonics analysis of surface density fluctuations of spherical ionic SDS and nonionic C12E8 micelles: A molecular dynamics study.

    PubMed

    Yoshii, Noriyuki; Nimura, Yuki; Fujimoto, Kazushi; Okazaki, Susumu

    2017-07-21

    The surface structure and its fluctuation of spherical micelles were investigated using a series of density correlation functions newly defined by spherical harmonics and Legendre polynomials based on the molecular dynamics calculations. To investigate the influence of head-group charges on the micelle surface structure, ionic sodium dodecyl sulfate and nonionic octaethyleneglycol monododecylether (C12E8) micelles were investigated as model systems. Large-scale density fluctuations were observed for both micelles in the calculated surface static structure factor. The area compressibility of the micelle surface evaluated by the surface static structure factor was tens-of-times larger than a typical value of a lipid membrane surface. The structural relaxation time, which was evaluated from the surface intermediate scattering function, indicates that the relaxation mechanism of the long-range surface structure can be well described by the hydrostatic approximation. The density fluctuation on the two-dimensional micelle surface has similar characteristics to that of three-dimensional fluids near the critical point.

  15. On the coefficients of differentiated expansions of ultraspherical polynomials

    NASA Technical Reports Server (NTRS)

    Karageorghis, Andreas; Phillips, Timothy N.

    1989-01-01

    A formula expressing the coefficients of an expansion of ultraspherical polynomials which has been differentiated an arbitrary number of times in terms of the coefficients of the original expansion is proved. The particular examples of Chebyshev and Legendre polynomials are considered.

  16. On Polynomial Solutions of Linear Differential Equations with Polynomial Coefficients

    ERIC Educational Resources Information Center

    Si, Do Tan

    1977-01-01

    Demonstrates a method for solving linear differential equations with polynomial coefficients based on the fact that the operators z and D + d/dz are known to be Hermitian conjugates with respect to the Bargman and Louck-Galbraith scalar products. (MLH)

  17. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    NASA Astrophysics Data System (ADS)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of a monic arbitrary unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the obtained algorithms.

  18. On the Analytical and Numerical Properties of the Truncated Laplace Transform I

    DTIC Science & Technology

    2014-09-05

    contains generalizations and conclusions. 2 Preliminaries. 2.1 The Legendre Polynomials. In this subsection we summarize some of the properties of the... standard Legendre polynomials, and restate these properties for shifted and normalized forms of the Legendre polynomials. We define the shifted... Legendre polynomial of degree k = 0, 1, ..., which we will denote by P*_k, by the formula P*_k(x) = P_k(2x - 1), where P_k is the Legendre

  19. Development of Fast Deterministic Physically Accurate Solvers for Kinetic Collision Integral for Applications of Near Space Flight and Control Devices

    DTIC Science & Technology

    2015-08-31

    following functions were used: where are the Legendre polynomials of degree . It is assumed that the coefficient standing with has the form...enforce relaxation rates of high order moments, higher order polynomial basis functions are used. The use of high order polynomials results in strong...enforced while only polynomials up to second degree were used in the representation of the collision frequency. It can be seen that the new model

  20. Effects of Air Drag and Lunar Third-Body Perturbations on Motion Near a Reference KAM Torus

    DTIC Science & Technology

    2011-03-01

    body; m - 1) mass of satellite, 2) order of associated Legendre polynomial; n - 1) mean motion, 2) degree of associated Legendre polynomial; n3 - mean motion... physical momentum; pi - ith physical momentum; Pmn - associated Legendre polynomial of order m and degree n; q̇ - physical coordinate derivatives vector, [q̇1... are constants specifying the shape of the gravitational field; and Pmn are associated Legendre polynomials. When m = n = 0, the geopotential function

  1. Luigi Gatteschi's work on asymptotics of special functions and their zeros

    NASA Astrophysics Data System (ADS)

    Gautschi, Walter; Giordano, Carla

    2008-12-01

    A good portion of Gatteschi's research publications-about 65%-is devoted to asymptotics of special functions and their zeros. Most prominently among the special functions studied figure classical orthogonal polynomials, notably Jacobi polynomials and their special cases, Laguerre polynomials, and Hermite polynomials by implication. Other important classes of special functions dealt with are Bessel functions of the first and second kind, Airy functions, and confluent hypergeometric functions, both in Tricomi's and Whittaker's form. This work is reviewed here, and organized along methodological lines.

  2. Polynomial compensation, inversion, and approximation of discrete time linear systems

    NASA Technical Reports Server (NTRS)

    Baram, Yoram

    1987-01-01

    The least-squares transformation of a discrete-time multivariable linear system into a desired one by convolving the first with a polynomial system yields optimal polynomial solutions to the problems of system compensation, inversion, and approximation. The polynomial coefficients are obtained from the solution to a so-called normal linear matrix equation, whose coefficients are shown to be the weighting patterns of certain linear systems. These, in turn, can be used in the recursive solution of the normal equation.
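    A hedged scalar sketch of the least-squares polynomial compensation/inversion idea (the paper treats the multivariable case; here a single-input FIR compensator is fitted so that its convolution with a given impulse response approximates a delayed unit impulse, with scipy assumed for the Toeplitz constructor):

      import numpy as np
      from scipy.linalg import toeplitz

      def ls_compensator(h, desired, length):
          """Least-squares FIR compensator c such that h * c best matches 'desired'."""
          n_out = len(h) + length - 1
          col = np.concatenate([h, np.zeros(length - 1)])
          row = np.zeros(length)
          row[0] = h[0]
          H = toeplitz(col, row)                  # H @ c equals the full convolution h * c
          d = np.zeros(n_out)
          d[: len(desired)] = desired
          c, *_ = np.linalg.lstsq(H, d, rcond=None)
          return c, H @ c

      h = np.array([1.0, 0.6, 0.2])               # impulse response to be (approximately) inverted
      target = np.zeros(8)
      target[2] = 1.0                             # allow a two-sample delay
      c, achieved = ls_compensator(h, target, length=6)
      print(np.round(c, 3))
      print(np.round(achieved, 3))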

  3. Three-Dimensional Solution of the Free Vibration Problem for Metal-Ceramic Shells Using the Method of Sampling Surfaces

    NASA Astrophysics Data System (ADS)

    Kulikov, G. M.; Plotnikova, S. V.

    2017-03-01

    The possibility of using the method of sampling surfaces (SaS) for solving the free vibration problem of three-dimensional elasticity for metal-ceramic shells is studied. According to this method, in the shell body, an arbitrary number of SaS parallel to its middle surface are selected in order to take displacements of these surfaces as unknowns. The SaS pass through the nodes of a Chebyshev polynomial, which improves the convergence of the SaS method significantly. As a result, the SaS method can be used to obtain analytical solutions of the vibration problem for metal-ceramic plates and cylindrical shells that asymptotically approach the exact solutions of elasticity as the number of SaS tends to infinity.

  4. NFIRAOS beamsplitters subsystems optomechanical design

    NASA Astrophysics Data System (ADS)

    Lamontagne, Frédéric; Desnoyers, Nichola; Nash, Reston; Boucher, Marc-André; Martin, Olivier; Buteau-Vaillancourt, Louis; Châteauneuf, François; Atwood, Jenny; Hill, Alexis; Byrnes, Peter W. G.; Herriot, Glen; Véran, Jean-Pierre

    2016-07-01

    The early-light facility adaptive optics system for the Thirty Meter Telescope (TMT) is the Narrow-Field InfraRed Adaptive Optics System (NFIRAOS). The science beam splitter changer mechanism and the visible light beam splitter are subsystems of NFIRAOS. This paper presents the opto-mechanical design of the NFIRAOS beam splitters subsystems (NBS). In addition to the modal and the structural analyses, the beam splitters surface deformations are computed considering the environmental constraints during operation. Surface deformations are fit to Zernike polynomials using SigFit software. Rigid body motion as well as residual RMS and peak-to-valley surface deformations are calculated. Finally, deformed surfaces are exported to Zemax to evaluate the transmitted and reflected wave front error. The simulation results of this integrated opto-mechanical analysis have shown compliance with all optical requirements.

  5. Polynomial fuzzy observer designs: a sum-of-squares approach.

    PubMed

    Tanaka, Kazuo; Ohtake, Hiroshi; Seo, Toshiaki; Tanaka, Motoyasu; Wang, Hua O

    2012-10-01

    This paper presents a sum-of-squares (SOS) approach to polynomial fuzzy observer designs for three classes of polynomial fuzzy systems. The proposed SOS-based framework provides a number of innovations and improvements over the existing linear matrix inequality (LMI)-based approaches to Takagi-Sugeno (T-S) fuzzy controller and observer designs. First, we briefly summarize previous results with respect to a polynomial fuzzy system that is a more general representation of the well-known T-S fuzzy system. Next, we propose polynomial fuzzy observers to estimate states in three classes of polynomial fuzzy systems and derive SOS conditions to design polynomial fuzzy controllers and observers. A remarkable feature of the SOS design conditions for the first two classes (Classes I and II) is that they realize the so-called separation principle, i.e., the polynomial fuzzy controller and observer for each class can be separately designed without lack of guaranteeing the stability of the overall control system in addition to converging state-estimation error (via the observer) to zero. Although, for the last class (Class III), the separation principle does not hold, we propose an algorithm to design polynomial fuzzy controller and observer satisfying the stability of the overall control system in addition to converging state-estimation error (via the observer) to zero. All the design conditions in the proposed approach can be represented in terms of SOS and are symbolically and numerically solved via the recently developed SOSTOOLS and a semidefinite-program solver, respectively. To illustrate the validity and applicability of the proposed approach, three design examples are provided. The examples demonstrate the advantages of the SOS-based approaches for the existing LMI approaches to T-S fuzzy observer designs.

  6. Optimization of process variables for decolorization of Disperse Yellow 211 by Bacillus subtilis using Box-Behnken design.

    PubMed

    Sharma, Praveen; Singh, Lakhvinder; Dilbaghi, Neeraj

    2009-05-30

    Decolorization of the textile azo dye Disperse Yellow 211 (DY 211) was carried out from simulated aqueous solution by the bacterial strain Bacillus subtilis. Response surface methodology (RSM), involving a Box-Behnken design matrix in the three most important operating variables (temperature, pH and initial dye concentration), was successfully employed for the study and optimization of the decolorization process. A total of 17 experiments were conducted to construct a quadratic model. According to the analysis of variance (ANOVA) results, the proposed model can be used to navigate the design space. Under optimized conditions the bacterial strain was able to decolorize DY 211 by up to 80%. The model indicated that an initial dye concentration of 100 mg l(-1), pH 7 and a temperature of 32.5 degrees C were optimal for maximum decolorization. A very high regression coefficient between the variables and the response (R(2) = 0.9930) indicated excellent description of the experimental data by the polynomial regression model. The combination of the three variables predicted through RSM was confirmed through confirmatory experiments; hence, the bacterial strain holds great potential for the treatment of colored textile effluents.

  7. The extraction process optimization of antioxidant polysaccharides from Marshmallow (Althaea officinalis L.) roots.

    PubMed

    Pakrokh Ghavi, Peyman

    2015-04-01

    Response surface methodology (RSM) with a central composite rotatable design (CCRD) based on five levels was employed to model and optimize four experimental operating conditions (extraction temperature, 10-90 °C; time, 6-30 h; particle size, 6-24 mm; and water-to-solid (W/S) ratio, 10-50) to obtain polysaccharides from Althaea officinalis roots with high yield and antioxidant activity. For each response, a second-order polynomial model with high R(2) values (>0.966) was developed using multiple linear regression analysis. Results showed that the most significant (P < 0.05) extraction conditions affecting the yield and antioxidant activity of the extracted polysaccharides were the main effect of extraction temperature and the interaction effect of particle size and W/S ratio. The optimum conditions to maximize yield (10.80%) and antioxidant activity (84.09%) for polysaccharide extraction from A. officinalis roots were an extraction temperature of 60.90 °C, an extraction time of 12.01 h, a particle size of 12.0 mm and a W/S ratio of 40.0. The experimental values were found to be in agreement with those predicted, indicating the models' suitability for optimizing the polysaccharide extraction conditions. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. Optimization of processing conditions to improve antioxidant activities of apple juice and whey based novel beverage fermented by kefir grains.

    PubMed

    Sabokbar, Nayereh; Khodaiyan, Faramarz; Moosavi-Nasab, Marzieh

    2015-06-01

    A central composite design (CCD) was used to evaluate the effects of fermentation temperature (20-30 °C) and kefir grain amount (2-8% w/v) on the total phenolic content and antioxidant activities of an apple juice and whey based novel beverage fermented by kefir grains. Response surface methodology (RSM) showed that a significant second-order polynomial regression equation with high R(2) (>0.86) was successfully fitted for every response as a function of the independent variables. The overall optimum region was found at the combined level of 7.56% w/v kefir grains and a temperature of 24.82 °C, giving the highest values for total phenolic content (TPC) and antioxidant activities. At this optimum point, TPC, 1,1-diphenyl-2-picrylhydrazyl radical scavenging, metal chelating effect, reducing power, inhibition of linoleic acid autoxidation and inhibition of ascorbate autoxidation were 165.02 mg GA/l, 0.38 ml/1 ml, 0.757 (absorbance at 700 nm), 46.12%, 65.33% and 21%, respectively. No significant difference (at the 0.05 level) was found between the actual and predicted values.

  9. Surface electromyography based muscle fatigue detection using high-resolution time-frequency methods and machine learning algorithms.

    PubMed

    Karthick, P A; Ghosh, Diptasree Maitra; Ramakrishnan, S

    2018-02-01

    Surface electromyography (sEMG) based muscle fatigue research is widely preferred in sports science and occupational/rehabilitation studies due to its noninvasiveness. However, these signals are complex, multicomponent and highly nonstationary with large inter-subject variations, particularly during dynamic contractions. Hence, time-frequency based machine learning methodologies can improve the design of automated systems for these signals. In this work, analysis based on high-resolution time-frequency methods, namely the Stockwell transform (S-transform), B-distribution (BD) and extended modified B-distribution (EMBD), is proposed to differentiate dynamic muscle nonfatigue and fatigue conditions. The nonfatigue and fatigue segments of sEMG signals recorded from the biceps brachii of 52 healthy volunteers are preprocessed and subjected to the S-transform, BD and EMBD. Twelve features are extracted from each method and prominent features are selected using a genetic algorithm (GA) and binary particle swarm optimization (BPSO). Five machine learning algorithms, namely naïve Bayes, support vector machines (SVM) with polynomial and radial basis function kernels, random forest and rotation forest, are used for the classification. The results show that all the proposed time-frequency distributions (TFDs) are able to capture the nonstationary variations of sEMG signals. Most of the features exhibit a statistically significant difference between the muscle fatigue and nonfatigue conditions. The largest reduction in the number of features (66%) is achieved by GA and BPSO for the EMBD and BD TFDs, respectively. The combination of EMBD features and a polynomial-kernel SVM is found to be the most accurate (91% accuracy) in classifying the conditions with the features selected using GA. The proposed methods are found to be capable of handling the nonstationary and multicomponent variations of sEMG signals recorded in dynamic fatiguing contractions. In particular, the combination of EMBD features and a polynomial-kernel SVM could be used to detect dynamic muscle fatigue conditions. Copyright © 2017 Elsevier B.V. All rights reserved.
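
    A minimal sketch of the classification step reported above, using a polynomial-kernel SVM from scikit-learn; the feature matrix, labels and kernel settings are stand-in assumptions rather than the study's selected time-frequency features.

```python
# Hedged sketch: fatigue vs. non-fatigue classification with a polynomial-kernel SVM.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(104, 8))      # 52 subjects x 2 segments, 8 selected features (assumed)
y = np.repeat([0, 1], 52)          # 0 = non-fatigue, 1 = fatigue

clf = SVC(kernel="poly", degree=3, C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print("mean CV accuracy:", scores.mean())
```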

  10. Recurrence relations for orthogonal polynomials for PDEs in polar and cylindrical geometries.

    PubMed

    Richardson, Megan; Lambers, James V

    2016-01-01

    This paper introduces two families of orthogonal polynomials on the interval (-1,1), with weight function [Formula: see text]. The first family satisfies the boundary condition [Formula: see text], and the second one satisfies the boundary conditions [Formula: see text]. These boundary conditions arise naturally from PDEs defined on a disk with Dirichlet boundary conditions and the requirement of regularity in Cartesian coordinates. The families of orthogonal polynomials are obtained by orthogonalizing short linear combinations of Legendre polynomials that satisfy the same boundary conditions. Then, the three-term recurrence relations are derived. Finally, it is shown that from these recurrence relations, one can efficiently compute the corresponding recurrences for generalized Jacobi polynomials that satisfy the same boundary conditions.
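
    A minimal sketch of evaluating an orthogonal-polynomial family from a three-term recurrence of the kind derived in the paper; the recurrence coefficients used here are the standard monic Legendre ones, purely as an illustrative stand-in for the new families.

```python
# Hedged sketch: evaluate monic polynomials p_{n+1}(x) = (x - a_n) p_n(x) - b_n p_{n-1}(x).
import numpy as np

def eval_recurrence(x, n, alpha, beta):
    """Evaluate monic p_0 ... p_n at x, given coefficient callables alpha(k), beta(k)."""
    p_prev, p = np.zeros_like(x), np.ones_like(x)
    values = [p.copy()]
    for k in range(n):
        p_next = (x - alpha(k)) * p - beta(k) * p_prev
        p_prev, p = p, p_next
        values.append(p.copy())
    return np.array(values)

x = np.linspace(-1.0, 1.0, 5)
vals = eval_recurrence(x, 4,
                       alpha=lambda k: 0.0,
                       beta=lambda k: 0.0 if k == 0 else k**2 / (4.0 * k**2 - 1.0))
print(vals[2])   # monic P_2(x) = x^2 - 1/3
```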

  11. Gaussian quadrature for multiple orthogonal polynomials

    NASA Astrophysics Data System (ADS)

    Coussement, Jonathan; van Assche, Walter

    2005-06-01

    We study multiple orthogonal polynomials of type I and type II, which have orthogonality conditions with respect to r measures. These polynomials are connected by their recurrence relation of order r+1. First we show a relation with the eigenvalue problem of a banded lower Hessenberg matrix Ln, containing the recurrence coefficients. As a consequence, we easily find that the multiple orthogonal polynomials of type I and type II satisfy a generalized Christoffel-Darboux identity. Furthermore, we explain the notion of multiple Gaussian quadrature (for proper multi-indices), which is an extension of the theory of Gaussian quadrature for orthogonal polynomials and was introduced by Borges. In particular, we show that the quadrature points and quadrature weights can be expressed in terms of the eigenvalue problem of Ln.
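
    A minimal sketch of the ordinary (single-measure) Gauss quadrature construction from recurrence coefficients via the Jacobi-matrix eigenvalue problem; the multiple-orthogonality extension discussed above replaces this symmetric matrix with a banded lower Hessenberg matrix, which is not reproduced here.

```python
# Hedged sketch: Golub-Welsch nodes/weights from three-term recurrence coefficients,
# checked against numpy's Gauss-Legendre rule.
import numpy as np

def gauss_from_recurrence(alpha, beta, mu0):
    """alpha: a_0..a_{n-1}; beta: b_1..b_{n-1} (positive); mu0: total mass of the weight."""
    J = np.diag(alpha) + np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    nodes, vecs = np.linalg.eigh(J)
    return nodes, mu0 * vecs[0, :] ** 2

n = 5
k = np.arange(1, n)
nodes, weights = gauss_from_recurrence(np.zeros(n), k**2 / (4.0 * k**2 - 1.0), mu0=2.0)
ref_nodes, ref_weights = np.polynomial.legendre.leggauss(n)
print(np.allclose(nodes, ref_nodes), np.allclose(weights, ref_weights))   # True True
```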

  12. A proposed metric for assessing the measurement quality of individual microarrays

    PubMed Central

    Kim, Kyoungmi; Page, Grier P; Beasley, T Mark; Barnes, Stephen; Scheirer, Katherine E; Allison, David B

    2006-01-01

    Background High-density microarray technology is increasingly applied to study gene expression levels on a large scale. Microarray experiments rely on several critical steps that may introduce error and uncertainty in analyses. These steps include mRNA sample extraction, amplification and labeling, hybridization, and scanning. In some cases this may be manifested as systematic spatial variation on the surface of the microarray, in which expression measurements within an individual array may vary as a function of geographic position on the array surface. Results We hypothesized that an index of the degree of spatiality of gene expression measurements associated with their physical geographic locations on an array could summarize the physical reliability of the microarray. We introduced a novel way to formulate this index using a statistical analysis tool. Our approach regressed gene expression intensity measurements on a polynomial response surface of the microarray's Cartesian coordinates. We demonstrated this method using a fixed model and presented results from real and simulated datasets. Conclusion We demonstrated the potential of such a quantitative metric for assessing the reliability of individual arrays. Moreover, we showed that this procedure can be incorporated into laboratory practice as a means to set quality control specifications and as a tool to determine whether an array has sufficient quality to be retained in terms of spatial correlation of gene expression measurements. PMID:16430768
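
    A minimal sketch of regressing spot intensities on a polynomial response surface of the array's Cartesian coordinates; the data are synthetic placeholders, and the R^2 of the fit is used here only as one plausible way to express such a spatiality index, not necessarily the paper's exact metric.

```python
# Hedged sketch: low-order polynomial surface fit of intensity vs. (x, y) position.
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 1, 500), rng.uniform(0, 1, 500)                 # spot coordinates
intensity = 10 + 2 * x - 1.5 * y + 0.8 * x * y + rng.normal(0, 0.5, 500)

# Design matrix of a full quadratic surface in (x, y).
A = np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])
coef, *_ = np.linalg.lstsq(A, intensity, rcond=None)
fitted = A @ coef
r2 = 1 - np.sum((intensity - fitted)**2) / np.sum((intensity - intensity.mean())**2)
print("spatiality index (R^2 of the surface fit):", round(r2, 3))
```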

  13. A simplified procedure for correcting both errors and erasures of a Reed-Solomon code using the Euclidean algorithm

    NASA Technical Reports Server (NTRS)

    Truong, T. K.; Hsu, I. S.; Eastman, W. L.; Reed, I. S.

    1987-01-01

    It is well known that the Euclidean algorithm or its equivalent, continued fractions, can be used to find the error locator polynomial and the error evaluator polynomial in Berlekamp's key equation needed to decode a Reed-Solomon (RS) code. A simplified procedure is developed and proved to correct erasures as well as errors by replacing the initial condition of the Euclidean algorithm by the erasure locator polynomial and the Forney syndrome polynomial. By this means, the errata locator polynomial and the errata evaluator polynomial can be obtained, simultaneously and simply, by the Euclidean algorithm only. With this improved technique the complexity of time domain RS decoders for correcting both errors and erasures is reduced substantially from previous approaches. As a consequence, decoders for correcting both errors and erasures of RS codes can be made more modular, regular, simple, and naturally suitable for both VLSI and software implementation. An example illustrating this modified decoding procedure is given for a (15, 9) RS code.

  14. Non-stationary component extraction in noisy multicomponent signal using polynomial chirping Fourier transform.

    PubMed

    Lu, Wenlong; Xie, Junwei; Wang, Heming; Sheng, Chuan

    2016-01-01

    Inspired by track-before-detection technology in radar, a novel time-frequency transform, namely the polynomial chirping Fourier transform (PCFT), is exploited to extract components from a noisy multicomponent signal. The PCFT combines the advantages of the Fourier transform and the polynomial chirplet transform to accumulate component energy along a polynomial chirping curve in the time-frequency plane. The particle swarm optimization algorithm is employed to search for optimal polynomial parameters with which the PCFT achieves the most concentrated energy ridge in the time-frequency plane for the target component. The component can be well separated in the polynomial chirping Fourier domain with a narrow-band filter and then reconstructed by the inverse PCFT. Furthermore, an iterative procedure, involving parameter estimation, PCFT, filtering and recovery, is introduced to extract components from a noisy multicomponent signal successively. Simulations and experiments show that the proposed method has better performance in component extraction from noisy multicomponent signals and provides more time-frequency details about the analyzed signal than conventional methods.

  15. Minimum Sobolev norm interpolation of scattered derivative data

    NASA Astrophysics Data System (ADS)

    Chandrasekaran, S.; Gorman, C. H.; Mhaskar, H. N.

    2018-07-01

    We study the problem of reconstructing a function on a manifold satisfying some mild conditions, given data on the values and some derivatives of the function at arbitrary points on the manifold. While the problem of finding a polynomial of two variables with total degree ≤ n, given the values of the polynomial and some of its derivatives at exactly as many points as the dimension of the polynomial space, sometimes has no solution, we show that such a problem always has a solution in a very general situation if the degree of the polynomials is sufficiently large. We give estimates on how large the degree should be, and give explicit constructions for such a polynomial even in a far more general case. As the number of sampling points at which the data are available increases, our polynomials converge to the target function on the set where the sampling points are dense. Numerical examples in single and double precision show that this method is stable, efficient, and of high order.

  16. Systematic parameter inference in stochastic mesoscopic modeling

    NASA Astrophysics Data System (ADS)

    Lei, Huan; Yang, Xiu; Li, Zhen; Karniadakis, George Em

    2017-02-01

    We propose a method to efficiently determine the optimal coarse-grained force field in mesoscopic stochastic simulations of Newtonian fluid and polymer melt systems modeled by dissipative particle dynamics (DPD) and energy-conserving dissipative particle dynamics (eDPD). The response surfaces of various target properties (viscosity, diffusivity, pressure, etc.) with respect to model parameters are constructed based on the generalized polynomial chaos (gPC) expansion using simulation results at sampling points (e.g., individual parameter sets). To alleviate the computational cost of evaluating the target properties, we employ the compressive sensing method to compute the coefficients of the dominant gPC terms given the prior knowledge that the coefficients are "sparse". The proposed method shows comparable accuracy with the standard probabilistic collocation method (PCM) while it imposes a much weaker restriction on the number of simulation samples, especially for systems with a high-dimensional parametric space. Full access to the response surfaces within the confidence range enables us to infer the optimal force parameters given the desirable values of target properties at the macroscopic scale. Moreover, it enables us to investigate the intrinsic relationship between the model parameters, identify possible degeneracies in the parameter space, and optimize the model by eliminating model redundancies. The proposed method provides an efficient alternative approach for constructing mesoscopic models by inferring model parameters to recover target properties of the physical systems (e.g., from experimental measurements), where those force field parameters and formulation cannot be derived at the microscopic level in a straightforward way.
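
    A minimal sketch of recovering sparse polynomial-chaos coefficients from a small number of samples with an L1-penalized regression (a simple stand-in for the compressive sensing step); the target property, basis truncation and penalty value are assumptions, not those used in the study.

```python
# Hedged sketch: sparse recovery of Legendre polynomial-chaos coefficients.
import numpy as np
from itertools import product
from numpy.polynomial.legendre import legval
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
dim, max_deg, n_samples = 4, 3, 40
xi = rng.uniform(-1, 1, size=(n_samples, dim))         # sampled model parameters
y = 1.0 + 0.8 * xi[:, 0] + 0.5 * xi[:, 2] * xi[:, 3]   # placeholder "target property"

# Tensor-product Legendre basis with total degree <= max_deg.
multi_idx = [m for m in product(range(max_deg + 1), repeat=dim) if sum(m) <= max_deg]
def leg(vals, order):
    return legval(vals, [0] * order + [1])              # single Legendre polynomial P_order
Phi = np.column_stack([np.prod([leg(xi[:, d], m[d]) for d in range(dim)], axis=0)
                       for m in multi_idx])

coef = Lasso(alpha=1e-3, fit_intercept=False, max_iter=50000).fit(Phi, y).coef_
print("nonzero PC terms:", int(np.sum(np.abs(coef) > 1e-3)), "of", len(multi_idx))
```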

  17. Pilot-scale treatment of atrazine production wastewater by UV/O3/ultrasound: Factor effects and system optimization.

    PubMed

    Jing, Liang; Chen, Bing; Wen, Diya; Zheng, Jisi; Zhang, Baiyu

    2017-12-01

    This study shed light on removing atrazine from pesticide production wastewater using a pilot-scale UV/O3/ultrasound flow-through system. A significant quadratic polynomial prediction model with an adjusted R(2) of 0.90 was obtained from central composite design with response surface methodology. The optimal atrazine removal rate (97.68%) was obtained at the conditions of 75 W UV power, 10.75 g h(-1) O3 flow rate and 142.5 W ultrasound power. A Monte Carlo simulation aided artificial neural networks model was further developed to quantify the importance of O3 flow rate (40%), UV power (30%) and ultrasound power (30%). Their individual and interaction effects were also discussed in terms of reaction kinetics. UV and ultrasound could both enhance the decomposition of O3 and promote hydroxyl radical (OH·) formation. Nonetheless, the dose of O3 was the dominant factor and must be optimized because excess O3 can react with OH·, thereby reducing the rate of atrazine degradation. The presence of other organic compounds in the background matrix appreciably inhibited the degradation of atrazine, while the effects of Cl(-), CO3(2-) and HCO3(-) were comparatively negligible. It was concluded that the optimization of system performance using response surface methodology and neural networks would be beneficial for scaling up the treatment by UV/O3/ultrasound at industrial level. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Optimizing supercritical carbon dioxide in the inactivation of bacteria in clinical solid waste by using response surface methodology.

    PubMed

    Hossain, Md Sohrab; Nik Ab Rahman, Nik Norulaini; Balakrishnan, Venugopal; Alkarkhi, Abbas F M; Ahmad Rajion, Zainul; Ab Kadir, Mohd Omar

    2015-04-01

    Clinical solid waste (CSW) poses a challenge to health care facilities because of the presence of pathogenic microorganisms, leading to concerns in the effective sterilization of the CSW for safe handling and elimination of infectious disease transmission. In the present study, supercritical carbon dioxide (SC-CO2) was applied to inactivate gram-positive Staphylococcus aureus, Enterococcus faecalis, Bacillus subtilis, and gram-negative Escherichia coli in CSW. The effects of SC-CO2 sterilization parameters such as pressure, temperature, and time were investigated and optimized by response surface methodology (RSM). Results showed that the data were adequately fitted by the second-order polynomial model. The linear and quadratic terms and the interaction between pressure and temperature had significant effects on the inactivation of S. aureus, E. coli, E. faecalis, and B. subtilis in CSW. Optimum conditions for the complete inactivation of bacteria within the experimental range of the studied variables were 20 MPa, 60 °C, and 60 min. The SC-CO2-treated bacterial cells, observed under a scanning electron microscope, showed morphological changes, including cell breakage and dislodged cell walls, which could have caused the inactivation. This supports the inference that SC-CO2 exerts strong inactivating effects on the bacteria present in CSW, and has the potential to be used in CSW management for the safe handling and recycling-reuse of CSW materials. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Optimization of tetracycline hydrochloride adsorption on amino modified SBA-15 using response surface methodology.

    PubMed

    Hashemikia, Samaneh; Hemmatinejad, Nahid; Ahmadi, Ebrahim; Montazer, Majid

    2015-04-01

    Several researchers have focused on the preparation of mesoporous silica as a drug carrier with high loading efficiency to control or sustain drug release. Carriers loaded with a high amount of drug are utilized to minimize the time of drug intake. In this study, amino-modified SBA-15 was synthesized by grafting with aminopropyl triethoxysilane and then loaded with tetracycline hydrochloride. The drug loading was optimized using the response surface method considering various factors, including drug-to-silica ratio, operation time, and temperature. The drug-to-silica ratio was identified as the most influential factor on the drug loading yield. Further, a quadratic polynomial equation was developed to predict the loading percentage. The experimental results indicated reasonable agreement with the predicted values. The modified and drug-loaded mesoporous particles were characterized by FT-IR, SEM, TEM, X-ray diffraction (XRD), elemental analysis and N2 adsorption-desorption. The release profiles of tetracycline-loaded particles were studied at different pH values. Also, the Higuchi equation was used to analyze the release profile of the drug and to evaluate the kinetics of drug release. The drug release rate followed the conventional Higuchi model and could be controlled by the amino-functionalized SBA-15. Further, the drug delivery system based on amino-modified SBA-15 exhibits novel features and is appropriate for use as an antibacterial drug delivery system with effective control of drug adsorption and release. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. Optimization of extraction parameters of pentacyclic triterpenoids from Swertia chirata stem using response surface methodology.

    PubMed

    Pandey, Devendra Kumar; Kaur, Prabhjot

    2018-03-01

    In the present investigation, pentacyclic triterpenoids were extracted from different parts of Swertia chirata by solid-liquid reflux extraction. The total pentacyclic triterpenoids (UA, OA, and BA) in the extracted samples were determined by HPTLC. Preliminary studies showed that the stem contains the maximum pentacyclic triterpenoids and it was chosen for further studies. Response surface methodology (RSM) was employed successfully with solid-liquid reflux extraction to optimize the extraction variables, viz., temperature (X1, 35-70 °C), extraction time (X2, 30-60 min), solvent composition (X3, 20-80%), solvent-to-solid ratio (X4, 30-60 ml g(-1)), and particle size (X5, 3-6 mm), for maximum recovery of triterpenoids from the stem of Swertia chirata. A Plackett-Burman design was used initially to screen three extraction factors, viz., particle size, temperature, and solvent composition, for their effect on triterpenoid yield. A central composite design (CCD) was then implemented to optimize the significant extraction parameters for maximum triterpenoid yield. Three extraction parameters, viz., mean particle size (3 mm), temperature (65 °C), and methanol-ethyl acetate solvent composition (45%), can be considered significant for a better yield of triterpenoids. A second-order polynomial model satisfactorily fitted the experimental data with an R(2) value of 0.98 for the triterpenoid yield (p < 0.001), implying good agreement between the experimental triterpenoid yield (3.71%) and the predicted value (3.79%).

  1. Cyclic Evolution of Coronal Fields from a Coupled Dynamo Potential-Field Source-Surface Model.

    PubMed

    Dikpati, Mausumi; Suresh, Akshaya; Burkepile, Joan

    The structure of the Sun's corona varies with the solar-cycle phase, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. It is widely accepted that the large-scale coronal structure is governed by magnetic fields that are most likely generated by dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential-field source-surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation; these dynamo-generated fields are extended from the photosphere to the corona using a potential-field source-surface model. Assuming axisymmetry, we take linear combinations of associated Legendre polynomials that match the more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986 - 1991), we compute the coefficients of the associated Legendre polynomials up to degree eight and compare with observations. We show that at minimum the dipole term dominates, but it fades as the cycle progresses; higher-order multipolar terms begin to dominate. The amplitudes of these terms are not exactly the same for the two limbs, indicating that there is a longitude dependence. While both the 1986 and the 1996 minimum coronas were dipolar, the minimum in 2008 was unusual, since there was a substantial departure from a dipole. We investigate the physical cause of this departure by including a North-South asymmetry in the surface source of the magnetic fields in our flux-transport dynamo model, and find that this asymmetry could be one of the reasons for departure from the dipole in the 2008 minimum.
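
    A minimal sketch of projecting an axisymmetric (m = 0) profile onto Legendre polynomials up to degree eight by Gauss-Legendre quadrature; the brightness profile is a made-up dipole-plus-quadrupole mixture, not Mauna Loa data, and the paper's fit also involves associated Legendre terms not shown here.

```python
# Hedged sketch: Legendre decomposition of a profile B(cos theta) up to degree 8.
import numpy as np
from numpy.polynomial.legendre import leggauss, legval

x, w = leggauss(64)                                    # x = cos(theta), quadrature weights
profile = 1.0 + 0.9 * x + 0.3 * (1.5 * x**2 - 0.5)     # placeholder: P0 + 0.9 P1 + 0.3 P2

coeffs = []
for n in range(9):
    Pn = legval(x, [0] * n + [1])
    # c_n = (2n+1)/2 * <B, P_n>, using orthogonality of Legendre polynomials on [-1, 1]
    coeffs.append(np.sum(w * profile * Pn) * (2 * n + 1) / 2.0)
print(np.round(coeffs, 3))                             # ~[1.0, 0.9, 0.3, 0, ...]
```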

  2. Modeling State-Space Aeroelastic Systems Using a Simple Matrix Polynomial Approach for the Unsteady Aerodynamics

    NASA Technical Reports Server (NTRS)

    Pototzky, Anthony S.

    2008-01-01

    A simple matrix polynomial approach is introduced for approximating unsteady aerodynamics in the s-plane and ultimately, after combining matrix polynomial coefficients with matrices defining the structure, a matrix polynomial form of the flutter equations of motion (EOM) is formed. A technique for recasting the matrix-polynomial form of the flutter EOM into a first-order form is also presented that can be used to determine the eigenvalues near the origin and everywhere on the complex plane. The aeroservoelastic (ASE) EOM has been generalized to include the gust terms on the right-hand side. The reasons for developing the new matrix polynomial approach are also presented, which are the following: first, "workhorse" methods such as the NASTRAN flutter analysis lack the capability to consistently find roots near the origin, along the real axis, or accurately find roots farther away from the imaginary axis of the complex plane; and, second, the existing s-plane methods, such as Roger's s-plane approximation method as implemented in ISAC, do not always give suitable fits of some tabular data of the unsteady aerodynamics. A method available in MATLAB is introduced that will accurately fit generalized aerodynamic force (GAF) coefficients in tabular data form into the coefficients of a matrix polynomial form. The root-locus results from the NASTRAN pknl flutter analysis, the ISAC Roger's s-plane method, and the present matrix polynomial method are presented and compared for accuracy and for the number and locations of roots.
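
    A minimal sketch of fitting tabulated, complex-valued generalized aerodynamic force data with a polynomial in s = ik by complex least squares, shown for a single scalar entry; the reduced frequencies, polynomial order and tabular values are invented placeholders, not the method or data used in the paper.

```python
# Hedged sketch: complex least-squares fit of tabular GAF data by a polynomial in s.
import numpy as np

rng = np.random.default_rng(3)
k = np.linspace(0.05, 1.5, 12)                         # reduced frequencies of the table
s = 1j * k
Q_tab = 0.4 + 0.2 * s + 0.05 * s**2 + 0.01 * rng.normal(size=k.size)   # placeholder entry

order = 2
V = np.column_stack([s**p for p in range(order + 1)])  # complex Vandermonde in s
coef, *_ = np.linalg.lstsq(V, Q_tab, rcond=None)
print(np.round(coef, 3))                               # approx [0.4, 0.2, 0.05]
```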

  3. Weighted Iterative Bayesian Compressive Sensing (WIBCS) for High Dimensional Polynomial Surrogate Construction

    NASA Astrophysics Data System (ADS)

    Sargsyan, K.; Ricciuto, D. M.; Safta, C.; Debusschere, B.; Najm, H. N.; Thornton, P. E.

    2016-12-01

    Surrogate construction has become a routine procedure when facing computationally intensive studies requiring multiple evaluations of complex models. In particular, surrogate models, otherwise called emulators or response surfaces, replace complex models in uncertainty quantification (UQ) studies, including uncertainty propagation (forward UQ) and parameter estimation (inverse UQ). Further, surrogates based on Polynomial Chaos (PC) expansions are especially convenient for forward UQ and global sensitivity analysis, also known as variance-based decomposition. However, PC surrogate construction strongly suffers from the curse of dimensionality. With a large number of input parameters, the number of model simulations required for accurate surrogate construction is prohibitively large. Relatedly, non-adaptive PC expansions typically include an infeasibly large number of basis terms, far exceeding the number of available model evaluations. We develop the Weighted Iterative Bayesian Compressive Sensing (WIBCS) algorithm for adaptive basis growth and PC surrogate construction, leading to a sparse, high-dimensional PC surrogate with very few model evaluations. The surrogate is then readily employed for global sensitivity analysis, leading to further dimensionality reduction. Besides numerical tests, we demonstrate the construction on the example of the Accelerated Climate Model for Energy (ACME) Land Model for several output QoIs at nearly 100 FLUXNET sites covering multiple plant functional types and climates, varying 65 input parameters over broad ranges of possible values. This work is supported by the U.S. Department of Energy, Office of Science, Biological and Environmental Research, Accelerated Climate Modeling for Energy (ACME) project. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  4. Semi-empirical study of ortho-cresol photo degradation in manganese-doped zinc oxide nanoparticles suspensions

    PubMed Central

    2012-01-01

    The optimization of photodegradation processes is complicated and expensive when performed with traditional methods such as one-variable-at-a-time. In this research, the conditions of ortho-cresol (o-cresol) photodegradation were optimized using a semi-empirical method. First, the experiments were designed with four effective factors (irradiation time, pH, photocatalyst amount, and o-cresol concentration) and photodegradation % as the response, by response surface methodology (RSM). The RSM used a central composite design (CCD) consisting of 30 runs to obtain the actual responses. The actual responses were fitted with a second-order algebraic polynomial equation to select a model (the suggested model). The suggested model was validated by several statistical measures from the analysis of variance (ANOVA). These include a high F-value (143.12), a very low P-value (<0.0001), a non-significant lack of fit, the determination coefficient (R2 = 0.99) and adequate precision (47.067). To visualize the optimum, the validated model simulated the conditions of the variables and the response (photodegradation %) using several three-dimensional (3D) plots. To confirm the model, the optimum conditions were tested in the laboratory. The results of the performed experiments were quite close to the predicted values. In conclusion, the study indicated that the model successfully simulates the optimum conditions of o-cresol photodegradation under visible-light irradiation by manganese-doped ZnO nanoparticles. PMID:22909072

  5. Numerical solutions for Helmholtz equations using Bernoulli polynomials

    NASA Astrophysics Data System (ADS)

    Bicer, Kubra Erdem; Yalcinbas, Salih

    2017-07-01

    This paper reports a new numerical method based on Bernoulli polynomials for the solution of Helmholtz equations. The method uses matrix forms of Bernoulli polynomials and their derivatives by means of collocation points. The aim of this paper is to solve Helmholtz equations using these matrix relations.
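
    A minimal sketch of assembling the matrix forms of Bernoulli polynomials and their derivatives at collocation points, the kind of matrix relations the method above relies on; the truncation order and collocation grid are assumptions.

```python
# Hedged sketch: Bernoulli polynomial and derivative matrices at collocation points.
import numpy as np
import sympy as sp

x = sp.symbols("x")
N = 5                                                    # truncation order (assumed)
basis = [sp.bernoulli(n, x) for n in range(N + 1)]       # B_0(x) ... B_N(x)
nodes = np.linspace(0.0, 1.0, N + 1)                     # collocation points (assumed)

B  = np.array([[float(b.subs(x, t)) for b in basis] for t in nodes])
dB = np.array([[float(sp.diff(b, x).subs(x, t)) for b in basis] for t in nodes])
print(B.shape, dB.shape)                                 # two (6, 6) collocation matrices
```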

  6. Box-Behnken design based statistical modeling for ultrasound-assisted extraction of corn silk polysaccharide.

    PubMed

    Prakash Maran, J; Manikandan, S; Thirugnanasambandham, K; Vigna Nivetha, C; Dinesh, R

    2013-01-30

    In this study, ultrasound-assisted extraction (UAE) conditions for the yield of polysaccharide from corn silk were studied using a three-factor, three-level Box-Behnken response surface design. Process parameters that affect the efficiency of UAE, such as extraction temperature (40-60 °C), time (10-30 min) and solid-liquid ratio (1:10-1:30 g/ml), were investigated. The results showed that the extraction conditions have significant effects on the extraction yield of polysaccharide. The obtained experimental data were fitted to a second-order polynomial equation using multiple regression analysis with a high coefficient of determination (R(2)) of 0.994. An optimization study using Derringer's desired function methodology was performed, and the optimal conditions, based on both individual variables and combinations of all independent variables (extraction temperature of 56 °C, time of 17 min and solid-liquid ratio of 1:20 g/ml), were determined, with a maximum polysaccharide yield of 6.06%, which was confirmed through validation experiments. Copyright © 2012 Elsevier Ltd. All rights reserved.

  7. Psyllium husk gum: an attractive carbohydrate biopolymer for the production of stable canthaxanthin emulsions.

    PubMed

    Gharibzahedi, Seyed Mohammad Taghi; Razavi, Seyed Hadi; Mousavi, Seyed Mohammad

    2013-02-15

    The physical stability of the ultrasonically prepared emulsions containing canthaxanthin (CX) produced by the Dietzia natronolimnaea HS-1 strain was maximized using a face-centered central composite design (FCCD) of response surface methodology (RSM). The linear and interaction effects of the main emulsion components (whey protein isolate (WPI, 0.4-1.2 wt%), psyllium husk gum (PHG, 1.5-4.5 wt%) and coconut oil (CO, 5-10 wt%)) on the stability were studied. The density, turbidity and droplet size of the emulsions were also characterized to interpret the stability data. A significant second-order polynomial model was established (p<0.0001). Maximum stability of 98.8% was predicted at the optimum levels of the formulation variables (WPI concentration 1.20 wt%, PHG content 3.30 wt%, CO concentration 5.43 wt%). The results also demonstrated that the CO and WPI concentrations had a greater effect on the droplet size and density values, whereas the PHG:WPI ratio had a rather greater effect on the turbidity values. Copyright © 2012 Elsevier Ltd. All rights reserved.

  8. Parametric Investigation of Thrust Augmentation by Ejectors on a Pulsed Detonation Tube

    NASA Technical Reports Server (NTRS)

    Wilson, Jack; Sgondea, Alexandru; Paxson, Daniel E.; Rosenthal, Bruce N.

    2006-01-01

    A parametric investigation has been made of thrust augmentation of a 1 in. diameter pulsed detonation tube by ejectors. A set of ejectors was used which permitted variation of the ejector length, diameter, and nose radius, according to a statistical design of experiment scheme. The maximum augmentation ratios for each ejector were fitted using a polynomial response surface, from which the optimum ratios of ejector diameter to detonation tube diameter, and ejector length and nose radius to ejector diameter, were found. Thrust augmentation ratios above a factor of 2 were measured. In these tests, the pulsed detonation device was run on approximately stoichiometric air-hydrogen mixtures, at a frequency of 20 Hz. Later measurements at a frequency of 40 Hz gave lower values of thrust augmentation. Measurements of thrust augmentation as a function of ejector entrance to detonation tube exit distance showed two maxima, one with the ejector entrance upstream, and one downstream, of the detonation tube exit. A thrust augmentation of 2.5 was observed using a tapered ejector.

  9. Gas-liquid countercurrent integration process for continuous biodiesel production using a microporous solid base KF/CaO as catalyst.

    PubMed

    Hu, Shengyang; Wen, Libai; Wang, Yun; Zheng, Xinsheng; Han, Heyou

    2012-11-01

    A continuous-flow integration process was developed for biodiesel production using rapeseed oil as feedstock, based on the countercurrent contact reaction between gas and liquid, on-line separation of glycerol and cyclic utilization of methanol. Orthogonal experimental design and response surface methodology were adopted to optimize the technological parameters. A second-order polynomial model for the biodiesel yield was established and validated experimentally. The high determination coefficient (R(2)=98.98%) and the low probability value (Pr<0.0001) indicated that the model matched the experimental data and had high predictive ability. The optimal technological parameters were: 81.5 °C reaction temperature, 51.7 cm fill height of the catalyst KF/CaO and 105.98 kPa system pressure. Under these conditions, the average yield of triplicate experiments was 93.7%, indicating that the continuous-flow process has good potential in the manufacture of biodiesel. Copyright © 2012 Elsevier Ltd. All rights reserved.

  10. Water soluble polysaccharides from Spirulina platensis: extraction and in vitro anti-cancer activity.

    PubMed

    Kurd, Forouzan; Samavati, Vahid

    2015-03-01

    Polysaccharides from the alga Spirulina platensis (SP) were extracted by an ultrasound-assisted extraction procedure. The optimal conditions for ultrasonic extraction of SP were determined by response surface methodology. The four parameters were extraction time (X1), extraction temperature (X2), ultrasonic power (X3) and the ratio of water to raw material (X4). The experimental data obtained were fitted to a second-order polynomial equation. The optimum conditions were an extraction time of 25 min, extraction temperature of 85 °C, ultrasonic power of 90 W and ratio of water to raw material of 20 mL/g. Under these optimal conditions, the experimental yield was 13.583±0.51%, which matched the model prediction well, with a coefficient of determination (R2) of 0.9971. We then demonstrated that SP polysaccharides have strong in vitro scavenging activities on DPPH and hydroxyl radicals. Overall, SP may have potential applications in the medical and food industries. Copyright © 2015 Elsevier B.V. All rights reserved.

  11. Statistical optimization of process parameters for the production of tannase by Aspergillus flavus under submerged fermentation.

    PubMed

    Mohan, S K; Viruthagiri, T; Arunkumar, C

    2014-04-01

    Production of tannase by Aspergillus flavus (MTCC 3783) using tamarind seed powder as substrate was studied in submerged fermentation. A Plackett-Burman design was applied for the screening of 12 medium nutrients. From the results, the significant nutrients were identified as tannic acid, magnesium sulfate, ferrous sulfate and ammonium sulfate. Further optimization of the process parameters was carried out using response surface methodology (RSM). RSM was applied to design experiments that evaluate the interactive effects through a full 31 factorial design. The optimum conditions were: tannic acid concentration, 3.22%; fermentation period, 96 h; temperature, 35.1 °C; and pH 5.4. The high value of the regression coefficient (R(2) = 0.9638) indicates an excellent fit of the second-order polynomial regression model to the experimental data. The RSM revealed that a maximum tannase production of 139.3 U/ml was obtained at the optimum conditions.

  12. Statistical optimization of recycled-paper enzymatic hydrolysis for simultaneous saccharification and fermentation via central composite design.

    PubMed

    Liu, Qing; Cheng, Ke-ke; Zhang, Jian-an; Li, Jin-ping; Wang, Ge-hua

    2010-01-01

    A central composite design of the response surface methodology (RSM) was employed to study the effects of temperature, enzyme concentration, and stirring rate on recycled-paper enzymatic hydrolysis. Among the three variables, temperature and enzyme concentration significantly affected the conversion efficiency of the substrate, whereas stirring rate was not effective. A quadratic polynomial equation for the enzymatic hydrolysis was obtained by multiple regression analysis using RSM. The results of the validation experiments were consistent with the model predictions. The optimum conditions for enzymatic hydrolysis were a temperature of 43.1 degrees C, an enzyme concentration of 20 FPU g(-1) substrate, and a stirring rate of 145 rpm. In the subsequent simultaneous saccharification and fermentation (SSF) experiment under the optimum conditions, a maximum of 28.7 g ethanol l(-1) was reached in fed-batch SSF when 5% (w/v) substrate concentration was used initially and another 5% was added after 12 h of fermentation. This ethanol output corresponded to 77.7% of the theoretical yield based on the glucose content in the raw material.

  13. Application of Box-Behnken experimental design to optimize the extraction of insecticidal Cry1Ac from soil.

    PubMed

    Li, Yan-Liang; Fang, Zhi-Xiang; You, Jing

    2013-02-20

    A validated method for analyzing Cry proteins is a prerequisite to studying the fate and ecological effects of contaminants associated with genetically engineered Bacillus thuringiensis crops. The current study optimized the extraction method for analyzing Cry1Ac protein in soil using a response surface methodology with a three-level, three-factor Box-Behnken experimental design (BBD). The optimum extraction conditions were 21 °C and 630 rpm for 2 h. Regression analysis showed a good fit of the experimental data to the second-order polynomial model with a coefficient of determination of 0.96. The method was sensitive and precise, with a method detection limit of 0.8 ng/g dry weight and a relative standard deviation of 7.3%. Finally, the established method was applied to analyze Cry1Ac protein residues in field-collected soil samples. Trace amounts of Cry1Ac protein were detected in soils where transgenic crops had been planted for 8 and 12 years.

  14. Application of mixture experimental design to simvastatin apparent solubility predictions in the microemulsion formed by self-microemulsifying.

    PubMed

    Meng, Jian; Zheng, Liangyuan

    2007-09-01

    Self-microemulsifying drug delivery systems (SMEDDS) are useful to improve the bioavailability of poorly water-soluble drugs by increasing their apparent solubility through solubilization. However, very few studies to date have systematically examined the level of drug apparent solubility in o/w microemulsions formed by self-microemulsifying. In this study, a mixture experimental design was used to simulate the influence of the compositions on simvastatin apparent solubility quantitatively through an empirical model. The reduced cubic polynomial equation successfully modeled the evolution of simvastatin apparent solubility. The results were presented using a response surface analysis showing a range of possible simvastatin apparent solubility between 0.0024 and 29.0 mg/mL. Moreover, this technique showed that simvastatin apparent solubility was mainly influenced by microemulsion concentration and suggested that the drug would precipitate in the gastrointestinal tract due to dilution by gastrointestinal fluids. Furthermore, the model would help design the formulation to maximize the drug apparent solubility and avoid precipitation of the drug.

  15. Parametric Investigation of Thrust Augmentation by Ejectors on a Pulsed Detonation Tube

    NASA Technical Reports Server (NTRS)

    Wilson, Jack; Sgondea, Alexandru; Paxson, Daniel E.; Rosenthal, Bruce N.

    2005-01-01

    A parametric investigation has been made of thrust augmentation of a 1 inch diameter pulsed detonation tube by ejectors. A set of ejectors was used which permitted variation of the ejector length, diameter, and nose radius, according to a statistical design of experiment scheme. The maximum augmentations for each ejector were fitted using a polynomial response surface, from which the optimum ejector diameters, and nose radius, were found. Thrust augmentations above a factor of 2 were measured. In these tests, the pulsed detonation device was run on approximately stoichiometric air-hydrogen mixtures, at a frequency of 20 Hz. Later measurements at a frequency of 40 Hz gave lower values of thrust augmentation. Measurements of thrust augmentation as a function of ejector entrance to detonation tube exit distance showed two maxima, one with the ejector entrance upstream, and one downstream, of the detonation tube exit. A thrust augmentation of 2.5 was observed using a tapered ejector.

  16. Optimization of electrocoagulation process for the treatment of landfill leachate

    NASA Astrophysics Data System (ADS)

    Huda, N.; Raman, A. A.; Ramesh, S.

    2017-06-01

    The main problem of landfill leachate is its diverse composition, comprising persistent organic pollutants (POPs) which must be removed before discharge into the environment. In this study, the treatment of leachate using electrocoagulation (EC) was investigated. Iron was used as both the anode and the cathode. Response surface methodology was used for experimental design and to study the effects of the operational parameters. A central composite design was used to study the effects of initial pH, inter-electrode distance, and electrolyte concentration on color and COD removals. The process could remove up to 84% color and 49.5% COD. The experimental data were fitted to second-order polynomial equations. All three factors were found to significantly affect the color removal. On the other hand, electrolyte concentration was the most significant parameter affecting the COD removal. Numerical optimization was conducted to obtain the optimum process performance. Further work will be conducted towards integrating EC with other wastewater treatment processes such as electro-Fenton.

  17. Exploring the use of random regression models with legendre polynomials to analyze measures of volume of ejaculate in Holstein bulls.

    PubMed

    Carabaño, M J; Díaz, C; Ugarte, C; Serrano, M

    2007-02-01

    Artificial insemination centers routinely collect records of quantity and quality of semen of bulls throughout the animals' productive period. The goal of this paper was to explore the use of random regression models with orthogonal polynomials to analyze repeated measures of semen production of Spanish Holstein bulls. A total of 8,773 records of volume of first ejaculate (VFE) collected between 12 and 30 mo of age from 213 Spanish Holstein bulls was analyzed under alternative random regression models. Legendre polynomial functions of increasing order (0 to 6) were fitted to the average trajectory, additive genetic and permanent environmental effects. Age at collection and days in production were used as time variables. Heterogeneous and homogeneous residual variances were alternatively assumed. Analyses were carried out within a Bayesian framework. The logarithm of the marginal density and the cross-validation predictive ability of the data were used as model comparison criteria. Based on both criteria, age at collection as a time variable and heterogeneous residual models are recommended to analyze changes of VFE over time. Both criteria indicated that fitting random curves for the genetic and permanent environmental components as well as for the average trajectory improved the quality of the models. Furthermore, models with a higher order polynomial for the permanent environmental (5 to 6) than for the genetic components (4 to 5) and the average trajectory (2 to 3) tended to perform best. High-order polynomials were needed to accommodate the highly oscillating nature of the phenotypic values. Heritability and repeatability estimates, disregarding the extremes of the studied period, ranged from 0.15 to 0.35 and from 0.20 to 0.50, respectively, indicating that selection for VFE may be effective at any stage. Small differences among models were observed. Apart from the extremes, estimated correlations between ages decreased steadily from 0.9 and 0.4 for measures 1 mo apart to 0.4 and 0.2 for the most distant measures for additive genetic and phenotypic components, respectively. Further investigation to account for environmental factors that may be responsible for the oscillating observations of VFE is needed.
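
    A minimal sketch of building the Legendre covariate matrix used in random regression models, with ages standardized to [-1, 1]; the age range and polynomial order are illustrative, and the polynomials are not normalized as is sometimes done in genetic evaluations.

```python
# Hedged sketch: Legendre covariates for a random regression on age at collection.
import numpy as np
from numpy.polynomial.legendre import legvander

age = np.array([12, 15, 18, 21, 24, 27, 30], dtype=float)   # months at collection (assumed)
a_min, a_max = 12.0, 30.0
t = 2.0 * (age - a_min) / (a_max - a_min) - 1.0              # map ages to [-1, 1]

order = 3                                                    # cubic Legendre fit (assumed)
Phi = legvander(t, order)                                    # columns P_0(t) ... P_3(t)
print(Phi.shape)                                             # (7, 4) covariate matrix
```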

  18. Advanced Stochastic Collocation Methods for Polynomial Chaos in RAVEN

    NASA Astrophysics Data System (ADS)

    Talbot, Paul W.

    As experiment complexity in fields such as nuclear engineering continually increases, so does the demand for robust computational methods to simulate them. In many simulations, input design parameters and intrinsic experiment properties are sources of uncertainty. Often small perturbations in uncertain parameters have significant impact on the experiment outcome. For instance, in nuclear fuel performance, small changes in fuel thermal conductivity can greatly affect maximum stress on the surrounding cladding. The difficulty quantifying input uncertainty impact in such systems has grown with the complexity of numerical models. Traditionally, uncertainty quantification has been approached using random sampling methods like Monte Carlo. For some models, the input parametric space and corresponding response output space is sufficiently explored with few low-cost calculations. For other models, it is computationally costly to obtain good understanding of the output space. To combat the expense of random sampling, this research explores the possibilities of using advanced methods in Stochastic Collocation for generalized Polynomial Chaos (SCgPC) as an alternative to traditional uncertainty quantification techniques such as Monte Carlo (MC) and Latin Hypercube Sampling (LHS) methods for applications in nuclear engineering. We consider traditional SCgPC construction strategies as well as truncated polynomial spaces using Total Degree and Hyperbolic Cross constructions. We also consider applying anisotropy (unequal treatment of different dimensions) to the polynomial space, and offer methods whereby optimal levels of anisotropy can be approximated. We contribute development to existing adaptive polynomial construction strategies. Finally, we consider High-Dimensional Model Reduction (HDMR) expansions, using SCgPC representations for the subspace terms, and contribute new adaptive methods to construct them. We apply these methods on a series of models of increasing complexity. We use analytic models of various levels of complexity, then demonstrate performance on two engineering-scale problems: a single-physics nuclear reactor neutronics problem, and a multiphysics fuel cell problem coupling fuels performance and neutronics. Lastly, we demonstrate sensitivity analysis for a time-dependent fuels performance problem. We demonstrate the application of all the algorithms in RAVEN, a production-level uncertainty quantification framework.

  19. bcc-to-hcp transformation pathways for iron versus hydrostatic pressure: Coupled shuffle and shear modes

    NASA Astrophysics Data System (ADS)

    Liu, J. B.; Johnson, D. D.

    2009-04-01

    Using density-functional theory, we calculate the potential-energy surface (PES), minimum-energy pathway (MEP), and transition state (TS) versus hydrostatic pressure σhyd for the reconstructive transformation in Fe from body-centered cubic (bcc) to hexagonal close-packed (hcp). At fixed σhyd, the PES is described by coupled shear (γ) and shuffle (η) modes and is determined from structurally minimized hcp-bcc energy differences at a set of (η,γ). We fit the PES using symmetry-adapted polynomials, permitting the MEP to be found analytically. The MEP is continuous and fully explains the transformation and its associated magnetization and volume discontinuity at the TS. We show that σhyd (while not able to induce shear) dramatically alters the MEP to drive reconstruction by a shuffle-only mode at ≤ 30 GPa, as observed. Finally, we relate our polynomial-based results to Landau and nudged-elastic-band approaches and show that they yield incorrect MEPs in general.

  20. Third-order polynomial model for analyzing stickup state laminated structure in flexible electronics

    NASA Astrophysics Data System (ADS)

    Meng, Xianhong; Wang, Zihao; Liu, Boya; Wang, Shuodao

    2018-02-01

    Laminated hard-soft integrated structures play a significant role in the fabrication and development of flexible electronic devices. Flexible electronic devices are soft and lightweight, and can be folded, twisted, flipped inside-out, or pasted onto other surfaces of arbitrary shape. In this paper, an analytical model is presented to study the mechanics of laminated hard-soft structures in flexible electronics under a stickup state. Third-order polynomials are used to describe the displacement field, and the principle of virtual work is adopted to derive the governing equations and boundary conditions. The normal strain and the shear stress along the thickness direction in the bi-material region are obtained analytically, and they agree well with results from finite element analysis. The analytical model can be used to analyze stickup state laminated structures and can serve as a valuable reference for the failure prediction and optimal design of flexible electronics in the future.

  1. A new basis set for molecular bending degrees of freedom.

    PubMed

    Jutier, Laurent

    2010-07-21

    We present a new basis set as an alternative to Legendre polynomials for the variational treatment of bending vibrational degrees of freedom in order to greatly reduce the number of basis functions. This basis set is inspired by the harmonic oscillator eigenfunctions but is defined for a bending angle θ in the range [0, π]. The aim is to bring the basis functions closer to the nature of the final (ro)vibronic wave functions. Our methodology is extended to complicated potential energy surfaces, such as quasilinearity or multiple equilibrium geometries, by using several free parameters in the basis functions. These parameters allow several density maxima, linear or not, around which the basis functions will be mainly located. Divergences at linearity in integral computations are resolved as for generalized Legendre polynomials. All integral computations required for the evaluation of molecular Hamiltonian matrix elements are given for both the discrete variable representation and the finite basis representation. Convergence tests for the low-energy vibronic states of HCCH(++), HCCH(+), and HCCS are presented.

  2. iDriving (Intelligent Driving)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malikopoulos, Andreas

    2012-09-17

    iDriving identifies the driving style factors that have a major impact on fuel economy. An optimization framework is used with the aim of optimizing a driving style with respect to these driving factors. A set of polynomial metamodels is constructed to reflect the responses produced in fuel economy by changing the driving factors. The optimization framework is used to develop a real-time feedback system, including visual instructions, to enable drivers to alter their driving styles in response to actual driving conditions to improve fuel efficiency.

  3. Analytical approximation of a distorted reflector surface defined by a discrete set of points

    NASA Technical Reports Server (NTRS)

    Acosta, Roberto J.; Zaman, Afroz A.

    1988-01-01

    Reflector antennas on Earth-orbiting spacecraft generally cannot be described analytically. The reflector surface is subjected to large temperature fluctuations and gradients, and is thus warped from its true geometrical shape. Aside from distortion by thermal stresses, reflector surfaces are often purposely shaped to minimize phase aberrations and scanning losses. To analyze distorted reflector antennas defined by discrete surface points, a numerical technique must be applied to compute an interpolatory surface passing through a grid of discrete points. In this paper, the distorted reflector surface points are approximated by two analytical components: an undistorted surface component and a surface error component. The undistorted surface component is a best-fit paraboloid polynomial for the given set of points, and the surface error component is a Fourier series expansion of the deviation of the actual surface points from the best-fit paraboloid. By applying the numerical technique to approximate the surface normals of the distorted reflector surface, the induced surface current can be obtained using the physical optics technique. These surface currents are integrated to find the far-field radiation pattern.
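
    A minimal sketch of splitting discrete surface points into a best-fit paraboloid polynomial and a residual error component by linear least squares; the sample points are synthetic placeholders, and the Fourier-series expansion of the residual is not shown.

```python
# Hedged sketch: best-fit paraboloid z = c0 + c1 x + c2 y + c3 x^2 + c4 x y + c5 y^2.
import numpy as np

rng = np.random.default_rng(4)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = 0.25 * (x**2 + y**2) + 0.002 * np.sin(6 * x)           # distorted reflector samples (made up)

A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
c, *_ = np.linalg.lstsq(A, z, rcond=None)
error_component = z - A @ c                                 # deviation from best-fit paraboloid
print("RMS surface error:", np.sqrt(np.mean(error_component**2)))
```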

  4. Translation of Bernstein Coefficients Under an Affine Mapping of the Unit Interval

    NASA Technical Reports Server (NTRS)

    Alford, John A., II

    2012-01-01

    We derive an expression connecting the coefficients of a polynomial expanded in the Bernstein basis to the coefficients of an equivalent expansion of the polynomial under an affine mapping of the domain. The expression may be useful in the calculation of bounds for multi-variate polynomials.

  5. On polynomial selection for the general number field sieve

    NASA Astrophysics Data System (ADS)

    Kleinjung, Thorsten

    2006-12-01

    The general number field sieve (GNFS) is the asymptotically fastest algorithm for factoring large integers. Its runtime depends on a good choice of a polynomial pair. In this article we present an improvement of the polynomial selection method of Montgomery and Murphy which has been used in recent GNFS records.

  6. Graphical Solution of Polynomial Equations

    ERIC Educational Resources Information Center

    Grishin, Anatole

    2009-01-01

    Graphing utilities, such as the ubiquitous graphing calculator, are often used in finding the approximate real roots of polynomial equations. In this paper the author offers a simple graphing technique that allows one to find all solutions of a polynomial equation (1) of arbitrary degree; (2) with real or complex coefficients; and (3) possessing…
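
    A minimal numerical companion to the graphical approach described above: numpy's companion-matrix root finder returns all roots of a polynomial of arbitrary degree with real or complex coefficients; the example polynomial is arbitrary.

```python
# Hedged sketch: all real and complex roots of an arbitrary polynomial via numpy.roots.
import numpy as np

# p(z) = z^4 - (2 + 1j) z^2 + 3 z - 1, coefficients from highest to lowest degree
coeffs = [1, 0, -(2 + 1j), 3, -1]
print(np.roots(coeffs))
```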

  7. Evaluation of more general integrals involving universal associated Legendre polynomials

    NASA Astrophysics Data System (ADS)

    You, Yuan; Chen, Chang-Yuan; Tahir, Farida; Dong, Shi-Hai

    2017-05-01

    We find that the solution of the polar angular differential equation can be written as the universal associated Legendre polynomials. We present a popular integral formula which includes universal associated Legendre polynomials and we also evaluate some important integrals involving the product of two universal associated Legendre polynomials P_l'^m'(x), P_k'^n'(x) and the factors x^(2a)(1-x^2)^(-p-1), x^b(1±x^2)^(-p), and x^c(1-x^2)^(-p)(1±x)^(-1), where l'≠k' and m'≠n'. Their selection rules are also mentioned.

  8. Neck curve polynomials in neck rupture model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kurniadi, Rizal; Perkasa, Yudha S.; Waris, Abdul

    2012-06-06

    The Neck Rupture Model is a model that explains the scission process, in which the liquid drop has its smallest radius at a certain position. In the older formulation the rupture position is determined randomly, which is why it has been called the Random Neck Rupture Model (RNRM). Neck curve polynomials have been employed in the Neck Rupture Model to calculate the fission yield of the neutron-induced fission reaction of ^{280}X_{90}, varying the order of the polynomials as well as the temperature. The neck curve polynomial approximation shows important effects on the shape of the fission yield curve.

  9. More on rotations as spin matrix polynomials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Curtright, Thomas L.

    2015-09-15

    Any nonsingular function of spin j matrices always reduces to a matrix polynomial of order 2j. The challenge is to find a convenient form for the coefficients of the matrix polynomial. The theory of biorthogonal systems is a useful framework to meet this challenge. Central factorial numbers play a key role in the theoretical development. Explicit polynomial coefficients for rotations expressed either as exponentials or as rational Cayley transforms are considered here. Structural features of the results are discussed and compared, and large j limits of the coefficients are examined.

  10. Robust stability of fractional order polynomials with complicated uncertainty structure

    PubMed Central

    Şenol, Bilal; Pekař, Libor

    2017-01-01

    The main aim of this article is to present a graphical approach to robust stability analysis for families of fractional order (quasi-)polynomials with complicated uncertainty structure. More specifically, the work emphasizes the multilinear, polynomial and general structures of uncertainty and, moreover, the retarded quasi-polynomials with parametric uncertainty are studied. Since the families with these complex uncertainty structures suffer from the lack of analytical tools, their robust stability is investigated by numerical calculation and depiction of the value sets and subsequent application of the zero exclusion condition. PMID:28662173

  11. Application of polynomial su(1, 1) algebra to Pöschl-Teller potentials

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Hong-Biao, E-mail: zhanghb017@nenu.edu.cn; Lu, Lu

    2013-12-15

    Two novel polynomial su(1, 1) algebras for the physical systems with the first and second Pöschl-Teller (PT) potentials are constructed, and their specific representations are presented. Meanwhile, these polynomial su(1, 1) algebras are used as an algebraic technique to solve for the eigenvalues and eigenfunctions of the Hamiltonians associated with the first and second PT potentials. The algebraic approach introduces an appropriate new pair of raising and lowering operators K̂_± of the polynomial su(1, 1) algebra as a pair of shift operators for our Hamiltonians. In addition, two usual su(1, 1) algebras associated with the first and second PT potentials are derived naturally from the polynomial su(1, 1) algebras built here.

  12. Polynomials to model the growth of young bulls in performance tests.

    PubMed

    Scalez, D C B; Fragomeni, B O; Passafaro, T L; Pereira, I G; Toral, F L B

    2014-03-01

    The use of polynomial functions to describe the average growth trajectory and covariance functions of Nellore and MA (21/32 Charolais + 11/32 Nellore) young bulls in performance tests was studied. The average growth trajectories and the additive genetic and permanent environmental covariance functions were fit with Legendre (linear through quintic) and quadratic B-spline (with two to four intervals) polynomials. In general, the Legendre and quadratic B-spline models that included more covariance parameters provided a better fit to the data. When comparing models with the same number of parameters, the quadratic B-splines provided a better fit than the Legendre polynomials. The quadratic B-spline with four intervals provided the best fit for the Nellore and MA groups. Fitting random regression models with different types of polynomials (Legendre polynomials or B-splines) affected neither the genetic parameter estimates nor the ranking of the Nellore young bulls. However, fitting different types of polynomials affected the genetic parameter estimates and the ranking of the MA young bulls. Parsimonious Legendre or quadratic B-spline models could be used for genetic evaluation of body weight of Nellore young bulls in performance tests, whereas these parsimonious models were less efficient for animals of the MA genetic group owing to limited data at the extreme ages.
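
    As a hedged illustration of one ingredient of such models, the sketch below fits a fixed-order Legendre polynomial to an average growth trajectory after rescaling age to [-1, 1]; the full random-regression machinery (covariance functions, B-splines) is not attempted, and the ages and weights are synthetic.

        import numpy as np
        from numpy.polynomial import legendre as L

        # Synthetic average weights (kg) of young bulls at monthly test ages (days)
        age = np.array([240, 270, 300, 330, 360, 390, 420], dtype=float)
        weight = np.array([265, 288, 310, 333, 354, 372, 391], dtype=float)

        # Rescale age to the Legendre domain [-1, 1]
        t = 2.0 * (age - age.min()) / (age.max() - age.min()) - 1.0

        # Fit a cubic Legendre polynomial to the average trajectory
        coef = L.legfit(t, weight, deg=3)
        fitted = L.legval(t, coef)
        print("coefficients:", np.round(coef, 2))
        print("max abs residual (kg):", np.abs(weight - fitted).max())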

  13. Generating the patterns of variation with GeoGebra: the case of polynomial approximations

    NASA Astrophysics Data System (ADS)

    Attorps, Iiris; Björk, Kjell; Radic, Mirko

    2016-01-01

    In this paper, we report a teaching experiment on the theory of polynomial approximations in university mathematics teaching in Sweden. The experiment was designed by applying Variation theory and by using the free dynamic mathematics software GeoGebra. The aim of this study was to investigate whether technology-assisted teaching of Taylor polynomials, compared with the traditional way of working at the university level, can support the teaching and learning of mathematical concepts and ideas. An engineering student group (n = 19) was taught Taylor polynomials with the assistance of GeoGebra while a control group (n = 18) was taught in a traditional way. The data were gathered by video recording the lectures, by giving a post-test concerning Taylor polynomials in both groups and by including one question on Taylor polynomials in the final exam for the course in Real Analysis in one variable. In the analysis of the lectures, we found Variation theory combined with GeoGebra to be a potentially powerful tool for revealing some critical aspects of Taylor polynomials. Furthermore, the research results indicated that applying Variation theory when planning the technology-assisted teaching supported and enriched students' learning opportunities in the study group compared with the control group.

  14. Models for formation and choice of variants for organizing digital electronics manufacturing

    NASA Astrophysics Data System (ADS)

    Korshunov, G. I.; Lapkova, M. Y.; Polyakov, S. L.; Frolova, E. A.

    2018-03-01

    The directions for organizing digital electronics manufacturing are considered using the example of surface mount technology. The choice of basic equipment has to account for not only individual characteristics, but also the mutual influence of individual machines and the results of design for manufacturing. The application of special cases of the utility function, which in the general representation are complicated polynomial functions, is proposed for estimating product quality in staged automation.

  15. Dark texture in artworks

    NASA Astrophysics Data System (ADS)

    Parraman, Carinna

    2012-01-01

    This presentation highlights issues relating to the digital capture and printing of 2D and 3D artefacts and the accurate colour reproduction of 3D objects. There is a range of opportunities and technologies for the scanning and printing of two-dimensional and three-dimensional artefacts [1]. The Polynomial Texture Mapping (PTM) technique, used to create a Reflectance Transformation Image (RTI) [2-4], has been applied successfully to the conservation and heritage of artworks, as these methods are non-invasive and non-destructive for fragile artefacts. The approach captures the surface detail of two-dimensional artworks using a hemispherical dome comprising 64 lamps to record the entire surface topography. The benefit of this approach is a highly detailed visualization of the surface of materials and objects.
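
    A minimal sketch of the per-pixel fit that underlies PTM is given below: luminance under N known projected light directions (lu, lv) is modelled by the standard six-term biquadratic and solved by least squares. The dome geometry, the synthetic data, and the function names are assumptions for illustration only.

        import numpy as np

        def fit_ptm(images, light_dirs):
            """Fit the 6 PTM coefficients per pixel.
            images: (N, H, W) luminance under N lights; light_dirs: (N, 2) projected (lu, lv)."""
            lu, lv = light_dirs[:, 0], light_dirs[:, 1]
            A = np.column_stack([lu**2, lv**2, lu * lv, lu, lv, np.ones_like(lu)])   # (N, 6)
            N, H, W = images.shape
            coeffs, *_ = np.linalg.lstsq(A, images.reshape(N, H * W), rcond=None)   # (6, H*W)
            return coeffs.reshape(6, H, W)

        def relight(coeffs, lu, lv):
            """Evaluate the fitted biquadratic for a new light direction."""
            a0, a1, a2, a3, a4, a5 = coeffs
            return a0*lu**2 + a1*lv**2 + a2*lu*lv + a3*lu + a4*lv + a5

        # Tiny synthetic example: 64 lamp directions, a 4x4 "image" stack
        rng = np.random.default_rng(0)
        dirs = rng.uniform(-0.7, 0.7, size=(64, 2))
        imgs = rng.uniform(0, 1, size=(64, 4, 4))
        c = fit_ptm(imgs, dirs)
        print(relight(c, 0.1, -0.2).shape)                  # (4, 4) relit image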

  16. High-Level, First-Principles, Full-Dimensional Quantum Calculation of the Ro-vibrational Spectrum of the Simplest Criegee Intermediate (CH2OO).

    PubMed

    Li, Jun; Carter, Stuart; Bowman, Joel M; Dawes, Richard; Xie, Daiqian; Guo, Hua

    2014-07-03

    The ro-vibrational spectrum of the simplest Criegee intermediate (CH2OO) has been determined quantum mechanically based on nine-dimensional potential energy and dipole surfaces for its ground electronic state. The potential energy surface is fitted to more than 50 000 high-level ab initio points with a root-mean-square error of 25 cm(-1), using a recently proposed permutation invariant polynomial neural network method. The calculated rotational constants, vibrational frequencies, and spectral intensities of CH2OO are in excellent agreement with experiment. The potential energy surface provides a valuable platform for studying highly excited vibrational and unimolecular reaction dynamics of this important molecule.

  17. Modeling, Uncertainty Quantification and Sensitivity Analysis of Subsurface Fluid Migration in the Above Zone Monitoring Interval of a Geologic Carbon Storage

    NASA Astrophysics Data System (ADS)

    Namhata, A.; Dilmore, R. M.; Oladyshkin, S.; Zhang, L.; Nakles, D. V.

    2015-12-01

    Carbon dioxide (CO2) storage into geological formations has significant potential for mitigating anthropogenic CO2 emissions. An increasing emphasis on the commercialization and implementation of this approach to store CO2 has led to the investigation of the physical processes involved and to the development of system-wide mathematical models for the evaluation of potential geologic storage sites and the risk associated with them. The sub-system components under investigation include the storage reservoir, caprock seals, and the above zone monitoring interval, or AZMI, to name a few. Diffusive leakage of CO2 through the caprock seal to overlying formations may occur due to its intrinsic permeability and/or the presence of natural/induced fractures. This results in a potential risk to environmental receptors such as underground sources of drinking water. In some instances, leaking CO2 also has the potential to reach the ground surface and result in atmospheric impacts. In this work, fluid (i.e., CO2 and brine) flow above the caprock, in the region designated as the AZMI, is modeled for a leakage event of a typical geologic storage system with different possible boundary scenarios. An analytical and approximate solution for radial migration of fluids in the AZMI with continuous inflow of fluids from the reservoir through the caprock has been developed. In its present form, the AZMI model predicts the spatial changes in pressure - gas saturations over time in a layer immediately above the caprock. The modeling is performed for a benchmark case and the data-driven approach of arbitrary Polynomial Chaos (aPC) Expansion is used to quantify the uncertainty of the model outputs based on the uncertainty of model input parameters such as porosity, permeability, formation thickness, and residual brine saturation. The recently developed aPC approach performs stochastic model reduction and approximates the models by a polynomial-based response surface. Finally, a global sensitivity analysis was performed with Sobol indices based on the aPC technique to determine the relative importance of these input parameters on the model output space.
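
    The abstract does not spell out the aPC construction itself; the sketch below shows the classical special case it generalizes, a degree-2 Legendre polynomial chaos response surface fitted by least squares for uniform inputs, with first-order Sobol indices read directly from the coefficients. The toy model and all names are assumptions.

        import numpy as np
        from itertools import product

        def psi(n, x):
            """Orthonormal Legendre polynomials (degree 0..2) for inputs uniform on [-1, 1]."""
            if n == 0:
                return np.ones_like(x)
            if n == 1:
                return np.sqrt(3.0) * x
            return np.sqrt(5.0) * 0.5 * (3.0 * x**2 - 1.0)

        def pce_sobol(model, dim=3, deg=2, n_samples=2000, seed=1):
            rng = np.random.default_rng(seed)
            X = rng.uniform(-1.0, 1.0, size=(n_samples, dim))
            y = model(X)
            alphas = [a for a in product(range(deg + 1), repeat=dim) if sum(a) <= deg]
            # Design matrix of tensor-product orthonormal Legendre polynomials
            Phi = np.column_stack([np.prod([psi(a[i], X[:, i]) for i in range(dim)], axis=0)
                                   for a in alphas])
            c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
            var = sum(ci**2 for a, ci in zip(alphas, c) if sum(a) > 0)
            # First-order Sobol index of input i: terms that involve only x_i
            return np.array([sum(ci**2 for a, ci in zip(alphas, c)
                                 if a[i] > 0 and sum(a) == a[i]) / var for i in range(dim)])

        # Toy "leakage" response dominated by the first input
        toy = lambda X: 2.0 * X[:, 0] + 0.5 * X[:, 1] + 0.2 * X[:, 0] * X[:, 2]
        print(pce_sobol(toy))   # roughly [0.94, 0.06, 0.0]; the interaction is excluded from first-order indices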

  18. Simulating Cyclic Evolution of Coronal Magnetic Fields using a Potential Field Source Surface Model Coupled with a Dynamo Model

    NASA Astrophysics Data System (ADS)

    Suresh, A.; Dikpati, M.; Burkepile, J.; de Toma, G.

    2013-12-01

    The structure of the Sun's corona varies with solar cycle, from a near spherical symmetry at solar maximum to an axial dipole at solar minimum. Why does this pattern occur? It is widely accepted that large-scale coronal structure is governed by magnetic fields, which are most likely generated by the dynamo action in the solar interior. In order to understand the variation in coronal structure, we couple a potential field source surface model with a cyclic dynamo model. In this coupled model, the magnetic field inside the convection zone is governed by the dynamo equation and above the photosphere these dynamo-generated fields are extended from the photosphere to the corona by using a potential field source surface model. Under the assumption of axisymmetry, the large-scale poloidal fields can be written in terms of the curl of a vector potential. Since from the photosphere and above the magnetic diffusivity is essentially infinite, the evolution of the vector potential is given by Laplace's Equation, the solution of which is obtained in the form of a first order Associated Legendre Polynomial. By taking linear combinations of these polynomial terms, we find solutions that match more complex coronal structures. Choosing images of the global corona from the Mauna Loa Solar Observatory at each Carrington rotation over half a cycle (1986-1991), we compute the coefficients of the Associated Legendre Polynomials up to degree eight and compare with observation. We reproduce some previous results that at minimum the dipole term dominates, but that this term fades with the progress of the cycle and higher order multipole terms begin to dominate. We find that the amplitudes of these terms are not exactly the same in the two limbs, indicating that there is some phi dependence. Furthermore, by comparing the solar minimum corona during the past three minima (1986, 1996, and 2008), we find that, while both the 1986 and 1996 minima were dipolar, the minimum in 2008 was unusual, as there was departure from a dipole. In order to investigate the physical cause of this departure from dipole, we implement north-south asymmetry in the surface source of the magnetic fields in our model, and find that such n/s asymmetry in solar cycle could be one of the reasons for this departure. This work is partially supported by NASA's LWS grant with award number NNX08AQ34G. NCAR is sponsored by the NSF.
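
    The sketch below illustrates only the last step mentioned above, expressing an axisymmetric profile as a least-squares combination of associated Legendre polynomials P_l^1(cos θ) up to degree eight; it does not reproduce the coupled dynamo/PFSS model or the Mauna Loa image processing, and the synthetic profile is an assumption.

        import numpy as np
        from scipy.special import lpmv

        def fit_legendre_coeffs(theta, profile, lmax=8):
            """Least-squares coefficients c_l of sum_l c_l * P_l^1(cos(theta)) for a limb profile."""
            x = np.cos(theta)
            A = np.column_stack([lpmv(1, l, x) for l in range(1, lmax + 1)])
            c, *_ = np.linalg.lstsq(A, profile, rcond=None)
            return c

        # Synthetic "solar-minimum" profile: dipole-dominated plus a weak higher-order contribution
        theta = np.linspace(0.05, np.pi - 0.05, 180)
        profile = 1.0 * lpmv(1, 1, np.cos(theta)) + 0.1 * lpmv(1, 5, np.cos(theta))
        print(np.round(fit_legendre_coeffs(theta, profile), 3))   # ~[1, 0, 0, 0, 0.1, 0, 0, 0]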

  19. Optimization study for Pb(II) and COD sequestration by consortium of sulphate-reducing bacteria

    NASA Astrophysics Data System (ADS)

    Verma, Anamika; Bishnoi, Narsi R.; Gupta, Asha

    2017-09-01

    In this study, the initial minimum inhibitory concentration (MIC) of Pb(II) ions was analysed to determine the optimum concentration of Pb(II) ions at which the growth of the sulphate-reducing consortium (SRC) was maximal. 80 ppm of Pb(II) ions was identified as the minimum inhibitory concentration for the SRC. The influence of electron donors such as lactose, sucrose, glucose and sodium lactate was examined to identify the best carbon source for the growth and activity of sulphate-reducing bacteria. Sodium lactate was found to be the prime carbon source for the SRC. Subsequently, optimization of various parameters was carried out using the Box-Behnken design of response surface methodology to explore the effect of three independent operating variables, namely pH (5.0-9.0), temperature (32-42 °C) and time (5.0-9.0 days), on the dependent variables, i.e. protein content, precipitation of Pb(II) ions, and removal of COD by the SRC biomass. Maximum removal of COD and Pb(II) was observed to be 91 and 98 %, respectively, at pH 7.0, temperature 37 °C and incubation time 7 days. According to the response surface analysis and analysis of variance, the experimental data were well fitted by the quadratic model, and the interactive influence of pH, temperature and time on Pb(II) and COD removal was highly significant. A high regression coefficient between the variables and the response (r² = 0.9974) confirms that the second-order polynomial regression model describes the experimental data well. SEM and Fourier transform infrared analyses were performed to investigate the morphology of the PbS precipitates, the sorption mechanism and the functional groups involved in Pb(II) binding in metal-free and metal-loaded SRC biomass.
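
    A minimal sketch of the second-order polynomial fit that response surface methodology applies to a three-factor Box-Behnken design (e.g. pH, temperature, time in coded units) is given below; the coded design points follow the standard construction, while the response values are synthetic and are not the study's data.

        import numpy as np
        from itertools import combinations

        def box_behnken_3(n_center=3):
            """Coded Box-Behnken design for 3 factors: +/-1 pairs on each factor pair, plus centre points."""
            pts = []
            for i, j in combinations(range(3), 2):
                for a in (-1, 1):
                    for b in (-1, 1):
                        p = [0, 0, 0]
                        p[i], p[j] = a, b
                        pts.append(p)
            return np.array(pts + [[0, 0, 0]] * n_center, dtype=float)

        def quadratic_design_matrix(X):
            x1, x2, x3 = X.T
            return np.column_stack([np.ones(len(X)), x1, x2, x3,
                                    x1 * x2, x1 * x3, x2 * x3,
                                    x1**2, x2**2, x3**2])

        X = box_behnken_3()
        # Synthetic Pb(II) removal (%) peaking near the centre of the design
        y = 98 - 3 * X[:, 0]**2 - 4 * X[:, 1]**2 - 2 * X[:, 2]**2 + 1.5 * X[:, 0] * X[:, 1]
        A = quadratic_design_matrix(X)
        beta, *_ = np.linalg.lstsq(A, y, rcond=None)
        ss_res = np.sum((y - A @ beta)**2)
        ss_tot = np.sum((y - y.mean())**2)
        print("R^2 =", 1 - ss_res / ss_tot)                 # ~1 for this noise-free illustration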

  20. Compression of head-related transfer function using autoregressive-moving-average models and Legendre polynomials.

    PubMed

    Shekarchi, Sayedali; Hallam, John; Christensen-Dalsgaard, Jakob

    2013-11-01

    Head-related transfer functions (HRTFs) are generally large datasets, which can be an important constraint for embedded real-time applications. A method is proposed here to reduce redundancy and compress the datasets. In this method, HRTFs are first compressed by conversion into autoregressive-moving-average (ARMA) filters whose coefficients are calculated using Prony's method. Such filters are specified by a few coefficients which can generate the full head-related impulse responses (HRIRs). Next, Legendre polynomials (LPs) are used to compress the ARMA filter coefficients. LPs are derived on the sphere and form an orthonormal basis set for spherical functions. Higher-order LPs capture increasingly fine spatial details. The number of LPs needed to represent an HRTF, therefore, is indicative of its spatial complexity. The results indicate that compression ratios can exceed 98% while maintaining a spectral error of less than 4 dB in the recovered HRTFs.

  1. High Dynamic Range Imaging Using Multiple Exposures

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Zhou, Peipei; Zhou, Wei

    2017-06-01

    It is challenging to capture a high-dynamic-range (HDR) scene using a low-dynamic-range (LDR) camera. This paper presents an approach for improving the dynamic range of cameras by using multiple exposure images of the same scene taken under different exposure times. First, the camera response function (CRF) is recovered by solving a high-order polynomial in which only the ratios of the exposures are used. Then, the HDR radiance image is reconstructed by a weighted summation of the individual radiance maps. After that, a novel local tone mapping (TM) operator is proposed for the display of the HDR radiance image. By solving the high-order polynomial, the CRF can be recovered quickly and easily. Taking local image features and the characteristics of histogram statistics into consideration, the proposed TM operator preserves local details efficiently. Experimental results demonstrate the effectiveness of our method. By comparison, the method outperforms other methods in terms of imaging quality.
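
    A hedged sketch of the radiance-map fusion step is shown below: given an inverse camera response (assumed already recovered) and the exposure times, each pixel's radiance estimates are combined with a hat-shaped weight. The polynomial CRF recovery and the tone-mapping operator are not reproduced, and all names are illustrative.

        import numpy as np

        def hat_weight(z):
            """Triangular weight favouring mid-range pixel values (z in [0, 1])."""
            return 1.0 - np.abs(2.0 * z - 1.0)

        def merge_hdr(images, exposure_times, inv_crf):
            """Weighted fusion of radiance maps from multiple exposures of the same scene.
            images: list of float arrays in [0, 1]; inv_crf maps pixel value -> relative irradiance."""
            num = np.zeros_like(images[0])
            den = np.zeros_like(images[0])
            for z, dt in zip(images, exposure_times):
                w = hat_weight(z)
                num += w * inv_crf(z) / dt
                den += w
            return num / np.maximum(den, 1e-6)

        # Illustration with a hypothetical gamma-like inverse response and two exposures
        inv_crf = lambda z: z ** 2.2
        imgs = [np.array([[0.2, 0.8]]), np.array([[0.4, 0.95]])]
        print(merge_hdr(imgs, [1 / 60, 1 / 15], inv_crf))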

  2. The spectral cell method in nonlinear earthquake modeling

    NASA Astrophysics Data System (ADS)

    Giraldo, Daniel; Restrepo, Doriam

    2017-12-01

    This study examines the applicability of the spectral cell method (SCM) to compute the nonlinear earthquake response of complex basins. SCM combines fictitious-domain concepts with the spectral version of the finite element method to solve the wave equations in heterogeneous geophysical domains. Nonlinear behavior is considered by implementing the Mohr-Coulomb and Drucker-Prager yielding criteria. We illustrate the performance of SCM with numerical examples of nonlinear basins exhibiting physically and computationally challenging conditions. The numerical experiments are benchmarked against results from overkill solutions and against MIDAS GTS NX, a finite element software package for geotechnical applications. Our findings show good agreement between the two sets of results. Traditional spectral element implementations allow points per wavelength as low as PPW = 4.5 for high-order polynomials. Our findings show that, in the presence of nonlinearity, high-order polynomials (p ≥ 3) require mesh resolutions of PPW ≥ 10 to keep displacement errors below 10%.

  3. A general U-block model-based design procedure for nonlinear polynomial control systems

    NASA Astrophysics Data System (ADS)

    Zhu, Q. M.; Zhao, D. Y.; Zhang, Jianhua

    2016-10-01

    The proposition of the U-model concept (in terms of 'providing concise and applicable solutions for complex problems') and a corresponding basic U-control design algorithm originated in the first author's PhD thesis. The term U-model appeared (not rigorously defined) for the first time in the first author's other journal paper, which established a framework for using linear polynomial control system design approaches to design nonlinear polynomial control systems (in brief, linear polynomial approaches → nonlinear polynomial plants). This paper represents the next milestone: using linear state-space approaches to design nonlinear polynomial control systems (in brief, linear state-space approaches → nonlinear polynomial plants). The overall aim of the study is to establish a framework, defined as the U-block model, which provides a generic prototype for using linear state-space-based approaches to design control systems for smooth nonlinear plants/processes described by polynomial models. To analyse feasibility and effectiveness, the sliding mode control design approach is selected as an exemplary case study. Numerical simulation studies provide a user-friendly step-by-step procedure for readers/users interested in their ad hoc applications. This is the first paper to present the U-model-oriented control system design in a formal way and to study the associated properties and theorems; the previous publications, in the main, have been algorithm-based studies and simulation demonstrations. In some sense, this paper can be treated as a landmark for U-model-based research, moving from an intuitive/heuristic stage to rigorous, formal and comprehensive studies.

  4. Do cyanobacteria swim using traveling surface waves?

    PubMed Central

    Ehlers, K M; Samuel, A D; Berg, H C; Montgomery, R

    1996-01-01

    Bacteria that swim without the benefit of flagella might do so by generating longitudinal or transverse surface waves. For example, swimming speeds of order 25 microns/s are expected for a spherical cell propagating longitudinal waves of 0.2 micron length, 0.02 micron amplitude, and 160 microns/s speed. This problem was solved earlier by mathematicians who were interested in the locomotion of ciliates and who considered the undulations of the envelope swept out by ciliary tips. A new solution is given for spheres propagating sinusoidal waveforms rather than Legendre polynomials. The earlier work is reviewed and possible experimental tests are suggested. PMID:8710872

  5. Theoretical Study of the Effect of Enamel Parameters on Laser-Induced Surface Acoustic Waves in Human Incisor

    NASA Astrophysics Data System (ADS)

    Yuan, Ling; Sun, Kaihua; Shen, Zhonghua; Ni, Xiaowu; Lu, Jian

    2015-06-01

    The laser ultrasound technique has great potential for clinical diagnosis of teeth because of its many advantages. To study laser surface acoustic wave (LSAW) propagation in human teeth, two theoretical methods, the finite element method (FEM) and Laguerre polynomial extension method (LPEM), are presented. The full field temperature values and SAW displacements in an incisor can be obtained by the FEM. The SAW phase velocity in a healthy incisor and dental caries is obtained by the LPEM. The methods and results of this work can provide a theoretical basis for nondestructive evaluation of human teeth with LSAWs.

  6. Riemann surfaces of complex classical trajectories and tunnelling splitting in one-dimensional systems

    NASA Astrophysics Data System (ADS)

    Harada, Hiromitsu; Mouchet, Amaury; Shudo, Akira

    2017-10-01

    The topology of complex classical paths is investigated to discuss quantum tunnelling splittings in one-dimensional systems. Here the Hamiltonian is assumed to be given as polynomial functions, so the fundamental group for the Riemann surface provides complete information on the topology of complex paths, which allows us to enumerate all the possible candidates contributing to the semiclassical sum formula for tunnelling splittings. This naturally leads to action relations among classically disjoined regions, revealing entirely non-local nature in the quantization condition. The importance of the proper treatment of Stokes phenomena is also discussed in Hamiltonians in the normal form.

  7. Using multi-dimensional Smolyak interpolation to make a sum-of-products potential

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Avila, Gustavo, E-mail: Gustavo-Avila@telefonica.net; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca

    2015-07-28

    We propose a new method for obtaining potential energy surfaces in sum-of-products (SOP) form. If the number of terms is small enough, a SOP potential surface significantly reduces the cost of quantum dynamics calculations by obviating the need to do multidimensional integrals by quadrature. The method is based on a Smolyak interpolation technique and uses polynomial-like or spectral basis functions and 1D Lagrange-type functions. When written in terms of the basis functions from which the Lagrange-type functions are built, the Smolyak interpolant has only a modest number of terms. The ideas are tested for HONO (nitrous acid)

  8. A generalization of algebraic surface drawing

    NASA Technical Reports Server (NTRS)

    Blinn, J. F.

    1982-01-01

    An implicit surface is a mathematical description of a surface in three-dimensional space defined in terms of all points which satisfy some equation F(x, y, z) = 0. This form is ideal for space-shaded picture drawing, where the pixel coordinates are substituted for x and y and the equation is solved for z. A new algorithm is presented which is applicable to functional forms other than first- and second-order polynomials, such as the summation of several Gaussian density distributions. The algorithm was created in order to model electron density maps of molecular structures, but is shown to be capable of generating shapes of esthetic interest.
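
    A minimal sketch of the ray/surface intersection idea is given below: for each pixel (x, y), F(x, y, z) = 0 is solved for z by bracketing and bisection, with F a sum of Gaussian density "blobs" minus an iso-level. The blob placement and threshold are arbitrary illustrative choices, not the original algorithm.

        import numpy as np

        CENTERS = np.array([[0.0, 0.0, 0.0], [0.6, 0.0, 0.3]])   # two Gaussian blobs
        THRESHOLD = 0.5

        def F(x, y, z):
            """Implicit field: summed Gaussian densities minus an iso-level."""
            r2 = (x - CENTERS[:, 0])**2 + (y - CENTERS[:, 1])**2 + (z - CENTERS[:, 2])**2
            return np.exp(-4.0 * r2).sum() - THRESHOLD

        def solve_z(x, y, z_near=2.0, z_far=-2.0, iters=60):
            """First z (seen from +z) where F = 0, or None if the ray misses the surface."""
            zs = np.linspace(z_near, z_far, 200)                  # march to bracket a sign change
            vals = np.array([F(x, y, z) for z in zs])
            idx = np.nonzero(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
            if len(idx) == 0:
                return None
            lo, hi = zs[idx[0]], zs[idx[0] + 1]
            for _ in range(iters):                                # bisection refinement
                mid = 0.5 * (lo + hi)
                if np.sign(F(x, y, mid)) == np.sign(F(x, y, lo)):
                    lo = mid
                else:
                    hi = mid
            return 0.5 * (lo + hi)

        print(solve_z(0.0, 0.0))    # first hit on the blobby surface (about z ~ 0.5 here)
        print(solve_z(3.0, 3.0))    # ray misses: None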

  9. Selective corneal optical aberration (SCOA) for customized ablation

    NASA Astrophysics Data System (ADS)

    Jean, Benedikt J.; Bende, Thomas

    2001-06-01

    Wavefront analysis still has some technical problems, which may be solved within the next few years. There are some limitations to using wavefront measurements alone as a diagnostic tool for customized ablation. An ideal combination would be wavefront and topography. Meanwhile, Selective Corneal Aberration is a method to visualize the optical quality of a measured corneal surface. It is based on true measured 3D elevation information from a video topometer. The values can thus be interpreted either using Zernike polynomials or visualized as a so-called colour-coded surface quality map. This map gives a quality factor (corneal aberration) for each measured point of the cornea.
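
    As a hedged illustration of the Zernike interpretation mentioned above, the sketch below expresses a synthetic elevation map in six low-order (unnormalized) Zernike terms by least squares on the unit pupil; it is not the SCOA pipeline itself, and the data are synthetic.

        import numpy as np

        def zernike_basis(rho, theta):
            """Six low-order (unnormalized) Zernike terms at polar pupil coordinates."""
            return np.column_stack([
                np.ones_like(rho),            # piston
                rho * np.cos(theta),          # tilt x
                rho * np.sin(theta),          # tilt y
                2 * rho**2 - 1,               # defocus
                rho**2 * np.cos(2 * theta),   # astigmatism 0/90
                rho**2 * np.sin(2 * theta),   # astigmatism 45
            ])

        # Synthetic corneal elevation sampled inside the unit pupil
        rng = np.random.default_rng(0)
        rho = np.sqrt(rng.uniform(0, 1, 500))
        theta = rng.uniform(0, 2 * np.pi, 500)
        elevation = 0.8 * (2 * rho**2 - 1) + 0.1 * rho**2 * np.cos(2 * theta)

        A = zernike_basis(rho, theta)
        coeffs, *_ = np.linalg.lstsq(A, elevation, rcond=None)
        print(np.round(coeffs, 3))    # recovers ~0.8 defocus and ~0.1 astigmatism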

  10. Trend-surface analysis of morphometric parameters: A case study in southeastern Brazil

    NASA Astrophysics Data System (ADS)

    Grohmann, Carlos Henrique

    2005-10-01

    Trend-surface analysis was carried out on data for the morphometric parameters isobase and hydraulic gradient. The study area, located on the eastern border of the Quadrilátero Ferrífero, southeastern Brazil, presents four main geomorphological units: one characterized by fluvial dissection, two of mountainous relief with a scarp of hundreds of meters of fall between them, and a flat plateau in the central portion of the fluvially dissected terrain. Morphometric maps were evaluated in GRASS-GIS and statistics were computed in the R statistical language, using the spatial package. Analysis of variance (ANOVA) was performed to test the significance of each surface and the significance of increasing the polynomial degree. The best results were achieved with a sixth-order surface for isobase and a second-order surface for hydraulic gradient. The shapes and orientations of the residual map contours for selected trends were compared with structures inferred from several morphometric maps, and a good correlation is present.
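
    A minimal sketch of polynomial trend-surface fitting, with an F-test for whether increasing the degree is statistically significant, is given below; it uses synthetic morphometric values and does not reproduce the GRASS-GIS/R workflow of the study.

        import numpy as np
        from scipy import stats

        def trend_surface(x, y, z, degree):
            """Least-squares polynomial trend surface of given total degree: (coeffs, RSS, n_params)."""
            terms = [(i, j) for i in range(degree + 1) for j in range(degree + 1 - i)]
            A = np.column_stack([x**i * y**j for i, j in terms])
            c, *_ = np.linalg.lstsq(A, z, rcond=None)
            return c, np.sum((z - A @ c)**2), len(terms)

        def anova_increase(x, y, z, d_low, d_high):
            """F-test: does raising the trend-surface degree significantly reduce the residuals?"""
            _, rss1, p1 = trend_surface(x, y, z, d_low)
            _, rss2, p2 = trend_surface(x, y, z, d_high)
            n = len(z)
            F = ((rss1 - rss2) / (p2 - p1)) / (rss2 / (n - p2))
            return F, 1 - stats.f.cdf(F, p2 - p1, n - p2)

        rng = np.random.default_rng(2)
        x, y = rng.uniform(0, 10, 300), rng.uniform(0, 10, 300)
        z = 5 + 0.4 * x - 0.2 * y + 0.05 * x * y + rng.normal(0, 0.3, 300)   # synthetic isobase-like values
        print(anova_increase(x, y, z, 1, 2))   # large F, tiny p: the second-order surface is warranted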

  11. Animating Nested Taylor Polynomials to Approximate a Function

    ERIC Educational Resources Information Center

    Mazzone, Eric F.; Piper, Bruce R.

    2010-01-01

    The way that Taylor polynomials approximate functions can be demonstrated by moving the center point while keeping the degree fixed. These animations are particularly nice when the Taylor polynomials do not intersect and form a nested family. We prove a result that shows when this nesting occurs. The animations can be shown in class or…

  12. Polynomial Conjoint Analysis of Similarities: A Model for Constructing Polynomial Conjoint Measurement Algorithms.

    ERIC Educational Resources Information Center

    Young, Forrest W.

    A model permitting construction of algorithms for the polynomial conjoint analysis of similarities is presented. This model, which is based on concepts used in nonmetric scaling, permits one to obtain the best approximate solution. The concepts used to construct nonmetric scaling algorithms are reviewed. Finally, examples of algorithmic models for…

  13. Dual exponential polynomials and linear differential equations

    NASA Astrophysics Data System (ADS)

    Wen, Zhi-Tao; Gundersen, Gary G.; Heittokangas, Janne

    2018-01-01

    We study linear differential equations with exponential polynomial coefficients, where exactly one coefficient is of order greater than all the others. The main result shows that a nontrivial exponential polynomial solution of such an equation has a certain dual relationship with the maximum order coefficient. Several examples illustrate our results and exhibit possibilities that can occur.

  14. Polynomial Graphs and Symmetry

    ERIC Educational Resources Information Center

    Goehle, Geoff; Kobayashi, Mitsuo

    2013-01-01

    Most quadratic functions are not even, but every parabola has symmetry with respect to some vertical line. Similarly, every cubic has rotational symmetry with respect to some point, though most cubics are not odd. We show that every polynomial has at most one point of symmetry and give conditions under which the polynomial has rotational or…

  15. Why the Faulhaber Polynomials Are Sums of Even or Odd Powers of (n + 1/2)

    ERIC Educational Resources Information Center

    Hersh, Reuben

    2012-01-01

    By extending Faulhaber's polynomial to negative values of n, the sum of the p'th powers of the first n integers is seen to be an even or odd polynomial in (n + 1/2) and therefore expressible in terms of the sum of the first n integers.

  16. Self-Replicating Quadratics

    ERIC Educational Resources Information Center

    Withers, Christopher S.; Nadarajah, Saralees

    2012-01-01

    We show that there are exactly four quadratic polynomials, Q(x) = x^2 + ax + b, such that (x^2 + ax + b)(x^2 - ax + b) = x^4 + ax^2 + b. For n = 1, 2, ..., these quadratic polynomials can be written as the product of N = 2^n quadratic polynomials in x^…

  17. Polynomial expansions of single-mode motions around equilibrium points in the circular restricted three-body problem

    NASA Astrophysics Data System (ADS)

    Lei, Hanlun; Xu, Bo; Circi, Christian

    2018-05-01

    In this work, the single-mode motions around the collinear and triangular libration points in the circular restricted three-body problem are studied. To describe these motions, we adopt an invariant manifold approach, in which a suitable pair of independent variables is taken as modal coordinates and the remaining state variables are expressed as polynomial series of them. Based on the invariant manifold approach, the general procedure for constructing polynomial expansions up to a certain order is outlined. Taking the Earth-Moon system as the example dynamical model, we construct the polynomial expansions up to the tenth order for the single-mode motions around the collinear libration points, and up to orders eight and six for the planar and vertical periodic motions around the triangular libration point, respectively. The polynomial expansions constructed can be used to determine the initial states for single-mode motions around the equilibrium points. To check their validity, the accuracy of the initial states determined by the polynomial expansions is evaluated.

  18. Orbifold E-functions of dual invertible polynomials

    NASA Astrophysics Data System (ADS)

    Ebeling, Wolfgang; Gusein-Zade, Sabir M.; Takahashi, Atsushi

    2016-08-01

    An invertible polynomial is a weighted homogeneous polynomial with the number of monomials coinciding with the number of variables and such that the weights of the variables and the quasi-degree are well defined. In the framework of the search for mirror symmetric orbifold Landau-Ginzburg models, P. Berglund and M. Henningson considered a pair (f, G) consisting of an invertible polynomial f and an abelian group G of its symmetries together with a dual pair (f̃, G̃). We consider the so-called orbifold E-function of such a pair (f, G), which is a generating function for the exponents of the monodromy action on an orbifold version of the mixed Hodge structure on the Milnor fibre of f. We prove that the orbifold E-functions of Berglund-Henningson dual pairs coincide up to a sign depending on the number of variables and a simple change of variables. The proof is based on a relation between monomials (say, elements of a monomial basis of the Milnor algebra of an invertible polynomial) and elements of the whole symmetry group of the dual polynomial.

  19. Local polynomial estimation of heteroscedasticity in a multivariate linear regression model and its applications in economics.

    PubMed

    Su, Liyun; Zhao, Yanyong; Yan, Tianshun; Li, Fenglan

    2012-01-01

    Multivariate local polynomial fitting is applied to the multivariate linear heteroscedastic regression model. First, local polynomial fitting is applied to estimate the heteroscedastic function, and then the coefficients of the regression model are obtained by the generalized least squares method. One noteworthy feature of our approach is that we avoid testing for heteroscedasticity by improving the traditional two-stage method. Owing to the non-parametric technique of local polynomial estimation, it is unnecessary to know the form of the heteroscedastic function, so we can improve the estimation precision when the heteroscedastic function is unknown. Furthermore, we verify that the regression coefficients are asymptotically normal based on numerical simulations and normal Q-Q plots of residuals. Finally, the simulation results and the local polynomial estimation of real data indicate that our approach is effective in finite-sample situations.
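
    A hedged sketch of the two-stage idea, on synthetic univariate data, is given below: ordinary least squares first, a kernel-weighted local polynomial (local linear) estimate of the variance function from the squared residuals, then weighted (generalized) least squares with the estimated variances. The bandwidth and names are illustrative, and this is not the authors' exact multivariate estimator.

        import numpy as np

        def local_linear(x0, x, y, h):
            """Local linear (degree-1 local polynomial) estimate of E[y | x = x0] with a Gaussian kernel."""
            w = np.exp(-0.5 * ((x - x0) / h) ** 2)
            X = np.column_stack([np.ones_like(x), x - x0])
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
            return beta[0]

        rng = np.random.default_rng(3)
        n = 400
        x = rng.uniform(0, 2, n)
        sigma = 0.2 + 0.5 * x                                   # true heteroscedastic standard deviation
        y = 1.0 + 2.0 * x + sigma * rng.normal(size=n)

        # Stage 1: ordinary least squares and squared residuals
        X = np.column_stack([np.ones(n), x])
        beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
        resid2 = (y - X @ beta_ols) ** 2

        # Stage 2: local polynomial estimate of the variance function, then weighted least squares
        var_hat = np.array([local_linear(xi, x, resid2, h=0.2) for xi in x])
        w = 1.0 / np.clip(var_hat, 1e-3, None)
        beta_gls = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
        print("OLS:", np.round(beta_ols, 3), " GLS:", np.round(beta_gls, 3))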

  20. Symmetric polynomials in information theory: Entropy and subentropy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jozsa, Richard; Mitchison, Graeme

    2015-06-15

    Entropy and other fundamental quantities of information theory are customarily expressed and manipulated as functions of probabilities. Here we study the entropy H and subentropy Q as functions of the elementary symmetric polynomials in the probabilities and reveal a series of remarkable properties. Derivatives of all orders are shown to satisfy a complete monotonicity property. H and Q themselves become multivariate Bernstein functions and we derive the density functions of their Levy-Khintchine representations. We also show that H and Q are Pick functions in each symmetric polynomial variable separately. Furthermore, we see that H and the intrinsically quantum informational quantity Q become surprisingly closely related in functional form, suggesting a special significance for the symmetric polynomials in quantum information theory. Using the symmetric polynomials, we also derive a series of further properties of H and Q.
