Sample records for modified Newton-Raphson algorithm

  1. Neural Generalized Predictive Control: A Newton-Raphson Implementation

    NASA Technical Reports Server (NTRS)

    Soloway, Donald; Haley, Pamela J.

    1997-01-01

    An efficient implementation of Generalized Predictive Control using a multi-layer feedforward neural network as the plant's nonlinear model is presented. By using Newton-Raphson as the optimization algorithm, the number of iterations needed for convergence is significantly reduced compared with other techniques. The main cost of the Newton-Raphson algorithm is the calculation of the Hessian, but even with this overhead the low iteration count makes Newton-Raphson faster than other techniques and a viable algorithm for real-time control. This paper presents a detailed derivation of the Neural Generalized Predictive Control algorithm with Newton-Raphson as the minimization algorithm. Simulation results show convergence to a good solution within two iterations, and timing data show that real-time control is possible. Comments about the algorithm's implementation are also included.
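
    The core of the scheme described above is a full Newton-Raphson step on the predictive-control cost, updating the control sequence by the inverse Hessian times the gradient. A minimal Python sketch of that step follows; the cost J, the finite-difference derivatives, and the toy tracking target are illustrative placeholders, not the paper's neural-network plant model or GPC cost.

    ```python
    import numpy as np

    def newton_raphson_minimize(J, u0, n_iter=2, eps=1e-5):
        """A few full Newton-Raphson steps on a scalar cost J(u) over a control
        vector u, with finite-difference gradient and Hessian."""
        u = np.asarray(u0, dtype=float)
        n = u.size
        for _ in range(n_iter):
            g = np.zeros(n)
            H = np.zeros((n, n))
            for i in range(n):
                ei = np.zeros(n); ei[i] = eps
                g[i] = (J(u + ei) - J(u - ei)) / (2 * eps)
                for j in range(n):
                    ej = np.zeros(n); ej[j] = eps
                    H[i, j] = (J(u + ei + ej) - J(u + ei - ej)
                               - J(u - ei + ej) + J(u - ei - ej)) / (4 * eps ** 2)
            u -= np.linalg.solve(H, g)   # Newton-Raphson update: u <- u - H^{-1} g
        return u

    # Toy quadratic tracking cost standing in for the GPC objective.
    target = np.array([1.0, 0.5])
    J = lambda u: np.sum((u - target) ** 2) + 0.1 * np.sum(u ** 2)
    print(newton_raphson_minimize(J, np.zeros(2)))   # converges in one to two steps
    ```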

  2. Modified Newton-Raphson GRAPE methods for optimal control of spin systems

    NASA Astrophysics Data System (ADS)

    Goodwin, D. L.; Kuprov, Ilya

    2016-05-01

    Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.

  3. Iterative procedures for space shuttle main engine performance models

    NASA Technical Reports Server (NTRS)

    Santi, L. Michael

    1989-01-01

    Performance models of the Space Shuttle Main Engine (SSME) contain iterative strategies for determining approximate solutions to nonlinear equations reflecting fundamental mass, energy, and pressure balances within engine flow systems. Both univariate and multivariate Newton-Raphson algorithms are employed in the current version of the engine Test Information Program (TIP). The computational efficiency and reliability of these procedures are examined. A modified trust-region form of the multivariate Newton-Raphson method is implemented and shown to be superior for off-nominal engine performance predictions. A heuristic form of Broyden's Rank One method is also tested, and favorable results based on this algorithm are presented.

  4. An Improved Newton's Method.

    ERIC Educational Resources Information Center

    Mathews, John H.

    1989-01-01

    Describes Newton's method to locate roots of an equation using the Newton-Raphson iteration formula. Develops an adaptive method overcoming limitations of the iteration method. Provides the algorithm and computer program of the adaptive Newton-Raphson method. (YP)
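
    The article's own adaptive scheme is not reproduced here; the sketch below is a generic safeguarded Newton-Raphson root finder in which the step is halved whenever the residual fails to shrink, one common way of addressing the limitations of the raw iteration.

    ```python
    def adaptive_newton(f, fprime, x0, tol=1e-10, max_iter=50):
        """Newton-Raphson x_{n+1} = x_n - f(x_n)/f'(x_n) with a damping safeguard:
        the step is halved until |f| decreases, guarding against overshoot."""
        x = x0
        for _ in range(max_iter):
            fx = f(x)
            if abs(fx) < tol:
                break
            step = fx / fprime(x)           # classic Newton-Raphson correction
            t = 1.0
            while abs(f(x - t * step)) >= abs(fx) and t > 1e-8:
                t *= 0.5                    # damp the step when it does not help
            x -= t * step
        return x

    # Example: the root of x**3 - 2x - 5 = 0 near x = 2 (about 2.0945515).
    print(adaptive_newton(lambda x: x**3 - 2*x - 5, lambda x: 3*x**2 - 2, 2.0))
    ```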

  5. Self-adaptive Solution Strategies

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1984-01-01

    The development of enhancements to current generation nonlinear finite element algorithms of the incremental Newton-Raphson type was overviewed. Work was introduced on alternative formulations which lead to improved algorithms that avoid the need for global-level updating and inversion. To quantify the enhanced Newton-Raphson scheme and the new alternative algorithm, the results of several benchmarks are presented.

  6. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    1992-01-01

    Describes algorithms used in the computer program LOGIMO for obtaining maximum likelihood estimates of the parameters in loglinear models. These algorithms are also useful for the analysis of loglinear item-response theory models. Presents modified versions of the iterative proportional fitting and Newton-Raphson algorithms. Simulated data…

  7. A General Program for Item-Response Analysis That Employs the Stabilized Newton-Raphson Algorithm. Research Report. ETS RR-13-32

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2013-01-01

    A general program for item-response analysis is described that uses the stabilized Newton-Raphson algorithm. This program is written to be compliant with Fortran 2003 standards and is sufficiently general to handle independent variables, multidimensional ability parameters, and matrix sampling. The ability variables may be either polytomous or…

  8. A Differential Evolution Based Approach to Estimate the Shape and Size of Complex Shaped Anomalies Using EIT Measurements

    NASA Astrophysics Data System (ADS)

    Rashid, Ahmar; Khambampati, Anil Kumar; Kim, Bong Seok; Liu, Dong; Kim, Sin; Kim, Kyung Youn

    EIT image reconstruction is an ill-posed problem: the spatial resolution of the estimated conductivity distribution is usually poor, and the external voltage measurements are subject to variable noise. Therefore, EIT conductivity estimation cannot be used in its raw form to correctly estimate the shape and size of complex shaped regional anomalies; an efficient algorithm employing a shape-based estimation scheme is needed. The performance of traditional inverse algorithms, such as the Newton-Raphson method, used for this purpose is below par and depends upon the initial guess and the gradient of the cost functional. This paper presents the application of the differential evolution (DE) algorithm to estimate complex shaped region boundaries, expressed as coefficients of a truncated Fourier series, using EIT. DE is a simple yet powerful population-based heuristic algorithm with the desired features to solve global optimization problems under realistic conditions. The performance of the algorithm has been tested through numerical simulations, comparing its results with those of the traditional modified Newton-Raphson (mNR) method.

  9. Computing Maximum Likelihood Estimates of Loglinear Models from Marginal Sums with Special Attention to Loglinear Item Response Theory. [Project Psychometric Aspects of Item Banking No. 53.] Research Report 91-1.

    ERIC Educational Resources Information Center

    Kelderman, Henk

    In this paper, algorithms are described for obtaining the maximum likelihood estimates of the parameters in log-linear models. Modified versions of the iterative proportional fitting and Newton-Raphson algorithms are described that work on the minimal sufficient statistics rather than on the usual counts in the full contingency table. This is…

  10. R programming for parameters estimation of geographically weighted ordinal logistic regression (GWOLR) model based on Newton Raphson

    NASA Astrophysics Data System (ADS)

    Zuhdi, Shaifudin; Saputro, Dewi Retno Sari

    2017-03-01

    The GWOLR model represents the relationship between a dependent variable with ordinal categories and independent variables influenced by the geographical location of the observation site. Maximum likelihood estimation of the GWOLR model parameters leads to a system of nonlinear equations whose solution is hard to obtain analytically, so the resulting optimization problem must be solved by numerical approximation; one such method is the Newton-Raphson method. The purpose of this research is to construct a Newton-Raphson iteration algorithm and a program in the R software to estimate the GWOLR model. The research shows that the program in R can be used to estimate the parameters of the GWOLR model by forming a syntax program with the command "while".
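
    As a companion to the description above, here is the bare shape of such a Newton-Raphson "while" loop written in Python rather than R; the exponential-rate likelihood is a deliberately simple stand-in, since the actual GWOLR score vector and Hessian are far more involved.

    ```python
    import numpy as np

    # Hypothetical data and a one-parameter likelihood (exponential rate lambda),
    # used only to show the while-loop Newton-Raphson structure.
    x = np.array([0.8, 1.3, 0.4, 2.1, 0.9])
    n = x.size

    def score(lam):      # d log L / d lambda
        return n / lam - x.sum()

    def hessian(lam):    # d^2 log L / d lambda^2
        return -n / lam ** 2

    lam, tol = 1.0, 1e-10
    while abs(score(lam)) > tol:
        lam -= score(lam) / hessian(lam)   # Newton-Raphson update
    print(lam, n / x.sum())                # both values should agree
    ```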

  11. Harmonic analysis of spacecraft power systems using a personal computer

    NASA Technical Reports Server (NTRS)

    Williamson, Frank; Sheble, Gerald B.

    1989-01-01

    The effects that nonlinear devices such as ac/dc converters, HVDC transmission links, and motor drives have on spacecraft power systems are discussed. The nonsinusoidal currents, along with the corresponding voltages, are calculated by a harmonic power flow which decouples and solves for each harmonic component individually using an iterative Newton-Raphson algorithm. The sparsity of the harmonic equations and the overall Jacobian matrix is used to advantage in terms of saving computer memory space and reducing computation time. The algorithm could also be modified to analyze each harmonic separately instead of all at the same time.
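
    The iterative Newton-Raphson structure with a sparse Jacobian that such a harmonic power flow relies on can be sketched in a few lines; the two-equation system below is an arbitrary stand-in, not a power-system model.

    ```python
    import numpy as np
    from scipy.sparse import csc_matrix
    from scipy.sparse.linalg import spsolve

    def newton_sparse(residual, jacobian, x0, tol=1e-8, max_iter=20):
        """Generic Newton-Raphson loop that exploits Jacobian sparsity, the same
        structure a harmonic power flow uses for its mismatch equations."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = residual(x)
            if np.linalg.norm(r, np.inf) < tol:
                break
            J = csc_matrix(jacobian(x))      # sparse storage saves memory and time
            x -= spsolve(J, r)               # Newton correction: solve J dx = r
        return x

    # Arbitrary two-equation test system: x0**2 + x1 = 3 and x0 + x1**2 = 5.
    res = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
    jac = lambda x: np.array([[2*x[0], 1.0], [1.0, 2*x[1]]])
    print(newton_sparse(res, jac, [1.0, 1.0]))   # converges to (1, 2)
    ```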

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Goodwin, D. L.; Kuprov, Ilya, E-mail: i.kuprov@soton.ac.uk

    Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.

  13. A different approach to estimate nonlinear regression model using numerical methods

    NASA Astrophysics Data System (ADS)

    Mahaboob, B.; Venkateswarlu, B.; Mokeshrayalu, G.; Balasiddamuni, P.

    2017-11-01

    This research paper is concerned with computational methods, namely the Gauss-Newton method and gradient algorithm methods (the Newton-Raphson method, the Steepest Descent or Steepest Ascent algorithm, the Method of Scoring, and the Method of Quadratic Hill-Climbing), based on numerical analysis for estimating the parameters of a nonlinear regression model in a very different way. Principles of matrix calculus have been used to discuss the gradient-algorithm methods. Yonathan Bard [1] discussed a comparison of gradient methods for the solution of nonlinear parameter estimation problems; this article, however, discusses an analytical approach to the gradient algorithm methods in a different way. This paper describes a new iterative technique, namely a Gauss-Newton method, which differs from the iterative technique proposed by Gordon K. Smyth [2]. Hans Georg Bock et al. [10] proposed numerical methods for parameter estimation in DAEs (differential algebraic equations). Isabel Reis Dos Santos et al. [11] introduced a weighted least squares procedure for estimating the unknown parameters of a nonlinear regression metamodel. For large-scale nonsmooth convex minimization, the Hager and Zhang (HZ) conjugate gradient method and the modified HZ (MHZ) method were presented by Gonglin Yuan et al. [12].

  14. Systems identification using a modified Newton-Raphson method: A FORTRAN program

    NASA Technical Reports Server (NTRS)

    Taylor, L. W., Jr.; Iliff, K. W.

    1972-01-01

    A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization, which typically requires several iterations. A starting technique is used which ensures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.

  15. A novel speckle pattern—Adaptive digital image correlation approach with robust strain calculation

    NASA Astrophysics Data System (ADS)

    Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim

    2012-02-01

    Digital image correlation (DIC) has seen widespread acceptance and usage as a non-contact method for the determination of full-field displacements and strains in experimental mechanics. Advances in imaging hardware over the last decades have made high-resolution, high-speed cameras more affordable than in the past, making large amounts of image data available for typical DIC experimental scenarios. The work presented in this paper is aimed at maximizing both the accuracy and speed of DIC methods when employed with such images. A low-level framework for speckle image partitioning which replaces regularly shaped blocks with image-adaptive cells in the displacement calculation is introduced. The Newton-Raphson DIC method is modified to use the image pixels of the cells and to perform adaptive regularization to increase the spatial consistency of the displacements. Furthermore, a novel robust framework for strain calculation, also based on the Newton-Raphson algorithm, is introduced. The proposed methods are evaluated in five experimental scenarios, four of which use numerically deformed images and one of which uses real experimental data. Results indicate that, as the desired strain density increases, significant computational gains can be obtained while maintaining or improving accuracy and rigid-body rotation sensitivity.

  16. Real-time Adaptive Control Using Neural Generalized Predictive Control

    NASA Technical Reports Server (NTRS)

    Haley, Pam; Soloway, Don; Gold, Brian

    1999-01-01

    The objective of this paper is to demonstrate the feasibility of a Nonlinear Generalized Predictive Control algorithm by showing real-time adaptive control on a plant with relatively fast time-constants. Generalized Predictive Control has classically been used in process control, where linear control laws were formulated for plants with relatively slow time-constants. The plant of interest for this paper is a magnetic levitation device that is nonlinear and open-loop unstable. In this application, the reference model of the plant is a neural network that has an embedded nominal linear model in the network weights. The control based on the linear model provides initial stability at the beginning of network training. With a neural network the control laws are nonlinear, and online adaptation of the model is possible to capture unmodeled or time-varying dynamics. Newton-Raphson is the minimization algorithm. Newton-Raphson requires the calculation of the Hessian, but even with this computational expense the low iteration count makes this a viable algorithm for real-time control.

  17. Algorithms for accelerated convergence of adaptive PCA.

    PubMed

    Chatterjee, C; Kang, Z; Roychowdhury, V P

    2000-01-01

    We derive and discuss new adaptive algorithms for principal component analysis (PCA) that are shown to converge faster than the traditional PCA algorithms due to Oja, Sanger, and Xu. It is well known that traditional PCA algorithms that are derived by using gradient descent on an objective function are slow to converge. Furthermore, the convergence of these algorithms depends on appropriate choices of the gain sequences. Since online applications demand faster convergence and an automatic selection of gains, we present new adaptive algorithms to solve these problems. We first present an unconstrained objective function, which can be minimized to obtain the principal components. We derive adaptive algorithms from this objective function by using: 1) gradient descent; 2) steepest descent; 3) conjugate direction; and 4) Newton-Raphson methods. Although gradient descent produces Xu's LMSER algorithm, the steepest descent, conjugate direction, and Newton-Raphson methods produce new adaptive algorithms for PCA. We also provide a discussion on the landscape of the objective function, and present a global convergence proof of the adaptive gradient descent PCA algorithm using stochastic approximation theory. Extensive experiments with stationary and nonstationary multidimensional Gaussian sequences show faster convergence of the new algorithms over the traditional gradient descent methods. We also compare the steepest descent adaptive algorithm with state-of-the-art methods on stationary and nonstationary sequences.

  18. High-order Newton-penalty algorithms

    NASA Astrophysics Data System (ADS)

    Dussault, Jean-Pierre

    2005-10-01

    Recent efforts in differentiable non-linear programming have been focused on interior point methods, akin to penalty and barrier algorithms. In this paper, we address the classical equality constrained program solved using the simple quadratic loss penalty function/algorithm. The suggestion to use extrapolations to track the differentiable trajectory associated with penalized subproblems goes back to the classic monograph of Fiacco & McCormick. This idea was further developed by Gould, who obtained a two-step quadratically convergent algorithm using prediction steps and Newton correction. Dussault interpreted the prediction step as a combined extrapolation with respect to the penalty parameter and the residual of the first order optimality conditions. Extrapolation with respect to the residual coincides with a Newton step. We explore here higher-order extrapolations, thus higher-order Newton-like methods. We first consider high-order variants of the Newton-Raphson method applied to non-linear systems of equations. Next, we obtain improved asymptotic convergence results for the quadratic loss penalty algorithm by using high-order extrapolation steps.

  19. Unstructured Finite Volume Computational Thermo-Fluid Dynamic Method for Multi-Disciplinary Analysis and Design Optimization

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Schallhorn, Paul

    1998-01-01

    This paper describes a finite volume computational thermo-fluid dynamics method to solve the Navier-Stokes equations in conjunction with the energy equation and a thermodynamic equation of state in an unstructured coordinate system. The system of equations has been solved by a simultaneous Newton-Raphson method and compared with several benchmark solutions. Excellent agreement has been obtained in each case, and the method has been found to be significantly faster than conventional Computational Fluid Dynamics (CFD) methods; it therefore has the potential for implementation in multi-disciplinary analysis and design optimization in fluid and thermal systems. The paper also describes an algorithm for design optimization based on the Newton-Raphson method which has recently been tested in a turbomachinery application.

  20. Development of parallel algorithms for electrical power management in space applications

    NASA Technical Reports Server (NTRS)

    Berry, Frederick C.

    1989-01-01

    The application of parallel techniques for electrical power system analysis is discussed. The Newton-Raphson method of load flow analysis was used along with the decomposition-coordination technique to perform load flow analysis. The decomposition-coordination technique enables tasks to be performed in parallel by partitioning the electrical power system into independent local problems. Each independent local problem represents a portion of the total electrical power system on which a load flow analysis can be performed. The load flow analysis is performed on these partitioned elements by using the Newton-Raphson load flow method. These independent local problems produce results for voltage and power which are then passed to the coordinator portion of the solution procedure. The coordinator problem uses the results of the local problems to determine whether any correction is needed on the local problems. The coordinator problem is also solved by an iterative method much like the local problem; the iterative method for the coordination problem is also the Newton-Raphson method. Therefore, each iteration at the coordination level results in new values for the local problems, and the local problems have to be solved again along with the coordinator problem until convergence conditions are met.

  1. Interpolation bias for the inverse compositional Gauss-Newton algorithm in digital image correlation

    NASA Astrophysics Data System (ADS)

    Su, Yong; Zhang, Qingchuan; Xu, Xiaohai; Gao, Zeren; Wu, Shangquan

    2018-01-01

    It is believed that the classic forward additive Newton-Raphson (FA-NR) algorithm and the recently introduced inverse compositional Gauss-Newton (IC-GN) algorithm give rise to roughly equal interpolation bias. Questioning the correctness of this statement, this paper presents a thorough analysis of interpolation bias for the IC-GN algorithm. A theoretical model is built to analytically characterize the dependence of interpolation bias upon speckle image, target image interpolation, and reference image gradient estimation. The interpolation biases of the FA-NR algorithm and the IC-GN algorithm can be significantly different, whose relative difference can exceed 80%. For the IC-GN algorithm, the gradient estimator can strongly affect the interpolation bias; the relative difference can reach 178%. Since the mean bias errors are insensitive to image noise, the theoretical model proposed remains valid in the presence of noise. To provide more implementation details, source codes are uploaded as a supplement.

  2. MHOST: An efficient finite element program for inelastic analysis of solids and structures

    NASA Technical Reports Server (NTRS)

    Nakazawa, S.

    1988-01-01

    An efficient finite element program for 3-D inelastic analysis of gas turbine hot section components was constructed and validated. A novel mixed iterative solution strategy is derived from the augmented Hu-Washizu variational principle in order to nodally interpolate coordinates, displacements, deformation, strains, stresses, and material properties. The series of increasingly sophisticated material models incorporated in MHOST includes elasticity, secant plasticity, infinitesimal and finite deformation plasticity, creep, and the unified viscoplastic constitutive model proposed by Walker. A library of high-performance elements is built into this computer program utilizing the concepts of selective reduced integration and independent strain interpolations. A family of efficient solution algorithms is implemented in MHOST for linear and nonlinear equation solution, including the classical Newton-Raphson method; modified, quasi-, and secant Newton methods with optional line search; and the conjugate gradient method.
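
    Of the solvers listed, the modified Newton-Raphson variant is the one that factors the tangent stiffness once and reuses it for subsequent corrections. The sketch below shows that idea on a one-degree-of-freedom toy residual; the MHOST elements and data structures are of course not represented.

    ```python
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def modified_newton(residual, tangent, u0, tol=1e-10, max_iter=50):
        """Modified Newton-Raphson: factor the tangent (stiffness) once and reuse
        it for every correction, trading quadratic convergence for cheap steps."""
        u = np.asarray(u0, dtype=float)
        lu = lu_factor(tangent(u))          # single factorization, reused below
        for _ in range(max_iter):
            r = residual(u)
            if np.linalg.norm(r) < tol:
                break
            u -= lu_solve(lu, r)            # back-substitution only, per iteration
        return u

    # One-DOF toy nonlinearity: internal force u + 0.1*u**3 balancing a load of 1.
    res = lambda u: np.array([u[0] + 0.1 * u[0]**3 - 1.0])
    tan = lambda u: np.array([[1.0 + 0.3 * u[0]**2]])
    print(modified_newton(res, tan, [0.0]))   # about 0.9222, with one factorization
    ```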

  3. Theoretical and experimental study on near infrared time-resolved optical diffuse tomography

    NASA Astrophysics Data System (ADS)

    Zhao, Huijuan; Gao, Feng; Tanikawa, Yukari; Yamada, Yukio

    2006-08-01

    Parts of the work of our group in the past five years on near infrared time-resolved (TR) optical tomography are summarized in this paper. The image reconstruction algorithm is based on a Newton-Raphson scheme with a datatype R generated from the modified Generalized Pulse Spectrum Technique. First, the algorithm is evaluated with simulated data from a 2-D model and the datatype R is compared with other popularly used datatypes. In the second part of the paper, in vitro and in vivo NIR DOT imaging on a chicken leg and a human forearm, respectively, are presented for evaluating both the image reconstruction algorithm and the TR measurement system. The third part of this paper concerns the differential pathlength factor (DPF) of the human head while monitoring head activity with NIRS and applying the modified Lambert-Beer law. Benefiting from the TR system, the measured DPF maps of three important areas of the human head are presented in this paper.

  4. An actuator extension transformation for a motion simulator and an inverse transformation applying Newton-Raphson's method

    NASA Technical Reports Server (NTRS)

    Dieudonne, J. E.

    1972-01-01

    A set of equations which transform position and angular orientation of the centroid of the payload platform of a six-degree-of-freedom motion simulator into extensions of the simulator's actuators has been derived and is based on a geometrical representation of the system. An iterative scheme, Newton-Raphson's method, has been successfully used in a real time environment in the calculation of the position and angular orientation of the centroid of the payload platform when the magnitude of the actuator extensions is known. Sufficient accuracy is obtained by using only one Newton-Raphson iteration per integration step of the real time environment.

  5. New Method of Calibrating IRT Models.

    ERIC Educational Resources Information Center

    Jiang, Hai; Tang, K. Linda

    This discussion of new methods for calibrating item response theory (IRT) models looks into new optimization procedures, such as the Genetic Algorithm (GA) to improve on the use of the Newton-Raphson procedure. The advantages of using a global optimization procedure like GA is that this kind of procedure is not easily affected by local optima and…

  6. How Can Multivariate Item Response Theory Be Used in Reporting of Subscores? Research Report. ETS RR-10-09

    ERIC Educational Resources Information Center

    Haberman, Shelby J.; Sinharay, Sandip

    2010-01-01

    Recently, there has been increasing interest in reporting diagnostic scores. This paper examines reporting of subscores using multidimensional item response theory (MIRT) models. An MIRT model is fitted using a stabilized Newton-Raphson algorithm (Haberman, 1974, 1988) with adaptive Gauss-Hermite quadrature (Haberman, von Davier, & Lee, 2008).…

  7. On the Latent Regression Model of Item Response Theory. Research Report. ETS RR-07-12

    ERIC Educational Resources Information Center

    Antal, Tamás

    2007-01-01

    Full account of the latent regression model for the National Assessment of Educational Progress is given. The treatment includes derivation of the EM algorithm, Newton-Raphson method, and the asymptotic standard errors. The paper also features the use of the adaptive Gauss-Hermite numerical integration method as a basic tool to evaluate…

  8. Applying a Weighted Maximum Likelihood Latent Trait Estimator to the Generalized Partial Credit Model

    ERIC Educational Resources Information Center

    Penfield, Randall D.; Bergeron, Jennifer M.

    2005-01-01

    This article applies a weighted maximum likelihood (WML) latent trait estimator to the generalized partial credit model (GPCM). The relevant equations required to obtain the WML estimator using the Newton-Raphson algorithm are presented, and a simulation study is described that compared the properties of the WML estimator to those of the maximum…

  9. ASCAL: A Microcomputer Program for Estimating Logistic IRT Item Parameters.

    ERIC Educational Resources Information Center

    Vale, C. David; Gialluca, Kathleen A.

    ASCAL is a microcomputer-based program for calibrating items according to the three-parameter logistic model of item response theory. It uses a modified multivariate Newton-Raphson procedure for estimating item parameters. This study evaluated this procedure using Monte Carlo Simulation Techniques. The current version of ASCAL was then compared to…

  10. Reduce beam hardening artifacts of polychromatic X-ray computed tomography by an iterative approximation approach.

    PubMed

    Shi, Hongli; Yang, Zhi; Luo, Shuqian

    2017-01-01

    The beam hardening artifact is one of the most important forms of metal artifact in polychromatic X-ray computed tomography (CT) and can seriously impair image quality. An iterative approach is proposed to reduce the beam hardening artifact caused by metallic components in polychromatic X-ray CT. According to the Lambert-Beer law, the (detected) projections can be expressed as monotonic nonlinear functions of element geometry projections, which are the theoretical projections produced only by the pixel intensities (image grayscale) of a certain element (component). With the help of prior knowledge of the spectrum distribution of the X-ray beam source and the energy-dependent attenuation coefficients, the functions have explicit expressions. The Newton-Raphson algorithm is employed to solve the functions. The solutions are named the synthetical geometry projections, which are the nearly linear weighted sum of element geometry projections with respect to the mean of each attenuation coefficient. In this process, the attenuation coefficients are modified to make the Newton-Raphson iterative functions satisfy the convergence conditions of fixed point iteration (FPI), so that the solutions approach the true synthetical geometry projections stably. The underlying images are obtained from the projections by general reconstruction algorithms such as filtered back projection (FBP). The image gray values are adjusted according to the attenuation coefficient means to obtain proper CT numbers. Several examples demonstrate that the proposed approach is efficient in reducing beam hardening artifacts and performs satisfactorily in terms of several general criteria. In a simulation example, the normalized root mean square difference (NRMSD) is reduced by 17.52% compared to a recent algorithm. Since the element geometry projections are free from the effect of beam hardening, their nearly linear weighted sum, the synthetical geometry projections, is almost free from the effect of beam hardening. By working out the synthetical geometry projections, the proposed approach becomes quite efficient in reducing beam hardening artifacts.
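
    The per-projection step described above amounts to inverting a monotonic scalar function with Newton-Raphson. The sketch below uses a made-up two-energy spectrum as the monotonic projection model; it is only meant to show the shape of that inner solve, not the paper's spectrum, coefficients, or FPI safeguard.

    ```python
    import numpy as np

    # Hypothetical monotonic "polychromatic" projection model p = g(t): a two-energy
    # mixture standing in for a spectrum-weighted Lambert-Beer sum.
    w, mu = np.array([0.6, 0.4]), np.array([0.5, 0.2])
    g       = lambda t: -np.log(np.sum(w * np.exp(-mu * t)))
    g_prime = lambda t: np.sum(w * mu * np.exp(-mu * t)) / np.sum(w * np.exp(-mu * t))

    def invert_projection(p, t0=0.0, tol=1e-10, max_iter=50):
        """Solve g(t) = p for the geometry projection t with scalar Newton-Raphson;
        monotonicity of g keeps the iteration well behaved."""
        t = t0
        for _ in range(max_iter):
            r = g(t) - p
            if abs(r) < tol:
                break
            t -= r / g_prime(t)
        return t

    p_measured = g(7.3)                   # synthetic detector reading
    print(invert_projection(p_measured))  # recovers approximately 7.3
    ```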

  11. Development of computer program NAS3D using Vector processing for geometric nonlinear analysis of structures

    NASA Technical Reports Server (NTRS)

    Mangalgiri, P. D.; Prabhakaran, R.

    1986-01-01

    An algorithm for vectorized computation of stiffness matrices of an 8-noded isoparametric hexahedron element for geometric nonlinear analysis was developed. This was used in conjunction with the earlier 2-D program GAMNAS to develop the new program NAS3D for geometric nonlinear analysis. A conventional modified Newton-Raphson process is used for the nonlinear analysis. New schemes for the computation of stiffness and strain energy release rates are presented. The organization of the program is explained and some results for four sample problems are given. The study of CPU times showed that savings by a factor of 11 to 13 were achieved when vectorized computation was used for the stiffness instead of the conventional scalar computation. Finally, the scheme for inputting data is explained.

  12. Flux-vector splitting algorithm for chain-rule conservation-law form

    NASA Technical Reports Server (NTRS)

    Shih, T. I.-P.; Nguyen, H. L.; Willis, E. A.; Steinthorsson, E.; Li, Z.

    1991-01-01

    A flux-vector splitting algorithm with Newton-Raphson iteration was developed for the 'full compressible' Navier-Stokes equations cast in chain-rule conservation-law form. The algorithm is intended for problems with deforming spatial domains and for problems whose governing equations cannot be cast in strong conservation-law form. The usefulness of the algorithm for such problems was demonstrated by applying it to analyze the unsteady, two- and three-dimensional flows inside one combustion chamber of a Wankel engine under nonfiring conditions. Solutions were obtained to examine the algorithm in terms of conservation error, robustness, and ability to handle complex flows on time-dependent grid systems.

  13. Input-output-controlled nonlinear equation solvers

    NASA Technical Reports Server (NTRS)

    Padovan, Joseph

    1988-01-01

    To upgrade the efficiency and stability of the successive substitution (SS) and Newton-Raphson (NR) schemes, the concept of input-output-controlled solvers (IOCS) is introduced. By employing the formal properties of the constrained version of the SS and NR schemes, the IOCS algorithm can handle indefiniteness of the system Jacobian, can maintain iterate monotonicity, and provide for separate control of load incrementation and iterate excursions, as well as having other features. To illustrate the algorithmic properties, the results for several benchmark examples are presented. These define the associated numerical efficiency and stability of the IOCS.

  14. Scalable parallel elastic-plastic finite element analysis using a quasi-Newton method with a balancing domain decomposition preconditioner

    NASA Astrophysics Data System (ADS)

    Yusa, Yasunori; Okada, Hiroshi; Yamada, Tomonori; Yoshimura, Shinobu

    2018-04-01

    A domain decomposition method for large-scale elastic-plastic problems is proposed. The proposed method is based on a quasi-Newton method in conjunction with a balancing domain decomposition preconditioner. The use of a quasi-Newton method overcomes two problems associated with the conventional domain decomposition method based on the Newton-Raphson method: (1) avoidance of a double-loop iteration algorithm, which generally has large computational complexity, and (2) consideration of the local concentration of nonlinear deformation, which is observed in elastic-plastic problems with stress concentration. Moreover, the application of a balancing domain decomposition preconditioner ensures scalability. Using the conventional and proposed domain decomposition methods, several numerical tests, including weak scaling tests, were performed. The convergence performance of the proposed method is comparable to that of the conventional method. In particular, in elastic-plastic analysis, the proposed method exhibits better convergence performance than the conventional method.

  15. Peak Seeking Control for Reduced Fuel Consumption with Preliminary Flight Test Results

    NASA Technical Reports Server (NTRS)

    Brown, Nelson

    2012-01-01

    The Environmentally Responsible Aviation (ERA) project seeks to accomplish the simultaneous reduction of fuel burn, noise, and emissions. A project at NASA Dryden Flight Research Center is contributing to ERA's goals by exploring the practical application of real-time trim configuration optimization for enhanced performance and reduced fuel consumption. This peak-seeking control approach is based on a Newton-Raphson algorithm that uses a time-varying Kalman filter to estimate the gradient of the performance function. In real-time operation, deflections of the symmetric ailerons, trailing-edge flaps, and leading-edge flaps of a modified F-18 are directly optimized, and the horizontal stabilators and angle of attack are indirectly optimized. Preliminary results from three research flights are presented herein. The optimization system found a trim configuration that required approximately 3.5% less fuel flow than the baseline trim at the given flight condition. The algorithm consistently rediscovered the solution from several initial conditions. These preliminary results show that the algorithm has good performance and is expected to show similar results at other flight conditions and aircraft configurations.

  16. An historical survey of computational methods in optimal control.

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1973-01-01

    Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.

  17. Differentially private distributed logistic regression using private and public data.

    PubMed

    Ji, Zhanglong; Jiang, Xiaoqian; Wang, Shuang; Xiong, Li; Ohno-Machado, Lucila

    2014-01-01

    Privacy protection is an important issue in medical informatics, and differential privacy is a state-of-the-art framework for data privacy research. Differential privacy offers provable privacy against attackers who have auxiliary information, and can be applied to data mining models (for example, logistic regression). However, differentially private methods sometimes introduce too much noise and make outputs less useful. Given available public data in medical research (e.g., from patients who sign open-consent agreements), we can design algorithms that use both public and private data sets to decrease the amount of noise that is introduced. In this paper, we modify the update step in the Newton-Raphson method to propose a differentially private distributed logistic regression model based on both public and private data. We try our algorithm on three different data sets, and show its advantage over: (1) a logistic regression model based solely on public data, and (2) a differentially private distributed logistic regression model based on private data under various scenarios. Logistic regression models built with our new algorithm based on both private and public datasets demonstrate better utility than models trained on private or public datasets alone, without sacrificing the rigorous privacy guarantee.
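
    The update step the authors modify is the standard Newton-Raphson (iteratively reweighted least squares) iteration for logistic regression, sketched below on synthetic data; the privacy mechanism itself (how noise enters the update) is not reproduced here.

    ```python
    import numpy as np

    def logistic_newton(X, y, n_iter=10, ridge=1e-6):
        """Plain Newton-Raphson (IRLS) for logistic regression; the cited paper
        perturbs this update step to obtain differential privacy."""
        n, d = X.shape
        beta = np.zeros(d)
        for _ in range(n_iter):
            p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted probabilities
            grad = X.T @ (y - p)                       # score vector
            W = p * (1.0 - p)                          # IRLS weights
            H = -(X.T * W) @ X - ridge * np.eye(d)     # Hessian of the log-likelihood
            beta -= np.linalg.solve(H, grad)           # Newton-Raphson step
        return beta

    # Synthetic data: intercept plus one covariate, true coefficients (0.5, 1.5).
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    y = (rng.random(200) < 1.0 / (1.0 + np.exp(-(0.5 + 1.5 * X[:, 1])))).astype(float)
    print(logistic_newton(X, y))
    ```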

  18. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, Brian; Scherzinger, William

    2017-01-19

    Here, a new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  19. Trust-region based return mapping algorithm for implicit integration of elastic-plastic constitutive models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lester, Brian T.; Scherzinger, William M.

    2017-01-19

    A new method for the solution of the non-linear equations forming the core of constitutive model integration is proposed. Specifically, the trust-region method that has been developed in the numerical optimization community is successfully modified for use in implicit integration of elastic-plastic models. Although attention here is restricted to these rate-independent formulations, the proposed approach holds substantial promise for adoption with models incorporating complex physics, multiple inelastic mechanisms, and/or multiphysics. As a first step, the non-quadratic Hosford yield surface is used as a representative case to investigate computationally challenging constitutive models. The theory and implementation are presented, discussed, and compared to other common integration schemes. Multiple boundary value problems are studied and used to verify the proposed algorithm and demonstrate the capabilities of this approach over more common methodologies. Robustness and speed are then investigated and compared to existing algorithms. Through these efforts, it is shown that the utilization of a trust-region approach leads to superior performance versus a traditional closest-point projection Newton-Raphson method and comparable speed and robustness to a line search augmented scheme.

  20. Low Angle-of-Attack Longitudinal Aerodynamic Parameters of Navy T-2 Trainer Aircraft Extracted from Flight Data: A Comparison of Identification Techniques. Volume I. Data Acquisition and Modified Newton-Raphson Analysis

    DTIC Science & Technology

    1975-06-23


  1. Harmonic Optimization in Voltage Source Inverter for PV Application using Heuristic Algorithms

    NASA Astrophysics Data System (ADS)

    Kandil, Shaimaa A.; Ali, A. A.; El Samahy, Adel; Wasfi, Sherif M.; Malik, O. P.

    2016-12-01

    Selective Harmonic Elimination (SHE) is a fundamental-switching-frequency scheme used to eliminate specific-order harmonics. Its application to minimize low-order harmonics in a three-level inverter is proposed in this paper. The modulation strategy used here is SHEPWM, and the nonlinear equations that characterize the low-order harmonics are solved using the Harmony Search Algorithm (HSA) to obtain the optimal switching angles that minimize the targeted harmonics and maintain the fundamental at the desired value. The Total Harmonic Distortion (THD) of the output voltage is minimized while keeping the selected harmonics within allowable limits. A comparison is drawn between the HSA, a Genetic Algorithm (GA), and the Newton-Raphson (NR) technique using MATLAB software to determine the effectiveness of obtaining the optimized switching angles.

  2. Characterization and Implementation of a Real-World Target Tracking Algorithm on Field Programmable Gate Arrays with Kalman Filter Test Case

    DTIC Science & Technology

    2008-03-01

    to predict its exact position. To locate Ceres, Carl Friedrich Gauss, a mere 24 years old at the time, developed a method called least-squares... dividend to produce the quotient. This method converges to the reciprocal quadratically [11]. For the special case of 1 / (H × P(:, :, k) × H′ + R) (3.9), the... high-speed computation of reciprocals within the overall system. The Newton-Raphson method is also expanded for use in calculating square-roots in

  3. A Comparison of Two Algorithms for the Simulation of Non-Homogeneous Poisson Processes with Degree-Two Exponential Polynomial Intensity Function.

    DTIC Science & Technology

    1977-09-01

    process with an event stream intensity (rate) function that is of degree-two exponential polynomial form. (The use of exponential polynomials is... would serve as a good initial approximation t* for the Newton-Raphson method. However, for the purpose of this implementation, the end point which

  4. Differentially private distributed logistic regression using private and public data

    PubMed Central

    2014-01-01

    Background: Privacy protection is an important issue in medical informatics, and differential privacy is a state-of-the-art framework for data privacy research. Differential privacy offers provable privacy against attackers who have auxiliary information, and can be applied to data mining models (for example, logistic regression). However, differentially private methods sometimes introduce too much noise and make outputs less useful. Given available public data in medical research (e.g., from patients who sign open-consent agreements), we can design algorithms that use both public and private data sets to decrease the amount of noise that is introduced. Methodology: In this paper, we modify the update step in the Newton-Raphson method to propose a differentially private distributed logistic regression model based on both public and private data. Experiments and results: We try our algorithm on three different data sets, and show its advantage over: (1) a logistic regression model based solely on public data, and (2) a differentially private distributed logistic regression model based on private data under various scenarios. Conclusion: Logistic regression models built with our new algorithm based on both private and public datasets demonstrate better utility than models trained on private or public datasets alone, without sacrificing the rigorous privacy guarantee. PMID:25079786

  5. Efficiency trade-offs of steady-state methods using FEM and FDM. [iterative solutions for nonlinear flow equations

    NASA Technical Reports Server (NTRS)

    Gartling, D. K.; Roache, P. J.

    1978-01-01

    The efficiency characteristics of finite element and finite difference approximations for the steady-state solution of the Navier-Stokes equations are examined. The finite element method discussed is a standard Galerkin formulation of the incompressible, steady-state Navier-Stokes equations. The finite difference formulation uses simple centered differences that are O(delta x-squared). Operation counts indicate that a rapidly converging Newton-Raphson-Kantorovitch iteration scheme is generally preferable over a Picard method. A split NOS Picard iterative algorithm for the finite difference method was most efficient.

  6. Study on the variable cycle engine modeling techniques based on the component method

    NASA Astrophysics Data System (ADS)

    Zhang, Lihua; Xue, Hui; Bao, Yuhai; Li, Jijun; Yan, Lan

    2016-01-01

    Based on the structural platform of the gas turbine engine, the components of a variable cycle engine were simulated using the component method. The mathematical model of nonlinear equations corresponding to each component of the gas turbine engine was established. Based on Matlab programming, the nonlinear equations were solved using a Newton-Raphson steady-state algorithm, and the performance of the engine components was calculated. The numerical simulation results showed that the model built can describe the basic performance of the gas turbine engine, which verifies the validity of the model.

  7. Modèle tridimensionnel pour coupler les équations magnétiques et électriques dans le cas de la magnétostatique

    NASA Astrophysics Data System (ADS)

    Piriou, F.; Razek, A.

    1991-03-01

    In this paper a 3D model for coupling magnetic and electric equations is presented. The magnetic equations are solved with the help of the finite element method using the magnetic vector potential formulation. To take into account the effects of magnetic saturation we use the Newton-Raphson algorithm. We develop the analysis permitting the coupling of magnetic and electric equations to obtain a differential system of equations which can be solved by numerical integration. As an example we model an iron core coil, and the validity of our model is verified by comparing the obtained results with an analytical solution and a 2D code calculation.

  8. An improved computer program for calculating the theoretical performance parameters of a propeller type wind turbine. An appendix to the final report on feasibility of using wind power to pump irrigation water (Texas). [PROP Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barieau, R.E.

    1977-03-01

    The PROP Program of Wilson and Lissaman has been modified by adding the Newton-Raphson Method and a Step Wise Search Method as options for the method of solution. In addition, an optimization method is included. Twist angles, tip speed ratio, and pitch angle may be varied to produce the maximum power coefficient. The computer program listing is presented along with sample input and output data. Further improvements to the program are discussed.

  9. Iteration with Spreadsheets.

    ERIC Educational Resources Information Center

    Smith, Michael

    1990-01-01

    Presents several examples of the iteration method using computer spreadsheets. Examples included are simple iterative sequences and the solution of equations using the Newton-Raphson formula, linear interpolation, and interval bisection. (YP)
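
    Two of the iteration schemes mentioned, Newton-Raphson and interval bisection, translate directly into the kind of column-by-column recurrence a spreadsheet holds; the short Python sketch below plays the role of those columns for f(x) = x**2 - 2, chosen here only as an illustration.

    ```python
    import math

    f = lambda x: x**2 - 2              # the root is sqrt(2)

    # "Newton-Raphson column": x_{n+1} = x_n - f(x_n) / f'(x_n)
    x, newton_col = 1.0, [1.0]
    for _ in range(5):
        x = x - f(x) / (2 * x)
        newton_col.append(x)

    # "Interval bisection column": keep the half-interval containing the sign change.
    a, b, bisect_col = 1.0, 2.0, []
    for _ in range(5):
        m = 0.5 * (a + b)
        bisect_col.append(m)
        if f(a) * f(m) <= 0:
            b = m
        else:
            a = m

    print(newton_col)       # quadratic convergence toward sqrt(2)
    print(bisect_col)       # the bracket halves on every row
    print(math.sqrt(2))     # reference value
    ```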

  10. Results of a feasibility study using the Newton-Raphson digital computer program to identify lifting body derivatives from flight data

    NASA Technical Reports Server (NTRS)

    Sim, A. G.

    1973-01-01

    A brief study was made to assess the applicability of the Newton-Raphson digital computer program as a routine technique for extracting aerodynamic derivatives from flight tests of lifting body types of vehicles. Lateral-directional flight data from flight tests of the HL-10 lifting body research vehicle were utilized. The results, in general, show the computer program to be a reliable and expedient means for extracting derivatives for this class of vehicles as a standard procedure. This was true even when stability augmentation was used. As a result of the study, a credible set of HL-10 lateral-directional derivatives was obtained from flight data. These derivatives are compared with results from wind-tunnel tests.

  11. Numerical solutions of 2-D multi-stage rotor/stator unsteady flow interactions

    NASA Astrophysics Data System (ADS)

    Yang, R.-J.; Lin, S.-J.

    1991-01-01

    The Rai method of single-stage rotor/stator flow interaction is extended to handle multistage configurations. In this study, a two-dimensional Navier-Stokes multi-zone approach was used to investigate unsteady flow interactions within two multistage axial turbines. The governing equations are solved by an iterative, factored, implicit finite-difference, upwind algorithm. Numerical accuracy is checked by investigating the effect of time step size, the effect of subiteration in the Newton-Raphson technique, and the effect of full viscous versus thin-layer approximation. Computer results compared well with experimental data. Unsteady flow interactions, wake cutting, and the associated evolution of vortical entities are discussed.

  12. Design of optimally normal minimum gain controllers by continuation method

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Juang, J.-N.; Kim, Z. C.

    1989-01-01

    A measure of the departure from normality is investigated for system robustness. An attractive feature of the normality index is its simplicity for pole placement designs. To allow a tradeoff between system robustness and control effort, a cost function consisting of the sum of a norm of weighted gain matrix and a normality index is minimized. First- and second-order necessary conditions for the constrained optimization problem are derived and solved by a Newton-Raphson algorithm imbedded into a one-parameter family of neighboring zero problems. The method presented allows the direct computation of optimal gains in terms of robustness and control effort for pole placement problems.

  13. Voltage profile program for the Kennedy Space Center electric power distribution system

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The Kennedy Space Center voltage profile program computes voltages at all buses above 1 kV in the network under various conditions of load. The computation is based upon power flow principles and utilizes a Newton-Raphson iterative load flow algorithm. Power flow conditions throughout the network are also provided. The computer program is designed for both steady-state and transient operation. In the steady-state mode, automatic tap changing of primary distribution transformers is incorporated. Under transient conditions, such as motor starts, it is assumed that tap changing is not accomplished, so that the transformer secondary voltage is allowed to sag.

  14. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    NASA Technical Reports Server (NTRS)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.

  15. Finite element implementation of state variable-based viscoplasticity models

    NASA Technical Reports Server (NTRS)

    Iskovitz, I.; Chang, T. Y. P.; Saleeb, A. F.

    1991-01-01

    The implementation of state variable-based viscoplasticity models is made in a general-purpose finite element code for structural applications of metals deformed at elevated temperatures. Two constitutive models, Walker's and Robinson's, are studied in conjunction with two implicit integration methods: the trapezoidal rule with Newton-Raphson iterations and an asymptotic integration algorithm. A comparison is made between the two integration methods, and the latter appears to be computationally more appealing in terms of numerical accuracy and CPU time. However, in order to make the asymptotic algorithm robust, it is necessary to include a self-adaptive scheme with subincremental step control and error checking of the Jacobian matrix at the integration points. Three examples are given to illustrate the numerical aspects of the integration methods tested.

  16. A Method for Approximating the Bivariate Normal Correlation Coefficient.

    ERIC Educational Resources Information Center

    Kirk, David B.

    Improvements of the Gaussian quadrature in conjunction with the Newton-Raphson iteration technique (TM 000 789) are discussed as effective methods of calculating the bivariate normal correlation coefficient. (CK)

  17. Close Encounters of a Sparse Kind.

    ERIC Educational Resources Information Center

    Westerberg, Arthur W.

    1980-01-01

    By providing an example problem in solving sets of nonlinear algebraic equations, the advantages and disadvantages of two methods for its solution, the tearing approach versus the Newton-Raphson approach, are elucidated. (CS)

  18. The Beta-Geometric Model Applied to Fecundability in a Sample of Married Women

    NASA Astrophysics Data System (ADS)

    Adekanmbi, D. B.; Bamiduro, T. A.

    2006-10-01

    The time required to achieve pregnancy among married couples, termed fecundability, has been proposed to follow a beta-geometric distribution. The accuracy of the method used in estimating the parameters of the model affects the goodness of fit of the model. In this study, the parameters of the model are estimated using the method of moments and the Newton-Raphson estimation procedure. The goodness of fit of the model was assessed using estimates from the two methods of estimation, as well as the asymptotic relative efficiency of the estimates. A noticeable improvement in the fit of the model to the data on time to conception was observed when the parameters were estimated by the Newton-Raphson procedure, thereby yielding reasonable estimates of fecundability for the married female population in the country.

  19. Fractal basins of attraction in the restricted four-body problem when the primaries are triaxial rigid bodies

    NASA Astrophysics Data System (ADS)

    Suraj, Md Sanam; Asique, Md Chand; Prasad, Umakant; Hassan, M. R.; Shalini, Kumari

    2017-11-01

    The planar equilateral restricted four-body problem, formulated on the basis of Lagrange's triangular solutions is used to determine the existence and locations of libration points and the Newton-Raphson basins of convergence associated with these libration points. We have supposed that all the three primaries situated on the vertices of an equilateral triangle are triaxial rigid bodies. This paper also deals with the effect of these triaxiality parameters on the regions of motion where the test particle is free to move. Further, the regions on the configuration plane filled by the basins of attraction are determined by using the multivariate version of the Newton-Raphson iterative system. The numerical study reveals that the triaxiality of the primaries is one of the most influential parameters in the four-body problem.
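
    Basins of convergence of this kind are built by iterating the Newton-Raphson map from a dense grid of initial conditions and labeling each point by the equilibrium it converges to. The sketch below illustrates the same construction for the simple complex polynomial z^3 - 1 rather than the four-body libration equations; it is short but still produces the characteristic fractal basin boundaries.

      import numpy as np

      # Newton-Raphson basins for f(z) = z**3 - 1 on a grid of complex starting points.
      roots = np.array([1.0, -0.5 + 0.5j*np.sqrt(3), -0.5 - 0.5j*np.sqrt(3)])
      x = np.linspace(-2, 2, 400)
      y = np.linspace(-2, 2, 400)
      Z = x[None, :] + 1j * y[:, None]

      for _ in range(40):                     # Newton iteration applied to every grid point
          Z = Z - (Z**3 - 1.0) / (3.0 * Z**2)

      # Label each starting point by the root its iteration converged to.
      basin = np.argmin(np.abs(Z[..., None] - roots[None, None, :]), axis=-1)
      # basin is a 400x400 integer array; rendering it (e.g. with matplotlib's
      # imshow) reveals the fractal boundaries between the basins of attraction.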

  20. The SPAR thermal analyzer: Present and future

    NASA Astrophysics Data System (ADS)

    Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.

    The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.

  1. The SPAR thermal analyzer: Present and future

    NASA Technical Reports Server (NTRS)

    Marlowe, M. B.; Whetstone, W. D.; Robinson, J. C.

    1982-01-01

    The SPAR thermal analyzer, a system of finite-element processors for performing steady-state and transient thermal analyses, is described. The processors communicate with each other through the SPAR random access data base. As each processor is executed, all pertinent source data is extracted from the data base and results are stored in the data base. Steady state temperature distributions are determined by a direct solution method for linear problems and a modified Newton-Raphson method for nonlinear problems. An explicit and several implicit methods are available for the solution of transient heat transfer problems. Finite element plotting capability is available for model checkout and verification.

  2. Chaos, Fractals, and Polynomials.

    ERIC Educational Resources Information Center

    Tylee, J. Louis; Tylee, Thomas B.

    1996-01-01

    Discusses chaos theory; linear algebraic equations and the numerical solution of polynomials, including the use of the Newton-Raphson technique to find polynomial roots; fractals; search region and coordinate systems; convergence; and generating color fractals on a computer. (LRW)

  3. Two-component mixture model: Application to palm oil and exchange rate

    NASA Astrophysics Data System (ADS)

    Phoong, Seuk-Yen; Ismail, Mohd Tahir; Hamzah, Firdaus Mohamad

    2014-12-01

    Palm oil is a seed crop that is widely used for food and non-food products such as cookies, vegetable oil, cosmetics, household products and others. Palm oil is grown mainly in Malaysia and Indonesia. However, the demand for palm oil has been growing rapidly over the years, a trend that encourages illegal logging of trees and destroys the natural habitat. Hence, the present paper investigates the relationship between the exchange rate and the palm oil price in Malaysia by using maximum likelihood estimation via the Newton-Raphson algorithm to fit a two-component mixture model. In addition, this paper proposes a mixture of normal distributions to accommodate the asymmetric and platykurtic characteristics of the time series data.

  4. Equilibrium paths of an imperfect plate with respect to its aspect ratio

    NASA Astrophysics Data System (ADS)

    Psotny, Martin

    2017-07-01

    The stability analysis of a rectangular plate loaded in compression is presented; a specialized code based on the FEM has been created. A special finite element with 48 degrees of freedom is used in the analysis. The nonlinear finite element method equations are derived from the variational principle of minimum total potential energy. To trace the complete nonlinear equilibrium paths, the Newton-Raphson iteration algorithm is used, and the control is switched between load and displacement during the calculation process. The effects of the initial imperfections on the load-deflection paths are investigated with respect to the aspect ratio of the plate. Special attention is paid to the influence of imperfections on the post-critical buckling mode.

  5. Nonlinear analysis of 0-3 polarized PLZT microplate based on the new modified couple stress theory

    NASA Astrophysics Data System (ADS)

    Wang, Liming; Zheng, Shijie

    2018-02-01

    In this study, based on the new modified couple stress theory, a size-dependent model for nonlinear bending analysis of a pure 0-3 polarized PLZT plate is developed for the first time. The equilibrium equations are derived from a variational formulation based on the potential energy principle and the new modified couple stress theory. The Galerkin method is adopted to derive the nonlinear algebraic equations from the governing differential equations, and these algebraic equations are then solved using the Newton-Raphson method. After simplification, the new model includes only one material length scale parameter. In addition, numerical examples are carried out to study the effect of the material length scale parameter on the nonlinear bending of a simply supported pure 0-3 polarized PLZT plate subjected to light illumination and a uniformly distributed load. The results indicate that the new model is able to capture the size effect and geometric nonlinearity.
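
    Once the Galerkin projection reduces the governing equations to a nonlinear algebraic system, the Newton-Raphson solve follows the usual residual/Jacobian pattern. The sketch below shows a generic driver with a finite-difference Jacobian applied to a toy two-mode system with a cubic nonlinearity; the residual is a stand-in for illustration, not the PLZT plate equations.

      import numpy as np

      def newton_solve(residual, x0, tol=1e-10, max_iter=50, h=1e-7):
          """Solve residual(x) = 0 by Newton-Raphson with a finite-difference Jacobian."""
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              r = residual(x)
              if np.linalg.norm(r) < tol:
                  return x
              J = np.empty((r.size, x.size))
              for k in range(x.size):
                  xp = x.copy(); xp[k] += h
                  J[:, k] = (residual(xp) - r) / h
              x = x - np.linalg.solve(J, r)
          return x

      # Toy stand-in for a two-mode Galerkin system with a cubic (geometric) nonlinearity.
      def residual(a):
          K = np.array([[2.0, -1.0], [-1.0, 2.0]])   # linear stiffness
          load = np.array([0.3, 0.1])                # generalized load coefficients
          return K @ a + 0.5 * a**3 - load

      print(newton_solve(residual, np.zeros(2)))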

  6. Spectral-luminosity evolution of active galactic nuclei (AGN)

    NASA Technical Reports Server (NTRS)

    Leiter, Darryl; Boldt, Elihu

    1992-01-01

    The origin of the cosmic X-ray and gamma-ray backgrounds is explained via the mechanism of AGN spectral-luminosity evolution. The spectral evolution of precursor active galaxies into AGN, and Newton-Raphson input and output parameters are discussed.

  7. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    NASA Astrophysics Data System (ADS)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
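
    The abstract does not give the explicit trigonometric parametrization of the occupation numbers; one natural choice consistent with the stated bounds (an assumption here, not necessarily the authors' form) is

      n_i \;=\; 2\sin^2\theta_i \;=\; 1 - \cos 2\theta_i, \qquad 0 \le n_i \le 2, \qquad \sum_i n_i = N_{\mathrm{elec}},

    where the occupation angles \theta_i are unconstrained optimization variables and the electron-number sum is enforced through the second-order restriction built into the Newton-Raphson equations.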

  8. Impedance computed tomography using an adaptive smoothing coefficient algorithm.

    PubMed

    Suzuki, A; Uchiyama, A

    2001-01-01

    In impedance computed tomography, a fixed-coefficient regularization algorithm has frequently been used to improve the ill-conditioning of the Newton-Raphson algorithm. However, a large amount of experimental data and a long computation time are needed to determine a good smoothing coefficient, because it must be chosen manually from a number of candidates and is held constant for every iteration. Thus, the fixed-coefficient regularization algorithm sometimes distorts the information or fails to have any effect. In this paper, a new adaptive smoothing coefficient algorithm is proposed. This algorithm automatically calculates the smoothing coefficient from the eigenvalues of the ill-conditioned matrix. Therefore, effective images can be obtained within a short computation time. The smoothing coefficient is also adjusted automatically by the information related to the real resistivity distribution and the data collection method. In our impedance system, we have reconstructed the resistivity distributions of two phantoms using this algorithm. As a result, this algorithm needs only one-fifth of the computation time of the fixed-coefficient regularization algorithm. Compared to the fixed-coefficient regularization algorithm, the image is obtained more rapidly, making the approach applicable to real-time monitoring of blood vessels.
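
    The paper's exact rule for deriving the smoothing coefficient from the eigenvalues is not reproduced in the abstract. As a schematic illustration, a regularized (damped) Gauss-Newton update in which the damping term is scaled by the largest eigenvalue of J^T J captures the idea of adapting the coefficient to the conditioning of the current linearization; the scaling factor below is an assumption, not the authors' formula.

      import numpy as np

      def regularized_newton_step(J, r, alpha=1e-2):
          """One regularized (damped) Gauss-Newton update for residual r with Jacobian J.

          The smoothing coefficient is tied to the eigenvalues of J.T @ J so it adapts
          to the conditioning of the current linearization (schematic rule only).
          """
          JtJ = J.T @ J
          eigvals = np.linalg.eigvalsh(JtJ)
          lam = alpha * eigvals.max()            # adaptive smoothing coefficient
          delta = np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), -J.T @ r)
          return delta

      # Hypothetical ill-conditioned linearization
      rng = np.random.default_rng(1)
      J = rng.normal(size=(16, 8)) @ np.diag(10.0 ** -np.arange(8))
      r = rng.normal(size=16)
      print(regularized_newton_step(J, r))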

  9. A study of the parallel algorithm for large-scale DC simulation of nonlinear systems

    NASA Astrophysics Data System (ADS)

    Cortés Udave, Diego Ernesto; Ogrodzki, Jan; Gutiérrez de Anda, Miguel Angel

    Newton-Raphson DC analysis of large-scale nonlinear circuits may be an extremely time-consuming process even if sparse matrix techniques and bypassing of nonlinear model calculations are used. A slight decrease in the time required for this task may be achieved on multi-core, multithreaded computers if the calculation of the mathematical models for the nonlinear elements, as well as the stamp management of the sparse matrix entries, is handled by concurrent processes. The numerical complexity can be further reduced via circuit decomposition and parallel solution of blocks, taking the BBD matrix structure as a departure point. This block-parallel approach may give considerable gains, though it is strongly dependent on the system topology and, of course, on the processor type. This contribution presents an easily parallelizable decomposition-based algorithm for DC simulation and provides a detailed study of its effectiveness.

  10. Performance analysis of improved iterated cubature Kalman filter and its application to GNSS/INS.

    PubMed

    Cui, Bingbo; Chen, Xiyuan; Xu, Yuan; Huang, Haoqian; Liu, Xiao

    2017-01-01

    In order to improve the accuracy and robustness of GNSS/INS navigation systems, an improved iterated cubature Kalman filter (IICKF) is proposed by considering state-dependent noise and system uncertainty. First, a simplified framework of the iterated Gaussian filter is derived by using a damped Newton-Raphson algorithm and an online noise estimator. Then the effect of state-dependent noise arising from the iterated update is analyzed theoretically, and an augmented form of the CKF algorithm is applied to improve the estimation accuracy. The performance of IICKF is verified by field test and numerical simulation, and the results reveal that, compared with the non-iterated filter, the iterated filter is less sensitive to system uncertainty, and IICKF improves the accuracy of yaw, roll and pitch by 48.9%, 73.1% and 83.3%, respectively, compared with the traditional iterated KF. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  11. Optimization of Time-Dependent Particle Tracing Using Tetrahedral Decomposition

    NASA Technical Reports Server (NTRS)

    Kenwright, David; Lane, David

    1995-01-01

    An efficient algorithm is presented for computing particle paths, streak lines and time lines in time-dependent flows with moving curvilinear grids. The integration, velocity interpolation and step-size control are all performed in physical space which avoids the need to transform the velocity field into computational space. This leads to higher accuracy because there are no Jacobian matrix approximations or expensive matrix inversions. Integration accuracy is maintained using an adaptive step-size control scheme which is regulated by the path line curvature. The problem of cell-searching, point location and interpolation in physical space is simplified by decomposing hexahedral cells into tetrahedral cells. This enables the point location to be done analytically and substantially faster than with a Newton-Raphson iterative method. Results presented show this algorithm is up to six times faster than particle tracers which operate on hexahedral cells yet produces almost identical particle trajectories.
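
    The analytic point location referred to above reduces to solving one small linear system per tetrahedron for the barycentric coordinates of the query point, with containment decided by their signs; no Newton-Raphson iteration is needed. A minimal sketch (the geometry below is a toy unit tetrahedron, not a curvilinear-grid cell):

      import numpy as np

      def barycentric_coords(p, v0, v1, v2, v3):
          """Barycentric coordinates of point p in the tetrahedron (v0, v1, v2, v3).

          The point lies inside (or on the boundary of) the cell exactly when all
          four coordinates are non-negative; the same weights interpolate nodal
          velocities as v(p) = sum_i w_i * v_node_i.
          """
          T = np.column_stack((v1 - v0, v2 - v0, v3 - v0))
          b1, b2, b3 = np.linalg.solve(T, p - v0)
          b0 = 1.0 - b1 - b2 - b3
          return np.array([b0, b1, b2, b3])

      # Example: unit tetrahedron and a query point
      v0 = np.array([0.0, 0.0, 0.0])
      v1 = np.array([1.0, 0.0, 0.0])
      v2 = np.array([0.0, 1.0, 0.0])
      v3 = np.array([0.0, 0.0, 1.0])
      w = barycentric_coords(np.array([0.2, 0.3, 0.1]), v0, v1, v2, v3)
      print(w, bool(np.all(w >= 0)))   # inside -> all weights non-negative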

  12. Modified kinetic-hydraulic UASB reactor model for treatment of wastewater containing biodegradable organic substrates.

    PubMed

    El-Seddik, Mostafa M; Galal, Mona M; Radwan, A G; Abdel-Halim, Hisham S

    2016-01-01

    This paper addresses a modified kinetic-hydraulic model for an up-flow anaerobic sludge blanket (UASB) reactor aimed at treating wastewater containing biodegradable organic substrates such as acetic acid, based on the Van der Meer model with the inclusion of biological granules. This dynamic model illustrates the biomass kinetic reaction rate for both direct and indirect growth of microorganisms, coupled with the amount of biogas produced by methanogenic bacteria in the bed and blanket zones of the reactor. Moreover, the pH value required for substrate degradation at the peak specific growth rate of the bacteria is discussed for Andrews' kinetics. Sensitivity analyses of the biomass concentration with respect to the fraction of reactor volume occupied by granules and the up-flow velocity are also presented. Furthermore, the modified mass balance equations of the reactor are solved at steady state using the Newton-Raphson technique to obtain a suitable degree of freedom for the modified model, matching the measured results of the UASB Sanhour wastewater treatment plant in Fayoum, Egypt.

  13. Broad-search algorithms for the spacecraft trajectory design of Callisto-Ganymede-Io triple flyby sequences from 2024 to 2040, Part II: Lambert pathfinding and trajectory solutions

    NASA Astrophysics Data System (ADS)

    Lynam, Alfred E.

    2014-01-01

    Triple-satellite-aided capture employs gravity-assist flybys of three of the Galilean moons of Jupiter in order to decrease the amount of ΔV required to capture a spacecraft into Jupiter orbit. Similarly, triple flybys can be used within a Jupiter satellite tour to rapidly modify the orbital parameters of a Jovicentric orbit, or to increase the number of science flybys. In order to provide a nearly comprehensive search of the solution space of Callisto-Ganymede-Io triple flybys from 2024 to 2040, a third-order, Chebyshev's method variant of the p-iteration solution to Lambert's problem is paired with a second-order, Newton-Raphson method, time of flight iteration solution to the V∞-matching problem. The iterative solutions of these problems provide the orbital parameters of the Callisto-Ganymede transfer, the Ganymede flyby, and the Ganymede-Io transfer, but the characteristics of the Callisto and Io flybys are unconstrained, so they are permitted to vary in order to produce an even larger number of trajectory solutions. The vast amount of solution data is searched to find the best triple-satellite-aided capture window between 2024 and 2040.
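
    Chebyshev's method is a third-order (cubically convergent) refinement of Newton-Raphson that also uses the second derivative. The sketch below shows the classical scalar iteration applied to Kepler's equation as a stand-in target; the paper's variant operates on the Lambert time-of-flight relation inside the p-iteration, which is not reproduced here.

      import math

      def chebyshev_root(f, df, d2f, x0, tol=1e-12, max_iter=30):
          """Chebyshev's third-order root-finding iteration (needs f, f', f'')."""
          x = x0
          for _ in range(max_iter):
              fx, dfx, d2fx = f(x), df(x), d2f(x)
              step = fx / dfx
              x_new = x - step * (1.0 + 0.5 * fx * d2fx / dfx**2)
              if abs(x_new - x) < tol:
                  return x_new
              x = x_new
          return x

      # Example: solve Kepler's equation E - e*sin(E) = M (a simple orbital analogue).
      e, M = 0.3, 1.2
      root = chebyshev_root(lambda E: E - e*math.sin(E) - M,
                            lambda E: 1.0 - e*math.cos(E),
                            lambda E: e*math.sin(E),
                            x0=M)
      print(root)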

  14. A methodology for airplane parameter estimation and confidence interval determination in nonlinear estimation problems. Ph.D. Thesis - George Washington Univ., Apr. 1985

    NASA Technical Reports Server (NTRS)

    Murphy, P. C.

    1986-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. With the fitted surface, sensitivity information can be updated at each iteration with less computational effort than that required by either a finite-difference method or integration of the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, and thus provides flexibility to use model equations in any convenient format. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. The degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels and to predict the degree of agreement between CR bounds and search estimates.

  15. Simulation on Natural Convection of a Nanofluid along an Isothermal Inclined Plate

    NASA Astrophysics Data System (ADS)

    Mitra, Asish

    2017-08-01

    A numerical algorithm is presented for studying laminar natural convection flow of a nanofluid along an isothermal inclined plate. By means of a similarity transformation, the original nonlinear partial differential equations of the flow are transformed into a set of nonlinear ordinary differential equations. These are then reduced to a first-order system and integrated using Newton-Raphson and adaptive Runge-Kutta methods. The computer codes for this numerical analysis are developed in the Matlab environment. Dimensionless velocity and temperature profiles and the nanoparticle concentration for various angles of inclination are illustrated graphically. The effects of the Prandtl number, the Brownian motion parameter and the thermophoresis parameter on the Nusselt number are also discussed. The results of the present simulation are then compared with previous results available in the literature, showing good agreement.
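
    Although the paper's equations are not reproduced in the abstract, the overall pattern (reduce the boundary value problem to a first-order system, integrate with Runge-Kutta, and correct the unknown initial slope with Newton-Raphson) can be sketched on the classical Blasius boundary-layer equation f''' + 0.5 f f'' = 0 as a stand-in for the nanofluid model; a fixed-step RK4 replaces the adaptive integrator for brevity.

      import numpy as np

      def rhs(y):
          """Blasius system: y = [f, f', f''], with f''' = -0.5 * f * f''."""
          return np.array([y[1], y[2], -0.5 * y[0] * y[2]])

      def integrate(s, eta_max=10.0, n=2000):
          """Fixed-step RK4 from eta = 0 with guessed wall curvature f''(0) = s."""
          h = eta_max / n
          y = np.array([0.0, 0.0, s])
          for _ in range(n):
              k1 = rhs(y)
              k2 = rhs(y + 0.5*h*k1)
              k3 = rhs(y + 0.5*h*k2)
              k4 = rhs(y + h*k3)
              y = y + (h/6.0)*(k1 + 2*k2 + 2*k3 + k4)
          return y

      # Newton-Raphson on the shooting residual g(s) = f'(eta_max) - 1.
      s = 0.3
      for _ in range(10):
          g = integrate(s)[1] - 1.0
          if abs(g) < 1e-10:
              break
          dg = (integrate(s + 1e-6)[1] - 1.0 - g) / 1e-6   # finite-difference derivative
          s -= g / dg
      print("f''(0) =", s)    # approaches the classical Blasius value ~0.3321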

  16. Function Invariant and Parameter Scale-Free Transformation Methods

    ERIC Educational Resources Information Center

    Bentler, P. M.; Wingard, Joseph A.

    1977-01-01

    A scale-invariant simple structure function of previously studied function components for principal component analysis and factor analysis is defined. First and second partial derivatives are obtained, and Newton-Raphson iterations are utilized. The resulting solutions are locally optimal and subjectively pleasing. (Author/JKS)

  17. An Algorithm for Efficient Maximum Likelihood Estimation and Confidence Interval Determination in Nonlinear Estimation Problems

    NASA Technical Reports Server (NTRS)

    Murphy, Patrick Charles

    1985-01-01

    An algorithm for maximum likelihood (ML) estimation is developed with an efficient method for approximating the sensitivities. The algorithm was developed for airplane parameter estimation problems but is well suited for most nonlinear, multivariable, dynamic systems. The ML algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). MNRES determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort. MNRES determines the sensitivities with less computational effort than using either a finite-difference method or integrating the analytically determined sensitivity equations. MNRES eliminates the need to derive sensitivity equations for each new model, thus eliminating algorithm reformulation with each new model and providing flexibility to use model equations in any format that is convenient. A random search technique for determining the confidence limits of ML parameter estimates is applied to nonlinear estimation problems for airplanes. The confidence intervals obtained by the search are compared with Cramer-Rao (CR) bounds at the same confidence level. It is observed that the degree of nonlinearity in the estimation problem is an important factor in the relationship between CR bounds and the error bounds determined by the search technique. The CR bounds were found to be close to the bounds determined by the search when the degree of nonlinearity was small. Beale's measure of nonlinearity is developed in this study for airplane identification problems; it is used to empirically correct confidence levels for the parameter confidence limits. The primary utility of the measure, however, was found to be in predicting the degree of agreement between Cramer-Rao bounds and search estimates.
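
    MNRES fits local surface approximations to each output variable in parameter space; the construction details are not given in the abstract. As a rough illustration of the idea of extracting sensitivity (slope) information from previously evaluated points rather than from finite differences or analytic sensitivity equations, the hypothetical sketch below fits a linear surface by least squares over a small cloud of parameter samples.

      import numpy as np

      def estimated_sensitivities(thetas, outputs):
          """Estimate d(output)/d(theta) by a least-squares linear surface fit.

          thetas  : (m, p) array of parameter vectors already evaluated
          outputs : (m,)   array of the corresponding scalar model outputs
          Returns the fitted slope vector (length p), an approximation to the
          sensitivity of the output with respect to each parameter.
          """
          X = np.column_stack([np.ones(len(thetas)), thetas])   # affine design matrix
          coef, *_ = np.linalg.lstsq(X, outputs, rcond=None)
          return coef[1:]                                        # drop the intercept

      # Hypothetical model y = 2*a - 3*b evaluated at a small cloud of points
      rng = np.random.default_rng(2)
      thetas = rng.normal(size=(12, 2))
      outputs = 2.0 * thetas[:, 0] - 3.0 * thetas[:, 1] + 1e-3 * rng.normal(size=12)
      print(estimated_sensitivities(thetas, outputs))   # close to [2, -3]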

  18. HICOV - Newton-Raphson calculus of variation with automatic transversalities

    NASA Technical Reports Server (NTRS)

    Heintschel, T. J.

    1968-01-01

    Computer program generates trajectories that are optimum with respect to payload placed in an earth orbit. It uses a subroutine package which produces the terminal and transversality conditions and their partial derivatives. This program is written in FORTRAN 4 and FORMAC for the IBM 7094 computer.

  19. A robust return-map algorithm for general multisurface plasticity

    DOE PAGES

    Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; ...

    2016-06-16

    Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.

  20. Robust Adaptive Modified Newton Algorithm for Generalized Eigendecomposition and Its Application

    NASA Astrophysics Data System (ADS)

    Yang, Jian; Yang, Feng; Xi, Hong-Sheng; Guo, Wei; Sheng, Yanmin

    2007-12-01

    We propose a robust adaptive algorithm for generalized eigendecomposition problems that arise in modern signal processing applications. To that extent, the generalized eigendecomposition problem is reinterpreted as an unconstrained nonlinear optimization problem. Starting from the proposed cost function and making use of an approximation of the Hessian matrix, a robust modified Newton algorithm is derived. A rigorous analysis of its convergence properties is presented by using stochastic approximation theory. We also apply this theory to solve the signal reception problem of multicarrier DS-CDMA to illustrate its practical application. The simulation results show that the proposed algorithm has fast convergence and excellent tracking capability, which are important in a practical time-varying communication environment.

  1. Estimating the Parameters of the Beta-Binomial Distribution.

    ERIC Educational Resources Information Center

    Wilcox, Rand R.

    1979-01-01

    For some situations the beta-binomial distribution might be used to describe the marginal distribution of test scores for a particular population of examinees. Several different methods of approximating the maximum likelihood estimate were investigated, and it was found that the Newton-Raphson method should be used when it yields admissible…

  2. A Note on the Computation of the Second-Order Derivatives of the Elementary Symmetric Functions in the Rasch Model.

    ERIC Educational Resources Information Center

    Formann, Anton K.

    1986-01-01

    It is shown that for equal parameters explicit formulas exist, facilitating the application of the Newton-Raphson procedure to estimate the parameters in the Rasch model and related models according to the conditional maximum likelihood principle. (Author/LMO)

  3. The influence of image reconstruction algorithms on linear thorax EIT image analysis of ventilation.

    PubMed

    Zhao, Zhanqi; Frerichs, Inéz; Pulletz, Sven; Müller-Lisse, Ullrich; Möller, Knut

    2014-06-01

    Analysis methods of electrical impedance tomography (EIT) images based on different reconstruction algorithms were examined. EIT measurements were performed on eight mechanically ventilated patients with acute respiratory distress syndrome. A maneuver with step increase of airway pressure was performed. EIT raw data were reconstructed offline with (1) filtered back-projection (BP); (2) the Dräger algorithm based on linearized Newton-Raphson (DR); (3) the GREIT (Graz consensus reconstruction algorithm for EIT) reconstruction algorithm with a circular forward model (GR(C)) and (4) GREIT with individual thorax geometry (GR(T)). Individual thorax contours were automatically determined from the routine computed tomography images. Five indices were calculated on the resulting EIT images: (a) the ratio between tidal and deep inflation impedance changes; (b) tidal impedance changes in the right and left lungs; (c) center of gravity; (d) the global inhomogeneity index and (e) ventilation delay at mid-dorsal regions. No significant differences were found in any of the examined indices among the four reconstruction algorithms (p > 0.2, Kruskal-Wallis test). The examined algorithms used for EIT image reconstruction do not influence the selected indices derived from the EIT image analysis. Indices that are validated for images from one reconstruction algorithm are also valid for other reconstruction algorithms.

  4. Designing stellarator coils by a modified Newton method using FOCUS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  5. Designing stellarator coils by a modified Newton method using FOCUS

    NASA Astrophysics Data System (ADS)

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi

    2018-06-01

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  6. Designing stellarator coils by a modified Newton method using FOCUS

    DOE PAGES

    Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; ...

    2018-03-22

    To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.

  7. A kinematically driven anisotropic viscoelastic constitutive model applied to tires

    NASA Technical Reports Server (NTRS)

    Johnson, Arthur R.; Tanner, John A.; Mason, Angela J.

    1995-01-01

    Aircraft tires are composite structures manufactured with viscoelastic materials such as carbon black filled rubber and nylon cords. When loaded they experience large deflections and moderately large strains. Detailed structural models of tires require the use of either nonlinear shell or nonlinear three dimensional solid finite elements. Computational predictions of the dynamic response of tires must consider the composite viscoelastic material behavior in a realistic fashion. We describe a modification to a nonlinear anisotropic shell finite element so it can be used to model viscoelastic stresses during general deformations. The model is developed by introducing internal variables of the type used to model elastic strain energy. The internal variables are strains, curvatures, and transverse shear angles which are in a one-to-one correspondence with the generalized coordinates used to model the elastic strain energy for nonlinear response. A difference-relaxation equation is used to relate changes in the observable strain field to changes in the internal strain field. The internal stress state is introduced into the equilibrium equations by converting it to nodal loads associated with the element's displacement degrees of freedom. In this form the tangent matrix in the Newton-Raphson solution algorithm is not modified from its form for the nonlinear statics problem. Only the gradient vector is modified and the modification is not computationally costly. The existing finite element model for the Space Shuttle nose gear tire is used to provide examples of the algorithm. In the first example, the tire's rim is displaced at a constant rate up to a fixed value. In the second example, the tire's rim is enforced to follow a saw tooth load and unload curve to generate hysteresis loops.

  8. A kinematically driven anisotropic viscoelastic constitutive model applied to tires

    NASA Astrophysics Data System (ADS)

    Johnson, Arthur R.; Tanner, John A.; Mason, Angela J.

    1995-08-01

    Aircraft tires are composite structures manufactured with viscoelastic materials such as carbon black filled rubber and nylon cords. When loaded they experience large deflections and moderately large strains. Detailed structural models of tires require the use of either nonlinear shell or nonlinear three dimensional solid finite elements. Computational predictions of the dynamic response of tires must consider the composite viscoelastic material behavior in a realistic fashion. We describe a modification to a nonlinear anisotropic shell finite element so it can be used to model viscoelastic stresses during general deformations. The model is developed by introducing internal variables of the type used to model elastic strain energy. The internal variables are strains, curvatures, and transverse shear angles which are in a one-to-one correspondence with the generalized coordinates used to model the elastic strain energy for nonlinear response. A difference-relaxation equation is used to relate changes in the observable strain field to changes in the internal strain field. The internal stress state is introduced into the equilibrium equations by converting it to nodal loads associated with the element's displacement degrees of freedom. In this form the tangent matrix in the Newton-Raphson solution algorithm is not modified from its form for the nonlinear statics problem. Only the gradient vector is modified and the modification is not computationally costly. The existing finite element model for the Space Shuttle nose gear tire is used to provide examples of the algorithm. In the first example, the tire's rim is displaced at a constant rate up to a fixed value. In the second example, the tire's rim is enforced to follow a saw tooth load and unload curve to generate hysteresis loops.

  9. A mixed parallel strategy for the solution of coupled multi-scale problems at finite strains

    NASA Astrophysics Data System (ADS)

    Lopes, I. A. Rodrigues; Pires, F. M. Andrade; Reis, F. J. P.

    2018-02-01

    A mixed parallel strategy for the solution of homogenization-based multi-scale constitutive problems undergoing finite strains is proposed. The approach aims to reduce the computational time and memory requirements of non-linear coupled simulations that use finite element discretization at both scales (FE^2). In the first level of the algorithm, a non-conforming domain decomposition technique, based on the FETI method combined with a mortar discretization at the interface of macroscopic subdomains, is employed. A master-slave scheme, which distributes tasks by macroscopic element and adopts dynamic scheduling, is then used for each macroscopic subdomain composing the second level of the algorithm. This strategy allows the parallelization of FE^2 simulations in computers with either shared memory or distributed memory architectures. The proposed strategy preserves the quadratic rates of asymptotic convergence that characterize the Newton-Raphson scheme. Several examples are presented to demonstrate the robustness and efficiency of the proposed parallel strategy.

  10. Simultaneous source and attenuation reconstruction in SPECT using ballistic and single scattering data

    NASA Astrophysics Data System (ADS)

    Courdurier, M.; Monard, F.; Osses, A.; Romero, F.

    2015-09-01

    In medical single-photon emission computed tomography (SPECT) imaging, we seek to simultaneously obtain the internal radioactive sources and the attenuation map using not only ballistic measurements but also first-order scattering measurements and assuming a very specific scattering regime. The problem is modeled using the radiative transfer equation by means of an explicit non-linear operator that gives the ballistic and scattering measurements as a function of the radioactive source and attenuation distributions. First, by differentiating this non-linear operator we obtain a linearized inverse problem. Then, under regularity hypothesis for the source distribution and attenuation map and considering small attenuations, we rigorously prove that the linear operator is invertible and we compute its inverse explicitly. This allows proof of local uniqueness for the non-linear inverse problem. Finally, using the previous inversion result for the linear operator, we propose a new type of iterative algorithm for simultaneous source and attenuation recovery for SPECT based on the Neumann series and a Newton-Raphson algorithm.

  11. Cross-entropy embedding of high-dimensional data using the neural gas model.

    PubMed

    Estévez, Pablo A; Figueroa, Cristián J; Saito, Kazumi

    2005-01-01

    A cross-entropy approach to mapping high-dimensional data into a low-dimensional space embedding is presented. The method projects the input data and the codebook vectors, obtained with the Neural Gas (NG) quantizer algorithm, simultaneously into a low-dimensional output space. The aim of this approach is to preserve the relationship defined by the NG neighborhood function for each pair of input and codebook vectors. A cost function based on the cross-entropy between input and output probabilities is minimized by using a Newton-Raphson method. The new approach is compared with Sammon's non-linear mapping (NLM) and the hierarchical approach of combining a vector quantizer such as the self-organizing feature map (SOM) or NG with the NLM recall algorithm. In comparison with these techniques, our method delivers a clear visualization of both data points and codebooks, and it achieves a better mapping quality in terms of the topology preservation measure q(m).

  12. Smoothing spline ANOVA frailty model for recurrent event data.

    PubMed

    Du, Pang; Jiang, Yihua; Wang, Yuedong

    2011-12-01

    Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data. © 2011, The International Biometric Society.

  13. Self adaptive solution strategies: Locally bound constrained Newton Raphson solution algorithms

    NASA Technical Reports Server (NTRS)

    Padovan, Joe

    1991-01-01

    A summary is given of strategies which enable the automatic adjustment of the constraint surfaces recently used to extend the range and numerical stability/efficiency of nonlinear finite element equation solvers. In addition to handling kinematic and material-induced nonlinearity, both pre- and postbuckling behavior can be treated. The scheme employs localized bounds on various hierarchical partitions of the field variables. These are used to resize, shape, and orient the global constraint surface, thereby enabling essentially automatic load/deflection incrementation. Due to the generality of the approach taken, it can be implemented in conjunction with the constraints of an arbitrary functional type. To benchmark the method, several numerical experiments are presented. These include problems involving kinematic and material nonlinearity, as well as pre- and postbuckling characteristics. Also included is a list of papers published in the course of the work.

  14. A Numerical-Analytical Approach Based on Canonical Transformations for Computing Optimal Low-Thrust Transfers

    NASA Astrophysics Data System (ADS)

    da Silva Fernandes, S.; das Chagas Carvalho, F.; Bateli Romão, J. V.

    2018-04-01

    A numerical-analytical procedure based on infinitesimal canonical transformations is developed for computing optimal time-fixed low-thrust limited power transfers (no rendezvous) between coplanar orbits with small eccentricities in an inverse-square force field. The optimization problem is formulated as a Mayer problem with a set of non-singular orbital elements as state variables. Second order terms in eccentricity are considered in the development of the maximum Hamiltonian describing the optimal trajectories. The two-point boundary value problem of going from an initial orbit to a final orbit is solved by means of a two-stage Newton-Raphson algorithm which uses an infinitesimal canonical transformation. Numerical results are presented for some transfers between circular orbits with moderate radius ratio, including a preliminary analysis of Earth-Mars and Earth-Venus missions.

  15. The Application of Simulation Method in Isothermal Elastic Natural Gas Pipeline

    NASA Astrophysics Data System (ADS)

    Xing, Chunlei; Guan, Shiming; Zhao, Yue; Cao, Jinggang; Chu, Yanji

    2018-02-01

    The elastic pipeline mathematical model is of crucial importance in natural gas pipeline simulation because of its consistency with practical industrial cases. The numerical model of an elastic pipeline introduces nonlinear complexity into the discretized equations, so the Newton-Raphson method cannot achieve fast convergence on this kind of problem. A new Newton-based method with the Powell-Wolfe condition is therefore presented to simulate isothermal elastic pipeline flow. The results obtained by the new method are given based on the defined boundary conditions. It is shown that the method converges in all cases and significantly reduces the computational cost.
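
    The Powell-Wolfe conditions combine a sufficient-decrease test with a curvature test on the line-search step. The sketch below shows the general shape of such a globalized Newton-Raphson solve, using a simple Armijo backtracking acceptance test on the merit function 0.5*||F||^2 as a simplified stand-in for the full Powell-Wolfe logic; the toy residual is not the discretized pipeline model.

      import numpy as np

      def damped_newton(F, J, x, tol=1e-10, max_iter=50, c1=1e-4):
          """Newton-Raphson with backtracking on the merit function 0.5*||F||^2.

          Armijo sufficient decrease is used here as a simplified stand-in for the
          full Powell-Wolfe acceptance conditions.
          """
          for _ in range(max_iter):
              f = F(x)
              if np.linalg.norm(f) < tol:
                  return x
              d = np.linalg.solve(J(x), -f)          # Newton direction
              phi0 = 0.5 * f @ f
              slope = -2.0 * phi0                    # directional derivative of the merit
              t = 1.0
              while 0.5 * np.linalg.norm(F(x + t*d))**2 > phi0 + c1 * t * slope:
                  t *= 0.5                           # backtrack until sufficient decrease
              x = x + t * d
          return x

      # Toy nonlinear system standing in for the discretized pipeline equations
      F = lambda x: np.array([x[0]**2 + x[1] - 2.0, x[0] + x[1]**3 - 2.0])
      J = lambda x: np.array([[2*x[0], 1.0], [1.0, 3*x[1]**2]])
      print(damped_newton(F, J, np.array([2.0, 2.0])))   # converges to (1, 1)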

  16. Tables Of Gaussian-Type Orbital Basis Functions

    NASA Technical Reports Server (NTRS)

    Partridge, Harry

    1992-01-01

    NASA technical memorandum contains tables of estimated Hartree-Fock wave functions for atoms lithium through neon and potassium through krypton. Sets contain optimized Gaussian-type orbital exponents and coefficients, and are of near Hartree-Fock quality. Orbital exponents optimized by minimizing restricted Hartree-Fock energy via scaled Newton-Raphson scheme in which Hessian evaluated numerically by use of analytically determined gradients.

  17. Development of an autonomous video rendezvous and docking system, phase 3

    NASA Technical Reports Server (NTRS)

    Tietz, J. C.

    1984-01-01

    Field-of-view limitations proved troublesome. Higher resolution was required. Side thrusters were too weak. The strategy logic was improved and the Kalman filter was augmented to estimate target attitude and tumble rate. Two separate filters were used. The new filter estimates target attitude and angular momentum. The Newton-Raphson iteration improves image interpretation.

  18. A simple suboptimal least-squares algorithm for attitude determination with multiple sensors

    NASA Technical Reports Server (NTRS)

    Brozenec, Thomas F.; Bender, Douglas J.

    1994-01-01

    Three-axis attitude determination is equivalent to finding a coordinate transformation matrix which transforms a set of reference vectors fixed in inertial space to a set of measurement vectors fixed in the spacecraft. The attitude determination problem can be expressed as a constrained optimization problem. The constraint is that a coordinate transformation matrix must be proper, real, and orthogonal. A transformation matrix can be thought of as optimal in the least-squares sense if it maps the measurement vectors to the reference vectors with minimal 2-norm errors and meets the above constraint. This constrained optimization problem is known as Wahba's problem. Several algorithms which solve Wahba's problem exactly have been developed and used. These algorithms, while steadily improving, are all rather complicated. Furthermore, they involve such numerically unstable or sensitive operations as matrix determinant, matrix adjoint, and Newton-Raphson iterations. This paper describes an algorithm which minimizes Wahba's loss function, but without the constraint. When the constraint is ignored, the problem can be solved by a straightforward, numerically stable least-squares algorithm such as QR decomposition. Even though the algorithm does not explicitly take the constraint into account, it still yields a nearly orthogonal matrix for most practical cases; orthogonality only becomes corrupted when the sensor measurements are very noisy, on the same order of magnitude as the attitude rotations. The algorithm can be simplified if the attitude rotations are small enough so that the approximation sin(theta) approximately equals theta holds. We then compare the computational requirements for several well-known algorithms. For the general large-angle case, the QR least-squares algorithm is competitive with all other known algorithms and faster than most. If attitude rotations are small, the least-squares algorithm can be modified to run faster, and this modified algorithm is faster than all but a similarly specialized version of the QUEST algorithm. We also introduce a novel measurement averaging technique which reduces the n-measurement case to the two measurement case for our particular application, a star tracker and earth sensor mounted on an earth-pointed geosynchronous communications satellite. Using this technique, many n-measurement problems reduce to less than or equal to 3 measurements; this reduces the amount of required calculation without significant degradation in accuracy. Finally, we present the results of some tests which compare the least-squares algorithm with the QUEST and FOAM algorithms in the two-measurement case. For our example case, all three algorithms performed with similar accuracy.
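
    The unconstrained least-squares step described above amounts to stacking the vector equations m_i ≈ A r_i and solving for the 3x3 transformation matrix with a stable factorization-based least-squares routine, skipping the orthogonality constraint. A minimal sketch (the rotation and noise level below are arbitrary test values, and np.linalg.lstsq, which is SVD-based, stands in for the QR factorization named in the abstract):

      import numpy as np

      def lstsq_attitude(refs, meas):
          """Unconstrained least-squares estimate of the attitude matrix A with meas ≈ A @ refs.

          refs, meas : (n, 3) arrays of unit reference and measurement vectors.
          The orthogonality constraint is ignored; np.linalg.lstsq solves the
          stacked system refs @ A.T ≈ meas in one stable factorization.
          """
          At, *_ = np.linalg.lstsq(refs, meas, rcond=None)
          return At.T

      # Hypothetical test: small rotation about z plus measurement noise
      theta = np.deg2rad(5.0)
      A_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                         [np.sin(theta),  np.cos(theta), 0.0],
                         [0.0,            0.0,           1.0]])
      rng = np.random.default_rng(3)
      refs = rng.normal(size=(6, 3))
      refs /= np.linalg.norm(refs, axis=1, keepdims=True)
      meas = refs @ A_true.T + 1e-4 * rng.normal(size=(6, 3))
      A_est = lstsq_attitude(refs, meas)
      print(np.round(A_est, 4))          # close to A_true, and nearly orthogonal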

  19. NLSCIDNT user's guide maximum likelihood parameter identification computer program with nonlinear rotorcraft model

    NASA Technical Reports Server (NTRS)

    1979-01-01

    A nonlinear, maximum likelihood, parameter identification computer program (NLSCIDNT) is described which evaluates rotorcraft stability and control coefficients from flight test data. The optimal estimates of the parameters (stability and control coefficients) are determined (identified) by minimizing the negative log likelihood cost function. The minimization technique is the Levenberg-Marquardt method, which behaves like the steepest descent method when it is far from the minimum and behaves like the modified Newton-Raphson method when it is nearer the minimum. Twenty-one states and 40 measurement variables are modeled, and any subset may be selected. States which are not integrated may be fixed at an input value, or time history data may be substituted for the state in the equations of motion. Any aerodynamic coefficient may be expressed as a nonlinear polynomial function of selected 'expansion variables'.

  20. Calculation of symmetric and asymmetric vortex separation on cones and tangent ogives based on discrete vortex models

    NASA Technical Reports Server (NTRS)

    Chin, S.; Lan, C. Edward

    1988-01-01

    An inviscid discrete vortex model, with newly derived expressions for the tangential velocity imposed at the separation points, is used to investigate the symmetric and asymmetric vortex separation on cones and tangent ogives. The circumferential locations of separation are taken from experimental data. Based on slender body theory, the resulting simultaneous nonlinear algebraic equations in a cross-flow plane are solved with Broyden's modified Newton-Raphson method. Total force coefficients are obtained through the momentum principle with new expressions for nonconical flow. It is shown through the method of function deflation that multiple solutions exist at large enough angles of attack, even with symmetric separation points. These additional solutions are asymmetric in vortex separation and produce side force coefficients which agree well with data for cones and tangent ogives.

  1. Mixed formulation for frictionless contact problems

    NASA Technical Reports Server (NTRS)

    Noor, Ahmed K.; Kim, Kyun O.

    1989-01-01

    Simple mixed finite element models and a computational procedure are presented for the solution of frictionless contact problems. The analytical formulation is based on a form of Reissner's large rotation theory of the structure with the effects of transverse shear deformation included. The contact conditions are incorporated into the formulation by using a perturbed Lagrangian approach with the fundamental unknowns consisting of the internal forces (stress resultants), the generalized displacements, and the Lagrange multipliers associated with the contact conditions. The element characteristic arrays are obtained by using a modified form of the two-field Hellinger-Reissner mixed variational principle. The internal forces and the Lagrange multipliers are allowed to be discontinuous at interelement boundaries. The Newton-Raphson iterative scheme is used for the solution of the nonlinear algebraic equations, and the determination of the contact area and the contact pressures.

  2. Commercial Non-Dispersive Infrared Spectroscopy Sensors for Sub-Ambient Carbon Dioxide Detection

    NASA Technical Reports Server (NTRS)

    Swickrath, Michael J.; Anderson, Molly S.; McMillin, Summer; Broerman, Craig

    2013-01-01

    Carbon dioxide produced through respiration can accumulate rapidly within closed spaces. If not managed, a crew's respiratory rate increases, headaches and hyperventilation occur, vision and hearing are affected, and cognitive abilities decrease. Consequently, development continues on a number of CO2 removal technologies for human spacecraft and spacesuits. Terrestrially, technology development requires precise performance characterization to qualify promising air revitalization equipment. On-orbit, instrumentation is required to identify and eliminate unsafe conditions. This necessitates accurate in situ CO2 detection. Recursive compensation algorithms were developed for sub-ambient detection of CO2 with commercial off-the-shelf (COTS) non-dispersive infrared (NDIR) sensors. In addition, the source of the exponential loss in accuracy is developed theoretically. The basis of the loss can be explained through thermal, Doppler, and Lorentz broadening effects that arise as a result of the temperature, pressure, and composition of the gas mixture under analysis. The objective was to develop a mathematical routine to compensate COTS CO2 sensors relying on NDIR over pressures, temperatures, and compositions far from calibration conditions. The routine relies on a power-law relationship for the pressure dependency of the sensors along with an equivalent pressure to account for the composition dependency. A Newton-Raphson iterative technique solves for actual carbon dioxide concentration based on the reported concentration. Moreover, first principles routines were established to predict mixed-gas spectra based on sensor specifications (e.g., optical path length). The first principles model can be used to parametrically optimize sensors or sensor arrays across a wide variety of pressures/temperatures/ compositions. In this work, heuristic scaling arguments were utilized to develop reasonable compensation techniques. Experimental results confirmed this approach and provided evidence that composition broadening significantly alters spectra when pressure is reduced. Consequently, a recursive compensation technique was developed with the Newton-Raphson method, which was subsequently verified through experimentation.
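
    The calibration constants and the exact functional form of the flight compensation routine are not given in the abstract; the sketch below only illustrates the pattern it describes, namely a power-law pressure dependence combined with an equivalent pressure that depends on the gas composition, inverted for the true concentration with a scalar Newton-Raphson loop. All numerical constants are made-up placeholders, not the values used with the actual sensors.

      # Illustrative compensation sketch for a sub-ambient NDIR CO2 reading.
      P0 = 101.3          # calibration pressure, kPa
      ALPHA = 0.7         # power-law pressure exponent (assumed)
      B = 0.05            # composition-broadening weight for CO2 (assumed)

      def reported(c_true, p_total):
          """Forward model: reading produced by the sensor at total pressure p_total (kPa)."""
          p_eq = p_total * (1.0 + B * c_true)          # equivalent broadening pressure
          return c_true * (p_eq / P0) ** ALPHA

      def compensate(c_reported, p_total, tol=1e-10, max_iter=50):
          """Recover the true concentration by Newton-Raphson on reported(c) - c_reported."""
          c = c_reported                                # reported value as initial guess
          for _ in range(max_iter):
              f = reported(c, p_total) - c_reported
              if abs(f) < tol:
                  break
              h = 1e-8
              dfdc = (reported(c + h, p_total) - reported(c, p_total)) / h
              c -= f / dfdc
          return c

      raw = 0.004                                       # 0.4% CO2 as reported at 70 kPa
      print(compensate(raw, 70.0))                      # larger than raw since P < P0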

  3. Finite element method for viscoelastic medium with damage and the application to structural analysis of solid rocket motor grain

    NASA Astrophysics Data System (ADS)

    Deng, Bin; Shen, ZhiBin; Duan, JingBo; Tang, GuoJin

    2014-05-01

    This paper studies the damage-viscoelastic behavior of composite solid propellants of solid rocket motors (SRM). Based on viscoelastic theories and the strain equivalence hypothesis in damage mechanics, a three-dimensional (3-D) nonlinear viscoelastic constitutive model incorporating damage is developed. The resulting viscoelastic constitutive equations are numerically discretized by an integration algorithm, and a stress-updating method is presented that solves the nonlinear equations with the Newton-Raphson method. A material subroutine for stress updating is written and embedded into the commercial code Abaqus. The material subroutine is validated through typical examples. Our results indicate that the finite element results are in good agreement with the analytical ones and have high accuracy, and that the suggested method and subroutine are efficient and can be further applied to damage-coupled structural analysis of practical SRM grains.

  4. Thermal Model of a Current-Carrying Wire in a Vacuum

    NASA Technical Reports Server (NTRS)

    Border, James

    2006-01-01

    A computer program implements a thermal model of an insulated wire carrying electric current and surrounded by a vacuum. The model includes the effects of Joule heating, conduction of heat along the wire, and radiation of heat from the outer surface of the insulation on the wire. The model takes account of the temperature dependences of the thermal and electrical properties of the wire, the emissivity of the insulation, and the possibility that not only can temperature vary along the wire but, in addition, the ends of the wire can be thermally grounded at different temperatures. The resulting second-order differential equation for the steady-state temperature as a function of position along the wire is highly nonlinear. The wire is discretized along its length, and the equation is solved numerically by use of an iterative algorithm that utilizes a multidimensional version of the Newton-Raphson method.
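
    A sketch of the discretized balance described above, with conduction along the wire, Joule heating, and fourth-power radiation from the surface, solved by a multidimensional Newton-Raphson iteration on the interior node temperatures. The wire properties below are illustrative placeholders, and the temperature dependence of the material properties is dropped for brevity.

      import numpy as np

      # Illustrative placeholder properties for a short wire in vacuum
      L, N = 0.5, 41                      # wire length (m), number of nodes
      dx = L / (N - 1)
      d = 0.5e-3                          # wire diameter (m)
      A = np.pi * d**2 / 4.0              # cross-sectional area
      p = np.pi * d                       # radiating perimeter
      k = 400.0                           # thermal conductivity, W/(m K) (held constant here)
      rho_e = 1.7e-8                      # electrical resistivity, ohm m (held constant here)
      eps, sigma = 0.8, 5.670e-8          # insulation emissivity, Stefan-Boltzmann constant
      I = 2.0                             # current, A
      T_left, T_right, T_env = 300.0, 320.0, 300.0   # end and surroundings temperatures, K

      def residual(T_int):
          """Nodal energy balance: conduction + Joule heating - radiation = 0."""
          T = np.concatenate(([T_left], T_int, [T_right]))
          cond = k * A * (T[:-2] - 2.0*T[1:-1] + T[2:]) / dx**2
          joule = I**2 * rho_e / A
          rad = eps * sigma * p * (T[1:-1]**4 - T_env**4)
          return cond + joule - rad

      T = np.linspace(T_left, T_right, N)[1:-1]        # initial guess: linear profile
      for _ in range(30):
          r = residual(T)
          if np.max(np.abs(r)) < 1e-9:
              break
          J = np.zeros((T.size, T.size))               # finite-difference Jacobian
          for j in range(T.size):
              Tp = T.copy(); Tp[j] += 1e-3
              J[:, j] = (residual(Tp) - r) / 1e-3
          T = T - np.linalg.solve(J, r)
      print("peak temperature (K):", T.max())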

  5. [Poverty profile regarding households participating in a food assistance program].

    PubMed

    Álvarez-Uribe, Martha C; Aguirre-Acevedo, Daniel C

    2012-06-01

    This study aimed to identify subgroups with specific socioeconomic characteristics by using latent class analysis as a method for segmenting the target population of the MANA-ICBF supplementary food program in the Antioquia department of Colombia, and to determine their differences regarding poverty and health conditions so that pertinent resources, programs and policies can be addressed efficiently. The target population consisted of 200,000 children and their households involved in the MANA food assistance program; a representative sample by region was used. Latent class analysis was used, together with the expectation-maximization and Newton-Raphson algorithms, to identify the appropriate number of classes. The final model classified the households into four clusters or classes, differing according to well-defined socio-demographic conditions affecting children's health. Some homes had a greater depth of poverty, thereby lowering the families' quality of life and affecting the health of the children in this age group.

  6. Application of an enriched FEM technique in thermo-mechanical contact problems

    NASA Astrophysics Data System (ADS)

    Khoei, A. R.; Bahmani, B.

    2018-02-01

    In this paper, an enriched FEM technique is employed for the thermo-mechanical contact problem based on the extended finite element method. A fully coupled thermo-mechanical contact formulation is presented in the framework of the X-FEM technique that accounts for deformable continuum mechanics and transient heat transfer analysis. The Coulomb friction law is applied for the mechanical contact problem, and a pressure-dependent thermal contact model is employed through an explicit formulation in the weak form of the X-FEM method. The equilibrium equations are discretized by the Newmark time-splitting method, and the final set of non-linear equations is solved with the Newton-Raphson method using a staggered algorithm. Finally, to illustrate the capability of the proposed computational model, several numerical examples are solved and the results are compared with those reported in the literature.

  7. Real-time absorption and scattering characterization of slab-shaped turbid samples obtained by a combination of angular and spatially resolved measurements.

    PubMed

    Dam, Jan S; Yavari, Nazila; Sørensen, Søren; Andersson-Engels, Stefan

    2005-07-10

    We present a fast and accurate method for real-time determination of the absorption coefficient, the scattering coefficient, and the anisotropy factor of thin turbid samples by using simple continuous-wave noncoherent light sources. The three optical properties are extracted from recordings of angularly resolved transmittance in addition to spatially resolved diffuse reflectance and transmittance. The applied multivariate calibration and prediction techniques are based on multiple polynomial regression in combination with a Newton-Raphson algorithm. The numerical test results based on Monte Carlo simulations showed mean prediction errors of approximately 0.5% for all three optical properties within ranges typical for biological media. Preliminary experimental results are also presented yielding errors of approximately 5%. Thus the presented methods show a substantial potential for simultaneous absorption and scattering characterization of turbid media.

  8. Likelihood-based confidence intervals for estimating floods with given return periods

    NASA Astrophysics Data System (ADS)

    Martins, Eduardo Sávio P. R.; Clarke, Robin T.

    1993-06-01

    This paper discusses aspects of the calculation of likelihood-based confidence intervals for T-year floods, with particular reference to (1) the two-parameter gamma distribution; (2) the Gumbel distribution; (3) the two-parameter log-normal distribution, and other distributions related to the normal by Box-Cox transformations. Calculation of the confidence limits is straightforward using the Nelder-Mead algorithm with a constraint incorporated, although care is necessary to ensure convergence either of the Nelder-Mead algorithm, or of the Newton-Raphson calculation of maximum-likelihood estimates. Methods are illustrated using records from 18 gauging stations in the basin of the River Itajai-Acu, State of Santa Catarina, southern Brazil. A small and restricted simulation compared likelihood-based confidence limits with those given by use of the central limit theorem; for the same confidence probability, the confidence limits of the simulation were wider than those of the central limit theorem, which failed more frequently to contain the true quantile being estimated. The paper discusses possible applications of likelihood-based confidence intervals in other areas of hydrological analysis.

  9. Energy minimization in medical image analysis: Methodologies and applications.

    PubMed

    Zhao, Feng; Xie, Xianghua

    2016-02-01

    Energy minimization is of particular interest in medical image analysis. In the past two decades, a variety of optimization schemes have been developed. In this paper, we present a comprehensive survey of the state-of-the-art optimization approaches. These algorithms are mainly classified into two categories: continuous methods and discrete methods. The former include the Newton-Raphson method, gradient descent method, conjugate gradient method, proximal gradient method, coordinate descent method, and genetic-algorithm-based methods, while the latter cover the graph cuts method, belief propagation method, tree-reweighted message passing method, linear programming method, maximum margin learning method, simulated annealing method, and iterated conditional modes method. We also discuss the minimal surface method, the primal-dual method, and multi-objective optimization methods. In addition, we review several comparative studies that evaluate the performance of different minimization techniques in terms of accuracy, efficiency, or complexity. These optimization techniques are widely used in many medical applications, for example, image segmentation, registration, reconstruction, motion tracking, and compressed sensing. We thus give an overview of those applications as well. Copyright © 2015 John Wiley & Sons, Ltd.

  10. Adaptive control of turbulence intensity is accelerated by frugal flow sampling.

    PubMed

    Quinn, Daniel B; van Halder, Yous; Lentink, David

    2017-11-01

    The aerodynamic performance of vehicles and animals, as well as the productivity of turbines and energy harvesters, depends on the turbulence intensity of the incoming flow. Previous studies have pointed at the potential benefits of active closed-loop turbulence control. However, it is unclear what the minimal sensory and algorithmic requirements are for realizing this control. Here we show that very low-bandwidth anemometers record sufficient information for an adaptive control algorithm to converge quickly. Our online Newton-Raphson algorithm tunes the turbulence in a recirculating wind tunnel by taking readings from an anemometer in the test section. After starting at 9% turbulence intensity, the algorithm converges on values ranging from 10% to 45% in less than 12 iterations within 1% accuracy. By down-sampling our measurements, we show that very-low-bandwidth anemometers record sufficient information for convergence. Furthermore, down-sampling accelerates convergence by smoothing gradients in turbulence intensity. Our results explain why low-bandwidth anemometers in engineering and mechanoreceptors in biology may be sufficient for adaptive control of turbulence intensity. Finally, our analysis suggests that, if certain turbulent eddy sizes are more important to control than others, frugal adaptive control schemes can be particularly computationally effective for improving performance. © 2017 The Author(s).
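
    The essence of such an online update can be sketched in a few lines; the linear plant response and noise level below are invented stand-ins for a wind tunnel, not the authors' setup. Each iteration estimates the local gradient from averaged (down-sampled) anemometer readings and takes a scalar Newton-Raphson step toward the target turbulence intensity.

        import random

        def measure_ti(u, n_samples=50):
            """Placeholder anemometer estimate: average of noisy readings from an assumed plant."""
            true_ti = 9.0 + 0.8 * u              # assumed monotone tunnel response (%)
            return sum(true_ti + random.gauss(0.0, 0.5) for _ in range(n_samples)) / n_samples

        def tune(target_ti, u0=0.0, du=0.5, max_iter=12, tol=0.1):
            u = u0
            for _ in range(max_iter):
                f = measure_ti(u) - target_ti
                if abs(f) < tol:                 # within 0.1 percentage points of the target
                    break
                grad = (measure_ti(u + du) - measure_ti(u - du)) / (2.0 * du)
                u -= f / grad                    # scalar Newton-Raphson step
            return u

        print(tune(target_ti=30.0))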

  11. HADY-I, a FORTRAN program for the compressible stability analysis of three-dimensional boundary layers. [on swept and tapered wings

    NASA Technical Reports Server (NTRS)

    El-Hady, N. M.

    1981-01-01

    A computer program HADY-I for calculating the linear incompressible or compressible stability characteristics of the laminar boundary layer on swept and tapered wings is described. The eigenvalue problem and its adjoint arising from the linearized disturbance equations with the appropriate boundary conditions are solved numerically using a combination of a Newton-Raphson iterative scheme and a variable step size integrator based on the Runge-Kutta-Fehlberg fifth-order formulas. The integrator is used in conjunction with a modified Gram-Schmidt orthonormalization procedure. The computer program HADY-I calculates the growth rates of crossflow or streamwise Tollmien-Schlichting instabilities. It also calculates the group velocities of these disturbances. It is restricted to parallel stability calculations, where the boundary layer (meanflow) is assumed to be parallel. The meanflow solution is an input to the program.

  12. Direct numerical simulation of laminar-turbulent flow over a flat plate at hypersonic flow speeds

    NASA Astrophysics Data System (ADS)

    Egorov, I. V.; Novikov, A. V.

    2016-06-01

    A method for direct numerical simulation of a laminar-turbulent flow around bodies at hypersonic flow speeds is proposed. The simulation is performed by solving the full three-dimensional unsteady Navier-Stokes equations. The method of calculation is oriented toward application on supercomputers and is based on implicit monotonic approximation schemes and a modified Newton-Raphson method for solving nonlinear difference equations. By this method, the development of three-dimensional perturbations in the boundary layer over a flat plate and in a near-wall flow in a compression corner is studied at a free-stream Mach number of M = 5.37. In addition to pulsation characteristics, distributions of the mean coefficients of the viscous flow in the transition region of the streamlined surface are obtained, which enables one to determine the beginning of the laminar-turbulent transition and estimate the characteristics of the turbulent flow in the boundary layer.

  13. PROCESS SIMULATION OF COLD PRESSING OF ARMSTRONG CP-Ti POWDERS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sabau, Adrian S; Gorti, Sarma B; Peter, William H

    A computational methodology is presented for the process simulation of cold pressing of Armstrong CP-Ti powders. The computational model was implemented in the commercial finite element program ABAQUS. Since powder deformation and consolidation are governed by specific pressure-dependent constitutive equations, several solution algorithms were developed for the ABAQUS user material subroutine, UMAT. The solution algorithms compute the plastic strain increments based on an implicit integration of the nonlinear yield function, flow rule, and hardening equations that describe the evolution of the state variables. Since ABAQUS requires a full Newton-Raphson algorithm for the stress-strain equations, an algorithm for obtaining the tangent/linearization moduli, consistent with the return-mapping algorithm, was also developed. Numerical simulation results are presented for the cold compaction of the Ti powders. Several simulations were conducted for cylindrical samples with different aspect ratios. The results showed that for the disk samples the minimum von Mises stress was approximately half of its maximum value. The hydrostatic stress distribution exhibits a smaller variation than the von Mises stress. For the disk and cylinder samples, the minimum hydrostatic stresses were approximately 23% and 50% less than their maximum values, respectively. It was also found that the minimum density was noticeably affected by the sample height.
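
    The structure of such a UMAT-style update can be illustrated with a one-dimensional return-mapping sketch; the von Mises/linear-hardening model and constants below are simplifications for illustration, not the pressure-dependent powder-compaction equations of this work. A local Newton-Raphson loop solves for the plastic multiplier, and the consistent (algorithmic) tangent is returned for the global Newton-Raphson iteration.

        # 1-D elastic-plastic return mapping with a local Newton-Raphson solve and
        # consistent tangent; E, H, SIG_Y are assumed values (MPa), not material data.
        E, H, SIG_Y = 100.0e3, 10.0e3, 250.0

        def return_map(eps_new, eps_p_old, alpha_old):
            """Return stress, updated plastic state, and the algorithmic tangent."""
            sig_trial = E * (eps_new - eps_p_old)
            f_trial = abs(sig_trial) - (SIG_Y + H * alpha_old)
            if f_trial <= 0.0:                          # elastic step
                return sig_trial, eps_p_old, alpha_old, E
            dgamma = 0.0                                # plastic step: solve f(dgamma) = 0
            for _ in range(25):                         # one NR step suffices for linear hardening,
                f = abs(sig_trial) - E * dgamma - (SIG_Y + H * (alpha_old + dgamma))
                df = -(E + H)                           # but the loop shows the general structure
                step = f / df
                dgamma -= step
                if abs(step) < 1e-12:
                    break
            sign = 1.0 if sig_trial >= 0.0 else -1.0
            sig = sig_trial - E * dgamma * sign
            eps_p = eps_p_old + dgamma * sign
            alpha = alpha_old + dgamma
            tangent = E * H / (E + H)                   # consistent tangent for this simple case
            return sig, eps_p, alpha, tangent

        print(return_map(0.01, 0.0, 0.0))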

  14. Tension Cutoff and Parameter Identification for the Viscoplastic Cap Model.

    DTIC Science & Technology

    1983-04-01

    computer program "VPDRVR" which employs a Crank-Nicolson time integration scheme and a Newton-Raphson iterative solution procedure. Numerical studies were ... parameters was illustrated for triaxial stress and uniaxial strain loading for a well-studied sand material (McCormick Ranch Sand). Lastly, a finite element ... viscoplastic tension-cutoff criterion and to establish parameter identification techniques with experimental data. Herein lies the impetus of this study

  15. Dynamic Failure of Materials. Volume 1 - Experiments and Analyses

    DTIC Science & Technology

    1998-11-01

    initial increments of voids do not lead to substantial relaxation of stress. In this case, condition (8.1) gives equation (8.10) ... enough strain steps to define the process accurately. At each strain step, a combined Newton-Raphson and regula falsi solution technique (multiple trials) ... laser surgery. Clinical studies have demonstrated that, for some applications, surgical lasers are superior to conventional surgical procedures

  16. An efficient strongly coupled immersed boundary method for deforming bodies

    NASA Astrophysics Data System (ADS)

    Goza, Andres; Colonius, Tim

    2016-11-01

    Immersed boundary methods treat the fluid and immersed solid with separate domains. As a result, a nonlinear interface constraint must be satisfied when these methods are applied to flow-structure interaction problems. This typically results in a large nonlinear system of equations that is difficult to solve efficiently. Often, this system is solved with a block Gauss-Seidel procedure, which is easy to implement but can require many iterations to converge for small solid-to-fluid mass ratios. Alternatively, a Newton-Raphson procedure can be used to solve the nonlinear system. This typically leads to convergence in a small number of iterations for arbitrary mass ratios, but involves the use of large Jacobian matrices. We present an immersed boundary formulation that, like the Newton-Raphson approach, uses a linearization of the system to perform iterations. It therefore inherits the same favorable convergence behavior. However, we avoid large Jacobian matrices by using a block LU factorization of the linearized system. We derive our method for general deforming surfaces and perform verification on 2D test problems of flow past beams. These test problems involve large amplitude flapping and a wide range of mass ratios. This work was partially supported by the Jet Propulsion Laboratory and Air Force Office of Scientific Research.

  17. Comparison of three newton-like nonlinear least-squares methods for estimating parameters of ground-water flow models

    USGS Publications Warehouse

    Cooley, R.L.; Hill, M.C.

    1992-01-01

    Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.

  18. Localization of source with unknown amplitude using IPMC sensor arrays

    NASA Astrophysics Data System (ADS)

    Abdulsadda, Ahmad T.; Zhang, Feitian; Tan, Xiaobo

    2011-04-01

    The lateral line system, consisting of arrays of neuromasts functioning as flow sensors, is an important sensory organ for fish that enables them to detect predators, locate preys, perform rheotaxis, and coordinate schooling. Creating artificial lateral line systems is of significant interest since it will provide a new sensing mechanism for control and coordination of underwater robots and vehicles. In this paper we propose recursive algorithms for localizing a vibrating sphere, also known as a dipole source, based on measurements from an array of flow sensors. A dipole source is frequently used in the study of biological lateral lines, as a surrogate for underwater motion sources such as a flapping fish fin. We first formulate a nonlinear estimation problem based on an analytical model for the dipole-generated flow field. Two algorithms are presented to estimate both the source location and the vibration amplitude, one based on the least squares method and the other based on the Newton-Raphson method. Simulation results show that both methods deliver comparable performance in source localization. A prototype of artificial lateral line system comprising four ionic polymer-metal composite (IPMC) sensors is built, and experimental results are further presented to demonstrate the effectiveness of IPMC lateral line systems and the proposed estimation algorithms.

  19. Computational modeling of chemo-electro-mechanical coupling: A novel implicit monolithic finite element approach

    PubMed Central

    Wong, J.; Göktepe, S.; Kuhl, E.

    2014-01-01

    Computational modeling of the human heart allows us to predict how chemical, electrical, and mechanical fields interact throughout a cardiac cycle. Pharmacological treatment of cardiac disease has advanced significantly over the past decades, yet it remains unclear how the local biochemistry of an individual heart cell translates into global cardiac function. Here we propose a novel, unified strategy to simulate excitable biological systems across three biological scales. To discretize the governing chemical, electrical, and mechanical equations in space, we propose a monolithic finite element scheme. We apply a highly efficient and inherently modular global-local split, in which the deformation and the transmembrane potential are introduced globally as nodal degrees of freedom, while the chemical state variables are treated locally as internal variables. To ensure unconditional algorithmic stability, we apply an implicit backward Euler finite difference scheme to discretize the resulting system in time. To increase algorithmic robustness and guarantee optimal quadratic convergence, we suggest an incremental iterative Newton-Raphson scheme. The proposed algorithm allows us to simulate the interaction of chemical, electrical, and mechanical fields during a representative cardiac cycle on a patient-specific geometry, robustly and stably, with calculation times on the order of four days on a standard desktop computer. PMID:23798328

  20. Fast simulation techniques for switching converters

    NASA Technical Reports Server (NTRS)

    King, Roger J.

    1987-01-01

    Techniques for simulating a switching converter are examined. The state equations for the equivalent circuits, which represent the switching converter, are presented and explained. The uses of the Newton-Raphson iteration, low ripple approximation, half-cycle symmetry, and discrete time equations to compute the interval durations are described. An example is presented in which these methods are illustrated by applying them to a parallel-loaded resonant inverter with three equivalent circuits for its continuous mode of operation.

  1. Inverse solutions for electrical impedance tomography based on conjugate gradients methods

    NASA Astrophysics Data System (ADS)

    Wang, M.

    2002-01-01

    A multistep inverse solution for the two-dimensional electric field distribution is developed to deal with the nonlinear relation between the electric field distribution and its boundary condition, and with the divergence caused by errors introduced by the ill-conditioned sensitivity matrix and by the noise produced by electrode modelling and instruments. This solution is based on a normalized linear approximation method where the change in mutual impedance is derived from the sensitivity theorem and a method of error vector decomposition. This paper presents an algebraic solution of the linear equations at each inverse step, using a generalized conjugate gradients method. Limiting the number of iterations in the generalized conjugate gradients method controls the artificial errors introduced by the assumption of linearity and the ill-conditioned sensitivity matrix. The solution of the nonlinear problem is approached using a multistep inversion. This paper also reviews the mathematical and physical definitions of the sensitivity back-projection algorithm based on the sensitivity theorem. Simulations and discussion based on the multistep algorithm, the sensitivity coefficient back-projection method and the Newton-Raphson method are given. Examples of imaging gas-liquid mixing and a human hand in brine are presented.

  2. Cascade Optimization for Aircraft Engines With Regression and Neural Network Analysis - Approximators

    NASA Technical Reports Server (NTRS)

    Patnaik, Surya N.; Guptill, James D.; Hopkins, Dale A.; Lavelle, Thomas M.

    2000-01-01

    The NASA Engine Performance Program (NEPP) can configure and analyze almost any type of gas turbine engine that can be generated through the interconnection of a set of standard physical components. In addition, the code can optimize engine performance by changing adjustable variables under a set of constraints. However, for engine cycle problems at certain operating points, the NEPP code can encounter difficulties: nonconvergence in the currently implemented Powell's optimization algorithm and deficiencies in the Newton-Raphson solver during engine balancing. A project was undertaken to correct these deficiencies. Nonconvergence was avoided through a cascade optimization strategy, and deficiencies associated with engine balancing were eliminated through neural network and linear regression methods. An approximation-interspersed cascade strategy was used to optimize the engine's operation over its flight envelope. Replacement of Powell's algorithm by the cascade strategy improved the optimization segment of the NEPP code. The performance of the linear regression and neural network methods as alternative engine analyzers was found to be satisfactory. This report considers two examples, a supersonic mixed-flow turbofan engine and a subsonic waverotor-topped engine, to illustrate the results, and it discusses insights gained from the improved version of the NEPP code.

  3. Conversion from Engineering Units to Telemetry Counts on Dryden Flight Simulators

    NASA Technical Reports Server (NTRS)

    Fantini, Jay A.

    1998-01-01

    Dryden real-time flight simulators encompass the simulation of pulse code modulation (PCM) telemetry signals. This paper presents a new method whereby the calibration polynomial (from first to sixth order), representing the conversion from counts to engineering units (EU), is numerically inverted in real time. The result is less than one-count error for valid EU inputs. The Newton-Raphson method is used to numerically invert the polynomial. A reverse linear interpolation between the EU limits is used to obtain an initial value for the desired telemetry count. The method presented here is not new. What is new is how classical numerical techniques are optimized to take advantage of modern computer power to perform the desired calculations in real time. This technique makes the method simple to understand and implement. There are no interpolation tables to store in memory as in traditional methods. The NASA F-15 simulation converts and transmits over 1000 parameters at 80 times/sec. This paper presents algorithm development, FORTRAN code, and performance results.
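
    The inversion step can be sketched as follows, with an invented third-order calibration polynomial and 12-bit count range standing in for an actual channel calibration. The initial count is obtained by reverse linear interpolation between the EU limits, and Newton-Raphson iterations refine it until the update falls below half a count.

        import numpy as np

        p = np.poly1d([2.4e-9, 1.0e-5, 0.02, -50.0])    # assumed calibration polynomial EU = p(counts)
        COUNT_MIN, COUNT_MAX = 0, 4095                  # assumed 12-bit PCM count range
        EU_MIN, EU_MAX = p(COUNT_MIN), p(COUNT_MAX)

        def eu_to_counts(eu, tol=0.5, max_iter=20):
            # reverse linear interpolation between the EU limits for the initial guess
            c = COUNT_MIN + (eu - EU_MIN) * (COUNT_MAX - COUNT_MIN) / (EU_MAX - EU_MIN)
            dp = p.deriv()
            for _ in range(max_iter):
                step = (p(c) - eu) / dp(c)              # Newton-Raphson update on the polynomial
                c -= step
                if abs(step) < tol:                     # stop once the update is below half a count
                    break
            return int(round(min(max(c, COUNT_MIN), COUNT_MAX)))

        print(eu_to_counts(100.0))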

  4. Transmission Loss Calculation using A and B Loss Coefficients in Dynamic Economic Dispatch Problem

    NASA Astrophysics Data System (ADS)

    Jethmalani, C. H. Ram; Dumpa, Poornima; Simon, Sishaj P.; Sundareswaran, K.

    2016-04-01

    This paper analyzes the performance of A-loss coefficients for evaluating transmission losses in a Dynamic Economic Dispatch (DED) problem. The performance analysis is carried out by comparing the losses computed using nominal A loss coefficients and nominal B loss coefficients with reference to the load flow solution obtained by the standard Newton-Raphson (NR) method. A density-based clustering method over connected regions of sufficiently high density (DBSCAN) is employed to identify the best regions of the A and B loss coefficients. Based on the results obtained through cluster analysis, a novel approach for improving the accuracy of the network loss calculation is proposed: based on the change in per-unit load values between load intervals, the loss coefficients are updated for calculating the transmission losses. The proposed algorithm is tested and validated on the IEEE 6 bus, IEEE 14 bus, IEEE 30 bus, and IEEE 118 bus systems. All simulations are carried out using SCILAB 5.4 (www.scilab.org), which is open source software.
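
    For reference, the classical B-coefficient (Kron) loss formula that such coefficients feed into is sketched below; the coefficient values are made-up placeholders rather than those of the IEEE test systems used in the paper.

        import numpy as np

        # Kron's loss formula: P_L = P^T B P + B0^T P + B00 (all quantities in per unit here)
        B = np.array([[0.000218, 0.000093, 0.000028],
                      [0.000093, 0.000228, 0.000017],
                      [0.000028, 0.000017, 0.000179]])
        B0 = np.array([0.0003, 0.0031, 0.0015])
        B00 = 0.00030523

        def transmission_loss(P):
            """Network loss for a vector of generator outputs P."""
            return P @ B @ P + B0 @ P + B00

        print(transmission_loss(np.array([1.0, 1.5, 0.8])))   # example dispatch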

  5. Logistic regression for circular data

    NASA Astrophysics Data System (ADS)

    Al-Daffaie, Kadhem; Khan, Shahjahan

    2017-05-01

    This paper considers the relationship between a binary response and a circular predictor. It develops the logistic regression model by employing the linear-circular regression approach. The maximum likelihood method is used to estimate the parameters. The Newton-Raphson numerical method is used to find the estimated values of the parameters. A data set from weather records of Toowoomba city is analysed by the proposed methods. Moreover, a simulation study is considered. The R software is used for all computations and simulations.
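
    A minimal sketch of the estimation step is given below (synthetic data rather than the Toowoomba records, and Python in place of the R implementation): the circular predictor enters the linear predictor through its cosine and sine, and the coefficients are updated by Newton-Raphson iterations on the log-likelihood.

        import numpy as np

        rng = np.random.default_rng(0)
        theta = rng.uniform(0.0, 2.0 * np.pi, 300)               # synthetic circular predictor
        eta_true = -0.5 + 1.2 * np.cos(theta) + 0.8 * np.sin(theta)
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-eta_true)))     # synthetic binary response

        X = np.column_stack([np.ones_like(theta), np.cos(theta), np.sin(theta)])
        beta = np.zeros(X.shape[1])
        for _ in range(25):                                      # Newton-Raphson iterations
            prob = 1.0 / (1.0 + np.exp(-X @ beta))
            W = prob * (1.0 - prob)                              # weights from the current fit
            grad = X.T @ (y - prob)                              # score vector
            hess = X.T @ (X * W[:, None])                        # information matrix
            step = np.linalg.solve(hess, grad)
            beta = beta + step
            if np.max(np.abs(step)) < 1e-8:
                break
        print(beta)                                              # intercept, cosine and sine coefficients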

  6. Analysis of an arched outer-race ball bearing considering centrifugal forces.

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Anderson, W. J.

    1972-01-01

    Thrust-load analysis of a 150-mm angular contact ball bearing, taking into account centrifugal forces but omitting gyroscopics, elastohydrodynamics, and thermal effects. A Newton-Raphson method of iteration is used to evaluate the radial and axial projection of the distance between the ball center and the outer raceway groove curvature center. Fatigue life of the bearing is evaluated. Results for life, contact loads, and angles are given for a conventional bearing and two arched bearings.

  7. Nonlinear study of the parallel velocity/tearing instability using an implicit, nonlinear resistive MHD solver

    NASA Astrophysics Data System (ADS)

    Chacon, L.; Finn, J. M.; Knoll, D. A.

    2000-10-01

    Recently, a new parallel velocity instability has been found (J. M. Finn, Phys. Plasmas 2, 12 (1995)). This mode is a tearing mode driven unstable by curvature effects and sound wave coupling in the presence of parallel velocity shear. Under such conditions, linear theory predicts that tearing instabilities will grow even in situations in which the classical tearing mode is stable. This could then be a viable seed mechanism for the neoclassical tearing mode, and hence a non-linear study is of interest. Here, the linear and non-linear stages of this instability are explored using a fully implicit, fully nonlinear 2D reduced resistive MHD code (L. Chacon et al., "Implicit, Jacobian-free Newton-Krylov 2D reduced resistive MHD nonlinear solver," submitted to J. Comput. Phys. (2000)), including viscosity and particle transport effects. The nonlinear implicit time integration is performed using the Newton-Raphson iterative algorithm. Krylov iterative techniques are employed for the required algebraic matrix inversions, implemented Jacobian-free (i.e., without ever forming and storing the Jacobian matrix) and preconditioned with a "physics-based" preconditioner. Nonlinear results indicate that, for large total plasma beta and large parallel velocity shear, the instability results in the generation of large poloidal shear flows and large magnetic islands even in regimes where the classical tearing mode is absolutely stable. For small viscosity, the time-asymptotic state can be turbulent.

  8. Cubic spline anchored grid pattern algorithm for high-resolution detection of subsurface cavities by the IR-CAT method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kassab, A.J.; Pollard, J.E.

    An algorithm is presented for the high-resolution detection of irregular-shaped subsurface cavities within irregular-shaped bodies by the IR-CAT method. The theoretical basis of the algorithm is rooted in the solution of an inverse geometric steady-state heat conduction problem. A Cauchy boundary condition is prescribed at the exposed surface, and the inverse geometric heat conduction problem is formulated by specifying the thermal condition at the inner cavities walls, whose unknown geometries are to be detected. The location of the inner cavities is initially estimated, and the domain boundaries are discretized. Linear boundary elements are used in conjunction with cubic splines for high resolution of the cavity walls. An anchored grid pattern (AGP) is established to constrain the cubic spline knots that control the inner cavity geometry to evolve along the AGP at each iterative step. A residual is defined measuring the difference between imposed and computed boundary conditions. A Newton-Raphson method with a Broyden update is used to automate the detection of inner cavity walls. During the iterative procedure, the movement of the inner cavity walls is restricted to physically realistic intermediate solutions. Numerical simulation demonstrates the superior resolution of the cubic spline AGP algorithm over the linear spline-based AGP in the detection of an irregular-shaped cavity. Numerical simulation is also used to test the sensitivity of the linear and cubic spline AGP algorithms by simulating bias and random error in measured surface temperature. The proposed AGP algorithm is shown to satisfactorily detect cavities with these simulated data.
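
    The Newton-Raphson/Broyden combination can be illustrated on a small generic root-finding problem; the two-equation test residual below is a stand-in, not the boundary-element residual of the IR-CAT formulation. The Jacobian is built once by finite differences and then maintained by Broyden rank-one updates.

        import numpy as np

        def residual(x):
            """Generic 2-D test residual (illustrative only)."""
            return np.array([x[0]**2 + x[1]**2 - 4.0,
                             np.exp(x[0]) + x[1] - 1.0])

        def newton_broyden(x0, tol=1e-10, max_iter=50):
            x = np.asarray(x0, dtype=float)
            f = residual(x)
            n, h = len(x), 1e-6
            J = np.empty((n, n))
            for j in range(n):                              # initial Jacobian by finite differences
                xp = x.copy(); xp[j] += h
                J[:, j] = (residual(xp) - f) / h
            for _ in range(max_iter):
                dx = np.linalg.solve(J, -f)                 # Newton-Raphson step
                x_new = x + dx
                f_new = residual(x_new)
                if np.linalg.norm(f_new, np.inf) < tol:
                    return x_new
                # Broyden rank-one update: J <- J + (df - J dx) dx^T / (dx^T dx)
                J += np.outer(f_new - f - J @ dx, dx) / (dx @ dx)
                x, f = x_new, f_new
            return x

        print(newton_broyden([-1.5, 0.5]))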

  9. A fast algorithm to compute precise type-2 centroids for real-time control applications.

    PubMed

    Chakraborty, Sumantra; Konar, Amit; Ralescu, Anca; Pal, Nikhil R

    2015-02-01

    An interval type-2 fuzzy set (IT2 FS) is characterized by its upper and lower membership functions containing all possible embedded fuzzy sets, which together is referred to as the footprint of uncertainty (FOU). The FOU results in a span of uncertainty measured in the defuzzified space and is determined by the positional difference of the centroids of all the embedded fuzzy sets taken together. This paper provides a closed-form formula to evaluate the span of uncertainty of an IT2 FS. The closed-form formula offers a precise measurement of the degree of uncertainty in an IT2 FS with a runtime complexity less than that of the classical iterative Karnik-Mendel algorithm and other formulations employing the iterative Newton-Raphson algorithm. This paper also demonstrates a real-time control application using the proposed closed-form formula of centroids with reduced root mean square error and computational overhead than those of the existing methods. Computer simulations for this real-time control application indicate that parallel realization of the IT2 defuzzification outperforms its competitors with respect to maximum overshoot even at high sampling rates. Furthermore, in the presence of measurement noise in system (plant) states, the proposed IT2 FS based scheme outperforms its type-1 counterpart with respect to peak overshoot and root mean square error in plant response.

  10. Aircraft automatic-flight-control system with inversion of the model in the feed-forward path using a Newton-Raphson technique for the inversion

    NASA Technical Reports Server (NTRS)

    Smith, G. A.; Meyer, G.; Nordstrom, M.

    1986-01-01

    A new automatic flight control system concept suitable for aircraft with highly nonlinear aerodynamic and propulsion characteristics and which must operate over a wide flight envelope was investigated. This exact model follower inverts a complete nonlinear model of the aircraft as part of the feed-forward path. The inversion is accomplished by a Newton-Raphson trim of the model at each digital computer cycle time of 0.05 seconds. The combination of the inverse model and the actual aircraft in the feed-forward path allows the translational and rotational regulators in the feedback path to be easily designed by linear methods. An explanation of the model inversion procedure is presented. An extensive set of simulation data for essentially the full flight envelope for a vertical attitude takeoff and landing aircraft (VATOL) is presented. These data demonstrate the successful, smooth, and precise control that can be achieved with this concept. The trajectory includes conventional flight from 200 to 900 ft/sec with path accelerations and decelerations, altitude changes of over 6000 ft and 2g and 3g turns. Vertical attitude maneuvering as a tail sitter along all axes is demonstrated. A transition trajectory from 200 ft/sec in conventional flight to stationary hover in the vertical attitude includes satisfactory operation through lift-curve slope reversal as attitude goes from horizontal to vertical at constant altitude. A vertical attitude takeoff from stationary hover to conventional flight is also demonstrated.

  11. Dynamic imaging in electrical impedance tomography of the human chest with online transition matrix identification.

    PubMed

    Moura, Fernando Silva; Aya, Julio Cesar Ceballos; Fleury, Agenor Toledo; Amato, Marcelo Britto Passos; Lima, Raul Gonzalez

    2010-02-01

    One of the objectives of electrical impedance tomography is to estimate the electrical resistivity distribution in a domain based only on electrical potential measurements at its boundary, generated by an electrical current distribution imposed on the boundary. One of the methods used in dynamic estimation is the Kalman filter. In biomedical applications, the random walk model is frequently used as the evolution model and, under these conditions, the extended Kalman filter (EKF) shows poor tracking ability. An analytically developed evolution model is not feasible at this moment. The paper investigates identifying the evolution model in parallel with the EKF and updating the evolution model with a certain periodicity. The evolution model transition matrix is identified using the history of the estimated resistivity distribution obtained by a sensitivity matrix based algorithm and a Newton-Raphson algorithm. To numerically identify the linear evolution model, the Ibrahim time-domain method is used. The investigation is performed by numerical simulations of a domain with time-varying resistivity and by experimental data collected from the boundary of a human chest during normal breathing. The obtained dynamic resistivity values lie within the expected values for the tissues of a human chest. The EKF results suggest that the tracking ability is significantly improved with this approach.

  12. Nonlinear damage identification of breathing cracks in Truss system

    NASA Astrophysics Data System (ADS)

    Zhao, Jie; DeSmidt, Hans

    2014-03-01

    The breathing cracks in a truss system are detected by a Frequency Response Function (FRF) based damage identification method. This method utilizes damage-induced changes of frequency response functions to estimate the severity and location of structural damage. This approach enables the possibility of arbitrary interrogation frequencies and multiple inputs/outputs, which greatly enriches the dataset for damage identification. The dynamical model of the truss system is built using the finite element method, and the crack model is based on fracture mechanics. Since the crack is driven by tensional and compressive forces of the truss member, only one damage parameter is needed to represent the stiffness reduction of each truss member. Assuming that the crack constantly breathes with the exciting frequency, the linear damage detection algorithm is developed in the frequency/time domain using least squares and Newton-Raphson methods. Then, the dynamic response of the truss system with breathing cracks is simulated in the time domain, and meanwhile the crack breathing status for each member is determined by the feedback from real-time displacements of the member's nodes. Harmonic Fourier Coefficients (HFCs) of the dynamical response are computed by processing the data through convolution and moving average filters. Finally, the results show the effectiveness of the linear damage detection algorithm in identifying the nonlinear breathing cracks using different combinations of HFCs and sensors.

  13. The Effect of Plug-in Electric Vehicles on Harmonic Analysis of Smart Grid

    NASA Astrophysics Data System (ADS)

    Heidarian, T.; Joorabian, M.; Reza, A.

    2015-12-01

    In this paper, the effect of plug-in electric vehicles on a smart distribution system is studied with a standard IEEE 30-bus network. First, harmonic power flow analysis is performed by the Newton-Raphson method, considering a distorted substation voltage. Afterward, proper capacitor sizes are selected by the cuckoo optimization algorithm to reduce power losses and cost while imposing acceptable limits on total harmonic distortion and RMS voltages. It is proposed that the impact of the current harmonics generated by electric vehicle battery chargers should be factored into the overall load control strategies of smart appliances. The study is generalized to the different hours of a day by using the daily load curve, and the optimum times for charging electric vehicle batteries in the parking lots are then determined by the cuckoo optimization algorithm. The results show that the harmonic currents injected by plug-in electric vehicles cause a drop in the voltage profile and increase power loss. Moreover, charging the vehicle batteries has a greater impact on power losses than the harmonic currents do. The findings also show that the current harmonics have a strong influence on increasing THD. Finally, the optimum working times of all parking lots were obtained to reduce the utilization cost.

  14. SINFAC - SYSTEMS IMPROVED NUMERICAL FLUIDS ANALYSIS CODE

    NASA Technical Reports Server (NTRS)

    Costello, F. A.

    1994-01-01

    The Systems Improved Numerical Fluids Analysis Code, SINFAC, consists of additional routines added to the April 1983 revision of SINDA, a general thermal analyzer program. The purpose of the additional routines is to allow for the modeling of active heat transfer loops. The modeler can simulate the steady-state and pseudo-transient operations of 16 different heat transfer loop components including radiators, evaporators, condensers, mechanical pumps, reservoirs and many types of valves and fittings. In addition, the program contains a property analysis routine that can be used to compute the thermodynamic properties of 20 different refrigerants. SINFAC can simulate the response to transient boundary conditions. SINFAC was first developed as a method for computing the steady-state performance of two-phase systems. It was then modified using CNFRWD, SINDA's explicit time-integration scheme, to accommodate transient thermal models. However, SINFAC cannot simulate pressure drops due to time-dependent fluid acceleration, transient boil-out, or transient fill-up, except in the accumulator. SINFAC also requires the user to be familiar with SINDA. The solution procedure used by SINFAC is similar to that which an engineer would use to solve a system manually. The solution to a system requires the determination of all of the outlet conditions of each component such as the flow rate, pressure, and enthalpy. To obtain these values, the user first estimates the inlet conditions to the first component of the system, then computes the outlet conditions from the data supplied by the manufacturer of the first component. The user then estimates the temperature at the outlet of the third component and computes the corresponding flow resistance of the second component. With the flow resistance of the second component, the user computes the conditions downstream, namely the inlet conditions of the third component. The computations follow for the rest of the system, back to the first component. On the first pass, the user finds that the calculated outlet conditions of the last component do not match the estimated inlet conditions of the first. The user then modifies the estimated inlet conditions of the first component in an attempt to match the calculated values. The user-estimated values are called State Variables. The differences between the user-estimated values and calculated values are called the Error Variables. The procedure systematically changes the State Variables until all of the Error Variables are less than the user-specified iteration limits. The solution procedure is referred to as SCX. It consists of two phases, the Systems phase and the Controller phase. The X denotes experimental. SCX computes each next set of State Variables in two phases. In the first phase, SCX fixes the controller positions and modifies the other State Variables by the Newton-Raphson method. This first phase is the Systems phase. Once the Newton-Raphson method has solved the problem for the fixed controller positions, SCX next calculates new controller positions based on Newton's method while treating each sensor-controller pair independently but allowing all to change in one iteration. This phase is the Controller phase. SINFAC is available by license for a period of ten (10) years to approved licensees. The licensed program product includes the source code for the additional routines to SINDA, the SINDA object code, command procedures, sample data and supporting documentation. SINFAC was created for use on a DEC VAX under VMS. Source code is written in FORTRAN 77, requires 180k of memory, and should be fully transportable. The program was developed in 1988.

  15. Elasto-Plastic Behavior of Aluminum Foams Subjected to Compression Loading

    NASA Astrophysics Data System (ADS)

    Silva, H. M.; Carvalho, C. D.; Peixinho, N. R.

    2017-05-01

    The non-linear behavior of uniform-size cellular aluminum foams subjected to compressive loads is investigated by comparing numerical results obtained with the finite element method (FEM) software ANSYS Workbench and ANSYS Mechanical APDL (ANSYS Parametric Design Language). The numerical model is built in AUTODESK INVENTOR, imported into ANSYS, and solved by the Newton-Raphson iterative method. The conditions used in ANSYS Mechanical and ANSYS Workbench were kept as similar as possible. The numerical results obtained and the differences between the two programs are presented and discussed.

  16. Proceedings of Workshop on Atmospheric Density and Aerodynamic Drag Models for Air Force Operations Held at Air Force Geophysics Laboratory on 20-22 October 1987. Volume 1

    DTIC Science & Technology

    1990-02-13

    considered with these production processes in a simple photochemical equilibrium calculation, we are able to determine the contribution each makes to the ... Hessian matrix of second derivatives (which is required in the Newton-Raphson procedure) by the vector product of the gradient (∇J) and its transpose ... was focused on the altitude region 80-250 km. Papers were presented in the following areas: Air Force requirements, physics of density and drag

  17. Flight instrumentation specification for parameter identification: Program user's guide. [instrument errors/error analysis

    NASA Technical Reports Server (NTRS)

    Mohr, R. L.

    1975-01-01

    A set of four digital computer programs is presented which can be used to investigate the effects of instrumentation errors on the accuracy of aircraft and helicopter stability-and-control derivatives identified from flight test data. The programs assume that the differential equations of motion are linear and consist of small perturbations about a quasi-steady flight condition. It is also assumed that a Newton-Raphson optimization technique is used for identifying the estimates of the parameters. Flow charts and printouts are included.

  18. An index of refraction algorithm for seawater over temperature, pressure, salinity, density, and wavelength

    NASA Astrophysics Data System (ADS)

    Millard, R. C.; Seaver, G.

    1990-12-01

    A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m -3 which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m -3 accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.

  19. Finite element model correlation of a composite UAV wing using modal frequencies

    NASA Astrophysics Data System (ADS)

    Oliver, Joseph A.; Kosmatka, John B.; Hemez, François M.; Farrar, Charles R.

    2007-04-01

    The current work details the implementation of a meta-model based correlation technique on a composite UAV wing test piece and associated finite element (FE) model. This method involves training polynomial models to emulate the FE input-output behavior and then using numerical optimization to produce a set of correlated parameters which can be returned to the FE model. After discussions about the practical implementation, the technique is validated on a composite plate structure and then applied to the UAV wing structure, where it is furthermore compared to a more traditional Newton-Raphson technique which iteratively uses first-order Taylor-series sensitivity. The experimental testpiece wing comprises two graphite/epoxy prepreg and Nomex honeycomb co-cured skins and two prepreg spars bonded together in a secondary process. MSC.Nastran FE models of the four structural components are correlated independently, using modal frequencies as correlation features, before being joined together into the assembled structure and compared to experimentally measured frequencies from the assembled wing in a cantilever configuration. Results show that significant improvements can be made to the assembled model fidelity, with the meta-model procedure producing slightly superior results to Newton-Raphson iteration. Final evaluation of component correlation using the assembled wing comparison showed worse results for each correlation technique, with the meta-model technique worse overall. This can most likely be attributed to difficulty in correlating the open-section spars; however, there is also some question about non-unique update-variable combinations in the current configuration, which can lead correlation away from physically probable values.

  20. Measurement Uncertainty of Dew-Point Temperature in a Two-Pressure Humidity Generator

    NASA Astrophysics Data System (ADS)

    Martins, L. Lages; Ribeiro, A. Silva; Alves e Sousa, J.; Forbes, Alistair B.

    2012-09-01

    This article describes the measurement uncertainty evaluation of the dew-point temperature when using a two-pressure humidity generator as a reference standard. The estimation of the dew-point temperature involves the solution of a non-linear equation for which iterative solution techniques, such as the Newton-Raphson method, are required. Previous studies have already been carried out using the GUM method and the Monte Carlo method but have not discussed the impact of the approximate numerical method used to provide the temperature estimation. One of the aims of this article is to take this approximation into account. Following the guidelines presented in the GUM Supplement 1, two alternative approaches can be developed: the forward measurement uncertainty propagation by the Monte Carlo method when using the Newton-Raphson numerical procedure; and the inverse measurement uncertainty propagation by Bayesian inference, based on prior available information regarding the usual dispersion of values obtained by the calibration process. The measurement uncertainties obtained using these two methods can be compared with previous results. Other relevant issues concerning this research are the broad application to measurements that require hygrometric conditions obtained from two-pressure humidity generators and, also, the ability to provide a solution that can be applied to similar iterative models. The research also studied the factors influencing both the use of the Monte Carlo method (such as the seed value and the convergence parameter) and the inverse uncertainty propagation using Bayesian inference (such as the pre-assigned tolerance, prior estimate, and standard deviation) in terms of their accuracy and adequacy.
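
    The interplay of the iterative solve and the Monte Carlo propagation can be sketched as follows; the simplified two-pressure relation e_s(Td) = (P_low/P_high) e_s(T_sat), the Magnus-type saturation-pressure approximation, and the assumed input uncertainties are illustrative, not the generator's reference equations or uncertainty budget.

        import math, random

        def e_sat(t_c):
            """Saturation vapour pressure (hPa), simple Magnus-type approximation."""
            return 6.112 * math.exp(17.62 * t_c / (243.12 + t_c))

        def dew_point(t_sat, p_high, p_low, tol=1e-9):
            target = (p_low / p_high) * e_sat(t_sat)     # enhancement factors neglected
            td = t_sat                                   # initial guess
            for _ in range(50):                          # Newton-Raphson on e_sat(td) - target = 0
                f = e_sat(td) - target
                dfdt = e_sat(td) * 17.62 * 243.12 / (243.12 + td) ** 2
                step = f / dfdt
                td -= step
                if abs(step) < tol:
                    break
            return td

        random.seed(1)
        samples = [dew_point(t_sat=random.gauss(20.0, 0.01),     # assumed standard uncertainties
                             p_high=random.gauss(300.0, 0.10),   # kPa
                             p_low=random.gauss(100.0, 0.05))    # kPa
                   for _ in range(20000)]
        mean = sum(samples) / len(samples)
        std = (sum((s - mean) ** 2 for s in samples) / (len(samples) - 1)) ** 0.5
        print(mean, std)    # Monte Carlo estimate of the dew point and its standard uncertainty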

  1. Fisher Scoring Method for Parameter Estimation of Geographically Weighted Ordinal Logistic Regression (GWOLR) Model

    NASA Astrophysics Data System (ADS)

    Widyaningsih, Purnami; Retno Sari Saputro, Dewi; Nugrahani Putri, Aulia

    2017-06-01

    The GWOLR model combines geographically weighted regression (GWR) and ordinal logistic regression (OLR) models. Its parameter estimation employs maximum likelihood estimation. Such estimation, however, yields a difficult-to-solve system of nonlinear equations, and therefore a numerical approximation approach is required. The iterative approximation approach generally uses the Newton-Raphson (NR) method. The NR method has a disadvantage: its Hessian matrix of second derivatives must be re-evaluated at every iteration and does not always produce convergent results. To address this, the NR method is modified by replacing the Hessian matrix with the Fisher information matrix, a variant termed Fisher scoring (FS). The present research determines the GWOLR model parameter estimates using the Fisher scoring method and applies the estimation to data on the level of vulnerability to Dengue Hemorrhagic Fever (DHF) in Semarang. The research concludes that health facilities make the greatest contribution to the probability of the number of DHF sufferers in both villages. Based on the number of sufferers, the IR category of DHF in both villages can be determined.

  2. Kinematics of an in-parallel actuated manipulator based on the Stewart platform mechanism

    NASA Technical Reports Server (NTRS)

    Williams, Robert L., II

    1992-01-01

    This paper presents kinematic equations and solutions for an in-parallel actuated robotic mechanism based on Stewart's platform. These equations are required for inverse position and resolved rate (inverse velocity) platform control. NASA LaRC has a Vehicle Emulator System (VES) platform designed by MIT which is based on Stewart's platform. The inverse position solution is straight-forward and computationally inexpensive. Given the desired position and orientation of the moving platform with respect to the base, the lengths of the prismatic leg actuators are calculated. The forward position solution is more complicated and theoretically has 16 solutions. The position and orientation of the moving platform with respect to the base is calculated given the leg actuator lengths. Two methods are pursued in this paper to solve this problem. The resolved rate (inverse velocity) solution is derived. Given the desired Cartesian velocity of the end-effector, the required leg actuator rates are calculated. The Newton-Raphson Jacobian matrix resulting from the second forward position kinematics solution is a modified inverse Jacobian matrix. Examples and simulations are given for the VES.

  3. Survey on the Performance of Source Localization Algorithms.

    PubMed

    Fresno, José Manuel; Robles, Guillermo; Martínez-Tarifa, Juan Manuel; Stewart, Brian G

    2017-11-18

    The localization of emitters using an array of sensors or antennas is a prevalent problem in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton-Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localization is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and the Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in localizing the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm.
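
    A compact sketch of the iterative hyperbolic least-squares idea follows; the sensor layout, emitter position, noise level, and the Gauss-Newton flavour of the Newton-Raphson update (second-derivative terms dropped) are illustrative choices, not the configurations benchmarked in the survey.

        import numpy as np

        C = 3.0e8                                        # assumed propagation speed (m/s)
        sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
        source_true = np.array([3.0, 7.0])               # used only to synthesize TDoA data

        d_true = np.linalg.norm(sensors - source_true, axis=1)
        tdoa = (d_true[1:] - d_true[0]) / C              # TDoA relative to sensor 0
        tdoa += np.random.default_rng(2).normal(0.0, 1e-11, tdoa.shape)   # timing noise

        def residuals(x):
            d = np.linalg.norm(sensors - x, axis=1)
            return (d[1:] - d[0]) - C * tdoa             # hyperbolic range-difference residuals

        def jacobian(x):
            d = np.linalg.norm(sensors - x, axis=1)
            u = (x - sensors) / d[:, None]               # unit vectors from sensors to x
            return u[1:] - u[0]

        x = np.array([5.0, 5.0])                         # initial guess at the array centre
        for _ in range(20):
            r, J = residuals(x), jacobian(x)
            step = np.linalg.solve(J.T @ J, J.T @ r)     # normal-equations (Gauss-Newton) step
            x = x - step
            if np.linalg.norm(step) < 1e-6:
                break
        print(x)                                         # estimated emitter position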

  4. Maximum Likelihood and Minimum Distance Applied to Univariate Mixture Distributions.

    ERIC Educational Resources Information Center

    Wang, Yuh-Yin Wu; Schafer, William D.

    This Monte-Carlo study compared modified Newton (NW), expectation-maximization algorithm (EM), and minimum Cramer-von Mises distance (MD), used to estimate parameters of univariate mixtures of two components. Data sets were fixed at size 160 and manipulated by mean separation, variance ratio, component proportion, and non-normality. Results…

  5. Hierarchically partitioned nonlinear equation solvers

    NASA Technical Reports Server (NTRS)

    Padovan, Joseph

    1987-01-01

    By partitioning solution space into a number of subspaces, a new multiply constrained partitioned Newton-Raphson nonlinear equation solver is developed. Specifically, for a given iteration, each of the various separate partitions are individually and simultaneously controlled. Due to the generality of the scheme, a hierarchy of partition levels can be employed. For finite-element-type applications, this includes the possibility of degree-of-freedom, nodal, elemental, geometric substructural, material and kinematically nonlinear group controls. It is noted that such partitioning can be continuously updated, depending on solution conditioning. In this context, convergence is ascertained at the individual partition level.

  6. The terminal area automated path generation problem

    NASA Technical Reports Server (NTRS)

    Hsin, C.-C.

    1977-01-01

    The automated terminal area path generation problem in the advanced Air Traffic Control System (ATC) has been studied. Definitions, inputs, outputs, and the interrelationships with other ATC functions have been discussed. Alternatives in modeling the problem have been identified. Problem formulations and solution techniques are presented. In particular, the solution of a minimum effort path stretching problem (path generation on a given schedule) has been carried out using the Newton-Raphson trajectory optimization method. Discussions are presented on the effects of different delivery times, aircraft entry positions, initial guesses on the boundary conditions, etc. Recommendations are made on real-world implementations.

  7. Comparison and Tensorial Formulation of Inelastic Constitutive Models of Salt Rock Behaviour and Efficient Numerical Implementation

    NASA Astrophysics Data System (ADS)

    Nagel, T.; Böttcher, N.; Görke, U. J.; Kolditz, O.

    2014-12-01

    The design process of geotechnical installations includes the application of numerical simulation tools for safety assessment, dimensioning and long term effectiveness estimations. Underground salt caverns can be used for the storage of natural gas, hydrogen, oil, waste or compressed air. For their design one has to take into account fluctuating internal pressures due to different levels of filling, the stresses imposed by the surrounding rock mass, irregular geometries and possibly heterogeneous material properties [3] in order to estimate long term cavern convergence as well as locally critical wall stresses. Constitutive models applied to rock salt are usually viscoplastic in nature and most often based on a Burgers-type rheological model extended by non-linear viscosity functions and/or plastic friction elements. Besides plastic dilatation, healing and damage are sometimes accounted for as well [2]. The scales of the geotechnical system to be simulated and the laboratory tests from which material parameters are determined are vastly different. The most common material testing modalities to determine material parameters in geoengineering are the uniaxial and the triaxial compression tests. Some constitutive formulations in widespread use are formulated based on equivalent rather than tensorial quantities valid under these specific test conditions and are subsequently applied to heterogeneous underground systems and complex 3D load cases. We show here that this procedure is inappropriate and can lead to erroneous results. We further propose alternative formulations of the constitutive models in question that restore their validity under arbitrary loading conditions. For an efficient numerical simulation, the discussed constitutive models are integrated locally with a Newton-Raphson algorithm that directly provides the algorithmically consistent tangent matrix for the global Newton iteration of the displacement based finite element formulation. Finally, the finite element implementations of the proposed constitutive formulations are employed to simulate an underground salt cavern used for compressed air energy storage with OpenGeoSys [1]. Transient convergence and stress fields are evaluated for typical fluctuating operation pressure regimes.
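
    The local (material-point) Newton-Raphson integration with an algorithmically consistent tangent can be illustrated on a deliberately simple one-dimensional Norton power-law creep model. This is only a sketch: the constants are placeholders rather than calibrated salt parameters, and the formulations discussed in the abstract operate on full tensorial quantities.

        import numpy as np

        def integrate_creep_step(eps_total, eps_cr_old, dt, E=25e3, A=1e-10, n=5.0,
                                 tol=1e-10, max_iter=50):
            """Backward-Euler update of a 1-D power-law creep model,
                sigma = E*(eps_total - eps_cr),  d(eps_cr)/dt = A*sigma**n,
            solved with a local Newton-Raphson loop.  Returns the stress, the
            updated creep strain and the consistent tangent d sigma / d eps_total."""
            eps_cr = eps_cr_old
            for _ in range(max_iter):
                sigma = E * (eps_total - eps_cr)
                residual = eps_cr - eps_cr_old - dt * A * sigma**n
                dres = 1.0 + dt * A * n * sigma**(n - 1) * E   # d(residual)/d(eps_cr)
                d_eps = -residual / dres
                eps_cr += d_eps
                if abs(d_eps) < tol:
                    break
            sigma = E * (eps_total - eps_cr)
            dres = 1.0 + dt * A * n * sigma**(n - 1) * E
            # Consistent tangent obtained by linearizing the converged residual.
            tangent = E * (1.0 - dt * A * n * sigma**(n - 1) * E / dres)
            return sigma, eps_cr, tangent

        print(integrate_creep_step(eps_total=1e-3, eps_cr_old=0.0, dt=3600.0))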

  8. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    NASA Astrophysics Data System (ADS)

    Xu, Yu-Lin

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, which are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and either linear in the adjustment parameters or linearized by developing them in Taylor series by first-order approximation, is inadequate in our orbit problem. D.C. Brown proposed an algorithm solving a more general least squares adjustment problem in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W.H Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied in our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in the case the normal approach fails, by combination with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution. The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered. The definition of efficiency is revised.

  9. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these individual algorithms. Following this discussion, the combined parallel algorithm, known as the unified Lambert tool, is presented and an explanation is given as to how it automatically selects which of the three perturbed solvers to compute the perturbed solution for a particular orbit transfer. The unified Lambert tool may be used to determine a single orbit transfer or for generating of an extremal field map. A case study is presented for a mission that is required to rendezvous with two pieces of orbit debris (spent rocket boosters). The unified Lambert tool software developed in this dissertation is already being utilized by several industrial partners and we are confident that it will play a significant role in practical applications, including solution of Lambert problems that arise in the current applications focused on enhanced space situational awareness.

  10. OGS#PETSc approach for robust and efficient simulations of strongly coupled hydrothermal processes in EGS reservoirs

    NASA Astrophysics Data System (ADS)

    Watanabe, Norihiro; Blucher, Guido; Cacace, Mauro; Kolditz, Olaf

    2016-04-01

    A robust and computationally efficient solution is important for 3D modelling of EGS reservoirs. This is particularly the case when the reservoir model includes hydraulic conduits such as induced or natural fractures, fault zones, and wellbore open-hole sections. The existence of such hydraulic conduits results in heterogeneous flow fields and in a strengthened coupling between fluid flow and heat transport processes via temperature-dependent fluid properties (e.g. density and viscosity). A commonly employed partitioned solution (or operator-splitting solution) may not work robustly for such strongly coupled problems, its applicability being limited to small time step sizes (e.g. 5-10 days), whereas the processes have to be simulated for 10-100 years. To overcome this limitation, an alternative approach is desired which can guarantee a robust solution of the coupled problem with minor constraints on time step sizes. In this work, we present a Newton-Raphson based monolithic coupling approach implemented in the OpenGeoSys simulator (OGS) combined with the Portable, Extensible Toolkit for Scientific Computation (PETSc) library. The PETSc library is used for both linear and nonlinear solvers as well as MPI-based parallel computations. The suggested method has been tested by application to the 3D reservoir site of Groß Schönebeck, in northern Germany. Results show that the exact Newton-Raphson approach can also be limited to small time step sizes (e.g. one day) due to slight oscillations in the temperature field. The usage of a line search technique and modification of the Jacobian matrix were necessary to achieve robust convergence of the nonlinear solution. For the studied example, the proposed monolithic approach worked even with a very large time step size of 3.5 years.
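
    The line search mentioned above can be sketched generically as backtracking damping of the Newton step whenever the full step fails to reduce the residual norm. The code below is not the OGS#PETSc implementation; it only illustrates, on an assumed scalar example, why damping helps where undamped Newton-Raphson would diverge.

        import numpy as np

        def newton_line_search(F, J, x0, max_iter=50, tol=1e-10):
            """Newton-Raphson with a simple backtracking line search: the step
            length is halved until the residual norm decreases."""
            x = np.asarray(x0, dtype=float)
            for _ in range(max_iter):
                f = F(x)
                norm0 = np.linalg.norm(f)
                if norm0 < tol:
                    break
                dx = np.linalg.solve(J(x), -f)
                alpha = 1.0
                while alpha > 1e-4:                 # backtrack on the step length
                    if np.linalg.norm(F(x + alpha * dx)) < norm0:
                        break
                    alpha *= 0.5
                x = x + alpha * dx
            return x

        # arctan(x) = 0: full Newton steps diverge from x0 = 3, the damped ones do not.
        F = lambda x: np.array([np.arctan(x[0])])
        J = lambda x: np.array([[1.0 / (1.0 + x[0]**2)]])
        print(newton_line_search(F, J, x0=[3.0]))   # ~ 0.0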

  11. Statistical efficiency of adaptive algorithms.

    PubMed

    Widrow, Bernard; Kamenetsky, Max

    2003-01-01

    The statistical efficiency of a learning algorithm applied to the adaptation of a given set of variable weights is defined as the ratio of the quality of the converged solution to the amount of data used in training the weights. Statistical efficiency is computed by averaging over an ensemble of learning experiences. A high quality solution is very close to optimal, while a low quality solution corresponds to noisy weights and less than optimal performance. In this work, two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS/Newton is based on Newton's method and the LMS algorithm. LMS/Newton is optimal in the least squares sense. It maximizes the quality of its adaptive solution while minimizing the use of training data. Many least squares adaptive algorithms have been devised over the years, but no other least squares algorithm can give better performance, on average, than LMS/Newton. LMS is easily implemented, but LMS/Newton, although of great mathematical interest, cannot be implemented in most practical applications. Because of its optimality, LMS/Newton serves as a benchmark for all least squares adaptive algorithms. The performances of LMS and LMS/Newton are compared, and it is found that under many circumstances, both algorithms provide equal performance. For example, when both algorithms are tested with statistically nonstationary input signals, their average performances are equal. When adapting with stationary input signals and with random initial conditions, their respective learning times are on average equal. However, under worst-case initial conditions, the learning time of LMS can be much greater than that of LMS/Newton, and this is the principal disadvantage of the LMS algorithm. But the strong points of LMS are ease of implementation and optimal performance under important practical conditions. For these reasons, the LMS algorithm has enjoyed very widespread application. It is used in almost every modem for channel equalization and echo cancelling. Furthermore, it is related to the famous backpropagation algorithm used for training neural networks.
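
    The two weight-update rules being compared can be written down compactly. The sketch below assumes a toy system-identification setting in which the input autocorrelation matrix needed by LMS/Newton is simply estimated from the data; the filter length, step size and signals are arbitrary choices, not from the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy system identification: estimate w_true from noisy observations.
        n_taps, n_samples, mu = 4, 5000, 0.01
        w_true = np.array([1.0, -0.5, 0.25, 0.1])
        X = rng.normal(size=(n_samples, n_taps))            # input regressor vectors
        d = X @ w_true + 0.01 * rng.normal(size=n_samples)  # desired signal

        R_inv = np.linalg.inv(X.T @ X / n_samples)          # inverse input autocorrelation

        w_lms = np.zeros(n_taps)
        w_lmsn = np.zeros(n_taps)
        for x_k, d_k in zip(X, d):
            e = d_k - x_k @ w_lms
            w_lms += mu * e * x_k                           # LMS: plain gradient step
            e = d_k - x_k @ w_lmsn
            w_lmsn += mu * R_inv @ (e * x_k)                # LMS/Newton: whitened step

        print("LMS       ", np.round(w_lms, 3))
        print("LMS/Newton", np.round(w_lmsn, 3))

    With white inputs the autocorrelation matrix is close to the identity and the two rules behave almost identically, which is consistent with the equal average performance reported above; the difference appears for correlated inputs and unfavourable initial conditions.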

  12. Molecular dynamics simulation of a needle-sphere binary mixture

    NASA Astrophysics Data System (ADS)

    Raghavan, Karthik

    This paper investigates the dynamic behaviour of a hard needle-sphere binary system using a novel numerical technique called the Newton homotopy continuation (NHC) method. This mixture is representative of a polymer melt where both long chain molecules and monomers coexist. Since the intermolecular forces are generated from hard body interactions, the consequences of missed collisions or incorrect collision sequences have a significant bearing on the dynamic properties of the fluid. To overcome this problem, in earlier work NHC was chosen over traditional Newton-Raphson methods to solve the hard body dynamics of a needle fluid in random media composed of overlapping spheres. Furthermore, the simplicity of interactions and dynamics allows us to focus our research directly on the effects of particle shape and density on the transport behaviour of the mixture. These studies are also compared with earlier works that examined molecular chains in porous media, primarily to understand the differences in molecular transport in the bulk versus porous systems.

  13. Soil heating and evaporation under extreme conditions: Forest fires and slash pile burns

    NASA Astrophysics Data System (ADS)

    Massman, W. J.

    2011-12-01

    Heating any soil during a sufficiently intense wildfire or prescribed burn can alter it irreversibly, resulting in many significant and well-known long-term biological, chemical, and hydrological effects. To better understand how fire impacts soil, especially considering the increasing probability of wildfires driven by climate change and the increasing use of prescribed burns by land managers, it is important to better understand the dynamics of the coupled heat and moisture transport in soil during these extreme heating events. Furthermore, improving understanding of heat and mass transport under such extreme conditions should also provide insights into the associated transport mechanisms under more normal conditions. Here I describe the development of a new model designed to simulate soil heat and moisture transport during fires, where the surface heating often ranges between 10,000 and 100,000 W m-2 for several minutes to several hours. Model performance is tested against laboratory measurements of soil temperature and moisture changes at several depths during controlled heating events created with an extremely intense radiant heater. The laboratory tests employed well-described soils with well-known physical properties. The model, on the other hand, is somewhat unusual in that it employs formulations for temperature dependencies of the soil specific heat, thermal conductivity, and the water retention curve (the relation between soil moisture and soil moisture potential). It also employs a new formulation for the surface evaporation rate as a component of the upper boundary condition, as well as the Newton-Raphson method and the generalized Thomas algorithm for inverting block tri-diagonal matrices to solve for soil temperature and soil moisture potential. Model results show rapid evaporation rates with significant vapor transfer not only to the free atmosphere above the soil, but also to lower depths of the soil, where the vapor re-condenses ahead of the heating front. Consequently, the trajectory of the solution (soil volumetric water content versus soil temperature) is very unusual and highly nonlinear, which may explain why more traditional methods (i.e., those based on finite difference or finite element approaches) tend to show more numerical instabilities than the Newton-Raphson method when used to model these extreme conditions. But, despite the intuitive and qualitative appeal of the model's numerical solution, it underestimates the rate of soil moisture loss observed during the laboratory trials, although the soil temperatures are reasonably well simulated.
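
    For reference, the scalar Thomas algorithm underlying the block tri-diagonal solver mentioned above can be sketched as follows; the block variant replaces the scalar divisions with small matrix solves. This is a generic sketch, not the model's coupled heat and moisture solver.

        import numpy as np

        def thomas_solve(a, b, c, d):
            """Solve a tridiagonal system with sub-diagonal a, diagonal b,
            super-diagonal c and right-hand side d (a[0] and c[-1] are unused)."""
            n = len(d)
            cp, dp = np.zeros(n), np.zeros(n)
            cp[0] = c[0] / b[0]
            dp[0] = d[0] / b[0]
            for i in range(1, n):                      # forward elimination
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom if i < n - 1 else 0.0
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = np.zeros(n)
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):             # back substitution
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x

        # 1-D conduction-like test: -x[i-1] + 2*x[i] - x[i+1] = 1.
        n = 5
        print(thomas_solve(-np.ones(n), 2.0 * np.ones(n), -np.ones(n), np.ones(n)))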

  14. Three-dimensional flow measurements in a vaneless radial turbine scroll

    NASA Technical Reports Server (NTRS)

    Tabakoff, W.; Wood, B.; Vittal, B. V. R.

    1982-01-01

    The flow behavior in a vaneless radial turbine scroll was examined experimentally. The data were obtained using the slant sensor technique of hot film anemometry. This method uses the asymmetric heat transfer characteristics of a constant temperature hot film sensor to detect the flow direction and magnitude, which is achieved by obtaining a velocity vector measurement at three sensor positions with respect to the flow. The true magnitude and direction of the velocity vector were then found from these values using a Newton-Raphson numerical technique. The through-flow and secondary-flow velocity components are measured at various points in three scroll sections.

  15. What Information Theory Says about Bounded Rational Best Response

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2005-01-01

    Probability Collectives (PC) provides the information-theoretic extension of conventional full-rationality game theory to bounded rational games. Here an explicit solution to the equations giving the bounded rationality equilibrium of a game is presented. Then PC is used to investigate games in which the players use bounded rational best-response strategies. Next it is shown that in the continuum-time limit, bounded rational best response games result in a variant of the replicator dynamics of evolutionary game theory. It is then shown that for team (shared-payoff) games, this variant of replicator dynamics is identical to Newton-Raphson iterative optimization of the shared utility function.

  16. Nonlinear Transient Problems Using Structure Compatible Heat Transfer Code

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2000-01-01

    The report documents the recent effort to enhance a transient linear heat transfer code so as to solve nonlinear problems. The linear heat transfer code was originally developed by Dr. Kim Bey of NASA Langley and is called the Structure-Compatible Heat Transfer (SCHT) code. The report includes four parts. The first part outlines the formulation of the heat transfer problem of concern. The second and third parts give detailed procedures to construct the nonlinear finite element equations and the required Jacobian matrices for the nonlinear iterative method, the Newton-Raphson method. The final part summarizes the results of the numerical experiments on the newly enhanced SCHT code.

  17. Arc-Length Continuation and Multi-Grid Techniques for Nonlinear Elliptic Eigenvalue Problems,

    DTIC Science & Technology

    1981-03-19

    size of the finest grid. We use the (AM) adaptive version of the Cycle C algorithm, unless otherwise stated. The first modified algorithm is the... by computing the derivative, uk, at a known solution and use it to get a better initial guess for the next value of X in a predictor-corrector fashion... factorization of the Jacobian Gu computed already in the Newton step. Using such a predictor-corrector method will often allow us to take a much bigger step

  18. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.; Shi, Y.

    1991-01-01

    The development of a comprehensive fluid-structure interaction capability within a boundary element computer code is described. This new capability is implemented in a completely general manner, so that quite arbitrary geometry, material properties and boundary conditions may be specified. Thus, a single analysis code can be used to run structures-only problems, fluids-only problems, or the combined fluid-structure problem. In all three cases, steady or transient conditions can be selected, with or without thermal effects. Nonlinear analyses can be solved via direct iteration or by employing a modified Newton-Raphson approach. A number of detailed numerical examples are included at the end of these two sections to validate the formulations and to emphasize both the accuracy and generality of the computer code. A brief review of the recent applicable boundary element literature is included for completeness. The fluid-structure interaction facility is discussed. Once again, several examples are provided to highlight this unique capability. A collection of potential boundary element applications that have been uncovered as a result of work related to the present grant is given. For most of those problems, satisfactory analysis techniques do not currently exist.
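
    The modified Newton-Raphson option mentioned here usually means reusing an old Jacobian (or its factorization) for several iterations. A generic sketch of that idea, not of the GP-BEST code:

        import numpy as np

        def modified_newton(F, J, x0, max_iter=100, tol=1e-12, refresh_every=5):
            """Modified Newton-Raphson: the Jacobian is formed and inverted only
            every few iterations and reused in between, trading quadratic
            convergence for much cheaper iterations."""
            x = np.asarray(x0, dtype=float)
            J_inv = None
            for k in range(max_iter):
                f = F(x)
                if np.linalg.norm(f) < tol:
                    break
                if J_inv is None or k % refresh_every == 0:
                    J_inv = np.linalg.inv(J(x))    # in practice, store a factorization
                x = x + J_inv @ (-f)               # reuse it on the other iterations
            return x

        F = lambda x: np.array([x[0]**3 - 2.0 * x[0] - 5.0])
        J = lambda x: np.array([[3.0 * x[0]**2 - 2.0]])
        print(modified_newton(F, J, x0=[2.0]))     # root of x**3 - 2x - 5 near 2.0946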

  19. A multi-reference filtered-x-Newton narrowband algorithm for active isolation of vibration and experimental investigations

    NASA Astrophysics Data System (ADS)

    Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng

    2018-01-01

    In engineering applications, ship machinery vibration may be induced by multiple rotational machines sharing a common vibration isolation platform and operating at the same time, and multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple input and multiple output) system, especially those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even though such components cannot be separated from each other by a narrowband-pass filter. Like the Fx-Newton algorithm, good real-time performance is also achieved through the faster convergence speed brought by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm, installed in an active-passive vibration isolation system, in suppressing the vibration excited by an artificial source and air compressors. The results show that the proposed algorithm not only has a convergence rate comparable to that of the Fx-Newton algorithm but also has better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotational machines working on a shared platform.

  20. Unified Lambert Tool for Massively Parallel Applications in Space Situational Awareness

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie; Hernandez, Kevin; Probe, Austin; Junkins, John L.

    2018-03-01

    This paper introduces a parallel-compiled tool that combines several of our recently developed methods for solving the perturbed Lambert problem using modified Chebyshev-Picard iteration. This tool (unified Lambert tool) consists of four individual algorithms, each of which is unique and better suited for solving a particular type of orbit transfer. The first is a Keplerian Lambert solver, which is used to provide a good initial guess (warm start) for solving the perturbed problem. It is also used to determine the appropriate algorithm to call for solving the perturbed problem. The arc length or true anomaly angle spanned by the transfer trajectory is the parameter that governs the automated selection of the appropriate perturbed algorithm, and is based on the respective algorithm convergence characteristics. The second algorithm solves the perturbed Lambert problem using the modified Chebyshev-Picard iteration two-point boundary value solver. This algorithm does not require a Newton-like shooting method and is the most efficient of the perturbed solvers presented herein, however the domain of convergence is limited to about a third of an orbit and is dependent on eccentricity. The third algorithm extends the domain of convergence of the modified Chebyshev-Picard iteration two-point boundary value solver to about 90% of an orbit, through regularization with the Kustaanheimo-Stiefel transformation. This is the second most efficient of the perturbed set of algorithms. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver for solving multiple revolution perturbed transfers. This method does require "shooting" but differs from Newton-like shooting methods in that it does not require propagation of a state transition matrix. The unified Lambert tool makes use of the General Mission Analysis Tool and we use it to compute thousands of perturbed Lambert trajectories in parallel on the Space Situational Awareness computer cluster at the LASR Lab, Texas A&M University. We demonstrate the power of our tool by solving a highly parallel example problem, that is the generation of extremal field maps for optimal spacecraft rendezvous (and eventual orbit debris removal). In addition we demonstrate the need for including perturbative effects in simulations for satellite tracking or data association. The unified Lambert tool is ideal for but not limited to space situational awareness applications.

  1. Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Scaglione, Anna

    2013-11-01

    The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
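
    For orientation, the centralized Gauss-Newton step that the gossip-based variant distributes can be sketched as below; the exponential-fit example and all names are illustrative, not from the paper.

        import numpy as np

        def gauss_newton(residual, jacobian, x0, iters=50, tol=1e-12):
            """Centralized Gauss-Newton for min_x ||r(x)||**2: each step solves the
            linearized normal equations J.T J dx = -J.T r."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                r, J = residual(x), jacobian(x)
                dx = np.linalg.solve(J.T @ J, -J.T @ r)
                x = x + dx
                if np.linalg.norm(dx) < tol:
                    break
            return x

        # Fit y = exp(a*t) + b to synthetic data; the unknowns are (a, b).
        t = np.linspace(0.0, 1.0, 20)
        y = np.exp(0.7 * t) + 0.3
        residual = lambda x: np.exp(x[0] * t) + x[1] - y
        jacobian = lambda x: np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])
        print(gauss_newton(residual, jacobian, x0=[0.0, 0.0]))   # ~ [0.7, 0.3]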

  2. Efficient stabilization and acceleration of numerical simulation of fluid flows by residual recombination

    NASA Astrophysics Data System (ADS)

    Citro, V.; Luchini, P.; Giannetti, F.; Auteri, F.

    2017-09-01

    The study of the stability of a dynamical system described by a set of partial differential equations (PDEs) requires the computation of unstable states as the control parameter exceeds its critical threshold. Unfortunately, the discretization of the governing equations, especially for fluid dynamic applications, often leads to very large discrete systems. As a consequence, matrix based methods, like for example the Newton-Raphson algorithm coupled with a direct inversion of the Jacobian matrix, lead to computational costs too large in terms of both memory and execution time. We present a novel iterative algorithm, inspired by Krylov-subspace methods, which is able to compute unstable steady states and/or accelerate the convergence to stable configurations. Our new algorithm is based on the minimization of the residual norm at each iteration step with a projection basis updated at each iteration rather than at periodic restarts like in the classical GMRES method. The algorithm is able to stabilize any dynamical system without increasing the computational time of the original numerical procedure used to solve the governing equations. Moreover, it can be easily inserted into a pre-existing relaxation (integration) procedure with a call to a single black-box subroutine. The procedure is discussed for problems of different sizes, ranging from a small two-dimensional system to a large three-dimensional problem involving the Navier-Stokes equations. We show that the proposed algorithm is able to improve the convergence of existing iterative schemes. In particular, the procedure is applied to the subcritical flow inside a lid-driven cavity. We also discuss the application of Boostconv to compute the unstable steady flow past a fixed circular cylinder (2D) and boundary-layer flow over a hemispherical roughness element (3D) for supercritical values of the Reynolds number. We show that Boostconv can be used effectively with any spatial discretization, be it a finite-difference, finite-volume, finite-element or spectral method.

  3. Kinematics and dynamics of a six-degree-of-freedom robot manipulator with closed kinematic chain mechanism

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Pooran, Farhad J.

    1989-01-01

    This paper deals with a class of robot manipulators built based on the closed kinematic chain mechanism (CKCM). This class of CKCM manipulators consists of a fixed and a moving platform coupled together via a number of in-parallel actuators. A closed-form solution is derived for the inverse kinematic problem of a six-degree-of-freedom CKCM manipulator designed to study robotic applications in space. The iterative Newton-Raphson method is employed to solve the forward kinematic problem. The dynamics of the above manipulator are derived using the Lagrangian approach. Computer simulation of the dynamical equations shows that the actuating forces are strongly dependent on the mass and centroid of the robot links.

  4. TAP 1: A Finite Element Program for Steady-State Thermal Analysis of Convectively Cooled Structures

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.

    1976-01-01

    The program has a finite element library of six elements: two conduction/convection elements to model heat transfer in a solid, two convection elements to model heat transfer in a fluid, and two integrated conduction/convection elements to represent combined heat transfer in tubular and plate/fin fluid passages. Nonlinear thermal analysis due to temperature dependent thermal parameters is performed using the Newton-Raphson iteration method. Program output includes nodal temperatures and element heat fluxes. Pressure drops in fluid passages may be computed as an option. A companion plotting program for displaying the finite element model and predicted temperature distributions is presented. User instructions and sample problems are presented in appendixes.

  5. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
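
    The combination of an analytic Jacobian with an SVD-based pseudo-inverse can be sketched as follows; the residual model is a toy resection-like problem (position from distances to known points), not the camera calibration equations of the paper.

        import numpy as np

        def newton_pinv(residual, jacobian, x0, iters=30, tol=1e-12):
            """Newton-Raphson for an overdetermined nonlinear system: the update is
            the SVD pseudo-inverse of the analytic Jacobian applied to the residual,
            i.e. a least-squares correction."""
            x = np.asarray(x0, dtype=float)
            for _ in range(iters):
                r, J = residual(x), jacobian(x)
                U, s, Vt = np.linalg.svd(J, full_matrices=False)
                dx = -(Vt.T @ ((U.T @ r) / s))     # pseudo-inverse times residual
                x = x + dx
                if np.linalg.norm(dx) < tol:
                    break
            return x

        # Toy resection: recover a 2-D position from distances to three beacons.
        beacons = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])
        p_true = np.array([1.0, 2.0])
        dists = np.linalg.norm(beacons - p_true, axis=1)
        residual = lambda p: np.linalg.norm(beacons - p, axis=1) - dists
        jacobian = lambda p: (p - beacons) / np.linalg.norm(beacons - p, axis=1)[:, None]
        print(newton_pinv(residual, jacobian, x0=[3.0, 1.0]))    # ~ [1.0, 2.0]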

  6. Aircraft interior noise reduction by alternate resonance tuning

    NASA Technical Reports Server (NTRS)

    Bliss, Donald B.; Gottwald, James A.; Gustaveson, Mark B.; Burton, James R., III

    1988-01-01

    Model problem development and analysis continues with the Alternate Resonance Tuning (ART) concept. The various topics described are presently at different stages of completion: investigation of the effectiveness of the ART concept under an external propagating pressure field associated with propeller passage by the fuselage; analysis of ART performance with a double panel wall mounted in a flexible frame model; development of a data fitting scheme using a branch analysis with a Newton-Raphson scheme in multiple dimensions to determine values of critical parameters in the actual experimental apparatus; and investigation of the ART effect with real panels as opposed to the spring-mass-damper systems currently used in much of the theory.

  7. Galerkin-collocation domain decomposition method for arbitrary binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-05-01

    We present a new computational framework for the Galerkin-collocation method for a double domain in the context of the ADM 3+1 approach in numerical relativity. This work enables us to perform high-resolution calculations for initial sets of two arbitrary black holes. We use the Bowen-York method for binary systems and the puncture method to solve the Hamiltonian constraint. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show convergence of our code for the conformal factor and the ADM mass. Thus, we display features of the conformal factor for different masses, spins and linear momenta.

  8. Dose Titration Algorithm Tuning (DTAT) should supersede 'the' Maximum Tolerated Dose (MTD) in oncology dose-finding trials.

    PubMed

    Norris, David C

    2017-01-01

    Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent 'confirmatory' Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of 'the' maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as 'dose-finding', but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug's population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm^3 using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace 'the' MTD with an individualized concept of MTD_i. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents.

  9. A modified sparse reconstruction method for three-dimensional synthetic aperture radar image

    NASA Astrophysics Data System (ADS)

    Zhang, Ziqiang; Ji, Kefeng; Song, Haibo; Zou, Huanxin

    2018-03-01

    There is an increasing interest in three-dimensional Synthetic Aperture Radar (3-D SAR) imaging from observed sparse scattering data. However, the existing 3-D sparse imaging method requires large computing times and storage capacity. In this paper, we propose a modified method for the sparse 3-D SAR imaging. The method processes the collection of noisy SAR measurements, usually collected over nonlinear flight paths, and outputs 3-D SAR imagery. Firstly, the 3-D sparse reconstruction problem is transformed into a series of 2-D slices reconstruction problem by range compression. Then the slices are reconstructed by the modified SL0 (smoothed l0 norm) reconstruction algorithm. The improved algorithm uses hyperbolic tangent function instead of the Gaussian function to approximate the l0 norm and uses the Newton direction instead of the steepest descent direction, which can speed up the convergence rate of the SL0 algorithm. Finally, numerical simulation results are given to demonstrate the effectiveness of the proposed algorithm. It is shown that our method, compared with existing 3-D sparse imaging method, performs better in reconstruction quality and the reconstruction time.

  10. Ion-dipole interactions in concentrated organic electrolytes.

    PubMed

    Chagnes, Alexandre; Nicolis, Stamatios; Carré, Bernard; Willmann, Patrick; Lemordant, Daniel

    2003-06-16

    An algorithm is proposed for calculating the energy of ion-dipole interactions in concentrated organic electrolytes. The ion-dipole interactions increase with increasing salt concentration and must be taken into account when the activation energy for the conductivity is calculated. In this case, the contribution of ion-dipole interactions to the activation energy for this transport process is of the same order of magnitude as the contribution of ion-ion interactions. The ion-dipole interaction energy was calculated for a cell of eight ions, alternatingly anions and cations, placed on the vertices of an expanded cubic lattice whose parameter is related to the mean interionic distance (pseudolattice theory). The solvent dipoles were introduced randomly into the cell by assuming a randomness compacity of 0.58. The energy of the dipole assembly in the cell was minimized by using a Newton-Raphson numerical method. The dielectric field gradient around ions was taken into account by a distance parameter and a dielectric constant of epsilon = 3 at the surfaces of the ions. A fair agreement between experimental and calculated activation energy has been found for systems composed of gamma-butyrolactone (BL) as solvent and lithium perchlorate (LiClO4), lithium tetrafluoroborate (LiBF4), lithium hexafluorophosphate (LiPF6), lithium hexafluoroarsenate (LiAsF6), and lithium bis(trifluoromethylsulfonyl)imide (LiTFSI) as salts.

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rycroft, Chris H.; Bazant, Martin Z.

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  12. Flowing partially penetrating well: solution to a mixed-type boundary value problem

    NASA Astrophysics Data System (ADS)

    Cassiani, G.; Kabala, Z. J.; Medina, M. A.

    A new semi-analytic solution to the mixed-type boundary value problem for a flowing partially penetrating well with infinitesimal skin situated in an anisotropic aquifer is developed. The solution is suited to aquifers having a semi-infinite vertical extent or to packer tests with aquifer horizontal boundaries far enough from the tested area. The problem reduces to a system of dual integral equations (DE) and further to a deconvolution problem. Unlike the analogous Dagan's steady-state solution [Water Resour. Res. 1978; 14:929-34], our DE solution does not suffer from numerical oscillations. The new solution is validated by matching the corresponding finite-difference solution and is computationally much more efficient. An automated (Newton-Raphson) parameter identification algorithm is proposed for field test inversion, utilizing the DE solution for the forward model. The procedure is computationally efficient and converges to correct parameter values. A solution for the partially penetrating flowing well with no skin and a drawdown-drawdown discontinuous boundary condition, analogous to that by Novakowski [Can. Geotech. J. 1993; 30:600-6], is compared to the DE solution. The D-D solution leads to physically inconsistent infinite total flow rate to the well, when no skin effect is considered. The DE solution, on the other hand, produces accurate results.

  13. Optimizing the inner loop of the gravitational force interaction on modern processors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Warren, Michael S

    2010-12-08

    We have achieved superior performance on multiple generations of the fastest supercomputers in the world with our hashed oct-tree N-body code (HOT), spanning almost two decades and garnering multiple Gordon Bell Prizes for significant achievement in parallel processing. Execution time for our N-body code is largely influenced by the force calculation in the inner loop. Improvements to the inner loop using SSE3 instructions have enabled the calculation of over 200 million gravitational interactions per second per processor on a 2.6 GHz Opteron, for a computational rate of over 7 Gflops in single precision (70% of peak). We obtain optimal performance on some processors (including the Cell) by decomposing the reciprocal square root function required for a gravitational interaction into a table lookup, Chebyshev polynomial interpolation, and Newton-Raphson iteration, using the algorithm of Karp. By unrolling the loop by a factor of six, and using SPU intrinsics to compute on vectors, we obtain performance of over 16 Gflops on a single Cell SPE. Aggregated over the 8 SPEs on a Cell processor, the overall performance is roughly 130 Gflops. In comparison, the ordinary C version of our inner loop only obtains 1.6 Gflops per SPE with the spuxlc compiler.
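
    The reciprocal square root decomposition mentioned above ends with Newton-Raphson refinement, which for y ~ 1/sqrt(x) is the iteration y <- y*(1.5 - 0.5*x*y*y). A scalar sketch, with a crude seed standing in for the table lookup and Chebyshev interpolation step:

        import numpy as np

        def rsqrt_newton(x, iterations=3):
            """Refine an estimate of 1/sqrt(x) with Newton-Raphson iterations;
            each pass roughly doubles the number of accurate digits."""
            y = 1.0 / np.sqrt(np.floor(x) + 0.5)   # crude seed (sensible for x >= 1),
                                                   # a stand-in for the table lookup
            for _ in range(iterations):
                y = y * (1.5 - 0.5 * x * y * y)    # Newton step for f(y) = 1/y**2 - x
            return y

        x = 7.3
        print(rsqrt_newton(x), 1.0 / np.sqrt(x))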

  14. Asymmetric collapse by dissolution or melting in a uniform flow

    DOE PAGES

    Rycroft, Chris H.; Bazant, Martin Z.

    2016-01-06

    An advection-diffusion-limited dissolution model of an object being eroded by a two-dimensional potential flow is presented. By taking advantage of the conformal invariance of the model, a numerical method is introduced that tracks the evolution of the object boundary in terms of a time-dependent Laurent series. Simulations of a variety of dissolving objects are shown, which shrink and collapse to a single point in finite time. The simulations reveal a surprising exact relationship, whereby the collapse point is the root of a non-analytic function given in terms of the flow velocity and the Laurent series coefficients describing the initial shape. This result is subsequently derived using residue calculus. The structure of the non-analytic function is examined for three different test cases, and a practical approach to determine the collapse point using a generalized Newton-Raphson root-finding algorithm is outlined. These examples also illustrate the possibility that the model breaks down in finite time prior to complete collapse, due to a topological singularity, as the dissolving boundary overlaps itself rather than breaking up into multiple domains (analogous to droplet pinch-off in fluid mechanics). In conclusion, the model raises fundamental mathematical questions about broken symmetries in finite-time singularities of both continuous and stochastic dynamical systems.

  15. Numerical simulation of the nonlinear response of composite plates under combined thermal and acoustic loading

    NASA Technical Reports Server (NTRS)

    Mei, Chuh; Moorthy, Jayashree

    1995-01-01

    A time-domain study of the random response of a laminated plate subjected to combined acoustic and thermal loads is carried out. The features of this problem also include given uniform static inplane forces. The formulation takes into consideration a possible initial imperfection in the flatness of the plate. High decibel sound pressure levels along with high thermal gradients across thickness drive the plate response into nonlinear regimes. This calls for the analysis to use von Karman large deflection strain-displacement relationships. A finite element model that combines the von Karman strains with the first-order shear deformation plate theory is developed. The development of the analytical model can accommodate an anisotropic composite laminate built up of uniformly thick layers of orthotropic, linearly elastic laminae. The global system of finite element equations is then reduced to a modal system of equations. Numerical simulation using a single-step algorithm in the time-domain is then carried out to solve for the modal coordinates. Nonlinear algebraic equations within each time-step are solved by the Newton-Raphson method. The random gaussian filtered white noise load is generated using Monte Carlo simulation. The acoustic pressure distribution over the plate is capable of accounting for a grazing incidence wavefront. Numerical results are presented to study a variety of cases.

  16. Newton Algorithms for Analytic Rotation: An Implicit Function Approach

    ERIC Educational Resources Information Center

    Boik, Robert J.

    2008-01-01

    In this paper implicit function-based parameterizations for orthogonal and oblique rotation matrices are proposed. The parameterizations are used to construct Newton algorithms for minimizing differentiable rotation criteria applied to "m" factors and "p" variables. The speed of the new algorithms is compared to that of existing algorithms and to…

  17. A Perturbation Analysis of Harmonics Generation from Saturated Elements in Power Systems

    NASA Astrophysics Data System (ADS)

    Kumano, Teruhisa

    Nonlinear phenomena such as saturation in magnetic flux have considerable effects in power system analysis. It is reported that a failure in a real 500kV system triggered islanding operation, where the resultant even harmonics caused malfunctions in protective relays. It is also reported that the major origin of this wave distortion is the unidirectional magnetization of the transformer iron core. Time simulation is widely used today to analyze this type of phenomena, but it has basically two shortcomings. One is that the time simulation takes too much computing time in the vicinity of inflection points of the saturation characteristic curve, because an iterative procedure such as Newton-Raphson (N-R) must be used and such methods tend to be caught in ill-conditioned numerical hunting. The other is that such simulation methods sometimes do not help intuitive understanding of the studied phenomenon, because the whole set of nonlinear equations is treated in matrix form and not properly divided into understandable parts, as is done in linear systems. This paper proposes a new computation scheme based on the so-called perturbation method. Magnetic saturation in the iron cores of a generator and a transformer is taken into account. The proposed method has a particular advantage with respect to the first shortcoming of the N-R based time simulation stated above: no iterative process is used to reduce the equation residual; a perturbation series is used instead, which means freedom from the ill-conditioning problem. Users only have to calculate the perturbation terms one by one until the necessary accuracy is reached. In the numerical example treated in the present paper, the first-order perturbation already gives reasonably high accuracy, which means very fast computation. In the numerical study three nonlinear elements are considered. The calculated results are almost identical to those of the conventional Newton-Raphson based time simulation, which shows the validity of the method. The proposed method would be effective in screening studies where many cases must be analyzed.
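
    The contrast with Newton-Raphson time simulation can be illustrated on a toy weakly nonlinear equation solved by a low-order perturbation series; the equation and its coefficients below are assumptions for illustration only, unrelated to the paper's power-system model.

        # Toy weakly nonlinear equation x + eps*x**3 = c (a stand-in for a saturation
        # characteristic).  Instead of Newton-Raphson iteration, the root is built
        # term by term as a series in eps: no iteration, no ill-conditioned hunting.
        def perturbation_root(c, eps, order=2):
            x0 = c                     # zeroth order: the linear problem
            x1 = -c**3                 # first-order correction
            x2 = 3.0 * c**5            # second-order correction
            terms = [x0, x1, x2][:order + 1]
            return sum(t * eps**k for k, t in enumerate(terms))

        c, eps = 1.2, 0.01
        x = perturbation_root(c, eps)
        print(x, x + eps * x**3 - c)   # the leftover residual is O(eps**3)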

  18. MODFLOW–USG version 1: An unstructured grid version of MODFLOW for simulating groundwater flow and tightly coupled processes using a control volume finite-difference formulation

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.; Niswonger, Richard G.; Ibaraki, Motomu; Hughes, Joseph D.

    2013-01-01

    A new version of MODFLOW, called MODFLOW–USG (for UnStructured Grid), was developed to support a wide variety of structured and unstructured grid types, including nested grids and grids based on prismatic triangles, rectangles, hexagons, and other cell shapes. Flexibility in grid design can be used to focus resolution along rivers and around wells, for example, or to subdiscretize individual layers to better represent hydrostratigraphic units. MODFLOW–USG is based on an underlying control volume finite difference (CVFD) formulation in which a cell can be connected to an arbitrary number of adjacent cells. To improve accuracy of the CVFD formulation for irregular grid-cell geometries or nested grids, a generalized Ghost Node Correction (GNC) Package was developed, which uses interpolated heads in the flow calculation between adjacent connected cells. MODFLOW–USG includes a Groundwater Flow (GWF) Process, based on the GWF Process in MODFLOW–2005, as well as a new Connected Linear Network (CLN) Process to simulate the effects of multi-node wells, karst conduits, and tile drains, for example. The CLN Process is tightly coupled with the GWF Process in that the equations from both processes are formulated into one matrix equation and solved simultaneously. This robustness results from using an unstructured grid with unstructured matrix storage and solution schemes. MODFLOW–USG also contains an optional Newton-Raphson formulation, based on the formulation in MODFLOW–NWT, for improving solution convergence and avoiding problems with the drying and rewetting of cells. Because the existing MODFLOW solvers were developed for structured and symmetric matrices, they were replaced with a new Sparse Matrix Solver (SMS) Package developed specifically for MODFLOW–USG. The SMS Package provides several methods for resolving nonlinearities and multiple symmetric and asymmetric linear solution schemes to solve the matrix arising from the flow equations and the Newton-Raphson formulation, respectively.

  19. Development of an integrated BEM approach for hot fluid structure interaction

    NASA Technical Reports Server (NTRS)

    Dargush, G. F.; Banerjee, P. K.

    1989-01-01

    The progress made toward the development of a boundary element formulation for the study of hot fluid-structure interaction in Earth-to-Orbit engine hot section components is reported. The convective viscous integral formulation was derived and implemented in the general purpose computer program GP-BEST. The new convective kernel functions, in turn, necessitated the development of refined integration techniques. As a result, however, since the physics of the problem is embedded in these kernels, boundary element solutions can now be obtained at very high Reynolds number. Flow around obstacles can be solved approximately with an efficient linearized boundary-only analysis or, more exactly, by including all of the nonlinearities present in the neighborhood of the obstacle. The other major accomplishment was the development of a comprehensive fluid-structure interaction capability within GP-BEST. This new facility is implemented in a completely general manner, so that quite arbitrary geometry, material properties and boundary conditions may be specified. Thus, a single analysis code (GP-BEST) can be used to run structures-only problems, fluids-only problems, or the combined fluid-structure problem. In all three cases, steady or transient conditions can be selected, with or without thermal effects. Nonlinear analyses can be solved via direct iteration or by employing a modified Newton-Raphson approach.

  20. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics.

    PubMed

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P; Nordsletten, David A

    2014-06-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii-Newton-Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics.

  1. Efficient implementation of three-dimensional reference interaction site model self-consistent-field method: Application to solvatochromic shift calculations

    NASA Astrophysics Data System (ADS)

    Minezawa, Noriyuki; Kato, Shigeki

    2007-02-01

    The authors present an implementation of the three-dimensional reference interaction site model self-consistent-field (3D-RISM-SCF) method. First, they introduce a robust and efficient algorithm for solving the 3D-RISM equation. The algorithm is a hybrid of the Newton-Raphson and Picard methods. The Jacobian matrix is analytically expressed in a computationally useful form. Second, they discuss the solute-solvent electrostatic interaction. For the solute to solvent route, the electrostatic potential (ESP) map on a 3D grid is constructed directly from the electron density. The charge fitting procedure is not required to determine the ESP. For the solvent to solute route, the ESP acting on the solute molecule is derived from the solvent charge distribution obtained by solving the 3D-RISM equation. Matrix elements of the solute-solvent interaction are evaluated by the direct numerical integration. A remarkable reduction in the computational time is observed in both routes. Finally, the authors implement the first derivatives of the free energy with respect to the solute nuclear coordinates. They apply the present method to "solute" water and formaldehyde in aqueous solvent using the simple point charge model, and the results are compared with those from other methods: the six-dimensional molecular Ornstein-Zernike SCF, the one-dimensional site-site RISM-SCF, and the polarizable continuum model. The authors also calculate the solvatochromic shifts of acetone, benzonitrile, and nitrobenzene using the present method and compare them with the experimental and other theoretical results.

  2. Efficient implementation of three-dimensional reference interaction site model self-consistent-field method: application to solvatochromic shift calculations.

    PubMed

    Minezawa, Noriyuki; Kato, Shigeki

    2007-02-07

    The authors present an implementation of the three-dimensional reference interaction site model self-consistent-field (3D-RISM-SCF) method. First, they introduce a robust and efficient algorithm for solving the 3D-RISM equation. The algorithm is a hybrid of the Newton-Raphson and Picard methods. The Jacobian matrix is analytically expressed in a computationally useful form. Second, they discuss the solute-solvent electrostatic interaction. For the solute to solvent route, the electrostatic potential (ESP) map on a 3D grid is constructed directly from the electron density. The charge fitting procedure is not required to determine the ESP. For the solvent to solute route, the ESP acting on the solute molecule is derived from the solvent charge distribution obtained by solving the 3D-RISM equation. Matrix elements of the solute-solvent interaction are evaluated by the direct numerical integration. A remarkable reduction in the computational time is observed in both routes. Finally, the authors implement the first derivatives of the free energy with respect to the solute nuclear coordinates. They apply the present method to "solute" water and formaldehyde in aqueous solvent using the simple point charge model, and the results are compared with those from other methods: the six-dimensional molecular Ornstein-Zernike SCF, the one-dimensional site-site RISM-SCF, and the polarizable continuum model. The authors also calculate the solvatochromic shifts of acetone, benzonitrile, and nitrobenzene using the present method and compare them with the experimental and other theoretical results.

  3. Experiences on p-Version Time-Discontinuous Galerkin's Method for Nonlinear Heat Transfer Analysis and Sensitivity Analysis

    NASA Technical Reports Server (NTRS)

    Hou, Gene

    2004-01-01

    The focus of this research is on the development of analysis and sensitivity analysis equations for nonlinear, transient heat transfer problems modeled by p-version, time discontinuous finite element approximation. The resulting matrix equation of the state equation is simply in the form of A(x)x = c, representing a single step, time marching scheme. The Newton-Raphson method is used to solve the nonlinear equation. Examples are first provided to demonstrate the accuracy characteristics of the resultant finite element approximation. A direct differentiation approach is then used to compute the thermal sensitivities of a nonlinear heat transfer problem. The report shows that only minimal coding effort is required to enhance the analysis code with the sensitivity analysis capability.
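
    A minimal sketch of a Newton-Raphson loop for a single time step written as A(x)x = c is given below; the temperature-dependent 2x2 matrix is a made-up stand-in, and a production code would assemble the consistent tangent rather than a finite-difference Jacobian.

```python
import numpy as np

def newton_single_step(A, c, x0, tol=1e-10, max_iter=50, fd_eps=1e-7):
    """Newton-Raphson on the residual R(x) = A(x) x - c arising from one step of
    a nonlinear time-marching scheme; the Jacobian is approximated here by
    forward differences (a production code would use the consistent tangent)."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for k in range(max_iter):
        R = A(x) @ x - c
        if np.linalg.norm(R) < tol:
            return x, k
        J = np.empty((n, n))
        for j in range(n):
            e = np.zeros(n); e[j] = fd_eps
            xp = x + e
            J[:, j] = (A(xp) @ xp - c - R) / fd_eps
        x = x + np.linalg.solve(J, -R)
    return x, max_iter

# toy temperature-dependent "conductivity": A(x) = K0 * (1 + 0.1 * ||x||)
K0 = np.array([[2.0, -1.0], [-1.0, 2.0]])
A = lambda x: K0 * (1.0 + 0.1 * np.linalg.norm(x))
c = np.array([1.0, 0.0])
print(newton_single_step(A, c, np.zeros(2)))
```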

  4. A triangular thin shell finite element: Nonlinear analysis. [structural analysis

    NASA Technical Reports Server (NTRS)

    Thomas, G. R.; Gallagher, R. H.

    1975-01-01

    Aspects of the formulation of a triangular thin shell finite element which pertain to geometrically nonlinear (small strain, finite displacement) behavior are described. The procedure for solution of the resulting nonlinear algebraic equations combines a one-step incremental (tangent stiffness) approach with one iteration in the Newton-Raphson mode. A method is presented which permits a rational estimation of step size in this procedure. Limit points are calculated by means of a superposition scheme coupled to the incremental side of the solution procedure while bifurcation points are calculated through a process of interpolation of the determinants of the tangent-stiffness matrix. Numerical results are obtained for a flat plate and two curved shell problems and are compared with alternative solutions.

  5. Linear stability analysis of scramjet unstart

    NASA Astrophysics Data System (ADS)

    Jang, Ik; Nichols, Joseph; Moin, Parviz

    2015-11-01

    We investigate the bifurcation structure of unstart and restart events in a dual-mode scramjet using the Reynolds-averaged Navier-Stokes equations. The scramjet of interest (HyShot II, Laurence et al., AIAA2011-2310) operates at a free-stream Mach number of approximately 8, and the length of the combustor chamber is 300 mm. A heat-release model is applied to mimic the combustion process. Pseudo-arclength continuation with Newton-Raphson iteration is used to calculate multiple solution branches. Stability analysis based on linearized dynamics about the solution curves reveals a metric that optimally forewarns unstart. By combining direct and adjoint eigenmodes, structural sensitivity analysis suggests strategies for unstart mitigation, including changing the isolator length. This work is supported by DOE/NNSA and AFOSR.

  6. Robust large-scale parallel nonlinear solvers for simulations.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson

    2005-11-01

    This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any existing linear solver, which makes it simple to write and easily portable. However, the method usually takes twice as long to solve as Newton-GMRES on general problems because it solves two linear systems at each iteration. In this paper, we discuss modifications to Bouaricha's method for a practical implementation, including a special globalization technique and other modifications for greater efficiency. We present numerical results showing computational advantages over Newton-GMRES on some realistic problems. We further discuss a new approach for dealing with singular (or ill-conditioned) matrices. In particular, we modify an algorithm for identifying a turning point so that an increasingly ill-conditioned Jacobian does not prevent convergence.
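
    The report's limited-memory Broyden solver is not reproduced here; the sketch below shows only the basic ("good") Broyden iteration, in which a rank-one secant update replaces Jacobian evaluations entirely. The test system and the identity initial Jacobian are arbitrary.

```python
import numpy as np

def broyden_good(F, x0, tol=1e-10, max_iter=100):
    """'Good' Broyden iteration: start from an identity Jacobian guess and
    apply rank-one secant updates, so no true Jacobian is ever evaluated."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(x.size)
    Fx = F(x)
    for k in range(max_iter):
        if np.linalg.norm(Fx) < tol:
            return x, k
        s = np.linalg.solve(B, -Fx)                 # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        y = F_new - Fx
        B = B + np.outer(y - B @ s, s) / (s @ s)    # secant condition B_new s = y
        x, Fx = x_new, F_new
    return x, max_iter

# mildly nonlinear test system with solution near (0.71, 0.62)
F = lambda x: np.array([x[0] + 0.5 * np.sin(x[1]) - 1.0,
                        x[1] + 0.5 * np.cos(x[0]) - 1.0])
print(broyden_good(F, np.zeros(2)))
```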

  7. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 2: Algorithmic developments and implementation

    NASA Technical Reports Server (NTRS)

    Li, Wei; Saleeb, Atef F.

    1995-01-01

    This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state-of-the-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present second part of the report, we focus on the specific details of the numerical schemes, and associated computer algorithms, for the finite-element implementation of GVIPS and NAV models.
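
    The following sketch is not the GVIPS/NAV return-mapping algorithm; it only illustrates the pattern named above of complementing a basic Newton-Raphson iteration with a line-search safeguard, here a simple backtracking search on the residual norm applied to an arbitrary toy system.

```python
import numpy as np

def newton_line_search(R, J, x0, tol=1e-10, max_iter=50, beta=0.5, max_backtracks=30):
    """Newton-Raphson iteration globalized by a backtracking line search on the
    residual norm: the full step is repeatedly halved until ||R|| decreases."""
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        r = R(x)
        nrm = np.linalg.norm(r)
        if nrm < tol:
            return x, k
        dx = np.linalg.solve(J(x), -r)
        alpha = 1.0
        for _ in range(max_backtracks):          # halve the step until ||R|| drops
            if np.linalg.norm(R(x + alpha * dx)) < nrm:
                break
            alpha *= beta
        x = x + alpha * dx
    return x, max_iter

# arbitrary toy residual with solution near (1.30, 1.20)
R = lambda x: np.array([x[0]**3 - x[1] - 1.0, x[0] + x[1]**3 - 3.0])
J = lambda x: np.array([[3.0 * x[0]**2, -1.0], [1.0, 3.0 * x[1]**2]])
print(newton_line_search(R, J, np.array([2.0, 2.0])))
```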

  8. Gravitation in Material Media

    ERIC Educational Resources Information Center

    Ridgely, Charles T.

    2011-01-01

    When two gravitating bodies reside in a material medium, Newton's law of universal gravitation must be modified to account for the presence of the medium. A modified expression of Newton's law is known in the literature, but lacks a clear connection with existing gravitational theory. Newton's law in the presence of a homogeneous material medium…

  9. Combined genetic algorithm and multiple linear regression (GA-MLR) optimizer: Application to multi-exponential fluorescence decay surface.

    PubMed

    Fisz, Jacek J

    2006-12-07

    The optimization approach based on the genetic algorithm (GA) combined with the multiple linear regression (MLR) method is discussed. The GA-MLR optimizer is designed for the nonlinear least-squares problems in which the model functions are linear combinations of nonlinear functions. GA optimizes the nonlinear parameters, and the linear parameters are calculated from MLR. GA-MLR is an intuitive optimization approach and it exploits all advantages of the genetic algorithm technique. This optimization method results from an appropriate combination of two well-known optimization methods. The MLR method is embedded in the GA optimizer and linear and nonlinear model parameters are optimized in parallel. The MLR method is the only strictly mathematical "tool" involved in GA-MLR. The GA-MLR approach simplifies and accelerates considerably the optimization process because the linear parameters are not the fitted ones. Its properties are exemplified by the analysis of the kinetic biexponential fluorescence decay surface corresponding to a two-excited-state interconversion process. A short discussion of the variable projection (VP) algorithm, designed for the same class of optimization problems, is presented. VP is a very advanced mathematical formalism that involves the methods of nonlinear functionals, algebra of linear projectors, and the formalism of Fréchet derivatives and pseudo-inverses. Additional explanatory comments are added on the application of the recently introduced GA-NR optimizer to simultaneous recovery of linear and weakly nonlinear parameters occurring in the same optimization problem together with nonlinear parameters. The GA-NR optimizer combines the GA method with the NR method, in which the minimum-value condition for the quadratic approximation to chi(2), obtained from the Taylor series expansion of chi(2), is recovered by means of the Newton-Raphson algorithm. The application of the GA-NR optimizer to model functions which are multi-linear combinations of nonlinear functions is indicated. The VP algorithm does not distinguish the weakly nonlinear parameters from the nonlinear ones and it does not apply to the model functions which are multi-linear combinations of nonlinear functions.

  10. A Numerical Comparison of Barrier and Modified Barrier Methods for Large-Scale Bound-Constrained Optimization

    NASA Technical Reports Server (NTRS)

    Nash, Stephen G.; Polyak, R.; Sofer, Ariela

    1994-01-01

    When a classical barrier method is applied to the solution of a nonlinear programming problem with inequality constraints, the Hessian matrix of the barrier function becomes increasingly ill-conditioned as the solution is approached. As a result, it may be desirable to consider alternative numerical algorithms. We compare the performance of two methods motivated by barrier functions. The first is a stabilized form of the classical barrier method, where a numerically stable approximation to the Newton direction is used when the barrier parameter is small. The second is a modified barrier method where a barrier function is applied to a shifted form of the problem, and the resulting barrier terms are scaled by estimates of the optimal Lagrange multipliers. The condition number of the Hessian matrix of the resulting modified barrier function remains bounded as the solution to the constrained optimization problem is approached. Both of these techniques can be used in the context of a truncated-Newton method, and hence can be applied to large problems, as well as on parallel computers. In this paper, both techniques are applied to problems with bound constraints and we compare their practical behavior.

  11. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    NASA Technical Reports Server (NTRS)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system 5, as well as a nonlinear amplifier 6.

  12. Application of PSAT to Load Flow Analysis with STATCOM under Load Increase Scenario and Line Contingencies

    NASA Astrophysics Data System (ADS)

    Telang, Aparna S.; Bedekar, P. P.

    2017-09-01

    Load flow analysis is the initial and essential step for any power system computation. It is required for choosing better options for power system expansion to meet ever-increasing load demand. Implementation of a Flexible AC Transmission System (FACTS) device like STATCOM, which has fast and very flexible control, in the load flow is one of the important tasks for power system researchers. This paper presents a simple and systematic approach for steady state power flow calculations with the FACTS controller, the static synchronous compensator (STATCOM), using command line usage of the MATLAB tool, power system analysis toolbox (PSAT). The complexity of MATLAB language programming increases due to the incorporation of STATCOM in an existing Newton-Raphson load flow algorithm. Thus, the main contribution of this paper is to show how command line usage of the user-friendly MATLAB tool PSAT can extensively be used for quicker and wider interpretation of the results of load flow with STATCOM. The novelty of this paper lies in the method of applying the load increase pattern, where the active and reactive loads have been changed simultaneously at all the load buses under consideration for creating stressed conditions for load flow analysis with STATCOM. The performance has been evaluated on many standard IEEE test systems and the results for the standard IEEE-30 bus system, IEEE-57 bus system, and IEEE-118 bus system are presented.
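
    For reference, a bare-bones Newton-Raphson load flow for a two-bus system (no STATCOM or other FACTS device, and a finite-difference Jacobian instead of the usual analytic polar-form blocks) might look like the sketch below; the line impedance and load values are assumptions.

```python
import numpy as np

# Two-bus toy: a slack bus (1.0 pu, angle 0) feeds one PQ bus through a single
# line; unknowns are the PQ-bus angle and voltage magnitude. Assumed data:
z = 0.02 + 0.08j                       # line impedance (pu)
y = 1.0 / z
Ybus = np.array([[y, -y], [-y, y]])
P_load, Q_load = 0.8, 0.3              # PQ-bus demand (pu)

def mismatch(u):
    theta2, v2 = u
    V = np.array([1.0 + 0.0j, v2 * np.exp(1j * theta2)])
    S2 = V[1] * np.conj(Ybus[1, :] @ V)          # complex injection at bus 2
    return np.array([S2.real + P_load, S2.imag + Q_load])

u = np.array([0.0, 1.0])               # flat start
for it in range(20):                   # Newton-Raphson with numerical Jacobian
    F = mismatch(u)
    if np.max(np.abs(F)) < 1e-10:
        break
    J = np.empty((2, 2))
    for j in range(2):
        e = np.zeros(2); e[j] = 1e-7
        J[:, j] = (mismatch(u + e) - F) / 1e-7
    u = u - np.linalg.solve(J, F)
print("bus-2 angle (rad), |V| (pu), iterations:", u[0], u[1], it)
```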

  13. An Improved Computational Method for the Calculation of Mixture Liquid-Vapor Critical Points

    NASA Astrophysics Data System (ADS)

    Dimitrakopoulos, Panagiotis; Jia, Wenlong; Li, Changjun

    2014-05-01

    Knowledge of critical points is important to determine the phase behavior of a mixture. This work proposes a reliable and accurate method in order to locate the liquid-vapor critical point of a given mixture. The theoretical model is developed from the rigorous definition of critical points, based on the SRK equation of state (SRK EoS) or alternatively, on the PR EoS. In order to solve the resulting system of nonlinear equations, an improved method is introduced into an existing Newton-Raphson algorithm, which can calculate all the variables simultaneously in each iteration step. The improvements mainly focus on the derivatives of the Jacobian matrix, on the convergence criteria, and on the damping coefficient. As a result, all equations and related conditions required for the computation of the scheme are illustrated in this paper. Finally, experimental data for the critical points of 44 mixtures are adopted in order to validate the method. For the SRK EoS, average absolute errors of the predicted critical-pressure and critical-temperature values are 123.82 kPa and 3.11 K, respectively, whereas the commercial software package Calsep PVTSIM's prediction errors are 131.02 kPa and 3.24 K. For the PR EoS, the two above mentioned average absolute errors are 129.32 kPa and 2.45 K, while the PVTSIM's errors are 137.24 kPa and 2.55 K, respectively.

  14. Circular Regression in a Dual-Phase Lock-In Amplifier for Coherent Detection of Weak Signal

    PubMed Central

    Wang, Gaoxuan; Reboul, Serge; Fertein, Eric

    2017-01-01

    Lock-in amplification (LIA) is an effective approach for recovery of a weak signal buried in noise. Determination of the input signal amplitude in a classical dual-phase LIA is based on incoherent detection which leads to a biased estimation at low signal-to-noise ratio. This article presents, for the first time to our knowledge, a new architecture of LIA involving phase estimation with a linear-circular regression for coherent detection. The proposed phase delay estimate, between the input signal and a reference, is defined as the maximum likelihood estimate for a set of observations distributed according to a von Mises distribution. In our implementation this maximum is obtained with a Newton-Raphson algorithm. We show that the proposed LIA architecture provides an unbiased estimate of the input signal amplitude. Theoretical simulations with synthetic data demonstrate that the classical LIA estimates are biased for SNR of the input signal lower than −20 dB, while the proposed LIA is able to accurately recover the weak signal amplitude. The novel approach is applied to an optical sensor for accurate measurement of NO2 concentrations at the sub-ppbv level in the atmosphere. Side-by-side intercomparison measurements with a commercial LIA (SR830, Stanford Research Inc., Sunnyvale, CA, USA) demonstrate that the proposed LIA has an identical performance in terms of measurement accuracy and precision but with simplified hardware architecture. PMID:29135951
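
    A stripped-down stand-in for the phase estimation step is sketched below: Newton-Raphson maximization of the von Mises log-likelihood for a mean direction from simulated phase observations. It omits the linear-circular regression structure of the actual LIA, and the simulated concentration parameter and true phase are arbitrary.

```python
import numpy as np

def vonmises_phase_mle(theta, mu0=0.0, tol=1e-12, max_iter=50):
    """Newton-Raphson maximization of the von Mises log-likelihood
    l(mu) proportional to sum(cos(theta_i - mu)); the concentration parameter
    kappa cancels from the update."""
    mu = mu0
    for k in range(max_iter):
        s = np.sum(np.sin(theta - mu))    # proportional to l'(mu)
        c = np.sum(np.cos(theta - mu))    # proportional to -l''(mu)
        step = s / c
        mu += step
        if abs(step) < tol:
            break
    return np.angle(np.exp(1j * mu)), k   # wrap the estimate to (-pi, pi]

# synthetic noisy phase observations around a true delay of 0.7 rad
rng = np.random.default_rng(0)
theta = rng.vonmises(0.7, 4.0, 2000)
print(vonmises_phase_mle(theta))          # close to 0.7 (the circular mean)
```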

  15. RIO: a new computational framework for accurate initial data of binary black holes

    NASA Astrophysics Data System (ADS)

    Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.

    2018-06-01

    We present a computational framework (Rio) in the ADM 3+1 approach for numerical relativity. This work enables us to carry out high resolution calculations for initial data of two arbitrary black holes. We use the transverse conformal treatment, the Bowen-York and the puncture methods. For the numerical solution of the Hamiltonian constraint we use the domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code. This code allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.

  16. Allocation of Transaction Cost to Market Participants Using an Analytical Method in Deregulated Market

    NASA Astrophysics Data System (ADS)

    Jeyasankari, S.; Jeslin Drusila Nesamalar, J.; Charles Raja, S.; Venkatesh, P.

    2014-04-01

    Transmission cost allocation is one of the major challenges in transmission open access faced by the electric power sector. The purpose of this work is to provide an analytical method for allocating transmission transaction cost in deregulated market. This research work provides a usage based transaction cost allocation method based on line-flow impact factor (LIF) which relates the power flow in each line with respect to transacted power for the given transaction. This method provides the impact of line flows without running iterative power flow solution and is well suited for real time applications. The proposed method is compared with the Newton-Raphson (NR) method of cost allocation on sample six bus and practical Indian utility 69 bus systems by considering multilateral transaction.

  17. A robust direct-integration method for rotorcraft maneuver and periodic response

    NASA Technical Reports Server (NTRS)

    Panda, Brahmananda

    1992-01-01

    The Newmark-Beta method and the Newton-Raphson iteration scheme are combined to develop a direct-integration method for evaluating the maneuver and periodic-response expressions for rotorcraft. The method requires the generation of Jacobians and includes higher derivatives in the formulation of the geometric stiffness matrix to enhance the convergence of the system. The method leads to effective convergence with nonlinear structural dynamics and aerodynamic terms. Singularities in the matrices can be addressed with the method as they arise from a Lagrange multiplier approach for coupling equations with nonlinear constraints. The method is also shown to be general enough to handle singularities from quasisteady control-system models. The method is shown to be more general and robust than the similar 2GCHAS method for analyzing rotorcraft dynamics.

  18. Thermochemical nonequilibrium in atomic hydrogen at elevated temperatures

    NASA Technical Reports Server (NTRS)

    Scott, R. K.

    1972-01-01

    A numerical study of the nonequilibrium flow of atomic hydrogen in a cascade arc was performed to obtain insight into the physics of the hydrogen cascade arc. A rigorous mathematical model of the flow problem was formulated, incorporating the important nonequilibrium transport phenomena and atomic processes which occur in atomic hydrogen. Realistic boundary conditions, including consideration of the wall electrostatic sheath phenomenon, were included in the model. The governing equations of the asymptotic region of the cascade arc were obtained by writing conservation of mass and energy equations for the electron subgas, an energy conservation equation for heavy particles and an equation of state. Finite-difference operators for variable grid spacing were applied to the governing equations and the resulting system of strongly coupled, stiff equations were solved numerically by the Newton-Raphson method.

  19. A FORTRAN program for multivariate survival analysis on the personal computer.

    PubMed

    Mulder, P G

    1988-01-01

    In this paper a FORTRAN program is presented for multivariate survival or life table regression analysis in a competing risks' situation. The relevant failure rate (for example, a particular disease or mortality rate) is modelled as a log-linear function of a vector of (possibly time-dependent) explanatory variables. The explanatory variables may also include the variable time itself, which is useful for parameterizing piecewise exponential time-to-failure distributions in a Gompertz-like or Weibull-like way as a more efficient alternative to Cox's proportional hazards model. Maximum likelihood estimates of the coefficients of the log-linear relationship are obtained from the iterative Newton-Raphson method. The program runs on a personal computer under DOS; running time is quite acceptable, even for large samples.

  20. Method and Apparatus for Predicting Unsteady Pressure and Flow Rate Distribution in a Fluid Network

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok K. (Inventor)

    2009-01-01

    A method and apparatus for analyzing steady state and transient flow in a complex fluid network, modeling phase changes, compressibility, mixture thermodynamics, external body forces such as gravity and centrifugal force and conjugate heat transfer. In some embodiments, a graphical user interface provides for the interactive development of a fluid network simulation having nodes and branches. In some embodiments, mass, energy, and specific conservation equations are solved at the nodes, and momentum conservation equations are solved in the branches. In some embodiments, contained herein are data objects for computing thermodynamic and thermophysical properties for fluids. In some embodiments, the systems of equations describing the fluid network are solved by a hybrid numerical method that is a combination of the Newton-Raphson and successive substitution methods.

  1. A parallel variable metric optimization algorithm

    NASA Technical Reports Server (NTRS)

    Straeter, T. A.

    1973-01-01

    An algorithm, designed to exploit the parallel computing or vector streaming (pipeline) capabilities of computers is presented. When p is the degree of parallelism, then one cycle of the parallel variable metric algorithm is defined as follows: first, the function and its gradient are computed in parallel at p different values of the independent variable; then the metric is modified by p rank-one corrections; and finally, a single univariant minimization is carried out in the Newton-like direction. Several properties of this algorithm are established. The convergence of the iterates to the solution is proved for a quadratic functional on a real separable Hilbert space. For a finite-dimensional space the convergence is in one cycle when p equals the dimension of the space. Results of numerical experiments indicate that the new algorithm will exploit parallel or pipeline computing capabilities to effect faster convergence than serial techniques.

  2. Density reconstruction in multiparameter elastic full-waveform inversion

    NASA Astrophysics Data System (ADS)

    Sun, Min'ao; Yang, Jizhong; Dong, Liangguo; Liu, Yuzhu; Huang, Chao

    2017-12-01

    Elastic full-waveform inversion (EFWI) is a quantitative data fitting procedure that recovers multiple subsurface parameters from multicomponent seismic data. As density is involved in addition to P- and S-wave velocities, the multiparameter EFWI suffers from more serious tradeoffs. In addition, compared with P- and S-wave velocities, the misfit function is less sensitive to density perturbation. Thus, a robust density reconstruction remains a difficult problem in multiparameter EFWI. In this paper, we develop an improved scattering-integral-based truncated Gauss-Newton method to simultaneously recover P- and S-wave velocities and density in EFWI. In this method, the inverse Gauss-Newton Hessian has been estimated by iteratively solving the Gauss-Newton equation with a matrix-free conjugate gradient algorithm. Therefore, it is able to properly handle the parameter tradeoffs. To give a detailed illustration of the tradeoffs between P- and S-wave velocities and density in EFWI, wavefield-separated sensitivity kernels and the Gauss-Newton Hessian are numerically computed, and their distribution characteristics are analyzed. Numerical experiments on a canonical inclusion model and a modified SEG/EAGE Overthrust model have demonstrated that the proposed method can effectively mitigate the tradeoff effects, and improve multiparameter gradients. Thus, a high convergence rate and an accurate density reconstruction can be achieved.

  3. Combined magnetic vector-scalar potential finite element computation of 3D magnetic field and performance of modified Lundell alternators in Space Station applications. Ph.D. Thesis

    NASA Technical Reports Server (NTRS)

    Wang, Ren H.

    1991-01-01

    A method of combined use of magnetic vector potential (MVP) based finite element (FE) formulations and magnetic scalar potential (MSP) based FE formulations for computation of three-dimensional (3D) magnetostatic fields is developed. This combined MVP-MSP 3D-FE method leads to considerable reduction by nearly a factor of 3 in the number of unknowns in comparison to the number of unknowns which must be computed in global MVP based FE solutions. This method allows one to incorporate portions of iron cores sandwiched in between coils (conductors) in current-carrying regions. Thus, it greatly simplifies the geometries of current carrying regions (in comparison with the exclusive MSP based methods) in electric machinery applications. A unique feature of this approach is that the global MSP solution is single valued in nature, that is, no branch cut is needed. This is again a superiority over the exclusive MSP based methods. A Newton-Raphson procedure with a concept of an adaptive relaxation factor was developed and successfully used in solving the 3D-FE problem with magnetic material anisotropy and nonlinearity. Accordingly, this combined MVP-MSP 3D-FE method is most suited for solution of large scale global type magnetic field computations in rotating electric machinery with very complex magnetic circuit geometries, as well as nonlinear and anisotropic material properties.

  4. Size-dependent axisymmetric vibration of functionally graded circular plates in bifurcation/limit point instability

    NASA Astrophysics Data System (ADS)

    Ashoori, A. R.; Vanini, S. A. Sadough; Salari, E.

    2017-04-01

    In the present paper, the vibration behavior of size-dependent functionally graded (FG) circular microplates subjected to thermal loading is investigated, for the first time, in the pre/post-buckling regimes of bifurcation/limit-load instability. Two kinds of frequently used thermal loading, i.e., uniform temperature rise and heat conduction across the thickness direction, are considered. Thermo-mechanical material properties of the FG plate are assumed to vary smoothly and continuously throughout the thickness based on a power law model. Modified couple stress theory is exploited to describe the size dependency of the microplate. The nonlinear governing equations of motion and associated boundary conditions are extracted through the generalized form of Hamilton's principle and von-Karman geometric nonlinearity for the vibration analysis of circular FG plates including size effects. The Ritz finite element method is then employed to construct the matrix representation of the governing equations, which are solved by two different strategies: the Newton-Raphson scheme and the cylindrical arc-length method. Moreover, a parametric study is carried out to examine in detail the effects of several parameters such as the material length scale parameter, temperature distributions, type of buckling, thickness to radius ratio, boundary conditions and power law index on the dimensionless frequency of post-buckled/snapped size-dependent FG plates. It is found that the material length scale parameter and thermal loading have a significant effect on vibration characteristics of size-dependent circular FG plates.

  5. Dose Titration Algorithm Tuning (DTAT) should supersede ‘the’ Maximum Tolerated Dose (MTD) in oncology dose-finding trials

    PubMed Central

    Norris, David C.

    2017-01-01

    Background. Absent adaptive, individualized dose-finding in early-phase oncology trials, subsequent ‘confirmatory’ Phase III trials risk suboptimal dosing, with resulting loss of statistical power and reduced probability of technical success for the investigational therapy. While progress has been made toward explicitly adaptive dose-finding and quantitative modeling of dose-response relationships, most such work continues to be organized around a concept of ‘the’ maximum tolerated dose (MTD). The purpose of this paper is to demonstrate concretely how the aim of early-phase trials might be conceived, not as ‘dose-finding’, but as dose titration algorithm (DTA)-finding. Methods. A Phase I dosing study is simulated, for a notional cytotoxic chemotherapy drug, with neutropenia constituting the critical dose-limiting toxicity. The drug’s population pharmacokinetics and myelosuppression dynamics are simulated using published parameter estimates for docetaxel. The amenability of this model to linearization is explored empirically. The properties of a simple DTA targeting a neutrophil nadir of 500 cells/mm3 using a Newton-Raphson heuristic are explored through simulation in 25 simulated study subjects. Results. Individual-level myelosuppression dynamics in the simulation model approximately linearize under simple transformations of neutrophil concentration and drug dose. The simulated dose titration exhibits largely satisfactory convergence, with great variance in individualized optimal dosing. Some titration courses exhibit overshooting. Conclusions. The large inter-individual variability in simulated optimal dosing underscores the need to replace ‘the’ MTD with an individualized concept of MTDi. To illustrate this principle, the simplest possible DTA capable of realizing such a concept is demonstrated. Qualitative phenomena observed in this demonstration support discussion of the notion of tuning such algorithms. Although here illustrated specifically in relation to cytotoxic chemotherapy, the DTAT principle appears similarly applicable to Phase I studies of cancer immunotherapy and molecularly targeted agents. PMID:28663782
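
    The sketch below does not use the paper's docetaxel pharmacokinetic/myelosuppression model; it substitutes a hypothetical log-linear dose-nadir relation purely to illustrate how a Newton-Raphson-style titration rule with an assumed sensitivity can steer each cycle's dose toward a 500 cells/mm3 nadir target.

```python
import numpy as np

# Hypothetical illustration only -- NOT the paper's docetaxel PK/PD model.
# Assume the observed neutrophil nadir falls roughly log-linearly with dose,
# log(nadir) ~ a - b*log(dose), with cycle-to-cycle noise.
rng = np.random.default_rng(1)
a_true, b_true = np.log(4000.0), 1.3      # hidden "patient" parameters
def observe_nadir(dose):
    return float(np.exp(a_true - b_true * np.log(dose) + rng.normal(0.0, 0.05)))

target = 500.0        # target nadir, cells/mm3
slope_prior = -1.2    # assumed sensitivity d log(nadir) / d log(dose)
dose = 20.0           # starting dose, arbitrary units

history = []
for cycle in range(6):    # one Newton-Raphson-style correction per cycle
    nadir = observe_nadir(dose)
    history.append((round(dose, 2), round(nadir)))
    # Newton update on log(dose) using the approximate (prior) derivative
    dose = np.exp(np.log(dose) + (np.log(target) - np.log(nadir)) / slope_prior)
print(history)            # doses settle near the dose giving a ~500 nadir
```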

  6. Some modifications of Newton's method for the determination of the steady-state response of nonlinear oscillatory circuits

    NASA Astrophysics Data System (ADS)

    Grosz, F. B., Jr.; Trick, T. N.

    1982-07-01

    It is proposed that nondominant states should be eliminated from the Newton algorithm in the steady-state analysis of nonlinear oscillatory systems. This technique not only improves convergence, but also reduces the size of the sensitivity matrix so that less computation is required for each iteration. One or more periods of integration should be performed after each periodic state estimation before the sensitivity computations are made for the next periodic state estimation. These extra periods of integration between Newton iterations are found to allow the fast states due to parasitic effects to settle, which enables the Newton algorithm to make a better prediction. In addition, the reliability of the algorithm is improved in high Q oscillator circuits by both local and global damping in which the amount of damping is proportional to the difference between the initial and final state values.

  7. What can numerical computation do for the history of science? (a study of an orbit drawn by Newton in a letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Cardozo Dias, Penha Maria; Stuchi, T. J.

    2013-11-01

    In a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. The drawing of the orbit may indicate how and when Newton developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove that Hooke’s method is a second-order symplectic area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done, exploring the geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.
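
    The comparison below is not a reconstruction of Hooke's geometric construction or of Newton's drawing; it is only a numerical illustration, for a constant-magnitude central attraction, of why a second-order symplectic integrator (leapfrog, used here as a stand-in) keeps the energy bounded while a first-order non-symplectic scheme drifts. Initial conditions and the force constant are arbitrary.

```python
import numpy as np

def accel(r, g=1.0):
    """Constant-magnitude attraction toward the centre: a = -g * r / |r|."""
    return -g * r / np.linalg.norm(r)

def energy(r, v, g=1.0):
    return 0.5 * v @ v + g * np.linalg.norm(r)   # potential of a constant force

def integrate(r0, v0, dt, n, scheme):
    r, v = np.array(r0, dtype=float), np.array(v0, dtype=float)
    for _ in range(n):
        if scheme == "euler":                    # first order, non-symplectic
            r, v = r + dt * v, v + dt * accel(r)
        else:                                    # leapfrog: 2nd order, symplectic
            v_half = v + 0.5 * dt * accel(r)
            r = r + dt * v_half
            v = v_half + 0.5 * dt * accel(r)
    return r, v

r0, v0, dt, n = [1.0, 0.0], [0.0, 0.8], 0.01, 20000
E0 = energy(np.array(r0), np.array(v0))
for scheme in ("euler", "leapfrog"):
    r, v = integrate(r0, v0, dt, n, scheme)
    print(scheme, "relative energy drift:", abs(energy(r, v) - E0) / E0)
```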

  8. A system-approach to the elastohydrodynamic lubrication point-contact problem

    NASA Technical Reports Server (NTRS)

    Lim, Sang Gyu; Brewe, David E.

    1991-01-01

    The classical EHL (elastohydrodynamic lubrication) point contact problem is solved using a new system-approach, similar to that introduced by Houpert and Hamrock for the line-contact problem. Introducing a body-fitted coordinate system, the troublesome free-boundary is transformed to a fixed domain. The Newton-Raphson method can then be used to determine the pressure distribution and the cavitation boundary subject to the Reynolds boundary condition. This method provides an efficient and rigorous way of solving the EHL point contact problem with the aid of a supercomputer and a promising method to deal with the transient EHL point contact problem. A typical pressure distribution and film thickness profile are presented and the minimum film thicknesses are compared with the solution of Hamrock and Dowson. The details of the cavitation boundaries for various operating parameters are discussed.

  9. TAP 2: A finite element program for thermal analysis of convectively cooled structures

    NASA Technical Reports Server (NTRS)

    Thornton, E. A.

    1980-01-01

    A finite element computer program (TAP 2) for steady-state and transient thermal analyses of convectively cooled structures is presented. The program has a finite element library of six elements: two conduction/convection elements to model heat transfer in a solid, two convection elements to model heat transfer in a fluid, and two integrated conduction/convection elements to represent combined heat transfer in tubular and plate/fin fluid passages. Nonlinear thermal analysis due to temperature-dependent thermal parameters is performed using the Newton-Raphson iteration method. Transient analyses are performed using an implicit Crank-Nicolson time integration scheme with consistent or lumped capacitance matrices as an option. Program output includes nodal temperatures and element heat fluxes. Pressure drops in fluid passages may be computed as an option. User instructions and sample problems are presented in appendixes.

  10. Computing maximum-likelihood estimates for parameters of the National Descriptive Model of Mercury in Fish

    USGS Publications Warehouse

    Donato, David I.

    2012-01-01

    This report presents the mathematical expressions and the computational techniques required to compute maximum-likelihood estimates for the parameters of the National Descriptive Model of Mercury in Fish (NDMMF), a statistical model used to predict the concentration of methylmercury in fish tissue. The expressions and techniques reported here were prepared to support the development of custom software capable of computing NDMMF parameter estimates more quickly and using less computer memory than is currently possible with available general-purpose statistical software. Computation of maximum-likelihood estimates for the NDMMF by numerical solution of a system of simultaneous equations through repeated Newton-Raphson iterations is described. This report explains the derivation of the mathematical expressions required for computational parameter estimation in sufficient detail to facilitate future derivations for any revised versions of the NDMMF that may be developed.

  11. Optimum Suction Distribution for Transition Control

    NASA Technical Reports Server (NTRS)

    Balakumar, P.; Hall, P.

    1996-01-01

    The optimum suction distribution which gives the longest laminar region for a given total suction is computed. The goal here is to provide the designer with a method to find the best suction distribution subject to some overall constraint applied to the suction. We formulate the problem using the Lagrangian multiplier method with constraints. The resulting non-linear system of equations is solved using the Newton-Raphson technique. The computations are performed for a Blasius boundary layer on a flat-plate and crossflow cases. For the Blasius boundary layer, the optimum suction distribution peaks upstream of the maximum growth rate region and remains flat in the middle before it decreases to zero at the end of the transition point. For the stationary and travelling crossflow instability, the optimum suction peaks upstream of the maximum growth rate region and decreases gradually to zero.

  12. Higher Order Time Integration Schemes for the Unsteady Navier-Stokes Equations on Unstructured Meshes

    NASA Technical Reports Server (NTRS)

    Jothiprasad, Giridhar; Mavriplis, Dimitri J.; Caughey, David A.; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    The efficiency gains obtained using higher-order implicit Runge-Kutta schemes as compared with the second-order accurate backward difference schemes for the unsteady Navier-Stokes equations are investigated. Three different algorithms for solving the nonlinear system of equations arising at each timestep are presented. The first algorithm (NMG) is a pseudo-time-stepping scheme which employs a non-linear full approximation storage (FAS) agglomeration multigrid method to accelerate convergence. The other two algorithms are based on Inexact Newton's methods. The linear system arising at each Newton step is solved using iterative/Krylov techniques and left preconditioning is used to accelerate convergence of the linear solvers. One of the methods (LMG) uses Richardson's iterative scheme for solving the linear system at each Newton step while the other (PGMRES) uses the Generalized Minimal Residual method. Results demonstrating the relative superiority of these Newton's methods based schemes are presented. Efficiency gains as high as 10 are obtained by combining the higher-order time integration schemes with the more efficient nonlinear solvers.
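
    The NMG, LMG and PGMRES solvers of the paper are not reproduced here; the sketch below shows a generic Jacobian-free Newton-Krylov iteration in which each Newton correction is obtained from SciPy's GMRES acting on a finite-difference directional derivative, applied to an assumed 1-D Bratu-like residual.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk(F, x0, tol=1e-9, max_newton=30, fd_eps=1e-7):
    """Jacobian-free Newton-Krylov sketch: each Newton correction solves
    J(x) dx = -F(x) with GMRES, where the action J(x) @ v is approximated by a
    finite difference of F, so the Jacobian is never formed."""
    x = np.asarray(x0, dtype=float)
    n = x.size
    for k in range(max_newton):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            return x, k
        def jv(v):                               # matrix-free J(x) @ v
            return (F(x + fd_eps * v) - Fx) / fd_eps
        J = LinearOperator((n, n), matvec=jv)
        dx, info = gmres(J, -Fx)
        x = x + dx
    return x, max_newton

# assumed stand-in problem: a 1-D Bratu-like residual on 50 nodes
n = 50
def F(u):
    r = np.empty(n)
    r[0], r[-1] = u[0], u[-1]                    # Dirichlet ends
    r[1:-1] = u[2:] - 2.0 * u[1:-1] + u[:-2] + 0.05 * np.exp(u[1:-1])
    return r

u, iters = jfnk(F, np.zeros(n))
print("Newton iterations:", iters, " max |u|:", np.abs(u).max())
```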

  13. A quasi-Newton algorithm for large-scale nonlinear equations.

    PubMed

    Huang, Linghua

    2017-01-01

    In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by sub-algorithm is defined as main algorithm, where a new nonmonotone line search technique is presented to get the step length [Formula: see text]. The given nonmonotone line search technique can avoid computing the Jacobian matrix. The global convergence and the [Formula: see text]-order convergent rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.

  14. Multilevel Iterative Methods in Nonlinear Computational Plasma Physics

    NASA Astrophysics Data System (ADS)

    Knoll, D. A.; Finn, J. M.

    1997-11-01

    Many applications in computational plasma physics involve the implicit numerical solution of coupled systems of nonlinear partial differential equations or integro-differential equations. Such problems arise in MHD, systems of Vlasov-Fokker-Planck equations, edge plasma fluid equations. We have been developing matrix-free Newton-Krylov algorithms for such problems and have applied these algorithms to the edge plasma fluid equations [1,2] and to the Vlasov-Fokker-Planck equation [3]. Recently we have found that with increasing grid refinement, the number of Krylov iterations required per Newton iteration has grown unmanageable [4]. This has led us to the study of multigrid methods as a means of preconditioning matrix-free Newton-Krylov methods. In this poster we will give details of the general multigrid preconditioned Newton-Krylov algorithm, as well as algorithm performance details on problems of interest in the areas of magnetohydrodynamics and edge plasma physics. Work supported by US DoE 1. Knoll and McHugh, J. Comput. Phys., 116, pg. 281 (1995) 2. Knoll and McHugh, Comput. Phys. Comm., 88, pg. 141 (1995) 3. Mousseau and Knoll, J. Comput. Phys. (1997) (to appear) 4. Knoll and McHugh, SIAM J. Sci. Comput. 19, (1998) (to appear)

  15. Imbalance of Ecosystems and the Modified Newton's 3 Laws of Change

    NASA Astrophysics Data System (ADS)

    Lin, H.

    2013-12-01

    Sustainability calls for the unity of human knowledge that bridges the present "two cultures" gulf between the sciences and the humanities, and the transition from the age of machine to the age of the environment quests for harmony with nature (so-called eco-civilization). Ecosystems are fundamentally different from machines, where individual components contain complex organisms instead of identical nonliving entities. Because of heterogeneity, diversity, self-organization, and openness, imbalances abound in nature. These are reflected in entropy increase over time (S > 0) and gradient persistence over space (F > 0). In this paper, three modified Newton's laws of change for ecosystems are suggested, and examples of imbalances from landscape-soil-water-ecosystem-climate will be illustrated. ● Newton's 1st law of motion: ∑F=0 → dv/dt=0. i.e., if net force acting on an object is zero, then the object's velocity remains unchanged. Modified Newton's 1st law of change (imbalance #1): ∑F>0 → dv/dt≥0. i.e., unavoidable forcing exists in nature (∑F>0), thus change always happens; however, with inertia/resistance in some systems or minimum threshold needed to change, dv/dt≥0. ● Newton's 2nd law of motion: ∑F=ma. i.e., acceleration is inversely proportional to body mass. Modified Newton's 2nd law of change (imbalance #2): ∑F≠ma. i.e., either 1) it is hard to make change because of resilience, self-adjustment, nonlinearity of interactions-feedbacks in living systems (∑F≥ma), or 2) there is possible threshold behavior or sudden collapse of a system (∑F

  16. Peak-Seeking Optimization of Spanwise Lift Distribution for Wings in Formation Flight

    NASA Technical Reports Server (NTRS)

    Hanson, Curtis E.; Ryan, Jack

    2012-01-01

    A method is presented for the in-flight optimization of the lift distribution across the wing for minimum drag of an aircraft in formation flight. The usual elliptical distribution that is optimal for a given wing with a given span is no longer optimal for the trailing wing in a formation due to the asymmetric nature of the encountered flow field. Control surfaces along the trailing edge of the wing can be configured to obtain a non-elliptical profile that is more optimal in terms of minimum combined induced and profile drag. Due to the difficult-to-predict nature of formation flight aerodynamics, a Newton-Raphson peak-seeking controller is used to identify in real time the best aileron and flap deployment scheme for minimum total drag. Simulation results show that the peak-seeking controller correctly identifies an optimal trim configuration that provides additional drag savings above those achieved with conventional anti-symmetric aileron trim.
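
    The drag map below is a made-up noisy quadratic, not formation-flight aerodynamics; the sketch only illustrates the Newton-Raphson peak-seeking idea of estimating a gradient and Hessian from perturbed measurements of the cost and stepping toward the stationary point.

```python
import numpy as np

# Hypothetical noisy drag map over two trim settings (aileron, flap).
def measured_drag(u, rng):
    d = u - np.array([2.0, -1.0])                # assumed optimal trim
    return 100.0 + 3.0*d[0]**2 + 2.0*d[1]**2 + 1.2*d[0]*d[1] + rng.normal(0.0, 0.02)

def newton_peak_seek(cost, u0, steps=6, h=0.5, seed=0):
    """Newton-Raphson peak seeking: estimate the gradient and Hessian of the
    measured cost by finite differences about the current trim point, then
    apply the Newton step u <- u - H^{-1} g."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    n = u.size
    for _ in range(steps):
        f0 = cost(u, rng)
        g = np.zeros(n)
        H = np.zeros((n, n))
        for i in range(n):
            ei = np.zeros(n); ei[i] = h
            fp, fm = cost(u + ei, rng), cost(u - ei, rng)
            g[i] = (fp - fm) / (2.0 * h)
            H[i, i] = (fp - 2.0 * f0 + fm) / h**2
            for j in range(i + 1, n):
                ej = np.zeros(n); ej[j] = h
                fpp = cost(u + ei + ej, rng); fmm = cost(u - ei - ej, rng)
                fpm = cost(u + ei - ej, rng); fmp = cost(u - ei + ej, rng)
                H[i, j] = H[j, i] = (fpp - fpm - fmp + fmm) / (4.0 * h**2)
        u = u - np.linalg.solve(H, g)            # Newton-Raphson trim update
    return u

print(newton_peak_seek(measured_drag, [0.0, 0.0]))   # approaches (2, -1)
```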

  17. Four-body trajectory optimization

    NASA Technical Reports Server (NTRS)

    Pu, C. L.; Edelbaum, T. N.

    1974-01-01

    A comprehensive optimization program has been developed for computing fuel-optimal trajectories between the earth and a point in the sun-earth-moon system. It presents methods for generating fuel optimal two-impulse trajectories which may originate at the earth or a point in space and fuel optimal three-impulse trajectories between two points in space. The extrapolation of the state vector and the computation of the state transition matrix are accomplished by the Stumpff-Weiss method. The cost and constraint gradients are computed analytically in terms of the terminal state and the state transition matrix. The 4-body Lambert problem is solved by using the Newton-Raphson method. An accelerated gradient projection method is used to optimize a 2-impulse trajectory with terminal constraint. The Davidon's Variance Method is used both in the accelerated gradient projection method and the outer loop of a 3-impulse trajectory optimization problem.

  18. A new constitutive analysis of hexagonal close-packed metal in equal channel angular pressing by crystal plasticity finite element method

    NASA Astrophysics Data System (ADS)

    Li, Hejie; Öchsner, Andreas; Yarlagadda, Prasad K. D. V.; Xiao, Yin; Furushima, Tsuyoshi; Wei, Dongbin; Jiang, Zhengyi; Manabe, Ken-ichi

    2018-01-01

    Most hexagonal close-packed (HCP) metals are lightweight metals. With the increasing application of light metal products, the production of light metal is increasingly attracting the attention of researchers worldwide. To obtain a better understanding of the deformation mechanism of HCP metals (especially Mg and its alloys), a new constitutive analysis was carried out based on previous research. In this study, combining the theories of strain gradient and continuum mechanics, the equal channel angular pressing process is analyzed and a HCP crystal plasticity constitutive model is developed especially for Mg and its alloys. The influence of elevated temperature on the deformation mechanism of the Mg alloy (slip and twin) is newly introduced into a crystal plasticity constitutive model. The solution for the newly developed constitutive model is established on the basis of Lagrangian iterations and Newton-Raphson simplification.

  19. On the numerical solution of the dynamically loaded hydrodynamic lubrication of the point contact problem

    NASA Technical Reports Server (NTRS)

    Lim, Sang G.; Brewe, David E.; Prahl, Joseph M.

    1990-01-01

    The transient analysis of hydrodynamic lubrication of a point-contact is presented. A body-fitted coordinate system is introduced to transform the physical domain to a rectangular computational domain, enabling the use of the Newton-Raphson method for determining pressures and locating the cavitation boundary, where the Reynolds boundary condition is specified. In order to obtain the transient solution, an explicit Euler method is used to effect a time march. The transient dynamic load is a sinusoidal function of time with frequency, fractional loading, and mean load as parameters. Results include the variation of the minimum film thickness and phase-lag with time as functions of excitation frequency. The results are compared with the analytic solution to the transient step bearing problem with the same dynamic loading function. The similarities of the results suggest an approximate model of the point contact minimum film thickness solution.

  20. Development of kinematic equations and determination of workspace of a 6 DOF end-effector with closed-kinematic chain mechanism

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Pooran, Farhad J.

    1989-01-01

    This report presents results from the research grant entitled Active Control of Robot Manipulators, funded by the Goddard Space Flight Center, under Grant NAG5-780, for the period July 1, 1988 to January 1, 1989. An analysis is presented of a 6 degree-of-freedom robot end-effector built to study telerobotic assembly of NASA hardware in space. Since the end-effector is required to perform high precision motion in a limited workspace, closed-kinematic mechanisms are chosen for its design. A closed-form solution is obtained for the inverse kinematic problem and an iterative procedure employing the Newton-Raphson method is proposed to solve the forward kinematic problem. A study of the end-effector workspace results in a general procedure for the workspace determination based on link constraints. Computer simulation results are presented.
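
    The sketch below is a much-reduced planar analogue (two legs and a point "platform") rather than the 6 degree-of-freedom closed-kinematic-chain end-effector discussed above; it only illustrates the pattern of inverting a closed-form inverse kinematics map with a Newton-Raphson iteration. Anchor positions and the test pose are arbitrary.

```python
import numpy as np

# Toy planar "parallel" mechanism: a platform point p = (x, y) is connected by
# two legs to fixed base anchors; the inverse kinematics (leg lengths from the
# pose) is closed-form, and the forward problem is solved by Newton-Raphson.
base = np.array([[0.0, 0.0], [4.0, 0.0]])           # assumed anchor locations

def leg_lengths(p):                                  # closed-form inverse kinematics
    return np.linalg.norm(p - base, axis=1)

def forward_kinematics(L_target, p0, tol=1e-12, max_iter=50):
    """Newton-Raphson on the residual R(p) = leg_lengths(p) - L_target."""
    p = np.asarray(p0, dtype=float)
    for k in range(max_iter):
        R = leg_lengths(p) - L_target
        if np.linalg.norm(R) < tol:
            return p, k
        # analytic Jacobian: d|p - a_i|/dp = (p - a_i) / |p - a_i|
        J = (p - base) / np.linalg.norm(p - base, axis=1)[:, None]
        p = p + np.linalg.solve(J, -R)
    return p, max_iter

L = leg_lengths(np.array([1.5, 2.0]))                # lengths of a known pose
print(forward_kinematics(L, p0=np.array([2.0, 1.0])))  # recovers (1.5, 2.0)
```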

  1. Computer-implemented method and apparatus for autonomous position determination using magnetic field data

    NASA Technical Reports Server (NTRS)

    Ketchum, Eleanor A. (Inventor)

    2000-01-01

    A computer-implemented method and apparatus for determining position of a vehicle within 100 km autonomously from magnetic field measurements and attitude data without a priori knowledge of position. An inverted dipole solution yielding two possible position solutions for each measurement of magnetic field data is deterministically calculated by a program-controlled processor solving the inverted first order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and a vehicle distance from the center of the earth. Correction schemes such as successive substitutions and the Newton-Raphson method are applied to each dipole. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths is computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.

  2. Experimental study of trajectory planning and control of a high precision robot manipulator

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami S.

    1991-01-01

    The kinematic analysis and trajectory planning are presented for a 6 DOF end-effector whose design was based on the Stewart Platform mechanism. The end-effector was used as a testbed for studying robotic assembly of NASA hardware with passive compliance. Vector analysis was employed to derive a closed-form solution for the end-effector inverse kinematic transformation. A computationally efficient numerical solution was obtained for the end-effector forward kinematic transformation using the Newton-Raphson method. Three trajectory planning schemes, two for fine motion and one for gross motion, were developed for the end-effector. Experiments conducted to evaluate the performance of the trajectory planning schemes showed excellent tracking quality with minimal errors. Current activities focus on implementing the developed trajectory planning schemes on mating and demating space-rated connectors and using the compliant platform to acquire forces/torques applied on the end-effector during the assembly task.

  3. Transparent binary-thickness coatings on metal substrates that produce binary patterns of orthogonal elliptical polarization states in reflected light

    NASA Astrophysics Data System (ADS)

    Azzam, Rasheed M. A.; Angel, Wade W.

    1992-12-01

    A reflective division-of-wavefront polarizing beam splitter is described that uses a dual-thickness transparent thin-film coating on a metal substrate. A previous design that used a partially clad substrate at the principal angle of the metal [Azzam, JOSA A 5, 1576 (1988)] is replaced by a more general one in which the substrate is coated throughout and the film thickness alternates between two non-zero levels. The incident linear polarization azimuth is chosen near, but not restricted to, 45° (measured from the plane of incidence), and the angle of incidence may be selected over a range of values. The design procedure, which uses the two-dimensional Newton-Raphson method, is applied to the SiO2-Au film-substrate system at 633 nm wavelength, as an example, and the characteristics of the various possible coatings are presented.

  4. Revisiting the factors which control the angle of shear bands in geodynamic numerical models of brittle deformation

    NASA Astrophysics Data System (ADS)

    Thieulot, Cedric

    2017-04-01

    In this work I present Finite Element numerical simulations of brittle deformation in two-dimensional Cartesian systems subjected to compressional or extensional kinematical boundary conditions with a basal velocity discontinuity. The rheology is visco-plastic and is characterised by a cohesion and an angle of internal friction (Drucker-Prager type). I will explore the influence of the following factors on the recovered shear band angles when the angle of internal friction is varied: a) element type (quadrilateral vs triangle), b) element order, c) continuous vs discontinuous pressure, d) visco-plasticity model implementation, e) the nonlinear tolerance value, f) the use of markers, g) Picard vs Newton-Raphson, h) velocity discontinuity nature. I will present these results in the light of already published literature (e.g. Lemiale et al, PEPI 171, 2008; Kaus, Tectonophysics 484, 2010).

  5. On nonlinear finite element analysis in single-, multi- and parallel-processors

    NASA Technical Reports Server (NTRS)

    Utku, S.; Melosh, R.; Islam, M.; Salama, M.

    1982-01-01

    Numerical solution of nonlinear equilibrium problems of structures by means of Newton-Raphson type iterations is reviewed. Each step of the iteration is shown to correspond to the solution of a linear problem, therefore the feasibility of the finite element method for nonlinear analysis is established. Organization and flow of data for various types of digital computers, such as single-processor/single-level memory, single-processor/two-level-memory, vector-processor/two-level-memory, and parallel-processors, with and without sub-structuring (i.e. partitioning) are given. The effect of the relative costs of computation, memory and data transfer on substructuring is shown. The idea of assigning comparable size substructures to parallel processors is exploited. Under Cholesky type factorization schemes, the efficiency of parallel processing is shown to decrease due to the occasional shared data, just as that due to the shared facilities.

  6. A theoretical study on tunneling based biosensor having a redox-active monolayer using physics based simulation

    NASA Astrophysics Data System (ADS)

    Kim, Kyoung Yeon; Lee, Won Cheol; Yun, Jun Yeon; Lee, Youngeun; Choi, Seoungwook; Jin, Seonghoon; Park, Young June

    2018-01-01

    We developed a numerical simulator to model the operation of a tunneling-based biosensor which has a redox-active monolayer. The simulator takes a realistic device structure as a simulation domain, and it employs the drift-diffusion equation for ion transport, the non-equilibrium Green's function formalism for electron tunneling, and the Ramo-Shockley theorem for accurate calculation of non-faradaic current. We also accounted for the buffer reaction and the immobilized peptide layer. For efficient transient simulation, an implicit time integration scheme is employed in which the solution at each time step is obtained from a coupled Newton-Raphson method. As an application, we studied the operation of a recently fabricated reference-electrode-free biosensor under various bias conditions and confirmed the effects of the buffer reaction and the current-flow mechanism. Using the simulator, we also found a strategy to maximize the sensitivity of the tunneling-based sensor.

  7. Methods of computing steady-state voltage stability margins of power systems

    DOEpatents

    Chow, Joe Hong; Ghiocel, Scott Gordon

    2018-03-20

    In steady-state voltage stability analysis, as load increases toward a maximum, the conventional Newton-Raphson power flow Jacobian matrix becomes increasingly ill-conditioned, so the power flow fails to converge before reaching maximum loading. A method to directly eliminate this singularity reformulates the power flow problem by introducing an AQ bus with specified bus angle and reactive power consumption of a load bus. For steady-state voltage stability analysis, the angle separation between the swing bus and the AQ bus can be varied to control power transfer to the load, rather than specifying the load power itself. For an AQ bus, the power flow formulation consists only of a reactive power equation, thus reducing the size of the Jacobian matrix by one. This reduced Jacobian matrix is nonsingular at the critical voltage point, eliminating a major difficulty in voltage stability analysis for power system operations.
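
    For context, the conventional Newton-Raphson power flow whose Jacobian the patent keeps from becoming singular can be sketched for a hypothetical two-bus system (slack bus plus one PQ load bus); the line admittance and load values are assumptions, and the code shows the baseline formulation, not the patented AQ-bus reformulation.

```python
# Conventional Newton-Raphson power flow for a hypothetical two-bus system
# (slack bus + one PQ load bus). Line admittance and load levels are assumed;
# this is the baseline formulation, not the patented AQ-bus reformulation.
import numpy as np

g, b = 1.0, -10.0          # series line admittance y = g + jb (per unit, assumed)
P_load, Q_load = 0.8, 0.3  # assumed load demand at bus 2 (per unit)

def mismatch(x):
    theta, V = x           # bus-2 angle and voltage magnitude
    # Power injected at bus 2 (slack bus fixed at 1.0 per unit, 0 rad)
    P2 = V**2 * g - V * (g * np.cos(theta) + b * np.sin(theta))
    Q2 = -V**2 * b - V * (g * np.sin(theta) - b * np.cos(theta))
    return np.array([P2 + P_load, Q2 + Q_load])   # injection + demand = 0

def jacobian(x, h=1e-7):
    J = np.zeros((2, 2))
    f0 = mismatch(x)
    for j in range(2):
        dx = np.zeros(2); dx[j] = h
        J[:, j] = (mismatch(x + dx) - f0) / h
    return J

x = np.array([0.0, 1.0])   # flat start
for _ in range(20):
    f = mismatch(x)
    if np.linalg.norm(f, np.inf) < 1e-10:
        break
    x -= np.linalg.solve(jacobian(x), f)

print(f"bus-2 angle = {x[0]:.4f} rad, voltage = {x[1]:.4f} pu")
```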

  8. Numerical Modeling of Flow Distribution in Micro-Fluidics Systems

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Cole, Helen; Chen, C. P.

    2005-01-01

    This paper describes an application of a general-purpose computer program, GFSSP (Generalized Fluid System Simulation Program), for calculating flow distribution in a network of micro-channels. GFSSP employs a finite volume formulation of the mass and momentum conservation equations in a network consisting of nodes and branches. The mass conservation equation is solved for pressures at the nodes, while the momentum conservation equation is solved at the branches to calculate the flow rate. The system of equations describing the fluid network is solved by a numerical method that is a combination of the Newton-Raphson and successive substitution methods. The numerical results have been compared with test data and detailed CFD (Computational Fluid Dynamics) calculations. The agreement between test data and predictions is satisfactory. The discrepancies between the predictions and the test data can be attributed to the frictional correlation, which does not include the effects of surface tension or electro-kinetics.
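
    A minimal sketch of such a combined Newton-Raphson/successive-substitution scheme, assuming a toy two-branch series gas network with one internal node (the coefficients, boundary pressures, and ideal-gas property update are illustrative assumptions, not GFSSP's actual formulation), is given below.

```python
# Minimal sketch of a hybrid Newton-Raphson / successive-substitution scheme:
# the inner loop solves the nodal mass balance for the unknown node pressure
# with Newton-Raphson; the outer loop updates the gas density by successive
# substitution. All coefficients and property models are assumptions.
import math

P_in, P_out = 300e3, 100e3        # boundary pressures, Pa (assumed)
C1, C2 = 2.0e-4, 3.0e-4           # branch flow coefficients (assumed)
R, T = 287.0, 300.0               # gas constant (J/kg/K) and temperature (K)
rho_in = P_in / (R * T)           # upstream density for branch 1

def branch_flow(C, rho, dP):
    """Assumed orifice-style branch relation: mdot = C * sqrt(rho * dP)."""
    return C * math.copysign(math.sqrt(rho * abs(dP)), dP)

def mass_residual(P, rho_node):
    """Mass balance at the internal node: inflow minus outflow."""
    return branch_flow(C1, rho_in, P_in - P) - branch_flow(C2, rho_node, P - P_out)

P = 0.5 * (P_in + P_out)          # initial guess for the internal node pressure
rho_node = P / (R * T)
for outer in range(100):                          # successive substitution on density
    for inner in range(50):                       # Newton-Raphson on node pressure
        r = mass_residual(P, rho_node)
        drdP = (mass_residual(P + 1.0, rho_node) - r) / 1.0   # numerical derivative
        step = r / drdP
        P -= step
        if abs(step) < 1e-6:
            break
    rho_new = P / (R * T)                         # property update from new pressure
    if abs(rho_new - rho_node) < 1e-9:
        break
    rho_node = rho_new

print(f"node pressure ≈ {P / 1e3:.2f} kPa, node density ≈ {rho_node:.3f} kg/m^3")
```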

  9. Direct numerical simulation of the laminar-turbulent transition at hypersonic flow speeds on a supercomputer

    NASA Astrophysics Data System (ADS)

    Egorov, I. V.; Novikov, A. V.; Fedorov, A. V.

    2017-08-01

    A method for direct numerical simulation of three-dimensional unsteady disturbances leading to a laminar-turbulent transition at hypersonic flow speeds is proposed. The simulation relies on solving the full three-dimensional unsteady Navier-Stokes equations. The computational technique is intended for multiprocessor supercomputers and is based on a fully implicit monotone approximation scheme and the Newton-Raphson method for solving systems of nonlinear difference equations. This approach is used to study the development of three-dimensional unstable disturbances in flat-plate and compression-corner boundary layers in early laminar-turbulent transition stages at the free-stream Mach number M = 5.37. The three-dimensional disturbance field is visualized in order to reveal and discuss features of the instability development at the linear and nonlinear stages. The distribution of the skin friction coefficient is used to detect laminar and transient flow regimes and determine the onset of the laminar-turbulent transition.

  10. Newton's method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    More, J. J.; Sorensen, D. C.

    1982-02-01

    Newton's method plays a central role in the development of numerical techniques for optimization. In fact, most of the current practical methods for optimization can be viewed as variations on Newton's method. It is therefore important to understand Newton's method as an algorithm in its own right and as a key introduction to the most recent ideas in this area. One of the aims of this expository paper is to present and analyze two main approaches to Newton's method for unconstrained minimization: the line search approach and the trust region approach. The other aim is to present some of the more recent developments in the optimization field which are related to Newton's method. In particular, we explore several variations on Newton's method which are appropriate for large scale problems, and we also show how quasi-Newton methods can be derived quite naturally from Newton's method.
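
    As a concrete, greatly simplified illustration of the line search approach discussed in the paper, the sketch below applies a damped Newton step with Armijo backtracking to the Rosenbrock function; the test function, fallback rule, and parameters are assumptions.

```python
# Minimal sketch of the line-search approach to Newton's method for
# unconstrained minimization, using Armijo backtracking on the Rosenbrock
# function. The test function and all parameters are illustrative assumptions.
import numpy as np

def f(x):
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def hess(x):
    return np.array([[1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                     [-400.0 * x[0],                           200.0]])

def newton_linesearch(x0, tol=1e-10, max_iter=100):
    x = np.array(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        p = np.linalg.solve(hess(x), -g)      # Newton direction
        if g @ p >= 0.0:                      # fall back to steepest descent
            p = -g
        t, c = 1.0, 1e-4
        for _ in range(60):                   # Armijo backtracking
            if f(x + t * p) <= f(x) + c * t * (g @ p):
                break
            t *= 0.5
        x = x + t * p
    return x

print(newton_linesearch([-1.2, 1.0]))   # converges to the minimizer [1, 1]
```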

  11. Quasi-Newton methods for parameter estimation in functional differential equations

    NASA Technical Reports Server (NTRS)

    Brewer, Dennis W.

    1988-01-01

    A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.

  12. Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.

    1996-01-01

    We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.

  13. Progress on a generalized coordinates tensor product finite element 3DPNS algorithm for subsonic

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Orzechowski, J. A.

    1983-01-01

    A generalized coordinates form of the penalty finite element algorithm for the 3-dimensional parabolic Navier-Stokes equations for turbulent subsonic flows was derived. This algorithm formulation requires only three distinct hypermatrices and is applicable using any boundary fitted coordinate transformation procedure. The tensor matrix product approximation to the Jacobian of the Newton linear algebra matrix statement was also derived. The Newton algorithm was restructured to replace large sparse matrix solution procedures with grid sweeping using alpha-block tridiagonal matrices, where alpha equals the number of dependent variables. Numerical experiments were conducted and the resultant data gives guidance on potentially preferred tensor product constructions for the penalty finite element 3DPNS algorithm.

  14. Semismooth Newton method for gradient constrained minimization problem

    NASA Astrophysics Data System (ADS)

    Anyyeva, Serbiniyaz; Kunisch, Karl

    2012-08-01

    In this paper we treat a gradient-constrained minimization problem, a particular case of which is the elasto-plastic torsion problem. In order to obtain a numerical approximation to the solution, we have developed an algorithm in an infinite-dimensional space framework using the concept of generalized (Newton) differentiation. Regularization was performed in order to approximate the problem by an unconstrained minimization problem and to make the pointwise maximum function Newton differentiable. Using the semismooth Newton method, a continuation method was developed in function space. For the numerical implementation, the variational equations at the Newton steps are discretized using the finite element method.

  15. Interior point techniques for LP and NLP

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evtushenko, Y.

    By using a surjective mapping, the initial constrained optimization problem is transformed into a problem in a new space with only equality constraints. For the numerical solution of the latter problem we use the generalized gradient-projection method and Newton's method. After inverse transformation to the initial space we obtain a family of numerical methods for solving optimization problems with equality and inequality constraints. In the linear programming case, after some simplification, we obtain Dikin's algorithm, the affine scaling algorithm and the generalized primal-dual interior point linear programming algorithm.

  16. Gravitation in material media

    NASA Astrophysics Data System (ADS)

    Ridgely, Charles T.

    2011-03-01

    When two gravitating bodies reside in a material medium, Newton's law of universal gravitation must be modified to account for the presence of the medium. A modified expression of Newton's law is known in the literature, but lacks a clear connection with existing gravitational theory. Newton's law in the presence of a homogeneous material medium is herein derived on the basis of classical, Newtonian gravitational theory and by a general relativistic use of Archimedes' principle. It is envisioned that the techniques presented herein will be most useful to graduate students and those undergraduate students having prior experience with vector analysis and potential theory.

  17. What can Numerical Computation do for the History of Science? (Study of an Orbit Drawn by Newton on a Letter to Hooke)

    NASA Astrophysics Data System (ADS)

    Stuchi, Teresa; Cardozo Dias, P.

    2013-05-01

    On a letter to Robert Hooke, Isaac Newton drew the orbit of a mass moving under a constant attracting central force. How he drew the orbit may indicate how and when he developed dynamic categories. Some historians claim that Newton used a method contrived by Hooke; others that he used some method of curvature. We prove geometrically: Hooke’s method is a second-order symplectic, area-preserving algorithm, and the method of curvature is a first-order algorithm without special features; then we integrate the Hamiltonian equations. Integration by the method of curvature can also be done by exploring geometric properties of curves. We compare three methods: Hooke’s method, the method of curvature and a first-order method. A fourth-order algorithm sets a standard of comparison. We analyze which of these methods best explains Newton’s drawing.

  18. GlobiPack v. 1.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bartlett, Roscoe

    2010-03-31

    GlobiPack contains a small collection of optimization globalization algorithms. These algorithms are used by optimization and various nonlinear equation solver algorithms, serving as the line-search procedure in Newton and quasi-Newton optimization and nonlinear equation solver methods. These are standard published 1-D line search algorithms, such as those described in the book by Nocedal and Wright, Numerical Optimization, 2nd edition, 2006. One set of algorithms was copied and refactored from the existing open-source Trilinos package MOOCHO, where the line search code is used to globalize SQP methods. This software is generic to any mathematical optimization problem where smooth derivatives exist. There is no specific connection or mention whatsoever of any specific application. You cannot find more general mathematical software.

  19. Robust integration schemes for generalized viscoplasticity with internal-state variables. Part 1: Theoretical developments and applications

    NASA Technical Reports Server (NTRS)

    Saleeb, Atef F.; Li, Wei

    1995-01-01

    This two-part report is concerned with the development of a general framework for the implicit time-stepping integrators for the flow and evolution equations in generalized viscoplastic models. The primary goal is to present a complete theoretical formulation, and to address in detail the algorithmic and numerical analysis aspects involved in its finite element implementation, as well as to critically assess the numerical performance of the developed schemes in a comprehensive set of test cases. On the theoretical side, the general framework is developed on the basis of the unconditionally-stable, backward-Euler difference scheme as a starting point. Its mathematical structure is of sufficient generality to allow a unified treatment of different classes of viscoplastic models with internal variables. In particular, two specific models of this type, which are representative of the present state-of-the-art in metal viscoplasticity, are considered in applications reported here; i.e., fully associative (GVIPS) and non-associative (NAV) models. The matrix forms developed for both these models are directly applicable for both initially isotropic and anisotropic materials, in general (three-dimensional) situations as well as subspace applications (i.e., plane stress/strain, axisymmetric, generalized plane stress in shells). On the computational side, issues related to efficiency and robustness are emphasized in developing the (local) iterative algorithm. In particular, closed-form expressions for residual vectors and (consistent) material tangent stiffness arrays are given explicitly for both GVIPS and NAV models, with their maximum sizes 'optimized' to depend only on the number of independent stress components (but independent of the number of viscoplastic internal state parameters). Significant robustness of the local iterative solution is provided by complementing the basic Newton-Raphson scheme with a line-search strategy for convergence. In the present first part of the report, we focus on the theoretical developments and on discussion of the results of numerical-performance studies using the integration schemes for GVIPS and NAV models.

  20. A two-dimensional hydrodynamic model of a tidal estuary

    USGS Publications Warehouse

    Walters, Roy A.; Cheng, Ralph T.

    1979-01-01

    A finite element model is described which is used in the computation of tidal currents in an estuary. This numerical model is patterned after an existing algorithm and has been carefully tested in rectangular and curve-sided channels with constant and variable depth. One of the common uncertainties in this class of two-dimensional hydrodynamic models is the treatment of the lateral boundary conditions. Special attention is paid specifically to addressing this problem. To maintain continuity within the domain of interest, ‘smooth’ curve-sided elements must be used at all shoreline boundaries. The present model uses triangular, isoparametric elements with quadratic basis functions for the two velocity components and a linear basis function for water surface elevation. An implicit time integration is used and the model is unconditionally stable. The resultant governing equations are nonlinear owing to the advective and the bottom friction terms and are solved iteratively at each time step by the Newton-Raphson method. Model test runs have been made in the southern portion of San Francisco Bay, California (South Bay) as well as in the Bay west of Carquinez Strait. Owing to the complex bathymetry, the hydrodynamic characteristics of the Bay system are dictated by the generally shallow basins which contain deep, relict river channels. Great care must be exercised to ensure that the conservation equations remain locally as well as globally accurate. Simulations have been made over several representative tidal cycles using this finite element model, and the results compare favourably with existing data. In particular, the standing wave in South Bay and the progressive wave in the northern reach are well represented.

  1. Geometrical-optics code for computing the optical properties of large dielectric spheres.

    PubMed

    Zhou, Xiaobing; Li, Shusun; Stamnes, Knut

    2003-07-20

    Absorption of electromagnetic radiation by absorptive dielectric spheres such as snow grains in the near-infrared part of the solar spectrum cannot be neglected when radiative properties of snow are computed. Thus a new, to our knowledge, geometrical-optics code is developed to compute scattering and absorption cross sections of large dielectric particles of arbitrary complex refractive index. The number of internal reflections and transmissions is truncated on the basis of the ratio of the irradiance incident at the nth interface to the irradiance incident at the first interface for a specific optical ray. Thus the truncation number is a function of the angle of incidence. Phase functions for both near- and far-field absorption and scattering of electromagnetic radiation are calculated directly at any desired scattering angle by using a hybrid algorithm based on the bisection and Newton-Raphson methods. With these methods the light absorption and scattering properties of a large sphere can be calculated for any wavelength from the ultraviolet to the microwave regions. Assuming that large snow meltclusters (1-cm order), observed ubiquitously in the snow cover during summer, can be characterized as spheres, one may compute absorption and scattering efficiencies and the scattering phase function on the basis of this geometrical-optics method. A geometrical-optics method for sphere (GOMsphere) code is developed and tested against Wiscombe's Mie scattering code (MIE0) and a Monte Carlo code for a range of size parameters. GOMsphere can be combined with MIE0 to calculate the single-scattering properties of dielectric spheres of any size.
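
    A generic safeguarded hybrid of the bisection and Newton-Raphson type described here (not the GOMsphere code itself, and applied to an arbitrary test equation rather than the scattering-angle relation) might look like the following sketch.

```python
# Generic sketch of a hybrid bisection / Newton-Raphson root finder: Newton
# steps are accepted when they stay inside the current bracket, otherwise the
# method falls back to bisection. The test equation is an arbitrary assumption.
import math

def hybrid_root(f, dfdx, a, b, tol=1e-12, max_iter=100):
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("root must be bracketed by [a, b]")
    x = 0.5 * (a + b)
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        # shrink the bracket around the sign change
        if fa * fx < 0:
            b, fb = x, fx
        else:
            a, fa = x, fx
        d = dfdx(x)
        x_newton = x - fx / d if d != 0.0 else a - 1.0   # force fallback if d == 0
        # accept the Newton step only if it lands inside the bracket
        x = x_newton if a < x_newton < b else 0.5 * (a + b)
    return x

# example: solve cos(x) = x on [0, 1]
root = hybrid_root(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 0.0, 1.0)
print(root)   # ~0.7390851332
```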

  2. A methodology for constraining power in finite element modeling of radiofrequency ablation.

    PubMed

    Jiang, Yansheng; Possebon, Ricardo; Mulier, Stefaan; Wang, Chong; Chen, Feng; Feng, Yuanbo; Xia, Qian; Liu, Yewei; Yin, Ting; Oyen, Raymond; Ni, Yicheng

    2017-07-01

    Radiofrequency ablation (RFA) is a minimally invasive thermal therapy for the treatment of cancer, hyperopia, and cardiac tachyarrhythmia. In RFA, the power delivered to the tissue is a key parameter. The objective of this study was to establish a methodology for the finite element modeling of RFA with constant power. Because of changes in the electric conductivity of tissue with temperature, a nonconventional boundary value problem arises in the mathematical modeling of RFA: neither the voltage (Dirichlet condition) nor the current (Neumann condition), but the power, that is, the product of voltage and current, is prescribed on part of the boundary. We solved the problem using a Lagrange multiplier: the product of the voltage and current on the electrode surface is constrained to be equal to the Joule heating. We theoretically proved the equality between the product of the voltage and current on the surface of the electrode and the Joule heating in the domain. We also proved the well-posedness of the problem of solving the Laplace equation for the electric potential under a constant power constraint prescribed on the electrode surface. The Pennes bioheat transfer equation and the Laplace equation for electric potential augmented with the constraint of constant power were solved simultaneously using the Newton-Raphson algorithm. Three problems for validation were solved. Numerical results were compared either with an analytical solution deduced in this study or with results obtained by ANSYS or experiments. This work provides the finite element modeling of constant power RFA with a firm mathematical basis and opens a pathway toward achieving the optimal RFA power. Copyright © 2016 John Wiley & Sons, Ltd.

  3. Numerical Solutions for Nonlinear High Damping Rubber Bearing Isolators: Newmark's Method with Newton-Raphson Iteration Revisited

    NASA Astrophysics Data System (ADS)

    Markou, A. A.; Manolis, G. D.

    2018-03-01

    Numerical methods for the solution of dynamical problems in engineering go back to 1950. The most famous and widely-used time stepping algorithm was developed by Newmark in 1959. In the present study, for the first time, the Newmark algorithm is developed for the case of the trilinear hysteretic model, a model that was used to describe the shear behaviour of high damping rubber bearings. This model is calibrated against free-vibration field tests implemented on a hybrid base isolated building, namely the Solarino project in Italy, as well as against laboratory experiments. A single-degree-of-freedom system is used to describe the behaviour of a low-rise building isolated with a hybrid system comprising high damping rubber bearings and low friction sliding bearings. The behaviour of the high damping rubber bearings is simulated by the trilinear hysteretic model, while the behaviour of the low friction sliding bearings is modelled by a linear Coulomb friction model. In order to prove the effectiveness of the numerical method we compare the analytically solved trilinear hysteretic model calibrated from free-vibration field tests (Solarino project) against the same model solved with the Newmark method with Newton-Raphson iteration. Almost perfect agreement is observed between the semi-analytical solution and the fully numerical solution with Newmark's time integration algorithm. This will allow for extension of the trilinear mechanical models to bidirectional horizontal motion, to time-varying vertical loads, to multi-degree-of-freedom systems, as well as to generalized models connected in parallel, where only numerical solutions are possible.
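
    A compact sketch of Newmark time stepping with Newton-Raphson equilibrium iterations is given below for a generic nonlinear single-degree-of-freedom oscillator; the cubic hardening spring and all physical parameters are illustrative assumptions, not the paper's trilinear hysteretic model.

```python
# Sketch of Newmark (average acceleration) time stepping with Newton-Raphson
# equilibrium iterations for a nonlinear SDOF oscillator. The cubic hardening
# spring and all physical parameters are illustrative assumptions.
import numpy as np

m, c = 1.0, 0.05                 # mass and viscous damping (assumed)
k, k3 = 4.0, 2.0                 # linear and cubic spring coefficients (assumed)
fs  = lambda u: k * u + k3 * u**3        # restoring force
kt  = lambda u: k + 3.0 * k3 * u**2      # tangent stiffness
p   = lambda t: 0.5 * np.sin(1.2 * t)    # external load (assumed)

beta, gamma = 0.25, 0.5          # Newmark average-acceleration parameters
dt, nsteps = 0.01, 2000

u, v = 0.0, 0.0
a = (p(0.0) - c * v - fs(u)) / m          # initial acceleration from equilibrium
for n in range(nsteps):
    t1 = (n + 1) * dt
    u1 = u                                # predictor: previous displacement
    for _ in range(20):                   # Newton-Raphson equilibrium iterations
        a1 = (u1 - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
        v1 = v + dt * ((1.0 - gamma) * a + gamma * a1)
        R = m * a1 + c * v1 + fs(u1) - p(t1)                       # dynamic residual
        K = m / (beta * dt**2) + gamma * c / (beta * dt) + kt(u1)  # effective tangent
        du = -R / K
        u1 += du
        if abs(du) < 1e-12:
            break
    a1 = (u1 - u) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v1 = v + dt * ((1.0 - gamma) * a + gamma * a1)
    u, v, a = u1, v1, a1

print(f"displacement at t = {nsteps * dt:.1f} s: {u:.6f}")
```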

  4. Camera-pose estimation via projective Newton optimization on the manifold.

    PubMed

    Sarkis, Michel; Diepold, Klaus

    2012-04-01

    Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.

  5. Implicit Plasma Kinetic Simulation Using The Jacobian-Free Newton-Krylov Method

    NASA Astrophysics Data System (ADS)

    Taitano, William; Knoll, Dana; Chacon, Luis

    2009-11-01

    The use of fully implicit time integration methods in kinetic simulation is still an area of algorithmic research. A brute-force approach to simultaneously including the field equations and the particle distribution function would result in an intractable linear algebra problem. A number of algorithms have been put forward which rely on an extrapolation in time. They can be thought of as linearly implicit methods or one-step Newton methods. However, issues related to time accuracy of these methods still remain. We are pursuing a route to implicit plasma kinetic simulation which eliminates extrapolation, eliminates phase-space from the linear algebra problem, and converges the entire nonlinear system within a time step. We accomplish all this using the Jacobian-Free Newton-Krylov algorithm. The original research along these lines considered particle methods to advance the distribution function [1]. In the current research we are advancing the Vlasov equations on a grid. Results will be presented which highlight algorithmic details for single species electrostatic problems and coupled ion-electron electrostatic problems. [1] H. J. Kim, L. Chacón, G. Lapenta, "Fully implicit particle in cell algorithm," 47th Annual Meeting of the Division of Plasma Physics, Oct. 24-28, 2005, Denver, CO

  6. A quasi-Newton approach to optimization problems with probability density constraints. [problem solving in mathematical programming

    NASA Technical Reports Server (NTRS)

    Tapia, R. A.; Vanrooy, D. L.

    1976-01-01

    A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.

  7. C library for topological study of the electronic charge density.

    PubMed

    Vega, David; Aray, Yosslen; Rodríguez, Jesús

    2012-12-05

    The topological study of the electronic charge density is useful to obtain information about the kinds of bonds (ionic or covalent) and the atom charges in a molecule or crystal. For this study, it is necessary to calculate, at every space point, the electronic density and its derivatives up to second order. In this work, a grid-based method for these calculations is described. The library, implemented for three dimensions, is based on a multidimensional Lagrange interpolation in a regular grid; by differentiating the resulting polynomial, the gradient vector, the Hessian matrix and the Laplacian formulas were obtained for every space point. More complex functions such as the Newton-Raphson method (to find the critical points, where the gradient is null) and the Cash-Karp Runge-Kutta method (used to trace the gradient paths) were programmed. Since in some crystals the unit cell has angles different from 90°, the described library includes linear transformations to correct the gradient and Hessian when the grid is distorted (inclined). Functions were also developed to handle files containing grids (grd from the DMol® program, CUBE from the Gaussian® program and CHGCAR from the VASP® program). Each one of these files contains the data for a molecular or crystal electronic property (such as charge density, spin density, electrostatic potential, and others) in a three-dimensional (3D) grid. The library can be adapted to make the topological study in any regular 3D grid by modifying the code of these functions. Copyright © 2012 Wiley Periodicals, Inc.
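
    A stripped-down analogue of such a Newton-Raphson critical-point search (locating points where the gradient vanishes), applied to an analytic two-Gaussian stand-in for the density rather than the library's interpolated grid, is sketched below; the centres and starting point are assumptions.

```python
# Sketch of a Newton-Raphson critical-point search (grad(rho) = 0) on an
# analytic two-Gaussian "density". Centres, widths and the starting point are
# assumptions chosen purely for illustration.
import numpy as np

A = np.array([-1.0, 0.0, 0.0])   # assumed "atom" positions
B = np.array([ 1.0, 0.2, 0.0])

def rho(x):
    return np.exp(-np.sum((x - A)**2)) + np.exp(-np.sum((x - B)**2))

def grad(x, h=1e-6):
    g = np.zeros(3)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        g[i] = (rho(x + e) - rho(x - e)) / (2 * h)
    return g

def hess(x, h=1e-4):
    H = np.zeros((3, 3))
    g0 = grad(x)
    for i in range(3):
        e = np.zeros(3); e[i] = h
        H[:, i] = (grad(x + e) - g0) / h
    return H

x = np.array([0.1, 0.0, 0.0])    # start between the two maxima
for _ in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-8:
        break
    x -= np.linalg.solve(hess(x), g)   # Newton step toward grad(rho) = 0

print("critical point:", x, " signature:", np.sign(np.linalg.eigvalsh(hess(x))))
```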

  8. Rheological effects of micropolar slime on the gliding motility of bacteria with slip boundary condition

    NASA Astrophysics Data System (ADS)

    Asghar, Z.; Ali, N.; Anwar Bég, O.; Javed, T.

    2018-06-01

    Gliding bacteria are virtually everywhere. These organisms are phylogenetically diverse with their hundreds of types, different shapes and several modes of motility. One possible mode of gliding motility in rod-shaped bacteria is that they propel themselves by producing undulating waves in their body. A few bacteria glide near the solid surface over the slime without any aid of flagella, so the classical Navier-Stokes equations are incapable of explaining the slime rheology at the microscopic level. Micropolar fluid dynamics, however, provides a solid framework for mimicking bacterial physical phenomena at both micro and nano-scales, and therefore we use the micropolar fluid to characterize the rheology of a thin layer of slime and its dominant microrotation effects. It is also assumed that there is a certain degree of slip between the slime and the bacterial undulating surface and also between the slime and the solid substrate. The flow equations are formulated under long wavelength and low Reynolds number assumptions. Exact expressions for the stream function and pressure gradient are obtained. The speed of the gliding bacteria is numerically calculated by using a modified Newton-Raphson method. Slip effects and effects of non-Newtonian slime parameters on bacterial speed and power are also quantified. In addition, when the glider is fixed, the effects of slip and rheological properties of micropolar slime parameters on the velocity, micro-rotation (angular velocity) of spherical slime particles, pressure rise per wavelength, pumping and trapping phenomena are also shown graphically and discussed in detail. The study is relevant to emerging biofuel cell technologies and also bacterial biophysics.

  9. CREKID: A computer code for transient, gas-phase combustion kinetics

    NASA Technical Reports Server (NTRS)

    Pratt, D. T.; Radhakrishnan, K.

    1984-01-01

    A new algorithm was developed for fast, automatic integration of chemical kinetic rate equations describing homogeneous, gas-phase combustion at constant pressure. Particular attention is paid to the distinguishing physical and computational characteristics of the induction, heat-release and equilibration regimes. The two-part predictor-corrector algorithm, based on an exponentially-fitted trapezoidal rule, includes filtering of ill-posed initial conditions and automatic selection of Newton-Jacobi or Newton iteration for convergence, to achieve maximum computational efficiency while observing a prescribed error tolerance. The new algorithm was found to compare favorably with LSODE on two representative test problems drawn from combustion kinetics.

  10. Globally convergent techniques in nonlinear Newton-Krylov

    NASA Technical Reports Server (NTRS)

    Brown, Peter N.; Saad, Youcef

    1989-01-01

    Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.

  11. CAD-Based Aerodynamic Design of Complex Configurations using a Cartesian Method

    NASA Technical Reports Server (NTRS)

    Nemec, Marian; Aftosmis, Michael J.; Pulliam, Thomas H.

    2003-01-01

    A modular framework for aerodynamic optimization of complex geometries is developed. By working directly with a parametric CAD system, complex-geometry models are modified and tessellated in an automatic fashion. The use of a component-based Cartesian method significantly reduces the demands on the CAD system, and also provides for robust and efficient flowfield analysis. The optimization is controlled using either a genetic or quasi-Newton algorithm. Parallel efficiency of the framework is maintained even when subject to limited CAD resources by dynamically re-allocating the processors of the flow solver. Overall, the resulting framework can explore designs incorporating large shape modifications and changes in topology.

  12. Buckling and limit states of composite profiles with top-hat channel section subjected to axial compression

    NASA Astrophysics Data System (ADS)

    RóŻyło, Patryk; Debski, Hubert; Kral, Jan

    2018-01-01

    The subject of the research was a short thin-walled composite profile with a top-hat cross-section. The tested structure was subjected to axial compression. As part of the critical-state research, the critical load and the corresponding buckling mode were determined. Later in the study, laminate damage areas were determined through numerical analysis. It was assumed that the profile is simply supported at the ends of the cross-section. Experimental tests were carried out on a Zwick Z100 universal testing machine, and the results were compared with the results of numerical calculations. The eigenvalue problem and the non-linear stability problem of thin-walled structures were solved using the commercial software ABAQUS®. In the presented cases, it was assumed that the material is linear-elastic and that the non-linearity of the model results from large displacements. The geometrically nonlinear problem was solved using the incremental-iterative Newton-Raphson method.

  13. Numerical Modeling of Saturated Boiling in a Heated Tube

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; LeClair, Andre; Hartwig, Jason

    2017-01-01

    This paper describes a mathematical formulation and numerical solution of boiling in a heated tube. The mathematical formulation involves a discretization of the tube into a flow network consisting of fluid nodes and branches and a thermal network consisting of solid nodes and conductors. In the fluid network, the mass, momentum and energy conservation equations are solved and in the thermal network, the energy conservation equation of solids is solved. A pressure-based, finite-volume formulation has been used to solve the equations in the fluid network. The system of equations is solved by a hybrid numerical scheme which solves the mass and momentum conservation equations by a simultaneous Newton-Raphson method and the energy conservation equation by a successive substitution method. The fluid network and thermal network are coupled through heat transfer between the solid and fluid nodes which is computed by Chen's correlation of saturated boiling heat transfer. The computer model is developed using the Generalized Fluid System Simulation Program and the numerical predictions are compared with test data.

  14. Evaluation of an S-system root-finding method for estimating parameters in a metabolic reaction model.

    PubMed

    Iwata, Michio; Miyawaki-Kuwakado, Atsuko; Yoshida, Erika; Komori, Soichiro; Shiraishi, Fumihide

    2018-02-02

    In a mathematical model, estimation of parameters from time-series data of metabolic concentrations in cells is a challenging task. However, it seems that a promising approach for such estimation has not yet been established. Biochemical Systems Theory (BST) is a powerful methodology to construct a power-law type model for a given metabolic reaction system and to then characterize it efficiently. In this paper, we discuss the use of an S-system root-finding method (S-system method) to estimate parameters from time-series data of metabolite concentrations. We demonstrate that the S-system method is superior to the Newton-Raphson method in terms of the convergence region and iteration number. We also investigate the usefulness of a translocation technique and a complex-step differentiation method toward the practical application of the S-system method. The results indicate that the S-system method is useful to construct mathematical models for a variety of metabolic reaction networks. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Demonstration of the Dynamic Flowgraph Methodology using the Titan 2 Space Launch Vehicle Digital Flight Control System

    NASA Technical Reports Server (NTRS)

    Yau, M.; Guarro, S.; Apostolakis, G.

    1993-01-01

    Dynamic Flowgraph Methodology (DFM) is a new approach developed to integrate the modeling and analysis of the hardware and software components of an embedded system. The objective is to complement the traditional approaches which generally follow the philosophy of separating out the hardware and software portions of the assurance analysis. In this paper, the DFM approach is demonstrated using the Titan 2 Space Launch Vehicle Digital Flight Control System. The hardware and software portions of this embedded system are modeled in an integrated framework. In addition, the time dependent behavior and the switching logic can be captured by this DFM model. In the modeling process, it is found that constructing decision tables for software subroutines is very time consuming. A possible solution is suggested. This approach makes use of a well-known numerical method, the Newton-Raphson method, to solve the equations implemented in the subroutines in reverse. Convergence can be achieved in a few steps.

  16. Improving sub-grid scale accuracy of boundary features in regional finite-difference models

    USGS Publications Warehouse

    Panday, Sorab; Langevin, Christian D.

    2012-01-01

    As an alternative to grid refinement, the concept of a ghost node, which was developed for nested grid applications, has been extended towards improving sub-grid scale accuracy of flow to conduits, wells, rivers or other boundary features that interact with a finite-difference groundwater flow model. The formulation is presented for correcting the regular finite-difference groundwater flow equations for confined and unconfined cases, with or without Newton Raphson linearization of the nonlinearities, to include the Ghost Node Correction (GNC) for location displacement. The correction may be applied on the right-hand side vector for a symmetric finite-difference Picard implementation, or on the left-hand side matrix for an implicit but asymmetric implementation. The finite-difference matrix connectivity structure may be maintained for an implicit implementation by only selecting contributing nodes that are a part of the finite-difference connectivity. Proof of concept example problems are provided to demonstrate the improved accuracy that may be achieved through sub-grid scale corrections using the GNC schemes.

  17. Robustness of Modeling of Out-of-Service Gas Mechanical Face Seal

    NASA Technical Reports Server (NTRS)

    Green, Itzhak

    2007-01-01

    Gas-lubricated mechanical face seals are ubiquitous in many high-performance applications such as compressors and gas turbines. The literature contains various analyses of seals having orderly face patterns (radial taper, waves, spiral grooves, etc.). These are useful for design purposes and for performance predictions. However, seals returning from service (or from testing) inevitably contain wear tracks and warped faces that depart from the aforementioned orderly patterns. Questions then arise as to the heat generated at the interface, leakage rates, axial displacement and tilts, minimum film thickness, contact forces, etc. This work describes an analysis of seals that may inherit any (i.e., random) face pattern. A comprehensive computer code is developed, based upon the Newton-Raphson method, which solves for the equilibrium of the axial force and tilting moments that are generated by asperity contact and fluid film effects. A contact mechanics model is incorporated along with a finite volume method that solves the compressible Reynolds equation. Results are presented for a production seal that has sustained a testing cycle.

  18. Computation of the anharmonic orbits in two piecewise monotonic maps with a single discontinuity

    NASA Astrophysics Data System (ADS)

    Li, Yurong; Du, Zhengdong

    2017-02-01

    In this paper, the bifurcation values for two typical piecewise monotonic maps with a single discontinuity are computed. The variation of the parameter of those maps leads to a sequence of border-collision and period-doubling bifurcations, generating a sequence of anharmonic orbits on the boundary of chaos. The border-collision and period-doubling bifurcation values are computed by the word-lifting technique and the Maple fsolve function or the Newton-Raphson method, respectively. The scaling factors which measure the convergent rates of the bifurcation values and the width of the stable periodic windows, respectively, are investigated. We found that these scaling factors depend on the parameters of the maps, implying that they are not universal. Moreover, if one side of the maps is linear, our numerical results suggest that those quantities converge increasingly. In particular, for the linear-quadratic case, they converge to one of the Feigenbaum constants, δ_F = 4.66920160….

  19. Spongiosa Primary Development: A Biochemical Hypothesis by Turing Patterns Formations

    PubMed Central

    López-Vaca, Oscar Rodrigo; Garzón-Alvarado, Diego Alexander

    2012-01-01

    We propose a biochemical model describing the formation of the primary spongiosa architecture through a bioregulatory model involving metalloproteinase 13 (MMP13) and vascular endothelial growth factor (VEGF). It is assumed that MMP13 regulates cartilage degradation and that VEGF allows vascularization and advance of the ossification front through the presence of osteoblasts. The coupling of this set of molecules is represented by reaction-diffusion equations with parameters in the Turing space, creating a stable spatiotemporal pattern that leads to the formation of the trabeculae present in the spongy tissue. Experimental evidence has shown that MMP13 regulates VEGF formation, and it is assumed that VEGF negatively regulates MMP13 formation. Thus, the ossification patterns obtained may represent the primary spongiosa formation during endochondral ossification. For the numerical solution, we used the finite element method together with the Newton-Raphson method to approximate the nonlinear partial differential equations. PMID:23193429

  20. Steady-state configuration and tension calculations of marine cables under complex currents via separated particle swarm optimization

    NASA Astrophysics Data System (ADS)

    Xu, Xue-song

    2014-12-01

    Under complex currents, the governing equations of motion of marine cables are complex and nonlinear, and the calculations of cable configuration and tension become difficult compared with those under uniform or simple currents. To obtain numerical results, the usual Newton-Raphson iteration is often adopted, but its stability depends on the initial guessed solution to the governing equations. To improve the stability of the numerical calculation, this paper proposes a separated particle swarm optimization, in which the variables are separated into several groups and the dimension of the search space is reduced to facilitate the particle swarm optimization. Via the separated particle swarm optimization, these governing nonlinear equations can be solved successfully with any initial solution, and the process of numerical calculation is very stable. For the calculations of cable configuration and tension of marine cables under complex currents, the proposed separated particle swarm optimization is more effective than other particle swarm optimizations.

  1. Analysis of a closed-kinematic chain robot manipulator

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Pooran, Farhad J.

    1988-01-01

    Presented are the research results from the research grant entitled Active Control of Robot Manipulators, sponsored by the Goddard Space Flight Center (NASA) under grant number NAG-780. This report considers a class of robot manipulators based on the closed-kinematic chain mechanism (CKCM). This type of robot manipulator mainly consists of two platforms, one stationary and the other moving, coupled together through a number of in-parallel actuators. Using spatial geometry and homogeneous transformations, a closed-form solution is derived for the inverse kinematic problem of the six-degree-of-freedom manipulator built to study robotic assembly in space. An iterative Newton-Raphson method is employed to solve the forward kinematic problem. Finally, the equations of motion of the above manipulators are obtained by employing the Lagrangian method. A study of the manipulator dynamics is performed using computer simulation, whose results show that the robot actuating forces are strongly dependent on the mass and centroid locations of the robot links.

  2. An hp symplectic pseudospectral method for nonlinear optimal control

    NASA Astrophysics Data System (ADS)

    Peng, Haijun; Wang, Xinwei; Li, Mingwu; Chen, Biaosong

    2017-01-01

    An adaptive symplectic pseudospectral method based on the dual variational principle is proposed and is successfully applied to solving nonlinear optimal control problems in this paper. The proposed method satisfies the first-order necessary conditions of continuous optimal control problems, and the symplectic property of the original continuous Hamiltonian system is preserved. The original optimal control problem is transformed into a set of nonlinear equations which can be solved easily by Newton-Raphson iterations, and the Jacobian matrix is found to be sparse and symmetric. On one hand, the proposed method exhibits exponential convergence rates when the number of collocation points is increased with a fixed number of sub-intervals; on the other hand, it exhibits linear convergence rates when the number of sub-intervals is increased with a fixed number of collocation points. Furthermore, combined with the hp method based on the residual error of the dynamic constraints, the proposed method can achieve given precisions in a few iterations. Five examples highlight the high precision and high computational efficiency of the proposed method.

  3. A monolithic homotopy continuation algorithm with application to computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Brown, David A.; Zingg, David W.

    2016-09-01

    A new class of homotopy continuation methods is developed suitable for globalizing quasi-Newton methods for large sparse nonlinear systems of equations. The new continuation methods, described as monolithic homotopy continuation, differ from the classical predictor-corrector algorithm in that the predictor and corrector phases are replaced with a single phase which includes both a predictor and corrector component. Conditional convergence and stability are proved analytically. Using a Laplacian-like operator to construct the homotopy, the new algorithm is shown to be more efficient than the predictor-corrector homotopy continuation algorithm as well as an implementation of the widely-used pseudo-transient continuation algorithm for some inviscid and turbulent, subsonic and transonic external aerodynamic flows over the ONERA M6 wing and the NACA 0012 airfoil using a parallel implicit Newton-Krylov finite-difference flow solver.

  4. A Newton method for the magnetohydrodynamic equilibrium equations

    NASA Astrophysics Data System (ADS)

    Oliver, Hilary James

    We have developed and implemented a (J, B) space Newton method to solve the full nonlinear three dimensional magnetohydrodynamic equilibrium equations in toroidal geometry. Various cases have been run successfully, demonstrating significant improvement over Picard iteration, including a 3D stellarator equilibrium at β = 2%. The algorithm first solves the equilibrium force balance equation for the current density J, given a guess for the magnetic field B. This step is taken from the Picard-iterative PIES 3D equilibrium code. Next, we apply Newton's method to Ampere's Law by expansion of the functional J(B), which is defined by the first step. An analytic calculation in magnetic coordinates, of how the Pfirsch-Schlüter currents vary in the plasma in response to a small change in the magnetic field, yields the Newton gradient term (analogous to ∇f . δx in Newton's method for f(x) = 0). The algorithm is computationally feasible because we do this analytically, and because the gradient term is flux surface local when expressed in terms of a vector potential in an Ar=0 gauge. The equations are discretized by a hybrid spectral/offset grid finite difference technique, and leading order radial dependence is factored from Fourier coefficients to improve finite- difference accuracy near the polar-like origin. After calculating the Newton gradient term we transfer the equation from the magnetic grid to a fixed background grid, which greatly improves the code's performance.

  5. Flexible parallel implicit modelling of coupled thermal-hydraulic-mechanical processes in fractured rocks

    NASA Astrophysics Data System (ADS)

    Cacace, Mauro; Jacquey, Antoine B.

    2017-09-01

    Theory and numerical implementation describing groundwater flow and the transport of heat and solute mass in fully saturated fractured rocks with elasto-plastic mechanical feedbacks are developed. In our formulation, fractures are considered as being of lower dimension than the hosting deformable porous rock and we consider their hydraulic and mechanical apertures as scaling parameters to ensure continuous exchange of fluid mass and energy within the fracture-solid matrix system. The coupled system of equations is implemented in a new simulator code that makes use of a Galerkin finite-element technique. The code builds on a flexible, object-oriented numerical framework (MOOSE, Multiphysics Object Oriented Simulation Environment) which provides an extensive scalable parallel and implicit coupling to solve for the multiphysics problem. The governing equations of groundwater flow, heat and mass transport, and rock deformation are solved in a weak sense (either by classical Newton-Raphson or by Jacobian-free inexact Newton-Krylov schemes) on an underlying unstructured mesh. Nonlinear feedbacks among the active processes are enforced by considering evolving fluid and rock properties depending on the thermo-hydro-mechanical state of the system and the local structure, i.e. degree of connectivity, of the fracture system. A suite of applications is presented to illustrate the flexibility and capability of the new simulator to address problems of increasing complexity and occurring at different spatial (from centimetres to tens of kilometres) and temporal scales (from minutes to hundreds of years).

  6. Holistic irrigation water management approach based on stochastic soil water dynamics

    NASA Astrophysics Data System (ADS)

    Alizadeh, H.; Mousavi, S. J.

    2012-04-01

    Appreciating the essential gap between fundamental unsaturated zone transport processes and soil and water management due to the low effectiveness of some monitoring and modeling approaches, this study presents a mathematical programming model for irrigation management optimization based on stochastic soil water dynamics. The model is a nonlinear non-convex program with an economic objective function to address water productivity and profitability aspects in irrigation management through optimizing irrigation policy. Utilizing an optimization-simulation method, the model includes an eco-hydrological integrated simulation model consisting of an explicit stochastic module of soil moisture dynamics in the crop-root zone with shallow water table effects, a conceptual root-zone salt balance module, and the FAO crop yield module. Interdependent hydrology of soil unsaturated and saturated zones is treated in a semi-analytical approach in two steps. In the first step, analytical expressions are derived for the expected values of crop yield, total water requirement and soil water balance components assuming a fixed shallow water table level, while a numerical Newton-Raphson procedure is employed in the second step to modify the shallow water table level. A Particle Swarm Optimization (PSO) algorithm, combined with the eco-hydrological simulation model, has been used to solve the non-convex program. Benefiting from the semi-analytical framework of the simulation model, the optimization-simulation method, with significantly better computational performance compared to a numerical Monte Carlo simulation-based technique, has led to an effective irrigation management tool that can contribute to bridging the gap between vadose zone theory and water management practice. In addition to precisely assessing the most influential processes at a growing season time scale, one can use the developed model in large scale systems such as irrigation districts and agricultural catchments. Accordingly, the model has been applied in Dasht-e-Abbas and Ein-khosh Fakkeh Irrigation Districts (DAID and EFID) of the Karkheh Basin in southwest Iran. The area suffers from the water scarcity problem and therefore the trade-off between the level of deficit and economical profit should be assessed. Based on the results, while the maximum net benefit has been obtained for the stress-avoidance (SA) irrigation policy, the highest water profitability, defined by the economical net benefit gained per unit irrigation water volume applied, has resulted when only about 60% of the water used in the SA policy is applied.

  7. Hybrid DFP-CG method for solving unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Osman, Wan Farah Hanan Wan; Asrul Hery Ibrahim, Mohd; Mamat, Mustafa

    2017-09-01

    The conjugate gradient (CG) method and the quasi-Newton method are both well-known methods for solving unconstrained optimization problems. In this paper, we propose a new method that combines the search directions of the conjugate gradient method and the quasi-Newton method, based on the BFGS-CG method developed by Ibrahim et al. The Davidon-Fletcher-Powell (DFP) update formula is used as the Hessian approximation for this new hybrid algorithm. Numerical results showed that the new algorithm performs better than the ordinary DFP method and is proven to possess both sufficient descent and global convergence properties.
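
    For readers unfamiliar with the DFP ingredient, the sketch below shows the standard DFP update of the inverse-Hessian approximation and the resulting quasi-Newton direction. The exact CG/quasi-Newton hybridization follows Ibrahim et al.'s BFGS-CG construction and is not reproduced here; this is only an illustrative fragment.

```python
# DFP inverse-Hessian update and quasi-Newton direction (illustrative fragment only).
import numpy as np

def dfp_update(H, s, y):
    """DFP update of the inverse-Hessian approximation H, given step s = x_{k+1} - x_k
    and gradient change y = g_{k+1} - g_k."""
    Hy = H @ y
    return H + np.outer(s, s) / (s @ y) - np.outer(Hy, Hy) / (y @ Hy)

def quasi_newton_direction(H, grad):
    return -H @ grad        # the hybrid method of the paper mixes this with a CG direction
```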

  8. Inverse problems with nonnegative and sparse solutions: algorithms and application to the phase retrieval problem

    NASA Astrophysics Data System (ADS)

    Quy Muoi, Pham; Nho Hào, Dinh; Sahoo, Sujit Kumar; Tang, Dongliang; Cong, Nguyen Huu; Dang, Cuong

    2018-05-01

    In this paper, we study a gradient-type method and a semismooth Newton method for minimization problems arising in the regularization of inverse problems with nonnegative and sparse solutions. We propose a special penalty functional forcing the minimizers of the regularized minimization problems to be nonnegative and sparse, and then apply the proposed algorithms to a practical problem. The strong convergence of the gradient-type method and the local superlinear convergence of the semismooth Newton method are proven. We then use these algorithms for the phase retrieval problem and illustrate their efficiency in numerical examples, particularly in the practical problem of optical imaging through scattering media, where all the noise from the experiment is present.

  9. Variational nature, integration, and properties of Newton reaction path

    NASA Astrophysics Data System (ADS)

    Bofill, Josep Maria; Quapp, Wolfgang

    2011-02-01

    The distinguished coordinate path and the reduced gradient following path or its equivalent formulation, the Newton trajectory, are analyzed and unified using the theory of calculus of variations. It is shown that their minimum character is related to the fact that the curve is located in a valley region. In this case, we say that the Newton trajectory is a reaction path with the category of minimum energy path. In addition to these findings a Runge-Kutta-Fehlberg algorithm to integrate these curves is also proposed.

  10. Variational nature, integration, and properties of Newton reaction path.

    PubMed

    Bofill, Josep Maria; Quapp, Wolfgang

    2011-02-21

    The distinguished coordinate path and the reduced gradient following path or its equivalent formulation, the Newton trajectory, are analyzed and unified using the theory of calculus of variations. It is shown that their minimum character is related to the fact that the curve is located in a valley region. In this case, we say that the Newton trajectory is a reaction path with the category of minimum energy path. In addition to these findings a Runge-Kutta-Fehlberg algorithm to integrate these curves is also proposed.

  11. Subsampled Hessian Newton Methods for Supervised Learning.

    PubMed

    Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen

    2015-08-01

    Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
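
    A minimal version of the basic idea (full gradient, Hessian-vector products restricted to a random subsample, conjugate gradient for the Newton direction) is sketched below for L2-regularized logistic regression. It is an illustrative toy, not the authors' implementation, and omits the paper's two-dimensional subproblem and line search.

```python
# Illustrative subsampled-Hessian Newton step for L2-regularized logistic regression.
import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subsampled_newton_step(X, y, w, lam=1e-2, sample_frac=0.1, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    n = X.shape[0]
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n + lam * w                     # full gradient
    idx = rng.choice(n, size=max(1, int(sample_frac * n)), replace=False)
    Xs, ds = X[idx], p[idx] * (1.0 - p[idx])               # curvature weights on the subsample
    hv = lambda v: Xs.T @ (ds * (Xs @ v)) / len(idx) + lam * v   # subsampled Hessian-vector product
    H = LinearOperator((w.size, w.size), matvec=hv)
    direction, _ = cg(H, -grad)                            # approximate Newton direction
    return w + direction                                   # a line search would be added in practice
```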

  12. Fast and exact Newton and Bidirectional fitting of Active Appearance Models.

    PubMed

    Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja

    2016-12-21

    Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring the large training datasets of regression-based or deep learning methods. The problem of fitting an AAM is usually formulated as a non-linear least squares problem, and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second-order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, outperforming other methods while having superior convergence properties.

  13. Distance majorization and its applications.

    PubMed

    Chi, Eric C; Zhou, Hua; Lange, Kenneth

    2014-08-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton's method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications.
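
    The core recipe (penalize squared distances to each set, majorize them at the current iterate using the easy projections, and descend on the surrogate) can be illustrated with a short sketch. This is a simplified toy under stated assumptions, not the paper's algorithm: it uses plain gradient steps on the surrogate and a fixed penalty weight, whereas the paper also employs quasi-Newton acceleration and an increasing penalty.

```python
# Simplified distance-majorization sketch: minimize f over an intersection of sets with
# cheap projections by majorizing the squared-distance penalties at the current iterate.
import numpy as np

def proj_box(x, lo=-1.0, hi=1.0):
    return np.clip(x, lo, hi)

def proj_halfspace(x, a, b):                  # projection onto {x : a.x <= b}
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def distance_majorization(grad_f, step, x0, projections, mu=10.0, iters=500):
    x = x0.copy()
    for _ in range(iters):
        anchors = [P(x) for P in projections]              # projections at the current iterate
        g = grad_f(x) + mu * sum(x - a for a in anchors)   # gradient of the majorizing surrogate
        x = x - step * g
    return x                                               # approximate (exact only as mu grows)

# Example: approximately project z onto {box} intersected with {a.x <= b}.
z, a, b = np.array([2.0, 2.0]), np.array([1.0, 1.0]), 1.0
x_star = distance_majorization(lambda x: x - z, 0.01, np.zeros(2),
                               [proj_box, lambda x: proj_halfspace(x, a, b)])
print(x_star)
```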

  14. Profiles of electrified drops and bubbles

    NASA Technical Reports Server (NTRS)

    Basaran, O. A.; Scriven, L. E.

    1982-01-01

    Axisymmetric equilibrium shapes of conducting drops and bubbles, (1) pendant or sessile on one face of a circular parallel-plate capacitor or (2) free and surface-charged, are found by solving simultaneously the free boundary problem consisting of the augmented Young-Laplace equation for surface shape and the Laplace equation for electrostatic field, given the surface potential. The problem is nonlinear and the method is a finite element algorithm employing Newton iteration, a modified frontal solver, and triangular as well as quadrilateral tessellations of the domain exterior to the drop in order to facilitate refined analysis of sharply curved drop tips seen in experiments. The stability limit predicted by this computer-aided theoretical analysis agrees well with experiments.

  15. A Strassen-Newton algorithm for high-speed parallelizable matrix inversion

    NASA Technical Reports Server (NTRS)

    Bailey, David H.; Ferguson, Helaman R. P.

    1988-01-01

    Techniques are described for computing matrix inverses by algorithms that are highly suited to massively parallel computation. The techniques are based on an algorithm suggested by Strassen (1969). Variations of this scheme use matrix Newton iterations and other methods to improve the numerical stability while at the same time preserving a very high level of parallelism. One-processor Cray-2 implementations of these schemes range from one that is up to 55 percent faster than a conventional library routine to one that is slower than a library routine but achieves excellent numerical stability. The problem of computing the solution to a single set of linear equations is discussed, and it is shown that this problem can also be solved efficiently using these techniques.
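
    The matrix Newton iteration at the heart of such schemes is short enough to show directly: starting from a scaled transpose, each sweep needs only matrix-matrix products, which is what makes it attractive for massively parallel hardware. The snippet below is a generic Newton-Schulz illustration, not the Strassen-based variant of the paper.

```python
# Newton (Newton-Schulz) iteration for an approximate inverse: X <- X (2I - A X).
import numpy as np

def newton_matrix_inverse(A, iters=30):
    n = A.shape[0]
    # Classical starting guess: X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence.
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)     # only matrix products: highly parallelizable
    return X

A = 20 * np.eye(50) + np.random.default_rng(0).standard_normal((50, 50))
print(np.linalg.norm(newton_matrix_inverse(A) @ A - np.eye(50)))   # small residual
```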

  16. A hybrid optimization algorithm to explore atomic configurations of TiO2 nanoparticles

    DOE PAGES

    Inclan, Eric J.; Geohegan, David B.; Yoon, Mina

    2017-10-17

    Here in this paper we present a hybrid algorithm comprised of differential evolution coupled with the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton optimization algorithm, for the purpose of identifying a broad range of (meta)stable TinO2n nanoparticles, as an example system, described by a Buckingham interatomic potential. The potential and its gradient are modified to be piecewise continuous to enable use of these continuous-domain, unconstrained algorithms, thereby improving compatibility. To measure computational effectiveness, a regression on known structures is used. This approach defines effectiveness as the ability of an algorithm to produce a set of structures whose energy distribution follows the regression as the number of TinO2n units increases, such that the shape of the distribution is consistent with the algorithm's stated goals. Our calculation demonstrates that the hybrid algorithm finds global minimum configurations more effectively than the differential evolution algorithms widely employed in the field of materials science. Specifically, the hybrid algorithm is shown to reproduce the global minimum energy structures reported in the literature up to n = 5, and retains good agreement with the regression up to n = 25. For 25 < n < 100, where literature structures are unavailable, the hybrid algorithm obtains structures at lower energies per TiO2 unit as the system size increases.
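
    The general pattern (a global evolutionary search followed by a local quasi-Newton relaxation) can be illustrated with standard SciPy routines on a generic multi-minima test function. This is a hedged, generic sketch; the paper's implementation couples differential evolution with BFGS on a piecewise-continuous Buckingham potential for TinO2n clusters, which is not reproduced here.

```python
# Generic hybrid global/local optimization sketch: differential evolution + BFGS polish.
import numpy as np
from scipy.optimize import differential_evolution, minimize

def objective(x):
    # Rastrigin test function, a stand-in for the interatomic potential energy surface.
    return float(np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)) + 10.0 * x.size)

bounds = [(-5.12, 5.12)] * 6
global_result = differential_evolution(objective, bounds, seed=0, maxiter=200)
local_result = minimize(objective, global_result.x, method="BFGS")   # quasi-Newton refinement
print(global_result.fun, local_result.fun)
```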

  17. INFLUENCE OF MATERIAL MODELS ON PREDICTING THE FIRE BEHAVIOR OF STEEL COLUMNS.

    PubMed

    Choe, Lisa; Zhang, Chao; Luecke, William E; Gross, John L; Varma, Amit H

    2017-01-01

    Finite-element (FE) analysis was used to compare the high-temperature responses of steel columns with two different stress-strain models: the Eurocode 3 model and the model proposed by the National Institute of Standards and Technology (NIST). The comparisons were made in three different phases. The first phase compared the critical buckling temperatures predicted using forty-seven column data sets from five different laboratories. The slenderness ratios varied from 34 to 137, and the applied axial load was 20-60 % of the room-temperature capacity. The results showed that the NIST model predicted the buckling temperature as or more accurately than the Eurocode 3 model for four of the five data sets. In the second phase, thirty unique FE models were developed to analyze the W8×35 and W14×53 column specimens with a slenderness ratio of about 70. The column specimens were tested under steady-heating conditions with a target temperature in the range of 300-600 °C. The models were developed by combining the material model, the temperature distributions in the specimens, and the numerical scheme for the nonlinear analyses. Overall, the models with the NIST material properties and the measured temperature variations showed results comparable to the test data. The deviations in the results from the two different numerical approaches (modified Newton-Raphson vs. arc-length) were negligible. The Eurocode 3 model made conservative predictions of the behavior of the column specimens since its retained elastic moduli are smaller than those of the NIST model at elevated temperatures. In the third phase, the column curves calibrated using the NIST model were compared with those prescribed in the ANSI/AISC-360 Appendix 4. The calibrated curve deviated significantly from the current design equation with increasing temperature, especially for slenderness ratios from 50 to 100.

  18. An improved Newton iteration for the generalized inverse of a matrix, with applications

    NASA Technical Reports Server (NTRS)

    Pan, Victor; Schreiber, Robert

    1990-01-01

    The purpose here is to clarify and illustrate the potential for the use of variants of Newton's method for solving problems of practical interest on highly parallel computers. The authors show how to accelerate the method substantially and how to modify it successfully to cope with ill-conditioned matrices. The authors conclude that Newton's method can be of value for some interesting computations, especially in parallel and other computing environments in which matrix products are especially easy to work with.

  19. RES: Regularized Stochastic BFGS Algorithm

    NASA Astrophysics Data System (ADS)

    Mokhtari, Aryan; Ribeiro, Alejandro

    2014-12-01

    RES, a regularized stochastic version of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, is proposed to solve convex optimization problems with stochastic objectives. The use of stochastic gradient descent algorithms is widespread, but the number of iterations required to approximate optimal arguments can be prohibitive in high-dimensional problems. Application of second-order methods, on the other hand, is impracticable because computation of objective function Hessian inverses incurs excessive computational cost. BFGS modifies gradient descent by introducing a Hessian approximation matrix computed from finite gradient differences. RES utilizes stochastic gradients in lieu of deterministic gradients both for the determination of descent directions and for the approximation of the objective function's curvature. Since stochastic gradients can be computed at manageable computational cost, RES is realizable and retains the convergence rate advantages of its deterministic counterparts. Convergence results show that lower and upper bounds on the Hessian eigenvalues of the sample functions are sufficient to guarantee convergence to optimal arguments. Numerical experiments showcase reductions in convergence time relative to stochastic gradient descent algorithms and non-regularized stochastic versions of BFGS. An application of RES to the implementation of support vector machines is developed.

  20. A superlinear interior points algorithm for engineering design optimization

    NASA Technical Reports Server (NTRS)

    Herskovits, J.; Asquier, J.

    1990-01-01

    We present a quasi-Newton interior points algorithm for nonlinear constrained optimization. It is based on a general approach consisting of the iterative solution, in the primal and dual spaces, of the equalities in the Karush-Kuhn-Tucker optimality conditions. This is done in such a way as to have primal and dual feasibility at each iteration, which ensures satisfaction of those optimality conditions at the limit points. This approach is very strong and efficient, since at each iteration it only requires the solution of two linear systems with the same matrix, instead of quadratic programming subproblems. It is also particularly appropriate for engineering design optimization inasmuch as at each iteration a feasible design is obtained. The present algorithm uses a quasi-Newton approximation of the second derivative of the Lagrangian function in order to have superlinear asymptotic convergence. We discuss theoretical aspects of the algorithm and its computer implementation.

  1. How concept images affect students' interpretations of Newton's method

    NASA Astrophysics Data System (ADS)

    Engelke Infante, Nicole; Murphy, Kristen; Glenn, Celeste; Sealey, Vicki

    2018-07-01

    Knowing when students have the prerequisite knowledge to be able to read and understand a mathematical text is a perennial concern for instructors. Using text describing Newton's method and Vinner's notion of concept image, we exemplify how prerequisite knowledge influences understanding. Through clinical interviews with first-semester calculus students, we determined how evoked concept images of tangent lines and roots contributed to students' interpretation and application of Newton's method. Results show that some students' concept images of root and tangent line developed throughout the interview process, and most students were able to adequately interpret the text on Newton's method. However, students with insufficient concept images of tangent line and students who were unwilling or unable to modify their concept images of tangent line after reading the text were not successful in interpreting Newton's method.

  2. Newton-Krylov-Schwarz: An implicit solver for CFD

    NASA Technical Reports Server (NTRS)

    Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.

    1995-01-01

    Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.

  3. An analytical fuzzy-based approach to L2-gain optimal control of input-affine nonlinear systems using Newton-type algorithm

    NASA Astrophysics Data System (ADS)

    Milic, Vladimir; Kasac, Josip; Novakovic, Branko

    2015-10-01

    This paper is concerned with L2-gain optimisation of input-affine nonlinear systems controlled by an analytic fuzzy logic system. Unlike conventional fuzzy-based strategies, the non-conventional analytic fuzzy control method does not require an explicit fuzzy rule base. As the first contribution of this paper, we prove, by using the Stone-Weierstrass theorem, that the proposed fuzzy system without a rule base is a universal approximator. The second contribution of this paper is an algorithm for solving a finite-horizon minimax problem for L2-gain optimisation. The proposed algorithm consists of a recursive chain rule for first- and second-order derivatives, Newton's method, the multi-step Adams method and automatic differentiation. Finally, the results of this paper are evaluated on a second-order nonlinear system.

  4. The Programming Language Python In Earth System Simulations

    NASA Astrophysics Data System (ADS)

    Gross, L.; Imranullah, A.; Mora, P.; Saez, E.; Smillie, J.; Wang, C.

    2004-12-01

    Mathematical models in the earth sciences are based on the solution of systems of coupled, non-linear, time-dependent partial differential equations (PDEs). The spatial and time scales vary from planetary scale and millions of years for convection problems to 100 km and 10 years for fault systems simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we will show the basic concepts of escript and how escript is used to implement a simulation code for interacting fault systems. We will show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.

  5. Simulation of the hot rolling of steel with direct iteration

    NASA Astrophysics Data System (ADS)

    Hanoglu, Umut; Šarler, Božidar

    2017-10-01

    In this study a simulation system based on the meshless Local Radial Basis Function Collocation Method (LRBFCM) is applied to the hot rolling of steel. Rolling is a complex, 3D, thermo-mechanical problem; however, 2D cross-sectional slices are used as computational domains that are aligned with the rolling direction, and no heat flow or strain is considered in the direction orthogonal to the slices. For each predefined position with respect to the rolling direction, the solution procedure is repeated until the slice reaches the final rolling position. Collocation nodes are initially distributed over the domain and boundaries of the initial slice. A local solution is achieved by considering overlapping influence domains with either 5 or 7 nodes. Radial Basis Functions (RBFs) are used for the temperature discretization in the thermal model and the displacement discretization in the mechanical model. The meshless solution procedure does not require a mesh-generation algorithm in the classic sense. Strong-form mechanical and thermal models are run for each slice, accounting for the contact with the roll's surface. Ideal plastic material behavior is considered for the mechanical results, where the nonlinear stress-strain relation is solved with direct iteration. The majority of Finite Element Method (FEM) simulations, including commercial software, use a conventional Newton-Raphson algorithm. However, direct iteration is chosen here due to its better compatibility with meshless methods. In order to overcome any unforeseen stability issues, redistribution of the nodes by Elliptic Node Generation (ENG) is applied to one or more slices throughout the simulation. The rolling simulation presented here helps the user to design, test and optimize different rolling schedules. The results can be seen minutes after the simulation's start in terms of temperature, displacement, stress and strain fields as well as important technological parameters, such as the roll-separating forces, roll torque, etc. An example of a rolling simulation, in which steel with an initial cross-section of 110x110 mm is rolled to a round bar with an 80 mm diameter, is shown in Fig. 3. A user-friendly computer application for industrial use is created by using the C# and .NET frameworks.

  6. Globalized Newton-Krylov-Schwarz Algorithms and Software for Parallel Implicit CFD

    NASA Technical Reports Server (NTRS)

    Gropp, W. D.; Keyes, D. E.; McInnes, L. C.; Tidriri, M. D.

    1998-01-01

    Implicit solution methods are important in applications modeled by PDEs with disparate temporal and spatial scales. Because such applications require high resolution with reasonable turnaround, "routine" parallelization is essential. The pseudo-transient matrix-free Newton-Krylov-Schwarz (Psi-NKS) algorithmic framework is presented as an answer. We show that, for the classical problem of three-dimensional transonic Euler flow about an M6 wing, Psi-NKS can simultaneously deliver: globalized, asymptotically rapid convergence through adaptive pseudo-transient continuation and Newton's method; reasonable parallelizability for an implicit method through deferred synchronization and favorable communication-to-computation scaling in the Krylov linear solver; and high per-processor performance through attention to distributed memory and cache locality, especially through the Schwarz preconditioner. Two discouraging features of Psi-NKS methods are their sensitivity to the coding of the underlying PDE discretization and the large number of parameters that must be selected to govern convergence. We therefore distill several recommendations from our experience and from our reading of the literature on various algorithmic components of Psi-NKS, and we describe a freely available, MPI-based portable parallel software implementation of the solver employed here.

  7. A numerical method for measuring capacitive soft sensors through one channel

    NASA Astrophysics Data System (ADS)

    Tairych, Andreas; Anderson, Iain A.

    2018-03-01

    Soft capacitive stretch sensors are well suited for unobtrusive wearable body motion capture. Conventional sensing methods measure sensor capacitances through separate channels. In sensing garments with many sensors, this results in high wiring complexity and a large footprint of rigid sensing circuit boards. We have developed a more efficient sensing method that detects multiple sensors through only one channel and one set of wires. It is based on an R-C transmission line assembled from capacitive conductive-fabric stretch sensors and external resistors. The unknown capacitances are identified by solving a system of nonlinear equations. These equations are established by modelling and continuously measuring the transmission line reactances at different frequencies. Solving these equations numerically with a Newton-Raphson solver for the unknown capacitances enables real-time reading of all sensors. The method was verified with a prototype comprising three sensors that is capable of detecting both individually and simultaneously stretched sensors. Instead of using three channels and six wires to detect the sensors, the task was achieved with only one channel and two wires.
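
    Since the capacitances are recovered by a multivariate Newton-Raphson solve of the reactance equations, a generic version of that step is sketched below. The residual function here is a hypothetical stand-in; the actual transmission-line reactance model of the paper is not reproduced.

```python
# Generic multivariate Newton-Raphson solver with a finite-difference Jacobian.
import numpy as np

def newton_raphson_system(residual, x0, tol=1e-10, max_iter=50, eps=1e-8):
    x = x0.astype(float).copy()
    for _ in range(max_iter):
        r = residual(x)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((r.size, x.size))
        for j in range(x.size):                    # forward-difference Jacobian, column by column
            xp = x.copy(); xp[j] += eps
            J[:, j] = (residual(xp) - r) / eps
        x -= np.linalg.solve(J, r)                 # Newton-Raphson update
    return x

# Toy usage with a stand-in residual (not the sensor reactance model).
res = lambda c: np.array([c[0] * c[1] - 2.0, c[0] + c[1] - 3.0])
print(newton_raphson_system(res, np.array([0.5, 2.5])))     # converges to (1, 2)
```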

  8. What Information Theory Says About Best Response and About Binding Contracts

    NASA Technical Reports Server (NTRS)

    Wolpert, David H.

    2004-01-01

    Product Distribution (PD) theory is the information-theoretic extension of conventional full-rationality game theory to bounded rational games. Here PD theory is used to investigate games in which the players use bounded rational best-response strategies. This investigation illuminates how to determine the optimal organization chart for a corporation, or more generally how to order the sequence of moves of the players/employees so as to optimize an overall objective function. It is then shown that in the continuum-time limit, bounded rational best-response games result in a variant of the replicator dynamics of evolutionary game theory. This variant is then investigated for team games, in which the players share the same utility function, by showing that such continuum-limit bounded rational best response is identical to Newton-Raphson iterative optimization of the shared utility function. Next, PD theory is used to investigate changing the coordinate system of the game, i.e., changing the mapping from the joint move of the players to the arguments of the utility functions. Such a change couples those arguments, essentially by making each player's move an offered binding contract.

  9. Approaches to the simulation of unconfined flow and perched groundwater flow in MODFLOW

    USGS Publications Warehouse

    Bedekar, Vivek; Niswonger, Richard G.; Kipp, Kenneth; Panday, Sorab; Tonkin, Matthew

    2012-01-01

    Various approaches have been proposed to manage the nonlinearities associated with the unconfined flow equation and to simulate perched groundwater conditions using the MODFLOW family of codes. The approaches comprise a variety of numerical techniques to prevent dry cells from becoming inactive and to achieve a stable solution focused on formulations of the unconfined, partially-saturated, groundwater flow equation. Keeping dry cells active avoids a discontinuous head solution which in turn improves the effectiveness of parameter estimation software that relies on continuous derivatives. Most approaches implement an upstream weighting of intercell conductance and Newton-Raphson linearization to obtain robust convergence. In this study, several published approaches were implemented in a stepwise manner into MODFLOW for comparative analysis. First, a comparative analysis of the methods is presented using synthetic examples that create convergence issues or difficulty in handling perched conditions with the more common dry-cell simulation capabilities of MODFLOW. Next, a field-scale three-dimensional simulation is presented to examine the stability and performance of the discussed approaches in larger, practical, simulation settings.

  10. Bayesian Atmospheric Radiative Transfer (BART) Thermochemical Equilibrium Abundance (TEA) Code and Application to WASP-43b

    NASA Astrophysics Data System (ADS)

    Blecic, Jasmina; Harrington, Joseph; Bowman, Matthew O.; Cubillos, Patricio E.; Stemm, Madison; Foster, Andrew

    2014-11-01

    We present a new, open-source, Thermochemical Equilibrium Abundances (TEA) code that calculates the abundances of gaseous molecular species. TEA uses the Gibbs-free-energy minimization method with an iterative Lagrangian optimization scheme. It initializes the radiative-transfer calculation in our Bayesian Atmospheric Radiative Transfer (BART) code. Given elemental abundances, TEA calculates molecular abundances for a particular temperature and pressure or a list of temperature-pressure pairs. The code is tested against the original method developed by White et al. (1958), the analytic method developed by Burrows and Sharp (1999), and the Newton-Raphson method implemented in the open-source Chemical Equilibrium with Applications (CEA) code. TEA is written in Python and is available to the community via the open-source development site GitHub.com. We also present BART applied to eclipse depths of the exoplanet WASP-43b, constraining atmospheric thermal and chemical parameters. This work was supported by NASA Planetary Atmospheres grant NNX12AI69G and NASA Astrophysics Data Analysis Program grant NNX13AF38G. JB holds a NASA Earth and Space Science Fellowship.

  11. Performance Analysis of a CO2 Heat Pump Water Heating System Under a Daily Change in a Simulated Demand

    NASA Astrophysics Data System (ADS)

    Yokoyama, Ryohei; Kohno, Yasuhiro; Wakui, Tetsuya; Takemura, Kazuhisa

    Air-to-water heat pumps using CO2 as a refrigerant have been developed. In addition, water heating systems each of which combines a CO2 heat pump with a hot water storage tank have been commercialized and widespread. They are expected to contribute to energy saving in residential hot water supply. It has become more and more important to enhance the system performance. In this paper, the performance of a CO2 heat pump water heating system is analyzed under a daily change in a simulated hot water demand by numerical simulation. A static model of a CO2 heat pump and a dynamic model of a storage tank result in a set of differential algebraic equations, and it is solved numerically by a hierarchical combination of Runge-Kutta and Newton-Raphson methods. Daily changes in the temperature distributions in the storage tank and the system performance criteria such as volumes of stored and unused hot water, coefficient of performance, and storage and system efficiencies are clarified under a series of daily hot water demands during a month.

  12. Generalized Gaussian wave packet dynamics: Integrable and chaotic systems.

    PubMed

    Pal, Harinder; Vyas, Manan; Tomsovic, Steven

    2016-01-01

    The ultimate semiclassical wave packet propagation technique is a complex, time-dependent Wentzel-Kramers-Brillouin method known as generalized Gaussian wave packet dynamics (GGWPD). It requires overcoming many technical difficulties in order to be carried out fully in practice. In its place roughly twenty years ago, linearized wave packet dynamics was generalized to methods that include sets of off-center, real trajectories for both classically integrable and chaotic dynamical systems that completely capture the dynamical transport. The connections between those methods and GGWPD are developed in a way that enables a far more practical implementation of GGWPD. The generally complex saddle-point trajectories at its foundation are found using a multidimensional Newton-Raphson root search method that begins with the set of off-center, real trajectories. This is possible because there is a one-to-one correspondence. The neighboring trajectories associated with each off-center, real trajectory form a path that crosses a unique saddle; there are exceptions that are straightforward to identify. The method is applied to the kicked rotor to demonstrate the accuracy improvement as a function of ℏ that comes with using the saddle-point trajectories.

  13. Testing of FTS fingers and interface using a passive compliant robot manipulator. [flight telerobot servicer

    NASA Technical Reports Server (NTRS)

    Nguyen, Charles C.; Antrazi, Sami S.

    1992-01-01

    This report deals with the testing of a pair of robot fingers designed for the Flight Telerobotic Servicer (FTS) to grasp a cylinder-type Orbital Replaceable Unit (ORU) interface. The report first describes the objectives of the study and then the testbed, consisting of a Stewart Platform-based manipulator equipped with a passive compliant platform which also serves as a force/torque sensor. Kinematic analysis is then performed to provide a closed-form solution for the force inverse kinematics and an iterative solution for the force forward kinematics using the Newton-Raphson method. Mathematical expressions are then derived to compute the forces/torques applied to the FTS fingers during mating/demating with the interface. The report then presents the three parts of the experimental study on the feasibility and characteristics of the fingers. The first part obtains data on forces applied by the fingers to the interface under various misalignments, the second part determines the maximum allowable capture angles for mating, and the third part processes and interprets the obtained force/torque data.

  14. New preconditioning strategy for Jacobian-free solvers for variably saturated flows with Richards’ equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lipnikov, Konstantin; Moulton, David; Svyatskiy, Daniil

    2016-04-29

    We develop a new approach for solving the nonlinear Richards’ equation arising in variably saturated flow modeling. The growing complexity of geometric models for simulation of subsurface flows leads to the necessity of using unstructured meshes and advanced discretization methods. Typically, a numerical solution is obtained by first discretizing PDEs and then solving the resulting system of nonlinear discrete equations with a Newton-Raphson-type method. Efficiency and robustness of the existing solvers rely on many factors, including an empiric quality control of intermediate iterates, complexity of the employed discretization method and a customized preconditioner. We propose and analyze a new preconditioning strategy that is based on a stable discretization of the continuum Jacobian. We will show with numerical experiments for challenging problems in subsurface hydrology that this new preconditioner improves convergence of the existing Jacobian-free solvers 3-20 times. Furthermore, we show that the Picard method with this preconditioner becomes a more efficient nonlinear solver than a few widely used Jacobian-free solvers.

  15. A nonlinear dynamic finite element approach for simulating muscular hydrostats.

    PubMed

    Vavourakis, V; Kazakidi, A; Tsakiris, D P; Ekaterinaris, J A

    2014-01-01

    An implicit nonlinear finite element model for simulating biological muscle mechanics is developed. The numerical method is suitable for dynamic simulations of three-dimensional, nonlinear, nearly incompressible, hyperelastic materials that undergo large deformations. These features characterise biological muscles, which consist of fibres and connective tissues. It can be assumed that the stress distribution inside the muscles is the superposition of stresses along the fibres and the connective tissues. The mechanical behaviour of the surrounding tissues is determined by adopting a Mooney-Rivlin constitutive model, while the mechanical description of fibres is considered to be the sum of active and passive stresses. Due to the nonlinear nature of the problem, evaluation of the Jacobian matrix is carried out in order to subsequently utilise the standard Newton-Raphson iterative procedure and to carry out time integration with an implicit scheme. The proposed methodology is implemented into our in-house, open source, finite element software, which is validated by comparing numerical results with experimental measurements and other numerical results. Finally, the numerical procedure is utilised to simulate primitive octopus arm manoeuvres, such as bending and reaching.

  16. Linear homotopy solution of nonlinear systems of equations in geodesy

    NASA Astrophysics Data System (ADS)

    Paláncz, Béla; Awange, Joseph L.; Zaletnyik, Piroska; Lewis, Robert H.

    2010-01-01

    A fundamental task in geodesy is solving systems of equations. Many geodetic problems are represented as systems of multivariate polynomials. A common problem in solving such systems is improper initial starting values for iterative methods, leading to convergence to solutions with no physical meaning, or to convergence that requires global methods. Though symbolic methods such as Groebner bases or resultants have been shown to be very efficient, i.e., providing solutions for determined systems such as the 3-point problem of the 3D affine transformation, the symbolic algebra can be very time consuming, even with special Computer Algebra Systems (CAS). This study proposes the Linear Homotopy method, which can be implemented easily in high-level computer languages like C++ and Fortran that are faster than CAS by at least two orders of magnitude. Using Mathematica, the power of Homotopy is demonstrated in solving three nonlinear geodetic problems: resection, GPS positioning, and affine transformation. The method, which enlarges the domain of convergence, is found to be efficient, less sensitive to rounding of numbers, and of lower complexity compared to other local methods like Newton-Raphson.
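
    The essence of the linear homotopy approach (deform a start system G with known roots into the target system F and track the root with Newton corrections) can be sketched briefly. The code below is a generic illustration with a toy polynomial system and a dense finite-difference Jacobian, not the geodetic implementation described in the paper.

```python
# Linear homotopy continuation with a Newton corrector (generic illustration).
import numpy as np

def track_linear_homotopy(F, G, x_start, steps=100, newton_iters=5, eps=1e-8):
    x = x_start.astype(float).copy()
    for k in range(1, steps + 1):
        t = k / steps
        H = lambda z: (1.0 - t) * G(z) + t * F(z)        # homotopy H(z, t)
        for _ in range(newton_iters):                    # Newton corrector at fixed t
            r = H(x)
            J = np.empty((r.size, x.size))
            for j in range(x.size):
                xp = x.copy(); xp[j] += eps
                J[:, j] = (H(xp) - r) / eps
            x -= np.linalg.solve(J, r)
    return x

# Toy target system F with start system G whose root (1, 1) is known.
F = lambda z: np.array([z[0]**2 + z[1]**2 - 4.0, z[0] * z[1] - 1.0])
G = lambda z: np.array([z[0]**2 - 1.0, z[1]**2 - 1.0])
print(track_linear_homotopy(F, G, np.array([1.0, 1.0])))   # tracks one real root of F
```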

  17. Evaluating the accuracy performance of Lucas-Kanade algorithm in the circumstance of PIV application

    NASA Astrophysics Data System (ADS)

    Pan, Chong; Xue, Dong; Xu, Yang; Wang, JinJun; Wei, RunJie

    2015-10-01

    The Lucas-Kanade (LK) algorithm, usually used in the optical flow field, has recently received increasing attention from the PIV community due to its advanced calculation efficiency through GPU acceleration. Although applications of this algorithm are continuously emerging, a systematic performance evaluation is still lacking. This forms the primary aim of the present work. Three warping schemes in the family of LK algorithms, forward, inverse, and symmetric warping, are evaluated in a prototype flow consisting of a hierarchy of multiple two-dimensional vortices. Second-order Newton descent is also considered. The accuracy and efficiency of all these LK variants are investigated over a wide range of influential parameters. It is found that the constant-displacement constraint, which is a necessary building block for GPU acceleration, is the most critical issue affecting the LK algorithm's accuracy, and it can be somewhat ameliorated by using second-order Newton descent. Moreover, symmetric warping outperforms the other two warping schemes in accuracy, robustness to noise, convergence speed and tolerance to displacement gradient, and might be the first choice when applying the LK algorithm to PIV measurement.
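
    The constant-displacement building block discussed above reduces, for a single interrogation window, to a small linear least-squares solve on the image gradients. The snippet below is a bare-bones single-window estimate for small displacements, shown for orientation only; it is not any particular PIV implementation and omits warping, pyramids and iteration.

```python
# Single-window Lucas-Kanade displacement estimate (constant displacement in the window).
import numpy as np

def lucas_kanade_window(I1, I2):
    """Estimate one (dx, dy) displacement between two equally sized image windows."""
    Iy, Ix = np.gradient(I1.astype(float))        # spatial gradients (rows = y, columns = x)
    It = I2.astype(float) - I1.astype(float)      # temporal difference
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)                  # least-squares flow for the window

# Synthetic check: a smooth pattern shifted by (0.3, 0.2) pixels.
y, x = np.mgrid[0:32, 0:32]
I1 = np.sin(0.3 * x) + np.cos(0.2 * y)
I2 = np.sin(0.3 * (x - 0.3)) + np.cos(0.2 * (y - 0.2))
print(lucas_kanade_window(I1, I2))                # roughly (0.3, 0.2)
```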

  18. Development of iterative techniques for the solution of unsteady compressible viscous flows

    NASA Technical Reports Server (NTRS)

    Sankar, Lakshmi N.; Hixon, Duane

    1991-01-01

    Efficient iterative solution methods are being developed for the numerical solution of the two- and three-dimensional compressible Navier-Stokes equations. Iterative time marching methods have several advantages over classical multi-step explicit time marching schemes and non-iterative implicit time marching schemes. Iterative schemes have better stability characteristics than non-iterative explicit and implicit schemes, and the extra work they require can be designed to execute efficiently on current and future generations of scalable, massively parallel machines. An obvious candidate for iteratively solving the system of coupled nonlinear algebraic equations arising in CFD applications is the Newton method. Newton's method was implemented in existing finite difference and finite volume methods. Depending on the complexity of the problem, the number of Newton iterations needed per step to solve the discretized system of equations can, however, vary dramatically from a few to several hundred. Another popular approach based on the classical conjugate gradient method, known as the GMRES (Generalized Minimum Residual) algorithm, is investigated. The GMRES algorithm was used in the past by a number of researchers for solving steady viscous and inviscid flow problems with considerable success. Here, the suitability of this algorithm is investigated for solving the system of nonlinear equations that arises in unsteady Navier-Stokes solvers at each time step. Unlike the Newton method, which attempts to drive the error in the solution at each and every node down to zero, the GMRES algorithm only seeks to minimize the L2 norm of the error. In the GMRES algorithm the changes in the flow properties from one time step to the next are assumed to be the sum of a set of orthogonal vectors. By choosing the number of vectors to be a reasonably small value N (between 5 and 20), the work required for advancing the solution from one time step to the next may be kept to (N+1) times that of a non-iterative scheme. Many of the operations required by the GMRES algorithm, such as matrix-vector multiplies, matrix additions and subtractions, can all be vectorized and parallelized efficiently.

  19. Finite Volume Algorithms for Heat Conduction

    DTIC Science & Technology

    2010-05-01

    scalar quantity). Although (3) is relatively easy to discretize by using finite differences, its form in generalized coordinates is not. Later, we ... familiar with the finite difference method for discretizing differential equations. In fact, the Newton divided difference is the numerical analog for a ... expression (8) for the average derivative matches the Newton divided difference formula, so for uniform one-dimensional meshes, the finite volume and

  20. Efficient iterative methods applied to the solution of transonic flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wissink, A.M.; Lyrintzis, A.S.; Chronopoulos, A.T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-Iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems. 38 refs., 14 figs., 7 tabs.
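
    One linear solve of such an inexact-Newton iteration, with an ILU preconditioner and a restarted Krylov method, can be sketched with SciPy's sparse tools (using GMRES here rather than Orthomin). The Jacobian and residual below are generic stand-ins, not the transonic small disturbance discretization of the paper.

```python
# One ILU-preconditioned Krylov solve of a Newton correction J*dx = -r (generic sketch).
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spilu, LinearOperator, gmres

def inexact_newton_linear_solve(J, r):
    ilu = spilu(J.tocsc(), drop_tol=1e-4)              # incomplete LU preconditioner
    M = LinearOperator(J.shape, matvec=ilu.solve)
    dx, info = gmres(J, -r, M=M, restart=20)           # restarted GMRES
    return dx

# Tiny example: a 1D Laplacian standing in for the Jacobian of a discretized PDE.
n = 100
J = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
dx = inexact_newton_linear_solve(J, np.ones(n))
print(np.linalg.norm(J @ dx + np.ones(n)))             # small linear residual
```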

  1. Quasi-Newton parallel geometry optimization methods

    NASA Astrophysics Data System (ADS)

    Burger, Steven K.; Ayers, Paul W.

    2010-07-01

    Algorithms for the parallel unconstrained minimization of molecular systems are examined. The overall framework of minimization is the same except for the choice of directions for updating the quasi-Newton Hessian. Ideally these directions are chosen so that the updated Hessian gives steps that are the same as those of the Newton method. Three approaches to determine the directions for updating are presented: the straightforward approach of simply cycling through the Cartesian unit vectors (finite difference), a concurrent set of minimizations, and the Lanczos method. We show the importance of using preconditioning and a multiple secant update in these approaches. For the Lanczos algorithm, an initial set of directions is required to start the method, and a number of possibilities are explored. To test the methods we used the standard 50-dimensional analytic Rosenbrock function. Results are also reported for the histidine dipeptide, the isoleucine tripeptide, and cyclic adenosine monophosphate. All of these systems show a significant speed-up with the number of processors up to about eight processors.

  2. Multiple Revolution Solutions for the Perturbed Lambert Problem using the Method of Particular Solutions and Picard Iteration

    NASA Astrophysics Data System (ADS)

    Woollands, Robyn M.; Read, Julie L.; Probe, Austin B.; Junkins, John L.

    2017-12-01

    We present a new method for solving the multiple revolution perturbed Lambert problem using the method of particular solutions and modified Chebyshev-Picard iteration. The method of particular solutions differs from the well-known Newton-shooting method in that integration of the state transition matrix (36 additional differential equations) is not required, and instead it makes use of a reference trajectory and a set of n particular solutions. Any numerical integrator can be used for solving two-point boundary problems with the method of particular solutions, however we show that using modified Chebyshev-Picard iteration affords an avenue for increased efficiency that is not available with other step-by-step integrators. We take advantage of the path approximation nature of modified Chebyshev-Picard iteration (nodes iteratively converge to fixed points in space) and utilize a variable fidelity force model for propagating the reference trajectory. Remarkably, we demonstrate that computing the particular solutions with only low fidelity function evaluations greatly increases the efficiency of the algorithm while maintaining machine precision accuracy. Our study reveals that solving the perturbed Lambert's problem using the method of particular solutions with modified Chebyshev-Picard iteration is about an order of magnitude faster compared with the classical shooting method and a tenth-twelfth order Runge-Kutta integrator. It is well known that the solution to Lambert's problem over multiple revolutions is not unique and to ensure that all possible solutions are considered we make use of a reliable preexisting Keplerian Lambert solver to warm start our perturbed algorithm.

  3. A majorized Newton-CG augmented Lagrangian-based finite element method for 3D restoration of geological models

    NASA Astrophysics Data System (ADS)

    Tang, Peipei; Wang, Chengjing; Dai, Xiaoxia

    2016-04-01

    In this paper, we propose a majorized Newton-CG augmented Lagrangian-based finite element method for 3D elastic frictionless contact problems. In this scheme, we discretize the restoration problem via the finite element method and reformulate it to a constrained optimization problem. Then we apply the majorized Newton-CG augmented Lagrangian method to solve the optimization problem, which is very suitable for the ill-conditioned case. Numerical results demonstrate that the proposed method is a very efficient algorithm for various large-scale 3D restorations of geological models, especially for the restoration of geological models with complicated faults.

  4. Decentralized Quasi-Newton Methods

    NASA Astrophysics Data System (ADS)

    Eisen, Mark; Mokhtari, Aryan; Ribeiro, Alejandro

    2017-05-01

    We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems. The D-BFGS method is of interest in problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not readily available, making second order decentralized methods impossible. D-BFGS is a fully distributed algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition. We additionally provide a formulation of the algorithm in asynchronous settings. Convergence of D-BFGS is established formally in both the synchronous and asynchronous settings and strong performance advantages relative to first order methods are shown numerically.

  5. Speed and convergence properties of gradient algorithms for optimization of IMRT.

    PubMed

    Zhang, Xiaodong; Liu, Helen; Wang, Xiaochun; Dong, Lei; Wu, Qiuwen; Mohan, Radhe

    2004-05-01

    Gradient algorithms are the most commonly employed search methods in the routine optimization of IMRT plans. It is well known that local minima can exist for dose-volume-based and biology-based objective functions. The purpose of this paper is to compare the relative speed of different gradient algorithms, to investigate the strategies for accelerating the optimization process, to assess the validity of these strategies, and to study the convergence properties of these algorithms for dose-volume and biological objective functions. With these aims in mind, we implemented Newton's, conjugate gradient (CG), and steepest descent (SD) algorithms for dose-volume- and EUD-based objective functions. Our implementation of Newton's algorithm approximates the second derivative matrix (Hessian) by its diagonal. The standard SD algorithm and the CG algorithm with "line minimization" were also implemented. In addition, we investigated the use of a variation of the CG algorithm, called the "scaled conjugate gradient" (SCG) algorithm. To accelerate the optimization process, we investigated the validity of the use of a "hybrid optimization" strategy, in which approximations to calculated dose distributions are used during most of the iterations. Published studies have indicated that getting trapped in local minima is not a significant problem. To investigate this issue further, we first obtained, by trial and error, and starting with uniform intensity distributions, the parameters of the dose-volume- or EUD-based objective functions which produced IMRT plans that satisfied the clinical requirements. Using the resulting optimized intensity distributions as the initial guess, we investigated the possibility of getting trapped in a local minimum. For most of the results presented, we used a lung cancer case. To illustrate the generality of our methods, the results for a prostate case are also presented. For both dose-volume- and EUD-based objective functions, Newton's method far outperforms other algorithms in terms of speed. The SCG algorithm, which avoids expensive "line minimization," can speed up the standard CG algorithm by at least a factor of 2. For the same initial conditions, all algorithms converge essentially to the same plan. However, we demonstrate that for any of the algorithms studied, starting with previously optimized intensity distributions as the initial guess but for different objective function parameters, the solution frequently gets trapped in local minima. We found that the initial intensity distribution obtained from IMRT optimization utilizing objective function parameters, which favor a specific anatomic structure, would lead to a local minimum corresponding to that structure. Our results indicate that from among the gradient algorithms tested, Newton's method appears to be the fastest by far. Different gradient algorithms have the same convergence properties for dose-volume- and EUD-based objective functions. The hybrid dose calculation strategy is valid and can significantly accelerate the optimization process. The degree of acceleration achieved depends on the type of optimization problem being addressed (e.g., IMRT optimization, intensity modulated beam configuration optimization, or objective function parameter optimization). Under special conditions, gradient algorithms will get trapped in local minima, and reoptimization, starting with the results of previous optimization, will lead to solutions that are generally not significantly different from the local minimum.

  6. Genetic algorithm-based improved DOA estimation using fourth-order cumulants

    NASA Astrophysics Data System (ADS)

    Ahmed, Ammar; Tufail, Muhammad

    2017-05-01

    Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and the ESPRIT principle, which results in the Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates the MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as of the GA for this optimisation problem has been illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, a small number of snapshots, closely spaced sources and high signal and noise correlation. Moreover, it is observed that optimisation using Newton's method is more likely to converge to false local optima, resulting in erroneous results. However, GA-based optimisation has been found attractive due to its global optimisation capability.

  7. A Class of Prediction-Correction Methods for Time-Varying Convex Optimization

    NASA Astrophysics Data System (ADS)

    Simonetto, Andrea; Mokhtari, Aryan; Koppel, Alec; Leus, Geert; Ribeiro, Alejandro

    2016-09-01

    This paper considers unconstrained convex optimization problems with time-varying objective functions. We propose algorithms with a discrete time-sampling scheme to find and track the solution trajectory based on prediction and correction steps, while sampling the problem data at a constant rate of $1/h$, where $h$ is the length of the sampling interval. The prediction step is derived by analyzing the iso-residual dynamics of the optimality conditions. The correction step adjusts for the distance between the current prediction and the optimizer at each time step, and consists of one or more gradient or Newton steps, which correspond to the gradient trajectory tracking (GTT) and Newton trajectory tracking (NTT) algorithms, respectively. Under suitable conditions, we establish that the asymptotic error incurred by both proposed methods behaves as $O(h^2)$, and in some cases as $O(h^4)$, which outperforms the state-of-the-art $O(h)$ error bound of correction-only methods. Moreover, when the characteristics of the objective function variation are not available, we propose approximate gradient and Newton tracking algorithms (AGT and ANT, respectively) that still attain these asymptotic error bounds. Numerical simulations demonstrate the practical utility of the proposed methods and that they improve upon existing techniques by several orders of magnitude.
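
    A minimal sketch of the prediction-correction idea, assuming a scalar time-varying quadratic whose optimizer follows r(t) = sin(t): the prediction step uses a backward finite difference of the gradient in time (in the spirit of the approximate AGT variant), and the correction step is a single gradient step. The step sizes and the toy objective are illustrative, not the paper's setting.

    ```python
    import numpy as np

    h, gamma = 0.1, 0.5                      # sampling interval and correction step size (illustrative)
    r = lambda t: np.sin(t)                  # trajectory of the optimizer of f(x; t) = 0.5 * (x - r(t))**2
    grad = lambda x, t: x - r(t)             # spatial gradient; the Hessian of this toy objective is 1

    x_c, x_pc = 0.0, 0.0                     # correction-only and prediction-correction iterates
    err_c, err_pc = [], []
    for k in range(1, 300):
        t_prev, t_now = (k - 1) * h, k * h
        # prediction (AGT-style): finite-difference the gradient in time at the current iterate
        x_pred = x_pc - (grad(x_pc, t_prev) - grad(x_pc, t_prev - h))
        # correction: one gradient step on the objective sampled at the new time
        x_pc = x_pred - gamma * grad(x_pred, t_now)
        x_c = x_c - gamma * grad(x_c, t_now)
        err_pc.append(abs(x_pc - r(t_now)))
        err_c.append(abs(x_c - r(t_now)))

    print("mean tracking error  correction-only: %.4f   prediction-correction: %.4f"
          % (np.mean(err_c[50:]), np.mean(err_pc[50:])))
    ```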

  8. Efficient Iterative Methods Applied to the Solution of Transonic Flows

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.; Lyrintzis, Anastasios S.; Chronopoulos, Anthony T.

    1996-02-01

    We investigate the use of an inexact Newton's method to solve the potential equations in the transonic regime. As a test case, we solve the two-dimensional steady transonic small disturbance equation. Approximate factorization/ADI techniques have traditionally been employed for implicit solutions of this nonlinear equation. Instead, we apply Newton's method using an exact analytical determination of the Jacobian with preconditioned conjugate gradient-like iterative solvers for solution of the linear systems in each Newton iteration. Two iterative solvers are tested: a block s-step version of the classical Orthomin(k) algorithm called orthogonal s-step Orthomin (OSOmin) and the well-known GMRES method. The preconditioner is a vectorizable and parallelizable version of incomplete LU (ILU) factorization. Efficiency of the Newton-iterative method on vector and parallel computer architectures is the main issue addressed. In vectorized tests on a single processor of the Cray C-90, the performance of Newton-OSOmin is superior to Newton-GMRES and a more traditional monotone AF/ADI method (MAF) for a variety of transonic Mach numbers and mesh sizes. Newton-GMRES is superior to MAF for some cases. The parallel performance of the Newton method is also found to be very good on multiple processors of the Cray C-90 and on the massively parallel Thinking Machines CM-5, where very fast execution rates (up to 9 Gflops) are found for large problems.
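
    The outer Newton/inner Krylov structure can be sketched with SciPy's matrix-free newton_krylov driver using GMRES for the inner solves. The nonlinear Poisson-type test problem below is only illustrative, and, unlike the study above, no ILU preconditioner is applied.

    ```python
    import numpy as np
    from scipy.optimize import newton_krylov

    n = 32
    h = 1.0 / (n + 1)

    def residual(u):
        """Finite-difference residual of -Laplace(u) + u**3 = 1 on the unit square, u = 0 on the boundary."""
        U = np.zeros((n + 2, n + 2))
        U[1:-1, 1:-1] = u.reshape(n, n)
        lap = (U[:-2, 1:-1] + U[2:, 1:-1] + U[1:-1, :-2] + U[1:-1, 2:] - 4 * U[1:-1, 1:-1]) / h**2
        return (-lap + U[1:-1, 1:-1]**3 - 1.0).ravel()

    # Inexact Newton with GMRES as the inner linear solver; the Jacobian is applied matrix-free
    u = newton_krylov(residual, np.zeros(n * n), method='gmres', f_tol=1e-8, verbose=False)
    print("max |residual| =", np.abs(residual(u)).max())
    ```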

  9. Accelerated gradient based diffuse optical tomographic image reconstruction.

    PubMed

    Biswas, Samir Kumar; Rajan, K; Vasu, R M

    2011-01-01

    We present fast reconstruction of the interior optical parameter distribution of a tissue and a tissue-mimicking phantom from boundary measurement data in diffuse optical tomography (DOT), using a new approach called Broyden-based model iterative image reconstruction (BMOBIIR) and adjoint Broyden-based MOBIIR (ABMOBIIR). DOT is a nonlinear and ill-posed inverse problem. The commonly used Newton-based MOBIIR algorithm requires repeated evaluation of the Jacobian, which consumes the bulk of the computation time for reconstruction. In this study, we propose a Broyden-based accelerated scheme for Jacobian computation, combined with a conjugate gradient scheme (CGS) for fast reconstruction. The method makes explicit use of secant and adjoint information that can be obtained from the forward solution of the diffusion equation. This approach reduces the computational time manyfold by approximating the system Jacobian successively through low-rank updates. Simulation studies have been carried out with single as well as multiple inhomogeneities. The algorithms are validated using an experimental study carried out on pork tissue with fat acting as an inhomogeneity. The results obtained through the proposed BMOBIIR and ABMOBIIR approaches are compared with those of the Newton-based MOBIIR algorithm. The mean squared error and execution time are used as metrics for comparing the results of reconstruction. We have shown through experimental and simulation studies that the Broyden-based MOBIIR and adjoint Broyden-based methods are capable of reconstructing single as well as multiple inhomogeneities in tissue and a tissue-mimicking phantom. The Broyden MOBIIR and adjoint Broyden MOBIIR methods are computationally simple, and they result in much faster implementations because they avoid direct evaluation of the Jacobian. The image reconstructions have been carried out with different initial values using the Newton, Broyden, and adjoint Broyden approaches. These algorithms work well when the initial guess is close to the true solution. However, when the initial guess is far from the true solution, Newton-based MOBIIR gives better reconstructed images. The proposed methods are found to be stable with noisy measurement data.
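
    The core of the approach, replacing repeated Jacobian evaluations by secant (rank-one) updates, can be sketched as follows. The small algebraic system stands in for the DOT forward model, and the one-time finite-difference Jacobian is an assumption of this sketch rather than the paper's adjoint-based construction.

    ```python
    import numpy as np

    def fd_jacobian(F, x, eps=1e-6):
        """Finite-difference Jacobian, evaluated once; Broyden updates then avoid re-evaluating it."""
        x = np.asarray(x, float)
        f0 = F(x)
        J = np.empty((f0.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            xp[j] += eps
            J[:, j] = (F(xp) - f0) / eps
        return J

    def broyden_solve(F, x0, tol=1e-10, max_iter=100):
        """Solve F(x) = 0 with Broyden's 'good' method: secant (rank-one) Jacobian updates."""
        x = np.asarray(x0, float)
        J = fd_jacobian(F, x)
        f = F(x)
        for _ in range(max_iter):
            s = np.linalg.solve(J, -f)                        # quasi-Newton step
            x = x + s
            f_new = F(x)
            if np.linalg.norm(f_new) < tol:
                break
            J += np.outer(f_new - f - J @ s, s) / (s @ s)     # low-rank update of the Jacobian
            f = f_new
        return x

    # Small nonlinear system standing in for the DOT forward model
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]**3])
    x = broyden_solve(F, x0=[0.8, 0.5])
    print(x, F(x))
    ```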

  10. A speciation solver for cement paste modeling and the semismooth Newton method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Georget, Fabien, E-mail: fabieng@princeton.edu; Prévost, Jean H., E-mail: prevost@princeton.edu; Vanderbei, Robert J., E-mail: rvdb@princeton.edu

    2015-02-15

    The mineral assemblage of a cement paste may vary considerably with its environment. In addition, the water content of a cement paste is relatively low and the ionic strength of the interstitial solution is often high. These conditions are extreme with respect to the common assumptions made in speciation problems. Furthermore, the common trial-and-error algorithm to find the phase assemblage does not provide any guarantee of convergence. We propose a speciation solver based on a semismooth Newton method adapted to the thermodynamic modeling of cement paste. The strong theoretical properties associated with these methods offer practical advantages. Results of numerical experiments indicate that the algorithm is reliable, robust, and efficient.
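
    To show the mechanics of a semismooth Newton method, here is a minimal sketch for a small linear complementarity problem, a stand-in for the mineral present/absent conditions of a speciation problem. It applies Newton's method to the nonsmooth reformulation min(x, F(x)) = 0 using an element of the generalized Jacobian; the matrix M and vector q are illustrative.

    ```python
    import numpy as np

    def semismooth_newton(F, J, x0, tol=1e-12, max_iter=50):
        """Solve the complementarity problem x >= 0, F(x) >= 0, x * F(x) = 0 by applying
        Newton's method to the semismooth reformulation Phi(x) = min(x, F(x)) = 0."""
        x = np.asarray(x0, float)
        for _ in range(max_iter):
            Fx = F(x)
            phi = np.minimum(x, Fx)
            if np.linalg.norm(phi) < tol:
                break
            # an element of the generalized Jacobian: identity rows where x < F(x), Jacobian rows otherwise
            G = np.where((x < Fx)[:, None], np.eye(x.size), J(x))
            x = x + np.linalg.solve(G, -phi)
        return x

    # Toy linear complementarity problem F(x) = M x + q (a stand-in for mass-action residuals)
    M = np.array([[3.0, 1.0], [1.0, 2.0]])
    q = np.array([-2.0, 1.0])
    x = semismooth_newton(lambda v: M @ v + q, lambda v: M, x0=np.ones(2))
    print(x, M @ x + q)   # each component satisfies x_i = 0 or F_i(x) = 0
    ```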

  11. Nonlinearly preconditioned semismooth Newton methods for variational inequality solution of two-phase flow in porous media

    NASA Astrophysics Data System (ADS)

    Yang, Haijian; Sun, Shuyu; Yang, Chao

    2017-03-01

    Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.

  12. The Broad Iron K-alpha line of Cygnus X-1 as Seen by XMM-Newton in the EPIC-pn Modified Timing Mode

    NASA Technical Reports Server (NTRS)

    Duro, Refiz; Dauser, Thomas; Wilms, Jorn; Pottschmidt, Katja; Nowak, Michael A.; Fritz, Sonja; Kendziorra, Eckhard; Kirsch, Marcus G. F.; Reynolds, Christopher S.; Staubert, Rudiger

    2011-01-01

    We present the analysis of the broadened, fluorescent iron K(alpha) line in simultaneous XMM-Newton and RXTE data from the black hole Cygnus X-1. The XMM-Newton data were taken in a modified version of the Timing Mode of the EPIC-pn camera. In this mode the lower energy threshold of the instrument is increased to 2.8 keV to avoid telemetry dropouts due to the brightness of the source, while at the same time preserving the signal-to-noise ratio in the Fe K(alpha) band. We find that the best-fit spectrum consists of the sum of an exponentially cut-off power law and relativistically smeared, ionized reflection. The shape of the broadened Fe K(alpha) feature is due to strong Compton broadening combined with relativistic broadening. Assuming a standard, thin accretion disk, the black hole is close to maximally rotating. Key words: X-rays: binaries - black hole physics - gravitation

  13. AUTOCLASSIFICATION OF THE VARIABLE 3XMM SOURCES USING THE RANDOM FOREST MACHINE LEARNING ALGORITHM

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farrell, Sean A.; Murphy, Tara; Lo, Kitty K., E-mail: s.farrell@physics.usyd.edu.au

    In the current era of large surveys and massive data sets, autoclassification of astrophysical sources using intelligent algorithms is becoming increasingly important. In this paper we present the catalog of variable sources in the Third XMM-Newton Serendipitous Source catalog (3XMM) autoclassified using the Random Forest machine learning algorithm. We used a sample of manually classified variable sources from the second data release of the XMM-Newton catalogs (2XMMi-DR2) to train the classifier, obtaining an accuracy of ∼92%. We also evaluated the effectiveness of identifying spurious detections using a sample of spurious sources, achieving an accuracy of ∼95%. Manual investigation of a random sample of classified sources confirmed these accuracy levels and showed that the Random Forest machine learning algorithm is highly effective at automatically classifying 3XMM sources. Here we present the catalog of classified 3XMM variable sources. We also present three previously unidentified unusual sources that were flagged as outlier sources by the algorithm: a new candidate supergiant fast X-ray transient, a 400 s X-ray pulsar, and an eclipsing 5 hr binary system coincident with a known Cepheid.
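
    The classification workflow can be sketched with scikit-learn's RandomForestClassifier; the synthetic feature matrix and labels below merely stand in for the 2XMMi-DR2 training features, and the out-of-bag and cross-validation scores only illustrate how accuracies of the kind quoted above would be estimated.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    # Synthetic stand-ins for variability features (e.g. fractional variability, hardness ratios, ...)
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 6))
    y = (X[:, 0] + 0.5 * X[:, 1]**2 + 0.1 * rng.normal(size=2000) > 1.0).astype(int)

    clf = RandomForestClassifier(n_estimators=500, oob_score=True, random_state=0)
    clf.fit(X, y)
    print("out-of-bag accuracy:", clf.oob_score_)
    print("5-fold CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    print("class probabilities for one source:", clf.predict_proba(X[:1]))
    ```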

  14. Development of A Thrust Stand to Meet LISA Mission Requirements

    NASA Technical Reports Server (NTRS)

    Willis, William D., III; Zakrzwski, C. M.; Bauer, Frank H. (Technical Monitor)

    2002-01-01

    A thrust stand has been built and tested that is capable of measuring the force-noise produced by electrostatic micro-Newton thrusters. The LISA mission's Disturbance Reduction System (DRS) requires thrusters that are capable of producing continuous thrust levels between 1 and 100 micro-Newtons with a resolution of 0.1 micro-Newton. The stationary force-noise produced by these thrusters must not exceed 0.1 micro-Newton/√Hz in a 10 Hz bandwidth. The LISA Thrust Stand (LTS) is a torsion-balance type thrust stand designed to meet the following requirements: stationary force-noise measurements from 10(exp -4) to 1 Hz with 0.1 micro-Newton resolution, absolute thrust measurements from 1-100 micro-Newton with better than 0.1 micro-Newton resolution, and dynamic thruster response from 10(exp -4) to 10 Hz. The LTS employs a unique vertical configuration, an autocollimator for angular position measurements, and electrostatic actuators that are used for dynamic pendulum control and null-mode measurements. Force-noise levels are measured indirectly by characterizing the thrust stand as a spring-mass system. The LTS was initially designed to test the indium FEEP thruster developed by the Austrian Research Center in Seibersdorf (ARCS), but can be modified for testing other thrusters of this type.

  15. Airbreathing engine selection criteria for SSTO propulsion system

    NASA Astrophysics Data System (ADS)

    Ohkami, Yoshiaki; Maita, Masataka

    1995-02-01

    This paper presents airbreathing engine selection criteria to be applied to the propulsion system of a Single Stage To Orbit (SSTO) vehicle. To establish the criteria, a relation among three major parameters, i.e., delta-V capability, weight penalty, and effective specific impulse of the engine subsystem, is derived and compared to the corresponding parameters of the LH2/LOX rocket engine. The effective specific impulse is a function of the engine I(sub sp) and vehicle thrust-to-drag ratio, which is approximated by a function of the vehicle velocity. The weight penalty includes the engine dry weight and the cooling subsystem weight. The delta-V capability is defined by the velocity region starting from the minimum operating velocity up to the maximum velocity. The vehicle feasibility is investigated in terms of the structural and propellant weights, which requires an iteration process adjusting the system parameters. The system parameters are computed by iteration based on the Newton-Raphson method. It has been concluded that performance in the higher velocity region is extremely important, so that the airbreathing engines are required to operate beyond the velocity equivalent to the rocket engine exhaust velocity (approximately 4500 m/s).

  16. Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing

    NASA Technical Reports Server (NTRS)

    Rodriguez, G.; Scheid, R. E., Jr.

    1987-01-01

    This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
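
    The Gauss-Newton flavour of the second-order search (Hessian approximated by products of first derivatives) can be illustrated on a planar two-link arm. This dense sketch ignores the recursive Kalman-filter/smoother factorization of the paper; the link lengths, target, and damping term are illustrative.

    ```python
    import numpy as np

    L1, L2 = 1.0, 0.7                               # link lengths (illustrative)

    def tip(q):
        """Forward kinematics of a planar two-link arm."""
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def jacobian(q):
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                         [ L1 * c1 + L2 * c12,  L2 * c12]])

    def ik_gauss_newton(target, q0, iters=30, damping=1e-6):
        """Minimise ||tip(q) - target||^2; the Hessian is approximated by J.T @ J."""
        q = np.asarray(q0, float)
        for _ in range(iters):
            e = tip(q) - target
            J = jacobian(q)
            H = J.T @ J + damping * np.eye(2)       # Gauss-Markov style Hessian from first derivatives
            q -= np.linalg.solve(H, J.T @ e)
        return q

    q = ik_gauss_newton(target=np.array([1.2, 0.6]), q0=[0.3, 0.3])
    print(q, tip(q))
    ```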

  17. A numerical simulation of peristaltic motion in the ureter using fluid structure interactions.

    PubMed

    Vahidi, Bahman; Fatouraee, Nasser

    2007-01-01

    An axisymmetric model with fluid-structure interactions (FSI) is introduced and solved to perform ureter flow and stress analysis. The Navier-Stokes equations are solved for the fluid and a linear elastic model is used for the ureter. The finite element equations for both the structure and the fluid were solved by the Newton-Raphson iterative method. Our results indicated that shear stresses were high around the throat of the moving contracted wall. The pressure gradient magnitude along the ureter wall and the symmetry line had its maximum value around the throat of the moving contracted wall and decreased as the peristalsis propagated toward the bladder. The flow rate at the ureter outlet at the end of the peristaltic motion was about 650 mm3/s. During propagation of the peristalsis toward the bladder, the inlet backward flow region was limited to the areas near the symmetry line, but the inner ureter backward flow regions extended to the whole contracted part of the ureter. The backward flow vanished 1.5 seconds after the start of peristalsis propagation, after which the urine flow was forward along the whole ureter length; reflux is therefore most likely to occur at the beginning of the wall's peristaltic motion.

  18. Effect of initial strain and material nonlinearity on the nonlinear static and dynamic response of graphene sheets

    NASA Astrophysics Data System (ADS)

    Singh, Sandeep; Patel, B. P.

    2018-06-01

    Computationally efficient multiscale modelling based on the Cauchy-Born rule in conjunction with the finite element method is employed to study static and dynamic characteristics of graphene sheets, with/without considering initial strain, involving Green-Lagrange geometric and material nonlinearities. The strain energy density function at the continuum level is established by coupling the deformation at the continuum level to that at the atomic level through the Cauchy-Born rule. The atomic interactions between carbon atoms are modelled through the Tersoff-Brenner potential. The governing equation of motion obtained using Hamilton's principle is solved through the standard Newton-Raphson method for the nonlinear static response and Newmark's time integration technique to obtain nonlinear transient response characteristics. The effect of initial strain on the linear free vibration frequencies and on the nonlinear static and dynamic response characteristics is investigated in detail. The present multiscale modelling based results are found to be in good agreement with those obtained through molecular mechanics simulation. Two different types of boundary constraints generally used in MM simulation are explored in detail and a few interesting findings are brought out. The effect of initial strain is found to be greater in the linear response than in the nonlinear response.

  19. Deployment of a multi-link flexible structure

    NASA Astrophysics Data System (ADS)

    Na, Kyung-Su; Kim, Ji-Hwan

    2006-06-01

    Deployment of a multi-link beam structure undergoing locking is analyzed in the Timoshenko beam theory. In the modeling of the system, dynamic forces are assumed to be torques and restoring forces due to the torsion spring at each joint. Hamilton's principle is used to determine the equations of motion and the finite element method is adopted to analyze the system. Newmark time integration and Newton-Raphson iteration methods are used to solve for the non-linear equations of motion at each time step. The locking at the joints of the multi-link flexible structure is analyzed by the momentum balance method. Numerical results are compared with the previous experimental data. The angles and angular velocities of each joint, tip displacement, and velocity of each link are investigated to study the motions of the links at each time step. To analyze the effect of thickness on the motion of the link, the angle and the tip displacement of each link are compared according to the various slenderness ratios. Additionally, in order to investigate the effect of shear, the tip displacements of a Timoshenko beam are compared with those of an Euler-Bernoulli beam.

  20. Nonlinear modelling of high-speed catenary based on analytical expressions of cable and truss elements

    NASA Astrophysics Data System (ADS)

    Song, Yang; Liu, Zhigang; Wang, Hongrui; Lu, Xiaobing; Zhang, Jing

    2015-10-01

    Due to the intrinsic nonlinear characteristics and complex structure of the high-speed catenary system, a modelling method is proposed based on the analytical expressions of nonlinear cable and truss elements. The calculation procedure for solving the initial equilibrium state is proposed based on the Newton-Raphson iteration method. The deformed configuration of the catenary system as well as the initial length of each wire can be calculated. The accuracy and validity of the computed initial equilibrium state are verified by comparison with the separate model method, the absolute nodal coordinate formulation and other methods in the previous literature. Then, the proposed model is combined with a lumped pantograph model and a dynamic simulation procedure is proposed. The accuracy is guaranteed by multiple iterative calculations in each time step. The dynamic performance of the proposed model is validated by comparison with EN 50318, the results of finite element method software and a SIEMENS simulation report, respectively. Finally, the influence of the catenary design parameters (such as the reserved sag and pre-tension) on the dynamic performance is preliminarily analysed using the proposed model.

  1. Estimation of the longitudinal and lateral-directional aerodynamic parameters from flight data for the NASA F/A-18 HARV

    NASA Technical Reports Server (NTRS)

    Napolitano, Marcello R.

    1996-01-01

    This progress report presents the results of an investigation focused on parameter identification for the NASA F/A-18 HARV. This aircraft was used in the high alpha research program at the NASA Dryden Flight Research Center. In this study the longitudinal and lateral-directional stability derivatives are estimated from flight data using the Maximum Likelihood method coupled with a Newton-Raphson minimization technique. The objective is to estimate an aerodynamic model describing the aircraft dynamics over a range of angle of attack from 5 deg to 60 deg. The mathematical model is built using the traditional static and dynamic derivative buildup. Flight data used in this analysis were from a variety of maneuvers. The longitudinal maneuvers included large amplitude multiple doublets, optimal inputs, frequency sweeps, and pilot pitch stick inputs. The lateral-directional maneuvers consisted of large amplitude multiple doublets, optimal inputs and pilot stick and rudder inputs. The parameter estimation code pEst, developed at NASA Dryden, was used in this investigation. Results of the estimation process from alpha = 5 deg to alpha = 60 deg are presented and discussed.
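
    As a generic illustration of maximum likelihood estimation driven by Newton-Raphson minimization (not the pEst output-error formulation used in this study), the sketch below fits a logistic model by Newton steps on the negative log-likelihood; the synthetic data and parameter values are purely illustrative.

    ```python
    import numpy as np

    def logistic_nll_newton(X, y, iters=25):
        """Maximum-likelihood fit of a logistic model by Newton-Raphson on the negative log-likelihood."""
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            mu = 1.0 / (1.0 + np.exp(-X @ beta))        # model probabilities
            grad = X.T @ (mu - y)                       # gradient of the negative log-likelihood
            H = X.T @ (X * (mu * (1.0 - mu))[:, None])  # Hessian of the negative log-likelihood
            beta -= np.linalg.solve(H, grad)            # Newton-Raphson update
        return beta

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(500), rng.normal(size=(500, 2))])
    true_beta = np.array([-0.5, 1.5, -2.0])
    y = (rng.random(500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
    print(logistic_nll_newton(X, y))    # estimates should land near true_beta
    ```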

  2. A Second Law Based Unstructured Finite Volume Procedure for Generalized Flow Simulation

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok

    1998-01-01

    An unstructured finite volume procedure has been developed for steady and transient thermo-fluid dynamic analysis of fluid systems and components. The procedure is applicable for a flow network consisting of pipes and various fittings where flow is assumed to be one dimensional. It can also be used to simulate flow in a component by modeling a multi-dimensional flow using the same numerical scheme. The flow domain is discretized into a number of interconnected control volumes located arbitrarily in space. The conservation equations for each control volume account for the transport of mass, momentum and entropy from the neighboring control volumes. In addition, they also include the sources of each conserved variable and time dependent terms. The source term of entropy equation contains entropy generation due to heat transfer and fluid friction. Thermodynamic properties are computed from the equation of state of a real fluid. The system of equations is solved by a hybrid numerical method which is a combination of simultaneous Newton-Raphson and successive substitution schemes. The paper also describes the application and verification of the procedure by comparing its predictions with the analytical and numerical solution of several benchmark problems.

  3. Critical point analysis of phase envelope diagram

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soetikno, Darmadi; Siagian, Ucok W. R.; Kusdiantara, Rudy, E-mail: rkusdiantara@s.itb.ac.id

    2014-03-24

    A phase diagram, or phase envelope, is a relation between temperature and pressure that shows the conditions of equilibrium between the different phases of chemical compounds, mixtures of compounds, and solutions. The phase diagram is an important topic in chemical thermodynamics and hydrocarbon reservoir engineering. It is very useful for process simulation, hydrocarbon reactor design, and petroleum engineering studies. It is constructed from the bubble line, the dew line, and the critical point. The bubble line and dew line are composed of bubble points and dew points, respectively. The bubble point is the first point at which gas is formed when a liquid is heated. Meanwhile, the dew point is the first point at which liquid is formed when the gas is cooled. The critical point is the point where all of the properties of the gas and liquid phases become equal, such as temperature, pressure, amount of substance, and others. The critical point is very useful in fuel processing and the dissolution of certain chemicals. In this paper, we determine the critical point analytically. The result is then compared with numerical calculations based on the Peng-Robinson equation using the Newton-Raphson method. As case studies, several hydrocarbon mixtures are simulated using Matlab.
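
    A minimal sketch of the numerical side, assuming a single component: the Newton-Raphson iteration below finds a compressibility-factor root of the Peng-Robinson cubic at a given temperature and pressure. The methane-like critical constants are illustrative, and the full phase-envelope and critical-point calculation of the paper involves considerably more than this single root solve.

    ```python
    import numpy as np

    R = 8.314462   # J / (mol K)

    def pr_cubic_coeffs(T, P, Tc, Pc, omega):
        """Coefficients of the Peng-Robinson cubic in the compressibility factor Z."""
        kappa = 0.37464 + 1.54226 * omega - 0.26992 * omega**2
        alpha = (1.0 + kappa * (1.0 - np.sqrt(T / Tc)))**2
        a = 0.45724 * R**2 * Tc**2 / Pc * alpha
        b = 0.07780 * R * Tc / Pc
        A, B = a * P / (R * T)**2, b * P / (R * T)
        return (1.0, -(1.0 - B), A - 3.0 * B**2 - 2.0 * B, -(A * B - B**2 - B**3))

    def newton_raphson_root(coeffs, z0, tol=1e-12, max_iter=100):
        """Newton-Raphson on the cubic, starting from the ideal-gas root Z = 1."""
        c3, c2, c1, c0 = coeffs
        f  = lambda z: ((c3 * z + c2) * z + c1) * z + c0
        df = lambda z: (3 * c3 * z + 2 * c2) * z + c1
        z = z0
        for _ in range(max_iter):
            step = f(z) / df(z)
            z -= step
            if abs(step) < tol:
                break
        return z

    # Methane-like properties (illustrative): Tc = 190.6 K, Pc = 45.99e5 Pa, omega = 0.011
    coeffs = pr_cubic_coeffs(T=300.0, P=50e5, Tc=190.6, Pc=45.99e5, omega=0.011)
    print("vapour-like Z:", newton_raphson_root(coeffs, z0=1.0))
    ```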

  4. Multiple environment single system quantum mechanical/molecular mechanical (MESS-QM/MM) calculations. 1. Estimation of polarization energies.

    PubMed

    Sodt, Alexander J; Mei, Ye; König, Gerhard; Tao, Peng; Steele, Ryan P; Brooks, Bernard R; Shao, Yihan

    2015-03-05

    In combined quantum mechanical/molecular mechanical (QM/MM) free energy calculations, it is often advantageous to have a frozen geometry for the quantum mechanical (QM) region. For such multiple-environment single-system (MESS) cases, two schemes are proposed here for estimating the polarization energy: the first scheme, termed MESS-E, involves a Roothaan step extrapolation of the self-consistent field (SCF) energy; whereas the other scheme, termed MESS-H, employs a Newton-Raphson correction using an approximate inverse electronic Hessian of the QM region (which is constructed only once). Both schemes are extremely efficient, because the expensive Fock updates and SCF iterations in standard QM/MM calculations are completely avoided at each configuration. They produce reasonably accurate QM/MM polarization energies: MESS-E can predict the polarization energy within 0.25 kcal/mol in terms of the mean signed error for two of our test cases, solvated methanol and solvated β-alanine, using the M06-2X or ωB97X-D functionals; MESS-H can reproduce the polarization energy within 0.2 kcal/mol for these two cases and for the oxyluciferin-luciferase complex, if the approximate inverse electronic Hessians are constructed with sufficient accuracy.

  5. Approximation of the Newton Step by a Defect Correction Process

    NASA Technical Reports Server (NTRS)

    Arian, E.; Batterman, A.; Sachs, E. W.

    1999-01-01

    In this paper, an optimal control problem governed by a partial differential equation is considered. The Newton step for this system can be computed by solving a coupled system of equations. To do this efficiently with an iterative defect correction process, a modifying operator is introduced into the system. This operator is motivated by local mode analysis. The operator can be used also for preconditioning in Generalized Minimum Residual (GMRES). We give a detailed convergence analysis for the defect correction process and show the derivation of the modifying operator. Numerical tests are done on the small disturbance shape optimization problem in two dimensions for the defect correction process and for GMRES.

  6. Adaptive Beam Loading Compensation in Room Temperature Bunching Cavities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edelen, J. P.; Chase, B. E.; Cullerton, E.

    In this paper we present the design, simulation, and proof-of-principle results of an optimization-based adaptive feedforward algorithm for beam-loading compensation in a high-impedance room temperature cavity. We begin with an overview of prior developments in beam loading compensation. Then we discuss different techniques for adaptive beam loading compensation and why the use of Newton's method is of interest for this application. This is followed by simulation and initial experimental results of this method.

  7. Adaptive estimation of nonlinear parameters of a nonholonomic spherical robot using a modified fuzzy-based speed gradient algorithm

    NASA Astrophysics Data System (ADS)

    Roozegar, Mehdi; Mahjoob, Mohammad J.; Ayati, Moosa

    2017-05-01

    This paper deals with adaptive estimation of the unknown parameters and states of a pendulum-driven spherical robot (PDSR), which is a nonlinear in parameters (NLP) chaotic system with parametric uncertainties. Firstly, the mathematical model of the robot is deduced by applying the Newton-Euler methodology for a system of rigid bodies. Then, based on the speed gradient (SG) algorithm, the states and unknown parameters of the robot are estimated online for different step length gains and initial conditions. The estimated parameters are updated adaptively according to the error between estimated and true state values. Since the errors of the estimated states and parameters as well as the convergence rates depend significantly on the value of step length gain, this gain should be chosen optimally. Hence, a heuristic fuzzy logic controller is employed to adjust the gain adaptively. Simulation results indicate that the proposed approach is highly encouraging for identification of this NLP chaotic system even if the initial conditions change and the uncertainties increase; therefore, it is reliable to be implemented on a real robot.

  8. A novel approach to solve nonlinear Fredholm integral equations of the second kind.

    PubMed

    Li, Hu; Huang, Jin

    2016-01-01

    In this paper, we present a novel approach to solving nonlinear Fredholm integral equations of the second kind. The algorithm is constructed from the integral mean value theorem and Newton iteration. Convergence and error analyses of the numerical solutions are given. Moreover, numerical examples show that the algorithm is very effective and simple.

  9. Effects of ionic strength and ion pairing on (plant-wide) modelling of anaerobic digestion.

    PubMed

    Solon, Kimberly; Flores-Alsina, Xavier; Mbamba, Christian Kazadi; Volcke, Eveline I P; Tait, Stephan; Batstone, Damien; Gernaey, Krist V; Jeppsson, Ulf

    2015-03-01

    Plant-wide models of wastewater treatment (such as the Benchmark Simulation Model No. 2 or BSM2) are gaining popularity for use in holistic virtual studies of treatment plant control and operations. The objective of this study is to show the influence of ionic strength (as activity corrections) and ion pairing on modelling of anaerobic digestion processes in such plant-wide models of wastewater treatment. Using the BSM2 as a case study with a number of model variants and cationic load scenarios, this paper presents the effects of an improved physico-chemical description on model predictions and overall plant performance indicators, namely effluent quality index (EQI) and operational cost index (OCI). The acid-base equilibria implemented in the Anaerobic Digestion Model No. 1 (ADM1) are modified to account for non-ideal aqueous-phase chemistry. The model corrects for ionic strength via the Davies approach to consider chemical activities instead of molar concentrations. A speciation sub-routine based on a multi-dimensional Newton-Raphson (NR) iteration method is developed to address algebraic interdependencies. The model also includes ion pairs that play an important role in wastewater treatment. The paper describes: 1) how the anaerobic digester performance is affected by physico-chemical corrections; 2) the effect on pH and the anaerobic digestion products (CO2, CH4 and H2); and, 3) how these variations are propagated from the sludge treatment to the water line. Results at high ionic strength demonstrate that corrections to account for non-ideal conditions lead to significant differences in predicted process performance (up to 18% for effluent quality and 7% for operational cost) but that for pH prediction, activity corrections are more important than ion pairing effects. Both are likely to be required when precipitation is to be modelled. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Newton-based optimization for Kullback-Leibler nonnegative tensor factorizations

    DOE PAGES

    Plantenga, Todd; Kolda, Tamara G.; Hansen, Samantha

    2015-04-30

    Tensor factorizations with nonnegativity constraints have found application in analysing data from cyber traffic, social networks, and other areas. We consider application data best described as being generated by a Poisson process (e.g. count data), which leads to sparse tensors that can be modelled by sparse factor matrices. In this paper, we investigate efficient techniques for computing an appropriate canonical polyadic tensor factorization based on the Kullback–Leibler divergence function. We propose novel subproblem solvers within the standard alternating block variable approach. Our new methods exploit structure and reformulate the optimization problem as small independent subproblems. We employ bound-constrained Newton and quasi-Newton methods. Finally, we compare our algorithms against other codes, demonstrating superior speed for high accuracy results and the ability to quickly find sparse solutions.

  11. Composite Gauss-Legendre Quadrature with Error Control

    ERIC Educational Resources Information Center

    Prentice, J. S. C.

    2011-01-01

    We describe composite Gauss-Legendre quadrature for determining definite integrals, including a means of controlling the approximation error. We compare the form and performance of the algorithm with standard Newton-Cotes quadrature. (Contains 1 table.)
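
    A minimal composite Gauss-Legendre routine with a crude panel-doubling error control is sketched below; the 4-node rule, the doubling strategy, and the exponential test integrand are illustrative choices and not necessarily the scheme of the article.

    ```python
    import numpy as np

    def composite_gauss_legendre(f, a, b, n_panels=8, n_nodes=4):
        """Composite Gauss-Legendre quadrature: an n_nodes rule applied on each of n_panels subintervals."""
        x, w = np.polynomial.legendre.leggauss(n_nodes)     # nodes and weights on [-1, 1]
        edges = np.linspace(a, b, n_panels + 1)
        total = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            half = 0.5 * (hi - lo)
            total += half * np.sum(w * f(half * (x + 1.0) + lo))
        return total

    def adaptive_panels(f, a, b, tol=1e-10, n_nodes=4):
        """Crude error control: double the number of panels until two successive estimates agree."""
        n, prev = 1, composite_gauss_legendre(f, a, b, 1, n_nodes)
        while True:
            n *= 2
            cur = composite_gauss_legendre(f, a, b, n, n_nodes)
            if abs(cur - prev) < tol:
                return cur, n
            prev = cur

    val, panels = adaptive_panels(np.exp, 0.0, 1.0)
    print(val, np.e - 1.0, panels)   # the integral of exp on [0, 1] is e - 1
    ```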

  12. Analysis of variation matrix array by bilinear least squares-residual bilinearization (BLLS-RBL) for resolving and quantifying of foodstuff dyes in a candy sample.

    PubMed

    Asadpour-Zeynali, Karim; Maryam Sajjadi, S; Taherzadeh, Fatemeh; Rahmanian, Reza

    2014-04-05

    The bilinear least squares (BLLS) method is one of the most suitable algorithms for second-order calibration. The original BLLS method is not applicable to second-order pH-spectral data when an analyte has more than one spectroscopically active species. Bilinear least squares-residual bilinearization (BLLS-RBL) was developed to achieve the second-order advantage for the analysis of complex mixtures. Although the modified method is useful, the pure profiles cannot be obtained; only their linear combinations are obtained. Moreover, for prediction of the analyte in an unknown sample, the original RBL algorithm may diverge instead of converging to the desired analyte concentrations. Therefore, a Gauss-Newton RBL algorithm should be used, which is not as simple as the original protocol. Also, the analyte concentration can be predicted on the basis of each of the equilibrating species of the component of interest, and these predictions are not exactly the same. The aim of the present work is to tackle the non-uniqueness problem in the second-order calibration of monoprotic acid mixtures and the divergence of RBL. Each pH-absorbance matrix was pretreated by subtracting the first spectrum from the other spectra in the data set to produce a full-rank array called the variation matrix. The variation matrices were then analyzed uniquely by the original BLLS-RBL, which is more parsimonious than its modified counterpart. The proposed method was applied to simulated data as well as to the analysis of real data. Sunset yellow and Carmosine, as monoprotic acids, were determined in a candy sample in the presence of unknown interference by this method. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Reconstruction of multiple cracks from experimental electrostatic boundary measurements

    NASA Technical Reports Server (NTRS)

    Bryan, Kurt; Liepa, Valdis; Vogelius, Michael

    1993-01-01

    An algorithm for recovering a collection of linear cracks in a homogeneous electrical conductor from boundary measurements of voltages induced by specified current fluxes is described. The technique is a variation of Newton's method and is based on taking weighted averages of the boundary data. An apparatus that was constructed specifically for generating laboratory data on which to test the algorithm is also described. The algorithm is applied to a number of different test cases and the results are discussed.

  14. Marine Controlled-Source Electromagnetic 2D Inversion for synthetic models.

    NASA Astrophysics Data System (ADS)

    Liu, Y.; Li, Y.

    2016-12-01

    We present a 2D inverse algorithm for frequency domain marine controlled-source electromagnetic (CSEM) data, which is based on the regularized Gauss-Newton approach. As a forward solver, our parallel adaptive finite element forward modeling program is employed. It is a self-adaptive, goal-oriented grid refinement algorithm in which a finite element analysis is performed on a sequence of refined meshes. The mesh refinement process is guided by a dual error estimate weighting to bias refinement towards elements that affect the solution at the EM receiver locations. With the use of the direct solver (MUMPS), we can effectively compute the electromagnetic fields for multi-sources and parametric sensitivities. We also implement the parallel data domain decomposition approach of Key and Ovall (2011), with the goal of being able to compute accurate responses in parallel for complicated models and a full suite of data parameters typical of offshore CSEM surveys. All minimizations are carried out by using the Gauss-Newton algorithm and model perturbations at each iteration step are obtained by using the Inexact Conjugate Gradient iteration method. Synthetic test inversions are presented.

  15. Algorithms for the computation of solutions of the Ornstein-Zernike equation.

    PubMed

    Peplow, A T; Beardmore, R E; Bresme, F

    2006-10-01

    We introduce a robust and efficient methodology to solve the Ornstein-Zernike integral equation using the pseudoarc length (PAL) continuation method that reformulates the integral equation in an equivalent but nonstandard form. This enables the computation of solutions in regions where the compressibility experiences large changes or where the existence of multiple solutions and so-called branch points prevents Newton's method from converging. We illustrate the use of the algorithm with a difficult problem that arises in the numerical solution of integral equations, namely the evaluation of the so-called no-solution line of the Ornstein-Zernike hypernetted chain (HNC) integral equation for the Lennard-Jones potential. We are able to use the PAL algorithm to solve the integral equation along this line and to connect physical and nonphysical solution branches (both isotherms and isochores) where appropriate. We also show that PAL continuation can compute solutions within the no-solution region that cannot be computed when Newton and Picard methods are applied directly to the integral equation. While many solutions that we find are new, some correspond to states with negative compressibility and consequently are not physical.

  16. A nonlinear relaxation/quasi-Newton algorithm for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Edwards, Jack R.; Mcrae, D. S.

    1992-01-01

    A highly efficient implicit method for the computation of steady, two-dimensional compressible Navier-Stokes flowfields is presented. The discretization of the governing equations is hybrid in nature, with flux-vector splitting utilized in the streamwise direction and central differences with flux-limited artificial dissipation used for the transverse fluxes. Line Jacobi relaxation is used to provide a suitable initial guess for a new nonlinear iteration strategy based on line Gauss-Seidel sweeps. The applicability of quasi-Newton methods as convergence accelerators for this and other line relaxation algorithms is discussed, and efficient implementations of such techniques are presented. Convergence histories and comparisons with experimental data are presented for supersonic flow over a flat plate and for several high-speed compression corner interactions. Results indicate a marked improvement in computational efficiency over more conventional upwind relaxation strategies, particularly for flowfields containing large pockets of streamwise subsonic flow.

  17. COMPARISON OF IMPLICIT SCHEMES TO SOLVE EQUATIONS OF RADIATION HYDRODYNAMICS WITH A FLUX-LIMITED DIFFUSION APPROXIMATION: NEWTON–RAPHSON, OPERATOR SPLITTING, AND LINEARIZATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tetsu, Hiroyuki; Nakamoto, Taishi, E-mail: h.tetsu@geo.titech.ac.jp

    Radiation is an important process of energy transport, a force, and a basis for synthetic observations, so radiation hydrodynamics (RHD) calculations have occupied an important place in astrophysics. However, although the progress in computational technology is remarkable, their high numerical cost is still a persistent problem. In this work, we compare the following schemes used to solve the nonlinear simultaneous equations of an RHD algorithm with the flux-limited diffusion approximation: the Newton–Raphson (NR) method, operator splitting, and linearization (LIN), from the perspective of the computational cost involved. For operator splitting, in addition to the traditional simple operator splitting (SOS) scheme, we examined the scheme developed by Douglas and Rachford (DROS). We solve three test problems (the thermal relaxation mode, the relaxation and the propagation of linear waves, and radiating shock) using these schemes and then compare their dependence on the time step size. As a result, we find the conditions of the time step size necessary for adopting each scheme. The LIN scheme is superior to other schemes if the ratio of radiation pressure to gas pressure is sufficiently low. On the other hand, DROS can be the most efficient scheme if the ratio is high. Although the NR scheme can be adopted independently of the regime, especially in a problem that involves optically thin regions, the convergence tends to be worse. In all cases, SOS is not practical.

  18. A general rough-surface inversion algorithm: Theory and application to SAR data

    NASA Technical Reports Server (NTRS)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. The least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason it is not limited to inversion of rough surfaces, and can be applied to any parameterized scattering process.

  19. Testing Bayesian and heuristic predictions of mass judgments of colliding objects

    PubMed Central

    Sanborn, Adam N.

    2014-01-01

    Mass judgments of colliding objects have been used to explore people's understanding of the physical world because they are ecologically relevant, yet people display biases that are most easily explained by a small set of heuristics. Recent work has challenged the heuristic explanation, by producing the same biases from a model that copes with perceptual uncertainty by using Bayesian inference with a prior based on the correct combination rules from Newtonian mechanics (noisy Newton). Here I test the predictions of the leading heuristic model (Gilden and Proffitt, 1989) against the noisy Newton model using a novel manipulation of the standard mass judgment task: making one of the objects invisible post-collision. The noisy Newton model uses the remaining information to predict above-chance performance, while the leading heuristic model predicts chance performance when one or the other final velocity is occluded. An experiment using two different types of occlusion showed better-than-chance performance and response patterns that followed the predictions of the noisy Newton model. The results demonstrate that people can make sensible physical judgments even when information critical for the judgment is missing, and that a Bayesian model can serve as a guide in these situations. Possible algorithmic-level accounts of this task that more closely correspond to the noisy Newton model are explored. PMID:25206345

  20. An advanced dissymmetric rolling model for online regulation

    NASA Astrophysics Data System (ADS)

    Cao, Trong-Son

    2017-10-01

    A roll-bite model is employed to predict the rolling force and torque as well as to estimate the forward slip for preset or online regulation at industrial rolling mills. The rolling process is often dissymmetric in terms of work-roll rotation speeds and diameters as well as the friction conditions at the upper and lower contact surfaces between the work-rolls and the strip. The roll-bite model thus must be able to account for these dissymmetries and at the same time has to be accurate and fast enough for online applications. In the present study, a new method, namely the Adapted Discretization Slab Method (ADSM), is proposed to obtain a robust roll-bite model, which can take into account the aforementioned dissymmetries and has a very short response time, lower than one millisecond. This model is based on the slab method, with an adaptive discretization and a global Newton-Raphson procedure to improve the convergence speed. The model was validated by comparison with other dissymmetric models proposed in the literature, as well as Finite Element simulations and industrial pilot trials. Furthermore, a back-calculation tool was also constructed for friction management for both offline and online applications. With very short CPU time, the ADSM-based model is thus attractive for all online applications, both for cold and hot rolling.

  1. Documentation of computer program VS2D to solve the equations of fluid flow in variably saturated porous media

    USGS Publications Warehouse

    Lappala, E.G.; Healy, R.W.; Weeks, E.P.

    1987-01-01

    This report documents FORTRAN computer code for solving problems involving variably saturated single-phase flow in porous media. The flow equation is written with total hydraulic potential as the dependent variable, which allows straightforward treatment of both saturated and unsaturated conditions. The spatial derivatives in the flow equation are approximated by central differences, and time derivatives are approximated either by a fully implicit backward or by a centered-difference scheme. Nonlinear conductance and storage terms may be linearized using either an explicit method or an implicit Newton-Raphson method. Relative hydraulic conductivity is evaluated at cell boundaries by using either full upstream weighting, the arithmetic mean, or the geometric mean of values from adjacent cells. Nonlinear boundary conditions treated by the code include infiltration, evaporation, and seepage faces. Extraction by plant roots that is caused by atmospheric demand is included as a nonlinear sink term. These nonlinear boundary and sink terms are linearized implicitly. The code has been verified for several one-dimensional linear problems for which analytical solutions exist and against two nonlinear problems that have been simulated with other numerical models. A complete listing of data-entry requirements and data entry and results for three example problems are provided. (USGS)

  2. Cantera Integration with the Toolbox for Modeling and Analysis of Thermodynamic Systems (T-MATS)

    NASA Technical Reports Server (NTRS)

    Lavelle, Thomas M.; Chapman, Jeffryes W.; May, Ryan D.; Litt, Jonathan S.; Guo, Ten-Huei

    2014-01-01

    NASA Glenn Research Center (GRC) has recently developed a software package for modeling generic thermodynamic systems called the Toolbox for the Modeling and Analysis of Thermodynamic Systems (T-MATS). T-MATS is a library of building blocks that can be assembled to represent any thermodynamic system in the Simulink(Registered TradeMark) (The MathWorks, Inc.) environment. These elements, along with a Newton Raphson solver (also provided as part of the T-MATS package), enable users to create models of a wide variety of systems. The current version of T-MATS (v1.0.1) uses tabular data for providing information about a specific mixture of air, water (humidity), and hydrocarbon fuel in calculations of thermodynamic properties. The capabilities of T-MATS can be expanded by integrating it with the Cantera thermodynamic package. Cantera is an object-oriented analysis package that calculates thermodynamic solutions for any mixture defined by the user. Integration of Cantera with T-MATS extends the range of systems that may be modeled using the toolbox. In addition, the library of elements released with Cantera were developed using MATLAB native M-files, allowing for quicker prototyping of elements. This paper discusses how the new Cantera-based elements are created and provides examples for using T-MATS integrated with Cantera.

  3. Modelling of hydrogen transport in silicon solar cell structures under equilibrium conditions

    NASA Astrophysics Data System (ADS)

    Hamer, P.; Hallam, B.; Bonilla, R. S.; Altermatt, P. P.; Wilshaw, P.; Wenham, S.

    2018-01-01

    This paper presents a model for the introduction and redistribution of hydrogen in silicon solar cells at temperatures between 300 and 700 °C based on a second order backwards difference formula evaluated using a single Newton-Raphson iteration. It includes the transport of hydrogen and interactions with impurities such as ionised dopants. The simulations lead to three primary conclusions: (1) hydrogen transport across an n-type emitter is heavily temperature dependent; (2) under equilibrium conditions, hydrogen is largely driven by its charged species, with the switch from a dominance of negatively charged hydrogen (H-) to positively charged hydrogen (H+) within the emitter region critical to significant transport across the junction; and (3) hydrogen transport across n-type emitters is critically dependent upon the doping profile within the emitter, and, in particular, the peak doping concentration. It is also observed that during thermal processes after an initial high temperature step, hydrogen preferentially migrates to the surface of a phosphorous doped emitter, drawing hydrogen out of the p-type bulk. This may play a role in several effects observed during post-firing anneals in relation to the passivation of recombination active defects and even the elimination of hydrogen-related defects in the bulk of silicon solar cells.

  4. Vibrations of an Euler-Bernoulli beam with hysteretic damping arising from dispersed frictional microcracks

    NASA Astrophysics Data System (ADS)

    Maiti, Soumyabrata; Bandyopadhyay, Ritwik; Chatterjee, Anindya

    2018-01-01

    We study free and harmonically forced vibrations of an Euler-Bernoulli beam with rate-independent hysteretic dissipation. The dissipation follows a model proposed elsewhere for materials with randomly dispersed frictional microcracks. The virtual work of distributed dissipative moments is approximated using Gaussian quadrature, yielding a few discrete internal hysteretic states. Lagrange's equations are obtained for the modal coordinates. Differential equations for the modal coordinates and internal states are integrated together. Free vibrations decay exponentially when a single mode dominates. With multiple modes active, higher modes initially decay rapidly while lower modes decay relatively slowly. Subsequently, lower modes show their own characteristic modal damping, while small amplitude higher modes show more erratic decay. Large dissipation, for the adopted model, leads mathematically to fast and damped oscillations in the limit, unlike viscously overdamped systems. Next, harmonically forced, lightly damped responses of the beam are studied using both a slow frequency sweep and a shooting-method based search for periodic solutions along with numerical continuation. Shooting method and frequency sweep results match for large ranges of frequency. The shooting method struggles near resonances, where internal states collapse into lower dimensional behavior and Newton-Raphson iterations fail. Near the primary resonances, simple numerically-aided harmonic balance gives excellent results. Insights are also obtained into the harmonic content of secondary resonances.

  5. Voltage Analysis Improvement of 150 kV Transmission Subsystem Using Static Synchronous Compensator (STATCOM)

    NASA Astrophysics Data System (ADS)

    Akbar, P. A.; Hakim, D. L.; Sucita, T.

    2018-02-01

    In this research, improvements to the voltage distribution of the 150 kV Bandung Selatan and New Ujungberung transmission subsystems are tested using Flexible AC Transmission System (FACTS) technology. One approach is to control active and reactive power through a power-electronics device, the Static Synchronous Compensator (STATCOM). The subsystem is studied because its voltage profile is relatively poor when judged against the IEEE/ANSI C84.1 standard (142.5 - 157.5 kV). The study was conducted by running Newton-Raphson power flow analyses in the DigSilent Power Factory 15 simulator to determine the voltage profile (V) of the system. The bus with the lowest voltage is taken as the reference location for installing the STATCOM. The simulations show that, in the existing condition with 28 buses, 21-23 buses are still below the 142.5 kV standard; after the STATCOM is installed, the bus voltages improve, with the number of buses within the standard range of 142.5-157.5 kV increasing to 23-27 (78.6% - 96%), the optimum placement being a 300 MVA STATCOM at the Rancaekek II bus.

  7. LOGISTIC FUNCTION PROFILE FIT: A least-squares program for fitting interface profiles to an extended logistic function

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kirchhoff, William H.

    2012-09-15

    The extended logistic function provides a physically reasonable description of interfaces such as depth profiles or line scans of surface topological or compositional features. It describes these interfaces with the minimum number of parameters, namely, position, width, and asymmetry. Logistic Function Profile Fit (LFPF) is a robust, least-squares fitting program in which the nonlinear extended logistic function is linearized by a Taylor series expansion (equivalent to a Newton-Raphson approach) with no apparent introduction of bias in the analysis. The program provides reliable confidence limits for the parameters when systematic errors are minimal and provides a display of the residuals from the fit for the detection of systematic errors. The program will aid researchers in applying ASTM E1636-10, 'Standard practice for analytically describing sputter-depth-profile and linescan-profile data by an extended logistic function,' and may also prove useful in applying ISO 18516: 2006, 'Surface chemical analysis-Auger electron spectroscopy and x-ray photoelectron spectroscopy-determination of lateral resolution.' Examples are given of LFPF fits to a secondary ion mass spectrometry depth profile, an Auger surface line scan, and synthetic data generated to exhibit known systematic errors for examining the significance of such errors to the extrapolation of partial profiles.
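
    The linearized (Taylor-series, i.e. Gauss-Newton) least-squares idea behind this kind of fit can be sketched as follows. This is not the LFPF program itself; for brevity it fits a plain four-parameter logistic (the ASTM extended form adds an asymmetry parameter) to synthetic noisy data.

      import numpy as np

      def logistic(x, p):
          # Four-parameter logistic: pre-interface level A, post level B,
          # interface position x0 and width w.  The "extended" form of the
          # standard adds an asymmetry parameter, omitted here for brevity.
          A, B, x0, w = p
          return A + (B - A) / (1.0 + np.exp((x - x0) / w))

      def gauss_newton_fit(x, y, p0, n_iter=20, h=1e-6):
          """Linearized (Taylor-series / Gauss-Newton) least-squares fit."""
          p = np.asarray(p0, dtype=float)
          for _ in range(n_iter):
              r = y - logistic(x, p)                  # residuals
              J = np.empty((x.size, p.size))
              for j in range(p.size):                 # numerical Jacobian of the model
                  dp = np.zeros_like(p)
                  dp[j] = h
                  J[:, j] = (logistic(x, p + dp) - logistic(x, p - dp)) / (2 * h)
              step, *_ = np.linalg.lstsq(J, r, rcond=None)
              p += step
              if np.linalg.norm(step) < 1e-10:
                  break
          return p

      # synthetic depth profile with noise
      rng = np.random.default_rng(0)
      x = np.linspace(0.0, 100.0, 200)
      truth = np.array([1.0, 0.1, 55.0, 4.0])
      y = logistic(x, truth) + 0.01 * rng.standard_normal(x.size)
      print(gauss_newton_fit(x, y, p0=[0.9, 0.2, 50.0, 6.0]))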

  8. Numerical and Experimental Investigation of Stratified Gas-Liquid Two-Phase Flow in Horizontal Circular Pipes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Faccini, J.L.H.; Sampaio, P.A.B. de; Su, J.

    This paper reports numerical and experimental investigation of stratified gas-liquid two-phase flow in horizontal circular pipes. The Reynolds averaged Navier-Stokes equations (RANS) with the k-ω model for a fully developed stratified gas-liquid two-phase flow are solved by using the finite element method. A smooth and horizontal interface surface is assumed without considering the interfacial waves. The continuity of the shear stress across the interface is enforced with the continuity of the velocity being automatically satisfied by the variational formulation. For each given interface position and longitudinal pressure gradient, an inner iteration loop runs to solve the nonlinear equations. The Newton-Raphson scheme is used to solve the transcendental equations by an outer iteration to determine the interface position and pressure gradient for a given pair of volumetric flow rates. The interface position in a 51.2 mm ID circular pipe was measured experimentally by the ultrasonic pulse-echo technique. The numerical results were also compared with experimental results in a 21 mm ID circular pipe reported by Masala [1]. The good agreement between the numerical and experimental results indicates that the k-ω model can be applied for the numerical simulation of stratified gas-liquid two-phase flow. (authors)

  9. Theoretical study of heat transfer with moving phase-change interface in thawing of frozen food

    NASA Astrophysics Data System (ADS)

    Leung, M.; Ching, W. H.; Leung, D. Y. C.; Lam, G. C. K.

    2005-02-01

    A theoretical solution was obtained for a transient phase-change heat transfer problem in thawing of frozen food. In the physical model, a sphere originally at a uniform temperature below the phase-change temperature is suddenly immersed in a fluid at a temperature above the phase-change temperature. As the body temperature increases, the phase-change interface will be first formed on the surface. Subsequently, the interface will absorb the latent heat and move towards the centre until the whole body undergoes complete phase change. In the mathematical formulation, the nonhomogeneous problem arises from the moving phase-change interface. The solution in terms of the time-dependent temperature field was obtained by use of Green's function. A one-step Newton-Raphson method was specially designed to solve for the position of the moving interface to satisfy the interface condition. The theoretical results were compared with numerical results generated by a finite difference model and experimental measurements collected from a cold water thawing process. As a good agreement was found, the theoretical solution developed in this study was verified numerically and experimentally. Besides thawing of frozen food, there are many other practical applications of the theoretical solution, such as food freezing, soil freezing/thawing, metal casting and bath quenching heat treatment, among others.
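
    As a related illustration (not the paper's Green's-function formulation or its one-step scheme), the sketch below uses Newton-Raphson to solve the transcendental interface condition of the classical one-phase Stefan problem, from which the phase-change front position follows as s(t) = 2λ√(αt); the Stefan number used is illustrative.

      import math

      def stefan_lambda(stefan_number, lam0=0.5, tol=1e-12, max_iter=50):
          """Newton-Raphson solve of lambda*exp(lambda^2)*erf(lambda) = Ste/sqrt(pi),
          the transcendental interface condition of the classical one-phase Stefan
          (melting) problem.  The front then moves as s(t) = 2*lambda*sqrt(alpha*t)."""
          rhs = stefan_number / math.sqrt(math.pi)
          lam = lam0
          for _ in range(max_iter):
              e = math.exp(lam * lam)
              g = lam * e * math.erf(lam) - rhs
              dg = (1.0 + 2.0 * lam * lam) * e * math.erf(lam) + 2.0 * lam / math.sqrt(math.pi)
              step = g / dg
              lam -= step
              if abs(step) < tol:
                  break
          return lam

      # Stefan number = c_p * (T_fluid - T_melt) / latent heat; the value is illustrative.
      print("lambda = %.6f" % stefan_lambda(0.2))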

  10. Co-rotational thermo-mechanically coupled multi-field framework and finite element for the large displacement analysis of multi-layered shape memory alloy beam-like structures

    NASA Astrophysics Data System (ADS)

    Solomou, Alexandros G.; Machairas, Theodoros T.; Karakalas, Anargyros A.; Saravanos, Dimitris A.

    2017-06-01

    A thermo-mechanically coupled finite element (FE) for the simulation of multi-layered shape memory alloy (SMA) beams admitting large displacements and rotations (LDRs) is developed to capture the geometrically nonlinear effects which are present in many SMA applications. A generalized multi-field beam theory implementing a SMA constitutive model based on small strain theory, thermo-mechanically coupled governing equations and multi-field kinematic hypotheses combining first order shear deformation assumptions with a sixth order polynomial temperature field through the thickness of the beam section are extended to admit LDRs. The co-rotational formulation is adopted, where the motion of the beam is decomposed to rigid body motion and relative small deformation in the local frame. A new generalized multi-layered SMA FE is formulated. The nonlinear transient spatial discretized equations of motion of the SMA structure are synthesized and solved using the Newton-Raphson method combined with an implicit time integration scheme. Correlations of models incorporating the present beam FE with respective results of models incorporating plane stress SMA FEs, demonstrate excellent agreement of the predicted LDRs response, temperature and phase transformation fields, as well as, significant gains in computational time.

  11. Analysis of an arched outer-race ball bearing considering centrifugal forces

    NASA Technical Reports Server (NTRS)

    Hamrock, B. J.; Anderson, W. J.

    1972-01-01

    A Newton-Raphson method of iteration was used in evaluating the radial and axial projection of the distance between the ball center and the outer raceway groove curvature center (V and W). Fatigue life evaluations were made. The similar analysis of a conventional bearing can be directly obtained from the arched bearing analysis by simply letting the amount of arching be zero (g = 0) and not considering equations related to the unloaded half of the outer race. The analysis was applied to a 150-mm angular contact ball bearing. Results for life, contact loads, and angles are shown for a conventional bearing (g = 0) and two arched bearings (g = 0.127 mm (0.005 in.), and 0.254 mm (0.010 in.)). The results indicate that an arched bearing is highly desirable for high speed applications. In particular, for a DN value of 3 million (20,000 rpm) and an applied axial load of 4448 N (1000 lb), an arched bearing shows an improvement in life of 306 percent over that of a conventional bearing. At 4.2 million DN (28,000 rpm), the corresponding improvement is 340 percent. It was also found for low speeds, the arched bearing does not offer the advantages that it does for high speed applications.

  12. UNBIASED CORRECTION RELATIONS FOR GALAXY CLUSTER PROPERTIES DERIVED FROM CHANDRA AND XMM-NEWTON

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhao, Hai-Hui; Li, Cheng-Kui; Chen, Yong

    2015-01-20

    We use a sample of 62 clusters of galaxies to investigate the discrepancies between the gas temperature and total mass within r_500 from XMM-Newton and Chandra data. Comparisons of the properties show that (1) both the de-projected and projected temperatures determined by Chandra are higher than those of XMM-Newton and there is a good linear relationship for the de-projected temperatures: T_Chandra = 1.25 × T_XMM - 0.13. (2) The Chandra mass is much higher than the XMM-Newton mass with a bias of 0.15 and our mass relation is log10 M_Chandra = 1.02 × log10 M_XMM + 0.15. To explore the reasons for the discrepancy in mass, we recalculate the Chandra mass (expressed as M_Ch^mod) by modifying its temperature with the de-projected temperature relation. The results show that M_Ch^mod is closer to the XMM-Newton mass with the bias reducing to 0.02. Moreover, M_Ch^mod are corrected with the r_500 measured by XMM-Newton and the intrinsic scatter is significantly improved with the value reducing from 0.20 to 0.12. These mean that the temperature bias may be the main factor causing the mass bias. Finally, we find that M_Ch^mod is consistent with the corresponding XMM-Newton mass derived directly from our mass relation at a given Chandra mass. Thus, the de-projected temperature and mass relations can provide unbiased corrections for galaxy cluster properties derived from Chandra and XMM-Newton.

  13. High degree interpolation polynomial in Newton form

    NASA Technical Reports Server (NTRS)

    Tal-Ezer, Hillel

    1988-01-01

    Polynomial interpolation is an essential subject in numerical analysis. Dealing with a real interval, it is well known that even if f(x) is an analytic function, interpolating at equally spaced points can diverge. On the other hand, interpolating at the zeroes of the corresponding Chebyshev polynomial will converge. Using the Newton formula, this result of convergence is true only on the theoretical level. It is shown that the algorithm which computes the divided differences is numerically stable only if: (1) the interpolating points are arranged in a different order, and (2) the size of the interval is 4.
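
    A minimal sketch of the two ingredients mentioned above, Newton divided differences and nested evaluation of the Newton form, is given below on Chebyshev points; the stabilizing reordering of the interpolation points discussed in the abstract (for example a Leja-type ordering) is not shown.

      import numpy as np

      def divided_differences(x, f):
          """Return Newton-form coefficients c such that
          p(t) = c[0] + c[1](t-x[0]) + c[2](t-x[0])(t-x[1]) + ..."""
          c = np.array(f, dtype=float)
          n = len(x)
          for k in range(1, n):
              # k-th column of the divided-difference table, computed in place
              c[k:n] = (c[k:n] - c[k - 1:n - 1]) / (x[k:n] - x[0:n - k])
          return c

      def newton_eval(x, c, t):
          """Evaluate the Newton form by nested multiplication (Horner-like)."""
          p = np.full_like(np.asarray(t, dtype=float), c[-1])
          for k in range(len(c) - 2, -1, -1):
              p = p * (t - x[k]) + c[k]
          return p

      # interpolate a Runge-type function on Chebyshev points of [-2, 2]
      n = 20
      x = 2.0 * np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
      f = 1.0 / (1.0 + x**2)
      c = divided_differences(x, f)
      t = np.linspace(-2.0, 2.0, 7)
      print(np.max(np.abs(newton_eval(x, c, t) - 1.0 / (1.0 + t**2))))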

  14. Algorithms for Solvents and Spectral Factors of Matrix Polynomials

    DTIC Science & Technology

    1981-01-01

    Leang S. Shieh, Yih T. Tsay and Norman P. Coleman. A generalized Newton method, based on the contracted gradient of a matrix polynomial, is derived for solving the right (left) solvents and spectral factors of matrix polynomials. Two methods of selecting initial estimates for rapid convergence of the newly developed numerical method are proposed. Also, new algorithms for solving complete sets of the right solvents are presented.

  15. On a class of Newton-like methods for solving nonlinear equations

    NASA Astrophysics Data System (ADS)

    Argyros, Ioannis K.

    2009-06-01

    We provide a semilocal convergence analysis for a certain class of Newton-like methods considered also in [I.K. Argyros, A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach space, J. Math. Anal. Appl. 298 (2004) 374-397; I.K. Argyros, Computational theory of iterative methods, in: C.K. Chui, L. Wuytack (Eds.), Series: Studies in Computational Mathematics, vol. 15, Elsevier Publ. Co, New York, USA, 2007; J.E. Dennis, Toward a unified convergence theory for Newton-like methods, in: L.B. Rall (Ed.), Nonlinear Functional Analysis and Applications, Academic Press, New York, 1971], in order to approximate a locally unique solution of an equation in a Banach space. Using a combination of Lipschitz and center-Lipschitz conditions, instead of only Lipschitz conditions [F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985) 71-84], we provide an analysis with the following advantages over the work in [F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985) 71-84] which improved the works in [W.E. Bosarge, P.L. Falb, A multipoint method of third order, J. Optimiz. Theory Appl. 4 (1969) 156-166; W.E. Bosarge, P.L. Falb, Infinite dimensional multipoint methods and the solution of two point boundary value problems, Numer. Math. 14 (1970) 264-286; J.E. Dennis, On the Kantorovich hypothesis for Newton's method, SIAM J. Numer. Anal. 6 (3) (1969) 493-507; J.E. Dennis, Toward a unified convergence theory for Newton-like methods, in: L.B. Rall (Ed.), Nonlinear Functional Analysis and Applications, Academic Press, New York, 1971; H.J. Kornstaedt, Ein allgemeiner Konvergenzstaz fü r verschä rfte Newton-Verfahrem, in: ISNM, vol. 28, Birkhaü ser Verlag, Basel and Stuttgart, 1975, pp. 53-69; P. Laasonen, Ein überquadratisch konvergenter iterativer algorithmus, Ann. Acad. Sci. Fenn. Ser I 450 (1969) 1-10; F.A. Potra, On a modified secant method, L'analyse numérique et la theorie de l'approximation 8 (2) (1979) 203-214; F.A. Potra, An application of the induction method of V. Pták to the study of Regula Falsi, Aplikace Matematiky 26 (1981) 111-120; F.A. Potra, On the convergence of a class of Newton-like methods, in: Iterative Solution of Nonlinear Systems of Equations, in: Lecture Notes in Mathematics, vol. 953, Springer-Verlag, New York, 1982; F.A. Potra, V. Pták, Nondiscrete induction and double step secant method, Math. Scand. 46 (1980) 236-250; F.A. Potra, V. Pták, On a class of modified Newton processes, Numer. Funct. Anal. Optim. 2 (1) (1980) 107-120; F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985) 71-84; J.W. Schmidt, Untere Fehlerschranken für Regula-Falsi Verfahren, Period. Math. Hungar. 9 (3) (1978) 241-247; J.W. Schmidt, H. Schwetlick, Ableitungsfreie Verfhren mit höherer Konvergenzgeschwindifkeit, Computing 3 (1968) 215-226; J.F. Traub, Iterative Methods for the Solution of Equations, Prentice Hall, Englewood Cliffs, New Jersey, 1964; M.A. Wolfe, Extended iterative methods for the solution of operator equations, Numer. Math. 31 (1978) 153-174]: larger convergence domain and weaker sufficient convergence conditions. Numerical examples further validating the results are also provided.

  16. Solution of Tikhonov's Motion-Separation Problem Using the Modified Newton-Kantorovich Theorem

    NASA Astrophysics Data System (ADS)

    Belolipetskii, A. A.; Ter-Krikorov, A. M.

    2018-02-01

    The paper presents a new way to prove the existence of a solution of the well-known Tikhonov's problem on systems of ordinary differential equations in which one part of the variables performs "fast" motions and the other part, "slow" motions. Tikhonov's problem has been the subject of a large number of works in connection with its applications to a wide range of mathematical models in natural science and economics. Only a short list of publications, which present the proof of the existence of solutions in this problem, is cited. The aim of the paper is to demonstrate the possibility of applying the modified Newton-Kantorovich theorem to prove the existence of a solution in Tikhonov's problem. The technique proposed can be used to prove the existence of solutions of other classes of problems with a small parameter.

  17. Efficient numerical calculation of MHD equilibria with magnetic islands, with particular application to saturated neoclassical tearing modes

    NASA Astrophysics Data System (ADS)

    Raburn, Daniel Louis

    We have developed a preconditioned, globalized Jacobian-free Newton-Krylov (JFNK) solver for calculating equilibria with magnetic islands. The solver has been developed in conjunction with the Princeton Iterative Equilibrium Solver (PIES) and includes two notable enhancements over a traditional JFNK scheme: (1) globalization of the algorithm by a sophisticated backtracking scheme, which optimizes between the Newton and steepest-descent directions; and, (2) adaptive preconditioning, wherein information regarding the system Jacobian is reused between Newton iterations to form a preconditioner for our GMRES-like linear solver. We have developed a formulation for calculating saturated neoclassical tearing modes (NTMs) which accounts for the incomplete loss of a bootstrap current due to gradients of multiple physical quantities. We have applied the coupled PIES-JFNK solver to calculate saturated island widths on several shots from the Tokamak Fusion Test Reactor (TFTR) and have found reasonable agreement with experimental measurement.
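
    A generic Jacobian-free Newton-Krylov loop with a backtracking globalization can be sketched as follows, assuming NumPy and SciPy are available. The nonlinear system here (a small Bratu-type problem) stands in for the PIES force balance, and neither the adaptive preconditioner nor the steepest-descent blending of the paper is shown.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):
          # illustrative nonlinear system (a small Bratu-type boundary value
          # problem), not the equilibrium residual of PIES
          n = u.size
          h = 1.0 / (n + 1)
          F = np.empty(n)
          for i in range(n):
              left = u[i - 1] if i > 0 else 0.0
              right = u[i + 1] if i < n - 1 else 0.0
              F[i] = (left - 2.0 * u[i] + right) / h**2 + 1.0 * np.exp(u[i])
          return F

      def jfnk(u, tol=1e-10, max_newton=30):
          """Jacobian-free Newton-Krylov with a simple backtracking globalization."""
          for _ in range(max_newton):
              F = residual(u)
              norm0 = np.linalg.norm(F)
              if norm0 < tol:
                  break
              eps = 1e-7
              # Jacobian-vector products by finite differences: J v ~ (F(u+eps v) - F(u))/eps
              def jv(v):
                  return (residual(u + eps * v) - F) / eps
              J = LinearOperator((u.size, u.size), matvec=jv)
              du, info = gmres(J, -F, restart=u.size, maxiter=200)   # inexact linear solve
              # backtrack along the Newton direction until the residual norm decreases
              alpha = 1.0
              while np.linalg.norm(residual(u + alpha * du)) > (1.0 - 1e-4 * alpha) * norm0 and alpha > 1e-4:
                  alpha *= 0.5
              u = u + alpha * du
          return u

      u = jfnk(np.zeros(32))
      print("residual norm:", np.linalg.norm(residual(u)))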

  18. A NUMERICAL ALGORITHM FOR MODELING MULTIGROUP NEUTRINO-RADIATION HYDRODYNAMICS IN TWO SPATIAL DIMENSIONS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Swesty, F. Douglas; Myra, Eric S.

    It is now generally agreed that multidimensional, multigroup, neutrino-radiation hydrodynamics (RHD) is an indispensable element of any realistic model of stellar-core collapse, core-collapse supernovae, and proto-neutron star instabilities. We have developed a new, two-dimensional, multigroup algorithm that can model neutrino-RHD flows in core-collapse supernovae. Our algorithm uses an approach similar to the ZEUS family of algorithms, originally developed by Stone and Norman. However, this completely new implementation extends that previous work in three significant ways: first, we incorporate multispecies, multigroup RHD in a flux-limited-diffusion approximation. Our approach is capable of modeling pair-coupled neutrino-RHD, and includes effects of Pauli blocking in the collision integrals. Blocking gives rise to nonlinearities in the discretized radiation-transport equations, which we evolve implicitly in time. We employ parallelized Newton-Krylov methods to obtain a solution of these nonlinear, implicit equations. Our second major extension to the ZEUS algorithm is the inclusion of an electron conservation equation that describes the evolution of electron-number density in the hydrodynamic flow. This permits calculating deleptonization of a stellar core. Our third extension modifies the hydrodynamics algorithm to accommodate realistic, complex equations of state, including those having nonconvex behavior. In this paper, we present a description of our complete algorithm, giving sufficient details to allow others to implement, reproduce, and extend our work. Finite-differencing details are presented in appendices. We also discuss implementation of this algorithm on state-of-the-art, parallel-computing architectures. Finally, we present results of verification tests that demonstrate the numerical accuracy of this algorithm on diverse hydrodynamic, gravitational, radiation-transport, and RHD sample problems. We believe our methods to be of general use in a variety of model settings where radiation transport or RHD is important. Extension of this work to three spatial dimensions is straightforward.

  19. Optimum and Heuristic Algorithms for Finite State Machine Decomposition and Partitioning

    DTIC Science & Technology

    1989-09-01

    Optimum and Heuristic Algorithms for Finite State Machine Decomposition and Partitioning. Pranav Ashar, Srinivas Devadas, and A. Richard Newton. Devadas: Department of Electrical Engineering and Computer Science, Room 36-848, MIT, Cambridge, MA 02139. Copyright 1989 MIT. A finite state machine is represented by its State Transition Graph.

  20. A Practical, Robust and Fast Method for Location Localization in Range-Based Systems.

    PubMed

    Huang, Shiping; Wu, Zhifeng; Misra, Anil

    2017-12-11

    Location localization technology is used in a number of industrial and civil applications. Real-time location localization accuracy is highly dependent on the quality of the distance measurements and efficiency of solving the localization equations. In this paper, we provide a novel approach to solve the nonlinear localization equations efficiently and simultaneously eliminate the bad measurement data in range-based systems. A geometric intersection model was developed to narrow the target search area, where Newton's Method and the Direct Search Method are used to search for the unknown position. Not only does the geometric intersection model offer a small bounded search domain for Newton's Method and the Direct Search Method, but also it can self-correct bad measurement data. The Direct Search Method is useful for the coarse localization or small target search domain, while Newton's Method can be used for accurate localization. For accurate localization, by utilizing the proposed Modified Newton's Method (MNM), challenges of avoiding the local extrema, singularities, and initial value choice are addressed. The applicability and robustness of the developed method have been demonstrated by experiments with an indoor system.
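
    A hedged stand-in for the refinement stage (not the authors' Modified Newton's Method, and without the geometric intersection pre-search that bounds the initial guess) is the Gauss-Newton range-residual iteration below, with invented anchor positions and noisy ranges.

      import numpy as np

      # Anchor positions and a true target position (all illustrative)
      anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
      p_true = np.array([3.2, 7.5])
      rng = np.random.default_rng(1)
      ranges = np.linalg.norm(anchors - p_true, axis=1) + 0.05 * rng.standard_normal(len(anchors))

      def locate(p0, n_iter=20):
          """Gauss-Newton refinement of the range-based localization problem."""
          p = np.asarray(p0, dtype=float)
          for _ in range(n_iter):
              d = np.linalg.norm(anchors - p, axis=1)          # predicted ranges
              r = ranges - d                                   # range residuals
              J = (p - anchors) / d[:, None]                   # d(range)/d(position)
              step, *_ = np.linalg.lstsq(J, r, rcond=None)
              p += step
              if np.linalg.norm(step) < 1e-9:
                  break
          return p

      print(locate([5.0, 5.0]), "true:", p_true)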

  1. Parametric inference for biological sequence analysis.

    PubMed

    Pachter, Lior; Sturmfels, Bernd

    2004-11-16

    One of the major successes in computational biology has been the unification, by using the graphical model formalism, of a multitude of algorithms for annotating and comparing biological sequences. Graphical models that have been applied to these problems include hidden Markov models for annotation, tree models for phylogenetics, and pair hidden Markov models for alignment. A single algorithm, the sum-product algorithm, solves many of the inference problems that are associated with different statistical models. This article introduces the polytope propagation algorithm for computing the Newton polytope of an observation from a graphical model. This algorithm is a geometric version of the sum-product algorithm and is used to analyze the parametric behavior of maximum a posteriori inference calculations for graphical models.

  2. Efficient combination of a 3D Quasi-Newton inversion algorithm and a vector dual-primal finite element tearing and interconnecting method

    NASA Astrophysics Data System (ADS)

    Voznyuk, I.; Litman, A.; Tortel, H.

    2015-08-01

    A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted for handling large-scale electromagnetic problems while keeping the memory requirement and the time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method which shares the same spirit as the domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping sub-domains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for resolving the interface problem and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is confronted with measurements extracted from the 3D Fresnel database.

  3. A plant-wide aqueous phase chemistry module describing pH variations and ion speciation/pairing in wastewater treatment process models.

    PubMed

    Flores-Alsina, Xavier; Kazadi Mbamba, Christian; Solon, Kimberly; Vrecko, Darko; Tait, Stephan; Batstone, Damien J; Jeppsson, Ulf; Gernaey, Krist V

    2015-11-15

    There is a growing interest within the Wastewater Treatment Plant (WWTP) modelling community in correctly describing physico-chemical processes after many years of mainly focusing on biokinetics. Indeed, future modelling needs, such as a plant-wide phosphorus (P) description, require a major, but unavoidable, additional degree of complexity when representing cationic/anionic behaviour in Activated Sludge (AS)/Anaerobic Digestion (AD) systems. In this paper, a plant-wide aqueous phase chemistry module describing pH variations plus ion speciation/pairing is presented and interfaced with industry standard models. The module accounts for extensive consideration of non-ideality, including ion activities instead of molar concentrations and complex ion pairing. The general equilibria are formulated as a set of Differential Algebraic Equations (DAEs) instead of Ordinary Differential Equations (ODEs) in order to reduce the overall stiffness of the system, thereby enhancing simulation speed. Additionally, a multi-dimensional version of the Newton-Raphson algorithm is applied to handle the existing multiple algebraic inter-dependencies. The latter is reinforced with the Simulated Annealing method to increase the robustness of the solver, making the system less dependent on the initial conditions. Simulation results show pH predictions when describing Biological Nutrient Removal (BNR) by the activated sludge models (ASM) 1, 2d and 3, comparing the performance of a nitrogen removal (WWTP1) and a combined nitrogen and phosphorus removal (WWTP2) treatment plant configuration under different anaerobic/anoxic/aerobic conditions. The same framework is implemented in the Benchmark Simulation Model No. 2 (BSM2) version of the Anaerobic Digestion Model No. 1 (ADM1) (WWTP3) as well, predicting pH values at different cationic/anionic loads. In this way, the general applicability/flexibility of the proposed approach is demonstrated, by implementing the aqueous phase chemistry module in some of the most frequently used WWTP process simulation models. Finally, it is shown how traditional wastewater modelling studies can be complemented with a rigorous description of aqueous phase and ion chemistry (pH, speciation, complexation). Copyright © 2015 Elsevier Ltd. All rights reserved.
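
    The multi-dimensional Newton-Raphson idea can be illustrated on a much smaller speciation problem than the plant-wide module: the sketch below solves the coupled equilibrium, mass-balance and charge-balance equations of a monoprotic weak acid in log-concentration variables. Activity corrections and ion pairing are omitted and the constants are illustrative, not taken from the paper.

      import numpy as np

      # Equilibrium speciation of a monoprotic weak acid (e.g. acetic acid) in
      # pure water, solved as a coupled nonlinear system in log10-concentrations.
      Ka, Kw, C_T = 1.8e-5, 1.0e-14, 0.01

      def residuals(x):
          lgH, lgOH, lgA, lgHA = x
          return np.array([
              lgH + lgA - lgHA - np.log10(Ka),      # acid dissociation equilibrium
              lgH + lgOH - np.log10(Kw),            # water autoionisation
              10**lgHA + 10**lgA - C_T,             # mass balance on the acid
              10**lgH - 10**lgOH - 10**lgA,         # charge balance
          ])

      def newton_raphson(x0, tol=1e-12, max_iter=50, h=1e-6):
          x = np.asarray(x0, dtype=float)
          for _ in range(max_iter):
              F = residuals(x)
              if np.max(np.abs(F)) < tol:
                  break
              J = np.empty((4, 4))
              for j in range(4):                    # numerical Jacobian, column by column
                  xp = x.copy()
                  xp[j] += h
                  J[:, j] = (residuals(xp) - F) / h
              x -= np.linalg.solve(J, F)
          return x

      x = newton_raphson([-4.0, -10.0, -4.0, -2.0])
      print("pH = %.2f" % (-x[0]))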

  4. Global convergence of inexact Newton methods for transonic flow

    NASA Technical Reports Server (NTRS)

    Young, David P.; Melvin, Robin G.; Bieterman, Michael B.; Johnson, Forrester T.; Samant, Satish S.

    1990-01-01

    In computational fluid dynamics, nonlinear differential equations are essential to represent important effects such as shock waves in transonic flow. Discretized versions of these nonlinear equations are solved using iterative methods. In this paper an inexact Newton method using the GMRES algorithm of Saad and Schultz is examined in the context of the full potential equation of aerodynamics. In this setting, reliable and efficient convergence of Newton methods is difficult to achieve. A poor initial solution guess often leads to divergence or very slow convergence. This paper examines several possible solutions to these problems, including a standard local damping strategy for Newton's method and two continuation methods, one of which utilizes interpolation from a coarse grid solution to obtain the initial guess on a finer grid. It is shown that the continuation methods can be used to augment the local damping strategy to achieve convergence for difficult transonic flow problems. These include simple wings with shock waves as well as problems involving engine power effects. These latter cases are modeled using the assumption that each exhaust plume is isentropic but has a different total pressure and/or temperature than the freestream.

  5. Wind tunnel tests of modified cross, hemisflo, and disk-gap-band parachutes with emphasis in the transonic range

    NASA Technical Reports Server (NTRS)

    Foughner, J. T., Jr.; Alexander, W. C.

    1974-01-01

    Transonic wind-tunnel studies were conducted with modified cross, hemisflo, and disk-gap-band parachute models in the wake of a cone-cylinder shape forebody. The basic cross design was modified with the addition of a circumferential constraining band at the lower edge of the canopy panels. The tests covered a Mach number range of 0.3 to 1.2 and a dynamic pressure range from 479 Newtons per square meter to 5746 Newtons per square meter. The parachute models were flexible textile-type structures and were tethered to a rigid forebody with a single flexible riser. Different size models of the modified cross and disk-gap-band canopies were tested to evaluate scale effects. Model reference diameters were 0.30, 0.61, and 1.07 meters (1.0, 2.0, and 3.5 ft) for the modified cross; and nominal diameters of 0.25 and 0.52 meter (0.83 and 1.7 ft) for the disk-gap-band; and 0.55 meter (1.8 ft) for the hemisflo. Reefing information is presented for the 0.61-meter-diameter cross and the 0.52-meter-diameter disk-gap-band. Results are presented in the form of the variation of steady-state average drag coefficient with Mach number. General stability characteristics of each parachute are discussed. Included are comments on canopy coning, spinning, and fluttering motions.

  6. Prediction of progressive damage and strength of plain weave composites using the finite element method

    NASA Astrophysics Data System (ADS)

    Srirengan, Kanthikannan

    The overall objective of this research was to develop the finite element code required to efficiently predict the strength of plain weave composite structures. To this end, three-dimensional conventional progressive damage analysis was implemented to predict the strength of plain weave composites subjected to periodic boundary conditions. Also, a modal technique for three-dimensional global/local stress analysis was developed to predict the failure initiation in plain weave composite structures. The progressive damage analysis was used to study the effect of quadrature order, mesh refinement and degradation models on the predicted damage and strength of plain weave composites subjected to uniaxial tension in the warp tow direction. A 1/32nd part of the representative volume element of a symmetrically stacked configuration was analyzed. The tow geometry was assumed to be sinusoidal. A Graphite/Epoxy system was used. Maximum stress criteria and combined stress criteria were used to predict failure in the tows and maximum principal stress criterion was used to predict failure in the matrix. Degradation models based on logical reasoning, micromechanics idealization and experimental comparisons were used to calculate the effective material properties in the presence of damage. A modified Newton-Raphson method was used to determine the incremental solution for each applied strain level. Using a refined mesh and the discount method based on experimental comparisons, the progressive damage and the strength of plain weave composites of waviness ratios 1/3 and 1/6 subjected to uniaxial tension in the warp direction have been characterized. Plain weave composites exhibit a brittle response in uniaxial tension. The strength decreases significantly with the increase in waviness ratio. Damage initiation and collapse were caused dominantly by intra-tow cracking and inter-tow debonding respectively. The predicted strength of plain weave composites of racetrack geometry and waviness ratio 1/25.7 was compared with analytical predictions and experimental findings and was found to match well. To evaluate the performance of the modal technique, failure initiation in a short woven composite cantilevered plate subjected to end moment and transverse end load was predicted. The global/local predictions were found to match reasonably well with the conventional finite element predictions.

  7. On the restricted four-body problem with the effect of small perturbations in the Coriolis and centrifugal forces

    NASA Astrophysics Data System (ADS)

    Suraj, Md Sanam; Aggarwal, Rajiv; Arora, Monika

    2017-09-01

    We have studied the restricted four-body problem (R4BP) with the effect of the small perturbation in the Coriolis and centrifugal forces on the libration points and zero velocity curves (ZVCs). Further, we have supposed that all the primaries are set in an equilateral triangle configuration, moving in the circular orbits around their common centre of mass. We have observed that the effect of the small perturbation in centrifugal force has a substantial effect on the location of libration points but a small perturbation in the Coriolis force has no impact on the location of libration points. But the stability of the libration points is highly influenced by the effect of the small perturbation in the Coriolis force. It is observed that as the Coriolis parameter increases, the libration points become stable. Further, it is found that the effect of the small perturbation in the centrifugal force has a substantial influence on the regions of possible motion. Also, when the effect of small perturbation in the centrifugal force increases the forbidden region decreases; here the motion is not possible for the infinitesimal mass. It is observed when the value of the Jacobian constant decreases, the regions of possible motion increase. In addition, we have also discussed how small perturbations in the Coriolis and centrifugal forces influence the Newton-Raphson basins of convergence.

  8. On the photo-gravitational restricted four-body problem with variable mass

    NASA Astrophysics Data System (ADS)

    Mittal, Amit; Agarwal, Rajiv; Suraj, Md Sanam; Arora, Monika

    2018-05-01

    This paper deals with the photo-gravitational restricted four-body problem (PR4BP) with variable mass. Following the procedure given by Gascheau (C. R. 16:393-394, 1843) and Routh (Proc. Lond. Math. Soc. 6:86-97, 1875), the conditions of linear stability of Lagrange triangle solution in the PR4BP are determined. The three radiating primaries having masses m1, m2 and m3 in an equilateral triangle with m2=m3 will be stable as long as they satisfy the linear stability condition of the Lagrangian triangle solution. We have derived the equations of motion of the mentioned problem and observed that there exist eight libration points for a fixed value of parameters γ (m at time t/m at initial time, 0<γ≤1 ), α (the proportionality constant in Jeans' law (Astronomy and Cosmogony, Cambridge University Press, Cambridge, 1928), 0≤α≤2.2), the mass parameter μ=0.005 and radiation parameters qi, (0< qi≤1, i=1, 2, 3). All the libration points are non-collinear if q2≠ q3. It has been observed that the collinear and out-of-plane libration points also exist for q2=q3. In all the cases, each libration point is found to be unstable. Further, zero velocity curves (ZVCs) and Newton-Raphson basins of attraction are also discussed.
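
    The notion of Newton-Raphson basins of attraction used in these studies can be illustrated on a much simpler planar system: the sketch below iterates from every point of a grid and records which root it converges to (here the three roots of z³ = 1, not the libration points of the restricted four-body problem).

      import numpy as np

      roots = np.array([1.0, -0.5 + 0.8660254j, -0.5 - 0.8660254j])

      def newton_basin(nx=400, ny=400, box=1.5, max_iter=40):
          xs = np.linspace(-box, box, nx)
          ys = np.linspace(-box, box, ny)
          X, Y = np.meshgrid(xs, ys)
          Z = X + 1j * Y
          with np.errstate(divide='ignore', invalid='ignore'):
              for _ in range(max_iter):
                  Z = Z - (Z**3 - 1.0) / (3.0 * Z**2)     # Newton-Raphson update
          # label each starting point by the nearest root (-1 if not converged)
          labels = np.full(Z.shape, -1)
          for k, r in enumerate(roots):
              labels[np.abs(Z - r) < 1e-6] = k
          return labels

      labels = newton_basin()
      for k in range(3):
          print("fraction attracted to root %d: %.3f" % (k, np.mean(labels == k)))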

  9. A Flexible CUDA LU-based Solver for Small, Batched Linear Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tumeo, Antonino; Gawande, Nitin A.; Villa, Oreste

    This chapter presents the implementation of a batched CUDA solver based on LU factorization for small linear systems. This solver may be used in applications such as reactive flow transport models, which apply the Newton-Raphson technique to linearize and iteratively solve the sets of nonlinear equations that represent the reactions for tens of thousands to millions of physical locations. The implementation exploits somewhat counterintuitive GPGPU programming techniques: it assigns the solution of a matrix (representing a system) to a single CUDA thread, does not exploit shared memory and employs dynamic memory allocation on the GPUs. These techniques enable our implementation to simultaneously solve sets of systems with over 100 equations and to employ LU decomposition with complete pivoting, providing the higher numerical accuracy required by certain applications. Other currently available solutions for batched linear solvers are limited by size and only support partial pivoting, although they may be faster under certain conditions. We discuss the code of our implementation and present a comparison with the other implementations, discussing the various tradeoffs in terms of performance and flexibility. This work will enable developers that need batched linear solvers to choose whichever implementation is more appropriate to the features and the requirements of their applications, and even to implement dynamic switching approaches that can choose the best implementation depending on the input data.
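
    A plain NumPy sketch of LU factorization with complete pivoting (the algorithm each CUDA thread applies to its own small matrix; this is not the GPU kernel itself) is given below.

      import numpy as np

      def lu_complete_pivoting(A):
          """LU factorization with complete pivoting: the permuted matrix
          A[row_perm, :][:, col_perm] equals L @ U."""
          A = np.array(A, dtype=float)
          n = A.shape[0]
          row_perm = np.arange(n)
          col_perm = np.arange(n)
          for k in range(n - 1):
              # search the trailing submatrix for the entry of largest magnitude
              sub = np.abs(A[k:, k:])
              i, j = np.unravel_index(np.argmax(sub), sub.shape)
              i += k
              j += k
              A[[k, i], :] = A[[i, k], :]
              row_perm[[k, i]] = row_perm[[i, k]]
              A[:, [k, j]] = A[:, [j, k]]
              col_perm[[k, j]] = col_perm[[j, k]]
              # eliminate below the pivot, storing multipliers in place
              A[k + 1:, k] /= A[k, k]
              A[k + 1:, k + 1:] -= np.outer(A[k + 1:, k], A[k, k + 1:])
          L = np.tril(A, -1) + np.eye(n)
          U = np.triu(A)
          return L, U, row_perm, col_perm

      rng = np.random.default_rng(2)
      M = rng.standard_normal((6, 6))
      L, U, rp, cp = lu_complete_pivoting(M)
      print(np.max(np.abs(L @ U - M[np.ix_(rp, cp)])))   # should be near machine precision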

  10. Numerical Analysis on the High-Strength Concrete Beams Ultimate Behaviour

    NASA Astrophysics Data System (ADS)

    Smarzewski, Piotr; Stolarski, Adam

    2017-10-01

    The development of production technologies for high-strength concrete (HSC) beams, with the aim of creating a secure and durable material, is closely linked with the numerical models of real objects. Three-dimensional nonlinear finite element models of reinforced high-strength concrete beams with a complex geometry have been investigated in this study. The numerical analysis is performed using the ANSYS finite element package. The arc-length (A-L) parameters and the adaptive descent (AD) parameters are used with the Newton-Raphson method to trace the complete load-deflection curves. Experimental and finite element modelling results are compared graphically and numerically. Comparison of these results indicates the correctness of failure criteria assumed for the high-strength concrete and the steel reinforcement. The results of numerical simulation are sensitive to the modulus of elasticity and the shear transfer coefficient for an open crack assigned to high-strength concrete. The full nonlinear load-deflection curves at mid-span of the beams, the development of strain in compressive concrete and the development of strain in tensile bar are in good agreement with the experimental results. Numerical results for smeared crack patterns agree qualitatively with the test data in location, direction, and distribution. The model was capable of predicting the initiation and propagation of flexural and diagonal cracks. It was concluded that the finite element model captured successfully the inelastic flexural behaviour of the beams to failure.

  11. Numerical modeling evapotranspiration flux components in shrub-encroached grassland in Inner Mongolia, China

    NASA Astrophysics Data System (ADS)

    Wang, Pei; Li, Xiao-Yan; Huang, Jie-Yu; Yang, Wen-Xin; Wang, Qi-Dan; Xu, Kun; Zheng, Xiao-Ran

    2016-04-01

    Shrub encroachment into arid grasslands occurs around the world. However, few studies on shrub encroachment have been conducted in China, and its hydrological implications remain poorly investigated in arid and semiarid regions. This study combined a two-source energy balance model with a Newton-Raphson iteration scheme to simulate evapotranspiration (ET) and its components in shrub-encroached grassland (15.4% shrub coverage) in Inner Mongolia. Good agreement between modelled ET fluxes and Bowen ratio measurements, together with the relative insensitivity of the simulated components to uncertainties/errors in the assigned model parameters and in the measured input variables, indicates that our model is suitable for simulating evapotranspiration flux components in shrub-encroached grassland. The transpiration fraction (T/ET) accounted for 58±17% during the growing season. Under the designed extreme shrub-encroachment scenarios (maximum and minimum coverage), the contribution of shrubs to local plant transpiration (Tshrub/T) was 20.06±7% during the growing season. Canopy conductance was the main controlling factor of T/ET. At the diurnal scale, short-wave solar radiation was the direct influencing factor, while at the seasonal scale leaf area index (LAI) and soil water content were the direct influencing factors. We find that the seasonal variation of Tshrub/T correlates well with the ratio LAIshrub/LAI, and that rainfall characteristics widened the difference between the contributions of shrubs and herbs to ecosystem evapotranspiration.

  12. Behaviour of Frictional Joints in Steel Arch Yielding Supports

    NASA Astrophysics Data System (ADS)

    Horyl, Petr; Šňupárek, Richard; Maršálek, Pavel

    2014-10-01

    The loading capacity and ability of steel arch supports to accept deformations from the surrounding rock mass is influenced significantly by the function of the connections and in particular, the tightening of the bolts. This contribution deals with computer modelling of the yielding bolt connections for different torques to determine the load-bearing capacity of the connections. Another parameter that affects the loading capacity significantly is the value of the friction coefficient of the contacts between the elements of the joints. The authors investigated both the behaviour and conditions of the individual parts for three values of tightening moment and the relation between the value of screw tightening and load-bearing capacity of the connections for different friction coefficients. ANSYS software and the finite element method were used for the computer modelling. The solution is nonlinear because of the bi-linear material properties of steel and the large deformations. The geometry of the computer model was created from designs of all four parts of the structure. The calculation also defines the weakest part of the joint's structure based on stress analysis. The load was divided into two loading steps: the pre-tensioning of connecting bolts and the deformation loading corresponding to 50-mm slip of one support. The full Newton-Raphson method was chosen for the solution. The calculations were carried out on a computer at the Supercomputing Centre VSB-Technical University of Ostrava.

  13. Parametric Studies of Square Solar Sails Using Finite Element Analysis

    NASA Technical Reports Server (NTRS)

    Sleight, David W.; Muheim, Danniella M.

    2004-01-01

    Parametric studies are performed on two generic square solar sail designs to identify parameters of interest. The studies are performed on systems-level models of full-scale solar sails, and include geometric nonlinearity and inertia relief, and use a Newton-Raphson scheme to apply sail pre-tensioning and solar pressure. Computational strategies and difficulties encountered during the analyses are also addressed. The purpose of this paper is not to compare the benefits of one sail design over the other. Instead, the results of the parametric studies may be used to identify general response trends, and areas of potential nonlinear structural interactions for future studies. The effects of sail size, sail membrane pre-stress, sail membrane thickness, and boom stiffness on the sail membrane and boom deformations, boom loads, and vibration frequencies are studied. Over the range of parameters studied, the maximum sail deflection and boom deformations are a nonlinear function of the sail properties. In general, the vibration frequencies and modes are closely spaced. For some vibration mode shapes, local deformation patterns that dominate the response are identified. These localized patterns are attributed to the presence of negative stresses in the sail membrane that are artifacts of the assumption of ignoring the effects of wrinkling in the modeling process, and are not believed to be physically meaningful. Over the range of parameters studied, several regions of potential nonlinear modal interaction are identified.

  14. Self-expanding/shrinking structures by 4D printing

    NASA Astrophysics Data System (ADS)

    Bodaghi, M.; Damanpack, A. R.; Liao, W. H.

    2016-10-01

    The aim of this paper is to create adaptive structures capable of self-expanding and self-shrinking by means of four-dimensional printing technology. An actuator unit is designed and fabricated directly by printing fibers of shape memory polymers (SMPs) in flexible beams with different arrangements. Experiments are conducted to determine thermo-mechanical material properties of the fabricated part revealing that the printing process introduced a strong anisotropy into the printed parts. The feasibility of the actuator unit with self-expanding and self-shrinking features is demonstrated experimentally. A phenomenological constitutive model together with analytical closed-form solutions are developed to replicate thermo-mechanical behaviors of SMPs. Governing equations of equilibrium are developed for printed structures based on the non-linear Green-Lagrange strain tensor and solved implementing a finite element method along with an iterative incremental Newton-Raphson scheme. The material-structural model is then applied to digitally design and print SMP adaptive lattices in planar and tubular shapes comprising a periodic arrangement of SMP actuator units that expand and then recover their original shape automatically. Numerical and experimental results reveal that the proposed planar lattice as meta-materials can be employed for plane actuators with self-expanding/shrinking features or as structural switches providing two different dynamic characteristics. It is also shown that the proposed tubular lattice with a self-expanding/shrinking mechanism can serve as tubular stents and grippers for bio-medical or piping applications.

  15. Progress on a Taylor weak statement finite element algorithm for high-speed aerodynamic flows

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Freels, J. D.

    1989-01-01

    A new finite element numerical Computational Fluid Dynamics (CFD) algorithm has matured to the point of efficiently solving two-dimensional high speed real-gas compressible flow problems in generalized coordinates on modern vector computer systems. The algorithm employs a Taylor Weak Statement classical Galerkin formulation, a variably implicit Newton iteration, and a tensor matrix product factorization of the linear algebra Jacobian under a generalized coordinate transformation. Allowing for a general two-dimensional conservation law system, the algorithm has been exercised on the Euler and laminar forms of the Navier-Stokes equations. Real-gas fluid properties are admitted, and numerical results verify solution accuracy, efficiency, and stability over a range of test problem parameters.

  16. A solution to the Navier-Stokes equations based upon the Newton Kantorovich method

    NASA Technical Reports Server (NTRS)

    Davis, J. E.; Gabrielsen, R. E.; Mehta, U. B.

    1977-01-01

    An implicit finite difference scheme based on the Newton-Kantorovich technique was developed for the numerical solution of the nonsteady, incompressible, two-dimensional Navier-Stokes equations in conservation-law form. The algorithm was second-order-time accurate, noniterative with regard to the nonlinear terms in the vorticity transport equation except at the earliest few time steps, and spatially factored. Numerical results were obtained with the technique for a circular cylinder at Reynolds number 15. Results indicate that the technique is in excellent agreement with other numerical techniques for all geometries and Reynolds numbers investigated, and indicates a potential for significant reduction in computation time over current iterative techniques.

  17. Efficient parallel implicit methods for rotary-wing aerodynamics calculations

    NASA Astrophysics Data System (ADS)

    Wissink, Andrew M.

    Euler/Navier-Stokes Computational Fluid Dynamics (CFD) methods are commonly used for prediction of the aerodynamics and aeroacoustics of modern rotary-wing aircraft. However, their widespread application to large complex problems is limited by the lack of adequate computing power. Parallel processing offers the potential for dramatic increases in computing power, but most conventional implicit solution methods are inefficient in parallel and new techniques must be adopted to realize its potential. This work proposes alternative implicit schemes for Euler/Navier-Stokes rotary-wing calculations which are robust and efficient in parallel. The first part of this work proposes an efficient parallelizable modification of the Lower Upper-Symmetric Gauss Seidel (LU-SGS) implicit operator used in the well-known Transonic Unsteady Rotor Navier Stokes (TURNS) code. The new hybrid LU-SGS scheme couples a point-relaxation approach of the Data Parallel-Lower Upper Relaxation (DP-LUR) algorithm for inter-processor communication with the Symmetric Gauss Seidel algorithm of LU-SGS for on-processor computations. With the modified operator, TURNS is implemented in parallel using Message Passing Interface (MPI) for communication. Numerical performance and parallel efficiency are evaluated on the IBM SP2 and Thinking Machines CM-5 multi-processors for a variety of steady-state and unsteady test cases. The hybrid LU-SGS scheme maintains the numerical performance of the original LU-SGS algorithm in all cases and shows a good degree of parallel efficiency. It exhibits a higher degree of robustness than DP-LUR for third-order upwind solutions. The second part of this work examines the use of Krylov subspace iterative solvers for the nonlinear CFD solutions. The hybrid LU-SGS scheme is used as a parallelizable preconditioner. Two iterative methods are tested, Generalized Minimum Residual (GMRES) and Orthogonal s-Step Generalized Conjugate Residual (OSGCR). The Newton method demonstrates good parallel performance on the IBM SP2, with OSGCR giving slightly better performance than GMRES on large numbers of processors. For steady and quasi-steady calculations, the convergence rate is accelerated but the overall solution time remains about the same as the standard hybrid LU-SGS scheme. For unsteady calculations, however, the Newton method maintains a higher degree of time-accuracy which allows the use of larger timesteps and results in CPU savings of 20-35%.

  18. The Generation Model of Particle Physics and Galactic Dark Matter

    NASA Astrophysics Data System (ADS)

    Robson, B. A.

    2013-09-01

    Galactic dark matter is matter hypothesized to account for the discrepancy of the mass of a galaxy determined from its gravitational effects, assuming the validity of Newton's law of universal gravitation, and the mass calculated from the "luminous matter", stars, gas, dust, etc. observed to be contained within the galaxy. The conclusive observation from the rotation curves of spiral galaxies that the mass discrepancy is greater, the larger the distance scales involved implies that either Newton's law of universal gravitation requires modification or considerably more mass (dark matter) is required to be present in each galaxy. Both the modification of Newton's law of gravitation and the hypothesis of the existence of considerable dark matter in a galaxy are discussed. It is shown that the Generation Model (GM) of particle physics, which leads to a modification of Newton's law of gravitation, is found to be essentially equivalent to that of Milgrom's modified Newtonian dynamics (MOND) theory, with the GM providing a physical understanding of the MOND theory. The continuing success of MOND theory in describing the extragalactic mass discrepancy problems constitutes a strong argument against the existence of undetected dark matter haloes, consisting of unknown nonbaryonic matter, surrounding spiral galaxies.

  19. Multiscale Multilevel Approach to Solution of Nanotechnology Problems

    NASA Astrophysics Data System (ADS)

    Polyakov, Sergey; Podryga, Viktoriia

    2018-02-01

    The paper is devoted to a multiscale multilevel approach for the solution of nanotechnology problems on supercomputer systems. The approach uses the combination of continuum mechanics models and the Newton dynamics for individual particles. This combination includes three scale levels: macroscopic, mesoscopic and microscopic. For gas-metal technical systems the following models are used. The quasihydrodynamic system of equations is used as a mathematical model at the macrolevel for gas and solid states. The system of Newton equations is used as a mathematical model at the meso- and microlevels; it is written for nanoparticles of the medium and larger particles moving in the medium. The numerical implementation of the approach is based on the method of splitting into physical processes. The quasihydrodynamic equations are solved by the finite volume method on grids of different types. The Newton equations of motion are solved by Verlet integration in each cell of the grid independently or in groups of connected cells. In the framework of the general methodology, four classes of algorithms and methods of their parallelization are provided. The parallelization uses the principles of geometric parallelism and the efficient partitioning of the computational domain. A special dynamic algorithm is used for load balancing the solvers. The testing of the developed approach was carried out using the example of the nitrogen outflow from a balloon with high pressure to a vacuum chamber through a micronozzle and a microchannel. The obtained results confirm the high efficiency of the developed methodology.
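
    The particle-level update mentioned above can be sketched with a velocity-Verlet integrator. The small Lennard-Jones cluster below is purely illustrative and omits cut-offs, periodic boundaries and the coupling to the macroscopic grid.

      import numpy as np

      def velocity_verlet(x, v, force, mass, dt, nsteps):
          """Velocity-Verlet integration of Newton's equations for a set of particles."""
          a = force(x) / mass
          for _ in range(nsteps):
              x = x + v * dt + 0.5 * a * dt**2
              a_new = force(x) / mass
              v = v + 0.5 * (a + a_new) * dt
              a = a_new
          return x, v

      def lj_forces(x, eps=1.0, sigma=1.0):
          # pairwise Lennard-Jones forces for a small cluster (no cut-off, no PBC)
          n = x.shape[0]
          f = np.zeros_like(x)
          for i in range(n):
              for j in range(i + 1, n):
                  r = x[i] - x[j]
                  d2 = np.dot(r, r)
                  s6 = (sigma**2 / d2)**3
                  fij = 24.0 * eps * (2.0 * s6**2 - s6) / d2 * r
                  f[i] += fij
                  f[j] -= fij
          return f

      x0 = np.array([[0.0, 0.0, 0.0], [1.12, 0.0, 0.0], [0.56, 0.97, 0.0]])
      v0 = np.zeros_like(x0)
      x, v = velocity_verlet(x0, v0, lj_forces, mass=1.0, dt=0.002, nsteps=1000)
      print(x)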

  20. Transonic rotor tip design using numerical optimization

    NASA Technical Reports Server (NTRS)

    Tauber, Michael E.; Langhi, Ronald G.

    1985-01-01

    The aerodynamic design procedure for a new blade tip suitable for operation at transonic speeds is illustrated. For the first time, 3 dimensional numerical optimization was applied to rotor tip design, using the recent derivative of the ROT22 code, program R22OPT. Program R22OPT utilized an efficient quasi-Newton optimization algorithm. Multiple design objectives were specified. The delocalization of the shock wave was to be eliminated in forward flight for an advance ratio of 0.41 and a tip Mach number of 0.92 at psi = 90 deg. Simultaneously, it was sought to reduce torque requirements while maintaining effective restoring pitching moments. Only the outer 10 percent of the blade span was modified; the blade area was not to be reduced by more than 3 percent. The goal was to combine the advantages of both sweptback and sweptforward blade tips. A planform that featured inboard sweepback was combined with a sweptforward tip and a taper ratio of 0.5. Initially, the ROT22 code was used to find by trial and error a planform geometry which met the design goals. This configuration had an inboard section with a leading edge sweep of 20 deg and a tip section swept forward at 25 deg; in addition, the airfoils were modified.

  1. Limited memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method for the parameter estimation on geographically weighted ordinal logistic regression model (GWOLR)

    NASA Astrophysics Data System (ADS)

    Saputro, Dewi Retno Sari; Widyaningsih, Purnami

    2017-08-01

    In general, parameter estimation of the GWOLR model uses the maximum likelihood method, but this leads to a system of nonlinear equations that is difficult to solve exactly, so an approximate numerical solution is needed. There are two popular numerical approaches: Newton's method and Quasi-Newton (QN) methods. Newton's method requires considerable computation time since it involves the Jacobian matrix (derivatives). QN methods overcome this drawback by replacing derivative computation with direct function evaluations. A QN method uses a Hessian matrix approximation such as the Davidon-Fletcher-Powell (DFP) formula. The Broyden-Fletcher-Goldfarb-Shanno (BFGS) method is a QN method which, like the DFP formula, maintains a positive definite Hessian approximation. The BFGS method requires large memory, so another algorithm with lower memory usage is needed, namely limited-memory BFGS (L-BFGS). The purpose of this research is to assess the efficiency of the L-BFGS method in the iterative and recursive computation of the Hessian matrix and its inverse for GWOLR parameter estimation. The findings show that the BFGS and L-BFGS methods require O(n²) and O(nm) arithmetic operations, respectively.
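
    For reference, the L-BFGS two-loop recursion that applies the inverse-Hessian approximation without ever forming the matrix can be sketched as follows. The Rosenbrock objective and the simple backtracking line search are illustrative stand-ins for the GWOLR log-likelihood and a production line search.

      import numpy as np

      def lbfgs_direction(grad, s_list, y_list):
          """Two-loop recursion: apply the limited-memory inverse-Hessian
          approximation to the current gradient."""
          q = grad.copy()
          alphas = []
          for s, y in zip(reversed(s_list), reversed(y_list)):
              rho = 1.0 / np.dot(y, s)
              a = rho * np.dot(s, q)
              q -= a * y
              alphas.append((rho, a))
          if s_list:                                   # initial Hessian scaling
              s, y = s_list[-1], y_list[-1]
              q *= np.dot(s, y) / np.dot(y, y)
          for (rho, a), (s, y) in zip(reversed(alphas), zip(s_list, y_list)):
              b = rho * np.dot(y, q)
              q += (a - b) * s
          return -q                                    # descent direction

      def lbfgs(f, grad, x0, m=5, n_iter=500, tol=1e-8):
          """Minimal L-BFGS loop with a backtracking Armijo line search."""
          x = np.asarray(x0, dtype=float)
          s_list, y_list = [], []
          g = grad(x)
          for _ in range(n_iter):
              if np.linalg.norm(g) < tol:
                  break
              d = lbfgs_direction(g, s_list, y_list)
              t = 1.0
              while f(x + t * d) > f(x) + 1e-4 * t * np.dot(g, d) and t > 1e-12:
                  t *= 0.5
              x_new = x + t * d
              g_new = grad(x_new)
              if np.dot(x_new - x, g_new - g) > 1e-10:   # keep only positive-curvature pairs
                  s_list.append(x_new - x)
                  y_list.append(g_new - g)
                  if len(s_list) > m:                    # keep only the last m pairs
                      s_list.pop(0)
                      y_list.pop(0)
              x, g = x_new, g_new
          return x

      # minimise the Rosenbrock function as a stand-in for the GWOLR log-likelihood
      f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
      grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                                 200 * (x[1] - x[0]**2)])
      print(lbfgs(f, grad, [-1.2, 1.0]))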

  2. Documentation for the MODFLOW 6 Groundwater Flow Model

    USGS Publications Warehouse

    Langevin, Christian D.; Hughes, Joseph D.; Banta, Edward R.; Niswonger, Richard G.; Panday, Sorab; Provost, Alden M.

    2017-08-10

    This report documents the Groundwater Flow (GWF) Model for a new version of MODFLOW called MODFLOW 6. The GWF Model for MODFLOW 6 is based on a generalized control-volume finite-difference approach in which a cell can be hydraulically connected to any number of surrounding cells. Users can define the model grid using one of three discretization packages, including (1) a structured discretization package for defining regular MODFLOW grids consisting of layers, rows, and columns, (2) a discretization by vertices package for defining layered unstructured grids consisting of layers and cells, and (3) a general unstructured discretization package for defining flexible grids comprised of cells and their connection properties. For layered grids, a new capability is available for removing thin cells and vertically connecting cells overlying and underlying the thin cells. For complex problems involving water-table conditions, an optional Newton-Raphson formulation, based on the formulations in MODFLOW-NWT and MODFLOW-USG, can be activated. Use of the Newton-Raphson formulation will often improve model convergence and allow solutions to be obtained for difficult problems that cannot be solved using the traditional wetting and drying approach. The GWF Model is divided into “packages,” as was done in previous MODFLOW versions. A package is the part of the model that deals with a single aspect of simulation. Packages included with the GWF Model include those related to internal calculations of groundwater flow (discretization, initial conditions, hydraulic conductance, and storage), stress packages (constant heads, wells, recharge, rivers, general head boundaries, drains, and evapotranspiration), and advanced stress packages (streamflow routing, lakes, multi-aquifer wells, and unsaturated zone flow). An additional package is also available for moving water available in one package into the individual features of the advanced stress packages. The GWF Model also has packages for obtaining and controlling output from the model. This report includes detailed explanations of physical and mathematical concepts on which the GWF Model and its packages are based. Like its predecessors, MODFLOW 6 is based on a highly modular structure; however, this structure has been extended into an object-oriented framework. The framework includes a robust and generalized numerical solution object, which can be used to solve many different types of models. The numerical solution object has several different matrix preconditioning options as well as several methods for solving the linear system of equations. In this new framework, the GWF Model itself is an object as are each of the GWF Model packages. A benefit of the object-oriented structure is that multiple objects of the same type can be used in a single simulation. Thus, a single forward run with MODFLOW 6 may contain multiple GWF Models. GWF Models can be hydraulically connected using GWF-GWF Exchange objects. Connecting GWF models in different ways permits the user to utilize a local grid refinement strategy consisting of parent and child models or to couple adjacent GWF Models. An advantage of the approach implemented in MODFLOW 6 is that multiple models and their exchanges can be incorporated into a single numerical solution object. With this design, models can be tightly coupled at the matrix level.

  3. Airfoil optimization for unsteady flows with application to high-lift noise reduction

    NASA Astrophysics Data System (ADS)

    Rumpfkeil, Markus Peer

    The use of steady-state aerodynamic optimization methods in the computational fluid dynamic (CFD) community is fairly well established. In particular, the use of adjoint methods has proven to be very beneficial because their cost is independent of the number of design variables. The application of numerical optimization to airframe-generated noise, however, has not received as much attention, but with the significant quieting of modern engines, airframe noise now competes with engine noise. Optimal control techniques for unsteady flows are needed in order to be able to reduce airframe-generated noise. In this thesis, a general framework is formulated to calculate the gradient of a cost function in a nonlinear unsteady flow environment via the discrete adjoint method. The unsteady optimization algorithm developed in this work utilizes a Newton-Krylov approach: the gradient-based optimizer uses the quasi-Newton method BFGS, Newton's method is applied to the nonlinear flow problem, GMRES is used to solve the resulting linear problem inexactly, and the linear adjoint problem is solved using Bi-CGSTAB. The flow is governed by the unsteady two-dimensional compressible Navier-Stokes equations in conjunction with a one-equation turbulence model, which are discretized using structured grids and a finite difference approach. The effectiveness of the unsteady optimization algorithm is demonstrated by applying it to several problems of interest including shocktubes, pulses in converging-diverging nozzles, rotating cylinders, transonic buffeting, and an unsteady trailing-edge flow. In order to address radiated far-field noise, an acoustic wave propagation program based on the Ffowcs Williams and Hawkings (FW-H) formulation is implemented and validated. The general framework is then used to derive the adjoint equations for a novel hybrid URANS/FW-H optimization algorithm in order to be able to optimize the shape of airfoils based on their calculated far-field pressure fluctuations. Validation and application results for this novel hybrid URANS/FW-H optimization algorithm show that it is possible to optimize the shape of an airfoil in an unsteady flow environment to minimize its radiated far-field noise while maintaining good aerodynamic performance.
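
    The outer/inner structure described above (Newton on the nonlinear problem, GMRES solving each linear problem inexactly) can be sketched as follows; this is a generic inexact Newton-Krylov loop on a toy algebraic system, not the author's flow or adjoint solver, and the forcing parameter eta is an assumed constant.

    import numpy as np
    from scipy.sparse.linalg import gmres

    def inexact_newton_krylov(F, J, x0, eta=0.1, tol=1e-10, max_iter=30):
        # outer Newton iteration; each linear system J(x) dx = -F(x) is solved
        # only loosely by GMRES (stop once ||F + J dx|| <= eta * ||F||)
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = F(x)
            norm_r = np.linalg.norm(r)
            if norm_r < tol:
                break
            dx, _ = gmres(J(x), -r, atol=eta * norm_r)
            x = x + dx
        return x

    # toy nonlinear system standing in for the discretized flow equations
    F = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
    J = lambda x: np.array([[2.0 * x[0], 1.0], [1.0, 2.0 * x[1]]])
    print(inexact_newton_krylov(F, J, x0=[1.5, 1.5]))   # converges to ~(1, 2)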

  4. Speeding up N-body simulations of modified gravity: chameleon screening models

    NASA Astrophysics Data System (ADS)

    Bose, Sownak; Li, Baojiu; Barreira, Alexandre; He, Jian-hua; Hellwing, Wojciech A.; Koyama, Kazuya; Llinares, Claudio; Zhao, Gong-Bo

    2017-02-01

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  5. On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning.

    PubMed

    Mizutani, Eiji; Demmel, James W

    2003-01-01

    This paper briefly introduces our numerical linear algebra approaches for solving structured nonlinear least squares problems arising from 'multiple-output' neural-network (NN) models. Our algorithms feature trust-region regularization, and exploit sparsity of either the 'block-angular' residual Jacobian matrix or the 'block-arrow' Gauss-Newton Hessian (or Fisher information matrix in statistical sense) depending on problem scale so as to render a large class of NN-learning algorithms 'efficient' in both memory and operation costs. Using a relatively large real-world nonlinear regression application, we shall explain algorithmic strengths and weaknesses, analyzing simulation results obtained by both direct and iterative trust-region algorithms with two distinct NN models: 'multilayer perceptrons' (MLP) and 'complementary mixtures of MLP-experts' (or neuro-fuzzy modular networks).

  6. An efficient algorithm for the retarded time equation for noise from rotating sources

    NASA Astrophysics Data System (ADS)

    Loiodice, S.; Drikakis, D.; Kokkalis, A.

    2018-01-01

    This study concerns modelling of noise emanating from rotating sources such as helicopter rotors. We present an accurate and efficient algorithm for the solution of the retarded time equation, which can be used both in subsonic and supersonic flow regimes. A novel approach for the search of the roots of the retarded time function was developed based on considerations of the kinematics of rotating sources and of the bifurcation analysis of the retarded time function. It is shown that the proposed algorithm is faster than the classical Newton and Brent methods, especially in the presence of sources rotating supersonically.
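
    A hedged illustration of the retarded time equation itself (not the paper's root-search algorithm): for a subsonically rotating point source, the equation g(tau) = tau + |x_obs - x_src(tau)|/c - t = 0 has a single root, which the classical Newton and Brent methods used as baselines in the paper locate as follows; the rotation rate, radius, and observer position are made up.

    import numpy as np
    from scipy.optimize import brentq

    c = 340.0                      # speed of sound, m/s (assumed)
    omega, R = 40.0, 1.0           # rotation rate (rad/s) and source radius (m); tip Mach ~0.12
    x_obs = np.array([10.0, 0.0])  # observer position (assumed)

    def x_src(tau):
        return R * np.array([np.cos(omega * tau), np.sin(omega * tau)])

    def g(tau, t):
        # retarded-time function: zero when the emission at tau reaches the observer at t
        return tau + np.linalg.norm(x_obs - x_src(tau)) / c - t

    def newton_retarded_time(t, tol=1e-12, max_iter=50, h=1e-7):
        tau = t - np.linalg.norm(x_obs) / c          # kinematics-based starting guess
        for _ in range(max_iter):
            f = g(tau, t)
            if abs(f) < tol:
                break
            tau -= f / ((g(tau + h, t) - f) / h)     # Newton step with a numerical derivative
        return tau

    t = 0.05
    bracket = (t - 2.0 * (np.linalg.norm(x_obs) + R) / c, t)
    print(newton_retarded_time(t), brentq(lambda tau: g(tau, t), *bracket))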

  7. NLSE: Parameter-Based Inversion Algorithm

    NASA Astrophysics Data System (ADS)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
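
    As a minimal sketch of the Gauss-Newton idea underlying nonlinear least-squares estimation (not the NLSE code itself), the loop below solves the normal equations at each iterate for a made-up exponential model.

    import numpy as np

    def gauss_newton(residual, jac, p0, tol=1e-10, max_iter=50):
        p = np.asarray(p0, dtype=float)
        for _ in range(max_iter):
            r, J = residual(p), jac(p)
            dp = np.linalg.solve(J.T @ J, -J.T @ r)   # Gauss-Newton step from the normal equations
            p = p + dp
            if np.linalg.norm(dp) < tol:
                break
        return p

    # toy problem: estimate (a, b) in y = a*exp(b*x) from noisy samples
    rng = np.random.default_rng(0)
    x = np.linspace(0.0, 1.0, 20)
    y = 2.0 * np.exp(-1.5 * x) + 0.01 * rng.standard_normal(x.size)

    residual = lambda p: p[0] * np.exp(p[1] * x) - y
    jac = lambda p: np.column_stack([np.exp(p[1] * x), p[0] * x * np.exp(p[1] * x)])
    print(gauss_newton(residual, jac, p0=[1.0, -1.0]))   # close to (2.0, -1.5)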

  8. Newton Methods for Large Scale Problems in Machine Learning

    ERIC Educational Resources Information Center

    Hansen, Samantha Leigh

    2014-01-01

    The focus of this thesis is on practical ways of designing optimization algorithms for minimizing large-scale nonlinear functions with applications in machine learning. Chapter 1 introduces the overarching ideas in the thesis. Chapters 2 and 3 are geared towards supervised machine learning applications that involve minimizing a sum of loss…

  9. SIMULATIONS OF 2D AND 3D THERMOCAPILLARY FLOWS BY A LEAST-SQUARES FINITE ELEMENT METHOD. (R825200)

    EPA Science Inventory

    Numerical results for time-dependent 2D and 3D thermocapillary flows are presented in this work. The numerical algorithm is based on the Crank-Nicolson scheme for time integration, Newton's method for linearization, and a least-squares finite element method, together with a matri...

  10. NEWSUMT: A FORTRAN program for inequality constrained function minimization, users guide

    NASA Technical Reports Server (NTRS)

    Miura, H.; Schmit, L. A., Jr.

    1979-01-01

    A computer program written in FORTRAN subroutine form for the solution of linear and nonlinear constrained and unconstrained function minimization problems is presented. The algorithm is a sequence of unconstrained minimizations, with Newton's method used for each unconstrained minimization. The use of NEWSUMT and the definition of all parameters are described.

  11. Solving regularly and singularly perturbed reaction-diffusion equations in three space dimensions

    NASA Astrophysics Data System (ADS)

    Moore, Peter K.

    2007-06-01

    In [P.K. Moore, Effects of basis selection and h-refinement on error estimator reliability and solution efficiency for higher-order methods in three space dimensions, Int. J. Numer. Anal. Mod. 3 (2006) 21-51] a fixed, high-order h-refinement finite element algorithm, Href, was introduced for solving reaction-diffusion equations in three space dimensions. In this paper Href is coupled with continuation, creating an automatic method for solving regularly and singularly perturbed reaction-diffusion equations. The simple quasilinear Newton solver of Moore (2006) is replaced by the nonlinear solver NITSOL [M. Pernice, H.F. Walker, NITSOL: a Newton iterative solver for nonlinear systems, SIAM J. Sci. Comput. 19 (1998) 302-318]. Good initial guesses for the nonlinear solver are obtained using continuation in the small parameter ɛ. Two strategies allow adaptive selection of ɛ. The first depends on the rate of convergence of the nonlinear solver and the second implements backtracking in ɛ. Finally, a simple method is used to select the initial ɛ. Several examples illustrate the effectiveness of the algorithm.
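
    A hedged sketch of the continuation strategy described above: the converged solution at a larger perturbation parameter seeds the nonlinear solve at a smaller one. Here scipy.optimize.fsolve stands in for NITSOL, a 1D steady reaction-diffusion problem stands in for the 3D equations, and the ɛ schedule is fixed rather than adaptive.

    import numpy as np
    from scipy.optimize import fsolve

    n = 101
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]

    def residual(u_in, eps):
        # steady 1D reaction-diffusion: eps*u'' + u - u^3 = 0, with u(0) = -1, u(1) = 1
        u = np.concatenate(([-1.0], u_in, [1.0]))
        return eps * (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 + u[1:-1] - u[1:-1]**3

    u = 2.0 * x[1:-1] - 1.0                      # crude initial guess at the largest eps
    for eps in [1e-1, 5e-2, 2e-2, 1e-2, 5e-3, 2e-3, 1e-3]:
        u = fsolve(residual, u, args=(eps,))     # previous solution seeds the next, smaller eps
        print(f"eps = {eps:7.0e}   max |du/dx| = {np.max(np.abs(np.gradient(u, h))):.1f}")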

  12. Implementing a Matrix-free Analytical Jacobian to Handle Nonlinearities in Models of 3D Lithospheric Deformation

    NASA Astrophysics Data System (ADS)

    Kaus, B.; Popov, A.

    2015-12-01

    The analytical expression for the Jacobian is a key component to achieve fast and robust convergence of the nonlinear Newton-Raphson iterative solver. Accomplishing this task in practice often requires a significant algebraic effort. Therefore it is quite common to use a cheap alternative instead, for example by approximating the Jacobian with a finite difference estimation. Despite its simplicity, it is a relatively fragile and unreliable technique that is sensitive to the scaling of the residual and unknowns, as well as to the perturbation parameter selection. Unfortunately no universal rule can be applied to provide both a robust scaling and a perturbation. The approach we use here is to derive the analytical Jacobian for the coupled set of momentum, mass, and energy conservation equations together with the elasto-visco-plastic rheology and a marker in cell/staggered finite difference method. The software project LaMEM (Lithosphere and Mantle Evolution Model) is primarily developed for the thermo-mechanically coupled modeling of 3D lithospheric deformation. The code is based on a staggered grid finite difference discretization in space, and uses customized scalable solvers from the PETSc library to run efficiently on massively parallel machines (such as IBM Blue Gene/Q). Currently LaMEM relies on the Jacobian-Free Newton-Krylov (JFNK) nonlinear solver, which approximates the Jacobian-vector product using a simple finite difference formula. This approach never requires an assembled Jacobian matrix and uses only the residual computation routine. We use an approximate Jacobian (Picard) matrix to precondition the Krylov solver with the Galerkin geometric multigrid. Because of the inherent problems of the finite difference Jacobian estimation, this approach does not always result in stable convergence. In this work we present and discuss a matrix-free technique in which the Jacobian-vector product is replaced by analytically-derived expressions and compare results with those obtained with a finite difference approximation of the Jacobian. This project is funded by ERC Starting Grant 258830 and computer facilities were provided by Jülich supercomputer center (Germany).
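
    The JFNK building block discussed above can be sketched as follows (illustrative only, not LaMEM): the Jacobian-vector product is approximated by a one-sided finite difference of the residual and wrapped as a LinearOperator for the Krylov solver; in the matrix-free analytical variant the same product would instead be evaluated from derived expressions. The toy residual and perturbation-size rule are assumptions.

    import numpy as np
    from scipy.sparse.linalg import LinearOperator, gmres

    def jfnk_solve(F, x0, tol=1e-10, max_newton=30):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_newton):
            r = F(x)
            if np.linalg.norm(r) < tol:
                break

            def jv(v, x=x, r=r):
                # finite-difference Jacobian-vector product: J(x) v ~ (F(x + eps*v) - F(x)) / eps;
                # a matrix-free analytical variant would evaluate derived expressions here instead
                eps = 1e-7 * max(1.0, np.linalg.norm(x)) / max(np.linalg.norm(v), 1e-30)
                return (F(x + eps * v) - r) / eps

            J = LinearOperator((x.size, x.size), matvec=jv)
            dx, _ = gmres(J, -r, atol=1e-2 * np.linalg.norm(r))   # inexact inner Krylov solve
            x = x + dx
        return x

    # toy residual standing in for the discretized momentum/mass/energy system
    F = lambda x: np.array([np.exp(x[0]) - x[1] - 1.0, x[0]**2 + x[1]**2 - 1.0])
    print(jfnk_solve(F, x0=[0.4, 0.6]))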

  13. Survey on the Performance of Source Localization Algorithms

    PubMed Central

    2017-01-01

    The localization of emitters using an array of sensors or antennas is a prevalent issue approached in several applications. There exist different techniques for source localization, which can be classified into multilateration, received signal strength (RSS) and proximity methods. The performance of multilateration techniques relies on measured time variables: the time of flight (ToF) of the emission from the emitter to the sensor, the time differences of arrival (TDoA) of the emission between sensors and the pseudo-time of flight (pToF) of the emission to the sensors. The multilateration algorithms presented and compared in this paper can be classified as iterative and non-iterative methods. Both standard least squares (SLS) and hyperbolic least squares (HLS) are iterative and based on the Newton–Raphson technique to solve the non-linear equation system. The metaheuristic technique particle swarm optimization (PSO) used for source localisation is also studied. This optimization technique estimates the source position as the optimum of an objective function based on HLS and is also iterative in nature. Three non-iterative algorithms, namely the hyperbolic positioning algorithms (HPA), the maximum likelihood estimator (MLE) and Bancroft algorithm, are also presented. A non-iterative combined algorithm, MLE-HLS, based on MLE and HLS, is further proposed in this paper. The performance of all algorithms is analysed and compared in terms of accuracy in the localization of the position of the emitter and in terms of computational time. The analysis is also undertaken with three different sensor layouts since the positions of the sensors affect the localization; several source positions are also evaluated to make the comparison more robust. The analysis is carried out using theoretical time differences, as well as including errors due to the effect of digital sampling of the time variables. It is shown that the most balanced algorithm, yielding better results than the other algorithms in terms of accuracy and short computational time, is the combined MLE-HLS algorithm. PMID:29156565
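
    A hedged sketch of the iterative hyperbolic least-squares idea (a Gauss-Newton form of the Newton-Raphson iteration mentioned above, not the paper's exact implementation): range differences measured against a reference sensor define a nonlinear residual that is minimized iteratively. The sensor layout, emitter position, and noise-free measurements are made up.

    import numpy as np

    c = 343.0                                              # propagation speed, m/s (assumed)
    sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    emitter = np.array([3.0, 7.0])                         # made-up true position

    ranges = np.linalg.norm(sensors - emitter, axis=1)
    tdoa = (ranges[1:] - ranges[0]) / c                    # noise-free TDoA w.r.t. sensor 0

    def hls_gauss_newton(tdoa, x0, n_iter=20):
        x = np.asarray(x0, dtype=float)
        for _ in range(n_iter):
            d = np.linalg.norm(sensors - x, axis=1)
            res = (d[1:] - d[0]) - c * tdoa                # range-difference residuals
            unit = (x - sensors) / d[:, None]              # unit vectors from each sensor to x
            J = unit[1:] - unit[0]                         # Jacobian of the range differences
            x = x + np.linalg.solve(J.T @ J, -J.T @ res)
        return x

    print(hls_gauss_newton(tdoa, x0=[5.0, 5.0]))           # recovers ~(3, 7)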

  14. Numerical methods for the design of gradient-index optical coatings.

    PubMed

    Anzengruber, Stephan W; Klann, Esther; Ramlau, Ronny; Tonova, Diana

    2012-12-01

    We formulate the problem of designing gradient-index optical coatings as the task of solving a system of operator equations. We use iterative numerical procedures known from the theory of inverse problems to solve it with respect to the coating refractive index profile and thickness. The mathematical derivations necessary for the application of the procedures are presented, and different numerical methods (Landweber, Newton, and Gauss-Newton methods, Tikhonov minimization with surrogate functionals) are implemented. Procedures for the transformation of the gradient coating designs into quasi-gradient ones (i.e., multilayer stacks of homogeneous layers with different refractive indices) are also developed. The design algorithms work with physically available coating materials that could be produced with the modern coating technologies.

  15. Testing Pattern Hypotheses for Correlation Matrices

    ERIC Educational Resources Information Center

    McDonald, Roderick P.

    1975-01-01

    The treatment of covariance matrices given by McDonald (1974) can be readily modified to cover hypotheses prescribing zeros and equalities in the correlation matrix rather than the covariance matrix, still with the convenience of the closed-form Least Squares solution and the classical Newton method. (Author/RC)

  16. Use of Generalized Fluid System Simulation Program (GFSSP) for Teaching and Performing Senior Design Projects at the Educational Institutions

    NASA Technical Reports Server (NTRS)

    Majumdar, A. K.; Hedayat, A.

    2015-01-01

    This paper describes the experience of the authors in using the Generalized Fluid System Simulation Program (GFSSP) in teaching the Design of Thermal Systems class at the University of Alabama in Huntsville. GFSSP is a finite volume based thermo-fluid system network analysis code, developed at NASA/Marshall Space Flight Center, and is extensively used in NASA, Department of Defense, and aerospace industries for propulsion system design, analysis, and performance evaluation. The educational version of GFSSP is freely available to all US higher education institutions. The main purpose of the paper is to illustrate the utilization of this user-friendly code for the thermal systems design and fluid engineering courses and to encourage the instructors to utilize the code for the class assignments as well as senior design projects. The need for a generalized computer program for thermofluid analysis in a flow network has been felt for a long time in aerospace industries. Designers of thermofluid systems often need to know pressures, temperatures, flow rates, concentrations, and heat transfer rates at different parts of a flow circuit for steady state or transient conditions. Such applications occur in propulsion systems for tank pressurization, internal flow analysis of rocket engine turbopumps, chilldown of cryogenic tanks and transfer lines, and many other applications of gas-liquid systems involving fluid transients and conjugate heat and mass transfer. Computer resource requirements to perform time-dependent, three-dimensional Navier-Stokes computational fluid dynamic (CFD) analysis of such systems are prohibitive and therefore are not practical. Available commercial codes are generally suitable for steady state, single-phase incompressible flow. Because of the proprietary nature of such codes, it is not possible to extend their capability to satisfy the above-mentioned needs. Therefore, the Generalized Fluid System Simulation Program (GFSSP) has been developed at NASA Marshall Space Flight Center (MSFC) as a general fluid flow system solver capable of handling phase changes, compressibility, mixture thermodynamics and transient operations. It also includes the capability to model external body forces such as gravity and centrifugal effects in a complex flow network. The objectives of GFSSP development are: a) to develop a robust and efficient numerical algorithm to solve a system of equations describing a flow network containing phase changes, mixing, and rotation; and b) to implement the algorithm in a structured, easy-to-use computer program. The analysis of thermofluid dynamics in a complex network requires resolution of the system into fluid nodes and branches, and solid nodes and conductors as shown in Figure 1. Figure 1 shows a schematic and GFSSP flow circuit of a counter-flow heat exchanger. Hot nitrogen gas is flowing through a pipe, colder nitrogen is flowing counter to the hot stream in the annulus pipe and heat transfer occurs through metal tubes. The problem considered is to calculate flowrates and temperature distributions in both streams. GFSSP has a unique data structure, as shown in Figure 2, that allows constructing all possible arrangements of a flow network with no limit on the number of elements. The elements of a flow network are boundary nodes where pressure and temperature are specified, internal nodes where pressure and temperature are calculated, and branches where flowrates are calculated.
For conjugate heat transfer problems, there are three additional elements: solid node, ambient node, and conductor. The solid and fluid nodes are connected with solid-fluid conductors. GFSSP solves the conservation equations of mass and energy, and the equation of state, in internal nodes to calculate pressure, temperature and resident mass. The momentum conservation equation is solved in branches to calculate flowrate. It also solves energy conservation equations to calculate the temperatures of solid nodes. The equations are coupled and nonlinear; therefore, they are solved by an iterative numerical scheme. GFSSP employs a unique numerical scheme known as simultaneous adjustment with successive substitution (SASS), which is a combination of Newton-Raphson and successive substitution methods. The mass and momentum conservation equations and the equation of state are solved by the Newton-Raphson method while the conservation of energy and species are solved by the successive substitution method. GFSSP is linked with two thermodynamic property programs, GASP/WASP and GASPAK, that provide thermodynamic and thermophysical properties of selected fluids. Both programs cover a range of pressure and temperature that allows fluid properties to be evaluated for the liquid, liquid-vapor (saturation), and vapor regions. GASP and WASP provide properties of 12 fluids. GASPAK includes a library of 36 fluids. GFSSP has three major parts. The first part is the graphical user interface (GUI), visual thermofluid analyzer of systems and components (VTASC). VTASC allows users to create a flow circuit by a 'point and click' paradigm. It creates the GFSSP input file after the completion of the model building process. GFSSP's GUI provides the users a platform to build and run their models. It also allows post-processing of results. The network flow circuit is first built using three basic elements: boundary node, internal node, and branch.
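
    A toy illustration of the SASS structure just described (not GFSSP's actual equations): the flow rate in a single branch is obtained from a momentum balance by Newton-Raphson, the outlet temperature from an energy balance by successive substitution, and the two are iterated to convergence; the resistance law, heat load, and property values are invented for the example.

    p_in, p_out = 2.0e5, 1.0e5              # fixed boundary pressures, Pa
    T_in, Q_heat, cp = 300.0, 5.0e4, 1000.0 # inlet temperature (K), heat load (W), cp (J/(kg K))
    K0, alpha = 4.0e4, 2.0e-3               # made-up resistance law K(T) = K0*(1 + alpha*(T - T_in))

    def K(T):
        return K0 * (1.0 + alpha * (T - T_in))

    m, T = 1.0, T_in                        # guesses: branch flow rate (kg/s), outlet temperature (K)
    for _ in range(50):
        # Newton-Raphson on the momentum residual r(m) = (p_in - p_out) - K(T)*m*|m|
        for _ in range(20):
            r = (p_in - p_out) - K(T) * m * abs(m)
            if abs(r) < 1e-9:
                break
            m -= r / (-2.0 * K(T) * abs(m))
        # successive substitution on the energy balance: T = T_in + Q/(m*cp)
        T_new = T_in + Q_heat / (m * cp)
        if abs(T_new - T) < 1e-9:
            T = T_new
            break
        T = T_new

    print(f"flow rate = {m:.4f} kg/s, outlet temperature = {T:.2f} K")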

  17. Iterative algorithms for computing the feedback Nash equilibrium point for positive systems

    NASA Astrophysics Data System (ADS)

    Ivanov, I.; Imsland, Lars; Bogdanova, B.

    2017-03-01

    The paper studies N-player linear quadratic differential games on an infinite time horizon with deterministic feedback information structure. It introduces two iterative methods (the Newton method as well as its accelerated modification) in order to compute the stabilising solution of a set of generalised algebraic Riccati equations. The latter is related to the Nash equilibrium point of the considered game model. Moreover, we derive the sufficient conditions for convergence of the proposed methods. Finally, we discuss two numerical examples so as to illustrate the performance of both of the algorithms.
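
    As a hedged single-equation analogue of the Newton method discussed above (the paper's coupled N-player Riccati system is more involved), the Kleinman-Newton iteration for one continuous algebraic Riccati equation reduces every Newton step to a Lyapunov solve; the system matrices below are made up and SciPy's direct Riccati solver is used only as a cross-check.

    import numpy as np
    from scipy.linalg import solve_continuous_are, solve_continuous_lyapunov

    A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # stable, so K = 0 is a stabilizing start
    B = np.array([[0.0], [1.0]])
    Q = np.eye(2)
    R = np.array([[1.0]])

    K = np.zeros((1, 2))
    for _ in range(25):
        Ak = A - B @ K
        # each Newton step is a Lyapunov solve: Ak^T P + P Ak = -(Q + K^T R K)
        P = solve_continuous_lyapunov(Ak.T, -(Q + K.T @ R @ K))
        K_new = np.linalg.solve(R, B.T @ P)
        if np.linalg.norm(K_new - K) < 1e-12:
            K = K_new
            break
        K = K_new

    print(np.allclose(P, solve_continuous_are(A, B, Q, R)))   # matches SciPy's direct Riccati solver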

  18. Algorithms for the Equilibration of Matrices and Their Application to Limited-Memory Quasi-Newton Methods

    DTIC Science & Technology

    2010-05-01

    irreducible, by the Perron-Frobenius theorem (see, for example, Theorem 8.4.4 in [28]), the eigenvalue 1 is simple. Next, the rank-one matrix Q has the... We refer to (2.1) as the scaling equation. Although algorithms must use A, existence and uniqueness theory need consider only the nonnegative matrix... B. If p = 1 and A is nonnegative, then A = B. We reserve the term binormalization for the case p = 2. We say A is scalable if there exists x > 0

  19. An improved computational approach for multilevel optimum design

    NASA Technical Reports Server (NTRS)

    Haftka, R. T.

    1984-01-01

    A penalty-function algorithm employing Newton's method with approximate second derivatives (Haftka and Starnes, 1980) is developed for two-level hierarchical design optimization problems. The difficulties posed by discontinuous behavior in typical multilevel problems are explained and illustrated for the case of a three-bar truss; the algorithm is formulated; and its advantages are demonstrated in the problem of a portal framework having three beams (described by six cross-section parameters), subjected to two loading conditions, and to be constructed in six different materials for comparison. The final design parameters are listed in a table.

  20. Stochastic quasi-Newton molecular simulations

    NASA Astrophysics Data System (ADS)

    Chau, C. D.; Sevink, G. J. A.; Fraaije, J. G. E. M.

    2010-08-01

    We report a new and efficient factorized algorithm for the determination of the adaptive compound mobility matrix B in a stochastic quasi-Newton method (S-QN) that does not require additional potential evaluations. For one-dimensional and two-dimensional test systems, we previously showed that S-QN gives rise to efficient configurational space sampling with good thermodynamic consistency [C. D. Chau, G. J. A. Sevink, and J. G. E. M. Fraaije, J. Chem. Phys. 128, 244110 (2008); doi:10.1063/1.2943313]. Potential applications of S-QN are quite ambitious, and include structure optimization, analysis of correlations and automated extraction of cooperative modes. However, the potential can only be fully exploited if the computational and memory requirements of the original algorithm are significantly reduced. In this paper, we consider a factorized mobility matrix B = JJᵀ and focus on the nontrivial fundamentals of an efficient algorithm for updating the noise multiplier J. The new algorithm requires O(n²) multiplications per time step instead of the O(n³) multiplications in the original scheme due to Cholesky decomposition. In a recursive form, the update scheme circumvents matrix storage and enables limited-memory implementation, in the spirit of the well-known limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method, allowing for a further reduction of the computational effort to O(n). We analyze in detail the performance of the factorized (FSU) and limited-memory (L-FSU) algorithms in terms of convergence and (multiscale) sampling, for an elementary but relevant system that involves multiple time and length scales. Finally, we use this analysis to formulate conditions for the simulation of the complex high-dimensional potential energy landscapes of interest.

  1. Recursive flexible multibody system dynamics using spatial operators

    NASA Technical Reports Server (NTRS)

    Jain, A.; Rodriguez, G.

    1992-01-01

    This paper uses spatial operators to develop new spatially recursive dynamics algorithms for flexible multibody systems. The operator description of the dynamics is identical to that for rigid multibody systems. Assumed-mode models are used for the deformation of each individual body. The algorithms are based on two spatial operator factorizations of the system mass matrix. The first (Newton-Euler) factorization of the mass matrix leads to recursive algorithms for the inverse dynamics, mass matrix evaluation, and composite-body forward dynamics for the systems. The second (innovations) factorization of the mass matrix leads to an operator expression for the mass matrix inverse and to a recursive articulated-body forward dynamics algorithm. The primary focus is on serial chains, but extensions to general topologies are also described. A comparison of computational costs shows that the articulated-body forward dynamics algorithm is much more efficient than the composite-body algorithm for most flexible multibody systems.

  2. Fast polar decomposition of an arbitrary matrix

    NASA Technical Reports Server (NTRS)

    Higham, Nicholas J.; Schreiber, Robert S.

    1988-01-01

    The polar decomposition of an m x n matrix A of full rank, where m is greater than or equal to n, can be computed using a quadratically convergent algorithm. The algorithm is based on a Newton iteration involving a matrix inverse. With the use of a preliminary complete orthogonal decomposition the algorithm can be extended to arbitrary A. How to use the algorithm to compute the positive semi-definite square root of a Hermitian positive semi-definite matrix is described. A hybrid algorithm which adaptively switches from the matrix inversion based iteration to a matrix multiplication based iteration due to Kovarik, and to Bjorck and Bowie is formulated. The decision when to switch is made using a condition estimator. This matrix multiplication rich algorithm is shown to be more efficient on machines for which matrix multiplication can be executed 1.5 times faster than matrix inversion.
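
    A minimal sketch of the matrix-inversion-based Newton iteration for the polar decomposition of a square nonsingular matrix (the rectangular case, scaling, and the hybrid switch to the multiplication-rich iteration described above are omitted).

    import numpy as np

    def polar_newton(A, tol=1e-12, max_iter=100):
        # Newton iteration X <- (X + X^{-T})/2 converges to the orthogonal polar factor U
        X = np.asarray(A, dtype=float)
        for _ in range(max_iter):
            X_new = 0.5 * (X + np.linalg.inv(X).T)
            if np.linalg.norm(X_new - X, ord="fro") < tol * np.linalg.norm(X_new, ord="fro"):
                X = X_new
                break
            X = X_new
        U = X
        H = U.T @ A                       # symmetric positive semi-definite factor
        return U, 0.5 * (H + H.T)         # symmetrize to clean up round-off

    rng = np.random.default_rng(1)
    A = rng.standard_normal((4, 4))
    U, H = polar_newton(A)
    print(np.allclose(U @ H, A), np.allclose(U.T @ U, np.eye(4)))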

  3. Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    HyeongKae Park; Robert Nourgaliev; Vincent Mousseau

    2008-07-01

    There is increasing interest in developing next-generation simulation tools for advanced nuclear energy systems. These tools will utilize state-of-the-art numerical algorithms and computer science technology in order to maximize the predictive capability, support advanced reactor designs, reduce uncertainty, and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach-number, high-heat-flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop single-point turbulence closure models for large-scale engineering CFD codes, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, requiring very accurate and efficient numerical algorithms. The focus of this work is placed on a fully-implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from the weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobian-free Newton-Krylov framework [?] to allow a fully-implicit solution of the problem.

  4. Finite element concepts in computational aerodynamics

    NASA Technical Reports Server (NTRS)

    Baker, A. J.

    1978-01-01

    Finite element theory was employed to establish an implicit numerical solution algorithm for the time averaged unsteady Navier-Stokes equations. Both the multidimensional and a time-split form of the algorithm were considered, the latter of particular interest for problem specification on a regular mesh. A Newton matrix iteration procedure is outlined for solving the resultant nonlinear algebraic equation systems. Multidimensional discretization procedures are discussed with emphasis on automated generation of specific nonuniform solution grids and accounting of curved surfaces. The time-split algorithm was evaluated with regards to accuracy and convergence properties for hyperbolic equations on rectangular coordinates. An overall assessment of the viability of the finite element concept for computational aerodynamics is made.

  5. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    PubMed

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.

  6. New insights in permafrost modelling

    NASA Astrophysics Data System (ADS)

    Tubini, Niccolò; Serafin, Francesco; Gruber, Stephan; Casulli, Vincenzo; Rigon, Riccardo

    2017-04-01

    Simulating freezing soil has long been ignored in mainstream surface hydrology. However, it undoubtedly has a large influence on soil infiltrability and an even larger influence on the soil energy budget, and, over large spatial scales, a considerable feedback on climate. The topic is difficult because it involves concepts of disequilibrium thermodynamics and also because, once the theoretical problem is solved, integrating the resulting partial differential equations in a robust manner is not trivial at all. In this abstract, we present a new algorithm to estimate the water and energy budget in freezing soils. The first step is the derivation of a new equation for the freezing soil mass budget (called the generalized Richards equation) based on the freezing-equals-drying hypothesis (Miller 1965). The second step is the re-derivation of the energy budget. Finally, new techniques based on the double nested Newton algorithm (Casulli and Zanolli, 2010) are applied to integrate the coupled equations. Some examples of the freezing dynamics and a comparison with the Dall'Amico et al. (2011) algorithm are also shown. References: Casulli, V., & Zanolli, P. (2010). A nested Newton-type algorithm for finite volume methods solving Richards' equation in mixed form. SIAM J. Sci. Comput., 32(4), 2225-2273. Dall'Amico, M., Endrizzi, S., Gruber, S., & Rigon, R. (2011). A robust and energy-conserving model of freezing variably-saturated soil. The Cryosphere, 5(2), 469-484. http://doi.org/10.5194/tc-5-469-2011 Miller, R.: Phase equilibria and soil freezing, in: Permafrost: Proceedings of the Second International Conference. Washington DC: National Academy of Science-National Research Council, 287, 193-197, 1965.

  7. An algorithm to locate optimal bond breaking points on a potential energy surface for applications in mechanochemistry and catalysis.

    PubMed

    Bofill, Josep Maria; Ribas-Ariño, Jordi; García, Sergio Pablo; Quapp, Wolfgang

    2017-10-21

    The reaction path of a mechanically induced chemical transformation changes under stress. It is well established that the force-induced structural changes of minima and saddle points, i.e., the movement of the stationary points on the original or stress-free potential energy surface, can be described by a Newton Trajectory (NT). Given a reactive molecular system, a well-fitted pulling direction, and a sufficiently large value of the force, the minimum configuration of the reactant and the saddle point configuration of a transition state collapse at a point on the corresponding NT trajectory. This point is called barrier breakdown point or bond breaking point (BBP). The Hessian matrix at the BBP has a zero eigenvector which coincides with the gradient. It indicates which force (both in magnitude and direction) should be applied to the system to induce the reaction in a barrierless process. Within the manifold of BBPs, there exist optimal BBPs which indicate what is the optimal pulling direction and what is the minimal magnitude of the force to be applied for a given mechanochemical transformation. Since these special points are very important in the context of mechanochemistry and catalysis, it is crucial to develop efficient algorithms for their location. Here, we propose a Gauss-Newton algorithm that is based on the minimization of a positively defined function (the so-called σ-function). The behavior and efficiency of the new algorithm are shown for 2D test functions and for a real chemical example.

  8. Projective-Dual Method for Solving Systems of Linear Equations with Nonnegative Variables

    NASA Astrophysics Data System (ADS)

    Ganin, B. V.; Golikov, A. I.; Evtushenko, Yu. G.

    2018-02-01

    In order to solve an underdetermined system of linear equations with nonnegative variables, the projection of a given point onto its solutions set is sought. The dual of this problem—the problem of unconstrained maximization of a piecewise-quadratic function—is solved by Newton's method. The problem of unconstrained optimization dual of the regularized problem of finding the projection onto the solution set of the system is considered. A connection of duality theory and Newton's method with some known algorithms of projecting onto a standard simplex is shown. On the example of taking into account the specifics of the constraints of the transport linear programming problem, the possibility to increase the efficiency of calculating the generalized Hessian matrix is demonstrated. Some examples of numerical calculations using MATLAB are presented.

  9. Solution of free-boundary problems using finite-element/Newton methods and locally refined grids - Application to analysis of solidification microstructure

    NASA Technical Reports Server (NTRS)

    Tsiveriotis, K.; Brown, R. A.

    1993-01-01

    A new method is presented for the solution of free-boundary problems using Lagrangian finite element approximations defined on locally refined grids. The formulation allows for direct transition from coarse to fine grids without introducing non-conforming basis functions. The calculation of elemental stiffness matrices and residual vectors is unaffected by changes in the refinement level, which are accounted for in the loading of elemental data to the global stiffness matrix and residual vector. This technique for local mesh refinement is combined with recently developed mapping methods and Newton's method to form an efficient algorithm for the solution of free-boundary problems, as demonstrated here by sample calculations of cellular interfacial microstructure during directional solidification of a binary alloy.

  10. Numerical modelling of thin-walled Z-columns made of general laminates subjected to uniform shortening

    NASA Astrophysics Data System (ADS)

    Teter, Andrzej; Kolakowski, Zbigniew

    2018-01-01

    The numerical modelling of a plate structure was performed with the finite element method and a one-mode approach based on Koiter's method. The first order approximation of Koiter's method enables one to solve the eigenvalue problem. The second order approximation describes post-buckling equilibrium paths. In the finite element analysis, the Lanczos method was used to solve the linear problem of buckling. Simulations of the non-linear problem were performed with the Newton-Raphson method. Detailed calculations were carried out for a short Z-column made of general laminates. Configurations of laminated layers were non-symmetric. Owing to its wide range of possible applications, the general laminate is of particular interest. The length of the samples was chosen to obtain the lowest value of the local buckling load. The amplitude of initial imperfections was 10% of the wall thickness. Thin-walled structures were simply supported on both ends. The numerical results were verified in experimental tests. A strain-gauge technique was applied. A static compression test was performed on a universal testing machine and a special grip, which consisted of two rigid steel plates and clamping sleeves, was used. Specimens were obtained with an autoclave technique. Tests were performed at a constant cross-bar velocity of 2 mm/min. The compressive load was less than 150% of the bifurcation load. Additionally, soft and thin pads were used to reduce inaccuracy of the sample ends.

  11. Multiple steady states in atmospheric chemistry

    NASA Technical Reports Server (NTRS)

    Stewart, Richard W.

    1993-01-01

    The equations describing the distributions and concentrations of trace species are nonlinear and may thus possess more than one solution. This paper develops methods for searching for multiple physical solutions to chemical continuity equations and applies these to subsets of equations describing tropospheric chemistry. The calculations are carried out with a box model and use two basic strategies. The first strategy is a 'search' method. This involves fixing model parameters at specified values, choosing a wide range of initial guesses at a solution, and using a Newton-Raphson technique to determine if different initial points converge to different solutions. The second strategy involves a set of techniques known as homotopy methods. These do not require an initial guess, are globally convergent, and are guaranteed, in principle, to find all solutions of the continuity equations. The first method is efficient but essentially 'hit or miss' in the sense that it cannot guarantee that all solutions which may exist will be found. The second method is computationally burdensome but can, in principle, determine all the solutions of a photochemical system. Multiple solutions have been found for models that contain a basic complement of photochemical reactions involving O(x), HO(x), NO(x), and CH4. In the present calculations, transitions occur between stable branches of a multiple solution set as a control parameter is varied. These transitions are manifestations of hysteresis phenomena in the photochemical system and may be triggered by increasing the NO flux or decreasing the CH4 flux from current mean tropospheric levels.
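
    A hedged sketch of the 'search' strategy described above: a Newton-type solver (scipy.optimize.fsolve here) is run from a range of initial guesses at fixed parameter values and the distinct converged roots are collected; the cubic toy system stands in for a chemical continuity system with multiple steady states.

    import numpy as np
    from scipy.optimize import fsolve

    def residual(x):
        # toy 'continuity system' with three steady states (stands in for a chemical scheme)
        return np.array([x[0]**3 - 3.0 * x[0] + x[1], x[1] - 0.5 * x[0]])

    roots = []
    for guess in np.linspace(-3.0, 3.0, 25):
        sol, _, ier, _ = fsolve(residual, [guess, 0.0], full_output=True)
        if ier == 1 and not any(np.allclose(sol, r, atol=1e-6) for r in roots):
            roots.append(sol)

    print(np.array(roots))   # three distinct steady states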

  12. Optimization design combined with coupled structural-electrostatic analysis for the electrostatically controlled deployable membrane reflector

    NASA Astrophysics Data System (ADS)

    Liu, Chao; Yang, Guigeng; Zhang, Yiqun

    2015-01-01

    The electrostatically controlled deployable membrane reflector (ECDMR) is a promising scheme to construct large-size and high-precision space deployable reflector antennas. This paper presents a novel design method for the large-size and small-F/D ECDMR considering the coupled structural-electrostatic problem. First, the fully coupled structural-electrostatic system is described by a three-field formulation, in which the structure and the passive electric field are modeled by the finite element method, and the deformation of the electrostatic domain is predicted by a finite element formulation of a fictitious elastic structure. A residual formulation of the structural-electrostatic field finite element model is established and solved by the Newton-Raphson method. The coupled structural-electrostatic analysis procedure is summarized. Then, with the aid of this coupled analysis procedure, an integrated optimization method of membrane shape accuracy and stress uniformity is proposed, which is divided into inner and outer iterative loops. The initial state of relatively high shape accuracy and uniform stress distribution is achieved by applying the uniform prestress on the membrane design shape and optimizing the voltages, in which the optimal voltage is computed by a sensitivity analysis. The shape accuracy is further improved by the iterative prestress modification using the reposition balance method. Finally, the results of the uncoupled and coupled methods are compared and the proposed optimization method is applied to design an ECDMR. The results validate the effectiveness of the proposed method.

  13. Effect of carbon nano tube working electrode thickness on charge transport kinetics and photo-electrochemical characteristics of dye-sensitized solar cells

    NASA Astrophysics Data System (ADS)

    Gacemi, Yahia; Cheknane, Ali; Hilal, Hikmat S.

    2018-02-01

    Physicochemical processes at the photo-electrode and the counter electrode of dye-sensitized solar cells (DSSCs) having carbon nanotubes (CNTs) instead of the TiO2 layer within the working electrode are simulated in this work. Attention is paid to the effect of CNT layer thickness on the photo-electrochemical (PEC) characteristics of the CNT-DSSCs. Comparison with other conventional TiO2-DSSC systems, taking into account the working electrode film thickness, is also described here. To achieve these goals, a model is presented to explain charge transport and electron recombination, which involve electron photo-excitation in dye molecules, injection of electrons from the excited dye to the CNT working electrode conduction band, diffusion of electrons inside the CNT electrode, charge transfer between the oxidized dye and I-, and recombination of electrons. The simulation is based on solving non-linear equations using the Newton-Raphson numerical method. This concept is proposed for modelling numerical Faradaic impedance at the photo-electrode and the platinum counter electrode. It then simulates the cell impedance spectrum describing the locus of the three semicircles in the Nyquist diagram. The transient equivalent circuit model is also presented based on optimizing current-voltage curves of CNT-DSSCs so as to optimize the fill factor (FF) and conversion efficiency (η). The results show that the simulated characteristics of CNT-DSSCs, with different active CNT layer thicknesses, are superior to those of conventional TiO2-DSSCs.

  14. Increasing dimension of structures by 4D printing shape memory polymers via fused deposition modeling

    NASA Astrophysics Data System (ADS)

    Hu, G. F.; Damanpack, A. R.; Bodaghi, M.; Liao, W. H.

    2017-12-01

    The main objective of this paper is to introduce a 4D printing method to program shape memory polymers (SMPs) during the fabrication process. Fused deposition modeling (FDM) as a filament-based printing method is employed to program SMPs while depositing the material. This method is implemented to fabricate complicated polymeric structures with self-bending features without the need for any post-programming. Experiments are conducted to demonstrate the feasibility of one-dimensional (1D)-to-2D and 2D-to-3D self-bending. It is shown that 3D printed plate structures can transform into masonry-inspired 3D curved shell structures simply by heating. Good reliability of SMP programming during the printing process is also demonstrated. A 3D macroscopic constitutive model is established to simulate thermo-mechanical features of the printed SMPs. Governing equations are also derived to simulate the programming mechanism during the printing process and the shape change of self-bending structures. In this respect, a finite element formulation is developed considering von Kármán geometric nonlinearity and solved by implementing an iterative Newton-Raphson scheme. The accuracy of the computational approach is checked with experimental results. It is demonstrated that the theoretical model is able to replicate the main characteristics observed in the experiments. This research is likely to advance the state of the art in FDM 4D printing, and provide pertinent results and a computational tool that are instrumental in the design of smart materials and structures with self-bending features.

  15. Linear and nonlinear dynamic analysis of redundant load path bearingless rotor systems

    NASA Technical Reports Server (NTRS)

    Murthy, V. R.; Shultz, Louis A.

    1994-01-01

    The goal of this research is to develop the transfer matrix method to treat nonlinear autonomous boundary value problems with multiple branches. The application is the complete nonlinear aeroelastic analysis of multiple-branched rotor blades. Once the development is complete, it can be incorporated into the existing transfer matrix analyses. There are several difficulties to be overcome in reaching this objective. The conventional transfer matrix method is limited in that it is applicable only to linear branch chain-like structures, but consideration of multiple branch modeling is important for bearingless rotors. Also, hingeless and bearingless rotor blade dynamic characteristics (particularly their aeroelasticity problems) are inherently nonlinear. The nonlinear equations of motion and the multiple-branched boundary value problem are treated together using a direct transfer matrix method. First, the formulation is applied to a nonlinear single-branch blade to validate the nonlinear portion of the formulation. The nonlinear system of equations is iteratively solved using a form of Newton-Raphson iteration scheme developed for differential equations of continuous systems. The formulation is then applied to determine the nonlinear steady state trim and aeroelastic stability of a rotor blade in hover with two branches at the root. A comprehensive computer program is developed and is used to obtain numerical results for the (1) free vibration, (2) nonlinearly deformed steady state, (3) free vibration about the nonlinearly deformed steady state, and (4) aeroelastic stability tasks. The numerical results obtained by the present method agree with results from other methods.

  16. An efficient solution technique for shockwave-boundary layer interactions with flow separation and slot suction effects

    NASA Technical Reports Server (NTRS)

    Edwards, Jack R.; Mcrae, D. Scott

    1991-01-01

    An efficient method for computing two-dimensional compressible Navier-Stokes flow fields is presented. The solution algorithm is a fully-implicit approximate factorization technique based on an unsymmetric line Gauss-Seidel splitting of the equation system Jacobian matrix. Convergence characteristics are improved by the addition of acceleration techniques based on Shamanskii's method for nonlinear equations and Broyden's quasi-Newton update. Characteristic-based differencing of the equations is provided by means of Van Leer's flux vector splitting. In this investigation, emphasis is placed on the fast and accurate computation of shock-wave-boundary layer interactions with and without slot suction effects. In the latter context, a set of numerical boundary conditions for simulating the transpiration flow in an open slot is devised. Both laminar and turbulent cases are considered, with turbulent closure provided by a modified Cebeci-Smith algebraic model. Comparisons with computational and experimental data sets are presented for a variety of interactions, and a fully-coupled simulation of a plenum chamber/inlet flowfield with shock interaction and suction is also shown and discussed.
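
    A hedged sketch of the Broyden rank-one quasi-Newton update used as an acceleration device: the Jacobian approximation is refreshed from the step and the change in the residual instead of being recomputed each iteration. The toy system and the choice of the initial Jacobian are illustrative, not the paper's solver.

    import numpy as np

    def broyden_solve(F, x0, B0, tol=1e-10, max_iter=100):
        x, B = np.asarray(x0, dtype=float), np.asarray(B0, dtype=float)
        f = F(x)
        for _ in range(max_iter):
            if np.linalg.norm(f) < tol:
                break
            dx = np.linalg.solve(B, -f)
            x_new = x + dx
            f_new = F(x_new)
            B = B + np.outer(f_new - f - B @ dx, dx) / (dx @ dx)   # Broyden rank-one update
            x, f = x_new, f_new
        return x

    # toy nonlinear system; the Jacobian is evaluated only once, at the starting point
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, np.exp(x[0]) + x[1] - 1.0])
    x0 = np.array([1.0, -1.5])
    B0 = np.array([[2.0 * x0[0], 2.0 * x0[1]], [np.exp(x0[0]), 1.0]])
    print(broyden_solve(F, x0, B0))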

  17. Pressure-based high-order TVD methodology for dynamic stall control

    NASA Astrophysics Data System (ADS)

    Yang, H. Q.; Przekwas, A. J.

    1992-01-01

    The quantitative prediction of the dynamics of separating unsteady flows, such as dynamic stall, is of crucial importance. This six-month SBIR Phase 1 study has developed several new pressure-based methodologies for solving the 3D Navier-Stokes equations in both stationary and moving (body-conforming) coordinates. The present pressure-based algorithm is equally efficient for low-speed incompressible flows and high-speed compressible flows. The discretization of convective terms by the presently developed high-order TVD schemes requires no artificial dissipation and can properly resolve the concentrated vortices in the wing-body with minimum numerical diffusion. It is demonstrated that the proposed Newton iteration technique not only increases the convergence rate but also strongly couples the iteration between pressure and velocities. The proposed hyperbolization of the pressure correction equation is shown to increase the solver's efficiency. The proposed methodologies were implemented in an existing CFD code, REFLEQS. The modified code was used to simulate both static and dynamic stalls on two- and three-dimensional wing-body configurations. Three-dimensional effects and flow physics are discussed.

  18. Coupling fluid-structure interaction with phase-field fracture

    NASA Astrophysics Data System (ADS)

    Wick, Thomas

    2016-12-01

    In this work, a concept for coupling fluid-structure interaction with brittle fracture in elasticity is proposed. The fluid-structure interaction problem is modeled in terms of the arbitrary Lagrangian-Eulerian technique and couples the isothermal, incompressible Navier-Stokes equations with nonlinear elastodynamics using the Saint-Venant Kirchhoff solid model. The brittle fracture model is based on a phase-field approach for cracks in elasticity and pressurized elastic solids. In order to derive a common framework, the phase-field approach is re-formulated in Lagrangian coordinates to combine it with fluid-structure interaction. A crack irreversibility condition, which is mathematically characterized as an inequality constraint in time, is enforced with the help of an augmented Lagrangian iteration. The resulting problem is highly nonlinear and solved with a modified Newton method (e.g., error-oriented) that specifically allows for a temporary increase of the residuals. The proposed framework is substantiated with several numerical tests. In these examples, computational stability in space and time is shown for several goal functionals, which demonstrates the reliability of the numerical modeling and algorithmic techniques. Current limitations, such as the necessity of using solid damping, are also addressed.

  19. Biomolecular Interaction Analysis Using an Optical Surface Plasmon Resonance Biosensor: The Marquardt Algorithm vs Newton Iteration Algorithm

    PubMed Central

    Hu, Jiandong; Ma, Liuzheng; Wang, Shun; Yang, Jianming; Chang, Keke; Hu, Xinran; Sun, Xiaohui; Chen, Ruipeng; Jiang, Min; Zhu, Juanhua; Zhao, Yuanyuan

    2015-01-01

    Kinetic analysis of biomolecular interactions is a powerful means to quantify the binding kinetic constants for the determination of a complex formed or dissociated within a given time span. Surface plasmon resonance biosensors provide an essential approach in the analysis of biomolecular interactions, including antigen-antibody and receptor-ligand interaction processes. The binding affinity of the antibody to the antigen (or the receptor to the ligand) reflects the biological activities of the control antibodies (or receptors) and the corresponding immune signal responses in the pathologic process. Moreover, both the association rate and the dissociation rate of the receptor to the ligand are essential parameters for the study of signal transmission between cells. Experimental data may lead to complicated real-time curves that do not fit well to the kinetic model. This paper presents an analysis approach for biomolecular interactions established by utilizing the Marquardt algorithm. This algorithm was implemented in the homemade bioanalyzer to perform the nonlinear curve-fitting of the association and dissociation processes of the receptor to the ligand. Compared with the results from the Newton iteration algorithm, the Marquardt algorithm not only reduces the dependence on the initial value, thereby avoiding divergence, but also greatly reduces the number of iterative regressions. The association and dissociation rate constants, ka and kd, and the affinity parameters for the biomolecular interaction, KA and KD, were experimentally determined to be 6.969×10⁵ mL·g⁻¹·s⁻¹, 0.00073 s⁻¹, 9.5466×10⁸ mL·g⁻¹, and 1.0475×10⁻⁹ g·mL⁻¹, respectively, from the injection of the HBsAg solution at a concentration of 16 ng·mL⁻¹. The kinetic constants were evaluated distinctly by using the obtained data from the curve-fitting results. PMID:26147997
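
    A hedged sketch of Marquardt (Levenberg-Marquardt) curve fitting applied to a simple 1:1 binding association model, with scipy.optimize.curve_fit standing in for the homemade bioanalyzer software; the concentration, rate constants, saturation response, and noise level are made up.

    import numpy as np
    from scipy.optimize import curve_fit

    C = 1.0e-9                          # analyte concentration, M (assumed)
    Rmax = 100.0                        # saturation response, assumed known so ka and kd stay identifiable
    t = np.linspace(0.0, 300.0, 150)    # time, s

    def association(t, ka, kd):
        # 1:1 binding association phase: R(t) = Req*(1 - exp(-(ka*C + kd)*t))
        kobs = ka * C + kd
        return Rmax * ka * C / kobs * (1.0 - np.exp(-kobs * t))

    rng = np.random.default_rng(42)
    R_meas = association(t, 1.0e6, 1.0e-3) + 0.5 * rng.standard_normal(t.size)

    # method='lm' selects the Levenberg-Marquardt algorithm
    popt, pcov = curve_fit(association, t, R_meas, p0=[5.0e5, 5.0e-4], method="lm")
    ka, kd = popt
    print(f"ka = {ka:.3g} 1/(M s), kd = {kd:.3g} 1/s, KD = {kd / ka:.3g} M")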

  20. Carney v Newton: expert evidence about the standard of clinical notes.

    PubMed

    Faunce, Thomas; Hammer, Ingrid; Jefferys, Susannah

    2007-12-01

    In Carney v Newton [2006] TASSC 4 the Tasmanian Supreme Court heard a claim that the defendant breached his duty of care by failing to properly diagnose and treat a node positive carcinoma in the plaintiff's breast tissue. At trial, argument turned on the actual dialogue that took place during the initial consultation, with significant reliance on the clinical notes of the defendant. The court gave considerable weight to "expert" witnesses in ascertaining the acceptability of the defendant's conduct concerning the maintenance and interpretation of his clinical notes. This raises important questions in relation to proof of quality of medical records as part of the current professional standard of care, as modified by recent legislation in most jurisdictions.

  1. Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis

    NASA Technical Reports Server (NTRS)

    Padovan, J.

    1981-01-01

    A multiphase self-adaptive predictor-corrector type algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses including kinematic, kinetic and material effects as well as pre/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface which serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks which enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.

  2. Teaching Newton's 3rd law of motion using learning by design approach

    NASA Astrophysics Data System (ADS)

    Aquino, Jiezel G.; Caliguid, Mariel P.; Buan, Amelia T.; Magsayod, Joy R.; Lahoylahoy, Myrna E.

    2018-01-01

    This paper presents the process and implementation of the Learning by Design approach in teaching Newton's 3rd Law of Motion. A lesson activity from integrative STEM education was adapted, modified and enhanced through pilot testing. After revisions, the implementation was carried out with one class. The respondents' prior knowledge was first assessed by a pretest. PPIT (present the scenario, plan, implement and test) was the framework followed in the implementation of Learning by Design. Worksheets were then utilized to measure their conceptual understanding and perception. A scoring guide was also used to evaluate the students' output. Paired t-test analysis showed a significant difference between the pretest and posttest achievement scores, implying that the performance of the students improved during the implementation of Learning by Design. The analysis of variance also indicates that low-, average- and high-performing students all benefited from the Learning by Design approach. The results of this study suggest that Learning by Design is an effective approach to teaching Newton's 3rd Law of Motion and can thus be used in a science classroom.

  3. Resolving the Large Scale Spectral Variability of the Luminous Seyfert 1 Galaxy 1H 0419-577

    NASA Technical Reports Server (NTRS)

    Pounds, K. A.; Reeves, J. N.; Page, K. L.; OBrien, P. T.

    2004-01-01

    An XMM-Newton observation of the luminous Seyfert 1 galaxy 1H 0419-577 in September 2002, when the source was in an extreme low-flux state, found a very hard X-ray spectrum at 1-10 keV with a strong soft excess below approximately 1 keV. Comparison with an earlier XMM-Newton observation when 1H 0419-577 was X-ray bright indicated the dominant spectral variability was due to a steep power law or cool Comptonized thermal emission. Four further XMM-Newton observations, with 1H 0419-577 in intermediate flux states, now support that conclusion, while we also find the variable emission component in intermediate state difference spectra to be strongly modified by absorption in low ionisation matter. The variable soft excess is seen to be an artefact of absorption of the underlying continuum while the core soft emission is attributed to recombination in an extended region of more highly ionised gas. This new analysis underlines the importance of fully accounting for absorption in characterizing AGN X-ray spectra.

  4. Computation of the shock-wave boundary layer interaction with flow separation

    NASA Technical Reports Server (NTRS)

    Ardonceau, P.; Alziary, T.; Aymer, D.

    1980-01-01

    The boundary layer concept is used to describe the flow near the wall. The external flow is approximated by a pressure displacement relationship (tangent wedge in linearized supersonic flow). The boundary layer equations are solved in finite difference form and the question of the presence and unicity of the solution is considered for the direct problem (assumed pressure) or converse problem (assumed displacement thickness, friction ratio). The coupling algorithm presented implicitly processes the downstream boundary condition necessary to correctly define the interacting boundary layer problem. The algorithm uses a Newton linearization technique to provide a fast convergence.

  5. 3-D minimum-structure inversion of magnetotelluric data using the finite-element method and tetrahedral grids

    NASA Astrophysics Data System (ADS)

    Jahandari, H.; Farquharson, C. G.

    2017-11-01

    Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration which greatly minimizes the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions sufficiently recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
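
    The flavour of a model-space Gauss-Newton step in which the normal equations are solved iteratively using only Hessian-vector products can be sketched as below with SciPy's conjugate-gradient solver; the MT forward problem, the sensitivities from pseudo-forward solves and the minimum-structure regularization of the paper are all replaced by toy stand-ins.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, cg

      rng = np.random.default_rng(0)
      m_true = rng.normal(size=50)

      def forward(m):                       # toy elementwise nonlinear forward operator
          return np.tanh(m) + 0.1 * m**2

      def jacobian(m):                      # analytic Jacobian of the toy operator
          return np.diag(1.0 / np.cosh(m)**2 + 0.2 * m)

      d_obs = forward(m_true)
      m = np.zeros(50)                      # starting model
      J = jacobian(m)
      r = d_obs - forward(m)
      lam = 1e-3                            # damping weight

      # matrix-free application of the Gauss-Newton Hessian (J^T J + lam*I) v
      Hv = LinearOperator((50, 50), matvec=lambda v: J.T @ (J @ v) + lam * v)
      dm, info = cg(Hv, J.T @ r)            # iterative solve of the normal equations
      m = m + dm
      print("CG exit flag:", info, " data misfit after one step:",
            np.linalg.norm(d_obs - forward(m)))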

  6. Conservative tightly-coupled simulations of stochastic multiscale systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Taverniers, Søren; Pigarov, Alexander Y.; Tartakovsky, Daniel M., E-mail: dmt@ucsd.edu

    2016-05-15

    Multiphysics problems often involve components whose macroscopic dynamics is driven by microscopic random fluctuations. The fidelity of simulations of such systems depends on their ability to propagate these random fluctuations throughout a computational domain, including subdomains represented by deterministic solvers. When the constituent processes take place in nonoverlapping subdomains, system behavior can be modeled via a domain-decomposition approach that couples separate components at the interfaces between these subdomains. Its coupling algorithm has to maintain a stable and efficient numerical time integration even at high noise strength. We propose a conservative domain-decomposition algorithm in which tight coupling is achieved by employing either Picard's or Newton's iterative method. Coupled diffusion equations, one of which has a Gaussian white-noise source term, provide a computational testbed for analysis of these two coupling strategies. Fully-converged (“implicit”) coupling with Newton's method typically outperforms its Picard counterpart, especially at high noise levels. This is because the number of Newton iterations scales linearly with the amplitude of the Gaussian noise, while the number of Picard iterations can scale superlinearly. At large time intervals between two subsequent inter-solver communications, the solution error for single-iteration (“explicit”) Picard's coupling can be several orders of magnitude higher than that for implicit coupling. Increasing the explicit coupling's communication frequency reduces this difference, but the resulting increase in computational cost can make it less efficient than implicit coupling at similar levels of solution error, depending on the communication frequency of the latter and the noise strength. This trend carries over into higher dimensions, although at high noise strength explicit coupling may be the only computationally viable option.
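
    The contrast between the two coupling strategies can be sketched with a toy pair of coupled scalar equations standing in for two subdomain solvers; the stochastic diffusion problem of the paper is not reproduced, and the forcing value below is an arbitrary stand-in for a strong noise source.

      import numpy as np

      s = 2.0                                           # strong coupling/forcing term
      F = lambda x, y: np.array([x - np.cos(y), y - np.sin(x) - s])

      # Picard (fixed-point): update each "solver" in turn with the latest iterate
      x, y, n_pic = 0.0, 0.0, 0
      while np.linalg.norm(F(x, y)) > 1e-10 and n_pic < 10000:
          x = np.cos(y)
          y = np.sin(x) + s
          n_pic += 1

      # Newton: solve the coupled residual with the full 2x2 Jacobian
      z, n_newt = np.zeros(2), 0
      while np.linalg.norm(F(*z)) > 1e-10 and n_newt < 50:
          J = np.array([[1.0, np.sin(z[1])],
                        [-np.cos(z[0]), 1.0]])
          z = z - np.linalg.solve(J, F(*z))
          n_newt += 1

      print(f"Picard iterations: {n_pic}, Newton iterations: {n_newt}")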

  7. Segregated Methods for Two-Fluid Models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Prosperetti, Andrea; Sundaresan, Sankaran; Pannala, Sreekanth

    2007-01-01

    The previous chapter, with its direct simulation of the fluid flow and a modeling approach to the particle phase, may be seen as a transition between the methods for a fully resolved simulation described in the first part of this book and those for a coarse grained description based on the averaging approach described in chapter ??. We now turn to the latter, which in practice are the only methods able to deal with the complex flows encountered in most situations of practical interest such as fluidized beds, pipelines, energy generation, sediment transport, and others. This chapter and the next one are devoted to numerical methods for so-called two-fluid models in which the phases are treated as inter-penetrating continua describing, e.g., a liquid and a gas, or a fluid and a suspended solid phase. These models can be extended to deal with more than two continua and, then, the denomination multi-fluid models might be more appropriate. For example, the commercial code OLGA (Bendiksen et al. 1991), widely used in the oil industry, recognizes three phases, all treated as interpenetrating continua: a continuous liquid, a gas, and a disperse liquid phase present as drops suspended in the gas phase. The more recent PeTra (Petroleum Transport, Larsen et al. 1997) also describes three phases, gas, oil, and water. Recent approaches to the description of complex boiling flows recognize four inter-penetrating phases: a liquid phase present both as a continuum and as a dispersion of droplets, and a gas/vapor phase also present as a continuum and a dispersion of bubbles. Methods for these multi-fluid models are based on those developed for the two-fluid model to which we limit ourselves. In principle, one could simply take the model equations, discretize them, and solve them by a method suitable for non-linear problems, e.g. Newton-Raphson iteration. In practice, the computational cost of such a frontal attack is nearly always prohibitive in terms of storage requirement and execution time. It is therefore necessary to devise different, less direct strategies. Two principal classes of algorithms have been developed for this purpose. The first one, described in this chapter, consists of algorithms derived from the pressure based schemes widely used in single-phase flow, such as SIMPLE and its variations (see e.g. Patankar 1980). In this approach, the model equations are solved sequentially and, therefore, these methods are often referred to as segregated algorithms to distinguish them from a second class of methods, object of the next chapter, in which a coupled or semi-coupled time-marching solution strategy is adopted. Broadly speaking, the first class of methods is suitable for relatively slow transients, such as fluidized beds, or phenomena with a long duration, such as flow in pipelines. The methods in the second group have been designed to deal principally with fast transients, such as those hypothesized in nuclear reactor safety. Since in segregated solvers the equations are solved one by one, it is possible to add equations to the mathematical model - to describe e.g. turbulence - at a later stage after the development of the initial code without major modifications of the algorithm.

  8. The XMM-Newton serendipitous survey. VII. The third XMM-Newton serendipitous source catalogue

    NASA Astrophysics Data System (ADS)

    Rosen, S. R.; Webb, N. A.; Watson, M. G.; Ballet, J.; Barret, D.; Braito, V.; Carrera, F. J.; Ceballos, M. T.; Coriat, M.; Della Ceca, R.; Denkinson, G.; Esquej, P.; Farrell, S. A.; Freyberg, M.; Grisé, F.; Guillout, P.; Heil, L.; Koliopanos, F.; Law-Green, D.; Lamer, G.; Lin, D.; Martino, R.; Michel, L.; Motch, C.; Nebot Gomez-Moran, A.; Page, C. G.; Page, K.; Page, M.; Pakull, M. W.; Pye, J.; Read, A.; Rodriguez, P.; Sakano, M.; Saxton, R.; Schwope, A.; Scott, A. E.; Sturm, R.; Traulsen, I.; Yershov, V.; Zolotukhin, I.

    2016-05-01

    Context. Thanks to the large collecting area (3 ×~1500 cm2 at 1.5 keV) and wide field of view (30' across in full field mode) of the X-ray cameras on board the European Space Agency X-ray observatory XMM-Newton, each individual pointing can result in the detection of up to several hundred X-ray sources, most of which are newly discovered objects. Since XMM-Newton has now been in orbit for more than 15 yr, hundreds of thousands of sources have been detected. Aims: Recently, many improvements in the XMM-Newton data reduction algorithms have been made. These include enhanced source characterisation and reduced spurious source detections, refined astrometric precision of sources, greater net sensitivity for source detection, and the extraction of spectra and time series for fainter sources, both with better signal-to-noise. Thanks to these enhancements, the quality of the catalogue products has been much improved over earlier catalogues. Furthermore, almost 50% more observations are in the public domain compared to 2XMMi-DR3, allowing the XMM-Newton Survey Science Centre to produce a much larger and better quality X-ray source catalogue. Methods: The XMM-Newton Survey Science Centre has developed a pipeline to reduce the XMM-Newton data automatically. Using the latest version of this pipeline, along with better calibration, a new version of the catalogue has been produced, using XMM-Newton X-ray observations made public on or before 2013 December 31. Manual screening of all of the X-ray detections ensures the highest data quality. This catalogue is known as 3XMM. Results: In the latest release of the 3XMM catalogue, 3XMM-DR5, there are 565 962 X-ray detections comprising 396 910 unique X-ray sources. Spectra and lightcurves are provided for the 133 000 brightest sources. For all detections, the positions on the sky, a measure of the quality of the detection, and an evaluation of the X-ray variability is provided, along with the fluxes and count rates in 7 X-ray energy bands, the total 0.2-12 keV band counts, and four hardness ratios. With the aim of identifying the detections, a cross correlation with 228 catalogues of sources detected in all wavebands is also provided for each X-ray detection. Conclusions: 3XMM-DR5 is the largest X-ray source catalogue ever produced. Thanks to the large array of data products associated with each detection and each source, it is an excellent resource for finding new and extreme objects. Based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.The catalogue is available at http://cdsarc.u-strasbg.fr/viz-bin/VizieR?-meta.foot&-source=IX/46

  9. Ellipsoidal terrain correction based on multi-cylindrical equal-area map projection of the reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A. A.; Safari, A.

    2004-09-01

    An operational algorithm for computation of terrain correction (or local gravity field modeling) based on application of closed-form solution of the Newton integral in terms of Cartesian coordinates in multi-cylindrical equal-area map projection of the reference ellipsoid is presented. Multi-cylindrical equal-area map projection of the reference ellipsoid has been derived and is described in detail for the first time. Ellipsoidal mass elements with various sizes on the surface of the reference ellipsoid are selected and the gravitational potential and vector of gravitational intensity (i.e. gravitational acceleration) of the mass elements are computed via numerical solution of the Newton integral in terms of geodetic coordinates {λ,ϕ,h}. Four base-edge points of the ellipsoidal mass elements are transformed into a multi-cylindrical equal-area map projection surface to build Cartesian mass elements by associating the height of the corresponding ellipsoidal mass elements to the transformed area elements. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the gravitational potential and vector of gravitational intensity of the transformed Cartesian mass elements are computed and compared with those of the numerical solution of the Newton integral for the ellipsoidal mass elements in terms of geodetic coordinates. Numerical tests indicate that the difference between the two computations, i.e. numerical solution of the Newton integral for ellipsoidal mass elements in terms of geodetic coordinates and closed-form solution of the Newton integral in terms of Cartesian coordinates, in a multi-cylindrical equal-area map projection, is less than 1.6×10⁻⁸ m²/s² for a mass element with a cross section area of 10×10 m and a height of 10,000 m. For a mass element with a cross section area of 1×1 km and a height of 10,000 m the difference is less than 1.5×10⁻⁴ m²/s². Since 1.5×10⁻⁴ m²/s² is equivalent to 1.5×10⁻⁵ m in the vertical direction, it can be concluded that a method for terrain correction (or local gravity field modeling) based on closed-form solution of the Newton integral in terms of Cartesian coordinates of a multi-cylindrical equal-area map projection of the reference ellipsoid has been developed which has the accuracy of terrain correction (or local gravity field modeling) based on the Newton integral in terms of ellipsoidal coordinates.

  10. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    PubMed

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization algorithm based on swarm intelligence; it originates from research on the movement behavior of bird flocks and fish schools. The second method, the Levenberg-Marquardt (LM) algorithm, is an approximation to the Newton method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms, and present the application of the Levenberg-Marquardt algorithm and a particle swarm algorithm to solving the inverse problem of a fault. Most importantly, the parameters for the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Better agreement was found between the predicted model anomaly and the observed gravity anomaly with the PSO method than with the LM method.

  11. Motion of a Point Mass in a Rotating Disc: A Quantitative Analysis of the Coriolis and Centrifugal Force

    NASA Astrophysics Data System (ADS)

    Haddout, Soufiane

    2016-06-01

    In Newtonian mechanics, the use of non-inertial reference frames is a generalization of Newton's laws to arbitrary reference frames. While this approach simplifies some problems, there is often little physical insight into the motion, in particular into the effects of the Coriolis force. The fictitious Coriolis force can be used by anyone in that frame of reference to explain why objects follow curved paths. In this paper, a mathematical solution based on differential equations in a non-inertial reference frame is used to study different types of motion in a rotating system. In addition, experimental data measured on a turntable device, using a video camera in a mechanics laboratory, were compared with the mathematical solution for the case of parabolically curved motion, solving the nonlinear least-squares problems with Levenberg-Marquardt and Gauss-Newton algorithms.
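
    A minimal numerical counterpart of the rotating-frame analysis is sketched below: the planar motion of a free point mass on a turntable is integrated in the rotating frame with explicit Coriolis and centrifugal terms. The angular velocity and initial conditions are illustrative, not the paper's measurements, and the least-squares fitting stage is omitted.

      import numpy as np
      from scipy.integrate import solve_ivp

      omega = 1.0                                  # turntable angular velocity (rad/s)

      def rhs(t, state):
          x, y, vx, vy = state
          ax = 2.0 * omega * vy + omega**2 * x     # Coriolis + centrifugal, x-component
          ay = -2.0 * omega * vx + omega**2 * y    # Coriolis + centrifugal, y-component
          return [vx, vy, ax, ay]

      sol = solve_ivp(rhs, (0.0, 5.0), [0.5, 0.0, 0.0, 0.2], rtol=1e-9, atol=1e-12)
      print("final rotating-frame position:", sol.y[:2, -1])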

  12. Computation of Quasiperiodic Normally Hyperbolic Invariant Tori: Rigorous Results

    NASA Astrophysics Data System (ADS)

    Canadell, Marta; Haro, Àlex

    2017-12-01

    The development of efficient methods for detecting quasiperiodic oscillations and computing the corresponding invariant tori is a subject of great importance in dynamical systems and their applications in science and engineering. In this paper, we prove the convergence of a new Newton-like method for computing quasiperiodic normally hyperbolic invariant tori carrying quasiperiodic motion in smooth families of real-analytic dynamical systems. The main result is stated as an a posteriori KAM-like theorem that allows controlling the inner dynamics on the torus with appropriate detuning parameters, in order to obtain a prescribed quasiperiodic motion. The Newton-like method leads to several fast and efficient computational algorithms, which are discussed and tested in a companion paper (Canadell and Haro in J Nonlinear Sci, 2017. doi: 10.1007/s00332-017-9388-z), in which new mechanisms of breakdown are presented.

  13. Simulation of fluid-structure interaction in micropumps by coupling of two commercial finite element programs

    NASA Astrophysics Data System (ADS)

    Klein, Andreas; Gerlach, Gerald

    1998-09-01

    This paper deals with the simulation of fluid-structure interaction phenomena in micropumps. The proposed solution approach is based on external coupling of two different solvers, which are treated here as 'black boxes'. Therefore, no intervention into the program code is necessary, and solvers can be exchanged arbitrarily. For the realization of the external iteration loop, two algorithms are considered: the relaxation-based Gauss-Seidel method and the computationally more expensive Newton method. It is demonstrated for a simplified test case that, for rather weak coupling, the Gauss-Seidel method is sufficient. However, by simply changing the considered fluid from air to water, the two physical domains become strongly coupled, and the Gauss-Seidel method fails to converge in this case. The Newton iteration scheme must be used instead.

  14. Spatio-temporal imaging of the hemoglobin in the compressed breast with diffuse optical tomography

    NASA Astrophysics Data System (ADS)

    Boverman, Gregory; Fang, Qianqian; Carp, Stefan A.; Miller, Eric L.; Brooks, Dana H.; Selb, Juliette; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.

    2007-07-01

    We develop algorithms for imaging the time-varying optical absorption within the breast given diffuse optical tomographic data collected over a time span that is long compared to the dynamics of the medium. Multispectral measurements allow for the determination of the time-varying total hemoglobin concentration and of oxygen saturation. To facilitate the image reconstruction, we decompose the hemodynamics in time into a linear combination of spatio-temporal basis functions, the coefficients of which are estimated using all of the data simultaneously, making use of a Newton-based nonlinear optimization algorithm. The solution of the extremely large least-squares problem which arises in computing the Newton update is obtained iteratively using the LSQR algorithm. A Laplacian spatial regularization operator is applied, and, in addition, we make use of temporal regularization which tends to encourage similarity between the images of the spatio-temporal coefficients. Results are shown for an extensive simulation, in which we are able to image and quantify localized changes in both total hemoglobin concentration and oxygen saturation. Finally, a breast compression study has been performed for a normal breast cancer screening subject, using an instrument which allows for highly accurate co-registration of multispectral diffuse optical measurements with an x-ray tomosynthesis image of the breast. We are able to quantify the global return of blood to the breast following compression, and, in addition, localized changes are observed which correspond to the glandular region of the breast.
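
    The core numerical step described above, a regularized Gauss-Newton update computed with LSQR, amounts to the augmented least-squares problem min ||J dx - r||² + α²||L dx||²; the Jacobian, residual and Laplacian below are small toy stand-ins for the diffuse-optical operators.

      import numpy as np
      from scipy.sparse import csr_matrix, diags, vstack
      from scipy.sparse.linalg import lsqr

      n = 100
      rng = np.random.default_rng(1)
      J = csr_matrix(rng.normal(size=(60, n)) / np.sqrt(n))    # toy Jacobian
      r = rng.normal(size=60)                                  # toy residual
      L = diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))    # 1-D Laplacian regularizer
      alpha = 0.5

      A = vstack([J, alpha * L])                  # stack regularization rows under J
      b = np.concatenate([r, np.zeros(n)])
      dx = lsqr(A, b, atol=1e-10, btol=1e-10)[0]  # LSQR solution of the update
      print("update norm:", np.linalg.norm(dx))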

  15. Unifying inflation with ΛCDM epoch in modified f(R) gravity consistent with Solar System tests

    NASA Astrophysics Data System (ADS)

    Nojiri, Shin'ichi; Odintsov, Sergei D.

    2007-12-01

    We suggest two realistic f(R) and one F(G) modified gravities which are consistent with local tests and cosmological bounds. The typical property of such theories is the presence of effective cosmological constant epochs, in such a way that early-time inflation and late-time cosmic acceleration are naturally unified within a single model. It is shown that classical instability does not appear here and that the Newton law is respected. Some discussion of the possible appearance of an anti-gravity regime and of a related modification of the theory is given.

  16. 2D Seismic Imaging of Elastic Parameters by Frequency Domain Full Waveform Inversion

    NASA Astrophysics Data System (ADS)

    Brossier, R.; Virieux, J.; Operto, S.

    2008-12-01

    Thanks to recent advances in parallel computing, full waveform inversion is today a tractable seismic imaging method to reconstruct physical parameters of the Earth's interior at different scales ranging from the near-surface to the deep crust. We present a massively parallel 2D frequency-domain full-waveform algorithm for imaging visco-elastic media from multi-component seismic data. The forward problem (i.e. the resolution of the frequency-domain 2D PSV elastodynamics equations) is based on a low-order Discontinuous Galerkin (DG) method (P0 and/or P1 interpolations). Thanks to triangular unstructured meshes, the DG method allows accurate modeling of both body waves and surface waves in the case of complex topography for a discretization of 10 to 15 cells per shear wavelength. The frequency-domain DG system is solved efficiently for multiple sources with the parallel direct solver MUMPS. The local inversion procedure (i.e. minimization of residuals between observed and computed data) is based on the adjoint-state method, which allows efficient computation of the gradient of the objective function. Applying the inversion hierarchically from the low frequencies to the higher ones defines a multiresolution imaging strategy which helps convergence towards the global minimum. In place of an expensive Newton algorithm, the combined use of the diagonal terms of the approximate Hessian matrix and optimization algorithms based on quasi-Newton methods (Conjugate Gradient, LBFGS, ...) improves the convergence of the iterative inversion. The distribution of forward problem solutions over processors, driven by a mesh partitioning performed by METIS, allows most of the inversion to be carried out in parallel. We shall present the main features of the parallel modeling/inversion algorithm, assess its scalability and illustrate its performance with realistic synthetic case studies.

  17. Second derivative time integration methods for discontinuous Galerkin solutions of unsteady compressible flows

    NASA Astrophysics Data System (ADS)

    Nigro, A.; De Bartolo, C.; Crivellini, A.; Bassi, F.

    2017-12-01

    In this paper we investigate the possibility of using the high-order accurate A (α) -stable Second Derivative (SD) schemes proposed by Enright for the implicit time integration of the Discontinuous Galerkin (DG) space-discretized Navier-Stokes equations. These multistep schemes are A-stable up to fourth-order, but their use results in a system matrix difficult to compute. Furthermore, the evaluation of the nonlinear function is computationally very demanding. We propose here a Matrix-Free (MF) implementation of Enright schemes that allows to obtain a method without the costs of forming, storing and factorizing the system matrix, which is much less computationally expensive than its matrix-explicit counterpart, and which performs competitively with other implicit schemes, such as the Modified Extended Backward Differentiation Formulae (MEBDF). The algorithm makes use of the preconditioned GMRES algorithm for solving the linear system of equations. The preconditioner is based on the ILU(0) factorization of an approximated but computationally cheaper form of the system matrix, and it has been reused for several time steps to improve the efficiency of the MF Newton-Krylov solver. We additionally employ a polynomial extrapolation technique to compute an accurate initial guess to the implicit nonlinear system. The stability properties of SD schemes have been analyzed by solving a linear model problem. For the analysis on the Navier-Stokes equations, two-dimensional inviscid and viscous test cases, both with a known analytical solution, are solved to assess the accuracy properties of the proposed time integration method for nonlinear autonomous and non-autonomous systems, respectively. The performance of the SD algorithm is compared with the ones obtained by using an MF-MEBDF solver, in order to evaluate its effectiveness, identifying its limitations and suggesting possible further improvements.
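
    A minimal matrix-free Newton-Krylov iteration in the spirit described above is sketched below: Jacobian-vector products are approximated by finite differences of the residual and each linear system is solved with GMRES. The DG discretization, the ILU(0) preconditioner and its reuse across time steps are omitted, and the residual is a small toy stand-in.

      import numpy as np
      from scipy.sparse.linalg import LinearOperator, gmres

      def residual(u):                           # toy nonlinear residual F(u) = 0
          return np.array([u[0]**2 + u[1]**2 - 4.0, u[0] * u[1] - 1.0])

      u = np.array([2.0, 0.5])                   # initial guess (e.g. an extrapolated state)
      for k in range(20):
          F = residual(u)
          if np.linalg.norm(F) < 1e-10:
              break
          eps = 1e-7
          Jop = LinearOperator((2, 2),           # finite-difference Jacobian-vector product
                               matvec=lambda v: (residual(u + eps * v) - F) / eps)
          du, info = gmres(Jop, -F)              # Krylov solve of J du = -F
          u = u + du
      print("solution:", u, " Newton iterations:", k)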

  18. Concerning an application of the method of least squares with a variable weight matrix

    NASA Technical Reports Server (NTRS)

    Sukhanov, A. A.

    1979-01-01

    An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
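
    The iterative procedure described above can be sketched as a fixed-point iteration in which a weighted least-squares estimate is recomputed with a weight matrix that depends on the current estimate; the weight model and problem sizes below are arbitrary illustrative choices, not those of the report.

      import numpy as np

      rng = np.random.default_rng(0)
      A = rng.normal(size=(30, 3))
      x_true = np.array([1.0, -2.0, 0.5])
      b = A @ x_true + 0.05 * rng.normal(size=30)

      def weights(x):
          # state-dependent weight matrix W(x) (illustrative model)
          return np.diag(1.0 / (1.0 + (A @ x - b)**2))

      x = np.zeros(3)
      for k in range(50):
          W = weights(x)
          x_new = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)   # weighted normal equations
          if np.linalg.norm(x_new - x) < 1e-12:
              break
          x = x_new
      print("estimate:", x, "after", k + 1, "iterations")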

  19. Molar distalization with the assistance of Temporary Anchorage Devices.

    PubMed

    Palencar, Adrian J

    2015-01-01

    This article describes efficient techniques for distalization of maxillary and mandibular molars with the assistance of Temporary Anchorage Devices (TADs). There are numerous occasions where the distalization of molars is required in lieu of the odontectomy of bicuspids. In the past, extra-oral force has been used (i.e. cervical or combination headgear), as well as intra-oral force (i.e. Posterior Sagittal Appliance, Modified Greenfield Appliance, Williams DMJ 20001, CD Distalizer, Magill Sagittal, Pendulum Appliance, etc.). All the intra-oral appliances share a common drawback the orthodontic clinician has to deal with: the undesirable expression of Newton's Third Law. The utilization of TADs allows us to circumvent this shortcoming by establishing absolute anchorage, and thus completely negate the expression of Newton's Third Law.

  20. Primal-dual and forward gradient implementation for quantitative susceptibility mapping.

    PubMed

    Kee, Youngwook; Deh, Kofi; Dimov, Alexey; Spincemaille, Pascal; Wang, Yi

    2017-12-01

    The aim was to investigate the computational aspects of the prior term in quantitative susceptibility mapping (QSM) by (i) comparing the Gauss-Newton conjugate gradient (GNCG) algorithm, which uses numerical conditioning (i.e., modifies the prior term), with a primal-dual (PD) formulation that avoids this, and (ii) comparing central and forward difference schemes for the discretization of the prior term. A spatially continuous formulation of the regularized QSM inversion problem and its PD formulation were derived. The Chambolle-Pock algorithm for PD was implemented and its convergence behavior was compared with that of GNCG for the original QSM. Forward and central difference schemes were compared in terms of the presence of checkerboard artifacts. All methods were tested and validated on a gadolinium phantom, ex vivo brain blocks, and in vivo brain MRI data with respect to COSMOS. The PD approach provided a faster convergence rate than GNCG. The GNCG convergence rate slowed considerably with smaller (more accurate) values of the conditioning parameter. Using a forward difference suppressed the checkerboard artifacts in QSM, as compared with the central difference. The accuracy of PD and GNCG was validated based on excellent correlation with COSMOS. The PD approach with forward difference for the gradient showed improved convergence and accuracy over the GNCG method using central difference. Magn Reson Med 78:2416-2427, 2017. © 2017 International Society for Magnetic Resonance in Medicine.
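
    The difference-scheme comparison can be made concrete with small 1-D finite-difference operators: a central difference annihilates the alternating (checkerboard) mode in the interior, which is why a prior built on it can leave such artifacts unpenalized, whereas a forward difference does not. The operators below are toy stand-ins, not the 3-D gradients of the QSM code.

      import numpy as np

      n = 8
      f = np.arange(n, dtype=float) ** 2           # toy 1-D "image"

      # forward difference: (Df x)_i = x_{i+1} - x_i (last row zeroed, Neumann-style)
      Df = np.eye(n, k=1) - np.eye(n)
      Df[-1, :] = 0.0

      # central difference: (Dc x)_i = (x_{i+1} - x_{i-1}) / 2
      Dc = (np.eye(n, k=1) - np.eye(n, k=-1)) / 2.0

      checker = (-1.0) ** np.arange(n)             # alternating (checkerboard) mode
      print("forward diff of image :", Df @ f)
      print("central diff of image :", Dc @ f)
      print("||Dc @ checker|| =", np.linalg.norm(Dc @ checker),
            " ||Df @ checker|| =", np.linalg.norm(Df @ checker))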

  1. Full waveform inversion using a decomposed single frequency component from a spectrogram

    NASA Astrophysics Data System (ADS)

    Ha, Jiho; Kim, Seongpil; Koo, Namhyung; Kim, Young-Ju; Woo, Nam-Sub; Han, Sang-Mok; Chung, Wookeen; Shin, Sungryul; Shin, Changsoo; Lee, Jaejoon

    2018-06-01

    Many full waveform inversion methods have been developed to construct velocity models of the subsurface, and various approaches have been presented to obtain inversion results with long-wavelength features even when the seismic data lack low-frequency components. In this study, a new full waveform inversion algorithm was proposed to recover a long-wavelength velocity model that reflects the inherent characteristics of each frequency component of the seismic data, using a single frequency component decomposed from the spectrogram. We utilized the wavelet transform method to obtain the spectrogram, and the decomposed signal from the spectrogram was used as the transformed data. The Gauss-Newton method with the diagonal elements of an approximate Hessian matrix was used to update the model parameters at each iteration. Based on the results of time-frequency analysis of the spectrogram, numerical tests with several decomposed frequency components were performed using a modified SEG/EAGE salt dome (A-A‧) line to demonstrate the feasibility of the proposed inversion algorithm. The tests showed that a reasonable inverted velocity model with long-wavelength structures can be obtained using a single frequency component. It was also confirmed that when strong noise occurs in part of the frequency band, it is feasible to obtain a long-wavelength velocity model from the noisy data using a frequency component that is less affected by the noise. Finally, it was confirmed that the results obtained from the spectrogram inversion can be used as an initial velocity model in conventional inversion methods.

  2. Distance majorization and its applications

    PubMed Central

    Chi, Eric C.; Zhou, Hua; Lange, Kenneth

    2014-01-01

    The problem of minimizing a continuously differentiable convex function over an intersection of closed convex sets is ubiquitous in applied mathematics. It is particularly interesting when it is easy to project onto each separate set, but nontrivial to project onto their intersection. Algorithms based on Newton’s method such as the interior point method are viable for small to medium-scale problems. However, modern applications in statistics, engineering, and machine learning are posing problems with potentially tens of thousands of parameters or more. We revisit this convex programming problem and propose an algorithm that scales well with dimensionality. Our proposal is an instance of a sequential unconstrained minimization technique and revolves around three ideas: the majorization-minimization principle, the classical penalty method for constrained optimization, and quasi-Newton acceleration of fixed-point algorithms. The performance of our distance majorization algorithms is illustrated in several applications. PMID:25392563

  3. Gravitation and Special Relativity from Compton Wave Interactions at the Planck Scale: An Algorithmic Approach

    NASA Technical Reports Server (NTRS)

    Blackwell, William C., Jr.

    2004-01-01

    In this paper space is modeled as a lattice of Compton wave oscillators (CWOs) of near-Planck size. It is shown that gravitation and special relativity emerge from the interaction between particles' Compton waves. To develop this CWO model an algorithmic approach was taken, incorporating simple rules of interaction at the Planck scale developed using well-known physical laws. This technique naturally leads to Newton's law of gravitation and a new form of doubly special relativity. The model is in apparent agreement with the holographic principle, and it predicts a cutoff energy for ultrahigh-energy cosmic rays that is consistent with observational data.

  4. On Some Separated Algorithms for Separable Nonlinear Least Squares Problems.

    PubMed

    Gan, Min; Chen, C L Philip; Chen, Guang-Yong; Chen, Long

    2017-10-03

    For a class of nonlinear least squares problems, it is usually very beneficial to separate the variables into a linear and a nonlinear part and take full advantage of reliable linear least squares techniques. Consequently, the original problem is turned into a reduced problem which involves only nonlinear parameters. We consider in this paper four separated algorithms for such problems. The first one is the variable projection (VP) algorithm with the full Jacobian matrix of Golub and Pereyra. The second and third ones are VP algorithms with simplified Jacobian matrices proposed by Kaufman and by Ruano et al., respectively. The fourth one only uses the gradient of the reduced problem. Monte Carlo experiments are conducted to compare the performance of these four algorithms. From the results of the experiments, we find that: 1) the simplified Jacobian proposed by Ruano et al. is not a good choice for the VP algorithm; moreover, it may render the algorithm hard to converge; 2) the fourth algorithm performs moderately among these four algorithms; 3) the VP algorithm with the full Jacobian matrix performs more stably than the VP algorithm with Kaufman's simplified one; and 4) the combination of the VP algorithm and the Levenberg-Marquardt method is more effective than the combination of the VP algorithm and the Gauss-Newton method.
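
    A minimal sketch of the variable projection idea discussed above: for a separable model y ≈ c1·exp(-a1·t) + c2·exp(-a2·t), the linear coefficients are eliminated by an inner linear least-squares solve and only the nonlinear rates are handed to a Levenberg-Marquardt solver. This is the generic VP construction with a finite-difference Jacobian of the reduced residual, not any of the four specific algorithms compared in the paper.

      import numpy as np
      from scipy.optimize import least_squares

      t = np.linspace(0.0, 4.0, 200)
      rng = np.random.default_rng(3)
      y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t) + 0.01 * rng.normal(size=t.size)

      def reduced_residual(a):
          Phi = np.column_stack([np.exp(-a[0] * t), np.exp(-a[1] * t)])
          c, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # eliminate the linear part
          return Phi @ c - y                            # projected (reduced) residual

      sol = least_squares(reduced_residual, x0=[0.5, 2.0], method='lm')
      a = sol.x
      Phi = np.column_stack([np.exp(-a[0] * t), np.exp(-a[1] * t)])
      c = np.linalg.lstsq(Phi, y, rcond=None)[0]        # recover the linear coefficients
      print("rates:", a, " amplitudes:", c)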

  5. A modified MOD16 algorithm to estimate evapotranspiration over alpine meadow on the Tibetan Plateau, China

    NASA Astrophysics Data System (ADS)

    Chang, Yaping; Qin, Dahe; Ding, Yongjian; Zhao, Qiudong; Zhang, Shiqiang

    2018-06-01

    The long-term change of evapotranspiration (ET) is crucial for managing water resources in areas with extreme climates, such as the Tibetan Plateau (TP). This study proposed a modified algorithm for estimating ET over alpine meadow on the TP in China, based on the global-scale MOD16 algorithm. Wind speed and vegetation height were integrated to estimate aerodynamic resistance, while the temperature and moisture constraints for stomatal conductance were revised based on the technique proposed by Fisher et al. (2008). Moreover, Fisher's method for soil evaporation was adopted to reduce the uncertainty in soil evaporation estimation. Five representative alpine meadow sites on the TP were selected to investigate the performance of the modified algorithm. Comparisons were made between the ET observed using the Eddy Covariance (EC) technique and that estimated using both the original and modified algorithms. The results revealed that the modified algorithm performed better than the original MOD16 algorithm, with the coefficient of determination (R²) increasing from 0.26 to 0.68 and the root mean square error (RMSE) decreasing from 1.56 to 0.78 mm d⁻¹. The modified algorithm performed slightly better, with a higher R² (0.70) and lower RMSE (0.61 mm d⁻¹), for after-precipitation days than for non-precipitation days at the Suli site. Conversely, better results were obtained for non-precipitation days than for after-precipitation days at the Arou, Tanggula, and Hulugou sites, indicating that the modified algorithm may be more suitable for estimating ET for non-precipitation days, with higher accuracy than for after-precipitation days, which had large observation errors. The comparisons between the modified algorithm and two mainstream methods suggested that the modified algorithm could produce high-accuracy ET estimates over the alpine meadow sites on the TP.

  6. Testing local Lorentz invariance with short-range gravity

    DOE PAGES

    Kostelecký, V. Alan; Mewes, Matthew

    2017-01-10

    The Newton limit of gravity is studied in the presence of Lorentz-violating gravitational operators of arbitrary mass dimension. The linearized modified Einstein equations are obtained and the perturbative solutions are constructed and characterized. We develop a formalism for data analysis in laboratory experiments testing gravity at short range and demonstrate that these tests provide unique sensitivity to deviations from local Lorentz invariance.

  7. Automated Design of a High-Velocity Channel

    DTIC Science & Technology

    2006-05-01

    using Newton's method. 2.2.2 Groundwater Applications: Optimization methods are also very useful for solving groundwater problems. Townley et al. ... [Townley 85] apply present computational algorithms to steady and transient models for groundwater flow. The aquifer storage coefficients, transmissivities ... "Reliability Analysis", Water Resources Research, Vol. 28, No. 12, December 1992, pp. 3269-3280. [Townley 85] Townley, L. R. and Wilson, J. L

  8. A Microcomputer-Based Network Optimization Package.

    DTIC Science & Technology

    1981-09-01

    from either case a or c as Truncated-Newton directions. It can be shown [Ref. 27] that the TNCG algorithm is globally convergent and capable of ... nonzero values of LGB indicate bounds at which arcs are fixed or reversed. Fixed arcs have negative T( ) while free arcs have positive T( ) values ... "Solution of Generalized Network Problems," Working Paper, Department of Finance and Business Economics, School of Business, University of Southern

  9. Extended behavioural modelling of FET and lattice-mismatched HEMT devices

    NASA Astrophysics Data System (ADS)

    Khawam, Yahya; Albasha, Lutfi

    2017-07-01

    This study presents an improved large-signal model that can be used for high electron mobility transistors (HEMTs) and field-effect transistors, based on measurement-based behavioural modelling techniques. The steps for accurate large- and small-signal modelling of a transistor are also discussed. The proposed DC model is based on the Fager model, since it balances the number of model parameters against accuracy. The objective is to increase the accuracy of the drain-source current model with respect to any change in gate or drain voltage, and to extend the improved DC model to account for the soft breakdown and kink effects found in some variants of HEMT devices. A hybrid Newton's-Genetic algorithm is used to determine the unknown parameters in the developed model. In addition to accurate modelling of a transistor's DC characteristics, the complete large-signal model is built using multi-bias s-parameter measurements. The complete model is obtained by using a hybrid multi-objective optimisation technique (Non-dominated Sorting Genetic Algorithm II) together with a local minimum search (multivariable Newton's method) for parasitic element extraction. Finally, the results of DC modelling and multi-bias s-parameter modelling are presented, and three device-modelling recommendations are discussed.
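
    The hybrid global/local extraction strategy can be sketched with generic SciPy tools, using a differential-evolution global search followed by a quasi-Newton (BFGS) local refinement as stand-ins for the paper's genetic and multivariable-Newton stages. The objective below is a toy I-V-style fit, not the Fager HEMT model.

      import numpy as np
      from scipy.optimize import differential_evolution, minimize

      v = np.linspace(0.0, 3.0, 60)
      i_meas = 0.8 * np.tanh(1.7 * v) * (1.0 + 0.05 * v)       # synthetic "measurement"

      def sse(p):
          a, b, lam = p
          return np.sum((a * np.tanh(b * v) * (1.0 + lam * v) - i_meas) ** 2)

      bounds = [(0.1, 2.0), (0.5, 3.0), (0.0, 0.2)]
      coarse = differential_evolution(sse, bounds, seed=0)     # global (genetic-style) stage
      fine = minimize(sse, coarse.x, method='BFGS')            # local quasi-Newton refinement
      print("global estimate:", coarse.x, " refined estimate:", fine.x)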

  10. A 2D electrostatic PIC code for the Mark III Hypercube

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ferraro, R.D.; Liewer, P.C.; Decyk, V.K.

    We have implemented a 2D electrostatic plasma particle-in-cell (PIC) simulation code on the Caltech/JPL Mark IIIfp Hypercube. The code simulates plasma effects by evolving in time the trajectories of thousands to millions of charged particles subject to their self-consistent fields. Each particle's position and velocity is advanced in time using a leap-frog method for integrating Newton's equations of motion in electric and magnetic fields. The electric field due to these moving charged particles is calculated on a spatial grid at each time step by solving Poisson's equation in Fourier space. These two tasks represent the largest part of the computation. To obtain efficient operation on a distributed memory parallel computer, we are using the General Concurrent PIC (GCPIC) algorithm previously developed for a 1D parallel PIC code.
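
    The particle-push part of such a code, the leap-frog integration of Newton's equations, can be sketched as below for a prescribed electric field; the self-consistent Poisson solve, the magnetic rotation and the hypercube domain decomposition are all omitted, and the field is an illustrative stand-in.

      import numpy as np

      qm = -1.0                                  # charge-to-mass ratio (normalized units)
      dt = 0.05

      def E_field(x):
          # prescribed linear field standing in for the grid-based Poisson solve
          return x

      rng = np.random.default_rng(0)
      x = rng.normal(size=(1000, 2))             # particle positions at time t
      v = 0.1 * rng.normal(size=(1000, 2))       # velocities staggered at t - dt/2

      for step in range(200):
          v = v + qm * E_field(x) * dt           # "kick": velocity advanced to t + dt/2
          x = x + v * dt                         # "drift": position advanced to t + dt

      print("mean kinetic energy:", 0.5 * np.mean(np.sum(v**2, axis=1)))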

  11. ℓ1-Regularized full-waveform inversion with prior model information based on orthant-wise limited memory quasi-Newton method

    NASA Astrophysics Data System (ADS)

    Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian

    2017-07-01

    Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and to the initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-Memory Quasi-Newton (OWL-QN) method extends the widely used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and the prior model information obtained from sonic logs and geological information, we implement the OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improves the inversion results but also has strong robustness to noise.

  12. Energy-modeled flight in a wind field

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feldman, M.A.; Cliff, E.M.

    Optimal shaping of aerospace trajectories has provided the motivation for much modern study of optimization theory and algorithms. Current industrial practice favors approaches where the continuous-time optimal control problem is transcribed to a finite-dimensional nonlinear programming problem (NLP) by a discretization process. Two such formulations are implemented in the POST and the OTIS codes. In the present paper we use a discretization that is specially adapted to the flight problem of interest. Among the unique aspects of the present discretization are: a least-squares formulation for certain kinematic constraints; the use of energy ideas to enforce Newton's laws; and the inclusion of large-magnitude horizontal winds. In the next section we provide a description of the flight problem and its NLP representation. Following this we provide some details of the constraint formulation. Finally, we present an overview of the NLP problem.

  13. Analysis of Accuracy and Epoch on Back-propagation BFGS Quasi-Newton

    NASA Astrophysics Data System (ADS)

    Silaban, Herlan; Zarlis, Muhammad; Sawaluddin

    2017-12-01

    Back-propagation is one of the learning algorithms for artificial neural networks that has been widely used to solve various problems, such as pattern recognition, prediction and classification. The back-propagation architecture affects the outcome of the learning process. BFGS Quasi-Newton is one of the functions that can be used to update the weights in back-propagation. This research tested several back-propagation architectures using classical back-propagation and back-propagation with BFGS. Seven architectures were tested on the glass dataset with various numbers of neurons: six architectures with 1 hidden layer and one architecture with 2 hidden layers. BP with BFGS improves the convergence of the learning process; the average improvement in convergence is 98.34%. BP with BFGS is more effective on architectures with a smaller number of neurons, with the number of epochs decreasing by 94.37% and accuracy increasing by about 0.5%.

  14. Extending Newton's Universal Theory of Gravity

    NASA Astrophysics Data System (ADS)

    Aisenberg, Sol

    2011-11-01

    This should remove the mystery of Dark Matter. Newton's universal theory of gravity only used observations of the motion of planets in our solar system. Hubble later used observations of fixed stars in the universe, and showed that the fixed stars were actually galaxies with very large numbers of stars. Newton's universal law of gravity could not explain these new observations without the mystery of dark matter for the additional gravity. In science, when a theory is not able to explain new observations it is necessary to modify the theory or abandon it. Rubin observed flat (constant velocity) rotation curves for stars in spiral galaxies. Dark matter was proposed to provide the missing gravity. The equation balancing gravitational force and centripetal force is M*G = v*v*r, and for the observed constant velocity v this requires M*G to be a linear function of the distance r. If the linear dependence is assigned to G instead of M, giving a new value Gn = G + A*r, this explains the observations in the cosmos and also in our solar system for small r. See "The Misunderstood Universe" for more details.

  15. Resolving the Large Scale Spectral Variability of the Luminous Seyfert 1 Galaxy 1H 0419-577: Evidence for a New Emission Component and Absorption by Cold Dense Matter

    NASA Technical Reports Server (NTRS)

    Pounds, K. A.; Reeves, J. N.; Page, K. L.; OBrien, P. T.

    2004-01-01

    An XMM-Newton observation of the luminous Seyfert 1 galaxy 1H 0419-577 in September 2002, when the source was in an extreme low-flux state, found a very hard X-ray spectrum at 1-10 keV with a strong soft excess below ~1 keV. Comparison with an earlier XMM-Newton observation when 1H 0419-577 was X-ray bright indicated the dominant spectral variability was due to a steep power law or cool Comptonised thermal emission. Four further XMM-Newton observations, with 1H 0419-577 in intermediate flux states, now support that conclusion, while we also find the variable emission component in intermediate state difference spectra to be strongly modified by absorption in low ionisation matter. The variable soft excess then appears to be an artefact of absorption of the underlying continuum while the core soft emission can be attributed to recombination in an extended region of more highly ionised gas. We note the wider implications of finding substantial cold dense matter overlying (or embedded in) the X-ray continuum source in a luminous Seyfert 1 galaxy.

  16. A fast identification algorithm for Box-Cox transformation based radial basis function neural network.

    PubMed

    Hong, Xia

    2006-07-01

    In this letter, a Box-Cox transformation-based radial basis function (RBF) neural network is introduced using the RBF neural network to represent the transformed system output. Initially a fixed and moderate sized RBF model base is derived based on a rank revealing orthogonal matrix triangularization (QR decomposition). Then a new fast identification algorithm is introduced using Gauss-Newton algorithm to derive the required Box-Cox transformation, based on a maximum likelihood estimator. The main contribution of this letter is to explore the special structure of the proposed RBF neural network for computational efficiency by utilizing the inverse of matrix block decomposition lemma. Finally, the Box-Cox transformation-based RBF neural network, with good generalization and sparsity, is identified based on the derived optimal Box-Cox transformation and a D-optimality-based orthogonal forward regression algorithm. The proposed algorithm and its efficacy are demonstrated with an illustrative example in comparison with support vector machine regression.

  17. Performance study of LMS based adaptive algorithms for unknown system identification

    NASA Astrophysics Data System (ADS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-07-01

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.
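
    As a minimal example of the ASI setting described above, the sketch below identifies a short unknown FIR system from noisy output data with the normalized LMS (NLMS) update; the filter length, step size and signals are illustrative choices.

      import numpy as np

      rng = np.random.default_rng(0)
      h_true = np.array([0.8, -0.4, 0.2, 0.1])        # unknown system (FIR)
      N, mu, eps = 5000, 0.5, 1e-6
      x = rng.normal(size=N)                          # input signal
      d = np.convolve(x, h_true)[:N] + 0.01 * rng.normal(size=N)   # noisy system output

      w = np.zeros(4)                                 # adaptive filter weights
      for n in range(4, N):
          u = x[n:n-4:-1]                             # most recent 4 input samples
          e = d[n] - w @ u                            # a priori error
          w = w + mu * e * u / (eps + u @ u)          # NLMS update
      print("estimated system:", w)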

  18. Performance study of LMS based adaptive algorithms for unknown system identification

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Javed, Shazia; Ahmad, Noor Atinah

    Adaptive filtering techniques have gained much popularity in the modeling of unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. Main objective of this comparative study is to observe the effects of fast convergence rate of improved versions of LMS algorithms on their robustness and misalignment.

  19. SPIDERS: selection of spectroscopic targets using AGN candidates detected in all-sky X-ray surveys

    NASA Astrophysics Data System (ADS)

    Dwelly, T.; Salvato, M.; Merloni, A.; Brusa, M.; Buchner, J.; Anderson, S. F.; Boller, Th.; Brandt, W. N.; Budavári, T.; Clerc, N.; Coffey, D.; Del Moro, A.; Georgakakis, A.; Green, P. J.; Jin, C.; Menzel, M.-L.; Myers, A. D.; Nandra, K.; Nichol, R. C.; Ridl, J.; Schwope, A. D.; Simm, T.

    2017-07-01

    SPIDERS (SPectroscopic IDentification of eROSITA Sources) is a Sloan Digital Sky Survey IV (SDSS-IV) survey running in parallel to the Extended Baryon Oscillation Spectroscopic Survey (eBOSS) cosmology project. SPIDERS will obtain optical spectroscopy for large numbers of X-ray-selected active galactic nuclei (AGN) and galaxy cluster members detected in wide-area eROSITA, XMM-Newton and ROSAT surveys. We describe the methods used to choose spectroscopic targets for two sub-programmes of SPIDERS X-ray selected AGN candidates detected in the ROSAT All Sky and the XMM-Newton Slew surveys. We have exploited a Bayesian cross-matching algorithm, guided by priors based on mid-IR colour-magnitude information from the Wide-field Infrared Survey Explorer survey, to select the most probable optical counterpart to each X-ray detection. We empirically demonstrate the high fidelity of our counterpart selection method using a reference sample of bright well-localized X-ray sources collated from XMM-Newton, Chandra and Swift-XRT serendipitous catalogues, and also by examining blank-sky locations. We describe the down-selection steps which resulted in the final set of SPIDERS-AGN targets put forward for spectroscopy within the eBOSS/TDSS/SPIDERS survey, and present catalogues of these targets. We also present catalogues of ˜12 000 ROSAT and ˜1500 XMM-Newton Slew survey sources that have existing optical spectroscopy from SDSS-DR12, including the results of our visual inspections. On completion of the SPIDERS programme, we expect to have collected homogeneous spectroscopic redshift information over a footprint of ˜7500 deg2 for >85 per cent of the ROSAT and XMM-Newton Slew survey sources having optical counterparts in the magnitude range 17 < r < 22.5, producing a large and highly complete sample of bright X-ray-selected AGN suitable for statistical studies of AGN evolution and clustering.

  20. Evaluation of the diffusion coefficient for controlled release of oxytetracycline from alginate/chitosan/poly(ethylene glycol) microbeads in simulated gastrointestinal environments.

    PubMed

    Cruz, Maria C Pinto; Ravagnani, Sergio P; Brogna, Fabio M S; Campana, Sérgio P; Triviño, Galo Cardenas; Lisboa, Antonio C Luz; Mei, Lucia H Innocentini

    2004-12-01

    Diffusion studies of OTC (oxytetracycline) entrapped in microbeads of calcium alginate, calcium alginate coacervated with chitosan (of high, medium and low viscosity) and calcium alginate coacervated with chitosan of low viscosity, covered with PEG [poly(ethylene glycol) of molecular mass 2, 4.6 and 10 kDa], were carried out at 37+/-0.5 degrees C, in pH 7.4 and pH 1.2 buffer solutions - conditions similar to those found in the gastrointestinal system. The diffusion coefficient, or diffusivity (D), of OTC was calculated by equations provided by Crank [(1975) Mathematics in Diffusion, p. 85, Clarendon Press, Oxford] for diffusion, which follows Fick's [(1855) Ann. Physik (Leipzig) 170, 59] second law, considering the diffusion from the inner parts to the surface of the microbeads. The least-squares and the Newton-Raphson [Carnahan, Luther and Wilkes (1969) Applied Numerical Methods, p. 319, John Wiley & Sons, New York] methods were used to obtain the diffusion coefficients. The microbead swelling at pH 7.4 and OTC diffusion is classically Fickian, suggesting that the OTC transport, in this case, is controlled by the exchange rates of free water and relaxation of calcium alginate chains. In the case of acidic media, it was observed that the phenomenon did not follow Fick's law, owing, probably, to the high solubility of the OTC in this environment. It was possible to modulate the release rate of OTC in several types of microbeads. The presence of cracks formed during the process of drying the microbeads was observed by scanning electron microscopy.

  1. A three-dimensional multiphase flow model for assessing NAPL contamination in porous and fractured media, 1. Formulation

    NASA Astrophysics Data System (ADS)

    Huyakorn, P. S.; Panday, S.; Wu, Y. S.

    1994-06-01

    A three-dimensional, three-phase numerical model is presented for simulating the movement of non-aqueous-phase liquids (NAPLs) through porous and fractured media. The model is designed for practical application to a wide variety of contamination and remediation scenarios involving light or dense NAPLs in heterogeneous subsurface systems. The model formulation is first derived for three-phase flow of water, NAPL and air (or vapor) in porous media. The formulation is then extended to handle fractured systems using the dual-porosity and discrete-fracture modeling approaches. The model accommodates a wide variety of boundary conditions, including withdrawal and injection well conditions which are treated rigorously using fully implicit schemes. The three-phase formulation collapses to its simpler forms when air-phase dynamics are neglected, capillary effects are neglected, or two-phase air-liquid or liquid-liquid systems with one or two active phases are considered. A Galerkin procedure with upstream weighting of fluid mobilities, storage matrix lumping, and fully implicit treatment of nonlinear coefficients and well conditions is used. A variety of nodal connectivity schemes leading to finite-difference, finite-element and hybrid spatial approximations in three dimensions are incorporated in the formulation. Selection of primary variables and evaluation of the terms of the Jacobian matrix for the Newton-Raphson linearized equations are discussed. The various nodal lattice options, and their significance for computational time and memory requirements with regard to the block-Orthomin solution scheme, are noted. Aggressive time-stepping schemes and under-relaxation formulas implemented in the code further alleviate the computational burden.

  2. Fully Coupled Nonlinear Fluid Flow and Poroelasticity in Arbitrarily Fractured Porous Media: A Hybrid-Dimensional Computational Model

    NASA Astrophysics Data System (ADS)

    Jin, L.; Zoback, M. D.

    2017-10-01

    We formulate the problem of fully coupled transient fluid flow and quasi-static poroelasticity in arbitrarily fractured, deformable porous media saturated with a single-phase compressible fluid. The fractures we consider are hydraulically highly conductive, allowing discontinuous fluid flux across them; mechanically, they act as finite-thickness shear deformation zones prior to failure (i.e., nonslipping and nonpropagating), leading to "apparent discontinuity" in strain and stress across them. Local nonlinearity arising from pressure-dependent permeability of fractures is also included. Taking advantage of typically high aspect ratio of a fracture, we do not resolve transversal variations and instead assume uniform flow velocity and simple shear strain within each fracture, rendering the coupled problem numerically more tractable. Fractures are discretized as lower dimensional zero-thickness elements tangentially conforming to unstructured matrix elements. A hybrid-dimensional, equal-low-order, two-field mixed finite element method is developed, which is free from stability issues for a drained coupled system. The fully implicit backward Euler scheme is employed for advancing the fully coupled solution in time, and the Newton-Raphson scheme is implemented for linearization. We show that the fully discretized system retains a canonical form of a fracture-free poromechanical problem; the effect of fractures is translated to the modification of some existing terms as well as the addition of several terms to the capacity, conductivity, and stiffness matrices therefore allowing the development of independent subroutines for treating fractures within a standard computational framework. Our computational model provides more realistic inputs for some fracture-dominated poromechanical problems like fluid-induced seismicity.
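
    The time-stepping and linearization pattern described above (a fully implicit backward Euler step whose nonlinear residual is driven to zero by Newton-Raphson) can be illustrated on a small model problem. The sketch below is generic and uses a toy nonlinear right-hand side, not the coupled hybrid-dimensional poroelastic system of the paper.

      # Generic backward Euler + Newton-Raphson sketch (toy system, illustrative only).
      import numpy as np

      def f(u):                                    # toy nonlinear right-hand side
          return -u**3 + np.sin(u)

      def jac_f(u):                                # analytical Jacobian of f (diagonal here)
          return np.diag(-3.0 * u**2 + np.cos(u))

      def backward_euler_step(u_n, dt, tol=1e-10, max_iter=20):
          u = u_n.copy()                           # initial Newton guess: previous time level
          for _ in range(max_iter):
              residual = u - u_n - dt * f(u)       # implicit residual R(u) = 0
              if np.linalg.norm(residual) < tol:
                  break
              J = np.eye(len(u)) - dt * jac_f(u)   # Newton-Raphson linearization
              u = u - np.linalg.solve(J, residual)
          return u

      u = np.array([1.0, -0.5, 2.0])
      for _ in range(10):                          # advance ten implicit time steps
          u = backward_euler_step(u, dt=0.1)
      print(u)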

  3. Advanced computational techniques for incompressible/compressible fluid-structure interactions

    NASA Astrophysics Data System (ADS)

    Kumar, Vinod

    2005-07-01

    Fluid-Structure Interaction (FSI) problems are of great importance to many fields of engineering and pose tremendous challenges to numerical analysts. This thesis addresses some of the hurdles faced for both 2D and 3D real-life time-dependent FSI problems, with particular emphasis on parachute systems. The techniques developed here would help improve the design of parachutes and are of direct relevance to several other FSI problems. The fluid system is solved using the Deforming-Spatial-Domain/Stabilized Space-Time (DSD/SST) finite element formulation for the Navier-Stokes equations of incompressible and compressible flows. The structural dynamics solver is based on a total Lagrangian finite element formulation. The Newton-Raphson method is employed to linearize the otherwise nonlinear system resulting from the fluid and structure formulations. The fluid and structural systems are solved in a decoupled fashion at each nonlinear iteration. While rigorous coupling methods are desirable for FSI simulations, the decoupled solution techniques provide sufficient convergence in the time-dependent problems considered here. In this thesis, common problems in the FSI simulations of parachutes are discussed and possible remedies for a few of them are presented. Further, the effects of the porosity model on the aerodynamic forces of round parachutes are analyzed. Techniques for solving compressible FSI problems are also discussed. Subsequently, a better stabilization technique is proposed to efficiently capture and accurately predict the shocks in supersonic flows. The numerical examples simulated here require high performance computing. Therefore, numerical tools using distributed memory supercomputers with message passing interface (MPI) libraries were developed.

  4. Development of an hp-version finite element method for computational optimal control

    NASA Technical Reports Server (NTRS)

    Hodges, Dewey H.; Warner, Michael S.

    1993-01-01

    The purpose of this research effort was to begin the study of the application of hp-version finite elements to the numerical solution of optimal control problems. Under NAG-939, the hybrid MACSYMA/FORTRAN code GENCODE was developed which utilized h-version finite elements to successfully approximate solutions to a wide class of optimal control problems. In that code the means for improvement of the solution was the refinement of the time-discretization mesh. With the extension to hp-version finite elements, the degrees of freedom include both nodal values and extra interior values associated with the unknown states, co-states, and controls, the number of which depends on the order of the shape functions in each element. One possible drawback is the increased computational effort within each element required in implementing hp-version finite elements. We are trying to determine whether this computational effort is sufficiently offset by the reduction in the number of time elements used and improved Newton-Raphson convergence so as to be useful in solving optimal control problems in real time. Because certain of the element interior unknowns can be eliminated at the element level by solving a small set of nonlinear algebraic equations in which the nodal values are taken as given, the scheme may turn out to be especially powerful in a parallel computing environment. A different processor could be assigned to each element. The number of processors, strictly speaking, is not required to be any larger than the number of sub-regions which are free of discontinuities of any kind.

  5. A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators

    NASA Technical Reports Server (NTRS)

    Snyder, David B.; Wolford, David S.

    2012-01-01

    NASA Glenn Research Center has been using an in-house designed X25-based multi-source solar simulator since 2003. The simulator is set up for triple-junction solar cells prior to measurements by adjusting the three sources to produce the correct short-circuit current, Isc, in each of three AM0-calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured Isc for small source changes on each sub-cell. A matrix, A, is produced. This is normalized to unit changes in the sources so that A·Δs = ΔIsc. This matrix can now be inverted and used with the known Isc differences from the AM0-calibrated values to indicate changes in the source settings, Δs = A⁻¹·ΔIsc. This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated Isc values. Even though the source lamps may degrade over time, the initial matrix evaluation is not performed each time, since the measurement matrix needs to be only approximate. Because an iterative approach is used, the method will still continue to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increases, this method should remain applicable.
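
    The adjustment scheme described above can be sketched in a few lines: measure once how each source perturbs each sub-cell's Isc, invert that sensitivity matrix, and update all source settings together at every iteration. The 3x3 sensitivity values and the simulated lamp response below are illustrative placeholders, not Glenn Research Center hardware data.

      # Hedged sketch of the matrix-based simulator adjustment (illustrative numbers only).
      import numpy as np

      A = np.array([[0.90, 0.08, 0.02],            # d(Isc_i)/d(source_j) per unit source change
                    [0.05, 0.85, 0.10],
                    [0.03, 0.07, 0.88]])
      isc_target = np.array([1.00, 0.95, 1.05])    # AM0-calibrated sub-cell currents

      def measure_isc(settings):
          # Stand-in for the real measurement; a mildly nonlinear lamp response
          return A @ settings + 0.02 * settings**2

      settings = np.array([0.8, 0.9, 1.0])         # initial source settings
      A_inv = np.linalg.inv(A)                     # evaluated once, reused every step
      for step in range(6):                        # typically four to six steps suffice
          delta_isc = isc_target - measure_isc(settings)
          settings = settings + A_inv @ delta_isc  # adjust all sources simultaneously
          print(step, abs(delta_isc).max())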

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bose, Sownak; Li, Baojiu; He, Jian-hua

    We describe and demonstrate the potential of a new and very efficient method for simulating certain classes of modified gravity theories, such as the widely studied f(R) gravity models. High resolution simulations for such models are currently very slow due to the highly nonlinear partial differential equation that needs to be solved exactly to predict the modified gravitational force. This nonlinearity is partly inherent, but is also exacerbated by the specific numerical algorithm used, which employs a variable redefinition to prevent numerical instabilities. The standard Newton-Gauss-Seidel iterative method used to tackle this problem has a poor convergence rate. Our new method not only avoids this, but also allows the discretised equation to be written in a form that is analytically solvable. We show that this new method greatly improves the performance and efficiency of f(R) simulations. For example, a test simulation with 512³ particles in a box of size 512 Mpc/h is now 5 times faster than before, while a Millennium-resolution simulation for f(R) gravity is estimated to be more than 20 times faster than with the old method. Our new implementation will be particularly useful for running very high resolution, large-sized simulations which, to date, are only possible for the standard model, and also makes it feasible to run large numbers of lower resolution simulations for covariance analyses. We hope that the method will bring us to a new era for precision cosmological tests of gravity.

  7. Modified Interior Distance Functions (Theory and Methods)

    NASA Technical Reports Server (NTRS)

    Polyak, Roman A.

    1995-01-01

    In this paper we introduced and developed the theory of Modified Interior Distance Functions (MIDF's). The MIDF is a Classical Lagrangian (CL) for a constrained optimization problem which is equivalent to the initial one and can be obtained from the latter by a monotone transformation of both the objective function and the constraints. In contrast to the Interior Distance Functions (IDF's), which played a fundamental role in Interior Point Methods (IPM's), the MIDF's are defined on an extended feasible set and, along with the center, have two extra tools which control the computational process: the barrier parameter and the vector of Lagrange multipliers. The extra tools allow the MIDF's to acquire the very important properties of Augmented Lagrangeans. One can consider the MIDF's as Interior Augmented Lagrangeans. This makes MIDF's similar in spirit to Modified Barrier Functions (MBF's), although there is a fundamental difference between them both in theory and methods. Based on MIDF theory, Modified Center Methods (MCM's) have been developed and analyzed. The MCM's find an unconstrained minimizer in primal space and update the Lagrange multipliers, while both the center and the barrier parameter can be fixed or updated at each step. The MCM's convergence was investigated, and their rate of convergence was estimated. The extension of the feasible set and the special role of the Lagrange multipliers allow the development of MCM's which produce, in the case of nondegenerate constrained optimization, primal and dual sequences that converge to the primal-dual solution with a linear rate, even when both the center and the barrier parameter are fixed. Moreover, every Lagrange multipliers update shrinks the distance to the primal-dual solution by a factor 0 < γ < 1, which can be made as small as one wants by choosing a fixed interior point as a 'center' and a fixed but large enough barrier parameter. The numerical realization of the MCM leads to the Newton MCM (NMCM). The approximation for the primal minimizer is found by Newton's method, followed by the Lagrange multipliers update. Due to the MCM convergence when both the center and the barrier parameter are fixed, the condition of the MIDF Hessian and the neighborhood of the primal minimizer where Newton's method is 'well' defined remain stable. This contributes to both the complexity and the numerical stability of the NMCM.

  8. Kepler unbound: Some elegant curiosities of classical mechanics

    NASA Astrophysics Data System (ADS)

    MacKay, Niall J.; Salour, Sam

    2015-01-01

    We explain two exotic systems of classical mechanics: the McIntosh-Cisneros-Zwanziger ("MICZ") Kepler system, of motion of a charged particle in the presence of a modified dyon; and Gibbons and Manton's description of the slow motion of well-separated solitonic ("BPS") monopoles using Taub-NUT space. Each system is characterized by the conservation of a Laplace-Runge-Lenz vector, and we use elementary vector techniques to show that each obeys a subtly different variation on Kepler's three laws for the Newton-Coulomb two-body problem, including a new modified Kepler third law for BPS monopoles.

  9. The RNA Newton polytope and learnability of energy parameters.

    PubMed

    Forouzmand, Elmirasadat; Chitsaz, Hamidreza

    2013-07-01

    Computational RNA structure prediction is a mature, important problem that has received a new wave of attention with the discovery of regulatory non-coding RNAs and the advent of high-throughput transcriptome sequencing. Despite nearly two score years of research on RNA secondary structure and RNA-RNA interaction prediction, the accuracy of the state-of-the-art algorithms is still far from satisfactory. So far, researchers have proposed increasingly complex energy models and improved parameter estimation methods, experimental and/or computational, in anticipation of endowing their methods with enough power to solve the problem. The output has disappointingly been only modest improvements, not matching the expectations. Even recent massively featured machine learning approaches were not able to break the barrier. Why is that? The first step toward high-accuracy structure prediction is to pick an energy model that is inherently capable of predicting each and every one of the known structures to date. In this article, we introduce the notion of learnability of the parameters of an energy model as a measure of such an inherent capability. We say that the parameters of an energy model are learnable iff there exists at least one set of such parameters that renders every known RNA structure to date the minimum free energy structure. We derive a necessary condition for the learnability and give a dynamic programming algorithm to assess it. Our algorithm computes the convex hull of the feature vectors of all feasible structures in the ensemble of a given input sequence. Interestingly, that convex hull coincides with the Newton polytope of the partition function as a polynomial in energy parameters. To the best of our knowledge, this is the first approach toward computing the RNA Newton polytope and a systematic assessment of the inherent capabilities of an energy model. The worst case complexity of our algorithm is exponential in the number of features. However, dimensionality reduction techniques can provide approximate solutions to avoid the curse of dimensionality. We demonstrated the application of our theory to a simple energy model consisting of a weighted count of A-U, C-G and G-U base pairs. Our results show that this simple energy model satisfies the necessary condition for more than half of the input unpseudoknotted sequence-structure pairs (55%) chosen from the RNA STRAND v2.0 database and severely violates the condition for ~ 13%, which provides a set of hard cases that require further investigation. From 1350 RNA strands, the observed 3D feature vector for 749 strands is on the surface of the computed polytope. For 289 RNA strands, the observed feature vector is not on the boundary of the polytope but its distance from the boundary is not more than one. A distance of one essentially means one base pair difference between the observed structure and the closest point on the boundary of the polytope, which need not be the feature vector of a structure. For 171 sequences, this distance is larger than two, and for only 11 sequences, this distance is larger than five. The source code is available at http://compbio.cs.wayne.edu/software/rna-newton-polytope.
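
    The geometric test described above can be sketched with off-the-shelf tools: build the convex hull of the feature vectors and check how far an observed feature vector lies from the hull's boundary. The random integer feature vectors below merely stand in for the (A-U, C-G, G-U) base-pair counts of feasible structures; this is not the paper's dynamic programming algorithm.

      # Hedged sketch: convex hull of feature vectors and a boundary test (illustrative data).
      import numpy as np
      from scipy.spatial import ConvexHull

      rng = np.random.default_rng(3)
      features = rng.integers(0, 30, size=(200, 3)).astype(float)   # stand-in feature vectors
      hull = ConvexHull(features)

      observed = features[hull.vertices[0]]        # take a hull vertex as the "observed" vector
      # hull.equations stores outward facet normals n and offsets b with n.x + b <= 0 inside
      signed_dist = hull.equations[:, :3] @ observed + hull.equations[:, 3]
      print("max signed distance to a facet:", signed_dist.max())   # ~0 means on the boundary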

  10. A Gauss-Newton full-waveform inversion in PML-truncated domains using scalar probing waves

    NASA Astrophysics Data System (ADS)

    Pakravan, Alireza; Kang, Jun Won; Newtson, Craig M.

    2017-12-01

    This study considers the characterization of subsurface shear wave velocity profiles in semi-infinite media using scalar waves. Using surficial responses caused by probing waves, a reconstruction of the material profile is sought using a Gauss-Newton full-waveform inversion method in a two-dimensional domain truncated by perfectly matched layer (PML) wave-absorbing boundaries. The PML is introduced to limit the semi-infinite extent of the half-space and to prevent reflections from the truncated boundaries. A hybrid unsplit-field PML is formulated in the inversion framework to enable more efficient wave simulations than with a fully mixed PML. The full-waveform inversion method is based on a constrained optimization framework that is implemented using Karush-Kuhn-Tucker (KKT) optimality conditions to minimize the objective functional augmented by PML-endowed wave equations via Lagrange multipliers. The KKT conditions consist of state, adjoint, and control problems, and are solved iteratively to update the shear wave velocity profile of the PML-truncated domain. Numerical examples show that the developed Gauss-Newton inversion method is accurate enough and more efficient than another inversion method. The algorithm's performance is demonstrated by the numerical examples including the case of noisy measurement responses and the case of reduced number of sources and receivers.
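
    The core Gauss-Newton update used in inversions of this kind can be shown on a toy nonlinear least-squares problem: linearize the residual around the current model and solve the normal equations for the model update. The exponential forward model below is an illustrative stand-in; it does not include the wave-equation solver, PML or adjoint (KKT) machinery of the paper.

      # Generic Gauss-Newton sketch on a toy forward model (illustrative only).
      import numpy as np

      def forward(m, t):
          a, b = m
          return a * np.exp(-b * t)                # toy "forward model"

      def jacobian(m, t):
          a, b = m
          return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

      t = np.linspace(0.0, 4.0, 40)
      m_true = np.array([2.0, 0.7])
      d_obs = forward(m_true, t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)

      m = np.array([1.5, 0.5])                     # initial model estimate
      for _ in range(10):
          r = d_obs - forward(m, t)                # data residual
          J = jacobian(m, t)
          dm = np.linalg.solve(J.T @ J, J.T @ r)   # Gauss-Newton normal equations
          m = m + dm
      print(m)                                     # should approach m_true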

  11. Geomagnetic matching navigation algorithm based on robust estimation

    NASA Astrophysics Data System (ADS)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    Outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and badly disrupt its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and the principle of robust estimation behind it is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is then converted into the solution of nonlinear equations. Finally, Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is decreased to 7.75% compared with the conventional mean square difference (MSD) algorithm, and to 18.39% compared with the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017°, while the other two algorithms fail to match when the outlier is 400 nT.

  12. Angular distributions for the inelastic scattering of NO(X²Π) with O2(X³Σg⁻)

    NASA Astrophysics Data System (ADS)

    Brouard, M.; Gordon, S. D. S.; Nichols, B.; Squires, E.; Walpole, V.; Aoiz, F. J.; Stolte, S.

    2017-05-01

    The inelastic scattering of NO(X²Π) by O2(X³Σg⁻) was studied at a mean collision energy of 550 cm⁻¹ using velocity-map ion imaging. The initial quantum state of the NO(X²Π, v = 0, j = 0.5, Ω = 0.5, ε = −1, f) molecule was selected using a hexapole electric field, and specific Λ-doublet levels of scattered NO were probed using (1 + 1′) resonantly enhanced multiphoton ionization. A modified "onion-peeling" algorithm was employed to extract angular scattering information from the series of "pancaked," nested Newton spheres arising as a consequence of the rotational excitation of the molecular oxygen collision partner. The extracted differential cross sections for NO(X) f → f and f → e Λ-doublet resolved, spin-orbit conserving transitions, partially resolved in the oxygen co-product rotational quantum state, are reported, along with the O2 fragment pair-correlated rotational state population. The inelastic scattering of NO with O2 is shown to share many similarities with the scattering of NO(X) with the rare gases. However, subtle differences in the angular distributions between the two collision partners are observed.

  13. Numerical modeling of Gaussian beam propagation and diffraction in inhomogeneous media based on the complex eikonal equation

    NASA Astrophysics Data System (ADS)

    Huang, Xingguo; Sun, Hui

    2018-05-01

    The Gaussian beam is an important complex geometrical-optics technique for modeling seismic wave propagation and diffraction in the subsurface with complex geological structure. Current methods for Gaussian beam modeling rely on dynamic ray tracing and evanescent wave tracking. However, the dynamic ray tracing method is based on the paraxial ray approximation, and the evanescent wave tracking method cannot describe strongly evanescent fields. This leads to inaccuracy of the computed wave fields in regions with strongly inhomogeneous media. To address this problem, we compute Gaussian beam wave fields using the complex phase obtained by directly solving the complex eikonal equation. In this method, the fast marching method, which is widely used for phase calculation, is combined with a Gauss-Newton optimization algorithm to obtain the complex phase at the regular grid points. The main theoretical challenge in combining this method with Gaussian beam modeling is to handle the irregular boundary near the curved central ray. To cope with this challenge, we present a non-uniform finite difference operator and a modified fast marching method. The numerical results confirm the proposed approach.

  14. Hybrid adaptive ascent flight control for a flexible launch vehicle

    NASA Astrophysics Data System (ADS)

    Lefevre, Brian D.

    For the purpose of maintaining dynamic stability and improving guidance command tracking performance under off-nominal flight conditions, a hybrid adaptive control scheme is selected and modified for use as a launch vehicle flight controller. This architecture merges a model reference adaptive approach, which utilizes both direct and indirect adaptive elements, with a classical dynamic inversion controller. This structure is chosen for a number of reasons: the properties of the reference model can be easily adjusted to tune the desired handling qualities of the spacecraft, the indirect adaptive element (which consists of an online parameter identification algorithm) continually refines the estimates of the evolving characteristic parameters utilized in the dynamic inversion, and the direct adaptive element (which consists of a neural network) augments the linear feedback signal to compensate for any nonlinearities in the vehicle dynamics. The combination of these elements enables the control system to retain the nonlinear capabilities of an adaptive network while relying heavily on the linear portion of the feedback signal to dictate the dynamic response under most operating conditions. To begin the analysis, the ascent dynamics of a launch vehicle with a single 1st stage rocket motor (typical of the Ares 1 spacecraft) are characterized. The dynamics are then linearized with assumptions that are appropriate for a launch vehicle, so that the resulting equations may be inverted by the flight controller in order to compute the control signals necessary to generate the desired response from the vehicle. Next, the development of the hybrid adaptive launch vehicle ascent flight control architecture is discussed in detail. Alterations of the generic hybrid adaptive control architecture include the incorporation of a command conversion operation which transforms guidance input from quaternion form (as provided by NASA) to the body-fixed angular rate commands needed by the hybrid adaptive flight controller, development of a Newton's method based online parameter update that is modified to include a step size which regulates the rate of change in the parameter estimates, comparison of the modified Newton's method and recursive least squares online parameter update algorithms, modification of the neural network's input structure to accommodate for the nature of the nonlinearities present in a launch vehicle's ascent flight, examination of both tracking error based and modeling error based neural network weight update laws, and integration of feedback filters for the purpose of preventing harmful interaction between the flight control system and flexible structural modes. To validate the hybrid adaptive controller, a high-fidelity Ares I ascent flight simulator and a classical gain-scheduled proportional-integral-derivative (PID) ascent flight controller were obtained from the NASA Marshall Space Flight Center. The classical PID flight controller is used as a benchmark when analyzing the performance of the hybrid adaptive flight controller. Simulations are conducted which model both nominal and off-nominal flight conditions with structural flexibility of the vehicle either enabled or disabled. First, rigid body ascent simulations are performed with the hybrid adaptive controller under nominal flight conditions for the purpose of selecting the update laws which drive the indirect and direct adaptive components. 
With the neural network disabled, the results revealed that the recursive least squares online parameter update caused high frequency oscillations to appear in the engine gimbal commands. This is highly undesirable for long and slender launch vehicles, such as the Ares I, because such oscillation of the rocket nozzle could excite unstable structural flex modes. In contrast, the modified Newton's method online parameter update produced smooth control signals and was thus selected for use in the hybrid adaptive launch vehicle flight controller. In the simulations where the online parameter identification algorithm was disabled, the tracking error based neural network weight update law forced the network's output to diverge despite repeated reductions of the adaptive learning rate. As a result, the modeling error based neural network weight update law (which generated bounded signals) is utilized by the hybrid adaptive controller in all subsequent simulations. Comparing the PID and hybrid adaptive flight controllers under nominal flight conditions in rigid body ascent simulations showed that their tracking error magnitudes are similar for a period of time during the middle of the ascent phase. Though the PID controller performs better for a short interval around the 20 second mark, the hybrid adaptive controller performs far better from roughly 70 to 120 seconds. Elevating the aerodynamic loads by increasing the force and moment coefficients produced results very similar to the nominal case. However, applying a 5% or 10% thrust reduction to the first stage rocket motor causes the tracking error magnitude observed by the PID controller to be significantly elevated and diverge rapidly as the simulation concludes. In contrast, the hybrid adaptive controller steadily maintains smaller errors (often less than 50% of the corresponding PID value). Under the same sets of flight conditions with flexibility enabled, the results exhibit similar trends with the hybrid adaptive controller performing even better in each case. Again, the reduction of the first stage rocket motor's thrust clearly illustrated the superior robustness of the hybrid adaptive flight controller.

  15. Iterative Methods for Solving Nonlinear Parabolic Problem in Pension Saving Management

    NASA Astrophysics Data System (ADS)

    Koleva, M. N.

    2011-11-01

    In this work we consider a nonlinear parabolic equation, obtained from a Riccati-like transformation of the Hamilton-Jacobi-Bellman equation, arising in pension saving management. We discuss two numerical iterative methods for solving the model problem—the fully implicit Picard method and a mixed Picard-Newton method, which preserves the parabolic characteristics of the differential problem. Numerical experiments comparing the accuracy and effectiveness of the algorithms are discussed. Finally, observations are given.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron

    Moment-based acceleration via the development of “high-order, low-order” (HO-LO) algorithms has provided substantial accuracy and efficiency enhancements for solutions of the nonlinear, thermal radiative transfer equations by CCS-2 and T-3 staff members. Accuracy enhancements over traditional, linearized methods are obtained by solving a nonlinear, timeimplicit HO-LO system via a Jacobian-free Newton Krylov procedure. This also prevents the appearance of non-physical maximum principle violations (“temperature spikes”) associated with linearization. Efficiency enhancements are obtained in part by removing “effective scattering” from the linearized system. In this highlight, we summarize recent work in which we formally extended the HO-LO radiation algorithm to includemore » operator-split radiation-hydrodynamics.« less

  17. A modified approach combining FNEA and watershed algorithms for segmenting remotely-sensed optical images

    NASA Astrophysics Data System (ADS)

    Liu, Likun

    2018-01-01

    In the field of remote sensing image processing, image segmentation is a preliminary step for later analysis, semi-automatic human interpretation, and fully automatic machine recognition and learning. Since 2000, object-oriented remote sensing image processing techniques and the thinking behind them have prevailed. The core of this approach is the Fractal Net Evolution Approach (FNEA) multi-scale segmentation algorithm. This paper focuses on the study and improvement of that algorithm: it analyzes existing segmentation algorithms and selects the watershed algorithm as the optimal initialization. The algorithm is then modified by adjusting an area parameter and further combining the area parameter with a heterogeneity parameter. Several experiments are then carried out which show that the modified FNEA algorithm achieves better segmentation results than the traditional pixel-based method (an FCM algorithm based on neighborhood information) and the plain combination of FNEA and watershed.

  18. Terrain Correction on the moving equal area cylindrical map projection of the surface of a reference ellipsoid

    NASA Astrophysics Data System (ADS)

    Ardalan, A.; Safari, A.; Grafarend, E.

    2003-04-01

    An operational algorithm for computing the ellipsoidal terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates on the cylindrical equal-area map projection surface of a reference ellipsoid, has been developed. As the first step, the mapping of points on the surface of a reference ellipsoid onto the cylindrical equal-area map projection of a cylinder tangent to a point on the ellipsoid is closely studied and the map projection formulas are derived. Ellipsoidal mass elements of various sizes on the surface of the reference ellipsoid are considered, and the gravitational potential and the vector of gravitational intensity of these mass elements are computed via the solution of the Newton integral in terms of ellipsoidal coordinates. The geographical cross-section areas of the selected ellipsoidal mass elements are transferred into the cylindrical equal-area map projection, and based on the transformed area elements, Cartesian mass elements with the same height as that of the ellipsoidal mass elements are constructed. Using the closed-form solution of the Newton integral in terms of Cartesian coordinates, the potential of the Cartesian mass elements is computed and compared with the same results based on the application of the ellipsoidal Newton integral over the ellipsoidal mass elements. The results of the numerical computations show that the difference between the computed gravitational potential of the ellipsoidal mass elements and of the Cartesian mass elements in the cylindrical equal-area map projection is of the order of 1.6 × 10⁻⁸ m²/s² for a mass element with a cross-section size of 10 km × 10 km and a height of 1000 m. For a 1 km × 1 km mass element with the same height, this difference is less than 1.5 × 10⁻⁴ m²/s². The results of the numerical computations indicate that a new method for computing the terrain correction, based on the closed-form solution of the Newton integral in terms of Cartesian coordinates and with the accuracy of the ellipsoidal terrain correction, has been achieved. In this way one can enjoy the simplicity of the solution of the Newton integral in terms of Cartesian coordinates and at the same time the accuracy of the ellipsoidal terrain correction, which is needed for the modern theory of geoid computations.

  19. Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates

    NASA Astrophysics Data System (ADS)

    Andrea, Caliciotti; Giovanni, Fasano; Massimo, Roma

    2016-10-01

    This paper reports two proposals of possible preconditioners for the Nonlinear Conjugate Gradient (NCG) method in large-scale unconstrained optimization. On one hand, the common idea of our preconditioners is inspired by L-BFGS quasi-Newton updates; on the other hand, we aim at explicitly approximating, in some sense, the inverse of the Hessian matrix. Since we deal with large-scale optimization problems, we propose matrix-free approaches where the preconditioners are built using symmetric low-rank updating formulae. Our distinctive new contribution lies in using information on the objective function collected as a by-product of the NCG at previous iterations. Broadly speaking, our first approach exploits the secant equation in order to impose interpolation conditions on the objective function. In the second proposal we adopt an ad hoc modified-secant approach, in order to possibly guarantee some additional theoretical properties.
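
    A minimal sketch of the underlying idea, assuming the standard L-BFGS machinery: (s, y) pairs collected at previous iterations define an implicit approximation of the inverse Hessian, which can be applied to a gradient through the two-loop recursion and used as a preconditioner. The stored pairs and the gradient below are toy placeholders, not the paper's specific low-rank formulae.

      # L-BFGS two-loop recursion used to precondition a gradient (toy data, illustrative only).
      import numpy as np

      def lbfgs_two_loop(grad, s_list, y_list):
          """Apply the implicit inverse-Hessian approximation defined by (s, y) pairs to grad."""
          q = grad.copy()
          alphas = []
          for s, y in zip(reversed(s_list), reversed(y_list)):   # newest pair first
              rho = 1.0 / (y @ s)
              alpha = rho * (s @ q)
              q -= alpha * y
              alphas.append((alpha, rho, s, y))
          if s_list:                                             # initial Hessian scaling
              s, y = s_list[-1], y_list[-1]
              q *= (s @ y) / (y @ y)
          for alpha, rho, s, y in reversed(alphas):              # oldest pair first
              beta = rho * (y @ q)
              q += (alpha - beta) * s
          return q                                               # preconditioned direction

      H = np.array([[4.0, 1.0], [1.0, 3.0]])                     # toy quadratic: y = H s exactly
      s_list = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
      y_list = [H @ s for s in s_list]
      g = np.array([2.0, -1.0])
      print(lbfgs_two_loop(g, s_list, y_list))                   # approximates H^{-1} g
      print(np.linalg.solve(H, g))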

  20. On the method of least squares. II. [for calculation of covariance matrices and optimization algorithms

    NASA Technical Reports Server (NTRS)

    Jefferys, W. H.

    1981-01-01

    A least squares method proposed previously for solving a general class of problems is expanded in two ways. First, covariance matrices related to the solution are calculated and their interpretation is given. Second, improved methods of solving the normal equations related to those of Marquardt (1963) and Fletcher and Powell (1963) are developed for this approach. These methods may converge in cases where Newton's method diverges or converges slowly.

  1. Application of artificial neural network to predict clay sensitivity in a high landslide prone area using CPTu data- A case study in Southwest of Sweden

    NASA Astrophysics Data System (ADS)

    Shahri, Abbas; Mousavinaseri, Mahsasadat; Naderi, Shima; Espersson, Maria

    2015-04-01

    The application of Artificial Neural Networks (ANNs) in many areas of engineering, in particular to geotechnical engineering problems such as site characterization, has demonstrated some degree of success. The present paper aims to evaluate the feasibility of several types of ANN models for predicting the clay sensitivity of soft clays from piezocone penetration test (CPTu) data. To this end, a research database of CPTu data from 70 test points around the Göta River near Lilli Edet in the southwest of Sweden, an area highly prone to landslides, was collected and used as input for the ANNs. For training, the quick propagation, conjugate gradient descent, quasi-Newton, limited-memory quasi-Newton and Levenberg-Marquardt algorithms were developed, tested and trained using the CPTu data to provide a comparison between the results of the field investigation and the ANN models in estimating clay sensitivity. The clay sensitivity parameter is used in this study because of its relation to landslides in Sweden. A special highly sensitive clay, namely quick clay, is considered the main cause of the landslides experienced in Sweden, as it has high sensitivity and is prone to sliding. The training and testing program was started with a 3-2-1 ANN architecture. By trying several architectures and changing the hidden layers in order to obtain a higher output resolution, the 3-4-4-3-1 ANN architecture was adopted in this study. The tests showed that increasing the number of hidden layers up to four can improve the results, and that the 3-4-4-3-1 ANN architecture gives a reliable and reasonable prediction of clay sensitivity. The obtained results showed that the conjugate gradient descent algorithm, with R² = 0.897, has the best performance among the tested algorithms. Keywords: clay sensitivity, landslide, Artificial Neural Network
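
    For illustration only, the 3-4-4-3-1 structure mentioned above (three CPTu-derived inputs, hidden layers of 4, 4 and 3 neurons, one clay-sensitivity output) can be sketched with a standard library. The synthetic data and the quasi-Newton (L-BFGS) solver below are stand-ins; they are not the study's field data or its exact training algorithms.

      # Hedged sketch of a 3-4-4-3-1 feedforward network (synthetic data, illustrative only).
      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(42)
      X = rng.uniform(size=(70, 3))                              # 70 test points, 3 CPTu features
      y = 5 + 20 * X[:, 0] - 10 * X[:, 1] * X[:, 2] + rng.normal(0, 0.5, 70)  # synthetic sensitivity

      X_scaled = StandardScaler().fit_transform(X)
      model = MLPRegressor(hidden_layer_sizes=(4, 4, 3),         # the 3-4-4-3-1 structure
                           solver="lbfgs", max_iter=5000, random_state=0)
      model.fit(X_scaled, y)
      print("R^2 on training data:", model.score(X_scaled, y))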

  2. Evidence for a decay of the faint flaring rate of Sgr A* from 2013 Aug., 13 months before a rise of the bright one

    NASA Astrophysics Data System (ADS)

    Mossoux, E.; Grosso, N.

    2017-10-01

    Thanks to the overall 1999-2015 Chandra, XMM-Newton and Swift observations of the supermassive black hole at the center of our Galaxy, Sgr A*, we tested the significance and persistence of the increase of the 'bright and very bright' X-ray flaring rate (FR) argued by Ponti et al. (2015). We detected the flares observed with Swift using the binned light curves, whereas those observed by XMM-Newton and Chandra were detected using the two-step Bayesian blocks (BB) algorithm with a properly calibrated prior number of change-points. We then applied this algorithm to the flare arrival times corrected for the detection efficiency computed for each observation from the observed distribution of flare fluxes and durations. We confirmed a constant overall FR and a rise of the FR for the faintest flares from 2014 Aug. 31, and identified a decay of the FR for the brightest flares from 2013 Aug. and Nov. A mass transfer from the Dusty S-cluster Object/G2 to Sgr A* is not required to produce the rise of the bright FR, since the energy saved by the decay of the number of faint flares during a long time period may be later released by several bright flares during a shorter time period.

  3. Deterministic and stochastic algorithms for resolving the flow fields in ducts and networks using energy minimization

    NASA Astrophysics Data System (ADS)

    Sochi, Taha

    2016-09-01

    Several deterministic and stochastic multi-variable global optimization algorithms (Conjugate Gradient, Nelder-Mead, Quasi-Newton and global) are investigated in conjunction with energy minimization principle to resolve the pressure and volumetric flow rate fields in single ducts and networks of interconnected ducts. The algorithms are tested with seven types of fluid: Newtonian, power law, Bingham, Herschel-Bulkley, Ellis, Ree-Eyring and Casson. The results obtained from all those algorithms for all these types of fluid agree very well with the analytically derived solutions as obtained from the traditional methods which are based on the conservation principles and fluid constitutive relations. The results confirm and generalize the findings of our previous investigations that the energy minimization principle is at the heart of the flow dynamics systems. The investigation also enriches the methods of computational fluid dynamics for solving the flow fields in tubes and networks for various types of Newtonian and non-Newtonian fluids.

  4. Normalization and Implementation of Three Gravitational Acceleration Models

    NASA Technical Reports Server (NTRS)

    Eckman, Randy A.; Brown, Aaron J.; Adamo, Daniel R.; Gottlieb, Robert G.

    2016-01-01

    Unlike the uniform density spherical shell approximations of Newton, the consequence of spaceflight in the real universe is that gravitational fields are sensitive to the asphericity of their generating central bodies. The gravitational potential of an aspherical central body is typically resolved using spherical harmonic approximations. However, attempting to directly calculate the spherical harmonic approximations results in at least two singularities that must be removed to generalize the method and solve for any possible orbit, including polar orbits. Samuel Pines, Bill Lear, and Robert Gottlieb developed three unique algorithms to eliminate these singularities. This paper documents the methodical normalization of two of the three known formulations for singularity-free gravitational acceleration (namely, the Lear and Gottlieb algorithms) and formulates a general method for defining normalization parameters used to generate normalized Legendre polynomials and Associated Legendre Functions (ALFs) for any algorithm. A treatment of the conventional formulation of the gravitational potential and acceleration is also provided, in addition to a brief overview of the philosophical differences between the three known singularity-free algorithms.

  5. Marcus canonical integral for non-Gaussian processes and its computation: pathwise simulation and tau-leaping algorithm.

    PubMed

    Li, Tiejun; Min, Bin; Wang, Zhiming

    2013-03-14

    The stochastic integral ensuring the Newton-Leibnitz chain rule is essential in stochastic energetics. The Marcus canonical integral has this property and can be understood as the Wong-Zakai type smoothing limit when the driving process is non-Gaussian. However, this important concept seems not to be well known to physicists. In this paper, we discuss the Marcus integral for non-Gaussian processes and its computation in the context of stochastic energetics. We give a comprehensive introduction to the Marcus integral and compare three equivalent definitions in the literature. We introduce the exact pathwise simulation algorithm and give its error analysis. We show how to compute thermodynamic quantities based on the pathwise simulation algorithm. We highlight the information hidden in the Marcus mapping, which plays the key role in determining thermodynamic quantities. We further propose the tau-leaping algorithm, which advances the process with deterministic time steps when the tau-leaping condition is satisfied. The numerical experiments and the efficiency analysis show that it is very promising.

  6. Flight Tests of a 40-Foot Nominal Diameter Modified Ringsail Parachute Deployed at Mach 1.64 and Dynamic Pressure of 9.1 Pounds Per Square Foot

    NASA Technical Reports Server (NTRS)

    Eckstrom, Clinton V.; Murrow, Harold N.; Preisser, John S.

    1967-01-01

    A ringsail parachute, which had a nominal diameter of 40 feet (12.2 meters) and reference area of 1256 square feet (117 m²) and was modified to provide a total geometric porosity of 15 percent of the reference area, was flight tested as part of the rocket launch portion of the NASA Planetary Entry Parachute Program. The payload for the flight test was an instrumented capsule from which the test parachute was ejected by a deployment mortar when the system was at a Mach number of 1.64 and a dynamic pressure of 9.1 pounds per square foot (43.6 newtons per m²). The parachute deployed to suspension line stretch in 0.45 second with a resulting snatch force of 1620 pounds (7200 newtons). Canopy inflation began 0.07 second later and the parachute projected area increased slowly to a maximum of 20 percent of that expected for full inflation. During this test, the suspension lines twisted, primarily because the partially inflated canopy could not restrict the twisting to the attachment bridle and risers. This twisting of the suspension lines hampered canopy inflation at a time when velocity and dynamic-pressure conditions were more favorable.

  7. Ion-Conserving Modified Poisson-Boltzmann Theory Considering a Steric Effect in an Electrolyte

    NASA Astrophysics Data System (ADS)

    Sugioka, Hideyuki

    2016-12-01

    The modified Poisson-Nernst-Planck (MPNP) and modified Poisson-Boltzmann (MPB) equations are well known as fundamental equations that consider a steric effect, which prevents unphysical ion concentrations. However, it is unclear whether they are equivalent or not. To clarify this problem, we propose an improved free energy formulation that considers a steric limit with an ion-conserving condition and successfully derive the ion-conserving modified Poisson-Boltzmann (IC-MPB) equations that are equivalent to the MPNP equations. Furthermore, we numerically examine the equivalence by comparing between the IC-MPB solutions obtained by the Newton method and the steady MPNP solutions obtained by the finite-element finite-volume method. A surprising aspect of our finding is that the MPB solutions are much different from the MPNP (IC-MPB) solutions in a confined space. We consider that our findings will significantly contribute to understanding the surface science between solids and liquids.

  8. Multi-Sensor Data Fusion Identification for Shearer Cutting Conditions Based on Parallel Quasi-Newton Neural Networks and the Dempster-Shafer Theory.

    PubMed

    Si, Lei; Wang, Zhongbin; Liu, Xinhua; Tan, Chao; Xu, Jing; Zheng, Kehong

    2015-11-13

    In order to efficiently and accurately identify the cutting condition of a shearer, this paper proposed an intelligent multi-sensor data fusion identification method using the parallel quasi-Newton neural network (PQN-NN) and the Dempster-Shafer (DS) theory. The vibration acceleration signals and current signal of six cutting conditions were collected from a self-designed experimental system and some special state features were extracted from the intrinsic mode functions (IMFs) based on the ensemble empirical mode decomposition (EEMD). In the experiment, three classifiers were trained and tested by the selected features of the measured data, and the DS theory was used to combine the identification results of three single classifiers. Furthermore, some comparisons with other methods were carried out. The experimental results indicate that the proposed method performs with higher detection accuracy and credibility than the competing algorithms. Finally, an industrial application example in the fully mechanized coal mining face was demonstrated to specify the effect of the proposed system.
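
    The decision-fusion step described above can be illustrated with Dempster's rule of combination, here applied to mass functions from two hypothetical classifiers over three made-up cutting conditions. The mass values and condition labels are purely illustrative, not the paper's six-condition experiment.

      # Dempster's rule of combination for two mass functions (illustrative values only).
      from itertools import product

      def dempster_combine(m1, m2):
          """Combine two mass functions whose focal elements are frozensets of hypotheses."""
          combined, conflict = {}, 0.0
          for (a, pa), (b, pb) in product(m1.items(), m2.items()):
              inter = a & b
              if inter:
                  combined[inter] = combined.get(inter, 0.0) + pa * pb
              else:
                  conflict += pa * pb                  # mass falling on the empty set
          return {k: v / (1.0 - conflict) for k, v in combined.items()}

      C1, C2, C3 = "cut_coal", "cut_rock", "idle"      # hypothetical cutting conditions
      theta = frozenset({C1, C2, C3})                  # frame of discernment
      m_vibration = {frozenset({C1}): 0.6, frozenset({C2}): 0.2, theta: 0.2}
      m_current   = {frozenset({C1}): 0.5, frozenset({C3}): 0.3, theta: 0.2}
      print(dempster_combine(m_vibration, m_current))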

  9. Issues associated with Galilean invariance on a moving solid boundary in the lattice Boltzmann method

    NASA Astrophysics Data System (ADS)

    Peng, Cheng; Geneva, Nicholas; Guo, Zhaoli; Wang, Lian-Ping

    2017-01-01

    In lattice Boltzmann simulations involving moving solid boundaries, the momentum exchange between the solid and fluid phases was recently found to be not fully consistent with the principle of local Galilean invariance (GI) when the bounce-back schemes (BBS) and the momentum exchange method (MEM) are used. In the past, this inconsistency was resolved by introducing modified MEM schemes so that the overall moving-boundary algorithm could be more consistent with GI. However, in this paper we argue that the true origin of this violation of Galilean invariance (VGI) in the presence of a moving solid-fluid interface is due to the BBS itself, as the VGI error not only exists in the hydrodynamic force acting on the solid phase, but also in the boundary force exerted on the fluid phase, according to Newton's Third Law. The latter, however, has so far gone unnoticed in previously proposed modified MEM schemes. Based on this argument, we conclude that the previous modifications to the momentum exchange method are incomplete solutions to the VGI error in the lattice Boltzmann method (LBM). An implicit remedy to the VGI error in the LBM and its limitation is then revealed. To address the VGI error for a case when this implicit remedy does not exist, a bounce-back scheme based on coordinate transformation is proposed. Numerical tests in both laminar and turbulent flows show that the proposed scheme can effectively eliminate the errors associated with the usual bounce-back implementations on a no-slip solid boundary, and it can maintain an accurate momentum exchange calculation with minimal computational overhead.

  10. A numerical algorithm for optimal feedback gains in high dimensional linear quadratic regulator problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1991-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problem is proposed. The method, which combines use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite-dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of these ideas is presented.
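
    A minimal sketch of the Newton-Kleinman part of the iteration mentioned above: starting from any stabilizing gain, each step solves a Lyapunov equation for the current closed-loop system and updates the gain, converging to the LQR gain without forming the Riccati solution directly. The system matrices and the initial gain are illustrative; the Chandrasekhar/Smith acceleration machinery of the paper is not reproduced here.

      # Newton-Kleinman iteration for an LQR gain (toy system, illustrative only).
      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov, solve_continuous_are

      A = np.array([[0.0, 1.0], [-2.0, -0.5]])
      B = np.array([[0.0], [1.0]])
      Q = np.eye(2)
      R = np.array([[1.0]])

      K = np.array([[1.0, 1.0]])                       # any stabilizing initial gain
      for _ in range(15):
          Acl = A - B @ K                              # closed-loop dynamics
          # Lyapunov equation: Acl^T P + P Acl = -(Q + K^T R K)
          P = solve_continuous_lyapunov(Acl.T, -(Q + K.T @ R @ K))
          K = np.linalg.solve(R, B.T @ P)              # Kleinman gain update

      P_are = solve_continuous_are(A, B, Q, R)         # reference Riccati solution
      print(K)
      print(np.linalg.solve(R, B.T @ P_are))           # should match the iterated gain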

  11. A numerical algorithm for optimal feedback gains in high dimensional LQR problems

    NASA Technical Reports Server (NTRS)

    Banks, H. T.; Ito, K.

    1986-01-01

    A hybrid method for computing the feedback gains in linear quadratic regulator problems is proposed. The method, which combines the use of a Chandrasekhar type system with an iteration of the Newton-Kleinman form with variable acceleration parameter Smith schemes, is formulated so as to efficiently compute directly the feedback gains rather than solutions of an associated Riccati equation. The hybrid method is particularly appropriate when used with large dimensional systems such as those arising in approximating infinite dimensional (distributed parameter) control systems (e.g., those governed by delay-differential and partial differential equations). Computational advantages of the proposed algorithm over the standard eigenvector (Potter, Laub-Schur) based techniques are discussed, and numerical evidence of the efficacy of our ideas is presented.

  12. Three-Dimensional Navier-Stokes Method with Two-Equation Turbulence Models for Efficient Numerical Simulation of Hypersonic Flows

    NASA Technical Reports Server (NTRS)

    Bardina, J. E.

    1994-01-01

    A new computationally efficient 3-D compressible Reynolds-averaged implicit Navier-Stokes method with advanced two-equation turbulence models for high speed flows is presented. All convective terms are modeled using an entropy satisfying higher-order Total Variation Diminishing (TVD) scheme based on implicit upwind flux-difference split approximations and an arithmetic averaging procedure of primitive variables. This method combines the best features of data management and computational efficiency of space marching procedures with the generality and stability of time dependent Navier-Stokes procedures to solve flows with mixed supersonic and subsonic zones, including streamwise separated flows. Its robust stability derives from a combination of conservative implicit upwind flux-difference splitting with Roe's property U to provide accurate shock capturing capability that non-conservative schemes do not guarantee, an alternating symmetric Gauss-Seidel 'method of planes' relaxation procedure coupled with a three-dimensional two-factor diagonal-dominant approximate factorization scheme, TVD flux limiters of higher-order flux differences satisfying realizability, and well-posed characteristic-based implicit boundary-point approximations consistent with the local characteristics domain of dependence. The efficiency of the method is greatly increased with Newton-Raphson acceleration, which allows convergence in essentially one forward sweep for supersonic flows. The method is verified by comparing with experiment and other Navier-Stokes methods. Here, results of adiabatic and cooled flat plate flows, compression corner flow, and 3-D hypersonic shock-wave/turbulent boundary layer interaction flows are presented. The robust 3-D method achieves a computational efficiency at least one order of magnitude better than that of the CNS Navier-Stokes code. It provides cost-effective aerodynamic predictions in agreement with experiment, and the capability of predicting complex flow structures in complex geometries with good accuracy.

  13. Combined interventions for mitigation of an influenza A (H1N1) 2009 outbreak in a physical training camp in Beijing, China.

    PubMed

    Chu, Chen-Yi; de Silva, U Chandimal; Guo, Jin-Peng; Wang, Yong; Wen, Liang; Lee, Vernon J; Li, Shen-Long; Huang, Liu-Yu

    2017-07-01

    Many studies have suggested the effectiveness of single control measures in the containment and mitigation of pandemic influenza A (H1N1) 2009. The effects of combined interventions by multiple control measures in reducing the impact of an influenza A (H1N1) 2009 outbreak in a closed physical training camp in Beijing, China were evaluated. Oseltamivir was prescribed for the treatment of confirmed cases and possible cases and as prophylaxis for all other participants in this training camp. Public health control measures were applied simultaneously, including the isolation of patients and possible cases, personal protection and hygiene, and social distancing measures. Symptom surveillance of all participants was initiated, and the actual attack rate was calculated. For comparison, the theoretical attack rate for this outbreak was projected using the Newton-Raphson numerical method. A total of 3256 persons were present at the physical training camp. During the outbreak, 405 (68.3%) possible cases and 26 (4.4%) confirmed cases were reported before the intervention and completed oseltamivir treatment; 162 (27.3%) possible cases were reported after the intervention and received part treatment and part prophylaxis. The other 2663 participants completed oseltamivir prophylaxis. Of the possible cases, 181 with fever ≥38.5°C were isolated. The actual attack rate for this outbreak of pandemic influenza A (H1N1) 2009 was 18.2%, which is much lower than the theoretical attack rate of 80% projected. Combined interventions of large-scale antiviral ring prophylaxis and treatment and public health control measures could be applied to reduce the magnitude of influenza A (H1N1) 2009 outbreaks in closed settings. Copyright © 2017 The Authors. Published by Elsevier Ltd.. All rights reserved.
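
    The abstract does not state which equation the Newton-Raphson projection was applied to, but a common choice for a theoretical attack rate in a closed population is the SIR final-size relation z = 1 - exp(-R0*z). The sketch below solves that relation by Newton-Raphson; the relation itself and the illustrative R0 = 2.0 (which happens to give an attack rate near 80%) are assumptions, not details taken from the study.

      # Newton-Raphson on an assumed SIR final-size relation (illustrative parameters only).
      import math

      def final_size(R0, z0=0.5, tol=1e-12):
          """Solve f(z) = z - (1 - exp(-R0*z)) = 0 for the attack rate z."""
          z = z0
          for _ in range(50):
              f = z - (1.0 - math.exp(-R0 * z))
              df = 1.0 - R0 * math.exp(-R0 * z)        # derivative of f with respect to z
              z_new = z - f / df                       # Newton-Raphson step
              if abs(z_new - z) < tol:
                  return z_new
              z = z_new
          return z

      print(final_size(2.0))                           # about 0.797, i.e. close to 80%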

  14. Methods for calculating confidence and credible intervals for the residual between-study variance in random effects meta-regression models

    PubMed Central

    2014-01-01

    Background: Meta-regression is becoming increasingly used to model study-level covariate effects. However, this type of statistical analysis presents many difficulties and challenges. Here two methods for calculating confidence intervals for the magnitude of the residual between-study variance in random effects meta-regression models are developed. A further suggestion for calculating credible intervals using informative prior distributions for the residual between-study variance is presented. Methods: Two recently proposed and, under the assumptions of the random effects model, exact methods for constructing confidence intervals for the between-study variance in random effects meta-analyses are extended to the meta-regression setting. The use of Generalised Cochran heterogeneity statistics is extended to the meta-regression setting, and a Newton-Raphson procedure is developed to implement the Q profile method for meta-analysis and meta-regression. WinBUGS is used to implement informative priors for the residual between-study variance in the context of Bayesian meta-regressions. Results: Results are obtained for two contrasting examples, where the first example involves a binary covariate and the second involves a continuous covariate. Intervals for the residual between-study variance are wide for both examples. Conclusions: Statistical methods, and R computer software, are available to compute exact confidence intervals for the residual between-study variance under the random effects model for meta-regression. These frequentist methods are almost as easily implemented as their established counterparts for meta-analysis. Bayesian meta-regressions are also easily performed by analysts who are comfortable using WinBUGS. Estimates of the residual between-study variance in random effects meta-regressions should be routinely reported and accompanied by some measure of their uncertainty. Confidence and/or credible intervals are well-suited to this purpose. PMID:25196829
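
    As a rough illustration of the Q-profile idea in the plain meta-analysis setting (no covariates, so not the meta-regression extension the authors develop), the sketch below uses Newton-Raphson to find the between-study variance at which the generalised Q statistic equals a chi-square quantile. The effect estimates and within-study variances are invented for the example, and the derivative and step safeguards are only one reasonable choice, not the authors' implementation.

```python
import numpy as np
from scipy.stats import chi2

def q_profile_root(target, y, v, tau2=0.1, tol=1e-10, max_iter=200):
    """Solve Q(tau2) = target by Newton-Raphson, where Q is the generalised
    Cochran statistic. Because the weighted mean minimises Q over mu, the
    derivative of Q with respect to tau2 reduces to -sum((y-mu)^2/(v+tau2)^2)."""
    for _ in range(max_iter):
        w = 1.0 / (v + tau2)
        mu = np.sum(w * y) / np.sum(w)       # weighted mean at this tau2
        r = y - mu
        f = np.sum(w * r ** 2) - target
        fp = -np.sum(r ** 2 * w ** 2)
        tau2_new = max(tau2 - f / fp, 0.0)   # keep the variance non-negative
        if abs(tau2_new - tau2) < tol:
            return tau2_new
        tau2 = tau2_new
    return tau2

if __name__ == "__main__":
    # Made-up effect estimates and within-study variances for six studies.
    y = np.array([0.02, 0.30, 0.45, 0.70, 0.21, -0.10])
    v = np.array([0.020, 0.020, 0.030, 0.010, 0.025, 0.030])
    k = len(y)
    # Q(tau2) is decreasing in tau2, so the lower limit uses the upper quantile.
    lower = q_profile_root(chi2.ppf(0.975, k - 1), y, v)
    upper = q_profile_root(chi2.ppf(0.025, k - 1), y, v)
    print(f"95% CI for tau^2: ({lower:.4f}, {upper:.4f})")
```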

  15. The One-Water Hydrologic Flow Model - The next generation in fully integrated hydrologic simulation software

    NASA Astrophysics Data System (ADS)

    Boyce, S. E.; Hanson, R. T.

    2015-12-01

    The One-Water Hydrologic Flow Model (MF-OWHM) is a MODFLOW-based integrated hydrologic flow model that is the most complete version, to date, of the MODFLOW family of hydrologic simulators needed for the analysis of a broad range of conjunctive-use issues. MF-OWHM fully links the movement and use of groundwater, surface water, and imported water for consumption by agriculture and natural vegetation on the landscape, and for potable and other uses within a supply-and-demand framework. MF-OWHM is based on the Farm Process for MODFLOW-2005 combined with Local Grid Refinement, Streamflow Routing, the Surface-water Routing Process, Seawater Intrusion, Riparian Evapotranspiration, and the Newton-Raphson solver. MF-OWHM also includes linkages for deformation-, flow-, and head-dependent flows; additional observation and parameter options for higher-order calibrations; and redesigned code to facilitate self-updating models and faster simulation run times. The next version of MF-OWHM, currently under development, will include a new surface-water operations module that simulates dynamic reservoir operations, the conduit flow process for karst aquifers and leaky pipe networks, a new subsidence and aquifer compaction package, and additional features and enhancements to enable more integration and cross communication between traditional MODFLOW packages. By retaining and tracking the water within the hydrosphere, MF-OWHM accounts for "all of the water everywhere and all of the time." This philosophy provides more confidence in the water accounting by the scientific community and provides the public a foundation needed to address wider classes of problems, such as evaluation of conjunctive-use alternatives and sustainability analysis, including potential adaptation and mitigation strategies and best management practices.

  16. A Novel Approach for Modeling Chemical Reaction in Generalized Fluid System Simulation Program

    NASA Technical Reports Server (NTRS)

    Sozen, Mehmet; Majumdar, Alok

    2002-01-01

    The Generalized Fluid System Simulation Program (GFSSP) is a computer code developed at NASA Marshall Space Flight Center for analyzing steady state and transient flow rates, pressures, temperatures, and concentrations in a complex flow network. The code, which performs system-level simulation, can handle compressible and incompressible flows as well as phase change and mixture thermodynamics. Thermodynamic and thermophysical property programs GASP, WASP, and GASPAK provide the necessary data for fluids such as helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, water, parahydrogen, isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, several refrigerants, nitrogen trifluoride, and ammonia. The program, which was developed out of the need for an easy-to-use system-level simulation tool for complex flow networks, has been used for purposes including: Space Shuttle Main Engine (SSME) High Pressure Oxidizer Turbopump secondary flow circuits, axial thrust balance of the Fastrac Engine turbopump, the pressurized propellant feed system for the Propulsion Test Article at Stennis Space Center, the X-34 main propulsion system, the X-33 reaction control system and thermal protection system, and International Space Station Environmental Control and Life Support System design. There has been an increasing demand for implementing a combustion simulation capability into GFSSP in order to increase its system-level simulation capability of a liquid rocket propulsion system, starting from the propellant tanks up to the thruster nozzle, for spacecraft as well as launch vehicles. The present work was undertaken to address this need. The chemical equilibrium equations derived from the second law of thermodynamics and the energy conservation equation derived from the first law of thermodynamics are solved simultaneously by a Newton-Raphson method. The numerical scheme was implemented as a User Subroutine in GFSSP.
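
    GFSSP's actual equilibrium and energy equations are not reproduced here; as a generic sketch of the simultaneous Newton-Raphson solve the abstract describes, the snippet below applies a multivariate Newton-Raphson iteration with a finite-difference Jacobian to a small placeholder system (the two equations stand in for an equilibrium relation and an energy balance, and are not GFSSP's).

```python
import numpy as np

def newton_system(residual, x0, tol=1e-10, max_iter=50, eps=1e-7):
    """Multivariate Newton-Raphson with a forward-difference Jacobian."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            break
        n = x.size
        jac = np.empty((n, n))
        for j in range(n):                 # build the Jacobian column by column
            xp = x.copy()
            xp[j] += eps
            jac[:, j] = (residual(xp) - f) / eps
        x = x - np.linalg.solve(jac, f)    # Newton update
    return x

if __name__ == "__main__":
    # Placeholder coupled equations in two unknowns (not GFSSP's relations).
    def residual(x):
        a, b = x
        return np.array([a * b - 2.0,      # "equilibrium"-like constraint
                         a + b - 3.0])     # "conservation"-like constraint
    print(newton_system(residual, [0.5, 2.5]))   # converges to (1, 2)
```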

  17. Computation of iodine species concentrations in water

    NASA Technical Reports Server (NTRS)

    Schultz, John R.; Mudgett, Paul D.; Flanagan, David T.; Sauer, Richard L.

    1994-01-01

    During an evaluation of the use of iodine as a water disinfectant and the development of methods for measuring various iodine species in water onboard Space Station Freedom, it became necessary to compute the concentrations of the various species based on equilibrium principles alone. Of particular concern was the case when various amounts of iodine, iodide, strong acid, and strong base are added to water. Such solutions can be used to evaluate the performance of various monitoring methods being considered. The authors of this paper present an overview of aqueous iodine chemistry, a set of nonlinear equations which can be used to model the above case, and a computer program for solving this system of equations using the Newton-Raphson method. The program was validated by comparing results over a range of concentrations and pH values with those previously presented by Gottardi for a given pH. Use of this program indicated that many cases have multiple roots, so selecting an appropriate initial guess is important. Comparison of program results with laboratory results for the case when only iodine is added to water indicates that the program predicts pH values higher than those observed at the iodine concentrations normally used for water disinfection. Extending the model to include the effects of iodate formation results in the computed pH values being closer to those observed, but the model with iodate does not agree well for the case in which base is added in addition to iodine to raise the pH. Potential explanations include failure to obtain equilibrium conditions in the lab, inaccuracies in published values for the equilibrium constants, an inadequate model of iodine chemistry, and/or the lack of adequate analytical methods for measuring the various iodine species in water.
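
    The iodine equilibrium constants and species equations are not given in the abstract; purely to illustrate the reported sensitivity to the initial guess when multiple roots exist, the sketch below runs scalar Newton-Raphson on a cubic with three real roots (a stand-in, not the iodine model) and shows different starting values converging to different roots.

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    """Plain scalar Newton-Raphson iteration."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            break
    return x

if __name__ == "__main__":
    # A cubic with three real roots (x = -2, 1, 3) standing in for a
    # multi-root equilibrium equation.
    f = lambda x: (x + 2.0) * (x - 1.0) * (x - 3.0)      # x^3 - 2x^2 - 5x + 6
    fp = lambda x: 3.0 * x**2 - 4.0 * x - 5.0
    for guess in (-5.0, 0.5, 10.0):
        print(f"start {guess:5.1f} -> root {newton(f, fp, guess):.6f}")
```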

  18. A Generalized Fluid System Simulation Program to Model Flow Distribution in Fluid Networks

    NASA Technical Reports Server (NTRS)

    Majumdar, Alok; Bailey, John W.; Schallhorn, Paul; Steadman, Todd

    1998-01-01

    This paper describes a general purpose computer program for analyzing steady state and transient flow in a complex network. The program is capable of modeling phase changes, compressibility, mixture thermodynamics, and external body forces such as gravity and centrifugal force. The program's preprocessor allows the user to interactively develop a fluid network simulation consisting of nodes and branches. Mass, energy, and species conservation equations are solved at the nodes; the momentum conservation equations are solved in the branches. The program contains subroutines for computing "real fluid" thermodynamic and thermophysical properties for 33 fluids. The fluids are: helium, methane, neon, nitrogen, carbon monoxide, oxygen, argon, carbon dioxide, fluorine, hydrogen, parahydrogen, water, kerosene (RP-1), isobutane, butane, deuterium, ethane, ethylene, hydrogen sulfide, krypton, propane, xenon, R-11, R-12, R-22, R-32, R-123, R-124, R-125, R-134A, R-152A, nitrogen trifluoride, and ammonia. The program also provides the options of using any incompressible fluid with constant density and viscosity or an ideal gas. Seventeen different resistance/source options are provided for modeling momentum sources or sinks in the branches. These options include: pipe flow, flow through a restriction, non-circular duct, pipe flow with entrance and/or exit losses, thin sharp orifice, thick orifice, square edge reduction, square edge expansion, rotating annular duct, rotating radial duct, labyrinth seal, parallel plates, common fittings and valves, pump characteristics, pump power, valve with a given loss coefficient, and a Joule-Thomson device. The system of equations describing the fluid network is solved by a hybrid numerical method that is a combination of the Newton-Raphson and successive substitution methods. This paper also illustrates the application and verification of the code by comparison with the Hardy Cross method for steady-state flow and with an analytical solution for unsteady flow.
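
    The hybrid Newton-Raphson/successive-substitution solver itself is not shown here; as a sketch of the Hardy Cross comparison mentioned at the end of the abstract, the snippet below applies the classical Hardy Cross loop correction to the simplest possible network, two parallel pipes between the same pair of nodes, with made-up resistances and total flow.

```python
def hardy_cross_two_pipes(r1, r2, q_total, max_iter=50, tol=1e-10):
    """Balance flow between two parallel pipes sharing end nodes.
    Head loss per pipe is modelled as h = r * Q * |Q|; the loop correction
    dQ = -sum(r*Q*|Q|) / (2 * sum(r*|Q|)) is applied until the loop closes."""
    q1 = q_total / 2.0          # initial guess: split the flow evenly
    q2 = q_total - q1
    for _ in range(max_iter):
        loop_residual = r1 * q1 * abs(q1) - r2 * q2 * abs(q2)
        dq = -loop_residual / (2.0 * (r1 * abs(q1) + r2 * abs(q2)))
        q1 += dq
        q2 -= dq
        if abs(dq) < tol:
            break
    return q1, q2

if __name__ == "__main__":
    # Made-up resistances; pipe 2 is "tighter", so it carries less of the flow.
    q1, q2 = hardy_cross_two_pipes(r1=2.0, r2=8.0, q_total=1.0)
    print(f"Q1 = {q1:.4f}, Q2 = {q2:.4f}")   # head losses now match: r1*Q1^2 == r2*Q2^2
```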

  19. Material parameter measurements at high temperatures

    NASA Technical Reports Server (NTRS)

    Dominek, A.; Park, A.; Peters, L., Jr.

    1988-01-01

    Alternate fixtures and techniques for the measurement of constitutive material parameters at elevated temperatures are presented. The technique utilizes scattered field data from material-coated cylinders between parallel plates or material-coated hemispheres over a finite-size ground plane. The data acquisition is centered around the HP 8510B Network Analyzer. The parameters are then found from a numerical search algorithm using the Newton-Raphson technique with the measured and calculated fields from these canonical scatterers. Numerical and experimental results are shown.
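
    The coated-cylinder and coated-hemisphere forward models are not reproduced here; as a hedged stand-in, the sketch below inverts the normal-incidence reflection coefficient of a non-magnetic dielectric half-space for a complex permittivity, using a Newton-Raphson iteration with a finite-difference derivative, to illustrate the kind of numerical search the abstract describes. The "measured" value is synthesized from the same model.

```python
import cmath

def reflection(eps):
    """Normal-incidence reflection coefficient of a non-magnetic dielectric
    half-space; a simple stand-in for the canonical-scatterer forward model."""
    n = cmath.sqrt(eps)
    return (1.0 - n) / (1.0 + n)

def invert_permittivity(gamma_meas, eps0=2.0 - 0.1j, tol=1e-12, max_iter=50):
    """Complex Newton-Raphson on f(eps) = reflection(eps) - gamma_meas,
    using a finite-difference derivative."""
    eps, h = eps0, 1e-6
    for _ in range(max_iter):
        f = reflection(eps) - gamma_meas
        fp = (reflection(eps + h) - reflection(eps)) / h
        step = f / fp
        eps -= step
        if abs(step) < tol:
            break
    return eps

if __name__ == "__main__":
    true_eps = 4.0 - 0.5j                 # made-up "unknown" material
    gamma = reflection(true_eps)          # pretend this was the measured field quantity
    print(invert_permittivity(gamma))     # recovers approximately 4 - 0.5j
```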

  20. Computational structures for robotic computations

    NASA Technical Reports Server (NTRS)

    Lee, C. S. G.; Chang, P. R.

    1987-01-01

    The computational problems of inverse kinematics and inverse dynamics of robot manipulators are discussed, taking advantage of parallelism and pipelining architectures. For the computation of the inverse kinematic position solution, a maximally pipelined CORDIC architecture has been designed based on a functional decomposition of the closed-form joint equations. For the inverse dynamics computation, an efficient p-fold parallel algorithm to overcome the recurrence problem of the Newton-Euler equations of motion and achieve the time lower bound of O(log2 n) has also been developed.
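
    The paper's pipelined CORDIC hardware is not reproduced; as a minimal software emulation of the CORDIC primitive such architectures are built from, the snippet below runs the vectoring-mode iteration, which recovers an angle (as needed in closed-form inverse kinematic solutions) and the scaled vector magnitude using only shift-and-add style updates, emulated here in floating point.

```python
import math

def cordic_vectoring(x, y, iterations=32):
    """CORDIC in vectoring mode: rotate (x, y) toward the positive x-axis with
    micro-rotations by atan(2^-i). The accumulated rotation equals atan2(y, x)
    (for x > 0), and x grows by the fixed CORDIC gain, which is divided out."""
    angle = 0.0
    gain = 1.0
    for i in range(iterations):
        sigma = -1.0 if y > 0 else 1.0          # direction that drives y toward 0
        factor = 2.0 ** (-i)
        x, y = x - sigma * y * factor, y + sigma * x * factor
        angle -= sigma * math.atan(factor)      # accumulate the recovered angle
        gain *= math.sqrt(1.0 + factor * factor)
    return angle, x / gain                      # (angle, magnitude)

if __name__ == "__main__":
    ang, mag = cordic_vectoring(3.0, 4.0)
    print(ang, math.atan2(4.0, 3.0))            # both approximately 0.9273
    print(mag)                                  # approximately 5.0
```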
