A Non-smooth Newton Method for Multibody Dynamics
Erleben, K.; Ortiz, R.
2008-09-01
In this paper we deal with the simulation of rigid bodies. Rigid body dynamics have become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contribution of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
Inexact Newton dogleg methods.
Shadid, John Nicolas; Simonis, Joseph P.; Pawlowski, Roger Patrick; Walker, Homer Franklin
2005-05-01
The dogleg method is a classical trust-region technique for globalizing Newton's method. While it is widely used in optimization, including large-scale optimization via truncated-Newton approaches, its implementation in general inexact Newton methods for systems of nonlinear equations can be problematic. In this paper, we first outline a very general dogleg method suitable for the general inexact Newton context and provide a global convergence analysis for it. We then discuss certain issues that may arise with the standard dogleg implementational strategy and propose modified strategies that address them. Newton-Krylov methods have provided important motivation for this work, and we conclude with a report on numerical experiments involving a Newton-GMRES dogleg method applied to benchmark CFD problems.
Newton modified barrier method in constrained optimization
NASA Technical Reports Server (NTRS)
Polyak, R.
1990-01-01
In this paper, we develop and investigate the Newton method for solving constrained (non-smooth) optimization problems. This approach is based on the modified barrier functions (MBF) theory and on the global converging step-size version of the Newton method for smooth unconstrained optimization. Due to the excellent properties of the MBF near primal-dual solution, the Newton modified barrier method (NMBM) has a better rate of convergence, better complexity bound, and is much more stable in the final stage of the computational process than the methods which are based on the classical barrier functions (CBF).
Sometimes "Newton's Method" Always "Cycles"
ERIC Educational Resources Information Center
Latulippe, Joe; Switkes, Jennifer
2012-01-01
Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…
Sometimes "Newton's Method" Always "Cycles"
ERIC Educational Resources Information Center
Latulippe, Joe; Switkes, Jennifer
2012-01-01
Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…
Structural optimization using Newton Modified Barrier Method
NASA Astrophysics Data System (ADS)
Khot, N. S.; Polyak, R.; Schneur, R.
1992-09-01
The Newton Modified Barrier Method (NMBM) was applied to a structural optimization problem with large numbers of design variables and constraints. This mathematical optimization algorithm was based on Modified Barrier Function (MBF) theory and the global converging step version of the Newton Method for smooth unconstrained optimization. For illustrating the convergence characteristics of this method to structural optimization, a truss structure with 721 design variables with constraints on displacements and minimum size requirements was solved. The convergence to the optimum was found to be monotonic. The rate of convergence was compared with solving the same problem with ASTROS and optimality criteria approach.
Fractal aspects and convergence of Newton`s method
Drexler, M.
1996-12-31
Newton`s Method is a widely established iterative algorithm for solving non-linear systems. Its appeal lies in its great simplicity, easy generalization to multiple dimensions and a quadratic local convergence rate. Despite these features, little is known about its global behavior. In this paper, we will explain a seemingly random global convergence pattern using fractal concepts and show that the behavior of the residual is entirely explicable. We will also establish quantitative results for the convergence rates. Knowing the mechanism of fractal generation, we present a stabilization to the orthodox Newton method that remedies the fractal behavior and improves convergence.
[Isaac Newton's Anguli Contactus method].
Wawrzycki, Jarosław
2014-01-01
In this paper we discuss the geometrical method for calculating the curvature of a class of curves from the third Book of Isaac Newton's Principia. The method involves any curve which is generated from an elementary curve (actually from any curve whose curvature we known of) by means of transformation increasing the polar angular coordinate in a constant ratio, but unchanging the polar radial angular coordinate.
Kepler equation and accelerated Newton method
NASA Astrophysics Data System (ADS)
Palacios, M.
2002-01-01
We study the efficiency of the accelerated Newton method (Garlach, SIAM Rev. 36 (1994) 272-276) for several orders of convergence versus Danby's method for the resolution of Kepler's equation; we find that the cited method of order three is competitive with Danby's method and the classical Newton's method. We also generalize the accelerated Newton method for the resolution of system of algebraic equations, obtaining a formula of order three and a proof of its convergence; its application to several examples shows that its efficiency is greater than Newton's method.
Third-order modification of Newton's method
NASA Astrophysics Data System (ADS)
Jisheng, Kou; Yitian, Li; Xiuhua, Wang
2007-08-01
In this paper, we present a new modification of Newton's method for solving non-linear equations. Analysis of convergence shows that the new method is cubically convergent. Numerical examples show that the new method can compete with the classical Newton's method.
The Lyapunov spectrum as the Newton method
NASA Astrophysics Data System (ADS)
Iommi, Godofredo
2012-05-01
For a class of dynamical systems, the cookie-cutter maps, we prove that the Lyapunov spectrum coincides with the map given by the Newton-Raphson method applied to the derivative of the pressure function.
Subsampled Hessian Newton Methods for Supervised Learning.
Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen
2015-08-01
Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
Generalized Newton Method for Energy Formulation in Image Processing
2008-04-01
Blurred (b) - Newton with LH (c) - Standard Newton (d) - Newton with Ls Fig. 5.2. Deblurring of the clown image with different Newton-like methods...proposed method, the inner product can be adapted to the problem at hand. In the second example, Figure 5.2, the 330 × 291 clown image was additionally
A combined modification of Newton`s method for systems of nonlinear equations
Monteiro, M.T.; Fernandes, E.M.G.P.
1996-12-31
To improve the performance of Newton`s method for the solution of systems of nonlinear equations a modification to the Newton iteration is implemented. The modified step is taken as a linear combination of Newton step and steepest descent directions. In the paper we describe how the coefficients of the combination can be generated to make effective use of the two component steps. Numerical results that show the usefulness of the combined modification are presented.
The Newton Modified Barrier Method for QP Problems
NASA Technical Reports Server (NTRS)
Melman, A.; Polyak, R.
1996-01-01
The Modified Barrier Functions (MBF) have elements of both Classical Lagrangians (CL) and Classical Barrier Functions (CBF). The MBF methods find an unconstrained minimizer of some smooth barrier function in primal space and then update the Lagrange multipliers, while the barrier parameter either remains fixed or can be updated at each step. The numerical realization of the MBF method leads to the Newton MBF method, where the primal minimizer is found by using Newton's method. This minimizer is then used to update the Lagrange multipliers. In this paper, we examine the Newton MBF method for the Quadratic Programming (QP) problem. It will be shown that under standard second-order optimality conditions, there is a ball around the primal solution and a cut cone in the dual space such that for a set of Lagrange multipliers in this cut cone, the method converges quadratically to the primal minimizer from any point in the aforementioned ball, and continues to do so after each Lagrange multiplier update. The Lagrange multipliers remain within the cut cone and converge linearly to their optimal values. Any point in this ball will be called a "hot start". Starting at such a "hot start", at most Omicron(1n 1n epsilon(exp -1)) Newton steps are sufficient to perform the primal minimization which is necessary for the Lagrange multiplier update. Here, epsilon > 0 is the desired accuracy. Because of the linear convergence of the Lagrange multipliers, this means that only Omicron(1n epsilon(exp -1))omicron(ln 1n epsilon(exp-1)) Newton steps are required to reach an epsilon-approximation to the solution from any "hot start". In order to reach the "hot start", one has to perform Omicron(square root(m) 1n C) Newton steps, where m characterizes the size of the problem and C > 0 is the condition number of the QP problem. This condition number will be characterized explicitly in terms of key parameters of the QP problem, which in turn depend on the input data and the size of the problem.
The Newton Modified Barrier Method for QP Problems
NASA Technical Reports Server (NTRS)
Melman, A.; Polyak, R.
1996-01-01
The Modified Barrier Functions (MBF) have elements of both Classical Lagrangians (CL) and Classical Barrier Functions (CBF). The MBF methods find an unconstrained minimizer of some smooth barrier function in primal space and then update the Lagrange multipliers, while the barrier parameter either remains fixed or can be updated at each step. The numerical realization of the MBF method leads to the Newton MBF method, where the primal minimizer is found by using Newton's method. This minimizer is then used to update the Lagrange multipliers. In this paper, we examine the Newton MBF method for the Quadratic Programming (QP) problem. It will be shown that under standard second-order optimality conditions, there is a ball around the primal solution and a cut cone in the dual space such that for a set of Lagrange multipliers in this cut cone, the method converges quadratically to the primal minimizer from any point in the aforementioned ball, and continues to do so after each Lagrange multiplier update. The Lagrange multipliers remain within the cut cone and converge linearly to their optimal values. Any point in this ball will be called a "hot start". Starting at such a "hot start", at most Omicron(1n 1n epsilon(exp -1)) Newton steps are sufficient to perform the primal minimization which is necessary for the Lagrange multiplier update. Here, epsilon > 0 is the desired accuracy. Because of the linear convergence of the Lagrange multipliers, this means that only Omicron(1n epsilon(exp -1))omicron(ln 1n epsilon(exp-1)) Newton steps are required to reach an epsilon-approximation to the solution from any "hot start". In order to reach the "hot start", one has to perform Omicron(square root(m) 1n C) Newton steps, where m characterizes the size of the problem and C > 0 is the condition number of the QP problem. This condition number will be characterized explicitly in terms of key parameters of the QP problem, which in turn depend on the input data and the size of the problem.
Solving a Class of Nonlinear Eigenvalue Problems by Newton's Method
Gao, Weiguo; Yang, Chao; Meza, Juan C.
2009-07-02
We examine the possibility of using the standard Newton's method for solving a class of nonlinear eigenvalue problems arising from electronic structure calculation. We show that the Jacobian matrix associated with this nonlinear system has a special structure that can be exploited to reduce the computational complexity of the Newton's method. Preliminary numerical experiments indicate that the Newton's method can be more efficient for small problems in which a few smallest eigenpairs are needed.
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
1998-03-10
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton`s method is used to convert the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus elevating the need to form or store a Jacobian matrix for Newton`s method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator split approach where nonlinearities are not converged within a time step.
NASA Astrophysics Data System (ADS)
Wilson, C.; Murdin, P.
2000-11-01
Isaac Newton (1642-1727) is known pre-eminently for discoveries in mathematics (binomial theorem and fundamental theorem of the calculus), optics (the heterogeneity of white light) and mechanics (laws of motion and universal gravitation). Not undisputed are some questions of priority and how in detail to characterize these achievements. Beyond question, however, is the foundational characte...
Newton-Krylov-Schwarz methods in unstructured grid Euler flow
Keyes, D.E.
1996-12-31
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton`s method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on an aerodynamic application emphasizing comparisons with a standard defect-correction approach and subdomain preconditioner consistency.
Structural Optimization Using the Newton Modified Barrier Method
NASA Technical Reports Server (NTRS)
Khot, N. S.; Polyak, R. A.; Schneur, R.; Berke, L.
1995-01-01
The Newton Modified Barrier Method (NMBM) is applied to structural optimization problems with large a number of design variables and constraints. This nonlinear mathematical programming algorithm was based on the Modified Barrier Function (MBF) theory and the Newton method for unconstrained optimization. The distinctive feature of the NMBM method is the rate of convergence that is due to the fact that the design remains in the Newton area after each Lagrange multiplier update. This convergence characteristic is illustrated by application to structural problems with a varying number of design variables and constraints. The results are compared with those obtained by optimality criteria (OC) methods and by the ASTROS program.
Low-rank Quasi-Newton updates for Robust Jacobian lagging in Newton methods
Brown, J.; Brune, P.
2013-07-01
Newton-Krylov methods are standard tools for solving nonlinear problems. A common approach is to 'lag' the Jacobian when assembly or preconditioner setup is computationally expensive, in exchange for some degradation in the convergence rate and robustness. We show that this degradation may be partially mitigated by using the lagged Jacobian as an initial operator in a quasi-Newton method, which applies unassembled low-rank updates to the Jacobian until the next full reassembly. We demonstrate the effectiveness of this technique on problems in glaciology and elasticity. (authors)
Choosing the forcing terms in an inexact Newton method
Eisenstat, S.C.; Walker, H.F.
1994-12-31
An inexact Newton method is a generalization of Newton`s method for solving F(x) = 0, F: {Re}{sup n} {r_arrow} {Re}{sup n}, in which each step reduces the norm of the local linear model of F. At the kth iteration, the norm reduction is usefully expressed by the inexact Newton condition where x{sub k} is the current approximate solution and s{sub k} is the step. In many applications, an {eta}{sub k} is first specified, and then an S{sub k} is found for which the inexact Newton condition holds. Thus {eta}{sub k} is often called a {open_quotes}forcing term{close_quotes}. In practice, the choice of the forcing terms is usually critical to the efficiency of the method and can affect robustness as well. Here, the authors outline several promising choices, discuss theoretical support for them, and compare their performance in a Newton iterative (truncated Newton) method applied to several large-scale problems.
Optimization: NURBS and the quasi-Newton method
NASA Astrophysics Data System (ADS)
Coburn, Todd Dale
Optimization is important in both engineering and mathematics. The Quasi-Newton Method is widely used for optimization due to its speed and efficiency. NonUniform Rational B-Splines (NURBS) are piecewise parametric approximations to curves and surfaces. NURBS have great curve-fitting properties that can be applied to improve optimization performance. This dissertation investigated the use of NURBS in optimization, focusing primarily on the coupling of NURBS with the Quasi-Newton Method. A hybrid optimization procedure dubbed the NURBS-Quasi-Newton (NQN) Method was developed and utilized that can virtually assure that the global minimum will be found. A Method was also developed to implement Pure NURBS Optimization (PNO), which can be used to optimize non-continuous and singular functions as well as functions of point cloud data. It was concluded that NURBS offer significant benefits for optimization, both individually and coupled with Quasi-Newton Methods.
A perfect memory makes the continuous Newton method look ahead
NASA Astrophysics Data System (ADS)
Kim, M. B.; Neuberger, J. W.; Schleich, W. P.
2017-08-01
Hauser and Nedić (2005 SIAM J. Optim. 15 915) have pointed out an intriguing property of a perturbed flow line generated by the continuous Newton method: it returns to the unperturbed one once the perturbation ceases to exist. We show that this feature is a direct consequence of the phase being constant along any Newton trajectory, that is, once a phase always that phase.
Some modifications of Newton's method with fifth-order convergence
NASA Astrophysics Data System (ADS)
Kou, Jisheng; Li, Yitian; Wang, Xiuhua
2007-12-01
In this paper, we present some new modifications of Newton's method for solving non-linear equations. Analysis of convergence shows that these methods have order of convergence five. Numerical tests verifying the theory are given and based on these methods, a class of new multistep iterations is developed.
Implicit Newton-Krylov methods for modeling blast furnace stoves
Howse, J.W.; Hansen, G.A.; Cagliostro, D.J.; Muske, K.R.
1998-03-01
In this paper the authors discuss the use of an implicit Newton-Krylov method to solve a set of partial differential equations representing a physical model of a blast furnace stove. The blast furnace stove is an integral part of the iron making process in the steel industry. These stoves are used to heat air which is then used in the blast furnace to chemically reduce iron ore to iron metal. The solution technique used to solve the discrete representations of the model and control PDE`s must be robust to linear systems with disparate eigenvalues, and must converge rapidly without using tuning parameters. The disparity in eigenvalues is created by the different time scales for convection in the gas, and conduction in the brick; combined with a difference between the scaling of the model and control PDE`s. A preconditioned implicit Newton-Krylov solution technique was employed. The procedure employs Newton`s method, where the update to the current solution at each stage is computed by solving a linear system. This linear system is obtained by linearizing the discrete approximation to the PDE`s, using a numerical approximation for the Jacobian of the discretized system. This linear system is then solved for the needed update using a preconditioned Krylov subspace projection method.
Smoothed Profile Method to Simulate Colloidal Particles in Complex Fluids
NASA Astrophysics Data System (ADS)
Yamamoto, Ryoichi; Nakayama, Yasuya; Kim, Kang
A new direct numerical simulation scheme, called "Smoothed Profile (SP) method," is presented. The SP method, as a direct numerical simulation of particulate flow, provides a way to couple continuum fluid dynamics with rigid-body dynamics through smoothed profile of colloidal particle. Our formulation includes extensions to colloids in multicomponent solvents such as charged colloids in electrolyte solutions. This method enables us to compute the time evolutions of colloidal particles, ions, and host fluids simultaneously by solving Newton, advection-diffusion, and Navier-Stokes equations so that the electro-hydrodynamic couplings can be fully taken into account. The electrophoretic mobilities of charged spherical particles are calculated in several situations. The comparisons with approximation theories show quantitative agreements for dilute dispersions without any empirical parameters.
Newton's method for large bound-constrained optimization problems.
Lin, C.-J.; More, J. J.; Mathematics and Computer Science
1999-01-01
We analyze a trust region version of Newton's method for bound-constrained problems. Our approach relies on the geometry of the feasible set, not on the particular representation in terms of constraints. The convergence theory holds for linearly constrained problems and yields global and superlinear convergence without assuming either strict complementarity or linear independence of the active constraints. We also show that the convergence theory leads to an efficient implementation for large bound-constrained problems.
Approximate Newton-type methods via theory of control
NASA Astrophysics Data System (ADS)
Yap, Chui Ying; Leong, Wah June
2014-12-01
In this paper, we investigate the possible use of control theory, particularly theory on optimal control to derive some numerical methods for unconstrained optimization problems. Based upon this control theory, we derive a Levenberg-Marquardt-like method that guarantees greatest descent in a particular search region. The implementation of this method in its original form requires inversion of a non-sparse matrix or equivalently solving a linear system in every iteration. Thus, an approximation of the proposed method via quasi-Newton update is constructed. Numerical results indicate that the new method is more effective and practical.
Newton like: Minimal residual methods applied to transonic flow calculations
NASA Technical Reports Server (NTRS)
Wong, Y. S.
1984-01-01
A computational technique for the solution of the full potential equation is presented. The method consists of outer and inner iterations. The outer iterate is based on a Newton like algorithm, and a preconditioned Minimal Residual method is used to seek an approximate solution of the system of linear equations arising at each inner iterate. The present iterative scheme is formulated so that the uncertainties and difficulties associated with many iterative techniques, namely the requirements of acceleration parameters and the treatment of additional boundary conditions for the intermediate variables, are eliminated. Numerical experiments based on the new method for transonic potential flows around the NACA 0012 airfoil at different Mach numbers and different angles of attack are presented, and these results are compared with those obtained by the Approximate Factorization technique. Extention to three dimensional flow calculations and application in finite element methods for fluid dynamics problems by the present method are also discussed. The Inexact Newton like method produces a smoother reduction in the residual norm, and the number of supersonic points and circulations are rapidly established as the number of iterations is increased.
Application of Newton modified barrier method (NMBM) to structural optimization
NASA Technical Reports Server (NTRS)
Khot, N. S.; Polyak, R.; Schneur, R.; Berke, L.
1992-01-01
This paper presents the application of the NMBM to obtain a minimum weight structure with constraints on displacements and minimum sizes. The solution to the problem is obtained via minimizing the Modified Barrier Function (MBF) at each step by using the Newton Method and updating Lagrange multipliers. The Lagrange multipliers are updated by using the value of the constraints at the minimum of the MBF. Three truss problems with a different number of design variables are solved. The convergence to the minimum weight design was found to be monotonic and the algorithm is potentially robust for solving problems with a large number of design variables.
Puso, M A; Laursen, T A
2002-05-02
Smoothing of contact surfaces can be used to eliminate the chatter typically seen with node on facet contact and give a better representation of the actual contact surface. The latter affect is well demonstrated for problems with interference fits. In this work we present two methods for the smoothing of contact surfaces for 3D finite element contact. In the first method, we employ Gregory patches to smooth the faceted surface in a node on facet implementation. In the second method, we employ a Bezier interpolation of the faceted surface in a mortar method implementation of contact. As is well known, node on facet approaches can exhibit locking due to the failure of the Babuska-Brezzi condition and in some instances fail the patch test. The mortar method implementation is stable and provides optimal convergence in the energy of error. In the this work we demonstrate the superiority of the smoothed versus the non-smoothed node on facet implementations. We also show where the node on facet method fails and some results from the smoothed mortar method implementation.
Preconditioning Newton-Krylor Methods for Variably Saturated Flow
Woodward, C.; Jones, J
2000-01-07
In this paper, we compare the effectiveness of three preconditioning strategies in simulations of variably saturated flow. Using Richards' equation as our model, we solve the nonlinear system using a Newton-Krylov method. Since Krylov solvers can stagnate, resulting in slow convergence, we investigate different strategies of preconditioning the Jacobian system. Our work uses a multigrid method to solve the preconditioning systems, with three different approximations to the Jacobian matrix. One approximation lags the nonlinearities, the second results from discarding selected off-diagonal contributions, and the third matrix considered is the full Jacobian. Results indicate that although the Jacobian is more accurate, its usage as a preconditioning matrix should be limited, as it requires much more storage than the simpler approximations. Also, simply lagging the nonlinearities gives a preconditioning matrix that is almost as effective as the full Jacobian but much easier to compute.
Parallel full-waveform inversion in the frequency domain by the Gauss-Newton method
NASA Astrophysics Data System (ADS)
Zhang, Wensheng; Zhuang, Yuan
2016-06-01
In this paper, we investigate the full-waveform inversion in the frequency domain. We first test the inversion ability of three numerical optimization methods, i.e., the steepest-descent method, the Newton-CG method and the Gauss- Newton method, for a simple model. The results show that the Gauss-Newton method performs well and efficiently. Then numerical computations for a benchmark model named Marmousi model by the Gauss-Newton method are implemented. Parallel algorithm based on message passing interface (MPI) is applied as the inversion is a typical large-scale computational problem. Numerical computations show that the Gauss-Newton method has good ability to reconstruct the complex model.
Multiple predictor smoothing methods for sensitivity analysis.
Helton, Jon Craig; Storlie, Curtis B.
2006-08-01
The use of multiple predictor smoothing methods in sampling-based sensitivity analyses of complex models is investigated. Specifically, sensitivity analysis procedures based on smoothing methods employing the stepwise application of the following nonparametric regression techniques are described: (1) locally weighted regression (LOESS), (2) additive models, (3) projection pursuit regression, and (4) recursive partitioning regression. The indicated procedures are illustrated with both simple test problems and results from a performance assessment for a radioactive waste disposal facility (i.e., the Waste Isolation Pilot Plant). As shown by the example illustrations, the use of smoothing procedures based on nonparametric regression techniques can yield more informative sensitivity analysis results than can be obtained with more traditional sensitivity analysis procedures based on linear regression, rank regression or quadratic regression when nonlinear relationships between model inputs and model predictions are present.
A New Newton-Like Iterative Method for Roots of Analytic Functions
ERIC Educational Resources Information Center
Otolorin, Olayiwola
2005-01-01
A new Newton-like iterative formula for the solution of non-linear equations is proposed. To derive the formula, the convergence criteria of the one-parameter iteration formula, and also the quasilinearization in the derivation of Newton's formula are reviewed. The result is a new formula which eliminates the limitations of other methods. There is…
Several methods of smoothing motion capture data
NASA Astrophysics Data System (ADS)
Qi, Jingjing; Miao, Zhenjiang; Wang, Zhifei; Zhang, Shujun
2011-06-01
Human motion capture and editing technologies are widely used in computer animation production. We can acquire original motion data by human motion capture system, and then process it by motion editing system. However, noise embed in original motion data maybe introduced by extracting the target, three-dimensional reconstruction process, optimizing algorithm and devices itself in human motion capture system. The motion data must be modified before used to make videos, otherwise the animation figures will be jerky and their behavior is unnatural. Therefore, motion smoothing is essential. In this paper, we compare and summarize three methods of smoothing original motion capture data.
Method for producing smooth inner surfaces
Cooper, Charles A.
2016-05-17
The invention provides a method for preparing superconducting cavities, the method comprising causing polishing media to tumble by centrifugal barrel polishing within the cavities for a time sufficient to attain a surface smoothness of less than 15 nm root mean square roughness over approximately a 1 mm.sup.2 scan area. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media bound to a carrier to tumble within the cavities. The method also provides for a method for preparing superconducting cavities, the method comprising causing polishing media in a slurry to tumble within the cavities.
NASA Astrophysics Data System (ADS)
Chtioui, Younes; Panigrahi, Suranjan; Marsh, Ronald A.
1998-11-01
The probabilistic neural network (PNN) is based on the estimation of the probability density functions. The estimation of these density functions uses smoothing parameters that represent the width of the activation functions. A two-step numerical procedure is developed for the optimization of the smoothing parameters of the PNN: a rough optimization by the conjugate gradient method and a fine optimization by the approximate Newton method. The thrust is to compare the classification performances of the improved PNN and the standard back-propagation neural network (BPNN). Comparisons are performed on a food quality problem: french fry classification into three different color classes (light, normal, and dark). The optimized PNN correctly classifies 96.19% of the test data, whereas the BPNN classifies only 93.27% of the same data. Moreover, the PNN is more stable than the BPNN with regard to the random initialization. The optimized PNN requires 1464 s for training compared to only 71 s required by the BPNN.
Solving Cocoa Pod Sigmoid Growth Model with Newton Raphson Method
NASA Astrophysics Data System (ADS)
Chang, Albert Ling Sheng; Maisin, Navies
Cocoa pod growth modelling are useful in crop management, pest and disease management and yield forecasting. Recently, the Beta Growth Function has been used to determine the pod growth model due to its unique for the plant organ growth which is zero growth rate at both the start and end of a precisely defined growth period. Specific pod size (7cm to 10cm in length) is useful in cocoa pod borer (CPB) management for pod sleeving or pesticide spraying. The Beta Growth Function is well-fitted to the pods growth data of four different cocoa clones under non-linear function with time (t) as its independent variable which measured pod length and diameter weekly started at 8 weeks after fertilization occur until pods ripen. However, the same pod length among the clones did not indicate the same pod age since the morphological characteristics for cocoa pods vary among the clones. Depending on pod size for all the clones as guideline in CPB management did not give information on pod age, therefore it is important to study the pod age at specific pod sizes on different clones. Hence, Newton Raphson method is used to solve the non-linear equation of the Beta Growth Function of four different group of cocoa pod at specific pod size.
A damped Newton variational inversion method for SAR wind retrieval
NASA Astrophysics Data System (ADS)
Jiang, Zhuhui; Li, Yuanxiang; Yu, Fangjie; Chen, Ge; Yu, Wenxian
2017-01-01
The variational inversion for synthetic aperture radar (SAR) wind retrieval can take all sources' errors into account, but its iterative computation is very time consuming. For the wind vectors, (u, v) components are commonly used for variational inversion, but they are not intuitive for practical applications. In this paper, we modify the decomposition of wind vectors in the cost function into wind speed and wind direction and adopt the damped Newton method (DNVAR) to solve the cost function. Experimental results on simulated data show that DNVAR can effectively reduce background wind vector errors. Additionally, the average number of iterations is reduced drastically compared to prior arts. Furthermore, a detailed comparison between direct SAR wind retrieval (DIRECT) and DNVAR is performed. Simulations reveal that the DNVAR errors are smaller than the background wind vector errors in all considered cases. Thus, DNVAR could be employed to retrieve SAR sea surface wind. For practical applications, when the background wind speed is within moderate and high wind speed range, DNVAR has higher accuracy and is thus preferred. Otherwise, both DNVAR and DIRECT are feasible, considering the unknown actual errors of both background wind vectors and geophysical model function. Experimental results on Envisat/advanced synthetic aperture radar data show that the wind speed accuracy of DIRECT is largely affected by the background wind direction errors, but DNVAR can reduce the wind direction errors with minor effect on the wind speed errors in comparison with the background wind errors.
Smooth electrode and method of fabricating same
Weaver, Stanton Earl; Kennerly, Stacey Joy; Aimi, Marco Francesco
2012-08-14
A smooth electrode is provided. The smooth electrode includes at least one metal layer having thickness greater than about 1 micron; wherein an average surface roughness of the smooth electrode is less than about 10 nm.
Newton's method: A link between continuous and discrete solutions of nonlinear problems
NASA Technical Reports Server (NTRS)
Thurston, G. A.
1980-01-01
Newton's method for nonlinear mechanics problems replaces the governing nonlinear equations by an iterative sequence of linear equations. When the linear equations are linear differential equations, the equations are usually solved by numerical methods. The iterative sequence in Newton's method can exhibit poor convergence properties when the nonlinear problem has multiple solutions for a fixed set of parameters, unless the iterative sequences are aimed at solving for each solution separately. The theory of the linear differential operators is often a better guide for solution strategies in applying Newton's method than the theory of linear algebra associated with the numerical analogs of the differential operators. In fact, the theory for the differential operators can suggest the choice of numerical linear operators. In this paper the method of variation of parameters from the theory of linear ordinary differential equations is examined in detail in the context of Newton's method to demonstrate how it might be used as a guide for numerical solutions.
Rotorcraft Smoothing Via Linear Time Periodic Methods
2007-07-01
Optimal Control Methodology for Rotor Vibration Smoothing . . 30 vii Page IV. Mathematic Foundations of Linear Time Periodic Systems . . . . 33 4.1 The...62 6.3 The Maximum Likelihood Estimator . . . . . . . . . . . 63 6.4 The Cramer-Rao Inequality . . . . . . . . . . . . . . . . 66 6.4.1 Statistical ...adjustments for vibration reduction. 2.2.2.4 1980’s to late 1990’s. Rotor vibrational reduction methods during the 1980’s began to adopt a mathematical
Global convergence of damped semismooth Newton methods for ℓ1 Tikhonov regularization
NASA Astrophysics Data System (ADS)
Hans, Esther; Raasch, Thorsten
2015-02-01
We are concerned with Tikhonov regularization of linear ill-posed problems with ℓ1 coefficient penalties. Griesse and Lorenz (2008 Inverse Problems 24 035007) proposed a semismooth Newton method for the efficient minimization of the corresponding Tikhonov functionals. In the class of high-precision solvers for such problems, semismooth Newton methods are particularly competitive due to their superlinear convergence properties and their ability to solve piecewise affine equations exactly within finitely many iterations. However, the convergence of semismooth Newton schemes is only local in general. In this work, we discuss the efficient globalization of B(ouligand)-semismooth Newton methods for ℓ1 Tikhonov regularization by means of damping strategies and suitable descent with respect to an associated merit functional. Numerical examples are provided which show that our method compares well with existing iterative, globally convergent approaches.
Comparing Three Methods for Teaching Newton's Second Law
NASA Astrophysics Data System (ADS)
Wittmann, Michael C.; Anderson, Mindi Kvaal; Smith, Trevor I.
2009-11-01
As a follow-up to a study comparing learning of Newton's Third Law when using three different forms of tutorial instruction, we have compared student learning of Newton's Second Law (NSL) when students use the Tutorials in Introductory Physics, Activity-Based Tutorials, or Open Source Tutorials. We split an algebra-based, life sciences physics course in 3 groups and measured students' pre- and post-instruction scores on the Force and Motion Conceptual Evaluation (FMCE). We look at only the NSL-related clusters of questions on the FMCE to compare students' performance and normalized gains. Students entering the course are not significantly different, and students using the Tutorials in Introductory Physics show the largest normalized gains in answering question on the FMCE correctly. These gains are significant in only one cluster of questions, the Force Sled cluster.
On the Nonmonotone Behavior of the Newton-Gmback Method
Catinas, Emil
2008-09-17
GMBACK is a Krylov solver for large linear systems, which is based on backward error minimization properties. The minimum backward error is guaranteed (in exact arithmetic) to decrease when the subspace dimension is increased. In this paper we consider two test problems which lead to nonlinear systems which we solve by the Newton-GMBACK. We notice that in floating point arithmetic the mentioned property does not longer hold; this leads to nonmonotone behavior of the errors, as reported in a previous paper. We also propose a remedy, which solves this drawback.
NASA Astrophysics Data System (ADS)
Movchan, A. A.; Brodskij, S. I.
The paper is concerned with the elastic-plastic analysis of an axisymmetric bimetal joint under loading. A system of nonlinear equations describing this elastic-plastic problem are solved by using a modified version of the Newton-Raphson method. To increase the computational efficiency, a procedure is proposed whereby the Newton-Raphson method is combined with a version of the conjugate gradient method.
Solving Nonlinear Solid Mechanics Problems with the Jacobian-Free Newton Krylov Method
J. D. Hales; S. R. Novascone; R. L. Williamson; D. R. Gaston; M. R. Tonks
2012-06-01
The solution of the equations governing solid mechanics is often obtained via Newton's method. This approach can be problematic if the determination, storage, or solution cost associated with the Jacobian is high. These challenges are magnified for multiphysics applications with many coupled variables. Jacobian-free Newton-Krylov (JFNK) methods avoid many of the difficulties associated with the Jacobian by using a finite difference approximation. BISON is a parallel, object-oriented, nonlinear solid mechanics and multiphysics application that leverages JFNK methods. We overview JFNK, outline the capabilities of BISON, and demonstrate the effectiveness of JFNK for solid mechanics and solid mechanics coupled to other PDEs using a series of demonstration problems.
Application of Newton's method to the postbuckling of rings under pressure loadings
NASA Technical Reports Server (NTRS)
Thurston, Gaylen A.
1989-01-01
The postbuckling response of circular rings (or long cylinders) is examined. The rings are subjected to four types of external pressure loadings; each type of pressure is defined by its magnitude and direction at points on the buckled ring. Newton's method is applied to the nonlinear differential equations of the exact inextensional theory for the ring problem. A zeroth approximation for the solution of the nonlinear equations, based on the mode shape corresponding to the first buckling pressure, is derived in closed form for each of the four types of pressure. The zeroth approximation is used to start the iteration cycle in Newton's method to compute numerical solutions of the nonlinear equations. The zeroth approximations for the postbuckling pressure-deflection curves are compared with the converged solutions from Newton's method and with similar results reported in the literature.
On a class of Newton-like methods for solving nonlinear equations
NASA Astrophysics Data System (ADS)
Argyros, Ioannis K.
2009-06-01
We provide a semilocal convergence analysis for a certain class of Newton-like methods considered also in [I.K. Argyros, A unifying local-semilocal convergence analysis and applications for two-point Newton-like methods in Banach space, J. Math. Anal. Appl. 298 (2004) 374-397; I.K. Argyros, Computational theory of iterative methods, in: C.K. Chui, L. Wuytack (Eds.), Series: Studies in Computational Mathematics, vol. 15, Elsevier Publ. Co, New York, USA, 2007; J.E. Dennis, Toward a unified convergence theory for Newton-like methods, in: L.B. Rall (Ed.), Nonlinear Functional Analysis and Applications, Academic Press, New York, 1971], in order to approximate a locally unique solution of an equation in a Banach space. Using a combination of Lipschitz and center-Lipschitz conditions, instead of only Lipschitz conditions [F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985) 71-84], we provide an analysis with the following advantages over the work in [F.A. Potra, Sharp error bounds for a class of Newton-like methods, Libertas Math. 5 (1985) 71-84] which improved the works in [W.E. Bosarge, P.L. Falb, A multipoint method of third order, J. Optimiz. Theory Appl. 4 (1969) 156-166; W.E. Bosarge, P.L. Falb, Infinite dimensional multipoint methods and the solution of two point boundary value problems, Numer. Math. 14 (1970) 264-286; J.E. Dennis, On the Kantorovich hypothesis for Newton's method, SIAM J. Numer. Anal. 6 (3) (1969) 493-507; J.E. Dennis, Toward a unified convergence theory for Newton-like methods, in: L.B. Rall (Ed.), Nonlinear Functional Analysis and Applications, Academic Press, New York, 1971; H.J. Kornstaedt, Ein allgemeiner Konvergenzstaz fü r verschä rfte Newton-Verfahrem, in: ISNM, vol. 28, Birkhaü ser Verlag, Basel and Stuttgart, 1975, pp. 53-69; P. Laasonen, Ein überquadratisch konvergenter iterativer algorithmus, Ann. Acad. Sci. Fenn. Ser I 450 (1969) 1-10; F.A. Potra, On a modified secant method, L'analyse num
A novel method of Newton iteration-based interval analysis for multidisciplinary systems
NASA Astrophysics Data System (ADS)
Wang, Lei; Xiong, Chuang; Wang, RuiXing; Wang, XiaoJun; Wu, Di
2017-09-01
A Newton iteration-based interval uncertainty analysis method (NI-IUAM) is proposed to analyze the propagating effect of interval uncertainty in multidisciplinary systems. NI-IUAM decomposes one multidisciplinary system into single disciplines and utilizes a Newton iteration equation to obtain the upper and lower bounds of coupled state variables at each iterative step. NI-IUAM only needs to determine the bounds of uncertain parameters and does not require specific distribution formats. In this way, NI-IUAM may greatly reduce the necessity for raw data. In addition, NI-IUAM can accelerate the convergence process as a result of the super-linear convergence of Newton iteration. The applicability of the proposed method is discussed, in particular that solutions obtained in each discipline must be compatible in multidisciplinary systems. The validity and efficiency of NI-IUAM is demonstrated by both numerical and engineering examples.
Implementing a matrix-free Newton-Krylov method in NorESM
NASA Astrophysics Data System (ADS)
Pilskog, Ingjald; Khatiwala, Samar; Tjiputra, Jerry
2017-04-01
Quasi-equilibrium ocean biogeochemistry states in Earth system models require prohibitively demanding computational time, especially for when large number of tracers are involved. This so-called spin-up typically is measured in the order of thousands of model years integration. In this study, we implement a matrix-free Newton-Krylov method (Khatiwala, 2008) in the Norwegian Earth system model (NorESM) so the spin-up time can be reduced. The idea is to construct the function F(u) = Φ(u(0),T) -u(0) = 0, which can be expressed as a matrix in which we can apply Newton-Krylov methods to find the quasi-equilibrium states. Unfortunately the interconnectivity and complexity of the processes leads to a dense matrix, making it expensive and impractical to calculate the necessary Jacobian, J = ∂F/∂u. The Newton-Krylov method remedies this issue through solving the matrix-vector product Jδu, that can be approximated by (F (un + σδun)- F (un- 1))/σ. The differencing parameterσ is typical chosen dynamically, and n is the iteration index. Matrix-free Newton-Krylov method requires a good preconditioner to improve the convergence rate. By exploiting the inherent locality of the advection-diffusion operator, and that in most biogeochemical models, the source/sink term at a grid point depends only on tracer concentrations in the same vertical column, we obtain a good, sparse preconditioner. The performance of this preconditioner can be improved again by applying both outer Broyden updates during the Newton steps and inner Broyden updates during the Krylov steps. Khatiwala, S., 2008. Fast spin up of ocean biogeochemical models using matrix-free Newton-Krylov. Ocean Model. 23 (3-4), 121-129.
Newton's method for nonlinear stochastic wave equations driven by one-dimensional Brownian motion.
Leszczynski, Henryk; Wrzosek, Monika
2017-02-01
We consider nonlinear stochastic wave equations driven by one-dimensional white noise with respect to time. The existence of solutions is proved by means of Picard iterations. Next we apply Newton's method. Moreover, a second-order convergence in a probabilistic sense is demonstrated.
Markov chain Mote Carlo solution of BK equation through Newton-Kantorovich method
NASA Astrophysics Data System (ADS)
BoŻek, Krzysztof; Kutak, Krzysztof; Placzek, Wieslaw
2013-07-01
We propose a new method for Monte Carlo solution of non-linear integral equations by combining the Newton-Kantorovich method for solving non-linear equations with the Markov Chain Monte Carlo (MCMC) method for solving linear equations. The Newton-Kantorovich method allows to express the non-linear equation as a system of the linear equations which then can be treated by the MCMC (random walk) algorithm. We apply this method to the Balitsky-Kovchegov (BK) equation describing evolution of gluon density at low x. Results of numerical computations show that the MCMC method is both precise and efficient. The presented algorithm may be particularly suited for solving more complicated and higher-dimensional non-linear integral equation, for which traditional methods become unfeasible.
3D CSEM data inversion using Newton and Halley class methods
NASA Astrophysics Data System (ADS)
Amaya, M.; Hansen, K. R.; Morten, J. P.
2016-05-01
For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allow us to build the second-order derivatives of the residuals keeping the memory cost in the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that the convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and the super-Halley schemes are either similar or slightly superior with respect to the convergence speed of the GN scheme, close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those
Mohamad, Mohd Saberi; Abdullah, Afnizanfaizal
2015-01-01
This paper presents an in silico optimization method of metabolic pathway production. The metabolic pathway can be represented by a mathematical model known as the generalized mass action model, which leads to a complex nonlinear equations system. The optimization process becomes difficult when steady state and the constraints of the components in the metabolic pathway are involved. To deal with this situation, this paper presents an in silico optimization method, namely the Newton Cooperative Genetic Algorithm (NCGA). The NCGA used Newton method in dealing with the metabolic pathway, and then integrated genetic algorithm and cooperative co-evolutionary algorithm. The proposed method was experimentally applied on the benchmark metabolic pathways, and the results showed that the NCGA achieved better results compared to the existing methods. PMID:25961295
A high-order fast method for computing convolution integral with smooth kernel
NASA Astrophysics Data System (ADS)
Qiang, Ji
2010-02-01
In this paper we report on a high-order fast method to numerically calculate convolution integral with smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(Nlog(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
A high-order fast method for computing convolution integral with smooth kernel
Qiang, Ji
2009-09-28
In this paper we report on a high-order fast method to numerically calculate convolution integral with smooth non-periodic kernel. This method is based on the Newton-Cotes quadrature rule for the integral approximation and an FFT method for discrete summation. The method can have an arbitrarily high-order accuracy in principle depending on the number of points used in the integral approximation and a computational cost of O(Nlog(N)), where N is the number of grid points. For a three-point Simpson rule approximation, the method has an accuracy of O(h{sup 4}), where h is the size of the computational grid. Applications of the Simpson rule based algorithm to the calculation of a one-dimensional continuous Gauss transform and to the calculation of a two-dimensional electric field from a charged beam are also presented.
NASA Technical Reports Server (NTRS)
Chapman, G.; Kirk, D.
1974-01-01
The parameter identification scheme being used is a differential correction least squares procedure (Gauss-Newton method). The position, orientation, and derivatives of these quantities with respect to the parameters of interest (i.e., sensitivity coefficients) are determined by digital integration of the equations of motion and the parametric differential equations. The application of this technique to three vastly different sets of data is used to illustrate the versatility of the method and to indicate some of the problems that still remain.
NASA Technical Reports Server (NTRS)
Chapman, G.; Kirk, D.
1974-01-01
The parameter identification scheme being used is a differential correction least squares procedure (Gauss-Newton method). The position, orientation, and derivatives of these quantities with respect to the parameters of interest (i.e., sensitivity coefficients) are determined by digital integration of the equations of motion and the parametric differential equations. The application of this technique to three vastly different sets of data is used to illustrate the versatility of the method and to indicate some of the problems that still remain.
Improved FRFT-based method for estimating the physical parameters from Newton's rings
NASA Astrophysics Data System (ADS)
Wu, Jin-Min; Lu, Ming-Feng; Tao, Ran; Zhang, Feng; Li, Yang
2017-04-01
Newton's rings are often encountered in interferometry, and in analyzing them, we can estimate the physical parameters, such as curvature radius and the rings' center. The fractional Fourier transform (FRFT) is capable of estimating these physical parameters from the rings despite noise and obstacles, but there is still a small deviation between the estimated coordinates of the rings' center and the actual values. The least-squares fitting method is popularly used for its accuracy but it is easily affected by the initial values. Nevertheless, with the estimated results from the FRFT, it is easy to meet the requirements of initial values. In this paper, the proposed method combines the advantages of the fractional Fourier transform (FRFT) with the least-squares fitting method in analyzing Newton's rings fringe patterns. Its performance is assessed by analyzing simulated and actual Newton's rings images. The experimental results show that the proposed method is capable of estimating the parameters in the presence of noise and obstacles. Under the same conditions, the estimation results are better than those obtained with the original FRFT-based method, especially for the rings' center. Some applications are shown to illustrate that the improved FRFT-based method is an important technique for interferometric measurements.
Kim, T; Pasciak, J E; Vassilevski, P S
2004-09-20
In this paper, we consider an inexact Newton method applied to a second order nonlinear problem with higher order nonlinearities. We provide conditions under which the method has a mesh-independent rate of convergence. To do this, we are required to first, set up the problem on a scale of Hilbert spaces and second, to devise a special iterative technique which converges in a higher than first order Sobolev norm. We show that the linear (Jacobian) system solved in Newton's method can be replaced with one iterative step provided that the initial nonlinear iterate is accurate enough. The closeness criteria can be taken independent of the mesh size. Finally, the results of numerical experiments are given to support the theory.
Short Communication: A Parallel Newton-Krylov Method for Navier-Stokes Rotorcraft Codes
NASA Astrophysics Data System (ADS)
Ekici, Kivanc; Lyrintzis, Anastasios S.
2003-05-01
The application of Krylov subspace iterative methods to unsteady three-dimensional Navier-Stokes codes on massively parallel and distributed computing environments is investigated. Previously, the Euler mode of the Navier-Stokes flow solver Transonic Unsteady Rotor Navier-Stokes (TURNS) has been coupled with a Newton-Krylov scheme which uses two Conjugate-Gradient-like (CG) iterative methods. For the efficient implementation of Newton-Krylov methods to the Navier-Stokes mode of TURNS, efficient preconditioners must be used. Parallel implicit operators are used and compared as preconditioners. Results are presented for two-dimensional and three-dimensional viscous cases. The Message Passing Interface (MPI) protocol is used, because of its portability to various parallel architectures.
Truncated Newton-Raphson Methods for Quasicontinuum Simulations
2006-05-01
ed.; The Johns Hopkins University Press: Baltimore, MD, 1993. 11. Nash, S. G.; Nocedal , J. A Numerical Study of the Limited Memory BFGS Method and... Nocedal , J. Theory of Algorithms for Unconstrained Optimization. Acta Numerica 1992, 199–242. 16. Dembo, R. S.; Eisenstat, S. C.; Steihaug, T
Comparing Three Methods for Teaching Newton's Third Law
ERIC Educational Resources Information Center
Smith, Trevor I.; Wittman, Michael C.
2007-01-01
Although guided-inquiry methods for teaching introductory physics have been individually shown to be more effective at improving conceptual understanding than traditional lecture-style instruction, researchers in physics education have not studied differences among reform-based curricula in much detail. Several researchers have developed…
Modeling of hydrogen-assisted cracking in iron crystal using a quasi-Newton method.
Telitchev, Igor Ye; Vinogradov, Oleg
2008-07-01
A Quasi-Newton method was applied in the context of a molecular statics approach to simulate the phenomenon of hydrogen embrittlement of an iron lattice. The atomic system is treated as a truss-type structure. The interatomic forces between the hydrogen-iron and the iron-iron atoms are defined by Morse and modified Morse potential functions, respectively. Two-dimensional hexagonal and 3D bcc crystal structures were subjected to tensile numerical tests. It was shown that the Inverse Broyden's Algorithm-a quasi-Newton method-provides a computationally efficient technique for modeling of the hydrogen-assisted cracking in iron crystal. Simulation results demonstrate that atoms of hydrogen placed near the crack tip produce a strong deformation and crack propagation effect in iron lattice, leading to a decrease in the residual strength of numerically tested samples.
NASA Technical Reports Server (NTRS)
Achar, N. S.; Gaonkar, G. H.
1993-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
NASA Technical Reports Server (NTRS)
Achar, N. S.; Gaonkar, G. H.
1993-01-01
Helicopter trim settings of periodic initial state and control inputs are investigated for convergence of Newton iteration in computing the settings sequentially and in parallel. The trim analysis uses a shooting method and a weak version of two temporal finite element methods with displacement formulation and with mixed formulation of displacements and momenta. These three methods broadly represent two main approaches of trim analysis: adaptation of initial-value and finite element boundary-value codes to periodic boundary conditions, particularly for unstable and marginally stable systems. In each method, both the sequential and in-parallel schemes are used, and the resulting nonlinear algebraic equations are solved by damped Newton iteration with an optimally selected damping parameter. The impact of damped Newton iteration, including earlier-observed divergence problems in trim analysis, is demonstrated by the maximum condition number of the Jacobian matrices of the iterative scheme and by virtual elimination of divergence. The advantages of the in-parallel scheme over the conventional sequential scheme are also demonstrated.
Fattebert, J
2008-07-29
We describe an iterative algorithm to solve electronic structure problems in Density Functional Theory. The approach is presented as a Subspace Accelerated Inexact Newton (SAIN) solver for the non-linear Kohn-Sham equations. It is related to a class of iterative algorithms known as RMM-DIIS in the electronic structure community. The method is illustrated with examples of real applications using a finite difference discretization and multigrid preconditioning.
Modified Newton-Raphson GRAPE methods for optimal control of spin systems
NASA Astrophysics Data System (ADS)
Goodwin, D. L.; Kuprov, Ilya
2016-05-01
Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
Cooley, R.L.; Hill, M.C.
1992-01-01
Three methods of solving nonlinear least-squares problems were compared for robustness and efficiency using a series of hypothetical and field problems. A modified Gauss-Newton/full Newton hybrid method (MGN/FN) and an analogous method for which part of the Hessian matrix was replaced by a quasi-Newton approximation (MGN/QN) solved some of the problems with appreciably fewer iterations than required using only a modified Gauss-Newton (MGN) method. In these problems, model nonlinearity and a large variance for the observed data apparently caused MGN to converge more slowly than MGN/FN or MGN/QN after the sum of squared errors had almost stabilized. Other problems were solved as efficiently with MGN as with MGN/FN or MGN/QN. Because MGN/FN can require significantly more computer time per iteration and more computer storage for transient problems, it is less attractive for a general purpose algorithm than MGN/QN.
Acceleration of k-Eigenvalue / Criticality Calculations using the Jacobian-Free Newton-Krylov Method
Dana Knoll; HyeongKae Park; Chris Newman
2011-02-01
We present a new approach for the $k$--eigenvalue problem using a combination of classical power iteration and the Jacobian--free Newton--Krylov method (JFNK). The method poses the $k$--eigenvalue problem as a fully coupled nonlinear system, which is solved by JFNK with an effective block preconditioning consisting of the power iteration and algebraic multigrid. We demonstrate effectiveness and algorithmic scalability of the method on a 1-D, one group problem and two 2-D two group problems and provide comparison to other efforts using silmilar algorithmic approaches.
A Perturbation Based Chaotic System Exploiting the Quasi-Newton Method for Global Optimization
NASA Astrophysics Data System (ADS)
Tatsumi, Keiji; Tanino, Tetsuzo
The chaotic system has been exploited in metaheuristic methods of solving continuous global optimization problems. Recently, the gradient method with perturbation (GP) was proposed, which was derived from the steepest descent method for the problem with additional perturbation terms, and it was reported that chaotic metaheuristics with the GP have good performances of solving some benchmark problems. Moreover, the sufficient condition of its parameter values was theoretically shown under which its updating system is chaotic. However, the sufficient condition of its chaoticity and the width of strange attractor around each local minimum, which are important properties for exploiting the chaotic system in optimization, deeply depend on the eigenvalues of the Hessian matrix of the objective function at the local minimum. Thus, if the eigenvalues of different local minima are widely different from each other, or if it is different in different problems, such properties can cause the difficulty of selecting appropriate parameter values for an effective search. Therefore, in this paper, we propose modified GPs based on the quasi-Newton method instead of the steepest descent method, where their chaoticities and the width of strange attractor do not depend on the eigenvalue of the Hessian matrix at any local minimum due to the scale invariant of the quasi-Newton method. In addition, we empirically demonstrate that the parameter selection of the proposed methods is easier than the original GP, especially with respect to the step-size, and the chaotic metaheuristics with the proposed methods can find better solutions for some multimodal functions.
A speciation solver for cement paste modeling and the semismooth Newton method
Georget, Fabien; Prévost, Jean H.; Vanderbei, Robert J.
2015-02-15
The mineral assemblage of a cement paste may vary considerably with its environment. In addition, the water content of a cement paste is relatively low and the ionic strength of the interstitial solution is often high. These conditions are extreme conditions with respect to the common assumptions made in speciation problem. Furthermore the common trial and error algorithm to find the phase assemblage does not provide any guarantee of convergence. We propose a speciation solver based on a semismooth Newton method adapted to the thermodynamic modeling of cement paste. The strong theoretical properties associated with these methods offer practical advantages. Results of numerical experiments indicate that the algorithm is reliable, robust, and efficient.
Newton-like minimal residual methods applied to transonic flow calculations
NASA Technical Reports Server (NTRS)
Wong, Y. S.
1985-01-01
A computational technique for the solution of the full potential equation is presented. The method consists of outer and inner iterations. The outer iterate is based on a Newton like algorithm, and a preconditioned Minimal Residual method is used to seek an approximate solution of the system of linear equations arising at each inner iterate. The present iterative scheme is formulated so that the uncertainties and difficulties associated with many iterative techniques, namely the requirements of acceleration parameters and the treatment of additional boundary conditions for the intermediate variables, are eliminated. Numerical experiments based on the new method for transonic potential flows around the NACA 0012 airfoil at different Mach numbers and different angles of attack are presented, and these results are compared with those obtained by the Approximate Factorization technique. Extention to three dimensional flow calculations and application in finite element methods for fluid dynamics problems by the present method are also discussed. The Inexact Newton like method produces a smoother reduction in the residual norm, and the number of supersonic points and circulations are rapidly established as the number of iterations is increased.
A growing string method for the reaction pathway defined by a Newton trajectory
NASA Astrophysics Data System (ADS)
Quapp, Wolfgang
2005-05-01
The reaction path is an important concept of theoretical chemistry. We use a projection operator for the following of the Newton trajectory (NT) along the reaction valley of the potential energy surface. We describe the numerical scheme for the string method, adapting the proposal of a growing string (GS) by [Peters et al.,J. Chem. Phys. 120, 7877 (2004)]. The combination of the Newton projector and the growing string idea is an improvement of both methods, and a great saving of the number of iterations needed to find the pathway over the saddle point. This combination GS-NT is at the best of our knowledge new. We employ two different corrector methods: first, the use of projected gradient steps, and second a conjugated gradient method, the CG+ method of Liu, Nocedal, and Waltz, generalized by projectors. The executed examples are Lennard-Jones clusters, LJ7 and LJ22, and an N-methyl-alanyl-acetamide (alanine dipeptide) rearrangement between the minima C7ax and C5. For the latter, the growing stŕing calculation is interfaced with the GASSIAN03 quantum chemical software package.
A growing string method for the reaction pathway defined by a Newton trajectory.
Quapp, Wolfgang
2005-05-01
The reaction path is an important concept of theoretical chemistry. We use a projection operator for the following of the Newton trajectory (NT) along the reaction valley of the potential energy surface. We describe the numerical scheme for the string method, adapting the proposal of a growing string (GS) by [Peters et al.,J. Chem. Phys. 120, 7877 (2004)]. The combination of the Newton projector and the growing string idea is an improvement of both methods, and a great saving of the number of iterations needed to find the pathway over the saddle point. This combination GS-NT is at the best of our knowledge new. We employ two different corrector methods: first, the use of projected gradient steps, and second a conjugated gradient method, the CG+ method of Liu, Nocedal, and Waltz, generalized by projectors. The executed examples are Lennard-Jones clusters, LJ(7) and LJ(22), and an N-methyl-alanyl-acetamide (alanine dipeptide) rearrangement between the minima C7(ax) and C5. For the latter, the growing string calculation is interfaced with the GASSIAN03 quantum chemical software package.
Convergence Properties of the Regularized Newton Method for the Unconstrained Nonconvex Optimization
Ueda, Kenji Yamashita, Nobuo
2010-08-15
The regularized Newton method (RNM) is one of the efficient solution methods for the unconstrained convex optimization. It is well-known that the RNM has good convergence properties as compared to the steepest descent method and the pure Newton's method. For example, Li, Fukushima, Qi and Yamashita showed that the RNM has a quadratic rate of convergence under the local error bound condition. Recently, Polyak showed that the global complexity bound of the RNM, which is the first iteration k such that -parallel {nabla}f(x{sub k})-parallel {<=}{epsilon}, is O({epsilon}{sup -4}), where f is the objective function and {epsilon} is a given positive constant. In this paper, we consider a RNM extended to the unconstrained 'nonconvex' optimization. We show that the extended RNM (E-RNM) has the following properties. (a) The E-RNM has a global convergence property under appropriate conditions. (b) The global complexity bound of the E-RNM is O({epsilon}{sup -2}) if {nabla}{sup 2}f is Lipschitz continuous on a certain compact set. (c) The E-RNM has a superlinear rate of convergence under the local error bound condition.
Postprocessing Fourier spectral methods: The case of smooth solutions
Garcia-Archilla, B.; Novo, J.; Titi, E.S.
1998-11-01
A postprocessing technique to improve the accuracy of Galerkin methods, when applied to dissipative partial differential equations, is examined in the particular case of smooth solutions. Pseudospectral methods are shown to perform poorly. This performance is analyzed and a refined postprocessing technique is proposed.
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
A method of smoothed particle hydrodynamics using spheroidal kernels
NASA Technical Reports Server (NTRS)
Fulbright, Michael S.; Benz, Willy; Davies, Melvyn B.
1995-01-01
We present a new method of three-dimensional smoothed particle hydrodynamics (SPH) designed to model systems dominated by deformation along a preferential axis. These systems cause severe problems for SPH codes using spherical kernels, which are best suited for modeling systems which retain rough spherical symmetry. Our method allows the smoothing length in the direction of the deformation to evolve independently of the smoothing length in the perpendicular plane, resulting in a kernel with a spheroidal shape. As a result the spatial resolution in the direction of deformation is significantly improved. As a test case we present the one-dimensional homologous collapse of a zero-temperature, uniform-density cloud, which serves to demonstrate the advantages of spheroidal kernels. We also present new results on the problem of the tidal disruption of a star by a massive black hole.
NASA Technical Reports Server (NTRS)
Bailey, Harry E.; Beam, Richard M.
1991-01-01
Finite-difference approximations for steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are presently solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonnic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
NASA Astrophysics Data System (ADS)
Tang, Peipei; Wang, Chengjing; Dai, Xiaoxia
2016-04-01
In this paper, we propose a majorized Newton-CG augmented Lagrangian-based finite element method for 3D elastic frictionless contact problems. In this scheme, we discretize the restoration problem via the finite element method and reformulate it to a constrained optimization problem. Then we apply the majorized Newton-CG augmented Lagrangian method to solve the optimization problem, which is very suitable for the ill-conditioned case. Numerical results demonstrate that the proposed method is a very efficient algorithm for various large-scale 3D restorations of geological models, especially for the restoration of geological models with complicated faults.
Likelihood Methods for Adaptive Filtering and Smoothing. Technical Report #455.
ERIC Educational Resources Information Center
Butler, Ronald W.
The dynamic linear model or Kalman filtering model provides a useful methodology for predicting the past, present, and future states of a dynamic system, such as an object in motion or an economic or social indicator that is changing systematically with time. Recursive likelihood methods for adaptive Kalman filtering and smoothing are developed.…
Cofaru, Corneliu; Philips, Wilfried; Van Paepegem, Wim
2010-11-20
The two-dimensional in-plane displacement and strain calculation problem through digital image processing methods has been studied extensively in the past three decades. Out of the various algorithms developed, the Newton-Raphson partial differential correction method performs the best quality wise and is the most widely used in practical applications despite its higher computational cost. The work presented in this paper improves the original algorithm by including adaptive spatial regularization in the minimization process used to obtain the motion data. Results indicate improvements in the strain accuracy for both small and large strains. The improvements become even more significant when employing small displacement and strain window sizes, making the new method highly suitable for situations where the underlying strain data presents both slow and fast spatial variations or contains highly localized discontinuities.
NASA Astrophysics Data System (ADS)
Hoppe, R. H. W.; Linsenmann, C.
2012-05-01
The immersed boundary method (IB) is known as a powerful technique for the numerical solution of fluid-structure interaction problems as, for instance, the motion and deformation of viscoelastic bodies immersed in an external flow. It is based on the treatment of the flow equations within an Eulerian framework and of the equations of motion of the immersed bodies with respect to a Lagrangian coordinate system including interaction equations providing the transfer between both frames. The classical IB uses finite differences, but the IBM can be set up within a finite element approach in the spatial variables as well (FE-IB). The discretization in time usually relies on the Backward Euler (BE) method for the semidiscretized flow equations and the Forward Euler (FE) method for the equations of motion of the immersed bodies. The BE/FE FE-IB is subject to a CFL-type condition, whereas the fully implicit BE/BE FE-IB is unconditionally stable. The latter one can be solved numerically by Newton-type methods whose convergence properties are dictated by an appropriate choice of the time step size, in particular, if one is faced with sudden changes in the total energy of the system. In this paper, taking advantage of the well developed affine covariant convergence theory for Newton-type methods, we study a predictor-corrector continuation strategy in time with an adaptive choice of the continuation steplength. The feasibility of the approach and its superiority to BE/FE FE-IB is illustrated by two representative numerical examples.
1979-08-01
theorem of Kantoravich [531 as given by Henrici [541, An additional discussion of the theorem and its application to the ! 1 I 29 Newton method may be...is a matrix, then n I1-11 - max . lb I (3.13) 1i n j1 bij We next cite the following lemma due to Banach ( Henrici [54], pp. 365). Lemma: Let B be a...details of this proof here; it may be found in Henrici 54], pp. 366. Let us determine how this theorem applies to the modified Newton method as compared
Moschetti, Morgan P.; Mueller, Charles S.; Boyd, Oliver S.; Petersen, Mark D.
2014-01-01
In anticipation of the update of the Alaska seismic hazard maps (ASHMs) by the U. S. Geological Survey, we report progress on the comparison of smoothed seismicity models developed using fixed and adaptive smoothing algorithms, and investigate the sensitivity of seismic hazard to the models. While fault-based sources, such as those for great earthquakes in the Alaska-Aleutian subduction zone and for the ~10 shallow crustal faults within Alaska, dominate the seismic hazard estimates for locations near to the sources, smoothed seismicity rates make important contributions to seismic hazard away from fault-based sources and where knowledge of recurrence and magnitude is not sufficient for use in hazard studies. Recent developments in adaptive smoothing methods and statistical tests for evaluating and comparing rate models prompt us to investigate the appropriateness of adaptive smoothing for the ASHMs. We develop smoothed seismicity models for Alaska using fixed and adaptive smoothing methods and compare the resulting models by calculating and evaluating the joint likelihood test. We use the earthquake catalog, and associated completeness levels, developed for the 2007 ASHM to produce fixed-bandwidth-smoothed models with smoothing distances varying from 10 to 100 km and adaptively smoothed models. Adaptive smoothing follows the method of Helmstetter et al. and defines a unique smoothing distance for each earthquake epicenter from the distance to the nth nearest neighbor. The consequence of the adaptive smoothing methods is to reduce smoothing distances, causing locally increased seismicity rates, where seismicity rates are high and to increase smoothing distances where seismicity is sparse. We follow guidance from previous studies to optimize the neighbor number (n-value) by comparing model likelihood values, which estimate the likelihood that the observed earthquake epicenters from the recent catalog are derived from the smoothed rate models. We compare likelihood
Systems identification using a modified Newton-Raphson method: A FORTRAN program
NASA Technical Reports Server (NTRS)
Taylor, L. W., Jr.; Iliff, K. W.
1972-01-01
A FORTRAN program is offered which computes a maximum likelihood estimate of the parameters of any linear, constant coefficient, state space model. For the case considered, the maximum likelihood estimate can be identical to that which minimizes simultaneously the weighted mean square difference between the computed and measured response of a system and the weighted square of the difference between the estimated and a priori parameter values. A modified Newton-Raphson or quasilinearization method is used to perform the minimization which typically requires several iterations. A starting technique is used which insures convergence for any initial values of the unknown parameters. The program and its operation are described in sufficient detail to enable the user to apply the program to his particular problem with a minimum of difficulty.
Preconditioning strategies for nonlinear conjugate gradient methods, based on quasi-Newton updates
NASA Astrophysics Data System (ADS)
Andrea, Caliciotti; Giovanni, Fasano; Massimo, Roma
2016-10-01
This paper reports two proposals of possible preconditioners for the Nonlinear Conjugate Gradient (NCG) method, in large scale unconstrained optimization. On one hand, the common idea of our preconditioners is inspired to L-BFGS quasi-Newton updates, on the other hand we aim at explicitly approximating in some sense the inverse of the Hessian matrix. Since we deal with large scale optimization problems, we propose matrix-free approaches where the preconditioners are built using symmetric low-rank updating formulae. Our distinctive new contributions rely on using information on the objective function collected as by-product of the NCG, at previous iterations. Broadly speaking, our first approach exploits the secant equation, in order to impose interpolation conditions on the objective function. In the second proposal we adopt and ad hoc modified-secant approach, in order to possibly guarantee some additional theoretical properties.
Recurrence relations for a Newton-like method in Banach spaces
NASA Astrophysics Data System (ADS)
Parida, P. K.; Gupta, D. K.
2007-09-01
The convergence of iterative methods for solving nonlinear operator equations in Banach spaces is established from the convergence of majorizing sequences. An alternative approach is developed to establish this convergence by using recurrence relations. For example, the recurrence relations are used in establishing the convergence of Newton's method [L.B. Rall, Computational Solution of Nonlinear Operator Equations, Robert E. Krieger, New York, 1979] and the third order methods such as Halley's, Chebyshev's and super Halley's [V. Candela, A. Marquina, Recurrence relations for rational cubic methods I: the Halley method, Computing 44 (1990) 169-184; V. Candela, A. Marquina, Recurrence relations for rational cubic methods II: the Halley method, Computing 45 (1990) 355-367; J.A. Ezquerro, M.A. Hernandez, Recurrence relations for Chebyshev-type methods, Appl. Math. Optim. 41 (2000) 227-236; J.M. Gutierrez, M.A. Hernandez, Third-order iterative methods for operators with bounded second derivative, J. Comput. Appl. Math. 82 (1997) 171-183; J.M. Gutierrez, M.A. Hernandez, Recurrence relations for the Super-Halley method, Comput. Math. Appl. 7(36) (1998) 1-8; M.A. Hernandez, Chebyshev's approximation algorithms and applications, Comput. Math. Appl. 41 (2001) 433-445 [10
A new axial smoothing method based on elastic mapping
Yang, J.; Huang, S.C.; Czernin, J.; Wolfenden, P.; Dahlbom, M.; Hoh, C.K.; Phelps, M.E.; Lin, K.P.
1996-12-01
New positron emission tomography (PET) scanners have higher axial and in-plane spatial resolutions but at the expense of reduced per plane sensitivity, which prevents the higher resolution from being fully realized. Normally, Gaussian-weighted interplane axial smoothing is used to reduce noise. In this study, the authors developed a new algorithm that first elastically maps adjacent planes, and then the mapped images are smoothed axially to reduce the image noise level. Compared to those obtained by the conventional axial-directional smoothing method, the images by the new method have improved signal-to-noise ratio. To quantify the signal-to-noise improvement, both simulated and real cardiac PET images were studied. Various Hanning reconstruction filters with cutoff frequency = 0.5, 0.7, 1.0x Nyquist frequency and Ramp filter were tested on simulated images. Effective in-plane resolution was measured by the effective global Gaussian resolution (EGGR) and noise reduction was evaluated by the cross-correlation coefficient. Results showed that the new method was robust to various noise levels and indicated larger noise reduction or better image feature preservation (i.e., smaller EGGR) than by the conventional method.
Regionally Smoothed Meta-Analysis Methods for GWAS Datasets.
Begum, Ferdouse; Sharker, Monir H; Sherman, Stephanie L; Tseng, George C; Feingold, Eleanor
2016-02-01
Genome-wide association studies are proven tools for finding disease genes, but it is often necessary to combine many cohorts into a meta-analysis to detect statistically significant genetic effects. Often the component studies are performed by different investigators on different populations, using different chips with minimal SNPs overlap. In some cases, raw data are not available for imputation so that only the genotyped single nucleotide polymorphisms (SNPs) results can be used in meta-analysis. Even when SNP sets are comparable, different cohorts may have peak association signals at different SNPs within the same gene due to population differences in linkage disequilibrium or environmental interactions. We hypothesize that the power to detect statistical signals in these situations will improve by using a method that simultaneously meta-analyzes and smooths the signal over nearby markers. In this study, we propose regionally smoothed meta-analysis methods and compare their performance on real and simulated data. © 2015 WILEY PERIODICALS, INC.
REGIONALLY SMOOTHED META-ANALYSIS METHODS FOR GWAS DATASETS
Begum, Ferdouse; Sharker, Monir H.; Sherman, Stephanie L.; Tseng, George C.; Feingold, Eleanor
2015-01-01
Genome-wide association studies (GWAS) are proven tools for finding disease genes, but it is often necessary to combine many cohorts into a meta-analysis to detect statistically significant genetic effects. Often the component studies are performed by different investigators on different populations, using different chips with minimal SNPs overlap. In some cases, raw data are not available for imputation so that only the genotyped SNP results can be used in meta-analysis. Even when SNP sets are comparable, different cohorts may have peak association signals at different SNPs within the same gene due to population differences in linkage disequilibrium or environmental interactions. We hypothesize that the power to detect statistical signals in these situations will improve by using a method that simultaneously meta-analyzes and smooths the signal over nearby markers. In this study we propose regionally smoothed meta-analysis (RSM) methods and compare their performance on real and simulated data. PMID:26707090
Chemical method for producing smooth surfaces on silicon wafers
Yu, Conrad
2003-01-01
An improved method for producing optically smooth surfaces in silicon wafers during wet chemical etching involves a pre-treatment rinse of the wafers before etching and a post-etching rinse. The pre-treatment with an organic solvent provides a well-wetted surface that ensures uniform mass transfer during etching, which results in optically smooth surfaces. The post-etching treatment with an acetic acid solution stops the etching instantly, preventing any uneven etching that leads to surface roughness. This method can be used to etch silicon surfaces to a depth of 200 .mu.m or more, while the finished surfaces have a surface roughness of only 15-50 .ANG. (RMS).
ERIC Educational Resources Information Center
Geiger, H. Bruce
Compared were inductive programed, deductive programed, and conventional lecture-question methods of instruction related to Newton's Second Law of Motion on outcome gains including recall of factual information, ability to solve mathematical problems, and retention. Some 266 students in three schools participated and were compared for…
ERIC Educational Resources Information Center
Geiger, H. Bruce
Compared were inductive programed, deductive programed, and conventional lecture-question methods of instruction related to Newton's Second Law of Motion on outcome gains including recall of factual information, ability to solve mathematical problems, and retention. Some 266 students in three schools participated and were compared for…
Smoothed particle hydrodynamics method from a large eddy simulation perspective
NASA Astrophysics Data System (ADS)
Di Mascio, A.; Antuono, M.; Colagrossi, A.; Marrone, S.
2017-03-01
The Smoothed Particle Hydrodynamics (SPH) method, often used for the modelling of the Navier-Stokes equations by a meshless Lagrangian approach, is revisited from the point of view of Large Eddy Simulation (LES). To this aim, the LES filtering procedure is recast in a Lagrangian framework by defining a filter that moves with the positions of the fluid particles at the filtered velocity. It is shown that the SPH smoothing procedure can be reinterpreted as a sort of LES Lagrangian filtering, and that, besides the terms coming from the LES convolution, additional contributions (never accounted for in the SPH literature) appear in the equations when formulated in a filtered fashion. Appropriate closure formulas are derived for the additional terms and a preliminary numerical test is provided to show the main features of the proposed LES-SPH model.
NASA Astrophysics Data System (ADS)
Alkharji, Mohammed N.
Most fracture characterization methods provide a general description of the fracture parameters as part of the reservoirs parameters; the fracture interaction and geometry within the reservoir is given less attention. T-Matrix and Linear Slip effective medium fracture models are implemented to invert the elastic tensor for the parameters and geometries of the fractures within the reservoir. The fracture inverse problem has an ill-posed, overdetermined, underconstrained rank-deficit system of equations. Least-squares inverse methods are used to solve the problem. A good starting initial model for the parameters is a key factor in the reliability of the inversion. Most methods assume that the starting parameters are close to the solution to avoid inaccurate local minimum solutions. The prior knowledge of the fracture parameters and their geometry is not available. We develop a hybrid, enumerative and Gauss-Newton, method that estimates the fracture parameters and geometry from the elastic tensor with no prior knowledge of the initial parameter values. The fracture parameters are separated into two groups. The first group contains the fracture parameters with no prior information, and the second group contains the parameters with known prior information. Different models are generated from the first group parameters by sampling the solution space over a predefined range of possible solutions for each parameter. Each model generated by the first group is fixed and used as a starting model to invert for the second group of parameters using the Gauss-Newton method. The least-squares residual between the observed elastic tensor and the estimated elastic tensor is calculated for each model. The model parameters that yield the least-squares residual corresponds to the correct fracture reservoir parameters and geometry. Two synthetic examples of fractured reservoirs with oil and gas saturations were inverted with no prior information about the fracture properties. The
Zhang, Zhenyue; Zha, Hongyuan; Simon, Horst
2006-07-31
In this paper, we developed numerical algorithms for computing sparse low-rank approximations of matrices, and we also provided a detailed error analysis of the proposed algorithms together with some numerical experiments. The low-rank approximations are constructed in a certain factored form with the degree of sparsity of the factors controlled by some user-specified parameters. In this paper, we cast the sparse low-rank approximation problem in the framework of penalized optimization problems. We discuss various approximation schemes for the penalized optimization problem which are more amenable to numerical computations. We also include some analysis to show the relations between the original optimization problem and the reduced one. We then develop a globally convergent discrete Newton-like iterative method for solving the approximate penalized optimization problems. We also compare the reconstruction errors of the sparse low-rank approximations computed by our new methods with those obtained using the methods in the earlier paper and several other existing methods for computing sparse low-rank approximations. Numerical examples show that the penalized methods are more robust and produce approximations with factors which have fewer columns and are sparser.
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information(AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
Method for smoothing the surface of a protective coating
Sangeeta, D.; Johnson, Curtis Alan; Nelson, Warren Arthur
2001-01-01
A method for smoothing the surface of a ceramic-based protective coating which exhibits roughness is disclosed. The method includes the steps of applying a ceramic-based slurry or gel coating to the protective coating surface; heating the slurry/gel coating to remove volatile material; and then further heating the slurry/gel coating to cure the coating and bond it to the underlying protective coating. The slurry/gel coating is often based on yttria-stabilized zirconia, and precursors of an oxide matrix. Related articles of manufacture are also described.
General purpose nonlinear system solver based on Newton-Krylov method.
2013-12-01
KINSOL is part of a software family called SUNDIALS: SUite of Nonlinear and Differential/Algebraic equation Solvers [1]. KINSOL is a general-purpose nonlinear system solver based on Newton-Krylov and fixed-point solver technologies [2].
NASA Astrophysics Data System (ADS)
Salucci, Marco; Oliveri, Giacomo; Massa, Andrea; Randazzo, Andrea; Pastorino, Matteo
2014-05-01
Ground penetrating radars (GPRs) are key instruments for subsurface monitoring and imaging. They can be used in different applicative fields, e.g., for the assessment of the structural stability of concrete structures and for the detection of targets buried inside inaccessible materials. In this framework, imaging systems based on the solution of the underlying inverse electromagnetic scattering problem have been acquiring an ever growing interest in the scientific community. In fact, they are able - at least in principle - to provide a quantitative reconstruction of the distributions of the dielectric properties (e.g., the dielectric permittivity and the electric conductivity) of the investigated scenario. Although good results have been obtained in recent years, there is still the need of further research, especially concerning the development of inversion procedure able to deal with the limitations arising from the non-linearity and ill-posedness of the underlying electromagnetic imaging formulation. In this work, a novel electromagnetic inverse scattering method is proposed for the reconstruction of shallow buried objects. The inversion procedure is based on the combination of different imaging modalities. In particular, an iterative multi-scaling approach [1] is adopted for focusing the reconstruction only on limited subdomains of the original investigation region. The data inversion is performed by applying an inexact-Newton method (which exhibits very good regularization properties) within the second-order Born approximation [2]. The use of this approximation allows a reduction of the problem unknowns and a mitigation of the nonlinear effects. The proposed approach has been validated by means of several numerical simulations. In particular, the reconstruction performances have been evaluated in terms of accuracy, robustness, noise levels, and computational efficiency, with particular emphasis on the comparisons with the results obtained by using the standard
Recovery Discontinuous Galerkin Jacobian-free Newton-Krylov Method for all-speed flows
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau; Dana Knoll
2008-07-01
There is an increasing interest to develop the next generation simulation tools for the advanced nuclear energy systems. These tools will utilize the state-of-art numerical algorithms and computer science technology in order to maximize the predictive capability, support advanced reactor designs, reduce uncertainty and increase safety margins. In analyzing nuclear energy systems, we are interested in compressible low-Mach number, high heat flux flows with a wide range of Re, Ra, and Pr numbers. Under these conditions, the focus is placed on turbulent heat transfer, in contrast to other industries whose main interest is in capturing turbulent mixing. Our objective is to develop singlepoint turbulence closure models for large-scale engineering CFD code, using Direct Numerical Simulation (DNS) or Large Eddy Simulation (LES) tools, requireing very accurate and efficient numerical algorithms. The focus of this work is placed on fully-implicit, high-order spatiotemporal discretization based on the discontinuous Galerkin method solving the conservative form of the compressible Navier-Stokes equations. The method utilizes a local reconstruction procedure derived from weak formulation of the problem, which is inspired by the recovery diffusion flux algorithm of van Leer and Nomura [?] and by the piecewise parabolic reconstruction [?] in the finite volume method. The developed methodology is integrated into the Jacobianfree Newton-Krylov framework [?] to allow a fully-implicit solution of the problem.
NASA Astrophysics Data System (ADS)
Danaila, Ionut; Moglan, Raluca; Hecht, Frédéric; Le Masson, Stéphane
2014-10-01
We present a new numerical system using finite elements with mesh adaptivity for the simulation of solid-liquid phase change systems. In the liquid phase, the natural convection flow is simulated by solving the incompressible Navier-Stokes equations with Boussinesq approximation. A variable viscosity model allows the velocity to progressively vanish in the solid phase, through an intermediate mushy region. The phase change is modeled by introducing an implicit enthalpy source term in the heat equation. The final system of equations describing the liquid-solid system by a single domain approach is solved using a Newton iterative algorithm. The space discretization is based on a P2-P1 Taylor-Hood finite elements and mesh adaptivity by metric control is used to accurately track the solid-liquid interface or the density inversion interface for water flows. The numerical method is validated against classical benchmarks that progressively add strong non-linearities in the system of equations: natural convection of air, natural convection of water, melting of a phase-change material and water freezing. Very good agreement with experimental data is obtained for each test case, proving the capability of the method to deal with both melting and solidification problems with convection. The presented numerical method is easy to implement using FreeFem++ software using a syntax close to the mathematical formulation.
A Jacobian-Free Newton Krylov Method for Mortar-Discretized Thermomechanical Contact Problems
Glen Hansen
2011-07-01
Multibody contact problems are common within the field of multiphysics simulation. Applications involving thermomechanical contact scenarios are also quite prevalent. Such problems can be challenging to solve due to the likelihood of thermal expansion affecting contact geometry which, in turn, can change the thermal behavior of the components being analyzed. This paper explores a simple model of a light water reactor nuclear reactor fuel rod, which consists of cylindrical pellets of uranium dioxide (UO2) fuel sealed within a Zircalloy cladding tube. The tube is initially filled with helium gas, which fills the gap between the pellets and cladding tube. The accurate modeling of heat transfer across the gap between fuel pellets and the protective cladding is essential to understanding fuel performance, including cladding stress and behavior under irradiated conditions, which are factors that affect the lifetime of the fuel. The thermomechanical contact approach developed here is based on the mortar finite element method, where Lagrange multipliers are used to enforce weak continuity constraints at participating interfaces. In this formulation, the heat equation couples to linear mechanics through a thermal expansion term. Lagrange multipliers are used to formulate the continuity constraints for both heat flux and interface traction at contact interfaces. The resulting system of nonlinear algebraic equations are cast in residual form for solution of the transient problem. A Jacobian-free Newton Krylov method is used to provide for fully-coupled solution of the coupled thermal contact and heat equations.
A Jacobian-free Newton Krylov method for mortar-discretized thermomechanical contact problems
Hansen, Glen
2011-07-20
Multibody contact problems are common within the field of multiphysics simulation. Applications involving thermomechanical contact scenarios are also quite prevalent. Such problems can be challenging to solve due to the likelihood of thermal expansion affecting contact geometry which, in turn, can change the thermal behavior of the components being analyzed. This paper explores a simple model of a light water reactor nuclear fuel rod, which consists of cylindrical pellets of uranium dioxide (UO{sub 2}) fuel sealed within a Zircalloy cladding tube. The tube is initially filled with helium gas, which fills the gap between the pellets and cladding tube. The accurate modeling of heat transfer across the gap between fuel pellets and the protective cladding is essential to understanding fuel performance, including cladding stress and behavior under irradiated conditions, which are factors that affect the lifetime of the fuel. The thermomechanical contact approach developed here is based on the mortar finite element method, where Lagrange multipliers are used to enforce weak continuity constraints at participating interfaces. In this formulation, the heat equation couples to linear mechanics through a thermal expansion term. Lagrange multipliers are used to formulate the continuity constraints for both heat flux and interface traction at contact interfaces. The resulting system of nonlinear algebraic equations are cast in residual form for solution of the transient problem. A Jacobian-free Newton Krylov method is used to provide for fully-coupled solution of the coupled thermal contact and heat equations.
A convergence rates result for an iteratively regularized Gauss-Newton-Halley method in Banach space
NASA Astrophysics Data System (ADS)
Kaltenbacher, B.
2015-01-01
The use of second order information on the forward operator often comes at a very moderate additional computational price in the context of parameter identification problems for differential equation models. On the other hand the use of general (non-Hilbert) Banach spaces has recently found much interest due to its usefulness in many applications. This motivates us to extend the second order method from Kaltenbacher (2014 Numer. Math. at press), (see also Hettlich and Rundell 2000 SIAM J. Numer. Anal. 37 587620) to a Banach space setting and analyze its convergence. We here show rates results for a particular source condition and different exponents in the formulation of Tikhonov regularization in each step. This includes a complementary result on the (first order) iteratively regularized Gauss-Newton method in case of a one-homogeneous data misfit term, which corresponds to exact penalization. The results clearly show the possible advantages of using second order information, which get most pronounced in this exact penalization case. Numerical simulations for an inverse source problem for a nonlinear elliptic PDE illustrate the theoretical findings.
Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
1986-01-01
Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
Smoothing Forecasting Methods for Academic Library Circulations: An Evaluation and Recommendation.
ERIC Educational Resources Information Center
Brooks, Terrence A.; Forys, John W., Jr.
1986-01-01
Circulation time-series data from 50 midwest academic libraries were used to test 110 variants of 8 smoothing forecasting methods. Data and methodologies and illustrations of two recommended methods--the single exponential smoothing method and Brown's one-parameter linear exponential smoothing method--are given. Eight references are cited. (EJS)
Modeling Electrokinetic Flows by the Smoothed Profile Method
Luo, Xian; Beskok, Ali; Karniadakis, George Em
2010-01-01
We propose an efficient modeling method for electrokinetic flows based on the Smoothed Profile Method (SPM) [1–4] and spectral element discretizations. The new method allows for arbitrary differences in the electrical conductivities between the charged surfaces and the the surrounding electrolyte solution. The electrokinetic forces are included into the flow equations so that the Poisson-Boltzmann and electric charge continuity equations are cast into forms suitable for SPM. The method is validated by benchmark problems of electroosmotic flow in straight channels and electrophoresis of charged cylinders. We also present simulation results of electrophoresis of charged microtubules, and show that the simulated electrophoretic mobility and anisotropy agree with the experimental values. PMID:20352076
Arima model and exponential smoothing method: A comparison
NASA Astrophysics Data System (ADS)
Wan Ahmad, Wan Kamarul Ariffin; Ahmad, Sabri
2013-04-01
This study shows the comparison between Autoregressive Moving Average (ARIMA) model and Exponential Smoothing Method in making a prediction. The comparison is focused on the ability of both methods in making the forecasts with the different number of data sources and the different length of forecasting period. For this purpose, the data from The Price of Crude Palm Oil (RM/tonne), Exchange Rates of Ringgit Malaysia (RM) in comparison to Great Britain Pound (GBP) and also The Price of SMR 20 Rubber Type (cents/kg) with three different time series are used in the comparison process. Then, forecasting accuracy of each model is measured by examinethe prediction error that producedby using Mean Squared Error (MSE), Mean Absolute Percentage Error (MAPE), and Mean Absolute deviation (MAD). The study shows that the ARIMA model can produce a better prediction for the long-term forecasting with limited data sources, butcannot produce a better prediction for time series with a narrow range of one point to another as in the time series for Exchange Rates. On the contrary, Exponential Smoothing Method can produce a better forecasting for Exchange Rates that has a narrow range of one point to another for its time series, while itcannot produce a better prediction for a longer forecasting period.
On the convergence of Newton-type methods under mild differentiability conditions
NASA Astrophysics Data System (ADS)
Argyros, Ioannis; Hilout, Saïd
2009-12-01
We introduce the new idea of recurrent functions to provide a new semilocal convergence analysis for Newton-type methods, under mild differentiability conditions. It turns out that our sufficient convergence conditions are weaker, and the error bounds are tighter than in earlier studies in some interesting cases (Chen, Ann Inst Stat Math 42:387-401, 1990; Chen, Numer Funct Anal Optim 10:37-48, 1989; Cianciaruso, Numer Funct Anal Optim 24:713-723, 2003; Cianciaruso, Nonlinear Funct Anal Appl 2009; Dennis 1971; Deuflhard 2004; Deuflhard, SIAM J Numer Anal 16:1-10, 1979; Gutiérrez, J Comput Appl Math 79:131-145, 1997; Hernández, J Optim Theory Appl 109:631-648, 2001; Hernández, J Comput Appl Math 115:245-254, 2000; Huang, J Comput Appl Math 47:211-217, 1993; Kantorovich 1982; Miel, Numer Math 33:391-396, 1979; Miel, Math Comput 34:185-202, 1980; Moret, Computing 33:65-73, 1984; Potra, Libertas Mathematica 5:71-84, 1985; Rheinboldt, SIAM J Numer Anal 5:42-63, 1968; Yamamoto, Numer Math 51: 545-557, 1987; Zabrejko, Numer Funct Anal Optim 9:671-684, 1987; Zinc̆ko 1963). Applications and numerical examples, involving a nonlinear integral equation of Chandrasekhar-type, and a differential equation are also provided in this study.
Recovery Discontinuous Galerkin Jacobian-Free Newton-Krylov Method for All-Speed Flows
HyeongKae Park; Robert Nourgaliev; Vincent Mousseau; Dana Knoll
2008-07-01
A novel numerical algorithm (rDG-JFNK) for all-speed fluid flows with heat conduction and viscosity is introduced. The rDG-JFNK combines the Discontinuous Galerkin spatial discretization with the implicit Runge-Kutta time integration under the Jacobian-free Newton-Krylov framework. We solve fully-compressible Navier-Stokes equations without operator-splitting of hyperbolic, diffusion and reaction terms, which enables fully-coupled high-order temporal discretization. The stability constraint is removed due to the L-stable Explicit, Singly Diagonal Implicit Runge-Kutta (ESDIRK) scheme. The governing equations are solved in the conservative form, which allows one to accurately compute shock dynamics, as well as low-speed flows. For spatial discretization, we develop a “recovery” family of DG, exhibiting nearly-spectral accuracy. To precondition the Krylov-based linear solver (GMRES), we developed an “Operator-Split”-(OS) Physics Based Preconditioner (PBP), in which we transform/simplify the fully-coupled system to a sequence of segregated scalar problems, each can be solved efficiently with Multigrid method. Each scalar problem is designed to target/cluster eigenvalues of the Jacobian matrix associated with a specific physics.
Kaushik, D. K.; Keyes, D. E.; Smith, B. F.
1999-02-24
We review and extend to the compressible regime an earlier parallelization of an implicit incompressible unstructured Euler code [9], and solve for flow over an M6 wing in subsonic, transonic, and supersonic regimes. While the parallelization philosophy of the compressible case is identical to the incompressible, we focus here on the nonlinear and linear convergence rates, which vary in different physical regimes, and on comparing the performance of currently important computational platforms. Multiple-scale problems should be marched out at desired accuracy limits, and not held hostage to often more stringent explicit stability limits. In the context of inviscid aerodynamics, this means evolving transient computations on the scale of the convective transit time, rather than the acoustic transit time, or solving steady-state problems with local CFL numbers approaching infinity. Whether time-accurate or steady, we employ Newton's method on each (pseudo-) timestep. The coupling of analysis with design in aerodynamic practice is another motivation for implicitness. Design processes that make use of sensitivity derivatives and the Hessian matrix require operations with the Jacobian matrix of the state constraints (i.e., of the governing PDE system); if the Jacobian is available for design, it may be employed with advantage in a nonlinearly implicit analysis, as well.
NASA Astrophysics Data System (ADS)
Hendry, Archibald W.
2007-05-01
Isaac Newton may have seen an apple fall, but it was Robert Hooke who had a better idea of where it would land. No one really knows whether or not Isaac Newton actually saw an apple fall in his garden. Supposedly it took place in 1666, but it was a tale he told in his old age more than 60 years later, a time when his memory was failing and his recollections of events did not always match known facts. However, one thing is certain-falling objects were to play a key part in Newton's eventual understanding of how objects move.
NASA Astrophysics Data System (ADS)
Auclair, J.-P.; Lemieux, J.-F.; Tremblay, L. B.; Ritchie, H.
2017-07-01
New numerical solvers are being considered in response to the rising computational cost of properly solving the sea ice momentum equation at high resolution. The Jacobian free version of Newton's method has allowed models to obtain the converged solution faster than other implicit solvers used previously. To further improve on this recent development, the analytical Jacobian of the 1D sea ice momentum equation is derived and used inside Newton's method. The results are promising in terms of computational efficiency. Although robustness remains an issue for some test cases, it is improved compared to the Jacobian free approach. In order to make use of the strong points of both the new and Jacobian free methods, a hybrid preconditioner using the Picard and Jacobian matrices to improve global and local convergence, respectively, is also introduced. This preconditioner combines the robustness and computational efficiency of the previously used preconditioning matrices when solving the sea ice momentum equation.
Rapid springback compensation for age forming based on quasi Newton method
NASA Astrophysics Data System (ADS)
Xiong, Wei; Gan, Zhong; Xiong, Shipeng; Xia, Yushan
2014-05-01
Iterative methods based on finite element simulation are effective approaches to design mold shape to compensate springback in sheet metal forming. However, convergence rate of iterative methods is difficult to improve greatly. To increase the springback compensate speed of designing age forming mold, process of calculating springback for a certain mold with finite element method is analyzed. Springback compensation is abstracted as finding a solution for a set of nonlinear functions and a springback compensation algorithm is presented on the basis of quasi Newton method. The accuracy of algorithm is verified by developing an ABAQUS secondary development program with MATLAB. Three rectangular integrated panels of dimensions 710 mm ×750 mm integrated panels with intersected ribs of 10 mm are selected to perform case studies. The algorithm is used to compute mold contours for the panels with cylinder, sphere and saddle contours respectively and it takes 57%, 22% and 33% iterations as compared to that of displacement adjustment (DA) method. At the end of iterations, maximum deviations on the three panels are 0.618 4 mm, 0.624 1 mm and 0.342 0 mm that are smaller than the deviations determined by DA method (0.740 8 mm, 0.740 8 mm and 0.713 7 mm respectively). In following experimental verification, mold contour for another integrated panel with 400 mm×380 mm size is designed by the algorithm. Then the panel is age formed in an autoclave and measured by a three dimensional digital measurement devise. Deviation between measuring results and the panel's design contour is less than 1 mm. Finally, the iterations with different mesh sizes (40 mm, 35 mm, 30 mm, 25 mm, 20 mm) in finite element models are compared and found no considerable difference. Another possible compensation method, Broyden-Fletcher-Shanmo method, is also presented based on the solving nonlinear functions idea. The Broyden-Fletcher-Shanmo method is employed to compute mold contour for the second panel
NASA Astrophysics Data System (ADS)
Chernyaev, Yu. A.
2016-10-01
The gradient projection method and Newton's method are generalized to the case of nonconvex constraint sets representing the set-theoretic intersection of a spherical surface with a convex closed set. Necessary extremum conditions are examined, and the convergence of the methods is analyzed.
NASA Astrophysics Data System (ADS)
Muthuvalu, Mohana Sundaram; Aruchunan, Elayaraja; Akhir, Mohd Kamalrulzaman Md; Sulaiman, Jumat; Karim, Samsul Ariffin Abdul
2014-10-01
In this paper, application of the Half-Sweep Successive Over-Relaxation (HSSOR) iterative method is extended by solving second order composite closed Newton-Cotes quadrature (2-CCNC) system. The performance of HSSOR method in solving 2-CCNC system is comparatively studied by their application on linear Fredholm integral equations of the second kind. The derivation and implementation of the method are discussed. In addition, numerical results by solving two test problems are included and compared with the standard Gauss-Seidel (GS) and Successive Over-Relaxation (SOR) methods. Numerical results demonstrate that HSSOR method is an efficient method among the tested methods.
ERIC Educational Resources Information Center
Hendry, Archibald W.
2007-01-01
Isaac Newton may have seen an apple fall, but it was Robert Hooke who had a better idea of where it would land. No one really knows whether or not Isaac Newton actually saw an apple fall in his garden. Supposedly it took place in 1666, but it was a tale he told in his old age more than 60 years later, a time when his memory was failing and his…
ERIC Educational Resources Information Center
Hendry, Archibald W.
2007-01-01
Isaac Newton may have seen an apple fall, but it was Robert Hooke who had a better idea of where it would land. No one really knows whether or not Isaac Newton actually saw an apple fall in his garden. Supposedly it took place in 1666, but it was a tale he told in his old age more than 60 years later, a time when his memory was failing and his…
NASA Technical Reports Server (NTRS)
Amar, Adam J.; Blackwell, Ben F.; Edwards, Jack R.
2007-01-01
The development and verification of a one-dimensional material thermal response code with ablation is presented. The implicit time integrator, control volume finite element spatial discretization, and Newton's method for nonlinear iteration on the entire system of residual equations have been implemented and verified for the thermochemical ablation of internally decomposing materials. This study is a continuation of the work presented in "One-Dimensional Ablation with Pyrolysis Gas Flow Using a Full Newton's Method and Finite Control Volume Procedure" (AIAA-2006-2910), which described the derivation, implementation, and verification of the constant density solid energy equation terms and boundary conditions. The present study extends the model to decomposing materials including decomposition kinetics, pyrolysis gas flow through the porous char layer, and a mixture (solid and gas) energy equation. Verification results are presented for the thermochemical ablation of a carbon-phenolic ablator which involves the solution of the entire system of governing equations.
Kernel Smoothing Methods for Non-Poissonian Seismic Hazard Analysis
NASA Astrophysics Data System (ADS)
Woo, Gordon
2017-04-01
For almost fifty years, the mainstay of probabilistic seismic hazard analysis has been the methodology developed by Cornell, which assumes that earthquake occurrence is a Poisson process, and that the spatial distribution of epicentres can be represented by a set of polygonal source zones, within which seismicity is uniform. Based on Vere-Jones' use of kernel smoothing methods for earthquake forecasting, these methods were adapted in 1994 by the author for application to probabilistic seismic hazard analysis. There is no need for ambiguous boundaries of polygonal source zones, nor for the hypothesis of time independence of earthquake sequences. In Europe, there are many regions where seismotectonic zones are not well delineated, and where there is a dynamic stress interaction between events, so that they cannot be described as independent. From the Amatrice earthquake of 24 August, 2016, the subsequent damaging earthquakes in Central Italy over months were not independent events. Removing foreshocks and aftershocks is not only an ill-defined task, it has a material effect on seismic hazard computation. Because of the spatial dispersion of epicentres, and the clustering of magnitudes for the largest events in a sequence, which might all be around magnitude 6, the specific event causing the highest ground motion can vary from one site location to another. Where significant active faults have been clearly identified geologically, they should be modelled as individual seismic sources. The remaining background seismicity should be modelled as non-Poissonian using statistical kernel smoothing methods. This approach was first applied for seismic hazard analysis at a UK nuclear power plant two decades ago, and should be included within logic-trees for future probabilistic seismic hazard at critical installations within Europe. In this paper, various salient European applications are given.
Owusu-Edusei, Kwame; Owens, Chantelle J
2009-01-01
Background Chlamydia continues to be the most prevalent disease in the United States. Effective spatial monitoring of chlamydia incidence is important for successful implementation of control and prevention programs. The objective of this study is to apply Bayesian smoothing and exploratory spatial data analysis (ESDA) methods to monitor Texas county-level chlamydia incidence rates by examining spatiotemporal patterns. We used county-level data on chlamydia incidence (for all ages, gender and races) from the National Electronic Telecommunications System for Surveillance (NETSS) for 2004 and 2005. Results Bayesian-smoothed chlamydia incidence rates were spatially dependent both in levels and in relative changes. Erath county had significantly (p < 0.05) higher smoothed rates (> 300 cases per 100,000 residents) than its contiguous neighbors (195 or less) in both years. Gaines county experienced the highest relative increase in smoothed rates (173% – 139 to 379). The relative change in smoothed chlamydia rates in Newton county was significantly (p < 0.05) higher than its contiguous neighbors. Conclusion Bayesian smoothing and ESDA methods can assist programs in using chlamydia surveillance data to identify outliers, as well as relevant changes in chlamydia incidence in specific geographic units. Secondly, it may also indirectly help in assessing existing differences and changes in chlamydia surveillance systems over time. PMID:19245686
A Particle-Particle Collision Model for Smoothed Profile Method
NASA Astrophysics Data System (ADS)
Mohaghegh, Fazlolah; Mousel, John; Udaykumar, H. S.
2014-11-01
Smoothed Profile Method (SPM) is a type of continuous forcing approach that adds the particles to the fluid using a forcing. The fluid-structure interaction is through a diffuse interface which avoids sudden transition from solid to fluid. The SPM simulation as a monolithic approach uses an indicator function field in the whole domain based on the distance from each particle's boundary where the possible particle-particle interaction can occur. A soft sphere potential based on the indicator function field has been defined to add an artificial pressure to the flow pressure in the potential overlapping regions. Thus, a repulsion force is obtained to avoid overlapping. Study of two particles which impulsively start moving in an initially uniform flow shows that the particle in the wake of the other one will have less acceleration leading to frequent collisions. Various Reynolds numbers and initial distances have been chosen to test the robustness of the method. Study of Drafting-Kissing Tumbling of two cylindrical particles shows a deviation from the benchmarks due to lack of rotation modeling. The method is shown to be accurate enough for simulating particle-particle collision and can easily be extended for particle-wall modeling and for non-spherical particles.
2010-05-01
Byrd, Lu, Nocedal , and Zhu test set. . . . . . . . 78 3.9 Results for SNOPT on the SNOPT example problems. . . . . . . . . . . . . . . . . . 79 4.1...method that does not enforce a sparsity pattern is that of Nocedal [50]. He developed the method L-BFGS, which maintains a circular buffer of a limited...at each update of the QN matrix. Liu and Nocedal [40], among others, set σk = y T k yk/s T k yk. Recent work on quasi-Newton methods includes that of
Quasi-Newton methods for parameter estimation in functional differential equations
NASA Technical Reports Server (NTRS)
Brewer, Dennis W.
1988-01-01
A state-space approach to parameter estimation in linear functional differential equations is developed using the theory of linear evolution equations. A locally convergent quasi-Newton type algorithm is applied to distributed systems with particular emphasis on parameters that induce unbounded perturbations of the state. The algorithm is computationally implemented on several functional differential equations, including coefficient and delay estimation in linear delay-differential equations.
NASA Astrophysics Data System (ADS)
González-Estrada, Octavio A.; Natarajan, Sundararajan; Ródenas, Juan José; Nguyen-Xuan, Hung; Bordas, Stéphane P. A.
2013-07-01
An error control technique aimed to assess the quality of smoothed finite element approximations is presented in this paper. Finite element techniques based on strain smoothing appeared in 2007 were shown to provide significant advantages compared to conventional finite element approximations. In particular, a widely cited strength of such methods is improved accuracy for the same computational cost. Yet, few attempts have been made to directly assess the quality of the results obtained during the simulation by evaluating an estimate of the discretization error. Here we propose a recovery type error estimator based on an enhanced recovery technique. The salient features of the recovery are: enforcement of local equilibrium and, for singular problems a "smooth + singular" decomposition of the recovered stress. We evaluate the proposed estimator on a number of test cases from linear elastic structural mechanics and obtain efficient error estimations whose effectivities, both at local and global levels, are improved compared to recovery procedures not implementing these features.
Smoothed particle hydrodynamics method for evaporating multiphase flows
NASA Astrophysics Data System (ADS)
Yang, Xiufeng; Kong, Song-Charng
2017-09-01
The smoothed particle hydrodynamics (SPH) method has been increasingly used for simulating fluid flows; however, its ability to simulate evaporating flow requires significant improvements. This paper proposes an SPH method for evaporating multiphase flows. The present SPH method can simulate the heat and mass transfers across the liquid-gas interfaces. The conservation equations of mass, momentum, and energy were reformulated based on SPH, then were used to govern the fluid flow and heat transfer in both the liquid and gas phases. The continuity equation of the vapor species was employed to simulate the vapor mass fraction in the gas phase. The vapor mass fraction at the interface was predicted by the Clausius-Clapeyron correlation. An evaporation rate was derived to predict the mass transfer from the liquid phase to the gas phase at the interface. Because of the mass transfer across the liquid-gas interface, the mass of an SPH particle was allowed to change. Alternative particle splitting and merging techniques were developed to avoid large mass difference between SPH particles of the same phase. The proposed method was tested by simulating three problems, including the Stefan problem, evaporation of a static drop, and evaporation of a drop impacting a hot surface. For the Stefan problem, the SPH results of the evaporation rate at the interface agreed well with the analytical solution. For drop evaporation, the SPH result was compared with the result predicted by a level-set method from the literature. In the case of drop impact on a hot surface, the evolution of the shape of the drop, temperature, and vapor mass fraction were predicted.
Stochastic quasi-Newton method: Application to minimal model for proteins
NASA Astrophysics Data System (ADS)
Chau, C. D.; Sevink, G. J. A.; Fraaije, J. G. E. M.
2011-01-01
Knowledge of protein folding pathways and inherent structures is of utmost importance for our understanding of biological function, including the rational design of drugs and future treatments against protein misfolds. Computational approaches have now reached the stage where they can assess folding properties and provide data that is complementary to or even inaccessible by experimental imaging techniques. Minimal models of proteins, which make possible the simulation of protein folding dynamics by (systematic) coarse graining, have provided understanding in terms of descriptors for folding, folding kinetics, and folded states. Here we focus on the efficiency of equilibration on the coarse-grained level. In particular, we applied a new regularized stochastic quasi-Newton (S-QN) method, developed for accelerated configurational space sampling while maintaining thermodynamic consistency, to analyze the folding pathway and inherent structures of a selected protein, where regularization was introduced to improve stability. The adaptive compound mobility matrix B in S-QN, determined by a factorized secant update, gives rise to an automated scaling of all modes in the protein, in particular an acceleration of protein domain dynamics or principal modes and a slowing down of fast modes or “soft” bond constraints, similar to lincs/shake algorithms, when compared to conventional Langevin dynamics. We used and analyzed a two-step strategy. Owing to the enhanced sampling properties of S-QN and increased barrier crossing at high temperatures (in reduced units), a hierarchy of inherent protein structures is first efficiently determined by applying S-QN for a single initial structure and T=1>Tθ, where Tθ is the collapse temperature. Second, S-QN simulations for several initial structures at very low temperature (T=0.01
NASA Astrophysics Data System (ADS)
Engelman, Jonathan
Changing student conceptions in physics is a difficult process and has been a topic of research for many years. The purpose of this study was to understand what prompted students to change or not change their incorrect conceptions of Newtons Second or Third Laws in response to an intervention, Interactive Video Vignettes (IVVs), designed to overcome them. This study is based on prior research reported in the literature which has found that a curricular framework of elicit, confront, resolve, and reflect (ECRR) is important for changing student conceptions (McDermott, 2001). This framework includes four essential parts such that during an instructional event student conceptions should be elicited, incorrect conceptions confronted, these conflicts resolved, and then students should be prompted to reflect on their learning. Twenty-two undergraduate student participants who completed either or both IVVs were studied to determine whether or not they experienced components of the ECRR framework at multiple points within the IVVs. A fully integrated, mixed methods design was used to address the study purpose. Both quantitative and qualitative data were collected iteratively for each participant. Successive data collections were informed by previous data collections. All data were analyzed concurrently. The quantitative strand included a pre/post test that participants took before and after completing a given IVV and was used to measure the effect of each IVV on learning. The qualitative strand included video of each participant completing the IVV as well as an audio-recorded video elicitation interview after the post-test. The qualitative data collection was designed to describe student experiences with each IVV as well as to observe how the ECRR framework was experienced. Collecting and analyzing data using this mixed methods approach helped develop a more complete understanding of how student conceptions of Newtons Second and Third Laws changed through completion of
Weighted Wilcoxon-type Smoothly Clipped Absolute Deviation Method
Wang, Lan; Li, Runze
2009-01-01
Summary Shrinkage-type variable selection procedures have recently seen increasing applications in biomedical research. However, their performance can be adversely influenced by outliers in either the response or the covariate space. This paper proposes a weighted Wilcoxon-type smoothly clipped absolute deviation (WW-SCAD) method, which deals with robust variable selection and robust estimation simultaneously. The new procedure can be conveniently implemented with the statistical software R. We establish that the WW-SCAD correctly identifies the set of zero coefficients with probability approaching one and estimates the nonzero coefficients with the rate n−1/2. Moreover, with appropriately chosen weights the WW-SCAD is robust with respect to outliers in both the x and y directions. The important special case with constant weights yields an oracle-type estimator with high efficiency at the presence of heavier-tailed random errors. The robustness of the WW-SCAD is partly justified by its asymptotic performance under local shrinking contamination. We propose a BIC-type tuning parameter selector for the WW-SCAD. The performance of the WW-SCAD is demonstrated via simulations and by an application to a study that investigates the effects of personal characteristics and dietary factors on plasma beta-carotene level. PMID:18647294
Shimozato, T; Tabushi, K; Kitoh, S; Shiota, Y; Hirayama, C; Suzuki, S
2007-01-21
To calculate photon spectra for a 10 MV x-ray beam emitted by a medical linear accelerator, we performed numerical analysis using the aluminium transmission data obtained along the central axis of the beam under the narrow beam condition corresponding to a 3x3 cm2 field at a 100 cm distance from the source. We used the BFGS quasi-Newton method based on a general nonlinear optimization technique for the numerical analysis. The attenuation coefficients, aluminium thicknesses and measured transmission data are necessary inputs for the numerical analysis. The calculated x-ray spectrum shape was smooth in the lower to higher energy regions without any angular components. The x-ray spectrum acquired by the employed method was evaluated by comparing the measurements along the central axis percentage depth dose in a water phantom and by a Monte Carlo simulation code, the electron gamma shower code. The values of the calculated percentage depth doses for a 10x10 cm2 field at a 100 cm source-to-surface distance in a water phantom were obtained using the same geometry settings as those of the water phantom measurement. The differences in the measured and calculated values were less than +/-1.0% for a broad region from the shallow part near the surface to deep parts of up to 25 cm in the water phantom.
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2009-01-01
This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
ERIC Educational Resources Information Center
Cui, Zhongmin; Kolen, Michael J.
2009-01-01
This article considers two new smoothing methods in equipercentile equating, the cubic B-spline presmoothing method and the direct presmoothing method. Using a simulation study, these two methods are compared with established methods, the beta-4 method, the polynomial loglinear method, and the cubic spline postsmoothing method, under three sample…
Application of high-order numerical schemes and Newton-Krylov method to two-phase drift-flux model
Zou, Ling; Zhao, Haihua; Zhang, Hongbin
2017-08-07
This study concerns the application and solver robustness of the Newton-Krylov method in solving two-phase flow drift-flux model problems using high-order numerical schemes. In our previous studies, the Newton-Krylov method has been proven as a promising solver for two-phase flow drift-flux model problems. However, these studies were limited to use first-order numerical schemes only. Moreover, the previous approach to treating the drift-flux closure correlations was later revealed to cause deteriorated solver convergence performance, when the mesh was highly refined, and also when higher-order numerical schemes were employed. In this study, a second-order spatial discretization scheme that has been tested withmore » two-fluid two-phase flow model was extended to solve drift-flux model problems. In order to improve solver robustness, and therefore efficiency, a new approach was proposed to treating the mean drift velocity of the gas phase as a primary nonlinear variable to the equation system. With this new approach, significant improvement in solver robustness was achieved. With highly refined mesh, the proposed treatment along with the Newton-Krylov solver were extensively tested with two-phase flow problems that cover a wide range of thermal-hydraulics conditions. Satisfactory convergence performances were observed for all test cases. Numerical verification was then performed in the form of mesh convergence studies, from which expected orders of accuracy were obtained for both the first-order and the second-order spatial discretization schemes. Finally, the drift-flux model, along with numerical methods presented, were validated with three sets of flow boiling experiments that cover different flow channel geometries (round tube, rectangular tube, and rod bundle), and a wide range of test conditions (pressure, mass flux, wall heat flux, inlet subcooling and outlet void fraction).« less
A solution to the Navier-Stokes equations based upon the Newton Kantorovich method
NASA Technical Reports Server (NTRS)
Davis, J. E.; Gabrielsen, R. E.; Mehta, U. B.
1977-01-01
An implicit finite difference scheme based on the Newton-Kantorovich technique was developed for the numerical solution of the nonsteady, incompressible, two-dimensional Navier-Stokes equations in conservation-law form. The algorithm was second-order-time accurate, noniterative with regard to the nonlinear terms in the vorticity transport equation except at the earliest few time steps, and spatially factored. Numerical results were obtained with the technique for a circular cylinder at Reynolds number 15. Results indicate that the technique is in excellent agreement with other numerical techniques for all geometries and Reynolds numbers investigated, and indicates a potential for significant reduction in computation time over current iterative techniques.
NASA Astrophysics Data System (ADS)
Ho, K. F.; Beling, C. D.; Fung, S.; Cheng, Vincent K. W.; Ng, Michael K.; Yip, A. M.
2003-11-01
A generalized least-square method with Tikonov-Miller regularization and non-negativity constraints has been developed for deconvoluting two-dimensional coincidence Doppler broadening spectroscopy (CDBS) spectra. A projected Newton algorithm is employed to solve the generalized least-square problem. The algorithm has been tested on Monte Carlo generated spectra to find the best regularization parameters for different simulated experimental conditions. Good retrieval of the underlying positron-electron momentum distributions in the low momentum region is demonstrated. The algorithm has been successfully used to deconvolute experimental CDBS data from aluminum.
NASA Astrophysics Data System (ADS)
Stein, David B.; Guy, Robert D.; Thomases, Becca
2017-04-01
The Immersed Boundary method is a simple, efficient, and robust numerical scheme for solving PDE in general domains, yet for fluid problems it only achieves first-order spatial accuracy near embedded boundaries for the velocity field and fails to converge pointwise for elements of the stress tensor. In a previous work we introduced the Immersed Boundary Smooth Extension (IBSE) method, a variation of the IB method that achieves high-order accuracy for elliptic PDE by smoothly extending the unknown solution of the PDE from a given smooth domain to a larger computational domain, enabling the use of simple Cartesian-grid discretizations. In this work, we extend the IBSE method to allow for the imposition of a divergence constraint, and demonstrate high-order convergence for the Stokes and incompressible Navier-Stokes equations: up to third-order pointwise convergence for the velocity field, and second-order pointwise convergence for all elements of the stress tensor. The method is flexible to the underlying discretization: we demonstrate solutions produced using both a Fourier spectral discretization and a standard second-order finite-difference discretization.
NASA Astrophysics Data System (ADS)
Laadhari, Aymen; Saramito, Pierre; Misbah, Chaouqi; Székely, Gábor
2017-08-01
This framework is concerned with the numerical modeling of the dynamics of individual biomembranes and capillary interfaces in a surrounding Newtonian fluid. A level set approach helps to follow the interface motion. Our method features the use of high order fully implicit time integration schemes that enable to overcome stability issues related to the explicit discretization of the highly non-linear bending force or capillary force. At each time step, the tangent systems are derived and the resulting nonlinear problems are solved by a Newton-Raphson method. Based on the signed distance assumption, several inexact Newton strategies are employed to solve the capillary and vesicle problems and guarantee the second-order convergence behavior. We address in detail the main features of the proposed method, and we report several experiments in the two-dimensional case with the aim of illustrating its accuracy and efficiency. Comparative investigations with respect to the fully explicit scheme depict the stabilizing effect of the new method, which allows to use significantly larger time step sizes.
NASA Astrophysics Data System (ADS)
Yang, Haijian; Sun, Shuyu; Yang, Chao
2017-03-01
Most existing methods for solving two-phase flow problems in porous media do not take the physically feasible saturation fractions between 0 and 1 into account, which often destroys the numerical accuracy and physical interpretability of the simulation. To calculate the solution without the loss of this basic requirement, we introduce a variational inequality formulation of the saturation equilibrium with a box inequality constraint, and use a conservative finite element method for the spatial discretization and a backward differentiation formula with adaptive time stepping for the temporal integration. The resulting variational inequality system at each time step is solved by using a semismooth Newton algorithm. To accelerate the Newton convergence and improve the robustness, we employ a family of adaptive nonlinear elimination methods as a nonlinear preconditioner. Some numerical results are presented to demonstrate the robustness and efficiency of the proposed algorithm. A comparison is also included to show the superiority of the proposed fully implicit approach over the classical IMplicit Pressure-Explicit Saturation (IMPES) method in terms of the time step size and the total execution time measured on a parallel computer.
Methods and electrolytes for electrodeposition of smooth films
Zhang, Jiguang; Xu, Wu; Graff, Gordon L; Chen, Xilin; Ding, Fei; Shao, Yuyan
2015-03-17
Electrodeposition involving an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and/or film surface. For electrodeposition of a first conductive material (C1) on a substrate from one or more reactants in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second conductive material (C2), wherein cations of C2 have an effective electrochemical reduction potential in the solution lower than that of the reactants.
Impact of beam smoothing method on direct drive target performance for the NIF
Rothenberg, J.E.; Weber, S.V.
1997-01-01
The impact of smoothing method on the performance of a direct drive target is modeled and examined in terms of its 1-mode spectrum. In particular, two classes of smoothing methods are compared, smoothing by spectral dispersion (SSD) and the induced spatial incoherence (ISI) method. It is found that SSD using sinusoidal phase modulation (FM) results in poor smoothing at low 1-modes and therefore inferior target performance at both peak velocity and ignition. This disparity is most notable if the effective imprinting integration time of the target is small. However, using SSD with more generalized phase modulation can result in smoothing at low l-modes which is identical to that obtained with ISI. For either smoothing method, the calculations indicate that at peak velocity the surface perturbations are about 100 times larger than that which leads to nonlinear hydrodynamics. Modeling of the hydrodynamic nonlinearity shows that saturation can reduce the amplified nonuniformities to the level required to achieve ignition for either smoothing method. The low l- mode behavior at ignition is found to be strongly dependent on the induced divergence of the smoothing method. For the NIF parameters the target performance asymptotes for smoothing divergence larger than {approximately}100 {mu}rad.
SKRYN: A fast semismooth-Krylov-Newton method for controlling Ising spin systems
NASA Astrophysics Data System (ADS)
Ciaramella, G.; Borzì, A.
2015-05-01
The modeling and control of Ising spin systems is of fundamental importance in NMR spectroscopy applications. In this paper, two computer packages, ReHaG and SKRYN, are presented. Their purpose is to set-up and solve quantum optimal control problems governed by the Liouville master equation modeling Ising spin-1/2 systems with pointwise control constraints. In particular, the MATLAB package ReHaG allows to compute a real matrix representation of the master equation. The MATLAB package SKRYN implements a new strategy resulting in a globalized semismooth matrix-free Krylov-Newton scheme. To discretize the real representation of the Liouville master equation, a norm-preserving modified Crank-Nicolson scheme is used. Results of numerical experiments demonstrate that the SKRYN code is able to provide fast and accurate solutions to the Ising spin quantum optimization problem.
Lattice-Boltzmann method combined with smoothed-profile method for particulate suspensions
NASA Astrophysics Data System (ADS)
Jafari, Saeed; Yamamoto, Ryoichi; Rahnama, Mohamad
2011-02-01
We developed a simulation scheme based on the coupling of the lattice-Boltzmann method with the smoothed-profile method (SPM) to predict the dynamic behavior of colloidal dispersions. The SPM provides a coupling scheme between continuum fluid dynamics and rigid-body dynamics through a smoothed profile of the fluid-particle interface. In this approach, the flow is computed on fixed Eulerian grids which are also used for the particles. Owing to the use of the same grids for simulation of fluid flow and particles, this method is highly efficient. Furthermore, an external boundary is used to impose the no-slip boundary condition at the fluid-particle interface. In addition, the operations in the present method are local; it can be easily programmed for parallel machines. The methodology is validated by comparing with previously published data.
Lattice-Boltzmann method combined with smoothed-profile method for particulate suspensions.
Jafari, Saeed; Yamamoto, Ryoichi; Rahnama, Mohamad
2011-02-01
We developed a simulation scheme based on the coupling of the lattice-Boltzmann method with the smoothed-profile method (SPM) to predict the dynamic behavior of colloidal dispersions. The SPM provides a coupling scheme between continuum fluid dynamics and rigid-body dynamics through a smoothed profile of the fluid-particle interface. In this approach, the flow is computed on fixed Eulerian grids which are also used for the particles. Owing to the use of the same grids for simulation of fluid flow and particles, this method is highly efficient. Furthermore, an external boundary is used to impose the no-slip boundary condition at the fluid-particle interface. In addition, the operations in the present method are local; it can be easily programmed for parallel machines. The methodology is validated by comparing with previously published data.
Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods
NASA Astrophysics Data System (ADS)
Hora, Heinrich; Aydin, Meral
1992-04-01
The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.
Suppression of stochastic pulsation in laser-plasma interaction by smoothing methods
Hora, H. ); Aydin, M. )
1992-04-15
The control of the very complex behavior of a plasma with laser interaction by smoothing with induced spatial incoherence or other methods was related to improving the lateral uniformity of the irradiation. While this is important, it is shown from numerical hydrodynamic studies that the very strong temporal pulsation (stuttering) will mostly be suppressed by these smoothing methods too.
Shadid, J.N.; Tuminaro, R.S.; Walker, H.F.
1997-02-01
The solution of the governing steady transport equations for momentum, heat and mass transfer in flowing fluids can be very difficult. These difficulties arise from the nonlinear, coupled, nonsymmetric nature of the system of algebraic equations that results from spatial discretization of the PDEs. In this manuscript the authors focus on evaluating a proposed nonlinear solution method based on an inexact Newton method with backtracking. In this context they use a particular spatial discretization based on a pressure stabilized Petrov-Galerkin finite element formulation of the low Mach number Navier-Stokes equations with heat and mass transport. The discussion considers computational efficiency, robustness and some implementation issues related to the proposed nonlinear solution scheme. Computational results are presented for several challenging CFD benchmark problems as well as two large scale 3D flow simulations.
Computational Experience with the Spectral Smoothing Method for Differentiating Noisy Data
NASA Astrophysics Data System (ADS)
Baart, M. L.
1981-07-01
When applied to non-exact (noisy) data, numerical methods for calculating derivatives, in particular- derivatives of order higher than the first, based on model, functions fitted to exact data, become unsatisfactory . The spectral smoothing method of Anderssen and Bloomfield, developed to solve this problem, entails calculation of a smoothing parameter and the choice of an optimal-order Sobolev norm that is used as regularizer. This method is used to differentiate, smooth and integrate noisy data. A likelihood function is minimized to determine the smoothing parameter. We present numerical results suggesting that this function can be jointly minimized with respect to the smoothing parameter and the order of the regularizing norm, thus yielding a fully automatic numerical differentiation procedure.
Electro-optical deflectors as a method of beam smoothing for Inertial Confinement Fusion
Rothenberg, J.E.
1997-01-01
The electro-optic deflector is analyzed and compared to smoothing by spectral dispersion for efficacy as a beam smoothing method for ICF. It is found that the electro-optic deflector is inherently somewhat less efficient when compared either on the basis of equal peak phase modulation or equal generated bandwidth.
The essence of the generalized Newton binomial theorem
NASA Astrophysics Data System (ADS)
Liu, Cheng-shi
2010-10-01
Under the frame of the homotopy analysis method, Liao gives a generalized Newton binomial theorem and thinks it as a rational base of his theory. In the paper, we prove that the generalized Newton binomial theorem is essentially the usual Newton binomial expansion at another point. Our result uncovers the essence of generalized Newton binomial theorem as a key of the homotopy analysis method.
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2014-11-01
Time step-size restrictions and low convergence rates are major bottle necks for implicit solution of the Navier-Stokes in simulations involving complex geometries with moving boundaries. Newton-Krylov method (NKM) is a combination of a Newton-type method for super-linearly convergent solution of nonlinear equations and Krylov subspace methods for solving the Newton correction equations, which can theoretically address both bottle necks. The efficiency of this method vastly depends on the Jacobian forming scheme e.g. automatic differentiation is very expensive and Jacobian-free methods slow down as the mesh is refined. A novel, computationally efficient analytical Jacobian for NKM was developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered curvilinear grids with immersed boundaries. The NKM was validated and verified against Taylor-Green vortex and pulsatile flow in a 90 degree bend and efficiently handles complex geometries such as an intracranial aneurysm with multiple overset grids, pulsatile inlet flow and immersed boundaries. The NKM method is shown to be more efficient than the semi-implicit Runge-Kutta methods and Jabobian-free Newton-Krylov methods. We believe NKM can be applied to many CFD techniques to decrease the computational cost. This work was supported partly by the NIH Grant R03EB014860, and the computational resources were partly provided by Center for Computational Research (CCR) at University at Buffalo.
An adaptive segment method for smoothing lidar signal based on noise estimation
NASA Astrophysics Data System (ADS)
Wang, Yuzhao; Luo, Pingping
2014-10-01
An adaptive segmentation smoothing method (ASSM) is introduced in the paper to smooth the signal and suppress the noise. In the ASSM, the noise is defined as the 3σ of the background signal. An integer number N is defined for finding the changing positions in the signal curve. If the difference of adjacent two points is greater than 3Nσ, the position is recorded as an end point of the smoothing segment. All the end points detected as above are recorded and the curves between them will be smoothed separately. In the traditional method, the end points of the smoothing windows in the signals are fixed. The ASSM creates changing end points in different signals and the smoothing windows could be set adaptively. The windows are always set as the half of the segmentations and then the average smoothing method will be applied in the segmentations. The Iterative process is required for reducing the end-point aberration effect in the average smoothing method and two or three times are enough. In ASSM, the signals are smoothed in the spacial area nor frequent area, that means the frequent disturbance will be avoided. A lidar echo was simulated in the experimental work. The echo was supposed to be created by a space-born lidar (e.g. CALIOP). And white Gaussian noise was added to the echo to act as the random noise resulted from environment and the detector. The novel method, ASSM, was applied to the noisy echo to filter the noise. In the test, N was set to 3 and the Iteration time is two. The results show that, the signal could be smoothed adaptively by the ASSM, but the N and the Iteration time might be optimized when the ASSM is applied in a different lidar.
ERIC Educational Resources Information Center
Nunan, E.
1973-01-01
Presents a brief biography of Sir Isaac Newton, lists contemporary scientists and scientific developments and discusses Newton's optical research and conceptual position concerning the nature of light. (JR)
ERIC Educational Resources Information Center
Nunan, E.
1973-01-01
Presents a brief biography of Sir Isaac Newton, lists contemporary scientists and scientific developments and discusses Newton's optical research and conceptual position concerning the nature of light. (JR)
Alternative methods to smooth the Earth's gravity field
NASA Technical Reports Server (NTRS)
Jekeli, C.
1981-01-01
Convolutions on the sphere with corresponding convolution theorems are developed for one and two dimensional functions. Some of these results are used in a study of isotropic smoothing operators or filters. Well known filters in Fourier spectral analysis, such as the rectangular, Gaussian, and Hanning filters, are adapted for data on a sphere. The low-pass filter most often used on gravity data is the rectangular (or Pellinen) filter. However, its spectrum has relatively large sidelobes; and therefore, this filter passes a considerable part of the upper end of the gravity spectrum. The spherical adaptations of the Gaussian and Hanning filters are more efficient in suppressing the high-frequency components of the gravity field since their frequency response functions are strongly field since their frequency response functions are strongly tapered at the high frequencies with no, or small, sidelobes. Formulas are given for practical implementation of these new filters.
NASA Astrophysics Data System (ADS)
Abrashkevich, Alexander; Puzynin, I. V.
2004-01-01
A FORTRAN program is presented which solves a system of nonlinear simultaneous equations using the continuous analog of Newton's method (CANM). The user has the option of either to provide a subroutine which calculates the Jacobian matrix or allow the program to calculate it by a forward-difference approximation. Five iterative schemes using different algorithms of determining adaptive step size of the CANM process are implemented in the program. Program summaryTitle of program: CANM Catalogue number: ADSN Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADSN Program available from: CPC Program Library, Queen's University of Belfast, Northern Ireland Licensing provisions: none Computer for which the program is designed and others on which it has been tested: Computers: IBM RS/6000 Model 320H, SGI Origin2000, SGI Octane, HP 9000/755, Intel Pentium IV PC Installation: Department of Chemistry, University of Toronto, Toronto, Canada Operating systems under which the program has been tested: IRIX64 6.1, 6.4 and 6.5, AIX 3.4, HP-UX 9.01, Linux 2.4.7 Programming language used: FORTRAN 90 Memory required to execute with typical data: depends on the number of nonlinear equations in a system. Test run requires 80 KB No. of bits in distributed program including test data, etc.: 15283 Distribution format: tar gz format No. of lines in distributed program, including test data, etc.: 1794 Peripherals used: line printer, scratch disc store External subprograms used: DGECO and DGESL [1] Keywords: nonlinear equations, Newton's method, continuous analog of Newton's method, continuous parameter, evolutionary differential equation, Euler's method Nature of physical problem: System of nonlinear simultaneous equations F i(x 1,x 2,…,x n)=0,1⩽i⩽n, is numerically solved. It can be written in vector form as F( X)= 0, X∈ Rn, where F : Rn→ Rn is a twice continuously differentiable function with domain and range in n-dimensional Euclidean space. The solutions of such systems of
An efficient method for correcting the edge artifact due to smoothing.
Maisog, J M; Chmielowska, J
1998-01-01
Spatial smoothing is a common pre-processing step in the analysis of functional brain imaging data. It can increase sensitivity to signals of specific shapes and sizes (Rosenfeld and Kak [1982]: Digital Picture Processing, vol. 2. Orlando, Fla.: Academic; Worsley et al. [1996]: Hum Brain Mapping 4:74-90). Also, some amount of spatial smoothness is required if methods from the theory of Gaussian random fields are to be used (Holmes [1994]: Statistical Issues in Functional Brain Mapping. PhD thesis, University of Glasgow). Smoothing is most often implemented as a convolution of the imaging data with a smoothing kernel, and convolution is most efficiently performed using the Convolution Theorem and the Fast Fourier Transform (Cooley and Tukey [1965]: Math Comput 19:297-301; Priestly [1981]: Spectral Analysis and Time Series. San Diego: Academic; Press et al. [1992]: Numerical Recipes in C: The Art of Scientific Computing, 2nd ed. Cambridge: Cambridge University Press). An undesirable side effect of smoothing is an artifact along the edges of the brain, where brain voxels become smoothed with non-brain voxels. This results in a dark rim which might be mistaken for hypoactivity. In this short methodological paper, we present a method for correcting functional brain images for the edge artifact due to smoothing, while retaining the use of the Convolution Theorem and the Fast Fourier Transform for efficient calculation of convolutions.
NASA Astrophysics Data System (ADS)
Virieux, J.; Bretaudeau, F.; Metivier, L.; Brossier, R.
2013-12-01
Simultaneous inversion of seismic velocities and source parameters have been a long standing challenge in seismology since the first attempts to mitigate trade-off between very different parameters influencing travel-times (Spencer and Gubbins 1980, Pavlis and Booker 1980) since the early development in the 1970s (Aki et al 1976, Aki and Lee 1976, Crosson 1976). There is a strong trade-off between earthquake source positions, initial times and velocities during the tomographic inversion: mitigating these trade-offs is usually carried empirically (Lemeur et al 1997). This procedure is not optimal and may lead to errors in the velocity reconstruction as well as in the source localization. For a better simultaneous estimation of such multi-parametric reconstruction problem, one may take benefit of improved local optimization such as full Newton method where the Hessian influence helps balancing between different physical parameter quantities and improving the coverage at the point of reconstruction. Unfortunately, the computation of the full Hessian operator is not easily computed in large models and with large datasets. Truncated Newton (TCN) is an alternative optimization approach (Métivier et al. 2012) that allows resolution of the normal equation H Δm = - g using a matrix-free conjugate gradient algorithm. It only requires to be able to compute the gradient of the misfit function and Hessian-vector products. Traveltime maps can be computed in the whole domain by numerical modeling (Vidale 1998, Zhao 2004). The gradient and the Hessian-vector products for velocities can be computed without ray-tracing using 1st and 2nd order adjoint-state methods for the cost of 1 and 2 additional modeling step (Plessix 2006, Métivier et al. 2012). Reciprocity allows to compute accurately the gradient and the full Hessian for each coordinates of the sources and for their initial times. Then the resolution of the problem is done through two nested loops. The model update Δm is
NASA Astrophysics Data System (ADS)
Kalantari, Bahman
2000-12-01
The general form of Taylor's theorem for a function f : K-->K, where K is the real line or the complex plane, gives the formula, f=Pn+Rn, where Pn is the Newton interpolating polynomial computed with respect to a confluent vector of nodes, and Rn is the remainder. Whenever f'[not equal to]0, for each m=2,...,n+1, we describe a "determinantal interpolation formula", f=Pm,n+Rm,n, where Pm,n is a rational function in x and f itself. These formulas play a dual role in the approximation of f or its inverse. For m=2, the formula is Taylor's and for m=3 is a new expansion formula and a Padé approximant. By applying the formulas to Pn, for each m[greater-or-equal, slanted]2, Pm,m-1,...,Pm,m+n-2 is a set of n rational approximations that includes Pn, and may provide a better approximation to f, than Pn. Hence each Taylor polynomial unfolds into an infinite spectrum of rational approximations. The formulas also give an infinite spectrum of rational inverse approximations, as well as a fundamental k-point iteration function Bm(k), for each k[less-than-or-equals, slant]m, defined as the ratio of two determinants that depend on the first m-k derivatives. Application of our formulas have motivated several new results obtained in sequel papers: (i) theoretical analysis of the order of Bm(k), k=1,...,m, proving that it ranges from m to the limiting ratio of generalized Fibonacci numbers of order m; (ii) computational results with the first few members of Bm(k) indicating that they outperform traditional root finding methods, e.g., Newton's; (iii) a novel polynomial rootfinding method requiring only a single input and the evaluation of the sequence of iteration functions Bm(1) at that input. This amounts to the evaluation of special Toeplitz determinants that are also computable via a recursive formula; (iv) a new strategy for general root finding; (v) new formulas for approximation of [pi],e, and other special numbers.
Smooth connection method of segment test data in road surface profile measurement
NASA Astrophysics Data System (ADS)
Duan, Hu-Ming; Ma, Ying; Shi, Feng; Zhang, Kai-Bin; Xie, Fei
2011-12-01
It's reviewed that the measurement system of road surface profile and the calculation method of segment road test data have been introduced. Because of there are sudden vertical steps at the connection points of segment data which will influence the application of road surface data in automotive engineering. So a new smooth connection method of segment test data is proposed which revised the sudden vertical steps connection by the Signal Local Baseline Adjustment (SLBA) method. Besides, there is an actual example which mentioned the detailed process of the smooth connection of segment test data by the SLBA method and the adjusting results at these connection points. The application and calculation results show that the SLBA method is simple and has achieved obvious effect in smooth connection of the segment road test data. The method of SLBA can be widely applied to segment road surface data processing or the long period vibration signal processing.
Smooth connection method of segment test data in road surface profile measurement
NASA Astrophysics Data System (ADS)
Duan, Hu-Ming; Ma, Ying; Shi, Feng; Zhang, Kai-Bin; Xie, Fei
2012-01-01
It's reviewed that the measurement system of road surface profile and the calculation method of segment road test data have been introduced. Because of there are sudden vertical steps at the connection points of segment data which will influence the application of road surface data in automotive engineering. So a new smooth connection method of segment test data is proposed which revised the sudden vertical steps connection by the Signal Local Baseline Adjustment (SLBA) method. Besides, there is an actual example which mentioned the detailed process of the smooth connection of segment test data by the SLBA method and the adjusting results at these connection points. The application and calculation results show that the SLBA method is simple and has achieved obvious effect in smooth connection of the segment road test data. The method of SLBA can be widely applied to segment road surface data processing or the long period vibration signal processing.
NASA Astrophysics Data System (ADS)
Hassane Maina, Fadji; Ackerer, Philippe
2017-06-01
The solution of the mathematical model for flow in variably saturated porous media described by the Richards equation (RE) is subject to heavy numerical difficulties due to its highly nonlinear properties and remains very challenging. Two different algorithms are used in this work to solve the mixed form of RE: the traditional iterative algorithm and a time-adaptive algorithm consisting of changing the time-step magnitude within the iteration procedure while the nonlinear parameters are computed with the state variable at the previous time. The Ross method is an example of this type of scheme, and we show that it is equivalent to the Newton-Raphson method with a time-adaptive algorithm.Both algorithms are coupled to different time-stepping strategies: the standard heuristic approach based on the number of iterations and two strategies based on the time truncation error or on the change in water saturation. Three different test cases are used to evaluate the efficiency of these algorithms.The numerical results highlight the necessity of implementing an estimate of the time truncation errors.
NASA Astrophysics Data System (ADS)
Steward, David R.
2016-11-01
Recharge from surface to groundwater is an important component of the hydrological cycle, yet its rate is difficult to quantify. Percolation through two-dimensional circular inhomogeneities in the vadose zone is studied where one soil type is embedded within a uniform background, and nonlinear interface conditions in the quasilinear formulation are solved using Newton's method with the Analytic Element Method. This numerical laboratory identifies detectable variations in pathline and pressure head distributions that manifest due to a shift in recharge rate through in a heterogeneous media. Pathlines either diverge about or converge through coarser and finer grained materials with inverse patterns forming across lower and upper elevations; however, pathline geometry is not significantly altered by recharge. Analysis of pressure head in lower regions near groundwater identifies a new phenomenon: its distribution is not significantly impacted by an inhomogeneity soil type, nor by its placement nor by recharge rate. Another revelation is that pressure head for coarser grained inhomogeneities in upper regions is completely controlled by geometry and conductivity contrasts; a shift in recharge generates a difference Δp that becomes an additive constant with the same value throughout this region. In contrast, shifts in recharge for finer grained inhomogeneities reveal patterns with abrupt variations across their interfaces. Consequently, measurements aimed at detecting shifts in recharge in a heterogeneous vadose zone by deciphering the corresponding patterns of change in pressure head should focus on finer grained inclusions well above a groundwater table.
A new flux conserving Newton's method scheme for the two-dimensional, steady Navier-Stokes equations
NASA Technical Reports Server (NTRS)
Scott, James R.; Chang, Sin-Chung
1993-01-01
A new numerical method is developed for the solution of the two-dimensional, steady Navier-Stokes equations. The method that is presented differs in significant ways from the established numerical methods for solving the Navier-Stokes equations. The major differences are described. First, the focus of the present method is on satisfying flux conservation in an integral formulation, rather than on simulating conservation laws in their differential form. Second, the present approach provides a unified treatment of the dependent variables and their unknown derivatives. All are treated as unknowns together to be solved for through simulating local and global flux conservation. Third, fluxes are balanced at cell interfaces without the use of interpolation or flux limiters. Fourth, flux conservation is achieved through the use of discrete regions known as conservation elements and solution elements. These elements are not the same as the standard control volumes used in the finite volume method. Fifth, the discrete approximation obtained on each solution element is a functional solution of both the integral and differential form of the Navier-Stokes equations. Finally, the method that is presented is a highly localized approach in which the coupling to nearby cells is only in one direction for each spatial coordinate, and involves only the immediately adjacent cells. A general third-order formulation for the steady, compressible Navier-Stokes equations is presented, and then a Newton's method scheme is developed for the solution of incompressible, low Reynolds number channel flow. It is shown that the Jacobian matrix is nearly block diagonal if the nonlinear system of discrete equations is arranged approximately and a proper pivoting strategy is used. Numerical results are presented for Reynolds numbers of 100, 1000, and 2000. Finally, it is shown that the present scheme can resolve the developing channel flow boundary layer using as few as six to ten cells per channel
A high-order Immersed Boundary method for solving fluid problems on arbitrary smooth domains
NASA Astrophysics Data System (ADS)
Stein, David; Guy, Robert; Thomases, Becca
2015-11-01
We present a robust, flexible, and high-order Immersed Boundary method for solving the equations of fluid motion on domains with smooth boundaries using FFT-based spectral methods. The solution to the PDE is coupled with an equation for a smooth extension of the unknown solution; high-order accuracy is a natural consequence of this additional global regularity. The method retains much of the simplicity of the original Immersed Boundary method, and enables the use of simple implicit and implicit/explicit timestepping schemes to be used to solve a wide range of problems. We show results for the Stokes, Navier-Stokes, and Oldroyd-B equations.
Comparison of Exponential Smoothing Methods in Forecasting Palm Oil Real Production
NASA Astrophysics Data System (ADS)
Siregar, B.; Butar-Butar, I. A.; Rahmat, RF; Andayani, U.; Fahmi, F.
2017-01-01
Palm oil has important role for the plantation subsector. Forecasting of the real palm oil production in certain period is needed by plantation companies to maintain their strategic management. This study compared several methods based on exponential smoothing (ES) technique such as single ES, double exponential smoothing holt, triple exponential smoothing, triple exponential smoothing additive and multiplicative to predict the palm oil production. We examined the accuracy of forecasting models of production data and analyzed the characteristics of the models. Programming language R was used with selected constants for double ES (α and β) and triple ES (α, β, and γ) evaluated by the technique of minimizing the root mean squared prediction error (RMSE). Our result showed that triple ES additives had lowest error rate compared to the other models with RMSE of 0.10 with a combination of parameters α = 0.6, β = 0.02, and γ = 0.02.
McHugh, P.R.; Knoll, D.A.
1992-01-01
A fully implicit solution algorithm based on Newton's method is used to solve the steady, incompressible Navier-Stokes and energy equations. An efficiently evaluated numerical Jacobian is used to simplify implementation, and mesh sequencing is used to increase the radius of convergence of the algorithm. We employ finite volume discretization using the power law scheme of Patankar to solve the benchmark backward facing step problem defined by the ASME K-12 Aerospace Heat Transfer Committee. LINPACK banded Gaussian elimination and the preconditioned transpose-free quasi-minimal residual (TFQMR) algorithm of Freund are studied as possible linear equation solvers. Implementation of the preconditioned TFQMR algorithm requires use of the switched evolution relaxation algorithm of Mulder and Van Leer to ensure convergence. The preconditioned TFQMR algorithm is more memory efficient than the direct solver, but our implementation is not as CPU efficient. Results show that for the level of grid refinement used, power law differencing was not adequate to yield the desired accuracy for this problem.
Nonequilibrium flows with smooth particle applied mechanics
Kum, Oyeon
1995-07-01
Smooth particle methods are relatively new methods for simulating solid and fluid flows through they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expression for (uρ) and (Tρ), does not show numerical instabilities for the stress tensor and heat flux vector quantities which require second derivatives in space when Fourier`s heat-flow law and Newton`s viscous force law are used. Smooth particle methods show an interesting parallel linking to them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions, B spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh-Benard problems, all in the laminar regime, to corresponding highly-accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fails.
A Pragmatic Smoothing Method for Improving the Quality of the Results in Atomic Spectroscopy
NASA Astrophysics Data System (ADS)
Bennun, Leonardo
2017-07-01
A new smoothing method for the improvement on the identification and quantification of spectral functions based on the previous knowledge of the signals that are expected to be quantified, is presented. These signals are used as weighted coefficients in the smoothing algorithm. This smoothing method was conceived to be applied in atomic and nuclear spectroscopies preferably to these techniques where net counts are proportional to acquisition time, such as particle induced X-ray emission (PIXE) and other X-ray fluorescence spectroscopic methods, etc. This algorithm, when properly applied, does not distort the form nor the intensity of the signal, so it is well suited for all kind of spectroscopic techniques. This method is extremely effective at reducing high-frequency noise in the signal much more efficient than a single rectangular smooth of the same width. As all of smoothing techniques, the proposed method improves the precision of the results, but in this case we found also a systematic improvement on the accuracy of the results. We still have to evaluate the improvement on the quality of the results when this method is applied over real experimental results. We expect better characterization of the net area quantification of the peaks, and smaller Detection and Quantification Limits. We have applied this method to signals that obey Poisson statistics, but with the same ideas and criteria, it could be applied to time series. In a general case, when this algorithm is applied over experimental results, also it would be required that the sought characteristic functions, required for this weighted smoothing method, should be obtained from a system with strong stability. If the sought signals are not perfectly clean, this method should be carefully applied
Newton's method as applied to the Riemann problem for media with general equations of state
NASA Astrophysics Data System (ADS)
Moiseev, N. Ya.; Mukhamadieva, T. A.
2008-06-01
An approach based on Newton’s method is proposed for solving the Riemann problem for media with normal equations of state. The Riemann integrals are evaluated using a cubic approximation of an isentropic curve that is superior to the Simpson method in terms of accuracy, convergence rate, and efficiency. The potentials of the approach are demonstrated by solving problems for media obeying the Mie-Grüneisen equation of state. The algebraic equation of the isentropic curve and some exact solutions for configurations with rarefaction waves are explicitly given.
NASA Astrophysics Data System (ADS)
Neighbors, C.; Cochran, E. S.; Ryan, K. J.; Kaiser, A. E.
2015-12-01
The seismic spectrum can be modeled by assuming a Brune spectrum and estimating the parameters of seismic moment (M0), corner frequency (fc), and the high frequency site attenuation (κ). Traditionally studies either hold fixed or use a predefined set of trial values for one of the parameters (e.g., fc) and then solve for the remaining parameters. Here, we use the Gauss-Newton nonlinear least-squares method to simultaneously determine the M0, fc, and high-frequency κ for each event-station pair. We use data collected during the Canterbury, New Zealand earthquake sequence. The seismic stations include the permanent GeoNet accelerometer network as well as a dense network of nearly 200 Quake-Catcher Network (QCN) MEMs accelerometers installed following the 3 September 2010 M 7.1 Darfield earthquake. We examine over 180 aftershocks ≥ Mw3.5 that occurred from 9 September 2010 to 31 July 2011 and are captured by both networks. We use Fourier-transformed S-wave windows that include 80% of the S-wave energy and fit the acceleration spectra between 0.5 and 20 Hz. We apply a path and site correction to the data as described in Oth and Kaiser (2014). Then, the records are smoothed using a Konno and Omachi (1998) filter and uniformly resampled in log space. Initial "best guesses" for M0 and fc are determined from GNS catalog magnitudes and by assuming a 100 bar (10 MPa) stress drop and an initial κ is determined from an automated high-frequency fit method. We use a parametric inversion technique that requires a single M0 and fc for each event, while κ is allowed to vary by station to reflect varying site conditions. Final solutions for M0, fc, and κ are iteratively calculated by minimizing the residual function. After Brune (1970, 1971), the stress drop is determined from the best-fit fc. Moment magnitudes determined agree well with the GNS catalog, with a median difference of 0.12 Mw and 0.20 Mw for GeoNet and QCN inversions, respectively. Stress drop results are within
A Newton/upwind method and numerical study of shock wave/boundary layer interactions
NASA Technical Reports Server (NTRS)
Liou, Meng-Sing
1989-01-01
The objective of the paper is two-fold. First, an upwind/central differencing method for solving the steady Navier-Stokes equations is described. The symmetric line relation method is used to solve the resulting algebraic system to achieve high computational efficiency. The grid spacings used in the calculations are determined from the triple-deck theory, in terms of Mach and Reynolds numbers and other flow parameters. Thus the accuracy of the numerical solutions is improved by comparing them with experimental, analytical, and other computational results. Secondly, the shock wave/boundary layer interactions are studied numerically, with special attention given to the flow separation. The concept of free interaction is confirmed. Although the separated region varies with Mach and Reynolds numbers, it is found that the transverse velocity component behind the incident shock, which has not been identified heretofore, is also an important parameter. A small change of this quantity is sufficient to eliminate the flow separation entirely.
A Newton-Raphson Method Approach to Adjusting Multi-Source Solar Simulators
NASA Technical Reports Server (NTRS)
Snyder, David B.; Wolford, David S.
2012-01-01
NASA Glenn Research Center has been using an in house designed X25 based multi-source solar simulator since 2003. The simulator is set up for triple junction solar cells prior to measurements b y adjusting the three sources to produce the correct short circuit current, lsc, in each of three AM0 calibrated sub-cells. The past practice has been to adjust one source on one sub-cell at a time, iterating until all the sub-cells have the calibrated Isc. The new approach is to create a matrix of measured lsc for small source changes on each sub-cell. A matrix, A, is produced. This is normalized to unit changes in the sources so that Ax(delta)s = (delta)isc. This matrix can now be inverted and used with the known Isc differences from the AM0 calibrated values to indicate changes in the source settings, (delta)s = A ·'x.(delta)isc This approach is still an iterative one, but all sources are changed during each iteration step. It typically takes four to six steps to converge on the calibrated lsc values. Even though the source lamps may degrade over time, the initial matrix evaluation i s not performed each time, since measurement matrix needs to be only approximate. Because an iterative approach is used the method will still continue to be valid. This method may become more important as state-of-the-art solar cell junction responses overlap the sources of the simulator. Also, as the number of cell junctions and sources increase, this method should remain applicable.
Newton'Principia for the Common Reader
NASA Astrophysics Data System (ADS)
Chandrasekhar, S.
1995-07-01
Representing a decade's work from one of the world's most distinguished physicists, this major publication is, as far as is known, the first comprehensive analysis of Newton's Principia without recourse to secondary sources. Chandrasekhar analyses some 150 propositions which form a direct chain leading to Newton's formulation of his universal law of gravitation. In each case, Newton's proofs are arranged in a linear sequence of equations and arguments, avoiding the need to unravel the necessarily convoluted style of Newton's connected prose. In almost every case, a modern version of the proofs is given to bring into sharp focus the beauty, clarity, and breathtaking economy of Newton's methods. This book will stimulate great interest and debate among the scientific community, illuminating the brilliance of Newton's work under the steady gaze of Chandrasekhar's rare perception.
NASA Astrophysics Data System (ADS)
Li, Jiang-Tao; Decourchelle, Anne; Miceli, Marco; Vink, Jacco; Bocchino, Fabrizio
2015-11-01
Based on our newly developed methods and the XMM-Newton large program of SN1006, we extract and analyse the spectra from 3596 tessellated regions of this supernova remnant (SNR) each with 0.3-8 keV counts >104. For the first time, we map out multiple physical parameters, such as the temperature (kT), electron density (ne), ionization parameter (net), ionization age (tion), metal abundances, as well as the radio-to-X-ray slope (α) and cutoff frequency (νcutoff) of the synchrotron emission. We construct probability distribution functions of kT and net, and model them with several Gaussians, in order to characterize the average thermal and ionization states of such an extended source. We construct equivalent width (EW) maps based on continuum interpolation with the spectral model of each region. We then compare the EW maps of O VII, O VIII, O VII Kδ - ζ, Ne, Mg, Si XIII, Si XIV, and S lines constructed with this method to those constructed with linear interpolation. We further extract spectra from larger regions to confirm the features revealed by parameter and EW maps, which are often not directly detectable on X-ray intensity images. For example, O abundance is consistent with solar across the SNR, except for a low-abundance hole in the centre. This `O hole' has enhanced O VII Kδ - ζ and Fe emissions, indicating recently reverse shocked ejecta, but also has the highest net, indicating forward shocked interstellar medium (ISM). Therefore, a multitemperature model is needed to decompose these components. The asymmetric metal distributions suggest there is either an asymmetric explosion of the supernova or an asymmetric distribution of the ISM.
A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems
Zhang Guiyong; Liu Guirong
2010-05-21
In the framework of a weakened weak (W{sup 2}) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W{sup 2} formulation seeks solutions from a normed G space which includes both continuous and discontinuous functions and allows the use of much more types of methods to create shape functions for numerical methods. When PIM shape functions are used, the functions constructed are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H{sup 1} space, but in a G{sup 1} space. By introducing the generalized gradient smoothing operation properly, the requirement on function is now further weakened upon the already weakened requirement for functions in a H{sup 1} space and G{sup 1} space can be viewed as a space of functions with weakened weak (W{sup 2}) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W{sup 2} formulation, in which displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W{sup 2} formulation of generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations. It was found that the CS-PIM possesses the following attractive properties: (1) It is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) this method is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much
NASA Astrophysics Data System (ADS)
Bork, Alfred
1987-12-01
The publication of Isaac Newton's ``notions about motion'' 300 years ago was a major moment in the history of science. In the period just before 1687 Newton's correspondence was much concerned with comets. In this period two bright comets were seen. These comets appear to have been a major stimulation to Newton's work on mechanics.
Computing Modified Newton Directions Using a Partial Cholesky Factorization.
1993-03-01
Newton method for unconstrained minimization’ Department of Operations Research, Stanford University, 1989. Any opinions, findings, and conclusions or...Technical Report SOL 93-1 § March 1993 Abstract The effectiveness of Newton’s method for finding an unconstrained mini- mizer of a strictly convex...twice continuously differentiable function has prompted the proposal of various modified Newton methods for the nonconvex case. Linesearch modified Newton
Is Newton's second law really Newton's?
NASA Astrophysics Data System (ADS)
Pourciau, Bruce
2011-10-01
When we call the equation f = ma "Newton's second law," how much historical truth lies behind us? Many textbooks on introductory physics and classical mechanics claim that the Principia's second law becomes f = ma, once Newton's vocabulary has been translated into more familiar terms. But there is nothing in the Principia's second law about acceleration and nothing about a rate of change. If the Principia's second law does not assert f = ma, what does it assert, and is there some other axiom or some proposition in the Principia that does assert f = ma? Is there any historical truth behind us when we call f = ma "Newton's second law"? This article answers these questions.
A Novel Method for Modeling Neumann and Robin Boundary Conditions in Smoothed Particle Hydrodynamics
Ryan, Emily M.; Tartakovsky, Alexandre M.; Amon, Cristina
2010-08-26
In this paper we present an improved method for handling Neumann or Robin boundary conditions in smoothed particle hydrodynamics. The Neumann and Robin boundary conditions are common to many physical problems (such as heat/mass transfer), and can prove challenging to model in volumetric modeling techniques such as smoothed particle hydrodynamics (SPH). A new SPH method for diffusion type equations subject to Neumann or Robin boundary conditions is proposed. The new method is based on the continuum surface force model [1] and allows an efficient implementation of the Neumann and Robin boundary conditions in the SPH method for geometrically complex boundaries. The paper discusses the details of the method and the criteria needed to apply the model. The model is used to simulate diffusion and surface reactions and its accuracy is demonstrated through test cases for boundary conditions describing different surface reactions.
NASA Astrophysics Data System (ADS)
Zhao, Yuanyuan; Jiang, Guoliang; Hu, Jiandong; Hu, Fengjiang; Wei, Jianguang; Shi, Liang
2010-10-01
of biomolecular interaction by using Newton Iteration Method and Least Squares Method. First, the pseudo first order kinetic model of biomolecular interaction was established. Then the data of molecular interaction of HBsAg and HBsAb was obtained by bioanalyzer. Finally, we used the optical SPR bioanalyzer software which was written by ourselves to make nonlinear fit about the association and dissociation curves. The correlation coefficient R-squared is 0.99229 and 0.99593, respectively. Furthermore, the kinetic parameters and affinity constants were evaluated using the obtained data from the fitting results.
A method for smoothing segmented lung boundary in chest CT images
NASA Astrophysics Data System (ADS)
Yim, Yeny; Hong, Helen
2007-03-01
To segment low density lung regions in chest CT images, most of methods use the difference in gray-level value of pixels. However, radiodense pulmonary vessels and pleural nodules that contact with the surrounding anatomy are often excluded from the segmentation result. To smooth lung boundary segmented by gray-level processing in chest CT images, we propose a new method using scan line search. Our method consists of three main steps. First, lung boundary is extracted by our automatic segmentation method. Second, segmented lung contour is smoothed in each axial CT slice. We propose a scan line search to track the points on lung contour and find rapidly changing curvature efficiently. Finally, to provide consistent appearance between lung contours in adjacent axial slices, 2D closing in coronal plane is applied within pre-defined subvolume. Our method has been applied for performance evaluation with the aspects of visual inspection, accuracy and processing time. The results of our method show that the smoothness of lung contour was considerably increased by compensating for pulmonary vessels and pleural nodules.
Melnikov Method for a Three-Zonal Planar Hybrid Piecewise-Smooth System and Application
NASA Astrophysics Data System (ADS)
Li, Shuangbao; Ma, Wensai; Zhang, Wei; Hao, Yuxin
In this paper, we extend the well-known Melnikov method for smooth systems to a class of planar hybrid piecewise-smooth systems, defined in three domains separated by two switching manifolds x = a and x = b. The dynamics in each domain is governed by a smooth system. When an orbit reaches the separation lines, then a reset map describing an impacting rule applies instantaneously before the orbit enters into another domain. We assume that the unperturbed system has a continuum of periodic orbits transversally crossing the separation lines. Then, we wish to study the persistence of the periodic orbits under an autonomous perturbation and the reset map. To achieve this objective, we first choose four appropriate switching sections and build a Poincaré map, after that, we present a displacement function and carry on the Taylor expansion of the displacement function to the first-order in the perturbation parameter ɛ near ɛ = 0. We denote the first coefficient in the expansion as the first-order Melnikov function whose zeros provide us the persistence of periodic orbits under perturbation. Finally, we study periodic orbits of a concrete planar hybrid piecewise-smooth system by the obtained Melnikov function.
NASA Astrophysics Data System (ADS)
Divakov, D.; Sevastianov, L.; Nikolaev, N.
2017-01-01
The paper deals with a numerical solution of the problem of waveguide propagation of polarized light in smoothly-irregular transition between closed regular waveguides using the incomplete Galerkin method. This method consists in replacement of variables in the problem of reduction of the Helmholtz equation to the system of differential equations by the Kantorovich method and in formulation of the boundary conditions for the resulting system. The formulation of the boundary problem for the ODE system is realized in computer algebra system Maple. The stated boundary problem is solved using Maples libraries of numerical methods.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Demonstrating Compliance With 40 CFR 60.43 for the Newton Power Station of Central Illinois Public Service... for the Newton Power Station of Central Illinois Public Service Company 1. Designation of Affected...) Newton Power Station in Jasper County, Illinois. Each of these units is subject to the Standards...
Xiao, Zhu; Havyarimana, Vincent; Li, Tong; Wang, Dong
2016-01-01
In this paper, a novel nonlinear framework of smoothing method, non-Gaussian delayed particle smoother (nGDPS), is proposed, which enables vehicle state estimation (VSE) with high accuracy taking into account the non-Gaussianity of the measurement and process noises. Within the proposed method, the multivariate Student’s t-distribution is adopted in order to compute the probability distribution function (PDF) related to the process and measurement noises, which are assumed to be non-Gaussian distributed. A computation approach based on Ensemble Kalman Filter (EnKF) is designed to cope with the mean and the covariance matrix of the proposal non-Gaussian distribution. A delayed Gibbs sampling algorithm, which incorporates smoothing of the sampled trajectories over a fixed-delay, is proposed to deal with the sample degeneracy of particles. The performance is investigated based on the real-world data, which is collected by low-cost on-board vehicle sensors. The comparison study based on the real-world experiments and the statistical analysis demonstrates that the proposed nGDPS has significant improvement on the vehicle state accuracy and outperforms the existing filtering and smoothing methods. PMID:27187405
HyeongKae Park; Robert R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
We present high-order accurate spatiotemporal discretization of all-speed flow solvers using Jacobian-free Newton Krylov framework. One of the key developments in this work is the physics-based preconditioner for the all-speed flow, which makes use of traditional semi-implicit schemes. The physics-based preconditioner is developed in the primitive variable form, which allows a straightforward separation of physical phenomena. Numerical examples demonstrate that the developed preconditioner effectively reduces the number of the Krylov iterations, and the efficiency is independent of the Mach number and mesh sizes under a fixed CFL condition.
Kerfriden, P.; Gosselet, P.; Adhikari, S.; Bordas, S.
2013-01-01
This article describes a bridge between POD-based model order reduction techniques and the classical Newton/Krylov solvers. This bridge is used to derive an efficient algorithm to correct, “on-the-fly”, the reduced order modelling of highly nonlinear problems undergoing strong topological changes. Damage initiation problems are addressed and tackle via a corrected hyperreduction method. It is shown that the relevancy of reduced order model can be significantly improved with reasonable additional costs when using this algorithm, even when strong topological changes are involved. PMID:27076688
NASA Astrophysics Data System (ADS)
Chew, J. V. L.; Sulaiman, J.
2016-06-01
This paper considers Newton-MSOR iterative method for solving 1D nonlinear porous medium equation (PME). The basic concept of proposed iterative method is derived from a combination of one step nonlinear iterative method which known as Newton method with Modified Successive Over Relaxation (MSOR) method. The reliability of Newton-MSOR to obtain approximate solution for several PME problems is compared with Newton-Gauss-Seidel (Newton-GS) and Newton-Successive Over Relaxation (Newton-SOR). In this paper, the formulation and implementation of these three iterative methods have also been presented. From four examples of PME problems, numerical results showed that Newton-MSOR method requires lesser number of iterations and computational time as compared with Newton-GS and Newton-SOR methods.
NASA Astrophysics Data System (ADS)
Raymond, Samuel J.; Jones, Bruce; Williams, John R.
2016-12-01
A strategy is introduced to allow coupling of the material point method (MPM) and smoothed particle hydrodynamics (SPH) for numerical simulations. This new strategy partitions the domain into SPH and MPM regions, particles carry all state variables and as such no special treatment is required for the transition between regions. The aim of this work is to derive and validate the coupling methodology between MPM and SPH. Such coupling allows for general boundary conditions to be used in an SPH simulation without further augmentation. Additionally, as SPH is a purely particle method, and MPM is a combination of particles and a mesh. This coupling also permits a smooth transition from particle methods to mesh methods, where further coupling to mesh methods could in future provide an effective farfield boundary treatment for the SPH method. The coupling technique is introduced and described alongside a number of simulations in 1D and 2D to validate and contextualize the potential of using these two methods in a single simulation. The strategy shown here is capable of fully coupling the two methods without any complicated algorithms to transform information from one method to another.
ERIC Educational Resources Information Center
Ryder, L. H.
1987-01-01
Discusses the history of scientific thought in terms of the theories of inertia and absolute space, relativity and gravitation. Describes how Sir Isaac Newton used the work of earlier scholars in his theories and how Albert Einstein used Newton's theories in his. (CW)
Experiments with "Newton's Cradle."
ERIC Educational Resources Information Center
Ehrlich, Robert
1996-01-01
Outlines the use of the toy popularly known as Newton's Cradle or Newton's Balls in illustrating the laws of conservation of momentum and mechanical energy. Discusses in detail the joint effects of elasticity, friction, and ball alignment on the rate of damping of this apparatus. (JRH)
ERIC Educational Resources Information Center
Ryder, L. H.
1987-01-01
Discusses the history of scientific thought in terms of the theories of inertia and absolute space, relativity and gravitation. Describes how Sir Isaac Newton used the work of earlier scholars in his theories and how Albert Einstein used Newton's theories in his. (CW)
Salinger, Andy; Evans, Katherine J; Lemieux, Jean-Francois; Holland, David; Payne, Tony; Price, Stephen; Knoll, Dana
2011-01-01
We have implemented the Jacobian-free Newton-Krylov (JFNK) method for solving the rst-order ice sheet momentum equation in order to improve the numerical performance of the Community Ice Sheet Model (CISM), the land ice component of the Community Earth System Model (CESM). Our JFNK implementation is based on signicant re-use of existing code. For example, our physics-based preconditioner uses the original Picard linear solver in CISM. For several test cases spanning a range of geometries and boundary conditions, our JFNK implementation is 1.84-3.62 times more efficient than the standard Picard solver in CISM. Importantly, this computational gain of JFNK over the Picard solver increases when rening the grid. Global convergence of the JFNK solver has been signicantly improved by rescaling the equation for the basal boundary condition and through the use of an inexact Newton method. While a diverse set of test cases show that our JFNK implementation is usually robust, for some problems it may fail to converge with increasing resolution (as does the Picard solver). Globalization through parameter continuation did not remedy this problem and future work to improve robustness will explore a combination of Picard and JFNK and the use of homotopy methods.
An immersed boundary method for smoothed particle hydrodynamics of self-propelled swimmers
NASA Astrophysics Data System (ADS)
Hieber, S. E.; Koumoutsakos, P.
2008-10-01
We present a novel particle method, combining remeshed Smoothed Particle Hydrodynamics with Immersed Boundary and Level Set techniques for the simulation of flows past complex deforming geometries. The present method retains the Lagrangian adaptivity of particle methods and relies on the remeshing of particle locations in order to ensure the accuracy of the method. In fact this remeshing step enables the introduction of Immersed Boundary Techniques used in grid based methods. The method is applied to simulations of flows of isothermal and compressible fluids past steady and unsteady solid boundaries that are described using a particle Level Set formulation. The method is validated with two and three-dimensional benchmark problems of flows past cylinders and spheres and it is shown to be well suited to simulations of large scale simulations using tens of millions of particles, on flow-structure interaction problems as they pertain to self-propelled anguilliform swimmers.
Bramble, J. H.; Pasciak, J. E.; Sammon, P. H.; Thomee, V.
1989-04-01
Backward difference methods for the discretization of parabolic boundary value problems are considered in this paper. In particular, we analyze the case when the backward difference equations are only solved 'approximately' by a preconditioned iteration. We provide an analysis which shows that these methods remain stable and accurate if a suitable number of iterations (often independent of the spatial discretization and time step size) are used. Results are provided for the smooth as well as nonsmooth initial data cases. Finally, the results of numerical experiments illustrating the algorithms' performance on model problems are given.
NASA Astrophysics Data System (ADS)
Dai, Meng-Xue; Chen, Jing-Bo; Cao, Jian
2017-07-01
Full-waveform inversion (FWI) is an ill-posed optimization problem which is sensitive to noise and initial model. To alleviate the ill-posedness of the problem, regularization techniques are usually adopted. The ℓ1-norm penalty is a robust regularization method that preserves contrasts and edges. The Orthant-Wise Limited-Memory Quasi-Newton (OWL-QN) method extends the widely-used limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method to the ℓ1-regularized optimization problems and inherits the efficiency of L-BFGS. To take advantage of the ℓ1-regularized method and the prior model information obtained from sonic logs and geological information, we implement OWL-QN algorithm in ℓ1-regularized FWI with prior model information in this paper. Numerical experiments show that this method not only improve the inversion results but also has a strong anti-noise ability.
NASA Astrophysics Data System (ADS)
Preza, Chrysanthe; Miller, Michael I.; Conchello, Jose-Angel
1993-07-01
We have shown that the linear least-squares (LLS) estimate of the intensities of a 3-D object obtained from a set of optical sections is unstable due to the inversion of small and zero-valued eigenvalues of the point-spread function (PSF) operator. The LLS solution was regularized by constraining it to lie in a subspace spanned by the eigenvectors corresponding to a selected number of the largest eigenvalues. In this paper we extend the regularized LLS solution to a maximum a posteriori (MAP) solution induced by a prior formed from a 'Good's like' smoothness penalty. This approach also yields a regularized linear estimator which reduces noise as well as edge artifacts in the reconstruction. The advantage of the linear MAP (LMAP) estimate over the current regularized LLS (RLLS) is its ability to regularize the inverse problem by smoothly penalizing components in the image associated with small eigenvalues. Computer simulations were performed using a theoretical PSF and a simple phantom to compare the two regularization techniques. It is shown that the reconstructions using the smoothness prior, give superior variance and bias results compared to the RLLS reconstructions. Encouraging reconstructions obtained with the LMAP method from real microscopical images of a 10 micrometers fluorescent bead, and a four-cell Volvox embryo are shown.
A Fast Variational Method for the Construction of Resolution Adaptive C-Smooth Molecular Surfaces.
Bajaj, Chandrajit L; Xu, Guoliang; Zhang, Qin
2009-05-01
We present a variational approach to smooth molecular (proteins, nucleic acids) surface constructions, starting from atomic coordinates, as available from the protein and nucleic-acid data banks. Molecular dynamics (MD) simulations traditionally used in understanding protein and nucleic-acid folding processes, are based on molecular force fields, and require smooth models of these molecular surfaces. To accelerate MD simulations, a popular methodology is to employ coarse grained molecular models, which represent clusters of atoms with similar physical properties by psuedo- atoms, resulting in coarser resolution molecular surfaces. We consider generation of these mixed-resolution or adaptive molecular surfaces. Our approach starts from deriving a general form second order geometric partial differential equation in the level-set formulation, by minimizing a first order energy functional which additionally includes a regularization term to minimize the occurrence of chemically infeasible molecular surface pockets or tunnel-like artifacts. To achieve even higher computational efficiency, a fast cubic B-spline C(2) interpolation algorithm is also utilized. A narrow band, tri-cubic B-spline level-set method is then used to provide C(2) smooth and resolution adaptive molecular surfaces.
Nonequilibrium Flows with Smooth Particle Applied Mechanics.
NASA Astrophysics Data System (ADS)
Kum, Oyeon
Smooth particle methods are relatively new methods for simulating solid and fluid flows though they have a 20-year history of solving complex hydrodynamic problems in astrophysics, such as colliding planets and stars, for which correct answers are unknown. The results presented in this thesis evaluate the adaptability or fitness of the method for typical hydrocode production problems. For finite hydrodynamic systems, boundary conditions are important. A reflective boundary condition with image particles is a good way to prevent a density anomaly at the boundary and to keep the fluxes continuous there. Boundary values of temperature and velocity can be separately controlled. The gradient algorithm, based on differentiating the smooth particle expressions for (urho) and (Trho), does not show numerical instabilities for the stress tensor and heat flux vector quantities which require second derivatives in space when Fourier's heat -flow law and Newton's viscous force law are used. Smooth particle methods show an interesting parallel linking them to molecular dynamics. For the inviscid Euler equation, with an isentropic ideal gas equation of state, the smooth particle algorithm generates trajectories isomorphic to those generated by molecular dynamics. The shear moduli were evaluated based on molecular dynamics calculations for the three weighting functions, B spline, Lucy, and Cusp functions. The accuracy and applicability of the methods were estimated by comparing a set of smooth particle Rayleigh -Benard problems, all in the laminar regime, to corresponding highly-accurate grid-based numerical solutions of continuum equations. Both transient and stationary smooth particle solutions reproduce the grid-based data with velocity errors on the order of 5%. The smooth particle method still provides robust solutions at high Rayleigh number where grid-based methods fails. Considerably fewer smooth particles are required than atoms in a corresponding molecular dynamics
Luanjing Guo; Hai Huang; Derek Gaston; Cody Permann; David Andrs; George Redden; Chuan Lu; Don Fox; Yoshiko Fujita
2013-03-01
Modeling large multicomponent reactive transport systems in porous media is particularly challenging when the governing partial differential algebraic equations (PDAEs) are highly nonlinear and tightly coupled due to complex nonlinear reactions and strong solution-media interactions. Here we present a preconditioned Jacobian-Free Newton-Krylov (JFNK) solution approach to solve the governing PDAEs in a fully coupled and fully implicit manner. A well-known advantage of the JFNK method is that it does not require explicitly computing and storing the Jacobian matrix during Newton nonlinear iterations. Our approach further enhances the JFNK method by utilizing physics-based, block preconditioning and a multigrid algorithm for efficient inversion of the preconditioner. This preconditioning strategy accounts for self- and optionally, cross-coupling between primary variables using diagonal and off-diagonal blocks of an approximate Jacobian, respectively. Numerical results are presented demonstrating the efficiency and massive scalability of the solution strategy for reactive transport problems involving strong solution-mineral interactions and fast kinetics. We found that the physics-based, block preconditioner significantly decreases the number of linear iterations, directly reducing computational cost; and the strongly scalable algebraic multigrid algorithm for approximate inversion of the preconditioner leads to excellent parallel scaling performance.
Celestial mechanics since Newton
NASA Astrophysics Data System (ADS)
Szebehely, Victor
With Newton's law of gravitation and laws of motion the science of celestial mechanics obtained its beginning and its fundamental principles and rules. The challenges were presented by Poincaré 200 years later with the principle of non-integrability of the gravitational problem of three or more bodies. Most of the advances in our field belong to one of the following categories: (1)|formulation of problems and solution techniques; (2)|direct and inverse approaches; (3)|stability. The analytical approaches, also known as general perturbation methods appear already in the Principia, often hidden behind geometrical presentations. Hamiltonian dynamics, transformations, topological techniques, bifurcation and mapping are today parts of analytical celestial mechanics. The significant advances in numerical techniques reached the level of brain-computer interfaces, algebraic manipulations and the demonstration of chaotic behaviour of dynamical systems. Newton was concerned with describing the field of gravitation and then establishing orbits in this field. Following him, the problems of inverse dynamics became considerably more refined when higher order gravitational coefficients were being established by satellite orbit observations. The direct problems are today culminating in meaningful orbit predictions of the outer solar system for 10 8 years. With increased observational accuracy, the formulation of the models used has changed considerably and cosmology, stellar dynamics, and dissipative systems became subjects of celestial mechanics. Successful predictions of the qualitative and quantitative behaviour of dynamical systems involved in celestial mechanics depend significantly on the stability characteristics of the problems studied. Recognition of the uncertainties and limited validity of our predictions is probably one of the greatest advances in today's celestial mechanics. This paper is concluded with a short historical review of our giants and of their
The multiscale restriction smoothed basis method for fractured porous media (F-MsRSB)
NASA Astrophysics Data System (ADS)
Shah, Swej; Møyner, Olav; Tene, Matei; Lie, Knut-Andreas; Hajibeygi, Hadi
2016-08-01
A novel multiscale method for multiphase flow in heterogeneous fractured porous media is devised. The discrete fine-scale system is described using an embedded fracture modeling approach, in which the heterogeneous rock (matrix) and highly-conductive fractures are represented on independent grids. Given this fine-scale discrete system, the method first partitions the fine-scale volumetric grid representing the matrix and the lower-dimensional grids representing fractures into independent coarse grids. Then, basis functions for matrix and fractures are constructed by restricted smoothing, which gives a flexible and robust treatment of complex geometrical features and heterogeneous coefficients. From the basis functions one constructs a prolongation operator that maps between the coarse- and fine-scale systems. The resulting method allows for general coupling of matrix and fracture basis functions, giving efficient treatment of a large variety of fracture conductivities. In addition, basis functions can be adaptively updated using efficient global smoothing strategies to account for multiphase flow effects. The method is conservative and because it is described and implemented in algebraic form, it is straightforward to employ it to both rectilinear and unstructured grids. Through a series of challenging test cases for single and multiphase flow, in which synthetic and realistic fracture maps are combined with heterogeneous petrophysical matrix properties, we validate the method and conclude that it is an efficient and accurate approach for simulating flow in complex, large-scale, fractured media.
Cen, Guanjun; Yu, Yonghao; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao
2015-01-01
In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks' rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby's growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods.
Cen, Guanjun; Zeng, Xianru; Long, Xiuzhen; Wei, Dewei; Gao, Xuyuan; Zeng, Tao
2015-01-01
In insects, the frequency distribution of the measurements of sclerotized body parts is generally used to classify larval instars and is characterized by a multimodal overlap between instar stages. Nonparametric methods with fixed bandwidths, such as histograms, have significant limitations when used to fit this type of distribution, making it difficult to identify divisions between instars. Fixed bandwidths have also been chosen somewhat subjectively in the past, which is another problem. In this study, we describe an adaptive kernel smoothing method to differentiate instars based on discontinuities in the growth rates of sclerotized insect body parts. From Brooks’ rule, we derived a new standard for assessing the quality of instar classification and a bandwidth selector that more accurately reflects the distributed character of specific variables. We used this method to classify the larvae of Austrosimulium tillyardianum (Diptera: Simuliidae) based on five different measurements. Based on head capsule width and head capsule length, the larvae were separated into nine instars. Based on head capsule postoccipital width and mandible length, the larvae were separated into 8 instars and 10 instars, respectively. No reasonable solution was found for antennal segment 3 length. Separation of the larvae into nine instars using head capsule width or head capsule length was most robust and agreed with Crosby’s growth rule. By strengthening the distributed character of the separation variable through the use of variable bandwidths, the adaptive kernel smoothing method could identify divisions between instars more effectively and accurately than previous methods. PMID:26546689
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for nonlinear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form preconditioner for matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-01
The explicit and semi-implicit schemes in flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates. Implicit schemes can be used to overcome these restrictions, but implementing them to solve the Navier-Stokes equations is not straightforward due to their non-linearity. Among the implicit schemes for non-linear equations, Newton-based techniques are preferred over fixed-point techniques because of their high convergence rate but each Newton iteration is more expensive than a fixed-point iteration. Krylov subspace methods are one of the most advanced iterative methods that can be combined with Newton methods, i.e., Newton-Krylov Methods (NKMs) to solve non-linear systems of equations. The success of NKMs vastly depends on the scheme for forming the Jacobian, e.g., automatic differentiation is very expensive, and matrix-free methods without a preconditioner slow down as the mesh is refined. A novel, computationally inexpensive analytical Jacobian for NKM is developed to solve unsteady incompressible Navier-Stokes momentum equations on staggered overset-curvilinear grids with immersed boundaries. Moreover, the analytical Jacobian is used to form a preconditioner for matrix-free method in order to improve its performance. The NKM with the analytical Jacobian was validated and verified against Taylor-Green vortex, inline oscillations of a cylinder in a fluid initially at rest, and pulsatile flow in a 90 degree bend. The capability of the method in handling complex geometries with multiple overset grids and immersed boundaries is shown by simulating an intracranial aneurysm. It was shown that the NKM with an analytical Jacobian is 1.17 to 14.77 times faster than the fixed-point Runge-Kutta method, and 1.74 to 152.3 times (excluding an intensively stretched grid) faster than automatic differentiation depending on the grid (size) and the flow problem. In addition, it was shown that using only the
NASA Astrophysics Data System (ADS)
Viallet, M.; Goffrey, T.; Baraffe, I.; Folini, D.; Geroux, C.; Popov, M. V.; Pratt, J.; Walder, R.
2016-02-01
This work is a continuation of our efforts to develop an efficient implicit solver for multidimensional hydrodynamics for the purpose of studying important physical processes in stellar interiors, such as turbulent convection and overshooting. We present an implicit solver that results from the combination of a Jacobian-free Newton-Krylov method and a preconditioning technique tailored to the inviscid, compressible equations of stellar hydrodynamics. We assess the accuracy and performance of the solver for both 2D and 3D problems for Mach numbers down to 10-6. Although our applications concern flows in stellar interiors, the method can be applied to general advection and/or diffusion-dominated flows. The method presented in this paper opens up new avenues in 3D modeling of realistic stellar interiors allowing the study of important problems in stellar structure and evolution.
NASA Technical Reports Server (NTRS)
Verger, Aleixandre; Baret, F.; Weiss, M.; Kandasamy, S.; Vermote, E.
2013-01-01
Consistent, continuous, and long time series of global biophysical variables derived from satellite data are required for global change research. A novel climatology fitting approach called CACAO (Consistent Adjustment of the Climatology to Actual Observations) is proposed to reduce noise and fill gaps in time series by scaling and shifting the seasonal climatological patterns to the actual observations. The shift and scale CACAO parameters adjusted for each season allow quantifying shifts in the timing of seasonal phenology and inter-annual variations in magnitude as compared to the average climatology. CACAO was assessed first over simulated daily Leaf Area Index (LAI) time series with varying fractions of missing data and noise. Then, performances were analyzed over actual satellite LAI products derived from AVHRR Long-Term Data Record for the 1981-2000 period over the BELMANIP2 globally representative sample of sites. Comparison with two widely used temporal filtering methods-the asymmetric Gaussian (AG) model and the Savitzky-Golay (SG) filter as implemented in TIMESAT-revealed that CACAO achieved better performances for smoothing AVHRR time series characterized by high level of noise and frequent missing observations. The resulting smoothed time series captures well the vegetation dynamics and shows no gaps as compared to the 50-60% of still missing data after AG or SG reconstructions. Results of simulation experiments as well as confrontation with actual AVHRR time series indicate that the proposed CACAO method is more robust to noise and missing data than AG and SG methods for phenology extraction.
Ferreira, Jainara Maria Soares; Silva, Milton Fernando Andrade; Oliveira, Andressa Feitosa Bezerra; Sampaio, Fábio Correia
2008-07-01
There are only a few studies relating visual inspection methods and laser fluorescence when monitoring regression of incipient carious lesions. The purpose of this study was to monitor incipient carious lesions in smooth surfaces under varnish fluoride therapy using visual inspection methods and laser fluorescence (LF). Active white spot lesions (n = 111) in upper front teeth of 36 children were selected. The children were subjected to four or eight applications of fluoride varnish in weekly intervals. The visual systems were activity (A) and maximum dimension in millimetres (D). They were applied together with LF readings (L) in the beginning of the study (W1), in the 5th week (W5), and in the 9th (W9) week. The mean (SD) of L values in W5 and W9 were 5.6 (3.8) and 4.5 (3.3), respectively; both were significantly different from the initial score of 7.4 (5.1) in W1. There was a positive correlation between D and L in W5 (r = 0.25) and W9 (r = 0.36; P < 0.05). The mean (SD) values of L were lower following the activity criteria. Our findings support the finding that incipient carious lesions in smooth surfaces under fluoride therapy can be monitored by laser fluorescence and visual inspection methods.
Newton's diffraction measurements
NASA Astrophysics Data System (ADS)
Nauenberg, Michael
2004-05-01
This year marks the tercentenary of the publication of Newton's Opticks which contains his celebrated theory and experiments of light and colors as it evolved from the first published version in 1672. It is still fairly unknown, however, that in this book Newton also reported his experiments on diffraction fringes obtained from various "slender" objects placed in a beam of sunlight. These experiments posed an insurmountable difficulty to Newton's corpuscular theory of light, which failed to account for his observations. This failure explains the long delay in the publication of this book. In my talk I will compare Newton's experimental results on diffraction with the predictions of Fresnel's wave theory to demonstrate that his measurements were remarkable accurate. Eventually these measurements paved the way for Young's correct explanation of the diffraction fringes as a wave interference phenomenon.
Bernsen, Erik; Dijkstra, Henk A.; Thies, Jonas; Wubs, Fred W.
2010-10-20
In present-day forward time stepping ocean-climate models, capturing both the wind-driven and thermohaline components, a substantial amount of CPU time is needed in a so-called spin-up simulation to determine an equilibrium solution. In this paper, we present methodology based on Jacobian-Free Newton-Krylov methods to reduce the computational time for such a spin-up problem. We apply the method to an idealized configuration of a state-of-the-art ocean model, the Modular Ocean Model version 4 (MOM4). It is shown that a typical speed-up of a factor 10-25 with respect to the original MOM4 code can be achieved and that this speed-up increases with increasing horizontal resolution.
Hsu, D.K. ); Dayal, V. )
1992-03-09
Interference fringes due to bondline thickness variation were observed in ultrasonic scans of the reflected echo amplitude from the bondline of adhesively joined aluminum skins. To demonstrate that full-field interference patterns are observable in point-by-point ultrasonic scans, an optical setup for Newton's rings was scanned ultrasonically in a water immersion tank. The ultrasonic scan showed distinct Newton's rings whose radii were in excellent agreement with the prediction.
Method of adiabatic modes in studying problems of smoothly irregular open waveguide structures
Sevastianov, L. A.; Egorov, A. A.; Sevastyanov, A. L.
2013-02-15
Basic steps in developing an original method of adiabatic modes that makes it possible to solve the direct and inverse problems of simulating and designing three-dimensional multilayered smoothly irregular open waveguide structures are described. A new element in the method is that an approximate solution of Maxwell's equations is made to obey 'inclined' boundary conditions at the interfaces between themedia being considered. These boundary conditions take into account the obliqueness of planes tangent to nonplanar boundaries between the media and lead to new equations for coupled vector quasiwaveguide hybrid adiabatic modes. Solutions of these equations describe the phenomenon of 'entanglement' of two linear polarizations of an irregular multilayered waveguide, the appearance of a new mode in an entangled state, and the effect of rotation of the polarization plane of quasiwaveguide modes. The efficiency of the method is demonstrated by considering the example of numerically simulating a thin-film generalized waveguide Lueneburg lens.
NASA Astrophysics Data System (ADS)
Bretaudeau, F.; Metivier, L.; Brossier, R.; Virieux, J.
2013-12-01
Traveltime tomography algorithms generally use ray tracing. The use of rays in tomography may not be suitable for handling very large datasets and perform tomography in very complex media. Traveltime maps can be computed through finite-difference approach (FD) and avoid complex ray-tracing algorithm for the forward modeling (Vidale 1998, Zhao 2004). However, rays back-traced from receiver to source following the gradient of traveltime are still used to compute the Fréchet derivatives. As a consequence, the sensitivity information computed using back-traced rays is not numerically consistent with the FD modeling used (the derivatives are only a rough approximation of the true derivatives of the forward modeling). Leung & Quian (2006) proposed a new approach that avoid ray tracing where the gradient of the misfit function is computed using the adjoint-state method. An adjoint-state variable is thus computed simultaneously for all receivers using a numerical method consistent with the forward modeling, and for the computational cost of one forward modeling. However, in their formulation, the receivers have to be located at the boundary of the investigated model, and the optimization approach is limited to simple gradient-based method (i.e. steepest descent, conjugate gradient) as only the gradient is computed. However, the Hessian operator has an important role in gradient-based reconstruction methods, providing the necessary information to rescale the gradient, correct for illumination deficit and remove artifacts. Leung & Quian (2006) uses LBFGS, a quasi-Newton method that provides an improved estimation of the influence of the inverse Hessian. Lelievre et al. (2011) also proposed a tomography approach in which the Fréchet derivatives are computed directly during the forward modeling using explicit symbolic differentiation of the modeling equations, resulting in a consistent Gauss-Newton inversion. We are interested here in the use of a new optimization approach
Ducheyne, Steffen
2017-03-29
In this paper I will probe into Herman Boerhaave's (1668-1738) appropriation of Isaac Newton's natural philosophy. It will be shown that Newton's work served multiple purposes in Boerhaave's oeuvre, for he appropriated Newton's work differently in different contexts and in different episodes in his career. Three important episodes in, and contexts of, Boerhaave's appropriation of Newton's natural philosophical ideas and methods will be considered: 1710-11, the time of his often neglected lectures on the place of physics in medicine; 1715, when he delivered his most famous rectorial address; and, finally, 1731/2, in publishing his Elementa chemiae. Along the way, I will spell out the implications of Boerhaave's case for our understanding of the reception, or use, of Newton's ideas more generally.
Immersed smoothed finite element method for fluid-structure interaction simulation of aortic valves
NASA Astrophysics Data System (ADS)
Yao, Jianyao; Liu, G. R.; Narmoneva, Daria A.; Hinton, Robert B.; Zhang, Zhi-Qian
2012-12-01
This paper presents a novel numerical method for simulating the fluid-structure interaction (FSI) problems when blood flows over aortic valves. The method uses the immersed boundary/element method and the smoothed finite element method and hence it is termed as IS-FEM. The IS-FEM is a partitioned approach and does not need a body-fitted mesh for FSI simulations. It consists of three main modules: the fluid solver, the solid solver and the FSI force solver. In this work, the blood is modeled as incompressible viscous flow and solved using the characteristic-based-split scheme with FEM for spacial discretization. The leaflets of the aortic valve are modeled as Mooney-Rivlin hyperelastic materials and solved using smoothed finite element method (or S-FEM). The FSI force is calculated on the Lagrangian fictitious fluid mesh that is identical to the moving solid mesh. The octree search and neighbor-to-neighbor schemes are used to detect efficiently the FSI pairs of fluid and solid cells. As an example, a 3D idealized model of aortic valve is modeled, and the opening process of the valve is simulated using the proposed IS-FEM. Numerical results indicate that the IS-FEM can serve as an efficient tool in the study of aortic valve dynamics to reveal the details of stresses in the aortic valves, the flow velocities in the blood, and the shear forces on the interfaces. This tool can also be applied to animal models studying disease processes and may ultimately translate to a new adaptive methods working with magnetic resonance images, leading to improvements on diagnostic and prognostic paradigms, as well as surgical planning, in the care of patients.
Face-based smoothed finite element method for real-time simulation of soft tissue
NASA Astrophysics Data System (ADS)
Mendizabal, Andrea; Bessard Duparc, Rémi; Bui, Huu Phuoc; Paulus, Christoph J.; Peterlik, Igor; Cotin, Stéphane
2017-03-01
In soft tissue surgery, a tumor and other anatomical structures are usually located using the preoperative CT or MR images. However, due to the deformation of the concerned tissues, this information suffers from inaccuracy when employed directly during the surgery. In order to account for these deformations in the planning process, the use of a bio-mechanical model of the tissues is needed. Such models are often designed using the finite element method (FEM), which is, however, computationally expensive, in particular when a high accuracy of the simulation is required. In our work, we propose to use a smoothed finite element method (S-FEM) in the context of modeling of the soft tissue deformation. This numerical technique has been introduced recently to overcome the overly stiff behavior of the standard FEM and to improve the solution accuracy and the convergence rate in solid mechanics problems. In this paper, a face-based smoothed finite element method (FS-FEM) using 4-node tetrahedral elements is presented. We show that in some cases, the method allows for reducing the number of degrees of freedom, while preserving the accuracy of the discretization. The method is evaluated on a simulation of a cantilever beam loaded at the free end and on a simulation of a 3D cube under traction and compression forces. Further, it is applied to the simulation of the brain shift and of the kidney's deformation. The results demonstrate that the method outperforms the standard FEM in a bending scenario and that has similar accuracy as the standard FEM in the simulations of the brain-shift and of the kidney's deformation.
Newton and scholastic philosophy.
Levitin, Dmitri
2016-03-01
This article examines Isaac Newton's engagement with scholastic natural philosophy. In doing so, it makes two major historiographical interventions. First of all, the recent claim that Newton's use of the concepts of analysis and synthesis was derived from the Aristotelian regressus tradition is challenged on the basis of bibliographical, palaeographical and intellectual evidence. Consequently, a new, contextual explanation is offered for Newton's use of these concepts. Second, it will be shown that some of Newton's most famous pronouncements - from the General Scholium appended to the second edition of the Principia (1713) and from elsewhere - are simply incomprehensible without an understanding of specific scholastic terminology and its later reception, and that this impacts in quite significant ways on how we understand Newton's natural philosophy more generally. Contrary to the recent historiographical near-consensus, Newton did not hold an elaborate metaphysics, and his seemingly 'metaphysical' statements were in fact anti-scholastic polemical salvoes. The whole investigation will permit us a brief reconsideration of the relationship between the self-proclaimed 'new' natural philosophy and its scholastic predecessors.
A method for the accurate and smooth approximation of standard thermodynamic functions
NASA Astrophysics Data System (ADS)
Coufal, O.
2013-01-01
A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions. This means that the approximation functions are, in contrast to the hitherto used approximations, continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented by the SmoothSTF program in the C++ language which is part of this paper. Program summaryProgram title:SmoothSTF Catalogue identifier: AENH_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3807 No. of bytes in distributed program, including test data, etc.: 131965 Distribution format: tar.gz Programming language: C++. Computer: Any computer with gcc version 4.3.2 compiler. Operating system: Debian GNU Linux 6.0. The program can be run in operating systems in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html. RAM: 256 MB are sufficient for the table of standard thermodynamic functions with 500 lines Classification: 4.9. Nature of problem: Standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF as expressed by the table of its values is for further application approximated by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval. Solution method: The approximation functions are
NASA Technical Reports Server (NTRS)
Zeng, S.; Wesseling, P.
1993-01-01
The performance of a linear multigrid method using four smoothing methods, called SCGS (Symmetrical Coupled GauBeta-Seidel), CLGS (Collective Line GauBeta-Seidel), SILU (Scalar ILU), and CILU (Collective ILU), is investigated for the incompressible Navier-Stokes equations in general coordinates, in association with Galerkin coarse grid approximation. Robustness and efficiency are measured and compared by application to test problems. The numerical results show that CILU is the most robust, SILU the least, with CLGS and SCGS in between. CLGS is the best in efficiency, SCGS and CILU follow, and SILU is the worst.
Calculation of smooth potential energy surfaces using local electron correlation methods
NASA Astrophysics Data System (ADS)
Mata, Ricardo A.; Werner, Hans-Joachim
2006-11-01
The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl- with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.
Calculation of smooth potential energy surfaces using local electron correlation methods
Mata, Ricardo A.; Werner, Hans-Joachim
2006-11-14
The geometry dependence of excitation domains in local correlation methods can lead to noncontinuous potential energy surfaces. We propose a simple domain merging procedure which eliminates this problem in many situations. The method is applied to heterolytic bond dissociations of ketene and propadienone, to SN2 reactions of Cl{sup -} with alkylchlorides, and in a quantum mechanical/molecular mechanical study of the chorismate mutase enzyme. It is demonstrated that smooth potentials are obtained in all cases. Furthermore, basis set superposition error effects are reduced in local calculations, and it is found that this leads to better basis set convergence when computing barrier heights or weak interactions. When the electronic structure strongly changes between reactants or products and the transition state, the domain merging procedure leads to a balanced description of all structures and accurate barrier heights.
NASA Astrophysics Data System (ADS)
Fourey, G.; Hermange, C.; Le Touzé, D.; Oger, G.
2017-08-01
An efficient coupling between Smoothed Particle Hydrodynamics (SPH) and Finite Element (FE) methods dedicated to violent fluid-structure interaction (FSI) modeling is proposed in this study. The use of a Lagrangian meshless method for the fluid reduces the complexity of fluid-structure interface handling, especially in presence of complex free surface flows. The paper details the discrete SPH equations and the FSI coupling strategy adopted. Both convergence and robustness of the SPH-FE coupling are performed and discussed. More particularly, the loss and gain in stability is studied according to various coupling parameters, and different coupling algorithms are considered. Investigations are performed on 2D academic and experimental test cases in the order of increasing complexity.
Invariant measures of smooth dynamical systems, generalized functions and summation methods
NASA Astrophysics Data System (ADS)
Kozlov, V. V.
2016-04-01
We discuss conditions for the existence of invariant measures of smooth dynamical systems on compact manifolds. If there is an invariant measure with continuously differentiable density, then the divergence of the vector field along every solution tends to zero in the Cesàro sense as time increases unboundedly. Here the Cesàro convergence may be replaced, for example, by any Riesz summation method, which can be arbitrarily close to ordinary convergence (but does not coincide with it). We give an example of a system whose divergence tends to zero in the ordinary sense but none of its invariant measures is absolutely continuous with respect to the `standard' Lebesgue measure (generated by some Riemannian metric) on the phase space. We give examples of analytic systems of differential equations on analytic phase spaces admitting invariant measures of any prescribed smoothness (including a measure with integrable density), but having no invariant measures with positive continuous densities. We give a new proof of the classical Bogolyubov-Krylov theorem using generalized functions and the Hahn-Banach theorem. The properties of signed invariant measures are also discussed.
Kottke, Chris; Farjadpour, Ardavan; Johnson, Steven G
2008-03-01
We derive a correct first-order perturbation theory in electromagnetism for cases where an interface between two anisotropic dielectric materials is slightly shifted. Most previous perturbative methods give incorrect results for this case, even to lowest order, because of the complicated discontinuous boundary conditions on the electric field at such an interface. Our final expression is simply a surface integral, over the material interface, of the continuous field components from the unperturbed structure. The derivation is based on a "localized" coordinate-transformation technique, which avoids both the problem of field discontinuities and the challenge of constructing an explicit coordinate transformation by taking the limit in which the coordinate perturbation is infinitesimally localized around the boundary. Not only is our result potentially useful in evaluating boundary perturbations, e.g., from fabrication imperfections, in highly anisotropic media such as many metamaterials, but it also has a direct application in numerical electromagnetism. In particular, we show how it leads to a subpixel smoothing scheme to ameliorate staircasing effects in discretized simulations of anisotropic media, in such a way as to greatly reduce the numerical errors compared to other proposed smoothing schemes.
Application of Holt exponential smoothing and ARIMA method for data population in West Java
NASA Astrophysics Data System (ADS)
Supriatna, A.; Susanti, D.; Hertini, E.
2017-01-01
One method of time series that is often used to predict data that contains trend is Holt. Holt method using different parameters used in the original data which aims to smooth the trend value. In addition to Holt, ARIMA method can be used on a wide variety of data including data pattern containing a pattern trend. Data actual of population from 1998-2015 contains the trends so can be solved by Holt and ARIMA method to obtain the prediction value of some periods. The best method is measured by looking at the smallest MAPE and MAE error. The result using Holt method is 47.205.749 populations in 2016, 47.535.324 populations in 2017, and 48.041.672 populations in 2018, with MAPE error is 0,469744 and MAE error is 189.731. While the result using ARIMA method is 46.964.682 populations in 2016, 47.342.189 in 2017, and 47.899.696 in 2018, with MAPE error is 0,4380 and MAE is 176.626.
NASA Astrophysics Data System (ADS)
Borazjani, Iman; Asgharzadeh, Hafez
2015-11-01
Flow simulations involving complex geometries and moving boundaries suffer from time-step size restriction and low convergence rates with explicit and semi-implicit schemes. Implicit schemes can be used to overcome these restrictions. However, implementing implicit solver for nonlinear equations including Navier-Stokes is not straightforward. Newton-Krylov subspace methods (NKMs) are one of the most advanced iterative methods to solve non-linear equations such as implicit descritization of the Navier-Stokes equation. The efficiency of NKMs massively depends on the Jacobian formation method, e.g., automatic differentiation is very expensive, and matrix-free methods slow down as the mesh is refined. Analytical Jacobian is inexpensive method, but derivation of analytical Jacobian for Navier-Stokes equation on staggered grid is challenging. The NKM with a novel analytical Jacobian was developed and validated against Taylor-Green vortex and pulsatile flow in a 90 degree bend. The developed method successfully handled the complex geometries such as an intracranial aneurysm with multiple overset grids, and immersed boundaries. It is shown that the NKM with an analytical Jacobian is 3 to 25 times faster than the fixed-point implicit Runge-Kutta method, and more than 100 times faster than automatic differentiation depending on the grid (size) and the flow problem. The developed methods are fully parallelized with parallel efficiency of 80-90% on the problems tested.
A DAFT DL_POLY distributed memory adaptation of the Smoothed Particle Mesh Ewald method
NASA Astrophysics Data System (ADS)
Bush, I. J.; Todorov, I. T.; Smith, W.
2006-09-01
The Smoothed Particle Mesh Ewald method [U. Essmann, L. Perera, M.L. Berkowtz, T. Darden, H. Lee, L.G. Pedersen, J. Chem. Phys. 103 (1995) 8577] for calculating long ranged forces in molecular simulation has been adapted for the parallel molecular dynamics code DL_POLY_3 [I.T. Todorov, W. Smith, Philos. Trans. Roy. Soc. London 362 (2004) 1835], making use of a novel 3D Fast Fourier Transform (DAFT) [I.J. Bush, The Daresbury Advanced Fourier transform, Daresbury Laboratory, 1999] that perfectly matches the Domain Decomposition (DD) parallelisation strategy [W. Smith, Comput. Phys. Comm. 62 (1991) 229; M.R.S. Pinches, D. Tildesley, W. Smith, Mol. Sim. 6 (1991) 51; D. Rapaport, Comput. Phys. Comm. 62 (1991) 217] of the DL_POLY_3 code. In this article we describe software adaptations undertaken to import this functionality and provide a review of its performance.
An Implementation of the Smooth Particle Mesh Ewald Method on GPU Hardware.
Harvey, M J; De Fabritiis, G
2009-09-08
The smooth particle mesh Ewald summation method is widely used to efficiently compute long-range electrostatic force terms in molecular dynamics simulations, and there has been considerable work in developing optimized implementations for a variety of parallel computer architectures. We describe an implementation for Nvidia graphical processing units (GPUs) which are general purpose computing devices with a high degree of intrinsic parallelism and arithmetic performance. We find that, for typical biomolecular simulations (e.g., DHFR, 26K atoms), a single GPU equipped workstation is able to provide sufficient performance to permit simulation rates of ≈50 ns/day when used in conjunction with the ACEMD molecular dynamics package (1) and exhibits an accuracy comparable to that of a reference double-precision CPU implementation.
Møyner, Olav Lie, Knut-Andreas
2016-01-01
A wide variety of multiscale methods have been proposed in the literature to reduce runtime and provide better scaling for the solution of Poisson-type equations modeling flow in porous media. We present a new multiscale restricted-smoothed basis (MsRSB) method that is designed to be applicable to both rectilinear grids and unstructured grids. Like many other multiscale methods, MsRSB relies on a coarse partition of the underlying fine grid and a set of local prolongation operators (multiscale basis functions) that map unknowns associated with the fine grid cells to unknowns associated with blocks in the coarse partition. These mappings are constructed by restricted smoothing: Starting from a constant, a localized iterative scheme is applied directly to the fine-scale discretization to compute prolongation operators that are consistent with the local properties of the differential operators. The resulting method has three main advantages: First of all, both the coarse and the fine grid can have general polyhedral geometry and unstructured topology. This means that partitions and good prolongation operators can easily be constructed for complex models involving high media contrasts and unstructured cell connections introduced by faults, pinch-outs, erosion, local grid refinement, etc. In particular, the coarse partition can be adapted to geological or flow-field properties represented on cells or faces to improve accuracy. Secondly, the method is accurate and robust when compared to existing multiscale methods and does not need expensive recomputation of local basis functions to account for transient behavior: Dynamic mobility changes are incorporated by continuing to iterate a few extra steps on existing basis functions. This way, the cost of updating the prolongation operators becomes proportional to the amount of change in fluid mobility and one reduces the need for expensive, tolerance-based updates. Finally, since the MsRSB method is formulated on top of a cell
NITSOL: A Newton iterative solver for nonlinear systems
Pernice, M.; Walker, H.F.
1996-12-31
Newton iterative methods, also known as truncated Newton methods, are implementations of Newton`s method in which the linear systems that characterize Newton steps are solved approximately using iterative linear algebra methods. Here, we outline a well-developed Newton iterative algorithm together with a Fortran implementation called NITSOL. The basic algorithm is an inexact Newton method globalized by backtracking, in which each initial trial step is determined by applying an iterative linear solver until an inexact Newton criterion is satisfied. In the implementation, the user can specify inexact Newton criteria in several ways and select an iterative linear solver from among several popular {open_quotes}transpose-free{close_quotes} Krylov subspace methods. Jacobian-vector products used by the Krylov solver can be either evaluated analytically with a user-supplied routine or approximated using finite differences of function values. A flexible interface permits a wide variety of preconditioning strategies and allows the user to define a preconditioner and optionally update it periodically. We give details of these and other features and demonstrate the performance of the implementation on a representative set of test problems.
NASA Astrophysics Data System (ADS)
Lohman, R. B.; Simons, M.
2004-12-01
We examine inversions of geodetic data for fault slip and discuss how inferred results are affected by choices of regularization. The final goal of any slip inversion is to enhance our understanding of the dynamics governing fault zone processes through kinematic descriptions of fault zone behavior at various temporal and spatial scales. Important kinematic observations include ascertaining whether fault slip is correlated with topographic and gravitational anomalies, whether coseismic and postseismic slip occur on complementary or overlapping regions of the fault plane, and how aftershock distributions compare with areas of coseismic and postseismic slip. Fault slip inversions are generally poorly-determined inverse problems requiring some sort of regularization. Attempts to place inversion results in the context of understanding fault zone processes should be accompanied by careful treatment of how the applied regularization affects characteristics of the inferred slip model. Most regularization techniques involve defining a metric that quantifies the solution "simplicity". A frequently employed method defines a "simple" slip distribution as one that is spatially smooth, balancing the fit to the data vs. the spatial complexity of the slip distribution. One problem related to the use of smoothing constraints is the "smearing" of fault slip into poorly-resolved areas on the fault plane. In addition, even if the data is fit well by a point source, the fact that a point source is spatially "rough" will force the inversion to choose a smoother model with slip over a broader area. Therefore, when we interpret the area of inferred slip we must ask whether the slipping area is truly constrained by the data, or whether it could be fit equally well by a more spatially compact source with larger amplitudes of slip. We introduce an alternate regularization technique for fault slip inversions, where we seek an end member model that is the smallest region of fault slip that
Tien, Yin-Jing; Lee, Yun-Shien; Wu, Han-Ming; Chen, Chun-Houh
2008-03-20
The hierarchical clustering tree (HCT) with a dendrogram 1 and the singular value decomposition (SVD) with a dimension-reduced representative map 2 are popular methods for two-way sorting the gene-by-array matrix map employed in gene expression profiling. While HCT dendrograms tend to optimize local coherent clustering patterns, SVD leading eigenvectors usually identify better global grouping and transitional structures. This study proposes a flipping mechanism for a conventional agglomerative HCT using a rank-two ellipse (R2E, an improved SVD algorithm for sorting purpose) seriation by Chen 3 as an external reference. While HCTs always produce permutations with good local behaviour, the rank-two ellipse seriation gives the best global grouping patterns and smooth transitional trends. The resulting algorithm automatically integrates the desirable properties of each method so that users have access to a clustering and visualization environment for gene expression profiles that preserves coherent local clusters and identifies global grouping trends. We demonstrate, through four examples, that the proposed method not only possesses better numerical and statistical properties, it also provides more meaningful biomedical insights than other sorting algorithms. We suggest that sorted proximity matrices for genes and arrays, in addition to the gene-by-array expression matrix, can greatly aid in the search for comprehensive understanding of gene expression structures. Software for the proposed methods can be obtained at http://gap.stat.sinica.edu.tw/Software/GAP.
Vinding, Mads S; Maximov, Ivan I; Tošner, Zdenĕk; Nielsen, Niels Chr
2012-08-07
The use of increasingly strong magnetic fields in magnetic resonance imaging (MRI) improves sensitivity, susceptibility contrast, and spatial or spectral resolution for functional and localized spectroscopic imaging applications. However, along with these benefits come the challenges of increasing static field (B(0)) and rf field (B(1)) inhomogeneities induced by radial field susceptibility differences and poorer dielectric properties of objects in the scanner. Increasing fields also impose the need for rf irradiation at higher frequencies which may lead to elevated patient energy absorption, eventually posing a safety risk. These reasons have motivated the use of multidimensional rf pulses and parallel rf transmission, and their combination with tailoring of rf pulses for fast and low-power rf performance. For the latter application, analytical and approximate solutions are well-established in linear regimes, however, with increasing nonlinearities and constraints on the rf pulses, numerical iterative methods become attractive. Among such procedures, optimal control methods have recently demonstrated great potential. Here, we present a Krotov-based optimal control approach which as compared to earlier approaches provides very fast, monotonic convergence even without educated initial guesses. This is essential for in vivo MRI applications. The method is compared to a second-order gradient ascent method relying on the Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton method, and a hybrid scheme Krotov-BFGS is also introduced in this study. These optimal control approaches are demonstrated by the design of a 2D spatial selective rf pulse exciting the letters "JCP" in a water phantom.
Turning around Newton's Second Law
ERIC Educational Resources Information Center
Goff, John Eric
2004-01-01
Conceptual and quantitative difficulties surrounding Newton's second law often arise among introductory physics students. Simply turning around how one expresses Newton's second law may assist students in their understanding of a deceptively simple-looking equation.
Crespo, Alejandro C; Dominguez, Jose M; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability.
Crespo, Alejandro C.; Dominguez, Jose M.; Barreiro, Anxo; Gómez-Gesteira, Moncho; Rogers, Benedict D.
2011-01-01
Smoothed Particle Hydrodynamics (SPH) is a numerical method commonly used in Computational Fluid Dynamics (CFD) to simulate complex free-surface flows. Simulations with this mesh-free particle method far exceed the capacity of a single processor. In this paper, as part of a dual-functioning code for either central processing units (CPUs) or Graphics Processor Units (GPUs), a parallelisation using GPUs is presented. The GPU parallelisation technique uses the Compute Unified Device Architecture (CUDA) of nVidia devices. Simulations with more than one million particles on a single GPU card exhibit speedups of up to two orders of magnitude over using a single-core CPU. It is demonstrated that the code achieves different speedups with different CUDA-enabled GPUs. The numerical behaviour of the SPH code is validated with a standard benchmark test case of dam break flow impacting on an obstacle where good agreement with the experimental results is observed. Both the achieved speed-ups and the quantitative agreement with experiments suggest that CUDA-based GPU programming can be used in SPH methods with efficiency and reliability. PMID:21695185
NASA Astrophysics Data System (ADS)
Park, Hyeongkae; Nourgaliev, Robert; Knoll, Dana
2007-11-01
The Discontinuous Galerkin (DG) method for compressible fluid flows is incorporated into the Jacobian-Free Newton-Krylov (JFNK) framework. Advantages of combining the DG with the JFNK are two-fold: a) enabling robust and efficient high-order-accurate modeling of all-speed flows on unstructured grids, opening the possibility for high-fidelity simulation of nuclear-power-industry-relevant flows; and b) ability to tightly, robustly and high-order-accurately couple with other relevant physics (neutronics, thermal-structural response of solids, etc.). In the present study, we focus on the physics-based preconditioning (PBP) of the Krylov method (GMRES), used as the linear solver in our implicit higher-order-accurate Runge-Kutta (ESDIRK) time discretization scheme; exploiting the compactness of the spatial discretization of the DG family. In particular, we utilize the Implicit Continuous-fluid Eulerian (ICE) method and investigate its efficacy as the PBP within the JFNK-DG method. Using the eigenvalue analysis, it is found that the ICE collapses the complex components of the all eigenvalues of the Jacobian matrix (associated with pressure waves) onto the real axis, and thereby enabling at least an order of magnitude faster simulations in nearly-incompressible/weakly-compressible regimes with a significant storage saving.
2013-01-01
Background To facilitate new drug development, physiologically-based pharmacokinetic (PBPK) modeling methods receive growing attention as a tool to fully understand and predict complex pharmacokinetic phenomena. As the number of parameters to reproduce physiological functions tend to be large in PBPK models, efficient parameter estimation methods are essential. We have successfully applied a recently developed algorithm to estimate a feasible solution space, called Cluster Newton Method (CNM), to reveal the cause of irinotecan pharmacokinetic alterations in two cancer patient groups. Results After improvements in the original CNM algorithm to maintain parameter diversities, a feasible solution space was successfully estimated for 55 or 56 parameters in the irinotecan PBPK model, within ten iterations, 3000 virtual samples, and in 15 minutes (Intel Xeon E5-1620 3.60GHz × 1 or Intel Core i7-870 2.93GHz × 1). Control parameters or parameter correlations were clarified after the parameter estimation processes. Possible causes in the irinotecan pharmacokinetic alterations were suggested, but they were not conclusive. Conclusions Application of CNM achieved a feasible solution space by solving inverse problems of a system containing ordinary differential equations (ODEs). This method may give us reliable insights into other complicated phenomena, which have a large number of parameters to estimate, under limited information. It is also helpful to design prospective studies for further investigation of phenomena of interest. PMID:24555857
Yoshida, Kenta; Maeda, Kazuya; Kusuhara, Hiroyuki; Konagaya, Akihiko
2013-10-16
To facilitate new drug development, physiologically-based pharmacokinetic (PBPK) modeling methods receive growing attention as a tool to fully understand and predict complex pharmacokinetic phenomena. As the number of parameters to reproduce physiological functions tend to be large in PBPK models, efficient parameter estimation methods are essential. We have successfully applied a recently developed algorithm to estimate a feasible solution space, called Cluster Newton Method (CNM), to reveal the cause of irinotecan pharmacokinetic alterations in two cancer patient groups. After improvements in the original CNM algorithm to maintain parameter diversities, a feasible solution space was successfully estimated for 55 or 56 parameters in the irinotecan PBPK model, within ten iterations, 3000 virtual samples, and in 15 minutes (Intel Xeon E5-1620 3.60GHz × 1 or Intel Core i7-870 2.93GHz × 1). Control parameters or parameter correlations were clarified after the parameter estimation processes. Possible causes in the irinotecan pharmacokinetic alterations were suggested, but they were not conclusive. Application of CNM achieved a feasible solution space by solving inverse problems of a system containing ordinary differential equations (ODEs). This method may give us reliable insights into other complicated phenomena, which have a large number of parameters to estimate, under limited information. It is also helpful to design prospective studies for further investigation of phenomena of interest.
ERIC Educational Resources Information Center
Gardner, Don E.
The merits of double exponential smoothing are discussed relative to other types of pattern-based enrollment forecasting methods. The difficulties associated with selecting an appropriate weight factor are discussed, and their potential effects on prediction results are illustrated. Two methods for objectively selecting the "best" weight…
NASA Astrophysics Data System (ADS)
Nam, Haewon; Lee, Dongha; Doo Lee, Jong; Park, Hae-Jeong
2011-08-01
Spatial smoothing using isotropic Gaussian kernels to remove noise reduces spatial resolution and increases the partial volume effect of functional magnetic resonance images (fMRI), thereby reducing localization power. To minimize these limitations, we propose a novel anisotropic smoothing method for fMRI data. To extract an anisotropic tensor for each voxel of the functional data, we derived an intensity gradient using the distance transformation of the segmented gray matter of the fMRI-coregistered T1-weighted image. The intensity gradient was then used to determine the anisotropic smoothing kernel at each voxel of the fMRI data. Performance evaluations on both real and simulated data showed that the proposed method had 10% higher statistical power and about 20% higher gray matter localization compared to isotropic smoothing and robustness to the registration errors (up to 4 mm translations and 4° rotations) between T1 structural images and fMRI data. The proposed method also showed higher performance than the anisotropic smoothing with diffusion gradients derived from the fMRI intensity data.
"To Improve upon Hints of Things": Illustrating Isaac Newton.
Schilt, Cornelis J
2016-01-01
When Isaac Newton died in 1727 he left a rich legacy in terms of draft manuscripts, encompassing a variety of topics: natural philosophy, mathematics, alchemy, theology, and chronology, as well as papers relating to his career at the Mint. One thing that immediately strikes us is the textuality of Newton's legacy: images are sparse. Regarding his scholarly endeavours we witness the same practice. Newton's extensive drafts on theology and chronology do not contain a single illustration or map. Today we have all of Newton's draft manuscripts as witnesses of his working methods, as well as access to a significant number of books from his own library. Drawing parallels between Newton's reading practices and his natural philosophical and scholarly work, this paper seeks to understand Newton's recondite writing and publishing politics.
ERIC Educational Resources Information Center
Cox, Carol
2001-01-01
Presents the Isaac Newton Olympics in which students complete a hands-on activity at seven stations and evaluate what they have learned in the activity and how it is related to real life. Includes both student and teacher instructions for three of the activities. (YDS)
ERIC Educational Resources Information Center
Cox, Carol
2001-01-01
Presents the Isaac Newton Olympics in which students complete a hands-on activity at seven stations and evaluate what they have learned in the activity and how it is related to real life. Includes both student and teacher instructions for three of the activities. (YDS)
Maulina, Hervin; Santoso, Iman Subama, Emmistasega; Nurwantoro, Pekik; Abraha, Kamsul; Rusydi, Andrivo
2016-04-19
The extraction of the dielectric constant of nanostructured graphene on SiC substrates from spectroscopy ellipsometry measurement using the Gauss-Newton inversion (GNI) method has been done. This study aims to calculate the dielectric constant and refractive index of graphene by extracting the value of ψ and Δ from the spectroscopy ellipsometry measurement using GNI method and comparing them with previous result which was extracted using Drude-Lorentz (DL) model. The results show that GNI method can be used to calculate the dielectric constant and refractive index of nanostructured graphene on SiC substratesmore faster as compared to DL model. Moreover, the imaginary part of the dielectric constant values and coefficient of extinction drastically increases at 4.5 eV similar to that of extracted using known DL fitting. The increase is known due to the process of interband transition and the interaction between the electrons and electron-hole at M-points in the Brillouin zone of graphene.
A novel method for modeling of complex wall geometries in smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Eitzlmayr, Andreas; Koscher, Gerold; Khinast, Johannes
2014-10-01
Smoothed particle hydrodynamics (SPH) has become increasingly important during recent decades. Its meshless nature, inherent representation of convective transport and ability to simulate free surface flows make SPH particularly promising with regard to simulations of industrial mixing devices for high-viscous fluids, which often have complex rotating geometries and partially filled regions (e.g., twin-screw extruders). However, incorporating the required geometries remains a challenge in SPH since the most obvious and most common ways to model solid walls are based on particles (i.e., boundary particles and ghost particles), which leads to complications with arbitrarily-curved wall surfaces. To overcome this problem, we developed a systematic method for determining an adequate interaction between SPH particles and a continuous wall surface based on the underlying SPH equations. We tested our new approach by using the open-source particle simulator "LIGGGHTS" and comparing the velocity profiles to analytical solutions and SPH simulations with boundary particles. Finally, we followed the evolution of a tracer in a twin-cam mixer during the rotation, which was experimentally and numerically studied by several other authors, and ascertained good agreement with our results. This supports the validity of our newly-developed wall interaction method, which constitutes a step forward in SPH simulations of complex geometries.
A comparison of spatial smoothing methods for small area estimation with sampling weights.
Mercer, Laina; Wakefield, Jon; Chen, Cici; Lumley, Thomas
2014-05-01
Small area estimation (SAE) is an important endeavor in many fields and is used for resource allocation by both public health and government organizations. Often, complex surveys are carried out within areas, in which case it is common for the data to consist only of the response of interest and an associated sampling weight, reflecting the design. While it is appealing to use spatial smoothing models, and many approaches have been suggested for this endeavor, it is rare for spatial models to incorporate the weighting scheme, leaving the analysis potentially subject to bias. To examine the properties of various approaches to estimation we carry out a simulation study, looking at bias due to both non-response and non-random sampling. We also carry out SAE of smoking prevalence in Washington State, at the zip code level, using data from the 2006 Behavioral Risk Factor Surveillance System. The computation times for the methods we compare are short, and all approaches are implemented in R using currently available packages.
NASA Technical Reports Server (NTRS)
Pinson, Robin M.; Schmitt, Terri L.; Hanson, John M.
2008-01-01
Six degree-of-freedom (DOF) launch vehicle trajectories are designed to follow an optimized 3-DOF reference trajectory. A vehicle has a finite amount of control power that it can allocate to performing maneuvers. Therefore, the 3-DOF trajectory must be designed to refrain from using 100% of the allowable control capability to perform maneuvers, saving control power for handling off-nominal conditions, wind gusts and other perturbations. During the Ares I trajectory analysis, two maneuvers were found to be hard for the control system to implement; a roll maneuver prior to the gravity turn and an angle of attack maneuver immediately after the J-2X engine start-up. It was decided to develop an approach for creating smooth maneuvers in the optimized reference trajectories that accounts for the thrust available from the engines. A feature of this method is that no additional angular velocity in the direction of the maneuver has been added to the vehicle after the maneuver completion. This paper discusses the equations behind these new maneuvers and their implementation into the Ares I trajectory design cycle. Also discussed is a possible extension to adjusting closed-loop guidance.
Using two soft computing methods to predict wall and bed shear stress in smooth rectangular channels
NASA Astrophysics Data System (ADS)
Khozani, Zohreh Sheikh; Bonakdari, Hossein; Zaji, Amir Hossein
2017-03-01
Two soft computing methods were extended in order to predict the mean wall and bed shear stress in open channels. The genetic programming (GP) and Genetic Algorithm Artificial Neural Network (GAA) were investigated to determine the accuracy of these models in estimating wall and bed shear stress. The GP and GAA model results were compared in terms of testing dataset in order to find the best model. In modeling both bed and wall shear stress, the GP model performed better with RMSE of 0.0264 and 0.0185, respectively. Then both proposed models were compared with equations for rectangular open channels, trapezoidal channels and ducts. According to the results, the proposed models performed the best in predicting wall and bed shear stress in smooth rectangular channels. The obtained equation for rectangular channels could estimate values closer to experimental data, but the equations for ducts had poor, inaccurate results in predicting wall and bed shear stress. The equation presented for trapezoidal channels did not have acceptable accuracy in predicting wall and bed shear stress either.
Beigi, Farideh; Patel, Mitalben; Morales-Garza, Marco A; Winebrenner, Caitlin; Gobin, Andrea S; Chau, Eric; Sampaio, Luiz C; Taylor, Doris A
2017-11-01
Numerous protocols exist for isolating aortic endothelial and smooth muscle cells from small animals. However, establishing a protocol for isolating pure cell populations from large animal vessels that are more elastic has been challenging. We developed a simple sequential enzymatic approach to isolate highly purified populations of porcine aortic endothelial and smooth muscle cells. The lumen of a porcine aorta was filled with 25 U/ml dispase solution and incubated at 37°C to dissociate the endothelial cells. The smooth muscle cells were isolated by mincing the tunica media of the treated aorta and incubating the pieces in 0.2% and then 0.1% collagenase type I solution. The isolated endothelial cells stained positive for von Willebrand factor, and 97.2% of them expressed CD31. Early and late passage endothelial cells had a population doubling time of 38 hr and maintained a capacity to take up DiI-Ac-LDL and form tubes in Matrigel®. The isolated smooth muscle cells stained highly positive for alpha-smooth muscle actin, and an impurities assessment showed that only 1.8% were endothelial cells. Population doubling time for the smooth muscle cells was ∼70 hr at passages 3 and 7; and the cells positively responded to endothelin-1, as shown by a 66% increase in the intracellular calcium level. This simple protocol allows for the isolation of highly pure populations of endothelial and smooth muscle cells from porcine aorta that can survive continued passage in culture without losing functionality or becoming overgrown by fibroblasts. © 2017 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Maris, Virginie
An existing 3-D magnetotelluric (MT) inversion program written for a single processor personal computer (PC) has been modified and parallelized using OpenMP, in order to run the program efficiently on a multicore workstation. The program uses the Gauss-Newton inversion algorithm based on a staggered-grid finite-difference forward problem, requiring explicit calculation of the Frechet derivatives. The most time-consuming tasks are calculating the derivatives and determining the model parameters at each iteration. Forward modeling and derivative calculations are parallelized by assigning the calculations for each frequency to separate threads, which execute concurrently. Model parameters are obtained by factoring the Hessian using the LDLT method, implemented using a block-cyclic algorithm and compact storage. MT data from 102 tensor stations over the East Flank of the Coso Geothermal Field, California are inverted. Less than three days are required to invert the dataset for ˜ 55,000 inversion parameters on a 2.66 GHz 8-CPU PC with 16 GB of RAM. Inversion results, recovered from a halfspace rather than initial 2-D inversions, qualitatively resemble models from massively parallel 3-D inversion by other researchers and overall, exhibit an improved fit. A steeply west-dipping conductor under the western East Flank is tentatively correlated with a zone of high-temperature ionic fluids based on known well production and lost circulation intervals. Beneath the Main Field, vertical and north-trending shallow conductors are correlated with geothermal producing intervals as well.
Ariyawansa, K.A.; Lau, D.T.M.
1988-01-01
A derivation of collinear scaling algorithms for unconstrained minimization was presented. The local conic approximants to the objective function underlying these algorithms are forced to interpolate the value and gradient of the objective function at the two most recent iterates. The class of algorithms derived therein has a free parameter sequence /l brace/b/sub k//r brace/ and for a fixed choice of /l brace/b/sub k//r brace/ it contains collinear scaling algorithms that may be treated as extensions of quasi-Newton methods with Broyden family of updates. In this paper, under standard assumptions, it is shown that if b/sub k/ is set equal to the gradient of the objective function for all k and if /l brace/chemically bond1 /minus/ theta/sub k/chemically bond/r brace/ (where theta/sub k/ is the parameter in the Broyden family) is uniformly bounded, then these collinear scaling algorithms related to the Broyden family are locally and q-superlinearly convergent. 13 refs.
Higgitt, Rebekah
2004-03-01
Francis Baily's publication of the manuscripts of John Flamsteed, the first Astronomer Royal, provoked a furious response. Flamsteed had quarrelled with Isaac Newton, and described him in terms unforgivable to those who claimed him as a paragon of all virtues, both moral and scientific. Baily was condemned for putting Flamsteed's complaints in the public sphere. However, his supporters saw his work as a critique of the excessive hero-worship accorded to Newton. Written when the word 'scientist' had been newly coined, this work and the debates it provoked gives us an insight into contemporary views of the role of the man of science and of the use of science to back political, religious and moral positions.
NASA Technical Reports Server (NTRS)
Herbert, Dexter (Editor)
1992-01-01
In this 'Liftoff to Learning' series video, astronauts (Charles Veach, Gregory Harbaugh, Donald McMonagle, Michael Coats, L. Blaine Hammond, Guion Bluford, Richard Hieb) from the STS-39 Mission use physical experiments and computer animation to explain how weightlessness and gravity affects everything and everyone onboard the Space Shuttle. The physics behind the differences between weight and mass, and the concepts of 'free fall', are demonstrated along with explanations and experiments of Sir Issac Newton's three laws of motion.
Newton polyhedron and applications
Bruno, A.D.
1994-12-31
We give a simple presentation of an algorithm of selecting asymptotical first approximations of equations (algebraic and ordinary differential and partial differential). Here the first approximation of a solution of the initial equation is a solution of the corresponding first approximation of the equation. The algorithm is based on the geometry of power exponents including the Newton polyhedron. We give also a survey of applications of the algorithm in problems of Celestial Mechanics and Hydrodynamics.
NASA Technical Reports Server (NTRS)
Herbert, Dexter (Editor)
1992-01-01
In this 'Liftoff to Learning' series video, astronauts (Charles Veach, Gregory Harbaugh, Donald McMonagle, Michael Coats, L. Blaine Hammond, Guion Bluford, Richard Hieb) from the STS-39 Mission use physical experiments and computer animation to explain how weightlessness and gravity affects everything and everyone onboard the Space Shuttle. The physics behind the differences between weight and mass, and the concepts of 'free fall', are demonstrated along with explanations and experiments of Sir Issac Newton's three laws of motion.
NASA Astrophysics Data System (ADS)
Herbert, Dexter
1992-03-01
In this 'Liftoff to Learning' series video, astronauts (Charles Veach, Gregory Harbaugh, Donald McMonagle, Michael Coats, L. Blaine Hammond, Guion Bluford, Richard Hieb) from the STS-39 Mission use physical experiments and computer animation to explain how weightlessness and gravity affects everything and everyone onboard the Space Shuttle. The physics behind the differences between weight and mass, and the concepts of 'free fall', are demonstrated along with explanations and experiments of Sir Issac Newton's three laws of motion.
Globally convergent techniques in nonlinear Newton-Krylov
NASA Technical Reports Server (NTRS)
Brown, Peter N.; Saad, Youcef
1989-01-01
Some convergence theory is presented for nonlinear Krylov subspace methods. The basic idea of these methods is to use variants of Newton's iteration in conjunction with a Krylov subspace method for solving the Jacobian linear systems. These methods are variants of inexact Newton methods where the approximate Newton direction is taken from a subspace of small dimensions. The main focus is to analyze these methods when they are combined with global strategies such as linesearch techniques and model trust region algorithms. Most of the convergence results are formulated for projection onto general subspaces rather than just Krylov subspaces.
Isaac Newton and the astronomical refraction.
Lehn, Waldemar H
2008-12-01
In a short interval toward the end of 1694, Isaac Newton developed two mathematical models for the theory of the astronomical refraction and calculated two refraction tables, but did not publish his theory. Much effort has been expended, starting with Biot in 1836, in the attempt to identify the methods and equations that Newton used. In contrast to previous work, a closed form solution is identified for the refraction integral that reproduces the table for his first model (in which density decays linearly with elevation). The parameters of his second model, which includes the exponential variation of pressure in an isothermal atmosphere, have also been identified by reproducing his results. The implication is clear that in each case Newton had derived exactly the correct equations for the astronomical refraction; furthermore, he was the first to do so.
Methods and energy storage devices utilizing electrolytes having surface-smoothing additives
Xu, Wu; Zhang, Jiguang; Graff, Gordon L; Chen, Xilin; Ding, Fei
2015-11-12
Electrodeposition and energy storage devices utilizing an electrolyte having a surface-smoothing additive can result in self-healing, instead of self-amplification, of initial protuberant tips that give rise to roughness and/or dendrite formation on the substrate and anode surface. For electrodeposition of a first metal (M1) on a substrate or anode from one or more cations of M1 in an electrolyte solution, the electrolyte solution is characterized by a surface-smoothing additive containing cations of a second metal (M2), wherein cations of M2 have an effective electrochemical reduction potential in the solution lower than that of the cations of M1.
The Unknown Detective Career of Isaac Newton
Levenson, Thomas
2010-03-17
Isaac Newton's fame is such that it would seem that almost nothing remains to be discovered about his deeds or his methods. But very little attention has been paid to the three decades Newton spent in charge of the Royal Mint, and especially to the first of those years, in which he supervised the remaking of England's entire silver money supply, all the while investigating, prosecuting, and executing the nation's currency criminals. That story provides unique perspectives on both his own habits of mind and on how what has come to be called the scientific revolution played out, not just in the minds of the great, but on the mean streets of London.
Kim, B S; Putnam, A J; Kulik, T J; Mooney, D J
1998-01-05
The engineering of functional smooth muscle (SM) tissue is critical if one hopes to successfully replace the large number of tissues containing an SM component with engineered equivalents. This study reports on the effects of SM cell (SMC) seeding and culture conditions on the cellularity and composition of SM tissues engineered using biodegradable matrices (5 x 5 mm, 2-mm thick) of polyglycolic acid (PGA) fibers. Cells were seeded by injecting a cell suspension into polymer matrices in tissue culture dishes (static seeding), by stirring polymer matrices and a cell suspension in spinner flasks (stirred seeding), or by agitating polymer matrices and a cell suspension in tubes with an orbital shaker (agitated seeding). The density of SMCs adherent to these matrices was a function of cell concentration in the seeding solution, but under all conditions a larger number (approximately 1 order of magnitude) and more uniform distribution of SMCs adherent to the matrices were obtained with dynamic versus static seeding methods. The dynamic seeding methods, as compared to the static method, also ultimately resulted in new tissues that had a higher cellularity, more uniform cell distribution, and greater elastin deposition. The effects of culture conditions were next studied by culturing cell-polymer constructs in a stirred bioreactor versus static culture conditions. The stirred culture of SMC-seeded polymer matrices resulted in tissues with a cell density of 6.4 +/- 0.8 x 10(8) cells/cm3 after 5 weeks, compared to 2.0 +/- 1.1 x 10(8) cells/cm3 with static culture. The elastin and collagen synthesis rates and deposition within the engineered tissues were also increased by culture in the bioreactors. The elastin content after 5-week culture in the stirred bioreactor was 24 +/- 3%, and both the elastin content and the cellularity of these tissues are comparable to those of native SM tissue. New tissues were also created in vivo when dynamically seeded polymer matrices were
NASA Astrophysics Data System (ADS)
Lee, Chan; Kim, Hobeom; Kim, Jungdo; Im, Seyoung
2017-06-01
Polyhedral elements with an arbitrary number of nodes or non-planar faces, obtained with an edge-based smoothed finite element method, retain good geometric adaptability and accuracy in solution. This work is intended to extend the polyhedral elements to nonlinear elastic analysis with finite deformations. In order to overcome the volumetric locking problem, a smoothing domain-based selective smoothed finite element method scheme and a three-field-mixed cell-based smoothed finite element method with nodal cells were developed. Using several numerical examples, their performance and the accuracy of their solutions were examined, and their effectiveness for practical applications was demonstrated as well.
Noorkojuri, Hoda; Hajizadeh, Ebrahim; Baghestani, Ahmadreza; Pourhoseingholi, Mohamadamin
2013-01-01
Background Smoothing methods are widely used to analyze epidemiologic data, particularly in the area of environmental health where non-linear relationships are not uncommon. This study focused on three different smoothing methods in Cox models: penalized splines, restricted cubic splines and fractional polynomials. Objectives The aim of this study was to assess the effects of prognostic factors on survival of patients with gastric cancer using the smoothing methods in Cox model and Cox proportional hazards. Also, all models were compared to each other in order to find the best one. Materials and Methods We retrospectively studied 216 patients with gastric cancer who were registered in one referral cancer registry center in Tehran, Iran. Age at diagnosis, sex, presence of metastasis, tumor size, histology type, lymph node metastasis, and pathologic stages were entered in to analysis using the Cox proportional hazards model and smoothing methods in Cox model. The SPSS version 18.0 and R version 2.14.1 were used for data analysis. These models compared with Akaike information criterion. Results In this study, The 5 year survival rate was 30%. The Cox proportional hazards, penalized spline and fractional polynomial models let to similar results and Akaike information criterion showed a better performance for these three models comparing to the restricted cubic spline. Also, P-value and likelihood ratio test in restricted cubic spline was greater than other models. Note that the best model is indicated by the lowest Akaike information criterion. Conclusions The use of smoothing methods helps us to eliminate non-linear effects but it is more appropriate to use Cox proportional hazards model in medical data because of its’ ease of interpretation and capability of modeling both continuous and discrete covariates. Also, Cox proportional hazards model and smoothing methods analysis identified that age at diagnosis and tumor size were independent prognostic factors for the
HyeongKae Park; R. Nourgaliev; Richard C. Martineau; Dana A. Knoll
2008-09-01
Multidimensional, higher-order (2nd and higher) numerical methods have come to the forefront in recent years due to significant advances of computer technology and numerical algorithms, and have shown great potential as viable design tools for realistic applications. To achieve this goal, implicit high-order accurate coupling of the multiphysics simulations is a critical component. One of the issues that arise from multiphysics simulation is the necessity to resolve multiple time scales. For example, the dynamical time scales of neutron kinetics, fluid dynamics and heat conduction significantly differ (typically >1010 magnitude), with the dominant (fastest) physical mode also changing during the course of transient [Pope and Mousseau, 2007]. This leads to the severe time step restriction for stability in traditional multiphysics (i.e. operator split, semi-implicit discretization) simulations. The lower order methods suffer from an undesirable numerical dissipation. Thus implicit, higher order accurate scheme is necessary to perform seamlessly-coupled multiphysics simulations that can be used to analyze the “what-if” regulatory accident scenarios, or to design and optimize engineering systems.
Soltanian, Ali Reza; Hossein, Mahjub
2012-01-01
Kernel smoothing method is a non-parametric or graphical method for statistical estimation. In the present study was used a kernel smoothing method for finding the death hazard rates of patients with acute myocardial infarction. By employing non-parametric regression methods, the curve estimation, may have some complexity. In this article, four indices of Epanechnikov, Biquadratic, Triquadratic and Rectangle kernels were used under local and k-nearest neighbors' bandwidth. For comparing the models, were employed mean integrated squared error. To illustrate in the study, was used the dataset of acute myocardial infraction patients in Bushehr port, in the south of Iran. To obtain proper bandwidth, was used generalized cross-validation method. Corresponding to a low bandwidth value, the curve is unreadable and the regression curve is so roughly. In the event of increasing bandwidth value, the distribution has more readable and smooth. In this study, estimate of death hazard rate for the patients based on Epanechnikov kernel under local bandwidth was 1.011 x 10(-11), which had the lowest mean square error compared to k-nearest neighbors bandwidth. We obtained the death hazard rate in 10 and 30 months after the first acute myocardial infraction using Epanechnikov kernelas were 0.0031 and 0.0012, respectively. The Epanechnikov kernel for obtaining death hazard rate of patients with acute myocardial infraction has minimum mean integrated squared error compared to the other kernels. In addition, the mortality hazard rate of acute myocardial infraction in the study was low.
Rate of convergence of k-step Newton estimators to efficient likelihood estimators
Steve Verrill
2007-01-01
We make use of Cramer conditions together with the well-known local quadratic convergence of Newton?s method to establish the asymptotic closeness of k-step Newton estimators to efficient likelihood estimators. In Verrill and Johnson [2007. Confidence bounds and hypothesis tests for normal distribution coefficients of variation. USDA Forest Products Laboratory Research...
ERIC Educational Resources Information Center
Develaki, Maria
2012-01-01
The availability of teaching units on the nature of science (NOS) can reinforce classroom instruction in the subject, taking into account the related deficiencies in textbook material and teacher training. We give a sequence of teaching units in which the teaching of Newton's gravitational theory is used as a basis for reflecting on the…
ERIC Educational Resources Information Center
Develaki, Maria
2012-01-01
The availability of teaching units on the nature of science (NOS) can reinforce classroom instruction in the subject, taking into account the related deficiencies in textbook material and teacher training. We give a sequence of teaching units in which the teaching of Newton's gravitational theory is used as a basis for reflecting on the…
NASA Astrophysics Data System (ADS)
Gatsis, John
An investigation of preconditioning techniques is presented for a Newton-Krylov algorithm that is used for the computation of steady, compressible, high Reynolds number flows about airfoils. A second-order centred-difference method is used to discretize the compressible Navier-Stokes (NS) equations that govern the fluid flow. The one-equation Spalart-Allmaras turbulence model is used. The discretized equations are solved using Newton's method and the generalized minimal residual (GMRES) Krylov subspace method is used to approximately solve the linear system. These preconditioning techniques are first applied to the solution of the discretized steady convection-diffusion equation. Various orderings, iterative block incomplete LU (BILU) preconditioning and multigrid preconditioning are explored. The baseline preconditioner is a BILU factorization of a lower-order discretization of the system matrix in the Newton linearization. An ordering based on the minimum discarded fill (MDF) ordering is developed and compared to the widely popular reverse Cuthill-McKee ordering. An evolutionary algorithm is used to investigate and enhance this ordering. For the convection-diffusion equation, the MDF-based ordering performs well and RCM is superior for the NS equations. Experiments for inviscid, laminar and turbulent cases are presented to show the effectiveness of iterative BILU preconditioning in terms of reducing the number of GMRES iterations, and hence the memory requirements of the Newton-Krylov algorithm. Multigrid preconditioning also reduces the number of GMRES iterations. The framework for the iterative BILU and BILU-smoothed multigrid preconditioning algorithms is presented in detail.
2017-01-04
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows some of the floor of Newton Crater. The small dark bluish features are sand dunes. Orbit Number: 50864 Latitude: -41.5788 Longitude: 201.592 Instrument: VIS Captured: 2013-06-02 02:53 http://photojournal.jpl.nasa.gov/catalog/PIA21280
2017-07-24
The THEMIS VIS camera contains 5 filters. THe data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Newton Crater in Terra Sirenum. Orbit Number: 59753 Latitude: -38.4654 Longitude: 199.631 Instrument: VIS Captured: 2015-06-03 18:38 https://photojournal.jpl.nasa.gov/catalog/PIA21793
2017-07-11
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple way to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color image shows part of Newton Crater in Terra Siremun. Orbit Number: 59678 Latitude: -41.9838 Longitude: 202.593 Instrument: VIS Captured: 2015-05-28 14:26 https://photojournal.jpl.nasa.gov/catalog/PIA21703
2017-05-30
The THEMIS VIS camera contains 5 filters. The data from different filters can be combined in multiple ways to create a false color image. These false color images may reveal subtle variations of the surface not easily identified in a single band image. Today's false color images shows part of the large ridge on the floor of Newton Crater in Terra Sirenum. Orbit Number: 59416 Latitude: -41.0768 Longitude: 200.911 Instrument: VIS Captured: 2015-05-07 00:38 https://photojournal.jpl.nasa.gov/catalog/PIA21672
2014-07-01
Peter A. Flach, Nathalie Japkowicz, Stan Matwin: Smooth Receiver Op- erating Characteristics (smROC) Curves. Machine Learning and Knowledge...strengthen Canada’s ability to anticipate, prevent/mitigate, prepare for, respond to, and recover from natural disasters , serious accidents, crime and...in developing decision making rules for face recognition triaging . From a performance evaluation perspective, the problem can be decomposed into two
NASA Astrophysics Data System (ADS)
Szczesna, Dorota H.; Kulas, Zbigniew; Kasprzak, Henryk T.; Stenevi, Ulf
2009-11-01
A lateral shearing interferometer was used to examine the smoothness of the tear film. The information about the distribution and stability of the precorneal tear film is carried out by the wavefront reflected from the surface of tears and coded in interference fringes. Smooth and regular fringes indicate a smooth tear film surface. On corneae after laser in situ keratomileusis (LASIK) or radial keratotomy (RK) surgery, the interference fringes are seldom regular. The fringes are bent on bright lines, which are interpreted as tear film breakups. The high-intensity pattern seems to appear in similar location on the corneal surface after refractive surgery. Our purpose was to extract information about the pattern existing under the interference fringes and calculate its shape reproducibility over time and following eye blinks. A low-pass filter was applied and correlation coefficient was calculated to compare a selected fragment of the template image to each of the following frames in the recorded sequence. High values of the correlation coefficient suggest that irregularities of the corneal epithelium might influence tear film instability and that tear film breakup may be associated with local irregularities of the corneal topography created after the LASIK and RK surgeries.
Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei
2017-01-01
Depth image-based rendering (DIBR), which is used to render virtual views with a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process, inevitably leads to holes in the resulting 3D image as a result of newly-exposed areas. In this paper, we proposed a structure-aided depth map preprocessing framework in the transformed domain, which is inspired by recently proposed domain transform for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints including scene structure, edge consistency and visual saliency information in the transformed domain to improve the performance of depth map preprocess in an implicit way. Then, adaptive smooth localization is cooperated and realized in the proposed framework to further reduce over-smoothness and enhance optimization in the non-hole regions. Different from the other similar methods, the proposed method can simultaneously achieve the effects of hole filling, edge correction and local smoothing for typical depth maps in a united framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performances of the proposed method. PMID:28407027
Liu, Wei; Ma, Liyan; Qiu, Bo; Cui, Mingyue; Ding, Jianwei
2017-01-01
Depth image-based rendering (DIBR), which is used to render virtual views with a color image and the corresponding depth map, is one of the key techniques in the 2D to 3D conversion process. Due to the absence of knowledge about the 3D structure of a scene and its corresponding texture, DIBR in the 2D to 3D conversion process, inevitably leads to holes in the resulting 3D image as a result of newly-exposed areas. In this paper, we proposed a structure-aided depth map preprocessing framework in the transformed domain, which is inspired by recently proposed domain transform for its low complexity and high efficiency. Firstly, our framework integrates hybrid constraints including scene structure, edge consistency and visual saliency information in the transformed domain to improve the performance of depth map preprocess in an implicit way. Then, adaptive smooth localization is cooperated and realized in the proposed framework to further reduce over-smoothness and enhance optimization in the non-hole regions. Different from the other similar methods, the proposed method can simultaneously achieve the effects of hole filling, edge correction and local smoothing for typical depth maps in a united framework. Thanks to these advantages, it can yield visually satisfactory results with less computational complexity for high quality 2D to 3D conversion. Numerical experimental results demonstrate the excellent performances of the proposed method.
Isaac Newton: Man, Myth, and Mathematics.
ERIC Educational Resources Information Center
Rickey, V. Frederick
1987-01-01
This article was written in part to celebrate the anniversaries of landmark mathematical works by Newton and Descartes. It's other purpose is to dispel some myths about Sir Isaac Newton and to encourage readers to read Newton's works. (PK)
Isaac Newton: Man, Myth, and Mathematics.
ERIC Educational Resources Information Center
Rickey, V. Frederick
1987-01-01
This article was written in part to celebrate the anniversaries of landmark mathematical works by Newton and Descartes. It's other purpose is to dispel some myths about Sir Isaac Newton and to encourage readers to read Newton's works. (PK)
Newton-Krylov-Schwarz: An implicit solver for CFD
NASA Technical Reports Server (NTRS)
Cai, Xiao-Chuan; Keyes, David E.; Venkatakrishnan, V.
1995-01-01
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on aerodynamics applications emphasizing comparisons with a standard defect-correction approach, subdomain preconditioner consistency, subdomain preconditioner quality, and the effect of a coarse grid.
Edme Mariotte and Newton's Cradle
ERIC Educational Resources Information Center
Cross, Rod
2012-01-01
The first recorded experiments describing the phenomena made popular by Newton's cradle appear to be those conducted by Edme Mariotte around 1670. He was quoted in Newton's "Principia," along with Wren, Wallis, and Huygens, as having conducted pioneering experiments on the collisions of pendulum balls. Each of these authors concluded that momentum…
Newton's Cradle in Physics Education
ERIC Educational Resources Information Center
Gauld, Colin F.
2006-01-01
Newton's Cradle is a series of bifilar pendulums used in physics classrooms to demonstrate the role of the principles of conservation of momentum and kinetic energy in elastic collisions. The paper reviews the way in which textbooks use Newton's Cradle and points out the unsatisfactory nature of these treatments in almost all cases. The literature…
Telecommunications Handbook: Connecting to Newton.
ERIC Educational Resources Information Center
Baker, Christopher; And Others
This handbook was written by the Argonne National Laboratory for use with their electronic bulletin board system (BBS) called Newton. Newton is an educational BBS for use by teachers, students, and parents. Topics range from discussions of science fair topics to online question and answer sessions with scientists. Future capabilities will include…
Edme Mariotte and Newton's Cradle
ERIC Educational Resources Information Center
Cross, Rod
2012-01-01
The first recorded experiments describing the phenomena made popular by Newton's cradle appear to be those conducted by Edme Mariotte around 1670. He was quoted in Newton's "Principia," along with Wren, Wallis, and Huygens, as having conducted pioneering experiments on the collisions of pendulum balls. Each of these authors concluded that momentum…
Newton's Cradle in Physics Education
ERIC Educational Resources Information Center
Gauld, Colin F.
2006-01-01
Newton's Cradle is a series of bifilar pendulums used in physics classrooms to demonstrate the role of the principles of conservation of momentum and kinetic energy in elastic collisions. The paper reviews the way in which textbooks use Newton's Cradle and points out the unsatisfactory nature of these treatments in almost all cases. The literature…
Modeling the propagation of volcanic debris avalanches by a Smoothed Particle Hydrodynamics method
NASA Astrophysics Data System (ADS)
Sosio, Rosanna; Battista Crosta, Giovanni
2010-05-01
Hazard from collapses of volcanic edifices threatens million of people which currently live on top of volcanic deposits or around volcanoes prone to fail. Nevertheless, no much effort has been dedicated for the evaluation of the hazard posed by volcanic debris avalanches (e.g. emergency plans, hazard zoning maps). This work focuses at evaluating the exceptional mobility of volcanic debris avalanches for hazard analyses purposes by providing a set of calibrated cases. We model the propagation of eight debris avalanche selected among the best known historical events originated from sector collapses of volcanic edifices. The events have large volumes (ranging from 0.01-0.02 km3 to 25 km3) and are well preserved so that their main features are recognizable from satellite images. The events developed in a variety of settings and condition and they vary with respect to their morphological constrains, materials, styles of failure. The modeling has been performed using a Lagragian numerical method adapted from Smoothed Particle Hydrodynamics to solve the depth averaged quasi-3D equation for motion (McDougall and Hungr 2004). This code has been designed and satisfactorily used to simulate rock and debris avalanches in non-volcanic settings (McDougall and Hungr, 2004). Its use is here extended to model volcanic debris avalanches which may differ from non-volcanic ones by dimensions, water content and by possible thermodynamic effects or degassing caused by active volcanic processes. The resolution of the topographic data is generally low for remote areas like the ones considered in this study, while the pre event topographies are more often not available. The effect of the poor topographic resolution on the final results has been evaluated by replicating the modeling on satellite-derived topographical grids with varying cell size (from 22 m to 90 m). The event reconstructions and the back analyses are based on the observations available from the literature. We test the
An implicit Smooth Particle Hydrodynamic code
Knapp, Charles E.
2000-05-01
An implicit version of the Smooth Particle Hydrodynamic (SPH) code SPHINX has been written and is working. In conjunction with the SPHINX code the new implicit code models fluids and solids under a wide range of conditions. SPH codes are Lagrangian, meshless and use particles to model the fluids and solids. The implicit code makes use of the Krylov iterative techniques for solving large linear-systems and a Newton-Raphson method for non-linear corrections. It uses numerical derivatives to construct the Jacobian matrix. It uses sparse techniques to save on memory storage and to reduce the amount of computation. It is believed that this is the first implicit SPH code to use Newton-Krylov techniques, and is also the first implicit SPH code to model solids. A description of SPH and the techniques used in the implicit code are presented. Then, the results of a number of tests cases are discussed, which include a shock tube problem, a Rayleigh-Taylor problem, a breaking dam problem, and a single jet of gas problem. The results are shown to be in very good agreement with analytic solutions, experimental results, and the explicit SPHINX code. In the case of the single jet of gas case it has been demonstrated that the implicit code can do a problem in much shorter time than the explicit code. The problem was, however, very unphysical, but it does demonstrate the potential of the implicit code. It is a first step toward a useful implicit SPH code.
Hu, Pingsha; Maiti, Tapabrata
2011-01-01
Microarray is a powerful tool for genome-wide gene expression analysis. In microarray expression data, often mean and variance have certain relationships. We present a non-parametric mean-variance smoothing method (NPMVS) to analyze differentially expressed genes. In this method, a nonlinear smoothing curve is fitted to estimate the relationship between mean and variance. Inference is then made upon shrinkage estimation of posterior means assuming variances are known. Different methods have been applied to simulated datasets, in which a variety of mean and variance relationships were imposed. The simulation study showed that NPMVS outperformed the other two popular shrinkage estimation methods in some mean-variance relationships; and NPMVS was competitive with the two methods in other relationships. A real biological dataset, in which a cold stress transcription factor gene, CBF2, was overexpressed, has also been analyzed with the three methods. Gene ontology and cis-element analysis showed that NPMVS identified more cold and stress responsive genes than the other two methods did. The good performance of NPMVS is mainly due to its shrinkage estimation for both means and variances. In addition, NPMVS exploits a non-parametric regression between mean and variance, instead of assuming a specific parametric relationship between mean and variance. The source code written in R is available from the authors on request.
A smooth dissipative particle dynamics method for domains with arbitrary-geometry solid boundaries
NASA Astrophysics Data System (ADS)
Gatsonis, Nikolaos A.; Potami, Raffaele; Yang, Jun
2014-01-01
A smooth dissipative particle dynamics method with dynamic virtual particle allocation (SDPD-DV) for modeling and simulation of mesoscopic fluids in wall-bounded domains is presented. The physical domain in SDPD-DV may contain external and internal solid boundaries of arbitrary geometries, periodic inlets and outlets, and the fluid region. The SDPD-DV method is realized with fluid particles, boundary particles, and dynamically allocated virtual particles. The internal or external solid boundaries of the domain can be of arbitrary geometry and are discretized with a surface grid. These boundaries are represented by boundary particles with assigned properties. The fluid domain is discretized with fluid particles of constant mass and variable volume. Conservative and dissipative force models due to virtual particles exerted on a fluid particle in the proximity of a solid boundary supplement the original SDPD formulation. The dynamic virtual particle allocation approach provides the density and the forces due to virtual particles. The integration of the SDPD equations is accomplished with a velocity-Verlet algorithm for the momentum and a Runge-Kutta for the entropy equation. The velocity integrator is supplemented by a bounce-forward algorithm in cases where the virtual particle force model is not able to prevent particle penetration. For the incompressible isothermal systems considered in this work, the pressure of a fluid particle is obtained by an artificial compressibility formulation for liquids and the ideal gas law for gases. The self-diffusion coefficient is obtained by an implementation of the generalized Einstein and the Green-Kubo relations. Field properties are obtained by sampling SDPD-DV outputs on a post-processing grid that allows harnessing the particle information on desired spatiotemporal scales. The SDPD-DV method is verified and validated with simulations in bounded and periodic domains that cover the hydrodynamic and mesoscopic regimes for
MODFLOW-NWT, A Newton formulation for MODFLOW-2005
Niswonger, Richard G.; Panday, Sorab; Ibaraki, Motomu
2011-01-01
This report documents a Newton formulation of MODFLOW-2005, called MODFLOW-NWT. MODFLOW-NWT is a standalone program that is intended for solving problems involving drying and rewetting nonlinearities of the unconfined groundwater-flow equation. MODFLOW-NWT must be used with the Upstream-Weighting (UPW) Package for calculating intercell conductances in a different manner than is done in the Block-Centered Flow (BCF), Layer Property Flow (LPF), or Hydrogeologic-Unit Flow (HUF; Anderman and Hill, 2000) Packages. The UPW Package treats nonlinearities of cell drying and rewetting by use of a continuous function of groundwater head, rather than the discrete approach of drying and rewetting that is used by the BCF, LPF, and HUF Packages. This further enables application of the Newton formulation for unconfined groundwater-flow problems because conductance derivatives required by the Newton method are smooth over the full range of head for a model cell. The NWT linearization approach generates an asymmetric matrix, which is different from the standard MODFLOW formulation that generates a symmetric matrix. Because all linear solvers presently available for use with MODFLOW-2005 solve only symmetric matrices, MODFLOW-NWT includes two previously developed asymmetric matrix-solver options. The matrix-solver options include a generalized-minimum-residual (GMRES) Solver and an Orthomin / stabilized conjugate-gradient (CGSTAB) Solver. The GMRES Solver is documented in a previously published report, such that only a brief description and input instructions are provided in this report. However, the CGSTAB Solver (called XMD) is documented in this report. Flow-property input for the UPW Package is designed based on the LPF Package and material-property input is identical to that for the LPF Package except that the rewetting and vertical-conductance correction options of the LPF Package are not available with the UPW Package. Input files constructed for the LPF Package can be used
Ye, Ting; Phan-Thien, Nhan; Lim, Chwee Teck; Peng, Lina; Shi, Huixin
2017-06-01
In biofluid flow systems, often the flow problems of fluids of complex structures, such as the flow of red blood cells (RBCs) through complex capillary vessels, need to be considered. The smoothed dissipative particle dynamics (SDPD), a particle-based method, is one of the easy and flexible methods to model such complex structure fluids. It couples the best features of the smoothed particle hydrodynamics (SPH) and dissipative particle dynamics (DPD), with parameters having specific physical meaning (coming from SPH discretization of the Navier-Stokes equations), combined with thermal fluctuations in a mesoscale simulation, in a similar manner to the DPD. On the other hand, the immersed boundary method (IBM), a preferred method for handling fluid-structure interaction problems, has also been widely used to handle the fluid-RBC interaction in RBC simulations. In this paper, we aim to couple SDPD and IBM together to carry out the simulations of RBCs in complex flow problems. First, we develop the SDPD-IBM model in details, including the SDPD model for the evolving fluid flow, the RBC model for calculating RBC deformation force, the IBM for treating fluid-RBC interaction, and the solid boundary treatment model as well. We then conduct the verification and validation of the combined SDPD-IBM method. Finally, we demonstrate the capability of the SDPD-IBM method by simulating the flows of RBCs in rectangular, cylinder, curved, bifurcated, and constricted tubes, respectively.
NASA Astrophysics Data System (ADS)
Ye, Ting; Phan-Thien, Nhan; Lim, Chwee Teck; Peng, Lina; Shi, Huixin
2017-06-01
In biofluid flow systems, often the flow problems of fluids of complex structures, such as the flow of red blood cells (RBCs) through complex capillary vessels, need to be considered. The smoothed dissipative particle dynamics (SDPD), a particle-based method, is one of the easy and flexible methods to model such complex structure fluids. It couples the best features of the smoothed particle hydrodynamics (SPH) and dissipative particle dynamics (DPD), with parameters having specific physical meaning (coming from SPH discretization of the Navier-Stokes equations), combined with thermal fluctuations in a mesoscale simulation, in a similar manner to the DPD. On the other hand, the immersed boundary method (IBM), a preferred method for handling fluid-structure interaction problems, has also been widely used to handle the fluid-RBC interaction in RBC simulations. In this paper, we aim to couple SDPD and IBM together to carry out the simulations of RBCs in complex flow problems. First, we develop the SDPD-IBM model in details, including the SDPD model for the evolving fluid flow, the RBC model for calculating RBC deformation force, the IBM for treating fluid-RBC interaction, and the solid boundary treatment model as well. We then conduct the verification and validation of the combined SDPD-IBM method. Finally, we demonstrate the capability of the SDPD-IBM method by simulating the flows of RBCs in rectangular, cylinder, curved, bifurcated, and constricted tubes, respectively.
Fourth-order solutions of nonlinear two-point boundary value problems by Newton-HSSOR iteration
NASA Astrophysics Data System (ADS)
Sulaiman, Jumat; Hasan, Mohd. Khatim; Othman, Mohamed; Karim, Samsul Ariffin Abdul
2014-06-01
In this paper, the Half-Sweep Successive Over-Relaxation (HSSOR) iterative method together with Newton scheme namely Newton-HSSOR is investigated in solving the nonlinear systems generated from the fourth-order half-sweep finite difference approximation equation for nonlinear two-point boundary value problems. The Newton scheme is proposed to linearize the nonlinear system into the form of linear system. On top of that, we also present the basic formulation and implementation of Newton-HSSOR iterative method. For comparison purpose, combinations between the Full-Sweep Gauss-Seidel (FSGS) and Full-Sweep Successive Over-Relaxation (FSSOR) iterative methods with Newton scheme, which are indicated as Newton-FSGS and Newton-FSSOR methods respectively have been implemented numerically. Numerical experiments of two problems are given to illustrate that the Newton-HSSOR method is more superior compared with the tested methods.
PEOPLE IN PHYSICS: Newton's apple
NASA Astrophysics Data System (ADS)
Sandford Smith, Daniel
1997-03-01
This essay has a long history. It was triggered at university by one of my tutors describing the dispute between Robert Hooke and Isaac Newton. He conjured up an image of Newton sitting at his desk doing calculations while Hooke went down mineshafts trying to detect a change in the strength of gravity. To someone who was finding the maths content of a physics degree somewhat challenging this was a symbolic image. I believe that the story of Newton and the apple illustrates the complex nature of scientific discovery.
Lutton, E. Josiah; Lammers, Wim J. E. P.; James, Sean
2017-01-01
Background The fibrous structure of the myometrium has previously been characterised at high resolutions in small tissue samples (< 100 mm3) and at low resolutions (∼500 μm per voxel edge) in whole-organ reconstructions. However, no high-resolution visualisation of the myometrium at the organ level has previously been attained. Methods and results We have developed a technique to reconstruct the whole myometrium from serial histological slides, at a resolution of approximately 50 μm per voxel edge. Reconstructions of samples taken from human and rat uteri are presented here, along with histological verification of the reconstructions and detailed investigation of the fibrous structure of these uteri, using a range of tools specifically developed for this analysis. These reconstruction techniques enable the high-resolution rendering of global structure previously observed at lower resolution. Moreover, structures observed previously in small portions of the myometrium can be observed in the context of the whole organ. The reconstructions are in direct correspondence with the original histological slides, which allows the inspection of the anatomical context of any features identified in the three-dimensional reconstructions. Conclusions and significance The methods presented here have been used to generate a faithful representation of myometrial smooth muscle at a resolution of ∼50 μm per voxel edge. Characterisation of the smooth muscle structure of the myometrium by means of this technique revealed a detailed view of previously identified global structures in addition to a global view of the microarchitecture. A suite of visualisation tools allows researchers to interrogate the histological microarchitecture. These methods will be applicable to other smooth muscle tissues to analyse fibrous microarchitecture. PMID:28301486
NASA Astrophysics Data System (ADS)
Cherepanov, Roman O.; Gerasimov, Alexander V.
2016-11-01
A fully conservative first order accuracy smooth particle method is proposed for elastic-plastic flows. The paper also provides an algorithm for calculating free boundary conditions. A weak variational formulation is used to achieve energy and momentum conservation and to decrease an order of spatial derivatives for the boundary condition calculation, and the Taylor series expansion is used for restoring particle inconsistence and for getting at least the first order of accuracy of spatial derivatives. The approach proposed allows us to avoid the "ghost" particle usage.
NASA Astrophysics Data System (ADS)
Nakayama, Akihiko; Leong, Lap Yan; Kong, Wei Song
2017-04-01
The basic formulation of the smoothed particle hydrodynamics (SPH) has been re-examined for analysis of gas-liquid two-phase flows with large density differences. The improved method has been verified in the calculation of dam-break flow and has been applied to an open-channel flow over steep sloped stepped spillway. In the calculation of the flow over the steps, not only is the trapped air but entrained air bubbles and water droplets are reproduced well. The detailed variation of the time-averaged mean quantities will have to be further examined but overall prediction with relatively small number of particles is done well.
An Inexact Newton-Krylov Algorithm for Constrained Diffeomorphic Image Registration.
Mang, Andreas; Biros, George
We propose numerical algorithms for solving large deformation diffeomorphic image registration problems. We formulate the nonrigid image registration problem as a problem of optimal control. This leads to an infinite-dimensional partial differential equation (PDE) constrained optimization problem. The PDE constraint consists, in its simplest form, of a hyperbolic transport equation for the evolution of the image intensity. The control variable is the velocity field. Tikhonov regularization on the control ensures well-posedness. We consider standard smoothness regularization based on H(1)- or H(2)-seminorms. We augment this regularization scheme with a constraint on the divergence of the velocity field (control variable) rendering the deformation incompressible (Stokes regularization scheme) and thus ensuring that the determinant of the deformation gradient is equal to one, up to the numerical error. We use a Fourier pseudospectral discretization in space and a Chebyshev pseudospectral discretization in time. The latter allows us to reduce the number of unknowns and enables the time-adaptive inversion for nonstationary velocity fields. We use a preconditioned, globalized, matrix-free, inexact Newton-Krylov method for numerical optimization. A parameter continuation is designed to estimate an optimal regularization parameter. Regularity is ensured by controlling the geometric properties of the deformation field. Overall, we arrive at a black-box solver that exploits computational tools that are precisely tailored for solving the optimality system. We study spectral properties of the Hessian, grid convergence, numerical accuracy, computational efficiency, and deformation regularity of our scheme. We compare the designed Newton-Krylov methods with a globalized Picard method (preconditioned gradient descent). We study the influence of a varying number of unknowns in time. The reported results demonstrate excellent numerical accuracy, guaranteed local deformation
NASA Astrophysics Data System (ADS)
Belenkiy, Ari
2007-08-01
Among the Newtonian manuscripts, owned by the Jewish National and University Library at Jerusalem and known as MS Yahuda 24, there is a proposal for the reform of the Julian and Ecclesiastical calendars, written in three drafts in early 1700. This was Newton's response to the challenge suggested by Continental mathematicians and astronomers, G.W. Leibniz in particular. This calendar, if implemented in England, would have become a formidable rival to the Gregorian calendar. Despite having a different algorithm, its solar part agrees with the latter until 2400 AD and is more precise in the long run, within a period of 5,000 years. Its lunar algorithm is simpler than the Gregorian, but remained incomplete. Though the calendar was buried under a pile of theological papers, were it now to be implemented it would have a glorious future, since it includes the most characteristic features of the Christian, Jewish, and Muslim calendars and can aspire to become the universal calendar. Looking for the best astronomical parameters Newton attempted to compute the length of the tropical year using the ancient equinox observations reported by Hipparchus of Rhodes. Though Newton had a very thin sample of data, he obtained a tropical year only a few seconds longer than the correct length. We show that the reason lies in Newton's application of a technique similar (though not identical) to the modern ordinary least squares method. Newton also had a clear understanding of qualitative variables. Open historic-astronomical problems related to inclination of the Earth's axis of rotation are discussed. In particular, ignorance about the long-range variation in inclination and nutation is likely responsible for the wide variety in the lengths of the tropical year assigned by different 17th century astronomers - the problem that led Newton to Hipparchus.
Newton and the Second Law of Motion
ERIC Educational Resources Information Center
Gauld, C. F.
1975-01-01
Deals generally with historical errors in science teaching and specifically with Newton's conception of his second law of motion. With reference to Newton's "Principia", the author concludes that Newton would not understand what we today refer to as "Newton's Second Law." (MLH)
Newton and the Second Law of Motion
ERIC Educational Resources Information Center
Gauld, C. F.
1975-01-01
Deals generally with historical errors in science teaching and specifically with Newton's conception of his second law of motion. With reference to Newton's "Principia", the author concludes that Newton would not understand what we today refer to as "Newton's Second Law." (MLH)
Barceló, M Antònia; Saez, Marc; Cano-Serral, Gemma; Martínez-Beneito, Miguel Angel; Martínez, José Miguel; Borrell, Carme; Ocaña-Riola, Ricardo; Montoya, Imanol; Calvo, Montse; López-Abente, Gonzalo; Rodríguez-Sanz, Maica; Toro, Silvia; Alcalá, José Tomás; Saurina, Carme; Sánchez-Villegas, Pablo; Figueiras, Adolfo
2008-01-01
Although there is some experience in the study of mortality inequalities in Spanish cities, there are large urban centers that have not yet been investigated using the census tract as the unit of territorial analysis. The coordinated project
Lutton, E Josiah; Lammers, Wim J E P; James, Sean; van den Berg, Hugo A; Blanks, Andrew M
2017-01-01
The fibrous structure of the myometrium has previously been characterised at high resolutions in small tissue samples (< 100 mm3) and at low resolutions (∼500 μm per voxel edge) in whole-organ reconstructions. However, no high-resolution visualisation of the myometrium at the organ level has previously been attained. We have developed a technique to reconstruct the whole myometrium from serial histological slides, at a resolution of approximately 50 μm per voxel edge. Reconstructions of samples taken from human and rat uteri are presented here, along with histological verification of the reconstructions and detailed investigation of the fibrous structure of these uteri, using a range of tools specifically developed for this analysis. These reconstruction techniques enable the high-resolution rendering of global structure previously observed at lower resolution. Moreover, structures observed previously in small portions of the myometrium can be observed in the context of the whole organ. The reconstructions are in direct correspondence with the original histological slides, which allows the inspection of the anatomical context of any features identified in the three-dimensional reconstructions. The methods presented here have been used to generate a faithful representation of myometrial smooth muscle at a resolution of ∼50 μm per voxel edge. Characterisation of the smooth muscle structure of the myometrium by means of this technique revealed a detailed view of previously identified global structures in addition to a global view of the microarchitecture. A suite of visualisation tools allows researchers to interrogate the histological microarchitecture. These methods will be applicable to other smooth muscle tissues to analyse fibrous microarchitecture.
XMM-Newton publication statistics
NASA Astrophysics Data System (ADS)
Ness, J.-U.; Parmar, A. N.; Valencic, L. A.; Smith, R.; Loiseau, N.; Salama, A.; Ehle, M.; Schartel, N.
2014-02-01
We assessed the scientific productivity of XMM-Newton by examining XMM-Newton publications and data usage statistics. We analyse 3272 refereed papers, published until the end of 2012, that directly use XMM-Newton data. The SAO/NASA Astrophysics Data System (ADS) was used to provide additional information on each paper including the number of citations. For each paper, the XMM-Newton observation identifiers and instruments used to provide the scientific results were determined. The identifiers were used to access the XMM-{Newton} Science Archive (XSA) to provide detailed information on the observations themselves and on the original proposals. The information obtained from these sources was then combined to allow the scientific productivity of the mission to be assessed. Since around three years after the launch of XMM-Newton there have been around 300 refereed papers per year that directly use XMM-Newton data. After more than 13 years in operation, this rate shows no evidence that it is decreasing. Since 2002, around 100 scientists per year become lead authors for the first time on a refereed paper which directly uses XMM-Newton data. Each refereed XMM-Newton paper receives around four citations per year in the first few years with a long-term citation rate of three citations per year, more than five years after publication. About half of the articles citing XMM-Newton articles are not primarily X-ray observational papers. The distribution of elapsed time between observations taken under the Guest Observer programme and first article peaks at 2 years with a possible second peak at 3.25 years. Observations taken under the Target of Opportunity programme are published significantly faster, after one year on average. The fraction of science time taken until the end of 2009 that has been used in at least one article is {˜ 90} %. Most observations were used more than once, yielding on average a factor of two in usage on available observing time per year. About 20 % of
Backman, Daniel E; LeSavage, Bauer L; Shah, Shivem B; Wong, Joyce Y
2017-06-01
In arterial tissue engineering, mimicking native structure and mechanical properties is essential because compliance mismatch can lead to graft failure and further disease. With bottom-up tissue engineering approaches, designing tissue components with proper microscale mechanical properties is crucial to achieve the necessary macroscale properties in the final implant. This study develops a thermoresponsive cell culture platform for growing aligned vascular smooth muscle cell (VSMC) sheets by photografting N-isopropylacrylamide (NIPAAm) onto micropatterned poly(dimethysiloxane) (PDMS). The grafting process is experimentally and computationally optimized to produce PNIPAAm-PDMS substrates optimal for VSMC attachment. To allow long-term VSMC sheet culture and increase the rate of VSMC sheet formation, PNIPAAm-PDMS surfaces were further modified with 3-aminopropyltriethoxysilane yielding a robust, thermoresponsive cell culture platform for culturing VSMC sheets. VSMC cell sheets cultured on patterned thermoresponsive substrates exhibit cellular and collagen alignment in the direction of the micropattern. Mechanical characterization of patterned, single-layer VSMC sheets reveals increased stiffness in the aligned direction compared to the perpendicular direction whereas nonpatterned cell sheets exhibit no directional dependence. Structural and mechanical anisotropy of aligned, single-layer VSMC sheets makes this platform an attractive microstructural building block for engineering a vascular graft to match the in vivo mechanical properties of native arterial tissue. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Kandasamy, S.; Baret, F.; Verger, A.; Neveux, P.; Weiss, M.
2012-12-01
Moderate resolution satellite sensors including MODIS already provide more than 10 yr of observations well suited to describe and understand the dynamics of the Earth surface. However, these time series are incomplete because of cloud cover and associated with significant uncertainties. This study compares eight methods designed to improve the continuity by filling gaps and the consistency by smoothing the time course. It includes methods exploiting the time series as a whole (Iterative caterpillar singular spectrum analysis (ICSSA), empirical mode decomposition (EMD), low pass filtering (LPF) and Whittaker smoother (Whit)) as well as methods working on limited temporal windows of few weeks to few months (Adaptive Savitzky-Golay filter (SGF), temporal smoothing and gap filling (TSGF) and asymmetric Gaussian function (AGF)) in addition to the simple climatological LAI yearly profile (Clim). Methods were applied to MODIS leaf area index product for the period 2000-2008 over 25 sites showing a large range of seasonal patterns. Performances were discussed with emphasis on the balance achieved by each method between accuracy and roughness depending on the fraction of missing observations and the length of the gaps. Results demonstrate that EMD, LPF and AGF methods were failing in case of significant fraction of gaps (%Gap > 20%), while ICSSA, Whit and SGF were always providing estimates for dates with missing data. TSGF (respectively Clim) was able to fill more than 50% of the gaps for sites with more than 60% (resp. 80%) fraction of gaps. However, investigation of the accuracy of the reconstructed values shows that it degrades rapidly for sites with more than 20% missing data, particularly for ICSSA, Whit and SGF. In these conditions, TSGF provides the best performances significantly better than the simple Clim for gaps shorter than about 100 days. The roughness of the reconstructed temporal profiles shows large differences between the several methods, with a decrease
NASA Astrophysics Data System (ADS)
Kandasamy, S.; Baret, F.; Verger, A.; Neveux, P.; Weiss, M.
2013-06-01
Moderate resolution satellite sensors including MODIS (Moderate Resolution Imaging Spectroradiometer) already provide more than 10 yr of observations well suited to describe and understand the dynamics of earth's surface. However, these time series are associated with significant uncertainties and incomplete because of cloud cover. This study compares eight methods designed to improve the continuity by filling gaps and consistency by smoothing the time course. It includes methods exploiting the time series as a whole (iterative caterpillar singular spectrum analysis (ICSSA), empirical mode decomposition (EMD), low pass filtering (LPF) and Whittaker smoother (Whit)) as well as methods working on limited temporal windows of a few weeks to few months (adaptive Savitzky-Golay filter (SGF), temporal smoothing and gap filling (TSGF), and asymmetric Gaussian function (AGF)), in addition to the simple climatological LAI yearly profile (Clim). Methods were applied to the MODIS leaf area index product for the period 2000-2008 and over 25 sites showed a large range of seasonal patterns. Performances were discussed with emphasis on the balance achieved by each method between accuracy and roughness depending on the fraction of missing observations and the length of the gaps. Results demonstrate that the EMD, LPF and AGF methods were failing because of a significant fraction of gaps (more than 20%), while ICSSA, Whit and SGF were always providing estimates for dates with missing data. TSGF (Clim) was able to fill more than 50% of the gaps for sites with more than 60% (80%) fraction of gaps. However, investigation of the accuracy of the reconstructed values shows that it degrades rapidly for sites with more than 20% missing data, particularly for ICSSA, Whit and SGF. In these conditions, TSGF provides the best performances that are significantly better than the simple Clim for gaps shorter than about 100 days. The roughness of the reconstructed temporal profiles shows large
NASA Astrophysics Data System (ADS)
Davydov, M. N.; Kedrinskii, V. K.
2013-11-01
It is demonstrated that the method of smoothed particle hydrodynamics can be used to study the flow structure in a cavitating medium with a high concentration of the gas phase and to describe the process of inversion of the two-phase state of this medium: transition from a cavitating fluid to a system consisting of a gas and particles. A numerical analysis of the dynamics of the state of a hemispherical droplet under shock-wave loading shows that focusing of the shock wave reflected from the free surface of the droplet leads to the formation of a dense, but rapidly expanding cavitation cluster at the droplet center. By the time t = 500 µs, the bubbles at the cluster center not only coalesce and form a foam-type structure, but also transform to a gas-particle system, thus, forming an almost free rapidly expanding zone. The mechanism of this process defined previously as an internal "cavitation explosion" of the droplet is validated by means of mathematical modeling of the problem by the smoothed particle hydrodynamics method. The deformation of the cavitating droplet is finalized by its decomposition into individual fragments and particles.
Wang, Bei; Wang, Xingyu; Zhang, Tao; Nakamura, Masatoshi
2013-01-01
An automatic sleep level estimation method was developed for monitoring and regulation of day time nap sleep. The recorded nap data is separated into continuous 5-second segments. Features are extracted from EEGs, EOGs and EMG. A parameter of sleep level is defined which is estimated based on the conditional probability of sleep stages. An exponential smoothing method is applied for the estimated sleep level. There were totally 12 healthy subjects, with an averaged age of 22 yeas old, participated into the experimental work. Comparing with sleep stage determination, the presented sleep level estimation method showed better performance for nap sleep interpretation. Real time monitoring and regulation of nap is realizable based on the developed technique.
Lee, Jonghyun; Kwon, Won Sik; Kim, Kyung-Soo; Kim, Soohyun
2011-08-01
In this paper, a novel actuation method for a smooth impact drive mechanism that positions dual-slider by a single piezo-element is introduced and applied to a compact zoom lens system. A mode chart that determines the state of the slider at the expansion or shrinkage periods of the piezo-element is presented, and the design guide of a driving input profile is proposed. The motion of dual-slider holding lenses is analyzed at each mode, and proper modes for zoom functions are selected for the purpose of positioning two lenses. Because the proposed actuation method allows independent movement of two lenses by a single piezo-element, the zoom lens system can be designed to be compact. For a feasibility test, a lens system composed of an afocal zoom system and a focusing lens was developed, and the passive auto-focus method was implemented.
Critical Parameters of the In Vitro Method of Vascular Smooth Muscle Cell Calcification
Hortells, Luis; Sosa, Cecilia; Millán, Ángel; Sorribas, Víctor
2015-01-01
Background Vascular calcification (VC) is primarily studied using cultures of vascular smooth muscle cells. However, the use of very different protocols and extreme conditions can provide findings unrelated to VC. In this work we aimed to determine the critical experimental parameters that affect calcification in vitro and to determine the relevance to calcification in vivo. Experimental Procedures and Results Rat VSMC calcification in vitro was studied using different concentrations of fetal calf serum, calcium, and phosphate, in different types of culture media, and using various volumes and rates of change. The bicarbonate content of the media critically affected pH and resulted in supersaturation, depending on the concentration of Ca2+ and Pi. Such supersaturation is a consequence of the high dependence of bicarbonate buffers on CO2 vapor pressure and bicarbonate concentration at pHs above 7.40. Such buffer systems cause considerable pH variations as a result of minor experimental changes. The variations are more critical for DMEM and are negligible when the bicarbonate concentration is reduced to ¼. Particle nucleation and growth were observed by dynamic light scattering and electron microscopy. Using 2mM Pi, particles of ~200nm were observed at 24 hours in MEM and at 1 hour in DMEM. These nuclei grew over time, were deposited in the cells, and caused osteogene expression or cell death, depending on the precipitation rate. TEM observations showed that the initial precipitate was amorphous calcium phosphate (ACP), which converts into hydroxyapatite over time. In blood, the scenario is different, because supersaturation is avoided by a tightly controlled pH of 7.4, which prevents the formation of PO43--containing ACP. Conclusions The precipitation of ACP in vitro is unrelated to VC in vivo. The model needs to be refined through controlled pH and the use of additional procalcifying agents other than Pi in order to reproduce calcium phosphate deposition in vivo
A smoothly decoupled particle interface: New methods for coupling explicit and implicit solvent
Wagoner, Jason A.; Pande, Vijay S.
2011-01-01
A common theme of studies using molecular simulation is a necessary compromise between computational efficiency and resolution of the forcefield that is used. Significant efforts have been directed at combining multiple levels of granularity within a single simulation in order to maintain the efficiency of coarse-grained models, while using finer resolution in regions where such details are expected to play an important role. A specific example of this paradigm is the development of hybrid solvent models, which explicitly sample the solvent degrees of freedom within a specified domain while utilizing a continuum description elsewhere. Unfortunately, these models are complicated by the presence of structural artifacts at or near the explicit∕implicit boundary. The presence of these artifacts significantly complicates the use of such models, both undermining the accuracy obtained and necessitating the parameterization of effective potentials to counteract the artificial interactions. In this work, we introduce a novel hybrid solvent model that employs a smoothly decoupled particle interface (SDPI), a switching region that gradually transitions from fully interacting particles to a continuum solvent. The resulting SDPI model allows for the use of an implicit solvent model based on a simple theory that needs to only reproduce the behavior of bulk solvent rather than the more complex features of local interactions. In this study, the SDPI model is tested on spherical hybrid domains using a coarse-grained representation of water that includes only Lennard-Jones interactions. The results demonstrate that this model is capable of reproducing solvent configurations absent of boundary artifacts, as if they were taken from full explicit simulations. PMID:21663340
Space and motion in nature and Scripture: Galileo, Descartes, Newton.
Janiak, Andrew
2015-06-01
In the Scholium to the Definitions in Principia mathematica, Newton departs from his main task of discussing space, time and motion by suddenly mentioning the proper method for interpreting Scripture. This is surprising, and it has long been ignored by scholars. In this paper, I argue that the Scripture passage in the Scholium is actually far from incidental: it reflects Newton's substantive concern, one evident in correspondence and manuscripts from the 1680s, that any general understanding of space, time and motion must enable readers to recognize the veracity of Biblical claims about natural phenomena, including the motion of the earth. This substantive concern sheds new light on an aspect of Newton's project in the Scholium. It also underscores Newton's originality in dealing with the famous problem of reconciling theological and philosophical conceptions of nature in the seventeenth century.
Wong Unhong; Wong Honcheng; Tang Zesheng
2010-05-21
The smoothed particle hydrodynamics (SPH), which is a class of meshfree particle methods (MPMs), has a wide range of applications from micro-scale to macro-scale as well as from discrete systems to continuum systems. Graphics hardware, originally designed for computer graphics, now provide unprecedented computational power for scientific computation. Particle system needs a huge amount of computations in physical simulation. In this paper, an efficient parallel implementation of a SPH method on graphics hardware using the Compute Unified Device Architecture is developed for fluid simulation. Comparing to the corresponding CPU implementation, our experimental results show that the new approach allows significant speedups of fluid simulation through handling huge amount of computations in parallel on graphics hardware.
NASA Astrophysics Data System (ADS)
Prato, Marco; Bonettini, Silvia; Loris, Ignace; Porta, Federica; Rebegoldi, Simone
2016-10-01
The scaled gradient projection (SGP) method is a first-order optimization method applicable to the constrained minimization of smooth functions and exploiting a scaling matrix multiplying the gradient and a variable steplength parameter to improve the convergence of the scheme. For a general nonconvex function, the limit points of the sequence generated by SGP have been proved to be stationary, while in the convex case and with some restrictions on the choice of the scaling matrix the sequence itself converges to a constrained minimum point. In this paper we extend these convergence results by showing that the SGP sequence converges to a limit point provided that the objective function satisfies the Kurdyka-Łojasiewicz property at each point of its domain and its gradient is Lipschitz continuous.
NASA Astrophysics Data System (ADS)
Guevara, Ivonne; Wiseman, Howard
2015-10-01
Smoothing is an estimation method whereby a classical state (probability distribution for classical variables) at a given time is conditioned on all-time (both earlier and later) observations. Here we define a smoothed quantum state for a partially monitored open quantum system, conditioned on an all-time monitoring-derived record. We calculate the smoothed distribution for a hypothetical unobserved record which, when added to the real record, would complete the monitoring, yielding a pure-state "quantum trajectory." Averaging the pure state over this smoothed distribution yields the (mixed) smoothed quantum state. We study how the choice of actual unraveling affects the purity increase over that of the conventional (filtered) state conditioned only on the past record.
Guevara, Ivonne; Wiseman, Howard
2015-10-30
Smoothing is an estimation method whereby a classical state (probability distribution for classical variables) at a given time is conditioned on all-time (both earlier and later) observations. Here we define a smoothed quantum state for a partially monitored open quantum system, conditioned on an all-time monitoring-derived record. We calculate the smoothed distribution for a hypothetical unobserved record which, when added to the real record, would complete the monitoring, yielding a pure-state "quantum trajectory." Averaging the pure state over this smoothed distribution yields the (mixed) smoothed quantum state. We study how the choice of actual unraveling affects the purity increase over that of the conventional (filtered) state conditioned only on the past record.
Ding, Zhi; Xie, Hua; Huang, Yichen; Lv, Yiqing; Yang, Ganggang; Chen, Yan; Sun, Huizhen; Zhou, Junmei; Chen, Fang
2016-01-01
To establish a simple and rapid method to remove serosa and mucosa from detrusor for the culture of bladder smooth muscle cells (SMCs). Fourteen New Zealand rabbits were randomly allocated to two groups. In the first group, pure bladder detrusor was directly obtained from bladder wall using novel method characterized by subserous injection of normal saline. In the second group, full thickness bladder wall sample was cut down, and then, mucosa and serosa were trimmed off detrusor ex vivo. Twelve detrusor samples from two groups were manually minced and enzymatically digested, respectively, to form dissociated cells whose livability was detected by trypan blue exclusion. Proliferative ability of primary culture cells was detected by CCK-8 kit, and purity of second-passage SMCs was detected by flow cytometric analyses. Another two detrusor samples from two groups were used for histological examination. Subserous injection of normal saline combined with blunt dissection can remove mucosa and serosa from detrusor layer easily and quickly. Statistical analysis revealed the first group possessed higher cell livability, shorter primary culture cell doubling time, and higher purity of SMCs than the second group (P < 0.05). Histological examination confirmed no serosa and mucosa existed on the surface of detrusor obtained by novel method, while serosa or mucosa residual can be found on the surface of detrusor obtained by traditional method. Pure detrusor can be acquired from bladder wall conveniently using novel method. This novel method brought about significantly higher purity and cell livability as compared to traditional method.
Poles tracking of weakly nonlinear structures using a Bayesian smoothing method
NASA Astrophysics Data System (ADS)
Stephan, Cyrille; Festjens, Hugo; Renaud, Franck; Dion, Jean-Luc
2017-02-01
This paper describes a method for the identification and the tracking of poles of a weakly nonlinear structure from its free responses. This method is based on a model of multichannel damped sines whose parameters evolve over time. Their variations are approximated in discrete time by a nonlinear state space model. States are estimated by an iterative process which couples a two-pass Bayesian smoother with an Expectation-Maximization (EM) algorithm. The method is applied on numerical and experimental cases. As a result, accurate frequency and damping estimates are obtained as a function of amplitude.
1990-12-01
N is taken as the first smoothed estimate, P, must be equal to P,,, at this last data point. This can be seen graphically in Figure 4. Meditch [Ref...D-A246 336 NAVAL POSTGRADUATE SCHOOL Monterey , California R AWDTIC ELECTIE THESIS INCLUDING STATE EXCITATION IN THE FIXED-INTERVAL SMOOTHING ...Filter, Smoothing , Noise Process, Maneuver Detection. 19 Abstract (continue on reverse f necessary and idcntify by block number) The effects of the state
Pan, Wenxiao; Bao, Jie; Tartakovsky, Alexandre M.
2014-02-15
Robin boundary condition for the Navier-Stokes equations is used to model slip conditions at the fluid-solid boundaries. A novel Continuous Boundary Force (CBF) method is proposed for solving the Navier-Stokes equations subject to Robin boundary condition. In the CBF method, the Robin boundary condition at boundary is replaced by the homogeneous Neumann boundary condition at the boundary and a volumetric force term added to the momentum conservation equation. Smoothed Particle Hydrodynamics (SPH) method is used to solve the resulting Navier-Stokes equations. We present solutions for two-dimensional and three-dimensional flows in domains bounded by flat and curved boundaries subject to various forms of the Robin boundary condition. The numerical accuracy and convergence are examined through comparison of the SPH-CBF results with the solutions of finite difference or finite element method. Taken the no-slip boundary condition as a special case of slip boundary condition, we demonstrate that the SPH-CBF method describes accurately both no-slip and slip conditions.
Szeliski, Richard; Zabih, Ramin; Scharstein, Daniel; Veksler, Olga; Kolmogorov, Vladimir; Agarwala, Aseem; Tappen, Marshall; Rother, Carsten
2008-06-01
Among the most exciting advances in early vision has been the development of efficient energy minimization algorithms for pixel-labeling tasks such as depth or texture computation. It has been known for decades that such problems can be elegantly expressed as Markov random fields, yet the resulting energy minimization problems have been widely viewed as intractable. Recently, algorithms such as graph cuts and loopy belief propagation (LBP) have proven to be very powerful: for example, such methods form the basis for almost all the top-performing stereo methods. However, the tradeoffs among different energy minimization algorithms are still not well understood. In this paper we describe a set of energy minimization benchmarks and use them to compare the solution quality and running time of several common energy minimization algorithms. We investigate three promising recent methods graph cuts, LBP, and tree-reweighted message passing in addition to the well-known older iterated conditional modes (ICM) algorithm. Our benchmark problems are drawn from published energy functions used for stereo, image stitching, interactive segmentation, and denoising. We also provide a general-purpose software interface that allows vision researchers to easily switch between optimization methods. Benchmarks, code, images, and results are available at http://vision.middlebury.edu/MRF/.
Joshi, Anand A.; Shattuck, David W.; Thompson, Paul M.; Leahy, Richard M.
2009-01-01
Neuroimaging data, such as 3-D maps of cortical thickness or neural activation, can often be analyzed more informatively with respect to the cortical surface rather than the entire volume of the brain. Any cortical surface-based analysis should be carried out using computations in the intrinsic geometry of the surface rather than using the metric of the ambient 3-D space. We present parameterization-based numerical methods for performing isotropic and anisotropic filtering on triangulated surface geometries. In contrast to existing FEM-based methods for triangulated geometries, our approach accounts for the metric of the surface. In order to discretize and numerically compute the isotropic and anisotropic geometric operators, we first parameterize the surface using a p-harmonic mapping. We then use this parameterization as our computational domain and account for the surface metric while carrying out isotropic and anisotropic filtering. To validate our method, we compare our numerical results to the analytical expression for isotropic diffusion on a spherical surface. We apply these methods to smoothing of mean curvature maps on the cortical surface, a step commonly required for analysis of gyrification or for registering surface-based maps across subjects. PMID:19423447
A method of smooth bivariate interpolation for data given on a generalized curvilinear grid
NASA Technical Reports Server (NTRS)
Zingg, David W.; Yarrow, Maurice
1992-01-01
A method of locally bicubic interpolation is presented for data given at the nodes of a two-dimensional generalized curvilinear grid. The physical domain is transformed to a computational domain in which the grid is uniform and rectangular by a generalized curvilinear coordinate transformation. The metrics of the transformation are obtained by finite differences in the computational domain. Metric derivatives are determined by repeated application of the chain rule for partial differentiation. Given the metrics and the metric derivatives, the partial derivatives required to determine a locally bicubic interpolant can be estimated at each data point using finite differences in the computational domain. A bilinear transformation is used to analytically transform the individual quadrilateral cells in the physical domain into unit squares, thus allowing the use of simple formulas for bicubic interpolation.
ERIC Educational Resources Information Center
Price, Beverley; Pincott, Maxine; Rebman, Ashley; Northcutt, Jen; Barsanti, Amy; Silkunas, Betty; Brighton, Susan K.; Reitz, David; Winkler, Maureen
1999-01-01
Presents discipline tips from several teachers to keep classrooms running smoothly all year. Some of the suggestions include the following: a bear-cave warning system, peer mediation, a motivational mystery, problem students acting as the teacher's assistant, a positive-behavior-reward chain, a hallway scavenger hunt (to ensure quiet passage…
ERIC Educational Resources Information Center
Price, Beverley; Pincott, Maxine; Rebman, Ashley; Northcutt, Jen; Barsanti, Amy; Silkunas, Betty; Brighton, Susan K.; Reitz, David; Winkler, Maureen
1999-01-01
Presents discipline tips from several teachers to keep classrooms running smoothly all year. Some of the suggestions include the following: a bear-cave warning system, peer mediation, a motivational mystery, problem students acting as the teacher's assistant, a positive-behavior-reward chain, a hallway scavenger hunt (to ensure quiet passage…
Lesage, Adrien; Lelièvre, Tony; Stoltz, Gabriel; Hénin, Jérôme
2016-12-27
We report a theoretical description and numerical tests of the extended-system adaptive biasing force method (eABF), together with an unbiased estimator of the free energy surface from eABF dynamics. Whereas the original ABF approach uses its running estimate of the free energy gradient as the adaptive biasing force, eABF is built on the idea that the exact free energy gradient is not necessary for efficient exploration, and that it is still possible to recover the exact free energy separately with an appropriate estimator. eABF does not directly bias the collective coordinates of interest, but rather fictitious variables that are harmonically coupled to them; therefore is does not require second derivative estimates, making it easily applicable to a wider range of problems than ABF. Furthermore, the extended variables present a smoother, coarse-grain-like sampling problem on a mollified free energy surface, leading to faster exploration and convergence. We also introduce CZAR, a simple, unbiased free energy estimator from eABF trajectories. eABF/CZAR converges to the physical free energy surface faster than standard ABF for a wide range of parameters.
NASA Astrophysics Data System (ADS)
Iga, Shin-ichi
2017-02-01
An equatorially enhanced grid is applicable to atmospheric general circulation simulations with better representations of the cumulus convection active in the tropics. This study improved the topology of previously proposed equatorially enhanced grids (Iga, 2015) [1], which had extremely large grid intervals around the poles. The proposed grids in this study are of a triangular mesh and are generated by a spring dynamics method with stretching around singular points, which are connected to five or seven neighboring grid points. The latitudinal distribution of resolution is nearly proportional to the combination of the map factors of the Mercator, Lambert conformal conic, and polar stereographic projections. The resolution contrast between the equator and pole is 2.3 ∼ 4.5 for the sampled cases, which is much smaller than that for the LML grids. This improvement requires only a small amount of additional grid resources, less than 11% of the total. The proposed grids are also examined with shallow water tests, and were found to perform better than the previous LML grids.
Modern methods for calculating ground-wave field strength over a smooth spherical Earth
NASA Astrophysics Data System (ADS)
Eckert, R. P.
1986-02-01
The report makes available the computer program that produces the proposed new FCC ground-wave propagation prediction curves for the new band of standard broadcast frequencies between 1605 and 1705 kHz. The curves are included in recommendations to the U.S. Department of State in preparation for an International Telecommunication Union Radio Conference. The history of the FCC curves is traced from the early 1930's, when the Federal Radio Commission and later the FFC faced an intensifying need for technical information concerning interference distances. A family of curves satisfactorily meeting this need was published in 1940. The FCC reexamined the matter recently in connection with the planned expansion of the AM broadcast band, and the resulting new curves are a precise representation of the mathematical theory. Mathematical background is furnished so that the computer program can be critically evaluated. This will be particularly valuable to persons implementing the program on other computers or adapting it for special applications. Technical references are identified for each of the formulas used by the program, and the history of the development of mathematical methods is outlined.
NASA Astrophysics Data System (ADS)
Develaki, Maria
2012-06-01
The availability of teaching units on the nature of science (NOS) can reinforce classroom instruction in the subject, taking into account the related deficiencies in textbook material and teacher training. We give a sequence of teaching units in which the teaching of Newton's gravitational theory is used as a basis for reflecting on the fundamental factors that enter into the cognitive and evaluative processes of science, such as creativity, empirical data, theorising, substantiating and modelling tactics. Distinguishing phases in the evolution of a theory (initial conception and formation, testing, scope and limits of the theory) helps show how the importance of these factors varies from phase to phase, while they continue to interact throughout the whole process. Our concept of how to teach NOS is based on the introduction of such special units, containing direct instruction in NOS elements incorporated into curricular science content, thus giving an initial theoretical basis with which epistemological points of other course material can be correlated during the usual classroom teaching of the subject throughout the school year. The sequence is presented in the form of teaching units that can also be used in teachers' NOS education, extended in this case by more explicit instruction in basic philosophical views of the nature of science and how they relate to and impact on teaching.
POEMS in Newton's Aerodynamic Frustum
ERIC Educational Resources Information Center
Sampedro, Jaime Cruz; Tetlalmatzi-Montiel, Margarita
2010-01-01
The golden mean is often naively seen as a sign of optimal beauty but rarely does it arise as the solution of a true optimization problem. In this article we present such a problem, demonstrating a close relationship between the golden mean and a special case of Newton's aerodynamical problem for the frustum of a cone. Then, we exhibit a parallel…
Newton's Law of Cooling Revisited
ERIC Educational Resources Information Center
Vollmer, M.
2009-01-01
The cooling of objects is often described by a law, attributed to Newton, which states that the temperature difference of a cooling body with respect to the surroundings decreases exponentially with time. Such behaviour has been observed for many laboratory experiments, which led to a wide acceptance of this approach. However, the heat transfer…
Newton's Law of Cooling Revisited
ERIC Educational Resources Information Center
Vollmer, M.
2009-01-01
The cooling of objects is often described by a law, attributed to Newton, which states that the temperature difference of a cooling body with respect to the surroundings decreases exponentially with time. Such behaviour has been observed for many laboratory experiments, which led to a wide acceptance of this approach. However, the heat transfer…
Atomism from Newton to Dalton.
ERIC Educational Resources Information Center
Schofield, Robert E.
1981-01-01
Indicates that although Newton's achievements were rooted in an atomistic theory of matter resembling aspects of modern nuclear physics, Dalton developed his chemical atomism on the basis of the character of the gross behavior of substances rather than their particulate nature. (Author/SK)
POEMS in Newton's Aerodynamic Frustum
ERIC Educational Resources Information Center
Sampedro, Jaime Cruz; Tetlalmatzi-Montiel, Margarita
2010-01-01
The golden mean is often naively seen as a sign of optimal beauty but rarely does it arise as the solution of a true optimization problem. In this article we present such a problem, demonstrating a close relationship between the golden mean and a special case of Newton's aerodynamical problem for the frustum of a cone. Then, we exhibit a parallel…
Atomism from Newton to Dalton.
ERIC Educational Resources Information Center
Schofield, Robert E.
1981-01-01
Indicates that although Newton's achievements were rooted in an atomistic theory of matter resembling aspects of modern nuclear physics, Dalton developed his chemical atomism on the basis of the character of the gross behavior of substances rather than their particulate nature. (Author/SK)
NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is; the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n) it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
NEWTPOIS- NEWTON POISSON DISTRIBUTION PROGRAM
NASA Technical Reports Server (NTRS)
Bowerman, P. N.
1994-01-01
The cumulative poisson distribution program, NEWTPOIS, is one of two programs which make calculations involving cumulative poisson distributions. Both programs, NEWTPOIS (NPO-17715) and CUMPOIS (NPO-17714), can be used independently of one another. NEWTPOIS determines percentiles for gamma distributions with integer shape parameters and calculates percentiles for chi-square distributions with even degrees of freedom. It can be used by statisticians and others concerned with probabilities of independent events occurring over specific units of time, area, or volume. NEWTPOIS determines the Poisson parameter (lambda), that is; the mean (or expected) number of events occurring in a given unit of time, area, or space. Given that the user already knows the cumulative probability for a specific number of occurrences (n) it is usually a simple matter of substitution into the Poisson distribution summation to arrive at lambda. However, direct calculation of the Poisson parameter becomes difficult for small positive values of n and unmanageable for large values. NEWTPOIS uses Newton's iteration method to extract lambda from the initial value condition of the Poisson distribution where n=0, taking successive estimations until some user specified error term (epsilon) is reached. The NEWTPOIS program is written in C. It was developed on an IBM AT with a numeric co-processor using Microsoft C 5.0. Because the source code is written using standard C structures and functions, it should compile correctly on most C compilers. The program format is interactive, accepting epsilon, n, and the cumulative probability of the occurrence of n as inputs. It has been implemented under DOS 3.2 and has a memory requirement of 30K. NEWTPOIS was developed in 1988.
Demonstrating Newton's Second Law.
ERIC Educational Resources Information Center
Fricker, H. S.
1994-01-01
Describes an apparatus for demonstrating the second law of motion. Provides sample data and discusses the merits of this method over traditional methods of supplying a constant force. The method produces empirical best-fit lines which convincingly demonstrate that for a fixed mass, acceleration is proportional to force. (DDR)
Demonstrating Newton's Second Law.
ERIC Educational Resources Information Center
Fricker, H. S.
1994-01-01
Describes an apparatus for demonstrating the second law of motion. Provides sample data and discusses the merits of this method over traditional methods of supplying a constant force. The method produces empirical best-fit lines which convincingly demonstrate that for a fixed mass, acceleration is proportional to force. (DDR)
Li, Bin; Sang, Jizhang; Zhang, Zhongping
2016-01-01
A critical requirement to achieve high efficiency of debris laser tracking is to have sufficiently accurate orbit predictions (OP) in both the pointing direction (better than 20 arc seconds) and distance from the tracking station to the debris objects, with the former more important than the latter because of the narrow laser beam. When the two line element (TLE) is used to provide the orbit predictions, the resultant pointing errors are usually on the order of tens to hundreds of arc seconds. In practice, therefore, angular observations of debris objects are first collected using an optical tracking sensor, and then used to guide the laser beam pointing to the objects. The manual guidance may cause interrupts to the laser tracking, and consequently loss of valuable laser tracking data. This paper presents a real-time orbit determination (OD) and prediction method to realize smooth and efficient debris laser tracking. The method uses TLE-computed positions and angles over a short-arc of less than 2 min as observations in an OD process where simplified force models are considered. After the OD convergence, the OP is performed from the last observation epoch to the end of the tracking pass. Simulation and real tracking data processing results show that the pointing prediction errors are usually less than 10″, and the distance errors less than 100 m, therefore, the prediction accuracy is sufficient for the blind laser tracking. PMID:27347958
Du, Hui; He, Jianyu; Wang, Sicen; He, Langchong
2010-07-01
The dissociation equilibrium constant (K(D)) is an important affinity parameter for studying drug-receptor interactions. A vascular smooth muscle (VSM) cell membrane chromatography (CMC) method was developed for determination of the K(D) values for calcium antagonist-L-type calcium channel (L-CC) interactions. VSM cells, by means of primary culture with rat thoracic aortas, were used for preparation of the cell membrane stationary phase in the VSM/CMC model. All measurements were performed with spectrophotometric detection (237 nm) at 37 degrees C. The K(D) values obtained using frontal analysis were 3.36 x 10(-6) M for nifedipine, 1.34 x 10(-6) M for nimodipine, 6.83 x 10(-7) M for nitrendipine, 1.23 x 10(-7) M for nicardipine, 1.09 x 10(-7) M for amlodipine, and 8.51 x 10(-8) M for verapamil. This affinity rank order obtained from the VSM/CMC method had a strong positive correlation with that obtained from radioligand binding assay. The location of the binding region was examined by displacement experiments using nitrendipine as a mobile-phase additive. It was found that verapamil occupied a class of binding sites on L-CCs different from those occupied by nitrendipine. In addition, nicardipine, amlodipine, and nitrendipine had direct competition at a single common binding site. The studies showed that CMC can be applied to the investigation of drug-receptor interactions.
Jiang, Chen; Liu, Gui-Rong; Han, Xu; Zhang, Zhi-Qian; Zeng, Wei
2015-01-01
The smoothed FEM (S-FEM) is firstly extended to explore the behavior of 3D anisotropic large deformation of rabbit ventricles during the passive filling process in diastole. Because of the incompressibility of myocardium, a special method called selective face-based/node-based S-FEM using four-node tetrahedral elements (FS/NS-FEM-TET4) is adopted in order to avoid volumetric locking. To validate the proposed algorithms of FS/NS-FEM-TET4, the 3D Lame problem is implemented. The performance contest results show that our FS/NS-FEM-TET4 is accurate, volumetric locking-free and insensitive to mesh distortion than standard linear FEM because of absence of isoparametric mapping. Actually, the efficiency of FS/NS-FEM-TET4 is comparable with higher-order FEM, such as 10-node tetrahedral elements. The proposed method for Holzapfel myocardium hyperelastic strain energy is also validated by simple shear tests through the comparison outcomes reported in available references. Finally, the FS/NS-FEM-TET4 is applied in the example of the passive filling of MRI-based rabbit ventricles with fiber architecture derived from rule-based algorithm to demonstrate its efficiency. Hence, we conclude that FS/NS-FEM-TET4 is a promising alternative other than FEM in passive cardiac mechanics.
Student conception and perception of Newton's law
NASA Astrophysics Data System (ADS)
Handhika, Jeffry; Cari, C.; Soeparmi, A.; Sunarno, Widha
2016-02-01
This research aims to reveal the student's conception and perception of Newton's Law. Method of this research is qualitative with the sample is taken using purposive sampling consist of second semester (25 students), fourth semester (26 students), sixth semester VI (25 students), and eight semester (18 students) IKIP PGRI MADIUN, which have taken the first basic physics and mechanics courses The data was collected with essay questions, interview, and FCI test. It can be concluded that Mathematical language (symbol and visual) perception and intuition influence students conception. The results of analysis showed that an incorrect conception arises because students do not understand the language of physics and mathematics correctly.
Yang, Chia-Chi; Su, Fong-Chin; Guo, Lan-Yuen
2014-08-01
Mechanical neck disorder is one of the most common health issues. No related observations have applied spectral entropy to explore the smoothness of cervical movement. Therefore, the objectives were to ascertain whether the spectral entropy of time-series linear acceleration could extend to estimate the smoothness of cervical movement and compare the characteristics of the smoothness of cervical movement in patients with mechanical neck pain (MND) with healthy volunteers. The smoothness of cervical movement during cervical circumduction from 36 subjects (MND: n = 18, asymptomatic: n = 18) was quantified by the spectral entropy of time-series linear acceleration and other speed-dependent parameters, respectively. Patients with MND showed significantly longer movement time, higher value in the spectral entropy and wider band response in frequency spectrum than healthy volunteers (P = 0.01). The spectral entropy would be suitable to discriminate the smoothness of cervical movement between patients with MND with healthy volunteers and demonstrated patients with MND had significantly less smooth cervical movement.
Keynes, Newton and the Royal Society: the events of 1942 and 1943
Kuehn, Daniel
2013-01-01
Most discussions of John Maynard Keynes's activities in connection with Newton are restricted to the sale in 1936 at Sotheby's of Newton's Portsmouth Papers and to Keynes's 1946 essay ‘Newton, the Man’. This paper provides a history of Keynes's Newton-related work in the interim, highlighting especially the events of 1942 and 1943, which were particularly relevant to the Royal Society's role in the domestic and international promotion of Newton's legacy. During this period, Keynes lectured twice on Newton, leaving notes that would later be read by his brother Geoffrey in the famous commemoration of the Newton tercentenary in 1946. In 1943 Keynes assisted the Royal Society in its recognition of the Soviet celebrations and in the acquisition and preservation of more of the Newton library. In each instance Keynes took the opportunity to promote his interpretation of Newton as ‘the last of the magicians’: a scientist who had one foot in the pre-modern world and whose approach to understanding the world was as much intuitive as it was methodical. PMID:24686919
Keynes, Newton and the Royal Society: the events of 1942 and 1943.
Kuehn, Daniel
2013-03-20
Most discussions of John Maynard Keynes's activities in connection with Newton are restricted to the sale in 1936 at Sotheby's of Newton's Portsmouth Papers and to Keynes's 1946 essay 'Newton, the Man'. This paper provides a history of Keynes's Newton-related work in the interim, highlighting especially the events of 1942 and 1943, which were particularly relevant to the Royal Society's role in the domestic and international promotion of Newton's legacy. During this period, Keynes lectured twice on Newton, leaving notes that would later be read by his brother Geoffrey in the famous commemoration of the Newton tercentenary in 1946. In 1943 Keynes assisted the Royal Society in its recognition of the Soviet celebrations and in the acquisition and preservation of more of the Newton library. In each instance Keynes took the opportunity to promote his interpretation of Newton as 'the last of the magicians': a scientist who had one foot in the pre-modern world and whose approach to understanding the world was as much intuitive as it was methodical.
Shahriari, S; Kadem, L; Rogers, B D; Hassan, I
2012-11-01
This paper aims to extend the application of smoothed particle hydrodynamics (SPH), a meshfree particle method, to simulate flow inside a model of the heart's left ventricle (LV). This work is considered the first attempt to simulate flow inside a heart cavity using a meshfree particle method. Simulating this kind of flow, characterized by high pulsatility and moderate Reynolds number using SPH is challenging. As a consequence, validation of the computational code using benchmark cases is required prior to simulating the flow inside a model of the LV. In this work, this is accomplished by simulating an unsteady oscillating flow (pressure amplitude: A = 2500 N ∕ m(3) and Womersley number: W(o) = 16) and the steady lid-driven cavity flow (Re = 3200, 5000). The results are compared against analytical solutions and reference data to assess convergence. Then, both benchmark cases are combined and a pulsatile jet in a cavity is simulated and the results are compared with the finite volume method. Here, an approach to deal with inflow and outflow boundary conditions is introduced. Finally, pulsatile inlet flow in a rigid model of the LV is simulated. The results demonstrate the ability of SPH to model complex cardiovascular flows and to track the history of fluid properties. Some interesting features of SPH are also demonstrated in this study, including the relation between particle resolution and sound speed to control compressibility effects and also order of convergence in SPH simulations, which is consistently demonstrated to be between first-order and second-order at the moderate Reynolds numbers investigated.
Hooke, orbital motion, and Newton's Principia
NASA Astrophysics Data System (ADS)
Nauenberg, Michael
1994-04-01
A detailed analysis is given of a 1685 graphical construction by Robert Hooke for the polygonal path of a body moving in a periodically pulsed radial field of force. In this example the force varies linearly with the distance from the center. Hooke's method is based directly on his original idea from the mid-1660s that the orbital motion of a planet is determined by compounding its tangential velocity with a radial velocity impressed by the gravitational attraction of the sun at the center. This hypothesis corresponds to the second law of motion, as formulated two decades later by Newton, and its geometrical implementation constitutes the cornerstone of Newton's Principia. Hooke's diagram represents the first known accurate graphical evaluation of an orbit in a central field of force, and it gives evidence that he demonstrated that his resulting discrete orbit is an approximate ellipse centered at the origin of the field of force. A comparable calculation to obtain orbits for an inverse square force, which Hooke had conjectured to be the gravitational force, has not been found among his unpublished papers. Such a calculation is carried out here numerically with the Newton-Hooke geometrical construction. It is shown that for orbits of comparable or larger eccentricity than Hooke's example, a graphical approach runs into convergence difficulties due to the singularity of the gravitational force at the origin. This may help resolve the long-standing mystery why Hooke never published his controversial claim that he had demonstrated that an attractive force, which is ``...in a duplicate proportion to the Distance from the Center Reciprocall...'' implies elliptic orbits.
The Use of Kruskal-Newton Diagrams for Differential Equations
T. Fishaleck and R.B. White
2008-02-19
The method of Kruskal-Newton diagrams for the solution of differential equations with boundary layers is shown to provide rapid intuitive understanding of layer scaling and can result in the conceptual simplification of some problems. The method is illustrated using equations arising in the theory of pattern formation and in plasma physics.
Newton's Principia: Myth and Reality
NASA Astrophysics Data System (ADS)
Smith, George
2016-03-01
Myths about Newton's Principia abound. Some of them, such as the myth that the whole book was initially developed using the calculus and then transformed into a geometric mathematics, stem from remarks he made during the priority controversy with Leibniz over the calculus. Some of the most persistent, and misleading, arose from failures to read the book with care. Among the latter are the myth that he devised his theory of gravity in order to explain the already established ``laws'' of Kepler, and that in doing so he took himself to be establishing that Keplerian motion is ``absolute,'' if not with respect to ``absolute space,'' then at least with respect to the fixed stars taken as what came later to be known as an inertial frame. The talk will replace these two myths with the reality of what Newton took himself to have established.
Smoothed Particle Inference Analysis of SNR RCW 103
NASA Astrophysics Data System (ADS)
Frank, Kari A.; Burrows, David N.; Dwarkadas, Vikram
2016-04-01
We present preliminary results of applying a novel analysis method, Smoothed Particle Inference (SPI), to an XMM-Newton observation of SNR RCW 103. SPI is a Bayesian modeling process that fits a population of gas blobs ("smoothed particles") such that their superposed emission reproduces the observed spatial and spectral distribution of photons. Emission-weighted distributions of plasma properties, such as abundances and temperatures, are then extracted from the properties of the individual blobs. This technique has important advantages over analysis techniques which implicitly assume that remnants are two-dimensional objects in which each line of sight encompasses a single plasma. By contrast, SPI allows superposition of as many blobs of plasma as are needed to match the spectrum observed in each direction, without the need to bin the data spatially. This RCW 103 analysis is part of a pilot study for the larger SPIES (Smoothed Particle Inference Exploration of SNRs) project, in which SPI will be applied to a sample of 12 bright SNRs.
Newton filtrations, graded algebras and codimension of non-degenerate ideals
NASA Astrophysics Data System (ADS)
Bivià-Ausina, Carles; Fukui, Toshizumi; Saia, Marcelo José
2002-07-01
We investigate a generalization of the method introduced by Kouchnirenko to compute the codimension (colength) of an ideal under a certain non-degeneracy condition on a given system of generators of I. We also discuss Newton non-degenerate ideals and give characterizations using the notion of reductions and Newton polyhedra of ideals.
Spronck, Bart; Megens, Remco T A; Reesink, Koen D; Delhaas, Tammo
2016-04-01
When studying in vivo arterial mechanical behaviour using constitutive models, smooth muscle cells (SMCs) should be considered, while they play an important role in regulating arterial vessel tone. Current constitutive models assume a strictly circumferential SMC orientation, without any dispersion. We hypothesised that SMC orientation would show considerable dispersion in three dimensions and that helical dispersion would be greater than transversal dispersion. To test these hypotheses, we developed a method to quantify the 3D orientation of arterial SMCs. Fluorescently labelled SMC nuclei of left and right carotid arteries of ten mice were imaged using two-photon laser scanning microscopy. Arteries were imaged at a range of luminal pressures. 3D image processing was used to identify individual nuclei and their orientations. SMCs showed to be arranged in two distinct layers. Orientations were quantified by fitting a Bingham distribution to the observed orientations. As hypothesised, orientation dispersion was much larger helically than transversally. With increasing luminal pressure, transversal dispersion decreased significantly, whereas helical dispersion remained unaltered. Additionally, SMC orientations showed a statistically significant (p < 0.05) mean right-handed helix angle in both left and right arteries and in both layers, which is a relevant finding from a developmental biology perspective. In conclusion, vascular SMC orientation (1) can be quantified in 3D; (2) shows considerable dispersion, predominantly in the helical direction; and (3) has a distinct right-handed helical component in both left and right carotid arteries. The obtained quantitative distribution data are instrumental for constitutive modelling of the artery wall and illustrate the merit of our method.
The XMM-Newton serendipitous survey. VII. The third XMM-Newton serendipitous source catalogue
NASA Astrophysics Data System (ADS)
Rosen, S. R.; Webb, N. A.; Watson, M. G.; Ballet, J.; Barret, D.; Braito, V.; Carrera, F. J.; Ceballos, M. T.; Coriat, M.; Della Ceca, R.; Denkinson, G.; Esquej, P.; Farrell, S. A.; Freyberg, M.; Grisé, F.; Guillout, P.; Heil, L.; Koliopanos, F.; Law-Green, D.; Lamer, G.; Lin, D.; Martino, R.; Michel, L.; Motch, C.; Nebot Gomez-Moran, A.; Page, C. G.; Page, K.; Page, M.; Pakull, M. W.; Pye, J.; Read, A.; Rodriguez, P.; Sakano, M.; Saxton, R.; Schwope, A.; Scott, A. E.; Sturm, R.; Traulsen, I.; Yershov, V.; Zolotukhin, I.
2016-05-01
Context. Thanks to the large collecting area (3 ×~1500 cm2 at 1.5 keV) and wide field of view (30' across in full field mode) of the X-ray cameras on board the European Space Agency X-ray observatory XMM-Newton, each individual pointing can result in the detection of up to several hundred X-ray sources, most of which are newly discovered objects. Since XMM-Newton has now been in orbit for more than 15 yr, hundreds of thousands of sources have been detected. Aims: Recently, many improvements in the XMM-Newton data reduction algorithms have been made. These include enhanced source characterisation and reduced spurious source detections, refined astrometric precision of sources, greater net sensitivity for source detection, and the extraction of spectra and time series for fainter sources, both with better signal-to-noise. Thanks to these enhancements, the quality of the catalogue products has been much improved over earlier catalogues. Furthermore, almost 50% more observations are in the public domain compared to 2XMMi-DR3, allowing the XMM-Newton Survey Science Centre to produce a much larger and better quality X-ray source catalogue. Methods: The XMM-Newton Survey Science Centre has developed a pipeline to reduce the XMM-Newton data automatically. Using the latest version of this pipeline, along with better calibration, a new version of the catalogue has been produced, using XMM-Newton X-ray observations made public on or before 2013 December 31. Manual screening of all of the X-ray detections ensures the highest data quality. This catalogue is known as 3XMM. Results: In the latest release of the 3XMM catalogue, 3XMM-DR5, there are 565 962 X-ray detections comprising 396 910 unique X-ray sources. Spectra and lightcurves are provided for the 133 000 brightest sources. For all detections, the positions on the sky, a measure of the quality of the detection, and an evaluation of the X-ray variability is provided, along with the fluxes and count rates in 7 X-ray energy
Newton-like equations for a radiating particle
NASA Astrophysics Data System (ADS)
Cabo Montes de Oca, A.; Cabo Bizet, N. G.
2015-01-01
Second-order Newton equations of motion for a radiating particle are presented. It is argued that the trajectories obeying them also satisfy the Abraham-Lorentz-Dirac (ALD) equations for general 3D motions in the nonrelativistic and relativistic limits. The case of forces depending on only the proper time is here considered. For these properties to hold, it is sufficient that the external force be infinitely smooth and that a Landau-Lifshitz series formed with its time derivatives converges. This series defines in a special local way the effective forces entering the Newton equations. When the external force vanishes in an open vicinity of a given time, the effective one also becomes null. Thus, the proper solutions of the effective equations cannot show runaway or preacceleration effects. The Newton equations are numerically solved for a pulsed force given by an analytic function along the proper time axis. The simultaneous satisfaction of the ALD equations is numerically checked. Furthermore, a set of modified ALD equations for almost everywhere infinitely smooth forces, but including steplike discontinuities in some points, is also presented. The form of the equations supports the statement argued in a previous work, that the causal Lienard-Wiechert field solution surrounding a radiating particle implies that the effective force on the particle should instantaneously vanish when the external force is retired. The modified ALD equations proposed in the previous work are here derived in a generalized way including the same effect also when the force is instantly connected. The possibility of deriving a pointlike model showing a finite mass and an infinite electromagnetic energy from a reasonable Lagrangian theory is also started to be investigated here.
Visualizing and Understanding the Components of Lagrange and Newton Interpolation
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
A Comparison of Inexact Newton and Coordinate Descent Meshoptimization Technqiues
Diachin, L F; Knupp, P; Munson, T; Shontz, S
2004-07-08
We compare inexact Newton and coordinate descent methods for optimizing the quality of a mesh by repositioning the vertices, where quality is measured by the harmonic mean of the mean-ratio metric. The effects of problem size, element size heterogeneity, and various vertex displacement schemes on the performance of these algorithms are assessed for a series of tetrahedral meshes.
Visualizing and Understanding the Components of Lagrange and Newton Interpolation
ERIC Educational Resources Information Center
Yang, Yajun; Gordon, Sheldon P.
2016-01-01
This article takes a close look at Lagrange and Newton interpolation by graphically examining the component functions of each of these formulas. Although interpolation methods are often considered simply to be computational procedures, we demonstrate how the components of the polynomial terms in these formulas provide insight into where these…
Isaac Newton: Eighteenth-century Perspectives
NASA Astrophysics Data System (ADS)
Hall, A. Rupert
1999-05-01
This new product of the ever-flourishing Newton industry seems a bit far-fetched at first sight: who but a few specialists would be interested in the historiography of Newton biography in the eighteenth century? On closer inspection, this book by one of the most important Newton scholars of our day turns out to be of interest to a wider audience as well. It contains several biographical sketches of Newton, written in the decades after his death. The two most important ones are the Eloge by the French mathematician Bernard de Fontenelle and the Italian scholar Paolo Frisi's Elogio. The latter piece was hitherto unavailable in English translation. Both articles are well-written, interesting and sometimes even entertaining. They give us new insights into the way Newton was revered throughout Europe and how not even the slightest blemish on his personality or work could be tolerated. An example is the way in which Newton's famous controversy with Leibniz is treated: Newton is without hesitation presented as the wronged party. Hall has provided very useful historical introductions to the memoirs as well as footnotes where needed. Among the other articles discussed is a well-known memoir by John Conduitt, who was married to Newton's niece. This memoir, substantial parts of which are included in this volume, has been a major source of personal information for Newton biographers up to this day. In a concluding chapter, Hall gives a very interesting overview of the later history of Newton biography, in which he describes the gradual change from adoration to a more critical approach in Newton's various biographers. In short, this is a very useful addition to the existing biographical literature on Newton. A J Kox
Sebastian Schunert; Yousry Y. Azmy
2011-05-01
The quantification of the discretization error associated with the spatial discretization of the Discrete Ordinate(DO) equations in multidimensional Cartesian geometries is the central problem in error estimation of spatial discretization schemes for transport theory as well as computer code verification. Traditionally fine mesh solutions are employed as reference, because analytical solutions only exist in the absence of scattering. This approach, however, is inadequate when the discretization error associated with the reference solution is not small compared to the discretization error associated with the mesh under scrutiny. Typically this situation occurs if the mesh of interest is only a couple of refinement levels away from the reference solution or if the order of accuracy of the numerical method (and hence the reference as well) is lower than expected. In this work we present a Method of Manufactured Solutions (MMS) benchmark suite with variable order of smoothness of the underlying exact solution for two-dimensional Cartesian geometries which provides analytical solutions aver- aged over arbitrary orthogonal meshes for scattering and non-scattering media. It should be emphasized that the developed MMS benchmark suite first eliminates the aforementioned limitation of fine mesh reference solutions since it secures knowledge of the underlying true solution and second that it allows for an arbitrary order of smoothness of the underlying ex- act solution. The latter is of importance because even for smooth parameters and boundary conditions the DO equations can feature exact solution with limited smoothness. Moreover, the degree of smoothness is crucial for both the order of accuracy and the magnitude of the discretization error for any spatial discretization scheme.
Glendinning, Paul
2011-12-01
Newton's cradle for two balls with Hertzian interactions is considered as a hybrid system, and this makes it possible to derive return maps for the motion between collisions in an exact form despite the fact that the three-halves interaction law cannot be solved in closed form. The return maps depend on a constant whose value can only be determined numerically, but solutions can be written down explicitly in terms of this parameter, and we compare this with the results of simulations. The results are in fact independent of the details of the interaction potential.
Numerical conformal mapping methods for exterior and doubly connected regions
DeLillo, T.K.; Pfaltzgraff, J.A.
1996-12-31
Methods are presented and analyzed for approximating the conformal map from the exterior of the disk to the exterior a smooth, simple closed curve and from an annulus to a bounded, doubly connected region with smooth boundaries. The methods are Newton-like methods for computing the boundary correspondences and conformal moduli similar to Fornberg`s method for the interior of the disk. We show that the linear systems are discretizations of the identity plus a compact operator and, hence, that the conjugate gradient method converges superlinearly.
The Schrödinger-Newton equations beyond Newton
NASA Astrophysics Data System (ADS)
Manfredi, Giovanni
2015-02-01
The scope of this paper is twofold. First, we derive rigorously a low-velocity and Galilei-covariant limit of the gravitoelectromagnetic (GEM) equations. Subsequently, these reduced GEM equations are coupled to the Schrödinger equation with gravitoelectric and gravitomagnetic potentials. The resulting extended Schrödinger-Newton equations constitute a minimal model where the three fundamental constants of nature (, and ) appear naturally. We show that the relativistic correction coming from the gravitomagnetic potential scales as the ratio of the mass of the system to the Planck mass, and that it reinforces the standard Newtonian (gravitoelectric) attraction. The theory is further generalized to many particles through a Wigner function approach.
Happy Balls, Unhappy Balls, and Newton's Cradle
ERIC Educational Resources Information Center
Kagan, David
2010-01-01
The intricacies of Newton's Cradle are well covered in the literature going as far back as the time of Newton! These discussions generally center on the highly elastic collisions of metal spheres. Thanks to the invention of happy and unhappy balls, you can build and study the interaction of less elastic systems (see Fig. 1).
3, 2, 1 ... Discovering Newton's Laws
ERIC Educational Resources Information Center
Lutz, Joe; Sylvester, Kevin; Oliver, Keith; Herrington, Deborah
2017-01-01
"For every action there is an equal and opposite reaction." "Except when a bug hits your car window, the car must exert more force on the bug because Newton's laws only apply in the physics classroom, right?" Students in our classrooms were able to pick out definitions as well as examples of Newton's three laws; they could…
An eigenvalue inequality of the Newton potential
NASA Astrophysics Data System (ADS)
Suragan, Durvudkhan
2016-12-01
In this short conference paper we prove an isoperimetric inequality for the second eigenvalue of the Newton potential. In turn, the Newton potential can be related to the Laplacian with a non-local type boundary condition, so we obtain an isoperimetric result for its second eigenvalue as well.
Happy Balls, Unhappy Balls, and Newton's Cradle
ERIC Educational Resources Information Center
Kagan, David
2010-01-01
The intricacies of Newton's Cradle are well covered in the literature going as far back as the time of Newton! These discussions generally center on the highly elastic collisions of metal spheres. Thanks to the invention of happy and unhappy balls, you can build and study the interaction of less elastic systems (see Fig. 1).
A new Newton's law of cooling?
Kleiber, M
1972-12-22
Several physiologists confuse Fourier's law of animal heat flow with Newton's law of cooling. A critique of this error in 1932 remained ineffective. In 1969 Molnar tested Newton's cooling law. In 1971 Strunk found Newtonian cooling unrealistic for animals. Unfortunately, he called the Fourier formulation of animal heat flow, requiring post-Newtonian observations, a "contemporary Newtonian law of cooling."
An improved Newton iteration for the generalized inverse of a matrix, with applications
NASA Technical Reports Server (NTRS)
Pan, Victor; Schreiber, Robert
1990-01-01
The purpose here is to clarify and illustrate the potential for the use of variants of Newton's method of solving problems of practical interest on highly personal computers. The authors show how to accelerate the method substantially and how to modify it successfully to cope with ill-conditioned matrices. The authors conclude that Newton's method can be of value for some interesting computations, especially in parallel and other computing environments in which matrix products are especially easy to work with.
Bantroch, S; Bühler, T; Lam, J S
1994-01-01
Smooth, rough, and neutral forms of lipopolysaccharide (LPS) from Pseudomonas aeruginosa were used to assess the appropriate conditions for effective enzyme-linked immunosorbent assay (ELISA) of LPS. Each of these forms of well-defined LPS was tested for the efficiency of antigen coating by various methods as well as to identify an appropriate type of microtiter plate to use. For smooth LPS, the standard carbonate-bicarbonate buffer method was as efficient as the other sensitivity-enhancing plate-coating methods compared. The rough LPS, which has an overall hydrophobic characteristic, was shown to adhere effectively, regardless of the coating method used, to only one type of microtiter plate, CovaLink. This type of plate has secondary amine groups attached on its polystyrene surface by carbon chain spacers, which likely favors hydrophobic interactions between the rough LPS and the well surfaces. Dehydration methods were effective for coating microtiter plates with the neutral LPS examined, which is composed predominantly of a D-rhamnan. For the two dehydration procedures, LPS suspended in water or the organic solvent chloroform-ethanol was added directly to the wells, and the solvent was allowed to dehydrate or evaporate overnight. Precoating of plates with either polymyxin or poly-L-lysine did not give any major improvement in coating with the various forms of LPS. The possibility of using proteinase K- and sodium dodecyl sulfate-treated LPS preparations for ELISAs was also investigated. Smooth LPS prepared by this method was as effective in ELISA as LPS prepared by the hot water-phenol method, while the rough and neutral LPSs prepared this way were not satisfactory for ELISA. PMID:7496923
NASA Astrophysics Data System (ADS)
Hecht, Eugene
2015-02-01
Anyone who has taught introductory physics should know that roughly a third of the students initially believe that any object at rest will remain at rest, whereas any moving body not propelled by applied forces will promptly come to rest. Likewise, about half of those uninitiated students believe that any object moving at a constant speed must be continually pushed if it is to maintain its motion.1 That's essentially Aristotle's law of motion and it is so "obviously" borne out by experience that it was accepted by scholars for 2000 years, right through the Copernican Revolution. But, of course, it's fundamentally wrong. This paper tells the story of how the correct understanding, the law of inertia, evolved and how Newton came to make it his first law.
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-418, 11 July 2003
This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) high resolution image shows part of a dark-floored valley system in northern Newton Crater. The valley might have been originally formed by liquid water; the dark material is probably sand that has blown into the valley in more recent times. The picture was acquired earlier this week on July 6, 2003, and is located near 39.2oS, 157.9oW. The picture covers an area 2.3 km (1.4 mi) across; sunlight illuminates the scene from the upper left.
NASA Astrophysics Data System (ADS)
Georgantopoulos, Ioannis
2004-10-01
Recent X-ray surveys have shown a dramatic deficit of obscured AGN at high redshift. This comes in contrast with the situation in the nearby Universe which shows a huge fraction of heavily absorbed objects. However, this discrepancy may arise because the column density information locally is ill-determined. Here, we propose to obtain snapshot (10 ksec) observations of an optically selected sample (12) of nearby Seyfert galaxies from the Ho et al.catalogue. Together with the galaxies which have been already observed, our survey will cover ALL the Sy galaxies in the above sample. The superb quality XMM-Newton spectra will accurately probe the column densities and will provide the least biased measurement of the AGN column density distribution locally.
Newton's cradle versus nonbinary collisions.
Sekimoto, Ken
2010-03-26
Newton's cradle is a classical example of a one-dimensional impact problem. In the early 1980s the naive perception of its behavior was corrected: For example, the impact of a particle does not exactly cause the release of the farthest particle of the target particle train, if the target particles have been just in contact with their own neighbors. It is also known that the naive picture would be correct if the whole process consisted of purely binary collisions. Our systematic study of particle systems with truncated power-law repulsive force shows that the quasibinary collision is recovered in the limit of hard core repulsion, or a very large exponent. In contrast, a discontinuous steplike repulsive force mimicking a hard contact, or a very small exponent, leads to a completely different process: the impacting cluster and the targeted cluster act, respectively, as if they were nondeformable blocks.
NASA Astrophysics Data System (ADS)
Breitschwerdt, Dieter
2003-03-01
The densest and closest absorbers of the soft X-ray background (SXRB) in the Milky Way are Bok globules, located just outside the Local Bubble in the Pipe Nebula at a distance of 125pc. With column densities of up to log(NH)~23, they are ideal targets for shadowing the SXRB in the energy range 0.3 - 2 keV, thus giving important information on the spatial and spectral variation of the foreground X-ray intensity on small scales. We propose Barnard 59 due to an extinction gradient of A_V~50 mag and the Fest 1-457 region due to strong small scale NH-variations for a detailed spectral study with XMM-Newton. Together with already existing XMM data of Barnard 68, this will allow to determine the ionization structure of the Local and Loop I superbubbles.
NASA Technical Reports Server (NTRS)
2003-01-01
MGS MOC Release No. MOC2-418, 11 July 2003
This Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) high resolution image shows part of a dark-floored valley system in northern Newton Crater. The valley might have been originally formed by liquid water; the dark material is probably sand that has blown into the valley in more recent times. The picture was acquired earlier this week on July 6, 2003, and is located near 39.2oS, 157.9oW. The picture covers an area 2.3 km (1.4 mi) across; sunlight illuminates the scene from the upper left.
Newtonian cosmology Newton would understand
Lemons, D.S.
1988-06-01
Isaac Newton envisioned a static, infinite, and initially uniform, zero field universe that was gravitationally unstable to local condensations of matter. By postulating the existence of such a universe and using it as a boundary condition on Newtonian gravity, a new field equation for gravity is derived, which differs from the classical one by a time-dependent cosmological term proportional to the average mass density of the universe. The new field equation not only makes Jeans' analysis of the gravitational instability of a Newtonian universe consistent, but also gives rise to a family of Newtonian evolutionary cosmologies parametrized by a time-invariant expansion velocity. This Newtonian cosmology contrasts with both 19th-century ones and with post general relativity Newtonian cosmology.
NASA Astrophysics Data System (ADS)
Mushotzky, Richard
2003-03-01
We propose an XMM-Newton survey of 10 nearby groups from the representative flux limited NORAS sample. Our proposed sample spans the range of X-ray luminosities (and thus temperatures) for groups. The high signal-to-noise data we obtain will be used to derive entropy profiles from the very centers of these groups out to approximately half of the virial radius. These entropy profiles will be used to constrain the history of heating and cooling in groups. The XMM data will also allow a detailed study of the abundance gradients of the hot gas in the elements of Fe, O, Si and S, which will provide constraints on the stars responsible for the enrichment. Finally, we will measure accurate mass and baryon profiles and thus determine the contribution of groups to the mass density of the universe.
Illustrating Newton's Second Law with the Automobile Coast-Down Test.
ERIC Educational Resources Information Center
Bryan, Ronald A.; And Others
1988-01-01
Describes a run test of automobiles for applying Newton's second law of motion and the concept of power. Explains some automobile thought-experiments and provides the method and data of an actual coast-down test. (YP)
Illustrating Newton's Second Law with the Automobile Coast-Down Test.
ERIC Educational Resources Information Center
Bryan, Ronald A.; And Others
1988-01-01
Describes a run test of automobiles for applying Newton's second law of motion and the concept of power. Explains some automobile thought-experiments and provides the method and data of an actual coast-down test. (YP)
NASA Astrophysics Data System (ADS)
Bauer, Daniela
2005-03-01
A light scratch with a needle induces histamine and neuropetide release on the line of stroke and in the surrounding tissue. Histamine and neuropeptides are vasodilaters. They create vasodilation by changing the contraction state of the vascular smooth muscles and hence vessel compliance. Smooth muscle contraction state is very difficult to measure. We propose an identification procedure that determines change in compliance. The procedure is based on numerical and experimental results. Blood flow is measured by Laser Doppler Velocimetry. Numerical data is obtained by a continuous model of hierarchically arranged porous media of the vascular network [1]. We show that compliance increases after the stroke in the entire tissue. Then, compliance decreases in the surrounding tissue, while it keeps increasing on the line of stroke. Hence, blood is transported from the surrounding tissue to the line of stroke. Thus, higher blood volume on the line of stroke is obtained. [1] Bauer, D., Grebe, R. Ehrlacher, A., 2004. A three layer continuous model of porous media to describe the first phase of skin irritation. J. Theoret. Bio. in press
First XMM-Newton Observations of an Isolated Neutron Star: RXJ0720.4-3125
NASA Technical Reports Server (NTRS)
Paerels, Frits; Mori, Kaya; Motch, Christian; Haberl, Frank; Zavlin, Vyacheslav E.; Zane, Silvia; Ramsay, Gavin; Cropper, Mark
2000-01-01
We present the high resolution spectrum of the isolated neutron star RXJ0720.4-3125, obtained with the Reflection Grating Spectrometer on XMM-Newton, complemented with the broad band spectrum observed with the EPIC PN camera. The spectrum appears smooth, with no evidence for strong photospheric absorption or emission features. We briefly discuss the implications of our failure to detect structure in the spectrum.
Conservative smoothing versus artificial viscosity
Guenther, C.; Hicks, D.L.; Swegle, J.W.
1994-08-01
This report was stimulated by some recent investigations of S.P.H. (Smoothed Particle Hydrodynamics method). Solid dynamics computations with S.P.H. show symptoms of instabilities which are not eliminated by artificial viscosities. Both analysis and experiment indicate that conservative smoothing eliminates the instabilities in S.P.H. computations which artificial viscosities cannot. Questions were raised as to whether conservative smoothing might smear solutions more than artificial viscosity. Conservative smoothing, properly used, can produce more accurate solutions than the von Neumann-Richtmyer-Landshoff artificial viscosity which has been the standard for many years. The authors illustrate this using the vNR scheme on a test problem with known exact solution involving a shock collision in an ideal gas. They show that the norms of the errors with conservative smoothing are significantly smaller than the norms of the errors with artificial viscosity.
Stabilized quasi-Newton optimization of noisy potential energy surfaces
NASA Astrophysics Data System (ADS)
Schaefer, Bastian; Ghasemi, S. Alireza; Roy, Shantanu; Goedecker, Stefan; Goedecker Group Team
Optimizations of atomic positions belong to the most frequently performed tasks in electronic structure calculations. Many simulations like global minimum searches or the identification of chemical reaction pathways can require the computation of hundreds or thousands of minimizations or saddle points. To automatize these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm. In this talk a recently published technique that allows to obtain significant curvature information of noisy potential energy surfaces is presented. This technique was used to construct both, a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. With the help of benchmarks both the minimizer and the saddle finding approach were demonstrated to be superior to comparable existing methods.
Stabilized quasi-Newton optimization of noisy potential energy surfaces
NASA Astrophysics Data System (ADS)
Schaefer, Bastian; Alireza Ghasemi, S.; Roy, Shantanu; Goedecker, Stefan
2015-01-01
Optimizations of atomic positions belong to the most commonly performed tasks in electronic structure calculations. Many simulations like global minimum searches or characterizations of chemical reactions require performing hundreds or thousands of minimizations or saddle computations. To automatize these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm. We here present a technique that allows obtaining significant curvature information of noisy potential energy surfaces. We use this technique to construct both, a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle finding approach are superior to comparable existing methods.
Stabilized quasi-Newton optimization of noisy potential energy surfaces
Schaefer, Bastian; Goedecker, Stefan; Alireza Ghasemi, S.; Roy, Shantanu
2015-01-21
Optimizations of atomic positions belong to the most commonly performed tasks in electronic structure calculations. Many simulations like global minimum searches or characterizations of chemical reactions require performing hundreds or thousands of minimizations or saddle computations. To automatize these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. We here present a technique that allows obtaining significant curvature information of noisy potential energy surfaces. We use this technique to construct both, a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle finding approach are superior to comparable existing methods.
A modified global Newton solver for viscous-plastic sea ice models
NASA Astrophysics Data System (ADS)
Mehlmann, C.; Richter, T.
2017-08-01
We present and analyze a modified Newton solver, the so called operator-related damped Jacobian method, with a line search globalization for the solution of the strongly nonlinear momentum equation in a viscous-plastic (VP) sea ice model.Due to large variations in the viscosities, the resulting nonlinear problem is very difficult to solve. The development of fast, robust and converging solvers is subject to present research. There are mainly three approaches for solving the nonlinear momentum equation of the VP model, a fixed-point method denoted as Picard solver, an inexact Newton method and a subcycling procedure based on an elastic-viscous-plastic model approximation. All methods tend to have problems on fine meshes by sharp structures in the solution. Convergence rates deteriorate such that either too many iterations are required to reach sufficient accuracy or convergence is not obtained at all.To improve robustness globalization and acceleration approaches, which increase the area of fast convergence, are needed. We develop an implicit scheme with improved convergence properties by combining an inexact Newton method with a Picard solver. We derive the full Jacobian of the viscous-plastic sea ice momentum equation and show that the Jacobian is a positive definite matrix, guaranteeing global convergence of a properly damped Newton iteration. We compare our modified Newton solver with line search damping to an inexact Newton method with established globalization and acceleration techniques. We present a test case that shows improved robustness of our new approach, in particular on fine meshes.
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-06-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency performance is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were o
NASA Astrophysics Data System (ADS)
Song, Chi; Zhang, Xuejun; Zhang, Xin; Hu, Haifei; Zeng, Xuefeng
2017-01-01
A rigid conformal (RC) lap can smooth mid-spatial-frequency (MSF) errors, which are naturally smaller than the tool size, while still removing large-scale errors in a short time. However, the RC-lap smoothing efficiency performance is poorer than expected, and existing smoothing models cannot explicitly specify the methods to improve this efficiency. We presented an explicit time-dependent smoothing evaluation model that contained specific smoothing parameters directly derived from the parametric smoothing model and the Preston equation. Based on the time-dependent model, we proposed a strategy to improve the RC-lap smoothing efficiency, which incorporated the theoretical model, tool optimization, and efficiency limit determination. Two sets of smoothing experiments were performed to demonstrate the smoothing efficiency achieved using the time-dependent smoothing model. A high, theory-like tool influence function and a limiting tool speed of 300 RPM were o
Torsional Newton-Cartan geometry from Galilean gauge theory
NASA Astrophysics Data System (ADS)
Banerjee, Rabin; Mukherjee, Pradip
2016-11-01
Using the recently advanced Galilean gauge theory (GGT) we give a comprehensive construction of torsional Newton-Cartan (NC) geometry. The coupling of a Galilean symmetric model with background NC geometry following GGT is illustrated by a free nonrelativistic scalar field theory. The issue of spatial diffeomorphism (Son and Wingate 2006 Ann. Phys. 321 197-224 Banerjee et al 2015 Phys. Rev. D 91 084021) is focussed from a new angle. The expression of the torsionful connection is worked out, which is in complete parallel with the relativistic theory. Also, smooth transition of the connection to its well known torsionless expression is demonstrated. A complete (implicit) expression of the torsion tensor for the NC spacetime is provided where the first-order variables occur in a suggestive way. The well known result for the temporal part of torsion is reproduced from our expression.
Effectiveness of Analytic Smoothing in Equipercentile Equating.
ERIC Educational Resources Information Center
Kolen, Michael J.
1984-01-01
An analytic procedure for smoothing in equipercentile equating using cubic smoothing splines is described and illustrated. The effectiveness of the procedure is judged by comparing the results from smoothed equipercentile equating with those from other equating methods using multiple cross-validations for a variety of sample sizes. (Author/JKS)
Effectiveness of Analytic Smoothing in Equipercentile Equating.
ERIC Educational Resources Information Center
Kolen, Michael J.
1984-01-01
An analytic procedure for smoothing in equipercentile equating using cubic smoothing splines is described and illustrated. The effectiveness of the procedure is judged by comparing the results from smoothed equipercentile equating with those from other equating methods using multiple cross-validations for a variety of sample sizes. (Author/JKS)
27 CFR 9.152 - Malibu-Newton Canyon.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Malibu-Newton Canyon. 9... Malibu-Newton Canyon. (a) Name. The name of the viticultural area described in this petition is “Malibu-Newton Canyon.” (b) Approved maps. The appropriate map for determining the boundary of the Malibu-Newton...
27 CFR 9.152 - Malibu-Newton Canyon.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2010-04-01 2010-04-01 false Malibu-Newton Canyon. 9... Malibu-Newton Canyon. (a) Name. The name of the viticultural area described in this petition is “Malibu-Newton Canyon.” (b) Approved maps. The appropriate map for determining the boundary of the Malibu-Newton...
27 CFR 9.152 - Malibu-Newton Canyon.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Malibu-Newton Canyon. 9... Malibu-Newton Canyon. (a) Name. The name of the viticultural area described in this petition is “Malibu-Newton Canyon.” (b) Approved maps. The appropriate map for determining the boundary of the Malibu-Newton...
Apparatus for Teaching Physics: Giant Newton's Rings.
ERIC Educational Resources Information Center
Cheung, Kai-yin; Mak, Se-yuen
1996-01-01
Describes a modification of the traditional demonstration of Newton's rings that magnifies the scale of the interference pattern so that the demonstration can be used for the whole class or for semiquantitative measurements in any high school laboratory. (JRH)
GOES-West Movie of Hurricane Newton
This animation of infrared and visible images from NOAA's GOES-West satellite shows the development and movement of Hurricane Newton from Sept. 4 through Sept. 6 at 10 a.m. EDT (1400 UTC) toward Ba...
ERIC Educational Resources Information Center
Raju, C. K.
1991-01-01
A study of time in Newtonian physics is presented. Newton's laws of motion, falsifiability and physical theories, laws of motion and law of gravitation, and Laplace's demon are discussed. Short bibliographic sketches of Laplace and Karl Popper are included. (KR)
Discovery Science: Newton All around You.
ERIC Educational Resources Information Center
Prigo, Robert; Humphrey, Gregg
1993-01-01
Presents activities for helping elementary students learn about Newton's third law of motion. Several activity cards demonstrate the concept of the law of action and reaction. The activities require only inexpensive materials that can be found around the house. (SM)
ERIC Educational Resources Information Center
Raju, C. K.
1991-01-01
A study of time in Newtonian physics is presented. Newton's laws of motion, falsifiability and physical theories, laws of motion and law of gravitation, and Laplace's demon are discussed. Short bibliographic sketches of Laplace and Karl Popper are included. (KR)
Discovery Science: Newton All around You.
ERIC Educational Resources Information Center
Prigo, Robert; Humphrey, Gregg
1993-01-01
Presents activities for helping elementary students learn about Newton's third law of motion. Several activity cards demonstrate the concept of the law of action and reaction. The activities require only inexpensive materials that can be found around the house. (SM)
Dome and Barchan Dunes in Newton Crater
2014-10-01
This observation from NASA Mars Reconnaissance Orbiter shows both dome and barchan dunes in a small sand dune field on the floor of Newton Crater, an approximately 300 kilometer 130 mile wide crater in the Southern hemisphere of Mars.
Demonstrating Newton's Third Law: Changing Aristotelian Viewpoints.
ERIC Educational Resources Information Center
Roach, Linda E.
1992-01-01
Suggests techniques to help eliminate students' misconceptions involving Newton's Third Law. Approaches suggested include teaching physics from a historical perspective, using computer programs with simulations, rewording the law, drawing free-body diagrams, and using demonstrations and examples. (PR)
Iteration of Complex Functions and Newton's Method
ERIC Educational Resources Information Center
Dwyer, Jerry; Barnard, Roger; Cook, David; Corte, Jennifer
2009-01-01
This paper discusses some common iterations of complex functions. The presentation is such that similar processes can easily be implemented and understood by undergraduate students. The aim is to illustrate some of the beauty of complex dynamics in an informal setting, while providing a couple of results that are not otherwise readily available in…
Traveling and Standing Waves in Coupled Pendula and Newton's Cradle
NASA Astrophysics Data System (ADS)
García-Azpeitia, Carlos
2016-12-01
The existence of traveling and standing waves is investigated for chains of coupled pendula with periodic boundary conditions. The results are proven by applying topological methods to subspaces of symmetric solutions. The main advantage of this approach comes from the fact that only properties of the linearized forces are required. This allows to cover a wide range of models such as Newton's cradle, the Fermi-Pasta-Ulam lattice, and the Toda lattice.
Smooth Phase Interpolated Keying
NASA Technical Reports Server (NTRS)
Borah, Deva K.
2007-01-01
Smooth phase interpolated keying (SPIK) is an improved method of computing smooth phase-modulation waveforms for radio communication systems that convey digital information. SPIK is applicable to a variety of phase-shift-keying (PSK) modulation schemes, including quaternary PSK (QPSK), octonary PSK (8PSK), and 16PSK. In comparison with a related prior method, SPIK offers advantages of better performance and less complexity of implementation. In a PSK scheme, the underlying information waveform that one seeks to convey consists of discrete rectangular steps, but the spectral width of such a waveform is excessive for practical radio communication. Therefore, the problem is to smooth the step phase waveform in such a manner as to maintain power and bandwidth efficiency without incurring an unacceptably large error rate and without introducing undesired variations in the amplitude of the affected radio signal. Although the ideal constellation of PSK phasor points does not cause amplitude variations, filtering of the modulation waveform (in which, typically, a rectangular pulse is converted to a square-root raised cosine pulse) causes amplitude fluctuations. If a power-efficient nonlinear amplifier is used in the radio communication system, the fluctuating-amplitude signal can undergo significant spectral regrowth, thus compromising the bandwidth efficiency of the system. In the related prior method, one seeks to solve the problem in a procedure that comprises two major steps: phase-value generation and phase interpolation. SPIK follows the two-step approach of the related prior method, but the details of the steps are different. In the phase-value-generation step, the phase values of symbols in the PSK constellation are determined by a phase function that is said to be maximally smooth and that is chosen to minimize the spectral spread of the modulated signal. In this step, the constellation is divided into two groups by assigning, to information symbols, phase values
An Alternative Realization of Gauss-Newton for Frequency-Domain Acoustic Waveform Inversion
NASA Astrophysics Data System (ADS)
Liu, Y.; Yang, J.; Chi, B.; Dong, L.
2014-12-01
Since FWI was studied under the least-square misfit optimization proposed by Tarantola (1984) in time domain, it has been greatly improved by many researchers. Pratt (1998) developed FWI in frequency domain using a Gauss-Newton optimization. In recent years, FWI has been widely studied under the framework of adjoint-state methods, as summarized by Plessix (2006). Preconditioning and high order gradients are important for FWI. Many researches have focused on the Newton optimization, in which the calculation of inverse Hessian is the key problem. Pseudo Hessian such as the diagonal Hessian was firstly used to approximate inverse Hessian (Choi & Shin, 2007). Then Gauss-Newton or l-BFGS method was widely studied to iteratively calculate the inverse approximate Hessian Haor full Hessian (Sheen et al., 2006). Full Hessian is the base of the exact Newton optimization. Fichtner and Trampert (2011) presented an extension of the adjoint-state method to directly compute the full Hessian; Métivier et al. (2012) proposed a general second-order adjoint-state formula for Hessian-vector product to tackle Gauss-Newton and exact Newton. Liu et al. (2014) proposed a matrix-decomposition FWI (MDFWI) based on Born kernel. They used the Born Fréchet kernel to explicitly calculate the gradient of the objective function through matrix decomposition, no full Fréchet kernel being stored in memory beforehand. However, they didn't give a method to calculate the Gauss-Newton. In this paper, We propose a method based on Born Fréchet kernel to calculate the Gauss-Newton for acoustic full waveform inversion (FWI). The Gauss-Newton is iteratively constructed without needing to store the huge approximate Hessian (Ha) or Fréchet kernel beforehand, and the inverse of Ha is not need to be calculated either. This procedure can be efficiently accomplished through matrix decomposition. More resolved result and faster convergence are obtained when this Gauss-Newton is applied in FWI based on the Born
ERIC Educational Resources Information Center
Moses, Tim; Liu, Jinghua
2011-01-01
In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…
M Ali, M. K. E-mail: eutoco@gmail.com; Ruslan, M. H. E-mail: eutoco@gmail.com; Muthuvalu, M. S. E-mail: jumat@ums.edu.my; Wong, J. E-mail: jumat@ums.edu.my; Sulaiman, J. E-mail: hafidzruslan@eng.ukm.my; Yasir, S. Md. E-mail: hafidzruslan@eng.ukm.my
2014-06-19
The solar drying experiment of seaweed using Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the metrological condition in Malaysia. Drying of sample seaweed in GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at average solar radiation of about 600W/m{sup 2} and mass flow rate about 0.5 kg/s. Generally the plots of drying rate need more smoothing compared moisture content data. Special cares is needed at low drying rates and moisture contents. It is shown the cubic spline (CS) have been found to be effective for moisture-time curves. The idea of this method consists of an approximation of data by a CS regression having first and second derivatives. The analytical differentiation of the spline regression permits the determination of instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem. This method permits to obtain the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin layer drying models. The models were fitted using the coefficient of determination (R{sup 2}), and root mean square error (RMSE). The modeling of models using raw data tested with the possible of exponential drying method. The result showed that the model from Two Term was found to be the best models describe the drying behavior. Besides that, the drying rate smoothed using CS shows to be effective method for moisture-time curves good estimators as well as for the missing moisture content data of seaweed Kappaphycus Striatum Variety Durian in Solar Dryer under the condition tested.
NASA Astrophysics Data System (ADS)
M Ali, M. K.; Ruslan, M. H.; Muthuvalu, M. S.; Wong, J.; Sulaiman, J.; Yasir, S. Md.
2014-06-01
The solar drying experiment of seaweed using Green V-Roof Hybrid Solar Drier (GVRHSD) was conducted in Semporna, Sabah under the metrological condition in Malaysia. Drying of sample seaweed in GVRHSD reduced the moisture content from about 93.4% to 8.2% in 4 days at average solar radiation of about 600W/m2 and mass flow rate about 0.5 kg/s. Generally the plots of drying rate need more smoothing compared moisture content data. Special cares is needed at low drying rates and moisture contents. It is shown the cubic spline (CS) have been found to be effective for moisture-time curves. The idea of this method consists of an approximation of data by a CS regression having first and second derivatives. The analytical differentiation of the spline regression permits the determination of instantaneous rate. The method of minimization of the functional of average risk was used successfully to solve the problem. This method permits to obtain the instantaneous rate to be obtained directly from the experimental data. The drying kinetics was fitted with six published exponential thin layer drying models. The models were fitted using the coefficient of determination (R2), and root mean square error (RMSE). The modeling of models using raw data tested with the possible of exponential drying method. The result showed that the model from Two Term was found to be the best models describe the drying behavior. Besides that, the drying rate smoothed using CS shows to be effective method for moisture-time curves good estimators as well as for the missing moisture content data of seaweed Kappaphycus Striatum Variety Durian in Solar Dryer under the condition tested.
XMM-Newton: Longevity through continued modification
NASA Astrophysics Data System (ADS)
Jansen, F.
2014-07-01
While the XMM-Newton observatory was built with a design lifetime of 10 years, a number of activities have been necessary during recent years to guarantee, from a resource and hardware point of view, the extension of an, otherwise limited, lifetime. This has required involving the original designers, project team members and companies where the relevant hardware elements were built to define, test and upload on board software which had not been changed in some 14 years. This, amongst others, has led to the implementation of attitude control based on simultaneous use of 4 reaction wheels - a method which has not only generated significant fuel savings, but also helped to address some performance issues with one of the reaction wheels. In the next few years a few other projects/updates will need to be implemented on-board. Not only the hardware is an item to be addressed in trying to achieve mission longevity, but also analysis software changes and calibration consolidation are items which have to be considered to keep on operating under increasing budget pressure while maintaining scientific proficiency.
Incompressible smoothed particle hydrodynamics
Ellero, Marco Serrano, Mar; Espanol, Pep
2007-10-01
We present a smoothed particle hydrodynamic model for incompressible fluids. As opposed to solving a pressure Poisson equation in order to get a divergence-free velocity field, here incompressibility is achieved by requiring as a kinematic constraint that the volume of the fluid particles is constant. We use Lagrangian multipliers to enforce this restriction. These Lagrange multipliers play the role of non-thermodynamic pressures whose actual values are fixed through the kinematic restriction. We use the SHAKE methodology familiar in constrained molecular dynamics as an efficient method for finding the non-thermodynamic pressure satisfying the constraints. The model is tested for several flow configurations.
The history of Newton's apple tree
NASA Astrophysics Data System (ADS)
Keesing, R. G.
1998-05-01
This article contains a brief introduction to Newton's early life to put into context the subsequent events in this narrative. It is followed by a summary of accounts of Newton's famous story of his discovery of universal gravitation which was occasioned by the fall of an apple in the year 1665/6. Evidence of Newton's friendship with a prosperous Yorkshire family who planted an apple tree arbour in the early years of the eighteenth century to celebrate his discovery is presented. A considerable amount of new and unpublished pictorial and documentary material is included relating to a particular apple tree which grew in the garden of Woolsthorpe Manor (Newton's birthplace) and which blew down in a storm before the year 1816. Evidence is then presented which describes how this tree was chosen to be the focus of Newton's account. Details of the propagation of the apple tree growing in the garden at Woolsthorpe in the early part of the last century are then discussed, and the results of a dendrochronological study of two of these trees is presented. It is then pointed out that there is considerable evidence to show that the apple tree presently growing at Woolsthorpe and known as 'Newton's apple tree' is in fact the same specimen which was identified in the middle of the eighteenth century and which may now be 350 years old. In conclusion early results from a radiocarbon dating study being carried out at the University of Oxford on core samples from the Woolsthorpe tree lend support to the contention that the present tree is one and the same as that identified as Newton's apple tree more than 200 years ago. Very recently genetic fingerprinting techniques have been used in an attempt to identify from which sources the various 'Newton apple trees' planted throughout the world originate. The tentative result of this work suggests that there are two separate varieties of apple tree in existence which have been accepted as 'the tree'. One may conclude that at least some of
The XMM-Newton Survey of the Small Magellanic Cloud
NASA Technical Reports Server (NTRS)
Haberl, F.; Sturm, R.; Ballet, J.; Bomans, D. J.; Buckley, D. A. H.; Coe, M. J.; Corbet, R.; Ehle, M.; Filipovic, M. D.; Gilfanov, M.;
2012-01-01
Context. Although numerous archival XMM-Newton observations existed towards the Small Magellanic Cloud (SMC) before 2009, only a fraction of the whole galaxy had been covered. Aims. Between May 2009 and March 2010, we carried out an XMM-Newton survey of the SMC, to ensure a complete coverage of both its bar and wing. Thirty-three observations of 30 different fields with a total exposure of about one Ms filled the previously missing parts. Methods. We systematically processed all available SMC data from the European Photon Imaging Camera. After rejecting observations with very high background, we included 53 archival and the 33 survey observations. We produced images in five different energy bands. We applied astrometric boresight corrections using secure identifications of X-ray sources and combined all the images to produce a mosaic covering the main body of the SMC. Results. We present an overview of the XMM-Newton observations, describe their analysis, and summarize our first results, which will be presented in detail in follow-up papers. Here, we mainly focus on extended X-ray sources, such as supernova remnants (SNRs) and clusters of galaxies, that are seen in our X-ray images. Conclusions. Our XMM-Newton survey represents the deepest complete survey of the SMC in the 0.15-12.0 keV X-ray band. We propose three new SNRs that have low surface brightnesses of a few 10-14 erg cm-2 s-1 arcmin-2 and large extents. In addition, several known remnants appear larger than previously measured at either X-rays or other wavelengths extending the size distribution of SMC SNRs to larger values.
3, 2, 1 … Discovering Newton's Laws
NASA Astrophysics Data System (ADS)
Lutz, Joe; Sylvester, Kevin; Oliver, Keith; Herrington, Deborah
2017-03-01
"For every action there is an equal and opposite reaction." "Except when a bug hits your car window, the car must exert more force on the bug because Newton's laws only apply in the physics classroom, right?" Students in our classrooms were able to pick out definitions as well as examples of Newton's three laws; they could recite the laws and even solve for force, mass, and acceleration. However, when given "real world" questions, they would quickly revert to naive explanations. This frustration led to an examination of our approach to teaching Newton's laws. Like many, we taught Newton's laws in their numerical order—first, second, and then third. Students read about the laws, copied definitions, and became proficient with vocabulary before they applied the laws in a lab setting. This paper discusses how we transformed our teaching of Newton's laws by flipping the order (3, 2, 1) and putting the activity before concept, as well as how these changes affected student outcomes.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-06-01
Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this article, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First, a penalized-spline approach is employed to estimate the state variables and the estimated state variables are then plugged in a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three different order of discretization methods, Euler's method, trapezoidal rule, and Runge-Kutta method. A higher-order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods t an HIV study to further illustrate the usefulness of the proposed approaches.
Wu, Hulin; Xue, Hongqi; Kumar, Arun
2012-01-01
Summary Differential equations are extensively used for modeling dynamics of physical processes in many scientific fields such as engineering, physics, and biomedical sciences. Parameter estimation of differential equation models is a challenging problem because of high computational cost and high-dimensional parameter space. In this paper, we propose a novel class of methods for estimating parameters in ordinary differential equation (ODE) models, which is motivated by HIV dynamics modeling. The new methods exploit the form of numerical discretization algorithms for an ODE solver to formulate estimating equations. First a penalized-spline approach is employed to estimate the state variables and the estimated state variables are then plugged in a discretization formula of an ODE solver to obtain the ODE parameter estimates via a regression approach. We consider three different order of discretization methods, Euler’s method, trapezoidal rule and Runge-Kutta method. A higher order numerical algorithm reduces numerical error in the approximation of the derivative, which produces a more accurate estimate, but its computational cost is higher. To balance the computational cost and estimation accuracy, we demonstrate, via simulation studies, that the trapezoidal discretization-based estimate is the best and is recommended for practical use. The asymptotic properties for the proposed numerical discretization-based estimators (DBE) are established. Comparisons between the proposed methods and existing methods show a clear benefit of the proposed methods in regards to the trade-off between computational cost and estimation accuracy. We apply the proposed methods to an HIV study to further illustrate the usefulness of the proposed approaches. PMID:22376200
Analyzing Collisions in Terms of Newton's Laws
NASA Astrophysics Data System (ADS)
Roeder, John L.
2003-02-01
Although the principle of momentum conservation is a consequence of Newton's second and third laws of motion, as recognized by Newton himself, this principle is typically applied in analyzing collisions as if it is a separate concept of its own. This year I sought to integrate my treatment of collisions with my coverage of Newton's laws by asking students to calculate the effect on the motion of two particles due to the forces they exerted for a specified time interval on each other. For example, "A 50-kg crate slides across the ice at 3 m/s and collides with a 25-kg crate at rest. During the collision process the 50-kg crate exerts a 500 N time-averaged force on the 25 kg for 0.1 s. What are the accelerations of the crates during the collision, and what are their velocities after the collision? What are the momenta of the crates before and after collision?"
Smooth halos in the cosmic web
NASA Astrophysics Data System (ADS)
Gaite, José
2015-04-01
Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economy and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these ``smoothness sizes'' have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.
Smooth halos in the cosmic web
Gaite, José
2015-04-01
Dark matter halos can be defined as smooth distributions of dark matter placed in a non-smooth cosmic web structure. This definition of halos demands a precise definition of smoothness and a characterization of the manner in which the transition from smooth halos to the cosmic web takes place. We introduce entropic measures of smoothness, related to measures of inequality previously used in economy and with the advantage of being connected with standard methods of multifractal analysis already used for characterizing the cosmic web structure in cold dark matter N-body simulations. These entropic measures provide us with a quantitative description of the transition from the small scales portrayed as a distribution of halos to the larger scales portrayed as a cosmic web and, therefore, allow us to assign definite sizes to halos. However, these ''smoothness sizes'' have no direct relation to the virial radii. Finally, we discuss the influence of N-body discreteness parameters on smoothness.
Generating Smooth Motions For Robotic Manipulators
NASA Technical Reports Server (NTRS)
Bejczy, Antal K.; Szakaly, Zoltan F.
1993-01-01
In improved method for generating trajectory of robotic manipulator, each straight-line segment of trajectory composed of constant-velocity main portion sandwiched between smooth acceleration at start and smooth deceleration at finish. Algorithm implementing method computes velocity in each accelerating portion as sinusoidal function of position along line. This motion chosen for two reasons: closely approximates motion of human hand along straight-line trajectory, and provides very smooth transitions between constant-velocity portion and accelerated and decelerational end portions.
XMM-Newton Mobile Web Application
NASA Astrophysics Data System (ADS)
Ibarra, A.; Kennedy, M.; Rodríguez, P.; Hernández, C.; Saxton, R.; Gabriel, C.
2013-10-01
We present the first XMM-Newton web mobile application, coded using new web technologies such as HTML5, the Query mobile framework, and D3 JavaScript data-driven library. This new web mobile application focuses on re-formatted contents extracted directly from the XMM-Newton web, optimizing the contents for mobile devices. The main goals of this development were to reach all kind of handheld devices and operating systems, while minimizing software maintenance. The application therefore has been developed as a web mobile implementation rather than a more costly native application. New functionality will be added regularly.
Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1987-01-01
This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
Recursive inverse kinematics for robot arms via Kalman filtering and Bryson-Frazier smoothing
NASA Technical Reports Server (NTRS)
Rodriguez, G.; Scheid, R. E., Jr.
1987-01-01
This paper applies linear filtering and smoothing theory to solve recursively the inverse kinematics problem for serial multilink manipulators. This problem is to find a set of joint angles that achieve a prescribed tip position and/or orientation. A widely applicable numerical search solution is presented. The approach finds the minimum of a generalized distance between the desired and the actual manipulator tip position and/or orientation. Both a first-order steepest-descent gradient search and a second-order Newton-Raphson search are developed. The optimal relaxation factor required for the steepest descent method is computed recursively using an outward/inward procedure similar to those used typically for recursive inverse dynamics calculations. The second-order search requires evaluation of a gradient and an approximate Hessian. A Gauss-Markov approach is used to approximate the Hessian matrix in terms of products of first-order derivatives. This matrix is inverted recursively using a two-stage process of inward Kalman filtering followed by outward smoothing. This two-stage process is analogous to that recently developed by the author to solve by means of spatial filtering and smoothing the forward dynamics problem for serial manipulators.
NASA Astrophysics Data System (ADS)
Yagi, Yuji; Nakao, Atsushi; Kasahara, Amato
2012-11-01
We developed a new back-projection method that uses teleseismic P-waveforms to integrate the direct P-phase with reflected phases from structural discontinuities near the source and used it to estimate the spatiotemporal distribution of the seismic energy release of the 2011 Tohoku-oki earthquake. We projected a normalized cross-correlation of observed waveforms with corresponding Green's functions onto the seismic source region to obtain a high-resolution image of the seismic energy release. Applying this method to teleseismic P-waveform data of the 2011 Tohoku-oki earthquake, we obtained spatiotemporal distributions of seismic energy release for two frequency bands, a low-frequency dataset and a high-frequency dataset. We showed that the energy radiated in the dip direction was strongly frequency dependent. The area of major high-frequency seismic radiation extended only downdip from the hypocenter, whereas the area of major low-frequency seismic radiation propagated both downdip and updip from the hypocenter. We detected a large release of seismic energy near the Japan Trench in the area of maximum slip, which was also the source area of the gigantic tsunami, when we used only the low-frequency dataset. The timing of this large seismic energy release corresponded to an episode of smooth and rapid slip near the Japan Trench, and reflects the strong dependence of the seismic energy distribution obtained on the frequency band of the input waveform dataset. The episode of smooth and rapid slip may have been the trigger for a release of roughly all of the accumulated elastic strain in the seismic source region of the 2011 Tohoku-oki earthquake.
Smoothing spline ANOVA frailty model for recurrent event data.
Du, Pang; Jiang, Yihua; Wang, Yuedong
2011-12-01
Gap time hazard estimation is of particular interest in recurrent event data. This article proposes a fully nonparametric approach for estimating the gap time hazard. Smoothing spline analysis of variance (ANOVA) decompositions are used to model the log gap time hazard as a joint function of gap time and covariates, and general frailty is introduced to account for between-subject heterogeneity and within-subject correlation. We estimate the nonparametric gap time hazard function and parameters in the frailty distribution using a combination of the Newton-Raphson procedure, the stochastic approximation algorithm (SAA), and the Markov chain Monte Carlo (MCMC) method. The convergence of the algorithm is guaranteed by decreasing the step size of parameter update and/or increasing the MCMC sample size along iterations. Model selection procedure is also developed to identify negligible components in a functional ANOVA decomposition of the log gap time hazard. We evaluate the proposed methods with simulation studies and illustrate its use through the analysis of bladder tumor data. © 2011, The International Biometric Society.
An investigation of particles suspension using smoothed particle hydrodynamics
NASA Astrophysics Data System (ADS)
Pazouki, Arman; Negrut, Dan
2013-11-01
This contribution outlines a method for the direct numerical simulation of rigid body suspensions in a Lagrangian-Lagrangian framework using extended Smoothed Particle Hydrodynamics (XSPH) method. The dynamics of the arbitrarily shaped rigid bodies is fully resolved via Boundary Condition Enforcing (BCE) markers and updated according to the general Newton-Euler equations of motion. The simulation tool, refered to herien as Chrono::Fluid, relies on a parallel implementation that runs on Graphics Processing Unit (GPU) cards. The simulation results obtained for transient Poiseuille flow, migration of cylinder and sphere in Poiseuille flow, and distribution of particles at different cross sections of the laminar flow of dilute suspension were respectively within 0.1%, 1%, and 5% confidence interval of analytical and experimental results reported in the literature. It was shown that at low Reynolds number, Re = O(1), the radial migration (a) behaves non-monotonically as the particles relative distance (distance over diameter) increases from zero to two; and (b) decreases as the particle skewness and size increases. The scaling of Chrono::Fluid was demonstrated in conjunction with a suspension dynamics analysis in which the number of ellipsoids went up to 3e4. Financial support was provided in part by National Science Foundation grant NSF CMMI-084044.
Kastanya, Doddy Yozef Febrian; Turinsky, Paul J.
2005-05-15
A Newton-Krylov iterative solver has been developed to reduce the CPU execution time of boiling water reactor (BWR) core simulators implemented in the core simulator part of the Fuel Optimization for Reloads Multiple Objectives by Simulated Annealing for BWR (FORMOSA-B) code, which is an in-core fuel management optimization code for BWRs. This new solver utilizes Newton's method to explicitly treat strong nonlinearities in the problem, replacing the traditionally used nested iterative approach. Newton's method provides the solver with a higher-than-linear convergence rate, assuming that good initial estimates of the unknowns are provided. Within each Newton iteration, an appropriately preconditioned Krylov solver is utilized for solving the linearized system of equations. Taking advantage of the higher convergence rate provided by Newton's method and utilizing an efficient preconditioned Krylov solver, we have developed a Newton-Krylov solver to evaluate the three-dimensional, two-group neutron diffusion equations coupled with a two-phase flow model within a BWR core simulator. Numerical tests on the new solver have shown that speedups ranging from 1.6 to 2.1, with reference to the traditional approach of employing nested iterations to treat the nonlinear feedbacks, can be achieved. However, if a preconditioned Krylov solver is employed to complete the inner iterations of the traditional approach, negligible CPU time differences are noted between the Newton-Krylov and traditional (Krylov) approaches.
Newton's Metaphysics of Space as God's Emanative Effect
NASA Astrophysics Data System (ADS)
Jacquette, Dale
2014-09-01
In several of his writings, Isaac Newton proposed that physical space is God's "emanative effect" or "sensorium," revealing something interesting about the metaphysics underlying his mathematical physics. Newton's conjectures depart from Plato and Aristotle's metaphysics of space and from classical and Cambridge Neoplatonism. Present-day philosophical concepts of supervenience clarify Newton's ideas about space and offer a portrait of Newton not only as a mathematical physicist but an independent-minded rationalist philosopher.
Magnetic Levitation and Newton's Third Law
ERIC Educational Resources Information Center
Aguilar, Horacio Munguia
2007-01-01
Newton's third law is often misunderstood by students and even their professors, as has already been pointed out in the literature. Application of the law in the context of electromagnetism can be especially problematic, because the idea that the forces of "action" and "reaction" are equal and opposite independent of the medium through which they…
A Class Inquiry into Newton's Cooling Curve
ERIC Educational Resources Information Center
Bartholow, Martin
2007-01-01
Newton's cooling curve was chosen for the four-part laboratory inquiry into conditions affecting temperature change. The relationship between time and temperature is not foreseen by the average high school student before the first session. However, during several activities students examine the classic relationship, T = A exp[superscript -Ct] + B…
Three Hundred Years of Newton's Principia.
ERIC Educational Resources Information Center
Dolby, R. G. A.
1987-01-01
Discusses how the reputation of "Principia" was created and maintained. Indicates the difficulties of identifying a single unambiguous meaning for the text. Shows that knowledge of Sir Isaac Newton makes little difference to understanding the later impact of the work. (CW)
Newtons's Thermometry: The Role of Radiation.
ERIC Educational Resources Information Center
French, A. P.
1993-01-01
Discusses Newton's idea of predicting very high temperatures of objects by observing the time needed for the object to cool to some standard reference temperature. This article discusses experimental deviations from this idea and provides explanations for the observed results. (MVL)
Newton's Law: Not so Simple after All
ERIC Educational Resources Information Center
Robertson, William C.; Gallagher, Jeremiah; Miller, William
2004-01-01
One of the most basic concepts related to force and motion is Newton's first law, which essentially states, "An object at rest tends to remain at rest unless acted on by an unbalanced force. An object in motion in a straight line tends to remain in motion in a straight line unless acted upon by an unbalanced force." Judging by the time and space…
Newton's First Law: A Learning Cycle Approach
ERIC Educational Resources Information Center
McCarthy, Deborah
2005-01-01
To demonstrate how Newton's first law of motion applies to students' everyday lives, the author developed a learning cycle series of activities on inertia. The discrepant event at the heart of these activities is sure to elicit wide-eyed stares and puzzled looks from students, but also promote critical thinking and help bring an abstract concept…
Sonic Beam Model of Newton's Cradle
ERIC Educational Resources Information Center
Menger, Fredric M.; Rizvi, Syed A. A.
2016-01-01
The motions of Newton's cradle, consisting of several steel balls hanging side-by-side, have been analysed in terms of a sound pulse that travels via points of contact among the balls. This presupposes a focused energy beam. When the pulse reaches the fifth and final ball, the energy disperses and dislocates the ball with a trajectory equivalent…
Infinity and Newton's Three Laws of Motion
NASA Astrophysics Data System (ADS)
Lee, Chunghyoung
2011-12-01
It is shown that the following three common understandings of Newton's laws of motion do not hold for systems of infinitely many components. First, Newton's third law, or the law of action and reaction, is universally believed to imply that the total sum of internal forces in a system is always zero. Several examples are presented to show that this belief fails to hold for infinite systems. Second, two of these examples are of an infinitely divisible continuous body with finite mass and volume such that the sum of all the internal forces in the body is not zero and the body accelerates due to this non-null net internal force. So the two examples also demonstrate the breakdown of the common understanding that according to Newton's laws a body under no external force does not accelerate. Finally, these examples also make it clear that the expression `impressed force' in Newton's formulations of his first and second laws should be understood not as `external force' but as `exerted force' which is the sum of all the internal and external forces acting on a given body, if the body is infinitely divisible.
Bernoulli and Newton in Fluid Mechanics
ERIC Educational Resources Information Center
Smith, Norman F.
1972-01-01
Bernoulli's theorem can be better understood with the aid of Newton's laws and the law of conservation of energy. Application of this theorem should involve only cases dealing with an interchange of velocity and pressure within a fluid under isentropic conditions. (DF)
Quantum physics explains Newton's laws of motion
NASA Astrophysics Data System (ADS)
Ogborn, Jon; Taylor, Edwin F.
2005-01-01
Newton was obliged to give his laws of motion as fundamental axioms. But today we know that the quantum world is fundamental, and Newton’s laws can be seen as consequences of fundamental quantum laws. This article traces this transition from fundamental quantum mechanics to derived classical mechanics.
Bernoulli and Newton in Fluid Mechanics
ERIC Educational Resources Information Center
Smith, Norman F.
1972-01-01
Bernoulli's theorem can be better understood with the aid of Newton's laws and the law of conservation of energy. Application of this theorem should involve only cases dealing with an interchange of velocity and pressure within a fluid under isentropic conditions. (DF)
A Class Inquiry into Newton's Cooling Curve
ERIC Educational Resources Information Center
Bartholow, Martin
2007-01-01
Newton's cooling curve was chosen for the four-part laboratory inquiry into conditions affecting temperature change. The relationship between time and temperature is not foreseen by the average high school student before the first session. However, during several activities students examine the classic relationship, T = A exp[superscript -Ct] + B…
Magnetic Levitation and Newton's Third Law
ERIC Educational Resources Information Center
Aguilar, Horacio Munguia
2007-01-01
Newton's third law is often misunderstood by students and even their professors, as has already been pointed out in the literature. Application of the law in the context of electromagnetism can be especially problematic, because the idea that the forces of "action" and "reaction" are equal and opposite independent of the medium through which they…
Sonic Beam Model of Newton's Cradle
ERIC Educational Resources Information Center
Menger, Fredric M.; Rizvi, Syed A. A.
2016-01-01
The motions of Newton's cradle, consisting of several steel balls hanging side-by-side, have been analysed in terms of a sound pulse that travels via points of contact among the balls. This presupposes a focused energy beam. When the pulse reaches the fifth and final ball, the energy disperses and dislocates the ball with a trajectory equivalent…
Newton's First Law: A Learning Cycle Approach
ERIC Educational Resources Information Center
McCarthy, Deborah
2005-01-01
To demonstrate how Newton's first law of motion applies to students' everyday lives, the author developed a learning cycle series of activities on inertia. The discrepant event at the heart of these activities is sure to elicit wide-eyed stares and puzzled looks from students, but also promote critical thinking and help bring an abstract concept…
Newton's Law: Not so Simple after All
ERIC Educational Resources Information Center
Robertson, William C.; Gallagher, Jeremiah; Miller, William
2004-01-01
One of the most basic concepts related to force and motion is Newton's first law, which essentially states, "An object at rest tends to remain at rest unless acted on by an unbalanced force. An object in motion in a straight line tends to remain in motion in a straight line unless acted upon by an unbalanced force." Judging by the time and space…
NEWTON'S APPLE 14th Season Teacher's Guide.
ERIC Educational Resources Information Center
Wichmann, Sue, Ed.
This guide was developed to help teachers use the 14th season of NEWTON'S APPLE in their classrooms and contains lessons formatted to follow the National Science Education Standards. The "Overview,""Main Activity," and "Try-This" sections were created with inquiry-based learning in mind. Each lesson page begins with…
Three Hundred Years of Newton's Principia.
ERIC Educational Resources Information Center
Dolby, R. G. A.
1987-01-01
Discusses how the reputation of "Principia" was created and maintained. Indicates the difficulties of identifying a single unambiguous meaning for the text. Shows that knowledge of Sir Isaac Newton makes little difference to understanding the later impact of the work. (CW)
Constructs and Attributes in Test Validity: Reflections on Newton's Account
ERIC Educational Resources Information Center
Markus, Keith A.
2012-01-01
I congratulate Paul E. Newton on a thoughtful and evenhanded contribution to test validity theory. I especially appreciate the evident care that went into interpreting the various authors whose work Newton discusses. I found many useful insights along with the few minor points with which I might quibble. I comment on three aspects of Newton's…
Constructs and Attributes in Test Validity: Reflections on Newton's Account
ERIC Educational Resources Information Center
Markus, Keith A.
2012-01-01
I congratulate Paul E. Newton on a thoughtful and evenhanded contribution to test validity theory. I especially appreciate the evident care that went into interpreting the various authors whose work Newton discusses. I found many useful insights along with the few minor points with which I might quibble. I comment on three aspects of Newton's…
Fourier smoothing of digital photographic spectra
NASA Astrophysics Data System (ADS)
Anupama, G. C.
1990-03-01
Fourier methods of smoothing one-dimensional data are discussed with particular reference to digital photographic spectra. Data smoothed using lowpass filters with different cut-off frequencies are intercompared. A method to scale densities in order to remove the dependence of grain noise on density is described. Optimal filtering technique which models signal and noise in Fourier domain is also explained.
NASA Astrophysics Data System (ADS)
Iga, Shin-ichi
2015-09-01
A generation method for smooth, seamless, and structured triangular grids on a sphere with flexibility in resolution distribution is proposed. This method is applicable to many fields that deal with a sphere on which the required resolution is not uniform. The grids were generated using the spring dynamics method, and adjustments were made using analytical functions. The mesh topology determined its resolution distribution, derived from a combination of conformal mapping factors: polar stereographic projection (PSP), Lambert conformal conic projection (LCCP), and Mercator projection (MP). Their combination generated, for example, a tropically fine grid that had a nearly constant high-resolution belt around the equator, with a gradual decrease in resolution distribution outside of the belt. This grid can be applied to boundary-less simulations of tropical meteorology. The other example involves a regionally fine grid with a nearly constant high-resolution circular region and a gradually decreasing resolution distribution outside of the region. This is applicable to regional atmospheric simulations without grid nesting. The proposed grids are compatible with computer architecture because they possess a structured form. Each triangle of the proposed grids was highly regular, implying a high local isotropy in resolution. Finally, the proposed grids were examined by advection and shallow water simulations.
NASA Astrophysics Data System (ADS)
Hendrikse, Anne; Veldhuis, Raymond; Spreeuwers, Luuk
2013-12-01
Second-order statistics play an important role in data modeling. Nowadays, there is a tendency toward measuring more signals with higher resolution (e.g., high-resolution video), causing a rapid increase of dimensionality of the measured samples, while the number of samples remains more or less the same. As a result the eigenvalue estimates are significantly biased as described by the Marčenko Pastur equation for the limit of both the number of samples and their dimensionality going to infinity. By introducing a smoothness factor, we show that the Marčenko Pastur equation can be used in practical situations where both the number of samples and their dimensionality remain finite. Based on this result we derive methods, one already known and one new to our knowledge, to estimate the sample eigenvalues when the population eigenvalues are known. However, usually the sample eigenvalues are known and the population eigenvalues are required. We therefore applied one of the these methods in a feedback loop, resulting in an eigenvalue bias correction method. We compare this eigenvalue correction method with the state-of-the-art methods and show that our method outperforms other methods particularly in real-life situations often encountered in biometrics: underdetermined configurations, high-dimensional configurations, and configurations where the eigenvalues are exponentially distributed.
Witte, K. J.; Fill, E.; Scrlac, W.
1985-04-30
The pulse duration of an iodine laser is adjusted between 400 ps and 20 ns primarily by changing the resonator length in the range of about 2 cm to about 100 cm and secondarily by the ratio of excitation energy to threshold energy of the laser. Iodine laser pulses without pre-pulse and substructure are achieved in that the gas pressure of the laser gas of the iodine laser is adapted to the resonator length in order to limit the band width of the amplification and thus the band width of the pulse to be produced. The longer are the laser pulses to be produced the lower is the pressure chosen. A prerequisite for the above results is that the excitation of the iodine laser occurs extremely rapidly. This is advantageously achieved by photo-dissociation of a perfluoroalkyl iodide as CF/sub 3/I by means of laser providing sufficiently short output pumping pulses, e.g. an excimer laser, as a KrF laser or XeCl laser or a frequency-multiplied Nd-glass or Nd-YAG laser, or a N/sub 2/ laser (in combination with t-C/sub 4/F/sub 9/I as laser medium). In addition to the substantial advantage of the easy variability of the pulse duration the method additionally has a number of further advantages, namely pre-pulse-free rise of the laser pulse up to the maximum amplitude; exchange of the laser medium between two pulses is not necessary at pulse repetition rates below about 1 hertz; high pulse repetion rates obtainable with laser gas regeneration; switching elements for isolating a laser oscillator from a subsequent amplifier cascade for the purpose of avoiding parasitic oscillations are not as critical as with flashlamp-pumped lasers.
A Newton-Krylov Approach to Aerodynamic Shape Optimization in Three Dimensions
NASA Astrophysics Data System (ADS)
Leung, Timothy Man-Ming
A Newton-Krylov algorithm is presented for aerodynamic shape optimization in three dimensions using the Euler equations. An inexact-Newton method is used in the flow solver, a discrete-adjoint method to compute the gradient, and the quasi-Newton optimizer to find the optimum. A Krylov subspace method with approximate-Schur preconditioning is used to solve both the flow equation and the adjoint equation. Basis spline surfaces are used to parameterize the geometry, and a fast algebraic algorithm is used for grid movement. Accurate discrete- adjoint gradients can be obtained in approximately one-fourth the time required for a converged flow solution. Single- and multi-point lift-constrained drag minimization optimization cases are presented for wing design at transonic speeds. In all cases, the optimizer is able to efficiently decrease the objective function and gradient for problems with hundreds of design variables.
Newton flow of the Riemann zeta function: separatrices control the appearance of zeros
NASA Astrophysics Data System (ADS)
Neuberger, J. W.; Feiler, C.; Maier, H.; Schleich, W. P.
2014-10-01
A great many phenomena in physics can be traced back to the zeros of a function or a functional. Eigenvalue or variational problems prevalent in classical as well as quantum mechanics are examples illustrating this statement. Continuous descent methods taken with respect to the proper metric are efficient ways to attack such problems. In particular, the continuous Newton method brings out the lines of constant phase of a complex-valued function. Although the patterns created by the Newton flow are reminiscent of the field lines of electrostatics and magnetostatics they cannot be realized in this way since in general they are not curl-free. We apply the continuous Newton method to the Riemann zeta function and discuss the emerging patterns emphasizing especially the structuring of the non-trivial zeros by the separatrices. This approach might open a new road toward the Riemann hypothesis.
The use of Newton's rings for characterising ophthalmic lenses.
Illueca, C; Vazquez, C; Hernández, C; Viqueira, V
1998-07-01
The interference technique of Newton's Rings is widely used for the quality control of optical surfaces because the precision obtained with this method proves to be very satisfactory. The dimensions of the rings permits calculation of the radii of curvature of the analysed surfaces and deformation of the interference pattern can be utilised to calculate other parameters, such as astigmatism. We describe the study of progressive surfaces by means of this technique, whereby the analysis of the various points of the progressive corridor is made, and also include information on the power function for these lenses, as well as the addition and corridor length.
Newton, laplace, and the epistemology of systems biology.
Bittner, Michael L; Dougherty, Edward R
2012-01-01
For science, theoretical or applied, to significantly advance, researchers must use the most appropriate mathematical methods. A century and a half elapsed between Newton's development of the calculus and Laplace's development of celestial mechanics. One cannot imagine the latter without the former. Today, more than three-quarters of a century has elapsed since the birth of stochastic systems theory. This article provides a perspective on the utilization of systems theory as the proper vehicle for the development of systems biology and its application to complex regulatory diseases such as cancer.
Asymptotic analysis of Bayesian generalization error with Newton diagram.
Yamazaki, Keisuke; Aoyagi, Miki; Watanabe, Sumio
2010-01-01
Statistical learning machines that have singularities in the parameter space, such as hidden Markov models, Bayesian networks, and neural networks, are widely used in the field of information engineering. Singularities in the parameter space determine the accuracy of estimation in the Bayesian scenario. The Newton diagram in algebraic geometry is recognized as an effective method by which to investigate a singularity. The present paper proposes a new technique to plug the diagram in the Bayesian analysis. The proposed technique allows the generalization error to be clarified and provides a foundation for an efficient model selection. We apply the proposed technique to mixtures of binomial distributions.
NASA Technical Reports Server (NTRS)
Tapia, R. A.; Vanrooy, D. L.
1976-01-01
A quasi-Newton method is presented for minimizing a nonlinear function while constraining the variables to be nonnegative and sum to one. The nonnegativity constraints were eliminated by working with the squares of the variables and the resulting problem was solved using Tapia's general theory of quasi-Newton methods for constrained optimization. A user's guide for a computer program implementing this algorithm is provided.
Smoothed square well potential
NASA Astrophysics Data System (ADS)
Salamon, P.; Vertse, T.
2017-07-01
The classical square well potential is smoothed with a finite range smoothing function in order to get a new simple strictly finite range form for the phenomenological nuclear potential. The smoothed square well form becomes exactly zero smoothly at a finite distance, in contrast to the Woods-Saxon form. If the smoothing range is four times the diffuseness of the Woods-Saxon shape both the central and the spin-orbit terms of the Woods-Saxon shape are reproduced reasonably well. The bound single-particle energies in a Woods-Saxon potential can be well reproduced with those in the smoothed square well potential. The same is true for the complex energies of the narrow resonances.
NASA Technical Reports Server (NTRS)
Chao, C. C.; Broucke, R. A.
1976-01-01
The Newton iteration method has been widely applied to the solution of various equations such as Kepler's equation. In this study it is used in planetary and satellite theory as a general procedure for Fourier series inversion. The method is used for the construction of the 1/Delta series either in literal form or in numerical form with small eccentricities and inclinations substituted in advance. This usually results in very compact series. With the Newton iteration procedure and a computerized series manipulation technique, the Fourier series of 1/Delta of the mutual perturbations among most natural satellites can be easily constructed.
Study on the algorithm for Newton-Rapson iteration interpolation of NURBS curve and simulation
NASA Astrophysics Data System (ADS)
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
2017-04-01
In order to solve the problems of Newton-Rapson iteration interpolation method of NURBS Curve, Such as interpolation time bigger, calculation more complicated, and NURBS curve step error are not easy changed and so on. This paper proposed a study on the algorithm for Newton-Rapson iteration interpolation method of NURBS curve and simulation. We can use Newton-Rapson iterative that calculate (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meet the high-speed and high-accuracy interpolation requirements of CNC systems. The interpolation of NURBS curve should be finished. The simulation results show that the algorithm is correct; it is consistent with a NURBS curve interpolation requirements.
Nonlinear smoothing identification algorithm with application to data consistency checks
NASA Technical Reports Server (NTRS)
Idan, M.
1993-01-01
A parameter identification algorithm for nonlinear systems is presented. It is based on smoothing test data with successively improved sets of model parameters. The smoothing, which is iterative, provides all of the information needed to compute the gradients of the smoothing performance measure with respect to the parameters. The parameters are updated using a quasi-Newton procedure, until convergence is achieved. The advantage of this algorithm over standard maximum likelihood identification algorithms is the computational savings in calculating the gradient. This algorithm was used for flight-test data consistency checks based on a nonlinear model of aircraft kinematics. Measurement biases and scale factors were identified. The advantages of the presented algorithm and model are discussed.
A combination method for solving nonlinear equations
NASA Astrophysics Data System (ADS)
Silalahi, B. P.; Laila, R.; Sitanggang, I. S.
2017-01-01
This paper discusses methods for finding solutions of nonlinear equations: the Newton method, the Halley method and the combination of the Newton method, the Newton inverse method and the Halley method. Computational results in terms of the accuracy, the number of iterations and the running time for solving some given problems are presented.
XMM Newton Observations of Toothbrush Cluster
NASA Astrophysics Data System (ADS)
Kara, Sinancan; Nihal Ercan, Enise; De Plaa, Jelle; Mernier, Francois
2016-07-01
Galaxy clusters are the largest gravitationally-bound objects in the universe. The member galaxies are embedded in a hot X-ray emitting Intra-Cluster Medium (ICM) that has been enriched over time with metals produced by supernovae. In this presentation we show new results from XMM-Newton regarding the merging cluster 1RXSJ0603.3+4213. This cluster, also known as the Toothbrush cluster, shows a large toothbrush-shaped radio relic associated with a merger shock North of the cluster core. We show the distribution and the abundances of the metals in this merging cluster in relation to the merger shock. The results are derived from spatially resolved X-ray spectra from the EPIC instrument aboard XMM-Newton.
Cancer Therapeutics Following Newton's Third Law.
Arbab, Ali S; Jain, Meenu; Achyut, Bhagelu R
2016-01-01
Cancer is a wound that never heals. This is suggested by the data produced after several years of cancer research and therapeutic interventions done worldwide. There is a strong similarity between Newton's third law and therapeutic behavior of tumor. According to Newton's third law "for every action, there is an equal and opposite reaction". In cancer therapeutics, tumor exerts strong pro-tumor response against applied treatment and imposes therapeutic resistance, one of the major problems seen in preclinical and clinical studies. There is an urgent need to understand the tumor biology of therapy resistant tumors following the therapy. Here, we have discussed the problem and provided possible path for future studies to treat cancer.
Ke, Guibao; Hu, Yao; Huang, Xin; Peng, Xuan; Lei, Min; Huang, Chaoli; Gu, Li; Xian, Ping; Yang, Dehua
2016-01-01
Hemorrhagic fever with renal syndrome (HFRS) is one of the most common infectious diseases globally. With the most reported cases in the world, the epidemic characteristics are still remained unclear in China. This paper utilized the seasonal-trend decomposition (STL) method to analyze the periodicity and seasonality of the HFRS data, and used the exponential smoothing model (ETS) model to predict incidence cases from July to December 2016 by using the data from January 2006 to June 2016. Analytic results demonstrated a favorable trend of HFRS in China, and with obvious periodicity and seasonality, the peak of the annual reported cases in winter concentrated on November to January of the following year, and reported in May and June also constituted another peak in summer. Eventually, the ETS (M, N and A) model was adopted for fitting and forecasting, and the fitting results indicated high accuracy (Mean absolute percentage error (MAPE) = 13.12%). The forecasting results also demonstrated a gradual decreasing trend from July to December 2016, suggesting that control measures for hemorrhagic fever were effective in China. The STL model could be well performed in the seasonal analysis of HFRS in China, and ETS could be effectively used in the time series analysis of HFRS in China. PMID:27976704
NASA Astrophysics Data System (ADS)
Ni, Pei-Nan; Tong, Jin-Chao; Tobing, Landobasa Y. M.; Qiu, Shu-Peng; Xu, Zheng-Ji; Tang, Xiao-Hong; Zhang, Dao-Hua
2017-07-01
We present a simple thermal treatment with the antimony source for the metal-organic chemical vapor deposition of thin GaSb films on GaAs (111) substrates for the first time. The properties of the as-grown GaSb films are systematically analyzed by scanning electron microscopy, atomic force microscopy, x-ray diffraction, photo-luminescence (PL) and Hall measurement. It is found that the as-grown GaSb films by the proposed method can be as thin as 35 nm and have a very smooth surface with the root mean square roughness as small as 0.777 nm. Meanwhile, the grown GaSb films also have high crystalline quality, of which the full width at half maximum of the rocking-curve is as small as 218 arcsec. Moreover, the good optical quality of the GaSb films has been demonstrated by the low-temperature PL. This work provides a simple and feasible buffer-free strategy for the growth of high-quality GaSb films directly on GaAs substrates and the strategy may also be applicable to the growth on other substrates and the hetero-growth of other materials.
NASA Astrophysics Data System (ADS)
Ni, Pei-Nan; Tong, Jin-Chao; Tobing, Landobasa Y. M.; Qiu, Shu-Peng; Xu, Zheng-Ji; Tang, Xiao-Hong; Zhang, Dao-Hua
2017-02-01
We present a simple thermal treatment with the antimony source for the metal-organic chemical vapor deposition of thin GaSb films on GaAs (111) substrates for the first time. The properties of the as-grown GaSb films are systematically analyzed by scanning electron microscopy, atomic force microscopy, x-ray diffraction, photo-luminescence (PL) and Hall measurement. It is found that the as-grown GaSb films by the proposed method can be as thin as 35 nm and have a very smooth surface with the root mean square roughness as small as 0.777 nm. Meanwhile, the grown GaSb films also have high crystalline quality, of which the full width at half maximum of the rocking-curve is as small as 218 arcsec. Moreover, the good optical quality of the GaSb films has been demonstrated by the low-temperature PL. This work provides a simple and feasible buffer-free strategy for the growth of high-quality GaSb films directly on GaAs substrates and the strategy may also be applicable to the growth on other substrates and the hetero-growth of other materials.
NASA Astrophysics Data System (ADS)
von Clarmann, T.
2014-04-01
The difference due to the content of a priori information between a constrained retrieval and the true atmospheric state is usually represented by the so-called smoothing error. In this paper it is shown that the concept of the smoothing error is questionable because it is not compliant with Gaussian error propagation. The reason for this is that the smoothing error does not represent the expected deviation of the retrieval from the true state but the expected deviation of the retrieval from the atmospheric state sampled on an arbitrary grid, which is itself a smoothed representation of the true state. The idea of a sufficiently fine sampling of this reference atmospheric state is untenable because atmospheric variability occurs on all scales, implying that there is no limit beyond which the sampling is fine enough. Even the idealization of infinitesimally fine sampling of the reference state does not help because the smoothing error is applied to quantities which are only defined in a statistical sense, which implies that a finite volume of sufficient spatial extent is needed to meaningfully talk about temperature or concentration. Smoothing differences, however, which play a role when measurements are compared, are still a useful quantity if the involved a priori covariance matrix has been evaluated on the comparison grid rather than resulting from interpolation. This is, because the undefined component of the smoothing error, which is the effect of smoothing implied by the finite grid on which the measurements are compared, cancels out when the difference is calculated.
Missal, Marcus; Heinen, Stephen J
2017-04-19
If a visual object of interest suddenly starts to move, we will try to follow it with a smooth movement of the eyes. This smooth pursuit response aims to reduce image motion on the retina that could blur visual perception. In recent years, our knowledge of the neural control of smooth pursuit initiation has sharply increased. However, stopping smooth pursuit eye movements is less well understood and will be discussed in this paper. The most straightforward way to study smooth pursuit stopping is by interrupting image motion on the retina. This causes eye velocity to decay exponentially towards zero. However, smooth pursuit stopping is not a passive response, as shown by behavioural and electrophysiological evidence. Moreover, smooth pursuit stopping is particularly influenced by active prediction of the upcoming end of the target. Here, we suggest that a particular class of inhibitory neurons of the brainstem, the omnipause neurons, could play a central role in pursuit stopping. Furthermore, the role of supplementary eye fields of the frontal cortex in smooth pursuit stopping is also discussed.This article is part of the themed issue 'Movement suppression: brain mechanisms for stopping and stillness'. © 2017 The Author(s).
Cosmological Conundrums and Discoveries Since Newton
NASA Astrophysics Data System (ADS)
Topper, David R.
Cosmology is key branch of astronomy, dealing with questions around the structure of the universe. The ancient cosmos - systematically codified by Aristotle, and later given empirical support, especially by Ptolemy - was geocentric, geostatic, and finite. Based on a common sense view of the world being as it appears to our senses, the ancient model prevailed well into the seventeenth century. The subsequent scientific revolution, however, bequeathed to the eighteenth century and after a radically different cosmic model. The radical change came in two stages. First Copernicus in the fifteenth century moved the Sun to Earth's previous place at the center of the universe, an idea adopted by Galileo, Kepler, and a few other key thinkers up to Newton. The second stage, often called the "breaking of the sphere," replaced the sphere of a few thousand stars at the edge of the finite universe with myriad stars extending into an infinite universe, filled with Newton's invisible gravity, and with our Earth being the third planet from the Sun in our solar system somewhere within that Euclidean space. Two planets were added to our solar system (one in the eighteenth and one in the nineteenth centuries), but the overall structure remained essentially as conceived by Newton when he died in 1727. This was the universe Einstein was born into in 1879.
Life after Newton: an ecological metaphysic.
Ulanowicz, R E
1999-05-01
Ecology may indeed be 'deep', as some have maintained, but perhaps much of the mystery surrounding it owes more simply to the dissonance between ecological notions and the fundamentals of the modern synthesis. Comparison of the axioms supporting the Newtonian world view with those underlying the organicist and stochastic metaphors that motivate much of ecosystems science reveals strong disagreements--especially regarding the nature of the causes of events and the scalar domains over which these causes can operate. The late Karl Popper held that the causal closure forced by our mechanical perspective on nature frustrates our attempts to achieve an 'evolutionary theory of knowledge.' He suggested that the Newtonian concept of 'force' must be generalized to encompass the contingencies that arise in evolutionary processes. His reformulation of force as 'propensity' leads quite naturally to a generalization of Newton's laws for ecology. The revised tenets appear, however, to exhibit more scope and allow for change to arise from within a system. Although Newton's laws survive (albeit in altered form) within a coalescing ecological metaphysic, the axioms that Enlightenment thinkers appended to Newton's work seem ill-suited for ecology and perhaps should yield to a new and coherent set of assumptions on how to view the processes of nature.
NASA Technical Reports Server (NTRS)
Voronov, Oleg
2007-01-01
Diamond smoothing tools have been proposed for use in conjunction with diamond cutting tools that are used in many finish-machining operations. Diamond machining (including finishing) is often used, for example, in fabrication of precise metal mirrors. A diamond smoothing tool according to the proposal would have a smooth spherical surface. For a given finish machining operation, the smoothing tool would be mounted next to the cutting tool. The smoothing tool would slide on the machined surface left behind by the cutting tool, plastically deforming the surface material and thereby reducing the roughness of the surface, closing microcracks and otherwise generally reducing or eliminating microscopic surface and subsurface defects, and increasing the microhardness of the surface layer. It has been estimated that if smoothing tools of this type were used in conjunction with cutting tools on sufficiently precise lathes, it would be possible to reduce the roughness of machined surfaces to as little as 3 nm. A tool according to the proposal would consist of a smoothing insert in a metal holder. The smoothing insert would be made from a diamond/metal functionally graded composite rod preform, which, in turn, would be made by sintering together a bulk single-crystal or polycrystalline diamond, a diamond powder, and a metallic alloy at high pressure. To form the spherical smoothing tip, the diamond end of the preform would be subjected to flat grinding, conical grinding, spherical grinding using diamond wheels, and finally spherical polishing and/or buffing using diamond powders. If the diamond were a single crystal, then it would be crystallographically oriented, relative to the machining motion, to minimize its wear and maximize its hardness. Spherically polished diamonds could also be useful for purposes other than smoothing in finish machining: They would likely also be suitable for use as heat-resistant, wear-resistant, unlubricated sliding-fit bearing inserts.
The transformation of two-tier test into four-tier test on Newton's laws concepts
NASA Astrophysics Data System (ADS)
Fratiwi, Nuzulira Janeusse; Kaniawati, Ida; Suhendi, Endi; Suyana, Iyon; Samsudin, Achmad
2017-05-01
This research is based on other forms of diagnostic tests in the format of two-tier test about the concept of Newton's Laws that the development still not yet well. Therefore, must be done a research to transform two-tier test into the four-tier test that refers to Kaltakci criteria. To achieve the purpose, the researchers used the 4D model (Defining, Designing, Developing, and Disseminating) as a method of research. The participants are 25 students at one of the senior high schools in Bandung. The result is the development of two-tier test format intothe four-tier test formatabout Newton's Laws concept. At the disseminating, there are obtained the category of students who scientific knowledge, misconceptions, and errors of the Newton's Laws concepts. The research will be the reference or preliminary research to conduct further research.
Conjugate gradient and steepest descent approach on quasi-Newton search direction
NASA Astrophysics Data System (ADS)
Sofi, A. Z. M.; Mamat, M.; Mohd, I.; Ibrahim, M. A. H.
2014-07-01
An approach of using conjugate gradient and classic steepest descent search direction onto quasi-Newton search direction had been proposed in this paper and we called it as 'scaled CGSD-QN' search direction. A new coefficient formula had been successfully constructed for being used in the 'scaled CGSD-QN' search direction and proven here that the coefficient formula is globally converge to the minimizer. The Hessian update formula that has been used in the quasi-Newton algorithm is DFP update formula. This new search direction approach was testes with some some standard unconstrained optimization test problems and proven that this new search direction approach had positively affect quasi-Newton method by using DFP update formula.
A Newton-Krylov solution to the porous medium equations in the agree code
Ward, A. M.; Seker, V.; Xu, Y.; Downar, T. J.
2012-07-01
In order to improve the convergence of the AGREE code for porous medium, a Newton-Krylov solver was developed for steady state problems. The current three-equation system was expanded and then coupled using Newton's Method. Theoretical behavior predicts second order convergence, while actual behavior was highly nonlinear. The discontinuous derivatives found in both closure and empirical relationships prevented true second order convergence. Agreement between the current solution and new Exact Newton solution was well below the convergence criteria. While convergence time did not dramatically decrease, the required number of outer iterations was reduced by approximately an order of magnitude. GMRES was also used to solve problem, where ILU without fill-in was used to precondition the iterative solver, and the performance was slightly slower than the direct solution. (authors)
Liao, Xiaolin; Xu, Genbo; Chen, Song-Lin
2014-07-22
Half-smooth tongue sole (Cynoglossus semilaevis) is one of the most important flatfish species for aquaculture in China. To produce a monosex population, we attempted to develop a marker-assisted sex control technique in this sexually size dimorphic fish. In this study, we identified a co-dominant sex-linked marker (i.e., CyseSLM) by screening genomic microsatellites and further developed a novel molecular method for sex identification in the tongue sole. CyseSLM has a sequence similarity of 73%-75% with stickleback, medaka, Fugu and Tetraodon. At this locus, two alleles (i.e., A244 and A234) were amplified from 119 tongue sole individuals with primer pairs CyseSLM-F1 and CyseSLM-R. Allele A244 was present in all individuals, while allele A234 (female-associated allele, FAA) was mostly present in females with exceptions in four male individuals. Compared with the sequence of A244, A234 has a 10-bp deletion and 28 SNPs. A specific primer (CyseSLM-F2) was then designed based on the A234 sequence, which amplified a 204 bp fragment in all females and four males with primer CyseSLM-R. A time-efficient multiplex PCR program was developed using primers CyseSLM-F2, CyseSLM-R and the newly designed primer CyseSLM-F3. The multiplex PCR products with co-dominant pattern could be detected by agarose gel electrophoresis, which accurately identified the genetic sex of the tongue sole. Therefore, we have developed a rapid and reliable method for sex identification in tongue sole with a newly identified sex-linked microsatellite marker.
Kim, Hak Rim; Leavis, Paul C; Graceffa, Philip; Gallant, Cynthia; Morgan, Kathleen G
2010-11-01
Here we report and validate a new method, suitable broadly, for use in differentiated cells and tissues, for the direct visualization of actin polymerization under physiological conditions. We have designed and tested different versions of fluorescently labeled actin, reversibly attached to the protein transduction tag TAT, and have introduced this novel reagent into intact differentiated vascular smooth muscle cells (dVSMCs). A thiol-reactive version of the TAT peptide was synthesized by adding the amino acids glycine and cysteine to its NH(2)-terminus and forming a thionitrobenzoate adduct: viz. TAT-Cys-S-STNB. This peptide reacts readily with G-actin, and the complex is rapidly taken up by freshly enzymatically isolated dVSMC, as indicated by the fluorescence of a FITC tag on the TAT peptide. By comparing different versions of the construct, we determined that the optimal construct for biological applications is a nonfluorescently labeled TAT peptide conjugated to rhodamine-labeled actin. When TAT-Cys-S-STNB-tagged rhodamine actin (TSSAR) was added to live, freshly enzymatically isolated cells, we observed punctae of incorporated actin at the cortex of the cell. The punctae are indistinguishable from those we have previously reported to occur in the same cell type when rhodamine G-actin is added to permeabilized cells. Thus this new method allows the delivery of labeled G-actin into intact cells without disrupting the native state and will allow its further use to study the effect of physiological intracellular Ca(2+) concentration transients and signal transduction on actin dynamics in intact cells.
Analysis of XMM-Newton Data from Extended Sources and the Diffuse X-Ray Background
NASA Technical Reports Server (NTRS)
Snowden, Steven
2011-01-01
Reduction of X-ray data from extended objects and the diffuse background is a complicated process that requires attention to the details of the instrumental response as well as an understanding of the multiple background components. We present methods and software that we have developed to reduce data from XMM-Newton EPIC imaging observations for both the MOS and PN instruments. The software has now been included in the Science Analysis System (SAS) package available through the XMM-Newton Science Operations Center (SOC).
XMM-Newton closes in on space's exotic matter
NASA Astrophysics Data System (ADS)
2002-11-01
search. The way they got this measurement is a first in astronomical observations and it is considered a huge achievement. The method consists of determining the compactness of the neutron star in an indirect way. The gravitational pull of a neutron star is immense - thousands of million times stronger than the Earth’s. This makes the light particles emitted by the neutron star lose energy. This energy loss is called a gravitational 'red shift'. The measurement of this red shift by XMM-Newton indicated the strength of the gravitational pull, and revealed the star’s compactness. "This is a highly precise measurement that we could not have made without both the high sensitivity of XMM-Newton and its ability to distinguish details," says Fred Jansen, ESA's XMM-Newton Project Scientist. According to the main author of the discovery, Jean Cottam of NASA’s Goddard Space Flight Center, “attempts to measure the gravitational red shift were made right after Einstein published the General Theory of Relativity, but no one had ever been able to measure the effect in a neutron star, where it was supposed to be huge. This has now been confirmed." Note to editors The result was obtained by observations of the neutron star EXO 0748-676. XMM-Newton detected the light in the form of X-rays. In particular, thanks to analysis of this X-ray radiation, the astronomers were able to identify some chemical elements, namely iron, present in the material surrounding the neutron star. They then compared the distorted signal emitted by the iron atoms in the neutron star with the one produced by iron atoms in the laboratory. In this way, they could measure the actual degree of distortion due to the gravity of EXO 0748-676. The result is published in the 7 November 2002 issue of Nature. The lead author is Jean Cottam, of NASA’s Goddard Space Flight Center (Greenbelt, United States). Other authors are Mariano Mendez, of the National Institute for Space Research, SRON (The Netherlands); and
Parmar, Nina; Ahmadi, Raheleh
2015-01-01
Muscle degeneration is a prevalent disease, particularly in aging societies where it has a huge impact on quality of life and incurs colossal health costs. Suitable donor sources of smooth muscle cells are limited and minimally invasive therapeutic approaches are sought that will augment muscle volume by delivering cells to damaged or degenerated areas of muscle. For the first time, we report the use of highly porous microcarriers produced using thermally induced phase separation (TIPS) to expand and differentiate adipose-derived mesenchymal stem cells (AdMSCs) into smooth muscle-like cells in a format that requires minimal manipulation before clinical delivery. AdMSCs readily attached to the surface of TIPS microcarriers and proliferated while maintained in suspension culture for 12 days. Switching the incubation medium to a differentiation medium containing 2 ng/mL transforming growth factor beta-1 resulted in a significant increase in both the mRNA and protein expression of cell contractile apparatus components caldesmon, calponin, and myosin heavy chains, indicative of a smooth muscle cell-like phenotype. Growth of smooth muscle cells on the surface of the microcarriers caused no change to the integrity of the polymer microspheres making them suitable for a cell-delivery vehicle. Our results indicate that TIPS microspheres provide an ideal substrate for the expansion and differentiation of AdMSCs into smooth muscle-like cells as well as a microcarrier delivery vehicle for the attached cells ready for therapeutic applications. PMID:25205072
Astrophysical smooth particle hydrodynamics
NASA Astrophysics Data System (ADS)
Rosswog, Stephan
2009-04-01
The paper presents a detailed review of the smooth particle hydrodynamics (SPH) method with particular focus on its astrophysical applications. We start by introducing the basic ideas and concepts and thereby outline all ingredients that are necessary for a practical implementation of the method in a working SPH code. Much of SPH's success relies on its excellent conservation properties and therefore the numerical conservation of physical invariants receives much attention throughout this review. The self-consistent derivation of the SPH equations from the Lagrangian of an ideal fluid is the common theme of the remainder of the text. We derive a modern, Newtonian SPH formulation from the Lagrangian of an ideal fluid. It accounts for changes of the local resolution lengths which result in corrective, so-called "grad-h-terms". We extend this strategy to special relativity for which we derive the corresponding grad-h equation set. The variational approach is further applied to a general-relativistic fluid evolving in a fixed, curved background space-time. Particular care is taken to explicitly derive all relevant equations in a coherent way.
Supporting the learning of Newton's laws with graphical data
NASA Astrophysics Data System (ADS)
Piggott, David
Teaching physics provides the opportunity for a very unique interaction between students and instructor that is not found in chemistry or biology. Physics has a heavy emphasis on trying to alter students' misconceptions about how things work in the real word. In chemistry and microbiology this is not an issue because the topics of discussion in those classes are a new experience for the students. In the case of physics the students have everyday experience with the different concepts discussed. This causes the students to build incorrect mental models explaining how different things work. In order to correct these mental models physics teachers must first get the students to vocalize these misconceptions. Then the teacher must confront the students with an example that exposes the false nature of their model. Finally, the teacher must help the student resolve these discrepancies and form the correct model. This study attempts to resolve these discrepancies by giving the students concrete evidence via graphs of Newton's laws. The results reported here indicate that this method of eliciting the misconception, confronting the misconception, and resolving the misconception is successful with Newton's third law, but only marginally successful for first and second laws.
Stochastic modification of the Schrödinger-Newton equation
NASA Astrophysics Data System (ADS)
Bera, Sayantani; Mohan, Ravi; Singh, Tejinder P.
2015-07-01
The Schrödinger-Newton (SN) equation describes the effect of self-gravity on the evolution of a quantum system, and it has been proposed that gravitationally induced decoherence drives the system to one of the stationary solutions of the SN equation. However, the equation itself lacks a decoherence mechanism, because it does not possess any stochastic feature. In the present work we derive a stochastic modification of the Schrödinger-Newton equation, starting from the Einstein-Langevin equation in the theory of stochastic semiclassical gravity. We specialize this equation to the case of a single massive point particle, and by using Karolyhazy's phase variance method, we derive the Diósi-Penrose criterion for the decoherence time. We obtain a (nonlinear) master equation corresponding to this stochastic SN equation. This equation is, however, linear at the level of the approximation we use to prove decoherence; hence, the no-signaling requirement is met. Lastly, we use physical arguments to obtain expressions for the decoherence length of extended objects.
NASA Astrophysics Data System (ADS)
Song, Bongyong; Park, Justin C.; Song, William Y.
2014-11-01
The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires ‘at most one function evaluation’ in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a ‘smoothed TV’ or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT
Song, Bongyong; Park, Justin C; Song, William Y
2014-11-07
The Barzilai-Borwein (BB) 2-point step size gradient method is receiving attention for accelerating Total Variation (TV) based CBCT reconstructions. In order to become truly viable for clinical applications, however, its convergence property needs to be properly addressed. We propose a novel fast converging gradient projection BB method that requires 'at most one function evaluation' in each iterative step. This Selective Function Evaluation method, referred to as GPBB-SFE in this paper, exhibits the desired convergence property when it is combined with a 'smoothed TV' or any other differentiable prior. This way, the proposed GPBB-SFE algorithm offers fast and guaranteed convergence to the desired 3DCBCT image with minimal computational complexity. We first applied this algorithm to a Shepp-Logan numerical phantom. We then applied to a CatPhan 600 physical phantom (The Phantom Laboratory, Salem, NY) and a clinically-treated head-and-neck patient, both acquired from the TrueBeam™ system (Varian Medical Systems, Palo Alto, CA). Furthermore, we accelerated the reconstruction by implementing the algorithm on NVIDIA GTX 480 GPU card. We first compared GPBB-SFE with three recently proposed BB-based CBCT reconstruction methods available in the literature using Shepp-Logan numerical phantom with 40 projections. It is found that GPBB-SFE shows either faster convergence speed/time or superior convergence property compared to existing BB-based algorithms. With the CatPhan 600 physical phantom, the GPBB-SFE algorithm requires only 3 function evaluations in 30 iterations and reconstructs the standard, 364-projection FDK reconstruction quality image using only 60 projections. We then applied the algorithm to a clinically-treated head-and-neck patient. It was observed that the GPBB-SFE algorithm requires only 18 function evaluations in 30 iterations. Compared with the FDK algorithm with 364 projections, the GPBB-SFE algorithm produces visibly equivalent quality CBCT image for
Scalable parallel Newton-Krylov solvers for discontinuous Galerkin discretizations
Persson, P.-O.
2008-12-31
We present techniques for implicit solution of discontinuous Galerkin discretizations of the Navier-Stokes equations on parallel computers. While a block-Jacobi method is simple and straight-forward to parallelize, its convergence properties are poor except for simple problems. Therefore, we consider Newton-GMRES methods preconditioned with block-incomplete LU factorizations, with optimized element orderings based on a minimum discarded fill (MDF) approach. We discuss the difficulties with the parallelization of these methods, but also show that with a simple domain decomposition approach, most of the advantages of the block-ILU over the block-Jacobi preconditioner are still retained. The convergence is further improved by incorporating the matrix connectivities into the mesh partitioning process, which aims at minimizing the errors introduced from separating the partitions. We demonstrate the performance of the schemes for realistic two- and three-dimensional flow problems.
Recursive Newton-Euler formulation of manipulator dynamics
NASA Technical Reports Server (NTRS)
Nasser, M. G.
1989-01-01
A recursive Newton-Euler procedure is presented for the formulation and solution of manipulator dynamical equations. The procedure includes rotational and translational joints and a topological tree. This model was verified analytically using a planar two-link manipulator. Also, the model was tested numerically against the Walker-Orin model using the Shuttle Remote Manipulator System data. The hinge accelerations obtained from both models were identical. The computational requirements of the model vary linearly with the number of joints. The computational efficiency of this method exceeds that of Walker-Orin methods. This procedure may be viewed as a considerable generalization of Armstrong's method. A six-by-six formulation is adopted which enhances both the computational efficiency and simplicity of the model.
Newton Solver Stabilization for Stokes Solvers in Geodynamic Problems
NASA Astrophysics Data System (ADS)
Fraters, Menno; Bangerth, Wolfgang; Thieulot, Cedric; Spakman, Wim
2017-04-01
The most commonly used method by the geodynamical community for solving non-linear equations is the Picard fixed-point iteration. However, the Newton method has recently gained interest within this community because it formally leads to quadratic convergence close to the solution as compared to the global linear convergence of the Picard iteration. In mantle dynamics, a blend of pressure and strain-rate dependent visco-plastic rheologies is often used. While for power-law rheologies the Jacobian is guaranteed to be Symmetric Positive Definite (SPD), for more complex (compressible) rheologies, the Jacobian may become non-SPD. Here we present a new method for efficiently enforce the Jacobian to be SPD, necessary for our current highly efficient Stokes solvers, with a minimum loss in convergence rate. Furthermore, we show results for both incompressible and compressible models.