Convergence of multipoint Pade approximants of piecewise analytic functions
Buslaev, Viktor I
2013-02-28
The behaviour as n → ∞ of multipoint Padé approximants to a function which is (piecewise) holomorphic on a union of finitely many continua is investigated. The convergence of multipoint Padé approximants is proved for a function which extends holomorphically from these continua to a union of domains whose boundaries have a certain symmetry property. An analogue of Stahl's theorem is established for two-point Padé approximants to a pair of functions, each of which is a multivalued analytic function with finitely many branch points. Bibliography: 11 titles.
Unfolding the Second Riemann sheet with Pade Approximants: hunting resonance poles
Masjuan, Pere
2011-05-23
Based on Padé theory, a new procedure for extracting the pole mass and width of resonances is proposed. The method is systematic and provides a model-independent treatment of both the prediction and the errors of the approximation.
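The pole-hunting idea can be sketched with a classical one-point Padé approximant: build the [m/n] rational approximant from Taylor coefficients and read candidate poles off the roots of the denominator. The function f(z) = 1/(2 − z) below is a stand-in test case, not taken from the paper; the construction is the standard linear-system definition of the Padé table.

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant from Taylor coefficients c[0..m+n].
    Returns numerator and denominator coefficients in ascending powers."""
    c = np.asarray(c, dtype=float)
    # Denominator b (with b[0] = 1) from: sum_j b[j] c[m+k-j] = 0, k = 1..n
    A = np.array([[c[m + k - j] if 0 <= m + k - j else 0.0
                   for j in range(1, n + 1)] for k in range(1, n + 1)])
    rhs = -np.array([c[m + k] for k in range(1, n + 1)])
    b = np.concatenate(([1.0], np.linalg.solve(A, rhs)))
    # Numerator by matching the low-order terms of the product
    a = np.array([sum(b[j] * c[i - j] for j in range(min(i, n) + 1))
                  for i in range(m + 1)])
    return a, b

# f(z) = 1/(2 - z) has Taylor coefficients 2^-(k+1); its pole at z = 2
# is recovered exactly by the [1/1] approximant's denominator root.
c = [0.5 ** (k + 1) for k in range(3)]
a, b = pade(c, 1, 1)
poles = np.roots(b[::-1])
```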
Padé approximants and their application to scattering from fluid media.
Denis, Max; Tsui, Jing; Thompson, Charles; Chandra, Kavitha
2010-11-01
In this work, a numerical method for modeling the scattered acoustic pressure from fluid occlusions is described. The method is based on the asymptotic series expansion of the pressure expressed in terms of sound speed contrast between the host medium and entrained fluid occlusions. Padé approximants are used to extend the applicability of the result for larger values of sound speed contrast. For scattering from a circular cylinder, an improvement in convergence between the exact and numerical solutions is demonstrated. In the case of scattering from an inhomogeneous medium, a numerical solution with reduced order of Padé approximants is presented.
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (SUN VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST
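The scaled-Taylor idea behind STEM can be illustrated in a few lines: scale the matrix until its norm is at most one, sum a Taylor series for the exponential, then undo the scaling by repeated squaring. This is a generic sketch of scaling-and-squaring (as used, e.g., for the transition matrix exp(Qt) of a Markov model), not the NASA PAWS/STEM code.

```python
import numpy as np

def expm_taylor_scaled(A, terms=20):
    """Matrix exponential by scaling and squaring with a truncated Taylor
    series -- an illustrative sketch of the scaled-Taylor idea only."""
    A = np.asarray(A, dtype=float)
    norm = np.linalg.norm(A, 1)
    # Choose s so that ||A / 2^s||_1 <= 1, keeping the Taylor series accurate
    s = max(0, int(np.ceil(np.log2(norm)))) if norm > 0 else 0
    B = A / 2.0 ** s
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms + 1):
        term = term @ B / k
        E = E + term
    for _ in range(s):          # undo the scaling: exp(A) = exp(A/2^s)^(2^s)
        E = E @ E
    return E
```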
PAWS/STEM - PADE APPROXIMATION WITH SCALING AND SCALED TAYLOR EXPONENTIAL MATRIX (VAX VMS VERSION)
NASA Technical Reports Server (NTRS)
Butler, R. W.
1994-01-01
Traditional fault-tree techniques for analyzing the reliability of large, complex systems fail to model the dynamic reconfiguration capabilities of modern computer systems. Markov models, on the other hand, can describe fault-recovery (via system reconfiguration) as well as fault-occurrence. The Pade Approximation with Scaling (PAWS) and Scaled Taylor Exponential Matrix (STEM) programs provide a flexible, user-friendly, language-based interface for the creation and evaluation of Markov models describing the behavior of fault-tolerant reconfigurable computer systems. PAWS and STEM produce exact solutions for the probability of system failure and provide a conservative estimate of the number of significant digits in the solution. The calculation of the probability of entering a death state of a Markov model (representing system failure) requires the solution of a set of coupled differential equations. Because of the large disparity between the rates of fault arrivals and system recoveries, Markov models of fault-tolerant architectures inevitably lead to numerically stiff differential equations. Both PAWS and STEM have the capability to solve numerically stiff models. These complementary programs use separate methods to determine the matrix exponential in the solution of the model's system of differential equations. In general, PAWS is better suited to evaluate small and dense models. STEM operates at lower precision, but works faster than PAWS for larger models. The mathematical approach chosen to solve a reliability problem may vary with the size and nature of the problem. Although different solution techniques are utilized on different programs, it is possible to have a common input language. The Systems Validation Methods group at NASA Langley Research Center has created a set of programs that form the basis for a reliability analysis workstation. The set of programs are: SURE reliability analysis program (COSMIC program LAR-13789, LAR-14921); the ASSIST
NASA Technical Reports Server (NTRS)
Vepa, R.
1976-01-01
The general behavior of unsteady airloads in the frequency domain is explained. Based on this, a systematic procedure is described whereby the airloads, produced by completely arbitrary, small, time-dependent motions of a thin lifting surface in an airstream, can be predicted. This scheme employs as raw materials any of the unsteady linearized theories that have been mechanized for simple harmonic oscillations. Each desired aerodynamic transfer function is approximated by means of an appropriate Pade approximant, that is, a rational function of finite degree polynomials in the Laplace transform variable. Although these approximations have many uses, they are proving especially valuable in the design of automatic control systems intended to modify aeroelastic behavior.
A hybrid Pade-Galerkin technique for differential equations
NASA Technical Reports Server (NTRS)
Geer, James F.; Andersen, Carl M.
1993-01-01
A three-step hybrid analysis technique, which successively uses the regular perturbation expansion method, the Pade expansion method, and then a Galerkin approximation, is presented and applied to some model boundary value problems. In the first step of the method, the regular perturbation method is used to construct an approximation to the solution in the form of a finite power series in a small parameter epsilon associated with the problem. In the second step of the method, the series approximation obtained in step one is used to construct a Pade approximation in the form of a rational function in the parameter epsilon. In the third step, the various powers of epsilon which appear in the Pade approximation are replaced by new (unknown) parameters δ_j. These new parameters are determined by requiring that the residual formed by substituting the new approximation into the governing differential equation is orthogonal to each of the perturbation coordinate functions used in step one. The technique is applied to model problems involving ordinary or partial differential equations. In general, the technique appears to provide good approximations to the solution even when the perturbation and Pade approximations fail to do so. The method is discussed and topics for future investigations are indicated.
An analytic Pade-motivated QCD coupling
Martinez, H. E.; Cvetic, G.
2010-08-04
We consider a modification of the Minimal Analytic (MA) coupling of Shirkov and Solovtsov. This modified MA (mMA) coupling reflects the desired analytic properties of the space-like observables. We show that an approximation by Dirac deltas of its discontinuity function ρ is equivalent to a Padé (rational) approximation of the mMA coupling that keeps its analytic structure. We propose a modification of mMA that, as preliminary results indicate, could improve the evaluation of low-energy observables compared with other analytic couplings.
Sokolovski, D.; Msezane, A.Z.
2004-09-01
A semiclassical complex angular momentum theory, used to analyze atom-diatom reactive angular distributions, is applied to several well-known potential (one-particle) problems. Examples include resonance scattering, rainbow scattering, and the Eckart threshold model. Pade reconstruction of the corresponding matrix elements from the values at physical (integral) angular momenta and properties of the Pade approximants are discussed in detail.
Lin, Ying-Tsong; Collis, Jon M; Duda, Timothy F
2012-11-01
An alternating direction implicit (ADI) three-dimensional fluid parabolic equation solution method with enhanced accuracy is presented. The method uses a square-root Helmholtz operator splitting algorithm that retains cross-multiplied operator terms that have been previously neglected. With these higher-order cross terms, the valid angular range of the parabolic equation solution is improved. The method is tested for accuracy against an image solution in an idealized wedge problem. Computational efficiency improvements resulting from the ADI discretization are also discussed.
Potential of the approximation method
Amano, K.; Maruoka, A.
1996-12-31
Developing some techniques for the approximation method, we establish precise versions of the following statements concerning lower bounds for circuits that detect cliques of size s in a graph with m vertices. For 5 ≤ s ≤ m/4, a monotone circuit computing CLIQUE(m, s) contains at least (1/2)·1.8^min(√s−1/2, m/(4s)) gates; if a non-monotone circuit computes CLIQUE using a "small" amount of negation, then the circuit contains an exponential number of gates. The former is proved very simply using the so-called bottleneck counting argument within the framework of approximation, whereas the latter is verified by introducing a notion of restricting negation and generalizing the sunflower contraction.
Approximate methods for equations of incompressible fluid
NASA Astrophysics Data System (ADS)
Galkin, V. A.; Dubovik, A. O.; Epifanov, A. A.
2017-02-01
Approximate methods based on sequential approximations in the theory of functional solutions to systems of conservation laws are considered, including a model of the dynamics of an incompressible fluid. Test calculations are performed, and a comparison with exact solutions is carried out.
Approximate error conjugation gradient minimization methods
Kallman, Jeffrey S
2013-05-21
In one embodiment, a method includes selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, calculating an approximate error using the subset of rays, and calculating a minimum in a conjugate gradient direction based on the approximate error. In another embodiment, a system includes a processor for executing logic, logic for selecting a subset of rays from a set of all rays to use in an error calculation for a constrained conjugate gradient minimization problem, logic for calculating an approximate error using the subset of rays, and logic for calculating a minimum in a conjugate gradient direction based on the approximate error. In other embodiments, computer program products, methods, and systems are described capable of using approximate error in constrained conjugate gradient minimization problems.
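The subset-of-rays idea can be sketched for a least-squares objective: take a descent step whose gradient ("error") and exact line minimization use only a sampled subset of rows. This is an illustrative sketch of the general idea in the abstract, not the patented algorithm; the steepest-descent direction stands in for a full conjugate gradient update.

```python
import numpy as np

def subset_cg_step(A, b, x, rows):
    """One descent step for min ||Ax - b||^2 in which the error and line
    search are evaluated only on a subset of rows ("rays")."""
    As, bs = A[rows], b[rows]
    g = As.T @ (As @ x - bs)        # approximate gradient from the subset
    d = -g                          # descent direction
    Ad = As @ d
    alpha = (g @ g) / (Ad @ Ad)     # exact minimizer along d for the subset objective
    return x + alpha * d
```

With the full row set, each step is a standard exact line search and the residual is guaranteed to decrease; with random subsets, one trades per-step accuracy for a cheaper error calculation.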
Differential Equations, Related Problems of Pade Approximations and Computer Applications
1988-01-01
geometric sense, like the Picard-Fuchs equations satisfied by the variation of periods, possess strong arithmetic properties (global nilpotence ...result, and the (G, C)-function conditions, one needs the definition of the p-curvature. We consider a system of matrix first order linear differential...the system (1.1) in the matrix form df/dx = A·f; A ∈ M(Q(x)), one can introduce the p-curvature operators I_p associated with the system (1.1). The
Approximation methods in gravitational-radiation theory
NASA Astrophysics Data System (ADS)
Will, C. M.
1986-02-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. The author summarizes recent developments in two areas in which approximations are important: (1) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (2) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Approximation methods in gravitational-radiation theory
NASA Technical Reports Server (NTRS)
Will, C. M.
1986-01-01
The observation of gravitational-radiation damping in the binary pulsar PSR 1913+16 and the ongoing experimental search for gravitational waves of extraterrestrial origin have made the theory of gravitational radiation an active branch of classical general relativity. In calculations of gravitational radiation, approximation methods play a crucial role. Recent developments are summarized in two areas in which approximations are important: (a) the quadrupole approximation, which determines the energy flux and the radiation reaction forces in weak-field, slow-motion, source-within-the-near-zone systems such as the binary pulsar; and (b) the normal modes of oscillation of black holes, where the Wentzel-Kramers-Brillouin approximation gives accurate estimates of the complex frequencies of the modes.
Variational Bayesian Approximation methods for inverse problems
NASA Astrophysics Data System (ADS)
Mohammad-Djafari, Ali
2012-09-01
Variational Bayesian Approximation (VBA) methods are recent tools for effective Bayesian computation. In this paper, these tools are used for inverse problems where the prior models include hidden variables and where the estimation of the hyperparameters also has to be addressed. In particular, two specific prior models (Student-t and mixture-of-Gaussians models) are considered and details of the algorithms are given.
An approximate projection method for incompressible flow
NASA Astrophysics Data System (ADS)
Stevens, David E.; Chan, Stevens T.; Gresho, Phil
2002-12-01
This paper presents an approximate projection method for incompressible flows. This method is derived from Galerkin orthogonality conditions using equal-order piecewise linear elements for both velocity and pressure, hereafter Q1Q1. By combining an approximate projection for the velocities with a variational discretization of the continuum pressure Poisson equation, one eliminates the need to filter either the velocity or pressure fields as is often needed with equal-order element formulations. This variational approach extends to multiple types of elements; examples and results for triangular and quadrilateral elements are provided. This method is related to the method of Almgren et al. (SIAM J. Sci. Comput. 2000; 22: 1139-1159) and the PISO method of Issa (J. Comput. Phys. 1985; 62: 40-65). These methods use a combination of two elliptic solves, one to reduce the divergence of the velocities and another to approximate the pressure Poisson equation. Both Q1Q1 and the method of Almgren et al. solve the second Poisson equation with a weak error tolerance to achieve more computational efficiency. A Fourier analysis of Q1Q1 shows that a consistent mass matrix has a positive effect on both accuracy and mass conservation. A numerical comparison with the widely used Q1Q0 (piecewise linear velocities, piecewise constant pressures) on a periodic test case with an analytic solution verifies this analysis. Q1Q1 is shown to have accuracy comparable to Q1Q0 and good agreement with experiment for flow over an isolated cubic obstacle and dispersion of a point source in its wake.
Finite difference methods for approximating Heaviside functions
NASA Astrophysics Data System (ADS)
Towers, John D.
2009-05-01
We present a finite difference method for discretizing a Heaviside function H(u(x)), where u is a level set function u: Rⁿ → R that is positive on a bounded region Ω ⊂ Rⁿ. There are two variants of our algorithm, both of which are adapted from finite difference methods that we proposed for discretizing delta functions in [J.D. Towers, Two methods for discretizing a delta function supported on a level set, J. Comput. Phys. 220 (2007) 915-931; J.D. Towers, Discretizing delta functions via finite differences and gradient normalization, Preprint at http://www.miracosta.edu/home/jtowers/; J.D. Towers, A convergence rate theorem for finite difference approximations to delta functions, J. Comput. Phys. 227 (2008) 6591-6597]. We consider our approximate Heaviside functions as they are used to approximate integrals over Ω. We prove that our first approximate Heaviside function leads to second order accurate quadrature algorithms. Numerical experiments verify this second order accuracy. For our second algorithm, numerical experiments indicate at least third order accuracy if the integrand f and ∂Ω are sufficiently smooth. Numerical experiments also indicate that our approximations are effective when used to discretize certain singular source terms in partial differential equations. We mostly focus on smooth f and u. By this we mean that f is smooth in a neighborhood of Ω, u is smooth in a neighborhood of ∂Ω, and the level set u(x) = 0 is a manifold of codimension one. However, our algorithms still give reasonable results if either f or u has jumps in its derivatives. Numerical experiments indicate approximately second order accuracy for both algorithms if the regularity of the data is reduced in this way, assuming that the level set u(x) = 0 is a manifold. Numerical experiments indicate that dependence on the placement of Ω with respect to the grid is quite small for our algorithms. Specifically, a grid shift results in an O(h^p) change in the computed solution
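The quadrature use of an approximate Heaviside function can be demonstrated with the classic smoothed Heaviside of Sussman, Smereka, and Osher, a simpler stand-in for the finite-difference constructions of the paper: summing H(u) over a grid approximates the area of the region where the level set function u is positive.

```python
import numpy as np

def smoothed_heaviside(u, eps):
    """Smoothed Heaviside: 0 below -eps, 1 above eps, and a smooth
    sine-corrected ramp in between (Sussman-Smereka-Osher form)."""
    H = np.where(u > eps, 1.0, 0.0)
    mid = np.abs(u) <= eps
    ramp = 0.5 * (1.0 + u / eps + np.sin(np.pi * u / eps) / np.pi)
    return np.where(mid, ramp, H)

# Approximate the area of the unit disk: u(x, y) = 1 - sqrt(x^2 + y^2)
# is positive exactly on the disk, so sum H(u) * h^2 over the grid.
h = 0.01
x = np.arange(-1.5, 1.5, h)
X, Y = np.meshgrid(x, x)
u = 1.0 - np.sqrt(X ** 2 + Y ** 2)
area = smoothed_heaviside(u, 1.5 * h).sum() * h * h
```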
Approximation methods for stochastic petri nets
NASA Technical Reports Server (NTRS)
Jungnitz, Hauke Joerg
1992-01-01
Stochastic Marked Graphs are a concurrent decision free formalism provided with a powerful synchronization mechanism generalizing conventional Fork Join Queueing Networks. In some particular cases the analysis of the throughput can be done analytically. Otherwise the analysis suffers from the classical state explosion problem. Embedded in the divide and conquer paradigm, approximation techniques are introduced for the analysis of stochastic marked graphs and Macroplace/Macrotransition-nets (MPMT-nets), a new subclass introduced herein. MPMT-nets are a subclass of Petri nets that allow limited choice, concurrency and sharing of resources. The modeling power of MPMT-nets is much larger than that of marked graphs; e.g., MPMT-nets can model manufacturing flow lines with unreliable machines and dataflow graphs where choice and synchronization occur. The basic idea leads to the notion of a cut to split the original net system into two subnets. The cuts lead to two aggregated net systems where one of the subnets is reduced to a single transition. A further reduction leads to a basic skeleton. The generalization of the idea leads to multiple cuts, where single cuts can be applied recursively leading to a hierarchical decomposition. Based on the decomposition, a response time approximation technique for the performance analysis is introduced. Also, delay equivalence, which was previously introduced in the context of marked graphs by Woodside et al., Marie's method, and flow equivalent aggregation are applied to the aggregated net systems. The experimental results show that response time approximation converges quickly and shows reasonable accuracy in most cases. The convergence of Marie's method is slower, but the accuracy is generally better.
Hu, Jie; Luo, Meng; Jiang, Feng; Xu, Rui-Xue; Yan, Yijing
2011-06-28
Padé spectrum decomposition is an optimal sum-over-poles expansion scheme of Fermi function and Bose function [J. Hu, R. X. Xu, and Y. J. Yan, J. Chem. Phys. 133, 101106 (2010)]. In this work, we report two additional members to this family, from which the best among all sum-over-poles methods could be chosen for different cases of application. Methods are developed for determining these three Padé spectrum decomposition expansions at machine precision via simple algorithms. We exemplify the applications of present development with optimal construction of hierarchical equations-of-motion formulations for nonperturbative quantum dissipation and quantum transport dynamics. Numerical demonstrations are given for two systems. One is the transient transport current to an interacting quantum-dots system, together with the involved high-order co-tunneling dynamics. Another is the non-Markovian dynamics of a spin-boson system.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
A method to efficiently and accurately approximate the effect of design changes on structural response is described. The key to this method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
A new approximation method for stress constraints in structural synthesis
NASA Technical Reports Server (NTRS)
Vanderplaats, Garret N.; Salajegheh, Eysa
1987-01-01
A new approximation method for dealing with stress constraints in structural synthesis is presented. The finite element nodal forces are approximated and these are used to create an explicit, but often nonlinear, approximation to the original problem. The principal motivation is to create the best approximation possible, in order to reduce the number of detailed finite element analyses needed to reach the optimum. Examples are offered and compared with published results, to demonstrate the efficiency and reliability of the proposed method.
Differential equation based method for accurate approximations in optimization
NASA Technical Reports Server (NTRS)
Pritchard, Jocelyn I.; Adelman, Howard M.
1990-01-01
This paper describes a method to efficiently and accurately approximate the effect of design changes on structural response. The key to this new method is to interpret sensitivity equations as differential equations that may be solved explicitly for closed form approximations, hence, the method is denoted the Differential Equation Based (DEB) method. Approximations were developed for vibration frequencies, mode shapes and static displacements. The DEB approximation method was applied to a cantilever beam and results compared with the commonly-used linear Taylor series approximations and exact solutions. The test calculations involved perturbing the height, width, cross-sectional area, tip mass, and bending inertia of the beam. The DEB method proved to be very accurate, and in most cases, was more accurate than the linear Taylor series approximation. The method is applicable to simultaneous perturbation of several design variables. Also, the approximations may be used to calculate other system response quantities. For example, the approximations for displacements are used to approximate bending stresses.
Sensitivity analysis and approximation methods for general eigenvalue problems
NASA Technical Reports Server (NTRS)
Murthy, D. V.; Haftka, R. T.
1986-01-01
Optimization of dynamic systems involving complex non-Hermitian matrices is often computationally expensive. Major contributors to the computational expense are the sensitivity analysis and reanalysis of a modified design. The present work seeks to alleviate this computational burden by identifying efficient sensitivity analysis and approximate reanalysis methods. For the algebraic eigenvalue problem involving non-Hermitian matrices, algorithms for sensitivity analysis and approximate reanalysis are classified, compared and evaluated for efficiency and accuracy. Proper eigenvector normalization is discussed. An improved method for calculating derivatives of eigenvectors is proposed based on a more rational normalization condition and taking advantage of matrix sparsity. Important numerical aspects of this method are also discussed. To alleviate the problem of reanalysis, various approximation methods for eigenvalues are proposed and evaluated. Linear and quadratic approximations are based directly on the Taylor series. Several approximation methods are developed based on the generalized Rayleigh quotient for the eigenvalue problem. Approximation methods based on the trace theorem give high accuracy without needing any derivatives. Operation counts for the computation of the approximations are given. General recommendations are made for the selection of an appropriate approximation technique as a function of the matrix size, number of design variables, number of eigenvalues of interest and the number of design points at which approximation is sought.
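The generalized-Rayleigh-quotient style of reanalysis can be sketched in a few lines: given left and right eigenvectors of the nominal matrix, an eigenvalue of a perturbed matrix is approximated without a new eigensolve. The random 6×6 matrix and perturbation size are illustrative assumptions, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))          # nominal (non-Hermitian) matrix
dA = 0.01 * rng.standard_normal((6, 6))  # small design perturbation

# Right eigenpairs of A; left eigenvectors obtained as eigenvectors of A^T
w, V = np.linalg.eig(A)
wl, U = np.linalg.eig(A.T)
i = int(np.argmax(w.real))
j = int(np.argmin(np.abs(wl - w[i])))    # pair the matching left eigenvector
x, y = V[:, i], U[:, j]

# Generalized Rayleigh quotient: reanalysis without a new eigensolve,
# exact to first order in the perturbation dA
lam_approx = (y @ (A + dA) @ x) / (y @ x)

# Exact eigenvalue of the perturbed matrix, for comparison
w2 = np.linalg.eig(A + dA)[0]
lam_exact = w2[int(np.argmin(np.abs(w2 - lam_approx)))]
```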
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M.; Tuma, M.
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
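The role of an approximate inverse as a preconditioner can be illustrated with a truncated Neumann series; this is a generic stand-in for the idea, not the factorized incomplete-inverse (AINV) algorithm of Benzi and Tůma. Since M·A = I − N^(k+1) for the series below, a simple preconditioned iteration converges rapidly on diagonally dominant systems.

```python
import numpy as np

def neumann_approx_inverse(A, k=3):
    """Truncated Neumann-series approximate inverse:
    M = (I + N + ... + N^k) D^{-1}, where A = D(I - N) with D = diag(A)."""
    D_inv = np.diag(1.0 / np.diag(A))
    N = np.eye(A.shape[0]) - D_inv @ A
    M, P = D_inv.copy(), D_inv.copy()
    for _ in range(k):
        P = N @ P
        M = M + P
    return M
```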
An approximation method for configuration optimization of trusses
NASA Technical Reports Server (NTRS)
Hansen, Scott R.; Vanderplaats, Garret N.
1988-01-01
Two- and three-dimensional elastic trusses are designed for minimum weight by varying the areas of the members and the location of the joints. Constraints on member stresses and Euler buckling are imposed and multiple static loading conditions are considered. The method presented here utilizes an approximate structural analysis based on first order Taylor series expansions of the member forces. A numerical optimizer minimizes the weight of the truss using information from the approximate structural analysis. Comparisons with results from other methods are made. It is shown that the method of forming an approximate structural analysis based on linearized member forces leads to a highly efficient method of truss configuration optimization.
Discontinuous Galerkin method based on non-polynomial approximation spaces
Yuan, Ling (E-mail: lyuan@dam.brown.edu); Shu, Chi-Wang (E-mail: shu@dam.brown.edu)
2006-10-10
In this paper, we develop discontinuous Galerkin (DG) methods based on non-polynomial approximation spaces for numerically solving time-dependent hyperbolic and parabolic and steady-state hyperbolic and elliptic partial differential equations (PDEs). The algorithm is based on approximation spaces consisting of non-polynomial elementary functions such as exponential functions, trigonometric functions, etc., with the objective of obtaining better approximations for specific types of PDEs and initial and boundary conditions. It is shown that L² stability and error estimates can be obtained when the approximation space is suitably selected. It is also shown with numerical examples that a careful selection of the approximation space to fit individual PDEs and initial and boundary conditions often provides more accurate results than DG methods based on polynomial approximation spaces of the same order of accuracy.
Mapping biological entities using the longest approximately common prefix method
2014-01-01
Background: The significant growth in the volume of electronic biomedical data in recent decades has pointed to the need for approximate string matching algorithms that can expedite tasks such as named entity recognition, duplicate detection, terminology integration, and spelling correction. The task of source integration in the Unified Medical Language System (UMLS) requires considerable expert effort despite the presence of various computational tools. This problem warrants the search for a new method for approximate string matching and its UMLS-based evaluation. Results: This paper introduces the Longest Approximately Common Prefix (LACP) method as an algorithm for approximate string matching that runs in linear time. We compare the LACP method for performance, precision and speed to nine other well-known string matching algorithms. As test data, we use two multiple-source samples from the UMLS and two SNOMED Clinical Terms-based samples. In addition, we present a spell checker based on the LACP method. Conclusions: The LACP method completes its string similarity evaluations in less time than all nine string similarity methods used for comparison. It outperforms these nine approximate string matching methods in its maximum F1 measure when evaluated on three of the four datasets, and in its average precision on two of the four datasets. PMID: 24928653
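The published LACP algorithm is not reproduced here; the sketch below is one plausible linear-time reading of "longest approximately common prefix": extend the shared prefix while the running mismatch rate stays under a threshold, then normalize by the longer string length. The threshold, the mismatch rule, and the scoring are assumptions for illustration only.

```python
def lacp_similarity(s, t, max_mismatch_frac=0.2):
    """Hypothetical sketch of an approximately-common-prefix score.
    Walks both strings once (linear time), extending the prefix while the
    mismatch rate stays below max_mismatch_frac; the published LACP
    algorithm may differ in detail."""
    mismatches = 0
    length = 0
    for i, (a, b) in enumerate(zip(s, t), start=1):
        if a != b:
            mismatches += 1
        if mismatches > max_mismatch_frac * i:
            break
        length = i
    return length / max(len(s), len(t), 1)
```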
A simple approximation method for obtaining the spanwise lift distribution
NASA Technical Reports Server (NTRS)
Schrenk, O
1940-01-01
The approximation method described makes possible lift-distribution computations in a few minutes. Comparison with an exact method shows satisfactory agreement. The method is of greater applicability than the exact method and includes also the important case of the wing with end plates.
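Schrenk's rule is simple enough to state in code: the spanwise lift (circulation) distribution is approximated by the mean of the planform chord distribution and an elliptical distribution enclosing the same area. The tapered-wing numbers in the test are illustrative; normalization by wing area and dynamic pressure is omitted.

```python
import numpy as np

def schrenk_lift_distribution(y, chord, half_span):
    """Schrenk's approximation over a half span: average the planform chord
    distribution with an equal-area elliptical distribution."""
    # Planform half-area by the trapezoidal rule
    area = float(((chord[:-1] + chord[1:]) / 2.0 * np.diff(y)).sum())
    # Elliptical distribution with the same area over the half span
    c_ell = (4.0 * area / (np.pi * half_span)) * np.sqrt(1.0 - (y / half_span) ** 2)
    return 0.5 * (chord + c_ell)
```

By construction the Schrenk distribution preserves the planform area, since both averaged distributions enclose the same area.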
An approximation method for fractional integro-differential equations
NASA Astrophysics Data System (ADS)
Emiroglu, Ibrahim
2015-12-01
In this work, an approximation method is proposed for fractional-order linear Fredholm-type integro-differential equations with boundary conditions. The Sinc collocation method is applied, and its efficiency and strength are discussed on some special examples. The results of the proposed method are compared with the available analytic solutions.
Dual methods and approximation concepts in structural synthesis
NASA Technical Reports Server (NTRS)
Fleury, C.; Schmit, L. A., Jr.
1980-01-01
Approximation concepts and dual method algorithms are combined to create a method for minimum weight design of structural systems. Approximation concepts convert the basic mathematical programming statement of the structural synthesis problem into a sequence of explicit primal problems of separable form. These problems are solved by constructing explicit dual functions, which are maximized subject to nonnegativity constraints on the dual variables. It is shown that the joining together of approximation concepts and dual methods can be viewed as a generalized optimality criteria approach. The dual method is successfully extended to deal with pure discrete and mixed continuous-discrete design variable problems. The power of the method presented is illustrated with numerical results for example problems, including a metallic swept wing and a thin delta wing with fiber composite skins.
Improved stochastic approximation methods for discretized parabolic partial differential equations
NASA Astrophysics Data System (ADS)
Guiaş, Flavius
2016-12-01
We present improvements of the stochastic direct simulation method, a known numerical scheme based on Markov jump processes which is used for approximating solutions of ordinary differential equations. This scheme is suited especially for spatial discretizations of evolution partial differential equations (PDEs). By exploiting the full path simulation of the stochastic method, we use this first approximation as a predictor and construct improved approximations by Picard iterations, Runge-Kutta steps, or a combination. This has as consequence an increased order of convergence. We illustrate the features of the improved method at a standard benchmark problem, a reaction-diffusion equation modeling a combustion process in one space dimension (1D) and two space dimensions (2D).
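The predictor-corrector idea can be illustrated on a plain ODE: an Euler-type predictor followed by one Picard (trapezoidal) correction raises the order of convergence from one to two. A minimal sketch, assuming the scalar test problem y' = -y rather than a stochastic path or a discretized PDE:

```python
import math

def euler_picard(f, y0, t_end, n):
    """Euler predictor + one Picard (trapezoidal) correction per step.
    The correction lifts the first-order predictor to second order."""
    h = t_end / n
    y = y0
    for _ in range(n):
        y_pred = y + h * f(y)                  # predictor (order 1)
        y = y + 0.5 * h * (f(y) + f(y_pred))   # Picard/trapezoid corrector (order 2)
    return y

f = lambda y: -y
exact = math.exp(-1.0)
err = abs(euler_picard(f, 1.0, 1.0, 100) - exact)
```

With 100 steps the corrected scheme is accurate to roughly h^2, far better than the O(h) predictor alone.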
Multi-level methods and approximating distribution functions
NASA Astrophysics Data System (ADS)
Wilson, D.; Baker, R. E.
2016-07-01
Biochemical reaction networks are often modelled using discrete-state, continuous-time Markov chains. System statistics of these Markov chains usually cannot be calculated analytically and therefore estimates must be generated via simulation techniques. There is a well documented class of simulation techniques known as exact stochastic simulation algorithms, an example of which is Gillespie's direct method. These algorithms often come with high computational costs, therefore approximate stochastic simulation algorithms such as the tau-leap method are used. However, in order to minimise the bias in the estimates generated using them, a relatively small value of tau is needed, rendering the computational costs comparable to Gillespie's direct method. The multi-level Monte Carlo method (Anderson and Higham, Multiscale Model. Simul. 10:146-179, 2012) provides a reduction in computational costs whilst minimising or even eliminating the bias in the estimates of system statistics. This is achieved by first crudely approximating required statistics with many sample paths of low accuracy. Then correction terms are added until a required level of accuracy is reached. Recent literature has primarily focussed on implementing the multi-level method efficiently to estimate a single system statistic. However, it is clearly also of interest to be able to approximate entire probability distributions of species counts. We present two novel methods that combine known techniques for distribution reconstruction with the multi-level method. We demonstrate the potential of our methods using a number of examples.
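The multilevel telescoping sum E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}] is easiest to sketch in its SDE form (Giles-style, with shared Brownian increments coupling coarse and fine paths) rather than the tau-leap chemical-kinetics setting discussed above; the geometric-Brownian-motion example below is an illustrative assumption, not the paper's scheme:

```python
import numpy as np

def mlmc_mean(levels, n_samples, T=1.0, mu=0.05, sigma=0.2, x0=1.0, rng=None):
    """Multilevel Monte Carlo estimate of E[X_T] for dX = mu X dt + sigma X dW
    using Euler-Maruyama. Level l uses 2**l steps; coarse and fine paths on a
    level share the same Brownian increments, so the corrections E[P_l - P_{l-1}]
    have small variance and need few samples."""
    rng = rng or np.random.default_rng(0)
    total = 0.0
    for l, n in zip(levels, n_samples):
        nf = 2 ** l
        dt = T / nf
        dW = rng.normal(0.0, np.sqrt(dt), size=(n, nf))
        xf = np.full(n, x0)
        for i in range(nf):                       # fine path
            xf = xf + mu * xf * dt + sigma * xf * dW[:, i]
        if l == 0:
            total += xf.mean()                    # crude base estimate
        else:
            xc = np.full(n, x0)
            dtc = T / (nf // 2)
            dWc = dW[:, 0::2] + dW[:, 1::2]       # coarse increments = summed fine ones
            for i in range(nf // 2):              # coupled coarse path
                xc = xc + mu * xc * dtc + sigma * xc * dWc[:, i]
            total += (xf - xc).mean()             # correction term E[P_l - P_{l-1}]
    return total

est = mlmc_mean(levels=[0, 1, 2, 3], n_samples=[100000, 20000, 5000, 2000])
```

The sample counts decrease with level exactly as the abstract describes: many cheap low-accuracy paths, then progressively fewer paths for the correction terms. (True value here: E[X_T] = x0 * exp(mu * T).)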
Approximate Newton-type methods via theory of control
NASA Astrophysics Data System (ADS)
Yap, Chui Ying; Leong, Wah June
2014-12-01
In this paper, we investigate the possible use of control theory, particularly theory on optimal control to derive some numerical methods for unconstrained optimization problems. Based upon this control theory, we derive a Levenberg-Marquardt-like method that guarantees greatest descent in a particular search region. The implementation of this method in its original form requires inversion of a non-sparse matrix or equivalently solving a linear system in every iteration. Thus, an approximation of the proposed method via quasi-Newton update is constructed. Numerical results indicate that the new method is more effective and practical.
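A Levenberg-Marquardt step solves the damped normal equations (J^T J + lambda I) p = -J^T r, which is the per-iteration linear solve the abstract refers to. A minimal sketch on a hypothetical exponential-fit problem (the test problem and the damping schedule are assumptions for illustration, not the authors' control-theoretic derivation):

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, iters=50):
    """Minimal Levenberg-Marquardt: each step solves the damped normal
    equations (J^T J + lam I) p = -J^T r; lam shrinks on success and grows
    on failure, interpolating between Gauss-Newton and gradient descent."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        p = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual(x + p) ** 2) < np.sum(r ** 2):
            x, lam = x + p, lam * 0.5              # accept step, relax damping
        else:
            lam *= 2.0                             # reject step, damp harder
    return x

# fit y = a * exp(b * t) to noise-free data generated with a = 2, b = -1
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-t)
residual = lambda x: x[0] * np.exp(x[1] * t) - y
jacobian = lambda x: np.column_stack([np.exp(x[1] * t),
                                      x[0] * t * np.exp(x[1] * t)])
fit = levenberg_marquardt(residual, jacobian, x0=[1.0, 0.0])
```

The quasi-Newton variant mentioned in the abstract would replace the explicit J^T J solve with a low-rank update to avoid factoring a dense matrix each iteration.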
Calculating Resonance Positions and Widths Using the Siegert Approximation Method
ERIC Educational Resources Information Center
Rapedius, Kevin
2011-01-01
Here, we present complex resonance states (or Siegert states) that describe the tunnelling decay of a trapped quantum particle from an intuitive point of view that naturally leads to the easily applicable Siegert approximation method. This can be used for analytical and numerical calculations of complex resonances of both the linear and nonlinear…
Methods to approximate reliabilities in single-step genomic evaluation
Technology Transfer Automated Retrieval System (TEKTRAN)
Reliability of predictions from single-step genomic BLUP (ssGBLUP) can be calculated by inversion, but that is not feasible for large data sets. Two methods of approximating reliability were developed based on decomposition of a function of reliability into contributions from records, pedigrees, and...
Using Propensity Score Methods to Approximate Factorial Experimental Designs
ERIC Educational Resources Information Center
Dong, Nianbo
2011-01-01
The purpose of this study is through Monte Carlo simulation to compare several propensity score methods in approximating factorial experimental design and identify best approaches in reducing bias and mean square error of parameter estimates of the main and interaction effects of two factors. Previous studies focused more on unbiased estimates of…
An approximate method for calculating aircraft downwash on parachute trajectories
Strickland, J.H.
1989-01-01
An approximate method for calculating velocities induced by aircraft on parachute trajectories is presented herein. A simple system of quadrilateral vortex panels is used to model the aircraft wing and its wake. The purpose of this work is to provide a simple analytical tool which can be used to approximate the effect of aircraft-induced velocities on parachute performance. Performance issues such as turnover and wake recontact may be strongly influenced by velocities induced by the wake of the delivering aircraft, especially if the aircraft is maneuvering at the time of parachute deployment. 7 refs., 9 figs.
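The building block of such a quadrilateral vortex-panel model is the Biot-Savart law for a finite straight filament. A sketch of that single-segment kernel only (not the full panel code; the cutoff parameter is an assumption to avoid the on-axis singularity):

```python
import numpy as np

def segment_induced_velocity(gamma, a, b, p, eps=1e-9):
    """Velocity induced at point p by a straight vortex filament from a to b
    with circulation gamma (Biot-Savart law for a finite segment)."""
    r1, r2 = p - a, p - b
    cross = np.cross(r1, r2)
    denom = np.dot(cross, cross)
    if denom < eps:                     # p lies (nearly) on the filament axis
        return np.zeros(3)
    r0 = b - a
    k = np.dot(r0, r1 / np.linalg.norm(r1) - r2 / np.linalg.norm(r2))
    return gamma / (4.0 * np.pi) * cross / denom * k

# sanity check: a very long segment approximates an infinite line vortex,
# whose induced speed at distance d is gamma / (2 pi d)
a = np.array([0.0, -1e4, 0.0])
b = np.array([0.0, 1e4, 0.0])
p = np.array([1.0, 0.0, 0.0])
v = segment_induced_velocity(1.0, a, b, p)
```

Summing this kernel over the four edges of each quadrilateral panel gives the panel's induced velocity at a trajectory point.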
Approximate method of designing a two-element airfoil
NASA Astrophysics Data System (ADS)
Abzalilov, D. F.; Mardanov, R. F.
2011-09-01
An approximate method is proposed for designing a two-element airfoil. The method is based on reducing an inverse boundary-value problem in a doubly connected domain to a problem in a singly connected domain located on a multisheet Riemann surface. The essence of the method is replacement of channels between the airfoil elements by channels of flow suction and blowing. The shape of these channels asymptotically tends to the annular shape of channels passing to infinity on the second sheet of the Riemann surface. The proposed method can be extended to designing multielement airfoils.
Source Localization using Stochastic Approximation and Least Squares Methods
Sahyoun, Samir S.; Djouadi, Seddik M.; Qi, Hairong; Drira, Anis
2009-03-05
This paper presents two approaches to locating the source of a chemical plume: nonlinear least squares and stochastic approximation (SA) algorithms. Concentration levels of the chemical measured by special sensors are used to locate this source. The nonlinear least squares technique is applied at different noise levels and compared with localization using SA. For noise-corrupted data collected from a distributed set of chemical sensors, we show that SA methods are more efficient than the least squares method. SA methods are often better at coping with noisy input information than other search methods.
Parallel iterative solvers and preconditioners using approximate hierarchical methods
Grama, A.; Kumar, V.; Sameh, A.
1996-12-31
In this paper, we report results on the performance, convergence, and accuracy of a parallel GMRES solver for boundary element methods. The solver uses a hierarchical approximate matrix-vector product based on a hybrid Barnes-Hut/Fast Multipole Method. We study the impact of various accuracy parameters on the convergence and show that with minimal loss in accuracy, our solver yields significant speedups. We demonstrate the excellent parallel efficiency and scalability of our solver. The combined speedups from approximation and parallelism represent an improvement of several orders of magnitude in solution time. We also develop fast and parallelizable preconditioners for this problem. We report on the performance of an inner-outer scheme and a preconditioner based on the truncated Green's function. Experimental results on a 256-processor Cray T3D are presented.
A multiscale two-point flux-approximation method
Møyner, Olav Lie, Knut-Andreas
2014-10-15
A large number of multiscale finite-volume methods have been developed over the past decade to compute conservative approximations to multiphase flow problems in heterogeneous porous media. In particular, several iterative and algebraic multiscale frameworks that seek to reduce the fine-scale residual towards machine precision have been presented. Common for all such methods is that they rely on a compatible primal–dual coarse partition, which makes it challenging to extend them to stratigraphic and unstructured grids. Herein, we propose a general idea for how one can formulate multiscale finite-volume methods using only a primal coarse partition. To this end, we use two key ingredients that are computed numerically: (i) elementary functions that correspond to flow solutions used in transmissibility upscaling, and (ii) partition-of-unity functions used to combine elementary functions into basis functions. We exemplify the idea by deriving a multiscale two-point flux-approximation (MsTPFA) method, which is robust with regards to strong heterogeneities in the permeability field and can easily handle general grids with unstructured fine- and coarse-scale connections. The method can easily be adapted to arbitrary levels of coarsening, and can be used both as a standalone solver and as a preconditioner. Several numerical experiments are presented to demonstrate that the MsTPFA method can be used to solve elliptic pressure problems on a wide variety of geological models in a robust and efficient manner.
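The two-point flux approximation underlying MsTPFA is simplest in one dimension, where each interior face transmissibility is the harmonic average of the neighbouring cell permeabilities. A minimal single-scale 1D sketch (not the multiscale method itself; grid and boundary handling are illustrative choices):

```python
import numpy as np

def tpfa_1d(perm, dx, p_left, p_right):
    """Two-point flux approximation for -d/dx(K dp/dx) = 0 on a uniform 1D
    grid with Dirichlet ends. Interface transmissibility is the harmonic
    average of the neighbouring cell permeabilities (the classic TPFA choice)."""
    n = len(perm)
    # interior faces: harmonic average of the two half-cell transmissibilities
    t_face = np.array([2.0 * perm[i] * perm[i + 1] /
                       ((perm[i] + perm[i + 1]) * dx) for i in range(n - 1)])
    t_left = 2.0 * perm[0] / dx          # boundary faces: half-cell distance
    t_right = 2.0 * perm[-1] / dx
    A = np.zeros((n, n))
    rhs = np.zeros(n)
    for i in range(n - 1):               # flux balance across face (i, i+1)
        A[i, i] += t_face[i];         A[i, i + 1] -= t_face[i]
        A[i + 1, i + 1] += t_face[i]; A[i + 1, i] -= t_face[i]
    A[0, 0] += t_left;    rhs[0] += t_left * p_left
    A[-1, -1] += t_right; rhs[-1] += t_right * p_right
    return np.linalg.solve(A, rhs)

p = tpfa_1d(perm=np.ones(10), dx=0.1, p_left=1.0, p_right=0.0)
```

For uniform permeability the scheme reproduces the exact linear pressure profile at the cell centres, which is a standard consistency check for TPFA.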
Advances in dual algorithms and convex approximation methods
NASA Technical Reports Server (NTRS)
Smaoui, H.; Fleury, C.; Schmit, L. A.
1988-01-01
A new algorithm for solving the duals of separable convex optimization problems is presented. The algorithm is based on an active set strategy in conjunction with a variable metric method. This first order algorithm is more reliable than Newton's method used in DUAL-2 because it does not break down when the Hessian matrix becomes singular or nearly singular. A perturbation technique is introduced in order to remove the nondifferentiability of the dual function which arises when linear constraints are present in the approximate problem.
The Caratheodory-Fejer Method for Real Rational Approximation,
1981-10-01
M. H. Gutknecht
Report STAN-NA-81-15; Eidgenössische Technische Hochschule, 8092 Zürich, Switzerland
A "Carathéodory-Fejér method" is presented for near-best
Globbic approximation in low-resolution direct-methods phasing.
Guo, D Y; Blessing, R H; Langs, D A
2000-09-01
Probabilistic direct-methods phasing theory, originally based on a uniform atomic distribution hypothesis, is shown to be adaptable to a non-uniform bulk-solvent-compensated globbic approximation for protein crystals at low resolution. The effective number n_g of non-H protein atoms per polyatomic glob increases with decreasing resolution; low-resolution phases depend on the positions of only N_g = N_a/n_g globs rather than N_a atoms. Test calculations were performed with measured structure-factor data and the refined structural parameters from a protein crystal with approximately 10 000 non-H protein atoms per molecule and approximately 60% solvent volume. Low-resolution data sets with d_min ranging from 15 to 5 Å gave n_g = a·d_min + b, with a = 1.0 Å^-1 and b = -1.9 for the test case. Results of tangent-formula phase-estimation trials emphasize that completeness of the low-resolution data is critically important for probabilistic phasing.
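The reported linear fit turns into quick arithmetic: at 15 Å resolution the glob size is about 13.1 atoms, so the phases depend on roughly 763 glob positions rather than 10 000 atoms. A small sketch using only the numbers quoted in the abstract:

```python
# glob size n_g grows as resolution worsens: n_g = a * d_min + b
a, b = 1.0, -1.9          # fitted values reported for the test case (per angstrom, dimensionless)
n_atoms = 10_000          # non-H protein atoms in the test structure

for d_min in (5.0, 10.0, 15.0):
    n_g = a * d_min + b               # atoms per glob at this resolution
    n_globs = n_atoms / n_g           # phases depend on N_g glob positions only
    print(f"d_min = {d_min:4.1f} A: n_g = {n_g:4.1f}, N_g ~ {n_globs:6.0f}")
```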
Proton Form Factor Measurements Using Polarization Method: Beyond Born Approximation
Pentchev, Lubomir
2008-10-13
Significant theoretical and experimental efforts have been made over the past 7 years aiming to explain the discrepancy between the proton form factor ratio data obtained at JLab using the polarization method and the previous Rosenbluth measurements. Preliminary results from the first high precision polarization experiment dedicated to study effects beyond Born approximation will be presented. The ratio of the transferred polarization components and, separately, the longitudinal polarization in ep elastic scattering have been measured at a fixed Q^2 of 2.5 GeV^2 over a wide kinematic range. The two quantities impose constraints on the real part of the ep elastic amplitudes.
Finite amplitude method for the quasiparticle random-phase approximation
Avogadro, Paolo; Nakatsukasa, Takashi
2011-07-15
We present the finite amplitude method (FAM), originally proposed in Ref. [17], for superfluid systems. A Hartree-Fock-Bogoliubov code may be transformed into a code of the quasiparticle-random-phase approximation (QRPA) with simple modifications. This technique has advantages over the conventional QRPA calculations, such as coding feasibility and computational cost. We perform the fully self-consistent linear-response calculation for the spherical neutron-rich nucleus ^174Sn, modifying the hfbrad code, to demonstrate the accuracy, feasibility, and usefulness of the FAM.
Parabolic approximation method for the mode conversion-tunneling equation
Phillips, C.K.; Colestock, P.L.; Hwang, D.Q.; Swanson, D.G.
1987-07-01
The derivation of the wave equation which governs ICRF wave propagation, absorption, and mode conversion within the kinetic layer in tokamaks has been extended to include diffraction and focussing effects associated with the finite transverse dimensions of the incident wavefronts. The kinetic layer considered consists of a uniform density, uniform temperature slab model in which the equilibrium magnetic field is oriented in the z-direction and varies linearly in the x-direction. An equivalent dielectric tensor as well as a two-dimensional energy conservation equation are derived from the linearized Vlasov-Maxwell system of equations. The generalized form of the mode conversion-tunneling equation is then extracted from the Maxwell equations, using the parabolic approximation method in which transverse variations of the wave fields are assumed to be weak in comparison to the variations in the primary direction of propagation. Methods of solving the generalized wave equation are discussed. 16 refs.
Approximation method to compute domain related integrals in structural studies
NASA Astrophysics Data System (ADS)
Oanta, E.; Panait, C.; Raicu, A.; Barhalescu, M.; Axinte, T.
2015-11-01
Many engineering calculations use integral calculus in their theoretical models, both analytical and numerical. For typical problems the integrals have exact mathematical solutions, but when the domain of integration is complicated several methods may be used to evaluate the integral. One approach is to divide the domain into smaller sub-domains for which direct calculus relations exist; in strength of materials, for instance, the bending moment may be computed at discrete points by graphical integration of the shear-force diagram, which usually has a simple shape. Another example comes from mathematics, where the area under a graph may be approximated by a set of rectangles or trapezoids in order to evaluate the definite integral. The goal of this work is to present our studies on the calculus of integrals over transverse-section domains, computer-aided solutions and a generalizing method. The aim of our research is to create general computer-based methods for carrying out such calculations in structural studies. To this end, we define a Boolean algebra that operates on 'simple' shape domains. This algebraic standpoint uses addition and subtraction, conditioned by the sign of each 'simple' shape (-1 for shapes to be subtracted). By 'simple' or 'basic' shape we mean either shapes for which direct calculus relations exist, or domains whose frontiers are approximated by known functions so that the corresponding calculus can be carried out by an algorithm. The 'basic' shapes are linked to the calculation of the most significant stresses in the section, a refined aspect that needs special attention. Starting from this idea, the libraries of 'basic' shapes include rectangles, ellipses and domains whose frontiers are approximated by spline functions. Domain triangularization methods suggested the triangle as another 'basic' shape. The subsequent phase was to deduce the exact relations for the
Approximate hard-sphere method for densely packed granular flows.
Guttenberg, Nicholas
2011-05-01
The simulation of granular media is usually done either with event-driven codes that treat collisions as instantaneous but have difficulty with very dense packings, or with molecular dynamics (MD) methods that approximate rigid grains using a stiff viscoelastic spring. There is a little-known method that combines several collision events into a single timestep to retain the instantaneous collisions of event-driven dynamics, but also be able to handle dense packings. However, it is poorly characterized as to its regime of validity and failure modes. We present a modification of this method to reduce the introduction of overlap error, and test it using the problem of two-dimensional (2D) granular Couette flow, a densely packed system that has been well characterized by previous work. We find that this method can successfully replicate the results of previous work up to the point of jamming, and that it can do so a factor of 10 faster than comparable MD methods.
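The instantaneous-collision primitive that the event-driven family relies on is a simple impulse exchange along the contact normal. A minimal equal-mass sketch (the restitution handling is a generic textbook version, not this paper's combined-timestep or overlap-reduction scheme):

```python
import numpy as np

def resolve_collision(x1, x2, v1, v2, e=1.0):
    """Instantaneous hard-sphere collision between equal-mass grains:
    exchange the normal velocity components, scaled by restitution e
    (e = 1 is elastic; granular grains are typically dissipative, e < 1)."""
    n = (x2 - x1) / np.linalg.norm(x2 - x1)   # unit normal at contact
    rel = np.dot(v1 - v2, n)                  # approach speed along the normal
    if rel <= 0.0:
        return v1, v2                         # already separating: no impulse
    j = 0.5 * (1.0 + e) * rel                 # impulse magnitude per unit mass
    return v1 - j * n, v2 + j * n

# head-on elastic collision: equal masses swap velocities
v1, v2 = resolve_collision(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                           np.array([1.0, 0.0]), np.array([0.0, 0.0]))
```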
Hybrid functionals and GW approximation in the FLAPW method
NASA Astrophysics Data System (ADS)
Friedrich, Christoph; Betzinger, Markus; Schlipf, Martin; Blügel, Stefan; Schindlmayr, Arno
2012-07-01
We present recent advances in numerical implementations of hybrid functionals and the GW approximation within the full-potential linearized augmented-plane-wave (FLAPW) method. The former is an approximation for the exchange-correlation contribution to the total energy functional in density-functional theory, and the latter is an approximation for the electronic self-energy in the framework of many-body perturbation theory. All implementations employ the mixed product basis, which has evolved into a versatile basis for the products of wave functions, describing the incoming and outgoing states of an electron that is scattered by interacting with another electron. It can thus be used for representing the nonlocal potential in hybrid functionals as well as the screened interaction and related quantities in GW calculations. In particular, the six-dimensional space integrals of the Hamiltonian exchange matrix elements (and exchange self-energy) decompose into sums over vector-matrix-vector products, which can be evaluated easily. The correlation part of the GW self-energy, which contains a time or frequency dependence, is calculated on the imaginary frequency axis with a subsequent analytic continuation to the real axis or, alternatively, by a direct frequency convolution of the Green function G and the dynamically screened Coulomb interaction W along a contour integration path that avoids the poles of the Green function. Hybrid-functional and GW calculations are notoriously computationally expensive. We present a number of tricks that reduce the computational cost considerably, including the use of spatial and time-reversal symmetries, modifications of the mixed product basis with the aim to optimize it for the correlation self-energy and another modification that makes the Coulomb matrix sparse, analytic expansions of the interaction potentials around the point of divergence at k = 0, and a nested density and density-matrix convergence scheme for hybrid
1991-01-29
The approximant satisfies a recursion relation and can be computed from the fugacity series in closed form. We apply this approximant to the underpotential deposition of metals on an electrode and obtain voltammograms that show the sharp spikes seen in recent experiments. It has been possible to perform structural analysis of underpotential deposits of metallic monolayers.
Atomistic Modeling of Nanostructures via the BFS Quantum Approximate Method
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge E.; Noebe, Ronald D.; Farias, D.
2003-01-01
Ideally, computational modeling techniques for nanoscopic physics would be able to perform free of limitations on the type and number of elements, while providing comparable accuracy when dealing with bulk or surface problems. Computational efficiency is also desirable, if not mandatory, for properly dealing with the complexity of typical nanostructured systems. A quantum approximate technique, the BFS method for alloys, which attempts to meet these demands, is introduced for the calculation of the energetics of nanostructures. The versatility of the technique is demonstrated through analysis of diverse systems, including multi-phase precipitation in a five-element Ni-Al-Ti-Cr-Cu alloy and the formation of mixed-composition Co-Cu islands on a metallic Cu(111) substrate.
Multivariate approximation methods and applications to geophysics and geodesy
NASA Technical Reports Server (NTRS)
Munteanu, M. J.
1979-01-01
This is the first report in a planned series treating a class of approximation methods for functions of one and several variables and ways of applying them to geophysics and geodesy. The report is divided into three parts devoted to the presentation of the mathematical theory and formulas. Various optimal ways of representing functions of one and several variables, and the associated errors when information about the function (such as satellite data of different kinds) is available, are discussed. The framework chosen is Hilbert spaces. Experiments were performed on satellite altimeter data and on satellite-to-satellite tracking data.
A stochastic approximation method for assigning values to calibrators.
Schlain, B
1998-04-01
A new procedure is provided for transferring analyte concentration values from a reference material to production calibrators. This method is robust to calibration curve-fitting errors and can be accomplished using only one instrument and one set of reagents. An easily implemented stochastic approximation algorithm iteratively finds the appropriate analyte level of a standard prepared from a reference material that will yield the same average signal response as the new production calibrator. Alternatively, a production bulk calibrator material can be iteratively adjusted to give the same average signal response as some prespecified, fixed reference standard. In either case, the outputted value assignment of the production calibrator is the analyte concentration of the reference standard in the final iteration of the algorithm. Sample sizes are statistically determined as functions of known within-run signal response precisions and user-specified accuracy tolerances.
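The iterative value transfer described here is in the Robbins-Monro family: nudge the candidate analyte level by the signed response error, with decreasing gains a_k = a/k. A toy sketch with a hypothetical linear instrument response (the response model, gain schedule, and sample count are assumptions, not the paper's procedure):

```python
import random

def robbins_monro(respond, target, x0, n=2000, a=1.0):
    """Robbins-Monro stochastic approximation: drive the noisy response
    toward `target` using decreasing gains a_k = a / k."""
    x = x0
    for k in range(1, n + 1):
        x -= (a / k) * (respond(x) - target)
    return x

rng = random.Random(0)
# hypothetical instrument: mean signal is linear in analyte level, plus noise
respond = lambda x: 2.0 * x + 1.0 + rng.gauss(0.0, 0.1)
level = robbins_monro(respond, target=5.0, x0=0.0)   # true root: x = 2.0
```

The scheme converges to the level whose average signal equals the target without ever fitting a calibration curve, which is the robustness property the abstract emphasizes.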
A comparison of computational methods and algorithms for the complex gamma function
NASA Technical Reports Server (NTRS)
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Pade expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421 published in the Communications of ACM by H. Kuki is the best program either for individual application or for the inclusion in subroutine libraries.
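Stirling's asymptotic series, one of the surveyed approaches, extends naturally to complex arguments when combined with the recurrence Gamma(z+1) = z Gamma(z) to push the argument away from the origin. A minimal sketch (the shift threshold and truncation depth are choices for illustration, not Kuki's Algorithm 421):

```python
import cmath
import math

def lgamma_stirling(z, shift=8):
    """Log-gamma for complex z (Re z > 0) via Stirling's asymptotic series:
    lgamma(z) ~ (z - 1/2) log z - z + log(2 pi)/2 + 1/(12 z) - 1/(360 z^3) + ...
    The recurrence lgamma(z) = lgamma(z + 1) - log z moves the argument far
    enough from the origin for the truncated series to be accurate."""
    corr = 0.0 + 0.0j
    while z.real < shift:
        corr -= cmath.log(z)
        z += 1
    s = (z - 0.5) * cmath.log(z) - z + 0.5 * math.log(2.0 * math.pi)
    s += 1.0 / (12.0 * z) - 1.0 / (360.0 * z**3) + 1.0 / (1260.0 * z**5)
    return s + corr

val = lgamma_stirling(complex(4.5, 0.0))
```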
Approximation methods for combined thermal/structural design
NASA Technical Reports Server (NTRS)
Haftka, R. T.; Shore, C. P.
1979-01-01
Two approximation concepts for combined thermal/structural design are evaluated. The first concept is an approximate thermal analysis based on the first derivatives of structural temperatures with respect to design variables. Two commonly used first-order Taylor series expansions are examined. The direct and reciprocal expansions are special members of a general family of approximations, and for some conditions other members of that family of approximations are more accurate. Several examples are used to compare the accuracy of the different expansions. The second approximation concept is the use of critical time points for combined thermal and stress analyses of structures with transient loading conditions. Significant time savings are realized by identifying critical time points and performing the stress analysis for those points only. The design of an insulated panel which is exposed to transient heating conditions is discussed.
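The direct and reciprocal expansions differ only in the linearization variable: x versus 1/x. For a stress-like response f = c/x the reciprocal form is exact, which is the usual argument for it in structural sizing. A small sketch (the test function is an illustrative stand-in for a structural temperature or stress):

```python
def direct(f, df, x0, x):
    """First-order Taylor expansion in the design variable x."""
    return f(x0) + df(x0) * (x - x0)

def reciprocal(f, df, x0, x):
    """First-order expansion in the reciprocal variable 1/x; exact for
    f ~ 1/x, which is why it suits stress-type responses in sizing."""
    return f(x0) - df(x0) * x0**2 * (1.0 / x - 1.0 / x0)

f = lambda x: 1.0 / x          # e.g. stress ~ load / cross-sectional area
df = lambda x: -1.0 / x**2
x0, x = 1.0, 2.0
err_direct = abs(direct(f, df, x0, x) - f(x))
err_recip = abs(reciprocal(f, df, x0, x) - f(x))
```

This is the sense in which both expansions are "special members of a general family": other members interpolate between the two linearization variables.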
Communication: Improved pair approximations in local coupled-cluster methods
NASA Astrophysics Data System (ADS)
Schwilk, Max; Usvyat, Denis; Werner, Hans-Joachim
2015-03-01
In local coupled cluster treatments the electron pairs can be classified according to the magnitude of their energy contributions or distances into strong, close, weak, and distant pairs. Different approximations are introduced for the latter three classes. In this communication, an improved simplified treatment of close and weak pairs is proposed, which is based on long-range cancellations of individually slowly decaying contributions in the amplitude equations. Benchmark calculations for correlation, reaction, and activation energies demonstrate that these approximations work extremely well, while pair approximations based on local second-order Møller-Plesset theory can lead to errors that are 1-2 orders of magnitude larger.
A Binomial Approximation Method for the Ising Model
NASA Astrophysics Data System (ADS)
Streib, Noah; Streib, Amanda; Beichl, Isabel; Sullivan, Francis
2014-08-01
A large portion of the computation required for the partition function of the Ising model can be captured with a simple formula. In this work, we support this claim by defining an approximation to the partition function and other thermodynamic quantities of the Ising model that requires no algorithm at all. This approximation, which uses the high temperature expansion, is solely based on the binomial distribution, and performs very well at low temperatures. At high temperatures, we provide an alternative approximation, which also serves as a lower bound on the partition function and is trivial to compute. We provide theoretical evidence and the results of numerical experiments to support the strength of these approximations.
Kuwahara, Hiroyuki; Myers, Chris J
2008-09-01
Given the substantial computational requirements of stochastic simulation, approximation is essential for efficient analysis of any realistic biochemical system. This paper introduces a new approximation method to reduce the computational cost of stochastic simulations of an enzymatic reaction scheme which in biochemical systems often includes rapidly changing fast reactions with enzyme and enzyme-substrate complex molecules present in very small counts. Our new method removes the substrate dissociation reaction by approximating the passage time of the formation of each enzyme-substrate complex molecule which is destined to a production reaction. This approach skips the firings of unimportant yet expensive reaction events, resulting in a substantial acceleration in the stochastic simulations of enzymatic reactions. Additionally, since all the parameters used in our new approach can be derived by the Michaelis-Menten parameters which can actually be measured from experimental data, applications of this approximation can be practical even without having full knowledge of the underlying enzymatic reaction. Here, we apply this new method to various enzymatic reaction systems, resulting in a speedup of orders of magnitude in temporal behavior analysis without any significant loss in accuracy. Furthermore, we show that our new method can perform better than some of the best existing approximation methods for enzymatic reactions in terms of accuracy and efficiency.
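A standard way to realize this kind of reduction is to run Gillespie's algorithm directly on the lumped scheme S -> P with the Michaelis-Menten propensity v(S) = Vmax S / (Km + S), skipping the fast binding/unbinding events of the full mechanism. This is a generic quasi-steady-state reduction, not necessarily the authors' passage-time construction; all parameter values below are made up:

```python
import random

def ssa_mm(s0, vmax, km, t_end, rng):
    """Gillespie simulation of the reduced scheme S -> P with the
    Michaelis-Menten propensity v(S) = Vmax*S/(Km+S); each firing converts
    one substrate molecule, and the fast enzyme reactions are never simulated."""
    t, s, p = 0.0, s0, 0
    while s > 0:
        a = vmax * s / (km + s)
        t += rng.expovariate(a)       # exponential waiting time to next event
        if t > t_end:
            break
        s -= 1
        p += 1
    return s, p

rng = random.Random(1)
s, p = ssa_mm(s0=100, vmax=10.0, km=20.0, t_end=5.0, rng=rng)
```

Because only the Michaelis-Menten parameters appear, the reduced model can be parameterized directly from experimental data, as the abstract notes.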
A method of approximating range size of small mammals
Stickel, L.F.
1965-01-01
In summary, trap success trends appear to provide a useful approximation to range size of easily trapped small mammals such as Peromyscus. The scale of measurement can be adjusted as desired. Further explorations of the usefulness of the plan should be made and modifications possibly developed before adoption.
Stochastic Approximation Methods for Latent Regression Item Response Models
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2010-01-01
This article presents an application of a stochastic approximation expectation maximization (EM) algorithm using a Metropolis-Hastings (MH) sampler to estimate the parameters of an item response latent regression model. Latent regression item response models are extensions of item response theory (IRT) to a latent variable model with covariates…
Approximate Green's function methods for HZE transport in multilayered materials
NASA Technical Reports Server (NTRS)
Wilson, John W.; Badavi, Francis F.; Shinn, Judy L.; Costen, Robert C.
1993-01-01
A nonperturbative analytic solution of the high charge and energy (HZE) Green's function is used to implement a computer code for laboratory ion beam transport in multilayered materials. The code is established to operate on the Langley nuclear fragmentation model used in engineering applications. Computational procedures are established to generate linear energy transfer (LET) distributions for a specified ion beam and target for comparison with experimental measurements. The code was found to be highly efficient and compared well with the perturbation approximation.
Spline methods for approximating quantile functions and generating random samples
NASA Technical Reports Server (NTRS)
Schiess, J. R.; Matthews, C. G.
1985-01-01
Two cubic spline formulations are presented for representing the quantile function (inverse cumulative distribution function) of a random sample of data. Both B-spline and rational spline approximations are compared with analytic representations of the quantile function. It is also shown how these representations can be used to generate random samples for use in simulation studies. Comparisons are made on samples generated from known distributions and a sample of experimental data. The spline representations are more accurate for multimodal and skewed samples and require much less time to generate samples than the analytic representation.
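The sampling idea can be sketched in a few lines. For brevity this sketch uses piecewise-linear interpolation of the empirical quantile function rather than the cubic B-splines or rational splines of the paper; the plotting positions and function names are illustrative choices.

```python
import bisect
import random

def empirical_quantile(sorted_data):
    """Return a piecewise-linear approximation of the quantile function
    Q(u) built from an ordered sample (linear, not cubic, for brevity)."""
    n = len(sorted_data)
    # plotting positions u_i = (i + 0.5)/n for the order statistics
    u = [(i + 0.5) / n for i in range(n)]

    def q(p):
        if p <= u[0]:
            return sorted_data[0]
        if p >= u[-1]:
            return sorted_data[-1]
        j = bisect.bisect_right(u, p) - 1
        w = (p - u[j]) / (u[j + 1] - u[j])
        return (1 - w) * sorted_data[j] + w * sorted_data[j + 1]

    return q

# inverse-transform sampling: feed uniform variates through Q
rng = random.Random(1)
data = sorted(rng.gauss(0.0, 1.0) for _ in range(1000))
q = empirical_quantile(data)
sample = [q(rng.random()) for _ in range(500)]
```

Each generated value costs one uniform draw plus a binary search, which is why quantile-function representations can be much faster than inverting an analytic CDF numerically.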
Decentralized Bayesian search using approximate dynamic programming methods.
Zhao, Yijia; Patek, Stephen D; Beling, Peter A
2008-08-01
We consider decentralized Bayesian search problems that involve a team of multiple autonomous agents searching for targets on a network of search points operating under the following constraints: 1) interagent communication is limited; 2) the agents do not have the opportunity to agree in advance on how to resolve equivalent but incompatible strategies; and 3) each agent lacks the ability to control or predict with certainty the actions of the other agents. We formulate the multiagent search-path-planning problem as a decentralized optimal control problem and introduce approximate dynamic programming heuristics that can be implemented in a decentralized fashion. After establishing some analytical properties of the heuristics, we present computational results for a search problem involving two agents on a 5 x 5 grid.
Effective moduli of particulate solids: Lubrication approximation method
NASA Astrophysics Data System (ADS)
Qi, F.; Phan-Thien, N.; Fan, X. J.
To efficiently calculate the effective properties of a composite, which consists of rigid spherical inclusions not necessarily of the same sizes in a homogeneous isotropic elastic matrix, a method based on the lubrication forces between neighbouring particles has been developed. The method is used to evaluate the effective Lamé moduli and Poisson's ratio of the composite for particles in random configurations and in cubic lattices. Good agreement is observed with the experimental results of Smith (1975) for particles in random configurations, and the numerical effective moduli also agree well with the results of Nunan & Keller (1984) for particles in cubic lattices.
An approximate method for determining of investment risk
NASA Astrophysics Data System (ADS)
Slavkova, Maria; Tzenova, Zlatina
2016-12-01
In this work a method for determining investment risk across all economic states is considered. It is connected to matrix games with two players. A definition of risk in a matrix game is introduced and three of its properties are proven. An appropriate example is considered.
Approximate proximal point methods for convex programming problems
Eggermont, P.
1994-12-31
We study proximal point methods for the finite-dimensional convex programming problem: minimize f(x) subject to x ∈ C, where f : dom f ⊂ ℝⁿ → ℝ is a proper convex function and C ⊂ ℝⁿ is a closed convex set.
SET: a pupil detection method using sinusoidal approximation
Javadi, Amir-Homayoun; Hakimi, Zahra; Barati, Morteza; Walsh, Vincent; Tcheang, Lili
2015-01-01
Mobile eye-tracking in external environments remains challenging, despite recent advances in eye-tracking software and hardware engineering. Many current methods fail to deal with the vast range of outdoor lighting conditions and the speed at which these can change. This confines experiments to artificial environments where conditions must be tightly controlled. Additionally, the emergence of low-cost eye-tracking devices calls for the development of analysis tools that enable non-technical researchers to process the output of their images. We have developed a fast and accurate method (known as “SET”) that is suitable even for natural environments with uncontrolled, dynamic and even extreme lighting conditions. We compared the performance of SET with that of two open-source alternatives by processing two collections of eye images: images of natural outdoor scenes with extreme lighting variations (“Natural”); and images of less challenging indoor scenes (“CASIA-Iris-Thousand”). We show that SET excelled in outdoor conditions and was faster, without significant loss of accuracy, indoors. SET offers a low-cost eye-tracking solution, delivering high performance even in challenging outdoor environments. It is offered through an open-source MATLAB toolkit as well as a dynamic-link library (“DLL”), which can be imported into many programming languages including C# and Visual Basic in Windows OS (www.eyegoeyetracker.co.uk). PMID:25914641
Computation of atmospheric cooling rates by exact and approximate methods
NASA Technical Reports Server (NTRS)
Ridgway, William L.; Harshvardhan; Arking, Albert
1991-01-01
Infrared fluxes and cooling rates for several standard model atmospheres, with and without water vapor, carbon dioxide, and ozone, have been calculated using a line-by-line method at 0.01/cm resolution. The sensitivity of the results to the vertical integration scheme and to the model for water vapor continuum absorption is shown. Comparison with similar calculations performed at NOAA/GFDL shows agreement to within 0.5 W/sq m in fluxes at various levels and 0.05 K/d in cooling rates. Comparison with a fast, parameterized radiation code used in climate models reveals a worst case difference, when all gases are included, of 3.7 W/sq m in flux; cooling rate differences are 0.1 K/d or less when integrated over a substantial layer with point differences as large as 0.3 K/d.
Lubrication approximation in completed double layer boundary element method
NASA Astrophysics Data System (ADS)
Nasseri, S.; Phan-Thien, N.; Fan, X.-J.
This paper reports on the results of the numerical simulation of the motion of solid spherical particles in shear Stokes flows. Using the completed double layer boundary element method (CDLBEM) via distributed computing under Parallel Virtual Machine (PVM), the effective viscosity of the suspension has been calculated for a finite number of spheres in a cubic array, or in a random configuration. In the simulation presented here, short range interactions via lubrication forces are also taken into account, via the range completer in the formulation, whenever the gap between two neighbouring particles is smaller than a critical gap. The results for particles in a simple cubic array agree with the results of Nunan and Keller (1984) and the Stokesian Dynamics of Brady et al. (1988). To evaluate the lubrication forces between particles in a random configuration, a critical gap of 0.2 of the particle's radius is suggested and the results are tested against the experimental data of Thomas (1965) and the empirical equation of Krieger-Dougherty (Krieger, 1972). Finally, quasi-steady trajectories are obtained for a time-varying configuration of 125 particles.
Convergence of hausdorff approximation methods for the Edgeworth-Pareto hull of a compact set
NASA Astrophysics Data System (ADS)
Efremov, R. V.
2015-11-01
The Hausdorff methods comprise an important class of polyhedral approximation methods for convex compact bodies, since they have an optimal convergence rate and possess other useful properties. The concept of Hausdorff methods is extended to a problem arising in multicriteria optimization, namely, to the polyhedral approximation of the Edgeworth-Pareto hull (EPH) of a convex compact set. It is shown that the sequences of polyhedral sets generated by Hausdorff methods converge to the EPH to be approximated. It is shown that the Estimate Refinement method, which is most frequently used to approximate the EPH of convex compact sets, is a Hausdorff method and, hence, generates sequences of sets converging to the EPH.
Algebraic filter approach for fast approximation of nonlinear tomographic reconstruction methods
NASA Astrophysics Data System (ADS)
Plantagie, Linda; Batenburg, Kees Joost
2015-01-01
We present a computational approach for fast approximation of nonlinear tomographic reconstruction methods by filtered backprojection (FBP) methods. Algebraic reconstruction algorithms are the methods of choice in a wide range of tomographic applications, yet they require significant computation time, restricting their usefulness. We build upon recent work on the approximation of linear algebraic reconstruction methods and extend the approach to the approximation of nonlinear reconstruction methods which are common in practice. We demonstrate that if a blueprint image is available that is sufficiently similar to the scanned object, our approach can compute reconstructions that approximate iterative nonlinear methods, yet have the same speed as FBP.
NASA Astrophysics Data System (ADS)
Pospelov, A. I.
2016-08-01
Adaptive methods for the polyhedral approximation of the convex Edgeworth-Pareto hull in multiobjective monotone integer optimization problems are proposed and studied. For these methods, theoretical convergence rate estimates with respect to the number of vertices are obtained. The estimates coincide in order with those for filling and augmentation H-methods intended for the approximation of nonsmooth convex compact bodies.
NASA Technical Reports Server (NTRS)
Funaro, D.; Gottlieb, D.
1988-01-01
A new method to impose boundary conditions for pseudospectral approximations to hyperbolic equations is suggested. This method involves the collocation of the equation at the boundary nodes as well as satisfying boundary conditions. Stability and convergence results are proven for the Chebyshev approximation of linear scalar hyperbolic equations. The eigenvalues of this method applied to parabolic equations are shown to be real and negative.
The complex variable boundary element method: Applications in determining approximative boundaries
Hromadka, T.V.
1984-01-01
The complex variable boundary element method (CVBEM) is used to determine approximation functions for boundary value problems of the Laplace equation such as occurs in potential theory. By determining an approximative boundary upon which the CVBEM approximator matches the desired constant (level curves) boundary conditions, the CVBEM is found to provide the exact solution throughout the interior of the transformed problem domain. Thus, the acceptability of the CVBEM approximation is determined by the closeness-of-fit of the approximative boundary to the study problem boundary. © 1984.
On the interpretation of large gravimagnetic data by the modified method of S-approximations
NASA Astrophysics Data System (ADS)
Stepanova, I. E.; Raevskiy, D. N.; Shchepetilov, A. V.
2017-01-01
The modified method of S-approximations applied for processing large and superlarge gravity and magnetic prospecting data is considered. The modified S-approximations of the elements of gravitational field are obtained due to the efficient block methods for solving the system of linear algebraic equations (SLAEs) to which the geophysically meaningful problem is reduced. The results of the mathematical experiment are presented.
Mechanical System Reliability and Cost Integration Using a Sequential Linear Approximation Method
NASA Technical Reports Server (NTRS)
Kowal, Michael T.
1997-01-01
The development of new products is dependent on product designs that incorporate high levels of reliability along with a design that meets predetermined levels of system cost. Additional constraints on the product include explicit and implicit performance requirements. Existing reliability and cost prediction methods result in no direct linkage between variables affecting these two dominant product attributes. A methodology to integrate reliability and cost estimates using a sequential linear approximation method is proposed. The sequential linear approximation method utilizes probability-of-failure sensitivities determined from probabilistic reliability methods as well as manufacturing cost sensitivities. The application of the sequential linear approximation method to a mechanical system is demonstrated.
NASA Technical Reports Server (NTRS)
Nguyen, Nhan T.; Hornby, Gregory; Ishihara, Abe
2013-01-01
This paper describes two methods of trajectory optimization to obtain an optimal trajectory of minimum-fuel-to-climb for an aircraft. The first method is based on the adjoint method, and the second method is based on a direct trajectory optimization method using a Chebyshev polynomial approximation and cubic spline approximation. The approximate optimal trajectory will be compared with the adjoint-based optimal trajectory, which is considered the true optimal solution of the trajectory optimization problem. The adjoint-based optimization problem leads to a singular optimal control solution which results in a bang-singular-bang optimal control.
Comparison of Finite Differences and WKB approximation Methods for PT symmetric complex potentials
NASA Astrophysics Data System (ADS)
Naceri, Leila; Chekkal, Meziane; Hammou, Amine B.
2016-10-01
We consider the one-dimensional Schrödinger eigenvalue problem on a finite domain (Sturm-Liouville problem) for several PT-symmetric complex potentials, studied by Bender and Jones using the WKB approximation method. We compare the solutions for these PT-symmetric complex potentials obtained using the finite difference method (FDM) and the WKB approximation method, and show quantitative and qualitative agreement between the two methods.
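The finite-difference ingredient can be sketched on a real test case where the answer is known. The complex PT-symmetric potentials of the paper require a complex eigensolver, so this illustration (discretization size and function name are assumed, not from the paper) uses the real particle-in-a-box problem, whose lowest eigenvalue is (π/L)²:

```python
import math

def lowest_eigenvalue(n=200, length=1.0):
    """Finite-difference approximation of the smallest eigenvalue of
    -u'' = E u on (0, L) with u(0) = u(L) = 0 (exact answer: (pi/L)^2).
    The discretized operator is symmetric tridiagonal; the number of
    eigenvalues below x is counted with a Sturm sequence, and the
    smallest one is located by bisection."""
    h = length / (n + 1)
    diag = 2.0 / h**2             # diagonal entry of the FD matrix
    off2 = (1.0 / h**2) ** 2      # square of the off-diagonal entry

    def count_below(x):
        """Number of eigenvalues of the tridiagonal matrix below x,
        counted as negative pivots of the LDL^T factorization."""
        count = 0
        d = diag - x              # first pivot
        if d < 0:
            count += 1
        for _ in range(n - 1):
            if d == 0.0:
                d = 1e-300        # standard guard against a zero pivot
            d = (diag - x) - off2 / d
            if d < 0:
                count += 1
        return count

    lo, hi = 0.0, 4.0 / h**2      # Gershgorin bounds on the spectrum
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if count_below(mid) >= 1:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

E1 = lowest_eigenvalue()          # should be close to pi**2 for L = 1
```

The Sturm-sequence count makes the bisection robust: no eigenvector is ever formed, and the same counting device extends to any symmetric Sturm-Liouville discretization.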
The Subspace Projected Approximate Matrix (SPAM) modification of the Davidson method
Shepard, R.; Tilson, J.L.; Wagner, A.F.; Minkoff, M.
1997-12-31
A modification of the Davidson subspace expansion method, a Ritz approach, is proposed in which the expansion vectors are computed from a "cheap" approximating eigenvalue equation. This approximate eigenvalue equation is assembled using projection operators constructed from the subspace expansion vectors. The method may be implemented using an inner/outer iteration scheme, or it may be implemented by modifying the usual Davidson algorithm in such a way that exact and approximate matrix-vector product computations are interspersed. A multi-level algorithm is proposed in which several levels of approximate matrices are used.
Zhang, Zhenyue; Zha, Hongyuan; Simon, Horst
2006-07-31
In this paper, we developed numerical algorithms for computing sparse low-rank approximations of matrices, and we also provided a detailed error analysis of the proposed algorithms together with some numerical experiments. The low-rank approximations are constructed in a certain factored form with the degree of sparsity of the factors controlled by some user-specified parameters. In this paper, we cast the sparse low-rank approximation problem in the framework of penalized optimization problems. We discuss various approximation schemes for the penalized optimization problem which are more amenable to numerical computations. We also include some analysis to show the relations between the original optimization problem and the reduced one. We then develop a globally convergent discrete Newton-like iterative method for solving the approximate penalized optimization problems. We also compare the reconstruction errors of the sparse low-rank approximations computed by our new methods with those obtained using the methods in the earlier paper and several other existing methods for computing sparse low-rank approximations. Numerical examples show that the penalized methods are more robust and produce approximations with factors which have fewer columns and are sparser.
NASA Technical Reports Server (NTRS)
La Budde, R. A.
1972-01-01
Sampling techniques have been used previously to evaluate Jacobian determinants that occur in classical mechanical descriptions of molecular scattering. These determinants also occur in the quasiclassical approximation. A new technique is described which can be used to evaluate Jacobian determinants which occur in either description. This method is expected to be valuable in the study of reactive scattering using the quasiclassical approximation.
The Subspace Projected Approximate Matrix (SPAM) Modification of the Davidson Method
NASA Astrophysics Data System (ADS)
Shepard, Ron; Wagner, Albert F.; Tilson, Jeffrey L.; Minkoff, Michael
2001-09-01
A modification of the iterative matrix diagonalization method of Davidson is presented that is applicable to the symmetric eigenvalue problem. This method is based on subspace projections of a sequence of one or more approximate matrices. The purpose of these approximate matrices is to improve the efficiency of the solution of the desired eigenpairs by reducing the number of matrix-vector products that must be computed with the exact matrix. Several applications are presented. These are chosen to show the range of applicability of the method, the convergence behavior for a wide range of matrix types, and also the wide range of approaches that may be employed to generate approximate matrices.
Extension of the weak-line approximation and application to correlated-k methods
Conley, A.J.; Collins, W.D.
2011-03-15
Global climate models require accurate and rapid computation of the radiative transfer through the atmosphere. Correlated-k methods are often used. One of the approximations used in correlated-k models is the weak-line approximation. We introduce an approximation T_g which reduces to the weak-line limit when optical depths are small, and captures the deviation from the weak-line limit as the extinction departs from it. This approximation is constructed by matching the first two moments of the gamma distribution to the k-distribution of the transmission. We compare the errors of the weak-line approximation with those of T_g in the context of a water vapor spectrum. The extension T_g is more accurate and converges more rapidly than the weak-line approximation.
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II
1982-01-01
An approximate method for calculating heating rates at general three-dimensional stagnation points is presented. The application of the method to stagnation-point heating calculations during atmospheric entry is described. Comparisons with results from boundary layer calculations indicate that the method should be sufficiently accurate for engineering-type design and analysis applications.
Test particle propagation in magnetostatic turbulence. 2: The local approximation method
NASA Technical Reports Server (NTRS)
Klimas, A. J.; Sandri, G.; Scudder, J. D.; Howell, D. R.
1976-01-01
An approximation method for statistical mechanics is presented and applied to a class of problems which contains a test particle propagation problem. All of the available basic equations used in statistical mechanics are cast in the form of a single equation which is integrodifferential in time and which is then used as the starting point for the construction of the local approximation method. Simplification of the integrodifferential equation is achieved through approximation to the Laplace transform of its kernel. The approximation is valid near the origin in the Laplace space and is based on the assumption of small Laplace variable. No other small parameter is necessary for the construction of this approximation method. The n'th level of approximation is constructed formally, and the first five levels of approximation are calculated explicitly. It is shown that each level of approximation is governed by an inhomogeneous partial differential equation in time with time independent operator coefficients. The order in time of these partial differential equations is found to increase as n does. At n = 0 the most local first order partial differential equation which governs the Markovian limit is regained.
Efficiency of the estimate refinement method for polyhedral approximation of multidimensional balls
NASA Astrophysics Data System (ADS)
Kamenev, G. K.
2016-05-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is analyzed. When applied to convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. In previous studies, for the approximation of a multidimensional ball, the convergence rates of the method were estimated in terms of the number of faces of all dimensions and the cardinality of the facial structure (the norm of the f-vector) of the constructed polytope was shown to have an optimal rate of growth. In this paper, the asymptotic convergence rate of the method with respect to faces of all dimensions is compared with the convergence rate of best approximation polytopes. Explicit expressions are obtained for the asymptotic efficiency, including the case of low dimensions. Theoretical estimates are compared with numerical results.
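The "deep holes" refinement idea is easy to visualize in the plane: inscribe a polygon in the unit circle and repeatedly insert a vertex that bisects the largest angular gap between adjacent vertices. This toy two-dimensional sketch (not the multidimensional algorithm analyzed in the paper) shows the Hausdorff error to the circle shrinking as vertices are added:

```python
import math

def refine_polygon(steps):
    """Approximate the unit circle by an inscribed polygon, repeatedly
    bisecting the largest angular gap between adjacent vertices (a toy
    analogue of an estimate-refinement scheme).  Returns the Hausdorff
    distance between the polygon and the circle."""
    angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]  # start: triangle
    for _ in range(steps):
        angles.sort()
        n = len(angles)
        gaps = [(angles[(i + 1) % n] - angles[i]) % (2 * math.pi)
                for i in range(n)]
        i = max(range(n), key=gaps.__getitem__)
        # insert a vertex in the middle of the deepest hole
        angles.append((angles[i] + gaps[i] / 2) % (2 * math.pi))
    angles.sort()
    n = len(angles)
    worst = max((angles[(i + 1) % n] - angles[i]) % (2 * math.pi)
                for i in range(n))
    # max distance from the circle to the chord spanning the worst gap
    return 1.0 - math.cos(worst / 2)

errs = [refine_polygon(s) for s in (0, 3, 9)]
```

Halving the worst gap roughly quarters the error (1 − cos(θ/2) ≈ θ²/8), which mirrors the quadratic convergence in the number of vertices known for smooth convex bodies.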
Evaluation of the successive approximations method for acoustic streaming numerical simulations.
Catarino, S O; Minas, G; Miranda, J M
2016-05-01
This work evaluates the successive approximations method commonly used to predict acoustic streaming by comparing it with a direct method. The successive approximations method solves both the acoustic wave propagation and acoustic streaming by solving the first and second order Navier-Stokes equations, ignoring the first order convective effects. This method was applied to acoustic streaming in a 2D domain and the results were compared with results from the direct simulation of the Navier-Stokes equations. The velocity results showed qualitative agreement between both methods, which indicates that the successive approximations method can describe the formation of flows with recirculation. However, a large quantitative deviation was observed between the two methods. Further analysis showed that the successive approximations method solution is sensitive to the initial flow field. The direct method showed that the instantaneous flow field changes significantly due to reflections and wave interference. It was also found that convective effects contribute significantly to the wave propagation pattern. These effects must be taken into account when solving acoustic streaming problems, since they affect the global flow. By adequately calculating the initial condition for the first order step, the acoustic streaming prediction by the successive approximations method can be improved significantly.
Approximation and inference methods for stochastic biochemical kinetics—a tutorial review
NASA Astrophysics Data System (ADS)
Schnoerr, David; Sanguinetti, Guido; Grima, Ramon
2017-03-01
Stochastic fluctuations of molecule numbers are ubiquitous in biological systems. Important examples include gene expression and enzymatic processes in living cells. Such systems are typically modelled as chemical reaction networks whose dynamics are governed by the chemical master equation. Despite its simple structure, no analytic solutions to the chemical master equation are known for most systems. Moreover, stochastic simulations are computationally expensive, making systematic analysis and statistical inference a challenging task. Consequently, significant effort has been spent in recent decades on the development of efficient approximation and inference methods. This article gives an introduction to basic modelling concepts as well as an overview of state of the art methods. First, we motivate and introduce deterministic and stochastic methods for modelling chemical networks, and give an overview of simulation and exact solution methods. Next, we discuss several approximation methods, including the chemical Langevin equation, the system size expansion, moment closure approximations, time-scale separation approximations and hybrid methods. We discuss their various properties and review recent advances and remaining challenges for these methods. We present a comparison of several of these methods by means of a numerical case study and highlight some of their respective advantages and disadvantages. Finally, we discuss the problem of inference from experimental data in the Bayesian framework and review recent methods developed in the literature. In summary, this review gives a self-contained introduction to modelling, approximations and inference methods for stochastic chemical kinetics.
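Of the approximations surveyed, the chemical Langevin equation is perhaps the quickest to sketch. For a birth-death process ∅ → X (rate k) and X → ∅ (rate γx), an Euler-Maruyama discretization can be written as follows (a generic illustrative sketch; the parameter values and function name are hypothetical, not from the review):

```python
import math
import random

def cle_birth_death(k, gamma, x0, t_end, dt, rng):
    """Euler-Maruyama integration of the chemical Langevin equation for
    the birth-death process 0 -> X (rate k), X -> 0 (rate gamma*x):
        dx = (k - gamma*x) dt + sqrt(k) dW1 - sqrt(gamma*x) dW2"""
    x, t = float(x0), 0.0
    sdt = math.sqrt(dt)
    while t < t_end:
        x += (k - gamma * x) * dt                       # drift
        x += math.sqrt(k) * sdt * rng.gauss(0, 1)       # birth noise
        x -= math.sqrt(max(gamma * x, 0.0)) * sdt * rng.gauss(0, 1)
        x = max(x, 0.0)   # keep the approximate copy number nonnegative
        t += dt
    return x

rng = random.Random(2)
k, gamma = 50.0, 1.0      # stationary mean is k/gamma = 50
end_vals = [cle_birth_death(k, gamma, 50, 5.0, 0.01, rng)
            for _ in range(200)]
mean = sum(end_vals) / len(end_vals)
```

Each reaction channel contributes its own Gaussian noise term with variance equal to its propensity, which is exactly the diffusion approximation to the discrete jump process described by the master equation.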
NASA Astrophysics Data System (ADS)
Lanti, E.; Dominski, J.; Brunner, S.; McMillan, B. F.; Villard, L.
2016-11-01
This work aims at completing the implementation of a solver for the quasi-neutrality equation using a Padé approximation in the global gyrokinetic code ORB5. Initially [Dominski, Ph.D. thesis, 2016], the Padé approximation was only implemented for the kinetic electron model. To enable runs with adiabatic or hybrid electron models while using a Padé approximation to the polarization response, the adiabatic response term of the quasi-neutrality equation must be consistently modified. It is shown that the Padé solver is in good agreement with the arbitrary wavelength solver of ORB5 [Dominski, Ph.D. thesis, 2016]. To perform this verification, the linear dispersion relation of an ITG-TEM transition is computed with both solvers and the linear growth rates and frequencies are compared.
NASA Astrophysics Data System (ADS)
Kamenev, G. K.
2015-10-01
The estimate refinement method for the polyhedral approximation of convex compact bodies is considered. In the approximation of convex bodies with a smooth boundary, this method is known to generate polytopes with an optimal order of growth of the number of vertices and facets depending on the approximation error. The properties of the method are examined as applied to the polyhedral approximation of a multidimensional ball. As vertices of approximating polytopes, the method is shown to generate a deep holes sequence on the surface of the ball. As a result, previously obtained combinatorial properties of convex hulls of the indicated sequences, namely, the convergence rates with respect to the number of faces of all dimensions and the optimal growth of the cardinality of the facial structure (of the norm of the f-vector) can be extended to such polytopes. The combinatorial properties of the approximating polytopes generated by the estimate refinement method are compared to the properties of polytopes with a facial structure of extremal cardinality. It is shown that the polytopes generated by the method are similar to stacked polytopes, on which the minimum number of faces of all dimensions is attained for a given number of vertices.
Gait Generation for a Small Biped Robot using Approximated Optimization Method
NASA Astrophysics Data System (ADS)
Nguyen, Tinh; Tao, Linh; Hasegawa, Hiroshi
2016-11-01
This paper proposes a novel approach to gait pattern generation for a small biped robot to enhance its walking behavior, with the aim of making the robot's gait more natural and more stable during walking. In this study, we present an approximated optimization method which applies the Differential Evolution (DE) algorithm to an objective function approximated by an Artificial Neural Network (ANN). In addition, we present a new humanlike foot structure with toes for the biped robot. To evaluate the method's performance, the robot was simulated with the multi-body dynamics simulation software Adams (MSC Software, USA). As a result, we confirmed that the biped robot with the proposed foot structure can walk naturally. The approximated optimization method based on the DE algorithm and an ANN is an effective approach to generating a gait pattern for the locomotion of a biped robot, and is simpler than conventional methods using the Zero Moment Point (ZMP) criterion.
İbiş, Birol
2014-01-01
This paper aims to obtain the approximate solution of the time-fractional advection-dispersion equation (FADE), involving Jumarie's modification of the Riemann-Liouville derivative, by the fractional variational iteration method (FVIM). FVIM provides an analytical approximate solution in the form of a convergent series. Some examples are given, and the results indicate that the FVIM is highly accurate, efficient, and convenient for solving time-fractional advection-dispersion equations. PMID:24578662
ERIC Educational Resources Information Center
Moses, Tim
2013-01-01
The purpose of this study was to evaluate the use of adjoined and piecewise linear approximations (APLAs) of raw equipercentile equating functions as a postsmoothing equating method. APLAs are less familiar than other postsmoothing equating methods (i.e., cubic splines), but their use has been described in historical equating practices of…
Numerical solution of 2D-vector tomography problem using the method of approximate inverse
NASA Astrophysics Data System (ADS)
Svetov, Ivan; Maltseva, Svetlana; Polyakova, Anna
2016-08-01
We propose a numerical solution of the reconstruction problem for a two-dimensional vector field in a unit disk from the known values of the longitudinal and transverse ray transforms. The algorithm is based on the method of approximate inverse. Numerical simulations confirm that the proposed method yields good results of reconstruction of vector fields.
Magnetic interface forward and inversion method based on Padé approximation
NASA Astrophysics Data System (ADS)
Zhang, Chong; Huang, Da-Nian; Zhang, Kai; Pu, Yi-Tao; Yu, Ping
2016-12-01
The magnetic interface forward and inversion method is commonly realized by using a Taylor series expansion to linearize the Fourier transform of the exponential function. With a large expansion step and an unbounded neighborhood, however, the Taylor series does not converge, and therefore this paper presents a magnetic interface forward and inversion method based on Padé approximation instead of the Taylor series expansion. Compared with the Taylor series, the Padé expansion converges more stably and approximates more accurately. Model tests show the validity of the proposed Padé-based magnetic forward modeling and inversion, and when the inversion method is applied to measured data from the Matagami area in Canada, a stable and reasonable distribution of the underground interface is obtained.
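The stability contrast between the two expansions can be illustrated in one dimension for the exponential function itself (a generic numerical illustration, not the paper's Fourier-domain formulation): the [2/2] Padé approximant uses the same number of series coefficients as the degree-4 Taylor polynomial, yet is noticeably more accurate away from the expansion point.

```python
import math

def taylor_exp(x, order=4):
    """Partial sum of the Taylor series of exp(x) about 0."""
    term, total = 1.0, 1.0
    for k in range(1, order + 1):
        term *= x / k
        total += term
    return total

def pade22_exp(x):
    """[2/2] Pade approximant of exp(x): accurate to the same order as
    the degree-4 Taylor polynomial, but with a wider region of good
    approximation because the denominator absorbs part of the growth."""
    num = 1.0 + x / 2.0 + x * x / 12.0
    den = 1.0 - x / 2.0 + x * x / 12.0
    return num / den

x = 1.0
err_taylor = abs(taylor_exp(x) - math.exp(x))
err_pade = abs(pade22_exp(x) - math.exp(x))
```

At x = 1 the Padé error is already a few times smaller than the Taylor error, and the gap widens as x grows, which is the behavior the paper exploits when linearizing the exponential in the Fourier-domain interface formula.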
NASA Technical Reports Server (NTRS)
Connor, J. N. L.; Curtis, P. R.; Farrelly, D.
1984-01-01
Methods that can be used in the numerical implementation of the uniform swallowtail approximation are described. An explicit expression for that approximation is presented to the lowest order, showing that there are three problems which must be overcome in practice before the approximation can be applied to any given problem. It is shown that a recently developed quadrature method can be used for the accurate numerical evaluation of the swallowtail canonical integral and its partial derivatives. Isometric plots of these are presented to illustrate some of their properties. The problem of obtaining the arguments of the swallowtail integral from an analytical function of its arguments is considered, and two methods of solving it are described. The asymptotic evaluation of the butterfly canonical integral is also addressed.
Quantum Approximate Methods for the Atomistic Modeling of Multicomponent Alloys. Chapter 7
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Garces, Jorge; Mosca, Hugo; Gargano, Pablo; Noebe, Ronald D.; Abel, Phillip
2007-01-01
This chapter describes the role of quantum approximate methods in understanding complex multicomponent alloys at the atomic level. The need to accelerate materials design programs based on economical and efficient modeling techniques provides the framework for the introduction of approximations and simplifications in otherwise rigorous theoretical schemes. As a promising example of the role that such approximate methods might have in the development of complex systems, the BFS method for alloys is presented and applied to Ru-rich Ni-base superalloys and also to the NiAl(Ti,Cu) system, highlighting the benefits that can be obtained from introducing simple modeling techniques to the investigation of such complex systems.
Simulation of mass transfer during osmotic dehydration of apple: a power law approximation method
NASA Astrophysics Data System (ADS)
Abbasi Souraki, B.; Tondro, H.; Ghavami, M.
2014-10-01
In this study, unsteady one-dimensional mass transfer during osmotic dehydration of apple was modeled using an approximate mathematical model. The mathematical model was developed based on a power law profile approximation for the moisture and solute concentrations in the spatial direction. The proposed model was validated against experimental water loss and solute gain data obtained from osmotic dehydration of infinite-slab and cylindrical apple samples in sucrose solutions (30, 40 and 50 % w/w) at different temperatures (30, 40 and 50 °C). The proposed model's predictions were also compared with those of the exact analytical model and of a parabolic approximation model. The mean relative errors with respect to the experimental data were estimated at 4.5-8.1 %, 6.5-10.2 %, and 15.0-19.1 % for the exact analytical, power law and parabolic approximation methods, respectively. Although the parabolic approximation leads to simpler relations, the power law approximation method gives higher accuracy for the average concentrations over the whole domain of dehydration times. Considering both the simplicity and the precision of the mathematical models, the power law model for short dehydration times and the simplified exact analytical model for long dehydration times could be used to explain the variations of the average water loss and solute gain over the whole domain of dimensionless time.
Laplace transform homotopy perturbation method for the approximation of variational problems.
Filobello-Nino, U; Vazquez-Leal, H; Rashidi, M M; Sedighi, H M; Perez-Sesma, A; Sandoval-Hernandez, M; Sarmiento-Reyes, A; Contreras-Hernandez, A D; Pereyra-Diaz, D; Hoyos-Reyes, C; Jimenez-Fernandez, V M; Huerta-Chua, J; Castro-Gonzalez, F; Laguna-Camacho, J R
2016-01-01
This article proposes the application of the Laplace Transform-Homotopy Perturbation Method and some of its modifications in order to find analytical approximate solutions for linear and nonlinear differential equations which arise from variational problems. As case studies we solve four ordinary differential equations and show that the proposed solutions have good accuracy; in one case we even obtain an exact solution. In the sequel, we see that the square residual errors of the approximate solutions lie in the interval [0.001918936920, 0.06334882582], which confirms the accuracy of the proposed methods, taking into account the complexity and difficulty of variational problems.
NASA Astrophysics Data System (ADS)
Abedini, Mohammad; Nojoumian, Mohammad Ali; Salarieh, Hassan; Meghdari, Ali
2015-08-01
In this paper, model reference control of a fractional order system is discussed. In order to control the fractional order plant, discrete-time approximation methods are applied. The plant and the reference model are discretized by the Grünwald-Letnikov definition of the fractional order derivative using the "Short Memory Principle". The unknown parameters of the fractional order system appear in the discrete-time approximate model as combinations of the parameters of the main system. The discrete-time MRAC via RLS identification is modified to estimate the parameters and control the fractional order plant. Numerical results show the effectiveness of the proposed method of model reference adaptive control.
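The Grünwald-Letnikov discretization with the short-memory principle mentioned above can be sketched as follows. The function names, step size, and test functions are illustrative assumptions, not taken from the paper:

```python
import math

def gl_fractional_derivative(f, t, alpha, h, memory=None):
    """Grunwald-Letnikov fractional derivative of order alpha at time t
    with step h.  The short-memory principle optionally truncates the
    history to the last `memory` time units."""
    n = int(t / h)
    if memory is not None:
        n = min(n, int(memory / h))
    w, total = 1.0, f(t)
    for j in range(1, n + 1):
        # Recurrence for the binomial weights (-1)^j C(alpha, j).
        w *= 1.0 - (alpha + 1.0) / j
        total += w * f(t - j * h)
    return total / h**alpha

# Sanity checks: alpha = 1 reduces to the backward difference, and the
# order-1/2 derivative of f(t) = t has the closed form 2*sqrt(t/pi).
d1 = gl_fractional_derivative(lambda t: t**2, 1.0, 1.0, 1e-3)
d_half = gl_fractional_derivative(lambda t: t, 1.0, 0.5, 1e-3)
```

Passing `memory=0.1`, say, keeps only the most recent 100 weights at this step size, which is the trade-off between cost and accuracy that the short-memory principle formalizes.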
NASA Astrophysics Data System (ADS)
Lai, Xian-Jing; Cai, Xiao-Ou
2010-09-01
In this paper, the decomposition method is implemented for solving the bidirectional Sawada-Kotera (bSK) equation with two kinds of initial conditions. As a result, the Adomian polynomials have been calculated and the approximate and exact solutions of the bSK equation are obtained by means of Maple, such as solitary wave solutions, doubly-periodic solutions, and two-soliton solutions. Moreover, we compare the approximate solution with the exact solution in a table and analyze the absolute error and the relative error. The results reported in this article provide further evidence of the usefulness of the Adomian decomposition method for obtaining solutions of nonlinear problems.
NASA Technical Reports Server (NTRS)
Mier Muth, A. M.; Willsky, A. S.
1978-01-01
In this paper we describe a method for approximating a waveform by a spline. The method is quite efficient, as the data are processed sequentially. The basis of the approach is to view the approximation problem as a question of estimation of a polynomial in noise, with the possibility of abrupt changes in the highest derivative. This allows us to bring several powerful statistical signal processing tools into play. We also present some initial results on the application of our technique to the processing of electrocardiograms, where the knot locations themselves may be some of the most important pieces of diagnostic information.
An Extension of the Krieger-Li-Iafrate Approximation to the Optimized-Effective-Potential Method
Wilson, B.G.
1999-11-11
The Krieger-Li-Iafrate approximation can be expressed as the zeroth-order result of an unstable iterative method for solving the integral equation form of the optimized-effective-potential method. By pre-conditioning the iterate, a first-order correction can be obtained which recovers the bulk of the quantal oscillations missing in the zeroth-order approximation. A comparison of calculated total energies is given with Krieger-Li-Iafrate, Local Density Functional, and Hyper-Hartree-Fock results for non-relativistic atoms and ions.
Rational approximations from power series of vector-valued meromorphic functions
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function, F: C → C^N, which is analytic at z = 0 and meromorphic in a neighborhood of z = 0, and let its Maclaurin series be given. In this work we develop vector-valued rational approximation procedures for F(z) by applying vector extrapolation methods to the sequence of partial sums of its Maclaurin series. We analyze some of the algebraic and analytic properties of the rational approximations thus obtained and show that they are akin to Padé approximants. In particular, we prove a Koenig-type theorem concerning their poles and a de Montessus-type theorem concerning their uniform convergence. We show how optimal approximations to multiple poles and to Laurent expansions about these poles can be constructed. Extensions of the above procedures and of the accompanying theoretical results to functions defined in arbitrary linear spaces are also considered. One of the most interesting and immediate applications of this work is to the matrix eigenvalue problem. In a forthcoming paper we exploit the present developments to devise bona fide generalizations of the classical power method that are especially suitable for very large and sparse matrices. These generalizations can be used to approximate simultaneously several of the largest distinct eigenvalues and the corresponding eigenvectors and invariant subspaces of arbitrary matrices, which may or may not be diagonalizable, and are very closely related to known Krylov subspace methods.
Global collocation methods for approximation and the solution of partial differential equations
NASA Technical Reports Server (NTRS)
Solomonoff, A.; Turkel, E.
1986-01-01
Polynomial interpolation methods are applied both to the approximation of functions and to the numerical solution of hyperbolic and elliptic partial differential equations. The derivative matrix for a general sequence of collocation points is constructed. The approximate derivative is then found by a matrix-vector multiply. The effects of several factors on the performance of these methods, including the choice of collocation points, are then explored. The resolution of the schemes for both smooth functions and functions with steep gradients or discontinuities in some derivative is also studied. The accuracy when the gradients occur both near the center of the region and in the vicinity of the boundary is investigated. The importance of the aliasing limit on the resolution of the approximation is investigated in detail. Also examined is the effect of boundary treatment on the stability and accuracy of the scheme.
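A standard concrete instance of such a derivative matrix is the Chebyshev collocation differentiation matrix (Trefethen's construction on Gauss-Lobatto points). The sketch below is a generic illustration of "approximate derivative = matrix times vector", not the authors' code, and the choice n = 8 is arbitrary:

```python
import numpy as np

def cheb_diff_matrix(n):
    """Differentiation matrix D and nodes x for the n+1 Chebyshev
    Gauss-Lobatto points x_j = cos(j*pi/n); (D @ f(x)) approximates f'(x)."""
    if n == 0:
        return np.zeros((1, 1)), np.array([1.0])
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1)
    c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)          # boundary/sign weights
    X = np.tile(x, (n + 1, 1)).T
    dX = X - X.T                             # x_i - x_j
    D = np.outer(c, 1.0 / c) / (dX + np.eye(n + 1))
    D -= np.diag(D.sum(axis=1))              # "negative sum" trick for diagonal
    return D, x

D, x = cheb_diff_matrix(8)
deriv = D @ x**3        # collocation derivative of f(x) = x^3
```

Because collocation differentiation is exact for polynomials up to the interpolation degree, `deriv` reproduces 3x² to rounding error, which is a convenient check on the matrix construction.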
Improved Parker's method for topographic models using Chebyshev series and low rank approximation
NASA Astrophysics Data System (ADS)
Wu, Leyuan; Lin, Qiang
2017-03-01
We present a new method to improve the convergence of the well-known Parker's formula for the modelling of gravity and magnetic fields caused by sources with complex topography. In the original Parker's formula, two approximations are made, which may cause considerable numerical errors and instabilities: (1) the approximation of the forward and inverse continuous Fourier transforms by their discrete counterparts, the forward and inverse Fast Fourier Transform (FFT) algorithms; (2) the approximation of the exponential function by its Taylor series expansion. In a previous paper we addressed the first problem by applying the Gauss-FFT method instead of the standard FFT algorithm. The Gauss-FFT based method shows improved numerical efficiency and agrees well with space-domain analytical or hybrid analytical-numerical algorithms. However, even under the simplifying assumption of a calculation surface that is a level plane above all topographic sources, the method may still fail or become inaccurate under certain circumstances. When the peaks of the topography approach the observation surface too closely, the number of terms of the Taylor series expansion needed to reach a suitable precision becomes large and slows the calculation. We show in this paper that this problem is caused by the second approximation mentioned above: because of the convergence properties of the Taylor series expansion, the algorithm becomes inaccurate for certain topographic models with large amplitudes. Based on this observation, we present a modified Parker's method using low rank approximation (LRA) of the exponential function by means of the Chebfun software system. In this way, the optimal rate of convergence is achieved. Some pre-computation is needed but does not cause significant computational overhead. Synthetic and real model tests show that the method now works well for almost any practical topographic model, provided that the assumption
ERIC Educational Resources Information Center
Hummel, Thomas J.; Johnston, Charles B.
This research investigates stochastic approximation procedures of the Robbins-Monro type. Following a brief introduction to sequential experimentation, attention is focused on formal methods for selecting successive values of a single independent variable. Empirical results obtained through computer simulation are used to compare several formal…
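A minimal sketch of a Robbins-Monro iteration of the kind investigated here: successive values of the independent variable are driven by noisy observations of an unknown regression function. The regression function M(x) = 2x - 1, the gain sequence c/n, and the noise level are illustrative assumptions, not details from the study:

```python
import random

def robbins_monro(noisy_m, target, x0, steps, c=1.0):
    """Robbins-Monro stochastic approximation:
        x_{n+1} = x_n - (c/n) * (Y_n - target),
    where Y_n is a noisy observation of the regression function M at x_n.
    Converges to the root of M(x) = target under the classical conditions
    on the gain sequence (sum a_n = inf, sum a_n^2 < inf)."""
    x = x0
    for n in range(1, steps + 1):
        y = noisy_m(x)
        x -= (c / n) * (y - target)
    return x

random.seed(0)
# Noisy observations of M(x) = 2x - 1; the root of M(x) = 0 is x* = 0.5.
root = robbins_monro(lambda x: 2 * x - 1 + random.gauss(0, 0.1),
                     target=0.0, x0=0.0, steps=5000)
```

The harmonic gain 1/n is exactly the "formal method for selecting successive values" that this kind of simulation study compares against alternatives.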
NASA Technical Reports Server (NTRS)
Tiffany, Sherwood H.; Adams, William M., Jr.
1988-01-01
The approximation of unsteady generalized aerodynamic forces in the equations of motion of a flexible aircraft is discussed. Two methods of formulating these approximations are extended to include the same flexibility in constraining the approximations and the same methodology in optimizing nonlinear parameters as another currently used extended least-squares method. Optimal selection of nonlinear parameters is made in each of the three methods by use of the same nonlinear, nongradient optimizer. The objective of the nonlinear optimization is to obtain rational approximations to the unsteady aerodynamics whose state-space realization is of lower order than that required when no optimization of the nonlinear terms is performed. The free linear parameters are determined using the least-squares matrix techniques of a Lagrange multiplier formulation of an objective function which incorporates selected linear equality constraints. State-space mathematical models resulting from the different approaches are described, and results are presented that show comparative evaluations from application of each of the extended methods to a numerical example.
An approximate method for solution to variable moment of inertia problems
NASA Technical Reports Server (NTRS)
Beans, E. W.
1981-01-01
An approximation method is presented for reducing a nonlinear differential equation (for the 'weather vaning' motion of a wind turbine) to an equivalent constant moment of inertia problem. The integrated average of the moment of inertia is determined. The cycle time was found to match that of the equivalent problem if the rotating speed is more than 4 times the system's minimum natural frequency.
An analytical technique for approximating unsteady aerodynamics in the time domain
NASA Technical Reports Server (NTRS)
Dunn, H. J.
1980-01-01
An analytical technique is presented for approximating unsteady aerodynamic forces in the time domain. The order of the elements of a matrix Pade approximation was postulated, and the resulting polynomial coefficients were determined through a combination of least-squares estimates for the numerator coefficients and a constrained gradient search for the denominator coefficients, which ensures stable approximating functions. The number of differential equations required to represent the aerodynamic forces to a given accuracy tends to be smaller than that employed in certain existing techniques where the denominator coefficients are chosen a priori. Results are shown for an aeroelastic, cantilevered, semispan wing which indicate that a good fit to the aerodynamic forces for oscillatory motion can be achieved with a matrix Pade approximation having fourth-order numerator and second-order denominator polynomials.
Approximate Solution Methods for Spectral Radiative Transfer in High Refractive Index Layers
NASA Technical Reports Server (NTRS)
Siegel, R.; Spuckler, C. M.
1994-01-01
Some ceramic materials for high temperature applications are partially transparent for radiative transfer. The refractive indices of these materials can be substantially greater than one which influences internal radiative emission and reflections. Heat transfer behavior of single and laminated layers has been obtained in the literature by numerical solutions of the radiative transfer equations coupled with heat conduction and heating at the boundaries by convection and radiation. Two-flux and diffusion methods are investigated here to obtain approximate solutions using a simpler formulation than required for exact numerical solutions. Isotropic scattering is included. The two-flux method for a single layer yields excellent results for gray and two band spectral calculations. The diffusion method yields a good approximation for spectral behavior in laminated multiple layers if the overall optical thickness is larger than about ten. A hybrid spectral model is developed using the two-flux method in the optically thin bands, and radiative diffusion in bands that are optically thick.
A numerical method for approximating antenna surfaces defined by discrete surface points
NASA Technical Reports Server (NTRS)
Lee, R. Q.; Acosta, R.
1985-01-01
A simple numerical method for the quadratic approximation of a discretely defined reflector surface is described. The numerical method was applied to interpolate the surface normal of a parabolic reflector surface from a grid of the nine surface points closest to the point of incidence. After computing the surface normals, geometrical optics and the aperture integration method using the discrete Fast Fourier Transform (FFT) were applied to compute the radiation patterns for symmetric and offset antenna configurations. The computed patterns are compared with those of the analytic case and with patterns generated by another numerical technique using spline-function approximation. Examples of the computations are given, and the accuracy of the numerical method is discussed.
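The core idea, fitting a quadratic through a small grid of surface points and taking the gradient of the fit as the surface normal, can be sketched as follows. The paraboloid test surface, the 3x3 grid, and the function names are illustrative, not the paper's implementation:

```python
import numpy as np

def quadratic_normal(pts, x0, y0):
    """Least-squares fit of z = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    through a cluster of surface points, then the unit normal of the
    fitted surface at (x0, y0)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    a, b, c, d, e, f = np.linalg.lstsq(A, z, rcond=None)[0]
    # The gradient of z(x, y) gives the (unnormalized) surface normal.
    n = np.array([-(b + 2 * d * x0 + e * y0),
                  -(c + e * x0 + 2 * f * y0),
                  1.0])
    return n / np.linalg.norm(n)

# Nine points on the paraboloid z = (x^2 + y^2) / 4 around the origin.
g = np.linspace(-0.1, 0.1, 3)
pts = np.array([[xi, yi, (xi**2 + yi**2) / 4] for xi in g for yi in g])
n = quadratic_normal(pts, 0.0, 0.0)
```

For a paraboloid the quadratic fit is exact, so the recovered normal at the vertex is (0, 0, 1); for a general reflector the nine-point fit is only a local approximation, which is where the accuracy discussion in the abstract comes in.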
NASA Astrophysics Data System (ADS)
Alam Khan, Najeeb; Razzaq, Oyoon Abdul
2016-03-01
In the present work a wavelet approximation method is employed to solve fuzzy boundary value differential equations (FBVDEs). Essentially, a truncated Legendre wavelet series together with the Legendre wavelet operational matrix of derivative is utilized to convert an FBVDE into a simple computational problem by reducing it to a system of fuzzy algebraic linear equations. The capability of the scheme is investigated on second-order FBVDEs considered under generalized H-differentiability. Solutions are represented graphically, showing the competency and accuracy of this method.
Approximate method of free energy calculation for spin system with arbitrary connection matrix
NASA Astrophysics Data System (ADS)
Kryzhanovsky, Boris; Litinskii, Leonid
2015-01-01
The proposed method of free energy calculation is based on approximating the energy distribution in the microcanonical ensemble by a Gaussian distribution. We expect the approach to be effective for systems with long-range interaction, where a large coordination number q ensures the correctness of applying the central limit theorem. However, the method also provides good results for systems with short-range interaction, where the number q is not so large.
A method for solving stochastic equations by reduced order models and local approximations
Grigoriu, M.
2012-08-01
A method is proposed for solving equations with random entries, referred to as stochastic equations (SEs). The method is based on two recent developments. The first approximates the response surface giving the solution of a stochastic equation as a function of its random parameters by a finite set of hyperplanes tangent to it at expansion points selected by geometrical arguments. The second approximates the vector of random parameters in the definition of a stochastic equation by a simple random vector, referred to as a stochastic reduced order model (SROM), and uses it to construct an SROM for the solution of this equation. The proposed method is a direct extension of these two methods. It uses SROMs to select expansion points, rather than selecting these points by geometrical considerations, and represents the solution by linear and/or higher-order local approximations. The implementation and performance of the method are illustrated by numerical examples involving random eigenvalue problems and stochastic algebraic/differential equations. The method is conceptually simple, non-intrusive, efficient relative to classical Monte Carlo simulation, accurate, and guaranteed to converge to the exact solution.
NASA Technical Reports Server (NTRS)
Grantz, A. C.; Dejarnette, F. R.; Thompson, R. A.
1989-01-01
The approximate axisymmetric method presented for accurately calculating the surface and flowfield properties of fully viscous hypersonic flow over blunt-nosed bodies incorporates the turbulence model of Cebeci-Smith (1970) and the equilibrium air tables of Hansen (1959). The method is faster than the parabolized Navier-Stokes or viscous shock layer solvers that it could replace for preliminary design determinations. Surface heat transfer and pressure predictions for the present method are comparable with the more accurate viscous shock layer method as well as flight test and wind tunnel data. A starting solution is not required.
Tejero, E. M.; Gatling, G.
2009-03-15
A method for approximating arbitrary axial magnetic field profiles for a given solenoidal electromagnet coil array is described. The method casts the individual contributions from each coil as a truncated orthonormal basis for the space within the array. This truncated basis allows for the linear decomposition of an arbitrary profile function, which returns the appropriate currents for each coil to best reproduce the desired profile. We present the mathematical details of the method along with a detailed example of its use. The results from the method are used in a simulation and compared with magnetic field measurements.
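The linear decomposition described above amounts to a least-squares solve for the coil currents. In the sketch below the on-axis field of a single loop, B(z) = R^2 / (R^2 + (z - z_k)^2)^(3/2) per unit current, is a textbook model with physical constants absorbed into the currents; the coil radius, spacing, and target profile are illustrative assumptions, not the authors' apparatus:

```python
import numpy as np

def coil_currents(z, coil_positions, target, R=1.0):
    """Least-squares currents for a solenoidal coil array so that the
    summed on-axis loop fields best reproduce target(z).  Each loop of
    radius R at z_k contributes R^2/(R^2 + (z - z_k)^2)^1.5 per unit
    current (constants folded into the current)."""
    basis = np.column_stack(
        [R**2 / (R**2 + (z - zk)**2) ** 1.5 for zk in coil_positions])
    currents, *_ = np.linalg.lstsq(basis, target(z), rcond=None)
    return currents, basis

z = np.linspace(-2, 2, 200)
coils = np.linspace(-2, 2, 9)          # 9 equally spaced coils
# Ask for a uniform unit field over the array interior.
I, basis = coil_currents(z, coils, lambda z: np.ones_like(z))
residual = np.max(np.abs(basis @ I - 1.0))
```

Any other desired profile (a gradient, a magnetic well) is handled by the same solve with a different `target`, which is the practical appeal of the decomposition.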
Bishop, R. F.; Li, P. H. Y.
2011-04-15
An approximation hierarchy, called the lattice-path-based subsystem (LPSUBm) approximation scheme, is described for the coupled-cluster method (CCM). It is applicable to systems defined on a regular spatial lattice. We then apply it to two well-studied prototypical spin-1/2 Heisenberg antiferromagnetic spin-lattice models, namely, the XXZ and the XY models on the square lattice in two dimensions. Results are obtained in each case for the ground-state energy, the ground-state sublattice magnetization, and the quantum critical point. They are all in good agreement with those from such alternative methods as spin-wave theory, series expansions, quantum Monte Carlo methods, and the CCM using the alternative lattice-animal-based subsystem (LSUBm) and the distance-based subsystem (DSUBm) schemes. Each of the three CCM schemes (LSUBm, DSUBm, and LPSUBm) for use with systems defined on a regular spatial lattice is shown to have its own advantages in particular applications.
Tuleau-Malot, Christine; Rouis, Amel; Grammont, Franck; Reynaud-Bouret, Patricia
2014-07-01
The unitary events (UE) method is one of the most popular and efficient methods used over the past decade to detect patterns of coincident joint spike activity among simultaneously recorded neurons. The detection of coincidences is usually based on the binned coincidence count (Grün, 1996), which is known to be subject to loss in synchrony detection (Grün, Diesmann, Grammont, Riehle, & Aertsen, 1999). This defect has been corrected by the multiple shift coincidence count (Grün et al., 1999). The statistical properties of this count had not been further investigated until this work, the formula being more difficult to deal with than the original binned count. First, we propose a new notion of coincidence count, the delayed coincidence count, which is equal to the multiple shift coincidence count when discretized point processes are involved as models for the spike trains. Moreover, it generalizes this notion to nondiscretized point processes, allowing us to propose a new Gaussian approximation of the count. Since unknown parameters are involved in the approximation, we perform a plug-in step, where the unknown parameters are replaced by estimated ones, leading to a modification of the approximating distribution. Finally, the method takes the multiplicity of the tests into account via a Benjamini-Hochberg approach (Benjamini & Hochberg, 1995) to guarantee a prescribed control of the false discovery rate. We compare our new method, MTGAUE (multiple tests based on a Gaussian approximation of the unitary events), with the UE method proposed in Grün et al. (1999) over various simulations, showing that MTGAUE extends the validity of the previous method. In particular, MTGAUE is able to detect both profusion and lack of coincidences with respect to the independence case and is robust to changes in the underlying model. Furthermore, MTGAUE is applied to real data.
NASA Astrophysics Data System (ADS)
Kolesnikov, V. I.; Yakovlev, V. B.; Bardushkin, V. V.; Lavrov, I. V.; Sychev, A. P.; Yakovleva, E. N.
2013-09-01
Various methods for evaluation of the effective permittivity of heterogeneous media, namely, the effective medium approximation (Bruggeman's approximation), the Maxwell-Garnett approximation, Wiener's bounds, and the Hashin-Shtrikman variational bounds (for effective static characteristics) are combined on the basis of a generalized singular approximation.
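The Bruggeman and Maxwell-Garnett approximations named above have simple closed forms for a two-phase mixture with spherical inclusions. The sketch below covers only the scalar, isotropic, static case; the numerical permittivities and volume fraction are illustrative:

```python
import math

def bruggeman(eps1, eps2, f):
    """Bruggeman effective-medium permittivity of a two-phase mixture,
    f = volume fraction of phase 1.  Solves the self-consistency condition
        f*(e1 - ee)/(e1 + 2*ee) + (1 - f)*(e2 - ee)/(e2 + 2*ee) = 0,
    which reduces to the quadratic 2*ee^2 - b*ee - e1*e2 = 0."""
    b = eps1 * (3 * f - 1) + eps2 * (2 - 3 * f)
    return (b + math.sqrt(b * b + 8 * eps1 * eps2)) / 4

def maxwell_garnett(eps_h, eps_i, f):
    """Maxwell-Garnett permittivity for spherical inclusions eps_i at
    volume fraction f embedded in a host eps_h."""
    k = (eps_i - eps_h) / (eps_i + 2 * eps_h)
    return eps_h * (1 + 2 * f * k) / (1 - f * k)

# 30 % of phase 1 (eps = 2) in phase 2 (eps = 10).
ee_b = bruggeman(2.0, 10.0, 0.3)
ee_mg = maxwell_garnett(10.0, 2.0, 0.3)
```

Both estimates must lie between the Wiener bounds (the harmonic and arithmetic means of the constituent permittivities), which is a convenient consistency check and the reason such bounds are combined with the approximations in the abstract.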
Evaluation of approximate methods for the prediction of noise shielding by airframe components
NASA Technical Reports Server (NTRS)
Ahtye, W. F.; Mcculley, G.
1980-01-01
An evaluation of some approximate methods for the prediction of shielding of monochromatic sound and broadband noise by aircraft components is reported. Anechoic-chamber measurements of the shielding of a point source by various simple geometric shapes were made and the measured values compared with those calculated by the superposition of asymptotic closed-form solutions for the shielding by a semi-infinite plane barrier. The shields used in the measurements consisted of rectangular plates, a circular cylinder, and a rectangular plate attached to the cylinder to simulate a wing-body combination. The normalized frequency, defined as a product of the acoustic wave number and either the plate width or cylinder diameter, ranged from 4.6 to 114. Microphone traverses in front of the rectangular plates and cylinders generally showed a series of diffraction bands that matched those predicted by the approximate methods, except for differences in the magnitudes of the attenuation minima which can be attributed to experimental inaccuracies. The shielding of wing-body combinations was predicted by modifications of the approximations used for rectangular and cylindrical shielding. Although the approximations failed to predict diffraction patterns in certain regions, they did predict the average level of wing-body shielding with an average deviation of less than 3 dB.
An approximate method for calculating three-dimensional inviscid hypersonic flow fields
NASA Technical Reports Server (NTRS)
Riley, Christopher J.; Dejarnette, Fred R.
1990-01-01
An approximate solution technique was developed for 3-D inviscid, hypersonic flows. The method employs Maslen's explicit pressure equation in addition to the assumption of approximate stream surfaces in the shock layer. This approximation represents a simplification of Maslen's asymmetric method. The present method provides a tractable procedure for computing the inviscid flow over 3-D surfaces at angle of attack. The solution procedure involves iteratively changing the shock shape in the subsonic-transonic region until the correct body shape is obtained. Beyond this region, the shock surface is determined using a marching procedure. Results are presented for a spherically blunted cone, a paraboloid, and an elliptic cone at angle of attack. The calculated surface pressures are compared with experimental data and finite difference solutions of the Euler equations. Shock shapes and pressure profiles are also examined. Comparisons indicate the method adequately predicts shock layer properties on blunt bodies in hypersonic flow. The speed of the calculations makes the procedure attractive for engineering design applications.
Approximation of Integrals via Monte Carlo Methods, with an Application to Calculating Radar Detection Probabilities (DSTO–TR–1692)
Weinberg, Graham V.; Ross …
2005-03-01
A method for the accurate and smooth approximation of standard thermodynamic functions
NASA Astrophysics Data System (ADS)
Coufal, O.
2013-01-01
A method is proposed for the calculation of approximations of standard thermodynamic functions. The method is consistent with the physical properties of standard thermodynamic functions: in contrast to the hitherto used approximations, the approximation functions are continuous and smooth in every temperature interval in which no phase transformations take place. The calculation algorithm was implemented in the SmoothSTF program in the C++ language, which is part of this paper. Program summary: Program title: SmoothSTF. Catalogue identifier: AENH_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENH_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 3807. No. of bytes in distributed program, including test data, etc.: 131965. Distribution format: tar.gz. Programming language: C++. Computer: any computer with the gcc version 4.3.2 compiler. Operating system: Debian GNU/Linux 6.0; the program can be run in any operating system in which the gcc compiler can be installed, see http://gcc.gnu.org/install/specific.html. RAM: 256 MB is sufficient for a table of standard thermodynamic functions with 500 lines. Classification: 4.9. Nature of problem: standard thermodynamic functions (STF) of individual substances are given by thermal capacity at constant pressure, entropy and enthalpy. STF are continuous and smooth in every temperature interval in which no phase transformations take place. The temperature dependence of STF, as expressed by a table of values, is approximated for further application by temperature functions. In the paper, a method is proposed for calculating approximation functions which, in contrast to the hitherto used approximations, are continuous and smooth in every temperature interval. Solution method: The approximation functions are
NASA Astrophysics Data System (ADS)
Krochik, G. M.
1980-02-01
Stimulated Raman scattering of a randomly modulated pump is investigated by the method of successive approximations. This involves expanding solutions in terms of small parameters, which are ratios of the correlation scales of random effects to other characteristic dynamic scales of the problem. Systems of closed equations are obtained for the moments of the amplitudes of the Stokes and pump waves and of the molecular vibrations. These describe the dynamics of the process allowing for changes in the pump intensity and statistics due to a three-wave interaction. By analyzing equations in higher-order approximations, it is possible to establish the conditions of validity of the first (Markov) and second approximations. In particular, it is found that these are valid for pump intensities J_L both above and below the critical value J_cr near which the gain begins to increase rapidly and reproduction of the pump spectrum by the Stokes wave is initiated. Solutions are obtained for the average intensities of the Stokes wave and molecular vibrations in the first approximation in a constant pump field. It is established that, for J_L ≳ J_cr, the Stokes wave undergoes rapid nonsteady-state amplification, which is associated with an increase in the amplitude of the molecular vibrations. The results of the calculations show good agreement with known experimental data.
Physically weighted approximations of unsteady aerodynamic forces using the minimum-state method
NASA Technical Reports Server (NTRS)
Karpel, Mordechay; Hoadley, Sherwood Tiffany
1991-01-01
The Minimum-State Method for rational approximation of unsteady aerodynamic force coefficient matrices, modified to allow physical weighting of the tabulated aerodynamic data, is presented. The approximation formula and the associated time-domain, state-space, open-loop equations of motion are given, and the numerical procedure for calculating the approximation matrices, with weighted data and with various equality constraints, is described. Two data weighting options are presented. The first weighting normalizes the aerodynamic data to a maximum unit value of each aerodynamic coefficient. The second weighting is one in which each tabulated coefficient, at each reduced-frequency value, is weighted according to the effect of an incremental error of this coefficient on the aeroelastic characteristics of the system. This weighting yields a better fit of the more important terms at the expense of the less important ones. The resulting approximation yields a relatively low number of aerodynamic lag states in the subsequent state-space model. The formulation forms the basis of the MIST computer program, which is written in FORTRAN for use on the MicroVAX computer and interfaces with NASA's Interaction of Structures, Aerodynamics and Controls (ISAC) computer program. The program structure, capabilities and interfaces are outlined in the appendices, and a numerical example which utilizes Rockwell's Active Flexible Wing (AFW) model is given and discussed.
Nikiforov, Alexander; Gamez, Jose A.; Thiel, Walter; Huix-Rotllant, Miquel; Filatov, Michael
2014-09-28
Quantum-chemical computational methods are benchmarked for their ability to describe conical intersections in a series of organic molecules and models of biological chromophores. Reference results for the geometries, relative energies, and branching planes of conical intersections are obtained using ab initio multireference configuration interaction with single and double excitations (MRCISD). They are compared with the results from more approximate methods, namely, the state-interaction state-averaged restricted ensemble-referenced Kohn-Sham method, spin-flip time-dependent density functional theory, and a semiempirical MRCISD approach using an orthogonalization-corrected model. It is demonstrated that these approximate methods reproduce the ab initio reference data very well, with root-mean-square deviations in the optimized geometries of the order of 0.1 Å or less and with reasonable agreement in the computed relative energies. A detailed analysis of the branching plane vectors shows that all currently applied methods yield similar nuclear displacements for escaping the strong non-adiabatic coupling region near the conical intersections. Our comparisons support the use of the tested quantum-chemical methods for modeling the photochemistry of large organic and biological systems.
NASA Technical Reports Server (NTRS)
Murphy, P. C.
1984-01-01
An algorithm for maximum likelihood (ML) estimation is developed primarily for multivariable dynamic systems. The algorithm relies on a new optimization method referred to as a modified Newton-Raphson with estimated sensitivities (MNRES). The method determines sensitivities by using slope information from local surface approximations of each output variable in parameter space. The fitted surface allows sensitivity information to be updated at each iteration with a significant reduction in computational effort compared with integrating the analytically determined sensitivity equations or using a finite-difference method. Different surface-fitting methods are discussed and demonstrated. Aircraft estimation problems are solved by using both simulated and real-flight data to compare MNRES with commonly used methods; in these solutions MNRES is found to be equally accurate and substantially faster. MNRES eliminates the need to derive sensitivity equations, thus producing a more generally applicable algorithm.
A goal-oriented adaptive procedure for the quasi-continuum method with cluster approximation
NASA Astrophysics Data System (ADS)
Memarnahavandi, Arash; Larsson, Fredrik; Runesson, Kenneth
2015-04-01
We present a strategy for adaptive error control for the quasi-continuum (QC) method applied to molecular statics problems. The QC method is introduced in two steps: First, introducing QC interpolation while accounting for the exact summation of all the bond energies, we compute goal-oriented error estimators in a straightforward fashion based on the pertinent adjoint (dual) problem. Second, for large QC elements the bond energy and its derivatives are typically computed using an appropriate discrete quadrature with cluster approximations, which introduces a model error. The combined error is estimated approximately based on the same dual problem in conjunction with a hierarchical strategy for approximating the residual. As a model problem, we carry out atomistic-to-continuum homogenization of a graphene monolayer, where the carbon-carbon energy bonds are modeled via the Tersoff-Brenner potential, which involves next-nearest neighbor couplings. In particular, we are interested in computing the representative response for an imperfect lattice. Within the goal-oriented framework it becomes natural to choose the macro-scale (continuum) stress as the "quantity of interest". Two different formulations are adopted: the Basic formulation and the Global formulation. The presented numerical investigation shows the accuracy and robustness of the proposed error estimator and the pertinent adaptive algorithm.
S-curve networks and an approximate method for estimating degree distributions of complex networks
NASA Astrophysics Data System (ADS)
Guo, Jin-Li
2010-12-01
In the study of complex networks almost all theoretical models have the property of infinite growth, but the size of actual networks is finite. Using statistics on China's Internet IPv4 (Internet Protocol version 4) addresses, this paper proposes a forecasting model based on an S-curve (logistic curve) and forecasts the growing trend of IPv4 addresses in China. The results provide reference values for optimizing the distribution of IPv4 address resources and for the development of IPv6. Based on the laws of IPv4 growth, namely bulk growth and a finite growth limit, we propose a finite network model with bulk growth, called an S-curve network. Analysis demonstrates that the analytic method based on uniform distributions (i.e., the Barabási-Albert method) is not suitable for this network. We develop an approximate method to predict the growth dynamics of the individual nodes and use it to calculate analytically the degree distribution and the scaling exponents. The analytical result agrees well with simulations, obeying an approximately power-law form. This method can overcome a shortcoming of the Barabási-Albert method commonly used in current network research.
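The logistic (S-curve) law used for such a forecast can be sketched in a few lines of Python; the parameter values below are purely illustrative, not fitted to the IPv4 data:

```python
import math

def logistic(t, K, a, b):
    """S-curve (logistic) growth: K is the finite carrying capacity,
    a sets the initial level, b the growth rate."""
    return K / (1.0 + a * math.exp(-b * t))

# Illustrative network capped at K = 100 nodes.
K, a, b = 100.0, 9.0, 0.5
sizes = [logistic(t, K, a, b) for t in range(0, 41, 5)]
```

Early on the curve grows almost exponentially (bulk growth); as t increases it saturates at K, the finite limit that distinguishes an S-curve network from infinitely growing models.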
NASA Astrophysics Data System (ADS)
Lehikoinen, A.; Huttunen, J. M.; Finsterle, S.; Kowalsky, M. B.; Kaipio, J. P.
2007-05-01
We extend the previously presented methodology for imaging the evolution of electrically conductive fluids in porous media. In that method, the nonstationary inversion problem was solved using Bayesian filtering. The method was demonstrated using a synthetically generated test case where the monitored target is a time-varying water plume in an unsaturated porous medium, and the imaging modality was electrical resistance tomography (ERT). The inverse problem was formulated as a state estimation problem, which is based on observation and evolution models. As an observation model for ERT, the complete electrode model was used, and for time-varying unsaturated flow, the Richards equation was used as an evolution model. Although the "true" evolution of water flow was simulated using a heterogeneous permeability field, in the inversion step the permeability was assumed to be homogeneous. This assumption leads to approximation errors, which have been taken into account by constructing a statistical model between the different realizations of the accurate and the approximate fluid flow models. This statistical model was constructed using an ensemble of samples from the evolution model in such a way that the construction can be carried out prior to taking observations. However, the statistics of the approximation errors actually depend on the observations (through the state). In this work we extend the previously presented method so that the statistics of the approximation error are adjusted based on the observations. The basic idea of the extension is to gather those samples from the ensemble which at the current time best represent the observed state. We then determine the statistics of the approximation error based on these collated samples. The extension of the methodology provides improved estimates of water saturation distributions compared to the previously presented approaches. The proposed methodology may be extended for imaging and estimating parameters of dynamical processes.
NASA Astrophysics Data System (ADS)
Gambacurta, D.; Grasso, M.; Engel, J.
2015-09-01
We make use of a subtraction procedure, introduced to overcome double-counting problems in beyond-mean-field theories, in the second random-phase approximation (SRPA) for the first time. This procedure guarantees the stability of the SRPA (so that all excitation energies are real). We show that the method fits perfectly into nuclear density-functional theory. We illustrate applications to the monopole and quadrupole response and to low-lying 0+ and 2+ states in the nucleus 16O. We show that the subtraction procedure leads to (i) results that are weakly cutoff dependent and (ii) a considerable reduction of the SRPA downward shift with respect to the random-phase approximation (RPA) spectra (systematically found in all previous applications). This implementation of the SRPA model will allow a reliable analysis of the effects of two particle-two hole configurations (2p2h) on the excitation spectra of medium-mass and heavy nuclei.
Chemical physics without the Born-Oppenheimer approximation: The molecular coupled-cluster method
NASA Astrophysics Data System (ADS)
Monkhorst, Hendrik J.
1987-08-01
The Born-Oppenheimer (BO) and Born-Huang (BH) treatments of molecular eigenstates are reexamined. It is argued that in applications of the BO approximation to nonrigid molecules and chemical dynamics involving single potential-energy surfaces (PES's), errors on the order of tens of percent can easily occur in many computed properties. Introduction of a BH expansion (in BO states) will always lead to poor convergence when the BO approximation fails; its diagonal (or adiabatic) approximation will not change this situation. The main problem in the above applications is the absence of well-developed, well-separated minima in the PES (or no minima at all). Inspired by a non-BO view of a molecule by Essén [Int. J. Quantum Chem. 12, 721 (1977)], a molecular coupled-cluster (MCC) method is formulated. An Essén molecule consists of neutral subunits (``atoms''), weakly interacting (``bonds'') in some spatial arrangement (``structure''). The quasiseparation into collective and individual motions within the molecule comes about by virtue of the virial theorem, not the smallness of the electron-to-nuclear mass ratio. The MCC method not only should converge well in the cluster sizes, but it also is capable of describing electronic shell and molecular geometric structures. It can be viewed as the workable formalism for Essén's physical picture of a molecule. The time-independent and time-dependent versions are described. The latter is useful for scattering, chemical dynamics, laser chemistry, half-collisions, and any other phenomena that can be described as the time evolution of many-particle wave packets. A close relationship to time-dependent Hartree-Fock theory exists. A few implementational aspects are discussed, such as symmetry, conservation laws, approximations, and numerical techniques, as well as a possible relation with a non-BO PES. Appendixes contain mathematical details.
NASA Astrophysics Data System (ADS)
Wu, Fuke; Tian, Tianhai; Rawlings, James B.; Yin, George
2016-05-01
A frequently used reduction technique for stochastic chemical kinetics with two time scales is based on the chemical master equation and yields the modified stochastic simulation algorithm (SSA). For chemical reaction processes involving a large number of molecular species and reactions, the collection of slow reactions may still include many species and reactions, so the SSA remains computationally expensive. Because the chemical Langevin equations (CLEs) can effectively handle a large number of molecular species and reactions, this paper develops a reduction method based on the CLE, using the stochastic averaging principle developed in the work of Khasminskii and Yin [SIAM J. Appl. Math. 56, 1766-1793 (1996); ibid. 56, 1794-1819 (1996)] to average out the fast-reacting variables. This reduction method leads to a limit averaging system, which is an approximation of the slow reactions. Because in stochastic chemical kinetics the CLE is itself an approximation of the SSA, the limit averaging system can be treated as an approximation of the slow reactions. As an application, we examine the reduction of computational complexity for gene regulatory networks with two time scales driven by intrinsic noise. For linear and nonlinear protein production functions, simulations show that the sample average (expectation) of the limit averaging system is close to that of the slow-reaction process based on the SSA, demonstrating that the limit averaging system is an efficient approximation of the slow-reaction process in the sense of weak convergence.
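For reference, the SSA that the CLE-based reduction is meant to bypass can be written as a minimal Gillespie loop. This sketch treats a hypothetical one-species birth-death system, far simpler than the networks discussed above:

```python
import random

def ssa_birth_death(k, g, x0, t_end, seed=0):
    """Minimal Gillespie SSA for one species X:
    production at rate k, degradation at rate g*x."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    while True:
        a1, a2 = k, g * x           # reaction propensities
        a0 = a1 + a2
        if a0 == 0.0:
            break
        tau = rng.expovariate(a0)   # exponential waiting time
        if t + tau > t_end:
            break
        t += tau
        if rng.random() * a0 < a1:  # choose which reaction fires
            x += 1
        else:
            x -= 1
    return x
```

Averaged over many runs the stationary mean is k/g; the need to step through every single reaction event is what makes the SSA expensive for large networks.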
Approximate method for calculating heating rates on three-dimensional vehicles
NASA Astrophysics Data System (ADS)
Hamilton, H. Harris; Greene, Francis A.; Dejarnette, F. R.
1994-05-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary-layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. The method is validated by comparing with experimental heating data and with thin-layer Navier-Stokes calculations on the shuttle orbiter at both wind-tunnel and flight conditions and with thin-layer Navier-Stokes calculations on the HL-20 at wind-tunnel conditions.
Approximate method for calculating heating rates on three-dimensional vehicles
NASA Technical Reports Server (NTRS)
Hamilton, H. Harris; Greene, Francis A.; Dejarnette, F. R.
1994-01-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary-layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. The method is validated by comparing with experimental heating data and with thin-layer Navier-Stokes calculations on the shuttle orbiter at both wind-tunnel and flight conditions and with thin-layer Navier-Stokes calculations on the HL-20 at wind-tunnel conditions.
Domain decomposition methods for systems of conservation laws: Spectral collocation approximations
NASA Technical Reports Server (NTRS)
Quarteroni, Alfio
1989-01-01
Hyperbolic systems of conservation laws are considered which are discretized in space by spectral collocation methods and advanced in time by finite difference schemes. At any time level, a domain decomposition method based on an iteration-by-subdomain procedure is introduced, yielding at each step a sequence of independent subproblems (one for each subdomain) that can be solved simultaneously. The method is set for a general nonlinear problem in several space variables. The convergence analysis, however, is carried out only for a linear one-dimensional system with continuous solutions. A precise form of the error reduction factor at each iteration is derived. Although the method is applied here to the case of spectral collocation approximation only, the idea is fairly general and can be used in a different context as well. For instance, its application to space discretization by finite differences is straightforward.
NASA Technical Reports Server (NTRS)
Monchick, L.; Green, S.
1977-01-01
Two dimensionality-reducing approximations, the j_z-conserving coupled states (sometimes called centrifugal decoupling) method and the effective potential method, were applied to collision calculations of He with CO and with HCl. The coupled states method was found to be sensitive to the interpretation of the centrifugal angular momentum quantum number in the body-fixed frame, but the choice leading to the original McGuire-Kouri expression for the scattering amplitude - and to the simplest formulas - proved to be quite successful in reproducing differential and gas kinetic cross sections. The computationally cheaper effective potential method was much less accurate.
Shu, Yu-Chen; Chern, I-Liang; Chang, Chien C.
2014-10-15
Most elliptic interface solvers become complicated for complex interface problems at those “exceptional points” where there are not enough neighboring interior points for high order interpolation. Such complication increases especially in three dimensions. Usually, the solvers are thus reduced to low order accuracy. In this paper, we classify these exceptional points and propose two recipes to maintain order of accuracy there, aiming at improving the previous coupling interface method [26]. Yet the idea is also applicable to other interface solvers. The main idea is to have at least first order approximations for second order derivatives at those exceptional points. Recipe 1 is to use the finite difference approximation for the second order derivatives at a nearby interior grid point, whenever this is possible. Recipe 2 is to flip domain signatures and introduce a ghost state so that a second-order method can be applied. This ghost state is a smooth extension of the solution at the exceptional point from the other side of the interface. The original state is recovered by a post-processing using nearby states and jump conditions. The choice of recipes is determined by a classification scheme of the exceptional points. The method renders the solution and its gradient uniformly second-order accurate in the entire computed domain. Numerical examples are provided to illustrate the second order accuracy of the presently proposed method in approximating the gradients of the original states for some complex interfaces which we had tested previously in two and three dimensions, and for a real molecule (1D63), which is double-helix shaped and composed of hundreds of atoms.
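Recipe 1 amounts to falling back on standard finite-difference stencils for the second derivative; a minimal one-dimensional illustration (not the full coupling interface scheme) is:

```python
def d2_central(f, x, h):
    """Second-order central difference for f''(x); needs
    neighbors on both sides of x."""
    return (f(x - h) - 2.0 * f(x) + f(x + h)) / h**2

def d2_one_sided(f, x, h):
    """First-order one-sided difference for f''(x); usable at an
    'exceptional point' where only one side has interior neighbors."""
    return (f(x) - 2.0 * f(x + h) + f(x + 2.0 * h)) / h**2
```

The one-sided stencil is only first-order accurate, which matches the "at least first order approximations for second order derivatives" the paper requires at exceptional points.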
NASA Astrophysics Data System (ADS)
Liu, Jie; Sun, Xingsheng; Han, Xu; Jiang, Chao; Yu, Dejie
2015-05-01
Based on the Gegenbauer polynomial expansion theory and the regularization method, an analytical method is proposed to identify dynamic loads acting on stochastic structures. Dynamic loads are expressed as functions of time and random parameters in the time domain, and the forward model of dynamic load identification is established through the discretized convolution integral of the loads and the corresponding unit-pulse response functions of the system. Random parameters are approximated by random variables with λ-probability density functions (PDFs) or their derivative PDFs. For this kind of random variable, Gegenbauer polynomial expansion is the unique correct choice for transforming the load identification problem for a stochastic structure into an equivalent deterministic system, through which the problem can be solved by any available deterministic method. With measured responses containing noise, the improved regularization operator is adopted to overcome the ill-posedness of load reconstruction and to obtain stable approximate solutions of the inverse problem and valid assessments of the statistics of the identified loads. Numerical simulations demonstrate that, for stochastic structures, the identification and assessment of dynamic loads are achieved steadily and effectively by the presented method.
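The discretized convolution integral that forms the forward model can be sketched as follows; the function name is hypothetical and the regularized inversion itself is omitted:

```python
def convolve_response(h, f, dt=1.0):
    """Forward model: response y[i] = sum_{k<=i} h[i-k] * f[k] * dt,
    with h the sampled unit-pulse response and f the load history."""
    return [dt * sum(h[i - k] * f[k] for k in range(i + 1))
            for i in range(len(f))]
```

Load identification inverts this lower-triangular map; with noisy measured responses the inversion is ill-posed, which is why a regularization operator is needed.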
James, Kevin R; Dowling, David R
2008-09-01
In underwater acoustics, the accuracy of computational field predictions is commonly limited by uncertainty in environmental parameters. An approximate technique for determining the probability density function (PDF) of computed field amplitude, A, from known environmental uncertainties is presented here. The technique can be applied to several, N, uncertain parameters simultaneously, requires N+1 field calculations, and can be used with any acoustic field model. The technique implicitly assumes independent input parameters and is based on finding the optimum spatial shift between field calculations completed at two different values of each uncertain parameter. This shift information is used to convert uncertain-environmental-parameter distributions into PDF(A). The technique's accuracy is good when the shifted fields match well. Its accuracy is evaluated in range-independent underwater sound channels via an L1 error norm defined between approximate and numerically converged results for PDF(A). In 50-m- and 100-m-deep sound channels with 0.5% uncertainty in depth (N=1) at frequencies between 100 and 800 Hz, and for ranges from 1 to 8 km, 95% of the approximate field-amplitude distributions generated L1 values less than 0.52 using only two field calculations. Obtaining comparable accuracy from traditional methods requires of order 10 field calculations, and up to 10^N when N>1.
Simplified method for including spatial correlations in mean-field approximations
NASA Astrophysics Data System (ADS)
Markham, Deborah C.; Simpson, Matthew J.; Baker, Ruth E.
2013-06-01
Biological systems involving proliferation, migration, and death are observed across all scales. For example, they govern cellular processes such as wound healing, as well as the population dynamics of groups of organisms. In this paper, we provide a simplified method for correcting mean-field approximations of volume-excluding birth-death-movement processes on a regular lattice. An initially uniform distribution of agents on the lattice may give rise to spatial heterogeneity, depending on the relative rates of proliferation, migration, and death. Many frameworks chosen to model these systems neglect spatial correlations, which can lead to inaccurate predictions of their behavior. For example, the logistic model is frequently chosen, which is the mean-field approximation in this case. This mean-field description can be corrected by including a system of ordinary differential equations for pairwise correlations between lattice site occupancies at various lattice distances. In this work we discuss difficulties with this method and provide a simplification in the form of a partial differential equation description for the evolution of pairwise spatial correlations over time. We test our simplified model against the more complex corrected mean-field model, finding excellent agreement. We show how our model successfully predicts system behavior in regions where the mean-field approximation shows large discrepancies. Additionally, we investigate regions of parameter space where migration is reduced relative to proliferation, a regime that has not been examined in detail before, and find that our method successfully corrects the deviations observed in the mean-field model in these parameter regimes.
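The mean-field (logistic) description being corrected can be integrated directly; the rates below are illustrative only:

```python
def mean_field_density(p_prolif, p_death, rho0, dt, steps):
    """Euler integration of the logistic mean-field model
        d(rho)/dt = p_prolif * rho * (1 - rho) - p_death * rho,
    which ignores spatial correlations between lattice sites."""
    rho = rho0
    for _ in range(steps):
        rho += dt * (p_prolif * rho * (1.0 - rho) - p_death * rho)
    return rho
```

The steady-state density is 1 - p_death/p_prolif; the pairwise-correlation corrections discussed above adjust this prediction in regimes where correlations matter.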
Efficient time-sampling method in Coulomb-corrected strong-field approximation.
Xiao, Xiang-Ru; Wang, Mu-Xue; Xiong, Wei-Hao; Peng, Liang-You
2016-11-01
One of the main goals of strong-field physics is to understand the complex structures formed in the momentum plane of the photoelectron. For this purpose, different semiclassical methods have been developed to seek an intuitive picture of the underlying mechanism. The most popular ones are the quantum trajectory Monte Carlo (QTMC) method and the Coulomb-corrected strong-field approximation (CCSFA), both of which take the classical action into consideration and can describe the interference effect. The CCSFA is more widely applicable in a large range of laser parameters due to its nonadiabatic nature in treating the initial tunneling dynamics. However, the CCSFA is much more time consuming than the QTMC method because of the numerical solution to the saddle-point equations. In the present work, we present a time-sampling method to overcome this disadvantage. Our method is as efficient as the fast QTMC method and as accurate as the original treatment in CCSFA. The performance of our method is verified by comparing the results of these methods with that of the exact solution to the time-dependent Schrödinger equation.
Efficient time-sampling method in Coulomb-corrected strong-field approximation
NASA Astrophysics Data System (ADS)
Xiao, Xiang-Ru; Wang, Mu-Xue; Xiong, Wei-Hao; Peng, Liang-You
2016-11-01
One of the main goals of strong-field physics is to understand the complex structures formed in the momentum plane of the photoelectron. For this purpose, different semiclassical methods have been developed to seek an intuitive picture of the underlying mechanism. The most popular ones are the quantum trajectory Monte Carlo (QTMC) method and the Coulomb-corrected strong-field approximation (CCSFA), both of which take the classical action into consideration and can describe the interference effect. The CCSFA is more widely applicable in a large range of laser parameters due to its nonadiabatic nature in treating the initial tunneling dynamics. However, the CCSFA is much more time consuming than the QTMC method because of the numerical solution to the saddle-point equations. In the present work, we present a time-sampling method to overcome this disadvantage. Our method is as efficient as the fast QTMC method and as accurate as the original treatment in CCSFA. The performance of our method is verified by comparing the results of these methods with that of the exact solution to the time-dependent Schrödinger equation.
An approximate method for calculating heating rates on three-dimensional vehicles
NASA Technical Reports Server (NTRS)
Hamilton, H. H., II; Greene, Francis A.; Dejarnette, Fred R.
1993-01-01
An approximate method for calculating heating rates on three-dimensional vehicles at angle of attack is presented. The method is based on the axisymmetric analog for three-dimensional boundary layers and uses a generalized body-fitted coordinate system. Edge conditions for the boundary layer solution are obtained from an inviscid flowfield solution, and because of the coordinate system used, the method is applicable to any blunt body geometry for which an inviscid flowfield solution can be obtained. It is validated by comparing with experimental heating data and with Navier-Stokes calculations on the Shuttle orbiter at both wind tunnel and flight conditions and with Navier-Stokes calculations on the HL-20 at wind tunnel conditions.
Car-Parrinello treatment for an approximate density-functional theory method
NASA Astrophysics Data System (ADS)
Rapacioli, Mathias; Barthel, Robert; Heine, Thomas; Seifert, Gotthard
2007-03-01
The authors formulate a Car-Parrinello treatment for the density-functional-based tight-binding method with and without self-consistent charge corrections. This method avoids the numerical solution of the secular equations, the principal drawback for large systems if the linear combination of atomic orbital ansatz is used. The formalism is applicable to finite systems and for supercells using periodic boundary conditions within the Γ-point approximation. They show that the methodology allows the application of modern computational techniques such as sparse matrix storage and massive parallelization in a straightforward way. All present bottlenecks concerning computer time and consumption of memory and memory bandwidth can be removed. They illustrate the performance of the method by direct comparison with Born-Oppenheimer molecular dynamics calculations. Water molecules, benzene, the C60 fullerene, and liquid water have been selected as benchmark systems.
Sensitivity and Approximation of Coupled Fluid-Structure Equations by Virtual Control Method
Murea, Cornel Marius; Vazquez, Carlos
2005-08-15
The formulation of a particular fluid-structure interaction as an optimal control problem is the departure point of this work. The control is the vertical component of the force acting on the interface, and the observation is the vertical component of the velocity of the fluid on the interface. This approach permits us to solve the coupled fluid-structure problem by partitioned procedures. The analytic expression for the gradient of the cost function is obtained in order to devise accurate numerical methods for the minimization problem. Numerical results arising from blood flow in arteries are presented. To solve the optimal control problem numerically, we use a quasi-Newton method which employs the analytic gradient of the cost function; the approximation of the inverse Hessian is updated by the Broyden-Fletcher-Goldfarb-Shanno (BFGS) scheme. This algorithm is faster than fixed point with relaxation or block Newton methods.
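A bare-bones sketch of such a quasi-Newton iteration, with the BFGS update of the inverse-Hessian approximation but a fixed step length in place of a line search (all names hypothetical):

```python
def bfgs_minimize(grad, x0, iters=100, alpha=0.1):
    """Minimal BFGS sketch in n dimensions (pure Python lists).
    grad: analytic gradient callable; H approximates the inverse
    Hessian via the Broyden-Fletcher-Goldfarb-Shanno update."""
    n = len(x0)
    dot = lambda u, v: sum(a * b for a, b in zip(u, v))
    H = [[float(i == j) for j in range(n)] for i in range(n)]  # H0 = I
    x, g = list(x0), grad(x0)
    for _ in range(iters):
        p = [-sum(H[i][j] * g[j] for j in range(n)) for i in range(n)]
        s = [alpha * pi for pi in p]               # fixed step, no line search
        x_new = [xi + si for xi, si in zip(x, s)]
        g_new = grad(x_new)
        y = [a - b for a, b in zip(g_new, g)]
        sy = dot(s, y)
        if abs(sy) < 1e-12:                        # converged / degenerate
            break
        rho = 1.0 / sy
        Hy = [sum(H[i][j] * y[j] for j in range(n)) for i in range(n)]
        yHy = dot(y, Hy)
        for i in range(n):                         # BFGS inverse update
            for j in range(n):
                H[i][j] += ((1.0 + rho * yHy) * rho * s[i] * s[j]
                            - rho * (s[i] * Hy[j] + Hy[i] * s[j]))
        x, g = x_new, g_new
    return x
```

On a quadratic cost the iterates approach the minimizer using only analytic gradient evaluations, which is the ingredient the paper exploits.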
NASA Astrophysics Data System (ADS)
Sweilam, N. H.; Abou Hasan, M. M.
2016-08-01
This paper reports a new spectral algorithm for obtaining an approximate solution of the Lévy-Feller diffusion equation based on Legendre polynomials and Chebyshev collocation points. The Lévy-Feller diffusion equation is obtained from the standard diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative. A new formula expressing explicitly the fractional-order derivatives, in the sense of the Riesz-Feller operator, of Legendre polynomials of any degree in terms of Jacobi polynomials is proved. Moreover, the Chebyshev-Legendre collocation method together with the implicit Euler method is used to reduce these types of differential equations to a system of algebraic equations that can be solved numerically. Numerical results with comparisons are given to confirm the reliability of the proposed method for the Lévy-Feller diffusion equation.
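The implicit Euler step used for the time discretization can be illustrated on the scalar decay equation du/dt = -λu (the spectral space discretization is left out of this sketch):

```python
def implicit_euler_decay(lam, u0, dt, steps):
    """Backward (implicit) Euler for du/dt = -lam*u: solving the
    implicit relation u_{n+1} = u_n - dt*lam*u_{n+1} gives
    u_{n+1} = u_n / (1 + lam*dt), stable for any dt > 0."""
    u = u0
    for _ in range(steps):
        u /= 1.0 + lam * dt
    return u
```

This unconditional stability is a common reason to pair implicit Euler with spectral discretizations, whose stiff spatial operators would otherwise force tiny explicit time steps.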
Car-Parrinello treatment for an approximate density-functional theory method.
Rapacioli, Mathias; Barthel, Robert; Heine, Thomas; Seifert, Gotthard
2007-03-28
The authors formulate a Car-Parrinello treatment for the density-functional-based tight-binding method with and without self-consistent charge corrections. This method avoids the numerical solution of the secular equations, the principal drawback for large systems if the linear combination of atomic orbital ansatz is used. The formalism is applicable to finite systems and for supercells using periodic boundary conditions within the Gamma-point approximation. They show that the methodology allows the application of modern computational techniques such as sparse matrix storage and massive parallelization in a straightforward way. All present bottlenecks concerning computer time and consumption of memory and memory bandwidth can be removed. They illustrate the performance of the method by direct comparison with Born-Oppenheimer molecular dynamics calculations. Water molecules, benzene, the C(60) fullerene, and liquid water have been selected as benchmark systems.
An Approximate Method for Analysis of Solitary Waves in Nonlinear Elastic Materials
NASA Astrophysics Data System (ADS)
Rushchitsky, J. J.; Yurchuk, V. N.
2016-05-01
Two types of solitary elastic waves are considered: a longitudinal plane displacement wave (longitudinal displacements along the abscissa axis of a Cartesian coordinate system) and a radial cylindrical displacement wave (displacements in the radial direction of a cylindrical coordinate system). The basic innovation is the use of nonlinear wave equations similar in form to describe these waves and the use of the same approximate method to analyze these equations. The distortion of the wave profile, described by Whittaker (plane wave) or Macdonald (cylindrical wave) functions, is described theoretically.
Approximate direct reduction method: infinite series reductions to the perturbed mKdV equation
NASA Astrophysics Data System (ADS)
Jiao, Xiao-Yu; Lou, Sen-Yue
2009-09-01
The approximate direct reduction method is applied to the perturbed mKdV equation with weak fourth order dispersion and weak dissipation. The similarity reduction solutions of different orders conform to formal coherence, accounting for infinite series reduction solutions to the original equation and general formulas of similarity reduction equations. Painlevé II type equations, hyperbolic secant and Jacobi elliptic function solutions are obtained for zero-order similarity reduction equations. Higher order similarity reduction equations are linear variable coefficient ordinary differential equations.
An approximate factorization method for inverse medium scattering with unknown buried objects
NASA Astrophysics Data System (ADS)
Qu, Fenglong; Yang, Jiaqing; Zhang, Bo
2017-03-01
This paper is concerned with the inverse problem of scattering of time-harmonic acoustic waves by an inhomogeneous medium with different kinds of unknown buried objects inside. By constructing a sequence of operators which are small perturbations of the far-field operator in a suitable way, we prove that each operator in this sequence has a factorization satisfying the Range Identity. We then develop an approximate factorization method for recovering the support of the inhomogeneous medium from the far-field data. Finally, numerical examples are provided to illustrate the practicability of the inversion algorithm.
Gu, M G; Kong, F H
1998-06-23
We propose a general procedure for solving incomplete data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
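The stochastic-approximation principle the procedure rests on can be illustrated with a plain Robbins-Monro recursion. This is a minimal sketch only: the authors' actual algorithm couples stochastic approximation with Markov chain Monte Carlo sampling, and the toy estimating equation below is purely illustrative.

```python
import random

def robbins_monro(noisy_score, theta0, n_iter=5000, seed=0):
    """Solve E[score(theta)] = 0 by the Robbins-Monro recursion
    theta_{n+1} = theta_n + a_n * score(theta_n), with gains a_n = 1/n."""
    rng = random.Random(seed)
    theta = theta0
    for n in range(1, n_iter + 1):
        theta += (1.0 / n) * noisy_score(theta, rng)
    return theta

# Toy estimating equation E[X - theta] = 0 with X ~ N(2, 1): the
# recursion should converge to the mean, 2.
est = robbins_monro(lambda th, rng: rng.gauss(2.0, 1.0) - th, theta0=0.0)
```

With the 1/n gain sequence this toy case reduces to a running average of the noisy observations, which is why convergence to the root of the estimating equation is guaranteed here.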
Approximate method for calculating free vibrations of a large-wind-turbine tower structure
NASA Technical Reports Server (NTRS)
Das, S. C.; Linscott, B. S.
1977-01-01
A set of ordinary differential equations was derived for a simplified structural dynamic lumped-mass model of a typical large-wind-turbine tower structure. Dunkerley's equation was used to arrive at a solution for the fundamental natural frequencies of the tower in bending and torsion. The ERDA-NASA 100-kW wind turbine tower structure was modeled, and the fundamental frequencies were determined by the simplified method described. The approximate fundamental natural frequencies for the tower agree within 18 percent with test data and analytical predictions.
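Dunkerley's equation combines the partial frequencies of the lumped masses (each frequency computed with only one mass present) as 1/w1^2 ~= sum_i 1/w_ii^2, giving a lower-bound estimate of the fundamental frequency. A minimal sketch; the three partial frequencies below are invented for illustration, not taken from the ERDA-NASA tower:

```python
import math

def dunkerley_fundamental_frequency(partial_freqs):
    """Dunkerley's approximation: 1/w1^2 ~= sum_i 1/w_ii^2, where each
    w_ii is the natural frequency of the structure with only lumped mass
    i present.  The estimate is a lower bound on the true fundamental
    frequency and always lies below the smallest partial frequency."""
    return 1.0 / math.sqrt(sum(1.0 / w**2 for w in partial_freqs))

# Hypothetical partial frequencies (rad/s) for a three-mass tower model.
w1_est = dunkerley_fundamental_frequency([12.0, 25.0, 40.0])
```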
Fine Mapping Causal Variants with an Approximate Bayesian Method Using Marginal Test Statistics.
Chen, Wenan; Larrabee, Beth R; Ovsyannikova, Inna G; Kennedy, Richard B; Haralambieva, Iana H; Poland, Gregory A; Schaid, Daniel J
2015-07-01
Two recently developed fine-mapping methods, CAVIAR and PAINTOR, demonstrate better performance over other fine-mapping methods. They also have the advantage of using only the marginal test statistics and the correlation among SNPs. Both methods leverage the fact that the marginal test statistics asymptotically follow a multivariate normal distribution and are likelihood based. However, their relationship with Bayesian fine mapping, such as BIMBAM, is not clear. In this study, we first show that CAVIAR and BIMBAM are actually approximately equivalent to each other. This leads to a fine-mapping method using marginal test statistics in the Bayesian framework, which we call CAVIAR Bayes factor (CAVIARBF). Another advantage of the Bayesian framework is that it can answer both association and fine-mapping questions. We also used simulations to compare CAVIARBF with other methods under different numbers of causal variants. The results showed that both CAVIARBF and BIMBAM have better performance than PAINTOR and other methods. Compared to BIMBAM, CAVIARBF has the advantage of using only marginal test statistics and takes about one-quarter to one-fifth of the running time. We applied different methods on two independent cohorts of the same phenotype. Results showed that CAVIARBF, BIMBAM, and PAINTOR selected the same top 3 SNPs; however, CAVIARBF and BIMBAM had better consistency in selecting the top 10 ranked SNPs between the two cohorts. Software is available at https://bitbucket.org/Wenan/caviarbf.
Ghosh, Debashree
2014-03-07
Hybrid quantum mechanics/molecular mechanics (QM/MM) methods provide an attractive way to closely retain the accuracy of the QM method with the favorable computational scaling of the MM method. Therefore, it is not surprising that QM/MM methods are being increasingly used for large chemical/biological systems. Hybrid equation of motion coupled cluster singles doubles/effective fragment potential (EOM-CCSD/EFP) methods have been developed over the last few years to understand the effect of solvents and other condensed phases on the electronic spectra of chromophores. However, the computational cost of this approach is still dominated by the steep scaling of the EOM-CCSD method. In this work, we propose and implement perturbative approximations to the EOM-CCSD method in this hybrid scheme to reduce the cost of EOM-CCSD/EFP. The timings and accuracy of this hybrid approach is tested for calculation of ionization energies, excitation energies, and electron affinities of microsolvated nucleic acid bases (thymine and cytosine), phenol, and phenolate.
NASA Astrophysics Data System (ADS)
Liu, Jie; Sun, Xingsheng; Li, Kun; Jiang, Chao; Han, Xu
2015-11-01
Aiming at structures containing random parameters with multi-peak probability density functions (PDFs) or large coefficients of variation, an analytical method of probability density function discretization and approximation (PDFDA) is proposed for dynamic load identification. Dynamic loads are expressed as functions of time and the random parameters in the time domain, and the forward model is established through the discretized convolution integral of the loads and the corresponding unit-pulse response functions. The PDF of each random parameter is discretized into several subintervals, and in each subinterval the original PDF curve is approximated by a uniform-distribution PDF of equal probability value. Then the joint distribution model is built, and the equivalent deterministic equations are solved to identify the unknown loads. Inverse analysis is performed separately for each variable in the joint distribution model through regularization, because the measured responses are noise-contaminated. In order to assess the accuracy of the identified results, PDF curves and statistical properties of the loads are obtained based on specially assumed distributions of the identified loads. Numerical simulations demonstrate the efficiency and superiority of the presented method.
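The discretization step described above splits each parameter's PDF into subintervals of equal probability mass. A minimal sketch of that step for a normal parameter, using the standard library's `NormalDist`; this is a simplified illustration, not the authors' full PDFDA implementation:

```python
from statistics import NormalDist

def equal_probability_subintervals(dist, n):
    """Split the support of `dist` into n subintervals carrying equal
    probability 1/n, returned as (lower, upper) quantile pairs.  Within
    each subinterval the original PDF can then be approximated by a
    uniform PDF, as in the discretization step described above."""
    qs = [dist.inv_cdf(k / n) for k in range(1, n)]
    edges = [float("-inf")] + qs + [float("inf")]
    return list(zip(edges[:-1], edges[1:]))

ivals = equal_probability_subintervals(NormalDist(mu=0.0, sigma=1.0), 4)
```

For a standard normal split into four parts, the interior edges are the quartiles, so the second subinterval ends at the median.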
Mariño, Inés P; Míguez, Joaquín
2005-11-01
We introduce a numerical approximation method for estimating an unknown parameter of a (primary) chaotic system which is partially observed through a scalar time series. Specifically, we show that the recursive minimization of a suitably designed cost function that involves the dynamic state of a fully observed (secondary) system and the observed time series can lead to the identical synchronization of the two systems and the accurate estimation of the unknown parameter. The salient feature of the proposed technique is that the only external input to the secondary system is the unknown parameter which needs to be adjusted. We present numerical examples for the Lorenz system which show how our algorithm can be considerably faster than some previously proposed methods.
Approximate method for solving relaxation problems in terms of material's damagability under creep
Nikitenko, A.F.; Sukhorukov, I.V.
1995-03-01
The technology of thermoforming under creep and superplasticity conditions is finding increasing application in machine building for producing articles of a preset shape. After a part is made, there are residual stresses in it, which lead to its warping. To remove residual stresses, moulded articles are usually exposed to thermal fixation, i.e., the part is held in a compressed state at a certain temperature. Thermal fixation is simply the process of residual stress relaxation, followed by accumulation of total creep in the material. Therefore, the necessity to develop engineering methods for calculating the time of thermal fixation and the relaxation of residual stresses to a safe level, not resulting in warping, becomes evident. The authors present an approximate method for calculating the stress-strain state of a body during relaxation. They use a system of equations which describes a material's creep, simultaneously taking into account the accumulation of damage in it.
Estimating the Bias of Local Polynomial Approximation Methods Using the Peano Kernel
Blair, J.; Machorro, E.; Luttman, A.
2013-03-01
The determination of uncertainty of an estimate requires both the variance and the bias of the estimate. Calculating the variance of local polynomial approximation (LPA) estimates is straightforward. We present a method, using the Peano Kernel Theorem, to estimate the bias of LPA estimates and show how this can be used to optimize the LPA parameters in terms of the bias-variance tradeoff. Figures of merit are derived and values calculated for several common methods. The results in the literature are expanded by giving bias error bounds that are valid for all lengths of the smoothing interval, generalizing the currently available asymptotic results that are only valid in the limit as the length of this interval goes to zero.
NASA Astrophysics Data System (ADS)
Zhang, Ji; Ding, Mingyue; Yuchi, Ming; Hou, Wenguang; Ye, Huashan; Qiu, Wu
2010-03-01
Factor analysis is an efficient technique for the analysis of dynamic structures in medical image sequences and has recently been used in contrast-enhanced ultrasound (CEUS) of hepatic perfusion. Time-intensity curves (TICs) extracted by factor analysis can provide much more diagnostic information for radiologists and improve the diagnostic rate of focal liver lesions (FLLs). However, one of the major drawbacks of factor analysis of dynamic structures (FADS) is the nonuniqueness of the result when only the non-negativity criterion is used. In this paper, we propose a new replace-approximation method based on apex-seeking for ambiguous FADS solutions. Due to a partial overlap of different structures, factor curves are assumed to be approximately replaceable by curves existing in the medical image sequences. Therefore, how to find optimal curves is the key point of the technique. No matter how many structures are assumed, our method always starts to seek apexes in the one-dimensional space onto which the original high-dimensional data are mapped. By finding two stable apexes in one-dimensional space, the method can ascertain the third one. The process can be continued until all structures are found. The technique was tested on two blood-perfusion phantoms and compared with two variants of the apex-seeking method. The results showed that the technique outperformed both variants in region-of-interest measurements from the phantom data. It can be applied to the estimation of TICs derived from CEUS images and to the separation of different physiological regions in hepatic perfusion.
NASA Astrophysics Data System (ADS)
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
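The preconditioned conjugate gradient iteration described above can be sketched in a few lines. This is a simplified stand-in: it uses a dense matrix and a Jacobi (diagonal) preconditioner in place of the sparse triangular-factor approximate inverse the paper constructs, and the test system is invented for illustration.

```python
def pcg(A, b, apply_Minv, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradients for an SPD matrix A (dense,
    nested lists).  `apply_Minv` applies an approximate inverse
    preconditioner; only matrix-vector products and elementary vector
    operations are needed, as noted in the abstract above."""
    n = len(b)
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(p * q for p, q in zip(u, v))
    x = [0.0] * n
    r = b[:]                      # residual b - A x, with x = 0 initially
    z = apply_Minv(r)
    p = z[:]
    rz = dot(r, z)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rz / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if dot(r, r) < tol:
            break
        z = apply_Minv(r)
        rz_new = dot(r, z)
        beta = rz_new / rz
        rz = rz_new
        p = [zi + beta * pi for zi, pi in zip(z, p)]
    return x

# Small SPD test system with a Jacobi (diagonal) preconditioner.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]]
b = [1.0, 2.0, 3.0]
x = pcg(A, b, lambda r: [r[i] / A[i][i] for i in range(3)])
```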
NASA Astrophysics Data System (ADS)
Frantz, Eric Randall
Elongation and shaping of the tokamak plasma cross-section can allow increased beta and other favorable improvements. As the cross-section is made non-circular, however, the plasma can become unstable against axisymmetric motions, the most predominant one being a nearly uniform displacement in the direction of elongation. Without additional stabilizing mechanisms, this instability has growth rates typically ~10^6 s^-1. With passive and active feedback from external conductors, the plasma can be significantly slowed down and controlled. In this work, a mathematical formalism for analyzing the vertical instability is developed in which the external conductors are treated (or broken up) as discrete coils. The circuit equations for the plasma-induced currents can be included within the same mathematical framework. The plasma equation of motion and the circuit equations are combined and manipulated into a diagonalized form that can be graphically analyzed to determine the growth rate. An effective mode approximation (EMA) to the dispersion relation is introduced to simplify and approximate the growth rate of the more exact case. Controller voltage equations for active feedback are generalized to include position and velocity feedback and time delay. A position cut-off displacement is added to model finite spatial resolution of the position detectors or a dead-band voltage level. Stability criteria are studied for EMA and the more exact case. The time-dependent responses for plasma position, controller voltages, and currents are determined from the Laplace transforms. Slow responses are separated from the fast ones (dependent on plasma inertia) using a typical tokamak ordering approximation. The methods developed are applied in numerous examples for the machine geometry and plasma of TNS, an inside-D configuration plasma resembling JET, INTOR, or FED.
NASA Astrophysics Data System (ADS)
Roudi, Yasser; Tyrcha, Joanna; Hertz, John
2009-05-01
We study pairwise Ising models for describing the statistics of multineuron spike trains, using data from a simulated cortical network. We explore efficient ways of finding the optimal couplings in these models and examine their statistical properties. To do this, we extract the optimal couplings for subsets of size up to 200 neurons, essentially exactly, using Boltzmann learning. We then study the quality of several approximate methods for finding the couplings by comparing their results with those found from Boltzmann learning. Two of these methods—inversion of the Thouless-Anderson-Palmer equations and an approximation proposed by Sessak and Monasson—are remarkably accurate. Using these approximations for larger subsets of neurons, we find that extracting couplings using data from a subset smaller than the full network tends systematically to overestimate their magnitude. This effect is described qualitatively by infinite-range spin-glass theory for the normal phase. We also show that a globally correlated input to the neurons in the network leads to a small increase in the average coupling. However, the pair-to-pair variation in the couplings is much larger than this and reflects intrinsic properties of the network. Finally, we study the quality of these models by comparing their entropies with that of the data. We find that they perform well for small subsets of the neurons in the network, but the fit quality starts to deteriorate as the subset size grows, signaling the need to include higher-order correlations to describe the statistics of large networks.
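The inverse-Ising task described above can be illustrated with the simplest of the approximate inversion schemes: the naive mean-field estimate J ~= -(C^-1) off-diagonal, a cruder cousin of the TAP inversion studied in the paper. The sketch below uses a tiny 3-spin model with invented weak couplings, computing the correlations exactly by enumeration rather than from spike-train data:

```python
from itertools import product
from math import exp

# True weak couplings of a 3-spin Ising model with zero fields
# (illustrative values, not fitted to any neural data).
J_true = {(0, 1): 0.10, (0, 2): 0.05, (1, 2): 0.08}

# Exact spin-spin correlations by enumerating all 2^3 states.
states = list(product([-1, 1], repeat=3))
w = [exp(sum(J * s[i] * s[j] for (i, j), J in J_true.items())) for s in states]
Z = sum(w)
C = [[sum(wk * s[i] * s[j] for wk, s in zip(w, states)) / Z
      for j in range(3)] for i in range(3)]   # <s_i s_j>; <s_i> = 0 here

def inverse(M):
    """Invert a small well-conditioned matrix by Gauss-Jordan elimination."""
    n = len(M)
    aug = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = aug[col][col]
        aug[col] = [v / piv for v in aug[col]]
        for r in range(n):
            if r != col:
                f = aug[r][col]
                aug[r] = [v - f * p for v, p in zip(aug[r], aug[col])]
    return [row[n:] for row in aug]

Cinv = inverse(C)
# Naive mean-field estimate: J_ij ~= -(C^{-1})_ij for i != j, accurate
# to O(J^2) in the weak-coupling regime.
J_est = {(i, j): -Cinv[i][j] for (i, j) in J_true}
```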
A novel method of automated skull registration for forensic facial approximation.
Turner, W D; Brown, R E B; Kelliher, T P; Tu, P H; Taister, M A; Miller, K W P
2005-11-25
Modern forensic facial reconstruction techniques are based on an understanding of skeletal variation and tissue depths. These techniques rely upon a skilled practitioner interpreting limited data. To (i) increase the amount of data available and (ii) lessen the subjective interpretation, we use medical imaging and statistical techniques. We introduce a software tool, reality enhancement/facial approximation by computational estimation (RE/FACE) for computer-based forensic facial reconstruction. The tool applies innovative computer-based techniques to a database of human head computed tomography (CT) scans in order to derive a statistical approximation of the soft tissue structure of a questioned skull. A core component of this tool is an algorithm for removing the variation in facial structure due to skeletal variation. This method uses models derived from the CT scans and does not require manual measurement or placement of landmarks. It does not require tissue-depth tables, can be tailored to specific racial categories by adding CT scans, and removes much of the subjectivity of manual reconstructions.
NASA Astrophysics Data System (ADS)
Werner, Hans-Joachim
2016-11-01
The accuracy of multipole approximations for distant pair energies in local second-order Møller-Plesset perturbation theory (LMP2) as introduced by Hetzer et al. [Chem. Phys. Lett. 290, 143 (1998)] is investigated for three chemical reactions involving molecules with up to 92 atoms. Various iterative and non-iterative approaches are compared, using different energy thresholds for distant pair selection. It is demonstrated that the simple non-iterative dipole-dipole approximation, which has been used in several recent pair natural orbitals (PNO)-LMP2 and PNO-LCCSD (local coupled-cluster with singles and doubles) methods, may underestimate the distant pair energies by up to 50% and can lead to significant errors in relative energies, unless very tight thresholds are used. The accuracy can be much improved by including higher multipole orders and by optimizing the distant pair amplitudes iteratively along with all other amplitudes. A new approach is presented in which very small special PNO domains for distant pairs are used in the iterative approach. This reduces the number of distant pair amplitudes by 3 orders of magnitude and keeps the additional computational effort for the iterative optimization of distant pair amplitudes minimal.
Model-independent mean-field theory as a local method for approximate propagation of information.
Haft, M; Hofmann, R; Tresp, V
1999-02-01
We present a systematic approach to mean-field theory (MFT) in a general probabilistic setting without assuming a particular model. The mean-field equations derived here may serve as a local, and thus very simple, method for approximate inference in probabilistic models such as Boltzmann machines or Bayesian networks. Our approach is 'model-independent' in the sense that we do not assume a particular type of dependences; in a Bayesian network, for example, we allow arbitrary tables to specify conditional dependences. In general, there are multiple solutions to the mean-field equations. We show that improved estimates can be obtained by forming a weighted mixture of the multiple mean-field solutions. Simple approximate expressions for the mixture weights are given. The general formalism derived so far is evaluated for the special case of Bayesian networks. The benefits of taking into account multiple solutions are demonstrated by using MFT for inference in a small and in a very large Bayesian network. The results are compared with the exact results.
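For the Boltzmann-machine case mentioned above, the mean-field equations take the familiar form m_i = tanh(b_i + sum_j w_ij m_j). A minimal sketch of solving them by damped fixed-point iteration; this finds a single solution, whereas the paper advocates mixing multiple solutions, and the two-unit network below is invented for illustration:

```python
from math import tanh

def mean_field_marginals(w, b, n_sweeps=200, damping=0.5):
    """Solve the mean-field equations m_i = tanh(b_i + sum_j w_ij m_j)
    for a Boltzmann machine by damped fixed-point iteration.  Returns
    approximate magnetizations <s_i>; the update for each unit is local,
    needing only its neighbours' current estimates."""
    n = len(b)
    m = [0.0] * n
    for _ in range(n_sweeps):
        for i in range(n):
            field = b[i] + sum(w[i][j] * m[j] for j in range(n) if j != i)
            m[i] = (1 - damping) * tanh(field) + damping * m[i]
    return m

# Two symmetrically coupled units with a small bias on the first.
w = [[0.0, 0.4], [0.4, 0.0]]
m = mean_field_marginals(w, b=[0.3, 0.0])
```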
NASA Astrophysics Data System (ADS)
Sabatier, Romuald; Fossati, Caroline; Bourennane, Salah; Di Giacomo, Antonio
2008-10-01
Model-Based Optical Proximity Correction (MBOPC) has for a decade been a widely used technique that makes it possible to achieve resolutions on silicon layouts smaller than the wavelength used in commercially available photolithography tools. This is an important point, because mask dimensions are continuously shrinking. For current masks, several billion segments have to be moved, and several iterations are needed to reach convergence. Therefore, fast and accurate algorithms are mandatory to perform OPC on a mask in a reasonably short time for industrial purposes. Because imaging with an optical lithography system is similar to microscopy, the theory used in MBOPC is drawn from work originally conducted for the theory of microscopy. Fourier optics was first developed by Abbe to describe the image formed by a microscope and is often referred to as the Abbe formulation. It is one of the best methods for optimizing illumination and is used in most commercially available lithography simulation packages. The Hopkins method, developed later in 1951, is the best method for mask optimization. Consequently, the Hopkins formulation, widely used for partially coherent illumination and thus for lithography, is present in most commercially available OPC tools. This formulation has the advantage of a four-way transmission function independent of the mask layout. The values of this function, called Transfer Cross Coefficients (TCC), describe the illumination and projection pupils. Commonly used algorithms that involve the TCC of the Hopkins formulation to compute aerial images during MBOPC treatment are based on decomposing the TCC into its eigenvectors using matricization and the well-known Singular Value Decomposition (SVD). These techniques, which rely on numerical approximation and empirical determination of the number of eigenvectors taken into account, may not match reality and can lead to information loss. They also remain highly runtime-consuming. We propose an
Rational trigonometric approximations using Fourier series partial sums
NASA Technical Reports Server (NTRS)
Geer, James F.
1993-01-01
A class of approximations (S(sub N,M)) to a periodic function f which uses the ideas of Pade, or rational function, approximations based on the Fourier series representation of f, rather than on the Taylor series representation of f, is introduced and studied. Each approximation S(sub N,M) is the quotient of a trigonometric polynomial of degree N and a trigonometric polynomial of degree M. The coefficients in these polynomials are determined by requiring that an appropriate number of the Fourier coefficients of S(sub N,M) agree with those of f. Explicit expressions are derived for these coefficients in terms of the Fourier coefficients of f. It is proven that these 'Fourier-Pade' approximations converge point-wise to (f(x(exp +))+f(x(exp -)))/2 more rapidly (in some cases by a factor of 1/k(exp 2M)) than the Fourier series partial sums on which they are based. The approximations are illustrated by several examples and an application to the solution of an initial, boundary value problem for the simple heat equation is presented.
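The Fourier-Padé construction above mirrors the classical Padé construction from Taylor coefficients: the denominator coefficients solve a linear system in the known series coefficients, and the numerator then follows by convolution. A minimal sketch of that underlying step for the classical single-variable case (not the trigonometric version of the paper), applied to the series of e^x:

```python
def pade(c, L, M):
    """[L/M] Pade approximant from series coefficients c[0..L+M].
    Returns numerator coefficients a[0..L] and denominator coefficients
    b[0..M] (with b[0] = 1), chosen so the Taylor expansion of a/b
    reproduces c through order L+M."""
    n = M
    # Linear system for the denominator: sum_j b[j]*c[L+k-j] = -c[L+k].
    A = [[(c[L + k - j] if L + k - j >= 0 else 0.0) for j in range(1, M + 1)]
         for k in range(1, M + 1)]
    rhs = [-c[L + k] for k in range(1, M + 1)]
    # Gaussian elimination with partial pivoting (small dense system).
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        rhs[col], rhs[piv] = rhs[piv], rhs[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            A[r] = [u - f * v for u, v in zip(A[r], A[col])]
            rhs[r] -= f * rhs[col]
    sol = [0.0] * n
    for r in range(n - 1, -1, -1):
        sol[r] = (rhs[r] - sum(A[r][j] * sol[j] for j in range(r + 1, n))) / A[r][r]
    b = [1.0] + sol
    # Numerator by convolving the series with the denominator.
    a = [sum(b[j] * c[i - j] for j in range(min(i, M) + 1)) for i in range(L + 1)]
    return a, b

from math import factorial
c = [1.0 / factorial(k) for k in range(5)]   # Taylor coefficients of e^x
a, b = pade(c, 2, 2)
horner = lambda p, t: sum(ck * t**k for k, ck in enumerate(p))
approx = horner(a, 1.0) / horner(b, 1.0)     # [2/2] approximant at x = 1
```

For e^x this recovers the known [2/2] approximant (1 + x/2 + x^2/12)/(1 - x/2 + x^2/12), which at x = 1 is closer to e than the degree-4 Taylor partial sum built from the same five coefficients — the same acceleration effect the paper proves for Fourier partial sums.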
An improved approximate-Bayesian model-choice method for estimating shared evolutionary history
2014-01-01
Background: To understand biological diversification, it is important to account for large-scale processes that affect the evolutionary history of groups of co-distributed populations of organisms. Such events predict temporally clustered divergence times, a pattern that can be estimated using genetic data from co-distributed species. I introduce a new approximate-Bayesian method for comparative phylogeographical model-choice that estimates the temporal distribution of divergences across taxa from multi-locus DNA sequence data. The model is an extension of that implemented in msBayes. Results: By reparameterizing the model, introducing more flexible priors on demographic and divergence-time parameters, and implementing a non-parametric Dirichlet-process prior over divergence models, I improved the robustness, accuracy, and power of the method for estimating shared evolutionary history across taxa. Conclusions: The results demonstrate that the improved performance of the new method is due to (1) more appropriate priors on divergence-time and demographic parameters that avoid prohibitively small marginal likelihoods for models with more divergence events, and (2) the Dirichlet process providing a flexible prior on divergence histories that does not strongly disfavor models with intermediate numbers of divergence events. The new method yields more robust estimates of posterior uncertainty, and thus greatly reduces the tendency to incorrectly estimate models of shared evolutionary history with strong support. PMID:24992937
Approximate analytic method for high-apogee twelve-hour orbits of artificial Earth's satellites
NASA Astrophysics Data System (ADS)
Vashkovyaka, M. A.; Zaslavskii, G. S.
2016-09-01
We propose an approach to studying the evolution of high-apogee twelve-hour orbits of artificial Earth satellites. We describe the parameters of the motion model used for the satellite, in which the principal gravitational perturbations of the Moon and Sun, the nonsphericity of the Earth, and perturbations from the light pressure force are approximately taken into account. To solve the system of averaged equations describing the evolution of the orbit parameters of the satellite, we use both numerical and analytic methods. To select the initial parameters of the twelve-hour orbit, we assume that the ground track of the satellite is stable. Results obtained by the analytic method and by numerical integration of the evolution system are compared. For intervals of several years, we obtain estimates of the oscillation periods and amplitudes of the orbital elements. To verify the results and estimate the precision of the method, we use numerical integration of the rigorous (not averaged) equations of motion of the satellite, which take the forces acting on the satellite into account substantially more completely and precisely. The described method can be applied not only to the investigation of the orbit evolution of artificial Earth satellites; it can also be applied to the orbit evolution of satellites of other planets of the Solar system, should the corresponding research problem arise in the future and this special class of resonant satellite orbits be used for that purpose.
A new embedded-atom method approach based on the pth moment approximation
NASA Astrophysics Data System (ADS)
Wang, Kun; Zhu, Wenjun; Xiao, Shifang; Chen, Jun; Hu, Wangyu
2016-12-01
Large scale atomistic simulations with suitable interatomic potentials are widely employed by scientists and engineers in different areas. The quick generation of high-quality interatomic potentials is therefore urgently needed, and largely relies on developments in potential construction methods and algorithms. Many interatomic potential models have been proposed and parameterized with various methods, such as the analytic method, the force-matching approach and the multi-object optimization method, in order to make the potentials more transferable. Without appreciably lowering the precision for describing the target system, potentials with fewer fitting parameters (FPs) are somewhat more physically reasonable. Thus, studying methods to reduce the FP number is helpful in understanding the underlying physics of simulated systems and improving the precision of potential models. In this work, we propose an embedded-atom method (EAM) potential model consisting of a new many-body term based on the pth moment approximation to the tight binding theory and the general transformation invariance of EAM potentials, and an energy modification term represented by pairwise interactions. The pairwise interactions are evaluated by an analytic-numerical scheme without the need to know their functional forms a priori. By constructing three potentials of aluminum and comparing them with a commonly used EAM potential model, several notable results are obtained. First, without losing precision, our potential of aluminum has fewer potential parameters and a smaller cutoff distance when compared with some commonly used potentials of aluminum. This is because several physical quantities, usually serving as target quantities to match in other potentials, seem to be uniquely dependent on quantities contained in our basic reference database within the new potential model. Second, a key empirical parameter in the embedding term of the commonly used EAM model is
Approximation methods of European option pricing in multiscale stochastic volatility model
NASA Astrophysics Data System (ADS)
Ni, Ying; Canhanga, Betuel; Malyarenko, Anatoliy; Silvestrov, Sergei
2017-01-01
In the classical Black-Scholes model for financial option pricing, the asset price follows a geometric Brownian motion with constant volatility. Empirical findings such as the volatility smile/skew and fat-tailed asset return distributions have suggested that the constant volatility assumption might not be realistic. General stochastic volatility models, e.g. the Heston, GARCH and SABR volatility models, in which the variance/volatility itself typically follows a mean-reverting stochastic process, have been shown to be superior in capturing the empirical facts. However, in order to capture more features of the volatility smile, a two-factor stochastic volatility model of double Heston type is more useful, as shown in Christoffersen, Heston and Jacobs [12]. We consider one modified form of such two-factor volatility models in which the volatility has multiscale mean-reversion rates. Our model contains two mean-reverting volatility processes with a fast and a slow reverting rate, respectively. We consider the European option pricing problem under one type of multiscale stochastic volatility model where the two volatility processes act as independent factors in the asset price process. The novelty in this paper is an approximate analytical solution using the asymptotic expansion method, which extends the authors' earlier research in Canhanga et al. [5, 6]. In addition we propose a numerical approximate solution using Monte-Carlo simulation. For completeness and for comparison we also implement the semi-analytical solution by Chiarella and Ziveyi [11] using the method of characteristics, Fourier and bivariate Laplace transforms.
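The constant-volatility Black-Scholes case that serves as the baseline above can be checked in a few lines: Monte-Carlo simulation of the terminal asset price against the closed-form price. This sketch covers only that baseline, not the two-factor multiscale stochastic volatility model of the paper; the parameter values are illustrative.

```python
from math import exp, log, sqrt
from random import Random
from statistics import NormalDist

def bs_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call."""
    d1 = (log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = NormalDist().cdf
    return S0 * N(d1) - K * exp(-r * T) * N(d2)

def mc_call(S0, K, r, sigma, T, n_paths=100_000, seed=1):
    """Monte-Carlo price under constant volatility: sample the terminal
    value of a geometric Brownian motion and average the discounted
    payoff over the paths."""
    rng = Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * sqrt(T)
    payoff = 0.0
    for _ in range(n_paths):
        ST = S0 * exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff += max(ST - K, 0.0)
    return exp(-r * T) * payoff / n_paths

analytic = bs_call(100.0, 100.0, 0.05, 0.2, 1.0)
simulated = mc_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

Under a stochastic volatility model the closed form disappears, which is exactly why the paper resorts to asymptotic expansions and Monte-Carlo; the simulation side generalizes by also evolving the variance process along each path.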
Thermodynamic potential of the periodic Anderson model with the X-boson method: chain approximation
NASA Astrophysics Data System (ADS)
Franco, R.; Figueira, M. S.; Foglio, M. E.
2002-05-01
The periodic Anderson model (PAM) in the U→∞ limit has been studied in a previous work employing the cumulant expansion with the hybridization as perturbation (Figueira et al., Phys. Rev. B 50 (1994) 17 933). When the total number of electrons Nt is calculated as a function of the chemical potential μ in the “chain approximation” (CHA), there are three values of the chemical potential μ for each Nt in a small interval of Nt at low T (Physica A 208 (1994) 279). We have recently introduced the “X-boson” method, inspired by the slave-boson technique of Coleman, which solves the problem of nonconservation of probability (completeness) in the CHA and removes the spurious phase transitions that appear with the slave-boson method in the mean-field approximation. In the present paper, we show that the X-boson method also solves the problem of the multiple roots of Nt(μ) that appear in the CHA.
NASA Astrophysics Data System (ADS)
Hartikainen, Markus E.; Ojalehto, Vesa; Sahlstedt, Kristian
2015-03-01
Using an interactive multiobjective optimization method called NIMBUS and an approximation method called PAINT, preferable solutions to a five-objective problem of operating a wastewater treatment plant are found. The decision maker giving preference information is an expert in wastewater treatment plant design at the engineering company Pöyry Finland Ltd. The wastewater treatment problem is computationally expensive and requires running a simulator to evaluate the values of the objective functions. This often leads to problems with interactive methods as the decision maker may get frustrated while waiting for new solutions to be computed. Thus, a newly developed PAINT method is used to speed up the iterations of the NIMBUS method. The PAINT method interpolates between a given set of Pareto optimal outcomes and constructs a computationally inexpensive mixed integer linear surrogate problem for the original wastewater treatment problem. With the mixed integer surrogate problem, the time required from the decision maker is comparatively short. In addition, a new IND-NIMBUS® PAINT module is developed to allow the smooth interoperability of the NIMBUS method and the PAINT method.
NASA Astrophysics Data System (ADS)
Wu, Kun; Zhang, Feng; Min, Jinzhong; Yu, Qiu-Run; Wang, Xin-Yue; Ma, Leiming
2016-09-01
The adding method, which can calculate infrared radiative transfer (IRT) in an inhomogeneous atmosphere with multiple layers, has been applied to the δ-four-stream discrete-ordinates method (DOM); this scheme is referred to as δ-4DDA. However, the adding method has not previously been applied to the δ-four-stream spherical harmonic expansion approximation (SHM) for solving infrared radiative transfer through multiple layers. In this paper, the adding method for the δ-four-stream SHM (δ-4SDA) is derived and its accuracy is evaluated. The result of δ-4SDA in an idealized medium with homogeneous optical properties is significantly more accurate than that of the adding method for the δ-two-stream DOM (δ-2DDA): the relative errors of δ-2DDA can exceed 15% at thin optical depths for downward emissivity, while the errors of δ-4SDA are bounded by 2%. However, δ-4SDA is slightly less accurate than δ-4DDA. In a radiation model with a realistic atmospheric profile including gaseous transmission, the heating-rate accuracy of δ-4SDA is significantly superior to that of δ-2DDA, especially for cloudy skies; it is slightly lower than that of δ-4DDA under water cloud conditions, but superior to δ-4DDA in ice cloud cases. Besides, the computational efficiency of δ-4SDA is higher than that of δ-4DDA.
NASA Astrophysics Data System (ADS)
Lotov, A. V.; Maiskaya, T. S.
2012-01-01
For multicriteria convex optimization problems, new nonadaptive methods are proposed for polyhedral approximation of the multidimensional Edgeworth-Pareto hull (EPH), which is a maximal set having the same Pareto frontier as the set of feasible criteria vectors. The methods are based on evaluating the support function of the EPH for a collection of directions generated by a suboptimal covering on the unit sphere. Such directions are constructed in advance by applying an asymptotically effective adaptive method for the polyhedral approximation of convex compact bodies, namely, by the estimate refinement method. Due to the a priori definition of the directions, the proposed EPH approximation procedure can easily be implemented with parallel computations. Moreover, the use of nonadaptive methods considerably simplifies the organization of EPH approximation on the Internet. Experiments with an applied problem (from 3 to 5 criteria) showed that the methods are fairly similar in characteristics to adaptive methods. Therefore, they can be used in parallel computations and on the Internet.
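The core operation of such methods, evaluating the support function of a set for a given direction, is easy to state. The sketch below works with a finite set of criteria vectors, a simplified stand-in for the actual EPH construction:

```python
import numpy as np

def support_function(points, direction):
    """Support function h(P, d) = max over p in P of <p, d> of a finite set."""
    return float(np.max(points @ direction))

def outer_polyhedral_approx(points, directions):
    """Each direction d yields the half-space {x : <x, d> <= h(P, d)}
    containing the whole set; intersecting these half-spaces gives an
    outer polyhedral approximation that tightens as directions are added."""
    return [(d, support_function(points, d)) for d in directions]

# unit square as the test set; three fixed (nonadaptive) directions
square = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
halfspaces = outer_polyhedral_approx(
    square, [np.array([1., 0.]), np.array([0., 1.]), np.array([1., 1.])])
```

Because each direction is evaluated independently, the support-function calls parallelize trivially, which is the property the paper exploits.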
Albrecht, Andreas A; Day, Luke; Abdelhadi Ep Souki, Ouala; Steinhöfel, Kathleen
2016-02-01
The analysis of energy landscapes plays an important role in mathematical modelling, simulation and optimisation. Among the main features of interest are the number and distribution of local minima within the energy landscape. Granier and Kallel proposed in 2002 a new sampling procedure for estimating the number of local minima. In the present paper, we focus on improved heuristic implementations of the general framework devised by Granier and Kallel with regard to run-time behaviour and accuracy of predictions. The new heuristic method is demonstrated for the case of partial energy landscapes induced by RNA secondary structures. While the computation of minimum free energy RNA secondary structures has been studied for a long time, the analysis of folding landscapes has gained momentum over the past years in the context of co-transcriptional folding and deeper insights into cell processes. The new approach has been applied to ten RNA instances of length between 99 nt and 504 nt and their respective partial energy landscapes defined by secondary structures within an energy offset ΔE above the minimum free energy conformation. The number of local minima within the partial energy landscapes ranges from 1440 to 3441. Our heuristic method produces, for the best approximations, an average deviation below 3.0% from the true number of local minima.
NASA Astrophysics Data System (ADS)
Miura, Shinichi; Okazaki, Susumu
2001-09-01
In this paper, the path integral molecular dynamics (PIMD) method has been extended to employ an efficient approximation of the path action referred to as the pair density matrix approximation. Configurations of the isomorphic classical systems were dynamically sampled by introducing fictitious momenta, as in PIMD based on the standard primitive approximation. The indistinguishability of the particles was handled by a pseudopotential of particle permutation that is an extension of our previous one [J. Chem. Phys. 112, 10 116 (2000)]. As a test of our methodology for Boltzmann statistics, calculations have been performed for liquid helium-4 at 4 K. We found that PIMD with the pair density matrix approximation dramatically reduced the computational cost of obtaining the structural as well as dynamical (using the centroid molecular dynamics approximation) properties at the same level of accuracy as the primitive approximation. With respect to the identical particles, we performed the calculation of a bosonic triatomic cluster. Unlike the primitive approximation, the pseudopotential scheme based on the pair density matrix approximation described well the bosonic correlation among the interacting atoms. Convergence with a small number of path-discretization steps, achieved by this approximation, enables us to construct a method that avoids the problem of the vanishing pseudopotential encountered in calculations with the primitive approximation.
NASA Technical Reports Server (NTRS)
Pratt, D. T.
1984-01-01
Conventional algorithms for the numerical integration of ordinary differential equations (ODEs) are based on the use of polynomial functions as interpolants. However, the exact solutions of stiff ODEs behave like decaying exponential functions, which are poorly approximated by polynomials. An obvious choice of interpolant is the exponential functions themselves, or their low-order diagonal Pade (rational function) approximants. A number of explicit, A-stable integration algorithms were derived from the use of a three-parameter exponential function as interpolant, and their relationship to low-order, polynomial-based and rational-function-based implicit and explicit methods was shown by examining their low-order diagonal Pade approximants. A robust implicit formula was derived by exponentially fitting the trapezoidal rule. Application of these algorithms to the integration of the ODEs governing homogeneous, gas-phase chemical kinetics was demonstrated in a developmental code, CREK1D, which compares favorably with the Gear-Hindmarsh code LSODE in spite of the use of a primitive stepsize control strategy.
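As a toy illustration of why the diagonal Padé approximant matters here: the (1,1) approximant exp(z) ≈ (1 + z/2)/(1 − z/2) is exactly the stability function of the trapezoidal rule, and it stays bounded for any step size on a decaying problem. This is a generic sketch, not the CREK1D algorithm:

```python
def pade11_step(y, lam, h):
    """Advance y' = lam * y by one step using the (1,1) diagonal Pade
    approximant of exp(lam * h): R(z) = (1 + z/2) / (1 - z/2), the
    stability function of the (A-stable) trapezoidal rule."""
    z = lam * h
    return y * (1.0 + 0.5 * z) / (1.0 - 0.5 * z)

# stiff decay with z = lam*h = -10: explicit Euler's factor (1 + z) = -9
# would grow ninefold per step, while the Pade factor -2/3 contracts
y = 1.0
for _ in range(5):
    y = pade11_step(y, lam=-1000.0, h=0.01)
```

After five steps y equals (−2/3)⁵ ≈ −0.13, mirroring the decay of the exact solution instead of exploding.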
Heats of Segregation of BCC Metals Using Ab Initio and Quantum Approximate Methods
NASA Technical Reports Server (NTRS)
Good, Brian; Chaka, Anne; Bozzolo, Guillermo
2003-01-01
Many multicomponent alloys exhibit surface segregation, in which the composition at or near a surface may be substantially different from that of the bulk. A number of phenomenological explanations for this tendency have been suggested, involving, among other things, differences among the components' surface energies, molar volumes, and heats of solution. From a theoretical standpoint, the complexity of the problem has precluded a simple, unified explanation, thus preventing the development of computational tools that would enable the identification of the driving mechanisms for segregation. In that context, we investigate the problem of surface segregation in a variety of bcc metal alloys by computing dilute-limit heats of segregation using both the quantum-approximate energy method of Bozzolo, Ferrante and Smith (BFS), and all-electron density functional theory. In addition, the composition dependence of the heats of segregation is investigated using a BFS-based Monte Carlo procedure, and, for selected cases of interest, density functional calculations. Results are discussed in the context of a simple picture that describes segregation behavior as the result of a competition between size mismatch and alloying effects.
NASA Astrophysics Data System (ADS)
Wang, S.; Zhang, X. N.; Gao, D. D.; Liu, H. X.; Ye, J.; Li, L. R.
2016-08-01
As solar photovoltaic (PV) power is applied extensively, more attention is being paid to the maintenance and fault diagnosis of PV power plants. Based on an analysis of the structure of a PV power station, the global partitioned gradually approximation method is proposed as a fault diagnosis algorithm to determine and locate faults of PV panels. The PV array is divided into 16x16 blocks and numbered. On the basis of modular processing of the PV array, the current values of each block are analyzed, and the mean current value of each block is used to calculate the fault weight factor. A fault threshold is defined to determine the fault, and shading is considered in order to reduce the probability of misjudgments. A fault diagnosis system is designed and implemented with LabVIEW; its functions include real-time data display, online checking, statistics, real-time prediction and fault diagnosis. The algorithm is verified with data from PV plants. The results show that the fault diagnosis results are accurate and the system works well, which verifies the validity and feasibility of the system. The developed system will benefit the maintenance and management of large-scale PV arrays.
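The block-wise detection step can be sketched in a few lines. The weight-factor definition (block mean current over array-wide mean) and the 0.8 threshold below are illustrative assumptions, not the paper's exact formulas:

```python
import numpy as np

def faulty_blocks(block_currents, threshold=0.8):
    """Flag PV array blocks whose mean current falls below a fault
    threshold relative to the array-wide mean current. Returns the
    (row, col) indices of the flagged blocks."""
    weight = block_currents / block_currents.mean()  # fault weight factor
    return np.argwhere(weight < threshold)

currents = np.full((16, 16), 5.0)   # healthy 16x16 array of block mean currents
currents[3, 7] = 1.0                # one shaded or faulty block
faults = faulty_blocks(currents)
```

A shading check, as in the paper, would additionally compare neighbouring blocks before declaring a fault.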
On the enhancement of the approximation order of triangular Shepard method
NASA Astrophysics Data System (ADS)
Dell'Accio, Francesco; Di Tommaso, Filomena; Hormann, Kai
2016-10-01
Shepard's method is a well-known technique for interpolating large sets of scattered data. The classical Shepard operator reconstructs an unknown function as a normalized blend of the function values at the scattered points, using the inverse distances to the scattered points as weight functions. Based on the general idea of defining interpolants by convex combinations, Little suggested extending the bivariate Shepard operator in two ways. On the one hand, he considers a triangulation of the scattered points and substitutes function values with linear polynomials which locally interpolate the given data at the vertices of each triangle. On the other hand, he modifies the classical point-based weight functions and defines instead a normalized blend of the locally interpolating polynomials with triangle-based weight functions which depend on the product of inverse distances to the three vertices of the corresponding triangle. The resulting triangular Shepard operator interpolates all data required for its definition, reproduces polynomials up to degree 1 (whereas the classical Shepard operator reproduces only constants), and has quadratic approximation order. In this paper we discuss an improvement of the triangular Shepard operator.
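For reference, the classical point-based Shepard operator that the triangular variant improves on can be written in a few lines (a generic sketch):

```python
import numpy as np

def shepard(nodes, values, x, mu=2):
    """Classical Shepard interpolant: a normalized blend of the data
    values with inverse-distance weights w_i(x) = ||x - x_i||^(-mu)."""
    d = np.linalg.norm(nodes - x, axis=1)
    if np.any(d == 0.0):               # interpolation: exact at the nodes
        return float(values[np.argmin(d)])
    w = d ** (-float(mu))
    return float(w @ values / w.sum())

nodes = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.]])
values = np.array([3., 3., 3., 3.])    # constant data
```

Because the weights are normalized, constant data are reproduced exactly everywhere, which is precisely the degree-0 reproduction limitation the triangular Shepard operator raises to degree 1.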
Garvie, Marcus R; Burkardt, John; Morgan, Jeff
2015-03-01
We describe simple finite element schemes for approximating spatially extended predator-prey dynamics with the Holling type II functional response and logistic growth of the prey. The finite element schemes generalize 'Scheme 1' in the paper by Garvie (Bull Math Biol 69(3):931-956, 2007). We present user-friendly, open-source MATLAB code for implementing the finite element methods on arbitrary-shaped two-dimensional domains with Dirichlet, Neumann, Robin, mixed Robin-Neumann, mixed Dirichlet-Neumann, and Periodic boundary conditions. Users can download, edit, and run the codes from http://www.uoguelph.ca/~mgarvie/ . In addition to discussing the well posedness of the model equations, the results of numerical experiments are presented and demonstrate the crucial role that habitat shape, initial data, and the boundary conditions play in determining the spatiotemporal dynamics of predator-prey interactions. As most previous works on this problem have focussed on square domains with standard boundary conditions, our paper makes a significant contribution to the area.
A Simple, Approximate Method for Analysis of Kerr-Newman Black Hole Dynamics and Thermodynamics
NASA Astrophysics Data System (ADS)
Pankovic, V.; Ciganovic, S.; Glavatovic, R.
2009-06-01
In this work we present a simple approximate method for the analysis of the basic dynamical and thermodynamical characteristics of the Kerr-Newman black hole. Instead of the complete dynamics of the black hole self-interaction, we consider only the stable (stationary) dynamical situations determined by the condition that the black hole (outer) horizon "circumference" holds an integer number of reduced Compton wavelengths corresponding to the mass spectrum of a small quantum system (representing the quantum of the black hole self-interaction). We then show that the Kerr-Newman black hole entropy is simply the ratio of the sum of the static and rotational parts of the black hole mass to the ground mass of the small quantum system. We also show that the Kerr-Newman black hole temperature is the negative of the classical potential energy of the gravitational interaction between a part of the black hole with reduced mass and a small quantum system in the ground mass quantum state. Finally, we suggest a bosonic grand canonical distribution of the statistical ensemble of such small quantum systems in thermodynamical equilibrium with the (macroscopic) black hole as a thermal reservoir. We suggest that, practically, only the ground mass quantum state is significantly degenerate, while all the other, excited mass quantum states are non-degenerate; the Kerr-Newman black hole entropy is then practically equivalent to the degeneracy of the ground mass quantum state. The given statistical distribution also admits a rough (qualitative) but simple modeling of the Hawking radiation of the black hole.
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof
2015-06-01
Black carbon particles interact with organic and inorganic matter soon after emission. The primary goal of this work was to assess the accuracy of the DDA method in determining the optical properties of such composites. For the light scattering simulations the ADDA code was selected, and the superposition T-Matrix code by Mackowski was used as the reference algorithm. The first part of the study was to compare alternative models of a single primary particle. When only one material is considered, the largest averaged relative extinction error is associated with black carbon (δCext ≈ 2.8%); for inorganic and organic matter it is lowered to δCext ≈ 0.75%. There is no significant difference between spheres and ellipsoids of the same volume, and therefore both can be used interchangeably. The next step was to investigate aggregates composed of Np = 50 primary particles. When the coating is omitted, the averaged relative extinction error is δCext ≈ 2.6%; otherwise it can be below δCext < 0.2%.
NASA Astrophysics Data System (ADS)
Xu, Chuanju; Lin, Yumin
2000-03-01
Based on a new global variational formulation, a spectral element approximation of the incompressible Navier-Stokes/Euler coupled problem gives rise to a global discrete saddle-point problem. The classical Uzawa algorithm decouples the original saddle-point problem into two positive definite symmetric systems. Iterative solutions of such systems are feasible and attractive for large problems. It is shown that, provided an appropriate preconditioner is chosen for the pressure system, nested conjugate gradient methods can be applied to obtain rapid convergence rates. Detailed numerical examples are given to demonstrate the quality of the preconditioner. Thanks to the rapid iterative convergence, the global Uzawa algorithm has an advantage over classical iteration-by-subdomain procedures. Furthermore, a generalization of the preconditioned iterative algorithm to flow simulation is carried out. Comparisons of computational complexity between the Navier-Stokes/Euler coupled solution and the full Navier-Stokes solution are made; it is shown that the gain obtained by using the Navier-Stokes/Euler coupled solution is generally considerable.
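The decoupling performed by the Uzawa algorithm can be sketched on a small algebraic saddle-point system. This is a dense-matrix toy version with a plain Richardson pressure update; the paper works with preconditioned conjugate gradients on spectral element operators:

```python
import numpy as np

def uzawa(A, B, f, g, rho=1.0, iters=200):
    """Classical Uzawa iteration for the saddle-point system
        [A  B^T] [u]   [f]
        [B  0  ] [p] = [g].
    Each sweep solves the SPD velocity system A u = f - B^T p and then
    updates the pressure with the step p <- p + rho * (B u - g); it
    converges when rho is small relative to the Schur complement
    B A^{-1} B^T."""
    p = np.zeros(B.shape[0])
    u = np.zeros(A.shape[0])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)
        p = p + rho * (B @ u - g)
    return u, p

# tiny SPD test problem
A = np.array([[4., 1.], [1., 3.]])
B = np.array([[1., 2.]])
f = np.array([1., 2.])
g = np.array([3.])
u, p = uzawa(A, B, f, g, rho=0.5)
```

At convergence u satisfies the constraint B u = g exactly, which is what makes the constrained problem reducible to two symmetric positive definite solves per sweep.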
Bu, Sunyoung; Huang, Jingfang; Boyer, Treavor H.; Miller, Cass T.
2010-07-01
The focus of this work is on the modeling of an ion exchange process that occurs in drinking water treatment applications. The model formulation consists of a two-scale model in which a set of microscale diffusion equations representing ion exchange resin particles that vary in size and age are coupled through a boundary condition with a macroscopic ordinary differential equation (ODE), which represents the concentration of a species in a well-mixed reactor. We introduce a new age-averaged model (AAM) that averages all ion exchange particle ages for a given size particle to avoid the expensive Monte-Carlo simulation associated with previous modeling applications. We discuss two different numerical schemes to approximate both the original Monte-Carlo algorithm and the new AAM for this two-scale problem. The first scheme is based on the finite element formulation in space coupled with an existing backward difference formula-based ODE solver in time. The second scheme uses an integral equation based Krylov deferred correction (KDC) method and a fast elliptic solver (FES) for the resulting elliptic equations. Numerical results are presented to validate the new AAM algorithm, which is also shown to be more computationally efficient than the original Monte-Carlo algorithm. We also demonstrate that the higher order KDC scheme is more efficient than the traditional finite element solution approach and this advantage becomes increasingly important as the desired accuracy of the solution increases. We also discuss issues of smoothness, which affect the efficiency of the KDC-FES approach, and outline additional algorithmic changes that would further improve the efficiency of these developing methods for a wide range of applications.
An angularly refineable phase space finite element method with approximate sweeping procedure
Kophazi, J.; Lathouwers, D.
2013-07-01
An angularly refineable phase space finite element method is proposed to solve the neutron transport equation. The method combines the advantages of two recently published schemes. The angular domain is discretized into small patches and patch-wise discontinuous angular basis functions are restricted to these patches, i.e. there is no overlap between basis functions corresponding to different patches. This approach yields block diagonal Jacobians with small block size and retains the possibility for S_n-like approximate sweeping of the spatially discontinuous elements in order to provide efficient preconditioners for the solution procedure. On the other hand, the preservation of the full FEM framework (as opposed to collocation into a high-order S_n scheme) retains the possibility of the Galerkin interpolated connection between phase space elements at arbitrary levels of discretization. Since the basis vectors are not orthonormal, a generalization of the Riemann procedure is introduced to separate the incoming and outgoing contributions in case of unstructured meshes. However, due to the properties of the angular discretization, the Riemann procedure can be avoided at a large fraction of the faces and this fraction rapidly increases as the level of refinement increases, contributing to the computational efficiency. In this paper the properties of the discretization scheme are studied with uniform refinement using an iterative solver based on the S_2 sweep order of the spatial elements. The fourth order convergence of the scalar flux is shown as anticipated from earlier schemes and the rapidly decreasing fraction of required Riemann faces is illustrated.
A novel window based method for approximating the Hausdorff in 3D range imagery.
Koch, Mark William
2004-10-01
Matching a set of 3D points to another set of 3D points is an important part of any 3D object recognition system. The Hausdorff distance is known for its robustness in the face of obscuration, clutter, and noise. We show how to approximate the 3D Hausdorff fraction with linear time complexity and quadratic space complexity. We empirically demonstrate that the approximation is very good when compared to actual Hausdorff distances.
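The quantity being approximated can be stated directly. Below is a brute-force O(nm)-time reference implementation of the Hausdorff fraction, not the paper's linear-time window-based method:

```python
import numpy as np

def hausdorff_fraction(model, scene, eps):
    """Fraction of model points whose nearest scene point lies within
    distance eps (the Hausdorff fraction). Brute force: all pairwise
    squared distances are formed, so the cost is O(n*m) in time, unlike
    the window-based linear-time approximation described above."""
    d2 = ((model[:, None, :] - scene[None, :, :]) ** 2).sum(axis=-1)
    return float((d2.min(axis=1) <= eps * eps).mean())

rng = np.random.default_rng(0)
pts = rng.standard_normal((50, 3))   # a synthetic 3D point set
```

A fraction near 1 indicates a good match even when some model points are occluded or corrupted, which is the robustness property the abstract refers to.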
Stochastic approximation methods for fusion-rule estimation in multiple sensor systems
Rao, N.S.V.
1994-06-01
A system of N sensors S_1, S_2, …, S_N is considered; corresponding to an object with parameter x ∈ ℝ^d, sensor S_i yields output y^(i) ∈ ℝ^d according to an unknown probability distribution p_i(y^(i)|x). A training l-sample (x_1, y_1), (x_2, y_2), …, (x_l, y_l) is given, where y_i = (y_i^(1), y_i^(2), …, y_i^(N)) and y_i^(j) is the output of S_j in response to input x_i. The problem is to estimate a fusion rule f : ℝ^(Nd) → ℝ^d, based on the sample, such that the expected square error I(f) = ∫ [x − f(y^(1), y^(2), …, y^(N))]^2 p(y^(1), y^(2), …, y^(N)|x) p(x) dy^(1) dy^(2) … dy^(N) dx is minimized over a family of fusion rules Λ based on the given l-sample. Let f_* ∈ Λ minimize I(f); f_* cannot be computed since the underlying probability distributions are unknown. Three stochastic approximation methods are presented to compute an estimate f̂ such that, under suitable conditions and for a sufficiently large sample, P[I(f̂) − I(f_*) > ε] < δ for arbitrarily specified ε > 0 and δ, 0 < δ < 1. The three methods are based on Robbins-Monro style algorithms, empirical risk minimization, and regression estimation algorithms.
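A minimal sketch of one of the three approaches is a Robbins-Monro style update restricted to linear fusion rules f(y) = w · y; the sensor model, step-size schedule, and all numbers below are illustrative assumptions, not the paper's construction:

```python
import numpy as np

def fit_linear_fusion(samples, n_sensors, a0=0.5):
    """Robbins-Monro style stochastic approximation of a linear fusion
    rule f(y) = w . y minimizing E[(x - w . y)^2]. The step sizes
    a_t = a0 / t satisfy the classical conditions: sum a_t diverges
    while sum a_t^2 converges."""
    w = np.zeros(n_sensors)
    for t, (x, y) in enumerate(samples, start=1):
        w += (a0 / t) * (x - w @ y) * y   # stochastic gradient step
    return w

# three noisy sensors observing the same scalar parameter x
rng = np.random.default_rng(1)
xs = rng.standard_normal(4000)
samples = [(x, x + 0.1 * rng.standard_normal(3)) for x in xs]
w = fit_linear_fusion(samples, n_sensors=3)
```

For this symmetric sensor model the learned weights should sum to roughly 1, i.e. the fusion rule approaches an averaging of the sensors.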
NASA Astrophysics Data System (ADS)
Lin, Xue-lei; Lu, Xin; Ng, Micheal K.; Sun, Hai-Wei
2016-10-01
A fast accurate approximation method with multigrid solver is proposed to solve a two-dimensional fractional sub-diffusion equation. Using the finite difference discretization of the fractional time derivative, a block lower triangular Toeplitz matrix is obtained where each main diagonal block contains a two-dimensional matrix for the Laplacian operator. Our idea is to make use of the block ε-circulant approximation via fast Fourier transforms, so that the resulting task is to solve a block diagonal system, where each diagonal block matrix is the sum of a complex scalar times the identity matrix and a Laplacian matrix. We show that the accuracy of the approximation scheme is of O(ε). Because of the special diagonal block structure, we employ the multigrid method to solve the resulting linear systems. The convergence of the multigrid method is studied. Numerical examples are presented to illustrate the accuracy of the proposed approximation scheme and the efficiency of the proposed solver.
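The scalar analogue of the block ε-circulant idea is that a circulant matrix is diagonalized by the FFT, so a circulant system can be solved in O(n log n). This is a toy sketch; the paper's approximation works block-wise on the Toeplitz time-stepping matrix:

```python
import numpy as np

def solve_circulant(c, b):
    """Solve C x = b where C is the circulant matrix with first column c.
    Since C = F^{-1} diag(fft(c)) F, the solve reduces to two FFTs and
    one elementwise division: O(n log n) instead of O(n^3)."""
    return np.real(np.fft.ifft(np.fft.fft(b) / np.fft.fft(c)))

# build the dense circulant matrix explicitly to verify the result
c = np.array([4.0, 1.0, 0.0, 1.0])
n = len(c)
C = np.array([[c[(i - j) % n] for j in range(n)] for i in range(n)])
b = np.array([1.0, 2.0, 3.0, 4.0])
x = solve_circulant(c, b)
```

The same diagonalization trick, applied with an ε-perturbed circulant embedding of the block Toeplitz matrix, is what turns the coupled time steps into independent block-diagonal solves.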
Optimal approximation method to characterize the resource trade-off functions for media servers
NASA Astrophysics Data System (ADS)
Chang, Ray-I.
1999-08-01
We have proposed an algorithm to smooth the transmission of pre-recorded VBR media streams. Since it takes O(n) time and n is large, that algorithm is not suitable for online resource management and admission control in media servers. To resolve this drawback, we have explored the optimal trade-off among resources with an O(n log n) algorithm. Based on the pre-computed resource trade-off function, the resource management and admission control procedure is as simple as table hashing. However, this approach requires O(n) space to store and maintain the resource trade-off function. In this paper, given some extra resources, a linear-time algorithm is proposed to approximate the resource trade-off function by piecewise line segments. We prove that the number of line segments in the obtained approximation function is minimized for the given extra resources. The proposed algorithm has been applied to approximate the bandwidth-buffer trade-off function of the real-world Star Wars movie. With an extra 0.1 Mbps of bandwidth, the storage space required for the approximation function is over 2000 times smaller than that required for the original function; with an extra 10 KB buffer, it is over 2200 times smaller. The proposed algorithm is thus useful for resource management and admission control in real-world media servers.
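A greedy variant of the idea is easy to sketch. Greedy longest-segment covering is not guaranteed to be minimal in general, unlike the paper's algorithm, so this is an illustrative stand-in with an assumed symmetric tolerance:

```python
import numpy as np

def greedy_pwl(xs, ys, tol):
    """Cover (xs, ys) with line segments joining data points so that each
    chord stays within +/- tol of the data it spans. Returns a list of
    (start_index, end_index) pairs; each segment is extended greedily
    until the chord test first fails."""
    segments, i, n = [], 0, len(xs)
    while i < n - 1:
        j = i + 1
        while j + 1 < n:
            k = j + 1
            # chord from point i to point k, evaluated at points i+1..k
            t = (xs[i + 1:k + 1] - xs[i]) / (xs[k] - xs[i])
            chord = ys[i] + t * (ys[k] - ys[i])
            if np.max(np.abs(chord - ys[i + 1:k + 1])) > tol:
                break
            j = k
        segments.append((i, j))
        i = j
    return segments

xs = np.linspace(0.0, 10.0, 101)
line = greedy_pwl(xs, 2.0 * xs + 1.0, tol=0.01)    # already linear: 1 segment
kink = greedy_pwl(xs, np.abs(xs - 5.0), tol=0.01)  # one kink at x = 5: 2 segments
```

Storing only the segment endpoints instead of all n samples is the source of the space savings the abstract reports.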
NASA Astrophysics Data System (ADS)
Kováč, Michal
2015-03-01
Thin-walled centrically compressed members with non-symmetrical or mono-symmetrical cross-sections can buckle in a torsional-flexural buckling mode. Vlasov developed a system of governing differential equations for the stability of such members. Solving these coupled equations analytically is only possible in simple cases. Therefore, Goľdenvejzer introduced an approximate method for the solution of this system to calculate the critical axial force of torsional-flexural buckling, which can also be used for members with various boundary conditions in bending and torsion. This approximate method for the calculation of the critical force has been adopted into design standards. Nowadays, we can also solve the governing differential equations by numerical methods, such as the finite element method (FEM). Therefore, in this paper, the results of the approximate method and the FEM were compared, with the FEM taken as the reference method. This comparison reveals the discrepancies of the approximate method; attention was also paid to when and why the discrepancies occur. The approximate method can be used in practice provided some simplifications, which ensure safe results, are taken into account.
Approximation methods for inverse problems involving the vibration of beams with tip bodies
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Two cubic spline based approximation schemes for the estimation of structural parameters associated with the transverse vibration of flexible beams with tip appendages are outlined. The identification problem is formulated as a least squares fit to data subject to the system dynamics which are given by a hybrid system of coupled ordinary and partial differential equations. The first approximation scheme is based upon an abstract semigroup formulation of the state equation while a weak/variational form is the basis for the second. Cubic spline based subspaces together with a Rayleigh-Ritz-Galerkin approach were used to construct sequences of easily solved finite dimensional approximating identification problems. Convergence results are briefly discussed and a numerical example demonstrating the feasibility of the schemes and exhibiting their relative performance for purposes of comparison is provided.
A Novel Method of the Generalized Interval-Valued Fuzzy Rough Approximation Operators
Xue, Tianyu; Xue, Zhan'ao; Cheng, Huiru; Liu, Jie; Zhu, Tailong
2014-01-01
Rough set theory is a suitable tool for dealing with the imprecision, uncertainty, incompleteness, and vagueness of knowledge. In this paper, new lower and upper approximation operators for generalized fuzzy rough sets are constructed, and their definitions are extended to the interval-valued environment. Furthermore, the properties of this type of rough set are analyzed. These operators are shown to be equivalent to the generalized interval fuzzy rough approximation operators introduced by Dubois, which are determined by any interval-valued fuzzy binary relation expressed in a generalized approximation space. The main properties of these operators are discussed under different interval-valued fuzzy binary relations, and illustrative examples are given to demonstrate the main features of the proposed operators. PMID:25162065
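To give the flavor of such operators, here is a minimal sketch of (non-interval) fuzzy rough lower and upper approximations over a finite universe, using the Kleene-Dienes implicator and the min t-norm; the interval-valued generalization in the paper applies formulas of this kind to interval endpoints. The relation, fuzzy set, and values below are invented for illustration:

```python
def lower_approx(R, A, universe):
    """Lower approximation: degree to which everything R-related to x is in A;
    implication modeled with the Kleene-Dienes operator max(1 - r, a)."""
    return {x: min(max(1 - R[x][y], A[y]) for y in universe) for x in universe}

def upper_approx(R, A, universe):
    """Upper approximation: degree to which something R-related to x is in A;
    t-norm = min, sup over the universe = max."""
    return {x: max(min(R[x][y], A[y]) for y in universe) for x in universe}

# Invented two-element universe with a reflexive fuzzy similarity relation.
universe = ['a', 'b']
R = {'a': {'a': 1.0, 'b': 0.4}, 'b': {'a': 0.4, 'b': 1.0}}
A = {'a': 0.9, 'b': 0.2}
low = lower_approx(R, A, universe)
up = upper_approx(R, A, universe)
```

For a reflexive relation the sandwich property holds pointwise: lower(A) <= A <= upper(A).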
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development, it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulations/estimations was investigated by computing the bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block-diagonal FIM optimal designs when assuming true parameter values. However, the FO-approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Saitoh, T.S.; Hoshi, Akira
1999-07-01
numerical methods (e.g. Saitoh and Kato, 1994). In addition, close-contact melting heat transfer characteristics, including melt flow in the liquid film under an inner-wall temperature distribution, were analyzed, and simple approximate equations were presented by Saitoh and Hoshi (1997). In this paper, the authors propose an analytical solution for combined close-contact and natural convection melting in horizontal cylindrical and spherical capsules, which is useful for practical capsule-bed LHTES systems.
Closure to new results for an approximate method for calculating two-dimensional furrow infiltration
Technology Transfer Automated Retrieval System (TEKTRAN)
In a discussion paper, Ebrahimian and Noury (2015) raised several concerns about an approximate solution to the two-dimensional Richards equation presented by Bautista et al (2014). The solution is based on a procedure originally proposed by Warrick et al. (2007). Such a solution is of practical i...
ERIC Educational Resources Information Center
von Davier, Matthias; Sinharay, Sandip
2009-01-01
This paper presents an application of a stochastic approximation EM-algorithm using a Metropolis-Hastings sampler to estimate the parameters of an item response latent regression model. Latent regression models are extensions of item response theory (IRT) to a 2-level latent variable model in which covariates serve as predictors of the…
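The Metropolis-Hastings ingredient can be illustrated in isolation. The sketch below samples a standard normal stand-in for the latent posterior with a random-walk proposal; all settings are hypothetical, and this is far simpler than the stochastic approximation EM machinery described here:

```python
import math, random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk Metropolis-Hastings with a symmetric Gaussian proposal:
    accept x' with probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        # symmetric proposal => acceptance ratio is just the target ratio
        if math.log(rng.random() + 1e-300) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Stand-in target: a standard normal "posterior" over a latent trait theta.
samples = metropolis_hastings(lambda t: -0.5 * t * t, 0.0, 20000)
mean = sum(samples) / len(samples)
```

In a stochastic EM scheme, draws like these replace the intractable E-step expectation over the latent regression variables.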
Existence and uniqueness results for neural network approximations.
Williamson, R C; Helmke, U
1995-01-01
Some approximation theoretic questions concerning a certain class of neural networks are considered. The networks considered are single input, single output, single hidden layer, feedforward neural networks with continuous sigmoidal activation functions, no input weights but with hidden layer thresholds and output layer weights. Specifically, questions of existence and uniqueness of best approximations on a closed interval of the real line under mean-square and uniform approximation error measures are studied. A by-product of this study is a reparametrization of the class of networks considered in terms of rational functions of a single variable. This rational reparametrization is used to apply the theory of Pade approximation to the class of networks considered. In addition, a question related to the number of local minima arising in gradient algorithms for learning is examined.
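Because such a network has no input weights, once the hidden-layer thresholds are fixed it is linear in the output weights, so the best mean-square approximation for fixed thresholds reduces to a linear least-squares problem. A minimal sketch (the target function, threshold placement, and sizes are arbitrary choices, not the paper's):

```python
import numpy as np

def sigma(x):
    return 1.0 / (1.0 + np.exp(-x))

def fit_output_weights(x, y, thresholds):
    """Best mean-square fit of f(x) = sum_i c_i * sigma(x - theta_i):
    with no input weights and fixed thresholds, the network is linear in
    the output weights c, so the fit is a linear least-squares problem."""
    Phi = sigma(x[:, None] - thresholds[None, :])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return c

x = np.linspace(-3, 3, 200)            # closed interval of approximation
y = np.sin(x)                          # arbitrary target function
thetas = np.linspace(-3, 3, 8)         # fixed hidden-layer thresholds
c = fit_output_weights(x, y, thetas)
approx = sigma(x[:, None] - thetas[None, :]) @ c
err = np.max(np.abs(approx - y))
```

The genuinely nonlinear (and analytically delicate) part of the best-approximation problem studied in the paper is the optimization over the thresholds themselves.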
Deniz, Furkan Nur; Alagoz, Baris Baykant; Tan, Nusret; Atherton, Derek P
2016-05-01
This paper introduces an integer order approximation method for numerical implementation of fractional order derivative/integrator operators in control systems. The proposed method is based on fitting the stability boundary locus (SBL) of fractional order derivative/integrator operators and SBL of integer order transfer functions. SBL defines a boundary in the parametric design plane of controller, which separates stable and unstable regions of a feedback control system and SBL analysis is mainly employed to graphically indicate the choice of controller parameters which result in stable operation of the feedback systems. This study reveals that the SBL curves of fractional order operators can be matched with integer order models in a limited frequency range. SBL fitting method provides straightforward solutions to obtain an integer order model approximation of fractional order operators and systems according to matching points from SBL of fractional order systems in desired frequency ranges. Thus, the proposed method can effectively deal with stability preservation problems of approximate models. Illustrative examples are given to show performance of the proposed method and results are compared with the well-known approximation methods developed for fractional order systems. The integer-order approximate modeling of fractional order PID controllers is also illustrated for control applications.
NASA Astrophysics Data System (ADS)
Erhard, Jannis; Bleiziffer, Patrick; Görling, Andreas
2016-09-01
A power series approximation for the correlation kernel of time-dependent density-functional theory is presented. Using this approximation in the adiabatic-connection fluctuation-dissipation (ACFD) theorem leads to a new family of Kohn-Sham methods. The new methods yield reaction energies and barriers of unprecedented accuracy and enable a treatment of static (strong) correlation with an accuracy of high-level multireference configuration interaction methods but are single-reference methods allowing for a black-box-like handling of static correlation. The new methods exhibit a better scaling of the computational effort with the system size than rivaling wave-function-based electronic structure methods. Moreover, the new methods do not suffer from the problem of singularities in response functions plaguing previous ACFD methods and therefore are applicable to any type of electronic system.
A Diffusion Approximation and Numerical Methods for Adaptive Neuron Models with Stochastic Inputs.
Rosenbaum, Robert
2016-01-01
Characterizing the spiking statistics of neurons receiving noisy synaptic input is a central problem in computational neuroscience. Monte Carlo approaches to this problem are computationally expensive and often fail to provide mechanistic insight. Thus, the field has seen the development of mathematical and numerical approaches, often relying on a Fokker-Planck formalism. These approaches force a compromise between biological realism, accuracy and computational efficiency. In this article we develop an extension of existing diffusion approximations to more accurately approximate the response of neurons with adaptation currents and noisy synaptic currents. The implementation refines existing numerical schemes for solving the associated Fokker-Planck equations to improve computational efficiency and accuracy. Computer code implementing the developed algorithms is made available to the public.
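The kind of model the diffusion approximation targets can be sketched with a plain Euler-Maruyama simulation of an adaptive leaky integrate-and-fire neuron driven by white noise; the parameter values below are arbitrary, and this Monte Carlo sketch is exactly the expensive route the Fokker-Planck methods avoid:

```python
import math, random

def simulate_alif(mu=2.0, sigma=1.0, tau_m=1.0, tau_a=5.0, beta=0.5,
                  v_th=1.0, v_reset=0.0, dt=1e-3, t_max=100.0, seed=1):
    """Euler-Maruyama simulation of an adaptive leaky integrate-and-fire
    neuron with white-noise input (the process a diffusion approximation
    describes):
        tau_m dV = (mu - V - a) dt + sigma * sqrt(tau_m) dW
        tau_a da = -a dt,  with a -> a + beta at every spike.
    Returns the firing rate in spikes per unit time."""
    rng = random.Random(seed)
    v, a, spikes = v_reset, 0.0, 0
    noise = sigma * math.sqrt(dt / tau_m)
    for _ in range(int(t_max / dt)):
        v += dt * (mu - v - a) / tau_m + noise * rng.gauss(0.0, 1.0)
        a -= dt * a / tau_a
        if v >= v_th:                 # threshold crossing: spike and reset
            v = v_reset
            a += beta                 # adaptation current increments
            spikes += 1
    return spikes / t_max

rate = simulate_alif()   # adaptation (beta > 0) lowers the firing rate
```

A diffusion approximation replaces long simulations like this with a solve of the corresponding Fokker-Planck equation for the membrane-potential density.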
NASA Technical Reports Server (NTRS)
Banks, H. T.; Smith, Ralph C.; Wang, Yun
1994-01-01
Based on a distributed parameter model for vibrations, an approximate finite dimensional dynamic compensator is designed to suppress vibrations (multiple modes with a broad band of frequencies) of a circular plate with Kelvin-Voigt damping and clamped boundary conditions. The control is realized via piezoceramic patches bonded to the plate and is calculated from information available from several pointwise observed state variables. Examples from computational studies as well as use in laboratory experiments are presented to demonstrate the effectiveness of this design.
Extended proton-neutron quasiparticle random-phase approximation in a boson expansion method
NASA Astrophysics Data System (ADS)
Civitarese, O.; Montani, F.; Reboiro, M.
1999-08-01
The proton-neutron quasiparticle random phase approximation (pn-QRPA) is extended to include next to leading order terms of the QRPA harmonic expansion. The procedure is tested for the case of a separable Hamiltonian in the SO(5) symmetry representation. The pn-QRPA equation of motion is solved by using a boson expansion technique adapted to the treatment of proton-neutron correlations. The resulting wave functions are used to calculate the matrix elements of double-Fermi transitions.
NASA Astrophysics Data System (ADS)
Mugunthan, Pradeep; Shoemaker, Christine A.; Regis, Rommel G.
2005-11-01
The performance of function approximation (FA) methods is compared to heuristic and derivative-based nonlinear optimization methods for automatic calibration of biokinetic parameters of a groundwater bioremediation model of chlorinated ethenes on a hypothetical and a real field case. For the hypothetical case, on the basis of 10 trials on two different objective functions, the FA methods had the lowest mean and smallest deviation of the objective function among all algorithms for a combined Nash-Sutcliffe objective, and among all but the derivative-based algorithm for a total squared error objective. The best algorithms in the hypothetical case were applied to calibrate eight parameters to data obtained from a site in California. In three trials the FA methods outperformed the heuristic and derivative-based methods for both objective functions. This study indicates that function approximation methods could be a more efficient alternative to heuristic and derivative-based methods for automatic calibration of computationally expensive bioremediation models.
Nakajima, Nobuharu
2013-03-01
Previously, we proposed a lensless coherent imaging method using a nonholographic, noniterative phase-retrieval technique that allows the reconstruction of a complex-valued object from a single diffraction intensity measured with an aperture-array filter. The proof-of-concept experiment for this method was demonstrated under the Fresnel diffraction approximation. In applications to microscopy, however, measurement of the diffraction intensity at high numerical aperture, beyond the Fresnel approximation, is required to obtain the object information at high spatial resolution. Thus we have also presented an extension procedure for applying the method beyond the Fresnel approximation by means of computer simulations. Here the effectiveness of the procedure is demonstrated experimentally: a reconstruction with about 10 times the resolution of our previous experiment has been achieved, and the object information in the depth direction has been retrieved.
An approximate reasoning-based method for screening high-level-waste tanks for flammable gas
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
2000-06-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at the Hanford site have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. Approximate-reasoning models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. In a pilot study to investigate the utility of AR for flammable gas screening, the effort to implement such a model was found to be acceptable, and the computational requirements were found to be reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
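The basic AR ingredients, linguistic variables represented by fuzzy sets and rules combined with min/max operations, can be sketched as follows. The membership functions and the single screening rule are invented for illustration and are far simpler than the tank-screening model described here:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function for a linguistic variable."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical linguistic variables for a screening judgment.
def retention_high(frac):        # fraction of generated gas retained in waste
    return trapezoid(frac, 0.3, 0.6, 1.0, 1.01)

def release_episodic(score):     # expert score (0..1) for episodic release
    return trapezoid(score, 0.2, 0.5, 1.0, 1.01)

def screen(frac, score):
    """Forward-chaining rule: IF retention is high AND release is episodic
    THEN flammable-gas concern is high (AND modeled as min)."""
    return min(retention_high(frac), release_episodic(score))
```

Because the fuzzy memberships vary continuously, quantitative measurements and qualitative expert scores can be combined in the same inference chain.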
NASA Astrophysics Data System (ADS)
Bisetti, Fabrizio
2012-06-01
Recent trends in hydrocarbon fuel research indicate that the number of species and reactions in chemical kinetic mechanisms is rapidly increasing in an effort to provide predictive capabilities for fuels of practical interest. In order to cope with the computational cost associated with the time integration of stiff, large chemical systems, a novel approach is proposed. The approach combines an exponential integrator and Krylov subspace approximations to the exponential function of the Jacobian matrix. The components of the approach are described in detail and applied to the ignition of stoichiometric methane-air and iso-octane-air mixtures, here described by two widely adopted chemical kinetic mechanisms. The approach is found to be robust even at relatively large time steps and the global error displays a nominal third-order convergence. The performance of the approach is improved by utilising an adaptive algorithm for the selection of the Krylov subspace size, which guarantees an approximation to the matrix exponential within user-defined error tolerance. The Krylov projection of the Jacobian matrix onto a low-dimensional space is interpreted as a local model reduction with a well-defined error control strategy. Finally, the performance of the approach is discussed with regard to the optimal selection of the parameters governing the accuracy of its individual components.
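The core building block, approximating the action of the matrix exponential from a Krylov subspace built by the Arnoldi process, can be sketched as below. The test matrix, subspace size, and the Taylor-based small-matrix exponential are illustrative choices, not the paper's implementation:

```python
import numpy as np

def small_expm(M, terms=30):
    """Matrix exponential of a small dense matrix by scaling-and-squaring
    with a truncated Taylor series (illustrative, not production grade)."""
    s = max(0, int(np.ceil(np.log2(max(1e-16, np.linalg.norm(M, np.inf))))) + 1)
    X = M / 2 ** s
    E = np.eye(M.shape[0])
    T = np.eye(M.shape[0])
    for k in range(1, terms):
        T = T @ X / k
        E = E + T
    for _ in range(s):
        E = E @ E
    return E

def krylov_expm(A, v, m=20):
    """Approximate expm(A) @ v from an m-dimensional Krylov subspace built
    with the Arnoldi process:  expm(A) v ~ beta * V_m expm(H_m) e1."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):            # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:           # happy breakdown: invariant subspace
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m)
    e1[0] = 1.0
    return beta * V[:, :m] @ (small_expm(H[:m, :m]) @ e1)

# Stiff-ish stand-in problem: a 1-D diffusion (Laplacian) stencil matrix.
n = 60
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.ones(n)
w = krylov_expm(A, v, m=30)
```

The projection onto the small upper-Hessenberg matrix H is exactly the "local model reduction" viewpoint: only a dense exponential of size m is ever formed, and m can be adapted to meet an error tolerance.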
Liang, Xiao; Khaliq, Abdul Q. M.; Xing, Yulong
2015-01-23
In this paper, we study a local discontinuous Galerkin method combined with fourth order exponential time differencing Runge-Kutta time discretization and a fourth order conservative method for solving the nonlinear Schrödinger equations. Based on different choices of numerical fluxes, we propose both energy-conserving and energy-dissipative local discontinuous Galerkin methods, and have proven the error estimates for the semi-discrete methods applied to linear Schrödinger equation. The numerical methods are proven to be highly efficient and stable for long-range soliton computations. Finally, extensive numerical examples are provided to illustrate the accuracy, efficiency and reliability of the proposed methods.
Integral approximants for functions of higher monodromic dimension
Baker, G.A. Jr.
1987-01-01
In addition to the description of multiform, locally analytic functions as covering a many-sheeted version of the complex plane, Riemann also introduced the notion of considering them as describing a space whose "monodromic" dimension is the number of linearly independent coverings by the monogenic analytic function at each point of the complex plane. I suggest that this latter concept is natural for integral approximants (a sub-class of Hermite-Pade approximants) and discuss results for both "horizontal" and "diagonal" sequences of approximants. Some theorems are now available in both cases and make clear that the natural domain of convergence of the horizontal sequences is a disk centered on the origin, and that of the diagonal sequences is a suitably cut complex plane together with its identically cut pendant Riemann sheets. Some numerical examples have also been computed.
NASA Technical Reports Server (NTRS)
Bailey, Harry E.; Beam, Richard M.
1991-01-01
Finite-difference approximations for the steady-state compressible Navier-Stokes equations, whose two spatial dimensions are written in generalized curvilinear coordinates and strong conservation-law form, are solved by means of Newton's method in order to obtain a lifting-airfoil flow field under subsonic and transonic conditions. In addition to ascertaining the computational requirements of an initial guess ensuring convergence and the degree of computational efficiency obtainable via the approximate Newton method's freezing of the Jacobian matrices, attention is given to the need for auxiliary methods for assessing the temporal stability of steady-state solutions. It is demonstrated that nonunique solutions of the finite-difference equations are obtainable by Newton's method in conjunction with a continuation method.
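The Jacobian-freezing idea can be illustrated on a small system: a chord iteration reuses the initial Jacobian, trading Newton's quadratic convergence for cheaper steps. The 2x2 example system is invented for illustration and has nothing to do with the Navier-Stokes discretization itself:

```python
def solve2(J, r):
    # Solve the 2x2 linear system J d = r by Cramer's rule.
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    return ((r[0] * J[1][1] - r[1] * J[0][1]) / det,
            (J[0][0] * r[1] - J[1][0] * r[0]) / det)

def newton(F, Jac, x, tol=1e-12, freeze=False, max_it=100):
    """Newton iteration on a 2-D system.  With freeze=True the Jacobian is
    evaluated once and reused (chord / approximate Newton), trading
    quadratic for linear convergence in exchange for cheaper steps."""
    J = Jac(x)
    for it in range(1, max_it + 1):
        f = F(x)
        if max(abs(f[0]), abs(f[1])) < tol:
            return x, it
        if not freeze:
            J = Jac(x)                 # full Newton refreshes the Jacobian
        d = solve2(J, f)
        x = (x[0] - d[0], x[1] - d[1])
    return x, max_it

# Invented example: intersection of a circle and a hyperbola.
def F(p):
    x, y = p
    return (x * x + y * y - 4.0, x * y - 1.0)

def Jac(p):
    x, y = p
    return ((2 * x, 2 * y), (y, x))

x_full, it_full = newton(F, Jac, (2.0, 0.4))
x_chord, it_chord = newton(F, Jac, (2.0, 0.4), freeze=True)
```

Both variants reach the same root; the frozen-Jacobian iteration needs more (but individually cheaper) steps, which is the trade-off assessed in the paper.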
Evidence of iridescence in TiO2 nanostructures: An approximation in plane wave expansion method
NASA Astrophysics Data System (ADS)
Quiroz, Heiddy P.; Barrera-Patiño, C. P.; Rey-González, R. R.; Dussan, A.
2016-11-01
Titanium dioxide nanotubes, TiO2 NTs, can be obtained by electrochemical anodization of titanium sheets. After the nanotubes are removed by mechanical stress, residual structures or traces can be observed on the surface of the titanium sheets. These traces show iridescent effects. In this paper we carry out both an experimental and a theoretical study of these interesting and novel optical properties. For the experimental analysis we use angle-resolved UV-vis spectroscopy, while in the theoretical study the photonic spectra are evaluated using numerical simulations in the frequency domain within the framework of the plane-wave approximation. The iridescent effect is robust and independent of the sample. This behavior can be important for designing new materials or compounds for several applications, such as the cosmetic industry, optoelectronic devices, photocatalysis, and sensors, among others.
Kim, S.
1994-12-31
Parallel iterative procedures based on domain decomposition techniques are defined and analyzed for the numerical solution of wave propagation by finite element and finite difference methods. For finite element methods in a Lagrangian framework, an efficient way of choosing the algorithm parameter is indicated, and the convergence of the algorithm is established. Some heuristic arguments for finding the algorithm parameter for finite difference schemes are addressed. Numerical results are presented to indicate the effectiveness of the methods.
NASA Astrophysics Data System (ADS)
Varela, Alberto J.; Calvo, Maria L.
1995-04-01
We present a comparative study between two experimental methods to determine the modulation transfer function (MTF) of a hololens system. The two hololenses were previously recorded and tested for filtering pseudocolor. In the first method we used the classical Foucault test. The second, alternative method is based on the digital image processing of a perfect edge under incoherent illumination. From the digitized intensity line profiles we obtain the MTF and cutoff frequency of the optical system according to the reciprocity between line spread function and MTF. Comments are made on the applicability and accuracy of these two methods.
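The edge-based method can be sketched numerically: differentiate the edge spread function to obtain the line spread function, then take the magnitude of its Fourier transform. The erf-shaped synthetic edge below is an assumption standing in for the digitized intensity profile of a real system:

```python
import math

def mtf_from_edge(edge_profile):
    """MTF from a digitized edge: the line spread function (LSF) is the
    discrete derivative of the edge spread function, and the MTF is the
    magnitude of its Fourier transform, normalized so MTF(0) = 1."""
    lsf = [edge_profile[i + 1] - edge_profile[i]
           for i in range(len(edge_profile) - 1)]
    n = len(lsf)
    mtf = []
    for k in range(n // 2):              # plain O(n^2) DFT for clarity
        re = sum(lsf[j] * math.cos(2 * math.pi * k * j / n) for j in range(n))
        im = sum(lsf[j] * math.sin(2 * math.pi * k * j / n) for j in range(n))
        mtf.append(math.hypot(re, im))
    return [m / mtf[0] for m in mtf]

# Synthetic stand-in for the digitized profile: a perfect edge blurred by
# a Gaussian system response gives an erf-shaped edge spread function.
edge = [0.5 * (1 + math.erf((i - 32) / 4.0)) for i in range(64)]
mtf = mtf_from_edge(edge)
```

The cutoff frequency can then be read off as the frequency at which the normalized MTF drops below a chosen threshold.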
Freeze, G.A.; Larson, K.W.; Davies, P.B.
1995-10-01
Eight alternative methods for approximating salt creep and disposal room closure in a multiphase flow model of the Waste Isolation Pilot Plant (WIPP) were implemented and evaluated: three fixed-room geometries, three porosity functions, and two fluid-phase-salt methods. The pressure-time-porosity line interpolation method is the method used in current WIPP Performance Assessment calculations. The room closure approximation methods were calibrated against a series of room closure simulations performed using a creep closure code, SANCHO. The fixed-room geometries did not incorporate a direct coupling between room void volume and room pressure. The two porosity function methods utilized moles of gas as an independent parameter for closure coupling. The capillary backstress method was unable to accurately simulate conditions of re-closure of the room. Two methods were found to be accurate enough to approximate the effects of room closure: the boundary backstress method and pressure-time-porosity line interpolation. The boundary backstress method is a more reliable indicator of system behavior because it has a theoretical basis for modeling salt deformation as a viscous process. However, it is a complex method, and a detailed calibration process is required. The pressure lines method is thought to be less reliable because the results were skewed towards SANCHO results in simulations where the sequence of gas generation was significantly different from the SANCHO gas-generation rate histories used for closure calibration. This limitation of the pressure lines method is most pronounced at higher gas-generation rates and is relatively insignificant at lower gas-generation rates. Due to its relative simplicity, the pressure lines method is easier to implement in multiphase flow codes, and simulations have a shorter execution time.
A 3D finite element ALE method using an approximate Riemann solution
Chiravalle, V. P.; Morgan, N. R.
2016-08-09
Arbitrary Lagrangian–Eulerian finite volume methods that solve a multidimensional Riemann-like problem at the cell center in a staggered grid hydrodynamic (SGH) arrangement have been proposed. This research proposes a new 3D finite element arbitrary Lagrangian–Eulerian SGH method that incorporates a multidimensional Riemann-like problem. Here, two different Riemann jump relations are investigated. A new limiting method that greatly improves the accuracy of the SGH method on isentropic flows is investigated. A remap method that improves upon a well-known mesh relaxation and remapping technique in order to ensure total energy conservation during the remap is also presented. Numerical details and test problem results are presented.
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
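The benefit of Pade-type generalizations can be seen in one-dimensional dispersion relations. For u'' + k^2 u = 0, the standard 3-point stencil and a Numerov-type compact (Pade) stencil both give closed-form numerical wavenumbers; the compact scheme below is a textbook instance of the idea, not necessarily the exact stencils of the paper:

```python
import math

def k_numerical_standard(k, h):
    # 3-point stencil for u'' + k^2 u = 0:  cos(k~h) = 1 - (kh)^2 / 2
    return math.acos(1 - (k * h) ** 2 / 2) / h

def k_numerical_pade(k, h):
    # Numerov-type compact (Pade) stencil:
    #   (u[j-1] - 2u[j] + u[j+1])/h^2 + k^2 (u[j-1] + 10u[j] + u[j+1])/12 = 0
    # giving  cos(k~h) = (1 - 5(kh)^2/12) / (1 + (kh)^2/12)
    kh2 = (k * h) ** 2
    return math.acos((1 - 5 * kh2 / 12) / (1 + kh2 / 12)) / h

k, h = 1.0, 0.2                                  # roughly 31 points per wavelength
err_std = abs(k_numerical_standard(k, h) - k)    # O((kh)^2) dispersion error
err_pade = abs(k_numerical_pade(k, h) - k)       # O((kh)^4) dispersion error
```

At this resolution the compact stencil's phase error is orders of magnitude below the standard stencil's, which is why Pade-based stencils reduce spurious dispersion for time-harmonic problems.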
An approximate-reasoning-based method for screening high-level waste tanks for flammable gas
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
1998-07-01
The in situ retention of flammable gas produced by radiolysis and thermal decomposition in high-level waste can pose a safety problem if the gases are released episodically into the dome space of a storage tank. Screening efforts at Hanford have been directed at identifying tanks in which this situation could exist. Problems encountered in screening motivated an effort to develop an improved screening methodology. Approximate reasoning (AR) is a formalism designed to emulate the kinds of complex judgments made by subject matter experts. It uses inductive logic structures to build a sequence of forward-chaining inferences about a subject. AR models incorporate natural language expressions known as linguistic variables to represent evidence. The use of fuzzy sets to represent these variables mathematically makes it practical to evaluate quantitative and qualitative information consistently. The authors performed a pilot study to investigate the utility of AR for flammable gas screening. They found that the effort to implement such a model was acceptable and that computational requirements were reasonable. The preliminary results showed that important judgments about the validity of observational data and the predictive power of models could be made. These results give new insights into the problems observed in previous screening efforts.
NASA Astrophysics Data System (ADS)
Shigeta, Yasuteru; Nagao, Hidemi; Nishikawa, Kiyoshi; Yamaguchi, Kizashi
1999-10-01
We have proposed a new numerical scheme for non-Born-Oppenheimer density functional calculations based upon Green function techniques within the GW approximation for evaluating molecular properties in a fully quantum mechanical treatment. We numerically calculate the physical properties of the individual motions in a hydrogen molecule and a muonic molecule by means of this method and discuss the isotope effect on these properties in relation to correlation effects. It is concluded that the GW approximation works well not only for the calculation of the electronic state but also for that of the nuclear state.
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1990-01-01
In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high order accuracy, except where a novel local test indicates the development of spurious oscillations. At these points, the full ENO apparatus is used, maintaining the high order of accuracy but removing spurious oscillations. Numerical results indicate the success of the method. High order of accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a significant speedup, generally a factor of almost three over the full ENO method.
Tang, Yiping
2005-11-22
The recently proposed first-order mean-spherical approximation (FMSA) [Y. Tang, J. Chem. Phys. 121, 10605 (2004)] for inhomogeneous fluids is extended to the study of interfacial phenomena. Computation is performed for the Lennard-Jones fluid, in which all phase equilibria properties and direct correlation function for density-functional theory are developed consistently and systematically from FMSA. Three functional methods, including fundamental measure theory for the repulsive force, local-density approximation, and square-gradient approximation, are applied in this interfacial investigation. Comparisons with the latest computer simulation data indicate that FMSA is satisfactory in predicting surface tension, density profile, as well as relevant phase equilibria. Furthermore, this work strongly suggests that FMSA is very capable of unifying homogeneous and inhomogeneous fluids, as well as those behaviors outside and inside the critical region within one framework.
An approximate-reasoning-based method for screening flammable gas tanks
Eisenhawer, S.W.; Bott, T.F.; Smith, R.E.
1998-03-01
High-level waste (HLW) produces flammable gases as a result of radiolysis and thermal decomposition of organics. Under certain conditions, these gases can accumulate within the waste for extended periods and then be released quickly into the dome space of the storage tank. As part of the effort to reduce the safety concerns associated with flammable gas in HLW tanks at Hanford, a flammable gas watch list (FGWL) has been established. Inclusion on the FGWL is based on criteria intended to measure the risk associated with the presence of flammable gas. It is important that all high-risk tanks be identified with high confidence so that they may be controlled. Conversely, to minimize operational complexity, the number of tanks on the watch list should be reduced as near to the true number of flammable-gas-risk tanks as the current state of knowledge will support. This report presents an alternative to existing approaches for FGWL screening based on the theory of approximate reasoning (AR) (Zadeh 1976). The AR-based model emulates the inference process used by an expert when asked to make an evaluation. The FGWL model described here was exercised by performing two evaluations: (1) a complete tank evaluation in which the entire algorithm is used. This was done for two tanks, U-106 and AW-104. U-106 is a single-shell tank with large sludge and saltcake layers. AW-104 is a double-shell tank with over one million gallons of supernate. Both of these tanks had failed the screening performed by Hodgson et al. (2) Partial evaluations using a submodule for the predictor likelihood for all of the tanks on the FGWL that had been flagged previously by Whitney (1995).
NASA Astrophysics Data System (ADS)
Szalay, Viktor
1999-11-01
The reconstruction of a function from knowing only its values on a finite set of grid points, that is, the construction of an analytical approximation reproducing the function with good accuracy everywhere within the sampled volume, is an important problem in all branches of science. One such problem in chemical physics is the determination of an analytical representation of Born-Oppenheimer potential energy surfaces from ab initio calculations, which give the value of the potential at a finite set of grid points in configuration space. This article describes the rudiments of iterative and direct methods of potential surface reconstruction. The major new results are the derivation, numerical demonstration, and interpretation of a reconstruction formula. The reconstruction formula derived approximates the unknown function, say V, by a linear combination of functions obtained by discretizing the continuous distributed approximating functional (DAF) approximation of V over the grid of sampling. The simplest of contracted and ordinary Hermite-DAFs are shown to be sufficient for reconstruction. The linear combination coefficients can be obtained either iteratively or directly by finding the minimal-norm least-squares solution of a linear system of equations. Several numerical examples of reconstructing functions of one and two variables, and of very different shapes, are given. The examples demonstrate the robustness and high accuracy, as well as the caveats, of the proposed method. As to the mathematical foundation of the method, it is shown that the reconstruction formula can be interpreted as, and in fact is, a frame expansion. By recognizing the relevance of frames in determining analytical approximations to potential energy surfaces, an extremely rich and beautiful toolbox of mathematics is placed at our disposal. Thus, the simple reconstruction method derived in this paper can be refined, extended, and improved in numerous ways.
High order filtering methods for approximating hyperbolic systems of conservation laws
NASA Technical Reports Server (NTRS)
Lafon, F.; Osher, S.
1991-01-01
The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.
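The detect-then-fallback idea above can be sketched in a few lines of Python. This is a hypothetical simplified detector and first-order fallback for a first derivative; the paper's full ENO machinery and its speedup are far more elaborate.

```python
import math

def filtered_derivative(u, dx, tol=5.0):
    """Filtered high-order differencing: use a cheap 4th-order central
    difference wherever a local smoothness test passes, and fall back to
    a robust first-order stencil where the test flags an oscillation.
    (Hypothetical simplified detector; the paper's fallback is full ENO.)"""
    du = [0.0] * len(u)
    for i in range(2, len(u) - 2):
        # local test: a large second difference signals a spurious wiggle
        osc = abs(u[i - 1] - 2.0 * u[i] + u[i + 1]) / dx ** 2
        if osc < tol:
            # smooth region: 4th-order central difference
            du[i] = (-u[i + 2] + 8.0 * u[i + 1]
                     - 8.0 * u[i - 1] + u[i - 2]) / (12.0 * dx)
        else:
            # flagged region: simple robust first-order difference
            du[i] = (u[i + 1] - u[i]) / dx
    return du
```

On smooth data the cheap high-order stencil is used everywhere, which is where the speedup over a uniformly expensive scheme comes from.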
Interpolation and Approximation Theory.
ERIC Educational Resources Information Center
Kaijser, Sten
1991-01-01
Introduced are the basic ideas of interpolation and approximation theory through a combination of theory and exercises written for extramural education at the university level. Topics treated are spline methods, Lagrange interpolation, trigonometric approximation, Fourier series, and polynomial approximation. (MDH)
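As a minimal illustration of one of the topics listed above, Lagrange interpolation evaluates the unique polynomial of degree n-1 passing through n given points:

```python
def lagrange_interpolate(xs, ys, x):
    """Evaluate at x the unique polynomial of degree len(xs)-1 passing
    through the points (xs[i], ys[i]), built from the Lagrange basis."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        basis = 1.0
        for j, xj in enumerate(xs):
            if j != i:
                basis *= (x - xj) / (xi - xj)  # Lagrange basis polynomial L_i(x)
        total += yi * basis
    return total
```

For points sampled from y = x^2, the interpolant reproduces the parabola exactly.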
NASA Technical Reports Server (NTRS)
Kaneko, Hideaki; Bey, Kim S.; Hou, Gene J. W.
2004-01-01
A recent paper is generalized to a case where the spatial region is taken in R^3. The region is assumed to be a thin body, such as a panel on the wing or fuselage of an aerospace vehicle. The traditional h- as well as hp-finite element methods are applied to the surface defined in the x-y variables, while, through the thickness, the technique of the p-element is employed. A temporal and spatial discretization scheme, based upon an assumption of a certain weak singularity of ||u_t||_2, is used to derive an optimal a priori error estimate for the current method.
Window-based method for approximating the Hausdorff in three-dimensional range imagery
Koch, Mark W.
2009-06-02
One approach to pattern recognition is to use a template from a database of objects and match it to a probe image containing the unknown. Accordingly, the Hausdorff distance can be used to measure the similarity of two sets of points. In particular, the Hausdorff can measure the goodness of a match in the presence of occlusion, clutter, and noise. However, existing 3D algorithms for calculating the Hausdorff are computationally intensive, making them impractical for pattern recognition that requires scanning of large databases. The present invention is directed to a new method that can efficiently, in time and memory, compute the Hausdorff for 3D range imagery. The method uses a window-based approach.
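For reference, the quantity being approximated is the symmetric Hausdorff distance, which a brute-force sketch defines directly; the window-based method of the invention exists precisely to avoid this quadratic cost on large 3D range images.

```python
def hausdorff(A, B):
    """Brute-force symmetric Hausdorff distance between two finite point
    sets (tuples of coordinates): the largest distance from a point in
    one set to its nearest neighbour in the other set."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
    def directed(X, Y):
        # directed Hausdorff: worst-case nearest-neighbour distance X -> Y
        return max(min(dist(x, y) for y in Y) for x in X)
    return max(directed(A, B), directed(B, A))
```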
NASA Astrophysics Data System (ADS)
Cao, Zhanli; Wang, Fan; Yang, Mingli
2016-10-01
Various approximate approaches to calculate cluster amplitudes in equation-of-motion coupled-cluster (EOM-CC) approaches for ionization potentials (IP) and electron affinities (EA) with spin-orbit coupling (SOC) included in post self-consistent field (SCF) calculations are proposed to reduce computational effort. Our results indicate that EOM-CC based on cluster amplitudes from the approximate method CCSD-1, where the singles equation is the same as that in CCSD and the doubles amplitudes are approximated with MP2, is able to provide reasonable IPs and EAs when SOC is not present compared with CCSD results. It is an economical approach for calculating IPs and EAs and is not as sensitive to strong correlation as CC2. When SOC is included, the approximate method CCSD-3, where the same singles equation as that in SOC-CCSD is used and the doubles equation of scalar-relativistic CCSD is employed, gives rise to IPs and EAs that are in closest agreement with those of CCSD. However, SO splitting with EOM-CC from CC2 generally agrees best with that with CCSD, while that of CCSD-1 and CCSD-3 is less accurate. This indicates that a balanced treatment of SOC effects on both single and double excitation amplitudes is required to achieve reliable SO splitting.
Approximate Dirichlet Boundary Conditions in the Generalized Finite Element Method (PREPRINT)
2006-02-01
works of Babuška [2, 3], Bramble and Nitsche [13], and Bramble and Schatz [15, 16], among others, for examples of how this approach works in practice. ... where α! = α1! ... αn!, ... is the Taylor polynomial of v at y of degree m, and Φj ∈ C∞_c(g̃⁻¹(ωj)) is a function with integral 1. Then, by the Bramble ... Academic Press, New York, 1972. [13] J.H. Bramble, J.A. Nitsche, A Generalized Ritz-Least-Squares Method for Dirichlet Problems, SIAM J. Numer.
Approximate Methods for Obtaining the Complex Natural Electromagnetic Oscillations of an Object.
1984-02-01
studying Prony's method for other scatterers and looking also for solutions to the problems inherent in the Prony process. E.M. Kennaugh suggested the ... The search procedure is time consuming in machine computing. 3. The search procedure cannot be used to process measured scattering data. ... POLES ... of the extracted poles as: P.E. of real part = |Real part(Pole_ext − Pole_true)|, (3-11) P.E. of imaginary part = |Imag. part(Pole_ext − ...
NASA Astrophysics Data System (ADS)
Yang, Xiaofeng; Zhao, Jia; Wang, Qi
2017-03-01
The molecular beam epitaxy (MBE) model is derived from the variation of a free energy that consists of either a fourth-order Ginzburg-Landau double-well potential or a nonlinear logarithmic potential in terms of the gradient of a height function. One challenge in solving the MBE model numerically is how to develop proper temporal discretizations for the nonlinear terms in order to preserve energy stability at the time-discrete level. In this paper, we resolve this issue by developing first- and second-order time-stepping schemes based on the "Invariant Energy Quadratization" (IEQ) method. The novelty is that all nonlinear terms are treated semi-explicitly, and the resulting semi-discrete equations form a linear system at each time step. Moreover, the linear operator is symmetric positive definite and thus can be solved efficiently. We then prove that all proposed schemes are unconditionally energy stable. The semi-discrete schemes are further discretized in space using finite difference methods and implemented on GPUs for high-performance computing. Various 2D and 3D numerical examples are presented to demonstrate the stability and accuracy of the proposed schemes.
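The quadratization idea can be illustrated on a scalar stand-in problem (a hypothetical 0-D analogue, not the paper's PDE scheme): for the gradient flow u' = -(u^2 - 1)u of the double-well energy F(u) = (u^2 - 1)^2/4, introduce q = u^2 - 1 so that F = q^2/4 and the update for u^{n+1} becomes linear.

```python
def ieq_double_well(u0, dt=0.05, steps=1000):
    """First-order IEQ sketch for u' = -(u^2 - 1)*u.
    Quadratization: q = u^2 - 1, so the nonlinear force F'(u) = q*u is
    treated as q^{n+1} u^n, which is linear in the unknowns. In this
    scalar case the resulting linear system solves in closed form."""
    u, q = u0, u0 * u0 - 1.0
    for _ in range(steps):
        # (u^{n+1} - u^n)/dt = -q^{n+1} u^n with
        # q^{n+1} = q^n + 2 u^n (u^{n+1} - u^n), solved for the increment:
        du = -dt * u * q / (1.0 + 2.0 * dt * u * u)
        q += 2.0 * u * du  # update auxiliary variable first (uses old u)
        u += du
    return u, q
```

Starting inside the well at u = 0.5, the scheme relaxes to the minimum u = 1 while the quadratized energy q^2/4 decays monotonically.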
Stewart, James J P
2007-12-01
Several modifications that have been made to the NDDO core-core interaction term and to the method of parameter optimization are described. These changes have resulted in a more complete parameter optimization, called PM6, which has, in turn, allowed 70 elements to be parameterized. The average unsigned error (AUE) between calculated and reference heats of formation for 4,492 species was 8.0 kcal mol(-1). For the subset of 1,373 compounds involving only the elements H, C, N, O, F, P, S, Cl, and Br, the PM6 AUE was 4.4 kcal mol(-1). The equivalent AUEs for other methods were: RM1: 5.0, B3LYP 6-31G*: 5.2, PM5: 5.7, PM3: 6.3, HF 6-31G*: 7.4, and AM1: 10.0 kcal mol(-1). Several long-standing faults in AM1 and PM3 have been corrected, and significant improvements have been made in the prediction of geometries.
Approximate natural vibration analysis of rectangular plates with openings using assumed mode method
NASA Astrophysics Data System (ADS)
Cho, Dae Seung; Vladimir, Nikola; Choi, Tae MuK
2013-09-01
Natural vibration analysis of plates with openings of different shape represents an important issue in naval architecture and ocean engineering applications. In this paper, a procedure for vibration analysis of plates with openings and arbitrary edge constraints is presented. It is based on the assumed mode method, where natural frequencies and modes are determined by solving an eigenvalue problem of a multi-degree-of-freedom system matrix equation derived by using Lagrange's equations of motion. The presented solution represents an extension of a procedure for natural vibration analysis of rectangular plates without openings, which has been recently presented in the literature. The effect of an opening is taken into account in an intuitive way, i.e. by subtracting its energy from the total plate energy without opening. Illustrative numerical examples include dynamic analysis of rectangular plates with rectangular, elliptic, circular as well as oval openings with various plate thicknesses and different combinations of boundary conditions. The results are compared with those obtained by the finite element method (FEM) as well as those available in the relevant literature, and very good agreement is achieved.
2006-07-31
Bramble and Nitsche [12], and Bramble and Schatz [13, 14], among others, for examples of how this approach works in practice. Another approach (used also in ... with integral 1. Then, by the Bramble-Hilbert Lemma, we have (43) |v − Pj|_{H^s(g̃⁻¹(ωj))} ≤ C h_k^{m+1−s} |v|_{H^{m+1}(g̃⁻¹(ωj))}, for all 0 ≤ s ≤ m+1. Consider ... New York, 1972. [12] J.H. Bramble, J.A. Nitsche, A Generalized Ritz-Least-Squares Method for Dirichlet Problems, SIAM J. Numer. Anal., vol 10, no. 1
Maliassov, S.Y.
1996-12-31
An approach to the construction of an iterative method for solving systems of linear algebraic equations arising from nonconforming finite element discretizations with nonmatching grids for second order elliptic boundary value problems with anisotropic coefficients is considered. The technique suggested is based on decomposition of the original domain into nonoverlapping subdomains. The elliptic problem is presented in the macro-hybrid form with Lagrange multipliers at the interfaces between subdomains. A block diagonal preconditioner is proposed which is spectrally equivalent to the original saddle point matrix and has the optimal order of arithmetical complexity. The preconditioner includes blocks for preconditioning subdomain and interface problems. It is shown that the constants of spectral equivalence are independent of the values of the coefficients and the mesh step size.
NASA Astrophysics Data System (ADS)
Syahroni, Edy; Suparmi, A.; Cari, C.
2017-01-01
The energy spectrum equation for the Killingbeck potential, used to model DNA-protein interactions, was obtained using the WKB approximation method. The Killingbeck potential was substituted into the general equation of the WKB approximation method to determine the energy. The general equation requires the values of the classical turning points. In this work, the general form of the Killingbeck potential turns the equation for the turning points into a cubic equation, of which we take only the real roots. Mathematically, this is satisfied when the discriminant D is less than or equal to 0: if D = 0, the cubic gives two values of the turning point, and if D < 0, it gives three values. We present both of these cases to complete the general equation for the energy.
Nakano, Masayoshi; Minami, Takuya; Fukui, Hitoshi; Yoneda, Kyohei; Shigeta, Yasuteru; Kishi, Ryohei; Champagne, Benoît; Botek, Edith
2015-01-22
We develop a novel method for the calculation and the analysis of the one-electron reduced densities in open-shell molecular systems using the natural orbitals and approximate spin projected occupation numbers obtained from broken symmetry (BS), i.e., spin-unrestricted (U), density functional theory (DFT) calculations. The performance of this approximate spin projection (ASP) scheme is examined for the diradical character dependence of the second hyperpolarizability (γ) using several exchange-correlation functionals, i.e., hybrid and long-range corrected UDFT schemes. It is found that the ASP-LC-UBLYP method with a range separating parameter μ = 0.47 reproduces semi-quantitatively the strongly-correlated [UCCSD(T)] result for p-quinodimethane, i.e., the γ variation as a function of the diradical character.
NASA Astrophysics Data System (ADS)
Renac, Florent
2011-06-01
An algorithm for stabilizing linear iterative schemes is developed in this study. The recursive projection method is applied in order to stabilize divergent numerical algorithms. A criterion for selecting the divergent subspace of the iteration matrix with an approximate eigenvalue problem is introduced. The performance of the present algorithm is investigated in terms of storage requirements and CPU costs and is compared to the original Krylov criterion. Theoretical results on the divergent subspace selection accuracy are established. The method is then applied to the resolution of the linear advection-diffusion equation and to a sensitivity analysis for a turbulent transonic flow in the context of aerodynamic shape optimization. Numerical experiments demonstrate better robustness and faster convergence properties of the stabilization algorithm with the new criterion based on the approximate eigenvalue problem. This criterion requires only slight additional operations and memory which vanish in the limit of large linear systems.
NASA Astrophysics Data System (ADS)
Efremenko, D.; Doicu, A.; Loyola, D.; Trautmann, T.
2012-04-01
Numerical problems appear when solving the radiative transfer equation for systems with strong anisotropic scattering. To avoid oscillations in the solution, a large number of discrete ordinates is required. As a consequence, the computing time increases as O(N^3), where N is the number of discrete ordinates. The performance can be improved partially by the delta-M method of Wiscombe [1], but this approach distorts the initial boundary problem and can lead to errors at small viewing angles. The efficiency of the discrete ordinate method with small-angle approximation for analyzing systems containing clouds and the coarse fraction of aerosols has been demonstrated by Budak and Korkin [2]. In this work we extend the plane-parallel version of the discrete ordinate method with small-angle approximation, as described in [2], to a pseudo-spherical atmosphere. The conventional pseudo-spherical technique relies on the separation of the total radiance into the direct solar beam and the diffuse radiance [3]; the direct solar radiance is treated in a spherical geometry, while the diffuse radiance is computed in a plane-parallel geometry. Taking into account that in the discrete ordinate method with small-angle approximation the radiance is separated into an 'anisotropic' and a smooth part, and that the direct solar beam is already included in the anisotropic part, we introduce a pseudo-spherical correction by subtracting the direct solar beam in a plane-parallel geometry and adding it in a pseudo-spherical geometry. In our simulations we considered a scenario which is typical for UV/VIS instruments like GOME-2: a spectral interval between 315 nm and 335 nm, and an inhomogeneous atmosphere containing a cloud layer with an asymmetry parameter of 0.9. The numerical results evidenced that the differences between the pseudo-spherical and the plane-parallel models are of about 10% for an incident angle of 80 degrees, 1% for 65 degrees and less than 0.3% for 50 degrees.
NASA Astrophysics Data System (ADS)
Asenchik, O. D.
2017-02-01
A method of approximate calculation of the interaction inverse matrix in the method of discrete dipoles is proposed. The knowledge of this matrix makes it possible to determine the optical response of a system to the action of an electromagnetic wave with an arbitrary shape, which can be represented as a combination of vector spherical wave functions. The number of calculation operations of the matrix in the proposed method is considerably smaller than in the case of its direct calculation. In the case of a change in the refractive index of scattering particles, two methods of approximate calculation of the interaction inverse matrix are also proposed. This makes it possible to calculate the optical response of systems with new characteristics without direct solving equations of a system with a large dimension. The accuracy of the methods is numerically determined for particles with spherical and cubic shapes. It is shown that the methods are computationally efficient and can be used to calculate the values of polarization vectors inside particles and extinction and absorption cross sections of systems.
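The kind of perturbative inverse update alluded to above can be sketched generically. This is an assumed first-order Neumann-series correction, (A+E)^-1 ≈ A^-1 − A^-1 E A^-1, not the abstract's actual discrete-dipole scheme, which is not specified here.

```python
def approx_inverse_update(Ainv, E):
    """Given the known inverse Ainv of a matrix A and a small
    perturbation E (e.g. from a changed refractive index), return the
    first-order Neumann-series approximation of (A + E)^-1:
        (A + E)^-1 ≈ Ainv - Ainv @ E @ Ainv
    avoiding a full re-inversion of the perturbed system."""
    n = len(Ainv)
    def matmul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(n))
                 for j in range(n)] for i in range(n)]
    corr = matmul(matmul(Ainv, E), Ainv)
    return [[Ainv[i][j] - corr[i][j] for j in range(n)] for i in range(n)]
```

The error of this update is second order in ||E||, so it is accurate precisely in the small-change regime the abstract targets.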
NASA Astrophysics Data System (ADS)
Zhou, Kang; Hou, Jian; Fu, Hongfei; Wei, Bei; Liu, Yongge
2017-01-01
Relative permeability controls the flow of multiphase fluids in porous media. The estimation of relative permeability is generally solved by Levenberg-Marquardt method with finite difference Jacobian approximation (LM-FD). However, the method can hardly be used in large-scale reservoirs because of unbearably huge computational cost. To eliminate this problem, the paper introduces the idea of simultaneous perturbation to simplify the generation of the Jacobian matrix needed in the Levenberg-Marquardt procedure and denotes the improved method as LM-SP. It is verified by numerical experiments and then applied to laboratory experiments and a real commercial oilfield. Numerical experiment indicates that LM-SP uses only 16.1% computational cost to obtain similar estimation of relative permeability and prediction of production performance compared with LM-FD. Laboratory experiment also shows the LM-SP has a 60.4% decrease in simulation cost while a 68.5% increase in estimation accuracy compared with the earlier published results. This is mainly because LM-FD needs 2n (n is the number of controlling knots) simulations to approximate Jacobian in each iteration, while only 2 simulations are enough in basic LM-SP. The convergence rate and estimation accuracy of LM-SP can be improved by averaging several simultaneous perturbation Jacobian approximations but the computational cost of each iteration may be increased. Considering the estimation accuracy and computational cost, averaging two Jacobian approximations is recommended in this paper. As the number of unknown controlling knots increases from 7 to 15, the saved simulation runs by LM-SP than LM-FD increases from 114 to 1164. This indicates LM-SP is more suitable than LM-FD for multivariate problems. Field application further proves the applicability of LM-SP on large real field as well as small laboratory problems.
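The contrast between a finite-difference Jacobian (2n runs per iteration) and a simultaneous-perturbation one (2 runs per estimate, a few estimates averaged) can be sketched as follows. This is an illustrative SPSA-style sketch; the function names are hypothetical and this is not the paper's reservoir-simulator code.

```python
import random

def sp_jacobian(F, x, c=1e-4, n_avg=2):
    """Simultaneous-perturbation Jacobian estimate for F: R^n -> R^m.
    All parameters are perturbed at once by a random +/-1 vector, so each
    averaged estimate costs only 2 evaluations of F regardless of n
    (versus 2n for column-by-column finite differences)."""
    n, m = len(x), len(F(x))
    J = [[0.0] * n for _ in range(m)]
    for _ in range(n_avg):
        delta = [random.choice((-1.0, 1.0)) for _ in range(n)]
        xp = [xi + c * di for xi, di in zip(x, delta)]
        xm = [xi - c * di for xi, di in zip(x, delta)]
        Fp, Fm = F(xp), F(xm)          # the two "simulations" per estimate
        for i in range(m):
            g = (Fp[i] - Fm[i]) / (2.0 * c)
            for j in range(n):
                J[i][j] += g / delta[j] / n_avg
    return J
```

As in the paper, averaging more estimates (larger n_avg) trades extra runs for a less noisy Jacobian.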
NASA Astrophysics Data System (ADS)
Viquerat, Jonathan; Lanteri, Stéphane
2016-01-01
During the last ten years, the discontinuous Galerkin time-domain (DGTD) method has progressively emerged as a viable alternative to the well-established finite-difference time-domain (FDTD) and finite-element time-domain (FETD) methods for the numerical simulation of electromagnetic wave propagation problems in the time domain. The method is now actively studied in various application contexts, including those requiring the modeling of light/matter interactions on the nanoscale. Several recent works have demonstrated the viability of the DGTD method for nanophotonics. In this paper we further demonstrate the capabilities of the method for the simulation of near-field plasmonic interactions by considering more particularly the possibility of combining the use of a locally refined conforming tetrahedral mesh with a local adaptation of the approximation order.
NASA Astrophysics Data System (ADS)
Fernández-Seivane, L.; Oliveira, M. A.; Sanvito, S.; Ferrer, J.
2006-08-01
We propose a computational method that drastically simplifies the inclusion of the spin-orbit interaction in density functional theory when implemented over localized basis sets. Our method is based on a well-known procedure for obtaining pseudopotentials from atomic relativistic ab initio calculations and on an on-site approximation for the spin-orbit matrix elements. We have implemented the technique in the SIESTA (Soler J M et al 2002 J. Phys.: Condens. Matter 14 2745-79) code, and show that it provides accurate results for the overall band structure and splittings of group IV and III-V semiconductors as well as for 5d metals.
Schulz, Andreas S.; Shmoys, David B.; Williamson, David P.
1997-01-01
Increasing global competition, rapidly changing markets, and greater consumer awareness have altered the way in which corporations do business. To become more efficient, many industries have sought to model some operational aspects by gigantic optimization problems. It is not atypical to encounter models that capture 10^6 separate "yes" or "no" decisions to be made. Although one could, in principle, try all 2^(10^6) possible solutions to find the optimal one, such a method would be impractically slow. Unfortunately, for most of these models, no algorithms are known that find optimal solutions with reasonable computation times. Typically, industry must rely on solutions of unguaranteed quality that are constructed in an ad hoc manner. Fortunately, for some of these models there are good approximation algorithms: algorithms that produce solutions quickly that are provably close to optimal. Over the past 6 years, there has been a sequence of major breakthroughs in our understanding of the design of approximation algorithms and of limits to obtaining such performance guarantees; this area has been one of the most flourishing areas of discrete mathematics and theoretical computer science. PMID:9370525
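A classic example of such a performance guarantee is the 2-approximation for minimum vertex cover obtained by taking both endpoints of a greedily built maximal matching:

```python
def vertex_cover_2approx(edges):
    """2-approximation for minimum vertex cover: greedily build a maximal
    matching and take both endpoints of every matched edge. Every edge is
    covered, and any optimal cover must use at least one endpoint of each
    matched edge, so the result is at most twice the optimum."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))  # edge (u, v) joins the matching
    return cover
```

On a star graph the algorithm returns 2 vertices where the optimum is 1, exactly attaining the factor-2 bound.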
NASA Astrophysics Data System (ADS)
Bruno, Luigi
2016-12-01
With the present paper, the author proposes a fitting method for approximating experimental data retrieved from any full-field technique. Unlike most fitting procedures, the method works on data distributed on a surface of any shape, and the mathematical model is able to take into account both the 3D shape of the surface and the experimental quantity to be fitted. The paper reports all the mathematical steps necessary for applying the method, which was tested on two sets of experimental data obtained by an out-of-plane speckle interferometer working under two different noise conditions. The experimental results showed the capability of the method to work in the presence of a high level of noise.
NASA Technical Reports Server (NTRS)
Stiehl, A. L.; Haberman, R. C.; Cowles, J. H.
1988-01-01
An approximate method to compute the maximum deformation and permanent set of a beam subjected to shock wave loading in vacuo and in water was investigated. The method equates the maximum kinetic energy of the beam (and water) to the elastic-plastic work done by a static uniform load applied to the beam. Results for the water case indicate that the plastic deformation is controlled by the kinetic energy of the water. The simplified approach can result in significant savings in computer time, or it can expediently be used as a check of results from a more rigorous approach. The accuracy of the method is demonstrated by various examples of beams with simple-support and clamped-support boundary conditions.
NASA Astrophysics Data System (ADS)
Assous, Franck; Chaskalovic, Joël
2013-03-01
In this Note, we propose a new methodology based on exploratory data mining techniques to evaluate the errors due to the description of a given real system. First, we decompose this description error into four types of sources. Then, we construct databases of the entire information produced by different numerical approximation methods, to assess and compare the significant differences between these methods, using techniques like decision trees, Kohonen maps, or neural networks. As an example, we characterize specific states of the real system for which we can locally appreciate the accuracy between two kinds of finite element methods. In this case, this allowed us to sharpen the classical Bramble-Hilbert theorem, which gives a global error estimate, whereas our approach gives a local error estimate.
Regnier, D.; Verriere, M.; Dubray, N.; Schunck, N.
2015-11-30
In this study, we describe the software package FELIX that solves the equations of the time-dependent generator coordinate method (TDGCM) in N dimensions (N ≥ 1) under the Gaussian overlap approximation. The numerical resolution is based on the Galerkin finite element discretization of the collective space and the Crank-Nicolson scheme for time integration. The TDGCM solver is implemented entirely in C++. Several additional tools written in C++, Python or bash scripting language are also included for convenience. In this paper, the solver is tested with a series of benchmark calculations. We also demonstrate the ability of our code to handle a realistic calculation of fission dynamics.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
NASA Astrophysics Data System (ADS)
Meshram, M. C.
2013-07-01
The Lewis-Kraichnan space-time version of the Hopf functional formalism is considered for the investigation of turbulence with reacting and mixing chemical elements of type A + B → Product. The equations of motion are written in Fourier space. We first define the characteristic functional (or moment-generating functional) for the joint probability distribution of the velocity vector of the flow field and the reactants' concentration scalar fields, and translate the equations of motion into differential equations for the characteristic functional. These differential equations for the characteristic functional are further written in terms of the second characteristic functional (or cumulant-generating functional). This helps us in obtaining the equations for cumulants of various orders. We note from these equations the characteristic difficulty of the theory of turbulence: the (n+1)th-order cumulant C_{n+1} occurs in the equation for the dynamics of the nth-order cumulant C_n. We use the factorized cumulant expansion approximation method for the present investigation. Under this approximation an arbitrary nth-order cumulant C_n is expressed in terms of the lower-order cumulants C_2, C_3, ..., C_{n-1}, and thus we obtain a closed but untruncated system of equations for the cumulants. On using the factorized fourth-cumulant approximation method, a closed set of equations for the reactants' energy spectrum functions and the reactants' energy transfer functions is derived. These equations are solved numerically and the similarity laws of the solutions are derived analytically. Statistical quantities such as the reactants' energy, the reactants' enstrophy, and the reactants' scale of segregation are calculated numerically, and the statistical laws of these quantities are discussed. Also, the scope of this tool for the investigation of turbulent phenomena not covered in the present study is discussed.
NASA Astrophysics Data System (ADS)
MacArt, Jonathan F.; Mueller, Michael E.
2016-12-01
Two formally second-order accurate, semi-implicit, iterative methods for the solution of scalar transport-reaction equations are developed for Direct Numerical Simulation (DNS) of low Mach number turbulent reacting flows. The first is a monolithic scheme based on a linearly implicit midpoint method utilizing an approximately factorized exact Jacobian of the transport and reaction operators. The second is an operator splitting scheme based on the Strang splitting approach. The accuracy properties of these schemes, as well as their stability, cost, and the effect of chemical mechanism size on relative performance, are assessed in two one-dimensional test configurations comprising an unsteady premixed flame and an unsteady nonpremixed ignition, which have substantially different Damköhler numbers and relative stiffness of transport to chemistry. All schemes demonstrate their formal order of accuracy in the fully-coupled convergence tests. Compared to a (non-)factorized scheme with a diagonal approximation to the chemical Jacobian, the monolithic, factorized scheme using the exact chemical Jacobian is shown to be both more stable and more economical. This is due to an improved convergence rate of the iterative procedure, and the difference between the two schemes in convergence rate grows as the time step increases. The stability properties of the Strang splitting scheme are demonstrated to outpace those of Lie splitting and monolithic schemes in simulations at high Damköhler number; however, in this regime, the monolithic scheme using the approximately factorized exact Jacobian is found to be the most economical at practical CFL numbers. The performance of the schemes is further evaluated in a simulation of a three-dimensional, spatially evolving, turbulent nonpremixed planar jet flame.
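The Strang pattern used above (half step of one operator, full step of the other, half step of the first) can be sketched on a scalar model problem. This is a hypothetical stand-in with exact sub-solvers, not the paper's low Mach number flow solver.

```python
import math

def strang_solve(u0, a, k, T, n):
    """Strang splitting for the scalar model ODE u' = -a*u - k*u**2:
    the linear 'transport' part -a*u and the nonlinear 'reaction' part
    -k*u**2 are each advanced exactly, in the half/full/half pattern,
    giving overall second-order accuracy in the step size dt = T/n."""
    dt = T / n
    u = u0
    for _ in range(n):
        u = u / (1.0 + 0.5 * k * dt * u)   # half step of reaction (exact)
        u = u * math.exp(-a * dt)          # full step of transport (exact)
        u = u / (1.0 + 0.5 * k * dt * u)   # half step of reaction (exact)
    return u
```

Against the exact Bernoulli-equation solution u(T) = a*u0 / ((a + k*u0)*exp(a*T) - k*u0), halving dt reduces the error by roughly a factor of four, confirming second order.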
Rossi, Mariana; Liu, Hanchao; Paesani, Francesco; Bowman, Joel; Ceriotti, Michele
2014-11-14
Including quantum mechanical effects on the dynamics of nuclei in the condensed phase is challenging, because the complexity of exact methods grows exponentially with the number of quantum degrees of freedom. Efforts to circumvent these limitations can be traced back to two approaches: methods that treat a small subset of the degrees of freedom with rigorous quantum mechanics, considering the rest of the system as a static or classical environment, and methods that treat the whole system quantum mechanically, but using approximate dynamics. Here, we perform a systematic comparison between these two philosophies for the description of quantum effects in vibrational spectroscopy, taking the Embedded Local Monomer model and a mixed quantum-classical model as representatives of the first family of methods, and centroid molecular dynamics and thermostatted ring polymer molecular dynamics as examples of the latter. We use as benchmarks D2O doped with HOD and pure H2O at three distinct thermodynamic state points (ice Ih at 150 K, and the liquid at 300 K and 600 K), modeled with the simple q-TIP4P/F potential energy and dipole moment surfaces. With few exceptions, the different techniques yield IR absorption frequencies that are consistent with one another within a few tens of cm(-1). Comparison with classical molecular dynamics demonstrates the importance of nuclear quantum effects up to the highest temperature, and a detailed discussion of the discrepancies between the various methods lets us draw some (circumstantial) conclusions about the impact of the very different approximations that underlie them. Such cross validation between radically different approaches could indicate a way forward to further improve the state of the art in simulations of condensed-phase quantum dynamics.
Iterative methods for 3D implicit finite-difference migration using the complex Padé approximation
NASA Astrophysics Data System (ADS)
Costa, Carlos A. N.; Campos, Itamara S.; Costa, Jessé C.; Neto, Francisco A.; Schleicher, Jörg; Novais, Amélia
2013-08-01
Conventional implementations of 3D finite-difference (FD) migration use splitting techniques to accelerate performance and save computational cost. However, such techniques are plagued with numerical anisotropy that jeopardises the correct positioning of dipping reflectors in the directions not used for the operator splitting. We implement 3D downward continuation FD migration without splitting using a complex Padé approximation. In this way, the numerical anisotropy is eliminated at the expense of a computationally more intensive solution of a large-band linear system. We compare the performance of the iterative stabilized biconjugate gradient (BICGSTAB) and that of the multifrontal massively parallel direct solver (MUMPS). It turns out that the use of the complex Padé approximation not only stabilizes the solution, but also acts as an effective preconditioner for the BICGSTAB algorithm, reducing the number of iterations as compared to the implementation using the real Padé expansion. As a consequence, the iterative BICGSTAB method is more efficient than the direct MUMPS method when solving a single term in the Padé expansion. The results of both algorithms, here evaluated by computing the migration impulse response in the SEG/EAGE salt model, are of comparable quality.
NASA Technical Reports Server (NTRS)
Sidi, Avram
1992-01-01
Let F(z) be a vector-valued function F: C → C^N which is analytic at z=0 and meromorphic in a neighborhood of z=0, and let its Maclaurin series be given. We use vector-valued rational approximation procedures for F(z) that are based on its Maclaurin series, in conjunction with power iterations, to develop bona fide generalizations of the power method for an arbitrary N x N matrix that may or may not be diagonalizable. These generalizations can be used to obtain simultaneously several of the largest distinct eigenvalues and the corresponding invariant subspaces, and a detailed convergence theory is presented for them. In addition, it is shown that the generalized power methods of this work are equivalent to some Krylov subspace methods, among them the methods of Arnoldi and Lanczos. Thus, the theory provides a set of completely new results and constructions for these Krylov subspace methods. At the same time, this theory suggests a new mode of usage for these Krylov subspace methods that has been observed to possess computational advantages over their common mode of usage.
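The vector-valued rational approximants themselves are beyond a short snippet, but the classical power method they generalize is easy to state. A hedged, self-contained sketch in pure Python (the matrix and starting vector are arbitrary illustrative choices, not from the paper):

```python
def power_method(A, x, iters=200):
    """Classical power iteration for a square matrix A given as nested lists.
    Returns the Rayleigh-quotient estimate of the dominant eigenvalue."""
    n = len(A)
    for _ in range(iters):
        # y = A x, then renormalize to keep the iterate bounded
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        norm = max(abs(v) for v in y)
        x = [v / norm for v in y]
    # Rayleigh quotient (x^T A x) / (x^T x)
    Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    num = sum(Ax[i] * x[i] for i in range(n))
    den = sum(x[i] * x[i] for i in range(n))
    return num / den

A = [[2.0, 1.0], [1.0, 2.0]]        # eigenvalues 3 and 1
print(power_method(A, [1.0, 0.0]))  # converges to the dominant eigenvalue 3
```

The generalizations in the abstract recover several of the largest eigenvalues at once; this basic iteration retrieves only the dominant one, at a rate governed by the ratio of the two largest moduli.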
On the convergence of local approximations to pseudodifferential operators with applications
NASA Technical Reports Server (NTRS)
Hagstrom, Thomas
1994-01-01
We consider the approximation of a class of pseudodifferential operators by sequences of operators which can be expressed as compositions of differential operators and their inverses. We show that the error in such approximations can be bounded in terms of the L^1 error in approximating a convolution kernel, and use this fact to develop convergence results. Our main result is a finite-time convergence analysis of the Engquist-Majda Pade approximants to the square root of the d'Alembertian. We also show that no spatially local approximation to this operator can be convergent uniformly in time. We propose some temporally local but spatially nonlocal operators with better long-time behavior. These are based on Laguerre and exponential series.
Higher-order numerical methods derived from three-point polynomial interpolation
NASA Technical Reports Server (NTRS)
Rubin, S. G.; Khosla, P. K.
1976-01-01
Higher-order collocation procedures resulting in tridiagonal matrix systems are derived from polynomial spline interpolation and Hermitian finite-difference discretization. The equations generally apply for both uniform and variable meshes. Hybrid schemes resulting from different polynomial approximations for first and second derivatives lead to the nonuniform mesh extension of the so-called compact or Pade difference techniques. A variety of fourth-order methods are described and this concept is extended to sixth-order. Solutions with these procedures are presented for the similar and non-similar boundary layer equations with and without mass transfer, the Burgers equation, and the incompressible viscous flow in a driven cavity. Finally, the interpolation procedure is used to derive higher-order temporal integration schemes and results are shown for the diffusion equation.
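A fourth-order compact (Padé-type) first-derivative scheme of the kind referred to above couples neighboring derivative values through a tridiagonal system. The sketch below is a standard textbook variant on a uniform mesh with a third-order one-sided boundary closure, not the paper's exact formulation:

```python
import math

def compact_derivative(f, h):
    """Fourth-order compact (Pade) first derivative on a uniform grid.
    Interior: (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4h).
    Boundaries: third-order one-sided closure f'_0 + 2 f'_1 = (-5/2 f_0 + 2 f_1 + 1/2 f_2)/h."""
    n = len(f)
    a = [0.0] * n   # sub-diagonal
    b = [1.0] * n   # diagonal
    c = [0.0] * n   # super-diagonal
    d = [0.0] * n   # right-hand side
    c[0] = 2.0
    d[0] = (-2.5 * f[0] + 2.0 * f[1] + 0.5 * f[2]) / h
    a[-1] = 2.0
    d[-1] = (2.5 * f[-1] - 2.0 * f[-2] - 0.5 * f[-3]) / h
    for i in range(1, n - 1):
        a[i] = c[i] = 0.25
        d[i] = 0.75 * (f[i + 1] - f[i - 1]) / h
    # Thomas algorithm for the tridiagonal system
    for i in range(1, n):
        m = a[i] / b[i - 1]
        b[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    x = [0.0] * n
    x[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (d[i] - c[i] * x[i + 1]) / b[i]
    return x

n = 41
h = 2.0 * math.pi / (n - 1)
xs = [i * h for i in range(n)]
df = compact_derivative([math.sin(x) for x in xs], h)
err = max(abs(df[i] - math.cos(xs[i])) for i in range(n))
print(err)  # well below the error of a plain second-order central difference
```

Like the tridiagonal collocation procedures in the abstract, the extra accuracy comes at the price of a (cheap) linear solve rather than a wider explicit stencil.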
Wang, S.W.; Georgopoulos, P.G.; Li, G.; Rabitz, H.
1998-07-01
Atmospheric chemistry mechanisms are the most computationally intensive components of photochemical air quality simulation models (PAQSMs). The development of a photochemical mechanism that accurately describes atmospheric chemistry while being computationally efficient for use in PAQSMs is a difficult undertaking that has traditionally been pursued through semiempirical (diagnostic) lumping approaches. The limitations of these diagnostic approaches are often associated with inaccuracies due to the fact that the lumped mechanisms have typically been optimized to fit the concentration profile of a specific species. Formal mathematical methods for model reduction have the potential (demonstrated through past applications in other areas) to provide very effective solutions to the need for computational efficiency combined with accuracy. Such methods, which can be used to condense a chemical mechanism, include kinetic lumping and domain separation. An application of the kinetic lumping method, using the direct constrained approximate lumping (DCAL) approach, to the atmospheric photochemistry of alkanes is presented in this work. It is shown that the lumped mechanism generated through the application of the DCAL method has the potential to overcome the limitations of existing semiempirical approaches, especially in relation to the consistent and accurate calculation of the time-concentration profiles of multiple species.
NASA Astrophysics Data System (ADS)
Deta, U. A.; Suparmi, Cari
2013-09-01
The approximate analytical solution of the Schrodinger equation in D dimensions for the trigonometric Scarf potential was investigated using the Nikiforov-Uvarov method. The bound-state energies are given in closed form, and the corresponding wave functions for arbitrary l-states in D dimensions are formulated in terms of generalized Jacobi polynomials. Examples of bound-state energies and wave functions in 3, 4, and 5 dimensions are presented for the ground state through the second excited state. Increasing the number of dimensions increases the bound-state energy and the amplitude of the wave function of this potential. The presence of the trigonometric Scarf potential raises the energy spectrum.
NASA Astrophysics Data System (ADS)
Yi, Longtao; Sun, Tianxi; Wang, Kai; Qin, Min; Yang, Kui; Wang, Jinbang; Liu, Zhiguo
2016-08-01
Confocal three-dimensional micro X-ray fluorescence (3D MXRF) is an excellent surface analysis technology. In a confocal structure, only the X-rays from the confocal volume can be detected. Confocal 3D MXRF has been widely used for analysing elements, elemental distributions, and 3D imaging of some special samples. However, it has rarely been applied to analysing surface topography by surface scanning. In this paper, a confocal 3D MXRF technology based on polycapillary X-ray optics is proposed for determining surface topography. A corresponding surface-adaptive algorithm based on a progressive approximation method was designed to obtain the surface topography. The surface topographies of the letter "R" on a coin of the People's Republic of China and of a small pit on painted pottery were obtained, and both are clearly shown in the two figures. Compared with the method in our previous study, this approach exhibits a higher scanning efficiency. It could be used as an auxiliary method for two-dimensional (2D) elemental mapping or 3D elemental voxel mapping measurements, and for analysing elemental mapping while obtaining the surface topography of a sample in a 2D elemental mapping measurement.
NASA Astrophysics Data System (ADS)
Büsing, Henrik
2013-04-01
Two-phase flow in porous media occurs in various settings, such as the sequestration of CO2 in the subsurface, radioactive waste management, the flow of oil or gas in hydrocarbon reservoirs, or groundwater remediation. To model the sequestration of CO2, we consider a fully coupled formulation of the system of nonlinear, partial differential equations. For the solution of this system, we employ the Box method after Huber & Helmig (2000) for the space discretization and the fully implicit Euler method for the time discretization. After linearization with Newton's method, it remains to solve a linear system in every Newton step. We compare different iterative methods (BiCGStab, GMRES, AGMG, c.f., [Notay (2012)]) combined with different preconditioners (ILU0, ASM, Jacobi, and AMG as preconditioner) for the solution of these systems. The required Jacobians can be obtained elegantly with automatic differentiation (AD) [Griewank & Walther (2008)], a source code transformation providing exact derivatives. We compare the performance of the different iterative methods with their respective preconditioners for these linear systems. Furthermore, we analyze linear systems obtained by approximating the Jacobian with finite differences in terms of Newton steps per time step, steps of the iterative solvers and the overall solution time. Finally, we study the influence of heterogeneities in permeability and porosity on the performance of the iterative solvers and their robustness in this respect. References [Griewank & Walther(2008)] Griewank, A. & Walther, A., 2008. Evaluating Derivatives: Principles and Techniques of Algorithmic Differentiation, SIAM, Philadelphia, PA, 2nd edn. [Huber & Helmig(2000)] Huber, R. & Helmig, R., 2000. Node-centered finite volume discretizations for the numerical simulation of multiphase flow in heterogeneous porous media, Computational Geosciences, 4, 141-164. [Notay(2012)] Notay, Y., 2012. Aggregation-based algebraic multigrid for convection
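As a toy counterpart to the Jacobian comparison described above, here is a minimal Newton solver whose Jacobian is built by forward differences; the 2x2 test system is purely illustrative and unrelated to the two-phase flow equations (an AD Jacobian would replace the finite-difference loop):

```python
def newton_fd(F, x, tol=1e-10, max_iter=50, eps=1e-7):
    """Newton's method for a small dense system F(x) = 0, with the Jacobian
    approximated column by column using forward differences (step eps)."""
    n = len(x)
    for _ in range(max_iter):
        Fx = F(x)
        if max(abs(v) for v in Fx) < tol:
            return x
        # forward-difference Jacobian
        J = [[0.0] * n for _ in range(n)]
        for j in range(n):
            xp = list(x)
            xp[j] += eps
            Fp = F(xp)
            for i in range(n):
                J[i][j] = (Fp[i] - Fx[i]) / eps
        # solve J dx = -Fx by Gaussian elimination with partial pivoting
        A = [row[:] + [-Fx[i]] for i, row in enumerate(J)]
        for k in range(n):
            p = max(range(k, n), key=lambda r: abs(A[r][k]))
            A[k], A[p] = A[p], A[k]
            for r in range(k + 1, n):
                m = A[r][k] / A[k][k]
                for col in range(k, n + 1):
                    A[r][col] -= m * A[k][col]
        dx = [0.0] * n
        for i in range(n - 1, -1, -1):
            dx[i] = (A[i][n] - sum(A[i][c] * dx[c] for c in range(i + 1, n))) / A[i][i]
        x = [x[i] + dx[i] for i in range(n)]
    return x

# illustrative system: x^2 + y^2 = 2, x - y = 0, with root (1, 1)
F = lambda v: [v[0] ** 2 + v[1] ** 2 - 2.0, v[0] - v[1]]
print(newton_fd(F, [2.0, 0.5]))
```

In the abstract's setting the interesting questions are precisely those this toy hides: how the finite-difference approximation affects the Newton step count per time step and the conditioning of the resulting linear systems.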
NASA Astrophysics Data System (ADS)
Neese, Frank; Wennmohs, Frank; Hansen, Andreas
2009-03-01
Coupled-electron pair approximations (CEPAs) and coupled-pair functionals (CPFs) have been popular in the 1970s and 1980s and have yielded excellent results for small molecules. Recently, interest in CEPA and CPF methods has been renewed. It has been shown that these methods lead to competitive thermochemical, kinetic, and structural predictions. They greatly surpass second order Møller-Plesset and popular density functional theory based approaches in accuracy and are intermediate in quality between CCSD and CCSD(T) in extended benchmark studies. In this work an efficient production level implementation of the closed shell CEPA and CPF methods is reported that can be applied to medium sized molecules in the range of 50-100 atoms and up to about 2000 basis functions. The internal space is spanned by localized internal orbitals. The external space is greatly compressed through the method of pair natural orbitals (PNOs) that was also introduced by the pioneers of the CEPA approaches. Our implementation also makes extended use of density fitting (or resolution of the identity) techniques in order to speed up the laborious integral transformations. The method is called local pair natural orbital CEPA (LPNO-CEPA) (LPNO-CPF). The implementation is centered around the concepts of electron pairs and matrix operations. Altogether three cutoff parameters are introduced that control the size of the significant pair list, the average number of PNOs per electron pair, and the number of contributing basis functions per PNO. With the conservatively chosen default values of these thresholds, the method recovers about 99.8% of the canonical correlation energy. This translates to absolute deviations from the canonical result of only a few kcal mol-1. Extended numerical test calculations demonstrate that LPNO-CEPA (LPNO-CPF) has essentially the same accuracy as parent CEPA (CPF) methods for thermochemistry, kinetics, weak interactions, and potential energy surfaces but is up to 500
NASA Technical Reports Server (NTRS)
Demuren, A. O.; Ibraheem, S. O.
1993-01-01
The convergence characteristics of various approximate factorizations for the 3D Euler and Navier-Stokes equations are examined using the von Neumann stability analysis method. Three upwind-difference based factorizations and several central-difference based factorizations are considered for the Euler equations. In the upwind factorizations, both the flux-vector splitting methods of Steger and Warming and of van Leer are considered. Analysis of the Navier-Stokes equations is performed only on the Beam and Warming central-difference scheme. The range of CFL numbers over which each factorization is stable is presented for one-, two-, and three-dimensional flow. Also presented for each factorization is the CFL number at which the maximum eigenvalue is minimized, for all Fourier components as well as for the high-frequency range only. The latter is useful for predicting the effectiveness of multigrid procedures with these schemes as smoothers. Further, local mode analysis is performed to test the suitability of using a uniform flow field in the stability analysis. Some inconsistencies in the results from previous analyses are resolved.
Frozen Gaussian approximation-based two-level methods for multi-frequency Schrödinger equation
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.
2016-10-01
In this paper, we develop two-level numerical methods for the time-dependent Schrödinger equation (TDSE) in multi-frequency regime. This work is motivated by attosecond science (Corkum and Krausz, 2007), which refers to the interaction of short and intense laser pulses with quantum particles generating wide frequency spectrum light, and allowing for the coherent emission of attosecond pulses (1 attosecond=10-18 s). The principle of the proposed methods consists in decomposing a wavefunction into a low/moderate frequency (quantum) contribution, and a high frequency contribution exhibiting a semi-classical behavior. Low/moderate frequencies are computed through the direct solution to the quantum TDSE on a coarse mesh, and the high frequency contribution is computed by frozen Gaussian approximation (Herman and Kluk, 1984). This paper is devoted to the derivation of consistent, accurate and efficient algorithms performing such a decomposition and the time evolution of the wavefunction in the multi-frequency regime. Numerical simulations are provided to illustrate the accuracy and efficiency of the derived algorithms.
NASA Astrophysics Data System (ADS)
Rodrigue, Stephen Michael
Transport rates for the driven Kelvin-Stuart Cat Eyes flow are calculated using the lobe transport theory of Rom-Kedar and Wiggins through application of the Topological Approximation Method (TAM) developed by Rom-Kedar. Numerical studies by Ottino (1989) and Tsega, Michaelides, and Eschenazi (2001) of the driven or perturbed flow indicated frequency dependence of the transport. One goal of the present research is to derive an analytical expression for the transport and to study its dependence upon the perturbation frequency ω. The Kelvin-Stuart Cat Eyes dynamical system consists of an infinite string of equivalent vortices exhibiting a 2π spatial periodicity in x, with an unperturbed streamfunction H(x, y) = ln(cosh y + A cos x) - ln(1 + A). The driven flow has perturbation terms of the form a sin(ωt) in both the x and y directions. Lobe dynamics transport theory states that transport occurs through the transfer of turnstile lobes, and that transport rates are equal to the area of the lobes transferred. Lobes may intersect, necessitating the calculation and removal of lobe intersection areas. The TAM requires the use of a Melnikov integral function, the zeroes of which locate the lobes, and a Whisker map (Chirikov 1979), which locates lobe intersection points. An analytical expression for the Melnikov integral function is derived for the driven Kelvin-Stuart Cat Eyes flow. Using the derived analytical Melnikov integral function, derived expressions for the periods of internal and external orbits as functions of H, and the Whisker map, the Topological Approximation Method is applied to the Kelvin-Stuart driven flow to calculate transport rates for a range of frequencies from ω = 1.21971 to ω = 3.27532 as the structure index L is varied from L = 2 to L = 10. Transport rates per iteration, and cumulative transport per iteration, are calculated for 100 iterations for both internal and external lobes. The transport rates exhibit strong frequency dependence in the frequency
Technology Transfer Automated Retrieval System (TEKTRAN)
The ACCF90 computer program, which approximates reliability for animal models, was modified to estimate reliabilities for sire-maternal grandsire (MGS) models. Accuracy of the approximation was tested on a calving-ease data set for 2,968 bulls for which the inverse of the coefficient matrix could be...
Approximation of periodic functions in the classes H_q^Ω by linear methods
Pustovoitov, Nikolai N
2012-01-31
The following result is proved: if approximations in the norm of L_∞ (of H_1) of functions in the classes H_∞^Ω (in H_1^Ω, respectively) by some linear operators have the same order of magnitude as the best approximations, then the set of norms of these operators is unbounded. Also Bernstein's and the Jackson-Nikol'skii inequalities are proved for trigonometric polynomials with spectra in the sets Q(N) (in Γ(N,Ω)). Bibliography: 15 titles.
NASA Astrophysics Data System (ADS)
Yin, George; Wang, Le Yi; Zhang, Hongwei
2014-12-01
Stochastic approximation methods have found extensive and diversified applications. Recent emergence of networked systems and cyber-physical systems has generated renewed interest in advancing stochastic approximation into a general framework to support algorithm development for information processing and decisions in such systems. This paper presents a survey on some recent developments in stochastic approximation methods and their applications. Using connected vehicles in platoon formation and coordination as a platform, we highlight some traditional and new methodologies of stochastic approximation algorithms and explain how they can be used to capture essential features in networked systems. Distinct features of networked systems with randomly switching topologies, dynamically evolving parameters, and unknown delays are presented, and control strategies are provided.
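The classical root of these methods is the Robbins-Monro scheme, which locates a zero of a regression function observed only through noise by taking diminishing step sizes. A minimal sketch (the target function, gain sequence, and seed are illustrative choices, not taken from the survey):

```python
import random

def robbins_monro(noisy_f, x0, n_steps=20000, a=1.0, seed=0):
    """Robbins-Monro stochastic approximation: seek x* with E[noisy_f(x*)] = 0
    using gains a_k = a/(k+1), which satisfy sum a_k = inf and sum a_k^2 < inf."""
    rng = random.Random(seed)
    x = x0
    for k in range(n_steps):
        x -= (a / (k + 1)) * noisy_f(x, rng)
    return x

# target: E[f(x)] = 2 (x - 3), so the root is x* = 3; observations carry Gaussian noise
f = lambda x, rng: 2.0 * (x - 3.0) + rng.gauss(0.0, 1.0)
print(robbins_monro(f, x0=0.0))  # settles near 3
```

The networked-systems extensions surveyed above layer switching topologies, evolving parameters, and delays on top of exactly this recursion; the diminishing-gain structure is what averages out the observation noise.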
NASA Astrophysics Data System (ADS)
Sarwar, S.; Rashidi, M. M.
2016-07-01
This paper deals with the investigation of analytical approximate solutions for two-term fractional-order diffusion, wave-diffusion, and telegraph equations. The fractional derivatives are defined in the Caputo sense, with orders belonging to the intervals [0,1], (1,2), and [1,2], respectively. We extend the optimal homotopy asymptotic method (OHAM) to two-term fractional-order wave-diffusion equations. A highly accurate approximate solution is obtained in series form using this extended method and is compared with the exact solution. It is observed that OHAM is a powerful and convergent method for the solution of nonlinear fractional-order time-dependent partial differential equations. The numerical results show that the applied method is explicit, effective, and easy to use for handling more general fractional-order wave-diffusion, diffusion, and telegraph problems.
Rogers, J.; Porter, K.
2012-03-01
This paper updates previous work that describes time period-based and other approximation methods for estimating the capacity value of wind power and extends it to include solar power. The paper summarizes various methods presented in utility integrated resource plans, regional transmission organization methodologies, regional stakeholder initiatives, regulatory proceedings, and academic and industry studies. Time period-based approximation methods typically measure the contribution of a wind or solar plant at the time of system peak - sometimes over a period of months or the average of multiple years.
NASA Astrophysics Data System (ADS)
Espinoza-Ojeda, O. M.; Santoyo, E.; Andaverde, J.
2011-06-01
Approximate and rigorous solutions of seven heat transfer models were statistically examined, for the first time, to estimate stabilized formation temperatures (SFT) of geothermal and petroleum boreholes. Constant linear and cylindrical heat source models were used to describe the heat flow (either conductive or conductive/convective) involved during a borehole drilling. A comprehensive statistical assessment of the major error sources associated with the use of these models was carried out. The mathematical methods (based on approximate and rigorous solutions of heat transfer models) were thoroughly examined by using four statistical analyses: (i) the use of linear and quadratic regression models to infer the SFT; (ii) the application of statistical tests of linearity to evaluate the actual relationship between bottom-hole temperatures and time function data for each selected method; (iii) the comparative analysis of SFT estimates between the approximate and rigorous predictions of each analytical method using a β ratio parameter to evaluate the similarity of both solutions, and (iv) the evaluation of accuracy in each method using statistical tests of significance, and deviation percentages between 'true' formation temperatures and SFT estimates (predicted from approximate and rigorous solutions). The present study also enabled us to determine the sensitivity parameters that should be considered for a reliable calculation of SFT, as well as to define the main physical and mathematical constraints where the approximate and rigorous methods could provide consistent SFT estimates.
NASA Astrophysics Data System (ADS)
Liu, Q.; Liu, F.; Turner, I.; Anh, V.
2007-03-01
In this paper we present a random walk model for approximating a Lévy-Feller advection-dispersion process, governed by the Lévy-Feller advection-dispersion differential equation (LFADE). We show that the random walk model converges to LFADE by use of a properly scaled transition to vanishing space and time steps. We propose an explicit finite difference approximation (EFDA) for LFADE, resulting from the Grünwald-Letnikov discretization of fractional derivatives. As a result of the interpretation of the random walk model, the stability and convergence of EFDA for LFADE in a bounded domain are discussed. Finally, some numerical examples are presented to show the application of the present technique.
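The Grünwald-Letnikov discretization mentioned above replaces the fractional derivative by a weighted sum over the function's history, with weights obeying a one-term recursion. A hedged sketch with two sanity checks (the test function f(x) = x and the grid are illustrative; for α = 1/2 its Riemann-Liouville derivative is 2·sqrt(x/π)):

```python
import math

def gl_weights(alpha, n):
    """Grunwald-Letnikov weights w_k = (-1)^k C(alpha, k), via the stable
    recursion w_k = w_{k-1} * (1 - (alpha + 1)/k)."""
    w = [1.0]
    for k in range(1, n + 1):
        w.append(w[-1] * (1.0 - (alpha + 1.0) / k))
    return w

def gl_derivative(f_vals, alpha, h):
    """GL approximation of the alpha-order derivative at the last grid point:
    D^alpha f(x) ~ h^(-alpha) * sum_k w_k f(x - k h)."""
    n = len(f_vals) - 1
    w = gl_weights(alpha, n)
    return sum(w[k] * f_vals[-1 - k] for k in range(n + 1)) / h ** alpha

# sanity checks: alpha = 1 collapses to the backward difference (weights 1, -1, 0, ...),
# and the half-derivative of f(x) = x at x = 1 should approach 2/sqrt(pi)
print(gl_weights(1.0, 4))
h = 0.001
vals = [k * h for k in range(1001)]
print(gl_derivative(vals, 0.5, h), 2.0 * math.sqrt(1.0 / math.pi))
```

The explicit scheme in the abstract applies such weighted sums at every grid point; the random-walk interpretation is what delivers its stability and convergence analysis.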
Hermeline, F.
1993-05-01
This paper deals with the approximation of Vlasov-Poisson and Vlasov-Maxwell equations. We present two coupled particle-finite volume methods which use the properties of Delaunay-Voronoi meshes. These methods are applied to benchmark calculations and engineering problems such as simulation of electron injector devices. 42 refs., 13 figs.
NASA Technical Reports Server (NTRS)
Barnwell, R. W.; Davis, R. M.
1975-01-01
A user's manual is presented for a computer program which calculates inviscid flow about lifting configurations in the free-stream Mach-number range from zero to low supersonic. Angles of attack of the order of the configuration thickness-length ratio and less can be calculated. An approximate formulation was used which accounts for shock waves, leading-edge separation and wind-tunnel wall effects.
NASA Astrophysics Data System (ADS)
Predescu, Cristian
2004-05-01
In this paper I provide significant mathematical evidence in support of the existence of direct short-time approximations of any polynomial order for the computation of density matrices of physical systems described by potentials that are arbitrarily smooth and bounded from below. While for Theorem 2, which is “experimental,” I only provide a “physicist’s” proof, I believe the present development is mathematically sound. As a verification, I explicitly construct two short-time approximations to the density matrix having convergence orders 3 and 4, respectively. Furthermore, in Appendix B, I derive the convergence constant for the trapezoidal Trotter path integral technique. The convergence orders and constants are then verified by numerical simulations. While the two short-time approximations constructed are of sure interest to physicists and chemists involved in Monte Carlo path integral simulations, the present paper is also aimed at the mathematical community, who might find the results interesting and worth exploring. I conclude the paper by discussing the implications of the present findings with respect to the solvability of the dynamical sign problem appearing in real-time Feynman path integral simulations.
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1985-01-01
Rayleigh-Ritz methods for the approximation of the natural modes for a class of vibration problems involving flexible beams with tip bodies using subspaces of piecewise polynomial spline functions are developed. An abstract operator theoretic formulation of the eigenvalue problem is derived and spectral properties investigated. The existing theory for spline-based Rayleigh-Ritz methods applied to elliptic differential operators and the approximation properties of interpolatory splines are useed to argue convergence and establish rates of convergence. An example and numerical results are discussed.
NASA Astrophysics Data System (ADS)
Meier, Patrick; Rauhut, Guntram
2015-12-01
Three different approaches for calculating Franck-Condon factors beyond the harmonic approximation are compared and discussed in detail. Duschinsky effects are accounted for either by a rotation of the initial or final wavefunctions - which are obtained from state-specific configuration-selective vibrational configuration interaction calculations - or by a rotation of the underlying multi-dimensional potential energy surfaces being determined from explicitly correlated coupled-cluster approaches. An analysis of the Duschinsky effects in dependence on the rotational angles and the anisotropy of the wavefunction is provided. Benchmark calculations for the photoelectron spectra of ClO2, HS-2 and ZnOH- are presented. An application of the favoured approach for calculating Franck-Condon factors to the oxidation of Zn(H2O)+ and Zn2(H2O)+ demonstrates its applicability to systems with more than three atoms.
NASA Astrophysics Data System (ADS)
Barry, D. A.; Parlange, J.-Y.; Li, L.; Jeng, D.-S.; Crapper, M.
2005-10-01
The solution to the Green and Ampt infiltration equation is expressible in terms of the Lambert W_{-1} function. Approximations for Green and Ampt infiltration are thus derivable from approximations for the W_{-1} function and vice versa. An infinite family of asymptotic expansions to W_{-1} is presented. Although these expansions do not converge near the branch point of the W function (which corresponds to Green-Ampt infiltration with immediate ponding), a method is presented for approximating W_{-1} that is exact at the branch point and asymptotically, with interpolation between these limits. Some existing and several new simple and compact yet robust approximations applicable to Green-Ampt infiltration and flux are presented, the most accurate of which has a maximum relative error of 5 × 10^{-5}%. This error is orders of magnitude lower than that of any existing analytical approximation.
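As a concrete illustration of the W_{-1} connection, the dimensionless Green-Ampt relation t = I - ln(1 + I) inverts in closed form to I = -1 - W_{-1}(-e^{-(1+t)}). The sketch below evaluates W_{-1} by Newton iteration from a crude asymptotic initial guess; it is an illustration of the identity, not one of the paper's approximations:

```python
import math

def lambert_w_minus1(z, tol=1e-14):
    """Secondary real branch W_{-1} of w * e^w = z, for z in (-1/e, 0), via
    Newton iteration from the asymptotic guess w0 = ln(-z) - ln(-ln(-z))."""
    L = math.log(-z)
    w = L - math.log(-L)
    for _ in range(100):
        e = math.exp(w)
        step = (w * e - z) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

def green_ampt_I(t):
    """Dimensionless Green-Ampt cumulative infiltration I(t), where
    t = I - ln(1 + I), inverted as I = -1 - W_{-1}(-exp(-(1 + t)))."""
    return -1.0 - lambert_w_minus1(-math.exp(-(1.0 + t)))

t = 2.0
I = green_ampt_I(t)
print(I, I - math.log(1.0 + I))  # second value should recover t = 2.0
```

The simple Newton evaluation used here loses accuracy near the branch point z = -1/e (immediate ponding), which is exactly the regime the paper's branch-point-exact approximations are designed to handle.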
Alcock, J. (Dept. of Environmental Science); Wagner, M.E. (Geology); Srogi, L.A. (Dept. of Geology and Astronomy)
1993-03-01
Post-Taconian transcurrent faulting in the Appalachian Piedmont presents a significant problem to workers attempting to reconstruct the Early Paleozoic tectonic history. One solution to the problem is to identify blocks that lie between zones of transcurrent faulting and that retain the Early Paleozoic arrangement of litho-tectonic units. The authors propose that a comparison of metamorphic histories of different units can be used to recognize blocks of this type. The Wilmington Complex (WC) arc terrane, the pre-Taconian Laurentian margin rocks (LM) exposed in basement-cored massifs, and the Wissahickon Group metapelites (WS) that lie between them are three litho-tectonic units in the PA-DE Piedmont that comprise a block assembled in the Early Paleozoic. Evidence supporting this interpretation includes: (1) Metamorphic and lithologic differences across the WC-WS contact and detailed geologic mapping of the contact that suggest thrusting of the WC onto the WS; (2) A metamorphic gradient in the WS with highest grade, including spinel-cordierite migmatites, adjacent to the WC indicating that peak metamorphism of the WS resulted from heating by the WC; (3) A metamorphic discontinuity at the WS-LM contact, evidence for emplacement of the WS onto the LM after WS peak metamorphism; (4) A correlation of mineral assemblage in the Cockeysville Marble of the LM with distance from the WS indicating that peak metamorphism of the LM occurred after emplacement of the WS; and (5) Early Paleozoic lower intercept zircon ages for the LM that are interpreted to date Taconian regional metamorphism. Analysis of metamorphism and its timing relative to thrusting suggest that the WS was associated with the WC before the WS was emplaced onto the LM during the Taconian. It follows that these units form a block that has not been significantly disrupted by later transcurrent shear.
Gai, Litao; Bilige, Sudao; Jie, Yingmo
2016-01-01
In this paper, we successfully obtained the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtained the symmetries of the (2 + 1)-dimensional KP equation based on the Wu-differential characteristic set algorithm and used them to reduce the equation. In the second part, we constructed abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters are taken as special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain the approximate analytic solutions based on four kinds of initial conditions.
de Stadler, M; Chand, K
2007-11-12
Gas centrifuges exhibit very complex flows. Within the centrifuge there is a rarefied region, a transition region, and a region with an extreme density gradient. The flow moves at hypersonic speeds and shock waves are present. However, the flow is subsonic in the axisymmetric plane. The analysis may be simplified by treating the flow as a perturbation of wheel flow. Wheel flow implies that the fluid is moving as a solid body. With the very large pressure gradient, the majority of the fluid is located very close to the rotor wall and moves at an azimuthal velocity proportional to its distance from the rotor wall; there is no slipping in the azimuthal plane. The fluid can be modeled as incompressible and subsonic in the axisymmetric plane. By treating the centrifuge as long, end effects can be appropriately modeled without performing a detailed boundary layer analysis. Onsager's pancake approximation is used to construct a simulation to model fluid flow in a gas centrifuge. The governing 6th order partial differential equation is broken down into an equivalent coupled system of three equations and then solved numerically. In addition to a discussion on the baseline solution, known problems and future work possibilities are presented.
NASA Astrophysics Data System (ADS)
Kopka, Piotr; Wawrzynczak, Anna; Borysiewicz, Mieczyslaw
2016-11-01
In this paper the Bayesian methodology, known as Approximate Bayesian Computation (ABC), is applied to the problem of the atmospheric contamination source identification. The algorithm input data are on-line arriving concentrations of the released substance registered by the distributed sensors network. This paper presents the Sequential ABC algorithm in detail and tests its efficiency in estimation of probabilistic distributions of atmospheric release parameters of a mobile contamination source. The developed algorithms are tested using the data from Over-Land Atmospheric Diffusion (OLAD) field tracer experiment. The paper demonstrates estimation of seven parameters characterizing the contamination source, i.e.: contamination source starting position (x,y), the direction of the motion of the source (d), its velocity (v), release rate (q), start time of release (ts) and its duration (td). The online-arriving new concentrations dynamically update the probability distributions of search parameters. The atmospheric dispersion Second-order Closure Integrated PUFF (SCIPUFF) Model is used as the forward model to predict the concentrations at the sensors locations.
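The rejection flavour of ABC that underlies such sequential schemes fits in a few lines. The sketch below estimates a single release rate q from synthetic sensor readings; the inverse-square forward model, prior range, and tolerance are illustrative stand-ins, not the SCIPUFF model or the OLAD experiment.

```python
import random
import statistics

def forward_model(q, distances):
    # Toy stand-in for a dispersion code: concentration ~ q / d^2 at each sensor.
    return [q / d**2 for d in distances]

def abc_rejection(observed, distances, n_draws=20000, eps=0.5, seed=1):
    """Rejection ABC: draw q from the prior, keep draws whose simulated
    sensor readings fall within eps of the observed ones."""
    rng = random.Random(seed)
    accepted = []
    for _ in range(n_draws):
        q = rng.uniform(0.0, 20.0)  # flat prior on the release rate
        sim = forward_model(q, distances)
        dist = sum((s - o) ** 2 for s, o in zip(sim, observed)) ** 0.5
        if dist < eps:
            accepted.append(q)
    return accepted

distances = [1.0, 2.0, 3.0]               # sensor distances from the source
observed = forward_model(5.0, distances)  # synthetic "observations", true q = 5
posterior = abc_rejection(observed, distances)
print(round(statistics.mean(posterior), 1))  # 5.0 (posterior centred on true q)
```

A sequential ABC algorithm such as the one in the paper refines this idea by shrinking the tolerance over successive populations of accepted draws.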
NASA Astrophysics Data System (ADS)
Hayashi, Nobuhiko; Nagai, Yuki; Higashi, Yoichi
2010-12-01
We theoretically discuss the magnetic-field-angle dependence of the zero-energy density of states (ZEDOS) in superconductors. Point-node and line-node superconducting gaps on spherical and cylindrical Fermi surfaces are considered. The Doppler-shift (DS) method and the Kramer-Pesch approximation (KPA) are used to calculate the ZEDOS. Numerical results show that consequences of the DS method are corrected by the KPA.
Approximating Integrals Using Probability
ERIC Educational Resources Information Center
Maruszewski, Richard F., Jr.; Caudle, Kyle A.
2005-01-01
As part of a discussion on Monte Carlo methods, this article outlines how to use probability expectations to approximate the value of a definite integral. The purpose of this paper is to elaborate on this technique and then to show several examples using Visual Basic as a programming tool. It is an interesting method because it combines two branches of…
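The expectation trick described above — the integral of f over [a, b] equals (b − a)·E[f(U)] for U uniform on [a, b] — can be sketched in Python rather than Visual Basic; the function names here are illustrative.

```python
import random

def mc_integral(f, a, b, n=100000, seed=42):
    """Estimate the definite integral of f on [a, b] as (b - a) * E[f(U)],
    approximating the expectation by a sample mean over uniform draws."""
    rng = random.Random(seed)
    total = sum(f(rng.uniform(a, b)) for _ in range(n))
    return (b - a) * total / n

# E[U^2] for U ~ Uniform(0, 1) equals the integral of x^2 on [0, 1] = 1/3.
estimate = mc_integral(lambda x: x**2, 0.0, 1.0)
print(round(estimate, 2))  # 0.33
```

The error of the estimate shrinks like 1/sqrt(n), independent of the dimension of the integral, which is why the method scales to multiple integrals.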
Moilanen, Atte; Wintle, Brendan A
2007-04-01
Aggregation of reserve networks is generally considered desirable for biological and economic reasons: aggregation reduces negative edge effects and facilitates metapopulation dynamics, which plausibly leads to improved persistence of species. Economically, aggregated networks are less expensive to manage than fragmented ones. Therefore, many reserve-design methods use qualitative heuristics, such as distance-based criteria or boundary-length penalties to induce reserve aggregation. We devised a quantitative method that introduces aggregation into reserve networks. We call the method the boundary-quality penalty (BQP) because the biological value of a land unit (grid cell) is penalized when the unit occurs close enough to the edge of a reserve such that a fragmentation or edge effect would reduce population densities in the reserved cell. The BQP can be estimated for any habitat model that includes neighborhood (connectivity) effects, and it can be introduced into reserve selection software in a standardized manner. We used the BQP in a reserve-design case study of the Hunter Valley of southeastern Australia. The BQP resulted in a more highly aggregated reserve network structure. The degree of aggregation required was specified by observed (albeit modeled) biological responses to fragmentation. Estimating the effects of fragmentation on individual species and incorporating estimated effects in the objective function of reserve-selection algorithms is a coherent and defensible way to select aggregated reserves. We implemented the BQP in the context of the Zonation method, but it could as well be implemented into any other spatially explicit reserve-planning framework.
NASA Technical Reports Server (NTRS)
Jordon, D. E.; Patterson, W.; Sandlin, D. R.
1985-01-01
The XV-15 Tilt Rotor Research Aircraft download phenomenon was analyzed. This phenomenon is a direct result of the two rotor wakes impinging on the wing upper surface when the aircraft is in the hover configuration. For this study the analysis proceeded along two lines. First was a method whereby results from actual hover tests of the XV-15 aircraft were combined with drag coefficient results from wind tunnel tests of a wing that was representative of the aircraft wing. Second, an analytical method was used that modeled the airflow caused by the two rotors. Formulas were developed in such a way that a computer program could be used to calculate the axial velocities. These velocities were then used in conjunction with the aforementioned wind tunnel drag coefficient results to produce download values. An attempt was made to validate the analytical results by modeling a model rotor system for which direct download values were determined.
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow us to represent the solution of a stochastic system using a series of polynomial chaos basis functions. The number of gPC terms increases dramatically as the dimension of the random input variables increases. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs when the corresponding deterministic solver is computationally expensive, evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solutions, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points, via (1) the Bayesian model average (BMA) or (2) the median probability model, and their construction as spatial functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter provides a sparse representation of the stochastic solutions by evaluating the expansion on a subset of dominating gPC bases. Moreover, the proposed methods quantify the importance of the gPC bases in the probabilistic sense through inclusion probabilities. We design a Markov chain Monte Carlo (MCMC) sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed methods are suitable for, but not restricted to, problems whose stochastic solutions are sparse in the stochastic space with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the accuracy and performance of the proposed methods and make comparisons with other approaches on solving elliptic SPDEs with 1-, 14- and 40-random dimensions.
Karagiannis, Georgios; Lin, Guang
2014-02-15
Generalized polynomial chaos (gPC) expansions allow the representation of the solution of a stochastic system as a series of polynomial terms. The number of gPC terms increases dramatically with the dimension of the random input variables. When the number of the gPC terms is larger than that of the available samples, a scenario that often occurs if the evaluations of the system are expensive, the evaluation of the gPC expansion can be inaccurate due to over-fitting. We propose a fully Bayesian approach that allows for global recovery of the stochastic solution, in both spatial and random domains, by coupling Bayesian model uncertainty and regularization regression methods. It allows the evaluation of the PC coefficients on a grid of spatial points via (1) the Bayesian model average or (2) the median probability model, and their construction as functions on the spatial domain via spline interpolation. The former accounts for the model uncertainty and provides Bayes-optimal predictions; while the latter, additionally, provides a sparse representation of the solution by evaluating the expansion on a subset of dominating gPC bases. Moreover, the method quantifies the importance of the gPC bases through inclusion probabilities. We design an MCMC sampler that evaluates all the unknown quantities without the need of ad-hoc techniques. The proposed method is suitable for, but not restricted to, problems whose stochastic solution is sparse at the stochastic level with respect to the gPC bases while the deterministic solver involved is expensive. We demonstrate the good performance of the proposed method and make comparisons with others on 1D, 14D and 40D in random space elliptic stochastic partial differential equations.
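Independent of the Bayesian machinery above, the basic object — a gPC expansion in polynomials orthogonal with respect to the input distribution — is easy to demonstrate. The sketch below recovers the Hermite-chaos coefficients of u(x) = x² for a standard normal input by sampled projection; it is a minimal one-dimensional illustration, not the authors' BMA/median-probability method.

```python
import random

# Probabilists' Hermite polynomials, orthogonal under the standard normal:
# E[He_j(x) He_k(x)] = k! * delta_jk for x ~ N(0, 1).
HERMITE = [lambda x: 1.0, lambda x: x, lambda x: x**2 - 1.0]
NORMS = [1.0, 1.0, 2.0]  # E[He_k^2] = k!

def gpc_coefficients(u, n=200000, seed=0):
    """Estimate gPC coefficients c_k = E[u(x) He_k(x)] / E[He_k^2] by sampling."""
    rng = random.Random(seed)
    sums = [0.0] * len(HERMITE)
    for _ in range(n):
        x = rng.gauss(0.0, 1.0)
        ux = u(x)
        for k, he in enumerate(HERMITE):
            sums[k] += ux * he(x)
    return [s / (n * norm) for s, norm in zip(sums, NORMS)]

# u(x) = x^2 has the exact expansion He_0 + He_2, i.e. coefficients [1, 0, 1].
coeffs = gpc_coefficients(lambda x: x**2)
print(max(abs(c - e) for c, e in zip(coeffs, [1.0, 0.0, 1.0])) < 0.05)  # True
```

When the number of retained basis terms exceeds the sample budget, this plain projection (like plain regression) over-fits — which is the regime the Bayesian model-averaging approach above is designed for.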
NASA Astrophysics Data System (ADS)
Lorin, E.; Yang, X.; Antoine, X.
2016-06-01
The paper develops efficient domain decomposition methods for the linear Schrödinger equation beyond the semiclassical regime, where the rescaled Planck constant is not small enough for asymptotic methods (e.g. geometric optics) to produce good accuracy, yet direct methods (e.g. finite difference) are too computationally expensive. This belongs to the category of computing middle-frequency wave propagation, where neither asymptotic nor direct methods can be used with both efficiency and accuracy. Motivated by recent works of the authors on absorbing boundary conditions (Antoine et al. (2014) [13] and Yang and Zhang (2014) [43]), we introduce Semiclassical Schwarz Waveform Relaxation methods (SSWR), which are seamless integrations of semiclassical approximation and Schwarz Waveform Relaxation methods. Two versions are proposed, based respectively on Herman-Kluk propagation and geometric optics, and we prove convergence and provide numerical evidence of the efficiency and accuracy of these methods.
NASA Astrophysics Data System (ADS)
Hashemi, M. S.; Baleanu, D.
2016-07-01
We propose a simple and accurate numerical scheme for solving the time fractional telegraph (TFT) equation with a Caputo-type fractional derivative. A fictitious coordinate ϑ is imposed onto the problem in order to transform the dependent variable u(x, t) into a new variable with an extra dimension. In the new space with the added fictitious dimension, a combination of the method of lines and a group preserving scheme (GPS) is proposed to find the approximate solutions. This method preserves the geometric structure of the problem. The power and accuracy of the method are illustrated through several examples of the TFT equation.
NASA Astrophysics Data System (ADS)
Huang, H.; Meng, D. Q.; Lai, X. C.; Liu, T. W.; Long, Y.; Hu, Q. M.
2014-08-01
The combined interatomic pair potentials of TiZrNi, including Morse and Inversion Gaussian, are successfully built by the lattice inversion method. Some experimental controversies on atomic occupancies of sites 6-8 in W-TiZrNi are analyzed and settled with these inverted potentials. According to the characteristics of composition and site preference occupancy of W-TiZrNi, two stable structural models of W-TiZrNi are proposed and the possibilities are partly confirmed by experimental data. The stabilities of W-TiZrNi mostly result from the contribution of Zr atoms to the phonon densities of states in lower frequencies.
NASA Astrophysics Data System (ADS)
Vitanov, Nikolay K.
2011-03-01
We discuss the class of equations ∑_{i,j=0}^{m} A_{ij}(u) (∂^i u/∂t^i)(∂^j u/∂t^j) + ∑_{k,l=0}^{n} B_{kl}(u) (∂^k u/∂x^k)(∂^l u/∂x^l) = C(u), where A_{ij}(u), B_{kl}(u) and C(u) are functions of u(x, t) as follows: (i) A_{ij}, B_{kl} and C are polynomials of u; or (ii) A_{ij}, B_{kl} and C can be reduced to polynomials of u by means of Taylor series for small values of u. For these two cases the above-mentioned class of equations consists of nonlinear PDEs with polynomial nonlinearities. We show that the modified method of simplest equation is a powerful tool for obtaining exact traveling-wave solutions of this class of equations. The balance equations for the sub-class of traveling-wave solutions of the investigated class of equations are obtained. We illustrate the method by obtaining exact traveling-wave solutions (i) of the Swift-Hohenberg equation and (ii) of the generalized Rayleigh equation for the cases when the extended tanh-equation or the equations of Bernoulli and Riccati are used as simplest equations.
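The role of a "simplest equation" can be illustrated numerically: the Riccati equation u' = 1 − u² is solved by u(ξ) = tanh(ξ), the building block of tanh-type traveling-wave ansätze. The check below is a minimal sketch of that building block, not the full modified method of simplest equation.

```python
import math

def riccati_residual(xi, h=1e-5):
    """Residual of the simplest equation u' = 1 - u^2 at xi for the
    candidate solution u = tanh, using a central finite difference."""
    du = (math.tanh(xi + h) - math.tanh(xi - h)) / (2 * h)
    return du - (1.0 - math.tanh(xi) ** 2)

# tanh solves the Riccati equation exactly, so the residual is only the
# O(h^2) discretization error of the finite difference.
max_residual = max(abs(riccati_residual(0.1 * k)) for k in range(-50, 51))
print(max_residual < 1e-8)  # True
```

In the method itself, a traveling-wave solution of the target PDE is sought as a polynomial in such a simplest-equation solution, and the balance equations fix the polynomial degree.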
NASA Astrophysics Data System (ADS)
Chen, Peng; Quarteroni, Alfio
2015-10-01
In this work we develop an adaptive and reduced computational algorithm based on dimension-adaptive sparse grid approximation and reduced basis methods for solving high-dimensional uncertainty quantification (UQ) problems. In order to tackle the computational challenge of "curse of dimensionality" commonly faced by these problems, we employ a dimension-adaptive tensor-product algorithm [16] and propose a verified version to enable effective removal of the stagnation phenomenon besides automatically detecting the importance and interaction of different dimensions. To reduce the heavy computational cost of UQ problems modelled by partial differential equations (PDE), we adopt a weighted reduced basis method [7] and develop an adaptive greedy algorithm in combination with the previous verified algorithm for efficient construction of an accurate reduced basis approximation. The efficiency and accuracy of the proposed algorithm are demonstrated by several numerical experiments.
Krause, Katharina; Klopper, Wim
2015-03-14
A generalization of the approximated coupled-cluster singles and doubles method and the algebraic diagrammatic construction scheme up to second order to two-component spinors obtained from a relativistic Hartree–Fock calculation is reported. Computational results for zero-field splittings of atoms and monoatomic cations, triplet lifetimes of two organic molecules, and the spin-forbidden part of the UV/Vis absorption spectrum of tris(ethylenediamine)cobalt(III) are presented.
Roze, Denis; Rousset, François
2003-01-01
Population structure affects the relative influence of selection and drift on the change in allele frequencies. Several models have been proposed recently, using diffusion approximations to calculate fixation probabilities, fixation times, and equilibrium properties of subdivided populations. We propose here a simple method to construct diffusion approximations in structured populations; it relies on general expressions for the expectation and variance in allele frequency change over one generation, in terms of partial derivatives of a "fitness function" and probabilities of genetic identity evaluated in a neutral model. In the limit of a very large number of demes, these probabilities can be expressed as functions of average allele frequencies in the metapopulation, provided that coalescence occurs on two different timescales, which is the case in the island model. We then use the method to derive expressions for the probability of fixation of new mutations, as a function of their dominance coefficient, the rate of partial selfing, and the rate of deme extinction. We obtain more precise approximations than those derived by recent work, in particular (but not only) when deme sizes are small. Comparisons with simulations show that the method gives good results as long as migration is stronger than selection. PMID:14704194
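For a single panmictic population, the classical diffusion result that such structured-population formulas generalize is Kimura's fixation probability. The sketch below is that textbook formula (genic selection, effective size N), not the authors' subdivided-population derivation.

```python
import math

def fixation_probability(p, N, s):
    """Kimura's diffusion approximation for the fixation probability of an
    allele at initial frequency p with selection coefficient s in a
    panmictic population of effective size N (genic selection)."""
    if s == 0:
        return p  # neutral allele: fixation probability equals its frequency
    return (1.0 - math.exp(-4 * N * s * p)) / (1.0 - math.exp(-4 * N * s))

N = 1000
p0 = 1.0 / (2 * N)  # a single new mutant copy in a diploid population
print(round(fixation_probability(p0, N, 0.0), 6))  # 0.0005, i.e. 1/(2N)
# For an advantageous new mutant, the classic ~2s approximation emerges:
print(round(fixation_probability(p0, N, 0.01) / (2 * 0.01), 2))  # 0.99
```

The structured-population method in the abstract replaces the simple drift variance here with expressions built from probabilities of genetic identity under the island model.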
NASA Astrophysics Data System (ADS)
Moraes Rêgo, Patrícia Helena; Viana da Fonseca Neto, João; Ferreira, Ernesto M.
2015-08-01
The main focus of this article is a proposal to solve, via UDUT factorisation, the convergence and numerical stability problems related to the ill-conditioning of the covariance matrix in the recursive least squares (RLS) approach for online approximation of the algebraic Riccati equation (ARE) solution associated with the discrete linear quadratic regulator (DLQR) problem, formulated in the actor-critic reinforcement learning and approximate dynamic programming context. The parameterisations of the Bellman equation, utility function and dynamic system, together with the algebra of the Kronecker product, assemble a framework for the solution of the DLQR problem. The condition number and the positivity parameter of the covariance matrix are associated with statistical metrics for evaluating the approximation performance of the ARE solution via RLS-based estimators. The performance of RLS approximators is also evaluated in terms of consistence and polarisation when associated with reinforcement learning methods. The methodology contemplates realisations of online designs for DLQR controllers that are evaluated in a multivariable dynamic system model.
NASA Astrophysics Data System (ADS)
Chatterjee, Koushik; Pastorczak, Ewa; Jawulski, Konrad; Pernal, Katarzyna
2016-06-01
A perfect-pairing generalized valence bond (GVB) approximation is known to be one of the simplest approximations, which allows one to capture the essence of static correlation in molecular systems. In spite of its attractive feature of being relatively computationally efficient, this approximation misses a large portion of dynamic correlation and does not offer sufficient accuracy to be generally useful for studying electronic structure of molecules. We propose to correct the GVB model and alleviate some of its deficiencies by amending it with the correlation energy correction derived from the recently formulated extended random phase approximation (ERPA). On the examples of systems of diverse electronic structures, we show that the resulting ERPA-GVB method greatly improves upon the GVB model. ERPA-GVB recovers most of the electron correlation and it yields energy barrier heights of excellent accuracy. Thanks to a balanced treatment of static and dynamic correlation, ERPA-GVB stays reliable when one moves from systems dominated by dynamic electron correlation to those for which the static correlation comes into play.
Approximate kernel competitive learning.
Wu, Jian-Sheng; Zheng, Wei-Shi; Lai, Jian-Huang
2015-03-01
Kernel competitive learning (KCL) has been successfully used to achieve robust clustering. However, KCL is not scalable to large-scale data processing, because (1) it has to calculate and store the full kernel matrix, which is too large to be calculated and kept in memory, and (2) it cannot be computed in parallel. In this paper we develop a framework of approximate kernel competitive learning for processing large-scale datasets. The proposed framework consists of two parts. First, it derives an approximate kernel competitive learning (AKCL) method, which learns kernel competitive learning in a subspace via sampling. We provide solid theoretical analysis on why the proposed approximation modelling works for kernel competitive learning, and furthermore we show that the computational complexity of AKCL is largely reduced. Second, we propose a pseudo-parallelled approximate kernel competitive learning (PAKCL) method based on a set-based kernel competitive learning strategy, which overcomes the obstacle of using parallel programming in kernel competitive learning and significantly accelerates approximate kernel competitive learning for large-scale clustering. Empirical evaluation on publicly available datasets shows that the proposed AKCL and PAKCL perform comparably to KCL, with a large reduction in computational cost. The proposed methods also achieve more effective clustering performance in terms of clustering precision than related approximate clustering approaches.
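The general idea of trading the full kernel matrix for a cheap surrogate can be illustrated with random Fourier features — a related but distinct kernel approximation (not the authors' AKCL): an explicit randomized feature map whose inner products approximate a Gaussian kernel, so no n-by-n matrix is ever formed.

```python
import math
import random

def rff_features(x, ws, bs):
    """Random Fourier feature map z(x); for scalar inputs,
    z(x).z(y) approximates the Gaussian kernel exp(-(x - y)^2 / 2)."""
    d = len(ws)
    return [math.sqrt(2.0 / d) * math.cos(w * x + b) for w, b in zip(ws, bs)]

def approx_kernel(x, y, ws, bs):
    zx, zy = rff_features(x, ws, bs), rff_features(y, ws, bs)
    return sum(a * b for a, b in zip(zx, zy))

rng = random.Random(0)
D = 20000  # number of random features; error shrinks like 1/sqrt(D)
ws = [rng.gauss(0.0, 1.0) for _ in range(D)]        # spectral samples of the kernel
bs = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(D)]

x, y = 0.3, 1.1
exact = math.exp(-((x - y) ** 2) / 2.0)
print(round(exact, 3), round(approx_kernel(x, y, ws, bs), 3))
```

Any kernel method rewritten in terms of such explicit features becomes linear-time in the number of points and trivially parallelizable — the same scalability goal AKCL and PAKCL pursue via subspace sampling.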
NASA Astrophysics Data System (ADS)
Mohammadpour, Mozhdeh; Jamshidi, Zahra
2016-05-01
The prospect of challenges in reproducing and interpretation of resonance Raman properties of molecules interacting with metal clusters has prompted the present research initiative. Resonance Raman spectra based on the time-dependent gradient approximation are examined in the framework of density functional theory using different methods for representing the exchange-correlation functional. In this work the performance of different XC functionals in the prediction of ground state properties, excitation state energies, and gradients are compared and discussed. Resonance Raman properties based on time-dependent gradient approximation for the strongly low-lying charge transfer states are calculated and compared for different methods. We draw the following conclusions: (1) for calculating the binding energy and ground state geometry, dispersion-corrected functionals give the best performance in comparison to ab initio calculations, (2) GGA and meta GGA functionals give good accuracy in calculating vibrational frequencies, (3) excited state energies determined by hybrid and range-separated hybrid functionals are in good agreement with EOM-CCSD calculations, and (4) in calculating resonance Raman properties GGA functionals give good and reasonable performance in comparison to the experiment; however, calculating the excited state gradient by using the hybrid functional on the hessian of GGA improves the results of the hybrid functional significantly. Finally, we conclude that the agreement of charge-transfer surface enhanced resonance Raman spectra with experiment is improved significantly by using the excited state gradient approximation.
Advanced Methods of Approximate Reasoning
1990-11-30
Central to the approach is the notion of a "possible world" (a conceivable state of affairs, situation, or scenario), used by Carnap in his logical treatment of the concept of probability and to develop logical bases for probability theory. [References cited include G. Boole, The Laws of Thought, 1854 (reprinted by Dover Books, 1958), and R. Carnap, The Logical Foundations of Probability, University of Chicago Press.]
Taylor Approximations and Definite Integrals
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2007-01-01
We investigate the possibility of approximating the value of a definite integral by approximating the integrand rather than using numerical methods to approximate the value of the definite integral. Particular cases considered include examples where the integral is improper, such as an elliptic integral. (Contains 4 tables and 2 figures.)
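For instance, integrating the Maclaurin series of exp(−x²) term by term gives a rapidly converging alternating series for its integral on [0, 1] — a case where the antiderivative has no elementary closed form:

```python
import math

def taylor_integral(terms):
    """Integrate exp(-x^2) on [0, 1] by integrating its Taylor series
    term by term: sum over n of (-1)^n / (n! * (2n + 1))."""
    return sum((-1) ** n / (math.factorial(n) * (2 * n + 1))
               for n in range(terms))

# The alternating series converges quickly: ten terms already agree with
# the erf-based value (sqrt(pi)/2 * erf(1)) to about eight digits.
print(round(taylor_integral(10), 6))  # 0.746824
```

The same device works for the elliptic integrals mentioned in the abstract, where the integrand's binomial series is integrated term by term.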
Combining global and local approximations
NASA Technical Reports Server (NTRS)
Haftka, Raphael T.
1991-01-01
A method based on a linear approximation to a scaling factor, designated the 'global-local approximation' (GLA) method, is presented and shown capable of extending the range of usefulness of derivative-based approximations to a more refined model. The GLA approach refines the conventional scaling factor by means of a linearly varying, rather than constant, scaling factor. The capabilities of the method are demonstrated for a simple beam example with a crude and more refined FEM model.
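A minimal numerical sketch of the idea follows, with toy crude/refined models chosen so that the scaling factor is exactly linear (the models and names are illustrative, not the paper's beam example):

```python
def gla(f_crude, f_refined, x0, h=1e-6):
    """Global-local approximation sketch: scale the crude model by the
    scaling factor beta(x) = f_refined(x) / f_crude(x), linearized about x0
    with a central finite difference."""
    beta0 = f_refined(x0) / f_crude(x0)
    dbeta = (f_refined(x0 + h) / f_crude(x0 + h)
             - f_refined(x0 - h) / f_crude(x0 - h)) / (2 * h)
    return lambda x: (beta0 + dbeta * (x - x0)) * f_crude(x)

# Toy models: crude ~ x^2, refined ~ x^2 * (1 + 0.1 x). Here beta(x) = 1 + 0.1 x
# is exactly linear, so the GLA reproduces the refined model even far from x0;
# a constant scaling factor would only match at x0 itself.
f_c = lambda x: x**2
f_r = lambda x: x**2 * (1.0 + 0.1 * x)
approx = gla(f_c, f_r, x0=1.0)
print(round(approx(3.0), 4), round(f_r(3.0), 4))  # 11.7 11.7
```

In a design-optimization setting, f_refined would be the expensive FEM model evaluated (with derivatives) only at x0, and f_crude the cheap model evaluated everywhere.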
NASA Technical Reports Server (NTRS)
Anderson, O. L.; Briley, W. R.; Mcdonald, H.
1978-01-01
An approximate analysis is presented for calculating three-dimensional, low Mach number, laminar viscous flows in curved passages with large secondary flows and corner boundary layers. The analysis is based on the decomposition of the overall velocity field into inviscid and viscous components with the overall velocity being determined from superposition. An incompressible vorticity transport equation is used to estimate inviscid secondary flow velocities to be used as corrections to the potential flow velocity field. A parabolized streamwise momentum equation coupled to an adiabatic energy equation and global continuity equation is used to obtain an approximate viscous correction to the pressure and longitudinal velocity fields. A collateral flow assumption is invoked to estimate the viscous correction to the transverse velocity fields. The approximate analysis is solved numerically using an implicit ADI solution for the viscous pressure and velocity fields. An iterative ADI procedure is used to solve for the inviscid secondary vorticity and velocity fields. This method was applied to computing the flow within a turbine vane passage with inlet flow conditions of M = 0.1 and M = 0.25, Re = 1000 and adiabatic walls, and for a constant radius curved rectangular duct with R/D = 12 and 14 and with inlet flow conditions of M = 0.1, Re = 1000, and adiabatic walls.
Ribeiro, Apoena A; Purger, Flávia; Rodrigues, Jonas A; Oliveira, Patrícia R A; Lussi, Adrian; Monteiro, Antonio Henrique; Alves, Haimon D L; Assis, Joaquim T; Vasconcellos, Adalberto B
2015-01-01
This in vivo study aimed to evaluate the influence of contact points on approximal caries detection in primary molars, by comparing the performance of the DIAGNOdent pen and visual-tactile examination after tooth separation to bitewing radiography (BW). A total of 112 children were examined and 33 children were selected. In three periods (a, b, and c), 209 approximal surfaces were examined: (a) examiner 1 performed visual-tactile examination using the Nyvad criteria (EX1); examiner 2 used the DIAGNOdent pen (LF1) and took BW; (b) 1 week later, after tooth separation, examiner 1 performed the second visual-tactile examination (EX2) and examiner 2 used DIAGNOdent again (LF2); (c) after tooth exfoliation, surfaces were directly examined using DIAGNOdent (LF3). Teeth were examined by computed microtomography as a reference standard. Analyses were based on diagnostic thresholds: D1: D0 = health, D1-D4 = disease; D2: D0, D1 = health, D2-D4 = disease; D3: D0-D2 = health, D3, D4 = disease. At D1, the highest sensitivity/specificity were observed for EX1 (1.00)/LF3 (0.68), respectively. At D2, the highest sensitivity/specificity were observed for LF3 (0.69)/BW (1.00), respectively. At D3, the highest sensitivity/specificity were observed for LF3 (0.78)/EX1, EX2 and BW (1.00). EX1 showed higher accuracy values than LF1, and EX2 showed similar values to LF2. We concluded that the visual-tactile examination showed better results in detecting sound surfaces and approximal caries lesions without tooth separation. However, the effectiveness of approximal caries lesion detection of both methods was increased by the absence of contact points. Therefore, regardless of the method of detection, orthodontic separating elastics should be used as a complementary tool for the diagnosis of approximal noncavitated lesions in primary molars.
NASA Technical Reports Server (NTRS)
Hunter, Craig A.
1995-01-01
An analytical/numerical method has been developed to predict the static thrust performance of non-axisymmetric, two-dimensional convergent-divergent exhaust nozzles. Thermodynamic nozzle performance effects due to over- and underexpansion are modeled using one-dimensional compressible flow theory. Boundary layer development and skin friction losses are calculated using an approximate integral momentum method based on the classic Karman-Pohlhausen solution. Angularity effects are included with these two models in a computational Nozzle Performance Analysis Code, NPAC. In four different case studies, results from NPAC are compared to experimental data obtained from subscale nozzle testing to demonstrate the capabilities and limitations of the NPAC method. In several cases, the NPAC prediction matched experimental gross thrust efficiency data to within 0.1 percent at the design NPR, and to within 0.5 percent at off-design conditions.
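The one-dimensional isentropic relations underlying such a performance model are compact. For example, the exit Mach number of a fully expanded convergent-divergent nozzle follows directly from the nozzle pressure ratio (a textbook relation, not the NPAC code itself):

```python
import math

def exit_mach(npr, gamma=1.4):
    """Exit Mach number of a fully expanded C-D nozzle from one-dimensional
    isentropic flow theory, given the nozzle pressure ratio p0 / p_ambient:
    p0/pe = (1 + (gamma-1)/2 * M^2)^(gamma/(gamma-1)), solved for M."""
    return math.sqrt(2.0 / (gamma - 1.0)
                     * (npr ** ((gamma - 1.0) / gamma) - 1.0))

# NPR ~ 1.893 is the critical pressure ratio for gamma = 1.4: the flow just
# reaches Mach 1; higher NPRs require a diverging section to expand fully.
print(round(exit_mach(1.893), 2))  # ~1.0
for npr in (2.0, 4.0, 8.0):
    print(npr, round(exit_mach(npr), 2))
```

Over- and underexpansion losses arise when the actual area ratio fixes an exit pressure different from ambient, which is the effect the NPAC thermodynamic model captures.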
NASA Technical Reports Server (NTRS)
Buglia, James J.; Young, George R.; Timmons, Jesse D.; Brinkworth, Helen S.
1961-01-01
An analytical method has been developed which approximates the dispersion of a spinning symmetrical body in a vacuum, with time-varying mass and inertia characteristics, under the action of several external disturbances: initial pitching rate, thrust misalignment, and dynamic unbalance. The ratio of the roll inertia to the pitch or yaw inertia is assumed constant. Spin was found to be very effective in reducing the dispersion due to an initial pitch rate or thrust misalignment, but was completely ineffective in reducing the dispersion of a dynamically unbalanced body.
Li, Shaohong L; Marenich, Aleksandr V; Xu, Xuefei; Truhlar, Donald G
2014-01-16
Linear response (LR) Kohn-Sham (KS) time-dependent density functional theory (TDDFT), or KS-LR, has been widely used to study electronically excited states of molecules and is the method of choice for large and complex systems. The Tamm-Dancoff approximation to TDDFT (TDDFT-TDA or KS-TDA) gives results similar to KS-LR and alleviates the instability problem of TDDFT near state intersections. However, KS-LR and KS-TDA share a debilitating feature: conical intersections of the reference state and a response state occur in F - 1 instead of the correct F - 2 dimensions, where F is the number of internal degrees of freedom. Here, we propose a new method, named the configuration interaction-corrected Tamm-Dancoff approximation (CIC-TDA), that eliminates this problem. It calculates the coupling between the reference state and an intersecting response state by interpreting the KS reference-state Slater determinant and linear response as if they were wave functions. Both formal analysis and test results show that CIC-TDA gives similar results to KS-TDA far from a conical intersection, but the intersection occurs with the correct dimensionality. We anticipate that this will allow more realistic application of TDDFT to photochemistry.
NASA Astrophysics Data System (ADS)
Izsák, Róbert; Neese, Frank
2013-07-01
The 'chain of spheres' approximation, developed earlier for the efficient evaluation of the self-consistent field exchange term, is introduced here into the evaluation of the external exchange term of higher order correlation methods. Its performance is studied in the specific case of the spin-component-scaled third-order Møller-Plesset perturbation (SCS-MP3) theory. The results indicate that the approximation performs excellently in terms of both computer time and achievable accuracy. Significant speedups over a conventional method are obtained for larger systems and basis sets. Owing to this development, SCS-MP3 calculations on molecules of the size of penicillin (42 atoms) with a polarised triple-zeta basis set can be performed in ∼3 hours using 16 cores of an Intel Xeon E7-8837 processor with a 2.67 GHz clock speed, which represents a speedup by a factor of 8-9 compared to the previously most efficient algorithm. Thus, the increased accuracy offered by SCS-MP3 can now be explored for at least medium-sized molecules.
NASA Astrophysics Data System (ADS)
Yang, Lei; Yan, Hongyong; Liu, Hong
2017-03-01
The implicit staggered-grid finite-difference (ISFD) scheme is competitive for its accuracy and stability, but its coefficients are conventionally determined by the Taylor-series expansion (TE) method, leading to a loss in numerical precision. In this paper, we modify the TE method using minimax approximation (MA), and propose a new optimal ISFD scheme based on the modified TE (MTE) with MA method. The new ISFD scheme retains the advantage of the TE method, which guarantees high accuracy at small wavenumbers, while preserving the property of the MA method, which keeps the numerical error within a limited bound across the wavenumber range. It thus yields highly accurate numerical solutions of the wave equations. We derive the optimal ISFD coefficients by applying the new method to the construction of the objective function and using a Remez algorithm to minimize its maximum error. Numerical analysis in comparison with the conventional TE-based ISFD scheme indicates that the MTE-based ISFD scheme with appropriate parameters can widen the wavenumber range with high accuracy and achieve greater precision than the conventional scheme. The numerical modeling results also demonstrate that the MTE-based ISFD scheme performs well in elastic wave simulation and is more efficient than the conventional ISFD scheme for elastic modeling.
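The trade-off exploited here, Taylor accuracy at small wavenumbers versus a bounded maximum error over a band, can be illustrated with a toy stand-in for the Remez construction. The sketch below (our illustration, not the paper's algorithm) tunes the one free coefficient of a two-point staggered first-derivative stencil by coarse grid search, keeping the consistency constraint c1 + 3·c2 = 1, and compares the worst-case dispersion error with the classical Taylor coefficients c1 = 9/8, c2 = -1/24.

```python
import math

def max_rel_error(c2, band=0.6 * math.pi, n=400):
    """Worst relative dispersion error over kh in (0, band] of the staggered
    stencil D(kh) = 2*(c1*sin(kh/2) + c2*sin(3*kh/2)), with c1 = 1 - 3*c2
    enforcing consistency (D -> kh as kh -> 0)."""
    c1 = 1.0 - 3.0 * c2
    worst = 0.0
    for i in range(1, n + 1):
        kh = band * i / n
        d = 2.0 * (c1 * math.sin(0.5 * kh) + c2 * math.sin(1.5 * kh))
        worst = max(worst, abs(d / kh - 1.0))
    return worst

taylor_err = max_rel_error(-1.0 / 24.0)          # classical Taylor coefficients
grid = [-0.07 + 0.0005 * i for i in range(81)]   # c2 candidates near -1/24
c2_opt = min(grid, key=max_rel_error)
minimax_err = max_rel_error(c2_opt)
```

Trading a little small-wavenumber accuracy buys a several-fold reduction of the worst-case error over the band; the MTE/MA construction achieves this systematically, with a Remez iteration replacing the crude grid search.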
Multicriteria approximation through decomposition
Burch, C.; Krumke, S.; Marathe, M.; Phillips, C.; Sundberg, E.
1998-06-01
The authors propose a general technique called solution decomposition to devise approximation algorithms with provable performance guarantees. The technique is applicable to a large class of combinatorial optimization problems that can be formulated as integer linear programs. Two key ingredients of their technique involve finding a decomposition of a fractional solution into a convex combination of feasible integral solutions and devising generic approximation algorithms based on calls to such decompositions as oracles. The technique is closely related to randomized rounding. Their method yields as corollaries unified solutions to a number of well studied problems and it provides the first approximation algorithms with provable guarantees for a number of new problems. The particular results obtained in this paper include the following: (1) the authors demonstrate how the technique can be used to provide more understanding of previous results and new algorithms for classical problems such as Multicriteria Spanning Trees, and Suitcase Packing; (2) they also show how the ideas can be extended to apply to multicriteria optimization problems, in which they wish to minimize a certain objective function subject to one or more budget constraints. As corollaries they obtain first non-trivial multicriteria approximation algorithms for problems including the k-Hurdle and the Network Inhibition problems.
NASA Astrophysics Data System (ADS)
Skorupski, Krzysztof
2015-05-01
Black carbon (BC) particles are a product of incomplete combustion of carbon-based fuels. One of the possibilities of studying the optical properties of BC structures is to use the DDA (Discrete Dipole Approximation) method. The main goal of this work was to investigate its accuracy and to identify the most reliable simulation parameters. For the light scattering simulations the ADDA code was used, and as the reference the superposition T-matrix code by Mackowski was selected. The study was divided into three parts. First, DDA simulations for a single particle (sphere) were performed. The results proved that the meshing algorithm can significantly affect the particle shape, and therefore, the extinction diagrams. The volume correction procedure is recommended for sparse or asymmetrical meshes. In the next step large fractal-like aggregates were investigated. When sparse meshes are used, the impact of the volume correction procedure cannot be easily predicted. In some cases it can even lead to more erroneous results. Finally, the optical properties of fractal-like aggregates composed of spheres in point contact were compared to much more realistic structures made up of connected, non-spherical primary particles.
NASA Technical Reports Server (NTRS)
Merrill, W. C.
1978-01-01
The Routh approximation technique for reducing the complexity of system models was applied in the frequency domain to a 16th-order state-variable model of the F100 engine and to a 43rd-order transfer-function model of a launch vehicle boost pump pressure regulator. The results motivate extending the frequency-domain formulation of the Routh method to the time domain in order to handle the state-variable formulation directly. The time-domain formulation was derived and a characterization that specifies all possible Routh similarity transformations was given. The characterization was computed by solving two eigenvalue-eigenvector problems. The application of the time-domain Routh technique to the state-variable engine model is described, and some results are given. Additional computational problems are discussed, including an optimization procedure that can improve the approximation accuracy by taking advantage of the transformation characterization.
NASA Astrophysics Data System (ADS)
Mozharovskiy, A. V.; Artemenko, A. A.; Mal'tsev, A. A.; Maslennikov, R. O.; Sevast'yanov, A. G.; Ssorin, V. N.
2015-11-01
We develop a combined method for calculating the characteristics of the integrated lens antennas for millimeter-wave wireless local radio-communication systems on the basis of the geometrical and physical optics approximations. The method is based on the concepts of geometrical optics for calculating the electromagnetic-field distribution on the lens surface (with allowance for multiple internal re-reflections) and physical optics for determining the antenna-radiated fields in the Fraunhofer zone. Using the developed combined method, we study various integrated lens antennas on the basis of data on the shape and material of the lens used and on the primary-feed radiation model, which is specified analytically or by computer simulation. Optimal values of the cylindrical-extension length, which ensure the maximum antenna directivity equal to 19.1 and 23.8 dBi for the greater and smaller lenses, respectively, are obtained for the hemispherical quartz-glass lenses having the cylindrical extensions with radii of 7.5 and 12.5 mm. In this case, the scanning-angle range of the considered antennas is greater than ±20° for an admissible 2-dB decrease in the directivity of the deflected beam. The calculation results obtained using the developed method are confirmed by the experimental studies performed for the prototypes of the integrated quartz-glass lens antennas within the framework of this research.
NASA Technical Reports Server (NTRS)
Shirts, R. B.; Reinhardt, W. P.
1982-01-01
Substantial short time regularity, even in the chaotic regions of phase space, is found for what is seen as a large class of systems. This regularity manifests itself through the behavior of approximate constants of motion calculated by Padé summation of the Birkhoff-Gustavson normal form expansion; it is attributed to remnants of destroyed invariant tori in phase space. The remnant torus-like manifold structures are used to justify Einstein-Brillouin-Keller semiclassical quantization procedures for obtaining quantum energy levels, even in the absence of complete tori. They also provide a theoretical basis for the calculation of rate constants for intramolecular mode-mode energy transfer. These results are illustrated by means of a thorough analysis of the Hénon-Heiles oscillator problem. Possible generality of the analysis is demonstrated by brief consideration of classical dynamics for the Barbanis Hamiltonian, Zeeman effect in hydrogen and recent results of Wolf and Hase (1980) for the H-C-C fragment.
NASA Astrophysics Data System (ADS)
ANDRE, Frédéric; HOU, Longfeng; SOLOVJOV, Vladimir P.
2016-01-01
The main restriction of k-distribution approaches for applications in radiative heat transfer in gaseous media arises from the use of a scaling or correlation assumption to treat non-uniform situations. It is shown that those cases can be handled exactly by using a multidimensional k-distribution that addresses the problem of spectral correlations without using any simplifying assumptions. Nevertheless, the approach cannot be suggested for engineering applications due to its computational cost. Accordingly, a more efficient method, based on the so-called Multi-Spectral Framework, is proposed to approximate the previous exact formulation. The model is assessed against reference LBL calculations and shown to outperform usual k-distribution approaches for radiative heat transfer in non-uniform media.
Suebka, P.
1984-01-01
In Part I, the excitation spectrum of liquid He II is obtained using a two-body potential consisting of a hard-core potential plus an attractive outer potential. The sum of two Gaussian potentials of Khanna and Das, which is similar to the Lennard-Jones potential, is chosen as the attractive potential. The t-matrix method due to Brueckner and Sawada is adopted, with modifications to replace the interaction potential. The spectrum exhibits the phonon branch and the roton dip that characterize the excitation spectrum of liquid He II. The temperature dependence of the excitation spectrum enters the calculation through the zero-momentum-state occupation number. A better approximation of the thermodynamic functions is obtained by extending Landau's theory to the situation where the excitation energy is a function of temperature as well as of momentum. Our thermodynamic calculations also show qualitative agreement with measurements on He II, as expected.
NASA Astrophysics Data System (ADS)
Pérez, Alejandro; Tuckerman, Mark E.; Müser, Martin H.
2009-05-01
The problems of ergodicity and internal consistency in the centroid and ring-polymer molecular dynamics methods are addressed in the context of a comparative study of the two methods. Enhanced sampling in ring-polymer molecular dynamics (RPMD) is achieved by first performing an equilibrium path integral calculation and then launching RPMD trajectories from selected, stochastically independent equilibrium configurations. It is shown that this approach converges more rapidly than periodic resampling of velocities from a single long RPMD run. Dynamical quantities obtained from RPMD and centroid molecular dynamics (CMD) are compared to exact results for a variety of model systems. Fully converged results for correlation functions are presented for several one-dimensional systems and para-hydrogen near its triple point using an improved sampling technique. Our results indicate that CMD shows very similar performance to RPMD. The quality of each method is further assessed via a new χ2 descriptor constructed by transforming approximate real-time correlation functions from CMD and RPMD trajectories to imaginary time and comparing these to numerically exact imaginary time correlation functions. For para-hydrogen near its triple point, it is found that adiabatic CMD and RPMD both have similar χ2 error.
Langaas, Mette; Bakke, Øyvind
2014-12-01
In genetic association studies, detecting disease-genotype association is a primary goal. We study seven robust test statistics for such association when the underlying genetic model is unknown, for data on disease status (case or control) and genotype (three genotypes of a biallelic genetic marker). In such studies, p-values have predominantly been calculated by asymptotic approximations or by simulated permutations. We consider an exact method, conditional enumeration. When the number of simulated permutations tends to infinity, the permutation p-value approaches the conditional enumeration p-value, but calculating the latter is much more efficient than performing simulated permutations. We have studied case-control sample sizes with 500-5000 cases and 500-15,000 controls, and significance levels from 5 × 10⁻⁸ to 0.05, thus our results are applicable to genetic association studies with only a few genetic markers under study, intermediate follow-up studies, and genome-wide association studies. Our main findings are: (i) If all monotone genetic models are of interest, the best performance in the situations under study is achieved for the robust test statistics based on the maximum over a range of Cochran-Armitage trend tests with different scores and for the constrained likelihood ratio test. (ii) For significance levels below 0.05, for the test statistics under study, asymptotic approximations may give a test size up to 20 times the nominal level, and should therefore be used with caution. (iii) Calculating p-values based on exact conditional enumeration is a powerful, valid and computationally feasible approach, and we advocate its use in genetic association studies.
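The conditional-enumeration idea, summing exact multivariate hypergeometric probabilities over all genotype tables with the observed margins instead of permuting, can be sketched for a single biallelic marker. The sketch below is ours (the weights and counts are illustrative, and it handles one simple trend statistic, not the paper's seven robust statistics); it computes a one-sided exact p-value for a Cochran-Armitage-type trend statistic.

```python
from math import comb

def exact_trend_pvalue(cases, controls, weights=(0, 1, 2)):
    """One-sided exact conditional p-value for a genotype trend statistic.
    cases/controls: counts of the three genotypes among cases and controls.
    Genotype column totals and the number of cases are held fixed."""
    n = [c + d for c, d in zip(cases, controls)]
    R, N = sum(cases), sum(cases) + sum(controls)
    denom = comb(N, R)
    t_obs = sum(w * r for w, r in zip(weights, cases))
    p, total = 0.0, 0.0
    for r0 in range(min(n[0], R) + 1):
        for r1 in range(min(n[1], R - r0) + 1):
            r2 = R - r0 - r1
            if r2 > n[2]:
                continue
            # multivariate hypergeometric probability of this table
            prob = comb(n[0], r0) * comb(n[1], r1) * comb(n[2], r2) / denom
            total += prob
            if weights[0] * r0 + weights[1] * r1 + weights[2] * r2 >= t_obs:
                p += prob
    return p, total

p, total = exact_trend_pvalue((10, 25, 15), (30, 40, 10))
```

Because the margins fix the sample space, the enumeration visits every attainable table exactly once, so the probabilities must sum to one — a useful self-check.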
Ball, J.R.
1986-04-01
This document is a supplement to a "Handbook for Cost Estimating" (NUREG/CR-3971) and provides specific guidance for developing "quick" approximate estimates of the cost of implementing generic regulatory requirements for nuclear power plants. A method is presented for relating the known construction costs for new nuclear power plants (as contained in the Energy Economic Data Base) to the cost of performing similar work, on a back-fit basis, at existing plants. Cost factors are presented to account for variations in such important cost areas as construction labor productivity, engineering and quality assurance, replacement energy, reworking of existing features, and regional variations in the cost of materials and labor. Other cost categories addressed in this handbook include those for changes in plant operating personnel and plant documents, licensee costs, NRC costs, and costs for other government agencies. Data sheets, worksheets, and appropriate cost algorithms are included to guide the user through preparation of rough estimates. A sample estimate is prepared using the method and the estimating tools provided.
NASA Astrophysics Data System (ADS)
Galván, I. Fdez; Sánchez, M. L.; Martín, M. E.; Olivares del Valle, F. J.; Aguilar, M. A.
2003-11-01
ASEP/MD is a computer program designed to implement the Averaged Solvent Electrostatic Potential/Molecular Dynamics (ASEP/MD) method developed by our group. It can be used for the study of solvent effects and properties of molecules in their liquid state or in solution. It is written in the FORTRAN90 programming language, and should be easy to follow, understand, maintain and modify. Given the nature of the ASEP/MD method, external programs are needed for the quantum calculations and molecular dynamics simulations. The present version of ASEP/MD includes interface routines for the GAUSSIAN package, HONDO, and MOLDY, but adding support for other programs is straightforward. This article describes the program and its usage.
Program summary
Title of program: ASEP/MD
Catalogue identifier: ADSF
Program Summary URL: http://cpc.cs.qub.ac.uk/summaries/ADSF
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer for which the program is designed: it has been tested on Intel-based PC and Sun
Operating systems under which the program has been tested: Red Hat Linux 7.2 and SunOS 5.6
Programming language used: FORTRAN90
Memory required to execute with typical data: greatly depends on the system
No. of processors used: 1
Has the code been vectorized or parallelized?: no
No. of bytes in distributed program, including test data, etc.: 44 544
Distribution format: tar gzip file
Keywords: solvent effects, QM/MM methods, mean field approximation, geometry optimization
Nature of physical problem: the study of molecules in solution with quantum methods is a difficult task because of the large number of molecules and configurations that must be taken into account. The quantum mechanics/molecular mechanics methods proposed to date either require massive computational power or oversimplify the solute quantum description.
Method of solution: a non-traditional QM/MM method based on the mean field approximation was developed where a classical molecular
Chalasani, P.; Saias, I.; Jha, S.
1996-04-08
As increasingly large volumes of sophisticated options (called derivative securities) are traded in world financial markets, determining a fair price for these options has become an important and difficult computational problem. Many valuation codes use the binomial pricing model, in which the stock price is driven by a random walk. In this model, the value of an n-period option on a stock is the expected time-discounted value of the future cash flow on an n-period stock price path. Path-dependent options are particularly difficult to value, since the future cash flow depends on the entire stock price path rather than on just the final stock price. Currently such options are approximately priced by Monte Carlo methods, with error bounds that hold only with high probability and which are reduced by increasing the number of simulation runs. In this paper the authors show that pricing an arbitrary path-dependent option is #P-hard. They show that certain types of path-dependent options can be valued exactly in polynomial time. Asian options are path-dependent options that are particularly hard to price, and for these they design deterministic polynomial-time approximation algorithms. They show that the value of a perpetual American put option (which can be computed in constant time) is in many cases a good approximation to the value of an otherwise identical n-period American put option. In contrast to Monte Carlo methods, the algorithms have guaranteed error bounds that are polynomially small (and in some cases exponentially small) in the maturity n. For the error analysis they derive large-deviation results for random walks that may be of independent interest.
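For context, the n-period binomial model prices a non-path-dependent option by backward induction in O(n²) time; path dependence is what destroys this efficiency. A minimal Cox-Ross-Rubinstein sketch (ours; the parameter values are illustrative):

```python
import math

def crr_price(S0, K, r, sigma, T, n, put=False, american=False):
    """Cox-Ross-Rubinstein binomial price of a vanilla option."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(r * dt) - d) / (u - d)      # risk-neutral up-probability
    disc = math.exp(-r * dt)
    # terminal payoffs; vals[j] is the node with j up-moves
    vals = [max(K - S0 * u**j * d**(n - j), 0.0) if put
            else max(S0 * u**j * d**(n - j) - K, 0.0) for j in range(n + 1)]
    # backward induction through the tree
    for step in range(n - 1, -1, -1):
        new = []
        for j in range(step + 1):
            cont = disc * (p * vals[j + 1] + (1.0 - p) * vals[j])
            if american:
                s = S0 * u**j * d**(step - j)
                cont = max(cont, (K - s) if put else (s - K))
            new.append(cont)
        vals = new
    return vals[0]

call    = crr_price(100, 100, 0.05, 0.2, 1.0, 200)
eur_put = crr_price(100, 100, 0.05, 0.2, 1.0, 200, put=True)
am_put  = crr_price(100, 100, 0.05, 0.2, 1.0, 200, put=True, american=True)
```

In the tree, European put-call parity holds exactly, and the American put is worth at least its European counterpart; both are useful sanity checks, and the call converges to the Black-Scholes value as n grows.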
Phenomenological applications of rational approximants
NASA Astrophysics Data System (ADS)
Gonzàlez-Solís, Sergi; Masjuan, Pere
2016-08-01
We illustrate the power of Padé approximants (PAs) as a summation method and explore one of their extensions, the so-called quadratic approximants (QAs), to access both the space-like and the (low-energy) time-like (TL) regions. As an introductory and pedagogical exercise, the function (1/z) ln(1 + z) is approximated by both kinds of approximants. Then, PAs are applied to predict pseudoscalar meson Dalitz decays and to extract Vub from the semileptonic B → πℓνℓ decays. Finally, the π vector form factor in the TL region is explored using QAs.
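The pedagogical example can be reproduced in a few lines: a [1/1] Padé approximant of f(z) = (1/z) ln(1 + z) built from the first three Taylor coefficients already beats the same-order truncated series at z = 1. A sketch of the standard construction, using exact rational arithmetic:

```python
import math
from fractions import Fraction as F

# Taylor coefficients of f(z) = ln(1+z)/z = 1 - z/2 + z^2/3 - ...
c = [F(1, 1), F(-1, 2), F(1, 3)]

# [1/1] Pade approximant (a0 + a1*z)/(1 + b1*z), matched through order z^2:
#   z^0: a0 = c0,  z^1: a1 = c1 + c0*b1,  z^2: 0 = c2 + c1*b1
b1 = -c[2] / c[1]
a0 = c[0]
a1 = c[1] + c[0] * b1

z = 1.0
pade   = (float(a0) + float(a1) * z) / (1.0 + float(b1) * z)  # = 0.7
taylor = sum(float(ci) * z**i for i, ci in enumerate(c))      # ≈ 0.833
exact  = math.log(1.0 + z) / z                                # ln 2 ≈ 0.693
```

With the same three coefficients, the Padé value 0.7 is far closer to ln 2 than the truncated series, which is the summation power the abstract refers to; the slowly converging series is resummed by a rational function that mimics the branch cut.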
NASA Astrophysics Data System (ADS)
Abdel Wahab, N. H.; Salah, Ahmed
2015-05-01
In this paper, the interaction of a three-level -configuration atom and a one-mode quantized electromagnetic cavity field has been studied. The detuning parameters, the Kerr nonlinearity and the arbitrary form of both the field and the intensity-dependent atom-field coupling have been taken into account. The wave function, when the atom and the field are initially prepared in the excited state and a coherent state, respectively, has been obtained by solving the Schrödinger equation. The analytical approximate solution of this model has been obtained by using the modified homotopy analysis method (MHAM). The homotopy analysis method (HAM) is summarized briefly. MHAM is obtained from the HAM combined with the Laplace transform, its inverse, and Padé approximants. MHAM is used to increase the accuracy and accelerate the convergence rate of the truncated series solution obtained by the HAM. The time-dependent parameters of the anti-bunching of photons, the amplitude-squared squeezing and the coherence properties have been calculated. The influence of the detuning parameters, the Kerr nonlinearity and the photon number operator on the temporal behavior of these phenomena has been analyzed. We note that the considered system is sensitive to variations in these parameters.
Asgharzadeh, Hafez; Borazjani, Iman
2017-02-15
diagonal of the Jacobian further improves the performance by 42-74% compared to the full Jacobian. The NKM with an analytical Jacobian showed better performance than the fixed-point Runge-Kutta because it converged with higher time steps and in approximately 30% fewer iterations, even when the grid was stretched and the Reynolds number was increased. In fact, stretching the grid decreased the performance of all methods, but the fixed-point Runge-Kutta performance decreased 4.57 and 2.26 times more than NKM with a diagonal and full Jacobian, respectively, when the stretching factor was increased. The NKM with a diagonal analytical Jacobian and the matrix-free method with an analytical preconditioner are the fastest methods, and the superiority of one over the other depends on the flow problem. Furthermore, the implemented methods are fully parallelized, with parallel efficiency of 80-90% on the problems tested. The NKM with the analytical Jacobian can guide building preconditioners for other techniques to improve their performance in the future.
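The advantage of an analytical Jacobian over fixed-point iteration reported here can be seen even on a toy 2×2 system (our illustration, unrelated to the authors' Navier-Stokes solver): Newton's method with the exact Jacobian reaches a tight residual in far fewer iterations than a simple fixed-point sweep.

```python
import math

def residual(x, y):
    # system: x^2 + y^2 = 4,  x*y = 1
    return max(abs(x * x + y * y - 4.0), abs(x * y - 1.0))

def newton(x, y, tol=1e-10, max_iter=50):
    """Newton's method with the analytical Jacobian [[2x, 2y], [y, x]]."""
    it = 0
    while residual(x, y) > tol and it < max_iter:
        f1, f2 = x * x + y * y - 4.0, x * y - 1.0
        det = 2.0 * (x * x - y * y)            # Jacobian determinant
        dx = (x * f1 - 2.0 * y * f2) / det     # explicit 2x2 solve
        dy = (-y * f1 + 2.0 * x * f2) / det
        x, y, it = x - dx, y - dy, it + 1
    return x, y, it

def fixed_point(x, y, tol=1e-10, max_iter=500):
    """Fixed-point sweep: update x from the first equation, y from the second."""
    it = 0
    while residual(x, y) > tol and it < max_iter:
        x = math.sqrt(4.0 - y * y)
        y = 1.0 / x
        it += 1
    return x, y, it

xn, yn, itn = newton(2.0, 0.5)
xf, yf, itf = fixed_point(2.0, 0.5)
```

Both converge from this starting point, but Newton's quadratic convergence needs only a handful of iterations while the linearly converging fixed-point sweep needs several times more; that gap is what widens further on stiff, stretched-grid flow problems.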
Edison, John R; Monson, Peter A
2014-07-14
Recently we have developed a dynamic mean field theory (DMFT) for lattice gas models of fluids in porous materials [P. A. Monson, J. Chem. Phys. 128(8), 084701 (2008)]. The theory can be used to describe the relaxation processes in the approach to equilibrium or metastable states for fluids in pores and is especially useful for studying systems exhibiting adsorption/desorption hysteresis. In this paper we discuss the extension of the theory to higher order by means of the path probability method (PPM) of Kikuchi and co-workers. We show that this leads to a treatment of the dynamics that is consistent with thermodynamics coming from the Bethe-Peierls or Quasi-Chemical approximation for the equilibrium or metastable equilibrium states of the lattice model. We compare the results from the PPM with those from DMFT and from dynamic Monte Carlo simulations. We find that the predictions from PPM are qualitatively similar to those from DMFT but give somewhat improved quantitative accuracy, in part due to the superior treatment of the underlying thermodynamics. This comes at the cost of greater computational expense associated with the larger number of equations that must be solved.
NASA Astrophysics Data System (ADS)
Bacskay, George B.
1980-05-01
The vertical valence ionization potentials of Ne, H2O and N2 have been calculated by Rayleigh-Schrödinger perturbation and configuration interaction methods. The calculations were carried out in the space of a single determinant reference state and its single and double excitations, using both the N and N - 1 electron Hartree-Fock orbitals as hole/particle bases. The perturbation series for the ion state were generally found to converge fairly slowly in the N electron Hartree-Fock (frozen) orbital basis, but considerably faster in the appropriate N - 1 electron RHF (relaxed) orbital basis. In certain cases, however, due to near-degeneracy effects, partial, and even complete, breakdown of the (non-degenerate) perturbation treatment was observed. The effects of higher excitations on the ionization potentials were estimated by the approximate coupled pair techniques CPA' and CPA″ as well as by a Davidson type correction formula. The final, fully converged CPA″ results are generally in good agreement with those from PNO-CEPA and Green's function calculations as well as experiment.
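The convergence behaviour described here, a perturbation series that converges when the coupling is small relative to the level spacing and degrades near degeneracy, is visible already in a two-level toy model (ours, not the paper's CI treatment), where the Rayleigh-Schrödinger partial sums can be compared against the exact eigenvalue:

```python
import math

delta, v = 1.0, 0.2   # level spacing and coupling; series converges for |v| < delta/2

# exact ground-state energy of H = [[0, v], [v, delta]]
exact = 0.5 * (delta - math.sqrt(delta**2 + 4.0 * v**2))

# Rayleigh-Schrodinger partial sums through 2nd, 4th and 6th order in v
e2 = -v**2 / delta
e4 = e2 + v**4 / delta**3
e6 = e4 - 2.0 * v**6 / delta**5
```

Each partial sum halves (or better) the remaining error here; shrinking delta toward 2|v| makes successive orders improve more slowly and eventually diverge, which is the near-degeneracy breakdown the abstract notes.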
Rasin, A.
1994-04-01
We discuss the idea of approximate flavor symmetries. The relation between approximate flavor symmetries and natural flavor conservation and democracy models is explored. Implications for neutrino physics are also discussed.
NASA Astrophysics Data System (ADS)
Niiniluoto, Ilkka
2014-03-01
Approximation of laws is an important theme in the philosophy of science. If we can make sense of the idea that two scientific laws are "close" to each other, then we can also analyze such methodological notions as approximate explanation of laws, approximate reduction of theories, approximate empirical success of theories, and approximate truth of laws. Proposals for measuring the distance between quantitative scientific laws were given in Niiniluoto (1982, 1987). In this paper, these definitions are reconsidered as a response to the interesting critical remarks by Liu (1999).
Adaptive approximation models in optimization
Voronin, A.N.
1995-05-01
The paper proposes a method for optimization of functions of several variables that substantially reduces the number of objective function evaluations compared to traditional methods. The method is based on the property of iterative refinement of approximation models of the optimand function in approximation domains that contract to the extremum point. It does not require subjective specification of the starting point, step length, or other parameters of the search procedure. The method is designed for efficient optimization of unimodal functions of several (not more than 10-15) variables and can be applied to find the global extremum of polymodal functions and also for optimization of scalarized forms of vector objective functions.
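The contraction idea can be sketched in one dimension (our simplification; the paper treats several variables): fit a local quadratic model to a few samples, move to its minimizer when it improves, and shrink the approximation domain around the best point found so far.

```python
def refine_minimize(f, lo, hi, iters=40, shrink=0.3):
    """Iteratively fit a parabola to three equally spaced samples and
    contract the search interval around the best point found."""
    best = 0.5 * (lo + hi)
    for _ in range(iters):
        m = 0.5 * (lo + hi)
        h = 0.5 * (hi - lo)
        y0, y1, y2 = f(lo), f(m), f(hi)
        cands = [lo, m, hi]
        denom = y2 - 2.0 * y1 + y0
        if abs(denom) > 1e-30:
            # vertex of the interpolating parabola (equally spaced nodes)
            xv = m - h * (y2 - y0) / (2.0 * denom)
            if lo <= xv <= hi:
                cands.append(xv)
        best = min(cands, key=f)
        w = shrink * (hi - lo)
        lo, hi = best - w, best + w   # contract the approximation domain
    return best

x = refine_minimize(lambda t: t**4 - 2.0 * t**2, 0.2, 2.0)  # local minimum at t = 1
```

Each iteration costs a handful of function evaluations while the domain width shrinks geometrically, which is the evaluation-count saving the method aims at; the multivariable version replaces the parabola with a quadratic model in all variables.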
Hayes, S; Taylor, R; Paterson, A
2005-12-01
Forensic facial approximation involves building a likeness of the head and face on the skull of an unidentified individual, with the aim that public broadcast of the likeness will trigger recognition in those who knew the person in life. This paper presents an overview of the collaborative practice between Ronn Taylor (Forensic Sculptor to the Victorian Institute of Forensic Medicine) and Detective Sergeant Adrian Paterson (Victoria Police Criminal Identification Squad). This collaboration involves clay modelling to determine an approximation of the person's head shape and feature location, with surface texture and more speculative elements being rendered digitally onto an image of the model. The advantages of this approach are that through clay modelling anatomical contouring is present, digital enhancement resolves some of the problems of visual perception of a representation, such as edge and shape determination, and the approximation can be easily modified as and when new information is received.
Exponential approximations in optimal design
NASA Technical Reports Server (NTRS)
Belegundu, A. D.; Rajan, S. D.; Rajgopal, J.
1990-01-01
One-point and two-point exponential functions have been developed and shown to be very effective approximations of structural response. The exponential has been compared to the linear, reciprocal, and quadratic fit methods on four test problems in structural analysis. The use of such approximations is attractive in structural optimization because it reduces the number of exact analyses, which involve computationally expensive finite element analysis.
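The one-point exponential approximation referred to above is commonly built by matching the value and slope of the response at the current design point; the sketch below is an illustrative reconstruction under that assumption, not code from the paper. For a stress that varies as 1/A (a reciprocal response), the fitted exponent comes out as p = -1 and the approximation is exact.

```python
def exponential_approx(f, df, x0):
    # One-point exponential approximation of a structural response:
    #   f~(x) = f(x0) * (x / x0) ** p,
    # with exponent p chosen so value and slope match the exact response at x0.
    f0, g0 = f(x0), df(x0)
    p = x0 * g0 / f0
    return lambda x: f0 * (x / x0) ** p
```

The exponent adapts between the linear (p = 1) and reciprocal (p = -1) special cases, which is why this family tends to track structural responses well.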
Gadgets, approximation, and linear programming
Trevisan, L.; Sudan, M.; Sorkin, G.B.; Williamson, D.P.
1996-12-31
We present a linear-programming-based method for finding "gadgets", i.e., combinatorial structures that reduce constraints of one optimization problem to constraints of another. A key step in this method is a simple observation that limits the search space to a finite one. Using this new method we present a number of new, computer-constructed gadgets for several different reductions. The method also answers a previously posed question of how to prove the optimality of gadgets: we show how LP duality gives such proofs. The new gadgets improve hardness results for MAX CUT and MAX DICUT, showing that approximating these problems to within factors of 60/61 and 44/45, respectively, is NP-hard. We also use the gadgets to obtain an improved approximation algorithm for MAX 3SAT which guarantees an approximation ratio of 0.801, improving upon the previous best bound of 0.7704.
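For context on the approximation ratios quoted above, the classical baseline for MAX 3SAT is the uniform random assignment, which satisfies each clause with three distinct variables with probability 7/8. The sketch below (ours, not the paper's 0.801 algorithm) verifies that expectation exactly by enumerating all assignments.

```python
from itertools import product

def satisfied(clause, assignment):
    # clause: tuple of nonzero ints; literal v > 0 means x_v, v < 0 means NOT x_v.
    return any((lit > 0) == assignment[abs(lit)] for lit in clause)

def average_satisfied_fraction(clauses, n_vars):
    # Expected fraction of satisfied clauses under a uniform random assignment,
    # computed exactly by enumerating all 2^n truth assignments.
    total = 0.0
    for bits in product([False, True], repeat=n_vars):
        assignment = {i + 1: bits[i] for i in range(n_vars)}
        total += sum(satisfied(c, assignment) for c in clauses) / len(clauses)
    return total / 2 ** n_vars
```

A 3-clause over distinct variables fails under exactly one of its 8 local assignments, so the expectation is 7/8 per clause regardless of the formula.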
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
A model for approximate spatial reasoning using fuzzy logic to represent the uncertainty in the environment is presented. Algorithms are developed which can be used to reason about spatial information expressed in the form of approximate linguistic descriptions similar to the kind of spatial information processed by humans. Particular attention is given to static spatial reasoning.
Rebolini, Elisa; Izsák, Róbert; Reine, Simen Sommerfelt; Helgaker, Trygve; Pedersen, Thomas Bondo
2016-08-09
We compare the performance of three approximate methods for speeding up the evaluation of the exchange contribution in Hartree-Fock and hybrid Kohn-Sham calculations: the chain-of-spheres algorithm (COSX; Neese, F. Chem. Phys. 2008, 356, 98-109), the pair-atomic resolution-of-identity method (PARI-K; Merlot, P. J. Comput. Chem. 2013, 34, 1486-1496), and the auxiliary density matrix method (ADMM; Guidon, M. J. Chem. Theory Comput. 2010, 6, 2348-2364). Both the efficiency relative to that of a conventional linear-scaling algorithm and the accuracy of total, atomization, and orbital energies are compared for a subset containing 25 of the 200 molecules in the Rx200 set, using double-, triple-, and quadruple-ζ basis sets. The accuracy of relative energies is further compared for small alkane conformers (ACONF test set) and Diels-Alder reactions (DARC test set). Overall, we find that the COSX method provides good accuracy for orbital energies as well as total and relative energies, and the method delivers a satisfactory speedup. The PARI-K and in particular ADMM algorithms require further development and optimization to fully exploit their indisputable potential.
An approximation technique for jet impingement flow
Najafi, Mahmoud; Fincher, Donald; Rahni, Taeibi; Javadi, KH.; Massah, H.
2015-03-10
The analytical approximate solution of a non-linear jet impingement flow model will be demonstrated. We will show that this is an improvement over the series approximation obtained via the Adomian decomposition method, which is itself a powerful method for analysing non-linear differential equations. The results of these approximations will be compared with the Runge-Kutta approximation in order to demonstrate their validity.
Intrinsic Nilpotent Approximation.
1985-06-01
Intrinsic Nilpotent Approximation, Technical Report LIDS-R-1482, Laboratory for Information and Decision Systems, Massachusetts Institute of Technology, Cambridge. The report concerns the approximation of certain infinite-dimensional filtered Lie algebras L by (finite-dimensional) graded nilpotent Lie algebras.
Anomalous diffraction approximation limits
NASA Astrophysics Data System (ADS)
Videen, Gorden; Chýlek, Petr
It has been reported in a recent article [Liu, C., Jonas, P.R., Saunders, C.P.R., 1996. Accuracy of the anomalous diffraction approximation to light scattering by column-like ice crystals. Atmos. Res., 41, pp. 63-69] that the anomalous diffraction approximation (ADA) accuracy does not depend on particle refractive index, but instead is dependent on the particle size parameter. Since this is at odds with previous research, we thought these results warranted further discussion.
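For reference, the ADA extinction efficiency for a sphere has the closed form due to van de Hulst, Q_ext(ρ) = 2 - (4/ρ) sin ρ + (4/ρ²)(1 - cos ρ), with phase-shift parameter ρ = 2x(m - 1); note that the size parameter x and the refractive index m enter only through ρ. A minimal sketch:

```python
import math

def ada_qext(rho):
    # Anomalous diffraction approximation (van de Hulst) extinction efficiency
    # for a sphere; rho = 2*x*(m - 1) is the phase-shift parameter built from
    # the size parameter x and the (real) refractive index m.
    return 2.0 - (4.0 / rho) * math.sin(rho) + (4.0 / rho**2) * (1.0 - math.cos(rho))
```

The small-rho expansion gives Q_ext ≈ ρ²/2, and for large ρ the efficiency oscillates about the geometric-optics limit of 2 with decaying amplitude 4/ρ.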
NASA Astrophysics Data System (ADS)
Zhang, Xi; Lu, Jinling; Yuan, Shifei; Yang, Jun; Zhou, Xuan
2017-03-01
This paper proposes a novel parameter identification method for the lithium-ion (Li-ion) battery equivalent circuit model (ECM) that takes the electrochemical properties into account. An improved pseudo-two-dimensional (P2D) model is established on the basis of partial differential equations (PDEs): the electrolyte potential is simplified from a nonlinear to a linear expression, while the terminal voltage is decomposed into the electrolyte potential, open-circuit voltage (OCV), electrode overpotentials, internal resistance drop, and so on. Model order reduction is carried out by simplifying the PDEs with the Laplace transform, the inverse Laplace transform, Padé approximation, etc. A unified second-order transfer function between cell voltage and current is obtained, making the model directly comparable with the ECM. The final objective is to relate the ECM resistances and capacitances to the electrochemical parameters so that, under various conditions, the ECM precision can be improved by incorporating the battery's interior properties for further applications, e.g., SOC estimation. Finally, simulation and experimental results confirm the correctness and validity of the proposed methodology.
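As a minimal illustration of the Padé step used in model order reductions like the one above, the [1/1] Padé approximant can be built directly from the first three Taylor coefficients; the helper below is ours, shown on exp(-x), where it outperforms the first-order Taylor polynomial.

```python
def pade_1_1(c0, c1, c2):
    # Build the [1/1] Pade approximant P(x)/Q(x) = (p0 + p1 x) / (1 + q1 x)
    # from the Taylor series c0 + c1 x + c2 x^2 + ...
    # Matching through order x^2 gives q1 = -c2/c1, p0 = c0, p1 = c1 + c0*q1.
    q1 = -c2 / c1
    p0 = c0
    p1 = c1 + c0 * q1
    return lambda x: (p0 + p1 * x) / (1.0 + q1 * x)
```

For exp(-x), with coefficients 1, -1, 1/2, this yields (1 - x/2)/(1 + x/2), which stays far closer to the true function than the truncated series of the same order, the usual reason Padé approximants appear in transfer-function reductions.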
NASA Technical Reports Server (NTRS)
Dutta, Soumitra
1988-01-01
Much of human reasoning is approximate in nature. Formal models of reasoning traditionally try to be precise: they reject the fuzziness of concepts in natural use and replace them with non-fuzzy scientific explicata by a process of precisiation. As an alternative to this approach, it has been suggested that, rather than regarding human reasoning processes as approximations to some more refined and exact logical process that can be carried out with mathematical precision, the essence and power of human reasoning lie in its capability to grasp and use inexact concepts directly. This view is supported by the widespread fuzziness of simple everyday terms (e.g., near, tall) and the complexity of ordinary tasks (e.g., cleaning a room). Spatial reasoning is an area where humans consistently reason approximately with demonstrably good results. Consider the case of crossing a traffic intersection. We have only an approximate idea of the locations and speeds of various obstacles (e.g., persons and vehicles), but we nevertheless manage to cross such traffic intersections without any harm. The details of the mental processes which enable us to carry out such intricate tasks in such an apparently simple manner are not well understood. However, it is desirable that we try to incorporate such approximate reasoning techniques into our computer systems. Approximate spatial reasoning is very important for intelligent mobile agents (e.g., robots), especially for those operating in uncertain, unknown, or dynamic domains.
ERIC Educational Resources Information Center
Dong, Nianbo; Lipsey, Mark
2014-01-01
When randomized control trials (RCT) are not feasible, researchers seek other methods to make causal inference, e.g., propensity score methods. One of the underlying assumptions for the propensity score methods to obtain unbiased treatment effect estimates is the ignorability assumption, that is, conditional on the propensity score, treatment…
NASA Astrophysics Data System (ADS)
Zhang, Shen; Wang, Hongwei; Kang, Wei; Zhang, Ping; He, X. T.
2016-04-01
An extended first-principles molecular dynamics (FPMD) method based on the Kohn-Sham scheme is proposed to elevate the temperature limit of the FPMD method in the calculation of dense plasmas. The extended method treats the wave functions of high-energy electrons analytically as plane waves and thus expands the application of the FPMD method to the region of hot dense plasmas without incurring formidable computational costs. In addition, the extended method inherits the high accuracy of the Kohn-Sham scheme and keeps the information on electronic structures. This gives an edge to the extended method in the calculation of mixtures of plasmas composed of heterogeneous ions, high-Z dense plasmas, lowering of ionization potentials, X-ray absorption/emission spectra, and opacities, which are of particular interest to astrophysics, inertial confinement fusion engineering, and laboratory astrophysics.
NASA Astrophysics Data System (ADS)
Asgharzadeh, Hafez; Borazjani, Iman
2014-11-01
Time-step-size restrictions and low convergence rates are major bottlenecks for implicit solution of the Navier-Stokes equations in simulations involving complex geometries with moving boundaries. The Newton-Krylov method (NKM) combines a Newton-type method for super-linearly convergent solution of nonlinear equations with Krylov subspace methods for solving the Newton correction equations, and it can theoretically address both bottlenecks. The efficiency of this method depends heavily on the Jacobian-forming scheme: e.g., automatic differentiation is very expensive, and Jacobian-free methods slow down as the mesh is refined. A novel, computationally efficient analytical Jacobian for the NKM was developed to solve the unsteady incompressible Navier-Stokes momentum equations on staggered curvilinear grids with immersed boundaries. The NKM was validated and verified against the Taylor-Green vortex and pulsatile flow in a 90-degree bend, and it efficiently handles complex geometries such as an intracranial aneurysm with multiple overset grids, pulsatile inlet flow, and immersed boundaries. The NKM is shown to be more efficient than semi-implicit Runge-Kutta methods and Jacobian-free Newton-Krylov methods. We believe the NKM can be applied to many CFD techniques to decrease the computational cost. This work was supported partly by NIH Grant R03EB014860, and the computational resources were partly provided by the Center for Computational Research (CCR) at the University at Buffalo.
ERIC Educational Resources Information Center
Wolff, Hans
This paper deals with a stochastic process for the approximation of the root of a regression equation. This process was first suggested by Robbins and Monro. The main result here is a necessary and sufficient condition on the iteration coefficients for convergence of the process (convergence with probability one and convergence in the quadratic…
Computer Experiments for Function Approximations
Chang, A; Izmailov, I; Rizzo, S; Wynter, S; Alexandrov, O; Tong, C
2007-10-15
This research project falls in the domain of response surface methodology, which seeks cost-effective ways to accurately fit an approximate function to experimental data. Modeling and computer simulation are essential tools in modern science and engineering. A computer simulation can be viewed as a function that receives input from a given parameter space and produces an output. Running the simulation repeatedly amounts to an equivalent number of function evaluations, and for complex models, such function evaluations can be very time-consuming. It is then of paramount importance to intelligently choose a relatively small set of sample points in the parameter space at which to evaluate the given function, and then use this information to construct a surrogate function that is close to the original function and takes little time to evaluate. This study was divided into two parts. The first part consisted of comparing four sampling methods and two function approximation methods in terms of efficiency and accuracy for simple test functions. The sampling methods used were Monte Carlo, Quasi-Random LPτ, Maximin Latin Hypercubes, and Orthogonal-Array-Based Latin Hypercubes. The function approximation methods utilized were Multivariate Adaptive Regression Splines (MARS) and Support Vector Machines (SVM). The second part of the study concerned adaptive sampling methods with a focus on creating useful sets of sample points specifically for monotonic functions, functions with a single minimum and functions with a bounded first derivative.
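Of the sampling methods compared above, Latin hypercube sampling is easy to sketch: each dimension of the unit cube is cut into n equal strata and every stratum is used exactly once. A minimal stdlib-only version (ours, not the study's code):

```python
import random

def latin_hypercube(n_samples, n_dims, rng=None):
    # One sample per row. For each dimension, [0, 1) is divided into
    # n_samples equal strata; a random permutation assigns each sample
    # to a distinct stratum, and a point is drawn uniformly inside it.
    rng = rng or random.Random(0)
    samples = [[0.0] * n_dims for _ in range(n_samples)]
    for d in range(n_dims):
        strata = list(range(n_samples))
        rng.shuffle(strata)
        for i in range(n_samples):
            samples[i][d] = (strata[i] + rng.random()) / n_samples
    return samples
```

Compared with plain Monte Carlo, this guarantees that every one-dimensional projection of the sample is evenly spread, which is what makes it attractive for expensive simulations.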
Energy conservation - A test for scattering approximations
NASA Technical Reports Server (NTRS)
Acquista, C.; Holland, A. C.
1980-01-01
The roles of the extinction theorem and energy conservation in obtaining the scattering and absorption cross sections for several light scattering approximations are explored. It is shown that the Rayleigh, Rayleigh-Gans, anomalous diffraction, geometrical optics, and Shifrin approximations all lead to reasonable values of the cross sections, while the modified Mie approximation does not. Further examination of the modified Mie approximation for the ensembles of nonspherical particles reveals additional problems with that method.
NASA Astrophysics Data System (ADS)
Ferranti, Francesco; Rolain, Yves
2017-01-01
This paper proposes a novel state-space matrix interpolation technique to generate linear parameter-varying (LPV) models starting from a set of local linear time-invariant (LTI) models estimated at fixed operating conditions. Since the state-space representation of LTI models is unique up to a similarity transformation, the state-space matrices need to be represented in a common state-space form. This is needed to avoid potentially large variations as a function of the scheduling parameters of the state-space matrices to be interpolated due to underlying similarity transformations, which might degrade the accuracy of the interpolation significantly. Underlying linear state coordinate transformations for a set of local LTI models are extracted by the computation of similarity transformation matrices by means of linear least-squares approximations. These matrices are then used to transform the local LTI state-space matrices into a form suitable to achieve accurate interpolation results. The proposed LPV modeling technique is validated by pertinent numerical results.
Optimizing the Zeldovich approximation
NASA Technical Reports Server (NTRS)
Melott, Adrian L.; Pellman, Todd F.; Shandarin, Sergei F.
1994-01-01
We have recently learned that the Zeldovich approximation can be successfully used for a far wider range of gravitational instability scenarios than formerly proposed; we study here how to extend this range. In previous work (Coles, Melott and Shandarin 1993, hereafter CMS) we studied the accuracy of several analytic approximations to gravitational clustering in the mildly nonlinear regime. We found that what we called the 'truncated Zeldovich approximation' (TZA) was better than any other (except in one case the ordinary Zeldovich approximation) over a wide range from linear to mildly nonlinear (sigma approximately 3) regimes. TZA was specified by setting Fourier amplitudes equal to zero for all wavenumbers greater than k_nl, where k_nl marks the transition to the nonlinear regime. Here, we study the cross-correlation of generalized TZA with a group of n-body simulations for three shapes of window function: sharp k-truncation (as in CMS), a tophat in coordinate space, or a Gaussian. We also study the variation in the cross-correlation as a function of initial truncation scale within each type. We find that k-truncation, which was so much better than other things tried in CMS, is the worst of these three window shapes. We find that a Gaussian window exp(-k^2/(2 k_G^2)) applied to the initial Fourier amplitudes is the best choice. It produces a greatly improved cross-correlation in those cases which most needed improvement, e.g. those with more small-scale power in the initial conditions. The optimum choice of k_G for the Gaussian window is (a somewhat spectrum-dependent) 1 to 1.5 times k_nl. Although all three windows produce similar power spectra and density distribution functions after application of the Zeldovich approximation, the agreement of the phases of the Fourier components with the n-body simulation is better for the Gaussian window. We therefore ascribe the success of the best-choice Gaussian window to its superior treatment
Topics in Metric Approximation
NASA Astrophysics Data System (ADS)
Leeb, William Edward
This thesis develops effective approximations of certain metrics that occur frequently in pure and applied mathematics. We show that distances that often arise in applications, such as the Earth Mover's Distance between two probability measures, can be approximated by easily computed formulas for a wide variety of ground distances. We develop simple and easily computed characterizations both of norms measuring a function's regularity -- such as the Lipschitz norm -- and of their duals. We are particularly concerned with the tensor product of metric spaces, where the natural notion of regularity is not the Lipschitz condition but the mixed Lipschitz condition. A theme that runs throughout this thesis is that snowflake metrics (metrics raised to a power less than 1) are often better-behaved than ordinary metrics. For example, we show that snowflake metrics on finite spaces can be approximated by the average of tree metrics with a distortion bounded by intrinsic geometric characteristics of the space and not the number of points. Many of the metrics for which we characterize the Lipschitz space and its dual are snowflake metrics. We also present applications of the characterization of certain regularity norms to the problem of recovering a matrix that has been corrupted by noise. We are able to achieve an optimal rate of recovery for certain families of matrices by exploiting the relationship between mixed-variable regularity conditions and the decay of a function's coefficients in a certain orthonormal basis.
Nonlinear Filtering and Approximation Techniques
1991-09-01
NASA Astrophysics Data System (ADS)
Cassam-Chenaï, Patrick; Suo, Bingbing; Liu, Wenjian
2015-07-01
We introduce the electron-nucleus mean-field configuration-interaction (EN-MFCI) approach. It consists in building an effective Hamiltonian for the electrons taking into account a mean field due to the nuclear motion and, conversely, in building an effective Hamiltonian for the nuclear motion taking into account a mean field due to the electrons. The eigenvalue problems of these Hamiltonians are solved in basis sets giving partial eigensolutions for the active degrees of freedom (DOF's), that is to say, either for the electrons or for nuclear motion. The process can be iterated or electron and nuclear motion DOF's can be contracted in a CI calculation. In the EN-MFCI reduction of the molecular Schrödinger equation to an electronic and a nuclear problem, the electronic wave functions do not depend parametrically upon nuclear coordinates. So, it is different from traditional adiabatic methods. Furthermore, when contracting electronic and nuclear functions, a direct product basis set is built in contrast with methods which treat electrons and nuclei on the same footing, but where electron-nucleus explicitly correlated coordinates are used. Also, the EN-MFCI approach can make use of the partition of molecular DOF's into translational, rotational, and internal DOF's. As a result, there is no need to eliminate translations and rotations from the calculation, and the convergence of vibrational levels is facilitated by the use of appropriate internal coordinates. The method is illustrated on diatomic molecules.
Approximate Qualitative Temporal Reasoning
2001-01-01
i.e., their boundaries can be placed in such a way that they coincide with the cell boundaries of the appropriate partition of the time-line. Approximation is defined with respect to some appropriate partition of the time-line; for example, "I felt well on Saturday. When I measured my temperature, I had a fever on Monday and on..."
NASA Astrophysics Data System (ADS)
Shishkin, G. I.; Shishkina, L. P.
2011-06-01
In the case of the Dirichlet problem for a singularly perturbed ordinary differential reaction-diffusion equation, a new approach is used for the construction of finite difference schemes whose solutions and normalized first- and second-order derivatives converge in the maximum norm uniformly with respect to a perturbation parameter ɛ ∈ (0, 1]; the normalized derivatives are ɛ-uniformly bounded. The key idea of this approach to the construction of ɛ-uniformly convergent finite difference schemes is the use of uniform grids for solving grid subproblems for the regular and singular components of the grid solution. Based on the asymptotic construction technique, a scheme of the solution decomposition method is constructed such that its solution and its normalized first- and second-order derivatives converge ɛ-uniformly at the rate of O(N^-2 ln^2 N), where N + 1 is the number of points in the uniform grids. Using the Richardson technique, an improved scheme of the solution decomposition method is constructed such that its solution and its normalized first and second derivatives converge ɛ-uniformly in the maximum norm at the rate of O(N^-4 ln^4 N).
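The Richardson technique mentioned above combines results at two grid resolutions to cancel the leading error term and raise the convergence order. A minimal illustration on a smooth function (ours; the paper applies the idea to ɛ-uniform difference schemes, not to this toy derivative):

```python
def central_diff(f, x, h):
    # Second-order central difference approximation of f'(x): error O(h^2).
    return (f(x + h) - f(x - h)) / (2 * h)

def richardson(f, x, h):
    # Richardson extrapolation: combine step sizes h and h/2 so the O(h^2)
    # error terms cancel, leaving an O(h^4) approximation of f'(x).
    d_h = central_diff(f, x, h)
    d_h2 = central_diff(f, x, h / 2)
    return (4 * d_h2 - d_h) / 3
```

For f(x) = x^3 the central difference has error exactly h^2, and the extrapolated value is exact, mirroring how the combination of two grids lifts O(N^-2 ln^2 N) to O(N^-4 ln^4 N) in the scheme above.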
Xie, Hang; Jiang, Feng; Tian, Heng; Zheng, Xiao; Kwok, Yanho; Chen, Shuguang; Yam, ChiYung; Yan, YiJing; Chen, Guanhua
2012-07-28
Based on our hierarchical equations of motion for time-dependent quantum transport [X. Zheng, G. H. Chen, Y. Mo, S. K. Koo, H. Tian, C. Y. Yam, and Y. J. Yan, J. Chem. Phys. 133, 114101 (2010)], we develop an efficient and accurate numerical algorithm to solve the Liouville-von Neumann equation. We solve the real-time evolution of the reduced single-electron density matrix at the tight-binding level. Calculations are carried out to simulate the transient current through a linear chain of atoms, with each atom represented by a single orbital. The self-energy matrix is expanded in terms of multiple Lorentzian functions, and the Fermi distribution function is evaluated via the Padé spectrum decomposition. This Lorentzian-Padé decomposition scheme is employed to simulate the transient current. With sufficient Lorentzian functions used to fit the self-energy matrices, we show that the lead spectral function and the dynamic response can be treated accurately. Compared to conventional master equation approaches, our method is much more efficient, as the computational time scales cubically with the system size and linearly with the simulation time. As a result, simulations of the transient currents through systems containing up to one hundred atoms have been carried out. As density functional theory is also an effective one-particle theory, the Lorentzian-Padé decomposition scheme developed here can be generalized for first-principles simulation of realistic systems.
Approximate Counting of Graphical Realizations.
Erdős, Péter L; Kiss, Sándor Z; Miklós, István; Soukup, Lajos
2015-01-01
In 1999, Kannan, Tetali and Vempala proposed an MCMC method to uniformly sample all possible realizations of a given graphical degree sequence and conjectured its rapidly mixing nature. Recently their conjecture was proved for regular graphs (by Cooper, Dyer and Greenhill, 2007), for regular directed graphs (by Greenhill, 2011) and for half-regular bipartite graphs (by Miklós, Erdős and Soukup, 2013). Several heuristics for counting the number of possible realizations exist (via sampling processes), and while they work well in practice, so far no approximation guarantees exist for such an approach. This paper is the first to develop a method for counting realizations with a provable approximation guarantee. In fact, we solve a slightly more general problem: besides the graphical degree sequence, a small set of forbidden edges is also given. We show that for this general problem (which contains the Greenhill problem and the Miklós, Erdős and Soukup problem as special cases) the derived MCMC process is rapidly mixing. Further, we show that this new problem is self-reducible; therefore, it provides a fully polynomial randomized approximation scheme (a.k.a. FPRAS) for counting all realizations.
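The MCMC process analyzed above is based on degree-preserving edge swaps. A minimal sketch of such a chain on a simple graph (ours; it illustrates the moves, not the paper's mixing-time analysis):

```python
import random

def double_edge_swap(edges, steps, rng=None):
    # Degree-preserving Markov chain: repeatedly pick two edges (a,b), (c,d)
    # and rewire them to (a,d), (c,b), rejecting moves that would create a
    # self-loop or a multi-edge, so every state is a simple graph with the
    # same degree sequence.
    rng = rng or random.Random(1)
    edges = [tuple(e) for e in edges]
    edge_set = set(frozenset(e) for e in edges)
    for _ in range(steps):
        i, j = rng.sample(range(len(edges)), 2)
        (a, b), (c, d) = edges[i], edges[j]
        if len({a, b, c, d}) < 4:
            continue  # shared endpoint: swap would create a self-loop
        if frozenset((a, d)) in edge_set or frozenset((c, b)) in edge_set:
            continue  # swap would create a multi-edge
        edge_set -= {frozenset((a, b)), frozenset((c, d))}
        edge_set |= {frozenset((a, d)), frozenset((c, b))}
        edges[i], edges[j] = (a, d), (c, b)
    return edges

def degrees(edges):
    deg = {}
    for a, b in edges:
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    return deg
```

Running the chain long enough samples (approximately uniformly, under rapid mixing) from the realizations of the starting graph's degree sequence.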
Repetition through Successive Approximations.
ERIC Educational Resources Information Center
Littell, Katherine M.
This study was conducted in an attempt to provide an alternative to the long-established method of tape listening and repetition drills, a method that has had disappointing results. It is suggested that the rate of speed of phonic presentation is not commensurate with the rate of comprehension. The proposed method seeks to prevent cognitive…
Hierarchical Approximate Bayesian Computation
Turner, Brandon M.; Van Zandt, Trisha
2013-01-01
Approximate Bayesian computation (ABC) is a powerful technique for estimating the posterior distribution of a model’s parameters. It is especially important when the model to be fit has no explicit likelihood function, which happens for computational (or simulation-based) models such as those that are popular in cognitive neuroscience and other areas in psychology. However, ABC is usually applied only to models with few parameters. Extending ABC to hierarchical models has been difficult because high-dimensional hierarchical models add computational complexity that conventional ABC cannot accommodate. In this paper we summarize some current approaches for performing hierarchical ABC and introduce a new algorithm called Gibbs ABC. This new algorithm incorporates well-known Bayesian techniques to improve the accuracy and efficiency of the ABC approach for estimation of hierarchical models. We then use the Gibbs ABC algorithm to estimate the parameters of two models of signal detection, one with and one without a tractable likelihood function. PMID:24297436
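The basic rejection form of ABC discussed above can be sketched compactly: draw parameters from the prior, simulate data, and keep the draws whose simulated summary statistic lands close to the observed one. The toy model and all names below are ours, not from the paper (which develops the hierarchical Gibbs ABC algorithm):

```python
import random
import statistics

def rejection_abc(observed_stat, prior_draw, simulate, n_draws=5000,
                  tol=0.1, rng=None):
    # Rejection ABC: accepted draws approximate the posterior without ever
    # evaluating a likelihood function.
    rng = rng or random.Random(42)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw(rng)
        if abs(simulate(theta, rng) - observed_stat) < tol:
            accepted.append(theta)
    return accepted

# Toy model: data are N(theta, 1); the summary statistic is the mean of 20 samples.
def prior_draw(rng):
    return rng.uniform(-5.0, 5.0)

def simulate(theta, rng):
    return statistics.fmean(rng.gauss(theta, 1.0) for _ in range(20))
```

Shrinking the tolerance trades acceptance rate for posterior accuracy, which is exactly the cost that makes naive ABC impractical for high-dimensional hierarchical models and motivates the Gibbs ABC approach.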
Countably QC-Approximating Posets
Mao, Xuxin; Xu, Luoshan
2014-01-01
As a generalization of countably C-approximating posets, the concept of countably QC-approximating posets is introduced. With the countably QC-approximating property, some characterizations of generalized completely distributive lattices and generalized countably approximating posets are given. The main results are as follows: (1) a complete lattice is generalized completely distributive if and only if it is countably QC-approximating and weakly generalized countably approximating; (2) a poset L having countably directed joins is generalized countably approximating if and only if the lattice σ_c(L)^op of all σ-Scott-closed subsets of L is weakly generalized countably approximating. PMID:25165730
Fermion tunneling beyond semiclassical approximation
Majhi, Bibhas Ranjan
2009-02-15
Applying the Hamilton-Jacobi method beyond the semiclassical approximation prescribed in R. Banerjee and B. R. Majhi, J. High Energy Phys. 06 (2008) 095 for the scalar particle, Hawking radiation as tunneling of the Dirac particle through an event horizon is analyzed. We show that, as before, all quantum corrections in the single particle action are proportional to the usual semiclassical contribution. We also compute the modifications to the Hawking temperature and Bekenstein-Hawking entropy for the Schwarzschild black hole. Finally, the coefficient of the logarithmic correction to entropy is shown to be related with the trace anomaly.
APPROXIMATE MULTIPHASE FLOW MODELING BY CHARACTERISTIC METHODS
The flow of petroleum hydrocarbons, organic solvents and other liquids that are immiscible with water presents the nation with some of the most difficult subsurface remediation problems. One aspect of contaminant transport associated with releases of such liquids is the transport as a...
Advanced Concepts and Methods of Approximate Reasoning
1989-12-01
about the real-world system being studied. This characterization was derived by Carnap [5], who also proposed a conceptual procedure for the generation... of descriptions of all possible states of affairs. While Carnap considered first-order-logic systems in his characterization of the concept, we shall... It is very important to remark, however, that the Carnap procedure is a conceptual process intended primarily to formalize the notion of possible
Inversion and approximation of Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
A method of inverting Laplace transforms by using a set of orthonormal functions is reported. As a byproduct of the inversion, it is shown how complicated Laplace transforms can be approximated by a transform with a series of simple poles along the real axis of the left half-plane. The inversion and approximation process is simple enough to be implemented on a programmable hand calculator.
An approximation for inverse Laplace transforms
NASA Technical Reports Server (NTRS)
Lear, W. M.
1981-01-01
A programmable calculator runs a simple finite-series approximation for Laplace transform inversion. Utilizing a family of orthonormal functions, the approximation can be used for a wide range of transforms, including those encountered in feedback control problems. The method works well as long as F(t) decays to zero as t approaches infinity, and so it is applicable to most physical systems.
Factorized Diffusion Map Approximation.
Amizadeh, Saeed; Valizadegan, Hamed; Hauskrecht, Milos
2012-01-01
Diffusion maps are among the most powerful Machine Learning tools to analyze and work with complex high-dimensional datasets. Unfortunately, the estimation of these maps from a finite sample is known to suffer from the curse of dimensionality. Motivated by other machine learning models for which the existence of structure in the underlying distribution of data can reduce the complexity of estimation, we study and show how the factorization of the underlying distribution into independent subspaces can help us to estimate diffusion maps more accurately. Building upon this result, we propose and develop an algorithm that can automatically factorize a high dimensional data space in order to minimize the error of estimation of its diffusion map, even in the case when the underlying distribution is not decomposable. Experiments on both the synthetic and real-world datasets demonstrate improved estimation performance of our method over the standard diffusion-map framework.
Factorized Diffusion Map Approximation
Amizadeh, Saeed; Valizadegan, Hamed; Hauskrecht, Milos
2013-01-01
Diffusion maps are among the most powerful Machine Learning tools to analyze and work with complex high-dimensional datasets. Unfortunately, the estimation of these maps from a finite sample is known to suffer from the curse of dimensionality. Motivated by other machine learning models for which the existence of structure in the underlying distribution of data can reduce the complexity of estimation, we study and show how the factorization of the underlying distribution into independent subspaces can help us to estimate diffusion maps more accurately. Building upon this result, we propose and develop an algorithm that can automatically factorize a high dimensional data space in order to minimize the error of estimation of its diffusion map, even in the case when the underlying distribution is not decomposable. Experiments on both the synthetic and real-world datasets demonstrate improved estimation performance of our method over the standard diffusion-map framework. PMID:25309676
The closure approximation in the hierarchy equations.
NASA Technical Reports Server (NTRS)
Adomian, G.
1971-01-01
The expectation of the solution process in a stochastic operator equation can be obtained from averaged equations only under very special circumstances. Conditions for validity are given and the significance and validity of the approximation in widely used hierarchy methods and the 'self-consistent field' approximation in nonequilibrium statistical mechanics are clarified. The error at any level of the hierarchy can be given and can be avoided by the use of the iterative method.
Alternative approximation concepts for space frame synthesis
NASA Technical Reports Server (NTRS)
Lust, R. V.; Schmit, L. A.
1985-01-01
A method for space frame synthesis based on the application of a full gamut of approximation concepts is presented. It is found that with the thoughtful selection of design space, objective function approximation, constraint approximation and mathematical programming problem formulation options it is possible to obtain near minimum mass designs for a significant class of space frame structural systems while requiring fewer than 10 structural analyses. Example problems are presented which demonstrate the effectiveness of the method for frame structures subjected to multiple static loading conditions with limits on structural stiffness and strength.
NASA Technical Reports Server (NTRS)
Dinyavari, M. A. H.; Friedmann, P. P.
1984-01-01
Several incompressible finite-time arbitrary-motion airfoil theories suitable for coupled flap-lag-torsional aeroelastic analysis of helicopter rotors in hover and forward flight are derived. These theories include generalized Greenberg's theory, generalized Loewy's theory, and a staggered cascade theory. The generalized Greenberg's and staggered cascade theories were derived directly in Laplace domain considering the finite length of the wake and using operational methods. The load expressions are presented in Laplace, frequency, and time domains. Approximate time domain loads for the various generalized theories, discussed in the paper, are obtained by developing finite state models using the Pade approximant of the appropriate lift deficiency functions. Three different methods for constructing Pade approximants of the lift deficiency functions were considered and the more flexible one was used. Pade approximants of Loewy's lift deficiency function, for various wake spacing and radial location parameters of a helicopter typical rotor blade section, are presented.
DALI: Derivative Approximation for LIkelihoods
NASA Astrophysics Data System (ADS)
Sellentin, Elena
2015-07-01
DALI (Derivative Approximation for LIkelihoods) is a fast approximation of non-Gaussian likelihoods. It extends the Fisher Matrix in a straightforward way and allows for a wider range of posterior shapes. The code is written in C/C++.
NASA Astrophysics Data System (ADS)
Sahal-Bréchot, Sylvie; Dimitrijević, Milan; Nessib, Nabil
2014-06-01
"Stark broadening" theory and calculations have been extensively developed for about 50 years. The theory can now be considered as mature for many applications, especially for accurate spectroscopic diagnostics and modeling, in astrophysics, laboratory plasma physics and technological plasmas, as well. This requires the knowledge of numerous collisional line profiles. In order to meet these needs, the "SCP" (semiclassical perturbation) method and numerical code were created and developed. The SCP code is now extensively used for the needs of spectroscopic diagnostics and modeling, and the results of the published calculations are displayed in the STARK-B database. The aim of the present paper is to introduce the main approximations leading to the impact semiclassical perturbation method and to give the formulae entering the numerical SCP code, in order to understand the validity conditions of the method and of the results, and also to understand some regularities and systematic trends. This would also allow one to compare the method and its results to those of other methods and codes.
Reconstructing the Nucleon-Nucleon Potential by a New Coupled-Channel Inversion Method
Pupasov, Andrey; Samsonov, Boris F.; Sparenberg, Jean-Marc; Baye, Daniel
2011-04-15
A second-order supersymmetric transformation is presented, for the two-channel Schroedinger equation with equal thresholds. It adds a Breit-Wigner term to the mixing parameter, without modifying the eigenphase shifts, and modifies the potential matrix analytically. The iteration of a few such transformations allows a precise fit of realistic mixing parameters in terms of a Pade expansion of both the scattering matrix and the effective-range function. The method is applied to build an exactly solvable potential for the neutron-proton {sup 3}S{sub 1}-{sup 3}D{sub 1} case.
Approximation concepts for efficient structural synthesis
NASA Technical Reports Server (NTRS)
Schmit, L. A., Jr.; Miura, H.
1976-01-01
It is shown that efficient structural synthesis capabilities can be created by using approximation concepts to mesh finite element structural analysis methods with nonlinear mathematical programming techniques. The history of the application of mathematical programming techniques to structural design optimization problems is reviewed. Several rather general approximation concepts are described along with the technical foundations of the ACCESS 1 computer program, which implements several approximation concepts. A substantial collection of structural design problems involving truss and idealized wing structures is presented. It is concluded that since the basic ideas employed in creating the ACCESS 1 program are rather general, its successful development supports the contention that the introduction of approximation concepts will lead to the emergence of a new generation of practical and efficient, large scale, structural synthesis capabilities in which finite element analysis methods and mathematical programming algorithms will play a central role.
Approximate Brueckner orbitals in electron propagator calculations
Ortiz, J.V.
1999-12-01
Orbitals and ground-state correlation amplitudes from the so-called Brueckner doubles approximation of coupled-cluster theory provide a useful reference state for electron propagator calculations. An operator manifold with hole, particle, two-hole-one-particle and two-particle-one-hole components is chosen. The resulting approximation is compared with third-order algebraic diagrammatic construction [2ph-TDA, ADC(3)] and 3+ methods. The enhanced versatility of this approximation is demonstrated through calculations on valence ionization energies, core ionization energies, electron detachment energies of anions, and on a molecule with partial biradical character, ozone.
Approximate String Matching with Reduced Alphabet
NASA Astrophysics Data System (ADS)
Salmela, Leena; Tarhio, Jorma
We present a method to speed up approximate string matching by mapping the actual alphabet to a smaller alphabet. We apply the alphabet reduction scheme to a tuned version of the approximate Boyer-Moore algorithm utilizing the Four-Russians technique. Our experiments show that the alphabet reduction makes the algorithm faster. Especially in the k-mismatch case, the new variation is faster than earlier algorithms for English data with small values of k.
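A toy version of the reduction idea, with a hypothetical modulo mapping (the paper's actual mapping and its Boyer-Moore/Four-Russians machinery are not reproduced here): equal characters stay equal after reduction, so the reduced mismatch count is a lower bound on the true one and can serve as a safe filter for the k-mismatch problem.

```python
def reduce_char(c, q=4):
    # hypothetical mapping: bucket characters by code point modulo q
    return ord(c) % q

def k_mismatch_candidates(text, pattern, k, q=4):
    """Filter with the reduced alphabet: mismatches after reduction
    never exceed the true mismatch count, so any window whose reduced
    mismatch count exceeds k can be discarded without a full check."""
    m = len(pattern)
    red_pat = [reduce_char(c, q) for c in pattern]
    hits = []
    for i in range(len(text) - m + 1):
        window = text[i:i + m]
        red_mis = sum(reduce_char(a, q) != b for a, b in zip(window, red_pat))
        if red_mis <= k:  # survives the filter; verify on the real alphabet
            true_mis = sum(a != b for a, b in zip(window, pattern))
            if true_mis <= k:
                hits.append(i)
    return hits

print(k_mismatch_candidates("approximate matching", "matching", 1))  # → [12]
```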
Multidimensional stochastic approximation Monte Carlo
NASA Astrophysics Data System (ADS)
Zablotskiy, Sergey V.; Ivanov, Victor A.; Paul, Wolfgang
2016-06-01
Stochastic Approximation Monte Carlo (SAMC) has been established as a mathematically founded powerful flat-histogram Monte Carlo method, used to determine the density of states, g(E), of a model system. We show here how it can be generalized for the determination of multidimensional probability distributions (or equivalently densities of states) of macroscopic or mesoscopic variables defined on the space of microstates of a statistical mechanical system. This establishes this method as a systematic way for coarse graining a model system, or, in other words, for performing a renormalization group step on a model. We discuss the formulation of the Kadanoff block spin transformation and the coarse-graining procedure for polymer models in this language. We also apply it to a standard case in the literature of two-dimensional densities of states, where two competing energetic effects are present, g(E1,E2). We show when and why care has to be exercised when obtaining the microcanonical density of states g(E1+E2) from g(E1,E2).
Radioactivity computation of steady-state and pulsed fusion reactors operation
Attaya, H.
1994-06-01
Different mathematical methods are used to calculate the nuclear transmutation in steady-state and pulsed neutron irradiation. These methods are the Schur decomposition, the eigenvector decomposition, and the Pade approximation of the matrix exponential function. In the case of the linear decay chain approximation, a simple algorithm is used to evaluate the transition matrices.
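The matrix-exponential route can be sketched with the simplest diagonal Padé approximant combined with scaling and squaring; this is only an illustration of the idea, not the algorithm of the cited work:

```python
import numpy as np

def expm_pade11(A, s=10):
    """Scaling and squaring with the diagonal [1/1] Pade approximant
    exp(X) ~ (I - X/2)^(-1) (I + X/2). A sketch of the idea only;
    production codes choose s and the Pade order adaptively."""
    n = A.shape[0]
    I = np.eye(n)
    X = A / 2.0**s                                  # scale so ||X|| is small
    E = np.linalg.solve(I - X / 2.0, I + X / 2.0)   # [1/1] Pade step
    for _ in range(s):                              # undo scaling by squaring
        E = E @ E
    return E

A = np.array([[0.0, 1.0], [-1.0, 0.0]])  # generator of a plane rotation
R = expm_pade11(A)
assert np.allclose(R, [[np.cos(1), np.sin(1)],
                       [-np.sin(1), np.cos(1)]], atol=1e-5)
```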
Separable approximations of two-body interactions
NASA Astrophysics Data System (ADS)
Haidenbauer, J.; Plessas, W.
1983-01-01
We perform a critical discussion of the efficiency of the Ernst-Shakin-Thaler method for a separable approximation of arbitrary two-body interactions by a careful examination of separable 3S1-3D1 N-N potentials that were constructed via this method by Pieper. Not only the on-shell properties of these potentials are considered, but also a comparison is made of their off-shell characteristics relative to the Reid soft-core potential. We point out a peculiarity in Pieper's application of the Ernst-Shakin-Thaler method, which leads to a resonant-like behavior of his potential 3SD1D. It is indicated where care has to be taken in order to circumvent drawbacks inherent in the Ernst-Shakin-Thaler separable approximation scheme.
Approximating Confidence Intervals for Factor Loadings.
ERIC Educational Resources Information Center
Lambert, Zarrel V.; And Others
1991-01-01
A method is presented that eliminates some interpretational limitations arising from assumptions implicit in the use of arbitrary rules of thumb to interpret exploratory factor analytic results. The bootstrap method is presented as a way of approximating sampling distributions of estimated factor loadings. Simulated datasets illustrate the…
Approximating Functions with Exponential Functions
ERIC Educational Resources Information Center
Gordon, Sheldon P.
2005-01-01
The possibility of approximating a function with a linear combination of exponential functions of the form e[superscript x], e[superscript 2x], ... is considered as a parallel development to the notion of Taylor polynomials which approximate a function with a linear combination of power function terms. The sinusoidal functions sin "x" and cos "x"…
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-12-22
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
Approximate circuits for increased reliability
Hamlet, Jason R.; Mayo, Jackson R.
2015-08-18
Embodiments of the invention describe a Boolean circuit having a voter circuit and a plurality of approximate circuits each based, at least in part, on a reference circuit. The approximate circuits are each to generate one or more output signals based on values of received input signals. The voter circuit is to receive the one or more output signals generated by each of the approximate circuits, and is to output one or more signals corresponding to a majority value of the received signals. At least some of the approximate circuits are to generate an output value different than the reference circuit for one or more input signal values; however, for each possible input signal value, the majority values of the one or more output signals generated by the approximate circuits and received by the voter circuit correspond to output signal result values of the reference circuit.
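The voter construction described in the two patent records above can be sketched in a few lines. The reference circuit and the single-input errors of the approximate circuits below are invented for illustration: each approximate circuit disagrees with the reference on a different input, so the bitwise majority always reproduces the reference output.

```python
def majority(bits):
    # bitwise majority of three single-bit outputs
    a, b, c = bits
    return (a & b) | (a & c) | (b & c)

def reference(x):
    # hypothetical reference circuit: parity of a 3-bit input
    return (x ^ (x >> 1) ^ (x >> 2)) & 1

# Three hypothetical approximate circuits: each is wrong on a
# *different* single input, so at most one vote is wrong per input.
approx1 = lambda x: reference(x) ^ (x == 0)
approx2 = lambda x: reference(x) ^ (x == 3)
approx3 = lambda x: reference(x) ^ (x == 5)

for x in range(8):
    voted = majority([approx1(x), approx2(x), approx3(x)])
    assert voted == reference(x)
print("voter matches reference on all inputs")
```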
Femtolensing: Beyond the semiclassical approximation
NASA Technical Reports Server (NTRS)
Ulmer, Andrew; Goodman, Jeremy
1995-01-01
Femtolensing is a gravitational lensing effect in which the magnification is a function not only of the position and sizes of the source and lens, but also of the wavelength of light. Femtolensing is the only known effect of small (10(exp -13) - 10(exp -16) solar mass) dark-matter objects and may possibly be detectable in cosmological gamma-ray burst spectra. We present a new and efficient algorithm for femtolensing calculations in general potentials. The physical optics results presented here differ at low frequencies from the semiclassical approximation, in which the flux is attributed to a finite number of mutually coherent images. At higher frequencies, our results agree well with the semiclassical predictions. Applying our method to a point-mass lens with external shear, we find complex events that have structure at both large and small spectral resolution. In this way, we show that femtolensing may be observable for lenses up to 10(exp -11) solar mass, much larger than previously believed. Additionally, we discuss the possibility of a search for femtolensing of white dwarfs in the Large Magellanic Cloud at optical wavelengths.
Bronchopulmonary segments approximation using anatomical atlas
NASA Astrophysics Data System (ADS)
Busayarat, Sata; Zrimec, Tatjana
2007-03-01
Bronchopulmonary segments are valuable as they give more accurate localization than lung lobes. Traditionally, determining the segments requires segmentation and identification of segmental bronchi, which, in turn, require volumetric imaging data. In this paper, we present a method for approximating the bronchopulmonary segments for sparse data by effectively using an anatomical atlas. The atlas is constructed from volumetric data and contains accurate information about bronchopulmonary segments. A new ray-tracing based image registration is used for transferring the information from the atlas to a query image. Results show that the method is able to approximate the segments on sparse HRCT data with slice gaps of up to 25 millimeters.
Approximate learning algorithm in Boltzmann machines.
Yasuda, Muneki; Tanaka, Kazuyuki
2009-11-01
Boltzmann machines can be regarded as Markov random fields. For binary cases, they are equivalent to the Ising spin model in statistical mechanics. Learning systems in Boltzmann machines are one of the NP-hard problems. Thus, in general we have to use approximate methods to construct practical learning algorithms in this context. In this letter, we propose new and practical learning algorithms for Boltzmann machines by using the belief propagation algorithm and the linear response approximation, which are often referred as advanced mean field methods. Finally, we show the validity of our algorithm using numerical experiments.
Estimation of distribution algorithms with Kikuchi approximations.
Santana, Roberto
2005-01-01
The question of finding feasible ways for estimating probability distributions is one of the main challenges for Estimation of Distribution Algorithms (EDAs). To estimate the distribution of the selected solutions, EDAs use factorizations constructed according to graphical models. The class of factorizations that can be obtained from these probability models is highly constrained. Expanding the class of factorizations that could be employed for probability approximation is a necessary step for the conception of more robust EDAs. In this paper we introduce a method for learning a more general class of probability factorizations. The method combines a reformulation of a probability approximation procedure known in statistical physics as the Kikuchi approximation of energy, with a novel approach for finding graph decompositions. We present the Markov Network Estimation of Distribution Algorithm (MN-EDA), an EDA that uses Kikuchi approximations to estimate the distribution, and Gibbs Sampling (GS) to generate new points. A systematic empirical evaluation of MN-EDA is done in comparison with different Bayesian network based EDAs. From our experiments we conclude that the algorithm can outperform other EDAs that use traditional methods of probability approximation in the optimization of functions with strong interactions among their variables.
Approximating subtree distances between phylogenies.
Bonet, Maria Luisa; St John, Katherine; Mahindru, Ruchi; Amenta, Nina
2006-10-01
We give a 5-approximation algorithm to the rooted Subtree-Prune-and-Regraft (rSPR) distance between two phylogenies, which was recently shown to be NP-complete. This paper presents the first approximation result for this important tree distance. The algorithm follows a standard format for tree distances. The novel ideas are in the analysis. In the analysis, the cost of the algorithm uses a "cascading" scheme that accounts for possible wrong moves. This accounting is missing from previous analysis of tree distance approximation algorithms. Further, we show how all algorithms of this type can be implemented in linear time and give experimental results.
Local discontinuous Galerkin approximations to Richards’ equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Dawson, C. N.; Miller, C. T.
2007-03-01
We consider the numerical approximation to Richards' equation because of its hydrological significance and intrinsic merit as a nonlinear parabolic model that admits sharp fronts in space and time that pose a special challenge to conventional numerical methods. We combine a robust and established variable order, variable step-size backward difference method for time integration with an evolving spatial discretization approach based upon the local discontinuous Galerkin (LDG) method. We formulate the approximation using a method of lines approach to uncouple the time integration from the spatial discretization. The spatial discretization is formulated as a set of four differential algebraic equations, which includes a mass conservation constraint. We demonstrate how this system of equations can be reduced to the solution of a single coupled unknown in space and time and a series of local constraint equations. We examine a variety of approximations at discontinuous element boundaries, permeability approximations, and numerical quadrature schemes. We demonstrate an optimal rate of convergence for smooth problems, and compare accuracy and efficiency for a wide variety of approaches applied to a set of common test problems. We obtain robust and efficient results that improve upon existing methods, and we recommend a future path that should yield significant additional improvements.
Dual approximations in optimal control
NASA Technical Reports Server (NTRS)
Hager, W. W.; Ianculescu, G. D.
1984-01-01
A dual approximation for the solution to an optimal control problem is analyzed. The differential equation is handled with a Lagrange multiplier while other constraints are treated explicitly. An algorithm for solving the dual problem is presented.
Approximate solutions of the hyperbolic Kepler equation
NASA Astrophysics Data System (ADS)
Avendano, Martín; Martín-Molina, Verónica; Ortigas-Galindo, Jorge
2015-12-01
We provide an approximate zero S̃(g,L) for the hyperbolic Kepler equation S − g·arcsinh(S) − L = 0 for g ∈ (0,1) and L ∈ [0,∞). We prove, by using Smale's α-theory, that Newton's method starting at our approximate zero produces a sequence that converges to the actual solution S(g,L) at quadratic speed, i.e. if S_n is the value obtained after n iterations, then |S_n − S| ≤ 0.5^(2^n − 1)|S̃ − S|. The approximate zero S̃(g,L) is a piecewise-defined function involving several linear expressions and one with cubic and square roots. In bounded regions of (0,1) × [0,∞) that exclude a small neighborhood of g = 1, L = 0, we also provide a method to construct simpler starters involving only constants.
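Newton's method for the equation S − g·arcsinh(S) − L = 0 is straightforward to reproduce. The sketch below uses a naive starter S0 = L rather than the authors' piecewise approximate zero, so it carries none of their α-theory guarantees; in practice the function is monotone for g in (0,1), and the iteration converges readily.

```python
import math

def solve_hyperbolic_kepler(g, L, tol=1e-14):
    """Solve S - g*arcsinh(S) - L = 0 by Newton's method.
    Starter S0 = L is a crude stand-in for the paper's approximate zero."""
    S = L
    for _ in range(50):
        f = S - g * math.asinh(S) - L
        fp = 1.0 - g / math.sqrt(S * S + 1.0)  # f'(S) > 0 for g < 1
        step = f / fp
        S -= step
        if abs(step) < tol:
            break
    return S

S = solve_hyperbolic_kepler(0.5, 2.0)
assert abs(S - 0.5 * math.asinh(S) - 2.0) < 1e-12
```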
Mathematical algorithms for approximate reasoning
NASA Technical Reports Server (NTRS)
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state of the art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. A group of mathematically rigorous algorithms for approximate reasoning are focused on that could form the basis of a next generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlap within the state space, reasoning with assertions that exhibit maximum overlap within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
Approximating random quantum optimization problems
NASA Astrophysics Data System (ADS)
Hsu, B.; Laumann, C. R.; Läuchli, A. M.; Moessner, R.; Sondhi, S. L.
2013-06-01
We report a cluster of results regarding the difficulty of finding approximate ground states to typical instances of the k-body quantum satisfiability problem (k-QSAT) on large random graphs. As an approximation strategy, we optimize the solution space over “classical” product states, which in turn introduces a novel autonomous classical optimization problem, PSAT, over a space of continuous degrees of freedom rather than discrete bits. Our central results are (i) the derivation of a set of bounds and approximations in various limits of the problem, several of which we believe may be amenable to a rigorous treatment; (ii) a demonstration that an approximation based on a greedy algorithm borrowed from the study of frustrated magnetism performs well over a wide range in parameter space, and its performance reflects the structure of the solution space of random k-QSAT. Simulated annealing exhibits metastability in similar “hard” regions of parameter space; and (iii) a generalization of belief propagation algorithms introduced for classical problems to the case of continuous spins. This yields both approximate solutions, as well as insights into the free energy “landscape” of the approximation problem, including a so-called dynamical transition near the satisfiability threshold. Taken together, these results allow us to elucidate the phase diagram of random k-QSAT in a two-dimensional energy-density-clause-density space.
Approximating a nonlinear MTFDE from physiology
NASA Astrophysics Data System (ADS)
Teodoro, M. Filomena
2016-12-01
This paper describes a numerical scheme which approximates the solution of a nonlinear mixed type functional differential equation from nerve conduction theory. The solution of such an equation is defined on the entire real axis and tends to known values at ±∞. A numerical method extended from the linear case is developed and applied to solve a nonlinear equation.
Padé approximations and diophantine geometry
Chudnovsky, D. V.; Chudnovsky, G. V.
1985-01-01
Using methods of Padé approximations we prove a converse to Eisenstein's theorem on the boundedness of denominators of coefficients in the expansion of an algebraic function, for classes of functions, parametrized by meromorphic functions. This result is applied to the Tate conjecture on the effective description of isogenies for elliptic curves. PMID:16593552
Quickly Approximating the Distance Between Two Objects
NASA Technical Reports Server (NTRS)
Hammen, David
2009-01-01
A method of quickly approximating the distance between two objects (one smaller, regarded as a point; the other larger and complexly shaped) has been devised for use in computationally simulating motions of the objects for the purpose of planning the motions to prevent collisions.
Efficient computational methods for electromagnetic imaging with applications to 3D magnetotellurics
NASA Astrophysics Data System (ADS)
Kordy, Michal Adam
The motivation for this work is the forward and inverse problem for magnetotellurics, a frequency domain electromagnetic remote-sensing geophysical method used in mineral, geothermal, and groundwater exploration. The dissertation consists of four papers. In the first paper, we prove the existence and uniqueness of a representation of any vector field in H(curl) by a vector lying in H(curl) and H(div). It allows us to represent electric or magnetic fields by another vector field, for which nodal finite element approximation may be used in the case of non-constant electromagnetic properties. With this approach, the system matrix does not become ill-posed for low-frequency. In the second paper, we consider hexahedral finite element approximation of an electric field for the magnetotelluric forward problem. The near-null space of the system matrix for low frequencies makes the numerical solution unstable in the air. We show that the proper solution may obtained by applying a correction on the null space of the curl. It is done by solving a Poisson equation using discrete Helmholtz decomposition. We parallelize the forward code on multicore workstation with large RAM. In the next paper, we use the forward code in the inversion. Regularization of the inversion is done by using the second norm of the logarithm of conductivity. The data space Gauss-Newton approach allows for significant savings in memory and computational time. We show the efficiency of the method by considering a number of synthetic inversions and we apply it to real data collected in Cascade Mountains. The last paper considers a cross-frequency interpolation of the forward response as well as the Jacobian. We consider Pade approximation through model order reduction and rational Krylov subspace. The interpolating frequencies are chosen adaptively in order to minimize the maximum error of interpolation. Two error indicator functions are compared. We prove a theorem of almost always lucky failure in the
Numerical quadratures for approximate computation of ERBS
NASA Astrophysics Data System (ADS)
Zanaty, Peter
2013-12-01
In the ground-laying paper [3] on expo-rational B-splines (ERBS), the default numerical method for approximate computation of the integral with C∞-smooth integrand in the definition of ERBS is Romberg integration. In the present work, a variety of alternative numerical quadrature methods for computation of ERBS and other integrals with smooth integrands are studied, and their performance is compared on several benchmark examples.
Congruence Approximations for Entrophy Endowed Hyperbolic Systems
NASA Technical Reports Server (NTRS)
Barth, Timothy J.; Saini, Subhash (Technical Monitor)
1998-01-01
Building upon the standard symmetrization theory for hyperbolic systems of conservation laws, congruence properties of the symmetrized system are explored. These congruence properties suggest variants of several stabilized numerical discretization procedures for hyperbolic equations (upwind finite-volume, Galerkin least-squares, discontinuous Galerkin) that benefit computationally from congruence approximation. Specifically, it becomes straightforward to construct the spatial discretization and Jacobian linearization for these schemes (given a small amount of derivative information) for possible use in Newton's method, discrete optimization, homotopy algorithms, etc. Some examples will be given for the compressible Euler equations and the nonrelativistic MHD equations using linear and quadratic spatial approximation.
Rational approximations for tomographic reconstructions
NASA Astrophysics Data System (ADS)
Reynolds, Matthew; Beylkin, Gregory; Monzón, Lucas
2013-06-01
We use optimal rational approximations of projection data collected in x-ray tomography to improve image resolution. Under the assumption that the object of interest is described by functions with jump discontinuities, for each projection we construct its rational approximation with a small (near optimal) number of terms for a given accuracy threshold. This allows us to augment the measured data, i.e., double the number of available samples in each projection or, equivalently, extend (double) the domain of their Fourier transform. We also develop a new, fast, polar coordinate Fourier domain algorithm which uses our nonlinear approximation of projection data in a natural way. Using augmented projections of the Shepp-Logan phantom, we provide a comparison between the new algorithm and the standard filtered back-projection algorithm. We demonstrate that the reconstructed image has improved resolution without additional artifacts near sharp transitions in the image.
NASA Astrophysics Data System (ADS)
Jong, Un-Gi; Yu, Chol-Jun; Ri, Jin-Song; Kim, Nam-Hyok; Ri, Guk-Chol
2016-09-01
Extensive studies have demonstrated the promising capability of the organic-inorganic hybrid halide perovskite CH3NH3PbI3 in solar cells with a high power conversion efficiency exceeding 20%. However, the intrinsic as well as extrinsic instabilities of this material remain the major challenge to the commercialization of perovskite-based solar cells. Mixing halides is expected to resolve this problem. Here, we investigate the effect of chemical substitution at the halogen site on the structural, electronic, and optical properties of mixed halide perovskites CH3NH3Pb(I1-xBrx)3 with a pseudocubic phase using the virtual crystal approximation method within density functional theory. As the Br content x increases from 0.0 to 1.0, the lattice constant decreases linearly as a(x) = 6.420 - 0.333x (Å), while the band gap and the exciton binding energy increase as the quadratic function Eg(x) = 1.542 + 0.374x + 0.185x² (eV) and the linear function Eb(x) = 0.045 + 0.057x (eV), respectively. The photoabsorption coefficients are also calculated, showing a blueshift of the absorption onsets for higher Br contents. We calculate the phase decomposition energy of these materials and analyze the electronic charge density difference to estimate the material stability. Based on the calculated results, we suggest that the best match between efficiency and stability can be achieved at x ≈ 0.2 in CH3NH3Pb(I1-xBrx)3 perovskites.
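The fitted composition laws quoted in the abstract can be evaluated directly; a minimal sketch (the function names are ours, the coefficients are the ones reported above):

```python
def a_lat(x):
    """Lattice constant in Angstrom: a(x) = 6.420 - 0.333 x."""
    return 6.420 - 0.333 * x

def e_gap(x):
    """Band gap in eV: Eg(x) = 1.542 + 0.374 x + 0.185 x^2."""
    return 1.542 + 0.374 * x + 0.185 * x ** 2

def e_bind(x):
    """Exciton binding energy in eV: Eb(x) = 0.045 + 0.057 x."""
    return 0.045 + 0.057 * x

x = 0.2  # the suggested efficiency/stability compromise
print(a_lat(x), e_gap(x), e_bind(x))  # 6.3534 A, 1.6242 eV, 0.0564 eV
```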
Approximating spatially exclusive invasion processes
NASA Astrophysics Data System (ADS)
Ross, Joshua V.; Binder, Benjamin J.
2014-05-01
A number of biological processes, such as invasive plant species and cell migration, are composed of two key mechanisms: motility and reproduction. Due to the spatially exclusive interacting behavior of these processes a cellular automata (CA) model is specified to simulate a one-dimensional invasion process. Three (independence, Poisson, and 2D-Markov chain) approximations are considered that attempt to capture the average behavior of the CA. We show that our 2D-Markov chain approximation accurately predicts the state of the CA for a wide range of motility and reproduction rates.
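A minimal sketch of such a spatially exclusive CA, with illustrative rates and update rule (not the paper's exact specification): each step selects a site and lets an occupant hop or reproduce into an empty neighbour only, so two agents can never share a site.

```python
import random

def simulate(L=100, n0=10, steps=2000, p_move=0.5, p_repro=0.1, seed=1):
    """Toy 1D invasion CA on a periodic lattice with spatial exclusion."""
    random.seed(seed)
    occ = [False] * L
    for i in random.sample(range(L), n0):
        occ[i] = True
    for _ in range(steps):
        i = random.randrange(L)
        if not occ[i]:
            continue
        j = (i + random.choice([-1, 1])) % L    # periodic neighbour
        if occ[j]:
            continue                            # exclusion: target must be empty
        r = random.random()
        if r < p_repro:
            occ[j] = True                       # reproduce into the empty site
        elif r < p_repro + p_move:
            occ[i], occ[j] = False, True        # hop to the empty site
    return occ

final = simulate()
print(sum(final), "occupied sites")
```

Because reproduction only adds agents and motility conserves them, the occupancy is non-decreasing; the approximations discussed in the abstract aim to predict statistics of exactly this kind of trajectory without simulation.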
Heat pipe transient response approximation.
Reid, R. S.
2001-01-01
A simple and concise routine that approximates the response of an alkali metal heat pipe to changes in evaporator heat transfer rate is described. This analytically based routine is compared with data from a cylindrical heat pipe with a crescent-annular wick that undergoes gradual (quasi-steady) transitions through the viscous and condenser boundary heat transfer limits. The sonic heat transfer limit can also be incorporated into this routine for heat pipes with more closely coupled condensers. The advantages and obvious limitations of this approach are discussed. For reference, a source code listing for the approximation appears at the end of this paper.
Second Approximation to Conical Flows
1950-12-01
[Garbled OCR of the report's distribution notices and equations. The recoverable substance: each approximation depends on the preceding ones; here the second approximation, i.e., the terms of second order in the expansion parameter, is computed from the isentropic equations of motion.]
Saddlepoint approximations for small sample logistic regression problems.
Platt, R W
2000-02-15
Double saddlepoint approximations provide quick and accurate approximations to exact conditional tail probabilities in a variety of situations. This paper describes the use of these approximations in two logistic regression problems. An investigation of regression analysis of the log-odds ratio in a sequence or set of 2x2 tables via simulation studies shows that in practical settings the saddlepoint methods closely approximate exact conditional inference. The double saddlepoint approximation in the test for trend in a sequence of binomial random variates is also shown, via simulation studies, to be an effective approximation to exact conditional inference.
The Cell Cycle Switch Computes Approximate Majority
NASA Astrophysics Data System (ADS)
Cardelli, Luca; Csikász-Nagy, Attila
2012-09-01
Both computational and biological systems have to make decisions about switching from one state to another. The `Approximate Majority' computational algorithm provides the asymptotically fastest way to reach a common decision by all members of a population between two possible outcomes, where the decision approximately matches the initial relative majority. The network that regulates the mitotic entry of the cell cycle in eukaryotes also makes a decision before it induces early mitotic processes. Here we show that the switch from inactive to active forms of the mitosis-promoting Cyclin Dependent Kinases is driven by a system that is related to both the structure and the dynamics of the Approximate Majority computation. We investigate the behavior of these two switches by deterministic, stochastic and probabilistic methods and show that the steady states and temporal dynamics of the two systems are similar and that they are exchangeable as components of oscillatory networks.
Approximation of Bivariate Functions via Smooth Extensions
Zhang, Zhihua
2014-01-01
For a smooth bivariate function defined on a general domain with arbitrary shape, it is difficult to do Fourier approximation or wavelet approximation. In order to solve these problems, in this paper, we give an extension of the bivariate function on a general domain with arbitrary shape to a smooth, periodic function in the whole space or to a smooth, compactly supported function in the whole space. These smooth extensions have simple and clear representations which are determined by this bivariate function and some polynomials. After that, we expand the smooth, periodic function into a Fourier series or a periodic wavelet series or we expand the smooth, compactly supported function into a wavelet series. Since our extensions are smooth, the obtained Fourier coefficients or wavelet coefficients decay very fast. Since our extension tools are polynomials, the moment theorem shows that a lot of wavelet coefficients vanish. From this, with the help of well-known approximation theorems, using our extension methods, the Fourier approximation and the wavelet approximation of the bivariate function on the general domain with small error are obtained. PMID:24683316
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1984-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems. Previously announced in STAR as N83-33589
Difference equation state approximations for nonlinear hereditary control problems
NASA Technical Reports Server (NTRS)
Rosen, I. G.
1982-01-01
Discrete approximation schemes for the solution of nonlinear hereditary control problems are constructed. The methods involve approximation by a sequence of optimal control problems in which the original infinite dimensional state equation has been approximated by a finite dimensional discrete difference equation. Convergence of the state approximations is argued using linear semigroup theory and is then used to demonstrate that solutions to the approximating optimal control problems in some sense approximate solutions to the original control problem. Two schemes, one based upon piecewise constant approximation, and the other involving spline functions are discussed. Numerical results are presented, analyzed and used to compare the schemes to other available approximation methods for the solution of hereditary control problems.
Pythagorean Approximations and Continued Fractions
ERIC Educational Resources Information Center
Peralta, Javier
2008-01-01
In this article, we will show that the Pythagorean approximations of [the square root of] 2 coincide with those achieved in the 16th century by means of continued fractions. Assuming this fact and the known relation that connects the Fibonacci sequence with the golden section, we shall establish a procedure to obtain sequences of rational numbers…
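The convergents in question come from the continued fraction sqrt(2) = [1; 2, 2, 2, ...] and satisfy the three-term recurrences p_k = 2 p_{k-1} + p_{k-2} and q_k = 2 q_{k-1} + q_{k-2}, which reproduce the classical "side and diagonal" numbers; a small sketch:

```python
from fractions import Fraction

def sqrt2_convergents(n):
    """First n convergents of sqrt(2) = [1; 2, 2, 2, ...] (n >= 1)."""
    convs = [Fraction(1, 1), Fraction(3, 2)]   # 1/1 and 3/2
    p_prev, q_prev = 1, 1
    p, q = 3, 2
    for _ in range(n - 2):
        p, p_prev = 2 * p + p_prev, p
        q, q_prev = 2 * q + q_prev, q
        convs.append(Fraction(p, q))
    return convs[:n]

print(sqrt2_convergents(5))   # 1, 3/2, 7/5, 17/12, 41/29
```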
Ranking Support Vector Machine with Kernel Approximation
Chen, Kai; Li, Rongchun; Dou, Yong; Liang, Zhengfa; Lv, Qi
2017-01-01
Learning to rank algorithms have become important in recent years due to their successful application in information retrieval, recommender systems, computational biology, and so forth. Ranking support vector machine (RankSVM) is one of the state-of-the-art ranking models and has been favorably used. Nonlinear RankSVM (RankSVM with nonlinear kernels) can give higher accuracy than linear RankSVM (RankSVM with a linear kernel) for complex nonlinear ranking problems. However, the learning methods for nonlinear RankSVM are still time-consuming because of the calculation of the kernel matrix. In this paper, we propose a fast ranking algorithm based on kernel approximation to avoid computing the kernel matrix. We explore two types of kernel approximation methods, namely, the Nyström method and random Fourier features. A primal truncated Newton method is used to optimize the pairwise L2-loss (squared hinge loss) objective function of the ranking model after the nonlinear kernel approximation. Experimental results demonstrate that our proposed method achieves a much faster training speed than kernel RankSVM and comparable or better performance than state-of-the-art ranking algorithms. PMID:28293256
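One of the two kernel approximations named in the abstract, random Fourier features, can be sketched as follows (bandwidth, dimensions, and tolerance are illustrative): the inner product of the feature maps approximates an RBF kernel without ever forming the kernel matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
d, D = 5, 4000          # input dimension, number of random features
sigma = 1.0             # RBF bandwidth: k(x, y) = exp(-||x - y||^2 / (2 sigma^2))

# Sample the random projection once; it is shared by all data points.
W = rng.normal(0.0, 1.0 / sigma, size=(D, d))
b = rng.uniform(0.0, 2.0 * np.pi, size=D)

def features(x):
    """Random Fourier feature map z(x) with z(x) . z(y) ~ k(x, y)."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x = rng.normal(size=d)
y = x + 0.3 * rng.normal(size=d)          # a nearby point, so k(x, y) is O(1)
exact = np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))
approx = features(x) @ features(y)
print(exact, approx)
```

A linear ranker trained on z(x) then stands in for the kernelized one, which is the source of the speedup the abstract reports.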
Karton, Amir; Tarnopolsky, Alex; Lamère, Jean-François; Schatz, George C; Martin, Jan M L
2008-12-18
We present a number of near-exact, nonrelativistic, Born-Oppenheimer reference data sets for the parametrization of more approximate methods (such as DFT functionals). The data were obtained by means of the W4 ab initio computational thermochemistry protocol, which has a 95% confidence interval well below 1 kJ/mol. Our data sets include W4-08, which are total atomization energies of over 100 small molecules that cover varying degrees of nondynamical correlations, and DBH24-W4, which are W4 theory values for Truhlar's set of 24 representative barrier heights. The usual procedure of comparing calculated DFT values with experimental atomization energies is hampered by comparatively large experimental uncertainties in many experimental values and compounds errors due to deficiencies in the DFT functional with those resulting from neglect of relativity and finite nuclear mass. Comparison with accurate, explicitly nonrelativistic, ab initio data avoids these issues. We then proceed to explore the performance of B2x-PLYP-type double hybrid functionals for atomization energies and barrier heights. We find that the optimum hybrids for hydrogen-transfer reactions, heavy-atoms transfers, nucleophilic substitutions, and unimolecular and recombination reactions are quite different from one another: out of these subsets, the heavy-atom transfer reactions are by far the most sensitive to the percentages of Hartree-Fock-type exchange y and MP2-type correlation x in an (x, y) double hybrid. The (42,72) hybrid B2K-PLYP, as reported in a preliminary communication, represents the best compromise between thermochemistry and hydrogen-transfer barriers, while also yielding excellent performance for nucleophilic substitutions. By optimizing for best overall performance on both thermochemistry and the DBH24-W4 data set, however, we find a new (36,65) hybrid which we term B2GP-PLYP. At a slight expense in performance for hydrogen-transfer barrier heights and nucleophilic substitutions, we
Testing the frozen flow approximation
NASA Technical Reports Server (NTRS)
Lucchin, Francesco; Matarrese, Sabino; Melott, Adrian L.; Moscardini, Lauro
1993-01-01
We investigate the accuracy of the frozen-flow approximation (FFA), recently proposed by Matarrese et al. (1992), for following the nonlinear evolution of cosmological density fluctuations under gravitational instability. We compare a number of statistics between results of the FFA and N-body simulations, including those used by Melott, Pellman & Shandarin (1993) to test the Zel'dovich approximation. The FFA performs reasonably well in a statistical sense, e.g. in reproducing the counts-in-cells distribution at small scales, but it does poorly in the cross-correlation with N-body results, which means it is generally not moving mass to the right place, especially in models with high small-scale power.
Ab initio dynamical vertex approximation
NASA Astrophysics Data System (ADS)
Galler, Anna; Thunström, Patrik; Gunacker, Patrik; Tomczak, Jan M.; Held, Karsten
2017-03-01
Diagrammatic extensions of dynamical mean-field theory (DMFT) such as the dynamical vertex approximation (DΓA) allow us to include nonlocal correlations beyond DMFT on all length scales and have proved their worth for model calculations. Here, we develop and implement an ab initio DΓA approach (AbinitioDΓA) for electronic structure calculations of materials. The starting point is the two-particle irreducible vertex in the two particle-hole channels, which is approximated by the bare nonlocal Coulomb interaction and all local vertex corrections. From this, we calculate the full nonlocal vertex and the nonlocal self-energy through the Bethe-Salpeter equation. The AbinitioDΓA approach naturally generates all local DMFT correlations and all nonlocal GW contributions, but also further nonlocal correlations beyond: mixed terms of the former two and nonlocal spin fluctuations. We apply this new methodology to the prototypical correlated metal SrVO3.
Exponential Approximations Using Fourier Series Partial Sums
NASA Technical Reports Server (NTRS)
Banerjee, Nana S.; Geer, James F.
1997-01-01
The problem of accurately reconstructing a piecewise smooth, 2π-periodic function f and its first few derivatives, given only a truncated Fourier series representation of f, is studied and solved. The reconstruction process is divided into two steps. In the first step, the first 2N + 1 Fourier coefficients of f are used to approximate the locations and magnitudes of the discontinuities in f and its first M derivatives. This is accomplished by first finding initial estimates of these quantities based on certain properties of the Gibbs phenomenon, and then refining these estimates by fitting the asymptotic form of the Fourier coefficients to the given coefficients using a least-squares approach. It is conjectured that the locations of the singularities are approximated to within O(N^(-M-2)), and the associated jump of the k-th derivative of f to within O(N^(-M-1+k)), as N approaches infinity, and that the method is robust. These estimates are then used with a class of singular basis functions, which have certain 'built-in' singularities, to construct a new sequence of approximations to f. Each of these new approximations is the sum of a piecewise smooth function and a new Fourier series partial sum. When N is proportional to M, it is shown that these new approximations, and their derivatives, converge exponentially in the maximum norm to f and its corresponding derivatives, except in the union of a finite number of small open intervals containing the points of singularity of f. The total measure of these intervals decreases exponentially to zero as M approaches infinity. The technique is illustrated with several examples.
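The Gibbs behavior that the first reconstruction step exploits can be reproduced directly from partial sums of a square wave, whose overshoot near the jump tends to (2/π)Si(π) ≈ 1.179 of the half-jump; a generic illustration (not the paper's method):

```python
import numpy as np

# Fourier partial sums of the unit square wave f(x) = sign(sin x):
# S_N(x) = (4/pi) * sum over odd k < 2N of sin(kx)/k
N = 200                                  # number of odd harmonics kept
x = np.linspace(1e-4, 0.5, 20000)        # fine grid just right of the jump at 0
k = np.arange(1, 2 * N, 2)               # odd harmonics 1, 3, ..., 2N - 1
S = (4 / np.pi) * np.sin(np.outer(x, k)) @ (1.0 / k)

overshoot = S.max()                      # Gibbs overshoot -> (2/pi) Si(pi) ~ 1.179
print(overshoot)
```

The location and height of this overshoot are exactly the kind of "certain properties of the Gibbs phenomenon" usable as initial estimates for jump locations and magnitudes.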
Reliable Function Approximation and Estimation
2016-08-16
Journal on Mathematical Analysis 47 (6), 2015, 4606-4629. (P3) The Sample Complexity of Weighted Sparse Approximation. B. Bah and R. Ward. IEEE... solving systems of quadratic equations. S. Sanghavi, C. White, and R. Ward. Results in Mathematics, 2016. (O5) Relax, no need to round: Integrality of... Theoretical Computer Science. (O6) A unified framework for linear dimensionality reduction in L1. F. Krahmer and R. Ward. Results in Mathematics, 2014, 1-23
Approximate reasoning using terminological models
NASA Technical Reports Server (NTRS)
Yen, John; Vaidya, Nitin
1992-01-01
Term Subsumption Systems (TSS) form a knowledge-representation scheme in AI that can express the defining characteristics of concepts through a formal language that has a well-defined semantics and incorporates a reasoning mechanism that can deduce whether one concept subsumes another. However, TSS's have very limited ability to deal with the issue of uncertainty in knowledge bases. The objective of this research is to address issues in combining approximate reasoning with term subsumption systems. To do this, we have extended an existing AI architecture (CLASP) that is built on the top of a term subsumption system (LOOM). First, the assertional component of LOOM has been extended for asserting and representing uncertain propositions. Second, we have extended the pattern matcher of CLASP for plausible rule-based inferences. Third, an approximate reasoning model has been added to facilitate various kinds of approximate reasoning. And finally, the issue of inconsistency in truth values due to inheritance is addressed using justification of those values. This architecture enhances the reasoning capabilities of expert systems by providing support for reasoning under uncertainty using knowledge captured in TSS. Also, as definitional knowledge is explicit and separate from heuristic knowledge for plausible inferences, the maintainability of expert systems could be improved.
Spline Approximation of Thin Shell Dynamics
NASA Technical Reports Server (NTRS)
delRosario, R. C. H.; Smith, R. C.
1996-01-01
A spline-based method for approximating thin shell dynamics is presented here. While the method is developed in the context of the Donnell-Mushtari thin shell equations, it can be easily extended to the Byrne-Flugge-Lur'ye equations or other models for shells of revolution as warranted by applications. The primary requirements for the method include accuracy, flexibility and efficiency in smart material applications. To accomplish this, the method was designed to be flexible with regard to boundary conditions, material nonhomogeneities due to sensors and actuators, and inputs from smart material actuators such as piezoceramic patches. The accuracy of the method was also of primary concern, both to guarantee full resolution of structural dynamics and to facilitate the development of PDE-based controllers which ultimately require real-time implementation. Several numerical examples provide initial evidence demonstrating the efficacy of the method.
Microscopic justification of the equal filling approximation
Perez-Martin, Sara; Robledo, L. M.
2008-07-15
The equal filling approximation, a procedure widely used in mean-field calculations to treat the dynamics of odd nuclei in a time-reversal invariant way, is justified as the consequence of a variational principle over an average energy functional. The ideas of statistical quantum mechanics are employed in the justification. As an illustration of the method, the ground and lowest-lying states of some octupole deformed radium isotopes are computed.
Nonlinear amplitude approximation for bilinear systems
NASA Astrophysics Data System (ADS)
Jung, Chulwoo; D'Souza, Kiran; Epureanu, Bogdan I.
2014-06-01
An efficient method to predict vibration amplitudes at the resonant frequencies of dynamical systems with piecewise-linear nonlinearity is developed. This technique is referred to as bilinear amplitude approximation (BAA). BAA constructs a single vibration cycle at each resonant frequency to approximate the periodic steady-state response of the system. It is postulated that the steady-state response is piecewise linear and can be approximated by analyzing the response over two time intervals during which the system behaves linearly. Overall, the dynamics are nonlinear, but the system is in a distinct linear state during each of the two time intervals. Thus, the approximated vibration cycle is constructed using linear analyses. The equation of motion for analyzing the vibration of each state is projected along the overlapping space spanned by the linear mode shapes active in each of the states. This overlapping space is where the vibratory energy is transferred from one state to the other when the system switches between states. The overlapping space can be obtained using singular value decomposition. The space where the energy is transferred is used together with transition conditions of displacement and velocity compatibility to construct a single vibration cycle and to compute the amplitude of the dynamics. Since the BAA method does not require numerical integration of nonlinear models, computational costs are very low. In this paper, the BAA method is first applied to a single-degree-of-freedom system. Then, a three-degree-of-freedom system is introduced to demonstrate a more general application of BAA. Finally, the BAA method is applied to a full bladed disk with a crack. Results comparing numerical solutions from full-order nonlinear analysis and results obtained using BAA are presented for all systems.
Stochastic approximation boosting for incomplete data problems.
Sexton, Joseph; Laake, Petter
2009-12-01
Boosting is a powerful approach to fitting regression models. This article describes a boosting algorithm for likelihood-based estimation with incomplete data. The algorithm combines boosting with a variant of stochastic approximation that uses Markov chain Monte Carlo to deal with the missing data. Applications to fitting generalized linear and additive models with missing covariates are given. The method is applied to the Pima Indians Diabetes Data where over half of the cases contain missing values.
A coastal ocean model with subgrid approximation
NASA Astrophysics Data System (ADS)
Walters, Roy A.
2016-06-01
A wide variety of coastal ocean models exist, each having attributes that reflect specific application areas. The model presented here is based on finite element methods with unstructured grids containing triangular and quadrilateral elements. The model optimizes robustness, accuracy, and efficiency by using semi-implicit methods in time in order to remove the most restrictive stability constraints, by using a semi-Lagrangian advection approximation to remove Courant number constraints, and by solving a wave equation at the discrete level for enhanced efficiency. An added feature is the approximation of the effects of subgrid objects. Here, the Reynolds-averaged Navier-Stokes equations and the incompressibility constraint are volume averaged over one or more computational cells. This procedure gives rise to new terms which must be approximated as a closure problem. A study of tidal power generation is presented as an example of this method. A problem that arises is specifying appropriate thrust and power coefficients for the volume averaged velocity when they are usually referenced to free stream velocity. A new contribution here is the evaluation of three approaches to this problem: an iteration procedure and two mapping formulations. All three sets of results for thrust (form drag) and power are in reasonable agreement.
On approximating hereditary dynamics by systems of ordinary differential equations
NASA Technical Reports Server (NTRS)
Cliff, E. M.; Burns, J. A.
1978-01-01
The paper deals with methods of obtaining approximate solutions to linear retarded functional differential equations (hereditary systems). The basic notion is to project the infinite dimensional space of initial functions for the hereditary system onto a finite dimensional subspace. Within this framework, two particular schemes are discussed. The first uses well-known piecewise constant approximations, while the second is a new method based on piecewise linear approximating functions. Numerical results are given.
A piecewise linear approximation scheme for hereditary optimal control problems
NASA Technical Reports Server (NTRS)
Cliff, E. M.; Burns, J. A.
1977-01-01
An approximation scheme based on 'piecewise linear' approximations of L2 spaces is employed to formulate a numerical method for solving quadratic optimal control problems governed by linear retarded functional differential equations. This piecewise linear method is an extension of the so-called averaging technique. It is shown that the Riccati equation for the linear approximation is solved by a simple transformation of the averaging solution. Thus, the computational requirements are essentially the same. Numerical results are given.
Improved non-approximability results
Bellare, M.; Sudan, M.
1994-12-31
We indicate strong non-approximability factors for central problems: N^(1/4) for Max Clique; N^(1/10) for Chromatic Number; and 66/65 for Max 3SAT. Underlying the Max Clique result is a proof system in which the verifier examines only three "free bits" to attain an error of 1/2. Underlying the Chromatic Number result is a reduction from Max Clique which is more efficient than previous ones.
Generalized Gradient Approximation Made Simple
Perdew, J.P.; Burke, K.; Ernzerhof, M.
1996-10-01
Generalized gradient approximations (GGAs) for the exchange-correlation energy improve upon the local spin density (LSD) description of atoms, molecules, and solids. We present a simple derivation of a simple GGA, in which all parameters (other than those in LSD) are fundamental constants. Only general features of the detailed construction underlying the Perdew-Wang 1991 (PW91) GGA are invoked. Improvements over PW91 include an accurate description of the linear response of the uniform electron gas, correct behavior under uniform scaling, and a smoother potential. © 1996 The American Physical Society.
Approximate transferability in conjugated polyalkenes
NASA Astrophysics Data System (ADS)
Eskandari, Keiamars; Mandado, Marcos; Mosquera, Ricardo A.
2007-03-01
QTAIM-computed atomic and bond properties, as well as delocalization indices (obtained from electron densities computed at the HF, MP2, and B3LYP levels) of several linear and branched conjugated polyalkenes and O- and N-containing conjugated polyenes, have been employed to assess approximately transferable CH groups. The values of these properties indicate that the effects of the functional group extend to four CH groups, whereas those of the terminal carbon affect up to three carbons. Ternary carbons also significantly modify the properties of atoms in the α, β, and γ positions.
The blind leading the blind: Mutual refinement of approximate theories
NASA Technical Reports Server (NTRS)
Kedar, Smadar T.; Bresina, John L.; Dent, C. Lisa
1991-01-01
The mutual refinement theory, a method for refining world models in a reactive system, is described. The method detects failures, explains their causes, and repairs the approximate models which cause the failures. The approach focuses on using one approximate model to refine another.
Landau-Zener approximations for resonant neutrino oscillations
Whisnant, K.
1988-07-15
A simple method for calculating the effects of resonant neutrino oscillations using Landau-Zener approximations is presented. For any given set of oscillation parameters, the method is to use the Landau-Zener approximation which works best in that region.
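A sketch of the generic Landau-Zener level-crossing probability (the exact composition of the adiabaticity parameter γ at the resonance varies by convention, so this is an illustration rather than the paper's specific parametrization):

```python
import math

def crossing_probability(gamma):
    """Landau-Zener non-adiabatic hopping probability, P = exp(-pi * gamma / 2).

    gamma is the adiabaticity parameter evaluated at the resonance; large
    gamma means the crossing is traversed adiabatically (almost no hop).
    """
    return math.exp(-math.pi * gamma / 2.0)

# Adiabatic resonance: the neutrino tracks the local mass eigenstate.
print(crossing_probability(10.0))
# Strongly non-adiabatic resonance: the crossing is almost always hopped.
print(crossing_probability(0.01))
```

The method described in the abstract amounts to choosing, per region of oscillation-parameter space, whichever such approximation to the crossing probability is most accurate there.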
Approximation concepts for numerical airfoil optimization
NASA Technical Reports Server (NTRS)
Vanderplaats, G. N.
1979-01-01
An efficient algorithm for airfoil optimization is presented. The algorithm utilizes approximation concepts to reduce the number of aerodynamic analyses required to reach the optimum design. Examples are presented and compared with previous results. Optimization efficiency improvements of more than a factor of 2 are demonstrated. Improvements in efficiency are demonstrated when analysis data obtained in previous designs are utilized. The method is a general optimization procedure and is not limited to this application. The method is intended for application to a wide range of engineering design problems.
Wavelet Approximation in Data Assimilation
NASA Technical Reports Server (NTRS)
Tangborn, Andrew; Atlas, Robert (Technical Monitor)
2002-01-01
Estimation of the state of the atmosphere with the Kalman filter remains a distant goal because of the high computational cost of evolving the error covariance for both linear and nonlinear systems. Wavelet approximation is presented here as a possible solution that efficiently compresses both global and local covariance information. We demonstrate the compression characteristics on the error correlation field from a global two-dimensional chemical constituent assimilation, and implement an adaptive wavelet approximation scheme on the assimilation of the one-dimensional Burgers' equation. In the former problem, we show that 99% of the error correlation can be represented by just 3% of the wavelet coefficients, with good representation of localized features. In the Burgers' equation assimilation, the discrete linearized equations (tangent linear model) and the analysis covariance are projected onto a wavelet basis and truncated to just 6% of the coefficients. A nearly optimal forecast is achieved, and we show that errors due to truncation of the dynamics are no greater than the errors due to covariance truncation.
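The truncation idea can be sketched with a hand-rolled orthonormal Haar transform (the paper's wavelet basis and thresholds are not specified here; the 15% retention rate and the smooth test signal are illustrative):

```python
import numpy as np

def haar_forward(x):
    """Full orthonormal Haar decomposition of a length-2^J signal."""
    a, details = x.astype(float), []
    while len(a) > 1:
        s = (a[0::2] + a[1::2]) / np.sqrt(2)   # approximation coefficients
        d = (a[0::2] - a[1::2]) / np.sqrt(2)   # detail coefficients
        details.append(d)
        a = s
    return details + [a]

def haar_inverse(coeffs):
    """Invert haar_forward exactly."""
    a = coeffs[-1]
    for d in reversed(coeffs[:-1]):
        out = np.empty(2 * len(a))
        out[0::2] = (a + d) / np.sqrt(2)
        out[1::2] = (a - d) / np.sqrt(2)
        a = out
    return a

def truncate(coeffs, keep_frac):
    """Zero all but the largest keep_frac of coefficients by magnitude."""
    flat = np.concatenate(coeffs)
    k = max(1, int(keep_frac * len(flat)))
    thresh = np.sort(np.abs(flat))[-k]
    return [np.where(np.abs(c) >= thresh, c, 0.0) for c in coeffs]

t = np.linspace(0, 1, 128)
signal = np.exp(-((t - 0.5) ** 2) / 0.02)        # smooth localized bump
recon = haar_inverse(truncate(haar_forward(signal), 0.15))
err = np.linalg.norm(recon - signal) / np.linalg.norm(signal)
print(err)
```

Because the bump's energy concentrates in a few coarse-scale coefficients, a small fraction of coefficients reconstructs it with small relative error, which is the mechanism behind the 3% and 6% figures above.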
Laguerre approximation of random foams
NASA Astrophysics Data System (ADS)
Liebscher, André
2015-09-01
Stochastic models for the microstructure of foams are valuable tools to study the relations between microstructure characteristics and macroscopic properties. Owing to the physical laws behind the formation of foams, Laguerre tessellations have turned out to be suitable models for foams. Laguerre tessellations are weighted generalizations of Voronoi tessellations, where polyhedral cells are formed through the interaction of weighted generator points. While both share the same topology, the cell curvature of foams allows only an approximation by Laguerre tessellations. This makes the model fitting a challenging task, especially when the preservation of the local topology is required. In this work, we propose an inversion-based approach to fit a Laguerre tessellation model to a foam. The idea is to find a set of generator points whose tessellation best fits the foam's cell system. For this purpose, we transform the model fitting into a minimization problem that can be solved by gradient descent-based optimization. The proposed algorithm restores the generators of a tessellation if it is known to be Laguerre. If, as in the case of foams, no exact solution is possible, an approximate solution is obtained that maintains the local topology.
Adaptive Discontinuous Galerkin Approximation to Richards' Equation
NASA Astrophysics Data System (ADS)
Li, H.; Farthing, M. W.; Miller, C. T.
2006-12-01
Due to the occurrence of large gradients in fluid pressure as a function of space and time, resulting from nonlinearities in closure relations, numerical solutions to Richards' equation are notoriously difficult for certain media properties and auxiliary conditions that occur routinely in describing physical systems of interest. These difficulties have motivated a substantial amount of work aimed at improving numerical approximations to this physically important and mathematically rich model. In this work, we build upon recent advances in temporal and spatial discretization methods by developing spatially and temporally adaptive solution approaches based upon the local discontinuous Galerkin method in space and a higher-order backward difference method in time. Spatial step-size adaptation (h-adaptation) approaches are evaluated, and a so-called hp-adaptation strategy is considered as well, which adjusts both the step size and the order of the approximation. Solution algorithms are advanced and performance is evaluated. The spatially and temporally adaptive approaches are shown to be robust and to offer significant increases in computational efficiency compared to similar state-of-the-art methods that adapt in time alone. In addition, we extend the proposed methods to two dimensions and provide preliminary numerical results.
Hamilton's Principle and Approximate Solutions to Problems in Classical Mechanics
ERIC Educational Resources Information Center
Schlitt, D. W.
1977-01-01
Shows how to use the Ritz method for obtaining approximate solutions to problems expressed in variational form directly from the variational equation. Application of this method to classical mechanics is given. (MLH)
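A small worked example of the Ritz idea described above, applied to the variational form of -u'' = 1 on (0, 1) with u(0) = u(1) = 0. The trial functions x^k (1 - x) are a hypothetical choice, not taken from the article; since the exact minimizer x(1 - x)/2 lies in their span, the Ritz solution recovers it.

```python
import numpy as np

N = 4000
x = (np.arange(N) + 0.5) / N           # midpoint quadrature nodes
dx = 1.0 / N
K = 3                                  # number of trial functions

# Trial functions phi_k = x^k (1 - x) and their exact derivatives.
phi = np.array([x ** k * (1 - x) for k in range(1, K + 1)])
dphi = np.array([x ** (k - 1) * (k - (k + 1) * x) for k in range(1, K + 1)])

# Ritz system A c = b with A_ij = integral(phi_i' phi_j'), b_i = integral(f phi_i), f = 1.
A = dphi @ dphi.T * dx
b = phi.sum(axis=1) * dx
c = np.linalg.solve(A, b)

u = (c[:, None] * phi).sum(axis=0)
u_mid = u[np.abs(x - 0.5).argmin()]    # exact value is 0.125
```

The linear system comes directly from making the variational functional stationary over the trial space, which is the essence of the Ritz method.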
Planetary ephemerides approximation for radar astronomy
NASA Technical Reports Server (NTRS)
Sadr, R.; Shahshahani, M.
1991-01-01
The planetary ephemerides approximation for radar astronomy is discussed and, in particular, the effect of this approximation on the performance of the programmable local oscillator (PLO) used in the Goldstone Solar System Radar is presented. Four different approaches are considered, and it is shown that Gram polynomials outperform the commonly used technique based on Chebyshev polynomials. These methods are used to analyze the mean-square phase error and the frequency tracking error in the presence of the worst-case Doppler shift that one may encounter within the solar system. It is shown that in the worst case the phase error is under one degree and the frequency tracking error is less than one hertz when the frequency to the PLO is updated every millisecond.
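The polynomial-approximation task described above can be sketched with NumPy's Chebyshev least-squares fit on a smooth stand-in for a Doppler time profile; the actual ephemeris data and the Gram-polynomial comparison are not reproduced here, and the profile below is entirely made up.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

t = np.linspace(-1.0, 1.0, 401)                     # normalized time window
doppler = np.cos(3.0 * t) + 0.5 * np.sin(2.0 * t)   # hypothetical smooth profile

coef = C.chebfit(t, doppler, deg=10)                # least-squares Chebyshev fit
max_err = float(np.abs(C.chebval(t, coef) - doppler).max())
```

For smooth signals like this one, Chebyshev coefficients decay rapidly, so a degree-10 fit already tracks the profile to well below the accuracy budget quoted in the abstract.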
Approximated solutions to Born-Infeld dynamics
NASA Astrophysics Data System (ADS)
Ferraro, Rafael; Nigro, Mauro
2016-02-01
The Born-Infeld equation in the plane is usefully captured in complex language. The general exact solution can be written as a combination of holomorphic and anti-holomorphic functions. However, this solution only expresses the potential in an implicit way. We rework the formulation to obtain the complex potential in an explicit way, by means of a perturbative procedure. We take care of the secular behavior common to this kind of approach, by resorting to a symmetry the equation has at the considered order of approximation. We apply the method to build approximated solutions to Born-Infeld electrodynamics. We solve for BI electromagnetic waves traveling in opposite directions. We study the propagation at interfaces, with the aim of searching for effects susceptible to experimental detection. In particular, we show that a reflected wave is produced when a wave is incident on a semi-space containing a magnetostatic field.
Approximate inverse preconditioners for general sparse matrices
Chow, E.; Saad, Y.
1994-12-31
Preconditioned Krylov subspace methods are often very efficient in solving sparse linear systems that arise from the discretization of elliptic partial differential equations. However, for general sparse indefinite matrices, the usual ILU preconditioners fail, often because the resulting factors L and U give rise to unstable forward and backward sweeps. In such cases, alternative preconditioners based on approximate inverses may be attractive. We are currently developing a number of such preconditioners based on iterating on each column to get the approximate inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., we must use sparse-matrix by sparse-vector type operations. We will discuss a few options and compare their performance on standard problems from the Harwell-Boeing collection.
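A dense toy version of the column-by-column idea (an assumption for illustration, not the authors' code): each column m_j of the approximate inverse M is improved by a few minimal-residual steps on A m_j = e_j, then small entries are dropped, mimicking the sparse-mode computation described above.

```python
import numpy as np

n = 20
A = 4.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # diagonally dominant test matrix

M = np.zeros((n, n))
for jcol in range(n):
    e = np.zeros(n)
    e[jcol] = 1.0
    m = np.zeros(n)
    for _ in range(10):                       # minimal-residual steps on A m = e_j
        r = e - A @ m
        Ar = A @ r
        m += (Ar @ r) / (Ar @ Ar) * r         # step length minimizing the residual
    m[np.abs(m) < 1e-3] = 0.0                 # drop tolerance keeps M sparse
    M[:, jcol] = m

residual = float(np.linalg.norm(np.eye(n) - A @ M))  # quality of the preconditioner
```

In a genuine sparse implementation both r and m would stay in sparse storage and the drop rule would be applied during the iteration, which is what makes the construction cheap.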
Analytical approximations for spiral waves
Löber, Jakob; Engel, Harald
2013-12-15
We propose a non-perturbative attempt to solve the kinematic equations for spiral waves in excitable media. From the eikonal equation for the wave front we derive an implicit analytical relation between rotation frequency Ω and core radius R_0. For free, rigidly rotating spiral waves our analytical prediction is in good agreement with numerical solutions of the linear eikonal equation not only for very large but also for intermediate and small values of the core radius. An equivalent Ω(R_+) dependence improves the result by Keener and Tyson for spiral waves pinned to a circular defect of radius R_+ with Neumann boundaries at the periphery. Simultaneously, analytical approximations for the shape of free and pinned spirals are given. We discuss the reasons why the ansatz fails to correctly describe the dependence of the rotation frequency on the excitability of the medium.
Approximating metal-insulator transitions
NASA Astrophysics Data System (ADS)
Danieli, Carlo; Rayanov, Kristian; Pavlov, Boris; Martin, Gaven; Flach, Sergej
2015-12-01
We consider quantum wave propagation in one-dimensional quasiperiodic lattices. We propose an iterative construction of quasiperiodic potentials from sequences of potentials with increasing spatial period. At each finite iteration step, the eigenstates reflect the properties of the limiting quasiperiodic potential up to a controlled maximum system size. We then observe approximate metal-insulator transitions (MIT) at the finite iteration steps. We also report evidence of mobility edges, which are at variance with the celebrated Aubry-André model. The dynamics near the MIT shows a critical slowing down of the ballistic group velocity in the metallic phase, similar to the divergence of the localization length in the insulating phase.
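An illustrative sketch in the same spirit (not the authors' construction): a periodic approximant of the Aubry-André potential obtained from a Fibonacci rational p/q approximating the golden mean, with localization monitored through the inverse participation ratio (IPR) of tight-binding eigenstates; the parameters below are made up for illustration.

```python
import numpy as np

def mean_ipr(lam, p, q):
    """Mean IPR of the eigenstates of an open chain with an Aubry-Andre-like potential."""
    sites = np.arange(q)
    v = lam * np.cos(2 * np.pi * (p / q) * sites + 0.3)   # periodic approximant potential
    h = np.diag(v) - np.eye(q, k=1) - np.eye(q, k=-1)     # tight-binding Hamiltonian
    _, vecs = np.linalg.eigh(h)
    return float((vecs ** 4).sum(axis=0).mean())          # ~1/N extended, O(1) localized

p, q = 55, 89                       # consecutive Fibonacci numbers, p/q -> golden mean
ipr_metal = mean_ipr(1.0, p, q)     # below the Aubry-Andre transition (lambda < 2)
ipr_insul = mean_ipr(3.0, p, q)     # above the transition: localized states
```

Raising the potential strength past the transition sharply increases the IPR, which is the finite-size signature of the approximate metal-insulator transition described above.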
Indexing the approximate number system.
Inglis, Matthew; Gilmore, Camilla
2014-01-01
Much recent research attention has focused on understanding individual differences in the approximate number system (ANS), a cognitive system believed to underlie human mathematical competence. To date, researchers have used four main indices of ANS acuity and have typically assumed that they measure similar properties. Here we report a study which questions this assumption. We demonstrate that the numerical ratio effect has poor test-retest reliability and that it does not relate to either Weber fractions or accuracy on nonsymbolic comparison tasks. Furthermore, we show that Weber fractions follow a strongly skewed distribution and that they have lower test-retest reliability than a simple accuracy measure. We conclude by arguing that in the future researchers interested in indexing individual differences in ANS acuity should use accuracy figures, not Weber fractions or numerical ratio effects.
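The Weber fractions discussed above are usually extracted with the standard ANS psychophysics model, P(correct) = Phi(|n1 - n2| / (w * sqrt(n1^2 + n2^2))). The sketch below fits w by a plain grid search to noise-free model-generated "data"; the stimulus pairs and the true w are made up for illustration.

```python
import math
import numpy as np

def p_correct(n1, n2, w):
    """Predicted comparison accuracy under the standard ANS model with Weber fraction w."""
    z = abs(n1 - n2) / (w * math.sqrt(n1 ** 2 + n2 ** 2))
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))     # standard normal CDF

pairs = [(8, 10), (10, 12), (12, 16), (16, 20), (9, 12), (14, 16)]
w_true = 0.25
observed = [p_correct(a, b, w_true) for a, b in pairs]    # noise-free "data"

grid = np.arange(0.05, 0.61, 0.01)
sse = [sum((p_correct(a, b, w) - o) ** 2 for (a, b), o in zip(pairs, observed))
       for w in grid]
w_hat = float(grid[int(np.argmin(sse))])
```

With real (noisy, skewed) accuracy data this fit is exactly where the reliability problems reported above arise.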
Legendre-Tau approximations for functional differential equations
NASA Technical Reports Server (NTRS)
Ito, K.; Teglas, R.
1983-01-01
The numerical approximation of solutions to linear functional differential equations is considered using the so-called Legendre-tau method. The functional differential equation is first reformulated as a partial differential equation with a nonlocal boundary condition involving time differentiation. The approximate solution is then represented as a truncated Legendre series with time-varying coefficients which satisfy a certain system of ordinary differential equations. The method is very easy to code and yields very accurate approximations. Convergence is established, various numerical examples are presented, and comparisons with cubic spline approximations are made.
Monotonically improving approximate answers to relational algebra queries
NASA Technical Reports Server (NTRS)
Smith, Kenneth P.; Liu, J. W. S.
1989-01-01
We present here a query processing method that produces approximate answers to queries posed in standard relational algebra. This method is monotone in the sense that the accuracy of the approximate result improves with the amount of time spent producing the result. This strategy enables us to trade the time to produce the result for the accuracy of the result. An approximate relational model that characterizes approximate relations and a partial order for comparing them is developed. Relational operators which operate on and return approximate relations are defined.
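A toy sketch (a hypothetical API, not the paper's system) of an approximate relation as a pair (certain, possible) with a monotone selection operator: as more of the base relation is scanned, the certain answers grow and the possible answers shrink, which is the partial order described above.

```python
class ApproxRelation:
    def __init__(self, certain, possible):
        self.certain = set(certain)       # tuples known to belong to the answer
        self.possible = set(possible)     # superset: tuples that may still belong

    def select(self, pred):
        """Selection returns certain/possible answer sets under the predicate."""
        return ApproxRelation({t for t in self.certain if pred(t)},
                              {t for t in self.possible if pred(t)})

    def refines(self, other):
        """True if self is at least as accurate as other (the partial order)."""
        return other.certain <= self.certain and self.possible <= other.possible

# Made-up base relation and query: employees earning more than 40000.
employees = [("ann", 51000), ("bob", 38000), ("eve", 67000), ("joe", 45000)]
high_paid = lambda t: t[1] > 40000

# After scanning two tuples, then after scanning all four:
early = ApproxRelation(employees[:2], employees).select(high_paid)
late = ApproxRelation(employees, employees).select(high_paid)
```

When the scan completes, the certain and possible sets coincide and the approximate answer becomes exact.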
Angular Distributions of Synchrotron Radiation in the Nonrelativistic Approximation
NASA Astrophysics Data System (ADS)
Bagrov, V. G.; Loginov, A. S.
2017-03-01
The angular distribution functions of the polarized components of synchrotron radiation in the nonrelativistic approximation are investigated using methods of classical and quantum theory. Particles of zero spin (bosons) and spin 1/2 (electrons) are considered in the quantum theory. It is shown that in the first nonzero approximation the angular distribution functions, calculated by methods of classical and quantum theory, coincide identically. Quantum corrections to the angular distribution functions appear only in the subsequent approximation whereas the total radiated power contains quantum and spin corrections already in the first approximation.
Approximate analytic solutions to the NPDD: Short exposure approximations
NASA Astrophysics Data System (ADS)
Close, Ciara E.; Sheridan, John T.
2014-04-01
There have been many attempts to accurately describe the photochemical processes that take place in photopolymer materials. As the models have become more accurate, solving them has become more numerically intensive and more 'opaque'. Recent models incorporate the major photochemical reactions taking place as well as the diffusion effects resulting from the photo-polymerisation process, and have accurately described these processes in a number of different materials. It is our aim to develop accessible mathematical expressions which provide physical insights and simple quantitative predictions of practical value to material designers and users. In this paper, starting with the Non-Local Photo-Polymerisation Driven Diffusion (NPDD) model coupled integro-differential equations, we first simplify these equations and validate the accuracy of the resulting approximate model. This new set of governing equations is then used to produce accurate analytic solutions (polynomials) describing the evolution of the monomer and polymer concentrations, and the grating refractive index modulation, in the case of short, low-intensity sinusoidal exposures. The physical significance of the results and their consequences for holographic data storage (HDS) are then discussed.
Approximation abilities of neuro-fuzzy networks
NASA Astrophysics Data System (ADS)
Mrówczyńska, Maria
2010-01-01
The paper presents the operation of two neuro-fuzzy systems of an adaptive type, intended for solving problems of the approximation of multi-variable functions in the domain of real numbers. Neuro-fuzzy systems, being a combination of the methodology of artificial neural networks and fuzzy sets, operate on the basis of a set of fuzzy "if-then" rules, generated by means of the self-organization of data grouping and the estimation of relations between fuzzy experiment results. The article includes a description of the neuro-fuzzy systems of Takagi-Sugeno-Kang (TSK) and Wang-Mendel (WM) and, to complement the problem in question, a hierarchical structural self-organizing method of teaching a fuzzy network. The multi-layer structure of the systems is analogous to the structure of "classic" neural networks. In its final part the article presents selected areas of application of neuro-fuzzy systems in the field of geodesy and surveying engineering. Numerical examples showing how the systems work concerned: the approximation of functions of several variables to be used as algorithms in Geographic Information Systems (the approximation of a terrain model), the transformation of coordinates, and the prediction of a time series. The accuracy characteristics of the results obtained have been taken into consideration.
Approximate Graph Edit Distance in Quadratic Time.
Riesen, Kaspar; Ferrer, Miquel; Bunke, Horst
2015-09-14
Graph edit distance is one of the most flexible and general graph matching models available. The major drawback of graph edit distance, however, is its computational complexity that restricts its applicability to graphs of rather small size. Recently the authors of the present paper introduced a general approximation framework for the graph edit distance problem. The basic idea of this specific algorithm is to first compute an optimal assignment of independent local graph structures (including substitutions, deletions, and insertions of nodes and edges). This optimal assignment is complete and consistent with respect to the involved nodes of both graphs and can thus be used to instantly derive an admissible (yet suboptimal) solution for the original graph edit distance problem in O(n³) time. For large-scale graphs or graph sets, however, the cubic time complexity may still be too high. Therefore, we propose to use suboptimal algorithms with quadratic rather than cubic time for solving the basic assignment problem. In particular, the present paper introduces five different greedy assignment algorithms in the context of graph edit distance approximation. In an experimental evaluation we show that these methods have great potential for further speeding up the computation of graph edit distance while the approximated distances remain sufficiently accurate for graph based pattern classification.
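The quadratic-time idea described above can be sketched as a greedy row-by-row assignment on a cost matrix, compared here against the exact optimum by brute force (feasible only for this tiny made-up example; the matrix is not taken from the paper).

```python
import itertools

def greedy_assignment(cost):
    """Greedy assignment: each row takes the cheapest unused column, O(n^2) overall."""
    n = len(cost)
    used, total, perm = [False] * n, 0.0, []
    for i in range(n):                         # each row scans all columns once
        j = min((c for c in range(n) if not used[c]), key=lambda c: cost[i][c])
        used[j] = True
        perm.append(j)
        total += cost[i][j]
    return perm, total

cost = [[4, 2, 8, 7],
        [9, 4, 5, 3],
        [6, 7, 1, 2],
        [3, 9, 6, 8]]

perm, greedy_cost = greedy_assignment(cost)
optimal_cost = min(sum(cost[i][p[i]] for i in range(4))
                   for p in itertools.permutations(range(4)))
```

The greedy cost is never below the optimum; in graph edit distance approximation the resulting assignment is still admissible, only possibly suboptimal.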
Analyticity of quantum states in one-dimensional tight-binding model
NASA Astrophysics Data System (ADS)
Yamada, Hiroaki S.; Ikeda, Kensuke S.
2014-09-01
Analytical complexity of a quantum wavefunction whose argument is extended into the complex plane provides important information about the potentiality of manifesting complex quantum dynamics such as time-irreversibility, dissipation and so on. We examine Pade approximation and some complementary methods to investigate the complex-analytical properties of some quantum states such as impurity states, Anderson-localized states and localized states of the Harper model. The impurity states can be characterized by simple poles of the Pade approximation, and the localized states of the Anderson model and the Harper model can be characterized by an accumulation of poles and zeros of the Pade-approximated function along a critical border, which implies a natural boundary (NB). A complementary method based on shifting the expansion center is used to confirm the existence of the NB numerically, and it is strongly suggested that both the Anderson-localized states and the localized states of the Harper model have NBs in the complex extension. Moreover, we discuss an interesting relationship between our research and the natural boundary problem of the potential function, whose close connection to the localization problem was discovered quite recently by some mathematicians. In addition, we examine the usefulness of the Pade approximation for numerically predicting the existence of NBs by means of two typical examples, lacunary power series and random power series.
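The pole extraction used above can be sketched in a few lines: build the [L/M] Pade denominator from Taylor coefficients by solving the standard linear system, then read poles off its roots. The test series below belongs to f(z) = 1 / ((1 - 2z)(1 + 3z)), a made-up example whose poles 1/2 and -1/3 should be recovered exactly.

```python
import numpy as np

def pade_denominator(c, L, M):
    """Denominator coefficients [1, b1, ..., bM] of the [L/M] Pade approximant."""
    c = np.asarray(c, dtype=float)
    A = np.zeros((M, M))
    rhs = np.zeros(M)
    # Match orders z^(L+1) .. z^(L+M): sum_{m=1}^{M} b_m c_{j-m} = -c_j.
    for row, jidx in enumerate(range(L + 1, L + M + 1)):
        rhs[row] = -c[jidx]
        for m in range(1, M + 1):
            if jidx - m >= 0:
                A[row, m - 1] = c[jidx - m]
    return np.concatenate([[1.0], np.linalg.solve(A, rhs)])

# Taylor coefficients of 1/((1 - 2z)(1 + 3z)) via the recurrence c_k = -c_{k-1} + 6 c_{k-2}.
c = [1.0, -1.0]
for _ in range(8):
    c.append(-c[-1] + 6.0 * c[-2])

q = pade_denominator(c, L=2, M=2)
poles = np.roots(q[::-1])             # roots of the denominator = candidate poles
```

For localized states the same machinery is applied to the wavefunction's power series, and it is the accumulation of such poles (together with nearby zeros) along a border that signals a natural boundary.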
Approximate analytical calculations of photon geodesics in the Schwarzschild metric
NASA Astrophysics Data System (ADS)
De Falco, Vittorio; Falanga, Maurizio; Stella, Luigi
2016-10-01
We develop a method for deriving approximate analytical formulae to integrate photon geodesics in a Schwarzschild spacetime. Based on this, we derive the approximate equations for light bending and propagation delay that have been introduced empirically. We then derive for the first time an approximate analytical equation for the solid angle. We discuss the accuracy and range of applicability of the new equations and present a few simple applications of them to known astrophysical problems.
NEW APPROACHES: Analysis, graphs, approximations: a toolbox for solving problems
NASA Astrophysics Data System (ADS)
Newburgh, Ronald
1997-11-01
A simple kinematic problem is solved by using three different techniques: analysis, graphs and approximations. Using three different techniques is pedagogically sound, for it leads the student to the realization that the physics of a problem, rather than the solution technique, is the more important for understanding. The approximation technique is a modification of the Newton-Raphson method but is considerably simpler, avoiding the calculation of derivatives. It also offers an opportunity to introduce approximation techniques at the very beginning of physics study.
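One standard derivative-free refinement in the same spirit is the secant method, which replaces the derivative in Newton-Raphson by a difference quotient; it is shown here only as an illustration, not necessarily the article's exact scheme, and the equation cos(t) = t is a made-up stand-in for the kinematic problem.

```python
import math

def secant(f, x0, x1, tol=1e-10, max_iter=50):
    """Root of f via the secant method: no derivatives, only function values."""
    for _ in range(max_iter):
        f0, f1 = f(x0), f(x1)
        x2 = x1 - f1 * (x1 - x0) / (f1 - f0)   # difference quotient replaces f'
        if abs(x2 - x1) < tol:
            return x2
        x0, x1 = x1, x2
    return x1

root = secant(lambda t: math.cos(t) - t, 0.0, 1.0)
```

Two function evaluations per step and no algebraic differentiation make this kind of scheme accessible at the very start of a physics course.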
Space-Time Approximation with Sparse Grids
Griebel, M; Oeltz, D; Vassilevski, P S
2005-04-14
In this article we introduce approximation spaces for parabolic problems which are based on the tensor product construction of a multiscale basis in space and a multiscale basis in time. Proper truncation then leads to so-called space-time sparse grid spaces. For a uniform discretization of the spatial space of dimension d with O(N^d) degrees of freedom, these spaces also involve only O(N^d) degrees of freedom for d > 1 for the discretization of the whole space-time problem. But they provide the same approximation rate as classical space-time Finite Element spaces, which need O(N^(d+1)) degrees of freedom. This makes these approximation spaces well suited for conventional parabolic and for time-dependent optimization problems. We analyze the approximation properties and the dimension of these sparse grid space-time spaces for general stable multiscale bases. We then restrict ourselves to an interpolatory multiscale basis, i.e. a hierarchical basis. Here, to be able to handle complicated spatial domains Ω as well, we construct the hierarchical basis from a given spatial Finite Element basis as follows: First we determine coarse grid points recursively over the levels by the coarsening step of the algebraic multigrid method. Then, we derive interpolatory prolongation operators between the respective coarse and fine grid points by a least squares approach. This way we obtain an algebraic hierarchical basis for the spatial domain which we then use in our space-time sparse grid approach. We give numerical results on the convergence rate of the interpolation error of these spaces for various space-time problems with two spatial dimensions. Also implementational issues, data structures and questions of adaptivity are addressed to some extent.
Multidimensional WKB approximation for particle tunneling
Zamastil, J.
2005-08-15
A method for obtaining the WKB wave function describing the particle tunneling outside of a two-dimensional potential well is suggested. The Cartesian coordinates (x,y) are chosen in such a way that the x axis has the direction of the probability flux at large distances from the well. The WKB wave function is then obtained by simultaneous expansion of the wave function in the coordinate y and the parameter determining the curvature of the escape path. It is argued, both physically and mathematically, that these two expansions are mutually consistent. It is shown that the method provides systematic approximation to the outgoing probability flux. Both the technical and conceptual advantages of this approach in comparison with the usual approach based on the solution of classical equations of motion are pointed out. The method is applied to the problem of the coupled anharmonic oscillators and verified through the dispersion relations.
Investigating Material Approximations in Spacecraft Radiation Analysis
NASA Technical Reports Server (NTRS)
Walker, Steven A.; Slaba, Tony C.; Clowdsley, Martha S.; Blattnig, Steve R.
2011-01-01
During the design process, the configuration of space vehicles and habitats changes frequently and the merits of design changes must be evaluated. Methods for rapidly assessing astronaut exposure are therefore required. Typically, approximations are made to simplify the geometry and speed up the evaluation of each design. In this work, the error associated with two common approximations used to simplify space radiation vehicle analyses, scaling into equivalent materials and material reordering, is investigated. Over thirty materials commonly found in spacesuits, vehicles, and human bodies are considered. Each material is placed in a material group (aluminum, polyethylene, or tissue), and the error associated with scaling and reordering is quantified for each material. Of the scaling methods investigated, range scaling is shown to be the superior method, especially for shields of less than 30 g/cm² exposed to a solar particle event. More complicated, realistic slabs are examined to quantify the separate and combined effects of using equivalent materials and reordering. The error associated with material reordering is shown to be at least comparable to, if not greater than, the error associated with range scaling. In general, scaling and reordering errors were found to grow with the difference between the average nuclear charge of the actual material and the average nuclear charge of the equivalent material. Based on this result, a different set of equivalent materials (titanium, aluminum, and tissue) is substituted for the commonly used aluminum, polyethylene, and tissue. The realistic cases are scaled and reordered using the new equivalent materials, and the reduced error is shown.
Shear viscosity in the postquasistatic approximation
Peralta, C.; Rosales, L.; Rodriguez-Mueller, B.; Barreto, W.
2010-05-15
We apply the postquasistatic approximation, an iterative method for the evolution of self-gravitating spheres of matter, to study the evolution of anisotropic nonadiabatic radiating and dissipative distributions in general relativity. Dissipation is described by viscosity and free-streaming radiation, assuming an equation of state to model anisotropy induced by the shear viscosity. We match the interior solution, in noncomoving coordinates, with the Vaidya exterior solution. Two simple models are presented, based on the Schwarzschild and Tolman VI solutions, in the nonadiabatic and adiabatic limit. In both cases, the eventual collapse or expansion of the distribution is mainly controlled by the anisotropy induced by the viscosity.
Approximation of Dynamical System's Separatrix Curves
NASA Astrophysics Data System (ADS)
Cavoretto, Roberto; Chaudhuri, Sanjay; De Rossi, Alessandra; Menduni, Eleonora; Moretti, Francesca; Rodi, Maria Caterina; Venturino, Ezio
2011-09-01
In dynamical systems, saddle points partition the domain into basins of attraction of the remaining locally stable equilibria. This problem is rather common, especially in population dynamics models like prey-predator or competition systems. In this paper we construct programs for the detection of points lying on the separatrix curve, i.e. the curve which partitions the domain. Finally, an efficient algorithm, which is based on the Partition of Unity method with local approximants given by Wendland's functions, is used for reconstructing the separatrix curve.
NASA Astrophysics Data System (ADS)
Hinds, Arianne T.
2011-09-01
Spatial transformations whose kernels employ sinusoidal functions for the decorrelation of signals remain as fundamental components of image and video coding systems. Practical implementations are designed in fixed precision for which the most challenging task is to approximate these constants with values that are both efficient in terms of complexity and accurate with respect to their mathematical definitions. Scaled architectures, for example, as used in the implementations of the order-8 Discrete Cosine Transform and its corresponding inverse both specified in ISO/IEC 23002-2 (MPEG C Pt. 2), can be utilized to mitigate the complexity of these approximations. That is, the implementation of the transform can be designed such that it is completed in two stages: 1) the main transform matrix in which the sinusoidal constants are roughly approximated, and 2) a separate scaling stage to further refine the approximations. This paper describes a methodology termed the Common Factor Method, for finding fixed-point approximations of such irrational values suitable for use in scaled architectures. The order-16 Discrete Cosine Transform provides a framework in which to demonstrate the methodology, but the methodology itself can be employed to design fixed-point implementations of other linear transformations.
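A rough sketch of the fixed-point design space discussed above (not the Common Factor Method itself): approximate the distinct order-16 DCT cosine constants by integers over one shared scaling factor s, and pick the s in a candidate range that minimizes the worst-case approximation error. The candidate range is an arbitrary illustration.

```python
import numpy as np

constants = np.cos(np.pi * np.arange(1, 8) / 16.0)   # distinct DCT cosine constants

def worst_error(s):
    """Worst-case error when every constant is rounded to an integer over s."""
    return float(np.abs(constants - np.round(constants * s) / s).max())

candidates = range(16, 4097)
best_s = min(candidates, key=worst_error)
best_err = worst_error(best_s)
```

Sharing one scaling factor across all constants is what lets a scaled architecture fold the refinement into a single separate scaling stage.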
Convergence to approximate solutions and perturbation resilience of iterative algorithms
NASA Astrophysics Data System (ADS)
Reich, Simeon; Zaslavski, Alexander J.
2017-04-01
We first consider nonexpansive self-mappings of a metric space and study the asymptotic behavior of their inexact orbits. We then apply our results to the analysis of iterative methods for finding approximate fixed points of nonexpansive mappings and approximate zeros of monotone operators.
Perturbation approximation for orbits in axially symmetric funnels
NASA Astrophysics Data System (ADS)
Nauenberg, Michael
2014-11-01
A perturbation method that can be traced back to Isaac Newton is applied to obtain approximate analytic solutions for objects sliding in axially symmetric funnels in near circular orbits. Some experimental observations are presented for balls rolling in inverted cones with different opening angles, and in a funnel with a hyperbolic surface that approximately simulates the gravitational force.
Perturbed kernel approximation on homogeneous manifolds
NASA Astrophysics Data System (ADS)
Levesley, J.; Sun, X.
2007-02-01
Current methods for interpolation and approximation within a native space rely heavily on the strict positive-definiteness of the underlying kernels. If the domains of approximation are the unit spheres in Euclidean spaces, then zonal kernels (kernels that are invariant under the orthogonal group action) are strongly favored. In the implementation of these methods to handle real world problems, however, some or all of the symmetries and positive-definiteness may be lost in digitization due to small random errors that occur unpredictably during various stages of the execution. Perturbation analysis is therefore needed to address the stability problem encountered. In this paper we study two kinds of perturbations of positive-definite kernels: small random perturbations and perturbations by Dunkl's intertwining operators [C. Dunkl, Y. Xu, Orthogonal Polynomials of Several Variables, Encyclopedia of Mathematics and Its Applications, vol. 81, Cambridge University Press, Cambridge, 2001]. We show that, under some reasonable assumptions, a small random perturbation of a strictly positive-definite kernel can still provide vehicles for interpolation and enjoy the same error estimates. We examine the actions of the Dunkl intertwining operators on zonal (strictly) positive-definite kernels on spheres. We show that the resulting kernels are (strictly) positive-definite on spheres of lower dimensions.
Approximate solutions to fractional subdiffusion equations
NASA Astrophysics Data System (ADS)
Hristov, J.
2011-03-01
The work presents integral solutions of the fractional subdiffusion equation by an integral method, as an alternative approach to the solutions employing hypergeometric functions. The integral solution assumes a predefined profile with unknown coefficients and the concept of a penetration depth (boundary layer). The prescribed profile satisfies the boundary conditions imposed by the boundary layer, which allows its coefficients to be expressed through its depth as the unique parameter. The integral approach to the fractional subdiffusion equation replaces the real distribution function by the approximate profile. The solutions are developed with the Riemann-Liouville time-fractional derivative, since the integral approach avoids the definition of the initial value of the time derivative required by the Laplace-transformed equations, which leads to a transition to Caputo derivatives. The method is demonstrated by solutions to two simple fractional subdiffusion equations (Dirichlet problems): 1) the time-fractional diffusion equation, and 2) the time-fractional drift equation, both of which have fundamental solutions expressed through the M-Wright function. The solutions demonstrate some basic issues of the suggested integral approach, among them: a) the choice of the profile; b) the integration problem emerging when the distribution (profile) is replaced by a prescribed one with unknown coefficients; c) optimization of the profile so as to minimize the average error of approximation; d) numerical results allowing comparisons with the known solutions expressed through the M-Wright function, and error estimations.
Topics in Multivariate Approximation Theory.
1982-05-01
Topics include tensor products, multivariate polynomial interpolation (especially Kergin interpolation), and recent developments in multivariate B-splines. These developments may well provide the theoretical foundation for efficient methods of multivariate approximation. Key words: multivariate, B-splines, Kergin interpolation, linear projectors. AMS (MOS) Subject Classifications: 41-02, 41A05, 41A10, 41A15, 41A63, 41A65.
Eigenvector Approximation Leading to Exponential Speedup of Quantum Eigenvalue Calculation
NASA Astrophysics Data System (ADS)
Jaksch, Peter; Papageorgiou, Anargyros
2003-12-01
We present an efficient method for preparing the initial state required by the eigenvalue approximation quantum algorithm of Abrams and Lloyd. Our method can be applied when solving continuous Hermitian eigenproblems, e.g., the Schrödinger equation, on a discrete grid. We start with a classically obtained eigenvector for a problem discretized on a coarse grid, and we efficiently construct, quantum mechanically, an approximation of the same eigenvector on a fine grid. We use this approximation as the initial state for the eigenvalue estimation algorithm, and show the relationship between its success probability and the size of the coarse grid.
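A purely classical numerical sketch of the coarse-to-fine idea above: the ground eigenvector of a 1-D discrete Laplacian computed on a coarse grid and linearly interpolated to a nested fine grid already has a large overlap with the true fine-grid eigenvector, which is why it makes a good initial state. Grid sizes are arbitrary illustrative choices.

```python
import numpy as np

def ground_state(n):
    """Normalized ground eigenvector of the 1-D Dirichlet discrete Laplacian."""
    h = np.diag(2.0 * np.ones(n)) - np.eye(n, k=1) - np.eye(n, k=-1)
    _, vecs = np.linalg.eigh(h)
    v = vecs[:, 0]
    return v / np.linalg.norm(v)

n_coarse, n_fine = 31, 63                       # nested grids, h -> h/2
x_coarse = np.arange(1, n_coarse + 1) / (n_coarse + 1)
x_fine = np.arange(1, n_fine + 1) / (n_fine + 1)

v_coarse = ground_state(n_coarse)
v_interp = np.interp(x_fine, x_coarse, v_coarse)   # linear prolongation to fine grid
v_interp /= np.linalg.norm(v_interp)

overlap = float(abs(v_interp @ ground_state(n_fine)))
```

In the quantum algorithm the analogous prolongation is carried out unitarily, and a large overlap translates directly into a high success probability for the eigenvalue estimation.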
NASA Astrophysics Data System (ADS)
Francisco, E.; Seijo, L.; Pueyo, L.
1986-07-01
The method of maximum overlap, often applied to the problem of basis set reduction, is formulated in terms of weighted least squares with orthogonality restrictions. An analytical solution for the linear parameters of the reduced set is given. In this form, the method is a general and efficient scheme for reducing basis sets. As an application, orthogonal radial wavefunctions of the STO type have been obtained for the 3d transition metal atoms and ions by simulation of the high-quality sets of Clementi and Roetti. The performance of the reduction has been evaluated by examining several one- and two-electron interactions. Results of these tests reveal that the new functions are highly accurate simulations of the reference AOs. They appear to be appropriate for molecular and solid state calculations.
Revisiting approximate dynamic programming and its convergence.
Heydari, Ali
2014-12-01
Value iteration-based approximate/adaptive dynamic programming (ADP) as an approximate solution to infinite-horizon optimal control problems with deterministic dynamics and continuous state and action spaces is investigated. The learning iterations are decomposed into an outer loop and an inner loop. A relatively simple proof for the convergence of the outer-loop iterations to the optimal solution is provided using a novel idea with some new features. It presents an analogy between the value function during the iterations and the value function of a fixed-final-time optimal control problem. The inner loop is utilized to avoid the need for solving a set of nonlinear equations or a nonlinear optimization problem numerically, at each iteration of ADP for the policy update. Sufficient conditions for the uniqueness of the solution to the policy update equation and for the convergence of the inner-loop iterations to the solution are obtained. Afterwards, the results are formed as a learning algorithm for training a neurocontroller or creating a look-up table to be used for optimal control of nonlinear systems with different initial conditions. Finally, some of the features of the investigated method are numerically analyzed.
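The outer-loop value iteration can be illustrated in a tabular setting (the abstract treats continuous state and action spaces; this finite chain MDP, its rewards, and the discount factor are invented for illustration):

```python
import numpy as np

# Deterministic 5-state chain; actions move left/right, and stepping
# into the terminal state 4 pays reward 1.
n_states, gamma = 5, 0.9
next_state = np.array([[0, 1], [0, 2], [1, 3], [2, 4], [4, 4]])
reward = np.zeros((n_states, 2))
reward[3, 1] = 1.0

V = np.zeros(n_states)
for _ in range(200):                       # outer-loop iterations
    Q = reward + gamma * V[next_state]     # Bellman backup
    V_new = Q.max(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-10:  # converged to the fixed point
        break
    V = V_new
print(V)  # V ≈ [0.729, 0.81, 0.9, 1.0, 0.0]
```

In the continuous setting of the paper this maximization cannot be done by enumeration, which is exactly what the inner loop is introduced to handle.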
Variational extensions of the mean spherical approximation
NASA Astrophysics Data System (ADS)
Blum, L.; Ubriaco, M.
2000-04-01
In a previous work we proposed a method to study complex systems with objects of arbitrary size. For certain specific forms of the atomic and molecular interactions, surprisingly simple and accurate theories (the variational mean spherical scaling approximation, VMSSA) [(Velazquez, Blum, J. Chem. Phys. 110 (1990) 10 931; Blum, Velazquez, J. Quantum Chem. (Theochem), in press)] can be obtained. The basic idea is that if the interactions can be expressed as a rapidly converging sum of (complex) exponentials, then the Ornstein-Zernike (OZ) equation has an analytical solution. This analytical solution is used to construct a robust interpolation scheme, the VMSSA. The Helmholtz excess free energy ΔA = ΔE - TΔS is then written as a function of a scaling matrix Γ; both the excess energy ΔE(Γ) and the excess entropy ΔS(Γ) are functionals of Γ. In previous work of this series the form of this functional was found for the two-exponential (Blum, Herrera, Mol. Phys. 96 (1999) 821) and three-exponential closures of the OZ equation (Blum, J. Stat. Phys., submitted for publication). In this paper we extend this to M Yukawas, a complete basis set: we obtain a solution for the one-component case and give a closed-form expression for the MSA excess entropy, which is also the VMSSA entropy.
Wiedensohler, A.; Aalto, P.; Covert, D.; Heintzenberg, J.; McMurry, P.H.
1994-08-01
Four different methods for measuring ultrafine particle size distributions in the 3-10-nm particle diameter range are compared and discussed. These methods all use an ultrafine condensation particle counter (TSI Inc. Model 3025 or its prototype) as the detector, but use different approaches to determine the size of the particles counted. Size classification was achieved using a Hauke Model VIE-06 differential mobility analyzer, a specially configured TSI Model 3040S diffusion battery, an ultrafine condensation particle counter with a variable condenser temperature, and an ultrafine condensation particle counter with a pulse height analyzer for signals produced by the optical detector. The response of these systems to ultrafine particles of known size and composition was studied during a workshop held in Lund, Sweden, during July 1991. After this workshop, measurements of ultrafine particles were made on the Swedish icebreaker Oden during the International Arctic Ocean Expedition 1991 (August 1, 1991 through October 7, 1991). In this article, the results of these laboratory and field measurements are discussed. The strengths and limitations of these measurement methods are emphasized. 30 refs., 9 figs., 3 tabs.
Impact of inflow transport approximation on light water reactor analysis
NASA Astrophysics Data System (ADS)
Choi, Sooyoung; Smith, Kord; Lee, Hyun Chul; Lee, Deokjung
2015-10-01
The impact of the inflow transport approximation on light water reactor analysis is investigated, and it is verified that the inflow transport approximation significantly improves the accuracy of the transport and transport/diffusion solutions. A methodology for an inflow transport approximation is implemented in order to generate an accurate transport cross section. The inflow transport approximation is compared to the conventional methods, which are the consistent-PN and the outflow transport approximations. The three transport approximations are implemented in the lattice physics code STREAM, and verification is performed for various verification problems in order to investigate their effects and accuracy. From the verification, it is noted that the consistent-PN and the outflow transport approximations cause significant error in calculating the eigenvalue and the power distribution. The inflow transport approximation shows very accurate and precise results for the verification problems. The inflow transport approximation shows significant improvements not only for the high leakage problem but also for practical large core problem analyses.
Signal Approximation with a Wavelet Neural Network
1992-12-01
specialized electronic devices like the Intel Electronically Trainable Analog Neural Network (ETANN) chip. The WNN representation allows the … accurately approximated with a WNN trained with irregularly sampled data. Keywords: signal approximation, wavelet neural network.
Rough Set Approximations in Formal Concept Analysis
NASA Astrophysics Data System (ADS)
Yamaguchi, Daisuke; Murata, Atsuo; Li, Guo-Dong; Nagai, Masatake
Conventional set approximations are based on a set of attributes; however, these approximations cannot relate an object to the corresponding attribute. In this study, a new model for set approximation based on individual attributes is proposed for interval-valued data. Defining an indiscernibility relation is omitted since each attribute value itself has a set of values. Two types of approximations, single- and multi-attribute approximations, are presented. A multi-attribute approximation has two solutions: a maximum and a minimum solution. A maximum solution is a set of objects that satisfy the condition of approximation for at least one attribute. A minimum solution is a set of objects that satisfy the condition for all attributes. The proposed set approximation is helpful in finding the features of objects relating to condition attributes when interval-valued data are given. The proposed model contributes to feature extraction in interval-valued information systems.
Compressive Imaging via Approximate Message Passing
2015-09-04
We propose novel compressive imaging algorithms that employ approximate message passing (AMP), which is an iterative signal estimation algorithm that … Final Report: Compressive Imaging via Approximate Message Passing. … Keywords: approximate message passing, compressive imaging, compressive sensing, hyperspectral imaging, signal reconstruction.
Fractal Trigonometric Polynomials for Restricted Range Approximation
NASA Astrophysics Data System (ADS)
Chand, A. K. B.; Navascués, M. A.; Viswanathan, P.; Katiyar, S. K.
2016-05-01
One-sided approximation tackles the problem of approximation of a prescribed function by simple traditional functions such as polynomials or trigonometric functions that lie completely above or below it. In this paper, we use the concept of fractal interpolation function (FIF), precisely of fractal trigonometric polynomials, to construct one-sided uniform approximants for some classes of continuous functions.
Razafinjanahary, H.; Rogemond, F.; Chermette, H.
1994-08-15
The MS-LSD method remains a method of interest when rapidity and small computer resources are required; its main drawback is some lack of accuracy, mainly due to the muffin-tin distribution of the potential. In the case of large clusters or molecules, the use of an empty sphere to fill, in part, the large intersphere region can greatly improve the results. Calculations bearing on C{sub 60} have been undertaken to underline this trend, because, on the one hand, the fullerenes exhibit a remarkable possibility to fit a large empty sphere in the center of the cluster and, on the other hand, numerous accurate calculations have already been published, allowing quantitative comparison with results. The authors' calculations suggest that in the case of an added empty sphere the results compare well with those of more accurate calculations. The calculated electron affinities for C{sub 60} and C{sub 60}{sup {minus}} are in reasonable agreement with experimental values, but the stability of C{sub 60}{sup 2-} in the gas phase is not found. 35 refs., 3 figs., 5 tabs.
CT reconstruction via denoising approximate message passing
NASA Astrophysics Data System (ADS)
Perelli, Alessandro; Lexa, Michael A.; Can, Ali; Davies, Mike E.
2016-05-01
In this paper, we adapt and apply a compressed sensing based reconstruction algorithm to the problem of computed tomography reconstruction for luggage inspection. Specifically, we propose a variant of the denoising generalized approximate message passing (D-GAMP) algorithm and compare its performance to the performance of traditional filtered back projection and to a penalized weighted least squares (PWLS) based reconstruction method. D-GAMP is an iterative algorithm that at each iteration estimates the conditional probability of the image given the measurements and employs a non-linear "denoising" function which implicitly imposes an image prior. Results on real baggage show that D-GAMP is well-suited to limited-view acquisitions.
Improved approximations for control augmented structural synthesis
NASA Technical Reports Server (NTRS)
Thomas, H. L.; Schmit, L. A.
1990-01-01
A methodology for control-augmented structural synthesis is presented for structure-control systems which can be modeled as an assemblage of beam, truss, and nonstructural mass elements augmented by a noncollocated direct output feedback control system. Truss areas, beam cross sectional dimensions, nonstructural masses and rotary inertias, and controller position and velocity gains are treated simultaneously as design variables. The structural mass and a control-system performance index can be minimized simultaneously, with design constraints placed on static stresses and displacements, dynamic harmonic displacements and forces, structural frequencies, and closed-loop eigenvalues and damping ratios. Intermediate design-variable and response-quantity concepts are used to generate new approximations for displacements and actuator forces under harmonic dynamic loads and for system complex eigenvalues. This improves the overall efficiency of the procedure by reducing the number of complete analyses required for convergence. Numerical results which illustrate the effectiveness of the method are given.
Robust Generalized Low Rank Approximations of Matrices.
Shi, Jiarong; Yang, Wei; Zheng, Xiuyun
2015-01-01
In recent years, the intrinsic low rank structure of some datasets has been extensively exploited to reduce dimensionality, remove noise and complete the missing entries. As a well-known technique for dimensionality reduction and data compression, Generalized Low Rank Approximations of Matrices (GLRAM) claims its superiority on computation time and compression ratio over the SVD. However, GLRAM is very sensitive to sparse large noise or outliers, and its robust version has not yet been explored. To address this problem, this paper proposes a robust method for GLRAM, named Robust GLRAM (RGLRAM). We first formulate RGLRAM as an l1-norm optimization problem which minimizes the l1-norm of the approximation errors. Secondly, we apply the technique of Augmented Lagrange Multipliers (ALM) to solve this l1-norm minimization problem and derive a corresponding iterative scheme. Then the weak convergence of the proposed algorithm is discussed under mild conditions. Next, we investigate a special case of RGLRAM and extend RGLRAM to a general tensor case. Finally, extensive experiments on synthetic data show that RGLRAM can exactly recover both the low-rank and the sparse components, whereas previous state-of-the-art algorithms may fail to. We also discuss three issues on RGLRAM: the sensitivity to initialization, the generalization ability, and the relationship between the running time and the size/number of matrices. Moreover, the experimental results on images of faces with large corruptions illustrate that RGLRAM achieves better denoising and compression performance than the other methods.
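The baseline GLRAM alternating scheme that RGLRAM robustifies can be sketched as follows (the synthetic data and dimensions are illustrative; the paper's l1/ALM machinery is not reproduced here):

```python
import numpy as np

def glram(As, l, r, iters=20):
    """Baseline GLRAM: find L (m x l) and R (n x r) with orthonormal
    columns maximizing sum_i ||L^T A_i R||_F^2, by alternating
    eigen-decompositions."""
    m, n = As[0].shape
    R = np.eye(n, r)
    for _ in range(iters):
        ML = sum(A @ R @ R.T @ A.T for A in As)
        L = np.linalg.eigh(ML)[1][:, -l:]        # top-l eigenvectors
        MR = sum(A.T @ L @ L.T @ A for A in As)
        R = np.linalg.eigh(MR)[1][:, -r:]        # top-r eigenvectors
    return L, R

rng = np.random.default_rng(0)
# synthetic matrices sharing a rank-2 row/column structure, plus noise
L0, R0 = rng.standard_normal((8, 2)), rng.standard_normal((10, 2))
As = [L0 @ rng.standard_normal((2, 2)) @ R0.T
      + 0.01 * rng.standard_normal((8, 10)) for _ in range(5)]
L, R = glram(As, 2, 2)
err = sum(np.linalg.norm(A - L @ L.T @ A @ R @ R.T) for A in As)
print(f"total reconstruction residual: {err:.3f}")
```

With small dense Gaussian noise, the residual is tiny; RGLRAM's point is that this l2-based scheme degrades badly once the noise is sparse but large.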
NASA Astrophysics Data System (ADS)
Pratiwi, B. N.; Suparmi, A.; Cari, C.; Husein, A. S.; Yunianto, M.
2016-08-01
We applied the asymptotic iteration method (AIM) to obtain the analytical solution of the Dirac equation in the case of exact pseudospin symmetry in the presence of a modified Pöschl-Teller potential and a trigonometric Scarf II non-central potential. The Dirac equation was solved by separation of variables into one-dimensional Dirac equations, a radial part and an angular part. The radial and angular equations can be reduced to hypergeometric-type equations by variable and wavefunction substitutions, and then transformed into AIM-type equations to obtain the relativistic energy eigenvalues and wavefunctions. The relativistic energies were calculated numerically in Matlab, and the relativistic energy spectrum and wavefunctions were then visualized with Matlab. The results show that an increase in the radial quantum number nr causes a decrease in the relativistic energy spectrum. The negative value of the energy is taken owing to the pseudospin symmetry limit. Several quantum wavefunctions are presented in terms of hypergeometric functions.
An Approximate Approach to Automatic Kernel Selection.
Ding, Lizhong; Liao, Shizhong
2016-02-02
Kernel selection is a fundamental problem of kernel-based learning algorithms. In this paper, we propose an approximate approach to automatic kernel selection for regression from the perspective of kernel matrix approximation. We first introduce multilevel circulant matrices into automatic kernel selection, and develop two approximate kernel selection algorithms by exploiting the computational virtues of multilevel circulant matrices. The complexity of the proposed algorithms is quasi-linear in the number of data points. Then, we prove an approximation error bound to measure the effect of the approximation in kernel matrices by multilevel circulant matrices on the hypothesis and further show that the approximate hypothesis produced with multilevel circulant matrices converges to the accurate hypothesis produced with kernel matrices. Experimental evaluations on benchmark datasets demonstrate the effectiveness of approximate kernel selection.
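The computational virtue the abstract exploits — that circulant structure turns kernel-matrix products into FFTs — can be shown in one dimension (the Gaussian kernel and grid here are illustrative; the paper works with multilevel circulant matrices):

```python
import numpy as np

# A circulant matrix is determined by its first column c; its action on a
# vector is a circular convolution, computable in O(n log n) via the FFT.
def circulant_matvec(c, x):
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

n = 256
idx = np.arange(n)
d = np.minimum(idx, n - idx)        # periodic distance to index 0
c = np.exp(-(d / 10.0) ** 2)        # circulant stand-in for a Gaussian kernel

x = np.random.default_rng(1).standard_normal(n)
fast = circulant_matvec(c, x)

# dense check: C[i, j] = c[(i - j) mod n]
C = np.array([np.roll(c, k) for k in range(n)]).T
print(np.allclose(fast, C @ x))
```

This O(n log n) matvec (versus O(n^2) for a dense kernel matrix) is what makes the quasi-linear complexity claimed in the abstract plausible.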
Fast approximation of self-similar network traffic
Paxson, V.
1995-01-01
Recent network traffic studies argue that network arrival processes are much more faithfully modeled using statistically self-similar processes instead of traditional Poisson processes [LTWW94a, PF94]. One difficulty in dealing with self-similar models is how to efficiently synthesize traces (sample paths) corresponding to self-similar traffic. We present a fast Fourier transform method for synthesizing approximate self-similar sample paths and assess its performance and validity. We find that the method is as fast or faster than existing methods and appears to generate a closer approximation to true self-similar sample paths than the other known fast method (Random Midpoint Displacement). We then discuss issues in using such synthesized sample paths for simulating network traffic, and how an approximation used by our method can dramatically speed up evaluation of Whittle's estimator for H, the Hurst parameter giving the strength of long-range dependence present in a self-similar time series.
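A minimal spectral-synthesis sketch in the spirit of the abstract, assuming a crude power-law stand-in for the exact fractional Gaussian noise spectral density that Paxson's method samples:

```python
import numpy as np

def approx_fgn(n, H, rng):
    """Spectral synthesis of an approximately self-similar trace:
    draw Fourier coefficients with power ~ f^(1-2H) and random phases,
    then invert.  (A crude stand-in for the exact fGn spectrum.)"""
    f = np.fft.rfftfreq(n)[1:]                  # positive frequencies
    power = f ** (1 - 2 * H)                    # long-range-dependent spectrum
    phases = rng.uniform(0, 2 * np.pi, f.size)
    coeffs = np.sqrt(power) * np.exp(1j * phases)
    spectrum = np.concatenate(([0.0], coeffs))  # zero-mean trace
    x = np.fft.irfft(spectrum, n)
    return x / x.std()                          # unit variance

rng = np.random.default_rng(2)
trace = approx_fgn(4096, H=0.8, rng=rng)
print(trace[:5])
```

The single inverse FFT is why the approach is fast; fidelity to true fGn depends entirely on how accurately the target spectrum is evaluated, which is the part the paper treats carefully.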
First and second order convex approximation strategies in structural optimization
NASA Technical Reports Server (NTRS)
Fleury, C.
1989-01-01
In this paper, various methods based on convex approximation schemes are discussed that have demonstrated strong potential for efficient solution of structural optimization problems. First, the convex linearization method (Conlin) is briefly described, as well as one of its recent generalizations, the method of moving asymptotes (MMA). Both Conlin and MMA can be interpreted as first-order convex approximation methods that attempt to estimate the curvature of the problem functions on the basis of semiempirical rules. Attention is next directed toward methods that use diagonal second derivatives in order to provide a sound basis for building up high-quality explicit approximations of the behavior constraints. In particular, it is shown how second-order information can be effectively used without demanding a prohibitive computational cost. Various first-order and second-order approaches are compared by applying them to simple problems that have a closed form solution.
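The Conlin idea — linearize in x_i where the gradient is nonnegative and in 1/x_i where it is negative — can be sketched directly. The toy constraint below is invented and, being separable linear-plus-reciprocal, is reproduced exactly by the approximation (for stress constraints in statically determinate trusses this exactness is the method's original motivation):

```python
import numpy as np

def conlin(g, grad, x0):
    """Conlin convex approximation of g around x0 (positive variables):
    linear in x_i where dg/dx_i >= 0, linear in 1/x_i otherwise."""
    g0, d = g(x0), grad(x0)
    pos = d >= 0
    def g_tilde(x):
        lin = np.sum(d[pos] * (x[pos] - x0[pos]))
        # chain rule through y_i = 1/x_i gives the reciprocal term below
        rec = np.sum(-d[~pos] * x0[~pos] ** 2 * (1 / x[~pos] - 1 / x0[~pos]))
        return g0 + lin + rec
    return g_tilde

# toy separable constraint: one linear and one reciprocal term
g = lambda x: x[0] + 1.0 / x[1]
grad = lambda x: np.array([1.0, -1.0 / x[1] ** 2])
x0 = np.array([1.0, 2.0])
g_tilde = conlin(g, grad, x0)
print(g_tilde(np.array([2.0, 4.0])), g(np.array([2.0, 4.0])))  # both 2.25
```

MMA generalizes this by introducing moving asymptotes in place of the fixed pole at x_i = 0, which is how the curvature estimation mentioned in the abstract enters.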
Cosmic shear covariance: the log-normal approximation
NASA Astrophysics Data System (ADS)
Hilbert, S.; Hartlap, J.; Schneider, P.
2011-12-01
Context. Accurate estimates of the errors on the cosmological parameters inferred from cosmic shear surveys require accurate estimates of the covariance of the cosmic shear correlation functions. Aims: We seek approximations to the cosmic shear covariance that are as easy to use as the common approximations based on normal (Gaussian) statistics, but yield more accurate covariance matrices and parameter errors. Methods: We derive expressions for the cosmic shear covariance under the assumption that the underlying convergence field follows log-normal statistics. We also derive a simplified version of this log-normal approximation by only retaining the most important terms beyond normal statistics. We use numerical simulations of weak lensing to study how well the normal, log-normal, and simplified log-normal approximations as well as empirical corrections to the normal approximation proposed in the literature reproduce shear covariances for cosmic shear surveys. We also investigate the resulting confidence regions for cosmological parameters inferred from such surveys. Results: We find that the normal approximation substantially underestimates the cosmic shear covariances and the inferred parameter confidence regions, in particular for surveys with small fields of view and large galaxy densities, but also for very wide surveys. In contrast, the log-normal approximation yields more realistic covariances and confidence regions, but also requires evaluating slightly more complicated expressions. However, the simplified log-normal approximation, although as simple as the normal approximation, yields confidence regions that are almost as accurate as those obtained from the log-normal approximation. The empirical corrections to the normal approximation do not yield more accurate covariances and confidence regions than the (simplified) log-normal approximation. Moreover, they fail to produce positive-semidefinite data covariance matrices in certain cases, rendering them
Using quadratic simplicial elements for hierarchical approximation and visualization
NASA Astrophysics Data System (ADS)
Wiley, David F.; Childs, Henry R.; Hamann, Bernd; Joy, Kenneth I.; Max, Nelson
2002-03-01
Best quadratic simplicial spline approximations can be computed, using quadratic Bernstein-Bezier basis functions, by identifying and bisecting simplicial elements with largest errors. Our method begins with an initial triangulation of the domain; a best quadratic spline approximation is computed; errors are computed for all simplices; and simplices of maximal error are subdivided. This process is repeated until a user-specified global error tolerance is met. The initial approximations for the unit square and cube are given by two quadratic triangles and five quadratic tetrahedra, respectively. Our more complex triangulation and approximation method, which respects field discontinuities and geometrical features, allows us to better approximate data. Data is visualized by using the hierarchy of increasingly better quadratic approximations generated by this process. Many visualization problems arise for quadratic elements. One way to visualize them is to first tessellate each quadratic element with smaller linear elements and then render those linear elements. Our results show a significant reduction in the number of simplices required to approximate data sets when using quadratic elements as compared to using linear elements.
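A one-dimensional analogue of the bisection loop — fit a quadratic per element, split the element with the largest sampled error — might look like this (the test function and tolerance are illustrative):

```python
import numpy as np

def quad_fit_error(f, a, b):
    # interpolating quadratic through a, midpoint, b; error sampled densely
    xs = np.array([a, (a + b) / 2, b])
    coeffs = np.polyfit(xs, f(xs), 2)
    t = np.linspace(a, b, 50)
    return np.max(np.abs(f(t) - np.polyval(coeffs, t)))

def adaptive_quadratic(f, tol=1e-4):
    """Bisect the element with the largest quadratic-fit error until
    every element meets the tolerance (1D analogue of bisecting the
    worst simplicial element)."""
    intervals = [(quad_fit_error(f, 0.0, 1.0), 0.0, 1.0)]
    while max(e for e, _, _ in intervals) > tol:
        intervals.sort()                       # worst element last
        e, a, b = intervals.pop()
        m = (a + b) / 2
        intervals.append((quad_fit_error(f, a, m), a, m))
        intervals.append((quad_fit_error(f, m, b), m, b))
    return intervals

f = lambda x: np.sin(8 * x)
elems = adaptive_quadratic(f)
print(len(elems), "elements meet the tolerance")
```

Because the per-element error of a quadratic fit decays roughly like h^3, far fewer elements are needed than with linear fits, mirroring the reduction reported in the abstract.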
New Tests of the Fixed Hotspot Approximation
NASA Astrophysics Data System (ADS)
Gordon, R. G.; Andrews, D. L.; Horner-Johnson, B. C.; Kumar, R. R.
2005-05-01
We present new methods for estimating uncertainties in plate reconstructions relative to the hotspots and new tests of the fixed hotspot approximation. We find no significant motion between Pacific hotspots, on the one hand, and Indo-Atlantic hotspots, on the other, for the past ~ 50 Myr, but large and significant apparent motion before 50 Ma. Whether this motion is truly due to motion between hotspots or alternatively due to flaws in the global plate motion circuit can be tested with paleomagnetic data. These tests give results consistent with the fixed hotspot approximation and indicate significant misfits when a relative plate motion circuit through Antarctica is employed for times before 50 Ma. If all of the misfit to the global plate motion circuit is due to motion between East and West Antarctica, then that motion is 800 ± 500 km near the Ross Sea Embayment and progressively less along the Trans-Antarctic Mountains toward the Weddell Sea. Further paleomagnetic tests of the fixed hotspot approximation can be made. Cenozoic and Cretaceous paleomagnetic data from the Pacific plate, along with reconstructions of the Pacific plate relative to the hotspots, can be used to estimate an apparent polar wander (APW) path of Pacific hotspots. An APW path of Indo-Atlantic hotspots can be similarly estimated (e.g. Besse & Courtillot 2002). If both paths diverge in similar ways from the north pole of the hotspot reference frame, it would indicate that the hotspots have moved in unison relative to the spin axis, which may be attributed to true polar wander. If the two paths diverge from one another, motion between Pacific hotspots and Indo-Atlantic hotspots would be indicated. The general agreement of the two paths shows that the former is more important than the latter. The data require little or no motion between groups of hotspots, but up to ~10 mm/yr of motion is allowed within uncertainties. The results disagree, in particular, with the recent extreme interpretation of
Integral approximations to classical diffusion and smoothed particle hydrodynamics
Du, Qiang; Lehoucq, R. B.; Tartakovsky, A. M.
2014-12-31
The contribution of the paper is the approximation of a classical diffusion operator by an integral equation with a volume constraint. A particular focus is on classical diffusion problems associated with Neumann boundary conditions. By exploiting this approximation, we can also approximate other quantities such as the flux out of a domain. Our analysis of the model equation on the continuum level is closely related to the recent work on nonlocal diffusion and peridynamic mechanics. In particular, we elucidate the role of a volumetric constraint as an approximation to a classical Neumann boundary condition in the presence of a physical boundary. The volume-constrained integral equation then provides the basis for accurate and robust discretization methods. As a result, an immediate application is to the understanding and improvement of the Smoothed Particle Hydrodynamics (SPH) method.
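The convergence of a nonlocal integral operator to the classical Laplacian can be checked numerically in 1D; the constant kernel γ = 3/δ³ below is a standard normalization making the operator reproduce u'' as the horizon δ shrinks (an illustrative choice, not necessarily the paper's kernel):

```python
import numpy as np

# Nonlocal operator  L_delta u(x) = gamma * ∫_{|s|<delta} (u(x+s) - u(x)) ds,
# with gamma = 3/delta^3 so that L_delta u -> u'' as delta -> 0.
n = 2001
x = np.linspace(0, 1, n)
u = np.sin(2 * np.pi * x)
h = x[1] - x[0]
delta = 0.05
k = int(round(delta / h))          # horizon in grid points
gamma = 3.0 / delta ** 3

L = np.zeros(n)
for j in range(-k, k + 1):
    if j == 0:
        continue
    # Riemann-sum quadrature of the nonlocal integral on interior points
    L[k:n - k] += gamma * (u[k + j:n - k + j] - u[k:n - k]) * h

exact = -(2 * np.pi) ** 2 * u      # classical Laplacian of sin(2*pi*x)
err = np.max(np.abs(L[k:n - k] - exact[k:n - k]))
print(f"max deviation from u'': {err:.3f}")
```

The deviation is O(δ²) relative to max|u''| ≈ 39.5, illustrating why the volume-constrained formulation can stand in for the classical operator away from the boundary layer of width δ.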